Platform Summary: IntelliFlex 1.x

Teradata® Hardware Product Guide

Brand: Hardware
Product names: BAR Managed Servers, IntelliBase Platforms, IntelliFlex Platforms
Category: Product Guide
Document number: B035-6200-068K
We reserve the right to replace any of the listed drives with an equivalent or better drive.
Processing Node
Processing node types consist of a Dell R630 or Intel R1208WFT and include:
  • Teradata processing node
  • Hot standby node (HSN): Teradata processing node with no access module processors (AMPs) assigned (see High Availability below)
Each cabinet contains up to:
  • Eight Teradata processing nodes or HSNs in a Hybrid system
  • Sixteen Teradata processing nodes or HSNs in an all-SSD system
Dell R630:
  • Processors and memory:
    • Two Intel Xeon 2.1 GHz E5-2620 v4 processors (20 MB L3 cache)
    • Eight-core processor configuration
    • 256 GB or 512 GB memory using 32 GB RDIMMs, 2400 MHz maximum speed (limited to 2133 MHz due to limits of the E5-2620 v4 CPU), DDR4 (eight memory channels, two DIMMs per channel, four memory channels per CPU socket); see the memory sizing sketch after this feature entry
  • Disk drives:
    • Two 1.2 TB or 1.8 TB SAS (serial-attached SCSI) drives for OS and dump processing
    • Eight additional 1.2 TB or 1.8 TB SAS drive bays available for customer use
    An optional USB transport drive is available for customers who require Teradata GSC to ship physical media back to Teradata. The kit number for the USB transport drive is 2021-K943. The kit includes instructions.
Intel R1208WFT:
  • Processors and memory:
    • Two 2.1 GHz, eight-core, 11 MB cache, Xeon Silver 4110 processors, dual socket
    • 512 GB memory using 8 x 64 GB LRDIMMs, 2666 MHz DDR4
  • Disk drives:
    • Two 1.6 TB SED drives for boot/system operations
    • Up to six 1.6 TB SED drives for customer use
  • OCP Module (Bundled): OCP X557-T2 Dual port 10GbE (RJ45)
All processing nodes have:
  • Onboard connectivity to the Server Management network
  • BYNET InfiniBand network adapter
  • [Optional] network communication adapter
  • [Optional] network connection to managed BAR servers
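The memory options listed for these nodes follow directly from the DIMM topology: two sockets x four channels per socket x two DIMMs per channel gives 16 DIMM slots on the Dell R630, so 32 GB RDIMMs yield 512 GB fully populated or 256 GB with one DIMM per channel, and the Intel R1208WFT's 8 x 64 GB LRDIMMs give 512 GB. A minimal arithmetic sketch (the helper function is illustrative only, not a Teradata tool):

```python
# Illustrative arithmetic only (not a Teradata tool): memory capacity derived
# from the DIMM topology listed above.

def node_memory_gb(sockets, channels_per_socket, dimms_per_channel, dimm_gb):
    """Total memory = populated DIMM slots x DIMM capacity."""
    return sockets * channels_per_socket * dimms_per_channel * dimm_gb

# Dell R630: 2 sockets x 4 channels/socket x 2 DIMMs/channel of 32 GB RDIMMs
assert node_memory_gb(2, 4, 2, 32) == 512   # fully populated
assert node_memory_gb(2, 4, 1, 32) == 256   # one DIMM per channel
# Intel R1208WFT, as listed: 8 x 64 GB LRDIMMs
assert 8 * 64 == 512
```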
PE Node
Dell R630 or Intel R1208WFT, configured for parsing engine (PE) processing only.
Dell R630:
  • Processors and memory:
    • Two Intel Xeon 2.1 GHz eight-core E5-2620 v4 processors (20 MB L3 cache)
    • 1 TB memory using 64 GB RDIMMs, or 256 GB or 512 GB memory using 32 GB RDIMMs; 2400 MHz maximum speed (limited to 2133 MHz due to limits of the E5-2620 v4 CPU), DDR4 (eight memory channels, two DIMMs per channel, four memory channels per CPU socket)
  • Disk drives:
    • Two 1.2 TB or 1.8 TB SAS (serial-attached SCSI) drives for OS and dump processing
Intel R1208WFT:
  • Processors and memory:
    • Two 2.1 GHz, eight-core, 11 MB cache, Xeon Silver 4110 processors, dual socket
    • 512 GB memory using 8 x 64 GB LRDIMMs, 2666 MHz DDR4
  • Disk drives:
    • Two 1.6 TB SED drives for boot/system operations
    • Up to six 1.6 TB SED drives for customer use
All chassis have:
  • No disk array support
  • Onboard connectivity to the Server Management network
  • BYNET InfiniBand network adapter
  • [Optional] network communication adapter
Teradata Managed/Multipurpose Server (TMS)
Dell R720, R720xd, R730, R730xd, or R740xd used for solution offerings peripheral to Teradata processing.

All TMSs have onboard connectivity to the Server Management network. TMSs connect to the BYNET switches depending on TMS application requirements.

SAS High-Performance Analytics (HPA) Worker Node
Dell R720, R730, or R730xd server configured for SAS HPA in-memory software.
SAS Worker Node
Dell R740xd server configured for SAS HPA or SAS Viya in-memory software.
Channel Server
Dell R720 or R730 for remote interface between a Teradata Database and IBM mainframes:
  • Processors and memory:
    • Dell R720 with two 2.0 GHz six-core 15 MB L3 cache E5-2620 processors
    • Dell R730 with two 2.6 GHz eight-core 20 MB L3 cache E5-2640 v3 processors
    • Dell R720 with 128 GB memory using 16 GB DIMMs
    • Dell R730 with 256 GB memory using 32 GB LRDIMMs
  • Disk drives:
    • Two 1.2 TB SAS disk drives for OS and dump processing
  • Host channel adapters – maximum three FICON adapters
  • No disk array support
  • Onboard connectivity to the Server Management network
  • BYNET InfiniBand adapter connection to the BYNET network
  • [Optional] network communication adapter
Extended Channel Solution (ECS)
Dell R730 for remote interface between a Teradata Database and an IBM (or compatible) mainframe:
  • Two 900 GB or 1.2 TB SAS (serial-attached SCSI) drives for OS
  • Dual 2.6 GHz eight-core 20 MB L3 cache E5-2640 v3 processors
  • 256 GB memory using 32 GB DIMMs
  • Host channel adapters – maximum three FICON and/or ESCON adapters (can mix types in same node)
  • No disk array support
  • Onboard connectivity to the Server Management network
  • Ethernet adapter connection to the processing nodes
  • [Optional] network communication adapter
Enterprise Viewpoint Server
Dell R720 or R730 that hosts the Viewpoint portal, which contains portlets for customer management of the Teradata Database. Systems that have more than 128 nodes or servers require dedicated Enterprise Viewpoint hardware (rather than a VMS with a Viewpoint virtual machine). For systems that use dedicated Enterprise Viewpoint hardware, the Viewpoint hardware can reside in a 9400 cabinet.
  • Adapter connection to Server Management network
  • SLES 11 SP3
Enterprise Service Workstation (SWS)
Dell R720 dedicated to system servicing and maintenance. Systems with an Intel R1208GZ or R1208WT System VMS and more than 128 disk arrays, or multi-system configurations, require dedicated Enterprise SWS hardware (rather than a VMS with an SWS virtual machine). For systems that use dedicated Enterprise SWS hardware, the SWS hardware resides in a 9400 cabinet. (A sizing-check sketch for the Viewpoint and SWS hardware rules follows this entry.)
  • Adapter connection to Server Management network
  • SLES 11
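The dedicated-hardware thresholds in the Enterprise Viewpoint Server and Enterprise SWS entries above reduce to a simple check. A minimal sketch, with hypothetical function names (not a Teradata utility):

```python
# Sketch of the dedicated-hardware thresholds described above (hypothetical
# helper functions, not a Teradata utility).

def needs_dedicated_viewpoint(nodes_and_servers):
    # More than 128 nodes or servers requires dedicated Enterprise Viewpoint
    # hardware rather than a Viewpoint VM on a VMS.
    return nodes_and_servers > 128

def needs_dedicated_sws(disk_arrays, multi_system):
    # An Intel R1208GZ/R1208WT System VMS with more than 128 disk arrays, or a
    # multi-system configuration, requires dedicated Enterprise SWS hardware.
    return disk_arrays > 128 or multi_system

print(needs_dedicated_viewpoint(130))    # True
print(needs_dedicated_sws(96, False))    # False
```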
BAR Storage Hardware

Teradata Multipurpose Storage Server (TMSS)

Dell R740xd as an NFS storage target with or without the Data Stream Utility (DSU):
  • Two 2.3 GHz 12-core Intel Xeon Gold 5118 processors
  • 384 GB of memory using 32 GB LRDIMMs
  • Twelve 8 TB data drives for NFS
  • Ten 8 TB data drives for DSU
  • Two 2.4 TB system drives
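The raw data capacity implied by the drive counts above works out as follows (illustrative arithmetic only; usable capacity after RAID and file-system overhead is lower):

```python
# Illustrative raw-capacity arithmetic from the drive counts above; usable
# capacity after RAID and file-system overhead is lower.
nfs_raw_tb = 12 * 8    # twelve 8 TB data drives for NFS  -> 96 TB raw
dsu_raw_tb = 10 * 8    # ten 8 TB data drives for DSU     -> 80 TB raw
print(nfs_raw_tb, dsu_raw_tb)
```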
Full Disk Encryption
Full disk encryption for HDD and SSD storage drives in the database arrays. Processing nodes use self-encrypting drives (SEDs).
Disk Storage
Clique configurations:
  • Flexible configurations with varying numbers of processing nodes and HSNs
  • Clique configurations may use SSD-only arrays, or combinations of SSD arrays and HDD arrays, for disk storage.
  • Each Fabric-Attached HDD disk array in an IntelliFlex cabinet contains six 5350 Camden drive enclosures:
    • Each drive enclosure contains two power supply/cooling modules.
    • The bottom drive enclosure contains two controller modules.
    • The remaining five drive enclosures contain environmental service modules (ESMs), two ESMs per drive enclosure.
    • Each drive enclosure contains SAS storage drives.
  • Each Fabric-Attached SSD disk array in an IntelliFlex cabinet contains one 5350 Camden drive enclosure. Each enclosure contains two power supply/cooling modules, two controller modules, and SAS storage drives.
SAS storage drives:
  • 600 GB, 900 GB, or 1.2 TB HDD storage drives (2.5", 10K rpm) with write-back cache
  • 1.6 TB SSD storage drives (2.5")
  • Four HDD slots for global hot spares (GHSs) per array
  • Two available SSD slots for global hot spares (GHSs) per array
HDD drive sizes cannot be mixed in an IntelliFlex system.
High Availability
  • HSNs: One node in a clique is configured as an HSN; optionally, a second node can be configured as an additional HSN. An HSN eliminates the degradation of database performance in the event of a node failure in the clique: tasks assigned to the failed node are completely redirected to the HSN.
  • Global hot spares (GHS): Four HDDs per array are configured as hot spare drives. If a drive in a RAID mirrored pair fails, the contents of the failed drive are rebuilt onto a hot spare drive from the surviving drive of the mirrored pair, repairing the RAID pair. When the failed drive is replaced, a copy-back operation restores data to the replaced drive.
  • Data resilience: Data protection is provided at the table level by automatically storing a copy of each permanent data row of a table on a different AMP. If an AMP fails, Teradata Database can access the copy and continue operation.
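The data-resilience behavior above can be pictured as a copy of each row placed on a different AMP from the primary copy. A toy sketch under that assumption; the placement logic below is illustrative and is not Teradata's actual hashing or AMP-assignment algorithm:

```python
# Toy model of fallback-style data resilience: every row gets a primary AMP
# and a copy on a *different* AMP, so a single AMP failure never removes
# access to the row. The placement scheme here is illustrative only and is
# not Teradata's actual hashing or AMP-assignment algorithm.

NUM_AMPS = 8

def place_row(row_key):
    primary = hash(row_key) % NUM_AMPS
    fallback = (primary + 1) % NUM_AMPS   # always a different AMP
    return primary, fallback

def readable_after_failure(row_key, failed_amp):
    primary, fallback = place_row(row_key)
    return primary != failed_amp or fallback != failed_amp

# With one failed AMP, every row still has a surviving copy.
assert all(readable_after_failure(k, failed_amp=3) for k in range(1000))
```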
BYNET Interconnect
Redundant BYNET switches using BYNET InfiniBand for Teradata processing node, PE node, and channel server communication:
  • The processing/storage cabinets contain redundant 36-port InfiniBand switch chassis for systems with a maximum of 468 BYNET-connected nodes (combined Teradata processing nodes and non-Teradata processing nodes: HSNs, PE nodes, channel servers, and TMSs).
  • If present, the BYNET cabinets contain one of the following redundant InfiniBand switch chassis configurations:
    • 324-port InfiniBand switch chassis for systems with a maximum of 1,053 nodes (972 active nodes), combining Teradata processing nodes and non-Teradata processing nodes: HSNs, PE nodes, channel servers, and TMSs.
      The 324-port switch cabinet can have up to two optional Dell R720, R720xd, R730, or R730xd TMSs in any combination. Some configurations may also have an Intel R1208GZ or Intel R1208WT System VMS. If a System VMS is included, a KMM is also included.
    • 108- and 648-port switches are supported in Unified Data Architecture (UDA) environments.
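The node-count limits above determine which BYNET switch configuration a system needs. A minimal sketch of that selection, using only the limits stated in this guide (the function name is hypothetical):

```python
# Sketch of the BYNET capacity tiers stated above (hypothetical helper).
# "Nodes" means all BYNET-connected nodes: Teradata processing nodes, HSNs,
# PE nodes, channel servers, and TMSs.

def bynet_switch_tier(bynet_connected_nodes):
    if bynet_connected_nodes <= 468:
        return "in-cabinet redundant 36-port InfiniBand switch chassis"
    if bynet_connected_nodes <= 1053:
        return "BYNET cabinet with redundant 324-port InfiniBand switch chassis"
    return "beyond the IntelliFlex 1.x limits summarized in this guide"

print(bynet_switch_tier(300))   # in-cabinet switches are sufficient
print(bynet_switch_tier(900))   # requires the 324-port BYNET cabinet chassis
```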
Adapters
InfiniBand adapters (used for BYNET and storage connections):
  • Mellanox MCX556A-ECAT (ConnectX-5) InfiniBand adapter
  • Mellanox MCB194A-FCAT (Connect-IB) InfiniBand PCIe3 adapter
Communications adapters:
  • Intel I350-T4 quad 1 Gb PCIe2 copper adapter in processing nodes or TMSs
  • Intel X520-DA2 dual 10 Gb PCIe2 for processing nodes or TMSs
  • Intel X520-SR2 dual 10 Gb PCIe2 for processing nodes or TMSs
  • Intel X540-T2 dual-port 10 Gb 10GBase-T PCIe2 Base-T adapter in processing nodes or TMSs
  • Intel X710-DA2 dual-port 10 Gb SFP+ PCIe 3.0 Ethernet adapter in Intel processing nodes
  • Intel X710-DA4 quad-port 10 Gb SFP+ PCIe 3.0 Ethernet adapter in Intel processing nodes
  • Intel XXV710-DA2 dual-port 25 Gb SFP28 PCIe 3.0 Ethernet adapter in Intel processing nodes
  • QLogic QLE-2564, 8 Gb, 4-port Fibre Channel adapter
  • QLogic QLE-2694, 16 Gb, 4-port Fibre Channel adapter
Channel server adapters:
  • Luminex FICON LP12000 PCIe2 adapter
Cable Modes
Both single-mode fiber (SMF) and multi-mode fiber (MMF) connections are supported for the 10 Gb Ethernet optical interfaces into the servers in this Teradata system.
Virtualized Management Server (VMS)

(Intel R1208WFT)

A VMS is an Intel R1208WFT chassis using VMS 3.0. All systems require at least two VMSs to host the integrated service VMs. Additional VMSs can be added as systems are expanded.

Any remaining resources on the VMS are reserved for future use and expansion. Do not load non-certified guest VMs to the VMS.

An Intel R1208WFT VMS supports up to 100 nodes and 100 disk arrays, or a combined total of no more than 200 nodes and disk arrays (see the sizing sketch after this entry), and consists of:
  • Two 2.1 GHz, eight-core, 11 MB cache, Xeon Silver 4110 processors, dual socket
  • 512 GB memory using 8 x 64 GB LRDIMMs, 2666 MHz DDR4
  • Two 1.6 TB SSD SED drives for boot/system operations
  • Up to six 1.6 TB SSD SED drives for customer use
  • OCP Module (Bundled): OCP X557-T2 Dual port 10GbE (RJ45)
All VMSs have:
  • Onboard connectivity to Server Management network
  • Onboard connectivity to customer network
  • SLES 12 SP2

CMIC 12.08 or later, VMS 3.0 or later, and Viewpoint 16.00 or later are required for IntelliFlex VMS.
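The sizing rule above (up to 100 nodes and 100 disk arrays per VMS, or no more than 200 nodes and disk arrays combined, with a minimum of two VMSs per system) can be turned into a rough VMS-count estimate. A sketch under the assumption that requirements simply round up against those limits (the function name is hypothetical; confirm actual VMS counts with Teradata configuration rules):

```python
import math

# Sketch of the Intel R1208WFT VMS sizing rule above: each VMS covers up to
# 100 nodes and 100 disk arrays, or no more than 200 nodes and arrays
# combined, and every system needs at least two VMSs. The round-up policy is
# an assumption for illustration; confirm real counts with Teradata.

def vms_required(nodes, disk_arrays):
    per_limit = max(math.ceil(nodes / 100),
                    math.ceil(disk_arrays / 100),
                    math.ceil((nodes + disk_arrays) / 200))
    return max(2, per_limit)

print(vms_required(64, 48))     # 2 -- the two-VMS minimum applies
print(vms_required(250, 120))   # 3 -- driven by the 100-node-per-VMS limit
```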

System VMS

(Intel R1208GZ or Intel R1208WT)

An Intel R1208GZ or R1208WT VMS is available in system VMS and cabinet VMS configurations:
  • An IntelliFlex cabinet can contain a system VMS or cabinet VMS.
  • A BYNET cabinet can contain a system or cabinet VMS.
A system VMS is an Intel R1208GZ or R1208WT chassis using VMS 2.0. A system VMS can host the following virtual machines (VMs):
  • CMIC: Server Management services for the system cabinets
  • SWS: Service entry for the system
  • Viewpoint: Manages only a single IntelliFlex 1.x system
Each Intel R1208GZ or R1208WT system VMS consists of:
  • CMIC, SWS, and Viewpoint VMs. The system VMS supports systems up to a maximum of 128 nodes and servers in a single system, or up to a maximum of 128 disk arrays (15 processing or storage cabinets).
  • Intel R1208WT: Two 2.4 GHz six-core 15 MB L3 cache E5-2620 v3 processors, dual socket
  • Intel R1208GZ: Two 2.0 GHz six-core 15 MB L3 cache E5-2620 processors, dual socket
  • 128 GB memory using 16 GB DIMMs
  • Two 1.2 TB RAID 1 drives for boot/system operations
  • Two 1.2 TB RAID 1 drives for Viewpoint data storage
  • Two 1.2 TB RAID 1 drives for future use
  • Onboard connectivity to Server Management network
  • Onboard or adapter connectivity to customer network
  • SLES 11 SP3

CMIC 12.01 or later, VMS 2.0 or later, and Viewpoint 15.11 or later are required for IntelliFlex system VMS.

Cabinet VMS

(Intel R1208GZ or Intel R1208WT only)

Each Intel R1208GZ or R1208WT cabinet VMS consists of:
  • CMIC VM
  • Intel R1208WT: One 2.4 GHz six-core 15 MB L3 cache E5-2620 v3 processor (one socket populated in a dual-socket chassis)
  • Intel R1208GZ: One 2.0 GHz six-core 15 MB L3 cache E5-2620 processor (one socket populated in a dual-socket chassis)
  • 64 GB memory using 16 GB DIMMs
  • Two 600 GB RAID 1 drives for boot/system operations
  • Onboard connectivity to Server Management network
  • Onboard or adapter connectivity to customer network
  • SLES 11 SP3 on VMS and CMIC VM
  • Two additional 600 GB disks can be added to create a UDA-enabling cabinet VMS

CMIC 12.01 or later and VMS 2.0 or later are required for IntelliFlex cabinet VMS.

Server Management Web (SMWeb) Services
SMWeb 12.01 or later and ServiceConnect are required for an Intel R1208GZ or R1208WT VMS.

SMWeb 12.08 or later and ServiceConnect are required for an Intel R1208WFT VMS.

KMM Console
If the system has a VMS in a TMS or BYNET cabinet, the rackmount KMM resides in the BYNET cabinet and has VGA and USB connections to the VMS. For all other configurations, the KMM resides in the IntelliFlex cabinet.
Cable Management
Processing and storage cabinets are pre-wired for one clique, with a cable harness provided for Server Management Ethernet per node.

BYNET cabinets do not have cable harnesses.

Openings in cabinet sides near the rear of the rack accommodate inter-rack cabling.

AC Power Subsystem
IntelliFlex processing or storage cabinets:
  • Four zero-U PDUs, two on each inside rear panel of the cabinet.
  • Two or four AC feeder boxes on each inside rear panel of the cabinet. Available AC feeder box types: power types A, B, C, and D.
BYNET cabinet:
  • Two AC boxes of the same type per cabinet.
  • For cabinets with the 108-port or 324-port switch, the AC box type is a single-cord AC box with a plug rated at 30 A for North America and 32 A for Europe. The plug supports both single-phase (220-240 V line-to-neutral) and phase-to-phase (208 V) connections. Each cabinet has two AC boxes; each AC box has one power cord, for a total of two power cords per cabinet.
  • For cabinets with the 648-port switch, the AC box type is a dual-cord AC box with a plug rated at 30 A for North America and 32 A for Europe. The plug supports both single-phase (220-240 V line-to-neutral) and phase-to-phase (208 V) connections. Each cabinet has two AC boxes; each AC box has two power cords, for a total of four power cords per cabinet.
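As a rough worked example of the per-cord capacity implied by the plug ratings above (the 80% continuous-load derating is common North American practice and an assumption here, not a figure from this guide; actual cabinet power budgets come from Teradata site-preparation documentation):

```python
# Rough per-cord capacity implied by the plug ratings above. The 80%
# continuous-load derating is common North American practice and an
# assumption here, not a figure from this guide.

def cord_kva(volts, amps, derate=0.8):
    return volts * amps * derate / 1000.0

print(round(cord_kva(208, 30), 1))   # ~5.0 kVA, North America, phase-to-phase
print(round(cord_kva(230, 32), 1))   # ~5.9 kVA, Europe, single-phase
```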
Operating System
  • SLES 11 SP3 or later, depending on node type, as noted under each node type description

Teradata Database
15.10.04 or later. Contact your Teradata representative to confirm the exact version of Teradata Database and PDE packages required for this platform.