| Database Node |
Database nodes consist of an Intel R1208WF or MiTAC M50CYP1UR and include:
- Database node
- HSN: Database node with no access module processors (AMPs) assigned (see High Availability)
Each cabinet contains up to:
- Eight Database nodes or HSNs in a Hybrid system
- Sixteen Database nodes or HSNs in an all-SSD system
Intel R1208WF:
- Processors and memory:
  - Two 2.3 GHz, 24-core, 35.75 MB cache, Xeon Gold 5220R processors, dual socket
  - 768 GB memory using 12 x 64 GB LRDIMMs, 2666 MHz DDR4
- Disk drives:
  - Two 1.6 TB SED drives for boot/system operations
  - Up to six 1.6 TB SED drives for customer use
- OCP Module (Bundled): OCP X527DA2 Dual port (SFP+)
All Processing nodes have:
- Onboard connectivity to the Server Management network
- BYNET InfiniBand network adapter
- [Optional] network communication adapter
- [Optional] network connection to managed BAR servers
MiTAC M50CYP1UR:
- Processors and memory:
  - Two 2.2 GHz, 26-core, 39 MB cache, Xeon 5320 processors, dual socket
  - 768 GB memory using 12 x 64 GB LRDIMMs, 3200 MHz DDR4
- Disk drives:
  - Two 1.6 TB SED drives for boot/system operations
  - Up to six 1.6 TB SED drives for customer use
- OCP Module (Bundled): OCP X710-T4L 1/10GbE Quad port (RJ45)
All Processing nodes have:
- Onboard connectivity to the Server Management network
- BYNET InfiniBand network adapter
- [Optional] network communication adapter
- [Optional] network connection to managed BAR servers
|
| PE Node |
Intel R1208WF:
- Processors and memory:
  - Two 2.3 GHz, 24-core, 35.75 MB cache, Xeon Gold 5220R processors, dual socket
  - 768 GB memory using 12 x 64 GB LRDIMMs, 2666 MHz DDR4
- Disk drives:
  - Two 1.6 TB SED drives for boot/system operations
  - Up to six 1.6 TB SED drives for customer use
- OCP Module (Bundled): OCP X527DA2 Dual port (SFP+)
All Processing nodes have:
- Onboard connectivity to the Server Management network
- BYNET InfiniBand network adapter
- [Optional] network communication adapter
- [Optional] network connection to managed BAR servers
MiTAC M50CYP1UR:
- Processors and memory:
  - Two 2.2 GHz, 26-core, 39 MB cache, Xeon 5320 processors, dual socket
  - 768 GB memory using 12 x 64 GB LRDIMMs, 3200 MHz DDR4
- Disk drives:
  - Two 1.6 TB SED drives for boot/system operations
  - Up to six 1.6 TB SED drives for customer use
- OCP Module (Bundled): OCP X710-T4L 1/10GbE Quad port (RJ45)
All Processing nodes have:
- Onboard connectivity to the Server Management network
- BYNET InfiniBand network adapter
- [Optional] network communication adapter
- [Optional] network connection to managed BAR servers
|
| Teradata Managed/Multipurpose Server (TMS) |
Dell R730, R730xd, R740xd, or R760 used for solution offerings peripheral to Teradata Database processing. All TMSs have onboard connectivity to the Server Management network. TMSs connect to the BYNET switches depending on TMS application requirements.
|
| SAS High-Performance Analytics (HPA) Worker Node |
Dell R730 or R730xd server configured for SAS HPA in-memory software. |
| SAS Worker Node |
Dell R740xd server configured for SAS HPA or SAS Viya in-memory software. |
| Channel Server |
Dell R730 or R740 for remote interface between Teradata Database and IBM mainframes:
- Processors and memory:
  - Dell R730 with two 2.6 GHz, eight-core, 20 MB L3 cache E5-2640 v3 processors
  - Dell R730 with 256 GB memory using 32 GB LRDIMMs
  - Dell R740 with dual 2.3 GHz, 18-core, 24.75 MB cache, Xeon Gold 6140 processors
  - Dell R740 with 512 GB memory using eight 64 GB DIMMs
- Disk drives:
  - Dell R730 with two 1.2 TB SAS disk drives for OS and dump processing
  - Dell R740 with six 2.4 TB data drives and two 2.4 TB OS drives in Flex Bay
- Host channel adapters – maximum three FICON adapters
- No disk array support
- Onboard connectivity to the Server Management network
- BYNET InfiniBand adapter connection to the BYNET network
- [Optional] network communication adapter
|
| Extended Channel Solution (ECS) |
Dell R730 or R740 for remote interface between Teradata Database and an IBM (or compatible) mainframe:
- Processors and memory:
  - R730 with dual 2.6 GHz, eight-core, 20 MB L3 cache E5-2640 v3 processors
  - R730 with 256 GB memory using 32 GB DIMMs
  - R740 with dual 2.3 GHz, 18-core, 24.75 MB cache, Xeon Gold 6140 processors
  - R740 with 512 GB memory using eight 64 GB DIMMs
- Disk drives:
  - R730 with two 900 GB or 1.2 TB SAS (serial-attached SCSI) drives for OS
  - R740 with six 2.4 TB data drives and two 2.4 TB OS drives in Flex Bay
- Host channel adapters – maximum three FICON and/or ESCON adapters (types can be mixed in the same node)
- No disk array support
- Onboard connectivity to the Server Management network
- Ethernet adapter connection to the processing nodes
- [Optional] network communication adapter
|
| Enterprise Viewpoint Server |
Dell R730, R730xd, R740xd, or R760 that hosts the Viewpoint portal, which contains portlets for customer management of Teradata Database. Systems with more than 128 nodes or servers require dedicated Enterprise Viewpoint hardware (rather than a VMS with a Viewpoint virtual machine). For systems that use dedicated Enterprise Viewpoint hardware, the Viewpoint hardware can reside in a 9400 cabinet.
- Adapter connection to Server Management network
- SLES 15 SP6
- SLES 11 SP3
|
| BAR Storage Hardware: Teradata Multipurpose Storage Server (TMSS) |
Dell R740xd or R760 as an NFS storage target with or without the Data Stream Utility (DSU):
- R740xd:
  - Two 2.3 GHz, 12-core Intel Xeon 5118 processors
  - 384 GB of memory using 32 GB LRDIMMs
  - Twelve 8 TB data drives for NFS
  - Ten 8 TB data drives for DSU
  - Two 2.4 TB system drives
- R760:
  - Two 2.0 GHz, 20-core Intel Xeon 4416+ processors
  - 768 GB of memory using twelve 64 GB LRDIMMs
  - Twelve 8 TB data drives for NFS
  - Ten 8 TB data drives for DSU
  - Two 2.4 TB OS drives
|
| Full Disk Encryption |
Full disk encryption for HDD and SSD disk storage drives in the database arrays. Processing nodes use self-encrypting drives (SED). |
| Disk Storage |
Clique configurations:
- Flexible configurations with varying numbers of processing nodes and HSNs
- Clique configurations may use SSD-only arrays, or combinations of SSD arrays and HDD arrays, for disk storage.
- Each Fabric-Attached HDD disk array in an IntelliFlex cabinet contains six 5350 Camden drive enclosures:
  - Each drive enclosure contains two power supply/cooling modules.
  - The bottom drive enclosure contains two controller modules.
  - The remaining five drive enclosures contain ESMs (two ESMs per drive enclosure).
  - Each drive enclosure contains SAS storage drives.
- Each Fabric-Attached SSD disk array in an IntelliFlex cabinet contains one 5350 Camden drive enclosure. Each enclosure contains two power supply/cooling modules, two controller modules, and SAS storage drives.
SAS storage drives:
- 600 GB, 900 GB, or 1.2 TB HDD storage drives (2.5", 10K rpm) with write-back cache
- 1.6 TB SSD storage drives (2.5")
- Four HDD slots for global hot spares (GHSs) per array
- Two available SSD slots for global hot spares (GHSs) per array
HDD drive sizes cannot be mixed in an IntelliFlex system.
|
| High Availability |
- HSNs: One node in a clique is configured as an HSN. Optionally, a second node can be configured as an additional HSN. An HSN eliminates the degradation of database performance in the event of a node failure in the clique: tasks assigned to the failed node are redirected to the HSN.
- Global hot spares (GHS): Four HDDs per array are configured as hot spare drives. In the event of a drive failure on a RAID mirrored pair, the contents of the failed drive are copied to a hot spare drive from the surviving drive of the mirror to repair the RAID pair. When the failed drive is replaced, a copy-back operation restores data to the replaced drive.
- Fallback: A Teradata Database feature that protects data in case of an AMP vproc failure. Fallback is especially useful in applications that require high availability. All databases and users are set to FALLBACK even if you specify NO FALLBACK in an ALTER TABLE or MODIFY DATABASE/USER request. You cannot override the Fallback setting during or after table creation. Disabling Fallback can result in data loss.
Fallback is automatic and transparent, protecting your data by storing a second copy of each row of a table on a different AMP in the same cluster. If an AMP fails, the system accesses the Fallback rows to meet requests, so Fallback provides AMP fault tolerance at the table level: if one AMP fails, all data in Fallback tables is still available, and you can continue using those tables without losing access to data. Because the two copies of a row are guaranteed to be on different AMPs, the alternate copy remains available if either AMP fails.
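The DDL referenced above is standard Teradata SQL. The statements below are a minimal sketch using hypothetical database and table names, showing where the FALLBACK option appears; on this platform the Fallback setting is enforced regardless of what is specified.
  -- Hypothetical objects, shown for illustration only
  CREATE TABLE sales_db.orders, FALLBACK   -- table-level Fallback at creation
    (order_id INTEGER,
     order_dt DATE);
  ALTER TABLE sales_db.orders, FALLBACK;   -- add Fallback to an existing table
  MODIFY DATABASE sales_db AS FALLBACK;    -- make Fallback the default for the database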
|
| BYNET Interconnect |
Redundant BYNET switches using BYNET InfiniBand for database node, PE node, and channel server communication:
- The processing/storage cabinets contain redundant 36/40-port InfiniBand switch chassis for systems with a maximum of 468 BYNET-connected nodes (combined database nodes and non-database nodes: HSNs, PE nodes, channel servers, and TMSs).
- If present, the BYNET cabinets contain the following redundant InfiniBand switch chassis configurations:
  - 108-port switch is supported in UDA environments.
|
| Adapters |
InfiniBand adapters (used for BYNET and storage connections):
- Nvidia/Mellanox 900-9X7AH-0078-DTZ (ConnectX-7) InfiniBand adapter
- Mellanox MCX653106A-ECAT (ConnectX-6) InfiniBand adapter
- Mellanox MCX556A-ECAT (ConnectX-5) InfiniBand adapter
- Mellanox MCB194A-FCAT (Connect-IB) InfiniBand PCIe3 adapter
Communications adapters:
- E810-XXVDA2 dual 25G SFP28 25 Gb Ethernet PCIe 4.0
- E810-XXVDA4 quad 25G SFP28 25 Gb Ethernet PCIe 4.0
- E810-CQDA2 dual 100G QSFP28 100 Gb Ethernet PCIe 4.0
- Intel I350-T4 quad 1 Gb PCIe2 copper adapter in processing nodes or TMSs
- Intel X520-DA2 dual 10 Gb PCIe2 for processing nodes or TMSs
- Intel X520-SR2 dual 10 Gb PCIe2 for processing nodes or TMSs
- Intel X540-T2 dual-port 10 Gb 10GBase-T PCIe2 adapter in processing nodes or TMSs
- X710-T2 dual 10GBASE-T 10 Gb Ethernet PCIe 3.0
- X710-T4 quad 10GBASE-T 10 Gb Ethernet PCIe 3.0
- X710-DA2 dual 10G SFP+ 10 Gb PCIe 3.0 Ethernet adapter in Intel processing nodes
- X710-DA4 quad 10G SFP+ 10 Gb PCIe 3.0 Ethernet adapter in Intel processing nodes
- XXV710-DA2 dual 25G SFP28 25 Gb PCIe 3.0 Ethernet adapter in Intel processing nodes
- QLogic QLE-2564, 8 Gb, 4-port, Fibre Channel
- QLogic QLE-2694, 16 Gb, 4-port, Fibre Channel
Channel server adapters:
- Luminex FICON LP12000 PCIe2 adapter
|
| Cable Modes |
Both single-mode fiber (SMF) and multi-mode fiber (MMF) connections are supported for the 10Gb Ethernet optical interfaces into the servers in this Teradata Database system. |
| Virtualized Management Server (VMS) |
Three generations of VMS are supported. VMS 3.0 and VMS 4.0 use an Intel R1208WF chassis; VMS 5.0 uses a MiTAC M50CYP1UR chassis. All systems require at least two VMS to host new integrated service VMs. Additional VMSs can be added as systems are expanded.
Intel R1208WF VMS 3.0
Any remaining resources on the VMS are reserved for future use and expansion. Do not load non-certified guest VMs to the VMS.
An Intel R1208WF VMS supports up to 100 nodes and 100 disk arrays, or a combination of no more than 200 nodes and disk arrays, and consists of:
- Two 2.1 GHz, eight-core, 11 MB cache, Xeon Skylake Silver 4110 processors, dual socket
- 512 GB memory using 8 x 64 GB LRDIMMs, 2666 MHz DDR4
- Two 1.6 TB SSD SED drives (RAID 1) for boot/system operations
- Six 1.6 TB SSD SED drives (RAID 6) for CMIC, Viewpoint, and SWS applications
- OCP Module (Bundled): OCP X527DA2 Dual port (SFP+)
All VMS have:
- Onboard connectivity to Server Management network
- Onboard connectivity to customer network
- SLES 12 SP3, SLES 15 SP2
CMIC 13.x or later, VMS 3.0 or later, and Viewpoint 16.00 or later are required for IntelliFlex VMS.
Intel R1208WF VMS 4.0
Any remaining resources on the VMS are reserved for future use and expansion. Do not load non-certified guest VMs to the VMS.
An Intel R1208WF VMS supports up to 100 nodes and 100 disk arrays, or a combination of no more than 200 nodes and disk arrays, and consists of:
- Two 2.1 GHz, ten-core, 13.75 MB cache, Xeon Cascade Lake Silver 4210 processors, dual socket
- 512 GB memory using 8 x 64 GB LRDIMMs, 2666 MHz DDR4
- Six 1.6 TB SSD SED drives (RAID 6) for both boot/system operations and CMIC, Viewpoint, and SWS applications
- OCP Module (Bundled): OCP X527DA2 Dual port (SFP+)
All VMS have:
- Onboard connectivity to Server Management network
- Onboard connectivity to customer network
- SLES 12 SP3, SLES 15 SP2, SLES 15 SP6
CMIC 15.x or later, VMS 3.0 or later, and Viewpoint 16.00 or later are required for IntelliFlex VMS.
MiTAC M50CYP1UR VMS 5.0
Any remaining resources on the VMS are reserved for future use and expansion. Do not load non-certified guest VMs to the VMS.
A MiTAC M50CYP1UR VMS supports up to 100 nodes and 100 disk arrays, or a combination of no more than 200 nodes and disk arrays, and consists of:
- Two 2.1 GHz, twelve-core, 18 MB cache, Xeon Ice Lake 4310 processors, dual socket
- 512 GB memory using 8 x 64 GB RDIMMs, 3200 MHz DDR4
- Four 1.6 TB SSD SED drives (RAID 5) for both boot/system operations and CMIC, Viewpoint, and SWS applications
- OCP Module (Bundled): OCP X710-T4L 1/10GbE Quad port (RJ45)
All VMS have:
- Onboard connectivity to Server Management network
- Onboard connectivity to customer network
- SLES 15 SP6
CMIC 15.x or later, VMS 5.0 or later, and Viewpoint 16.00 or later are required for IntelliFlex VMS.
|