Teradata Database Node |
Teradata Database nodes consist of an Intel R1208WF or MiTAC
M50CYP1UR and include:
- Teradata Database node
- HSN: Teradata Database node with no access module
processors (AMPs) assigned (see High Availability)
Each cabinet contains up to:
- Eight Teradata Database nodes or HSNs in a Hybrid
system
- Sixteen Teradata Database nodes or HSNs in an all-SSD
system
Intel R1208WF:
- Processors and memory:
- Two 2.3 GHz, 24-core, 35.75 MB
cache, Xeon Gold 5220R processors, dual
socket
- 768 GB memory using 12 x 64 GB
LRDIMMs, 2666 MHz DDR4
- Disk drives:
- Two 1.6 TB SED drives for
boot/system operations
- Up to six 1.6 TB SED drives
for customer use
- OCP Module (Bundled): OCP X527DA2 Dual
port (SFP+)
All Processing nodes have:
- Onboard connectivity to the Server Management network
- BYNET InfiniBand network adapter
- [Optional] network communication
adapter
- [Optional] network connection to
managed BAR servers
MiTAC M50CYP1UR:
- Processors and memory:
- Two 2.2 GHz, 26-core, 39 MB
cache, Xeon Gold 5320 processors, dual socket
- 768 GB memory using 12 x 64 GB
LRDIMMs, 3200 MHz DDR4
- Disk drives:
- Two 1.6 TB SED drives for
boot/system operations
- Up to six 1.6 TB SED drives
for customer use
- OCP Module (Bundled): OCP X710-T4L
10GbE Quad port (RJ45)
All Processing nodes have:
- Onboard connectivity to the Server Management network
- BYNET InfiniBand network adapter
- [Optional] network communication
adapter
- [Optional] network connection to
managed BAR servers
|
PE Node |
Intel
R1208WF:
- Processors and memory:
- Two 2.3 GHz, 24-core, 35.75 MB
cache, Xeon Gold 5220R processors, dual
socket
- 768 GB memory using 12 x 64 GB
LRDIMMs, 2666 MHz DDR4
- Disk drives:
- Two 1.6 TB SED drives for
boot/system operations
- Up to six 1.6 TB SED drives
for customer use
- OCP Module (Bundled): OCP
X527DA2 Dual port (SFP+)
All Processing nodes have:
- Onboard connectivity to the Server Management network
- BYNET InfiniBand network adapter
- [Optional] network communication
adapter
- [Optional] network connection to
managed BAR servers
MiTAC M50CYP1UR:
- Processors and memory:
- Two 2.2 GHz, 26-core, 39 MB
cache, Xeon Gold 5320 processors, dual socket
- 768 GB memory using 12 x 64 GB
LRDIMMs, 3200 MHz DDR4
- Disk drives:
- Two 1.6 TB SED drives for
boot/system operations
- Up to six 1.6 TB SED drives
for customer use
- OCP Module (Bundled): OCP
X710-T4L 10GbE Quad port (RJ45)
All Processing nodes have:
- Onboard connectivity to the Server Management network
- BYNET InfiniBand network adapter
- [Optional] network communication
adapter
- [Optional] network connection to
managed BAR servers
|
Teradata Managed/Multipurpose
Server (TMS) |
Dell R730, R730xd, or R740xd used
for solution offerings peripheral to Teradata Database processing. All TMSs have
onboard connectivity to the Server Management network. TMSs
connect to the BYNET switches depending on TMS application
requirements.
|
SAS High-Performance Analytics
(HPA) Worker Node |
Dell R730 or R730xd server
configured for SAS HPA in-memory software. |
SAS Worker Node |
Dell R740xd server configured
for SAS HPA or SAS Viya in-memory software. |
Channel Server |
Dell R730 for remote interface
between Teradata Database and IBM mainframes:
- Processors and memory:
- Dell R730 with two 2.6 GHz
eight-core 20 MB L3 cache E5-2640 v3 processors
- Dell R730 with 256 GB memory using
32 GB LRDIMMs
- Disk drives:
- Two 1.2 TB SAS disk drives for OS
and dump processing
- Host channel adapters – maximum three FICON
adapters
- No disk array support
- Onboard connectivity to the Server Management network
- BYNET InfiniBand adapter connection to the
BYNET network
- [Optional] network communication
adapter
|
Extended Channel Solution
(ECS) |
Dell R730 for remote interface
between Teradata Database and an IBM (or compatible) mainframe:
- Two 900 GB or 1.2 TB SAS (serial-attached
SCSI) drives for OS
- Dual 2.6 GHz eight-core 20 MB L3 cache
E5-2640 v3 processors
- 256 GB memory using 32 GB DIMMs
- Host channel adapters – maximum three FICON
and/or ESCON adapters (can mix types in same node)
- No disk array support
- Onboard connectivity to the Server Management network
- Ethernet adapter connection to the
processing nodes
- [Optional] network communication
adapter
|
Enterprise Viewpoint
Server |
Dell R730, R730xd, or R740xd that hosts the Viewpoint portal that contains portlets for customer management of Teradata Database. Systems that have more than 128 nodes or servers require dedicated Enterprise Viewpoint hardware (rather than a VMS with a Viewpoint virtual machine). For systems that use dedicated Enterprise Viewpoint hardware, the Viewpoint hardware can reside in a 9400 cabinet.
- Adapter connection to Server Management network
- SLES 11 SP3
|
BAR Storage Hardware: Teradata Multipurpose Storage Server
(TMSS)
|
Dell R740xd as an NFS storage target with or
without the Data Stream Utility (DSU):
- Two 2.3 GHz 12-core Intel Xeon Gold 5118
processors
- 384 GB of memory using 32 GB LRDIMMs
- Twelve 8 TB data drives for NFS
- Ten 8 TB data drives for DSU
- Two 2.4 TB system drives
|
Full Disk Encryption |
Full disk encryption for HDD and
SSD disk storage drives in the database arrays. Processing nodes use
self-encrypting drives (SED). |
Disk Storage |
Clique configurations:
- Flexible configurations with varying numbers
of processing nodes and HSNs
- Clique configurations may use SSD-only
arrays, or combinations of SSD arrays and HDD arrays, for
disk storage.
- Each Fabric-Attached HDD disk array in an
IntelliFlex cabinet contains six 5350 Camden
drive enclosures:
- Each drive enclosure contains two
power supply/cooling modules.
- The bottom drive enclosure contains
two controller modules.
- The remaining five drive enclosures
contain ESMs (two ESMs per drive enclosure).
- Each drive enclosure contains SAS
storage drives.
- Each Fabric-Attached SSD disk array in an
IntelliFlex cabinet contains one 5350 Camden
drive enclosure. Each enclosure contains two power
supply/cooling modules, two controller modules, and SAS
storage drives.
SAS storage drives:
- 600 GB, 900 GB, or 1.2 TB HDD storage
drives (2.5", 10K rpm) with write-back cache
- 1.6 TB SSD storage drives (2.5")
- Four HDD slots for global hot spares
(GHSs) per array
- Two available SSD slots for global hot
spares (GHSs) per array
HDD drive sizes cannot be mixed
in an IntelliFlex system.
|
High Availability |
- HSNs: One node in a clique is configured as
an HSN. Optionally, a second node can be configured as an
additional HSN. An HSN eliminates the degradation of database
performance in the event of a node failure in the clique:
tasks assigned to the failed node are redirected
to the HSN.
- Global hot spares (GHS): Four HDDs per
array are configured as hot spare drives. In the event of a
drive failure on a RAID mirrored pair, the contents of the
failed drive are copied into a hot spare drive from the
mirrored surviving drive to repair the RAID pair. When the
failed drive is replaced, a copy back operation occurs to
restore data to the replaced drive.
- Fallback: A Teradata Database feature that protects data in case
of an AMP vproc failure. Fallback is especially useful in
applications that require high availability. All databases
or users are set to FALLBACK even if you specify NO FALLBACK
in the ALTER TABLE or MODIFY DATABASE/USER request. You
cannot override the Fallback setting during or after table
creation.
Disabling Fallback can
result in data loss.
Fallback is automatic and transparent,
protecting your data by storing a second copy of each
row of a table on a different AMP in the same cluster.
If an AMP fails, the system accesses the Fallback rows
to meet requests. Fallback provides AMP fault tolerance
at the table level. With Fallback tables, if one AMP
fails, all data is still available. You can continue
using Fallback tables without losing access to data.
Fallback guarantees that the two copies of a row will
always be on different AMPs. If either AMP fails, the
alternate row is still available on the other
AMP.
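The Fallback placement rule described above can be sketched in Python. This is an illustrative model only: the function name, the modulo-based placement, and the two-AMP cluster size are assumptions for the sketch, not Teradata's actual row-hashing scheme. It shows the invariant the text states, that the second copy of a row always lands on a different AMP in the same cluster:

```python
def amp_for_row(row_hash: int, n_amps: int, cluster_size: int = 2):
    """Return (primary_amp, fallback_amp) for a row.

    Illustrative sketch: the primary copy is placed by hash, and
    the Fallback copy goes to a different AMP within the same
    cluster, so losing either AMP leaves one copy readable.
    """
    assert n_amps % cluster_size == 0  # sketch assumes whole clusters
    primary = row_hash % n_amps
    cluster_start = (primary // cluster_size) * cluster_size
    # Pick the next AMP within the cluster, wrapping around,
    # which is guaranteed to differ from the primary AMP.
    offset = (primary - cluster_start + 1) % cluster_size
    fallback = cluster_start + offset
    return primary, fallback
```

For any row hash, the two returned AMPs differ but share a cluster, which is why a single AMP failure never makes a Fallback table unreadable.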
|
BYNET Interconnect |
Redundant BYNET switches using
BYNET InfiniBand for Teradata Database node, PE node, and channel server
communication:
- The processing/storage cabinets contain redundant 36/40-port InfiniBand switch chassis for systems with a maximum of 468 BYNET-connected nodes (combined Teradata Database nodes and non-Teradata Database nodes: HSNs, PE nodes, channel servers, and TMSs).
- If present, the BYNET cabinets contain the following redundant InfiniBand switch chassis configurations:
- 108-port switch is supported in UDA environments.
|
Adapters |
InfiniBand adapter (used for
BYNET and storage connections):
- Mellanox MCX653106A-ECAT (ConnectX-6) InfiniBand adapter
- Mellanox MCX556A-ECAT (ConnectX-5)
InfiniBand adapter
- Mellanox MCB194A-FCAT (Connect-IB)
InfiniBand PCIe3 adapter
Communications adapters:
- Intel I350-T4 quad 1 Gb PCIe2 copper
adapter in processing nodes or TMSs
- Intel X520-DA2 dual 10 Gb PCIe2 for
processing nodes or TMSs
- Intel X520-SR2 dual 10 Gb PCIe2 for
processing nodes or TMSs
- Intel X540-T2 dual-port 10 Gb 10GBase-T
PCIe2 Base-T adapter in processing nodes or TMSs
- X710-DA2 dual-port SFP+ 10 Gb PCIe 3.0
Ethernet adapter in Intel processing nodes
- X710-DA4 quad-port SFP+ 10 Gb PCIe 3.0
Ethernet adapter in Intel processing nodes
- XXV710-DA2 dual-port SFP28 25 Gb PCIe 3.0
Ethernet adapter in Intel processing nodes
- QLogic QLE-2564, 8 Gb, 4-port, Fibre
Channel
- QLogic QLE-2694, 16 Gb, 4-port, Fibre
Channel
Channel server adapters:
- Luminex FICON LP12000 PCIe2
adapter
|
Cable Modes |
Both single-mode fiber (SMF) and
multi-mode fiber (MMF) connections are supported for the 10Gb
Ethernet optical interfaces into the servers in this Teradata Database system. |
Virtualized Management Server
(VMS) (Intel R1208WF)
|
A VMS is an Intel R1208WF chassis
running VMS 3.0. All systems require at least two VMSs to host the
integrated service VMs. Additional VMSs can be added as systems are
expanded. Any remaining resources on the VMS
are reserved for future use and expansion. Do not load
non-certified guest VMs onto the VMS.
An
Intel R1208WF VMS supports up to 100 nodes and 100 disk arrays,
or any combination of nodes and disk arrays totaling no more
than 200, and consists of:
- Two 2.1 GHz, eight-core, 11 MB cache,
Xeon Silver 4110 processors, dual socket
- 512 GB memory using 8 x 64 GB LRDIMMs,
2666 MHz DDR4
- Two 1.6 TB SSD SED drives for
boot/system operations
- Up to six 1.6 TB SSD SED drives for
customer use
- OCP Module (Bundled): OCP X557-T2 Dual
port 10GbE (RJ45)
All VMSs have:
- Onboard connectivity to Server
Management network
- Onboard connectivity to customer
network
- SLES 12 SP2
CMIC 13.x or later, VMS 3.0 or
later, and Viewpoint 16.00 or later are required for IntelliFlex VMS.
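The VMS capacity limits above can be expressed as a small check. This is a sketch under one plausible reading of the stated limits; the function name is invented, and the per-type and combined ceilings are assumptions drawn from the text:

```python
def vms_capacity_ok(nodes: int, arrays: int) -> bool:
    """Check whether one Intel R1208WF VMS can manage this mix.

    Assumed reading of the limits above: at most 100 nodes, at
    most 100 disk arrays, and no more than 200 managed devices
    combined.
    """
    return nodes <= 100 and arrays <= 100 and nodes + arrays <= 200
```

Under this reading, a full 100-node, 100-array system sits exactly at the combined ceiling; any larger mix requires an additional VMS.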
|