Clique and System Architecture for Analytics Database Configuration - Teradata VantageCore VMware

Teradata® VantageCore powered by Dell Technologies - Enterprise Data Warehouse

Deployment: VantageCore
Edition: VMware
Product: Teradata VantageCore VMware
Published: November 2024
Product Category: Cloud

Analytics Database components (compute, storage, and networking) are organized into a Teradata cluster, called a clique, to provide internal fault tolerance. All nodes within the clique have access to all storage volumes (LUNs) in the clique, and the failure of any node is handled automatically by migrating its workload to the Hot Standby Node (HSN) in the same clique.

If multiple nodes fail, the Teradata software worker units (AMPs) are redistributed across the remaining online nodes in the clique. As long as the number of online VMs in the clique is equal to or greater than the Minimum Nodes Per Clique CTL setting, the clique remains available and active.
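As a conceptual illustration of this behavior (not Teradata's implementation), the following Python sketch models the availability decision under assumed values: the per-node AMP count and the Minimum Nodes Per Clique threshold are placeholders, and the redistribution is a simple round-robin rather than the actual AMP migration algorithm.

```python
# Conceptual sketch only -- not Teradata's implementation.
# Models how a clique stays available as nodes fail: the first failure is
# absorbed by the Hot Standby Node, further failures redistribute AMPs across
# the surviving nodes, and the clique goes down once fewer nodes remain
# online than the (assumed) Minimum Nodes Per Clique setting.

from dataclasses import dataclass, field

@dataclass
class Clique:
    nodes: list[str]                  # Analytics Database worker nodes
    hsn: str                          # Hot Standby Node
    min_nodes_per_clique: int         # assumed CTL-style threshold
    failed: set[str] = field(default_factory=set)

    def online_nodes(self) -> list[str]:
        online = [n for n in self.nodes if n not in self.failed]
        if self.failed and self.hsn not in self.failed:
            online.append(self.hsn)   # HSN picks up work once any node fails
        return online

    def is_available(self) -> bool:
        return len(self.online_nodes()) >= self.min_nodes_per_clique

    def redistribute_amps(self, amps_per_node: int = 4) -> dict[str, int]:
        """Spread the clique's AMPs evenly over the online nodes (round-robin)."""
        total_amps = len(self.nodes) * amps_per_node   # illustrative AMP count
        online = self.online_nodes()
        if not online or not self.is_available():
            return {}                                  # clique is down
        plan = {n: 0 for n in online}
        for i in range(total_amps):
            plan[online[i % len(online)]] += 1
        return plan

clique = Clique(nodes=["node1", "node2", "node3", "node4"],
                hsn="hsn1", min_nodes_per_clique=3)
clique.failed.add("node2")            # one failure: HSN absorbs the workload
print(clique.is_available(), clique.redistribute_amps())
clique.failed.add("node3")            # second failure: AMPs spread over survivors
print(clique.is_available(), clique.redistribute_amps())
```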

The clique's compute-to-storage ratio balances CPU and storage performance to meet your performance requirements. The supported clique configurations are as follows:

Physical Clique Size | Configuration  | Number of ESXi Hosts for Analytics Database + HSN | Number of Arrays for Analytics Database
4+1                  | Balanced (4:3) | 5                                                  | 3
8+1                  | Balanced (4:3) | 9                                                  | 6
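To make the sizing arithmetic behind the table explicit, here is a minimal sketch (the function name is illustrative, not a Teradata tool): a Balanced (4:3) clique pairs four Analytics Database nodes with three storage arrays, and one HSN is added to the ESXi host count.

```python
# Illustrative sketch of the Balanced (4:3) sizing arithmetic from the table above.
# Four Analytics Database nodes per three arrays; one HSN is added to the host count.

def balanced_clique_sizing(db_nodes: int) -> dict[str, int]:
    if db_nodes % 4 != 0:
        raise ValueError("Balanced (4:3) cliques use multiples of 4 database nodes")
    return {
        "esxi_hosts": db_nodes + 1,           # Analytics Database nodes + 1 HSN
        "storage_arrays": db_nodes * 3 // 4,  # 3 arrays per 4 nodes
    }

print(balanced_clique_sizing(4))  # {'esxi_hosts': 5, 'storage_arrays': 3}  -> 4+1 clique
print(balanced_clique_sizing(8))  # {'esxi_hosts': 9, 'storage_arrays': 6}  -> 8+1 clique
```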

Node-to-node and node-to-storage communication travels over dual-redundant 25GbE switches. This connectivity is provided by first-tier 25GbE (leaf) switches in the same rack as the nodes and arrays and, for multi-clique systems, by second-tier 100GbE switches that provide redundant high-speed connectivity between each pair of leaf switches in the system.

The following image shows a single 4+1 balanced clique configuration across such a network.

Figure: Example of a 4+1 clique configuration

To avoid network congestion and oversubscription of network resources, the nodes and arrays within a clique are connected to the same pair of network leaf switches. With this configuration, storage I/O traffic does not travel from leaf to spine, and traffic across the first-tier leaf uplinks is limited to BYNET traffic only.
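This locality rule can be expressed as a simple check. The sketch below uses assumed device and switch names; it only verifies that every node and array in a clique shares one leaf-switch pair, which is what keeps storage I/O off the leaf uplinks.

```python
# Sketch of the cabling rule described above (device and switch names are illustrative):
# every node and array in a clique must attach to the same leaf-switch pair,
# so storage I/O never has to cross the leaf uplinks toward the second tier.

from dataclasses import dataclass

@dataclass(frozen=True)
class Device:
    name: str
    leaf_pair: frozenset  # e.g. frozenset({"leaf1a", "leaf1b"})

def clique_traffic_stays_local(nodes: list[Device], arrays: list[Device]) -> bool:
    """True if all nodes and arrays in the clique share one leaf-switch pair."""
    pairs = {d.leaf_pair for d in nodes + arrays}
    return len(pairs) == 1

rack1_leaves = frozenset({"leaf1a", "leaf1b"})
nodes = [Device(f"node{i}", rack1_leaves) for i in range(1, 5)] + [Device("hsn1", rack1_leaves)]
arrays = [Device(f"array{i}", rack1_leaves) for i in range(1, 4)]
print(clique_traffic_stays_local(nodes, arrays))  # True: storage I/O stays on the leaf pair
```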

While two 25GbE leaf switches are sufficient for a single-clique, single-rack system, multi-clique systems require two 100GbE second-tier switches to connect the cliques.
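Assuming one leaf-switch pair per clique (one clique per rack), the switch counts implied by this topology can be sketched as follows; the function and counts are illustrative, not a Teradata sizing rule.

```python
# Rough, illustrative sketch of switch counts implied by the text: two 25GbE leaf
# switches per clique (one clique per rack is assumed), plus one pair of 100GbE
# second-tier switches once the system grows beyond a single clique.

def switch_counts(num_cliques: int) -> dict[str, int]:
    return {
        "leaf_25gbe": 2 * num_cliques,
        "second_tier_100gbe": 2 if num_cliques > 1 else 0,
    }

print(switch_counts(1))  # {'leaf_25gbe': 2, 'second_tier_100gbe': 0}
print(switch_counts(3))  # {'leaf_25gbe': 6, 'second_tier_100gbe': 2}
```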

The following image shows this connectivity:


Figure: Multi-clique switch topology