Hardware Fault Tolerance - Teradata Database, Release 15.00 - 15.10

Teradata Database Introduction to Teradata
User Guide
English (United States)

Teradata Database provides the following facilities for hardware fault tolerance:




Multiple BYNETs

Multinode Teradata Database servers are equipped with at least two BYNETs. Interprocessor traffic is never stopped unless all BYNETs fail. Within a BYNET, traffic can often be rerouted around failed components.
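The redundancy pattern described above can be sketched in a few lines of Python. This is purely illustrative; the send functions and their names are invented for the example and are not the BYNET API. The key property is that delivery fails only when every interconnect fails:

```python
# Illustrative sketch of message delivery over redundant interconnects.
# The bynet0/bynet1 functions below are hypothetical stand-ins, not real APIs.

def send_with_failover(message, paths):
    """Try each interconnect in turn; raise only when all of them fail."""
    errors = []
    for path in paths:
        try:
            return path(message)
        except ConnectionError as exc:
            errors.append(exc)  # reroute the traffic to the next interconnect
    raise ConnectionError(f"all {len(paths)} interconnects failed: {errors}")

def bynet0(msg):
    raise ConnectionError("BYNET 0 down")  # simulated component failure

def bynet1(msg):
    return f"delivered via BYNET 1: {msg}"

# Traffic still flows because the second BYNET takes over.
assert send_with_failover("ping", [bynet0, bynet1]).startswith("delivered")
```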

RAID disk units

  • Teradata Database servers use Redundant Arrays of Independent Disks (RAID) configured as RAID 1, RAID 5, or RAID S.
  • Non-array storage cannot use RAID technology.
  • RAID 1 arrays offer mirroring, a method of maintaining identical copies of data.
  • RAID 5 and RAID S protect data from single-disk failures at the cost of a 25% increase in disk storage for parity.
  • RAID 1 provides better performance and data protection than RAID 5 or RAID S, but is more expensive.
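The single-disk protection that parity RAID offers rests on XOR parity: the parity block is the XOR of the data blocks, so any one missing block can be recomputed from the survivors. A minimal sketch (not Teradata code; byte values are arbitrary):

```python
# XOR-parity sketch of the idea behind RAID 5 single-disk protection.
from functools import reduce

def parity(blocks):
    """XOR corresponding bytes across the data blocks to get the parity block."""
    return bytes(reduce(lambda a, b: a ^ b, stripe) for stripe in zip(*blocks))

def rebuild(surviving_blocks, parity_block):
    """Reconstruct the single failed block from the survivors plus parity."""
    return parity(surviving_blocks + [parity_block])

# Four data disks plus one parity disk: a 25% storage increase.
disks = [b"\x01\x02", b"\x10\x20", b"\x0a\x0b", b"\x03\x04"]
p = parity(disks)

lost = disks.pop(2)               # simulate a single-disk failure
assert rebuild(disks, p) == lost  # the data is fully recoverable
```

Mirroring (RAID 1) trades this arithmetic for a full second copy, which is why it costs more storage but reads and recovers faster.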
Multiple client-server connections

In a client-server environment, multiple connections between the server and its mainframe and workstation-based clients ensure that most processing continues even if one or more of those connections fail.

Vproc migration is a software feature that complements these hardware facilities.

Isolation from client hardware defects

In a client-server environment, the server is isolated from many client hardware defects and can continue processing despite such defects.

Power supplies and fans

Each cabinet in a configuration has redundant power supplies and fans to ensure fail-safe operation.

Hot swap capability for node components

Teradata Database allows some components to be removed and replaced while the system is running, a process known as hot swap. Teradata Database offers hot swap capability for the following:

  • Disks within RAID arrays
  • Fans
  • Power supplies
Cliques

A clique is a group of nodes sharing access to the same disk arrays. The nodes and disks are interconnected through Fibre Channel (FC) buses, and each node can communicate directly with all disks. This architecture preserves data availability in the case of a node failure.

A clique supports the migration of vprocs after a node failure. If a node in a clique fails, its vprocs migrate to another node in the clique and continue to operate while recovery occurs on their home node. Migration minimizes the performance impact on the system.

PEs that manage TPA-hosted physical channel connections cannot migrate because they depend on hardware that is physically attached to the assigned node. PEs for workstation-attached connections do migrate when a node failure occurs, as do all AMP vprocs.

To ensure maximum fault tolerance, place no more than one node of a clique in the same cabinet. The battery backup feature usually makes this precaution unnecessary, but if you want maximum fault tolerance, plan your cliques so that the nodes are never in the same cabinet.
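The migration behavior described above can be modeled as a toy simulation. This is an illustrative sketch only; the class and method names are invented and do not reflect Teradata internals. It works because every node in a clique can reach the shared disk arrays, so an AMP vproc can run anywhere in the clique:

```python
# Toy model of vproc migration within a clique (illustrative only; all
# names are invented for the example, not Teradata internals).

class Clique:
    def __init__(self, nodes):
        # node name -> list of vprocs currently hosted on that node
        self.nodes = {n: [] for n in nodes}

    def assign(self, vproc, node):
        self.nodes[node].append(vproc)

    def fail_node(self, failed):
        """Migrate the failed node's vprocs to the surviving clique nodes."""
        orphans = self.nodes.pop(failed)
        survivors = list(self.nodes)
        for i, vproc in enumerate(orphans):
            # Spread orphans round-robin to limit the performance impact
            # of the failure on any single surviving node.
            self.nodes[survivors[i % len(survivors)]].append(vproc)

clique = Clique(["node1", "node2", "node3"])
for i, amp in enumerate(["AMP0", "AMP1", "AMP2", "AMP3", "AMP4", "AMP5"]):
    clique.assign(amp, f"node{i % 3 + 1}")

clique.fail_node("node2")  # AMP1 and AMP4 migrate to the survivors
assert "AMP1" in clique.nodes["node1"] and "AMP4" in clique.nodes["node3"]
```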
For more information on the topics presented in this chapter, see the following Teradata Database and Teradata Tools and Utilities books.

IF you want to learn more about Software Fault Tolerance, including:
  • Vproc Migration and Fallback Tables
  • Clusters (AMP clusters, one-cluster and small cluster configurations)
  • Journaling and Backup/Archive/Recovery (online archiving)
  • Table Rebuild Utility
THEN see:
  • Database Administration
  • Teradata Archive/Recovery Utility Reference
  • SQL Data Definition Language
  • Utilities

IF you want to learn more about Hardware Fault Tolerance, THEN see Database Design.