Physical Database Integrity

Teradata Vantage™ - Database Design
Advanced SQL Engine - Teradata Database
Release 17.10
July 2021

Physical database integrity checking mechanisms usually detect data corruption caused by lost writes or by bit, byte, and byte string errors. Most hardware devices protect against data corruption automatically by means of various error detection and correction algorithms. For example, bit- and byte-level corruption of disk I/O is usually detected, and often corrected, by error checking and correcting mechanisms in the disk drive hardware. If the corruption is detected but cannot be corrected, the pending I/O request fails.

Similarly, bit- and byte-level corruption of an I/O in transit might be detected by various parity or error checking and correcting mechanisms in memory and at each intermediate communication link in the path. Again, if the corruption is detected but cannot be corrected, the pending I/O request fails.

CHECKSUM Integrity Checking and Physical Database Integrity

It is still possible, however, for corrupted data to be written to the database. To minimize this problem, users can specify that checksums be computed for individual base tables. Checksums verify the integrity of database disk I/O operations. A checksum is a numeric value computed from data: for a given set of data, the checksum value is always the same as long as the data is unchanged, so a mismatch between a stored checksum and one recomputed at read time indicates corruption.

Users can specify checksum for individual base tables in DDL using the ALTER TABLE, CREATE HASH INDEX, CREATE JOIN INDEX, and CREATE TABLE requests. See Teradata Vantage™ - SQL Data Definition Language Syntax and Examples, B035-1144 for information about using these statements.
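As an illustration only, the DDL might look like the following sketch. The database, table, and column names are hypothetical, and the exact CHECKSUM option values and the IMMEDIATE keyword should be confirmed against the SQL Data Definition Language manual cited above:

```sql
-- Hypothetical table; CHECKSUM = ON requests checksum calculation
-- for disk I/O on this table (confirm syntax in the DDL manual).
CREATE TABLE sales_db.orders ,FALLBACK ,CHECKSUM = ON
  (order_id  INTEGER NOT NULL,
   amount    DECIMAL(10,2))
PRIMARY INDEX (order_id);

-- Enable checksums on an existing table; IMMEDIATE requests that
-- existing data blocks be updated now rather than as they are next
-- modified.
ALTER TABLE sales_db.orders ,CHECKSUM = ON IMMEDIATE;
```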

Because calculating checksums requires system resources and may affect system performance, system-wide checksums are disabled by default on most platforms. Contact Teradata Services if you suspect disk corruption.

FALLBACK Protection and Physical Database Integrity

Fallback protection is another important data integrity mechanism. Fallback works by writing the same data to two different AMPs within a cluster. If the AMP that manages the primary copy of the data goes down, you can still access the fallback version from the other AMP.
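As a sketch, a table might be created with or without fallback as follows. The table definitions here are hypothetical; the FALLBACK and NO FALLBACK options themselves are documented in the SQL Data Definition Language manual:

```sql
-- Fallback protected: each row is stored on its primary AMP and a
-- copy is written to a fallback AMP in the same cluster.
CREATE TABLE hr_db.employee ,FALLBACK
  (emp_id    INTEGER NOT NULL,
   emp_name  VARCHAR(60))
PRIMARY INDEX (emp_id);

-- No fallback: only the primary copy is stored.
CREATE TABLE hr_db.temp_stage ,NO FALLBACK
  (row_id INTEGER)
PRIMARY INDEX (row_id);
```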

The NO FALLBACK option and the NO FALLBACK default are not available on platforms optimized for fallback.

Keep in mind that specifying fallback for a table doubles the disk space required to store it. RAID1 mirroring likewise doubles the disk space a table requires. This means that if you configure your disks for RAID1 mirroring and also specify fallback protection for a table, you actually quadruple the amount of disk space required to store the same quantity of data.

Also be aware that defining a table with fallback imposes a performance penalty on all DELETE, INSERT, and UPDATE operations on the table, because each such operation must be executed twice: once against the primary copy of the rows and once against the fallback copy.

By default, the system brings Vantage up even when AMPs are down, on the assumption that any down AMPs can run in fallback mode. If your site does not use fallback for its critical tables, you probably want Vantage to remain down in this situation. To enable the logic that keeps Vantage down, use the Maximum Fatal AMPs option of the SCREEN DBS command in the DBS Control utility (see Teradata Vantage™ - Database Utilities, B035-1102 for documentation of the SCREEN DBS command).

Vantage also provides a means for using fallback to deal with read errors caused by bad data blocks (see About Reading or Repairing Data from Fallback).