Physical Database Integrity

Teradata Database Design

Product: Teradata Database
Release Number: 16.10
Release Date: June 2017
Content Type: User Guide
Publication ID: B035-1094-161K
Language: English (United States)

Physical database integrity checking mechanisms detect data corruption caused by lost writes or by bit, byte, and byte-string errors. Most hardware devices protect against data corruption automatically by means of various error detection and correction algorithms. For example, bit- and byte-level corruption of disk I/O is usually detected, and often corrected, by error checking and correcting mechanisms in the disk drive hardware. If the corruption is detected but cannot be corrected, the pending I/O request fails.

Similarly, bit- and byte-level corruption of an I/O in transit might be detected by various parity or error checking and correcting mechanisms in memory and at each intermediate communication link in the path. Again, if the corruption is detected but cannot be corrected, the pending I/O request fails.
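The detect-but-not-correct behavior described above can be sketched with a single parity bit. This is an illustrative example only, not Teradata or hardware code: one even-parity bit detects any odd number of flipped bits in a byte but cannot say which bit flipped, so the only safe response is to fail the operation.

```python
def parity_bit(byte: int) -> int:
    """Return the even-parity bit for an 8-bit value."""
    return bin(byte & 0xFF).count("1") % 2

original = 0b10110010
stored_parity = parity_bit(original)

# One bit flips while the byte is in transit.
corrupted = original ^ 0b00000100

# Unchanged data still matches its stored parity bit.
assert parity_bit(original) == stored_parity

# The single-bit error is detected (parity mismatch), but the parity bit
# carries no information about which bit to repair, so the I/O must fail.
assert parity_bit(corrupted) != stored_parity
```

Real hardware uses stronger codes (for example, ECC memory can correct single-bit errors), but the principle of failing the request on uncorrectable corruption is the same.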

CHECKSUM Integrity Checking and Physical Database Integrity

Despite these safeguards, it is still possible for corrupted data to be written to the database. To minimize this problem, users can specify that checksums be computed for individual base tables. Checksums check the integrity of Teradata Database disk I/O operations. A checksum is a numeric value computed from data. For a given set of data, the checksum value is always the same as long as the data is unchanged.
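The checksum property described above can be demonstrated with a generic CRC. This is an illustrative sketch, not the checksum algorithm Teradata uses internally: a checksum stored alongside a data block reproduces itself exactly on reread if the block is unchanged, so any mismatch exposes corruption that occurred after the checksum was written.

```python
import zlib

# The data block as originally written, with its checksum.
block = b"row data as written to disk"
stored_checksum = zlib.crc32(block)

# On a later read, unchanged data yields the identical checksum.
assert zlib.crc32(block) == stored_checksum

# A lost write or bit error produces a different value, so the
# corruption is detected instead of being silently returned.
corrupted = b"row data as written to d1sk"
assert zlib.crc32(corrupted) != stored_checksum
```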

Users can specify checksums for individual base tables in DDL via the ALTER TABLE, CREATE HASH INDEX, CREATE JOIN INDEX, and CREATE TABLE requests. See SQL Data Definition Language for information about using these statements.

Because calculating checksums requires system resources and may affect system performance, system-wide checksums are disabled by default on most platforms. Contact Teradata Customer Support if you suspect disk corruption.

FALLBACK Protection and Physical Database Integrity

Fallback protection is another important data integrity mechanism. Fallback works by writing the same data to two different AMPs within a cluster. If the AMP that manages the primary copy of the data goes down, you can still access the fallback version from the other AMP.
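The placement rule above can be sketched as follows. This is a conceptual illustration only: the cluster size, hash function, and AMP-assignment rule here are stand-ins, not Teradata internals. The point is that the primary and fallback copies of a row always land on different AMPs in the same cluster, so losing one AMP never loses both copies.

```python
# Hypothetical cluster of four AMPs, identified by number.
CLUSTER = [0, 1, 2, 3]

def primary_amp(row_hash: int) -> int:
    """AMP holding the primary copy of a row (illustrative rule)."""
    return CLUSTER[row_hash % len(CLUSTER)]

def fallback_amp(row_hash: int) -> int:
    """AMP holding the fallback copy: deterministically a *different*
    AMP in the same cluster. Teradata's actual assignment differs."""
    return CLUSTER[(row_hash + 1) % len(CLUSTER)]

# For every row hash, the two copies are on distinct AMPs, so a single
# down AMP still leaves one readable copy of each row.
assert all(primary_amp(h) != fallback_amp(h) for h in range(1000))
```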

Keep in mind that if you specify fallback for a table, you double the amount of disk space required to store the same quantity of data. The disk space required by a table is likewise doubled if you configure your system for RAID1 mirroring. This means that if you configure your disks for RAID1 mirroring and also specify fallback protection for a table, you quadruple the amount of disk space required to store the same quantity of data.
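The space arithmetic above works out as follows; the table size is illustrative.

```python
# Raw size of the table data (hypothetical figure).
table_size_gb = 100

with_fallback = table_size_gb * 2        # second copy on another AMP
with_raid1 = table_size_gb * 2           # every block mirrored on disk
with_both = table_size_gb * 2 * 2        # fallback copy, itself mirrored

assert with_fallback == 200
assert with_raid1 == 200
assert with_both == 400                  # quadruple the raw requirement
```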

Also be aware that defining a table with fallback imposes a performance penalty on all DELETE, INSERT, and UPDATE operations on that table, because each such operation must be executed twice: once against the primary copy and once against the fallback copy.

The system defaults to bringing Teradata Database up when AMPs are down, on the assumption that any down AMPs can run in fallback mode. If your site does not use fallback for its critical tables, you probably want to keep Teradata Database down in this situation. To do so, use the Maximum Fatal AMPs option from the SCREEN DBS command of the DBS Control utility (see Utilities for documentation of the SCREEN DBS command).

Teradata Database also provides a means for using fallback to deal with read errors caused by bad data blocks (see About Reading or Repairing Data from Fallback).