Physical Database Integrity

Teradata® VantageCloud Lake


Physical database integrity checking mechanisms typically detect data corruption caused by lost writes or by bit, byte, and byte string errors. Hardware devices typically protect against data corruption automatically by means of error detection and correction algorithms. For example, bit- and byte-level corruption of disk I/O is typically detected, and often corrected, by error checking and correcting mechanisms in the disk drive hardware. If the corruption is detected but cannot be corrected, the pending I/O request fails.

Similarly, bit- and byte-level corruption of an I/O in transit may be detected by parity or error checking and correcting mechanisms in memory and at each intermediate communication link in the path. Again, if the corruption is detected but cannot be corrected, the pending I/O request fails.

CHECKSUM Integrity Checking and Physical Database Integrity

Despite these protections, corrupted data can still be written to the database. To minimize this problem, users can specify that checksums be computed for individual base tables. Checksums verify the integrity of database disk I/O operations. A checksum is a numeric value computed from data; for a given set of unchanged data, the checksum value is constant.

Users can specify checksums for individual base tables in DDL using the ALTER TABLE, CREATE JOIN INDEX, and CREATE TABLE requests.
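
The following sketch shows one way the CHECKSUM table option can be written in DDL. The database, table, and column names are illustrative, and the set of supported CHECKSUM values (for example, DEFAULT, ON, OFF) can vary by platform and release, so verify the syntax against the SQL DDL reference for your system.

/* Create a table with checksums enabled (illustrative names) */
CREATE TABLE sales_db.order_detail ,
    FALLBACK ,
    CHECKSUM = ON
    (order_id   INTEGER NOT NULL,
     item_id    INTEGER NOT NULL,
     quantity   INTEGER,
     order_date DATE FORMAT 'YYYY-MM-DD')
PRIMARY INDEX (order_id);

/* Enable checksums on an existing table and recalculate them immediately */
ALTER TABLE sales_db.order_detail , CHECKSUM = ON IMMEDIATE;

/* Revert the table to the system-wide default checksum setting */
ALTER TABLE sales_db.order_detail , CHECKSUM = DEFAULT;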

Because calculating checksums requires system resources and may affect system performance, system-wide checksums are disabled by default on most platforms. Contact Teradata Support if you suspect disk corruption.

For Object File System tables, the checksum setting is a default that you cannot change.

FALLBACK Protection and Physical Database Integrity

Fallback protection is another important data integrity mechanism. Fallback works by writing the same data to two different AMPs within a compute cluster. If the AMP that manages the primary copy of the data goes down, you can still access the fallback copy from the other AMP.

The NO FALLBACK option and the NO FALLBACK default are not supported on platforms that are optimized for fallback.
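
As a rough sketch, fallback is specified with the same kind of table-level DDL options. The database, table, and column names below are illustrative, and whether NO FALLBACK is accepted depends on the platform, as noted above.

/* Create a table with fallback: each row is also stored on a
   second AMP, so the data stays available if one AMP goes down */
CREATE TABLE hr_db.employee ,
    FALLBACK
    (employee_id INTEGER NOT NULL,
     last_name   VARCHAR(50),
     hire_date   DATE FORMAT 'YYYY-MM-DD')
PRIMARY INDEX (employee_id);

/* Add fallback protection to an existing table */
ALTER TABLE hr_db.employee , FALLBACK;

/* Remove fallback (rejected on platforms that do not support NO FALLBACK) */
ALTER TABLE hr_db.employee , NO FALLBACK;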

If you specify fallback for a table, you double the amount of disk space required to store the same quantity of data. The amount of disk space required by a table is also doubled if you configure your system for RAID1 mirroring. Therefore, if you configure your disks for RAID1 mirroring and also specify fallback protection for a table, you quadruple the amount of disk space required to store the same quantity of data. For example, a table whose primary data occupies 1 TB occupies 2 TB with fallback, and 4 TB with both fallback and RAID1 mirroring.

A table defined with fallback imposes a performance penalty on DELETE, INSERT, and UPDATE operations, because each such operation must be performed twice: once on the primary copy of the data and once on the fallback copy.

By default, the system brings Vantage up even when AMPs are down, on the assumption that the data on the down AMPs is available from fallback copies. If your site does not use fallback for critical tables, keep Vantage down in this situation.

Vantage also provides a means for using fallback to deal with read errors caused by bad data blocks (see Reading or Repairing Data from Fallback).