15.00 - Restrictions on Load Utilities - Teradata Database

Teradata Database Design


Restrictions on Load Utilities

You cannot use FastLoad, MultiLoad, or the Teradata Parallel Transporter operators LOAD and UPDATE to load data into base tables that have unique secondary indexes. If you attempt to load data into base tables with USIs using these utilities, the load operation aborts and returns an error message to the requestor.

Before you can run FastLoad, MultiLoad, or the Teradata Parallel Transporter operators LOAD and UPDATE against a USI-indexed base table, you must first drop all USIs defined on that table.
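A typical workflow therefore brackets the bulk load with DDL. The following sketch assumes a hypothetical table `orders` with a USI named `order_nbr_usi` on column `order_nbr`; the names are illustrative, not from this guide:

```sql
-- Drop the USI so FastLoad/MultiLoad can run (illustrative names).
DROP INDEX order_nbr_usi ON orders;

-- ... run the FastLoad or MultiLoad job against orders here ...

-- Recreate the USI after the load operation completes.
CREATE UNIQUE INDEX order_nbr_usi (order_nbr) ON orders;
```

Note that the CREATE UNIQUE INDEX statement fails if the load introduced duplicate values in the indexed column, so duplicates must be removed first.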

Load utilities like Teradata Parallel Data Pump, BTEQ, and the Teradata Parallel Transporter operators INSERT and STREAM, which perform standard SQL row inserts and updates, are supported for USI‑indexed tables.

You cannot drop a USI to enable batch data loading by utilities such as MultiLoad and FastLoad as long as requests that use that index are running. Each such query places an ACCESS or READ lock on the index subtable while it runs, blocking the completion of any DROP INDEX transaction until that lock is released. Furthermore, as long as a DROP INDEX transaction is either waiting for its EXCLUSIVE lock or running, requests and batch data loading jobs against the underlying table of the index cannot begin processing.

ALWAYS GENERATED … NO CYCLE identity columns can be a better choice than USIs for the task of enforcing row uniqueness in multiset NUPI tables because they are supported by both MultiLoad and FastLoad. However, identity columns cannot be used to facilitate row access or joins.
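For example, a multiset NUPI table might rely on an identity column, rather than a USI, to guarantee row uniqueness. The schema below is an illustrative sketch, not an example from this guide:

```sql
-- Hypothetical multiset NUPI table: the identity column enforces row
-- uniqueness without a USI, so FastLoad and MultiLoad can load it directly.
CREATE MULTISET TABLE sales_fact (
    sale_id   INTEGER GENERATED ALWAYS AS IDENTITY
              (START WITH 1 INCREMENT BY 1 NO CYCLE),
    store_nbr INTEGER,
    sale_amt  DECIMAL(10,2)
) PRIMARY INDEX (store_nbr);
```

Because `sale_id` is system-generated, it cannot serve as a natural access path or join column, which is the trade-off noted above.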

If you define any non‑primary index uniqueness constraints on a table, you must drop them all before you use MultiLoad or FastLoad to load rows into that table, then recreate them after the load operation completes (see “Using Unique Secondary Indexes to Enforce Row Uniqueness” on page 457 for a workaround).

If the MultiLoad or FastLoad operation loads any duplicate rows into the table, then you cannot recreate a uniqueness constraint on the table. You must first detect the duplicates and then remove them from the table.
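One way to detect such duplicates is to group on the full column list and look for groups with more than one row. The query below is a sketch using the same hypothetical columns as earlier examples:

```sql
-- Find duplicate rows (illustrative column list) before attempting to
-- recreate a uniqueness constraint on the table.
SELECT store_nbr, sale_amt, COUNT(*) AS dup_count
FROM sales_fact
GROUP BY store_nbr, sale_amt
HAVING COUNT(*) > 1;
```

After the offending rows are identified and the surplus copies deleted, the uniqueness constraint can be recreated.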

Application‑based methods of duplicate row management make several assumptions that are difficult to enforce. These assumptions are:

  • The application code used to detect and reject duplicate rows is implemented identically across all applications in the enterprise.
  • The application code used to detect and reject duplicate rows, even if universally implemented, is correct for all situations.
  • All updates to the database are made by means of those applications. This assumption neglects the possibility of ad hoc interactive SQL updates that bypass the application‑based duplicate row detection and rejection code.
See Chapter 12: “Designing for Database Integrity” for more information about the advantages of using system‑based declarative constraints to enforce database integrity.

See Chapter 16: “Design Issues for Tactical Queries” for a description of the special design considerations that must be evaluated when using USIs to support tactical queries.