Import Tasks - MultiLoad

Teradata® MultiLoad Reference - 20.00

Release Number: 20.00
Last Update: October 2023
Language: English (United States)
Product Category: Teradata Tools and Utilities

An import task quickly applies large amounts of client data to one or more tables or views on the database.

If a database restart/recovery operation interrupts a Teradata MultiLoad import task, the utility resumes processing from the point of interruption. It does not roll back and repeat processing that took place before the interruption. This effectively eliminates the processing penalty that would be incurred by transaction roll back during a restart/recovery operation.

The following table describes considerations when programming import tasks in the Teradata MultiLoad job script.

Import Task Considerations for Teradata MultiLoad

Archive/Recovery Utility Statements

These statements include:
  • DUMP
  • RESTORE
  • ROLLBACK
  • ROLLFORWARD
When associated with a Teradata MultiLoad target table, the database will:
  • Reject all ROLLBACK and ROLLFORWARD statements, regardless of operational phase, and whether Teradata MultiLoad is active or not.
  • Execute all RESTORE statements, without regard for Teradata MultiLoad locks. If a Teradata MultiLoad utility task is active in the application phase, however, the database delays execution of a RESTORE statement until the application phase completes and releases the write locks.
  • Process DUMP statements differently, depending on the operational state of Teradata MultiLoad:

    If Teradata MultiLoad is not in the application phase, then the database processes the DUMP statement normally.

    If Teradata MultiLoad is in the application phase, but inactive, then the database processes the DUMP statement, but returns only the table header with field 4 cleared. This means that the table will be restored empty.

    If Teradata MultiLoad is in the application phase and active, the database delays processing of the DUMP statement until the application phase completes. If the application phase ends normally, the database processes the DUMP normally. If the application phase was interrupted before a normal completion, the database processes the DUMP statement but returns only the table header.

The application phase can prevent normal processing of a DUMP statement. Take the following precautions to protect the target tables before starting a Teradata MultiLoad job:

  1. Dump the target tables.
  2. Execute a CHECKPOINT statement against the associated permanent journal.

If a table later becomes corrupted because of bad data or an operational error, restore the table from the dump and roll it forward to the checkpoint.
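As a sketch of the second precaution, the Teradata SQL CHECKPOINT statement marks a synchronization point in a permanent journal. The database, journal, and checkpoint names below are hypothetical:

```sql
/* 1. Archive (dump) the target tables with the Archive/Recovery (ARC) utility. */
/* 2. Then mark a recovery point in the associated permanent journal:           */
CHECKPOINT payroll_db.pj_journal, NAMED before_mload;
```

If the target table is later corrupted, it can be restored from the archive and rolled forward to the named checkpoint.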

Column Value Specifications

For each applicable imported record, column values must be explicitly specified for all columns:
  • Of the primary index of update and delete target tables
  • Of the partitioning columns of update and delete target tables, if the target table has a partitioned primary index (PPI)
  • Of insert target tables, except those columns for which a default value (including null) is specified
To fulfill the requirement for update and delete target tables, use the AND logical operator with a series of terms in either of the following forms:
  • column_reference = :variable_reference
  • column_reference = constant
Teradata MultiLoad cannot process UPDATE or DELETE statements with OR logical operators. Instead, use two separate DML statements and apply them conditionally.
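As a minimal sketch of this restriction (table, column, layout, and label names are hypothetical), an OR condition such as `region = 'EAST' OR region = 'WEST'` can be rewritten as two DML labels, each with ANDed equality terms, applied conditionally by the IMPORT command:

```sql
.DML LABEL upd_east;
UPDATE accounts SET balance = :balance
  WHERE acct_no = :acct_no AND region = 'EAST';

.DML LABEL upd_west;
UPDATE accounts SET balance = :balance
  WHERE acct_no = :acct_no AND region = 'WEST';

.IMPORT INFILE acctdata LAYOUT acctlay
  APPLY upd_east WHERE region_code = 'E'
  APPLY upd_west WHERE region_code = 'W';
```

Each record is routed to at most one of the two UPDATE statements by its APPLY condition, which achieves the effect of the OR without placing it in a single WHERE clause.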
Data Source Restriction

All of the source data for a Teradata MultiLoad import task must originate from a data source on the client system. It cannot originate from existing tables in the database.

Additionally, no statement within an import task can reference any table or any row in the database that is not a specified import task table.

DROP and Other DDL Statements

Teradata MultiLoad rejects all DDL statements except DROP during an import task. During the application phase, however, while it is actively applying the source data to the target tables, Teradata MultiLoad delays both the processing of DROP statements and the rejection of the others. After the application phase completes, Teradata MultiLoad processes any waiting DROP statements.

This is the usual treatment for DROP statements during the application phase; at all other times, Teradata MultiLoad processes DROP statements immediately. Some cases may require that Teradata MultiLoad be stopped first.
Invalid Command Parameters

The following command parameters are not allowed in a Teradata MultiLoad delete task:
  • The CHECKPOINT and SESSIONS parameters of the BEGIN MLOAD command are not allowed because there is no row distribution in a Teradata MultiLoad delete task.
  • The ERRLIMIT parameter of the BEGIN MLOAD command is not allowed because the error types that can be limited are not recognized in a Teradata MultiLoad delete task.
  • The FOR and THRU parameters of the IMPORT command are not allowed because only one data record can be imported for a Teradata MultiLoad delete task.
  • The DML LABEL command and the APPLY clause of the IMPORT command are not allowed because the single imported data record is unconditionally applied by the single DELETE statement in a Teradata MultiLoad delete task.
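A minimal delete-task script illustrating these restrictions might look like the following (all object, layout, and file names are hypothetical). Note that it contains no CHECKPOINT, SESSIONS, ERRLIMIT, FOR, or THRU parameters, no DML LABEL command, and no APPLY clause:

```sql
.LOGTABLE logdb.mld_restart;
.LOGON tdpid/user,password;
.BEGIN DELETE MLOAD TABLES sales_hist;
.LAYOUT cutlay;
.FIELD cutoff_dt 1 CHAR(10);       /* single record supplies the cutoff value */
DELETE FROM sales_hist WHERE sale_dt < :cutoff_dt;
.IMPORT INFILE cutoff_file LAYOUT cutlay;
.END MLOAD;
.LOGOFF;
```

The single imported record supplies the value for :cutoff_dt, and the DELETE statement is applied unconditionally.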
Locks

Teradata MultiLoad uses the access lock to allow concurrent select access to all affected target tables throughout the entire import task.
Concurrent DML statements that require read or write locks are permitted during the acquisition phase:
  • Until Teradata MultiLoad finishes acquiring data from the client system
  • While conducting a sort of the acquired data

Except for select access, concurrent DML statements are not permitted during the application and cleanup phases of an import task.

Multiple Operations

Each import task can include multiple INSERT, UPDATE, and DELETE statements, and the multiple DML operations can be conditionally applied to as many as five tables with a single pass of the client file.

When associated with a client system file, each DML operation must provide a value for each nonnullable or nondefaultable column making up the primary index of the target table or the underlying table of a target view.

For each DML operation:

  1. The client system sends a model DML statement, accompanied by a unique identifier, called a match tag, to the database.
  2. When processing the client data, Teradata MultiLoad evaluates each record according to the WHERE condition specifications of the IMPORT command. Each condition signifies a set of DML operations.
  3. When a client record satisfies a condition, Teradata MultiLoad combines the record with the match tag for that set of DML operations and sends it to the database.
  4. The database then performs the DML operation specified by the model statement.

If a single client record satisfies more than one condition, Teradata MultiLoad sends multiple copies of the record to the database, each with a different match tag.
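The match-tag routing described in the steps above can be sketched in script form as follows (table, layout, label, and field names are hypothetical). Each DML LABEL becomes a model statement with its own match tag, and the APPLY clauses of the IMPORT command are the WHERE conditions that route each record:

```sql
.DML LABEL do_ins;
INSERT INTO emp (emp_no, name) VALUES (:emp_no, :name);

.DML LABEL do_audit;
INSERT INTO emp_audit (emp_no, action_cd) VALUES (:emp_no, :action);

.IMPORT INFILE empdata LAYOUT emplay
  APPLY do_ins   WHERE action = 'A'
  APPLY do_audit WHERE dept_no = 100;
```

A record with action = 'A' and dept_no = 100 satisfies both conditions, so Teradata MultiLoad sends two copies of it to the database, one tagged for do_ins and one tagged for do_audit.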

Return Code

Depending on the particular customer application that is using MultiLoad, rows in either error table (ET or UV) are not necessarily unexpected, so setting a non-zero return/exit code is not the correct course of action in all cases.
Statements Supported

Teradata MultiLoad supports any combination of the following DML statements when importing data from one or more sources:
  • INSERT
  • UPDATE
  • DELETE

Each group of DML statements must be preceded by a DML LABEL command that defines the label and error treatment options.

Each data source:
  • Contains a record for each table row to which one or more DML statements apply.
  • Is identified by an IMPORT command that references the DML statements and a LAYOUT command to match the records of the data source to the DML statements.

Each LAYOUT command requires a sequence of FIELD commands, FILLER commands, and TABLE commands to define the record layout of a data source.
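The relationship among the LAYOUT, FIELD, FILLER, and DML LABEL commands can be sketched as follows (names and record positions are hypothetical):

```sql
.LAYOUT emplay;
.FIELD  emp_no  1  INTEGER;    /* starts at byte 1 of each record        */
.FILLER pad     *  CHAR(2);    /* present in the record, not sent to DML */
.FIELD  name    *  CHAR(20);   /* * means the next available position    */

.DML LABEL ins_emp;            /* defines the label the IMPORT references */
INSERT INTO emp (emp_no, name) VALUES (:emp_no, :name);
```

An IMPORT command then ties a data source to this layout and label, for example with LAYOUT emplay and APPLY ins_emp.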

Target Tables

Each import task can access up to five target tables on the database. To perform an import task, the appropriate access permissions (for INSERT, UPDATE, and DELETE) are required on each target table.
Each target table:
  • Can, but need not:

    - Be empty

    - Have NUSIs

  • Cannot have unique secondary indexes
  • Cannot have a primary time index

Each import task command need not access the same target table.

Upsert Function

The Teradata MultiLoad upsert function applies consecutive UPDATE and INSERT statements to the same row.

If the UPDATE statement fails because the target table row does not exist, Teradata MultiLoad automatically executes the INSERT statement, completing the operation in a single pass instead of two.

If the UPDATE statement fails due to an invalid length of data in the statement's WHERE clause, modify the data length and resubmit the statement. This can occur when the data field in the UPDATE statement is longer than the primary index column. Possible workarounds are using FILLER or an explicit cast of the data field to match the primary index column length.
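As a sketch of the upsert function (table, column, and label names are hypothetical), the DO INSERT FOR MISSING UPDATE ROWS option of the DML LABEL command pairs the UPDATE with its fallback INSERT:

```sql
.DML LABEL upsert_emp DO INSERT FOR MISSING UPDATE ROWS;
UPDATE emp SET sal = :sal WHERE emp_no = :emp_no;
INSERT INTO emp (emp_no, sal) VALUES (:emp_no, :sal);
```

If no row matches the UPDATE's primary index condition, the INSERT is executed for that record instead, so the operation completes in a single pass of the client file.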