Job Example 7: Mini-Batch Loading

Job Objective:

Read data directly from one or more external flat files and write it to a Teradata Database table.

Note: This job represents a special case of high-speed loading in which the destination table is already populated, or has join indexes or other restrictions that prevent the Load operator from writing to it directly. Because of this, the job includes an intermediate step that loads the data into an empty staging table and then uses the DDL operator with INSERT…SELECT to move the data into the final destination table.
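
The overall shape of such a job can be sketched as a three-step TPT script, shown below. This is only an illustration, not the PTS00013 sample script itself: the job, step, table, and column names are invented placeholders, and the DEFINE SCHEMA and DEFINE OPERATOR statements it relies on are omitted here (hypothetical versions are sketched under Rationale, below).

  /* Hypothetical step layout for a mini-batch load job; all names are placeholders */
  DEFINE JOB MINI_BATCH_LOAD
  DESCRIPTION 'Load flat files into a populated table through a staging table'
  (
     /* DEFINE SCHEMA and DEFINE OPERATOR statements omitted here;
        hypothetical versions are sketched under Rationale below */

     /* Step 1: the DDL operator drops any leftover staging table and creates an empty one */
     STEP Setup_Staging_Table
     (
        APPLY
           ('DROP TABLE Staging_Table;'),
           ('CREATE TABLE Staging_Table (Account_Number VARCHAR(10), Trans_Amount VARCHAR(12));')
        TO OPERATOR (DDL_OPERATOR);
     );

     /* Step 2: the Load operator writes rows read by the DataConnector operator
        into the empty staging table */
     STEP Load_Staging_Table
     (
        APPLY
           ('INSERT INTO Staging_Table (:Account_Number, :Trans_Amount);')
        TO OPERATOR (LOAD_OPERATOR)
        SELECT * FROM OPERATOR (FILE_READER);
     );

     /* Step 3: the DDL operator moves the rows into the populated production table
        with INSERT...SELECT, then drops the staging table */
     STEP Insert_Into_Target_Table
     (
        APPLY
           ('INSERT INTO Target_Table SELECT * FROM Staging_Table;'),
           ('DROP TABLE Staging_Table;')
        TO OPERATOR (DDL_OPERATOR);
     );
  );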

Data Flow Diagrams

Figure 31 shows a flow diagram of the elements of Job Example 7.

Figure 31: Job Example PTS00013 -- Mini-Batch Loading

Sample Script

For the sample script that corresponds to this job, see the following script in the sample/userguide directory:

PTS00013: Mini-Batch Loading into Teradata Database Tables.

Rationale

This job uses the following operators; a sketch of hypothetical definitions for all three appears after this list:

  • DDL operator because it can DROP/CREATE staging tables and target tables prior to loading, DROP unneeded tables at the conclusion of the job, and load the production table from the staging table using INSERT…SELECT.
  • DataConnector operator because it is the only producer operator that reads data from external flat files and from the Named Pipes access module.
  • Load operator because it is the consumer operator that offers the best performance for high-speed writing of a large number of rows into an empty Teradata Database table.
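
To illustrate how these three operators might be declared, the sketch below shows hypothetical DEFINE SCHEMA and DEFINE OPERATOR statements that would sit inside the DEFINE JOB block of the step layout sketched earlier, before the STEP statements. None of these names or attribute values come from the PTS00013 script; the TdpId, logon, directory, file, and table values are placeholders that must be replaced with values appropriate to your system.

  /* Hypothetical operator definitions; all attribute values are placeholders */
  DEFINE SCHEMA Trans_Schema
  (
     Account_Number VARCHAR(10),
     Trans_Amount   VARCHAR(12)
  );

  /* DDL operator: runs the DROP/CREATE and INSERT...SELECT statements.
     ErrorList '3807' tells it to ignore "object does not exist" errors
     when dropping a staging table that is not there yet. */
  DEFINE OPERATOR DDL_OPERATOR
  TYPE DDL
  ATTRIBUTES
  (
     VARCHAR TdpId        = 'mydbc',
     VARCHAR UserName     = 'myuser',
     VARCHAR UserPassword = 'mypassword',
     VARCHAR ErrorList    = '3807'
  );

  /* DataConnector producer: reads delimited records from one or more flat files */
  DEFINE OPERATOR FILE_READER
  TYPE DATACONNECTOR PRODUCER
  SCHEMA Trans_Schema
  ATTRIBUTES
  (
     VARCHAR DirectoryPath = './data/',
     VARCHAR FileName      = 'trans*.txt',
     VARCHAR Format        = 'Delimited',
     VARCHAR TextDelimiter = '|',
     VARCHAR OpenMode      = 'Read'
  );

  /* Load operator: high-speed write into the empty staging table */
  DEFINE OPERATOR LOAD_OPERATOR
  TYPE LOAD
  SCHEMA *
  ATTRIBUTES
  (
     VARCHAR TdpId        = 'mydbc',
     VARCHAR UserName     = 'myuser',
     VARCHAR UserPassword = 'mypassword',
     VARCHAR TargetTable  = 'Staging_Table',
     VARCHAR LogTable     = 'Staging_Table_Log',
     VARCHAR ErrorTable1  = 'Staging_Table_E1',
     VARCHAR ErrorTable2  = 'Staging_Table_E2'
  );

In this arrangement the Load operator touches only the empty staging table, which satisfies its requirement for an empty, unrestricted target, and the DDL operator's INSERT…SELECT then applies the rows to the populated or restricted production table entirely inside the database.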