Teradata Parallel Transporter User Guide

Product: Parallel Transporter
Release Number: 15.10
Language: English (United States)
Last Update: 2018-10-07
dita:id: B035-2445
lifecycle: previous
Product Category: Teradata Tools and Utilities

Job Example 1: High Speed Bulk Loading into an Empty Table

Job Objective

Read large amounts of data directly from an external flat file, or from an access module, and write it to an empty Teradata Database table. If the source data is an external flat file, this job is equivalent to using the Teradata FastLoad utility. If the data source is a named pipe, the job is equivalent to using the standalone Teradata DataConnector utility to read data from the named pipe through an access module and write it to a temporary flat file, and then running a separate FastLoad job to load the data from that temporary file.
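The shape of such a job can be sketched as a TPT script in which a DataConnector producer feeds the Load operator. This is a minimal illustrative sketch, not the shipped PTS00001 sample: the schema, file name, table names, and job variable names are all assumptions.

```
/* Minimal sketch: flat file -> empty table via DataConnector + Load.
   All names and attribute values below are illustrative assumptions. */
DEFINE JOB LOAD_EMPTY_TABLE
DESCRIPTION 'High speed bulk load from a flat file'
(
  DEFINE SCHEMA Source_Schema
  (
    Associate_Id   VARCHAR(10),
    Associate_Name VARCHAR(30)
  );

  DEFINE OPERATOR File_Reader
  TYPE DATACONNECTOR PRODUCER
  SCHEMA Source_Schema
  ATTRIBUTES
  (
    VARCHAR FileName      = 'source_data.txt',  /* assumed file name */
    VARCHAR Format        = 'Delimited',
    VARCHAR TextDelimiter = '|',
    VARCHAR OpenMode      = 'Read'
  );

  DEFINE OPERATOR Bulk_Loader
  TYPE LOAD
  SCHEMA *
  ATTRIBUTES
  (
    VARCHAR TdpId        = @jobvar_tdpid,
    VARCHAR UserName     = @jobvar_username,
    VARCHAR UserPassword = @jobvar_password,
    VARCHAR TargetTable  = 'Target_Table',
    VARCHAR LogTable     = 'Target_Table_Log',
    VARCHAR ErrorTable1  = 'Target_Table_E1',
    VARCHAR ErrorTable2  = 'Target_Table_E2'
  );

  APPLY ('INSERT INTO Target_Table (:Associate_Id, :Associate_Name);')
  TO OPERATOR (Bulk_Loader)
  SELECT * FROM OPERATOR (File_Reader);
);
```

As with the sample scripts, the job variables (@jobvar_tdpid and so on) would be supplied at run time, for example through a job variables file passed to tbuild.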

Note: In cases where data is read from more than one source file, use UNION ALL to combine the data before loading into a Teradata Database table, as shown in Job Example 1C.
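The UNION ALL combination in the note can be sketched as an APPLY statement reading from two producer instances. This fragment belongs inside a DEFINE JOB; the operator and column names are hypothetical, assuming two DataConnector producers defined earlier in the script with identical schemas.

```
/* Sketch: combine rows from two flat-file producers before loading.
   File_Reader_1 and File_Reader_2 are assumed DataConnector producer
   operators with the same schema; Bulk_Loader is a Load operator. */
APPLY ('INSERT INTO Target_Table (:Associate_Id, :Associate_Name);')
TO OPERATOR (Bulk_Loader)
SELECT * FROM OPERATOR (File_Reader_1)
UNION ALL
SELECT * FROM OPERATOR (File_Reader_2);
```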

Data Flow Diagrams

Figure 20 through Figure 22 show flow diagrams of the elements in each of the three variations of Job Example 1.

Figure 20: Job Example PTS00001 -- Reading Data from a Flat File for High Speed Loading
Figure 21: Job Example PTS0002A, PTS0002B -- Reading Data from a Named Pipe for High Speed Loading
Figure 22: Job Example PTS00003 -- Reading Data from Multiple Flat Files for High Speed Loading

Sample Scripts

For the sample scripts that correspond to the three variations of this job, see the following scripts in the sample/userguide directory:

  • PTS00001: High Speed Bulk Loading from Flat Files into an Empty Teradata Database Table.
  • PTS0002A, PTS0002B: High Speed Bulk Loading from a Named Pipe into an Empty Teradata Database Table.
  • PTS00003A: High Speed Loading from Two Flat Files into an Empty Teradata Database Table.
  • PTS00003B: High Speed Loading from a Flat File into an Empty Teradata Database Table.
Rationale

This job uses:

  • DDL operator because it can DROP/CREATE tables needed for the job prior to loading and DROP unneeded tables at the conclusion of the job.
  • DataConnector operator because it is the only producer operator that reads data from external flat files and from the Named Pipes access module.
  • Load operator because it is the consumer operator that offers the best performance for high speed writing of a large number of rows into an empty Teradata Database table.
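The DDL operator's role described above can be sketched as a setup step that drops any leftover tables and creates the empty target table before the load step runs. This is an illustrative sketch only; the table definitions and operator name are assumptions, and ErrorList '3807' tells the DDL operator to ignore the "object does not exist" error when the DROP statements run against a clean system.

```
/* Sketch: DDL operator setup step preceding the load step.
   Names and table definitions are illustrative assumptions. */
DEFINE OPERATOR DDL_Operator
TYPE DDL
ATTRIBUTES
(
  VARCHAR TdpId        = @jobvar_tdpid,
  VARCHAR UserName     = @jobvar_username,
  VARCHAR UserPassword = @jobvar_password,
  VARCHAR ErrorList    = '3807'  /* ignore "table does not exist" on DROP */
);

STEP Setup_Tables
(
  APPLY
    ('DROP TABLE Target_Table;'),
    ('DROP TABLE Target_Table_E1;'),
    ('DROP TABLE Target_Table_E2;'),
    ('CREATE TABLE Target_Table
        (Associate_Id VARCHAR(10), Associate_Name VARCHAR(30));')
  TO OPERATOR (DDL_Operator);
);
```

A final STEP using the same DDL operator could likewise drop the log and error tables after the load completes successfully.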