Job Example 1: High-Speed Bulk Loading into an Empty Table - Parallel Transporter

Teradata® Parallel Transporter User Guide
Release 17.10, February 2022

Job Objective

Read large amounts of data directly from an external flat file, or from an access module, and write it to an empty database table. If the data source is an external flat file, this job is equivalent to using the Teradata FastLoad utility. If the data source is a named pipe, the job is equivalent to using the standalone Teradata DataConnector utility to read data from the named pipe through an access module and write it to a temporary flat file, and then running a separate FastLoad job to load the data from that temporary file.
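
The following is a minimal sketch of the flat-file variation (the general shape of sample script PTS00001), assuming hypothetical table, column, file, and job-variable names:

  DEFINE JOB LOAD_EMPTY_TABLE
  DESCRIPTION 'HIGH-SPEED LOAD OF AN EMPTY TABLE FROM A FLAT FILE'
  (
    /* Hypothetical row layout of the source file */
    DEFINE SCHEMA Emp_Schema
    (
      Emp_ID    VARCHAR(10),
      Emp_Name  VARCHAR(30)
    );

    /* DataConnector producer: reads the external flat file */
    DEFINE OPERATOR FILE_READER
    TYPE DATACONNECTOR PRODUCER
    SCHEMA Emp_Schema
    ATTRIBUTES
    (
      VARCHAR DirectoryPath = '.',
      VARCHAR FileName      = 'emp_data.txt',
      VARCHAR Format        = 'Delimited',
      VARCHAR TextDelimiter = '|',
      VARCHAR OpenMode      = 'Read'
    );

    /* Load operator: FastLoad-protocol consumer writing to the empty target table */
    DEFINE OPERATOR LOAD_OPERATOR
    TYPE LOAD
    SCHEMA *
    ATTRIBUTES
    (
      VARCHAR TdpId        = @jobvar_tdpid,
      VARCHAR UserName     = @jobvar_username,
      VARCHAR UserPassword = @jobvar_password,
      VARCHAR TargetTable  = 'Emp_Table',
      VARCHAR LogTable     = 'Emp_Log'
    );

    APPLY
      ('INSERT INTO Emp_Table (Emp_ID, Emp_Name)
        VALUES (:Emp_ID, :Emp_Name);')
    TO OPERATOR (LOAD_OPERATOR)
    SELECT * FROM OPERATOR (FILE_READER);
  );

For the named-pipe variation (PTS00002A and PTS00002B), the DataConnector producer reads through the Named Pipes access module instead of directly from a file; on Linux this typically means adding VARCHAR AccessModuleName = 'np_axsmod.so' to the producer's attributes and pointing FileName at the pipe. Treat the names above as placeholders and see the shipped sample scripts for the authoritative versions.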

In cases where data is read from more than one source file, use UNION ALL to combine the data before loading it into the database table, as shown in Job Example 1C; a sketch of this form follows.
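
A minimal sketch of that APPLY clause, assuming two DataConnector producer instances (FILE_READER_1 and FILE_READER_2, hypothetical names) defined with the same schema:

  APPLY
    ('INSERT INTO Emp_Table (Emp_ID, Emp_Name)
      VALUES (:Emp_ID, :Emp_Name);')
  TO OPERATOR (LOAD_OPERATOR)
  SELECT * FROM OPERATOR (FILE_READER_1)
  UNION ALL
  SELECT * FROM OPERATOR (FILE_READER_2);

UNION ALL requires that all of the combined producer operators emit rows with an identical schema.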

Data Flow Diagrams

The following figures show flow diagrams of the elements in each of the three variations of Job Example 1.

Job Example PTS00001 – Reading Data from a Flat File for High-Speed Loading

Job Example PTS00002A, PTS00002B – Reading Data from a Named Pipe for High-Speed Loading

Job Example PTS00003 – Reading Data from Multiple Flat Files for High-Speed Loading

Sample Scripts

For the sample scripts that correspond to the three variations of this job, see the following scripts in the sample/userguide directory:
  • PTS00001: High-Speed Bulk Loading from Flat Files into an Empty Teradata Database Table.
  • PTS00002A, PTS00002B: High-Speed Bulk Loading from a Named Pipe into an Empty Teradata Database Table.
  • PTS00003A: High-Speed Loading from Two Flat Files into an Empty Teradata Database Table.
  • PTS00003B: High-Speed Loading from a Flat File into an Empty Teradata Database Table.

Rationale

This job uses:
  • The DDL operator, because it can DROP and CREATE the tables needed for the job before loading and DROP unneeded tables at the conclusion of the job (a sketch follows this list).
  • The DataConnector operator, because it is the only producer operator that reads data from external flat files and from the Named Pipes access module.
  • The Load operator, because it is the consumer operator that offers the best performance for high-speed writing of a large number of rows into an empty database table.
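
A minimal sketch of the DDL operator and a setup step, again with hypothetical names; error 3807 (object does not exist) is listed in ErrorList so the DROP does not fail the first time the job runs:

  DEFINE OPERATOR DDL_OPERATOR
  TYPE DDL
  ATTRIBUTES
  (
    VARCHAR TdpId        = @jobvar_tdpid,
    VARCHAR UserName     = @jobvar_username,
    VARCHAR UserPassword = @jobvar_password,
    VARCHAR ErrorList    = '3807'
  );

  STEP Setup_Tables
  (
    /* Drop any leftover table, then re-create it empty for the Load operator */
    APPLY
      ('DROP TABLE Emp_Table;'),
      ('CREATE TABLE Emp_Table (Emp_ID VARCHAR(10), Emp_Name VARCHAR(30));')
    TO OPERATOR (DDL_OPERATOR);
  );

A similar step at the end of the job can DROP tables that are no longer needed once the load step completes.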