Job Example 10: Loading Hadoop Files Using the HDFS API Interface

Teradata® Parallel Transporter User Guide

Product: Parallel Transporter
Release Number: 17.00
Published: August 31, 2020
Language: English (United States)
Last Update: 2020-08-27
Document ID: B035-2445
Product Category: Teradata Tools and Utilities

Job Objective

The Teradata Parallel Transporter sample script loads five rows from a flat file located in Hadoop HDFS into a database table.
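As a point of reference, the five rows could come from a pipe-delimited flat file along the following lines. The values and column layout shown here are illustrative only; the actual sample data ships with the product.

    1|John|Rogers|Engineering
    2|Mary|Lopez|Finance
    3|Anita|Shah|Marketing
    4|Peter|Koh|Support
    5|Dawn|Carter|Operations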

Data Flow Diagram

The following figure shows a flow diagram of the elements in Job Example 10.

Job Example PTS00029 – Read HDFS and Load into the Database

Sample Script

For the sample script that corresponds to this job, see the following in the sample/userguide directory:

PTS00029: Read HDFS flat file.
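The shipped script is the authoritative version. As a rough sketch of its shape, a template-based job of this kind follows the pattern below; the job name and description are placeholders:

    DEFINE JOB LOAD_HDFS_FILE_TO_TABLE
    DESCRIPTION 'Load rows from an HDFS flat file into a database table'
    (
      /* $FILE_READER expands to a DataConnector producer and $LOAD to a
         Load consumer; $INSERT generates the INSERT statement for the
         target table. No DEFINE OPERATOR sections are required because
         the templates supply the operator definitions. */
      APPLY $INSERT TO OPERATOR ($LOAD)
      SELECT * FROM OPERATOR ($FILE_READER);
    );

Because both operators come from templates, their attributes are supplied entirely through job variables at run time, for example: tbuild -f PTS00029 -v jobvars.txt (the job variable file name here is a placeholder).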

Rationale

This job uses:
  • The DataConnector operator template as the producer because it can read files in the HDFS file system, taking its settings from the job variable file without requiring an explicit operator definition (see the sketch after this list).
  • The Load operator template as the consumer because, among the consumer operators, it offers the best performance for high-speed writing of a large number of rows into an empty database table.
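To illustrate how the templates pick up their settings, a job variable file for a job like this might look as follows. The Target* and FileReader* names follow the conventions the shipped templates use for the Load and DataConnector operators; the system, credential, table, path, and file values are placeholders, and FileReaderHadoopHost is an assumption based on the DataConnector operator's HadoopHost attribute for the HDFS API interface, where 'default' selects the cluster's default name node.

    /* Target system and table: placeholder values */
    TargetTdpId              = 'mydbsystem'
    ,TargetUserName          = 'myuser'
    ,TargetUserPassword      = 'mypassword'
    ,TargetTable             = 'HDFS_EMP_TABLE'

    /* DataConnector producer settings */
    ,FileReaderFormat        = 'Delimited'
    ,FileReaderTextDelimiter = '|'
    ,FileReaderOpenMode      = 'Read'
    ,FileReaderDirectoryPath = '/user/hadoop/tptdata'
    ,FileReaderFileName      = 'sourcedata.txt'
    ,FileReaderHadoopHost    = 'default'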