Basic Processing - Parallel Transporter

Teradata® Parallel Transporter User Guide

Product
Parallel Transporter
Release Number
17.00
Published
August 31, 2020
Language
English (United States)
Last Update
2020-08-27
Document ID
B035-2445
Product Category
Teradata Tools and Utilities

Teradata PT can load data into, and export data from, any accessible object in a database or other data store, using Teradata PT operators or access modules.

Multiple targets are possible in a single Teradata PT job. A data target or destination for a Teradata PT job can be any of the following:
  • Databases (both relational and non-relational)
  • Database servers
  • Data storage devices
  • File objects, text files, and comma-separated values (CSV) files
    Full tape support is not available for any function in Teradata PT for workstation-attached client systems. To import or export data using a tape, a custom access module must be written to interface with the tape device. See the Teradata® Tools and Utilities Access Module Programmer Guide, B035-2424, for information about how to write a custom access module.
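As an illustration of these building blocks, a minimal job script that reads a delimited file with the DataConnector operator and loads it with the Load operator might look like the following sketch. All database names, credentials, file paths, and table names are illustrative:

```
DEFINE JOB load_customers
DESCRIPTION 'Load a delimited file into a target table'
(
    DEFINE SCHEMA customer_schema
    (
        cust_id   VARCHAR(10),
        cust_name VARCHAR(50)
    );

    /* Producer: reads the flat file and feeds the data stream */
    DEFINE OPERATOR file_reader
    TYPE DATACONNECTOR PRODUCER
    SCHEMA customer_schema
    ATTRIBUTES
    (
        VARCHAR DirectoryPath = '/data/in',       /* illustrative path */
        VARCHAR FileName      = 'customers.csv',
        VARCHAR Format        = 'Delimited',
        VARCHAR TextDelimiter = ','
    );

    /* Consumer: loads the data stream into the database */
    DEFINE OPERATOR load_op
    TYPE LOAD
    SCHEMA *
    ATTRIBUTES
    (
        VARCHAR TdpId        = 'mydbs',           /* illustrative system name */
        VARCHAR UserName     = 'dbuser',
        VARCHAR UserPassword = 'dbpass',
        VARCHAR TargetTable  = 'Customers',
        VARCHAR LogTable     = 'Customers_log',
        VARCHAR ErrorTable1  = 'Customers_e1',
        VARCHAR ErrorTable2  = 'Customers_e2'
    );

    /* Connect producer to consumer through a data stream */
    APPLY ('INSERT INTO Customers VALUES (:cust_id, :cust_name);')
    TO OPERATOR (load_op)
    SELECT * FROM OPERATOR (file_reader);
);
```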
When job scripts are submitted, Teradata PT can do the following:
  • Analyze the statements in the job script.
  • Initialize its internal components.
  • Create, optimize, and execute a parallel plan for completing the job by:
    • Creating instances of the required operator objects.
    • Creating a network of data streams that interconnect the operator instances.
    • Coordinating the execution of the operators.
  • Coordinate checkpoint and restart processing.
  • Restart the job automatically when the database signals a restart.
  • Terminate the processing environments.
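The checkpoint and automatic-restart behavior listed above is controlled when the script is submitted with the tbuild command. Assuming a script file named load_customers.tpt and a job name of your choosing (both illustrative), a submission with periodic checkpoints might look like:

```
# Submit the job; -f names the script file, -z sets the checkpoint
# interval in seconds (here, every 300 seconds).
tbuild -f load_customers.tpt -z 300 load_customers_job

# If the job fails partway through, resubmitting the same command with
# the same job name restarts it from the last checkpoint rather than
# from the beginning.
tbuild -f load_customers.tpt -z 300 load_customers_job
```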
Between the data source and destination, Teradata PT jobs can:
  • Retrieve, store, and transport specific data objects using parallel data streams.
  • Merge or split multiple parallel data streams.
  • Duplicate data streams for loading multiple targets.
  • Filter, condition, and cleanse data.
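As a sketch of how a producer's data stream can be duplicated to load multiple targets, the APPLY statement below names two consumer operators; operator and schema definitions are omitted, all names are illustrative, and the [2] suffix requests two parallel instances of each operator:

```
/* One APPLY statement with two TO OPERATOR clauses sends every row
   from the producer's data stream to both consumer operators. */
APPLY ('INSERT INTO Sales_Current VALUES (:txn_id, :amount);')
    TO OPERATOR (load_current[2]),
      ('INSERT INTO Sales_Archive VALUES (:txn_id, :amount);')
    TO OPERATOR (load_archive[2])
SELECT * FROM OPERATOR (export_sales[2]);
```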