Job Example 2: Perform INSERT, UPDATE, and DELETE in Multiple Tables - Parallel Transporter

Teradata® Parallel Transporter User Guide - 17.20

Product: Parallel Transporter
Release Number: 17.20
Published: June 2022
Language: English (United States)
Last Update: 2023-08-25
Document ID: B035-2445
Product Category: Teradata Tools and Utilities

Job Objective

Read data directly from non-database source files, or from an access module, and perform INSERT, DELETE, and UPDATE operations on multiple database tables. The loading part of this job is equivalent to the most common use of the Teradata MultiLoad utility.
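The overall flow can be sketched as a two-step TPT job script. This is a minimal sketch, not the shipped sample: it assumes the standard template operators ($DDL, $FILE_READER, $UPDATE), and the table, column, and schema names are invented for illustration (the PTS00004/PTS00005 scripts define their operators and DML explicitly).

```sql
DEFINE JOB UPSERT_MULTIPLE_TABLES
DESCRIPTION 'Read a flat file and apply DML to database tables'
(
  /* Step 1: the DDL operator drops and re-creates the target table. */
  STEP Setup_Tables
  (
    APPLY
      ('DROP TABLE Accounts;'),
      ('CREATE TABLE Accounts (AcctNo INTEGER, Balance DECIMAL(10,2));')
    TO OPERATOR ($DDL);
  );

  /* Step 2: the DataConnector producer ($FILE_READER) reads rows from
     the source file; the Update operator ($UPDATE) consumes them and
     applies the DML to the target table. */
  STEP Load_Tables
  (
    APPLY
      ('UPDATE Accounts SET Balance = :Balance WHERE AcctNo = :AcctNo;')
    TO OPERATOR ($UPDATE)
    SELECT * FROM OPERATOR ($FILE_READER);
  );
);
```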

Data Flow Diagrams

The following figures show diagrams of the job elements for the two variations of Job Example 2.

Job Example PTS00004 – Reading Data from a Flat File

Job Example PTS00005 – Reading Data from a Named Pipe

Sample Scripts

For the sample scripts that correspond to the two variations of this job, see the following scripts in the sample/userguide directory:
  • PTS00004: Reading Data Direct from Source Files and Performing an UPSERT on the Teradata Database Tables.
  • PTS00005: Reading Data from a Named Pipe and Performing an UPSERT on Multiple Teradata Database Tables.
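The two variations differ mainly in how the DataConnector producer is pointed at its source. For the named-pipe variation (PTS00005), the reader goes through the Named Pipes access module rather than reading a flat file directly. A hedged sketch of the relevant operator definition follows; the operator, schema, and pipe names are invented, while the attribute names are standard DataConnector attributes:

```sql
DEFINE OPERATOR PIPE_READER
TYPE DATACONNECTOR PRODUCER
SCHEMA ACCOUNT_SCHEMA              /* hypothetical schema name */
ATTRIBUTES
(
  VARCHAR FileName = 'acct_pipe',  /* hypothetical pipe name */
  VARCHAR Format = 'Delimited',
  VARCHAR OpenMode = 'Read',
  VARCHAR AccessModuleName = 'np_axsmod.so'  /* Named Pipes access module */
);
```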

Rationale

This job uses:
  • DDL operator, because it can DROP/CREATE target tables and DROP work tables.
  • DataConnector operator, because it is the only producer operator that reads data from non-database, non-ODBC data sources and from named pipes.
  • Update operator as the consumer operator, because it can perform INSERT, UPDATE, and DELETE operations on either new or preexisting database tables.
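As a concrete illustration of the Update operator's DML handling, an UPSERT can be expressed by pairing an UPDATE with an INSERT and marking the pair with INSERT FOR MISSING UPDATE ROWS: when the UPDATE finds no matching row, the INSERT is applied instead. This sketch assumes the $UPDATE and $FILE_READER template operators, and the table and column names are illustrative, not taken from the sample scripts:

```sql
APPLY
  ('UPDATE Accounts SET Balance = :Balance WHERE AcctNo = :AcctNo;',
   'INSERT INTO Accounts (AcctNo, Balance) VALUES (:AcctNo, :Balance);')
  INSERT FOR MISSING UPDATE ROWS
TO OPERATOR ($UPDATE)
SELECT * FROM OPERATOR ($FILE_READER);
```

A separate DML group of the same form would be added for each additional target table in the multi-table variation.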