TPump Tips - Parallel Data Pump

Teradata® Parallel Data Pump Reference

Product: Parallel Data Pump
Release Number: 17.00
Published: June 2020
Language: English (United States)
Last Update: 2020-06-19
Document ID: B035-3021
Product Category: Teradata Tools and Utilities
  • High pack factors can increase TPump throughput. When high pack factors cannot be used, increasing the number of sessions is another way to boost TPump throughput if the client can support them.
  • To reduce data load latency and improve real-time availability of single rows, reduce the pack factor or use the LATENCY option on BEGIN LOAD.
  • If input data contains errors, a low pack factor reduces the overhead of rolling back the request that contains the error and reprocessing the error-free rows.
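    All three of these knobs are set on BEGIN LOAD. A minimal sketch follows; the error-table name is a placeholder and the values are illustrative, not recommendations:

      .BEGIN LOAD
           SESSIONS 8                    /* add sessions when the client can support them */
           PACK 40                       /* high pack factor for throughput               */
           LATENCY 10                    /* or lower PACK to cut single-row latency       */
           ERRORTABLE sampdb.tpump_err;  /* placeholder error table                       */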
  • Speed up TPump startup by using persistent macros and specifying TPump’s recommended pack factor from a previous/similar run. Refer to the TPump EXECUTE command for more information on using predefined macros.
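    A sketch of the idea, assuming a pre-created macro and a macro database named here as placeholders; the MACRODB placement and EXECUTE form shown are an approximation, so check the EXECUTE command description for the exact syntax:

      .BEGIN LOAD SESSIONS 8 PACK 40
           MACRODB tpump_macros;                      /* database that holds the macros */
      .DML LABEL ins_cust;
      .EXECUTE tpump_macros.ins_cust_macro INSERT;    /* reuse the predefined macro     */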
  • When selecting the number of sessions, consider the total system load at the time the TPump job is run. When multiple TPump jobs are running, consider a number of sessions equal to the number of AMPs in the system, or less.
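    The AMP count is easy to check; this standard query returns it:

      SELECT HASHAMP() + 1;   /* HASHAMP() returns the highest AMP number, so +1 = AMP count */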
  • If multiple TPump jobs may update rows from the same table with the same primary index value, manually partition the data on the primary index of the table, so all rows with the same PI value are directed to the same TPump job. Then also specify SERIALIZE ON to force the rows with the same NUPI value to a single session within that TPump job, further reducing possible contention.
  • If the target table of inserts in the database participates in a join index, consider directing TPump to insert into a non-indexed staging table. An insert/select from this staging table into the base table at regular intervals is likely to be a better-performing approach to updating a table when a join index is involved. Prior to the insert/select, a UNION can be used to make sure data recently inserted into the staging table is included in query answer sets.
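    A sketch of the staging pattern, with placeholder table and view names:

      BT;   /* run the move and the cleanup as one transaction */
      INSERT INTO sampdb.sales_base
      SELECT * FROM sampdb.sales_stage;
      DELETE FROM sampdb.sales_stage;
      ET;

      /* queries read a view that UNIONs both tables, so rows still
         in the staging table show up in answer sets */
      REPLACE VIEW sampdb.sales_v AS
      SELECT * FROM sampdb.sales_base
      UNION
      SELECT * FROM sampdb.sales_stage;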
  • Assign the TPump user to a higher-priority performance group or workload in Teradata workload management when the TPump job runs at the same time as decision support queries and its completion time is more critical than the other work active on the system.
  • To ensure that TPump is able to perform single-AMP operations on each input record, include the entire primary index value for the row being updated among the columns passed to the database. With SERIALIZE ON in TPump, the user must properly identify the column(s) of the primary index to TPump via the KEY option on the FIELD command.
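    A sketch combining the two, with placeholder layout and column names; cust_id stands in for the table's entire primary index:

      .BEGIN LOAD SESSIONS 8 PACK 40
           SERIALIZE ON;                  /* rows with equal key values go to one session */
      .LAYOUT custlay;
      .FIELD cust_id   * INTEGER KEY;     /* primary index column flagged with KEY        */
      .FIELD cust_name * VARCHAR(30);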
  • Use the latest client versions of TPump, Teradata DataConnector, and Teradata CLIv2, and the latest release of Vantage.
  • Concurrency is key to TPump job throughput. The goal is to maximize concurrent DML operations expressed as (SESSIONS * PACK) while respecting other work running on the system at the same time.
  • With ARRAYSUPPORT turned off, the maximum concurrent AWT usage for a TPump job is typically 20 * the number of SESSIONS. Avoid situations where the sum of (20 * SESSIONS) for all active TPump jobs exceeds the number of AMPs on the system. The maximum number of AWTs used by one TPump pack (one multi-statement request) differs depending on whether the target table has fallback or USIs. There are four distinct cases, with the following maximum AWT usage:
    • Non-USI, non-fallback: 20
    • Non-USI, fallback: 20 to 40
    • USI, non-fallback: 1 to 2, up to a maximum of 21
    • USI, fallback: 1 to 4, up to a maximum of 23

    When the target table has a USI, the lower end of the range (1 to 2 or 1 to 4 AWTs used in parallel) is the most likely outcome and should generally be assumed. Expect significantly less parallelism in AWT terms when a USI is present: the insert steps are serialized to perform USI maintenance so that uniqueness violations are reproducible.

  • With ARRAYSUPPORT turned on, avoid situations where the sum of (SESSIONS * PACK) for all active TPump jobs exceeds the number of AMPs on the system.
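    For illustration, on a hypothetical 400-AMP system: with ARRAYSUPPORT off, two concurrent jobs at SESSIONS 10 each can reach 2 * (20 * 10) = 400 AWTs, exactly the AMP count; with ARRAYSUPPORT on, one job at SESSIONS 10 and PACK 40 reaches 10 * 40 = 400, again the limit.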
  • Table row sizes and block sizes affect TPump throughput. Large rows reduce the number of rows per block and require additional I/O to process a given number of rows. Smaller table block sizes improve performance: smaller blocks are more readily cached in FSG cache and require less overall physical I/O.
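    Block size can be set per table at creation; the table, columns, and size below are placeholders, not recommendations:

      CREATE TABLE sampdb.sales_stage ,NO FALLBACK
           ,DATABLOCKSIZE = 64 KBYTES    /* smaller blocks cache more readily in FSG */
        (sale_id  INTEGER
        ,sale_amt DECIMAL(10,2))
      PRIMARY INDEX (sale_id);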
  • Monitor TPump job statements with DBQL for conditions such as varying step times and high parsing overhead; these conditions negatively affect throughput.
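    A starting point for that monitoring, assuming DBQL query logging is enabled and using a placeholder user name:

      SELECT StartTime, FirstStepTime, FirstRespTime,
             ParserCPUTime, AMPCPUTime
      FROM   DBC.QryLogV
      WHERE  UserName = 'TPUMP_USER'     /* placeholder TPump logon user */
      ORDER  BY StartTime;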