NAME - Parallel Data Pump

Teradata Parallel Data Pump Reference
Release 15.10

NAME

Purpose  

The NAME command assigns a unique job name to the environmental variable &SYSJOBNAME.

Syntax

  NAME jobname ;

where

 

Syntax Element   Description

jobname          Character string that identifies the name of a job, with a maximum of 16 characters.

If this command is not specified, the default job name ltdbase_logtable is used, where:

  • ltdbase is a character string consisting of up to the first seven characters of the name of the database that contains the log table.
  • logtable is a character string consisting of the first eight characters of the log table name.
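
For example, with a hypothetical database named Payroll2 and a log table named UpdLog2018, the default job name would be Payroll_UpdLog20: the first seven characters of the database name, an underscore, and the first eight characters of the log table name.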
Usage Notes

The NAME command sets the job name and the environmental variable &SYSJOBNAME to the specified string, which is truncated to 16 characters if it is longer. This environmental command must be used only once; it is an error to execute NAME more than once in a Teradata TPump script, or after the first BEGIN LOAD command in the script, and any further attempt to execute it fails.
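
For illustration only, the following fragment sketches where NAME typically appears in a Teradata TPump script: once, after logon and before the first BEGIN LOAD. The TDPID, credentials, and object names are hypothetical placeholders, not values from this manual.

  .LOGTABLE Payroll2.UpdLog2018;       /* restart log table (hypothetical)  */
  .LOGON tdpid/tpump_user,password;    /* placeholder logon string          */
  .NAME PAYROLL_UPD;                   /* sets &SYSJOBNAME; legal only once */
  .BEGIN LOAD SESSIONS 4;
     /* LAYOUT, DML, and IMPORT commands for the job go here */
  .END LOAD;
  .LOGOFF;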

If &SYSJOBNAME is not set using the NAME command, it defaults to MYYYYMMDD_HHMMSS_LLLLL, where:

  • M = macro (a literal M)
  • YYYYMMDD = the year, month, and day
  • HHMMSS = the hour, minute, and second
  • LLLLL = the low-order 5 digits of the logon sequence number returned by the database from the .LOGON command
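
For example, a job whose .LOGON completed at 14:32:05 on 2018-10-07 with a logon sequence number ending in 00042 (both values hypothetical) would receive the default name M20181007_143205_00042.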

This variable is not set until it is created by the NAME command or, by default, by the first BEGIN LOAD command. Any attempt to use it before a NAME command is issued (or before the first BEGIN LOAD command if there is no NAME command) results in a syntax error. This variable is significant because Teradata TPump uses it when composing default names for various database objects, namely the error table and the Teradata TPump-created macros.
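
As a sketch of why this ordering matters, the fragment below sets the job name and then references &SYSJOBNAME; it assumes the DISPLAY command and ampersand-variable substitution (with a period delimiting the variable name), and every name in it is hypothetical. Reversing the first two commands would produce a syntax error, because the variable would not yet exist.

  .NAME NIGHTLY_UPD;                          /* &SYSJOBNAME = NIGHTLY_UPD       */
  .DISPLAY 'Starting job &SYSJOBNAME' TO FILE banner.txt;
  .BEGIN LOAD SESSIONS 4
         ERRORTABLE ErrDb.&SYSJOBNAME._ET;   /* expands to ErrDb.NIGHTLY_UPD_ET */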

Note: If serialization is required for two or more DML statements, they cannot be placed in different partitions. Serialization requires that all DML statements affecting rows with identical hash values be submitted from the same session.
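
The constraint can be sketched as follows, assuming the SERIALIZE option of BEGIN LOAD and the PARTITION option of the DML command; the table, label, and partition names are hypothetical. Because both statements affect rows with the same hash (the same AcctNo values), they share one partition so that a single session handles them in order.

  .BEGIN LOAD SESSIONS 8 SERIALIZE ON;
  .DML LABEL UPDACCT PARTITION ACCTPART;   /* same partition as INSACCT   */
  UPDATE Accounts SET Balance = :Bal WHERE AcctNo = :AcctNo;
  .DML LABEL INSACCT PARTITION ACCTPART;   /* hash-identical rows stay on
                                              the same session            */
  INSERT INTO Accounts (AcctNo, Balance) VALUES (:AcctNo, :Bal);

A full script would also typically mark AcctNo with the KEY option in the layout's FIELD commands, which serialization relies on; that detail is omitted here.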