Required and Optional Attributes - Parallel Transporter

Teradata Parallel Transporter Reference

Product
Parallel Transporter
Release Number
16.10
Published
July 2017
Language
English (United States)
Last Update
2018-06-28
Product Category
Teradata Tools and Utilities

Use the attribute definition list syntax in the Teradata PT DEFINE OPERATOR statement to declare the required and optional attribute values for the DataConnector operator.

Parallel processing of multiple files is permitted. To run multiple instances of the producer DataConnector operator, specify a base directory in the DirectoryPath attribute and a wildcard in the FileName attribute to select the series of files to be read.

Specifying any attribute that begins with 'Hadoop' causes the DataConnector operator to process Hadoop files, directories, and tables rather than files and directories in the local file system. For more information, see Processing Hadoop Files and Tables.
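The following is a minimal sketch of the attribute definition list in a DEFINE OPERATOR statement; the operator name, schema name, and attribute values are illustrative only:

   DEFINE OPERATOR FILE_READER
   DESCRIPTION 'DataConnector producer'
   TYPE DATACONNECTOR PRODUCER
   SCHEMA PRODUCT_SOURCE_SCHEMA
   ATTRIBUTES
   (
      VARCHAR DirectoryPath = '/data/input',
      VARCHAR FileName = '*.dat',
      VARCHAR Format = 'Delimited',
      VARCHAR TextDelimiter = '|',
      VARCHAR OpenMode = 'Read'
   );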

where:

DataConnector Attribute Descriptions 
Syntax Element Description
AcceptExcessColumns = ‘option’ Optional attribute that specifies whether or not rows with extra columns are acceptable.

Valid values are:

  • 'Y[es]' = rows with extra columns are truncated to the number of columns defined in the schema, and then they are sent downstream.

    The edited record is sent to the Teradata Database and the original record is saved in the record error file.

  • 'N[o]' = AcceptExcessColumns is not invoked (default).
  • ‘YesWithoutLog’ = the edited row is sent to the Teradata Database, but the original record is not saved in the record error file.
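For example, to truncate rows that arrive with extra columns, send them to the Teradata Database, and log the originals (this sketch assumes RecordErrorFileName is also defined):

   VARCHAR AcceptExcessColumns = 'Y'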
AcceptMissingColumns = ‘option’ Optional attribute that determines how rows in which the column count is less than defined in the schema are treated.

Valid values are:

  • 'Y[es]' = the row is extended to the correct number of columns. Each appended column is a zero-length column and is processed according to the value of the NullColumns attribute. The edited record is sent to the Teradata Database and the original record is saved in the record error file.
  • 'N[o]' = AcceptMissingColumns is not invoked (default).
  • ‘YesWithoutLog’ = the edited row is sent to the Teradata Database, but the original record is not saved in the record error file.
AccessModuleInitStr = 'initString' Optional attribute that specifies the initialization string for the specified access module.

For the initString values, see the Initialization String section for each module in the Teradata Tools and Utilities Access Module Reference (B035-2425).

AccessModuleName = 'name' Optional attribute that specifies the name of the access module file, where the value for name is dependent on the following:

Teradata Access Module for Amazon S3 for Teradata Parallel Transporter

  • libs3axsmod.so on Linux platform

Teradata Access Module for Named Pipes for Teradata Parallel Transporter

  • libnp_axsmod.dylib on the Apple OS X platform
  • np_axsmod.so on all other UNIX platforms
  • np_axsmod.dll on Windows platforms

Teradata Access Module for WebSphere MQ for Teradata Parallel Transporter (client version)

  • libmqsc.dylib on the Apple OS X platform
  • libmqsc.so on all other UNIX platforms
  • libmqsc.dll on Windows platforms

Teradata Access Module for WebSphere MQ for Teradata Parallel Transporter (server version)

  • libmqs.dylib on the Apple OS X platform
  • libmqs.so on all other UNIX platforms
  • libmqs.dll on Windows platforms

Teradata Access Module for OLE DB for Teradata Parallel Transporter

  • oledb_axsmod.dll on Windows platforms

Teradata Access Module for Kafka for Teradata Parallel Transporter

  • libkafkaaxsmod.so on Linux platform

Teradata Access Module for Azure for Teradata Parallel Transporter

  • libazureaxsmod.so on Linux platform

Custom Access Modules

Use your shared library file name if you use a custom access module.

Access module names do not need a suffix since the operator appends the correct suffix for the platform used.

Large File Access Module is no longer available because the DataConnector operator now supports file sizes greater than 2 gigabytes on Windows, HP-UX, AIX, and Solaris running on SPARC systems when system parameters are appropriately set.

Teradata PT supports the standalone version of the Teradata Access Module for Named Pipes and the Teradata Access Module for WebSphere MQ.
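For example, a producer reading from a named pipe through the standalone Named Pipes access module on Linux might specify the following. The initialization string is illustrative; see the Access Module Reference for the options each module accepts:

   VARCHAR AccessModuleName = 'np_axsmod.so',
   VARCHAR AccessModuleInitStr = 'ld=.'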

AppendDelimiter = 'option' Optional attribute that adds a delimiter at the end of every record written. Use AppendDelimiter when creating delimited output files.

When the last column in the record is NULL, a trailing delimiter denotes that the column is NULL.

Valid values are:

  • 'Y[es]' = Adds a delimiter at the end of every record written.
  • 'N[o]' = Does not add a delimiter at the end of every record written (default).
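For example, a consumer writing delimited output files with a trailing delimiter on every record might specify (a minimal sketch):

   VARCHAR Format = 'Delimited',
   VARCHAR TextDelimiter = '|',
   VARCHAR AppendDelimiter = 'Y'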
ArchiveDirectoryPath = ‘pathName’ Defines the complete pathname of a directory to which all processed files are moved from the current directory (specified with the DirectoryPath attribute).

This attribute is required when specifying a value for the VigilMaxFiles attribute.

When multiple instances of the DataConnector Consumer are requested, the output file names are appended with a sequence number. After each checkpoint is completed, the current output file for each instance is closed and archived and a new file is opened for each instance with the instance number and incremented sequence number appended.
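For example, an active directory scan that moves each processed file out of the scanned directory might specify (the paths are illustrative):

   VARCHAR DirectoryPath = '/data/incoming',
   VARCHAR ArchiveDirectoryPath = '/data/archive'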

ArchiveFatal = ‘option’ Defines what action to take if an archive (file move) fails.

Valid values are:

  • 'Y[es]' = the job terminates (default).
  • 'N[o]' = processing continues with a warning.
CloseQuoteMark = 'character' Optional attribute that defines the closing quote mark character.

It may be any single-byte or multibyte value from the session character set; for example, '"' or '||'.

The default value is the value provided for the OpenQuoteMark attribute.

DirectoryPath = 'pathName' Optional attribute that supports the FileName attribute wildcard feature.

Use this attribute to specify an existing base directory path (or z/OS PDS dataset name) for the location of the file (or PDS members) indicated by the FileName attribute. This attribute cannot be used if a z/OS data set (DD:DATA) is specified in the FileName attribute.

To specify a z/OS PDS data set with a JCL DD statement, prefix the DirectoryPath attribute value with 'DD:' as shown in the following example:

DirectoryPath='DD:<ddname>'

To specify the z/OS PDS data set directly, use the following syntax:

DirectoryPath = '//''dataset-name'''

This attribute defaults to the directory in which the job is executing (the job working directory specified in the DEFINE JOB statement).

If the directory syntax is included in the FileName attribute, then the DirectoryPath attribute is expected to be empty.

If the DataConnector is a consumer instance, the DirectoryPath attribute is also expected to be empty.

If the DataConnector is a producer instance, the DirectoryPath specification is prepended to the file name only if no directory names appear within the FileName attribute.

EnableScan = ‘mode’ Optional attribute that bypasses the directory scan logic when using access modules.
  • ‘Y[es]’ = operator retains its original behavior, which is to automatically scan directories (default).
  • ‘N[o]’ = operator bypasses the directory scan feature and passes directly to the access module only the file specified in the FileName attribute.

If this attribute is set to ‘No’ while a wildcard character is specified in the FileName attribute, a warning message is generated in the DataConnector log.

ErrorLimit = errorLimit Optional attribute that specifies the approximate number of records that can be stored in the error row file before the DataConnector operator job is terminated.

Valid values are 0 through 2147483647. The default is 0 (unlimited); omitting ErrorLimit is the same as specifying an ErrorLimit value of 0. The ErrorLimit specification applies to each instance of the DataConnector operator.

When the RecordErrorFileName attribute is defined (the attribute was previously known as RowErrFileName, which will be obsolete in the next release), error records are saved in the specified file and the job continues to process additional records without exiting with a fatal error.

When ErrorLimit is also declared, the error processing described above continues until the specified error limit is reached, and then the job exits with a fatal error.

For information about the effects of the ErrorLimit attribute, see the Teradata Parallel Transporter User Guide (B035-2445).

For a list of obsolete syntax, which is supported but no longer documented, see Deprecated Syntax.
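For example, to save error records to a file and allow up to 100 of them per instance before the job exits with a fatal error (the file path is illustrative):

   VARCHAR RecordErrorFileName = '/tmp/dcjob.errors',
   INTEGER ErrorLimit = 100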
EscapeQuoteDelimiter = 'character' Optional attribute that allows you to define the escape quote character within delimited data. The default value is 'close-quote'. See Rules for Quoted Delimited Data Handling.

When processing data in delimited format, if the EscapeQuoteDelimiter precedes either the OpenQuoteMark or the CloseQuoteMark, that instance of the quote mark (either open or close) is included in the data rather than marking the beginning or end of a quoted string.

EscapeTextDelimiter = 'character' Optional attribute that allows you to define the delimiter escape character within delimited data. There is no default value.

When processing data in delimited format, if the escape sequence defined by EscapeTextDelimiter precedes the delimiter, that instance of the delimiter is included in the data rather than marking the end of the column. If the escape defined by EscapeTextDelimiter is not immediately followed by the delimiter character, the data is considered to be ordinary and no further processing is performed.

For example, if the default delimiter is the pipe ( | ) and the EscapeTextDelimiter is the backslash, then column data input of abc\|def| would be loaded as abc|def.
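A minimal sketch of that example's settings follows; the backslash between the quotes is the escape character itself:

   VARCHAR TextDelimiter = '|',
   VARCHAR EscapeTextDelimiter = '\'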

FileList = 'option' Optional attribute used in conjunction with FileName.

Valid values are:

'Y[es]'= the file specified by FileName contains a list of files to be processed.

'N[o]' = the file specified by FileName does not contain a list of files to be processed.

   VARCHAR FileList = 'Y'
DataConnector operator supports a FileList file encoded in ASCII on network-attached platforms and EBCDIC on mainframe-attached platforms.
FileName = 'fileName' Required attribute that specifies the name of the file to be processed.

In some cases, the access module specified using the AccessModuleName attribute may not use or recognize file names and, therefore, may not require specification of a value for the FileName attribute. For example, the Teradata Access Module for IBM Websphere MQ does not require a file name specification.

When used with the FileList attribute, fileName is expected to contain a list of names of the files to be processed, each with a full path specification. In this case, wildcard characters are not supported for either the FileName attribute or the filenames it contains. Multiple instances of the operator can be used to process the list of files in parallel.

On Windows platforms, using the wildcard character (*) in the 'filename' operator attribute may inadvertently include more files than you desire. For example, if you specify *.dat, a directory scan of the folder will find files as if you had specified *.dat*; for example, files with the extensions .data, .date, and .dat071503 will also be found. Therefore, you may need to first remove extraneous files from your folder.

Reading and writing of a GZIP compressed file is supported on all OS platforms. The support for this is enabled automatically based on the file extension. The standard file name extension for gzip files is "*.gz".

Reading and writing of a ZIP compressed file is supported on Windows and UNIX, but not on IBM z/OS. The support for this is enabled automatically based on the file extension. The standard file name extension for zip files is "*.zip".

Only single files are supported with the ZIP format for both reading and writing.

Reading and writing of GZIP and ZIP files is not supported when using Hadoop/HDFS.

For additional z/OS dataset syntax, see the table Valid Filename Syntax.
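For example, to read every .dat file in a directory, possibly in parallel across multiple producer instances (the path is illustrative):

   VARCHAR DirectoryPath = '/data/input',
   VARCHAR FileName = '*.dat'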

Format = 'format' Required attribute that specifies the logical record format of the data. No system default exists.

Format can have any of the following values:

  • 'Binary' = 2-byte integer, n, followed by n bytes of data. This data format requires rows to be 64KB (64260 data bytes) or smaller. In this format:

    The data is prefixed by a record-length marker.

    The record-length marker does not include the length of the marker itself.

    The record-length is not part of the transmitted data.

  • 'Binary4' = 4-byte integer, followed by n bytes of data. This data format supports rows up to 1MB (1000000 data bytes) in size. In this format:

    The data is prefixed by a record-length marker.

    The record-length marker does not include the length of the marker itself.

    The record-length is not part of the transmitted data.

  • 'Delimited' = in text format with each field separated by a delimiter character. When you specify Delimited format, you can use the optional TextDelimiter attribute to specify the delimiter character. The default is the pipe character ( | ).
    When the format attribute of the DataConnector Producer is set to 'delimited', the associated Teradata PT schema object must be comprised of only VARCHAR and/or VARDATE columns. Specifying non-VARCHAR or non-VARDATE columns results in an error.
  • 'Formatted' = both prefixed by a record-length marker and followed by an end-of-record marker. This data format requires rows to be 64KB (64260 data bytes) or smaller. In this format:

    The record-length marker does not include the length of the marker itself.

    Neither the record-length nor the end-of-record marker is part of the transmitted data.

  • 'Formatted4' = both prefixed by a 4-byte record-length marker and followed by an end-of-record marker. This data format supports rows up to 1MB (1000000 data bytes) in size. In this format:

    The record-length marker does not include the length of the marker itself.

    Neither the record-length nor the end-of-record marker is part of the transmitted data.

  • 'Text' = character data separated by an end-of-record (EOR) marker. The EOR marker can be either a single-byte linefeed (X'0A') or a double-byte carriage-return/line-feed pair (X'0D0A'), as defined by the first EOR marker encountered for the first record. This format restricts column data types to CHAR or ANSIDATE only.
  • 'Unformatted' = not formatted. Unformatted data has no record or field delimiters, and is entirely described by the specified Teradata PT schema.
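For example, a 'Delimited' producer must pair the format with an all-VARCHAR (or VARDATE) schema; the schema and column names here are illustrative:

   DEFINE SCHEMA PRODUCT_SOURCE_SCHEMA
   (
      item_id   VARCHAR(10),
      item_desc VARCHAR(50)
   );
   ...
   VARCHAR Format = 'Delimited',
   VARCHAR TextDelimiter = '|'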
HadoopBlockSize = blockSize Optional attribute that specifies the size of the block/buffer, in 1K-byte increments, when writing Hadoop/HDFS files. The HadoopBlockSize value can be defined anywhere from 1 to x 1K bytes, where x is arbitrary. The typical default Hadoop/HDFS cluster block size is 64MB, which is also what TPT uses (65536 * 1024 = 64MB).

Before using this attribute to change the default, consult your system administrator. This value affects memory consumption (internal buffer allocated at runtime is twice this size), and should not be changed indiscriminately.

Valid values are:

  • 1 - 2147483647
  • 0 = default

The default value is 65536.
HadoopFileFormat = 'hadoopFileFormat' Optional attribute that specifies the format of the file that the TDCH job should process. For more information about the DataConnector's Hadoop interfaces and the Teradata Connector for Hadoop tutorial for supported and default values, see Processing Hadoop Files and Tables.
HadoopHost= 'hadoopHostName' Optional attribute that specifies the host name or IP address of the NameNode in a Hadoop cluster.

When launching a TDCH job, this value should be the host name or IP address of the node in the Hadoop cluster on which the TPT job is being run. This host name or IP address should be reachable by all DataNodes in the Hadoop cluster. For more information about the DataConnector's Hadoop interfaces, see Processing Hadoop Files and Tables.

When launching a HDFS API job this value indicates the cluster where the HDFS operation will be performed and can be set as follows:

“default” = The default name-node declared in the Hadoop HDFS configuration file.

<host-name>:<port> = The host-name/ip-address and port of the name-node on the cluster where the HDFS operation is to be performed. The “:<port>” value is optional.

HadoopJobType= 'hadoopJobType' Optional attribute that specifies the type of TDCH job to launch. For more information about the DataConnector's Hadoop interfaces and the Teradata Connector for Hadoop tutorial for supported and default values, see Processing Hadoop Files and Tables.
HadoopNumMappers= 'hadoopNumMappers' Optional attribute that specifies the number of mappers that the TDCH will launch. For more information about the DataConnector's Hadoop interfaces and the Teradata Connector for Hadoop tutorial for supported and default values, see Processing Hadoop Files and Tables.
HadoopSeparator= 'hadoopSeparator' Optional attribute that specifies the character(s) that separate fields in the file processed by the TDCH job. This attribute is only valid when 'HadoopFileFormat' is set to 'textfile', which is the attribute's default value. For more information about the DataConnector's Hadoop interfaces and the Teradata Connector for Hadoop tutorial for supported and default values, see Processing Hadoop Files and Tables.
HadoopSourceDatabase='hadoopSourceDatabase' Optional attribute that specifies the name of the source database in Hive or Hcatalog from which data is exported. For more information about the DataConnector's Hadoop interfaces and the Teradata Connector for Hadoop tutorial for supported and default values, see Processing Hadoop Files and Tables.
HadoopSourceFieldNames = 'hadoopSourceFieldNames' Optional attribute that specifies the names of the fields to export from the source HDFS files, or from the source Hive and HCatalog tables, in comma separated format. For more information about the DataConnector's Hadoop interfaces and the Teradata Connector for Hadoop tutorial for supported and default values, see Processing Hadoop Files and Tables.
HadoopSourcePartitionSchema= 'hadoopSourcePartitionSchema' Optional attribute that specifies the full partition schema of the source table in Hive, in comma separated format. This attribute is only valid when 'HadoopJobType' is set to 'hive'. For more information about the DataConnector's Hadoop interfaces and the Teradata Connector for Hadoop tutorial for supported and default values, see Processing Hadoop Files and Tables.
HadoopSourcePaths= 'hadoopSourcePaths' Optional attribute that specifies the directory of the to-be-exported source files in HDFS. This attribute is required when 'HadoopJobType' is set to 'hdfs', optional when 'HadoopJobType' is set to 'hive', and invalid when 'HadoopJobType' is set to 'hcat'. For more information about the DataConnector's Hadoop interfaces and the Teradata Connector for Hadoop tutorial for supported and default values, see Processing Hadoop Files and Tables.
HadoopSourceTable = 'hadoopSourceTable' Optional attribute that specifies the name of the source table in Hive or Hcatalog from which data is exported. This attribute is required when 'HadoopJobType' is set to 'hcat', optional when 'HadoopJobType' is set to 'hive', and invalid when 'HadoopJobType' is set to 'hdfs'. For more information about the DataConnector's Hadoop interfaces and the Teradata Connector for Hadoop tutorial for supported and default values, see Processing Hadoop Files and Tables.
HadoopSourceTableSchema = 'hadoopSourceTableSchema' Optional attribute that specifies the full column schema of the source table in Hive or Hcatalog, in comma separated format. For more information about the DataConnector's Hadoop interfaces and the Teradata Connector for Hadoop tutorial for supported and default values, see Processing Hadoop Files and Tables.
HadoopTargetDatabase = 'hadoopTargetDatabase' Optional attribute that specifies the name of the target database in Hive or Hcatalog to which data is imported. It is optional with a 'hive' or 'hcat' job and not valid with an 'hdfs' job. For more information about the DataConnector's Hadoop interfaces and the Teradata Connector for Hadoop tutorial for supported and default values, see Processing Hadoop Files and Tables.
HadoopTargetFieldNames = 'hadoopTargetFieldNames' Optional attribute that specifies the names of the fields to write to the target file in HDFS, or to the target Hive and HCatalog table, in comma separated format. For more information about the DataConnector's Hadoop interfaces and the Teradata Connector for Hadoop tutorial for supported and default values, see Processing Hadoop Files and Tables.
HadoopTargetPartitionSchema = 'hadoopTargetPartitionSchema' Optional attribute that specifies the full partition schema of the target table in Hive, in comma separated format. This attribute is only valid when 'HadoopJobType' is set to 'hive'. For more information about the DataConnector's Hadoop interfaces and the Teradata Connector for Hadoop tutorial for supported and default values, see Processing Hadoop Files and Tables.
HadoopTargetPaths = 'hadoopTargetPaths' Optional attribute that specifies the directory of the to-be-imported target files in HDFS. This attribute is required when 'HadoopJobType' is set to 'hdfs', optional when 'HadoopJobType' is set to 'hive', and invalid when 'HadoopJobType' is set to 'hcat'. For more information about the DataConnector's Hadoop interfaces and the Teradata Connector for Hadoop tutorial for supported and default values, see Processing Hadoop Files and Tables.
HadoopTargetTable = 'hadoopTargetTable' Optional attribute that specifies the name of the target table in Hive or Hcatalog where data will be imported. This attribute is required when 'HadoopJobType' is set to 'hcat', optional when 'HadoopJobType' is set to 'hive', and invalid when 'HadoopJobType' is set to 'hdfs'. For more information about the DataConnector's Hadoop interfaces and the Teradata Connector for Hadoop tutorial for supported and default values, see Processing Hadoop Files and Tables.
HadoopTargetTableSchema = 'hadoopTargetTableSchema' Optional attribute that specifies the full column schema of the target table in Hive or Hcatalog, in comma separated format. For more information about the DataConnector's Hadoop interfaces and the Teradata Connector for Hadoop tutorial for supported and default values, see Processing Hadoop Files and Tables.
HadoopUser= 'hadoopUser' Optional attribute that specifies the name of the Hadoop user to utilize when reading and writing files via the HDFSAPI interface. The currently logged-in user-name where the TPT HDFS job is running is used when this attribute is not specified. For more information about the DataConnector's Hadoop interfaces, see Processing Hadoop Files and Tables.
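For example, a producer that exports a Hive table through a TDCH job might combine these attributes as follows; the host, database, and table names are illustrative:

   VARCHAR HadoopHost = 'namenode.example.com',
   VARCHAR HadoopJobType = 'hive',
   VARCHAR HadoopNumMappers = '4',
   VARCHAR HadoopSourceDatabase = 'sales',
   VARCHAR HadoopSourceTable = 'orders'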
IndicatorMode = 'mode' Optional attribute that specifies whether the number of indicator bytes is included at the beginning of each record.
  • 'Y[es]' = indicator mode data. This value is not valid for the ‘text’ or ‘delimited’ record formats.
  • 'N[o]' = nonindicator mode data (default).
MaxColumnCountErrs = numberOfErrors Optional attribute that specifies the maximum number of column count errors to be written to the private log.

Valid values: 1 through 99999

If the number of column count errors encountered reaches the value specified, a message is issued to both the private and public logs that no additional errors will be written to these logs.

The total number of these error rows written to the private log is shown in the private log at termination.

NamedPipeTimeOut = seconds Optional attribute that enables checking of named pipes (FIFOs). If seconds is set to a positive number, the DataConnector operator will check the pipe for data every second until either data becomes available, or the amount of time specified is reached and the job terminates. If the attribute is not specified, no checking of pipes will be performed. This will yield faster performance, but may also result in a hung job if data is not available in the pipe when it is read.

This attribute is only for jobs that use the DataConnector operator to read pipes directly. It is not used when the Named Pipe Access Module (NPAM) performs the pipe I/O.

IOBufferSize = bytes Optional attribute that specifies the size of the buffer, in bytes, required to handle the largest record expected. (The internal buffer allocated at runtime is twice this size.)

The IOBufferSize value can be defined anywhere from 1 to n bytes, where n is arbitrary. However, defining an excessive buffer size can lead to memory allocation problems.

On UNIX, Linux, and Windows systems, the maximum size that can be defined is available memory or 2147483639 bytes, whichever is less.

The maximum on MVS systems is 16777215 bytes.

The default is 131072 bytes (128K).

If the MultipleReaders (see below) feature is invoked, then the default is 1048575 bytes.

MultipleReaders = 'option' Optional attribute that, when set to 'Yes', instructs the Data Connector producer operator that more than one instance can be used to read a single file in parallel.
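For example, to let several instances read a single large file in parallel (a minimal sketch):

   VARCHAR MultipleReaders = 'Yes'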
NotifyExit = 'inmodName' Optional attribute that specifies the name of the user-defined notify exit routine with an entry point named _dynamn. If no value is supplied, the following default name is used:
  • libnotfyext.dll for Windows platforms
  • libnotfyext.dylib for Apple OS X platform
  • libnotfyext.so for all other UNIX platforms
  • NOTFYEXT for z/OS platforms

See Deprecated Syntax for information about providing your own notify exit routine.

NotifyLevel = 'notifyLevel' Optional attribute that specifies the level at which certain events are reported.

Valid values are:

  • 'Off' = no notification of events is provided (default).
  • 'Low' = 'Yes' in the Low Notification Level column.
  • 'Med' = 'Yes' in the Medium Notification Level column.
  • 'High' = 'Yes' in the High Notification Level column.
NotifyMethod = 'notifyMethod' Optional attribute that specifies the method for reporting events. The methods are:
  • 'None' = no event logging is done (default).
  • 'Msg' = sends the events to a log.
  • 'Exit' = sends the events to a user-defined notify exit routine.
NotifyString = 'notifyString' Optional attribute that specifies a user-defined string to precede all messages sent to the system log. This string is also sent to the user-defined notify exit routine. The maximum length of the string is:
  • 80 bytes, if NotifyMethod is 'Exit'
  • 16 bytes, if NotifyMethod is 'Msg'
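For example, to send medium-level events to the system log with an identifying prefix (the string value is illustrative and must be 16 bytes or less for the 'Msg' method):

   VARCHAR NotifyMethod = 'Msg',
   VARCHAR NotifyLevel = 'Med',
   VARCHAR NotifyString = 'DCJOB1'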
NullColumns = 'option' Determines the contents of columns created by the job due to delimited data that does not specify all columns required by the schema.

To utilize this attribute, AcceptMissingColumns must be 'Y[es]' or 'YesWithoutLog' and QuotedData must be 'Y[es]' or 'O[ptional]'.

Valid values are:

  • 'Y[es]' = New job-created columns will be NULL (default).
  • 'N[o]' = New job-created columns will contain the empty string "".

For these examples, the delimiter character is the default | character, QuotedData is enabled and AcceptMissingColumns is 'Y'. The example schema is:

...
(VARCHAR(5), VARCHAR(5), VARCHAR(5), VARCHAR(5), VARCHAR(5), VARCHAR(5))
...

The first example data record is:

"abc"|""||"def"

The schema requires 6 fields but the record only provides 4.

Fields 1, 2, and 4 contain the strings "abc", "", and "def".

Note that "" is not NULL. Rather, it is a character string of zero length. It is handled in the same manner as any other string.

Field 3 is an explicitly provided NULL column. Because it is part of the original record, it is not affected by the NullColumns attribute.

Fields 5 and 6 are not provided and must be created by the DataConnector.

The NullColumns attribute can be used to modify these new operator-created columns.

If NullColumns is set to 'Y[es]', or the default behavior is used, the result will be as if the data file contained the record

"abc"|""||"def"|||

where both newly created columns are NULL.

But if NullColumns = 'N[o]' is used, the behavior will be as if the record was defined as

"abc"|""||"def"|""|""

where the newly created columns contain empty strings.

Note that fields 2 and 3, which were both part of the original data record, are unchanged regardless of the NullColumns attribute setting.

OpenMode = 'mode' Optional attribute that specifies the read/write access mode.

Valid values are:

  • 'Read' = Read-only access.
  • 'Write' = Write-only access.
  • 'WriteAppend' = Write-only access appending to existing file.

If mode is not specified for OpenMode, it defaults to 'Read' for a producer instance and 'Write' for a consumer instance.

OpenQuoteMark = 'character' Optional attribute that allows you to define the opening quote mark character within delimited data. The default value is the quotation mark character (").

It may be any single-byte or multibyte value from the session character set; for example, '"' or '||'.

PrivateLogName = 'logName' Optional attribute that specifies the name of a log that is maintained by the Teradata PT Logger inside the public log. The private log contains all of the diagnostic trace messages produced by the operator.

The file name is appended with the operator instance number. A "-1" is appended to the log name for instance 1. For example, if PrivateLogName = 'DClog', then the actual log name for instance 1 is DClog-1. Similarly, for instance 2, is DClog-2, etc.

The private log can be viewed using the tlogview command as follows, where jobid is the Teradata PT job name and privatelogname is the value for the operator’s PrivateLogName attribute:

   tlogview -j jobId -f privatelogname

If the private log is not specified, all output is stored in the public log.

For more information about the tlogview command, see Teradata PT Utility Commands.

QuotedData = 'option' Determines if data is expected to be enclosed within quotation marks.

Valid values are:

'Y[es]' = all columns are expected to be enclosed in quotation marks.

'N[o]' = columns are not expected to be enclosed within quotation marks (default).

‘Optional’ = columns can optionally be enclosed within quotation marks.
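For example, to accept input in which columns may or may not be quoted (a minimal sketch):

   VARCHAR QuotedData = 'Optional',
   VARCHAR OpenQuoteMark = '"'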

RecordErrorFileName = ‘filePath’ Optional attribute that specifies where error records are directed. Error records include those with either incorrect column counts or individual columns with invalid lengths.

If this attribute is undefined, the first occurrence of an error record will result in a fatal operator error and job termination.

RecordErrorVerbosity = 'option' Optional attribute that allows for annotations in the record error file.

Valid values are:

  • ‘Off’ = no annotations are to be inserted into the record error file (default).
  • ‘Low’ = the error message describing the nature of the error is included.
  • ‘Med’ = the file name and record number are included, along with error messages describing the nature of the error.
  • ‘High’ = the same as ‘Med’.
RecordsPerBuffer = count Optional attribute that defines the number of records to be processed by each instance during each processing phase. This attribute applies only when the MultipleReaders option is used; it is not relevant in any other scenario.

The default is calculated by dividing the IOBufferSize by the number of slave reader instances, and then dividing that result by the maximum record size as defined by the schema. The number of slave instances is equal to the total operator instances minus 1.

For example, if 10 reader instances are defined, the IOBufferSize is allowed to default (1048575) and the length of the schema is 400 bytes, then this value would default to 1048575 bytes / 9 instances / 400 bytes = 291 records.

RowsPerInstance = rows Optional attribute that specifies the maximum number of records processed by each instance of the operator.

This number spans files, meaning that processing continues over multiple files until the row limit is reached for each instance. If the limit is not reached for any instance, that instance ends normally.

   INTEGER RowsPerInstance = 1000

The limit is not effective across restarts, meaning the row count is reset to zero upon restart.

SkipRows = rows Optional attribute that specifies the number of rows to skip by each instance of the operator.

Whether SkipRows spans files or restarts with every file is governed by the value of SkipRowsEveryFile.

   INTEGER SkipRows = 1000
SkipRowsEveryFile = 'option' Optional attribute that governs the behavior of SkipRows (above).

When SkipRowsEveryFile is set to No (the default), the SkipRows value is cumulative; that is, processing continues over multiple files until the specified number of rows to skip is reached. For example, if SkipRows = 1000, SkipRowsEveryFile = 'N', and 5 files to be processed each contain 300 rows, files 1, 2, and 3 are skipped in their entirety, file 4 begins processing at row 101, and all of file 5 is processed. You might use this option to skip rows that were already processed in a failed job.

When SkipRowsEveryFile is set to Yes, SkipRows restarts at the beginning of each file. For example, if SkipRows = 5, SkipRowsEveryFile = 'Yes', and 5 files to be processed each contain 300 rows, the first 5 rows of each file are skipped and rows 6 through 300 are processed. You might use this option to skip repetitive header rows in each file to be processed.

   VARCHAR SkipRowsEveryFile = 'Y'
TextDelimiter = 'character' Optional attribute that specifies the bytes that separate fields in delimited records. Any number of characters can be defined via the attribute assignment.

The default delimiter character is the pipe character ( | ). To embed a pipe delimiter character in your data, precede the pipe character with a backslash ( \ ).

To use the tab character as the delimiter character, specify TextDelimiter = 'TAB'. Use uppercase "TAB", not lowercase "tab". The backslash is required if you want to embed a tab character in your data.
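For example, to read tab-delimited files:

   VARCHAR Format = 'Delimited',
   VARCHAR TextDelimiter = 'TAB'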
Timeout = seconds Optional attribute that specifies the number of seconds the system waits for input to finish.
  • Valid values are from 1 to 99999 seconds.
  • Not valid for a consumer instance of the operator. In this case, the attribute results in an error.
  • The attribute is passed to all attached access modules.

If no value is specified, the system does not wait for input to finish.

TraceLevel = 'level' Optional attribute that specifies the types of diagnostic information that are written by each instance of the operator to the public log (or private log, if one is specified using the PrivateLogName attribute).

The diagnostic trace function provides detailed information in the log file to aid in problem tracking and diagnosis. The trace levels are:

  • 'None' = disables the trace function (default). Status, error, and other messages default to the public log.

    The PrivateLogName attribute default is used only if a TraceLevel attribute other than 'None' is specified. If a TraceLevel attribute other than 'None' is specified without a PrivateLogName specification, the DataConnector operator generates a private log name, and a message containing the private log name is issued in the public log.

    If no TraceLevel attribute is specified, or if the specified value is 'None', and the PrivateLogName attribute is specified, the TraceLevel is set to 'Milestones'. The recommended TraceLevel value is 'None', which produces no log file. Specifying any value greater than 'IO_Counts' produces a very large amount of diagnostic information.

  • 'Milestones' = enables the trace function only for major events such as initialization, access module attach/detach operations, file openings and closings, error conditions, and so on.
  • 'IO_Counts' = enables the trace function for major events and I/O counts
  • 'IO_Buffers' = enables the trace function for major events, I/O counts, and I/O buffers
  • 'All' = enables the trace function for major events and I/O counts and buffers plus function entries.

If the PrivateLogName attribute specifies a log file without specifying the TraceLevel attribute, "minimal" statistics are displayed in the log file:

  • Name of files as they are processed
  • Notice when sending rows begins
  • On completion, the number of rows processed and the CPU time consumed.
  • Total files processed and CPU time consumed by each instance of the DataConnector operator.
The TraceLevel attribute is provided as a diagnostic aid only. The amount and type of additional information provided by this attribute will change to meet evolving needs from release to release.
TrimChar = ‘character’ Optional attribute that specifies the characters to be trimmed.

Rules for a trim character are:

  • The trim character must be a single character, but may be either a single-byte or multi-byte character. It is expressed in the client session character set.
  • By default, if character is not specified, the trim character is the blank (space) character. Trimming can be performed on either unquoted or quoted field values.
  • If a field consists solely of one or more trim characters, it will be a zero-length VARCHAR after trimming.
TrimColumns = 'option' Optional attribute that specifies whether characters are trimmed from column data.

Valid values are:

  • 'None' = no trimming (default)
  • 'Leading' = leading characters are trimmed
  • 'Trailing' = trailing characters are trimmed
  • 'Both' = both leading and trailing characters are trimmed
If TrimColumns and TruncateColumnData are enabled, trimming occurs before truncating.
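For example, to strip leading and trailing blanks from every column value (the blank is the default trim character, shown here explicitly):

   VARCHAR TrimColumns = 'Both',
   VARCHAR TrimChar = ' '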
TruncateColumnData = ‘option’ Optional attribute that determines how columns whose length is greater than that defined in the schema are treated.

Valid values are:

  • ‘Y[es]' = the column is truncated to the maximum length and processed without an error being raised. The edited record is sent to the Teradata Database and the original record is saved in the record error file.
  • ‘N[o]’ = TruncateColumnData is not invoked (default).
  • ‘YesWithoutLog’ = the edited row is sent to the Teradata Database, but the original record is not saved in the record error file.
VigilElapsedTime = minutes Optional attribute that specifies the elapsed time from the beginning to the end of the vigil time window, that is, the amount of time to wait from the VigilStartTime.

VigilElapsedTime and VigilStopTime are interchangeable.

The VigilStartTime is required, but either VigilStopTime or VigilElapsedTime can be used to finish the window definition.

VigilElapsedTime is expressed in minutes. For example, a 2-hour and 15-minute window is indicated as:

VigilElapsedTime = 135
VigilMaxFiles = numberOfFiles Optional attribute that defines the maximum number of files that can be scanned in one pass. Greater values require more Teradata PT global memory and could degrade performance.

The valid value range of numberOfFiles is from 10 to 50000.

The default value is 2000.

Use of the VigilMaxFiles attribute requires that a value for the ArchiveDirectoryPath attribute be specified.

The attribute’s value can be modified during job execution using the External Command Interface. To change the value of VigilMaxFiles during execution, enter:

twbcmd  <Teradata PT job ID> <operator ID>  VigilMaxFiles  <number of files>
VigilNoticeFileName = 'noticeFileName' Optional attribute that specifies the name of the file in which the vigil notice flag is to be written. For example, to request that a record be written to the file /home/user/Alert.txt, specify the attribute as:
VigilNoticeFileName = '/home/user/Alert.txt'

If the directory path is not specified, the file is saved in the working directory.

Naming a file activates the notification feature.

VigilSortField = ‘sortTime’ Optional attribute that provides the capability for the directory vigil scan files to be sorted in the order of the time they were last modified.

The valid values of sortTime are:

  • TIME

    When VigilSortField = 'TIME' is specified, all files will be sorted according to the time they were last modified.

  • NAME

    When VigilSortField = 'NAME' is specified, all files are sorted by filename and processed in ascending alphabetical order.

  • NONE (default)

    A value of ‘NONE’ means that the sort feature is off.

Since times associated with the files are tracked to the nearest second, more than one file may have the same timestamp. When modification times for files are less than one second apart, the sort order of the files may not represent the actual order modified.

When multiple instances are used, files cannot be processed in a specific sorted order. Therefore, when this attribute is used, Teradata PT allows only a single instance of the DataConnector operator in a job step. If more than one instance is specified, the job fails.

This attribute can be used for a batch as well as an active directory scan.

This attribute is not available for z/OS systems.
VigilStartTime = 'yyyymmdd hh:mm:ss' Optional attribute that specifies the time to start the vigil time window, that is, the period during which the directory specified in the DirectoryPath attribute is watched for the arrival of new files.

The start time is expressed as follows:

  • yyyy is the 4-digit year (2000-3000)
  • mm is the month (1-12)
  • dd is the day of the month (1-31)
  • hh is the hour of the day (0-23)
  • mm is the minute (0-59)
  • ss is the second (0-59)

For example, August 23, 2002, start 9:22:56 a.m. becomes:

VigilStartTime = '20020823 09:22:56'

This attribute is required for the VigilWaitTime attribute to work.

VigilStopTime = 'yyyymmdd hh:mm:ss' Optional attribute that specifies the time to stop the vigil time window, that is, the period during which the directory specified in the DirectoryPath attribute is watched for the arrival of new files.

The stop time is expressed as follows:

  • yyyy is the 4-digit year (2000-3000)
  • mm is the month (1-12)
  • dd is the day of the month (1-31)
  • hh is the hour of the day (0-23)
  • mm is the minute (0-59)
  • ss is the second (0-59)

For example, August 23, 2002, stop 2 p.m. becomes:

VigilStopTime  = '20020823 14:00:00'
VigilWaitTime = waitSeconds Optional attribute that specifies the amount of time to wait before starting to check the directory again if no new files were found.

A wait time of 2 minutes becomes:

VigilWaitTime = 120

The wait time defaults to 60 seconds only if VigilStartTime is specified.

The attribute’s value can be modified during job execution using the External Command Interface. To change the value of VigilWaitTime during execution, enter:

twbcmd  <Teradata PT job ID> <operator ID>  VigilWaitTime  <Seconds>
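For example, an active directory scan that watches a directory for two hours starting at 9:00 a.m., checks every 2 minutes, and archives processed files might combine the vigil attributes as follows; the paths and date are illustrative:

   VARCHAR DirectoryPath = '/data/incoming',
   VARCHAR FileName = '*.dat',
   VARCHAR ArchiveDirectoryPath = '/data/archive',
   VARCHAR VigilStartTime = '20170823 09:00:00',
   INTEGER VigilElapsedTime = 120,
   INTEGER VigilWaitTime = 120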