17.10 - Initialization Strings - Access Module

Teradata® Tools and Utilities Access Module Reference

Product: Access Module
Release Number: 17.10
Published: October 2021
Language: English (United States)
Last Update: 2021-11-02

Following are the initialization strings for loading and exporting data using the Teradata Access Module for Kafka.

An initialization string consists of a series of keyword and value pairs separated by blanks. Each keyword is preceded by a hyphen, and is not case-sensitive. The value is either an integer or a string. If desired, you can specify keywords in a file. For information on this approach, see the description below for the PARAMFILE keyword.

If a space character is part of the value of an initialization string parameter, enclose the value of the initialization string parameter in double quotes.
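
For example, the following consumer-mode initialization string combines several of the keywords described below (the broker address and topic name shown are placeholders):

-MODE C -BROKERS 10.0.0.1:9092 -TOPIC myTopic -PARTITION 0,1 -TRACELEVEL 1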
Initialization Strings



Syntax Element Description
-ADD_LINE_FEED This optional parameter accepts either 'Y' or 'N'. The Kafka access module will append a newline character to the message it receives from the Kafka server if -ADD_LINE_FEED=Y is specified in the initialization string.
-BATCHMODE This parameter is applicable in the import scenario. If -BATCHMODE=Y, then Kafka Axsmod will fetch messages in batches. Otherwise, it will fetch a single message in every read request.
-BLOCKSIZE The block size, in bytes, of the data transfer operation. The value can be between 1000 bytes and 16 MB. The default, if you do not specify the -BLOCKSIZE parameter, is 1 MB.
-BROKERS Kafka is a cluster of one or more servers. Each server is called a broker. This parameter takes the broker_ip_address:port of the leader broker. You can specify multiple brokers using a "," separator in a multi-broker environment. For example: broker1_ip_address:port,broker2_ip_address:port,broker3_ip_address:port is a valid value for the -BROKERS keyword.
-CLOSEQUOTE This optional parameter specifies a close quote character; use it together with -OPENQUOTE when the close quote character differs from the open quote character.
-CONFIG This parameter sets the server-side configuration properties. For example, if -CONFIG compression.codec=gzip is specified in the initialization string, then gzip message compression will be enabled in the export scenario and decompression of gzip-compressed messages will be enabled in the import scenario.
-DEFERRED_MODE This optional parameter accepts either 'Y' or 'N'. Deferred mode is specified by -DEFERRED_MODE=Y. The Kafka access module creates a LOB object containing a message it receives from the Kafka server, and passes the LOB object name to the TPT DC operator instead of the actual message, so that TPT loading operators can load the LOB object in deferred mode.
-LOBDIR This optional parameter can be used alongside the -DEFERRED_MODE parameter to specify a directory in which to store LOB objects.
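For example, the following fragment enables deferred mode and stores the LOB objects under a user-chosen directory (the path shown is a placeholder):

-DEFERRED_MODE=Y -LOBDIR /tmp/kafka_lobs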
-MODE An application can be a consumer of data or producer of data. -MODE specifies the application category. It accepts two values:
  • P – represents producer
  • C – represents consumer
-MSGLIMIT This parameter is applicable in the import scenario. msglimit allows you to specify the number of messages to be read from a particular topic, or from a particular partition of a topic.

msglimit can be applied at the topic level or at the partition level.

msglimit at the topic level:

When msglimit is applied at the topic level, the parameter -msglimit <n> must be included in the initialization string.

For example:
-msglimit 500

AccessModuleKafka will read 500 messages at the topic level and terminate.

msglimit at the partition level:

When msglimit is applied at the partition level, the following syntax should be used:
-P p1(n1),p2(n2),p3(n3)
where p1, p2, p3 denote partition numbers and n1, n2, n3 denote the msglimit for each of those partitions.

For example: -P 0(200),1(500),2(400)
AccessModuleKafka will read 200 messages from partition 0.
AccessModuleKafka will read 500 messages from partition 1.
AccessModuleKafka will read 400 messages from partition 2.

The RowsPerInstance feature of TPT should not be used with AccessModuleKafka: if a message contains more than one row, RowsPerInstance can result in data loss in both batch mode and non-batch mode. In particular, do not combine RowsPerInstance with the -MSGLIMIT feature of the Kafka access module, as the two features provide similar functionality and are incompatible with each other.

-OFFSETS This is an optional parameter.

The following values can be specified for the '-O' or '-OFFSETS' parameter:

  • Stored – For each partition, Kafka Axsmod starts consuming from the offset following the last consumed message offset that the previous run stored on the broker.
  • beginning – Kafka Axsmod consumes messages from the beginning of each partition, ignoring any offsets stored on the broker by the previous run.
  • end – Kafka Axsmod consumes messages from the end of each partition, ignoring any offsets stored on the broker by the previous run.
  • p1(o1),p2(o2),p3(o3) … – Specifies explicit starting offsets for individual partitions instead of using the offsets stored on the broker by the previous run. In this syntax, p1, p2, ... are partition numbers and o1, o2, ... are the corresponding starting offsets.

If this parameter is not included in the initialization string, the offsets of the partitions stored on the broker by the previous run are used by default as the starting offsets to consume messages.
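
For example, the following fragment starts consuming partition 0 at offset 100 and partition 1 at offset 250 (the partition numbers and offsets shown are illustrative):

-OFFSETS 0(100),1(250)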

-OFFSET_STORE_DIRECTORY This optional parameter is applicable only in consumer mode. The parameter accepts either <Directory Name> or BROKER as a value. An offset file named <TopicName>-<PartitionNumber> is created under the directory for each partition when <Directory Name> is specified as a value to the -OSD parameter.

The last offset of each partition read by Kafka Axsmod is written to the corresponding offset file. The offset files can then be provided to a second job so that it resumes reading from the next offset, where the first job left off.

When BROKER is specified as the value of the -OSD parameter, the last offset of each partition read by Kafka Axsmod is written to the BROKER. To instruct Kafka Axsmod to store offsets on the Kafka broker, specify '-OSD BROKER -CG <consumer_group_name>' or '-OSD BROKER -X group.id=<consumer_group_name>' in the initialization string. A consumer group is required for broker-based offset storage.
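
For example, either of the following alternative fragments enables offset storage, the first in a local directory and the second on the broker (the directory path and consumer group name shown are placeholders):

-OSD /tmp/kafka_offsets
-OSD BROKER -CG myConsumerGroup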

-OPENQUOTE This optional parameter specifies an open quote, where an open quote character is different from a close quote character.
-PARAMFILE This optional parameter supplies the object name for a file that includes more parameters. For example:

-paramfile myParmFile

  • Each line in the parameter file may have a single keyword (parameter) and its value(s).
  • Empty lines will be ignored.
  • Lines beginning with the pound (#) character will be ignored.
  • Keywords must begin in the first column.
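For example, myParmFile might contain the following lines, assuming keywords in the file keep the same hyphen-prefixed form used in the initialization string (the broker address and topic name shown are placeholders):

# Kafka consumer parameters
-MODE C
-BROKERS 10.0.0.1:9092
-TOPIC myTopic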
-PARTITION Message queues maintained under a Topic are referred to as partitions. This parameter takes the partition numbers. For example, if -PARTITION 0 is specified in the initialization string, Kafka Axsmod will read only from partition 0. If -PARTITION 0,1,2,3 is specified, Kafka Axsmod will read from partition 0, partition 1, partition 2, and partition 3 under the Topic.

An alternative range syntax can also be used: specifying -PARTITION [0-3] instructs Kafka Axsmod to read from partition 0, partition 1, partition 2, and partition 3 under the Topic.

-QUOTE This optional parameter specifies a quote character. The Kafka access module will enclose the message it receives from the Kafka server with the specified quote character.
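For example, -QUOTE " encloses each message in double quotes; to use different opening and closing characters, specify -OPENQUOTE and -CLOSEQUOTE instead (the bracket characters shown are illustrative):

-OPENQUOTE [ -CLOSEQUOTE ]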
-RWAIT The number of seconds Kafka Axsmod will wait to read data when it receives an end-of-message error from the Kafka server.

Acceptable Range: Enter 1-600 seconds or -1 for unlimited wait time.

Use -RWAIT instead of -WAIT, as the keyword -WAIT has been deprecated.

-SHOWP Displays the server properties. It will create a file named props in the working directory. The props file contains the configuration properties and their descriptions. The configuration properties listed in the props file can be set using the -CONFIG option of the Teradata Access Module for Kafka.
-SYNCMODE This parameter is applicable in the export scenario. If -SYNCMODE=Y, Kafka Axsmod acts as a synchronous producer; otherwise, it acts as an asynchronous producer.
-TIMEOUT Sets the timeout for the API calls provided by the librdkafka library. The default value is 100 ms.
-TOPIC Takes the name of a Topic (feed) maintained by the Kafka server.
-TRACELEVEL Sets the level of detail to be posted to the log file, where the following is true:
  • 0 : Disabled – No logging. This is the default if tracelevel is not specified.
  • 1 : Events – Events/Requests received by Kafka Axsmod from application.
  • 2 : Info – Provides the imported or exported message lengths and performance-related information.
  • 3 : ALL – Provides the hexadecimal dump of data which passed through the Teradata Access Module for Kafka.
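
For example, an export (producer-mode) initialization string might combine several of the keywords described in this section as follows (all values shown are placeholders):

-MODE P -BROKERS 10.0.0.1:9092 -TOPIC myTopic -SYNCMODE=Y -CONFIG compression.codec=gzip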