How Listener Writes to Targets

Teradata® Listener™ User Guide

Product: Teradata Listener
Release Number: 2.03
Published: September 2018
Language: English (United States)
Last Update: 2018-10-01
dita:mapPath: kum1525897006440.ditamap
dita:ditavalPath: ft:empty
dita:id: B035-2910
Lifecycle: previous
Product Category: Analytical Ecosystem

Teradata and Aster Targets

Teradata Listener uses a JDBC driver to write data to a Teradata Database or Teradata Aster Database.
  1. Listener stores each streaming message in a staging table. For each record, the staging table holds three metadata columns and one raw data column.
  2. In near real-time mode, Listener uses prepared SQL INSERT statements to micro-batch the data, flushing every 240 ms or every 4000 records, whichever threshold is reached first (see the sketch after this list).
  3. The JDBC driver persists all the batched records into the Teradata Database or Teradata Aster Database.
  4. Listener receives acknowledgement that the data has been successfully persisted into the Teradata Database or Teradata Aster Database.
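The micro-batching described in steps 2 and 3 follows the standard JDBC prepared-statement batch pattern. The following is a minimal sketch of that pattern, not Listener source code; the staging table name, column names, connection URL, and credentials are assumptions made for illustration.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Timestamp;
import java.util.List;

public class StagingBatchWriter {

    // Hypothetical staging table: three metadata columns plus one raw data column per record.
    private static final String INSERT_SQL =
        "INSERT INTO listener_stage (msg_uuid, source_id, ingest_ts, raw_data) VALUES (?, ?, ?, ?)";

    // Writes one micro-batch with a prepared INSERT. Listener flushes a batch
    // every 240 ms or every 4000 records, whichever threshold is reached first.
    static void writeBatch(Connection conn, List<String[]> records) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(INSERT_SQL)) {
            for (String[] r : records) {
                ps.setString(1, r[0]);                                         // message UUID
                ps.setString(2, r[1]);                                         // source ID
                ps.setTimestamp(3, new Timestamp(System.currentTimeMillis())); // ingest time
                ps.setString(4, r[2]);                                         // raw payload
                ps.addBatch();
            }
            ps.executeBatch(); // the JDBC driver persists the batched rows
            conn.commit();
        }
    }

    public static void main(String[] args) throws Exception {
        // Placeholder Teradata JDBC URL and credentials.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:teradata://dbc.example.com/DATABASE=listener_stage_db", "dbuser", "dbpass")) {
            conn.setAutoCommit(false);
            writeBatch(conn, List.of(new String[] {"uuid-1", "source-1", "{\"temp\": 21.5}"}));
        }
    }
}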
Teradata Listener can also write to a Teradata Database target using Teradata QueryGrid with passthrough or user-provided mapping. When the data ingestion rate is high, Teradata QueryGrid achieves high throughput. When the data ingestion rate is slow, data might be written faster without Teradata QueryGrid. If you are using Teradata QueryGrid, you must specify the following additional properties:
Property         Description
Target Subtype   The target subtype. The only supported value is querygrid.
Foreign Server   The foreign server created previously using the Teradata QueryGrid link between the Listener and Teradata nodes. Required when Target Subtype is querygrid.

HDFS Target with or without Kerberos

Listener writes HDFS target data in sequence file format (.seq) to the directory provided in the data_path field. In the example below, data is written to .seq files in /user/testuser/kerberos/{source_id}/.
This is a standard method for writing to HDFS targets.
{
    "target_id": "c1bd34bf-93e7-4ce2-b782-23d1c71e06d3",
    "source_id": "e750d1fe-2608-43f3-9d7d-6c1231d681a8",
    "bundle_interval": 100,
    "bundle_type": "records",
    "data_path": {
      "extension": "seq",
      "path": "/user/testuser/kerberos"
    },
    "target_type": "hdfs",
    ....
}
When a bundle_interval is specified (100 records in this example):
  1. Listener collects and holds data records in a temporary directory called /user/testuser/kerberos/+tmp.
  2. When there are 100 records in the tmp directory, Listener moves the data from the tmp directory to sequence files (.seq) in /user/testuser/kerberos/{source_id}/.
    If Listener does not collect the 100 records specified by bundle_interval before the default time interval of 100 seconds elapses, it moves whatever data it has collected when that interval expires. This count-or-time rule is sketched below.
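The two steps above amount to a flush-on-count-or-timeout rule. The sketch below illustrates only that rule; the Bundler class and its method names are hypothetical and are not Listener source code.

import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

// Illustrates the count-or-time bundling rule: flush when the record count
// reaches bundle_interval, or when the default 100-second interval expires.
public class Bundler {

    private final int bundleInterval;                          // e.g. 100 records
    private final Duration maxWait = Duration.ofSeconds(100);  // default time interval
    private final List<String> buffer = new ArrayList<>();
    private Instant firstRecordAt;

    public Bundler(int bundleInterval) {
        this.bundleInterval = bundleInterval;
    }

    // Adds a record to the in-flight bundle and flushes if either threshold is reached.
    public void add(String record) {
        if (buffer.isEmpty()) {
            firstRecordAt = Instant.now();
        }
        buffer.add(record);
        if (buffer.size() >= bundleInterval || timedOut()) {
            flush();
        }
    }

    // Called periodically by a scheduler so a partially filled bundle still flushes on timeout.
    public void tick() {
        if (!buffer.isEmpty() && timedOut()) {
            flush();
        }
    }

    private boolean timedOut() {
        return Duration.between(firstRecordAt, Instant.now()).compareTo(maxWait) >= 0;
    }

    private void flush() {
        // In Listener, this is the point where data moves from the +tmp directory
        // to .seq files under /user/testuser/kerberos/{source_id}/.
        System.out.println("Flushing " + buffer.size() + " records to sequence files");
        buffer.clear();
    }
}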

Sequence files (.seq) are in key-value format, with the key and value delimited by a tab. The key is a random UUID and is not associated with the Listener UUID metadata. The value is the ingested data plus the metadata appended by Listener.
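One way to verify the output is to read the files back with the standard Hadoop SequenceFile reader API. The sketch below is a generic reader, not part of Listener; it assumes the Hadoop configuration on the classpath points at the target cluster (with a Kerberos ticket already obtained, if the cluster is Kerberized), and the file name under the {source_id} directory is hypothetical.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.util.ReflectionUtils;

public class ListenerSeqReader {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Example directory from this section, using the source_id from the JSON above;
        // the file name itself is a hypothetical placeholder.
        Path path = new Path(args.length > 0 ? args[0]
                : "/user/testuser/kerberos/e750d1fe-2608-43f3-9d7d-6c1231d681a8/part-0000.seq");

        try (SequenceFile.Reader reader =
                 new SequenceFile.Reader(conf, SequenceFile.Reader.file(path))) {
            // Instantiate key and value holders from the classes recorded in the file header.
            Writable key = (Writable) ReflectionUtils.newInstance(reader.getKeyClass(), conf);
            Writable value = (Writable) ReflectionUtils.newInstance(reader.getValueClass(), conf);

            while (reader.next(key, value)) {
                // Key: a random UUID; value: the ingested data plus Listener metadata.
                System.out.println(key + "\t" + value);
            }
        }
    }
}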