Listener Overview - Teradata Listener

Teradata® Listener™ User Guide

Product: Teradata Listener
Release Number: 2.05
Published: March 2019
Language: English (United States)
Last Update: 2019-04-25
Document ID: B035-2910
Product Category: Analytical Ecosystem

Teradata Listener is a self-service solution for ingesting multiple extremely fast-moving, high-volume data streams and distributing them in near real-time or batch mode.



Listener continuously ingests data from REST and MQTT data stream sources and pushes it into the Listener data pipeline. Listener can ingest the following types of data sources:
Source Type: REST
Description: In the pipeline, the Ingest REST API passes REST data to Kafka. REST sources use a REST API key, which is generated when you create the source, to ingest data and send messages (see the sketch after this list).
Source Examples:
  • web events
  • email
  • social media
  • Twitter feeds
  • clickstream data
  • other REST sources
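
The following is a minimal sketch of pushing one record into a REST source with Python's requests package. The ingest URL, payload shape, and Authorization header format shown here are assumptions for illustration; use the ingest endpoint and the REST API key that Listener generates for your source.

  import requests

  # Hypothetical ingest URL and key; Listener generates the REST API key
  # when you create the source, and your deployment's ingest endpoint may
  # differ from this example.
  INGEST_URL = "https://listener.example.com/listener/ingest/messages"
  API_KEY = "<source-api-key>"

  response = requests.post(
      INGEST_URL,
      json={"event": "page_view", "user": "u123", "ts": "2019-03-01T12:00:00Z"},
      headers={"Authorization": f"token {API_KEY}"},
      timeout=30,
  )
  response.raise_for_status()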
Source Type: MQTT
Description: An MQTT source uses a central broker with messages organized by topics. You can subscribe to one or more MQTT topics on an MQTT broker. In the data pipeline, Listener writes messages from the MQTT subscriber to source topics in Kafka. You can optionally secure the MQTT subscription using an SSL certificate and private key.

Messages within MQTT have an associated Quality of Service (QoS) level, which determines the level of effort the broker uses to ensure a message is received. Listener supports messages with QoS level 0 (send at most once) or QoS level 1 (send at least once). A subscription sketch follows this list.
  • For QoS level 0, Listener writes the record to the source topic in Kafka. The broker assumes Listener handles the record as soon as it is sent.
  • For QoS level 1, Listener waits until a message is written to the source topic in Kafka before acknowledging success to the broker.
Source Examples:
  • sensor data
  • machine data
  • telemetry
  • other IoT sources
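
The sketch below uses the third-party paho-mqtt package to subscribe to a topic at QoS 1, the stronger of the two levels Listener supports. The broker host, topic, and certificate file names are hypothetical.

  import paho.mqtt.client as mqtt

  def on_connect(client, userdata, flags, rc):
      # QoS 1 asks the broker to redeliver each message until it is
      # acknowledged, matching "send at least once" semantics.
      client.subscribe("factory/telemetry/#", qos=1)

  def on_message(client, userdata, message):
      # message.qos reflects the level granted for this delivery.
      print(message.topic, message.qos, message.payload)

  client = mqtt.Client()
  client.on_connect = on_connect
  client.on_message = on_message
  # Optional: secure the connection with a certificate and private key,
  # analogous to securing a Listener MQTT subscription.
  # client.tls_set(certfile="client.crt", keyfile="client.key")
  client.connect("broker.example.com", 1883)
  client.loop_forever()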
In the data pipeline, data moves as follows (a consumer sketch appears after this list):
  1. Listener ingests the REST and MQTT data streams into Kafka.
  2. Writers configured for a REST or MQTT source read the data at specific batch intervals.
  3. Each writer writes the data streams to its associated target system destination.
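
The loop below is a conceptual sketch of step 2 and step 3, written with the third-party kafka-python package: a consumer polls a batch of records from a source topic and commits offsets only after the batch is persisted. The topic and group names are hypothetical; Listener manages its own Kafka topics internally.

  from kafka import KafkaConsumer

  def write_to_target(payload: bytes) -> None:
      """Placeholder for target-specific persistence logic."""
      print(payload)

  consumer = KafkaConsumer(
      "listener.source.web-events",     # hypothetical source topic
      bootstrap_servers="localhost:9092",
      group_id="example-writer",
      enable_auto_commit=False,
  )

  while True:
      # Pull a batch of records, analogous to a writer's batch interval.
      batch = consumer.poll(timeout_ms=5000, max_records=500)
      for partition, records in batch.items():
          for record in records:
              write_to_target(record.value)
      # Commit offsets only after the batch is persisted, so an
      # interrupted writer rereads rather than loses records.
      consumer.commit()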

Kafka uses a write-ahead log buffer to store and manage the data sent to it, so service interruptions in target systems do not result in data loss. By default, Kafka holds data for only 72 hours; data that is not consumed within that window is lost. The 72-hour default is configurable (see the sketch below).
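As an illustration of what changing the retention default involves at the Kafka level, the sketch below alters a topic's retention.ms setting with kafka-python's admin client. It assumes direct administrative access to the underlying Kafka cluster, and the topic name is hypothetical; in practice, follow the Listener configuration documentation for your deployment.

  from kafka.admin import KafkaAdminClient, ConfigResource, ConfigResourceType

  admin = KafkaAdminClient(bootstrap_servers="localhost:9092")

  # 72 hours expressed in milliseconds: 72 * 60 * 60 * 1000 = 259,200,000.
  retention_ms = str(72 * 60 * 60 * 1000)

  admin.alter_configs([
      ConfigResource(
          ConfigResourceType.TOPIC,
          "listener.source.web-events",   # hypothetical topic name
          configs={"retention.ms": retention_ms},
      )
  ])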

Writers manage how REST and MQTT data is stored, or persisted, in the following target destinations (a conceptual writer sketch follows the table):
Target: Teradata Database
Write Method: JDBC driver or Teradata QueryGrid. Teradata QueryGrid can distribute data to Teradata Database systems to achieve high throughput when the data ingestion rate is high.

Target: Aster
Write Method: JDBC driver

Target: HDFS
Write Method: Sequence file format written to a specified directory

Target: HBase
Write Method: HBase Java library writing to HBase tables

Target: Broadcast stream
Write Method: External apps through a WebSocket server
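
Listener's Teradata Database writer uses the JDBC driver or QueryGrid internally; the following is only a conceptual sketch of the kind of batched insert such a writer performs, written with the teradatasql Python driver. The host, credentials, and table are hypothetical.

  import teradatasql

  with teradatasql.connect(host="tdhost.example.com",
                           user="listener", password="secret") as con:
      with con.cursor() as cur:
          records = [
              ["rec-1", '{"event": "click"}'],
              ["rec-2", '{"event": "view"}'],
          ]
          # Question-mark parameter markers; executemany sends the rows
          # as one batched request where the driver supports it.
          cur.executemany(
              "INSERT INTO listener_landing (record_id, payload) VALUES (?, ?)",
              records,
          )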
In addition to ingesting data streams and distributing the data to targets, Listener does the following:
  • Shows trends in data flow from sources and detects variances that may indicate upstream problems.
  • Uses the REST API to manage sources, users, and targets, and to provide Listener status (a request sketch follows).

    For each data source and associated target, Listener continuously monitors incoming data streams, gathers metadata, and shows metrics for data flow over time, such as number of records and size of records.
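
The call below sketches querying the management REST API for the configured sources. The base URL, route, authentication header, and response field names are all assumptions for illustration; consult the Listener REST API reference for the actual endpoints and token scheme.

  import requests

  BASE_URL = "https://listener.example.com/v1"        # hypothetical
  HEADERS = {"Authorization": "Bearer <admin-token>"}  # hypothetical scheme

  response = requests.get(f"{BASE_URL}/sources", headers=HEADERS, timeout=30)
  response.raise_for_status()

  for source in response.json():
      # Field names are assumed for illustration.
      print(source.get("name"), source.get("state"))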

Listener supports up to 10,000 active systems, 10,000 active sources, and 10,000 active targets.

Target systems can be on-premises, in the Teradata Cloud, or in the public cloud.