Teradata Listener is a self-service solution for ingesting and distributing multiple extremely fast-moving, high-volume data streams in near real-time or batch mode.
|Source Type|Description|Source Examples|
|---|---|---|
|REST|In the pipeline, the Ingest REST API passes REST data to Kafka. REST sources use a REST API key, generated when you create the source, to ingest data and send messages.| |
|MQTT|An MQTT source uses a central broker with messages organized by topics. You can subscribe to one or more MQTT topics on an MQTT broker. In the data pipeline, Listener writes messages from the MQTT Subscriber to source topics in Kafka. You can optionally secure the MQTT subscription with an SSL certificate and private key. Messages within MQTT have an associated Quality of Service (QoS) level, which determines the level of effort the broker uses to ensure a message is received. Listener supports messages with QoS level 0 (send at most once) or QoS level 1 (send at least once).| |
- Ingests the REST and MQTT data streams into Kafka.
- Writers configured for a REST or MQTT source read the ingested data at specific batch intervals.
- The writer associated with a target destination writes the REST or MQTT data streams to that destination.
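The batch-interval pattern in the steps above can be sketched as a writer that, once per interval, drains whatever has accumulated for its source and hands the batch to its target. The class and function names are illustrative, not Listener's internal API.

```python
import queue

def drain_batch(source_queue: "queue.Queue", max_records: int = 100) -> list:
    """Read up to max_records currently buffered records -- one batch interval."""
    batch = []
    while len(batch) < max_records:
        try:
            batch.append(source_queue.get_nowait())
        except queue.Empty:
            break
    return batch

class TargetWriter:
    """Writer bound to one target destination (names are hypothetical)."""
    def __init__(self, name: str):
        self.name = name
        self.written = []

    def write(self, batch: list) -> None:
        # A real writer would use JDBC/QueryGrid, HDFS, HBase,
        # or a WebSocket here; this sketch just records the batch.
        self.written.extend(batch)

q = queue.Queue()
for record in ({"id": i} for i in range(3)):
    q.put(record)

writer = TargetWriter("teradata-dev")
writer.write(drain_batch(q))
```

Decoupling the drain step from the write step is what lets a slow target fall behind without blocking ingestion: unread data simply stays buffered in Kafka until the next interval.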
Kafka uses a write-ahead log buffer to store and manage data sent to it, so service interruptions in target systems do not result in data loss. By default, Kafka retains data for 72 hours; data not consumed within that window is lost. The retention period is configurable.
|Target|Method Listener Writers Use to Write Data|
|---|---|
|Teradata Database|JDBC driver or Teradata QueryGrid. Teradata QueryGrid can distribute data to Teradata Database systems to achieve high throughput when the data ingestion rate is high.|
|HDFS|Sequence file format to a specified directory|
|HBase|HBase Java library to write to HBase tables|
|Broadcast stream|External apps through a WebSocket server|
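One way to picture the table above is as a dispatch from target type to write method. The sketch below is purely illustrative: the stub functions stand in for the JDBC/QueryGrid, sequence-file, HBase, and WebSocket paths and do not implement them.

```python
from typing import Callable, Dict, List

# Stubs standing in for the real write paths named in the table.
def write_teradata(batch: List[dict]) -> str:
    return f"JDBC/QueryGrid wrote {len(batch)} rows"

def write_hdfs(batch: List[dict]) -> str:
    return f"sequence file with {len(batch)} records"

def write_hbase(batch: List[dict]) -> str:
    return f"HBase put of {len(batch)} rows"

def write_broadcast(batch: List[dict]) -> str:
    return f"WebSocket broadcast of {len(batch)} messages"

WRITERS: Dict[str, Callable[[List[dict]], str]] = {
    "teradata": write_teradata,
    "hdfs": write_hdfs,
    "hbase": write_hbase,
    "broadcast": write_broadcast,
}

def route(target_type: str, batch: List[dict]) -> str:
    """Send a batch to the writer registered for its target type."""
    return WRITERS[target_type](batch)
```

A registry like this keeps adding a new target type a matter of registering one more writer rather than touching the routing logic.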
- Shows trends in data flow from sources and detects variances that may indicate problems upstream.
- Uses the REST API to manage sources, users, and targets, and provide Listener status.
For each data source and associated target, Listener continuously monitors incoming data streams, gathers metadata, and shows metrics for data flow over time, such as number of records and size of records.
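The flow metrics described above, record count and record size over time, can be sketched as simple per-interval bucketing. The tuple layout and bucket width here are assumptions for illustration; Listener's actual metadata schema is not shown.

```python
import json
from collections import defaultdict

def bucket_metrics(records, bucket_seconds: int = 60) -> dict:
    """Aggregate (epoch_seconds, payload_dict) pairs into per-interval
    counts and total payload bytes -- a sketch of flow metrics over time."""
    metrics = defaultdict(lambda: {"count": 0, "bytes": 0})
    for ts, payload in records:
        bucket = int(ts // bucket_seconds) * bucket_seconds
        size = len(json.dumps(payload).encode("utf-8"))
        metrics[bucket]["count"] += 1
        metrics[bucket]["bytes"] += size
    return dict(metrics)

stats = bucket_metrics([(0, {"a": 1}), (30, {"b": 2}), (65, {"c": 3})])
# Two records land in the 0-second bucket, one in the 60-second bucket.
```

Comparing consecutive buckets is also the basis for variance detection: a bucket whose count or byte total deviates sharply from its neighbors may indicate an upstream problem.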
Listener supports up to 10,000 active systems, 10,000 active sources, and 10,000 active targets.
Target systems can be on-premises, in the Teradata Cloud, or in the public cloud.