Listener continuously ingests data from REST, MQTT, and Kafka data stream sources and pushes it into the Listener data pipeline. Listener supports both Teradata Listener™ Kafka and customer-deployed Kafka, including customer-implemented Kafka connectors.
Customer-deployed Kafka can be configured during Listener installation. If you own Kafka clusters in addition to the default Kafka cluster, you can provide both broker and ZooKeeper details when creating a Kafka source in Listener. The default Kafka cluster is the one configured during installation and can be owned by either Listener or the customer.
Source Type | Description | Source Examples |
---|---|---|
REST | In the pipeline, the Ingest REST API passes REST data to Listener Kafka. REST sources use a REST API key, generated when you create the source, to ingest data and send messages. (See the REST sketch after this table.) | |
MQTT | An MQTT source uses a central broker with messages organized by topics. You can subscribe to one or more MQTT topics on an MQTT broker. In the data pipeline, Listener writes messages from the MQTT Subscriber to source topics in Kafka. You can optionally secure the MQTT subscription with an SSL certificate and private key. Each MQTT message has an associated Quality of Service (QoS) level, which determines the level of effort the broker uses to ensure the message is received. Listener supports messages with QoS level 0 (send at most once) or QoS level 1 (send at least once). (See the MQTT sketch after this table.) | |
KAFKA | Listener supports customer-implemented Kafka connectors as consumers of a subscribed topic, which Listener uses as a source for data that can be written to the target systems supported for customer-deployed Kafka. | |
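To make the REST row concrete, here is a minimal sketch of pushing a record to a Listener source with Python's `requests`. The endpoint path and the `Authorization` header format are assumptions, not documented values; substitute the host from your Listener deployment and the API key generated when you created the source.

```python
import requests

# Placeholder values: replace with your Listener host and the REST API key
# generated when the source was created.
LISTENER_INGEST_URL = "https://listener.example.com/v1/messages"  # assumed endpoint path
API_KEY = "your-source-api-key"

record = {"sensor_id": 42, "temperature": 21.5}

# The "token" Authorization scheme is an assumption; check your deployment's
# Ingest REST API documentation for the exact header format.
response = requests.post(
    LISTENER_INGEST_URL,
    json=record,
    headers={"Authorization": f"token {API_KEY}"},
    timeout=10,
)
response.raise_for_status()
```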
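For the MQTT row, the following sketch shows what subscribing to a topic at QoS 1 looks like with the `paho-mqtt` package (1.x callback style). The broker host and topic filter are placeholders; this illustrates the subscription mechanics described above, not Listener's internal MQTT Subscriber.

```python
import paho.mqtt.client as mqtt

BROKER_HOST = "broker.example.com"  # hypothetical broker
TOPIC = "sensors/#"                 # hypothetical topic filter

def on_connect(client, userdata, flags, rc):
    # QoS 1 ("send at least once") is the highest level Listener supports.
    client.subscribe(TOPIC, qos=1)

def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload.decode()} (qos={msg.qos})")

client = mqtt.Client()  # paho-mqtt 1.x callback API
client.on_connect = on_connect
client.on_message = on_message

# Optional: secure the subscription with an SSL certificate and private key,
# as described in the table above.
# client.tls_set(ca_certs="ca.crt", certfile="client.crt", keyfile="client.key")

client.connect(BROKER_HOST, 1883)
client.loop_forever()
```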
- Ingests the REST, MQTT, and customer-deployed Kafka data streams into Kafka.
- Writers configured for a REST, MQTT, or Kafka source read the incoming data at specified batch intervals.
- The writer associated with a specific target system destination writes the REST, MQTT, or Kafka data streams to the target system destination.
Kafka uses a write-ahead log buffer to store and manage the data sent to it, so service interruptions in target systems do not result in data loss. By default, Kafka holds data for only 72 hours; if the data is not consumed within that window, it is lost. The 72-hour default is configurable, as the sketch below shows.
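Retention is controlled by the standard Kafka topic setting `retention.ms`. As a minimal sketch, assuming a reachable broker and an existing source topic (both names here are placeholders), the kafka-python admin client can raise it:

```python
from kafka.admin import KafkaAdminClient, ConfigResource, ConfigResourceType

# Placeholder broker address.
admin = KafkaAdminClient(bootstrap_servers="kafka.example.com:9092")

# 96 hours in milliseconds; retention.ms controls how long unconsumed
# data is kept on the topic (the Listener default described above is 72 hours).
ninety_six_hours_ms = 96 * 60 * 60 * 1000

admin.alter_configs([
    ConfigResource(
        ConfigResourceType.TOPIC,
        "my-source-topic",  # placeholder topic name
        configs={"retention.ms": str(ninety_six_hours_ms)},
    )
])
admin.close()
```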
Target | Method Listener Writers Use to Write Data |
---|---|
Teradata Database | JDBC driver or Teradata QueryGrid™. Teradata QueryGrid™ can distribute data to Teradata Database systems to achieve high throughput when the data ingestion rate is high. |
HDFS | Sequence file format to a specified directory |
HBase | HBase Java library to write to HBase tables |
Broadcast stream | External apps through a WebSocket server |
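For the broadcast stream target, external apps receive data over a WebSocket connection. Here is a minimal client sketch using the Python `websockets` package; the endpoint URL is a placeholder, since the actual address depends on your Listener deployment:

```python
import asyncio
import websockets

# Hypothetical broadcast-stream endpoint exposed by the WebSocket server.
STREAM_URL = "wss://listener.example.com/v1/broadcast/my-target"

async def consume():
    # Connect and print each broadcast message as it arrives.
    async with websockets.connect(STREAM_URL) as ws:
        async for message in ws:
            print("received:", message)

asyncio.run(consume())
```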
- Shows trends in data flow from sources and detects variances that may indicate problems upstream.
- Uses the REST API to manage sources, users, and targets, and to provide Listener status.
For each data source and associated target, Listener continuously monitors incoming data streams, gathers metadata, and shows metrics for data flow over time, such as the number and size of records.
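As one hedged illustration of the management REST API mentioned above, the following sketch lists sources and prints a couple of fields. The base path, the `/sources` resource, the field names, and the bearer-token scheme are all assumptions for illustration, not documented endpoints:

```python
import requests

# Hypothetical management endpoint and token; the real paths and auth scheme
# depend on your Listener deployment's REST API.
BASE_URL = "https://listener.example.com/v1"
TOKEN = "your-admin-token"

resp = requests.get(
    f"{BASE_URL}/sources",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
for source in resp.json():
    # Assumed field names, for illustration only.
    print(source.get("name"), source.get("state"))
```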
Listener supports time series data that you can write to primary time index (PTI) tables in Teradata NewSQL Engine systems. When creating a target, you can either map a source field to the TD_TIMECODE PTI table column or let Listener populate TD_TIMECODE with the timestamp at which the record is inserted into the Teradata NewSQL Engine system.
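To make the PTI mapping concrete, here is a minimal sketch that creates a simple PTI table with the `teradatasql` driver. The connection parameters, table definition, and time-bucket choices are illustrative only; the PRIMARY TIME INDEX clause implicitly creates the TD_TIMECODE column that Listener maps or populates:

```python
import teradatasql

# Placeholder connection parameters.
with teradatasql.connect(host="tdhost.example.com", user="dbc", password="dbc") as con:
    cur = con.cursor()
    # The PRIMARY TIME INDEX clause implicitly adds TD_TIMECODE; Listener can
    # map a source field to it or populate it at insert time.
    cur.execute("""
        CREATE TABLE sensor_readings (
            sensor_id INTEGER,
            temperature FLOAT
        )
        PRIMARY TIME INDEX (TIMESTAMP(6), DATE '2020-01-01', HOURS(1))
    """)
```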
Listener supports up to 10,000 active systems, 10,000 active sources, and 10,000 active targets.
Target systems can be on-premises, in the Teradata Cloud, or in the public cloud.