Listener Data Flow

Teradata® Listener™ User Guide

Product: Teradata Listener
Release Number: 2.03
Published: September 2018
Language: English (United States)
Last Update: 2018-10-01
Listener continuously ingests data from REST and MQTT data stream sources and pushes it into the Listener data pipeline. Listener can ingest the following types of data sources:
REST

Examples:
  • web events
  • email
  • social media
  • Twitter feeds
  • clickstream data
  • other REST sources

In the pipeline, the Ingest REST API passes REST data to Kafka. REST sources use a REST API key, which is generated when you create the source, to ingest data and send messages.
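
For example, sending a message to a REST source is an ordinary HTTP POST. The following sketch is illustrative only: the host name, endpoint path, and authorization header are assumptions, not the documented Ingest REST API contract; see the Listener API reference for the exact endpoint.

    # Minimal sketch of pushing a JSON message to a Listener REST source.
    # The URL and "Authorization" header below are hypothetical; substitute
    # the endpoint and API-key header from your Listener deployment.
    import requests

    LISTENER_INGEST_URL = "https://listener.example.com/message"  # hypothetical
    SOURCE_API_KEY = "your-source-api-key"  # generated when the source is created

    response = requests.post(
        LISTENER_INGEST_URL,
        headers={"Authorization": "token " + SOURCE_API_KEY},
        json={"sensor_id": 42, "reading": 98.6},
        timeout=10,
    )
    response.raise_for_status()  # a non-2xx status means the message was not accepted
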
MQTT

Examples:
  • sensor
  • machine
  • telemetry
  • other IoT sources

An MQTT source uses a central broker with messages organized by topics. You can subscribe to one or more MQTT topics on an MQTT broker.

In the data pipeline, Listener writes messages from the MQTT Subscriber to source topics in Kafka. You can optionally secure the MQTT subscription using an SSL certificate and private key.

Each MQTT message has an associated Quality of Service (QoS) level, which determines how much effort the broker makes to ensure a message is received. Listener supports messages with QoS level 0 (send at most once) or QoS level 1 (send at least once).
  • For QoS level 0, Listener writes the record to the source topic in Kafka. The broker assumes Listener handles the record as soon as it is sent.
  • For QoS level 1, Listener waits until a message is written to the source topic in Kafka before acknowledging success to the broker.
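
Listener's MQTT Subscriber is an internal component, but its behavior can be pictured with the open-source paho-mqtt client. In the sketch below, the broker host, port, and topic filter are placeholders; subscribing at QoS 1 asks the broker to redeliver each message until the client acknowledges it.

    # Illustrative MQTT subscriber at QoS 1, using the paho-mqtt 1.x API.
    # broker.example.com and the topic filter are placeholders.
    import paho.mqtt.client as mqtt

    def on_message(client, userdata, msg):
        # With QoS 1, the broker keeps redelivering a message until it is
        # acknowledged, analogous to Listener acknowledging success only
        # after the message is written to Kafka.
        print(msg.topic, msg.qos, msg.payload)

    client = mqtt.Client()
    # Optional: secure the subscription with an SSL certificate and private key.
    # client.tls_set(ca_certs="ca.pem", certfile="client.pem", keyfile="client.key")
    client.on_message = on_message
    client.connect("broker.example.com", 1883)
    client.subscribe("sensors/#", qos=1)
    client.loop_forever()
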
In the data pipeline, Listener does the following:
  1. Ingests the REST and MQTT data streams into Kafka.

    Kafka uses a write-ahead log buffer to store and manage the data sent to it.

  2. Writers configured for a REST or MQTT source read the ingested REST and MQTT data from Kafka at specific batch intervals.
  3. Each writer is associated with a specific target system destination and writes the REST or MQTT data stream to that destination, as sketched after this list.

    Writers manage how REST and MQTT data is stored, or persisted, in the following target system destinations:

    • Teradata Database

      Teradata targets can be configured to use a JDBC driver or Teradata QueryGrid to write data. Teradata QueryGrid can distribute data to Teradata Database systems to achieve high throughput when the data ingestion rate is high.

    • HDFS

      HDFS and HBase targets write data in sequence file format to a specified directory.

    • HBase

      HDFS and HBase targets write data in sequence file format to a specified directory.

    • Aster

      Aster targets use JDBC to write data.

    • Broadcast streams

      Listener sends broadcast streams to external apps through a WebSocket server.

    For more information about writers, see How Listener Writes to Targets.
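
Conceptually, a writer follows a read-batch-then-write loop. The sketch below illustrates that pattern with the open-source kafka-python client and the teradatasql driver standing in for the JDBC path; the topic, group, connection details, and table are placeholders, and Listener's actual writers are internal components that this does not reproduce.

    # Illustrative writer loop: read a batch from a Kafka source topic,
    # then persist it to a Teradata table. All names are placeholders.
    import json
    from kafka import KafkaConsumer
    import teradatasql

    consumer = KafkaConsumer(
        "listener-source-topic",
        bootstrap_servers="kafka.example.com:9092",
        group_id="writer-group",
        enable_auto_commit=False,  # commit offsets only after a successful write
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )

    with teradatasql.connect(host="td.example.com", user="writer", password="...") as con:
        cur = con.cursor()
        while True:
            # Poll for up to one batch interval; cap the batch size.
            batch = consumer.poll(timeout_ms=5000, max_records=500)
            records = [r for partition in batch.values() for r in partition]
            if not records:
                continue
            cur.executemany(
                "INSERT INTO landing_table (payload) VALUES (?)",
                [[json.dumps(r.value)] for r in records],
            )
            consumer.commit()  # mark the batch consumed only after it is persisted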

In Listener, the REST API manages sources, users, and targets, and provides Listener status.
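
As an illustration only, such administrative calls follow the usual REST pattern; the path and bearer-token header below are hypothetical stand-ins, not the documented endpoints.

    # Hypothetical sketch of a management call; the /listener/sources path
    # and the bearer-token header are assumptions, not the documented API.
    import requests

    resp = requests.get(
        "https://listener.example.com/listener/sources",
        headers={"Authorization": "Bearer your-admin-token"},
        timeout=10,
    )
    resp.raise_for_status()
    for source in resp.json():
        print(source)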

In addition, Listener does the following:
  • Uses Kafka to store data so that service interruptions in target systems do not result in data loss. By default, Kafka retains data for only 72 hours; data that is not consumed within that window is lost. The 72-hour default is configurable (see the snippet after this list).
  • Shows trends in data flow from sources and detects variances that may indicate problems upstream.
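
For context, in stock Kafka the retention window is controlled by the broker setting shown below. This assumes your Listener deployment exposes the standard Kafka configuration; the tuning mechanism in your environment may differ.

    # server.properties (standard Kafka broker configuration)
    # Retain ingested data for 72 hours before it is eligible for deletion.
    log.retention.hours=72
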
Target systems can be on-premises, in the Teradata Cloud, or in the public cloud.

Listener supports up to 10,000 active systems, 10,000 active sources, and 10,000 active targets.