A queue table is a special database object: a persistent table used to handle queue-oriented data in applications such as event processing and asynchronous data loading, where the buffered data is later subjected to more complex processing. The properties of queue tables are similar to those of ordinary base tables, with the additional unique property of behaving like an asynchronous first-in-first-out (FIFO) queue. The FIFO ordering is approximate rather than strict because of the following factors:
- The system clocks on MPP system nodes are not synchronized.
- The QITS value for a row might be user-supplied or updated, either of which could change its position in the queue.
- A transaction rollback restores rows with their original QITS value, which might be an earlier value than rows that have already been consumed.
- Insert operations within the same multistatement request might be assigned the same QITS value.
- If Teradata Workload Manager is enabled, one or more rules might defer consume mode operations from running. Note also that the query scheduler might never see a queue table row, because a consume mode operation might delete the row before it qualifies for a query scheduler rule. As a general rule, you should not create rules that affect SELECT AND CONSUME operations, because such workload restrictions can easily lead to queue table rows being processed in an order that differs significantly from a “true” FIFO ordering. See Teradata® Query Scheduler User Guide, B035-2512, and Teradata® Query Scheduler Administrator Guide, B035-2511, for further information about rules.
You can think of a queue table as a regular table that also has a memory-resident cache associated with it that tracks the FIFO queue ordering of its rows (see Queue Table Cache). Additionally, consumed rows are retrieved and deleted from the database simultaneously, which ensures that no row can be processed more than once.
Because most, if not all, rows for a given queue table are memory-resident on a PE, they are processed similarly to primary index operations made against non-queue tables, which are single-AMP operations applied with a row-hash WRITE lock on the row. A queue table performs best when it has the following properties:
- Low cardinality (implying that its rows are consumed at roughly the same rate as they are inserted).
- Infrequent UPDATE operations to its rows.
- Infrequent DELETE operations on its rows.
Unlike standard persistent table definitions, a queue table definition must always contain a user-defined insertion timestamp (QITS) as the first column of the table. The QITS column contains the time at which the row was inserted into the queue table, which the system then uses to approximate FIFO ordering. Although the QITS value might not be unique, because timestamp values can repeat or be updated, Teradata Database always ensures the uniqueness of each row in the queue. Queue tables provide the following benefits:
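As an illustrative sketch (the table and non-QITS column names here are hypothetical), a queue table definition is distinguished from an ordinary table by the QUEUE option, and its first column is the QITS column, conventionally defined as shown:

```sql
-- Hypothetical queue table buffering order events.
-- The QUEUE option marks this as a queue table; the first column
-- must be the QITS insertion timestamp, which defaults to the
-- time the row is inserted.
CREATE MULTISET TABLE order_events, QUEUE (
    qits      TIMESTAMP(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6),
    order_id  INTEGER,
    status    VARCHAR(30)
)
PRIMARY INDEX (order_id);
```

Because the QITS column has a default of CURRENT_TIMESTAMP(6), an INSERT that omits it receives the insertion time automatically, while an application can also supply an explicit QITS value to influence queue position.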
- Enabling superior implementation of queue-oriented applications, such as message- or workflow-based models.
- Enabling internally- or externally-generated queries to wait actively for the appearance of event data in a queue without having to poll the database periodically to detect the presence of event-related data.
- Supporting asynchronous loading and processing program structures by which applications can insert event-related data into queue tables, which buffer that data until it can be processed by user-developed procedures that either perform complex analysis themselves or insert the data into other processing queues for later analysis.
- Supporting the ability to develop interfaces between the database management system and third party message-based Enterprise Application Integration (EAI) applications. For example, you can write external adapters to insert data into queue tables by invoking stored procedures, macros, or SQL INSERT statements. Similarly, you can write external adapters to extract data from queue tables for insertion into third party EAI application queues.
Typical applications of queue tables include the following:

- Event alerts.
When an event is detected by application code running in the database management system (such as a stored procedure), it can, for example, insert data from that incident into an event queue table by means of a trigger. An external event-processing application could then extract events from the database by submitting a SELECT AND CONSUME TOP 1 statement, which then waits for data to be inserted into a queue table. When data arrives at the queue, the waiting SELECT AND CONSUME TOP 1 statement returns a result to the external application, which then processes the data further.
The external application might then loop and submit another SELECT AND CONSUME TOP 1 statement to wait for further event data to be inserted into the queue table. This functionality eliminates the need for the polling loops required by applications, based on non-queue tables, that must blindly and repeatedly submit SELECT statements while waiting for an event to occur.
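The waiting consumer described above can be sketched with a single statement (the table name is hypothetical). If the queue table is empty, the request blocks until a row arrives; it then returns the oldest row and deletes it in the same operation:

```sql
-- Retrieve and delete the row with the lowest QITS value.
-- If order_events is empty, this request waits until a row
-- is inserted, eliminating the need for a polling loop.
SELECT AND CONSUME TOP 1 *
FROM order_events;
```

Because the row is deleted as part of the retrieval, no other session can consume the same row, so each event is processed at most once.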
- Data loading with asynchronous processing.
A typical example of such data loading coupled with asynchronous processing might be data being loaded into a queue table from an external messaging source.
Each individual piece of data might require significant processing. Rather than processing the incoming data immediately upon its arrival in the database, the system can instead buffer it in a queue table. The newly inserted data might then be processed by a group of stored procedures coded with SELECT AND CONSUME TOP 1 statements that wait for data to be inserted into the queue table. These stored procedure groups would share the labor of processing the queue table data, with each piece of arriving event data being parceled to an available stored procedure for processing.
This technique smooths resource consumption in response to varying data loading inflow rates. By controlling the number of stored procedures that are available to process queue table data, you can manage the maximum rate of resource consumption as well as the maximum throughput rate.
For example, during a brief period with a high inflow rate, the system can initially buffer the data in a queue table. When the inflow rate later falls below a user-determined critical level, waiting stored procedures can then process the buffered queue table data.
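One such worker might be sketched as a stored procedure like the following. All names, and the downstream process_payload procedure it calls, are hypothetical; this is only an illustration of the pattern, not a definitive implementation:

```sql
-- Hypothetical worker procedure: consumes one buffered row and
-- processes it. SELECT AND CONSUME blocks until a row is available,
-- so several instances of this procedure can share the workload.
CREATE PROCEDURE consume_one_event ()
BEGIN
    DECLARE v_order_id INTEGER;
    DECLARE v_status   VARCHAR(30);

    -- Retrieve and delete the oldest queued row.
    SELECT AND CONSUME TOP 1 order_id, status
    INTO :v_order_id, :v_status
    FROM order_events;

    -- Hypothetical application-specific processing of the row.
    CALL process_payload(:v_order_id, :v_status);
END;
```

Running several sessions that each call such a procedure in a loop gives the labor-sharing behavior described above: each arriving row is consumed by exactly one available worker.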
- Schedule changes.
When transportation carriers need to make an unanticipated schedule change, additional entities, such as booking and call center systems, need to be notified to handle the implications of the change.
For example, upon notification of the schedule change, the booking system can begin to rebook all affected transactions.
The front line for notifying and rebooking customers is the call center. Call centers work from a queue of customers, in this case a queue of customers who need to be notified of the schedule change. Calls can be prioritized by applying business rules that consider a number of factors, such as whether the change involves a long or short haul, how significant the change is, the length of the delay caused by the schedule change, and how valuable the customer is to the business.
The input of each of these systems to the analysis can be modeled as a queue table. For example, messages in one queue table might represent schedule changes. Stored procedures reading that queue table could insert messages into the queue tables for the booking and call center systems, and so on. Call center agents could then be assigned to customers who need to be notified of a scheduling change by an application that consumes data from the call center queue table.
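The fan-out from the schedule-change queue to the downstream queues might be sketched as follows. All table and column names are hypothetical, and the downstream queue tables are assumed to have a QITS column first, as every queue table must:

```sql
-- Hypothetical router: consumes one schedule-change message and
-- forwards it to the booking and call center queue tables.
CREATE PROCEDURE route_schedule_change ()
BEGIN
    DECLARE v_flight_id CHAR(8);
    DECLARE v_new_dep   TIMESTAMP(6);

    -- Blocks until a schedule change arrives in the source queue.
    SELECT AND CONSUME TOP 1 flight_id, new_departure
    INTO :v_flight_id, :v_new_dep
    FROM schedule_change_queue;

    -- Supply the QITS value explicitly; it could also be omitted
    -- to use the column default.
    INSERT INTO booking_queue
        VALUES (CURRENT_TIMESTAMP(6), :v_flight_id, :v_new_dep);
    INSERT INTO call_center_queue
        VALUES (CURRENT_TIMESTAMP(6), :v_flight_id, :v_new_dep);
END;
```

Separate consumer applications for the booking and call center queues could then apply the prioritization business rules described above.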
You can create error tables to track batch insert and update errors on a queue table. See CREATE ERROR TABLE for further information about error tables. Also see “INSERT/INSERT … SELECT” and “MERGE” in Teradata Vantage™ SQL Data Manipulation Language, B035-1146 for information about how to specify that you want to track batch insert and update errors for those DML requests.
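As a sketch of the pattern (table names hypothetical), an error table is created for the queue table, and a batch INSERT … SELECT then requests error logging so that offending rows are captured rather than aborting the request:

```sql
-- Create an error table associated with the hypothetical
-- order_events queue table.
CREATE ERROR TABLE order_events_err FOR order_events;

-- Batch-load staged rows into the queue table, logging rows that
-- fail (for example, on a data conversion error) to the error table.
INSERT INTO order_events
SELECT CURRENT_TIMESTAMP(6), order_id, status
FROM staged_orders
LOGGING ERRORS;
```

The rows captured in the error table can then be inspected and corrected before being resubmitted.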