The following list of issues applies to the performance-related aspects of queue tables.

- The general purpose of queue tables is to enable applications to interface with asynchronous tasks that occur outside the database.
The expected behavior for a queue table is that producers of queue table rows work at approximately the same speed as their consumers: rows are consumed almost as quickly as they are inserted into the table.
An application that typically populates a queue table with millions of rows before consuming any of them sees no performance gain over using a non-queue table, so it is not a good candidate for the feature.
Asynchronous events within the database can also use queue tables. For example, suppose you have defined a trigger on an inventory table that checks whether the quantity of an item has fallen below a threshold value. When the threshold is reached, the trigger fires and inserts a row into a queue table, which is then processed by an application that reorders the low-quantity item.
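The following sketch illustrates this pattern, assuming hypothetical inventory and reorder_queue tables; the QUEUE option and the mandatory leading queue insertion timestamp (QITS) column follow the documented queue table definition, while the table, trigger, and column names are illustrative only.

```sql
-- Hypothetical queue table. The first column must be the queue insertion
-- timestamp (QITS) column, defined with this exact type and default.
CREATE TABLE reorder_queue, QUEUE (
    qits     TIMESTAMP(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6),
    item_id  INTEGER,
    qty_left INTEGER
);

-- Hypothetical trigger that enqueues a row whenever an inventory update
-- drops an item below its reorder point. QITS is omitted from the INSERT,
-- so it takes its default value.
CREATE TRIGGER low_stock_trig
AFTER UPDATE OF (qty_on_hand) ON inventory
REFERENCING NEW AS n
FOR EACH ROW
WHEN (n.qty_on_hand < n.reorder_point)
(INSERT INTO reorder_queue (item_id, qty_left)
 VALUES (n.item_id, n.qty_on_hand);)
```

A reordering application can then drain reorder_queue with consume mode SELECT requests, as described in the items that follow.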
- The performance of a consume mode SELECT operation is very similar to that of selecting a single row from a non-queue table using a unique secondary index (USI).
It is more expensive than selecting a single row from a non-queue table using a primary index.
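A minimal sketch of a consume mode request, using the hypothetical reorder_queue table from the earlier example:

```sql
-- Consume mode: return the oldest row (the lowest QITS value) and delete
-- it from the queue. If reorder_queue is empty, the request enters a
-- delayed state until a row is inserted.
SELECT AND CONSUME TOP 1 *
FROM reorder_queue;
```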
- Position your SELECT AND CONSUME TOP 1 statements as early as possible within a transaction. This minimizes the likelihood that a SELECT AND CONSUME TOP 1 statement enters a delayed state while holding locks on resources that other requests might need.
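A minimal sketch of this ordering in an explicit (BTET mode) transaction, again using the hypothetical reorder_queue table; the literal item_id value stands in for the value returned by the consume step:

```sql
BT;  -- begin the explicit transaction

-- Consume first, before the transaction takes locks on other objects.
SELECT AND CONSUME TOP 1 item_id, qty_left
FROM reorder_queue;

-- Follow-on work using the consumed row.
UPDATE inventory
SET on_order = 'Y'
WHERE item_id = 1045;

ET;  -- end the transaction
```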
- You should avoid the following programming practices when issuing SELECT AND CONSUME statements.
- Coding any SELECT AND CONSUME statements within explicit transactions that might take and hold a large number of locks on database objects.
- Coding large numbers of SELECT AND CONSUME statements within a transaction, especially if the transaction also performs DELETE and UPDATE operations on the same queue table as the SELECT AND CONSUME statements.
Each time the system performs a DELETE, MERGE, or UPDATE operation on a queue table, the FIFO cache for that table is spoiled. The next INSERT or SELECT AND CONSUME statement performed on the table initiates a full-table scan to rebuild the FIFO cache, which has a performance impact.
- When the number of queue table rows on a single PE frequently exceeds the size of the queue table cache, which is approximately 2,000 row entries per table or 20,000 row entries per PE (the definitive limit is 1 MB of entries), the system must perform full-table scans more frequently to refresh the cache. Such states also increase the likelihood that lock-related transaction aborts and queue table cache overflow errors are returned to the requestor.
- An INSERT operation into a queue table does not affect response time when the system is not CPU-bound.
- An INSERT operation into a queue table is more expensive than an insert into a non-queue table because it forces an update of the FIFO cache on the affected PE.
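A minimal sketch of a producer-side insert into the hypothetical reorder_queue table; because the QITS column is omitted, it takes its default of CURRENT_TIMESTAMP(6), which places the row at the tail of the queue:

```sql
-- QITS is omitted and defaults to CURRENT_TIMESTAMP(6), so the row is
-- queued in insertion order.
INSERT INTO reorder_queue (item_id, qty_left)
VALUES (1045, 3);
```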
- You should restrict DELETE, MERGE, and UPDATE operations on queue tables to exceptional conditions because of their negative effect on performance. The critical factor is not how many such operations you code, but how frequently those operations are performed. An application that performs many DELETE, MERGE, or UPDATE operations only under rarely occurring, exceptional conditions is acceptable; otherwise, code these operations sparingly, and never as frequently performed operations.
UPDATE, MERGE, and DELETE operations on a queue table are more costly than the same operations performed against a non-queue table because each such operation forces a full-table scan to rebuild the FIFO cache on the affected PE.
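A minimal sketch of the kind of exceptional, infrequent maintenance this guideline permits, using the hypothetical reorder_queue table; the 7-day retention period is illustrative:

```sql
-- Exceptional maintenance only: purge entries that have sat unconsumed
-- for more than 7 days. The DELETE spoils the FIFO cache, so the next
-- INSERT or SELECT AND CONSUME on the table triggers a full-table scan
-- to rebuild it.
DELETE FROM reorder_queue
WHERE qits < CURRENT_TIMESTAMP - INTERVAL '7' DAY;
```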
- Vantage uses the Teradata dynamic workload management software to manage all deferred requests (transactions in a delayed state) against a queue table. The Teradata dynamic workload management client application software does not need to be enabled for the system to use it to manage delayed consume mode requests.
As a result, you should optimize your use of the two features together, because a large number of deferred requests against a queue table can degrade the ability of the Teradata dynamic workload management software to manage all queries optimally, not just delayed consume mode queries.
- The queue table FIFO cache on each PE supports a maximum of 100 queue tables.
When the number of active queue tables in the cache exceeds 100, the system performs full-table scans on all the tables in the cache and purges the cache using the following actions, attempting the first before proceeding to the second if the first is either not applicable or exhausted.
- Swap out a queue table that has been spoiled. For example, if a queue table has had a delete operation performed against it, it is a candidate for purge from the FIFO cache.
- Purge an inactive queue table from the FIFO cache.
See Queue Table Cache for details.
- The queue table FIFO cache on each PE supports approximately 20,000 queue table row entries.
When the number of row entries in the FIFO queue table cache exceeds the limit of roughly 20,000 row entries, the system purges the most recent entries from the longest queue to make room.
Because you can control the ordering of queue entries by inserting rows with user-selected QITS values in place of the system-determined default QITS values (see Ordering Queue Table Rows and the sketch after this list item), the entries that are flushed might not actually be those most recently added to the queue.
For more information, see Queue Table Cache.
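A minimal sketch of a user-selected QITS value, using the hypothetical reorder_queue table; the timestamp literal is illustrative:

```sql
-- An explicit QITS value positions the row in the queue by timestamp
-- order rather than by insertion time; an early value such as this one
-- pushes the row toward the head of the queue.
INSERT INTO reorder_queue (qits, item_id, qty_left)
VALUES (TIMESTAMP '2001-01-01 00:00:00.000000', 1045, 3);
```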
- Queue tables that hash to the same PE FIFO cache share the same pool of row entries.
- The ideal number of active queue tables per PE is 1.
To optimize the distribution of your queue tables across the PEs, consider creating them all at the same time.
- If you use a client application that is enabled for iterated request processing to insert rows into a queue table, take care to keep the number of records packed into the USING data buffer sent with the SQL request low.
For example, if you use the BTEQ .IMPORT command to insert rows into a queue table, keep the packing density of SQL operations per transaction below 5 using the BTEQ .SET PACK command, as shown in the sketch after this list item. See Basic Teradata® Query Reference, B035-2414. This ensures optimal performance by minimizing deadlocks and the retryable errors they can cause. Setting the .SET PACK command argument to a value greater than 1 enables iterated request processing, which imposes several restrictions on the DML USING clause. For information about these restrictions, see Teradata Vantage™ - SQL Data Manipulation Language, B035-1146.
By keeping the number of SQL operations per transaction low, you minimize the number of row hash-level WRITE locks placed on the table during the insert operations performed by the .IMPORT command. See Queue Table Cache.
If you know that you are not going to perform any update or delete operations on the queue table while inserting rows into it, and if the table has been consumed or inserted into since the last reset of the queue table cache, then packing density is not an issue.
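A minimal BTEQ sketch of this guideline, assuming the hypothetical reorder_queue table; the input file name, the USING field names, and the PACK value of 2 are illustrative:

```
.SET PACK 2
.IMPORT DATA FILE = reorder.dat
.REPEAT *
USING (in_item INTEGER, in_qty INTEGER)
INSERT INTO reorder_queue (item_id, qty_left)
VALUES (:in_item, :in_qty);
```

Keeping the PACK value low limits the number of row hash-level WRITE locks held at one time during the import.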