Inserting Rows into Queue Tables - Teradata Vantage

Teradata® VantageCloud Lake

The first column of a queue table is defined as a Queue Insertion TimeStamp (QITS) column. The values in the column determine the order of the rows in the queue, resulting in approximate first-in-first-out (FIFO) ordering.

If you want the QITS value of a row to indicate the time that the row was inserted into the queue table, then you can use the default value, the result of CURRENT_TIMESTAMP, instead of supplying a value. If you want to control the placement of a row in the FIFO order, you can supply a TIMESTAMP value for the QITS column.
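For illustration, the following sketch assumes a simple queue table named shopping_cart with an item_id column; the table name and columns are hypothetical, and the full set of queue table options is described under CREATE TABLE. The first INSERT lets the QITS column take its CURRENT_TIMESTAMP default, while the second supplies an explicit TIMESTAMP value to control placement in the FIFO order.

     /* Hypothetical queue table; the first column is the QITS column */
     CREATE TABLE shopping_cart, QUEUE (
        cart_qits TIMESTAMP(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6),
        item_id   INTEGER)
     PRIMARY INDEX (item_id);

     /* QITS defaults to the insertion timestamp */
     INSERT INTO shopping_cart (item_id) VALUES (100);

     /* QITS supplied explicitly to control placement in the FIFO order */
     INSERT INTO shopping_cart
        VALUES (TIMESTAMP '2023-01-15 10:30:00.000000', 200);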

For a multiple-statement request containing multiple INSERT requests that do not supply values for the QITS column, the QITS values are the same for every row inserted.
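For example, in the following multiple-statement request (a sketch that reuses the hypothetical shopping_cart table above), neither INSERT supplies a QITS value, so both rows receive the same QITS value:

     INSERT INTO shopping_cart (item_id) VALUES (100)
    ;INSERT INTO shopping_cart (item_id) VALUES (200);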

If you want unique QITS values for every row in a queue table, you can do any of the following:
  • Supply a TIMESTAMP value for the QITS column in every INSERT request.
  • Avoid multiple-statement requests containing multiple INSERT statements that do not supply values for the QITS column.
  • Add incremental offsets to the current timestamp for the QITS column value in each INSERT request.

    For example:

         INSERT shopping_cart(CURRENT_TIMESTAMP + INTERVAL '0.001' SECOND, 100)
        ;INSERT shopping_cart(CURRENT_TIMESTAMP + INTERVAL '0.002' SECOND, 200)
        ;INSERT shopping_cart(CURRENT_TIMESTAMP + INTERVAL '0.003' SECOND, 300);
Regarding performance, an INSERT operation into a queue table has the following effects:
  • Does not affect response time when the system is not CPU-bound.
  • Is more expensive than an INSERT into a base table because of the need to update an internal in-memory queue.

For details on queue tables and the queue table cache, see CREATE TABLE.

Inserting into Queue Tables Using Iterated Requests

If you use an INSERT request in an iterated request to insert rows into a queue table, you may have to limit the number of data records packed with each request. Limiting the pack factor minimizes the number of rowhash-level WRITE locks placed on the table, which reduces the likelihood of deadlocks caused by conflicts between those locks and the all-AMPs table-level READ lock taken by the internal row collection processing that queue tables use to update the internal queue table cache.

The maximum number of data records to pack per request depends on the state of the queue table and the insert operation.

If all of the following are true, the number of data records you pack per request does not matter:
  • The queue table is not empty.
  • An INSERT request or SELECT AND CONSUME request has been performed on the queue table since the last system reset.
  • The insert operation does not update or delete queue table rows.

If any of the following are true, pack a maximum of four data records per request, because these conditions trigger the internal row collection processing that queue tables use to update the internal queue table cache:
  • The queue table is empty.
  • No INSERT request or SELECT AND CONSUME request has been performed on the queue table since the last system reset.
  • The insert operation updates or deletes queue table rows.

If you use BTEQ to import rows of data into a queue table, specify a maximum value of 4 with the BTEQ .SET PACK command.
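As a minimal BTEQ sketch, an import into the hypothetical shopping_cart table with a pack factor of 4 might look like the following; the data file name, USING layout, and column names are assumptions, not part of this topic:

     .SET PACK 4
     .IMPORT DATA FILE = cart_items.dat
     .REPEAT *
     USING (in_item_id INTEGER)
     INSERT INTO shopping_cart (item_id) VALUES (:in_item_id);

With PACK set to 4, BTEQ sends at most four data records with each iterated INSERT request.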