EXPLAIN Request Modifier Phrase Terminology

Teradata® VantageCloud Lake

Deployment
VantageCloud
Edition
Lake
Product
Teradata Vantage
Published
January 2023
Language
English (United States)
Last Update
2024-04-03

The following table explains terms used in EXPLAIN phrases that are not self-explanatory.

Phrase Explanation
:*
  Vantage has replaced values with this marker because they are masked from the EXPLAIN output for a dynamic query plan.
m1 column partitions of …
  Up to m1 column partitions of the column-partitioned table or join index may need to be accessed. There are column partition contexts available for each of the m1 column partitions.

For this phrase to occur, the column-partitioned table or join index does not have row partitioning.

m1 ≥ 2.

One of the column partitions accessed may be the delete column partition.

Not all of the m1 column partitions may need to be accessed if no rows qualify.

A USING ROWID SPOOL phrase may follow the table or join index name. If so, m1 is the number of column partitions accessed when using the rowID spool while m2 in the USING ROWID SPOOL phrase is the number of column partitions accessed to build the rowID spool.

This phrase is used in steps such as RETRIEVE, DELETE, JOIN, and MERGE that may read a column-partitioned source.

For a delete operation, Vantage does not immediately reclaim storage for any deleted rows other than LOBs and column partitions with ROW format. A subsequent fast path deletion of all the rows of the table or join index reclaims the storage of all deleted rows in the table, including previously deleted rows.

m1 column partitions (c1 contexts) of …
  Up to m1 column partitions of the column-partitioned table or join index may need to be accessed using c1 column partition contexts.

For this phrase to occur, the column-partitioned table or join index does not have row partitioning.

m1 ≥ 2.

2 < c1 < m1-1

c1 is equal to, or one less than, the number of available column partition contexts.

One of the column partitions accessed may be the delete column partition. Not all of the m1 column partitions may need to be accessed if no rows qualify.

A "using CP merge Spool" or "using covering CP merge Spool" phrase follows the table or join index name.

This phrase is used in steps such as RETRIEVE, DELETE, JOIN, and MERGE that may read a column-partitioned source.

To access the m1 column partitions, columns from the column partitions are merged up to c1 column partitions at a time into a column-partitioned spool until all the projected columns have been retrieved for the qualifying rows.

One or more of the merges may require accessing the delete column partition. For each such merge, the delete column partition is included in m1.

Performance can degrade if c1 is much less than m1 and the retrieve is not selective. For tables on the Block File System, consider decreasing the number of column partitions that need to be accessed, decreasing the data block size for the table, combining column partitions so there are fewer column partitions, or increasing PPICacheThrP if there is enough available memory to do so.

For a delete operation, Vantage does not immediately reclaim storage for any deleted rows other than LOBs and column partitions with ROW format. A subsequent fast path deletion of all rows of the table or join index reclaims the storage of all deleted rows in the table, including previously deleted rows.
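The c1-at-a-time merge into a column-partitioned spool described above can be sketched in Python. This is an illustrative model only, not Vantage internals: each column partition is a list of values aligned by position (standing in for the rowID), and `c1` limits how many partition contexts are open in one pass.

```python
def cp_merge(column_partitions, c1):
    """Sketch: merge the needed column partitions into a row-form spool,
    opening at most c1 column-partition contexts per pass."""
    names = list(column_partitions)
    n_rows = len(next(iter(column_partitions.values())))
    spool = [{} for _ in range(n_rows)]
    # Each pass reads up to c1 column partitions and fills in their values.
    for start in range(0, len(names), c1):
        for name in names[start:start + c1]:
            for rowid, value in enumerate(column_partitions[name]):
                spool[rowid][name] = value
    return spool

# Three column partitions merged two at a time (two passes).
spool = cp_merge({"a": [1, 2], "b": [3, 4], "c": [5, 6]}, c1=2)
```

The sketch shows why a small `c1` relative to `m1` costs extra passes over the data, which is the degradation the entry warns about.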

n partitions of …
  Only n of the combined partitions are accessed. n is greater than one.

This phrase only applies to a row-partitioned, primary-indexed (RPPI) table or join index.

n1 combined partitions (one column partition) of …
  One column partition of multiple row partitions of the column-partitioned table or join index may need to be accessed.

For this phrase to occur, the table or join index is accessed using rowids.

A "using rowid Spool" phrase may follow the table or join index name. One column partition is accessed when using the rowID spool while m2 in the "using rowid Spool" phrase is the number of column partitions accessed to build the rowID spool.

The phrase is used in steps such as RETRIEVE, DELETE, JOIN, and MERGE that may read a column-partitioned source.

n1 combined partitions (m1 column partitions) of …
  Rows and columns for up to n1 combined partitions of the table or join index may need to be accessed.

For the column-partitioning level, m1 column partitions may need to be accessed. There are column partition contexts available for each of the m1 column partitions.

For this phrase to occur, the column-partitioned table or join index has both row and column partitioning. One of the column partitions that may be accessed is the delete column partition.

Not all of the m1 column partitions may need to be accessed if no rows qualify.

A USING ROWID SPOOL phrase may follow the table or join index name. If so, m1 is the number of column partitions accessed when using the rowID spool while m2 in the USING ROWID SPOOL phrase is the number of column partitions accessed to build the rowID spool.

The phrase is used in steps such as RETRIEVE, DELETE, JOIN, and MERGE that may read a column-partitioned source.

For a delete operation, Vantage does not immediately reclaim storage for any deleted rows other than LOBs and column partitions with ROW format. A subsequent fast path deletion of all the rows in a row partition reclaims the storage of all deleted rows in that row partition, including previously deleted rows in that partition. A subsequent fast path deletion of all the rows of the table or join index reclaims the storage of all deleted rows in the table, including previously deleted rows.

n1 combined partitions (m1 column partitions and c1 contexts) of …
  Rows and columns for up to n1 combined partitions of the table or join index may need to be accessed.

For the column-partitioning level, m1 column partitions may need to be accessed. For this phrase to occur, the column-partitioned table or join index has both row and column partitioning.

For the column-partitioning level, c1 column partition contexts are used to access up to m1 column partitions.

n1 ≥ 2.

m1 ≥ 2.

2 < c1 < m1-1 and is equal to or one less than the number of available column partition contexts.

One of the column partitions that may be accessed is the delete column partition. Not all of the m1 column partitions may need to be accessed if no rows qualify. A "using CP merge Spool" or "using covering CP merge Spool" phrase follows the table or join index name.

The phrase is used in steps such as RETRIEVE, DELETE, JOIN, and MERGE that may read a column-partitioned source. To access the m1 column partitions, columns from the column partitions are merged up to c1 column partitions at a time into a column-partitioned spool until all the projected columns have been retrieved for the qualifying rows.

One or more of the merges may require accessing the delete column partition. For each such merge, the delete column partition is included in m1.

Performance can degrade if c1 is much less than m1 and the retrieve is not selective. For tables on the Block File System, consider decreasing the number of column partitions that need to be accessed, decreasing the data block size for the table, combining column partitions so there are fewer column partitions, or increasing PPICacheThrP if there is enough available memory to do so.

For a delete operation, Vantage does not immediately reclaim storage for any deleted rows other than LOBs and column partitions with ROW format. A subsequent fast path deletion of all the rows in a row partition reclaims the storage of all deleted rows in that row partition, including previously deleted rows in that partition. A subsequent fast path delete of all the rows of the table or join index reclaims the storage of all deleted rows in the table, including previously deleted rows.

aggregate results are computed globally …
  Totals are aggregated across all AMPs in an indicated map.
For this case, Vantage performs all four steps of the ARSA aggregation algorithm:
  1. Aggregate locally on each AMP in the map.
  2. Redistribute the local aggregations to their target AMPs.
  3. Sort the redistributed aggregations.

    This stage of the process includes the elimination of duplicate rows.

  4. Aggregate globally to compute the final aggregation.
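The four ARSA stages can be sketched in Python, modeling AMPs as lists of rows. The AMP count and the use of Python's `hash()` for redistribution are illustrative assumptions, not Vantage internals:

```python
from collections import defaultdict

def arsa_sum(amps, key, value, n_amps=4):
    """Sketch of Aggregate-Redistribute-Sort-Aggregate for SUM(value) GROUP BY key."""
    # 1. Aggregate locally on each AMP in the map.
    local = []
    for amp_rows in amps:
        partials = defaultdict(int)
        for row in amp_rows:
            partials[row[key]] += row[value]
        local.append(partials)
    # 2. Redistribute the local aggregations to their target AMPs by hash of the key.
    target = [[] for _ in range(n_amps)]
    for partials in local:
        for k, v in partials.items():
            target[hash(k) % n_amps].append((k, v))
    # 3. Sort the redistributed aggregations (collocates duplicate group keys).
    for rows in target:
        rows.sort(key=lambda kv: str(kv[0]))
    # 4. Aggregate globally to compute the final aggregation.
    final = defaultdict(int)
    for rows in target:
        for k, v in rows:
            final[k] += v
    return dict(final)

# Two AMPs; dept "A" has partial sums on both, combined in the global step.
amps = [[{"dept": "A", "sal": 10}, {"dept": "B", "sal": 5}],
        [{"dept": "A", "sal": 7}]]
totals = arsa_sum(amps, "dept", "sal")
```

The sketch also makes the later entries concrete: "by skipping global aggregation" omits steps 2-4, and "by skipping local aggregation" omits step 1.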
aggregate results are computed locally …
  Totals are aggregated locally on each AMP in an indicated map.

For this case, local aggregation is the only stage of the ARSA aggregation algorithm that Vantage performs.

all partitions of …
  Vantage accesses all combined partitions of an AMP for a primary index access to a single AMP for a primary-indexed row-partitioned table or join index.

This phrase is not applicable to column-partitioned tables or join indexes because those objects do not have a primary index.

a single column partition of …
  Only one column partition of the column-partitioned table or join index may need to be accessed. For this phrase to occur, the column-partitioned table or join index does not have row partitioning. Also, the delete column partition need not be accessed either because the column partition has ROW format or access is using rowIDs from an index or rowID spool.

A "using rowid Spool" phrase may follow the table or join index name. One column partition is accessed when using the rowID spool while m2 in the "using rowid Spool" phrase is the number of column partitions accessed to build the rowID spool.

This phrase is used in steps such as RETRIEVE, DELETE, JOIN, and MERGE that may read a column-partitioned source.

For a delete operation, Vantage does not immediately reclaim storage for any deleted rows, other than LOBs and column partitions with ROW format. A subsequent fast path deletion of all of the rows of the table or join index reclaims the storage of all deleted rows in the table, including previously deleted rows.

a single combined partition of …
  Only one column partition of only one row partition of the column-partitioned table or join index may need to be accessed.

For this phrase to occur, the column-partitioned table or join index has to have both row and column partitioning. Also, the delete column partition need not be accessed either because the column partition has ROW format or because access is using rowids.

A "using rowid Spool" phrase may follow the table or join index name. One column partition is accessed when using the rowID spool while m2 in the "using rowid Spool" phrase is the number of column partitions accessed to build the rowID spool.

This phrase is used in steps such as RETRIEVE, DELETE, JOIN, and MERGE that may read a column-partitioned source.

a single partition of …
  Only a single combined partition of a primary-indexed row-partitioned table or join index is accessed.

This phrase is not applicable to a column-partitioned table or join index.

(all_amps) …
  The spool is created on all AMPs in a map to which the step is sent. Because the spool is created on all AMPs, a dynamic AMP group is not needed.
all-AMPs JOIN step in mapname by way of a RowHash match scan …
  The join step is processed on all AMPs in the indicated map. The first row is retrieved from the first table; the hash code for that row is used to locate a row from the second table.
Each row-hash match is located and processed as follows:
  1. The row-hashes are compared. If not equal, the larger row-hash is used to read rows from the other table until a row-hash match is found, or until the table is exhausted.
  2. If match is found, each pair of rows with the same hash code is accessed (one at a time) from the two tables. For each such pair, if the join condition is satisfied, a join result row is produced.
  3. After all rows with the same row-hash are processed from both tables, one more row is read from each table. The row-hashes from these two rows are compared, restarting the compare process.
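The compare-and-advance loop above is essentially a sort-merge join on row-hash values. A minimal Python sketch, under the assumption that both inputs are already sorted by row-hash (the hash values and payloads are illustrative):

```python
def rowhash_match_scan(left, right):
    """Sketch of a RowHash match scan over two inputs sorted by row-hash.

    Each element is a (row_hash, payload) pair. Emits joined payload pairs
    whose hashes match; the real step would still evaluate the join
    condition on each pair before producing a result row.
    """
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        lh, rh = left[i][0], right[j][0]
        if lh < rh:
            i += 1            # advance the side with the smaller row-hash
        elif lh > rh:
            j += 1
        else:
            # Process each pair of rows sharing this row-hash, one at a time.
            i_end = i
            while i_end < len(left) and left[i_end][0] == lh:
                i_end += 1
            j_end = j
            while j_end < len(right) and right[j_end][0] == lh:
                j_end += 1
            for l in left[i:i_end]:
                for r in right[j:j_end]:
                    out.append((l[1], r[1]))
            # Read one more row from each table and restart the compare.
            i, j = i_end, j_end
    return out
```

Because both inputs advance monotonically, each table is scanned once, which is what makes the RowHash match scan efficient.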
all-AMPs RETRIEVE step in mapname by way of an all-rows scan …
  All rows of a table are scanned row by row on all AMPs in the indicated map on which the table is stored.
a rowkey-based …
  The join is hash-based by partition (that is, by the rowkey). There are equality constraints on all the partitioning columns and primary index columns. This allows for a faster join because each noneliminated row partition must be joined with at most one other row partition.

When the phrase is not given, the join is hash based. That is, there are equality constraints on the primary index columns from which the hash is derived.

With either method, the join conditions must still be validated.

<BEGIN ROW TRIGGER LOOP>
  Processing of the trigger action statements defined in the row trigger starts from the current AMP step, step n, of the EXPLAIN text.

All AMP steps from the current step through the step in which the phrase END ROW TRIGGER LOOP for step n appears constitute the row trigger loop.

by skipping global aggregation
  When estimated to be cost-effective, Vantage avoids redistributing rows and aggregating these rows globally by skipping global aggregation.
by skipping local aggregation
  When estimated to be cost-effective, Vantage avoids the cost of first writing to spool and then sorting the spooled rows by skipping local aggregation.
by the hash code of ([database_name.]table_name.column_name).
  The spool sort is done on the specified column in row hash order.
by using cache during local aggregation
  When estimated to be cost-effective, Vantage avoids sorting large numbers of input rows by caching output rows during local aggregation and coalescing any duplicate rows in the cache.
by way of an all-rows scan
  All rows in the table are scanned because a single-partition scan is not possible for the specified predicate conditions. See Single Partition Scans and BEGIN/END Bound Functions.
by way of index # n and the bit map in Spool n
  The data row associated with the rowID is accessed only if the associated bit is turned on in the bit map.
by way of the primary AMP index …
  Conditions on the primary AMP index columns allow the system to access only a single AMP for the indicated table.
by way of the primary index …
  Conditions on the primary index columns allow the system to access only a single AMP for the indicated table. Primary indexes allow direct access to the rows with that primary index value in the table or in noneliminated combined row partitions.
by way of the sort key in spool field1 …
  Field1 is created to allow a tag sort.
by way of the unique primary index …
  Conditions on the unique primary index columns allow the system to access only a single AMP and, at most, a single row for the indicated table.
(compressed columns allowed)
  The target spool can have compressed values.
computed globally …
  The computation involves all the intermediate spool data based on compressed columns in the source data.
condition …
  An intermediate condition that join-qualifies table rows (as compared to the overall join condition).
duplicated on all AMPs in mapname
  A spool containing intermediate data that is used to produce a result is copied to all AMPs in the indicated map that contain data with which the intermediate data is compared.
eliminating duplicate rows …
  Duplicate rows can exist in spools, either as a result of selection of nonunique columns from any table or of selection from a MULTISET table. This is a DISTINCT operation.
<END ROW TRIGGER LOOP for step n.>
  Delimits the running of the last AMP step in the row trigger loop.

Control moves to the next step outside the trigger.

END TRANSACTION step …
  Processing is complete and any locks on the data may be released. Changes made by the transaction are committed.
enhanced by dynamic partition elimination …
  A join condition where dynamic row partition elimination has been used.
estimated size …
  This value is the estimated size of the spool file needed to accommodate spooled data. This is only an estimate. Collecting appropriate statistics can improve the accuracy of the estimate. See COLLECT STATISTICS (Optimizer Form).
estimated time …
  This approximate time is based on average times for the suboperations that comprise the overall operation, and the likely number of rows involved in the operation. The accuracy of the time estimate is also affected by the accuracy of the estimated size. This does not take into account other workloads running on the system.
run the following steps in parallel ...
  Identifies a set of AMP steps that can be processed concurrently. The explanatory text immediately following the list describes the execution of each step.
from n sources...

from n left sources...

from n right sources...

  RETRIEVE, JOIN, or aggregation from multiple sources that is performed in a single step.
(group_amps)
  A spool is created on a group of AMPs. Each AMP on which the step runs enters into a dynamic AMP group only if rows are generated for that AMP.
grouping by field n ([database_name.]table_name.column_expression).
  The specified grouping column expression is based on column n in the specified table.
Hash table is built from Spool n and is duplicated to all AMPs in mapname
  The step is an AllRowsOneAMP In-Memory hash join step.
in mapname
  The map defining a collection of AMPs.
in mapname covering mapname, mapname[, mapname…]
  Map that includes, at least, all the AMPs in the maps listed after "covering".
INSERT into a single column partition of …
  At least one row is being inserted into a table or join index with only one user-specified column partition.
INSERT into m3 column partitions of …
  At least one row is being inserted into a table or join index with m3 column partitions.

The value of m3 ≥ 2.

in view view_name
  The specified [database_name].table_name is accessed by means of the view view_name.
join condition …
  The overall constraint that governs the join.

In the following request, employee.empno = department.mgrno is the overall constraint governing the join:

SELECT deptname, name
FROM employee, department
WHERE employee.empno = department.mgrno;

Join conditions are also specified in the ON clause of an outer join or an ANSI SQL standard-formatted inner join.

JOIN step by way of an all-rows scan …
  Each AMP on which spooled rows, primary table rows, or both reside is searched row by row, and rows that satisfy the join condition are joined.
joined using a single partition exclusion hash join, with a join condition of (join_condition)
  Vantage joins two spools using a single-partition exclusion hash Join on join_condition.
(Last Use)
  Identifies the last reference to a spool that contains intermediate data. The spool is released following this AMP step.
(load isolated)…
locally on the AMPs …
  That portion of spooled intermediate or result data for which an AMP is responsible is stored on the AMP, and not duplicated on or redistributed to other AMPs that are processing the same request.
lock …
  The Lock Manager places an ACCESS, READ, WRITE, or EXCLUSIVE lock on the database object that is to be accessed by a request.
MERGE into a single column partition of …
  Rows are being merge inserted into a table or join index with only one user-specified column partition.
MERGE into m3 column partitions of …
  Rows are being inserted into a table or join index with m3 user-specified column partitions, using one column-partition context.

m3 ≥ 2.

Vantage can use either of two methods to insert the rows.
  • Take 1 pass over the source rows to insert the rows.

    The column values from each source row are buffered with a separate 128KB output buffer for each target column partition. Vantage writes a buffer to its corresponding column partition when it is full or when all source rows have been read.

    The source rows either do not need to be sorted for the MERGE step or have already been sorted by a previous AMP step. The phrase "with buffered output" is reported in the MERGE step for this method.

  • Fill a buffer with source rows and then sort it.

    Vantage then scans the rows in the buffer once for each column partition to insert the column partition values for that column partition. This is repeated until all the source rows have been read.

    This method avoids an AMP step to spool and sort the rows. This is efficient if enough data is inserted for each set of buffered source rows into each combined partition when there is row partitioning for a target column-partitioned table and the source is a NoPI table.

    If the source is not a NoPI table, this method can cause an inefficient rereading and rewriting of data blocks when a small number of column partition values are going into a combined partition at a time.

    The phrase "with buffered input and sorted input" is reported in the MERGE step for this method.

The Optimizer recommends the least costly method to use, but an AMP may change to the other method as the rows are being inserted if the other method may perform better.
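The "with buffered output" method can be sketched in Python. The buffer size, partition map, and return value are illustrative assumptions (the real buffers are 128KB of bytes, not row counts):

```python
from collections import defaultdict

def merge_buffered_output(source_rows, partition_of, buf_rows=2):
    """Sketch: one output buffer per target column partition, flushed when
    full or when all source rows have been read.

    partition_of maps a column name to its column-partition id; buf_rows
    stands in for the 128KB buffer limit. Returns the writes issued, in order.
    """
    buffers = defaultdict(list)
    writes = []                       # (partition_id, values) in write order
    for row in source_rows:
        for col, value in row.items():
            pid = partition_of[col]
            buffers[pid].append(value)
            if len(buffers[pid]) == buf_rows:        # buffer full: flush it
                writes.append((pid, buffers.pop(pid)))
    for pid in sorted(buffers):                      # flush remainders at end
        writes.append((pid, buffers[pid]))
    return writes

rows = [{"a": 1, "b": 2}, {"a": 3, "b": 4}, {"a": 5, "b": 6}]
ops = merge_buffered_output(rows, partition_of={"a": 0, "b": 1})
```

A single pass over the source fills all partition buffers at once, which is why this method needs no sort but pays with one open buffer per target column partition.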

MERGE into m3 column partitions (c3 contexts) of …
  Rows are being inserted into a table or join index with m3 user-specified column partitions, using c3 column-partition contexts.

m3 ≥ 2.

2 < c3 < m3.

Vantage can use one of two methods to insert the rows.
  • Make multiple passes over the source rows to insert them.

    The column values from each source row that is read are buffered with a separate 128KB output buffer for each target column partition being processed by a pass. A buffer is written to its corresponding column partition when full or when all source rows have been read.

    The source rows need not be sorted for the MERGE step or have already been sorted by a previous AMP step. The source must be spooled if the source is a table or join index with an ACCESS lock. This method requires multiple passes over the source rows, but is often less costly than the next method.

    Vantage reports the phrase "with buffered output" in the MERGE step for this method.

  • Fill a buffer with source rows and then sort it.

    Vantage scans the rows in the buffer once for each column partition to insert the column partition values for that column partition and repeats this stage of the process until all the source rows have been read. This avoids an AMP step to spool and sort the rows and avoids making multiple passes over the source rows. This method can be efficient if enough data is inserted for each set of buffered source rows into each combined partition when there is row partitioning for a target column-partitioned table and the source is a NoPI table.

    If the source is not a NoPI table, this method can cause an inefficient rereading and rewriting of data blocks. This inefficiency can occur when a small number of column partition values at a time are going into a combined partition.

    The phrase "with buffered input and sorted input" occurs in the MERGE step for this method.

The Optimizer recommends the method to use to the AMPs, but an AMP may change to the other method as the rows are inserted if the AMP determines that the other method may perform better.

merge with matched updates and unmatched inserts …
  An AMP step that can do any of the three following operations in a single step:
  • Both update and insert operations
  • Insert operations only
  • Update operations only

The step assumes that the source table is distributed on the join column of the source table, which is specified in the ON clause as an equality constraint with the primary index of the target table and sorted on the RowKey.

The step performs a RowKey-based Merge Join internally, identifying source rows that qualify for updating target rows and source rows that qualify for inserts, after which the step performs those updates and inserts.

This step is similar to the APPLY phase of MultiLoad in guaranteeing that the target table data block is read and written only once during the MERGE operation.
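The matched-update/unmatched-insert logic can be sketched in Python. This model uses a dictionary lookup in place of the internal RowKey-based Merge Join, so it is a behavioral sketch only; the key column and row shapes are illustrative:

```python
def merge_apply(target, source, key):
    """Sketch of 'merge with matched updates and unmatched inserts':
    matched source rows update the target row in place, unmatched
    source rows are inserted. Returns (updates, inserts)."""
    index = {row[key]: row for row in target}   # stand-in for the merge join
    updates, inserts = 0, 0
    for s in source:
        if s[key] in index:
            index[s[key]].update(s)             # matched: update
            updates += 1
        else:
            target.append(dict(s))              # unmatched: insert
            inserts += 1
    return updates, inserts

tgt = [{"id": 1, "v": "old"}]
src = [{"id": 1, "v": "new"}, {"id": 2, "v": "x"}]
counts = merge_apply(tgt, src, "id")
```

Doing both operations in one pass is what lets the real step read and write each target data block only once.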

from foreign_table metadata

which is binpacked by size

by using external metadata in Spool n to access foreign_table

  At least one of the tables involved is a foreign table.

Foreign tables allow Vantage to access data from external object storage, such as AWS S3, without requiring you to first move the data from where it natively resides into the database. Foreign tables are identified by a hostname, path, and other metadata that point to the external object storage. Vantage can read and process semi-structured or unstructured external data in foreign tables using standard SQL. For example, you can use Teradata analytic functions to examine the data, join it to the relational data in the database, and issue queries against it as you can for other data in Vantage.

(no lock required)
  The lock required for the operation was acquired in a previous AMP step.

This phrase is not always reported for steps where a lock was acquired earlier.

no residual conditions …
  Rows are selected in their entirety; there are no specific search conditions. All applicable conditions have been applied to the rows.
normalizing rows …
  Vantage performs a normalize operation as part of sort.
of n3 combined partitions
  All the rows and columns in each of n3 combined partitions of a table or join index, which has both row and column partitioning, can be completely deleted as a fast path delete and storage for the deleted rows is recovered.

n3 > 1.

of n3 combined partitions from …
  All the rows and columns in each of n3 combined partitions of a table or join index that has both row and column partitioning are to be completely deleted as a fast path delete, and the storage for the deleted rows is recovered.

n3 is greater than one.

of n partitions from …
  All rows in each of n combined partitions can be completely deleted for a primary-indexed row-partitioned table or join index.

The value of n is greater than one.

This may allow faster deletion of entire partitions.

This phrase is not applicable to column-partitioned tables or join indexes because other phrases are used in those cases.

of a single partition from …
  The Optimizer determined that all rows in a single combined partition for a row-partitioned table or join index with a primary index can be completely deleted.

This may allow faster deletion of the entire combined partition.

This phrase is not applicable to column-partitioned tables or join indexes because there must be at least two column partitions for a column-partitioned table or join index: at least one specified column partition and the delete column partition.

on a reserved RowHash [in all partitions] [in n partitions] [in a single partition]…
  The system has placed a proxy lock on the target table to prevent global deadlock for a later all-AMP lock on the table. The optional phrases "in all partitions," "in n partitions," and "in a single partition" appear only if the table is row-partitioned.

The phrase "in all partitions" indicates a proxy for a table-level lock.

The phrase "in n partitions" indicates a proxy for a partition-range lock.

The phrase "in a single partition" indicates a proxy for a single-partition lock.

For more information, see Proxy Locks.

partial SUM
  A partial GROUP BY optimization has been done on the SUM step.
(partial-covering)
  The NUSI in question partially covers the query using a constraint scan.

For example, the system can probe a NUSI subtable to get the rowID values for base table rows that may qualify for the query, but the system must then also access the underlying base table for the NUSI to get the rows that definitely qualify.

post join condition of …
  The optimizer detected a delayed condition and processed that condition as a post-join condition. This phrase appears immediately after the regular join condition.
The following conditions trigger the use of post-join processing:
  • Predicates that contain a CASE statement, or NULL sensitive predicates that refer to the inner table of an outer join.
  • Predicates that contain a complex OR term used with an outer join.
  • Predicates that are used with the NOT IN or NOT EXISTS connecting term.
redistributed by hash code to all AMPs in mapname
  Given the values of an indexed or unindexed column, rows are sent to the AMPs in the indicated map that are responsible for storing the rows that use these values as a primary index.
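Hash redistribution can be sketched in Python, modeling each AMP as a list of rows. The AMP count and use of Python's `hash()` in place of Vantage's row-hash and hash-map lookup are illustrative assumptions:

```python
def redistribute_by_hash(amps, column, n_amps):
    """Sketch of 'redistributed by hash code': every row is sent to the AMP
    that would own it if `column` were the primary index."""
    target = [[] for _ in range(n_amps)]
    for amp_rows in amps:
        for row in amp_rows:
            # hash() % n_amps stands in for the row-hash to AMP mapping.
            target[hash(row[column]) % n_amps].append(row)
    return target

# Integer keys 0 and 4 land on AMP 0, key 1 on AMP 1 (with 2 AMPs).
amps_after = redistribute_by_hash([[{"k": 0}, {"k": 1}], [{"k": 4}]], "k", 2)
```

After redistribution, rows with equal join-column values are collocated on the same AMP, which is the precondition for the local joins described elsewhere in this table.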
right outer joined using a single partition hash join, with a join condition of (join_condition)
  Vantage joins a spool from the left relation to a right relation using a two-step single-partition right outer hash join on join_condition.
single-AMP JOIN step by way of the unique primary index …
  A row is selected on a single AMP using the unique primary index for the table. Using a value in the row that hashes to a unique index value in a second table, the first row is joined with a second row located on another AMP.
single-AMP RETRIEVE by way of unique index # n
  A single row of a table is selected using a unique secondary index value that hashes to the AMP on which the table row is stored.
single-AMP RETRIEVE step by way of the unique primary index …
  At most, a single row of a table is selected using a unique primary index value that hashes to the single AMP on which the data row is stored.

Although not explicitly stated in the LOCK portion of the Explain text, a rowhash lock is required because the table is accessed using the unique primary index.

SORT/GROUP
  Partial GROUP BY was used to reduce cardinality by simultaneously sorting and grouping.
SORT to normalize …
  Vantage performs the normalize operation as part of sort.
SORT to order Spool n by row hash …
  Rows in the spool are sorted by hash code to put the spool rows in the same order as rows in the primary table, or in another spool on the same AMP, with which those rows are to be matched.
SORT to order Spool n by row hash and the sort key in spool field1 eliminating duplicate rows …
  Rows in the spool are sorted by hash code using a uniqueness sort to eliminate duplicate rows. Uniqueness is based on the data in field1.
The contents of field1 depend on the query and may comprise any of the following:
  • A concatenation of all the fields in the spool row (used for queries with SELECT DISTINCT or that involve a UNION, INTERSECT, EXCEPT, or MINUS operation).
  • A concatenation of the rowIDs that identify the data rows from which the spool row was formed (used for complex queries involving subqueries).

  • Another value that uniquely defines the spool row (used for complex queries involving aggregates and subqueries).

SORT to order Spool 1 by the sort key in spool field1 …
  Rows in the spool are sorted by the value in field1. The contents of field1 are determined by the column set defined in the ORDER BY or WITH BY clause of the request being processed.

For example, SORT TO partition BY ROWKEY or SORT to order Spool 1 by the sort key in spool field1 (THU.JI1.b1).

SORT to partition by rowkey.
  A join spool is joined using a rowkey-based join and sorted to the appropriate partitions.
SORT to string_1 by string_2 …
  Rows are sorted to string_1 by their string_2 sort key.

For example, SORT TO partition BY ROWKEY or SORT to order Spool 1 by the sort key in spool field1 (THU.JI1.b1).

SORT to partition Spool n by rowkey …
  The Optimizer determined that a spool is to be partitioned based on the same partitioning expression as a table to which the spool is to be joined. That is, the spool is to be sorted by rowkey (partition and hash). Partitioning the spool in this way allows for a faster join with the partitioned table. n is the spool number.
spool n
  Identifies a spool, which is used to contain data during an operation.
  • Spool 1 is typically used to hold a result before the result is returned to the user.
  • Spools 2, 3, and so on, contain intermediate data that produces a result.
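
This numbering convention can be observed by explaining any multi-step request; the following sketch uses a hypothetical table:

```sql
-- Hypothetical example: the EXPLAIN output for a request like this
-- typically builds intermediate results in Spool 2, 3, and so on, and
-- returns the final result from Spool 1. (personnel.employee is
-- illustrative only.)
EXPLAIN
SELECT deptno, COUNT(*)
FROM personnel.employee
GROUP BY deptno
ORDER BY deptno;
```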
Spool n statistics go into Spool n.
  In a dynamic EXPLAIN, identifies the spool where the statistics that were collected during the generation of the spool are located. If a step generates multiple spools, all spools are included.
spool_n.Field …
  Identifies the field of the spooled rows that is being used in a join constraint or comparison operation.

For example:

personnel.employee.empno = spool_2.mgrno
Statement 1...

This term refers to the initiating request.

SUM step to normalize from …
  Vantage performs a normalize operation as part of the aggregation performed by a SUM step.
table-level summary statistics
  Vantage is computing the summary statistics, not detailed statistics, for a table.
The actual size of Spool n is n row[s] (n bytes).
  Reports the actual size of the spool, if available.

After the steps in which the location of the Spool n statistics is reported and the statistics are consolidated, the actual size of the spool is given if the statistics are retrieved from additional runtime information.

The estimated time for this step is nn seconds.
  The estimated time for the reported Delete, Insert, Join, Merge, Merge Delete, Merge Update, Retrieval, Sum, or Update step in seconds.

The time shown is not clock time, but an arbitrary unit of measure that you can use to compare different operations.

the estimated time is nn seconds.
  The estimated time to completion for the entire request in seconds.

The time shown is not clock time, but an arbitrary unit of measure that you can use to compare different operations.

The following is the dynamic explain for the request.
  The query plan is a dynamic plan generated by incremental planning and execution (IPE) for this request. This phrase, if included, occurs before the first step of the dynamic plan.
The size of Spool x is estimated with low confidence to be y rows (z bytes).
  The estimated size of spool x (which is reported with Low Confidence - see About Optimizer Confidence Levels), where y is the estimated cardinality of the spool and z is the estimated number of bytes in the spool.

The spool size in bytes is calculated as the estimated number of rows multiplied by the estimated row size.

Row size is an estimated average row size if the row includes variable length or compressed columns.

… the Spatial index subtable …
  The index being created or dropped is a spatial NUSI.
This request is eligible for incremental planning and execution (IPE) but dynamic plan does not support USING request modifier. The following is the static plan for the request:
  This is a generic plan for the request that is eligible for incremental planning and execution; however, DYNAMIC EXPLAIN is not supported because the request being explained is preceded by a USING request modifier. The phrase, if included, occurs before the first step of the EXPLAIN text with a DYNAMIC EXPLAIN request modifier.
This repeated request is eligible for incremental planning and execution (IPE) but it may also be eligible to be cached so a generic plan is tried. The following is the static plan for the request:
  This is a generic plan for the request that is eligible for incremental planning and execution; however, this is a repeated request whose generic plan may be eligible to be cached, so Vantage tries a generic plan the second time. A generic plan is a static plan.

The phrase, if included, occurs before the first step of the generic plan in DBC.DBQLExplainTbl if the EXPLAIN text for the plan is being logged by DBQL. The phrase does not occur when you specify an EXPLAIN, STATIC EXPLAIN, or DYNAMIC EXPLAIN request modifier.

This request is eligible for incremental planning and execution (IPE) but doesn't meet thresholds. The following is the static plan for the request:
  Vantage reports this phrase before the first step of a static plan.

This is a specific static plan that is eligible for incremental planning and execution, but does not meet certain system-specified cost thresholds. If Vantage runs the request, this static plan is generated and run.

This request is eligible for incremental planning and execution (IPE) but dynamic explain does not support USING request modifier. The following is the static plan for the request:
  The request is eligible for incremental planning and execution based on this static plan; however, the dynamic plan cannot be displayed for requests with a USING request modifier, so a generic plan is displayed instead. This phrase, if included, occurs before the first step of the static plan.

A generic plan is a static plan. If the request runs, a generic plan, a specific static plan, or a dynamic plan may be generated and run. This phrase does not occur in DBC.DBQLExplainTbl in the EXPLAIN text for a plan being logged by DBQL.

This request is eligible for incremental planning and execution (IPE). The following is the static plan for the request:
  Vantage reports this phrase before the first step of a static plan.

This is a specific static plan that is eligible for incremental planning and execution and, if the request runs, a dynamic plan may be generated and run instead of this static plan. However, this static plan is the one used for evaluating workload filters, throttles, and classification criteria.

This request is eligible for incremental planning and execution (IPE). IPE is disabled. The following is the static plan for the request:
  This is a static plan for this request that is eligible for incremental planning and execution; however IPE is disabled for the system. If included, Vantage reports the phrase before the first step of the static plan.

If the request runs and IPE is still disabled, Vantage generates and runs a static plan; if the request runs and IPE is enabled, a dynamic plan is generated. To enable dynamic plans, contact Teradata Support.
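
As a sketch, the plan that would be used for a request can be inspected with the EXPLAIN family of request modifiers; the request below is hypothetical:

```sql
-- Hypothetical request; DYNAMIC EXPLAIN asks for the dynamic (IPE) plan,
-- while STATIC EXPLAIN always shows the static plan. If IPE is disabled
-- for the system, the output includes the "IPE is disabled" phrase.
DYNAMIC EXPLAIN
SELECT *
FROM orders
WHERE custno IN (SELECT custno FROM vip_customers);
```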

two-AMP RETRIEVE step by way of unique index #n
  A row of a table is selected based on a USI value:
  • At most, a single row in the USI subtable is selected using the index value that hashes to the AMP on which the subtable row is stored.
  • The hash value in the index rowID determines the AMP on which the data row is stored.
unsatisfiable
  The Optimizer has added an implied constraint to identify an unsatisfiable condition for a Period column when a DML request is defined with a specific set of BEGIN and END search condition constraints. See Predicate Simplification.
using covering CP merge Spool q (s2 subrow partitions and Last Use)

using covering CP merge Spool q (s1 + s2 subrow partitions and Last Use)

  These phrases indicate that a column-partitioned merge spool is created and used to merge column partitions from the table or join index, then the resulting s2 subrow column partitions are read from the column-partitioned merge spool.

s1 indicates the number of intermediate subrow column partitions needed for the merges, while s1 + s2 indicates the number of merges needed.

If s1 is not included, no intermediate subrow column partitions are needed for the merges, and s2 merges are required.

The number of merges needed depends on the number of column partitions that need to be accessed and the number of available column partition contexts as indicated in a preceding "from" phrase. This is the last usage of this column-partitioned merge spool, so it can be deleted.

q is the spool number for the column-partitioned merge spool.

using covering CP merge Spool q (s2 subrow partitions and Last Use) and rowid Spool k (Last Use)

using covering CP merge Spool q (s1 + s2 subrow partitions and Last Use) and rowid Spool k (Last Use)

  A column-partitioned merge spool is created and used to merge column partitions from the table or join index, and then the resulting s2 subrow column partitions are read from the column-partitioned merge spool, driven by the rowID spool.

s1 indicates the number of intermediate subrow column partitions needed for the merges.

s1 + s2 indicates the number of merges needed.

If s1 is not included, no intermediate subrow column partitions are needed for the merges, and s2 merges are needed.

The number of merges needed depends on the number of column partitions that need to be accessed and the number of available column partition contexts as indicated in a preceding "from" phrase.

This is the last usage of both this column-partitioned merge spool and rowID spool so these spools can be deleted.

q is the spool number for the column-partitioned merge spool.

k is the spool number for the rowID spool.

If rowID Spool k is generated by this AMP step instead of a previous step, a "built from" phrase follows this "using" phrase.

using CP merge Spool q (one subrow partition and Last Use)

using CP merge Spool q (s2 subrow partitions and Last Use)

using CP merge Spool q (s1 + s2 subrow partitions and Last Use)

  A column-partitioned merge spool is created and used to merge column partitions from the table or join index. Then, other column partitions from the table or join index and the resulting one subrow column partition or s2 subrow column partitions from the column-partitioned merge spool are read.

s1 indicates the number of intermediate subrow column partitions needed for the merges.

s1 + s2 indicates the number of merges needed.

If s1 is not included, no intermediate subrow column partitions are needed for the merges, and one or s2 merges are needed.

The number of merges needed depends on the number of column partitions that need to be accessed and the number of available column partition contexts, as indicated in a preceding "from" phrase.

This is the last usage of this column-partitioned merge spool, so it can be deleted.

q is the spool number for the column-partitioned merge spool.

using CP merge Spool q (one subrow partition and Last Use) and rowid Spool k (Last Use)

using CP merge Spool q (s2 subrow partitions and Last Use) and rowid Spool k (Last Use)

using CP merge Spool q (s1 + s2 subrow partitions and Last Use) and rowid Spool k (Last Use)

  A column-partitioned merge spool is created and used to merge column partitions from the table or join index, and then other column partitions from the table or join index and the resulting one subrow column partition or s2 subrow column partitions from the column-partitioned merge spool are read, driven by the rowID spool.

s1 indicates the number of intermediate subrow column partitions needed for the merges.

s1 + s2 indicates the number of merges needed.

If s1 is not included, no intermediate subrow column partitions are needed for the merges, and one or s2 merges are needed.

This is the last usage of both this column-partitioned merge spool and the rowID spool, so both can be deleted.

q is the spool number for the column-partitioned merge spool.

k is the spool number for the rowID spool.

If rowID Spool k is generated by this AMP step instead of a previous step, a "built from" phrase follows this "using" phrase.

using rowid Spool k (Last Use)
  A rowID spool contains the rowIDs of rows in the table that qualify, and then the column-partitioned table or join index is read, driven by this rowID spool.

This is the last usage of this rowID spool, so this rowID spool can be deleted.

k is the spool number for the rowID spool.

If rowID Spool k is generated by this AMP step instead of a previous step, a "built from" phrase follows this "using" phrase.

This phrase is used in AMP steps, such as RETRIEVE, JOIN, and MERGE, that may read a column-partitioned source.

using rowid Spool k (Last Use) built from m2 column partitions
  The rowID Spool k is built containing the rowIDs of rows in the table that qualify, and then the column-partitioned table or join index is read, driven by this rowID spool.

Up to m2 column partitions of the column-partitioned table or join index may need to be accessed to evaluate predicates and build the rowID spool.

There are column partition contexts available for each of the m2 column partitions.

For this phrase to occur, the column-partitioned table or join index cannot have row partitioning.

m2 ≥ 2.

One of the column partitions accessed may be the delete column partition.

Not all of the m2 column partitions may need to be accessed if no rows qualify.

This phrase is used in AMP steps such as RETRIEVE, JOIN, and MERGE that may read a column-partitioned source.

using rowid Spool k (Last Use) built from m2 column partitions (c2 contexts)
  A rowID spool is created that contains the rowIDs of rows in the table or join index that qualify. Then the column-partitioned table or join index is read, driven by this rowID spool.

Up to m2 column partitions of the column-partitioned table or join index may need to be accessed using c2 column-partition contexts to evaluate predicates and build the rowID spool. For this phrase to occur, the column-partitioned table or join index does not have row partitioning.

k is the spool number for the rowID spool.

m2 ≥ 2.

2 ≤ c2 ≤ m2-1 and is equal to or one less than the number of available column-partition contexts.

One of the column partitions accessed may be the delete column partition. Not all of the m2 column partitions may need to be accessed if no rows qualify.

This phrase is used in AMP steps such as RETRIEVE, JOIN, and MERGE that may read a column-partitioned source.

using rowid Spool k (Last Use) built from n2 combined partitions (m2 column partitions)
  The rowID Spool k is built containing the rowIDs of rows in the table or join index that qualify, and then the column-partitioned table or join index is read, driven by this rowID spool.

The rows and columns for up to n2 combined partitions of the table or join index (which has both row and column partitioning) may need to be accessed to evaluate predicates and build the rowID spool.

k is the spool number for the rowID spool.

There are column partition contexts available for each of the m2 column partitions.

m2 ≥ 2.

One of the column partitions accessed may be the delete column partition.

Not all of the m2 column partitions may need to be accessed if no rows qualify.

This phrase is used in AMP steps such as RETRIEVE, JOIN, and MERGE that may read a column-partitioned source.

using rowid Spool k (Last Use) built from n2 combined partitions (m2 column partitions and c2 contexts)
  A rowID spool is created that contains the rowIDs of rows in the table that qualify. Then the column-partitioned table is read, driven by this rowID spool. This is the last usage of this rowID spool, which can be deleted.

k is the spool number for the rowID spool.

The rows and columns for up to n2 combined partitions of the table or join index, which has both row and column partitioning, may need to be accessed to evaluate predicates and build the rowID spool.

For the column-partitioning level, c2 column-partition contexts are used to access up to m2 column partitions.

n2 ≥ 1 and m2 ≥ 2.

2 ≤ c2 ≤ m2-1 and is equal to or one less than the number of available column-partition contexts.

One of the column partitions that may be accessed is the delete column partition. Not all of the m2 column partitions may need to be accessed if no rows qualify.

This phrase is used in AMP steps such as RETRIEVE, JOIN, and MERGE that may read a column-partitioned source.

we compute the table-level summary statistics from spool n and save them into spool m
  Vantage takes rolled up preaggregated statistics information from spool n, computes table-level summary statistics from that data, and saves the computed summary statistics in spool m.
we do a BMSMS... (bit map set manipulation step)
  BMSMS (bit map set manipulation step) is a method for handling weakly selective secondary indexes that have been ANDed; NUSI bit mapping.

BMSMS intersects the following rowID bit maps:

1)   The bit map built for … by way of index # n...
2)   The bit map built for … by way of index # n...
…

The resulting bit map is placed in Spool n...

BMSMS...

Indicates that two or more bit maps are intersected by ANDing to form one large bit map.

index # n...

Identifies, in the order in which they are ANDed, each nonunique secondary index used in the intersection. 

resulting bit map is placed in Spool n...

Identifies the temporary file in which the large bit map produced by the BMSMS is stored to make it available for use in producing the final result.

See also EXPLAIN Request Modifier: Examples.
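
A BMSMS typically appears when two or more weakly selective NUSI conditions are ANDed, as in this hypothetical request (assuming separate NUSIs exist on job_code and dept_code):

```sql
-- Hypothetical table with NUSIs on job_code and dept_code; ANDing the two
-- weakly selective NUSI conditions can cause the Optimizer to intersect
-- the index bit maps with a BMSMS instead of scanning the base table.
EXPLAIN
SELECT *
FROM personnel.employee
WHERE job_code = 4012
  AND dept_code = 500;
```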

we do a GEOSPATIAL COLLECT STATISTICS summary step …
  Vantage uses a SUM step to aggregate geospatial NUSI statistics when index statistics are collected on a geospatial column.
we do an ABORT test …
  An ABORT or ROLLBACK statement was detected.
we do [an all-AMPs | a single-AMP | a group-AMPs] DISPATCHER RETRIEVE step from Spool 2 (Last use) by way of an all-rows scan and send the rows back to the Dispatcher.
  Vantage uses either an all-AMPs, a single AMP, or a group-AMPs step to retrieve rows from the spool. The consolidated rows from multiple AMPs are then sent back to the dispatcher. This step is used to process row-based triggers, noncorrelated scalar subqueries, and so on.
we do [an all-AMPs | a single-AMP | a group-AMPs] FEEDBACK RETRIEVE step from Spool n (Last Use) of Spool n statistics.
  Vantage uses either an all-AMPs, a single AMP, or a group-AMPs step to retrieve statistics. When the statistics are computed during the spool building time, a feedback retrieve step consolidates the statistics feedback from the AMP local statistics to global statistics and sends the consolidated statistics back to the optimizer.
we do [an all-AMPs | a single-AMP | a group-AMPs] FEEDBACK RETRIEVE step from n% of Spool n to collect statistics.
  Vantage was unable to generate statistics while the spool was being generated, so the statistics were derived by sampling with the feedback retrieve step.
we do [an all-AMPs | a single-AMP | a group-AMPs] FEEDBACK RETRIEVE step from Spool n (Last use).
  The Optimizer generated results feedback steps.
we do an all-AMPs SUM step in mapname to aggregate from database_name.table_name by way of an all-rows scan with no residual conditions, grouping by field1 (database_name.table_name.column_name_1).
  Vantage uses an all-AMPs SUM step in the indicated map to aggregate statistics information from a single column of table or join index table_name in database or user database_name and groups the aggregated statistics by column_name_1.
we do an all-AMPs SUM step in mapname to aggregate from database_name.table_name by way of an all-rows scan with no residual conditions, grouping by field1 (database_name.table_name.column_name_1, database_name.table_name.column_name_2)
  Vantage uses an all-AMPs SUM step in the indicated map to aggregate statistics from table or join index table_name in database or user database_name and groups the aggregated statistics by (column_name_1, column_name_2).

For this case, column_name_1 and column_name_2 are highly nonunique. Therefore, the optimizer made a cost-based decision to pre-aggregate on those two columns and to roll up the result to individual columns. This optimization guarantees that the base table is read only once and the aggregations on the individual columns run faster because they are working on pre-aggregated data.

This optimization is used only for recollecting statistics, and is possible only when the applicable statistics are specified in a single COLLECT STATISTICS request.

we do an all-AMPs SUM step in mapname to aggregate from database_name.table_name by way of a traversal of index # n without accessing the base table with no residual conditions, grouping by field1 (database_name.table_name.column_name_1, database_name.table_name.column_name_2).
  Vantage uses an all-AMPs SUM step in the indicated map to aggregate statistics from multiple columns of table or join index table_name in database or user database_name and groups the aggregated statistics by (column_name_1, column_name_2).

For this case, single-column statistics are collected for column_name_1 and for column_name_2, and multicolumn statistics are collected for (column_name_1, column_name_2).

The aggregation for the multicolumn case is done first. The result of this aggregation is used to roll up the statistics for individual statistics on column_name_1 and column_name_2. This way, the base table is read only once and the aggregations on the individual columns run faster because they are working on the pre-aggregated data.

This optimization is possible only when all three statistics are specified in a single COLLECT STATISTICS request.

we do a SMS (set manipulation step) …
  Rows are combined under control of a UNION, EXCEPT, MINUS, or INTERSECT operator.
we lock a distinct [databasename.]"pseudo table" for [locking_severity] on a RowHash for deadlock prevention
  The pseudo lock that is used to serialize primary and fallback RowHash locks on a dictionary table for DDL processing. See Pseudo Table Locks for details.
we lock [databasename.]tablename for exclusive use on a reserved RowHash to prevent global deadlock
  The proxy lock that serializes the exclusive lock for databasename.tablename in the lock phrase that follows this one.
we lock [databasename.]tablename for [lock_severity]

we lock [databasename.]tablename for [lock_severity] on a single partition

we lock [databasename.]tablename for [lock_severity] on n partitions

  The Lock Manager sets a lock of the indicated severity on databasename.tablename. The last phrase occurs only if DDL processing is occurring on the DBC.AccessRights dictionary table.
we lock [databasename.]tablename for [locking_severity] on a [locking_level] for deadlock prevention
  The Lock Manager sets a lock for the indicated locking severity at the indicated locking level on databasename.tablename.
[locking_level] can be one of the following levels:
  • Reserved RowHash
  • Reserved RowHash in a single partition
  • Reserved RowHash in n partitions
  • Reserved RowHash in all partitions

The first level is used for non-row-partitioned tables and levels 2 to 4 are used for row-partitioned tables.

we lock [databasename.]tablename for single writer on a reserved RowHash to prevent global deadlock

we lock [databasename.]tablename for single writer on a reserved RowHash in all partitions to prevent global deadlock

we lock [databasename.]tablename for single writer

we lock [databasename.]tablename for single writer on a reserved RowHash in all partitions to serialize concurrent updates on the partition-locked table to prevent global deadlock

we lock [databasename.]tablename for single writer to serialize concurrent updates on the partition-locked table

  The first phrase indicates a proxy lock that is used for a non-row-partitioned table.

The second phrase indicates a proxy lock that is used for a row-partitioned table.

Both the first and second phrases indicate serialization locks for the proxy lock in the third phrase. A table-level single writer lock allows concurrent reads on the data that is not rowhash write locked.

The fourth phrase indicates a proxy lock used to serialize the all-AMP single writer lock at phrase 5. This type of single writer lock is used in certain cases of partition-locked bulk DML, where NUSI, USI, or RI maintenance on the target table is required.

The fifth phrase indicates a single-writer lock that allows concurrent reads on the data in a table that is not write locked during bulk DML processes.

we save the UPDATED STATISTICS for ('a,b') from Spool m into Spool n, which is built locally on a single AMP in mapname derived from the hash of the table id. Statistics are collected since the age of the statistics (y days) exceeds the time threshold of x days and the estimated data change of z% exceeds the system-determined change threshold of zx%.
  The user specified a time threshold that is met and the specified system change threshold is also met, so Vantage recollects the requested statistics.
we save the UPDATED STATISTICS for ('a,b') from Spool m into Spool n, which is built locally on a single AMP in mapname derived from the hash of the table id. Statistics are collected since the estimated update of z% exceeds the system-determined update threshold of x%.
  The system-determined change threshold is met and the columns of interest have been updated after the last recollection of statistics, so Vantage recollects the requested statistics.
we save the UPDATED STATISTICS for ('a,b') from Spool m into Spool n, which is built locally on a single AMP in mapname derived from the hash of the table id. Statistics collected since the estimated data change could not be determined.
  The Optimizer cannot reliably estimate the cardinality changes and update activity on the related columns, so Vantage recollects the requested statistics.
This happens when the Optimizer detects one of the following conditions:
  • UDI counts are inconsistent with dynamic AMP sampling or reliable trends, and are therefore not reliable.
  • The target table is small, where aggregation is estimated to take less than 0.5 seconds.
  • There is no reliable trend on the change of major statistics.
  • Sampled statistics in the latest collection did not follow the statistics estimated by trending.
we save the UPDATED GEOSPATIAL STATISTICS for column from Spool m (Last Use) into Spool n
  Vantage copies the updated geospatial NUSI statistics collected on column locally from spool m into spool n.
We send an END PLAN FRAGMENT step for plan fragment n.
  Indicates to the dispatcher the end of a plan fragment in a dynamic plan. The step occurs at the end of each plan fragment for a dynamic plan except for the last. This step is not sent to the AMPs.

n indicates the number of the plan fragment. There is no step in the plan to indicate the beginning of a plan fragment.

For a dynamic plan, the first step is the beginning of the first plan fragment. The first step after an END PLAN FRAGMENT step is the beginning of the next plan fragment. The last step of a dynamic plan is the end of the last plan fragment, and there is no END PLAN FRAGMENT step for the last plan fragment. The number of plan fragments in the plan, and the number of the last plan fragment, are both equal to the number in the last END PLAN FRAGMENT step plus 1.

We SKIP collecting STATISTICS for ('a,b'), because the age of the statistics (n days) does not exceed the user-specified time threshold of m days and the estimated data change of o% does not exceed the user-specified change threshold of p%.
  The user specified both a time threshold and a change threshold. Neither threshold is met, so Vantage does not recollect the specified statistics.
We SKIP collecting STATISTICS for ('a,b'), because the estimated data change of 5% does not exceed the user-specified change threshold of 20%.
  The user specified a data change threshold that the statistics do not meet, so Vantage does not recollect the specified statistics.
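
The threshold phrases above correspond to options of COLLECT STATISTICS; this hypothetical request sets both a time threshold and a change threshold for the (a, b) statistics:

```sql
-- Hypothetical example: recollection is skipped unless the statistics are
-- older than 10 days or the estimated data change exceeds 20 percent.
COLLECT STATISTICS
  USING THRESHOLD 10 DAYS
    AND THRESHOLD 20 PERCENT
  COLUMN (a, b)
ON my_table;
```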
which is duplicated on all AMPs in mapname
  Relocating data in preparation for a join.
which is redistributed by hash code to all AMPs in mapname
  Relocating data by hash code in preparation for a join.
which is redistributed by the hash code of ([database_name.]table_name.column_expression)

which is redistributed by the hash code of ([database_name.]table_name.column_expression) to few AMPs in mapname.

which is redistributed by the hash code of ([database_name.]table_name.column_expression) to few or all AMPs in mapname.

which is redistributed by the hash code of ([database_name.]table_name.column_expression) to all AMPs in mapname.

  Relocating data by the hash code of the specified column expression based on a column set from table_name in preparation for a join.
which is redistributed randomly …
  When you specify HASH BY RANDOM for an INSERT ... SELECT request, the Optimizer redistributes blocks of selected rows randomly before those rows are optionally ordered locally (if you specify a LOCAL ORDER BY clause) and inserted locally into the target table. The Optimizer does this by taking a generated response spool, generating a random hash value for the spool, and changing the existing row hash value before redistributing the rows. This is useful when the result of the SELECT operation does not provide an even distribution.
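
A sketch of such an INSERT ... SELECT, with hypothetical table and column names:

```sql
-- Hypothetical example: HASH BY RANDOM redistributes the selected rows
-- randomly across the AMPs, and LOCAL ORDER BY sorts them on each AMP
-- before the local insert into the target table.
INSERT INTO sales_history
SELECT *
FROM sales_staging
HASH BY RANDOM
LOCAL ORDER BY sale_date;
```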