The following table explains terms used in EXPLAIN phrases. Self-explanatory phrases are not included.
Phrase | Explanation |
---|---|
:* | |
In the EXPLAIN output for a dynamic query plan, Vantage replaced values that are masked because a DBS Control field is set to mask intermediate results that are inserted into the request during incremental planning. | |
m1 column partitions of … | |
Up to m1 column partitions of the column-partitioned table or join index may need to be accessed. There are column partition contexts available for each of the m1 column partitions. For this phrase to occur, the column-partitioned table or join index does not have row partitioning. m1 ≥ 2. One of the column partitions accessed may be the delete column partition. Not all of the m1 column partitions may need to be accessed if no rows qualify. The phrase "using rowid Spool" may follow the table or join index name. If so, m1 is the number of column partitions accessed when using the rowID spool while m2 in the phrase is the number of column partitions accessed to build the rowID spool. This phrase is used in steps such as RETRIEVE, DELETE, JOIN, and MERGE that may read a column-partitioned source. For a delete operation, Vantage does not immediately reclaim storage for any deleted rows other than LOBs and column partitions with ROW format. A subsequent fast path deletion of all the rows of the table or join index reclaims the storage of all deleted rows in the table, including previously deleted rows. |
|
m1 column partitions (c1 contexts) of … | |
Up to m1 column partitions of the column-partitioned table or join index may need to be accessed using c1 column partition contexts. For this phrase to occur, the column-partitioned table or join index does not have row partitioning. m1 ≥ 2. 2 < c1 < m1-1. c1 is equal to, or one less than, the number of available column partition contexts. One of the column partitions accessed may be the delete column partition. Not all of the m1 column partitions may need to be accessed if no rows qualify. The phrase "using CP merge Spool" or "using covering CP merge Spool" follows the table or join index name. This phrase is used in steps such as RETRIEVE, DELETE, JOIN, and MERGE that may read a column-partitioned source. To access the m1 column partitions, columns from the column partitions are merged up to c1 column partitions at a time into a column-partitioned spool until all the projected columns have been retrieved for the qualifying rows. One or more of the merges may require accessing the delete column partition. For each such merge, the delete column partition is included in m1. Performance can degrade if c1 is much less than m1 and the retrieve is not selective. Consider decreasing the number of column partitions that need to be accessed, decreasing the data block size for the table, combining column partitions so there are fewer column partitions, or increasing PPICacheThrP if there is enough available memory to do so. For a delete operation, Vantage does not immediately reclaim storage for any deleted rows other than LOBs and column partitions with ROW format. A subsequent fast path deletion of all rows of the table or join index reclaims the storage of all deleted rows in the table, including previously deleted rows. |
|
n partitions of … | |
Only n of the combined partitions are accessed. n is greater than one. This phrase applies only to a row-partitioned, primary-indexed (RPPI) table or join index. |
|
n1 combined partitions (one column partition) of … | |
One column partition of multiple row partitions of the column-partitioned table or join index may need to be accessed. For this phrase to occur, the table or join index is accessed with rowIDs. The phrase "using rowid Spool" may follow the table or join index name. If so, one column partition is accessed when using the rowID spool while m2 in the phrase is the number of column partitions accessed to build the rowID spool. The phrase is used in steps such as RETRIEVE, DELETE, JOIN, and MERGE that may read a column-partitioned source. |
|
n1 combined partitions (m1 column partitions) of … | |
The rows and columns for up to m1 combined partitions of the table or join index may need to be accessed. For the column-partitioning level, m1 column partitions may need to be accessed. There are column partition contexts available for each of the m1 column partitions. For this phrase to occur, the column-partitioned table or join index has both row and column partitioning. One of the column partitions that may be accessed is the delete column partition. Not all of the m1 column partitions may need to be accessed if no rows qualify. The phrase "using rowid Spool" may follow the table or join index name. If so, m1 is the number of column partitions accessed when using the rowID spool while m2 in the phrase is the number of column partitions accessed to build the rowID spool. The phrase is used in steps such as RETRIEVE, DELETE, JOIN, and MERGE that may read a column-partitioned source. For a delete operation, Vantage does not immediately reclaim storage for any deleted rows other than LOBs and column partitions with ROW format. A subsequent fast path deletion of all the rows in a row partition reclaims the storage of all deleted rows in that row partition, including previously deleted rows in that partition. A subsequent fast path deletion of all the rows of the table or join index reclaims the storage of all deleted rows in the table, including previously deleted rows. |
|
n1 combined partitions (m1 column partitions and c1 contexts) of … | |
The rows and columns for up to n1 combined partitions of the table or join index may need to be accessed. For the column-partitioning level, m1 column partitions may need to be accessed. For this phrase to occur, the column-partitioned table or join index has both row and column partitioning. For the column-partitioning level, c1 column partition contexts are used to access up to m1 column partitions. n1 ≥ 2. m1 ≥ 2. 2 < c1 < m1-1 and is equal to or one less than the number of available column partition contexts. One of the column partitions that may be accessed is the delete column partition. Not all of the m1 column partitions may need to be accessed if no rows qualify. The phrase "using CP merge Spool" or "using covering CP merge Spool" follows the table or join index name. The phrase is used in steps such as RETRIEVE, DELETE, JOIN, and MERGE that may read a column-partitioned source. To access the m1 column partitions, columns from the column partitions are merged up to c1 column partitions at a time into a column-partitioned spool until all the projected columns have been retrieved for the qualifying rows. One or more of the merges may require accessing the delete column partition. For each such merge, the delete column partition is included in m1. Performance can degrade if c1 is much less than m1 and the retrieve is not selective. Consider decreasing the number of column partitions that need to be accessed, decreasing the data block size for the table, combining column partitions so there are fewer column partitions, or increasing PPICacheThrP if there is enough available memory to do so. For a delete operation, Vantage does not immediately reclaim storage for any deleted rows other than LOBs and column partitions with ROW format. A subsequent fast path deletion of all the rows in a row partition reclaims the storage of all deleted rows in that row partition, including previously deleted rows in that partition. 
A subsequent fast path delete of all the rows of the table or join index reclaims the storage of all deleted rows in the table, including previously deleted rows. |
|
aggregate results are computed globally … | |
Totals are aggregated across all AMPs in an indicated map. For this case, Vantage performs all four steps of the ARSA aggregation algorithm:
|
|
aggregate results are computed locally … | |
Totals are aggregated locally on each AMP in an indicated map. For this case, local aggregation is the only stage of the ARSA aggregation algorithm that Vantage performs. |
|
all partitions of … | |
Vantage accesses all combined partitions of an AMP for a primary index access to a single AMP for a primary-indexed row-partitioned table or join index. This phrase is not applicable to column-partitioned tables or join indexes because those objects do not have a primary index. |
|
a single column partition of … | |
Only one column partition of the column-partitioned table or join index may need to be accessed. For this phrase to occur, the column-partitioned table or join index does not have row partitioning. Also, this phrase indicates the delete column partition does not need to be accessed, either because the column partition has ROW format or access is by rowIDs from an index or rowID spool. The phrase "using rowid Spool" may follow the table or join index name. If so, one column partition is accessed when using the rowID spool while m2 in the "using rowid Spool" phrase is the number of column partitions accessed to build the rowID spool. This phrase is used in steps such as RETRIEVE, DELETE, JOIN, and MERGE that may read a column-partitioned source. For a delete operation, Vantage does not immediately reclaim storage for any deleted rows, other than LOBs and column partitions with ROW format. A subsequent fast path deletion of all of the rows of the table or join index reclaims the storage of all deleted rows in the table, including previously deleted rows. |
|
a single combined partition of … | |
This phrase indicates that only one column partition of only one row partition of the column-partitioned table or join index may need to be accessed. For this phrase to occur, the column-partitioned table or join index has to have both row and column partitioning. Also, this phrase indicates that the delete column partition does not need to be accessed either because the column partition has ROW format or because access is by rowids. The phrase "using rowid Spool" may follow the table or join index name. If so, one column partition is accessed when using the rowID spool while m2 in the "using rowid Spool" phrase is the number of column partitions accessed to build the rowID spool. This phrase is used in steps such as RETRIEVE, DELETE, JOIN, and MERGE that may read a column-partitioned source. |
|
a single partition of … | |
Only a single combined partition of a primary-indexed row-partitioned table or join index is accessed. This phrase is not applicable to a column-partitioned table or join index. |
|
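As a minimal sketch of when this phrase can appear (the table and column names are hypothetical), an equality condition on both the primary index columns and the partitioning column of an RPPI table lets the Optimizer access a single combined partition:

```sql
-- Hypothetical RPPI table: primary index on store_id,
-- row-partitioned by sale_date.
CREATE TABLE sales (
  store_id  INTEGER,
  sale_date DATE,
  amount    DECIMAL(10,2)
)
PRIMARY INDEX (store_id)
PARTITION BY RANGE_N (sale_date BETWEEN DATE '2024-01-01'
                      AND DATE '2024-12-31' EACH INTERVAL '1' MONTH);

-- Equality on the primary index plus equality on the partitioning
-- column can be reported as "a single partition of" in the EXPLAIN.
EXPLAIN SELECT amount
FROM sales
WHERE store_id = 42
  AND sale_date = DATE '2024-06-15';
```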
(all_amps) … | |
The spool is created on all AMPs in a map to which the step is sent. Because the spool is created on all AMPs, a dynamic AMP group is not needed. | |
all-AMPs JOIN step in mapname by way of a RowHash match scan … | |
The join step is processed on all AMPs in the indicated map. The first row is retrieved from the first table; the hash code for that row is used to locate a row from the second table. Each row-hash match is located and processed as follows:
|
|
all-AMPs RETRIEVE step in mapname by way of an all-rows scan … | |
All rows of a table are scanned row by row on all AMPs in the indicated map on which the table is stored. | |
a rowkey-based … | |
The join is hash-based by partition (that is, by the rowkey). In this case, there are equality constraints on all the partitioning columns and primary index columns. This allows for a faster join since each noneliminated row partition needs to be joined with at most only one other row partition. When the phrase is not given, the join is hash based. That is, there are equality constraints on the primary index columns from which the hash is derived. Note that with either method, the join conditions must still be validated. |
|
<BEGIN ROW TRIGGER LOOP> | |
Processing of the trigger action statements defined in the row trigger starts from the current AMP step, step n, of the EXPLAIN text. All AMP steps from the current step through the step in which the phrase END ROW TRIGGER LOOP for step n appears constitute the row trigger loop. |
|
by skipping global aggregation | |
When estimated to be cost-effective, Vantage avoids redistributing rows and aggregating them globally by skipping global aggregation. | |
by skipping local aggregation | |
When estimated to be cost-effective, Vantage avoids the cost of first writing to spool and then sorting the spooled rows by skipping local aggregation. | |
by the hash code of ([database_name.]table_name.column_name). | |
The spool sort is done on the specified column in row hash order. | |
by using cache during local aggregation | |
When estimated to be cost-effective, Vantage avoids sorting large numbers of input rows by caching output rows during local aggregation and coalescing any duplicate rows in the cache. | |
by way of an all-rows scan | |
All rows in the table are scanned because a single-partition scan is not possible for the specified predicate conditions. See Single Partition Scans and BEGIN/END Bound Functions. | |
by way of index # n and the bit map in Spool n … | |
The data row associated with the rowID is accessed only if the associated bit is turned on in the bit map. | |
by way of the primary AMP index … | |
There are conditions on the primary AMP index columns that allow the system to access only a single AMP for the indicated table. | |
by way of the primary index … | |
There are conditions on the primary index columns that allow the system to access only a single AMP for the indicated table. Primary indexes allow direct access to the rows with that primary index value in the table or in noneliminated combined row partitions. | |
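A sketch of a request that can produce this phrase, assuming a hypothetical table with a nonunique primary index:

```sql
-- Assumed: employee is defined with PRIMARY INDEX (dept_no).
-- Equality conditions on all primary index columns allow
-- single-AMP access, typically reported as
-- "by way of the primary index".
EXPLAIN SELECT *
FROM employee
WHERE dept_no = 500;
```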
by way of the sort key in spool field1 … | |
Field1 is created to allow a tag sort. | |
by way of the unique primary index … | |
There are conditions on the unique primary index columns that allow the system to access only a single AMP and, at most, a single row for the indicated table. | |
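For example (the table and index definition are hypothetical), an equality condition on a unique primary index column can produce this phrase:

```sql
-- Assumed: employee is defined with UNIQUE PRIMARY INDEX (emp_no).
-- At most one row, on a single AMP:
-- "by way of the unique primary index".
EXPLAIN SELECT last_name
FROM employee
WHERE emp_no = 10021;
```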
(compressed columns allowed) | |
The target spool can have compressed values. | |
computed globally … | |
The computation involves all the intermediate spool data based on compressed columns in the source data. | |
condition … | |
An intermediate condition that join-qualifies table rows (as compared with the overall join condition). | |
duplicated on all AMPs in mapname… | |
A spool containing intermediate data that is used to produce a result is copied to all AMPs in the indicated map that contain data with which the intermediate data is compared. | |
eliminating duplicate rows … | |
Duplicate rows can exist in spools, either as a result of selection of nonunique columns from any table or of selection from a MULTISET table. This is a DISTINCT operation. | |
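A minimal sketch of a request that typically produces this phrase (hypothetical table and column):

```sql
-- DISTINCT on a nonunique column requires a duplicate-eliminating
-- sort of the spool; the EXPLAIN output usually shows
-- "eliminating duplicate rows" on the corresponding SORT.
EXPLAIN SELECT DISTINCT dept_no
FROM employee;
```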
<END ROW TRIGGER LOOP for step n.> | |
Delimits the running of the last AMP step in the row trigger loop. Control moves to the next step outside the trigger. |
|
END TRANSACTION step … | |
Processing is complete and any locks on the data may be released. Changes made by the transaction are committed. | |
enhanced by dynamic partition elimination … | |
A join condition where dynamic row partition elimination has been used. | |
estimated size … | |
Estimated size of the spool file needed to accommodate spooled data. Collecting appropriate statistics can improve the accuracy of the estimate. See Teradata Vantage™ - SQL Data Definition Language Syntax and Examples, B035-1144. | |
estimated time … | |
Approximate time, based on average times for the suboperations that comprise the overall operation, and the likely number of rows involved in the operation. The accuracy of the time estimate is also affected by the accuracy of the estimated size. This does not take into account other workloads running on the system. | |
execute the following steps in parallel ... | |
A set of AMP steps that can be processed concurrently. The explanatory text immediately following the list describes the execution of each step. | |
from n sources... from n left sources... from n right sources... | |
RETRIEVE, JOIN, or aggregation from multiple sources that is performed in a single step. | |
(group_amps) | |
A spool is created on a group of AMPs. Each AMP on which the step runs enters into a dynamic AMP group if rows are generated for that AMP. If rows are not generated for that AMP, that AMP does not enter into a dynamic AMP group. | |
grouping by field n ([database_name.]table_name.column_expression). | |
Specified grouping column expression is based on column n in the specified table. | |
Hash table is built from Spool n and is duplicated to all AMPs in mapname | |
The step is an AllRowsOneAMP In-Memory hash join step. | |
in mapname | |
The map defining a collection of AMPs. | |
in mapname covering mapname, mapname[, mapname…] | |
Map that includes, at least, all the AMPs in the maps listed after "covering". | |
INSERT into a single column partition of … | |
At least one row is being inserted into a table or join index with only one user-specified column partition. | |
INSERT into m3 column partitions of … | |
At least one row is being inserted into a table or join index with m3 column partitions. In this case, the value of m3 ≥ 2. |
|
in view view_name | |
The specified [database_name].table_name is accessed by means of the view view_name. | |
join condition … | |
The overall constraint that governs the join. In the following request, employee.empno = department.mgrno is the overall constraint governing the join: SELECT deptname, name FROM employee, department WHERE employee.empno = department.mgrno; Join conditions are also specified in the ON clause of an outer join or an ANSI SQL standard-formatted inner join. |
|
JOIN step by way of an all-rows scan … | |
Each AMP on which spooled rows, primary table rows, or both reside is searched row by row. Rows that satisfy the join condition are joined. | |
joined using a single partition exclusion hash join, with a join condition of (join_condition) | |
Vantage joins two spools using a single-partition exclusion hash Join on join_condition. | |
(Last Use) | |
Last reference to a spool that contains intermediate data. The spool is released following this AMP step. | |
(load committed)… | |
Read mode. Indicates that Vantage reads the rows that are load-committed. The read mode (load uncommitted) indicates that Vantage ignores the logically deleted rows during processing; all other rows are read. For more information about load isolation, see Teradata Vantage™ - SQL Data Definition Language Syntax and Examples, B035-1144 and Teradata Vantage™ - SQL Data Manipulation Language, B035-1146. |
|
(load isolated)… | |
Modification operation performed in the step is load isolated. For this phrase to occur, the table being modified must be a load-isolated table or a join index defined on a load-isolated table. Delete operations that are load isolated do not physically delete the rows, but mark the rows as logically deleted and row-versioned. Update operations that are load isolated version the qualified rows such that older values are retained. Insert operations that are load isolated mark the rows with a load identity value that provides the commit property of the row to concurrent readers. During such modifications, a concurrent reader session on the table or join index can continue to obtain load-committed rows while the changes in the on-going load are not visible until committed. For more information about load isolation, see Teradata Vantage™ - SQL Data Definition Language Syntax and Examples, B035-1144 and Teradata Vantage™ - SQL Data Manipulation Language, B035-1146. |
|
locally on the AMPs … | |
That portion of spooled intermediate or result data for which an AMP is responsible is stored on the AMP, not duplicated on or redistributed to other AMPs that are processing the same request. | |
lock … | |
The Lock Manager places an ACCESS, READ, WRITE, or EXCLUSIVE lock on the database object that is to be accessed by a request. | |
MERGE into a single column partition of … | |
Rows are being merge inserted into a table or join index with only one user-specified column partition. | |
MERGE into m3 column partitions of … | |
Rows are being inserted into a table or join index with m3 user-specified column partitions, using one column-partition context. m3 ≥ 2. Vantage can use either of two methods to insert the rows.
The Optimizer recommends the least costly method to use, but an AMP may change to the other method as the rows are being inserted if the Optimizer determines the other method may perform better. |
|
MERGE into m3 column partitions (c3 contexts) of … | |
Rows are being inserted into a table or join index with m3 user-specified column partitions, using c3 column-partition contexts. m3 ≥ 2. 2 < c3 < m3. Vantage can use one of two methods to insert the rows.
The Optimizer recommends the method to use to the AMPs, but an AMP may change to the other method as the rows are inserted if the other method may perform better. |
|
merge with matched updates and unmatched inserts … | |
An AMP step that can do any of the three following operations in a single step:
The step assumes that the source table is always distributed on the join column of the source table, which is specified in the ON clause as an equality constraint with the primary index of the target table and sorted on the RowKey. The step performs a RowKey-based Merge Join internally, identifying source rows that qualify for updating target rows and source rows that qualify for inserts, and performing those updates and inserts. This step is similar to the APPLY phase of MultiLoad, guaranteeing that the target table data block is read and written only once during the MERGE operation. |
|
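A hedged sketch of a MERGE request that can be applied in a single step of this kind, assuming a hypothetical target table whose primary index matches the equality condition in the ON clause:

```sql
-- Assumed: employee has PRIMARY INDEX (emp_no); staged_employee is a
-- hypothetical source table. When the ON clause equates the source
-- join column with the target's primary index, the updates and
-- inserts can be applied in one
-- "merge with matched updates and unmatched inserts" step.
EXPLAIN
MERGE INTO employee AS t
USING staged_employee AS s
  ON t.emp_no = s.emp_no
WHEN MATCHED THEN
  UPDATE SET salary = s.salary
WHEN NOT MATCHED THEN
  INSERT (emp_no, salary) VALUES (s.emp_no, s.salary);
```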
from foreign_table metadata which is binpacked by size … | |
by using external metadata in Spool n to access foreign_table … | |
At least one of the tables involved is a foreign table. Foreign tables allow Vantage to access data from external, cloud-based data storage, such as AWS S3, without requiring you to first move the data into the database from where it natively resides. Foreign tables are identified by a hostname, path, and other metadata that point to the external storage. Vantage can read and process semi-structured or unstructured external data in foreign tables using standard SQL. For example, you can use Teradata analytic functions to examine the data, join it to the relational data in the database, and issue queries against it as you can for other data in Vantage. |
|
(no lock required) | |
The lock required for the operation was acquired in a previous AMP step. This phrase is not always reported for steps where a lock was acquired earlier. |
|
(nonconcurrent load isolated)… | |
Modification operation is not load isolated. For this phrase to occur, the table being modified must be a load-isolated table or a join index defined on a load-isolated table. Such modifications are like regular modifications on a non-load-isolated table that do not version the rows. The difference is that EXCLUSIVE locks are applied when such changes occur. During such modification, a concurrent reader session is blocked (including the case when ACCESS lock is used) until the transaction closes. For more information about load isolation, see Teradata Vantage™ - SQL Data Definition Language Syntax and Examples, B035-1144 and Teradata Vantage™ - SQL Data Manipulation Language, B035-1146. |
|
no residual conditions … | |
Rows are selected in their entirety; there are no specific search conditions. All applicable conditions have been applied to the rows. | |
normalizing rows … | |
Vantage performs a normalize operation as part of sort. | |
of n3 combined partitions | |
All the rows and columns in each of n3 combined partitions of a table or join index, which has both row and column partitioning in this case, can be completely deleted as a fast path delete and storage for the deleted rows is recovered. n3 > 1. |
|
of n3 combined partitions from … | |
All the rows and columns in each of n3 combined partitions of a table or join index that has both row and column partitioning are to be completely deleted as a fast path delete, and the storage for the deleted rows is recovered. n3 is greater than one. |
|
of n partitions from … | |
All rows in each of n combined partitions can be completely deleted for a primary-indexed row-partitioned table or join index. The value of n is always greater than one. This may allow faster deletion of entire partitions. This phrase is not applicable to column-partitioned tables or join indexes because other phrases are used in those cases. |
|
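A minimal sketch of a fast path delete of whole row partitions, using a hypothetical RPPI table row-partitioned by month on sale_date:

```sql
-- Deleting all rows of entire row partitions (here, two calendar
-- months) can qualify for the fast path and be reported as
-- "of n partitions from" in the EXPLAIN.
EXPLAIN DELETE FROM sales
WHERE sale_date BETWEEN DATE '2024-01-01' AND DATE '2024-02-29';
```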
of a single partition from … | |
This phrase indicates that the Optimizer determined that all rows in a single combined partition for a row-partitioned table or join index with a primary index can be completely deleted. This may allow faster deletion of the entire combined partition. This phrase is not applicable to column-partitioned tables or join indexes because there must be at least two column partitions for a column-partitioned table or join index: at least one specified column partition and the delete column partition. |
|
on a reserved RowHash [in all partitions] [in n partitions] [in a single partition]… | |
The system has placed a proxy lock on the target table to prevent global deadlock for a later all-AMP lock on the table. The optional phrases "in all partitions," "in n partitions," and "in a single partition" appear only if the table is row-partitioned. The phrase "in all partitions" indicates a proxy for a table-level lock. The phrase "in n partitions" indicates a proxy for a partition-range lock. The phrase "in a single partition" indicates a proxy for a single-partition lock. For more information, see Proxy Locks. |
|
partial SUM | |
A partial GROUP BY optimization has been done on the SUM step. | |
(partial-covering) | |
The NUSI in question partially covers the query using a constraint scan. For example, the system can probe a NUSI subtable to get the rowID values for base table rows that may qualify for the query, but must then also access the underlying base table for the NUSI to obtain the rows that definitely qualify. |
|
post join condition of … | |
The optimizer detected a delayed condition and processed it as a post-join condition. This phrase appears immediately after the regular join condition. The following conditions trigger the use of post-join processing:
|
|
redistributed by hash code to all AMPs in mapname … | |
Given the values of an indexed or unindexed column, rows are sent to the AMPs in the indicated map that are responsible for storing the rows that use these values as a primary index. | |
right outer joined using a single partition hash join, with a join condition of (join_condition) | |
Vantage joins a spool from the left relation to a right relation using a two-step single-partition right outer hash join on join_condition. | |
single-AMP JOIN step by way of the unique primary index … | |
A row is selected on a single AMP using the unique primary index for the table. Using a value in the row that hashes to a unique index value in a second table, the first row is joined with a second row located on another AMP. | |
single-AMP RETRIEVE by way of unique index # n … | |
A single row of a table is selected using a unique secondary index value that hashes to the AMP on which the table row is stored. | |
single-AMP RETRIEVE step by way of the unique primary index … | |
At most, a single row of a table is selected using a unique primary index value that hashes to the single AMP on which the data row is stored. Although not explicitly stated in the LOCK portion of the Explain text, a rowhash lock is required because the table is accessed by the unique primary index. |
|
SORT/GROUP | |
Partial GROUP BY was used to reduce cardinality by simultaneously sorting and grouping. | |
SORT to normalize … | |
Vantage performs the normalize operation as part of sort. | |
SORT to order Spool n by row hash … | |
Rows in the spool are sorted by hash code into the same order as rows in the primary table, or in another spool on the same AMP, with which the sorted rows are to be matched | |
SORT to order Spool n by row hash and the sort key in spool field1 eliminating duplicate rows … | |
Rows in the spool are sorted by hash code using a uniqueness sort to eliminate duplicate rows. Uniqueness is based on the data in field1. The contents of field1 depend on the query and may comprise any of the following:
Another value that uniquely defines the spool row (used for complex queries involving aggregates and subqueries). |
|
SORT to order Spool 1 by the sort key in spool field1 … | |
Rows in the spool are sorted by the value in field1. The contents of field1 are determined by the column set defined in the ORDER BY or WITH BY clause of the request being processed. For example, SORT TO partition BY ROWKEY or SORT to order Spool 1 by the sort key in spool field1 (THU.JI1.b1). |
|
SORT to partition by rowkey. | |
A join spool is joined using a rowkey-based join and sorted to the appropriate partitions. | |
SORT to string_1 by string_2 … | |
Rows are sorted to string_1 by their string_2 sort key. For example, SORT TO partition BY ROWKEY or SORT to order Spool 1 by the sort key in spool field1 (THU.JI1.b1). |
|
SORT to partition Spool n by rowkey … | |
The Optimizer determined that a spool is to be partitioned based on the same partitioning expression as a table to which the spool is to be joined. That is, the spool is to be sorted by rowkey (partition and hash). Partitioning the spool in this way allows for a faster join with the partitioned table. n is the spool number. | |
spool n … | |
Identifies a spool, which is used to contain data during an operation.
|
|
Spool n statistics go into Spool n. | |
In a dynamic EXPLAIN, identifies the spool where the statistics that were collected during the generation of the spool are located. If a step generates multiple spools, all spools are included. | |
spool_n.Field … | |
Identifies the field of the spooled rows that is being used in a join constraint or comparison operation. For example: personnel.employee.empno = spool_2.mgrno |
|
Statement 1... | |
This term refers to the initiating request. |
|
SUM step to normalize from … | |
Indicates that Vantage performs a normalize operation as part of the aggregation of an aggregation step. | |
table-level summary statistics | |
Explains that Vantage is computing the summary statistics, not detailed statistics, for a table. | |
The actual size of Spool n is n row[s] (n bytes). | |
Reports the actual size of the spool, if available. The actual size is given after the steps in which the location of the Spool n statistics is reported and the statistics are consolidated, if the statistics can be retrieved from additional runtime information. |
|
The estimated time for this step is nn seconds. | |
The estimated time for the reported Delete, Insert, Join, Merge, Merge Delete, Merge Update, Retrieval, Sum, or Update step in seconds. The time shown is not clock time. Consider this time to be an arbitrary unit of measure that you can use to compare different operations. |
|
the estimated time is nn seconds. | |
The estimated time to completion for the entire request in seconds. The time shown is not clock time. Consider this time to be an arbitrary unit of measure that you can use to compare different operations. |
|
The following is the dynamic explain for the request. | |
The query plan is a dynamic plan generated by incremental planning and execution (IPE) for this request. This phrase, if included, occurs before the first step of the dynamic plan. | |
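For example, the DYNAMIC EXPLAIN request modifier asks for the dynamic plan that IPE would use; when one is produced, the output begins with this phrase (the table names below are hypothetical):

```sql
-- When the request is eligible for IPE and a dynamic plan is
-- generated, the EXPLAIN text starts with
-- "The following is the dynamic explain for the request."
DYNAMIC EXPLAIN
SELECT *
FROM orders
WHERE order_id IN (SELECT order_id FROM flagged_orders);
```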
The size of Spool x is estimated with low confidence to be y rows (z bytes). | |
The estimated size of spool n (which is always reported with Low Confidence - see About Optimizer Confidence Levels), where m is the estimated cardinality of the spool and o is the estimated number of bytes in the spool. The spool size in bytes is calculated as the estimated number of rows multiplied by the estimated row size. The row size is an estimated average if the row includes variable-length or compressed columns. |
|
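The arithmetic behind this size estimate can be sketched in a few lines of Python (a minimal illustration; the function name and the sample figures are assumptions, not Vantage internals):

```python
def estimated_spool_bytes(est_rows: int, avg_row_size: int) -> int:
    """Estimated spool size: cardinality multiplied by row size.

    avg_row_size is an estimated average when rows contain
    variable-length or compressed columns.
    """
    return est_rows * avg_row_size

# e.g. an estimated 1,000 rows at an average of 58 bytes per row
print(estimated_spool_bytes(1000, 58))  # 58000
```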
… the Spatial index subtable … | |
The index being created or dropped is a spatial NUSI. | |
This request is eligible for incremental planning and execution (IPE) but dynamic plan does not support USING request modifier. The following is the static plan for the request: | |
This is a generic plan for a request that is eligible for incremental planning and execution. However, a dynamic plan is not supported because the request being explained is preceded by a USING request modifier. The phrase, if included, occurs before the first step of the EXPLAIN text for a request with a DYNAMIC EXPLAIN request modifier. | |
This repeated request is eligible for incremental planning and execution (IPE) but it may also be eligible to be cached so a generic plan is tried. The following is the static plan for the request: | |
This is a generic plan for the request that is eligible for incremental planning and execution. However, this is a repeated request whose generic plan may be eligible to be cached, so Vantage tries a generic plan the second time. A generic plan is always a static plan. The phrase, if included, occurs before the first step of the generic plan in DBC.DBQLExplainTbl if the EXPLAIN text for the plan is being logged by DBQL. This phrase does not occur when you specify an EXPLAIN, STATIC EXPLAIN, or DYNAMIC EXPLAIN request modifier. |
|
This request is eligible for incremental planning and execution (IPE) but doesn't meet thresholds. The following is the static plan for the request: | |
Vantage reports this phrase before the first step of a static plan. This is a specific static plan that is eligible for incremental planning and execution, but does not meet certain system-specified cost thresholds. If Vantage runs the request, this static plan is generated and run. |
|
This request is eligible for incremental planning and execution (IPE) but dynamic explain does not support USING request modifier. The following is the static plan for the request: | |
The request is eligible for incremental planning and execution based on this static plan; however, the dynamic plan cannot be displayed for requests with a USING request modifier, so a generic plan is displayed instead. This phrase, if included, occurs before the first step of the static plan. A generic plan is a static plan. If the request is run, a generic plan, a specific static plan, or a dynamic plan may be generated and run. This phrase does not occur in DBC.DBQLExplainTbl in the EXPLAIN text for a plan being logged by DBQL. |
|
This request is eligible for incremental planning and execution (IPE). The following is the static plan for the request: | |
Vantage reports this phrase before the first step of a static plan. This is a specific static plan that is eligible for incremental planning and execution; if the request is run, a dynamic plan may be generated and run instead of this static plan. However, this static plan is the one used for evaluating workload filters, throttles, and classification criteria. |
|
This request is eligible for incremental planning and execution (IPE). IPE is disabled. The following is the static plan for the request: | |
This is a static plan for a request that is eligible for incremental planning and execution; however, IPE is disabled for the system. If included, Vantage reports the phrase before the first step of the static plan. If the request is run while IPE is still disabled, Vantage generates and runs a static plan; if the request is run after IPE is enabled, a dynamic plan is generated. To enable dynamic plans, contact Teradata Support. |
|
two-AMP RETRIEVE step by way of unique index #n … | |
A row of a table is selected based on a USI value:
|
|
unsatisfiable | |
The Optimizer has added an implied constraint to identify an unsatisfiable condition for a Period column when a DML request is defined with a specific set of BEGIN and END search condition constraints. See Predicate Simplification. | |
using covering CP merge Spool q (s2 subrow partitions and Last Use) using covering CP merge Spool q (s1 + s2 subrow partitions and Last Use) |
|
A column-partitioned merge spool is created and used to merge column partitions from the table or join index, then the resulting s2 subrow column partitions are read from the column-partitioned merge spool. s1 indicates the number of intermediate subrow column partitions needed for the merges, while s1 + s2 indicates the number of merges needed. If s1 is not included, no intermediate subrow column partitions are needed for the merges, and s2 merges are required. The number of merges needed depends on the number of column partitions that need to be accessed and the number of available column partition contexts, as indicated in a preceding "from" phrase. This is the last usage of this column-partitioned merge spool, so the spool can be deleted. q is the spool number for the column-partitioned merge spool. |
|
using covering CP merge Spool q (s2 subrow partitions and Last Use) and rowid Spool k (Last Use) using covering CP merge Spool q (s1 + s2 subrow partitions and Last Use) and rowid Spool k (Last Use) |
|
A column-partitioned merge spool is created and used to merge column partitions from the table or join index, and then the resulting s2 subrow column partitions are read from the column-partitioned merge spool, driven by the rowid spool. s1 indicates the number of intermediate subrow column partitions needed for the merges. s1 + s2 indicates the number of merges needed. If s1 is not included, no intermediate subrow column partitions are needed for the merges, and s2 merges are needed. The number of merges needed depends on the number of column partitions that need to be accessed and the number of available column partition contexts, as indicated in a preceding "from" phrase. This is the last usage of both this column-partitioned merge spool and the rowID spool, so these spools can be deleted. q is the spool number for the column-partitioned merge spool. k is the spool number for the rowID spool. If rowID Spool k is generated by this AMP step instead of a previous step, a "built from" phrase follows this "using" phrase. |
|
using CP merge Spool q (one subrow partition and Last Use) using CP merge Spool q (s2 subrow partitions and Last Use) using CP merge Spool q (s1 + s2 subrow partitions and Last Use) |
|
A column-partitioned merge spool is created and used to merge column partitions from the table or join index. Then, other column partitions from the table or join index and the resulting one subrow column partition or s2 subrow column partitions from the column-partitioned merge spool are read. s1 indicates the number of intermediate subrow column partitions needed for the merges. s1 + s2 indicates the number of merges needed. If s1 is not included, no intermediate subrow column partitions are needed for the merges, and one or s2 merges are needed. The number of merges needed depends on the number of column partitions that need to be accessed and the number of available column partition contexts, as indicated in a preceding "from" phrase. This is the last usage of this column-partitioned merge spool, so the spool can be deleted. q is the spool number for the column-partitioned merge spool. |
|
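The interaction between the number of column partitions to access and the number of available contexts can be illustrated by counting merges. The sketch below is an assumption about the merge pattern (greedy, up to c partitions per merge), not Vantage's actual planner logic, but it shows why a context count much smaller than the partition count increases the work:

```python
def count_merges(m: int, c: int) -> int:
    """Merge m column partitions, up to c at a time, into a single
    subrow partition; return the number of merges performed."""
    assert m >= 2 and c >= 2
    remaining, merges = m, 0
    while remaining > 1:
        batch = min(c, remaining)
        remaining = remaining - batch + 1  # batch inputs -> one output
        merges += 1
    return merges

print(count_merges(10, 9))  # 2: plenty of contexts, few merges
print(count_merges(10, 4))  # 3: fewer contexts, more merges
```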
using CP merge Spool q (one subrow partition and Last Use) and rowid Spool k (Last Use) using CP merge Spool q (s2 subrow partitions and Last Use) and rowid Spool k (Last Use) using CP merge Spool q (s1 + s2 subrow partitions and Last Use) and rowid Spool k (Last Use) |
|
A column-partitioned merge spool is created and used to merge column partitions from the table or join index, and then other column partitions from the table or join index and the resulting one subrow column partition or s2 subrow column partitions from the column-partitioned merge spool are read, driven by the rowid spool. s1 indicates the number of intermediate subrow column partitions needed for the merges. s1 + s2 indicates the number of merges needed. If s1 is not included, no intermediate subrow column partitions are needed for the merges, and one or s2 merges are needed. This is the last usage of both this column-partitioned merge spool and the rowID spool, so these spools can be deleted. q is the spool number for the column-partitioned merge spool. k is the spool number for the rowID spool. If rowID Spool k is generated by this AMP step instead of a previous step, a "built from" phrase follows this "using" phrase. |
|
using rowid Spool k (Last Use) | |
This phrase indicates that a rowID spool contains the rowIDs of rows in the table that qualify, and then the column-partitioned table or join index is read, driven by this rowID spool. This is the last usage of this rowID spool, so this spool can be deleted. k is the spool number for the rowID spool. If rowID Spool k is generated by this AMP step instead of a previous step, a "built from" phrase follows this "using" phrase. This phrase is used in AMP steps, such as RETRIEVE, JOIN, and MERGE, that may read a column-partitioned source. |
|
using rowid Spool k (Last Use) built from m2 column partitions | |
The rowID Spool k is built containing the rowIDs of rows in the table that qualify, and then the column-partitioned table or join index is read, driven by this rowID spool. Up to m2 column partitions of the column-partitioned table or join index may need to be accessed to evaluate predicates and build the rowID spool. There are column partition contexts available for each of the m2 column partitions. For this phrase to occur, the column-partitioned table or join index does not have row partitioning. m2 ≥ 2. One of the column partitions accessed may be the delete column partition. Not all of the m2 column partitions may need to be accessed if no rows qualify. This phrase is used in AMP steps such as RETRIEVE, JOIN, and MERGE that may read a column-partitioned source. |
|
using rowid Spool k (Last Use) built from m2 column partitions (c2 contexts) | |
A rowID spool is created that contains the rowIDs of rows in the table or join index that qualify. Then the column-partitioned table or join index is read, driven by this rowID spool. Up to m2 column partitions of the column-partitioned table or join index may need to be accessed using c2 column-partition contexts to evaluate predicates and build the rowID spool. For this phrase to occur, the column-partitioned table or join index does not have row partitioning. k is the spool number for the rowID spool. m2 ≥ 2. 2 < c2 < m2-1 and is equal to or one less than the number of available column-partition contexts. One of the column partitions accessed may be the delete column partition. Not all of the m2 column partitions may need to be accessed if no rows qualify. This phrase is used in AMP steps such as RETRIEVE, JOIN, and MERGE that may read a column-partitioned source. |
|
using rowid Spool k (Last Use) built from n2 combined partitions (m2 column partitions) | |
The rowID Spool k is built containing the rowIDs of rows in the table or join index that qualify, and then the column-partitioned table or join index is read, driven by this rowID spool. The rows and columns for up to n2 combined partitions of the table or join index (which has both row and column partitioning in this case) may need to be accessed to evaluate predicates and build the rowID spool. k is the spool number for the rowID spool. There are column partition contexts available for each of the m2 column partitions. m2 ≥ 2. One of the column partitions accessed may be the delete column partition. Not all of the m2 column partitions may need to be accessed if no rows qualify. This phrase is used in AMP steps such as RETRIEVE, JOIN, and MERGE that may read a column-partitioned source. |
|
using rowid Spool k (Last Use) built from n2 combined partitions (m2 column partitions and c2 contexts) | |
A rowID spool is created that contains the rowIDs of rows in the table that qualify. Then the column-partitioned table is read, driven by this rowID spool. This is the last usage of this rowID spool, so this spool can be deleted. k is the spool number for the rowID spool. The rows and columns for up to n2 combined partitions of the table or join index, which has both row and column partitioning, may need to be accessed to evaluate predicates and build the rowID spool. For the column-partitioning level, c2 column-partition contexts are used to access up to m2 column partitions. n2 ≥ 2 and m2 ≥ 2. 2 < c2 < m2-1 and is equal to or one less than the number of available column-partition contexts. One of the column partitions that may be accessed is the delete column partition. Not all of the m2 column partitions may need to be accessed if no rows qualify. This phrase is used in AMP steps such as RETRIEVE, JOIN, and MERGE that may read a column-partitioned source. |
|
we compute the table-level summary statistics from spool n and save them into spool m | |
Vantage takes rolled up pre-aggregated statistics information from spool n, computes table-level summary statistics from that data, and saves the summary statistics it computed in spool m. | |
we do a BMSMS... (bit map set manipulation step) | |
BMSMS (bit map set manipulation step) is a method for handling weakly selective secondary indexes that have been ANDed (NUSI bit mapping). BMSMS intersects the following rowID bit maps: 1) The bit map built for … by way of index # n... 2) The bit map built for … by way of index # n... … The resulting bit map is placed in Spool n... In this phrase, "BMSMS..." indicates that two or more bit maps are intersected by ANDing to form one large bit map; "index # n..." identifies, in the order in which they are ANDed, each nonunique secondary index used in the intersection; and "resulting bit map is placed in Spool n..." identifies the temporary file in which the large bit map produced by the BMSMS is stored to make the map available for use in producing the final result. See also EXPLAIN Request Modifier: Examples. |
|
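The intersection itself is ordinary bitwise ANDing of rowID bit maps. A minimal sketch, with Python integers standing in for the per-index bit maps (the bit patterns are invented for illustration):

```python
# Bit i set means "the row with rowID i qualifies" for that index.
bitmap_index_a = 0b10110110  # rows satisfying the first NUSI condition
bitmap_index_b = 0b11010011  # rows satisfying the second NUSI condition

# BMSMS: AND the maps so only rows satisfying both conditions remain.
result = bitmap_index_a & bitmap_index_b
qualifying_rows = [i for i in range(8) if result >> i & 1]
print(qualifying_rows)  # [1, 4, 7]
```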
we do a GEOSPATIAL COLLECT STATISTICS summary step … | |
Vantage uses a SUM step to aggregate geospatial NUSI statistics when index statistics are collected on a geospatial column. | |
we do an ABORT test … | |
An ABORT or ROLLBACK statement was detected. | |
we do [an all-AMPs | a single-AMP | a group-AMPs] DISPATCHER RETRIEVE step from Spool 2 (Last use) by way of an all-rows scan and send the rows back to the Dispatcher. | |
This phrase indicates that Vantage uses an all-AMPs, a single-AMP, or a group-AMPs step to retrieve rows from the spool. The consolidated rows from multiple AMPs are then sent back to the dispatcher. This step is used to process row-based triggers, noncorrelated scalar subqueries, and so on. | |
we do [an all-AMPs | a single-AMP | a group-AMPs] FEEDBACK RETRIEVE step from Spool n (Last Use) of Spool n statistics. | |
Vantage uses an all-AMPs, a single AMP, or a group-AMPs step to retrieve statistics. When the statistics are computed during the spool building time, a feedback retrieve step consolidates the statistics feedback from the AMP local statistics to global statistics and sends the global statistics back to the optimizer. | |
we do [an all-AMPs | a single-AMP | a group-AMPs] FEEDBACK RETRIEVE step from n% of Spool n to collect statistics. | |
Vantage was unable to generate statistics while the spool was being generated, so the statistics were derived by sampling with the feedback retrieve step. | |
we do [an all-AMPs | a single-AMP | a group-AMPs] FEEDBACK RETRIEVE step from Spool n (Last use). | |
Indicates a results feedback step generated by the Optimizer. | |
we do an all-AMPs SUM step in mapname to aggregate from database_name.table_name by way of an all-rows scan with no residual conditions, grouping by field1 (database_name.table_name.column_name_1). | |
Vantage uses an all-AMPs SUM step in the indicated map to aggregate statistics information from a single column of table or join index table_name in database or user database_name and groups the statistics by column_name_1. | |
we do an all-AMPs SUM step in mapname to aggregate from database_name.table_name by way of an all-rows scan with no residual conditions, grouping by field1 (database_name.table_name.column_name_1, database_name.table_name.column_name_2) | |
Vantage uses an all-AMPs SUM step in the indicated map to aggregate statistics information from table or join index table_name in database or user database_name and groups the statistics by (column_name_1, column_name_2). column_name_1 and column_name_2 are nonunique. Therefore, the optimizer made a cost-based decision to pre-aggregate on those two columns and to roll up the result to individual columns. This optimization guarantees that the base table is read only once and the aggregations on the individual columns run faster by working on pre-aggregated data. This optimization is used only for recollecting statistics, and is possible only when the applicable statistics are specified in a single COLLECT STATISTICS request. |
|
we do an all-AMPs SUM step in mapname to aggregate from database_name.table_name by way of a traversal of index # n without accessing the base table with no residual conditions, grouping by field1 (database_name.table_name.column_name_1, database_name.table_name.column_name_2). | |
Vantage uses an all-AMPs SUM step in the indicated map to aggregate statistics information from multiple columns of table or join index table_name in database or user database_name and groups the statistics by (column_name_1, column_name_2). Single-column statistics are collected for column_name_1 and for column_name_2, and multicolumn statistics are collected for (column_name_1, column_name_2). The aggregation for the multicolumn case is done first. The result of this aggregation is used to roll up the individual statistics on column_name_1 and column_name_2. This way, the base table is read only once and the aggregations on the individual columns run faster by working on the pre-aggregated data. This optimization is possible only when all three statistics are specified in a single COLLECT STATISTICS request. |
|
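The rollup optimization described above can be mimicked with plain Python dictionaries: aggregate once on the column pair, then derive both single-column aggregates from the pre-aggregated result instead of rescanning the base table (the column values are invented for illustration):

```python
from collections import Counter

# One entry per (column_name_1, column_name_2) pair in the base table.
rows = [("NY", "A"), ("NY", "B"), ("CA", "A"), ("NY", "A"), ("CA", "C")]

# Single pass over the base table: aggregate on the column pair.
pair_counts = Counter(rows)

# Roll the pair aggregate up to each individual column -- no rescan.
col1_counts, col2_counts = Counter(), Counter()
for (c1, c2), n in pair_counts.items():
    col1_counts[c1] += n
    col2_counts[c2] += n

print(pair_counts[("NY", "A")])             # 2
print(col1_counts["NY"], col2_counts["A"])  # 3 3
```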
we do a SMS (set manipulation step) … | |
Combines rows under the control of a UNION, EXCEPT, MINUS, or INTERSECT operator. | |
we lock a distinct [databasename.]”pseudo table” for [locking_severity] on a RowHash for deadlock prevention | |
The pseudo lock that is used to serialize primary and fallback RowHash locks on a dictionary table for DDL processing. See Locks on Pseudo Table for details. | |
we lock [databasename.]tablename for exclusive use on a reserved RowHash to prevent global deadlock | |
The proxy lock that serializes the exclusive lock for databasename.tablename in the lock phrase that follows this one. | |
we lock [databasename.]tablename for [lock_severity] we lock [databasename.]tablename for [lock_severity] on a single partition we lock [databasename.]tablename for [lock_severity] on npartitions |
|
The Lock Manager sets a lock of the indicated severity on databasename.tablename. The last phrase occurs only if DDL processing is occurring on the DBC.AccessRights dictionary table. | |
we lock [databasename.]tablename for [locking_severity] on a [locking_level] for deadlock prevention | |
The Lock Manager sets a lock for the indicated locking severity at the indicated locking level on databasename.tablename. [locking_level] can be one of the following levels:
The first level is used for non-row-partitioned tables, and levels 2 through 4 are used for row-partitioned tables. |
|
we lock [databasename.]tablename for single writer on a reserved RowHash to prevent global deadlock we lock [databasename.]tablename for single writer on a reserved RowHash in all partitions to prevent global deadlock we lock [databasename.]tablename for single writer we lock [databasename.]tablename for single writer on a reserved RowHash in all partitions to serialize concurrent updates on the partition-locked table to prevent global deadlock we lock [databasename.]tablename for single writer to serialize concurrent updates on the partition-locked table |
|
The first phrase indicates a proxy lock that is used for a non-row-partitioned table. The second phrase indicates a proxy lock that is used for a row-partitioned table. Both the first and second phrases indicate serialization locks for the proxy lock in the third phrase. A table-level single writer lock allows concurrent reads on the data that is not rowhash write locked. The fourth phrase indicates a proxy lock used to serialize the all-AMP single writer lock in the fifth phrase. This type of single writer lock is used in certain cases of partition-locked bulk DML, where NUSI, USI, or RI maintenance on the target table is required. The fifth phrase indicates a single-writer lock that allows concurrent reads on the data in a table that is not write locked during bulk DML processing. |
|
we save the UPDATED STATISTICS for ('a,b') from Spool m into Spool n, which is built locally on a single AMP in mapname derived from the hash of the table id. Statistics are collected since the age of the statistics (y days) exceeds the time threshold of x days and the estimated data change of z% exceeds the system-determined change threshold of zx%. | |
The user specified a time threshold that is met, and the system-determined change threshold is also met, so Vantage recollects the requested statistics. | |
we save the UPDATED STATISTICS for ('a,b') from Spool m into Spool n, which is built locally on a single AMP in mapname derived from the hash of the table id. Statistics are collected since the estimated update of z% exceeds the system-determined update threshold of x%. | |
The system-determined change threshold is met and the columns of interest have been updated after the last recollection of statistics, so Vantage recollects the requested statistics. | |
we save the UPDATED STATISTICS for ('a,b') from Spool m into Spool n, which is built locally on a single AMP in mapname derived from the hash of the table id. Statistics collected since the estimated data change could not be determined. | |
The Optimizer cannot reliably estimate the cardinality changes and update activity on the related columns, so Vantage recollects the requested statistics. This happens when the Optimizer detects one of the following four conditions:
|
|
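The skip-versus-recollect decisions in the entries above reduce to threshold comparisons. A hedged sketch of that logic (the function and parameter names are assumptions, and the exact combination of thresholds Vantage applies varies by message; here statistics are skipped only when neither threshold is exceeded):

```python
def should_recollect(age_days: float, time_threshold_days: float,
                     change_pct: float, change_threshold_pct: float) -> bool:
    """Recollect statistics unless both the age and the estimated
    data change are within their thresholds."""
    return (age_days > time_threshold_days
            or change_pct > change_threshold_pct)

print(should_recollect(10, 7, 25.0, 20.0))  # True: recollect
print(should_recollect(3, 7, 5.0, 20.0))    # False: skip
```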
we save the UPDATED GEOSPATIAL STATISTICS for column from Spool m (Last Use) into Spool n … | |
Vantage copies the updated geospatial NUSI statistics collected on column locally from spool m into spool n. | |
We send an END PLAN FRAGMENT step for plan fragment n. | |
Indicates to the dispatcher the end of a plan fragment in a dynamic plan. The step occurs at the end of each plan fragment for a dynamic plan except for the last. This step is not sent to the AMPs. n indicates the number of the plan fragment. There is no step in the plan to indicate the beginning of a plan fragment. For a dynamic plan, the first step is the beginning of the first plan fragment. The first step after an END PLAN FRAGMENT step is the beginning of the next plan fragment. The last step of a dynamic plan is the end of the last plan fragment, and there is no END PLAN FRAGMENT step for the last plan fragment. The number of plan fragments in the plan (which is also the number of the last plan fragment) is the number in the last END PLAN FRAGMENT step plus 1. |
|
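The fragment-numbering rule above (fragment count = last END PLAN FRAGMENT number + 1) can be checked with a small sketch; the step list is invented for illustration:

```python
steps = [
    "RETRIEVE step ...",
    "END PLAN FRAGMENT step for plan fragment 1",
    "JOIN step ...",
    "END PLAN FRAGMENT step for plan fragment 2",
    "SUM step ...",  # last fragment has no END PLAN FRAGMENT step
]

end_markers = [s for s in steps if s.startswith("END PLAN FRAGMENT")]
total_fragments = len(end_markers) + 1
print(total_fragments)  # 3
```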
We SKIP collecting STATISTICS for ('a,b'), because the age of the statistics (n days) does not exceed the user-specified time threshold of m days and the estimated data change of o% does not exceed the user-specified change threshold of p%. | |
The user specified a time threshold, and a system-determined change threshold is enabled. Neither threshold is met, so Vantage does not recollect the specified statistics. | |
We SKIP collecting STATISTICS for ('a,b'), because the estimated data change of 5% does not exceed the user-specified change threshold of 20%. | |
The user specified a data change threshold that the statistics do not meet, so Vantage does not recollect the specified statistics. | |
which is duplicated on all AMPs in mapname… | |
Relocating data in preparation for a join. | |
which is redistributed by hash code to all AMPs in mapname… | |
Relocating data by hash code in preparation for a join. | |
which is redistributed by the hash code of ([database_name.]table_name.column_expression) which is redistributed by the hash code of ([database_name.]table_name.column_expression) to few AMPs in mapname. which is redistributed by the hash code of ([database_name.]table_name.column_expression) to few or all AMPs in mapname. which is redistributed by the hash code of ([database_name.]table_name.column_expression) to all AMPs in mapname. |
|
Relocating data by the hash code of the specified column expression based on a column set from table_name in preparation for a join. | |
which is redistributed randomly … | |
When you specify HASH BY RANDOM for an INSERT ... SELECT request, the Optimizer redistributes blocks of selected rows randomly; the rows can then optionally be ordered locally (if you specify a LOCAL ORDER BY clause) before being inserted locally into the target table. The Optimizer does this by taking a generated response spool, generating a random hash value for the spool, and changing the existing row hash value before redistributing the rows. This is useful when the result of the SELECT operation does not provide an even distribution.