15.10 - EXPLAIN Request Modifier Phrase Terminology - Teradata Database

Teradata Database SQL Request and Transaction Processing

prodname
Teradata Database
vrm_release
15.10
category
Programming Reference
User Guide
featnum
B035-1142-151K

Many of the terms used in EXPLAIN phrases are described in the following list. Self-explanatory phrases are not listed.

 

Phrase

Explanation

:*

 

Teradata Database uses this string to signify that it has replaced values that have been masked from the report for a dynamic query plan because a system‑controlled variable has been set to mask the intermediate results to be inserted into the request during incremental planning.

m1 column partitions of …

 

This phrase indicates that up to m1 column partitions of the column‑partitioned table or join index might need to be accessed. There are column partition contexts available for each of the m1 column partitions.

For this phrase to occur, the column‑partitioned table or join index does not have row partitioning.

m1 ≥ 2.

One of the column partitions accessed might be the delete column partition.

Not all of the m1 column partitions might need to be accessed if no rows qualify.

A using rowid Spool phrase might follow the table or join index name. In this case, m1 is the number of column partitions accessed when using the rowID spool while m2 in the using rowid Spool phrase is the number of column partitions accessed to build the rowID spool.

This phrase is used in steps such as RETRIEVE, DELETE, JOIN, and MERGE that might read a column-partitioned source.

For a delete operation, Teradata Database does not immediately reclaim storage for any deleted rows other than LOBs and column partitions with ROW format. A subsequent fast path deletion of all the rows of the table or join index reclaims the storage of all deleted rows in the table, including previously deleted rows.

m1 column partitions (c1 contexts) of …

 

This phrase indicates that up to m1 column partitions of the column‑partitioned table or join index might need to be accessed using c1 column partition contexts.

For this phrase to occur, the column‑partitioned table or join index does not have row partitioning.

m1 ≥ 2.

2 ≤ c1 ≤ m1-1.

c1 is equal to, or one less than, the number of available column partition contexts.

One of the column partitions accessed might be the delete column partition. Not all of the m1 column partitions might need to be accessed if no rows qualify.

A using CP merge Spool or using covering CP merge Spool phrase follows the table or join index name.

This phrase is used in steps such as RETRIEVE, DELETE, JOIN, and MERGE that might read a column-partitioned source.

To access the m1 column partitions, columns from the column partitions are merged up to c1 column partitions at a time into a column-partitioned spool until all the projected columns have been retrieved for the qualifying rows.

One or more of the merges might require accessing the delete column partition. For each such merge, the delete column partition is included in m1.

Performance can degrade if c1 is much less than m1 and the retrieve is not very selective. In this case, consider decreasing the number of column partitions that need to be accessed, combining column partitions so there are fewer column partitions, or increasing PPICacheThrP if there is enough available memory to do so.
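
The c1-at-a-time merge described above can be sketched as follows. This is an illustrative stand-in, not Teradata internals: it only shows how m1 column partitions are grouped into passes of at most c1 partitions, which is why performance degrades when c1 is much less than m1.

```python
def plan_cp_merges(m1: int, c1: int) -> list[list[int]]:
    """Group m1 column partitions into merge passes of at most c1
    partitions each, mirroring the 'up to c1 column partitions at a
    time' behavior described above. Partition numbers are 1-based."""
    partitions = list(range(1, m1 + 1))
    return [partitions[i:i + c1] for i in range(0, m1, c1)]

# With 10 column partitions but only 4 contexts, three merge passes
# over the qualifying rows are needed instead of one.
passes = plan_cp_merges(10, 4)
```

Reducing the number of accessed column partitions, or raising the number of contexts, shrinks the number of passes toward one.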

For a delete operation, Teradata Database does not immediately reclaim storage for any deleted rows other than LOBs and column partitions with ROW format. A subsequent fast path deletion of all rows of the table or join index reclaims the storage of all deleted rows in the table, including previously deleted rows.

n partitions of …

 

This phrase indicates that only n of the combined partitions are accessed. n is greater than one.

This phrase only applies to an RPPI table or join index.

n1 combined partitions (one column partition) of …

 

This phrase indicates that one column partition of multiple row partitions of the column-partitioned table or join index might need to be accessed.

For this phrase to occur, the table or join index is accessed via rowIDs.

A using rowid Spool phrase might follow the table or join index name. In this case, one column partition is accessed when using the rowID spool while m2 in the using rowid Spool phrase is the number of column partitions accessed to build the rowID spool.

The phrase is used in steps such as RETRIEVE, DELETE, JOIN, and MERGE that might read a column-partitioned source.

n1 combined partitions (m1 column partitions) of …

 

This phrase indicates that the rows and columns for up to n1 combined partitions of the table or join index might need to be accessed.

For the column-partitioning level, m1 column partitions might need to be accessed. There are column partition contexts available for each of the m1 column partitions.

For this phrase to occur, the column‑partitioned table or join index has both row and column partitioning. One of the column partitions that might be accessed is the delete column partition.

Not all of the m1 column partitions might need to be accessed if no rows qualify.

A using rowid Spool phrase might follow the table or join index name. In this case, m1 is the number of column partitions accessed when using the rowID spool while m2 in the using rowid Spool phrase is the number of column partitions accessed to build the rowID spool.

The phrase is used in steps such as RETRIEVE, DELETE, JOIN, and MERGE that might read a column-partitioned source.

For a delete operation, Teradata Database does not immediately reclaim storage for any deleted rows other than LOBs and column partitions with ROW format. A subsequent fast path deletion of all the rows in a row partition reclaims the storage of all deleted rows in that row partition, including previously deleted rows in that partition. A subsequent fast path deletion of all the rows of the table or join index reclaims the storage of all deleted rows in the table, including previously deleted rows.

n1 combined partitions (m1 column partitions and c1 contexts) of …

 

This phrase indicates that the rows and columns for up to n1 combined partitions of the table or join index might need to be accessed.

For the column-partitioning level, m1 column partitions might need to be accessed. For this phrase to occur, the column‑partitioned table or join index has both row and column partitioning.

For the column-partitioning level, c1 column partition contexts are used to access up to m1 column partitions.

n1 ≥ 2.

m1 ≥ 2.

2 ≤ c1 ≤ m1-1; c1 is equal to, or one less than, the number of available column partition contexts.

One of the column partitions that might be accessed is the delete column partition. Not all of the m1 column partitions might need to be accessed if no rows qualify. A using CP merge Spool or using covering CP merge Spool phrase follows the table or join index name.

The phrase is used in steps such as RETRIEVE, DELETE, JOIN, and MERGE that might read a column-partitioned source. To access the m1 column partitions, columns from the column partitions are merged up to c1 column partitions at a time into a column‑partitioned spool until all the projected columns have been retrieved for the qualifying rows.

One or more of the merges might require accessing the delete column partition. For each such merge, the delete column partition is included in m1.

Performance can degrade if c1 is much less than m1 and the retrieve is not very selective. In this case, consider decreasing the number of column partitions that need to be accessed, combining column partitions so there are fewer column partitions, or increasing PPICacheThrP if there is enough available memory to do so.

For a delete operation, Teradata Database does not immediately reclaim storage for any deleted rows other than LOBs and column partitions with ROW format. A subsequent fast path deletion of all the rows in a row partition reclaims the storage of all deleted rows in that row partition, including previously deleted rows in that partition. A subsequent fast path delete of all the rows of the table or join index reclaims the storage of all deleted rows in the table, including previously deleted rows.

aggregate results are computed globally …

 

This phrase indicates that totals are aggregated across all AMPs.

For this case, Teradata Database performs all four steps of the ARSA aggregation algorithm.

1 Aggregate locally on each AMP.

2 Redistribute the local aggregations to their target AMPs.

3 Sort the redistributed aggregations.

This stage of the process includes the elimination of duplicate rows.

4 Aggregate globally to compute the final aggregation.
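
The four ARSA stages can be sketched for a grouped SUM. This is an illustration only; the per-AMP data structures and the hash-based AMP assignment are simplified stand-ins, not Teradata internals.

```python
from collections import defaultdict

def arsa_sum(amps, num_amps, key_col, val_col):
    """Sketch of the four-stage ARSA algorithm for a grouped SUM.
    `amps` is a list of row lists, one per AMP; the target AMP of a
    group is simulated with hash(key) % num_amps."""
    # 1. Aggregate locally on each AMP.
    local = []
    for rows in amps:
        agg = defaultdict(int)
        for row in rows:
            agg[row[key_col]] += row[val_col]
        local.append(agg)
    # 2. Redistribute the local aggregations to their target AMPs.
    redistributed = [[] for _ in range(num_amps)]
    for agg in local:
        for key, subtotal in agg.items():
            redistributed[hash(key) % num_amps].append((key, subtotal))
    # 3. Sort the redistributed aggregations (brings equal keys together,
    #    which is also where duplicate rows would be eliminated).
    for bucket in redistributed:
        bucket.sort()
    # 4. Aggregate globally to compute the final aggregation.
    result = {}
    for bucket in redistributed:
        for key, subtotal in bucket:
            result[key] = result.get(key, 0) + subtotal
    return result
```

For locally computed aggregates, only stage 1 runs and each AMP keeps its own partial result.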

aggregate results are computed locally …

 

This phrase indicates that totals are aggregated locally on each AMP.

For this case, local aggregation is the only stage of the ARSA aggregation algorithm that Teradata Database performs.

all partitions of …

 

This phrase indicates that Teradata Database accesses all combined partitions of an AMP for a primary index access to a single AMP for a primary‑indexed row‑partitioned table or join index.

This phrase is not applicable to column‑partitioned tables or join indexes because those objects do not have a primary index.

a single column partition of …

 

This phrase indicates that only one column partition of the column‑partitioned table or join index might need to be accessed. For this phrase to occur, the column‑partitioned table or join index does not have row partitioning. Also, this phrase indicates the delete column partition does not need to be accessed either because the column partition has ROW format or access is via rowIDs from an index or rowID spool.

A using rowid Spool phrase might follow the table or join index name. In this case, one column partition is accessed when using the rowID spool while m2 in the using rowid Spool phrase is the number of column partitions accessed to build the rowID spool.

This phrase is used in steps such as RETRIEVE, DELETE, JOIN, and MERGE that might read a column-partitioned source.

For a delete operation, Teradata Database does not immediately reclaim storage for any deleted rows, other than LOBs and column partitions with ROW format. A subsequent fast path deletion of all of the rows of the table or join index reclaims the storage of all deleted rows in the table, including previously deleted rows.

a single combined partition of …

 

This phrase indicates that only one column partition of only one row partition of the column‑partitioned table or join index might need to be accessed.

For this phrase to occur, the column‑partitioned table or join index has to have both row and column partitioning. Also, this phrase indicates that the delete column partition does not need to be accessed either because the column partition has ROW format or because access is via rowids.

A using rowid Spool phrase may follow the table or join index name. In this case, one column partition is accessed when using the rowID spool while m2 in the using rowid Spool phrase is the number of column partitions accessed to build the rowID spool.

This phrase is used in steps such as RETRIEVE, DELETE, JOIN, and MERGE that might read a column-partitioned source.

a single partition of …

 

This phrase indicates that only a single combined partition of a primary‑indexed row‑partitioned table or join index is accessed.

This phrase is not applicable to a column‑partitioned table or join index.

(all_amps) …

 

This phrase indicates that the spool is created on all AMPs to which the step is sent. The AMPs on which the step executes do not need to enter into a dynamic AMP group.

all‑AMPs JOIN step by way of a RowHash match scan …

 

1 The first row is retrieved from the first table; the hash code for that row is used to locate a row from the second table.

2 Each row‑hash match is located and processed as follows:

a
The row‑hashes are compared. If not equal, the larger row‑hash is used to read rows from the other table until a row‑hash match is found, or until the table is exhausted.
b
If a match is found, each pair of rows with the same hash code is accessed (one at a time) from the two tables. For each such pair, if the join condition is satisfied, a join result row is produced.
c
After all rows with the same row‑hash are processed from both tables, one more row is read from each table. The row‑hashes from these two rows are compared, restarting the compare process.
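
The three steps above amount to a merge-style scan over two inputs kept in row-hash order. The following sketch assumes each input is a list of (row_hash, row) pairs already sorted by row hash; the names and structures are illustrative, not Teradata internals.

```python
def rowhash_match_scan_join(left, right, condition):
    """Sketch of an all-AMPs JOIN by way of a RowHash match scan.
    `left` and `right` are lists of (row_hash, row) pairs, each
    already sorted by row_hash."""
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i][0] < right[j][0]:
            i += 1              # advance past the smaller row-hash
        elif left[i][0] > right[j][0]:
            j += 1
        else:
            # Row-hash match: access each pair of rows sharing this
            # hash code and check the join condition on each pair.
            h = left[i][0]
            j_end = j
            while i < len(left) and left[i][0] == h:
                j_end = j
                while j_end < len(right) and right[j_end][0] == h:
                    if condition(left[i][1], right[j_end][1]):
                        result.append((left[i][1], right[j_end][1]))
                    j_end += 1
                i += 1
            j = j_end           # read on past the matched run
    return result
```

Note the explicit `condition` check inside the matched run: equal row hashes alone do not prove the join condition holds.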

all‑AMPs RETRIEVE step by way of an all‑rows scan …

 

This phrase indicates that all rows of a table are selected row by row on all AMPs on which the table is stored.

BMSMS (bit map set manipulation step) intersects the following rowID bit maps:

     1)   The bit map built for … by way of index # n... 
     2)   The bit map built for … by way of index # n...
      …

The resulting bit map is placed in Spool n...

     BMSMS...

Indicates that two or more bit maps are intersected by ANDing them to form one large bit map.

     index # n...

Identifies, in the order in which they are ANDed, each nonunique secondary index used in the intersection. 

     resulting bit map is placed in Spool n...

Identifies the temporary file in which the large bit map produced by the BMSMS is stored to make it available for use in producing the final result.
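
The intersection itself is a bitwise AND over the per-index rowID bit maps. A minimal sketch, representing each bit map as a Python integer whose set bits are qualifying rowIDs (an illustrative encoding, not the on-disk format):

```python
def bitmap_from_rowids(rowids):
    """Build a bit map with one bit set per qualifying rowID."""
    bm = 0
    for rid in rowids:
        bm |= 1 << rid
    return bm

def bmsms_intersect(bitmaps):
    """Sketch of a bit map set manipulation step: AND together the
    rowID bit maps built for each NUSI, in order, producing the one
    large bit map that is placed in spool."""
    result = ~0  # all bits set
    for bm in bitmaps:
        result &= bm
    return result
```

Only rowIDs whose bit survives every AND are candidates, so the subsequent data-row access touches just the intersection.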

a rowkey-based …

 

The join is hash‑based by partition (that is, by the rowkey). In this case, there are equality constraints on the partitioning columns and primary index columns. This allows for a faster join since each uneliminated row partition needs to be joined with at most only one other row partition.

When the phrase is not given, the join is hash based. That is, there are equality constraints on the primary index columns from which the hash is derived. For a PPI table or join index, there is some additional overhead in processing the table in hash order.

Note that with either method, the join conditions must still be validated.
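
Why the rowkey-based form is faster can be sketched as a partition-wise join: each uneliminated row partition pairs with at most one partition on the other side. The mapping and row shapes below are illustrative stand-ins, not Teradata internals.

```python
def rowkey_based_join(left_parts, right_parts, condition):
    """Sketch of a rowkey-based join. With equality constraints on
    both the partitioning columns and the primary index columns, a
    left row partition only needs to be joined with the matching
    right row partition. `left_parts`/`right_parts` map a partition
    number to its list of rows."""
    result = []
    for part, lrows in left_parts.items():
        for l in lrows:
            # At most one partner partition per partition number.
            for r in right_parts.get(part, []):
                # The join condition must still be validated.
                if condition(l, r):
                    result.append((l, r))
    return result
```

Without the rowkey, every hash-matching pair across all row partitions must be considered, which adds overhead for a PPI table.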

<BEGIN ROW TRIGGER LOOP>

 

Processing of the trigger action statements defined in the row trigger starts from the current AMP step, step n, of the EXPLAIN text.

All AMP steps from the current step through the step in which the phrase END ROW TRIGGER LOOP for step n appears constitute the row trigger loop.

by skipping global aggregation

 

Teradata Database avoids the cost of redistributing rows and aggregating them globally by skipping global aggregation when you collect multicolumn statistics and any column in the multicolumn set is unique.

by skipping local aggregation

 

Teradata Database avoids the cost of first writing to spool and then sorting the spooled rows by skipping local aggregation when you collect statistics.

by the hash code of ([database_name].table_name.column_name).

 

The spool sort is done on the specified column in row hash order.

by using cache during local aggregation

 

Teradata Database avoids the cost of sorting large numbers of input rows by caching output rows during local aggregation and coalescing any duplicate rows in the cache when you collect statistics.
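
The effect of the aggregation cache is that duplicate grouping keys are coalesced as rows arrive, instead of spooling and sorting every input row first. A minimal sketch with simplified tuple rows (not Teradata internals):

```python
def local_aggregate_with_cache(rows, key_col, val_col):
    """Sketch of caching output rows during local aggregation:
    duplicate keys are coalesced in an in-memory cache as input
    rows arrive, so no spool-and-sort of the inputs is needed."""
    cache = {}
    for row in rows:
        key = row[key_col]
        # Coalesce with any cached row for the same key.
        cache[key] = cache.get(key, 0) + row[val_col]
    return cache
```

The cache holds one entry per distinct key, which is why this pays off when the number of groups is much smaller than the number of input rows.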

by way of an all‑rows scan

 

All rows in the table are scanned because a single‑partition scan is not possible for the specified predicate conditions. See “Single Partition Scans and BEGIN/END Bound Functions” on page 343.

by way of index # n and the bit map in Spool n

 

The data row associated with the rowID is accessed only if the associated bit is turned on in the bit map.

by way of the primary AMP index …

 

Indicates that there are conditions on the primary AMP index columns that allow the system to access only a single AMP for the indicated table.

by way of the primary index …

 

Indicates that there are conditions on the primary index columns that allow the system to access only a single AMP for the indicated table.

by way of the sort key in spool field1 …

 

Field1 is created to allow a tag sort.
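
A tag sort orders compact tags (the field1 sort key plus a row reference) rather than the full rows, then fetches rows in tag order. A minimal sketch under those assumptions:

```python
def tag_sort(rows, sort_key):
    """Sketch of a tag sort: sort only lightweight row tags (indexes)
    by the derived sort key, then fetch full rows in sorted order."""
    tags = sorted(range(len(rows)), key=lambda i: sort_key(rows[i]))
    return [rows[i] for i in tags]
```

Moving only tags during the sort keeps the data shuffled per comparison small even when rows are wide.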

by way of the unique primary index …

 

Indicates that there are conditions on the unique primary index columns that allow the system to access only a single AMP and a single row for the indicated table.

(compressed columns allowed)

 

The target spool can have compressed values.

computed globally …

 

The computation involves all the intermediate spool data.

condition …

 

An intermediate condition that joins table rows (as compared with the overall join condition).

duplicated on all AMPs …

 

A spool containing intermediate data that is used to produce a result is copied to all AMPs containing data with which the intermediate data is compared.

eliminating duplicate rows …

 

Duplicate rows can exist in spools, either as a result of selection of nonunique columns from any table or of selection from a MULTISET table. This is a DISTINCT operation.

<END ROW TRIGGER LOOP for step n.>

 

Delimits the running of the last AMP step in the row trigger loop.

Control moves to the next step outside the trigger.

END TRANSACTION step …

 

Indicates that processing is complete and that any locks on the data might be released. Changes made by the transaction are committed.

enhanced by dynamic partition elimination …

 

Indicates a join condition where dynamic row partition elimination has been used.

estimated size …

 

This value, based on any statistics collected for a table, is used to estimate the size of the spool file needed to accommodate spooled data. If statistics have not been collected for a table, this estimate might be grossly incorrect (see SQL Data Definition Language).

estimated time …

 

This approximate time is based on average times for the suboperations that comprise the overall operation, and the likely number of rows involved in the operation. The accuracy of the time estimate is also affected by the accuracy of the estimated size.

execute the following steps in parallel …

 

This phrase identifies a set of AMP steps that is processed concurrently. The explanatory text immediately following the list describes the execution of each step.

full outer joined using a single partition hash join

 

Teradata Database joins two spools using a single‑partition full outer hash join.

(group_amps)

 

This phrase is used to indicate that a spool is created on a group of AMPs. Each AMP on which the step executes enters into a dynamic AMP group if rows are generated for that AMP. If rows are not generated for that AMP, it does not enter into a dynamic AMP group.

grouping by fieldn ([database_name].table_name.column_expression).

 

The specified grouping column expression is based on column n in the specified table.

INSERT into a single column partition of …

 

This phrase indicates that at least one row is being inserted into a table or join index with only one user-specified column partition.

INSERT into m3 column partitions of …

 

This phrase indicates that at least one row is being inserted into a table or join index with m3 column partitions.

In this case, m3 ≥ 2.

in view view_name

 

The specified [database_name].table_name is accessed by means of the view view_name.

join condition …

 

The overall constraint that governs the join.

In the following request, employee.empno = department.mgrno is the overall constraint governing the join:

     SELECT deptname, name 
     FROM employee, department 
     WHERE employee.empno = department.mgrno;

Join conditions are also specified in the ON clause of an outer join or an ANSI SQL standard‑formatted inner join.

JOIN step by way of an all‑rows scan …

 

This phrase indicates that on each AMP on which they reside, spooled rows, primary table rows, or both are searched row by row; rows that satisfy the join condition are joined.

joined using a single partition exclusion hash join, with a join condition of (join_condition)

 

Teradata Database joins two spools using a single‑partition exclusion hash join on join_condition.

joined using a single partition hash join, with a join condition of (join_condition)

 

Teradata Database joins two spools using a single‑partition hash join on join_condition.

joined using a single partition exclusion hash join, with a join condition of (join_condition) enhanced by dynamic partition elimination

 

Teradata Database joins two spools using a single‑partition exclusion hash join on join_condition with dynamic row partition elimination.

joined using a single partition hash join, with a join condition of (join_condition) enhanced by dynamic partition elimination

 

Teradata Database joins two spools using a single‑partition hash join on join_condition with dynamic row partition elimination.

joined using a single partition inclusion hash join, with a join condition of (join_condition)

 

Teradata Database joins two spools using a single‑partition inclusion hash join on join_condition.

joined using a single partition inclusion hash join, with a join condition of (join_condition) enhanced by dynamic partition elimination

 

Teradata Database joins two spools using a single‑partition inclusion hash join on join_condition with dynamic row partition elimination.

(Last Use)

 

This term identifies the last reference to a spool that contains intermediate data. The spool is released following this AMP step.

left outer joined using a single partition hash join, with a join condition of (join_condition)

 

Teradata Database joins two spools using a single‑partition left outer hash join on join_condition.

(load committed)…

 

The read mode. Indicates that Teradata Database reads the rows that are load-committed.

If the read mode is (load uncommitted), it indicates that Teradata Database ignores the logically deleted rows during processing. All other rows are read.

For more information about load isolation, see SQL Data Definition Language Syntax and Examples and SQL Data Manipulation Language.

(load isolated)…

 

Indicates that the modification operation performed in the step is load isolated.

For this phrase to occur, the table being modified must be a load-isolated table or a join index defined on a load-isolated table.

Delete operations that are load isolated do not physically delete the rows; rather, they mark the rows as logically deleted and row versioned. Update operations that are load isolated version the qualified rows so that older values are retained. Insert operations that are load isolated mark the rows with a load identity value that provides the commit property of the row to concurrent readers.

During such modifications, a concurrent reader session on the table or join index can continue to obtain load-committed rows while the changes in the on-going load are not visible until committed.

For more information about load isolation, see SQL Data Definition Language Syntax and Examples and SQL Data Manipulation Language.

locally on the AMPs …

 

That portion of spooled intermediate or result data for which an AMP is responsible is stored on the AMP; it is not duplicated on or redistributed to other AMPs that are processing the same request.

lock …

 

The Lock Manager places an ACCESS, READ, WRITE, or EXCLUSIVE lock on the database object that is to be accessed by a request.

MERGE into a single column partition of …

 

This phrase indicates that rows are being merge inserted into a table or join index with only one user-specified column partition.

MERGE into m3 column partitions of …

 

This phrase indicates that rows are being inserted into a table or join index with m3 user-specified column partitions, using one column-partition context.

m3 ≥ 2.

Teradata Database can use either of two methods to insert the rows.

  • Take one pass over the source rows to insert the rows.

    The column values from each source row are buffered with a separate 128KB output buffer for each target column partition. Teradata Database writes a buffer to its corresponding column partition when it is full or when all the source rows have been read.

    The source rows either do not need to be sorted for the MERGE step or have already been sorted by a previous AMP step. The phrase with buffered output is reported in the MERGE step for this method.

  • Fill a buffer with source rows and then sort it.

    Teradata Database then scans the rows in the buffer once for each column partition in order to insert the column partition values for that column partition. This is repeated until all the source rows have been read.

    This method avoids an AMP step to spool and sort the rows. This is efficient if enough data is inserted for each set of buffered source rows into each combined partition when there is row partitioning for a target column‑partitioned table and the source is a NoPI table.

    If the source is not a NoPI table, this method can cause an inefficient rereading and rewriting of data blocks that can occur when only a few column partition values are going into a combined partition at a time.

    The phrase with buffered input and sorted input is reported in the MERGE step for this method.

The Optimizer recommends the least costly method to use, but an AMP might change to the other method as the rows are being inserted if it determines the other method might perform better.

MERGE into m3 column partitions (c3 contexts) of …

 

This phrase indicates that rows are being inserted into a table or join index with m3 user-specified column partitions, using c3 column-partition contexts.

m3 ≥ 2.

2 ≤ c3 < m3.

Teradata Database can use one of two methods to insert the rows.

  • Make passes over the source rows to insert them.

    The column values from each source row that is read are buffered with a separate 128KB output buffer for each target column partition being processed by a pass. A buffer is written to its corresponding column partition when it is full or when all the source rows have been read.

    The source rows either do not need to be sorted for the MERGE step or have already been sorted by a previous AMP step. The source must be spooled if the source is a table or join index with an ACCESS lock. Even though this method requires multiple passes over the source rows, it is often less costly than the next method.

    Teradata Database reports the phrase with buffered output in the MERGE step for this method.

  • Fill a buffer with source rows and then sort it.

    Teradata Database scans the rows in the buffer once for each column partition in order to insert the column partition values for that column partition and repeats this stage of the process until all the source rows have been read. This avoids an AMP step to spool and sort the rows and avoids making multiple passes over the source rows. This method can be efficient if enough data is inserted for each set of buffered source rows into each combined partition when there is row partitioning for a target column‑partitioned table and the source is a NoPI table.

    If the source is not a NoPI table, this method can cause an inefficient rereading and rewriting of data blocks. This inefficiency can occur when only a few column partition values are going into a combined partition at a time.

    The phrase with buffered input and sorted input occurs in the MERGE step for this method.

The Optimizer recommends the method to use to the AMPs, but an AMP might change to the other method as the rows are inserted if it determines that the other method might perform better.

There is a minimum of 8 and a maximum of 256 available column partition contexts.

The number of available column partition contexts is based on three factors: the amount of FSG cache memory, the estimated block size, and the value of the PPICacheThrP cost profile constant or the DBS Control field PPICacheThrP.

If the cost profile constant PPICacheThrP is set to any value other than 0, then its setting overrides the setting of the PPICacheThrP DBS Control field. Only Teradata support personnel can modify the setting of the PPICacheThrP cost profile constant.
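
The exact formula for the number of available contexts is not given here. Purely as a hypothetical illustration of how the three factors could interact (the function, parameter names, and arithmetic are invented, not the documented computation), one might picture:

```python
def available_cp_contexts(fsg_cache_bytes, est_block_size, ppi_cache_fraction):
    """Hypothetical sketch only: contexts grow with the share of FSG
    cache that PPICacheThrP-style tuning allows, shrink with the
    estimated block size, and are clamped to the documented range
    of 8 to 256 available column partition contexts."""
    budget = fsg_cache_bytes * ppi_cache_fraction   # memory allowed for contexts
    contexts = int(budget // est_block_size)        # one block buffer per context
    return max(8, min(256, contexts))               # documented min 8, max 256
```

Only the clamp to the 8-to-256 range reflects the text above; treat the rest as a reading aid, not a formula.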

merge join …

 

One of the methods of join processing performed by Teradata Database.

merge with matched updates and unmatched inserts …

 

An AMP step that can do any of the following three operations in a single step:

  • Both update and insert operations
  • Insert operations only
  • Update operations only

The step assumes that the source table is always distributed on the join column of the source table, which is specified in the ON clause as an equality constraint with the primary index of the target table, and sorted on the RowKey.

The step performs a RowKey‑based merge join internally, identifying source rows that qualify for updating target rows and source rows that qualify for inserts, after which it performs those updates and inserts.

This step is very similar to the APPLY phase of MultiLoad because it guarantees that the target table data block is read and written only once during the MERGE operation.
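
The matched-update/unmatched-insert apply can be sketched as follows. The dict-based lookup stands in for the internal RowKey-based merge join; rows and the key column are simplified illustrations, not Teradata internals.

```python
def merge_apply(source, target, key):
    """Sketch of the MERGE apply step: `source` and `target` are lists
    of dict rows. Source rows that match a target row on the rowkey
    column `key` update it; unmatched source rows are inserted."""
    tmap = {row[key]: row for row in target}  # stand-in for the rowkey merge
    inserts = []
    for s in source:
        if s[key] in tmap:
            tmap[s[key]].update(s)            # matched update
        else:
            inserts.append(dict(s))           # unmatched insert
    target.extend(inserts)
    return target
```

Because updates and inserts are identified in one pass, each target data block needs to be read and written only once.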

nested join …

 

One of the methods of join processing performed by Teradata Database.

(no lock required)

 

The lock required for the operation was acquired in a previous AMP step.

This phrase is not always reported for steps where a lock was acquired earlier.

(nonconcurrent load isolated)…

 

Indicates that the modification operation is not load isolated.

For this phrase to occur, the table being modified must be a load-isolated table or a join index defined on a load-isolated table.

Such modifications are like regular modifications on a non-load-isolated table that do not version the rows. The difference is that EXCLUSIVE locks are applied when such changes occur.

During such a modification, a concurrent reader session is blocked (including when an ACCESS lock is used) until the transaction closes.

For more information about load isolation, see SQL Data Definition Language Syntax and Examples and SQL Data Manipulation Language.

no residual conditions …

 

Rows are selected in their entirety; there are no specific search conditions. All applicable conditions have been applied to the rows.

normalizing rows …

 

Indicates that Teradata Database performs a normalize operation as part of a sort.

of n3 combined partitions

 

This phrase indicates that all the rows and columns in each of n3 combined partitions of a table or join index, which has both row and column partitioning in this case, can be completely deleted as a fast path delete, and the storage for the deleted rows is recovered.

n3 > 1.

of n3 combined partitions from …

 

This phrase indicates that all the rows and columns in each of n3 combined partitions of a table or join index that has both row and column partitioning are to be completely deleted as a fast path delete, and the storage for the deleted rows is recovered.

n3 is greater than one.

of n partitions from …

 

This phrase indicates that all rows in each of n combined partitions can be completely deleted for a primary‑indexed row‑partitioned table or join index.

The value of n is always greater than one.

In some cases, this allows for faster deletion of entire partitions.

This phrase is not applicable to column-partitioned tables or join indexes because other phrases are used in those cases.

of a single partition from …

 

This phrase indicates that the Optimizer determined that all rows in a single combined partition for a row‑partitioned table or join index with a primary index can be completely deleted.

In some cases, this allows for faster deletion of the entire combined partition.

This phrase is not applicable to column-partitioned tables or join indexes because there must be at least two column partitions for a column‑partitioned table or join index: at least one specified column partition and the delete column partition.

on a reserved RowHash [in all partitions] [in n partitions] [in a single partition]…

 

This phrase indicates that the system has placed a proxy lock on the target table to prevent global deadlock for a later all-AMP lock on the table. The optional phrases "in all partitions," "in n partitions," and "in a single partition" appear only if the table is row-partitioned.

The phrase “in all partitions” indicates a proxy for a table-level lock.

The phrase “in n partitions” indicates a proxy for a partition-range lock.

The phrase “in a single partition” indicates a proxy for a single-partition lock.

For more information, see “Proxy Locks” on page 713.

    partial SUM

     

    A partial GROUP BY optimization has been done on the SUM step.

    (partial-covering)

     

    The NUSI in question partially covers the query using a constraint scan.

    For example, the system can probe a NUSI subtable to get the rowID values for base table rows that might qualify for the query, but it must then also access the underlying base table for the NUSI to obtain the rows that definitely qualify.

    post join condition of …

    Indicates that the optimizer detected a delayed condition and processed it as a post-join condition. This phrase appears immediately after the regular join condition.

    The following conditions trigger the use of post-join processing:

  • Predicates that contain a CASE expression, or NULL-sensitive predicates that refer to the inner table of an outer join.
  • Predicates that contain a complex OR term used in conjunction with an outer join.
  • Predicates that are used in conjunction with a NOT IN or NOT EXISTS connecting term.

    product join …

     

    One of the methods of join processing performed by Teradata Database.

    redistributed by hash code to all AMPs …

     

    Given the values of an indexed or unindexed column, rows are sent to the AMPs that are responsible for storing the rows that use these values as a primary index.

    right outer joined using a single partition hash join, with a join condition of (join_condition)

     

    Teradata Database joins a spool from the left relation to a right relation using a two‑step single‑partition right outer hash join on join_condition.

    single‑AMP JOIN step by way of the unique primary index …

     

    A row is selected on a single AMP using the unique primary index for the table. Using a value in the row that hashes to a unique index value in a second table, the first row is joined with a second row located on another AMP.

    single‑AMP RETRIEVE by way of unique index # n

     

    A single row of a table is selected using a unique secondary index value that hashes to the AMP on which the table row is stored.

    single‑AMP RETRIEVE step by way of the unique primary index …

     

    A single row of a table is selected using a unique primary index value that hashes to the single AMP on which the data row is stored.

    Although not explicitly stated in the LOCK portion of the Explain text, a rowhash lock is required because the table is accessed via the unique primary index.
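
    For example, assuming a hypothetical personnel.employee table whose unique primary index is empno, a request such as the following would typically generate this step:

         EXPLAIN SELECT * FROM personnel.employee WHERE empno = 10001;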

    SORT/GROUP

     

    Partial GROUP BY was used to reduce cardinality by simultaneously sorting and grouping.

    SORT to normalize …

     

    Teradata Database performs the normalize operation as part of sort.

    SORT to order Spool n by row hash …

     

    Rows in the spool are sorted by hash code to put them in the same order as rows in the primary table, or in another spool on the same AMP, with which they are to be matched.

    SORT to order Spool n by row hash and the sort key in spool field1 eliminating duplicate rows …

     

    Rows in the spool are sorted by hash code using a uniqueness sort to eliminate duplicate rows. Uniqueness is based on the data in field1.

    The contents of field1 depend on the query and might comprise any of the following:

  • A concatenation of all the fields in the spool row (used for queries with SELECT DISTINCT or that involve a UNION, INTERSECT, EXCEPT, or MINUS operation).
  • A concatenation of the rowIDs that identify the data rows from which the spool row was formed (used for complex queries involving subqueries).
  • Some other value that uniquely defines the spool row (used for complex queries involving aggregates and subqueries).
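
    For example, assuming the hypothetical personnel.employee table, a SELECT DISTINCT request such as the following produces a uniqueness sort of this kind, with field1 formed by concatenating all the fields in the spool row:

         EXPLAIN SELECT DISTINCT deptno FROM personnel.employee;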

    SORT to order Spool 1 by the sort key in spool field1 …

     

    Rows in the spool are sorted by the value in field1. The contents of field1 are determined by the column set defined in the ORDER BY or WITH...BY clause of the request being processed.

    For example, “…SORT to order Spool 1 by the sort key in spool field1 (THU.JI1.b1)…”

    SORT to partition by rowkey.

     

    This indicates that a join spool is sorted into the appropriate partitions so that it can be joined using a rowkey‑based join.

    SORT to string_1 by string_2 …

     

    Rows are sorted to string_1 by their string_2 sort key.

    For example, “…SORT to partition by rowkey…” or “…SORT to order Spool 1 by the sort key in spool field1 (THU.JI1.b1)…”

    SORT to partition Spool n by rowkey …

     

    The Optimizer determined that a spool is to be partitioned based on the same partitioning expression as a table to which the spool is to be joined. That is, the spool is to be sorted by rowkey (partition and hash). Partitioning the spool in this way allows for a faster join with the partitioned table. n is the spool number.

    spool n

     

    Identifies a spool, which is used to contain data during an operation.

  • Spool 1 is normally used to hold a result before it is returned to the user.
  • Spools 2, 3, etc., contain intermediate data that produces a result.
  • Spool n statistics go into Spool n.

     

    In a dynamic EXPLAIN, identifies the spool where the statistics that were collected during the generation of the spool are located. If a step generates multiple spools, all spools are included.

    spool_n.Field …

     

    Identifies the field of the spooled rows that is being used in a join constraint or comparison operation.

    For example:

         personnel.employee.empno = spool_2.mgrno 
    Statement 1...

     

    This term refers to the initiating request.

    SUM step to normalize from …

     

    Indicates that Teradata Database performs a normalize operation as part of the aggregation in an aggregation step.

    table‑level summary statistics

     

    Indicates that Teradata Database computes summary statistics, not detailed statistics, for a table.

    The actual size of Spool n is n row[s] (n bytes).

     

    Reports the actual size of the spool, if available.

    After the steps that report the location of the Spool n statistics and consolidate those statistics, the size of the spool is reported if the statistics can be retrieved from additional runtime information.

    The estimated time for this step is nn seconds.

     

    The estimated time for the reported Delete, Insert, Join, Merge, Merge Delete, Merge Update, Retrieval, Sum, or Update step in seconds.

    Note that the time shown is not clock time; you should consider it to be an arbitrary unit of measure that you can use to compare different operations.

    the estimated time is nn seconds.

     

    The estimated time to completion for the entire request in seconds.

    The time shown is not clock time; you should consider it to be an arbitrary unit of measure that you can use to compare different operations.

    The following is the dynamic explain for the request.

     

    This phrase indicates the query plan is a dynamic plan generated by incremental planning and execution (IPE) for this request. This phrase, if included, occurs before the first step of the dynamic plan.

    The size of Spool x is estimated with low confidence to be y rows (z bytes).

     

    The estimated size of Spool x, which is always reported with Low Confidence (see “About Optimizer Confidence Levels” on page 537), where y is the estimated cardinality of the spool and z is the estimated number of bytes in the spool.

    The spool size in bytes is calculated as follows:

    … the Spatial index subtable …

     

    The index being created or dropped is a spatial NUSI.

    This request is eligible for incremental planning and execution (IPE) but dynamic plan does not support USING request modifier. The following is the static plan for the request:

     

    This is a generic plan for the request, which is eligible for incremental planning and execution; however, a dynamic plan is not supported because the request being explained is preceded by a USING request modifier. The phrase, if included, occurs before the first step of the EXPLAIN text when you specify a DYNAMIC EXPLAIN request modifier.

    This repeated request is eligible for incremental planning and execution (IPE) but it may also be eligible to be cached so a generic plan is tried. The following is the static plan for the request:

     

    This is a generic plan for the request that is eligible for incremental planning and execution; however, this is a repeated request whose generic plan might be eligible to be cached, so Teradata Database tries a generic plan the second time. A generic plan is always a static plan.

    The phrase, if included, occurs before the first step of the generic plan in DBC.DBQLExplainTbl if the EXPLAIN text for the plan is being logged by DBQL; it does not occur when you specify an EXPLAIN, STATIC EXPLAIN, or DYNAMIC EXPLAIN request modifier.

    This request is eligible for incremental planning and execution (IPE) but doesn’t meet thresholds. The following is the static plan for the request:

     

    Teradata Database reports this phrase before the first step of a static plan.

    This is a specific static plan that is eligible for incremental planning and execution, but does not meet certain system‑specified cost thresholds. If Teradata Database executes the request, this static plan is generated and executed.

    This request is eligible for incremental planning and execution (IPE) but dynamic explain does not support USING request modifier. The following is the static plan for the request:

     

    The request is eligible for incremental planning and execution based on this static plan; however, the dynamic plan cannot be displayed for requests with a USING request modifier, so a generic plan is displayed instead. This phrase, if included, occurs before the first step of the static plan.

    A generic plan is always a static plan. If the request is executed, a generic plan, a specific static plan, or a dynamic plan might be generated and executed. This phrase does not occur in DBC.DBQLExplainTbl in the EXPLAIN text for a plan being logged by DBQL.

    This request is eligible for incremental planning and execution (IPE). The following is the static plan for the request:

     

    Teradata Database reports this phrase before the first step of a static plan.

    This is a specific static plan that is eligible for incremental planning and execution and, if the request is executed, a dynamic plan might be generated and executed instead of this static plan. However, this static plan is the one used for evaluating workload filters, throttles, and classification criteria.

    This request is eligible for incremental planning and execution (IPE). IPE is disabled. The following is the static plan for the request:

     

    This is a static plan for this request, which is eligible for incremental planning and execution; however, IPE is disabled for the system. If included, Teradata Database reports the phrase before the first step of the static plan.

    If the request is executed and IPE is still disabled, Teradata Database generates and executes a static plan; if the request is executed and IPE is enabled, a dynamic plan is generated. To enable dynamic plans, contact Teradata Customer Support.

    two‑AMP RETRIEVE step by way of unique index #n

     

    A row of a table is selected based on a USI value:

  • A single row in the USI subtable is selected using the index value that hashes to the AMP on which the subtable row is stored.
  • The hash value in the index rowID determines the AMP on which the data row is stored.
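
    For example, assuming a hypothetical unique secondary index on the socsecno column of the personnel.employee table, a request such as the following would typically generate this step:

         EXPLAIN SELECT * FROM personnel.employee WHERE socsecno = '123456789';
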
    unsatisfiable

     

    The Optimizer has added an implied constraint to identify an unsatisfiable condition for a Period column when a DML request is defined with a specific set of BEGIN and END search condition constraints. See “Predicate Simplification” on page 94.

    using covering CP merge Spool q (s2 subrow partitions and Last Use)

     

    using covering CP merge Spool q (s1 + s2 subrow partitions and Last Use)

     

    These phrases indicate that a column‑partitioned merge spool is created and used to merge column partitions from the table or join index, then the resulting s2 subrow column partitions are read from the column‑partitioned merge spool.

    s1 indicates the number of intermediate subrow column partitions needed for the merges, while s1 + s2 indicates the number of merges needed.

    If s1 is not included, no intermediate subrow column partitions are needed for the merges, and s2 merges are required.

    The number of merges needed depends on the number of column partitions that need to be accessed and the number of available column partition contexts as indicated in a preceding from phrase. This is the last usage of this column‑partitioned merge spool so it can be deleted.

    q is the spool number for the column‑partitioned merge spool.

    using covering CP merge Spool q (s2 subrow partitions and Last Use) and rowid Spool k (Last Use)

     

    using covering CP merge Spool q (s1 + s2 subrow partitions and Last Use) and rowid Spool k (Last Use)

     

    These phrases indicate that a column‑partitioned merge spool is created and used to merge column partitions from the table or join index, and then the resulting s2 subrow column partitions are read from the column‑partitioned merge spool, driven by the rowid spool.

    s1 indicates the number of intermediate subrow column partitions needed for the merges.

    s1 + s2 indicates the number of merges needed.

    If s1 is not included, no intermediate subrow column partitions are needed for the merges, and s2 merges are needed.

    The number of merges needed depends on the number of column partitions that need to be accessed and the number of available column partition contexts as indicated in a preceding from phrase.

    This is the last usage of both this column‑partitioned merge spool and rowID spool so they can be deleted.

    q is the spool number for the column‑partitioned merge spool.

    k is the spool number for the rowID spool.

    If rowID Spool k is generated by this AMP step instead of a previous step, a built from phrase follows this using phrase.

    using CP merge Spool q (one subrow partition and Last Use)

     

    using CP merge Spool q (s2 subrow partitions and Last Use)

     

    using CP merge Spool q (s1 + s2 subrow partitions and Last Use)

     

    These phrases indicate that a column‑partitioned merge spool is created and used to merge column partitions from the table or join index. Then, other column partitions from the table or join index and the resulting one subrow column partition or s2 subrow column partitions from the column‑partitioned merge spool are read.

    s1 indicates the number of intermediate subrow column partitions needed for the merges.

    s1 + s2 indicates the number of merges needed.

    If s1 is not included, no intermediate subrow column partitions are needed for the merges, and one merge or s2 merges are needed.

    The number of merges needed depends on the number of column partitions that need to be accessed and the number of available column partition contexts, as indicated in a preceding from phrase.

    This is the last usage of this column‑partitioned merge spool, so it can be deleted.

    q is the spool number for the column‑partitioned merge spool.

    using CP merge Spool q (one subrow partition and Last Use) and rowid Spool k (Last Use)

     

    using CP merge Spool q (s2 subrow partitions and Last Use) and rowid Spool k (Last Use)

     

    using CP merge Spool q (s1 + s2 subrow partitions and Last Use) and rowid Spool k (Last Use)

     

    These phrases indicate that a column‑partitioned merge spool is created and used to merge column partitions from the table or join index, and then some other column partitions from the table or join index and the resulting one subrow column partition or s2 subrow column partitions from the column‑partitioned merge spool are read, driven by the rowid spool.

    s1 indicates the number of intermediate subrow column partitions needed for the merges.

    s1 + s2 indicates the number of merges needed.

    If s1 is not included, no intermediate subrow column partitions are needed for the merges, and one merge or s2 merges are needed.

    This is the last usage of both this column‑partitioned merge spool and rowid spool so they can be deleted.

    q is the spool number for the column‑partitioned merge spool.

    k is the spool number for the rowID spool.

    If rowID Spool k is generated by this AMP step instead of a previous step, a built from phrase follows this using phrase.

    using rowid Spool k (Last Use)

     

    This phrase indicates that a rowID spool contains the rowIDs of rows in the table that qualify, and then the column‑partitioned table or join index is read, driven by this rowID spool.

    This is the last usage of this rowID spool, so it can be deleted.

    k is the spool number for the rowID spool.

    If rowID Spool k is generated by this AMP step instead of a previous step, a built from phrase follows this using phrase.

    This phrase is used in AMP steps, such as RETRIEVE, JOIN, and MERGE, that might read a column-partitioned source.

    using rowid Spool k (Last Use) built from m2 column partitions

     

    This phrase indicates that the rowID Spool k is built containing the rowIDs of rows in the table that qualify, and then the column‑partitioned table or join index is read, driven by this rowID spool.

    Up to m2 column partitions of the column‑partitioned table or join index may need to be accessed to evaluate predicates and build the rowID spool.

    There are column partition contexts available for each of the m2 column partitions.

    For this phrase to occur, the column‑partitioned table or join index does not have row partitioning.

    m2 ≥ 2.

    One of the column partitions accessed may be the delete column partition.

    Not all of the m2 column partitions may actually need to be accessed if no rows qualify.

    This phrase is used in AMP steps such as RETRIEVE, JOIN, and MERGE that might read a column-partitioned source.

    using rowid Spool k (Last Use) built from m2 column partitions (c2 contexts)

     

    This phrase indicates that a rowID spool is created that contains the rowIDs of rows in the table or join index that qualify. Then the column‑partitioned table or join index is read, driven by this rowID spool.

    Up to m2 column partitions of the column‑partitioned table or join index may need to be accessed using c2 column-partition contexts to evaluate predicates and build the rowID spool. For this phrase to occur, the column‑partitioned table or join index does not have row partitioning.

    k is the spool number for the rowID spool.

    m2 ≥ 2.

    2 ≤ c2 ≤ m2-1, and c2 is equal to or one less than the number of available column-partition contexts.

    One of the column partitions accessed might be the delete column partition. Not all of the m2 column partitions might need to be accessed if no rows qualify.

    This phrase is used in AMP steps such as RETRIEVE, JOIN, and MERGE that might read a column-partitioned source.

    using rowid Spool k (Last Use) built from n2 combined partitions (m2 column partitions)

     

    This phrase indicates that the rowID Spool k is built containing the rowIDs of rows in the table or join index that qualify, and then the column‑partitioned table or join index is read, driven by this rowID spool.

    The rows and columns for up to n2 combined partitions of the table or join index (which has both row and column partitioning in this case) might need to be accessed to evaluate predicates and build the rowID spool.

    k is the spool number for the rowID spool.

    There are column partition contexts available for each of the m2 column partitions.

    m2 ≥ 2.

    One of the column partitions accessed might be the delete column partition.

    Not all of the m2 column partitions might need to be accessed if no rows qualify.

    This phrase is used in AMP steps such as RETRIEVE, JOIN, and MERGE that might read a column-partitioned source.

    using rowid Spool k (Last Use) built from n2 combined partitions (m2 column partitions and c2 contexts)

     

    This phrase indicates that a rowID spool is created that contains the rowIDs of rows in the table that qualify. Then the column‑partitioned table is read, driven by this rowID spool. This is the last usage of this rowID spool, so it can be deleted.

    k is the spool number for the rowID spool.

    The rows and columns for up to n2 combined partitions of the table or join index, which has both row and column partitioning, might need to be accessed to evaluate predicates and build the rowID spool.

    For the column-partitioning level, c2 column-partition contexts are used to access up to m2 column partitions.

    n2 ≥ 1 and m2 ≥ 2.

    2 ≤ c2 ≤ m2-1, and c2 is equal to or one less than the number of available column-partition contexts.

    One of the column partitions that might be accessed is the delete column partition. Not all of the m2 column partitions might need to be accessed if no rows qualify.

    This phrase is used in AMP steps such as RETRIEVE, JOIN, and MERGE that might read a column‑partitioned source.

    we compute the table-level summary statistics from spool n and save them into spool m

     

    This phrase indicates that Teradata Database takes rolled up pre‑aggregated statistics information from spool n, computes table‑level summary statistics from that data, and saves the summary statistics it computed in spool m.

    we do a BMSMS... (bit map set manipulation step)

     

    BMSMS (bit map set manipulation step) is a method that uses NUSI bit mapping to handle weakly selective secondary indexes that have been ANDed.

    we do a GEOSPATIAL COLLECT STATISTICS summary step …

     

    This phrase indicates that Teradata Database uses a SUM step to aggregate geospatial NUSI statistics when index statistics are collected on a geospatial column.

    we do an ABORT test …

     

    An ABORT or ROLLBACK statement was detected.

    we do [an all-AMPs | a single-AMP | a group-AMPs] DISPATCHER RETRIEVE step from Spool 2 (Last use) by way of an all-rows scan and send the rows back to the Dispatcher.

     

    This phrase indicates that Teradata Database uses an all-AMPs, a single-AMP, or a group-AMPs step to retrieve rows from the spool. The consolidated rows from multiple AMPs are then sent back to the Dispatcher. This step is used to process row-based triggers, noncorrelated scalar subqueries, and so on.

    we do [an all-AMPs | a single-AMP | a group-AMPs] FEEDBACK RETRIEVE step from Spool n (Last Use) of Spool n statistics.

     

    This phrase indicates that Teradata Database uses an all‑AMPs, a single-AMP, or a group-AMPs step to retrieve statistics. When the statistics are computed while the spool is built, a feedback retrieve step consolidates the AMP-local statistics into global statistics and sends them back to the Optimizer.

    we do [an all-AMPs | a single-AMP | a group-AMPs] FEEDBACK RETRIEVE step from n% of Spool n to collect statistics.

     

    This phrase indicates that it was not possible for Teradata Database to generate statistics while the spool was being generated, so the statistics were derived by sampling with the feedback retrieve step.

    we do [an all-AMPs | a single-AMP | a group-AMPs] FEEDBACK RETRIEVE step from Spool n (Last use).

     

    This phrase indicates that the Teradata Database optimizer generated results feedback steps.

    we do an all-AMPs SUM step to aggregate from database_name.table_name by way of an all-rows scan with no residual conditions, grouping by field1 (database_name.table_name.column_name_1).

     

    This phrase indicates that Teradata Database uses an all‑AMPs SUM step to aggregate statistics information from a single column of table or join index table_name in database or user database_name and groups them by column_name_1.

    we do an all-AMPs SUM step to aggregate from database_name.table_name by way of an all-rows scan with no residual conditions, grouping by field1 (database_name.table_name.column_name_1, database_name.table_name.column_name_2)

     

    This phrase indicates that Teradata Database uses an all‑AMPs SUM step to aggregate statistics information from table or join index table_name in database or user database_name and groups it by (column_name_1, column_name_2).

    For this case, column_name_1 and column_name_2 are highly nonunique. Because of this, the optimizer made a cost‑based decision to pre-aggregate on those two columns and to roll up the result to individual columns. This optimization ensures that the base table is read only once and the aggregations on the individual columns run faster because they are working on pre-aggregated data.

    This optimization is used only for recollecting statistics, and is possible only when the applicable statistics are specified in a single COLLECT STATISTICS request.

    we do an all-AMPs SUM step to aggregate from database_name.table_name by way of a traversal of index # n without accessing the base table with no residual conditions, grouping by field1 (database_name.table_name.column_name_1, database_name.table_name.column_name_2).

     

    This phrase indicates that Teradata Database uses an all‑AMPs SUM step to aggregate statistics information from multiple columns of table or join index table_name in database or user database_name and groups them by (column_name_1, column_name_2).

    For this particular case, single‑column statistics are collected for column_name_1 and for column_name_2, and multicolumn statistics are collected for (column_name_1, column_name_2).

    The aggregation for the multicolumn case is done first. The result of this aggregation is used to roll up the statistics for the individual statistics on column_name_1 and column_name_2. This way, the base table is read only once, and the aggregations on the individual columns run faster because they work on the pre-aggregated data.

    This optimization is possible only when all three statistics are specified in a single COLLECT STATISTICS request.
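
    For example, a recollection request of the following hypothetical form specifies all three statistics in a single COLLECT STATISTICS request, which makes this optimization possible:

         COLLECT STATISTICS
           COLUMN (column_name_1),
           COLUMN (column_name_2),
           COLUMN (column_name_1, column_name_2)
         ON database_name.table_name;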

    we do a SMS (set manipulation step) …

     

    Combine rows under control of a UNION, EXCEPT, MINUS, or INTERSECT operator.

    we lock a distinct [database_name].”pseudo table” for [locking_severity] on a RowHash for deadlock prevention

     

    This phrase indicates the pseudo lock that is used to serialize primary and fallback RowHash locks on a dictionary table for DDL processing. See “Pseudo Table Locks” on page 717 for details.

    we lock [database_name].[table_name] for exclusive use on a reserved RowHash to prevent global deadlock

     

    This phrase indicates the proxy lock that serializes the exclusive lock for database_name.table_name in the lock phrase that follows this one.

    we lock [database_name].[table_name] for [lock_severity]

    we lock [database_name].[table_name] for [lock_severity] on a single partition

    we lock [database_name].[table_name] for [lock_severity] on n partitions

     

    The Lock Manager sets a lock of the indicated severity on database_name.table_name. The last phrase occurs only if DDL processing is occurring on the DBC.AccessRights dictionary table.

    we lock [database_name].[table_name] for [locking_severity] on a [locking_level] for deadlock prevention

     

    The Lock Manager sets a lock for the indicated locking severity at the indicated locking level on database_name.table_name.

    [locking_level] can be one of the following levels:

  • reserved RowHash
  • reserved RowHash in a single partition
  • reserved RowHash in n partitions
  • reserved RowHash in all partitions

    The first level is used for a non-row-partitioned table, and levels 2 through 4 are used for row-partitioned tables.

    we lock [databasename].[tablename] for single writer on a reserved RowHash to prevent global deadlock

    we lock [databasename].[tablename] for single writer on a reserved RowHash in all partitions to prevent global deadlock

    we lock [databasename].[tablename] for single writer

    we lock [databasename].[tablename] for single writer on a reserved RowHash in all partitions to serialize concurrent updates on the partition-locked table to prevent global deadlock

    we lock [databasename].[tablename] for single writer to serialize concurrent updates on the partition-locked table

     

    The first phrase indicates a proxy lock that is used for a non-row-partitioned table.

    The second phrase indicates a proxy lock that is used for a row-partitioned table.

    Both the first and second phrases indicate serialization locks for the proxy lock in the third phrase. A table-level single writer lock allows concurrent reads on the data that is not rowhash write locked.

    The fourth phrase indicates a proxy lock used to serialize the all-AMP single-writer lock in the fifth phrase. This type of single-writer lock is used in certain cases of partition-locked bulk DML, where NUSI, USI, or RI maintenance on the target table is required.

    The fifth phrase indicates a single-writer lock that allows concurrent reads on the data in a table that is not write locked during bulk DML processes.

    we save the UPDATED STATISTICS for ('a,b') from Spool m into Spool n, which is built locally on a single AMP derived from the hash of the table id. Statistics are collected since the age of the statistics (y days) exceeds the time threshold of x days and the estimated data change of z% exceeds the system-determined change threshold of zx%.

     

    The user specified a time threshold that is met and the specified system change threshold is also met, so Teradata Database recollects the requested statistics.

    we save the UPDATED STATISTICS for ('a,b') from Spool m into Spool n, which is built locally on a single AMP derived from the hash of the table id. Statistics are collected since the estimated update of z% exceeds the system-determined update threshold of x%.

     

    The system‑determined change threshold is met and the columns of interest have been updated after the last recollection of statistics, so Teradata Database recollects the requested statistics.

    we save the UPDATED STATISTICS for ('a,b') from Spool m into Spool n, which is built locally on a single AMP derived from the hash of the table id. Statistics collected since the estimated data change could not be determined.

     

    The Optimizer cannot reliably estimate the cardinality changes and update activity on the related columns, so Teradata Database recollects the requested statistics.

    This happens when the Optimizer detects one of the following four conditions:

  • UDI counts are not reliable because they are not consistent with dynamic AMP sampling or reliable trends.
  • The target table is small, such that aggregation is estimated to take less than 0.5 seconds.
  • There is no reliable trend on the change of major statistics.
  • Sampled statistics in the latest collection did not follow the statistics estimated by trending.

    we save the UPDATED GEOSPATIAL STATISTICS for column from Spool m (Last Use) into Spool n

     

    Teradata Database copies the updated geospatial NUSI statistics collected on column locally from spool m into spool n.

    We send an END PLAN FRAGMENT step for plan fragment n.

     

    This phrase is always part of a dynamic plan. The step occurs at the end of each plan fragment for a dynamic plan except for the last.

    n indicates the number of the plan fragment. There is no step in the plan to indicate the beginning of a plan fragment.

    For a dynamic plan, the first step begins the first plan fragment, and the first step after an END PLAN FRAGMENT step begins the next plan fragment. The last step of a dynamic plan ends the last plan fragment; there is no END PLAN FRAGMENT step for the last plan fragment. Therefore, both the total number of plan fragments and the number of the last plan fragment equal the fragment number in the last END PLAN FRAGMENT step plus 1. For example, if the last END PLAN FRAGMENT step is for plan fragment 3, the plan has 4 plan fragments.

    We SKIP collecting STATISTICS for ('a,b'), because the age of the statistics (n days) does not exceed the user‑specified time threshold of m days and the estimated data change of o% does not exceed the user‑specified change threshold of p%.

     

    The user specified both a time threshold and a change threshold. Neither threshold is met, so Teradata Database does not recollect the specified statistics.

    We SKIP collecting STATISTICS for ('a,b'), because the estimated data change of 5% does not exceed the user-specified change threshold of 20%.

     

    The user specified a data change threshold that the statistics do not meet, so Teradata Database does not recollect the specified statistics.
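A sketch of how such a skip can arise, assuming a user-defined change threshold on the statistics (names and values are illustrative):

```sql
-- Illustrative only: with a 20 percent change threshold, recollection
-- requests are skipped while the estimated data change stays below 20%.
COLLECT STATISTICS
  USING THRESHOLD 20 PERCENT
  COLUMN (a, b)
ON my_table;

-- A later recollection request such as
--   COLLECT STATISTICS COLUMN (a, b) ON my_table;
-- would then produce the SKIP phrase above if, for example, only 5%
-- of the data is estimated to have changed.
```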

    which is duplicated on all AMPs …

     

    Relocating data in preparation for a join.

    which is redistributed by hash code to all AMPs …

    Relocating data by hash code in preparation for a join.

    which is redistributed by the hash code of ([database_name].table_name.column_expression)

     

    which is redistributed by the hash code of ([database_name].table_name.column_expression) to few AMPs.

     

    which is redistributed by the hash code of ([database_name].table_name.column_expression) to few or all AMPs.

     

    which is redistributed by the hash code of ([database_name].table_name.column_expression) to all AMPs.

    Relocating data by the hash code of the specified column expression based on a column set from table_name in preparation for a join.

    which is redistributed randomly …

     

    When you specify HASH BY RANDOM for an INSERT … SELECT request, the Optimizer redistributes blocks of selected rows randomly before they are optionally ordered locally (if you also specify a LOCAL ORDER BY clause) and then inserted locally into the target table. The Optimizer does this by taking a generated response spool, generating a random hash value for it, and changing the existing row hash value before redistributing the rows. This is useful when the result of the SELECT operation does not distribute evenly.
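A hedged sketch of such a request, assuming HASH BY RANDOM and LOCAL ORDER BY clauses on INSERT … SELECT (table and column names are hypothetical):

```sql
-- Illustrative only: redistribute the selected rows randomly across AMPs,
-- then order them locally on each AMP before inserting into the target.
INSERT INTO target_cp_table
SELECT order_id, order_date, amount
FROM   staging_orders
HASH BY RANDOM
LOCAL ORDER BY order_date;
```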