Usage Notes - Teradata Database

SQL Data Manipulation Language

Product
Teradata Database
Release Number
16.10
Published
June 2017
Language
English (United States)
Last Update
2018-04-25
dita:id
B035-1146
Product Category
Teradata® Database

Locks and Concurrency

An INSERT operation sets a WRITE lock for the table being updated at the appropriate level:

  • Table-level lock for all rows in the table on all AMPs
  • Partition lock for all rows in the selected partitions on all AMPs
  • Rowhash for the rows with the same hash value on one AMP or rowkey for the rows with the same partition and hash values on one AMP

If the target table is load isolated and the insert is not an isolated load operation, then an EXCLUSIVE lock is applied at that level, instead of a WRITE lock. If the WITH NO ISOLATED LOADING option is specified or isolated load operations are disabled on the table or in the session, then the insert is not an isolated load operation. For information on the CREATE TABLE WITH ISOLATED LOADING option and the SET SESSION FOR ISOLATED LOADING statement, see SQL Data Definition Language Syntax and Examples.

The lock set for SELECT subquery operations depends on the isolation level for the session, the setting of the AccessLockForUncomRead DBS Control field, and whether the subquery is embedded within a SELECT operation or within an INSERT request.

Transaction         DBS Control              Default Lock for Outer SELECT    Default Lock for SELECT Operations
Isolation Level     AccessLockForUncomRead   and Ordinary SELECT Subqueries   Embedded Within an INSERT Request
SERIALIZABLE        FALSE                    READ                             READ
SERIALIZABLE        TRUE                     READ                             READ
READ UNCOMMITTED    FALSE                    READ                             READ
READ UNCOMMITTED    TRUE                     ACCESS                           READ
See also:
  • “SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL” in SQL Data Definition Language
  • SQL Request and Transaction Processing
  • Utilities

DEFAULT Function Option

You can use the DEFAULT function to return, and then insert, the default value for a column based on either its position within the VALUES expression list or, if an explicit column name is specified along with the DEFAULT function, its name.
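For example, assuming a hypothetical table orders whose order_status column has a defined default, either of the following requests inserts that default value:

     INSERT INTO orders (order_id, order_status)
     VALUES (1001, DEFAULT);

     INSERT INTO orders (order_id, order_status)
     VALUES (1002, DEFAULT(order_status));

In the first request, the column for the DEFAULT function is derived from its position in the column name list; in the second, the column is named explicitly.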

DEFAULT VALUES Option

For a table with all columns defined as in the following table, an INSERT defined with the DEFAULT VALUES keywords adds a row consisting of defined default values (where indicated), identity values for identity columns, and nulls for other columns, where defined:

Nullable?   Default Defined?   Value Inserted
Yes         No                 Null
Yes         Yes                The defined default value
No          Yes                The defined default value

This INSERT occurs when the row satisfies the conditions imposed by the table definition. Otherwise an error is returned.

Embedded SQL also supports the DEFAULT VALUES option for INSERT.

When inserting into a table with an identity column, the system always generates a number for the identity column, whether it is defined as GENERATED ALWAYS or as GENERATED BY DEFAULT.

In the following example, if table_1 has an identity column, the system always generates a value for it:

     INSERT table_1 DEFAULT VALUES;

When any non-identity column in the table is defined as NOT NULL and does not have defined DEFAULT, an error is returned (ANSI session mode) or the transaction aborts (Teradata session mode).

When a row is otherwise valid but duplicates an existing row and the table has been defined not to accept duplicate rows, an error is returned (ANSI session mode) or the transaction aborts (Teradata session mode).

Inserting Into Columns Without Specifying a Value

The following rules apply to an INSERT operation that does not assign values for every column in a new row:
  • If the column is not declared as NOT NULL and no default value is declared, nulls are inserted.
  • If the column is declared as NOT NULL and no default value is declared, an error is returned (ANSI session mode) or the transaction aborts (Teradata session mode).
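For example, assuming a hypothetical table employees in which phone_nbr is nullable with no defined default, the following request inserts a null for phone_nbr:

     INSERT INTO employees (emp_id, emp_name)
     VALUES (1042, 'Lee');

If phone_nbr were instead declared NOT NULL without a default, the same request would return an error in ANSI session mode or abort the transaction in Teradata session mode.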

Inserting Into Distinct UDT Columns

To insert into a distinct column, either of the following must be true:
  • The inserted value must be of the same distinct type as the column.
  • There must exist a cast, either system-generated or user-defined, that converts the type of the inserted value to the distinct type of the column, and that cast must have been created with the AS ASSIGNMENT option. See CREATE CAST in SQL Data Definition Language.
A distinct value can be constructed from any of the following:
  • A cast expression where the target is the same distinct type.
  • An instance method invocation that returns the same distinct type.
  • A UDF invocation that returns the same distinct type.
  • A column reference.
  • A null.

For example, suppose you have the following two table definitions:

     CREATE TABLE table_1 (
       column1 euro,
       column2 INTEGER)
     UNIQUE PRIMARY INDEX(column2);
     CREATE TABLE table_2 (
       column1 euro,
       column2 INTEGER)
     UNIQUE PRIMARY INDEX(column2);

For this case, the information in the following table is true:

Example Comment
INSERT INTO table_1
VALUES (5.64, 1);	
Valid if there is a cast with AS ASSIGNMENT, whether system-generated or user-defined, to convert a DECIMAL type to the distinct type euro.
INSERT INTO table_1
VALUES (CAST(5.64 AS euro), 1);	
Valid because an explicit cast is specified for conversion from the source type euro.
INSERT INTO table_1
SELECT CAST(us_dollar_col AS euro), c2
FROM us_sales;	
Valid if the cast from us_dollar to euro is valid.
INSERT INTO table_1
SELECT column1.roundup(0), column2
FROM table_2;
Valid if the method roundup() returns the euro type.
INSERT INTO table_1 						
SELECT column1, column2 from table_2;	
Valid column reference.
INSERT INTO table_1
VALUES (NULL, 8);
Valid because the request inserts a null into a distinct type column.

Inserting Into Structured UDT Columns

To insert into a structured type column, either of the following must be true:

  • The inserted value must be of the same structured type.
  • There must exist a cast that converts the type of the inserted value to the structured type of the column, and the cast was created with the AS ASSIGNMENT option. See the CREATE CAST statement in SQL Data Definition Language .

A structured type can be constructed from any of the following:

  • A NEW specification (constructor method invocation).
  • A constructor function invocation.
  • A UDF invocation that returns the same structured type.
  • Mutator method invocations.
  • An instance method invocation that returns the same structured type.
  • A cast expression where the target data type is the same structured type.
  • A column reference.
  • A null.

For example, suppose you have the following two table definitions:

     CREATE TABLE table_1 (
       column1 address,
       column2 INTEGER)
     UNIQUE PRIMARY INDEX(column2);
     CREATE TABLE table_2 (
       column1 address,
       column2 INTEGER)
     UNIQUE PRIMARY INDEX(column2);

For this case, the information in the following table is true:

Example Comment
INSERT INTO table_1
VALUES ('17095 Via Del Campo;92127', 1 );
Valid if there is a cast with AS ASSIGNMENT that converts a string to the structured type address.
USING (addr VARCHAR(30), c2 INTEGER)
INSERT INTO table_1 (:addr, :c2 );	
Valid if there is a cast with AS ASSIGNMENT that converts a string to the structured type address.
INSERT INTO table_1
VALUES (NEW address('17095 Via Del Campo', '92127'), 1 );	
Valid invocation of a constructor method.
USING (street varchar(20), zip char(5))
INSERT INTO table_1
VALUES (NEW address(:street, :zip), 2 );	
Valid invocation of a constructor method with host variables.
INSERT INTO table_1
VALUES ( NEW address(), 3 );	
Valid invocation of a constructor function.
INSERT INTO table_1
VALUES ( NEW address().street('17095 Via Del Campo').zip('92127'), 4 );	
Valid invocations of mutator methods:
  1. The constructor function is invoked.

    The result is an address value whose attribute values are set to their defaults.

  2. The mutator for the street attribute is invoked.

    The result is an updated address value with its street attribute modified.

  3. The mutator for the zip attribute is invoked.

    The result is another updated address value with its zip attribute modified. This result also contains the previous change to the street attribute.

  4. The result address value is inserted into column1 of table_1.
INSERT INTO table_1
  SELECT empl.residence(), table_2.column2
  FROM table_2, table_3;	
Valid if method empl.residence() returns the address data type.
INSERT INTO table_1
SELECT CAST(intl_addr_col AS address), c2
FROM table_x;
Valid if the cast from intl_addr to the structured type address is valid.
INSERT INTO table_1
SELECT column1, column2
FROM table_2;	
Valid column reference.
INSERT INTO table_1
VALUES (NULL, 8);
Valid insertion of a null into a structured type column.

Inserting Into Row-Partitioned Tables

This section lists the rules for inserting rows into row-partitioned tables.

  • Inserting rows into empty row partitions is optimized to avoid transient journaling of each row inserted into those partitions. This optimization can only be applied to tables that are not defined with referential integrity relationships.
  • The outcome of partitioning expression evaluation errors, such as divide-by-zero errors, depends on the session mode.
Session Mode   Work Unit That Rolls Back When a Partitioning Expression Evaluation Error Occurs
ANSI           The request that contains the aborted request.
Teradata       The transaction that contains the aborted request.
  • When inserting rows into a single-level row-partitioned table for which the partitioning expression is not a RANGE_N or CASE_N function, the partition expression must result in a nonnull value between 1 and 65,535 after casting to INTEGER, if the data type is not INTEGER.
  • When inserting rows into a row-partitioned table, each of the partitioning expressions (which must be RANGE_N or CASE_N functions) for that row must result in a nonnull value.
  • When inserting rows into a base table that cause an insert into a row-partitioned join index, each of the partitioning expressions for that join index must result in a nonnull value.
  • When inserting rows into a base table that cause an insert into a single-level row-partitioned join index, for a partitioning expression that is not RANGE_N or CASE_N, the result of the partitioning expression must be a nonnull value between 1 and 65,535 after casting to INTEGER, if the data type is not INTEGER.
  • When inserting rows into a base table that cause an update of a join index row in a row-partitioned join index, each of the partitioning expressions for that updated index row must result in a nonnull value.
  • When inserting rows into a base table that cause an update of a join index row in a row-partitioned join index with single-level row partitioning where the partitioning expression is not RANGE_N or CASE_N, the result of the partitioning expression must be a nonnull value between 1 and 65,535 after casting to INTEGER, if the data type is not INTEGER.
  • You cannot assign either a value or a null to the system-derived columns PARTITION or PARTITION#L1 through PARTITION#L62 in an insert operation.
  • The session mode and session collation at the time the row-partitioned table was created does not have to match the current session mode and collation for the insert operation to succeed. This is because the partition a row is to be inserted into is determined by evaluating the partitioning expression on partitioning column values using the table’s session mode and collation.
  • Collation has the following implications for inserting rows into tables defined with a character partitioning:
    • If the collation for a row-partitioned table is either MULTINATIONAL or CHARSET_COLL, and the definition for the collation has changed since the table was created, Teradata Database aborts any request that attempts to insert a row into the table and returns an error to the requestor.
    • If a noncompressed join index with character partitioning under either a MULTINATIONAL or CHARSET_COLL collation sequence is defined on a table, and the definition for that collation has changed since the join index was created, the system aborts any request that attempts to insert a row into the table and returns an error to the requestor, whether or not the insert would have modified rows in the join index.
  • If a partitioning expression for a table or noncompressed join index involves Unicode character expressions or literals, and the system has been backed down to a release that has Unicode code points that do not match the code points that were in effect when the table or join index was defined, Teradata Database aborts any attempts to insert rows into the table and returns an error to the requestor.

Inserting Into Global Temporary Tables

Inserting into a global temporary table creates an instance of that table in the current session.
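For example, assuming a hypothetical global temporary table gt_sales has already been created, the first insert in a session materializes a local instance of the table, which then holds the inserted row:

     INSERT INTO gt_sales (item_nbr, item_count)
     VALUES (100, 5);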

Inserting Into NoPI Tables

You can use INSERT to insert rows into NoPI tables. Teradata Parallel Data Pump operations use Array INSERT on NoPI tables to load data. You can also perform batch inserts of data into NoPI tables using INSERT … SELECT operations. Inserts are transient journaled in the same way they are for primary indexed tables. For more information about NoPI tables, see Database Design .

The following INSERT examples are based on these table definitions:

     CREATE TABLE sales,
     FALLBACK (
       ItemNbr   INTEGER NOT NULL,
       SaleDate  DATE FORMAT 'MM/DD/YYYY' NOT NULL,
       ItemCount INTEGER)
     PRIMARY INDEX (ItemNbr);
     CREATE TABLE newsales,
     FALLBACK (
       ItemNbr   INTEGER NOT NULL,
       SaleDate  DATE FORMAT 'MM/DD/YYYY' NOT NULL,
       ItemCount INTEGER)
     NO PRIMARY INDEX;

For INSERT requests, the system randomly selects an AMP to which to send the row or rows. This is true both for simple requests like the one below and for more complex requests that insert multiple arrays of rows.

     INSERT INTO newsales (100, '11/01/2007', 10);

The AMP then converts the row or array of rows into the proper internal format and appends them to the end of the target table (newsales in this example).

When inserting data from a source table into a NoPI target table using an INSERT … SELECT request like the one below, data from the source table is not redistributed. The data is locally appended to the target table.

If the SELECT is constrained from a source table, and the rows returned from the AMPs are skewed, the NoPI or NoPI column-partitioned table can become skewed, because the set of rows is locally inserted into the table without redistribution.

     INSERT INTO newsales
     SELECT *
     FROM sales;

Rules for Using HASH BY or LOCAL ORDER BY to Insert Rows

The following rules apply:
  • You can only specify a HASH BY clause if the target table or the underlying target view table is a NoPI table. If the table has a primary index or Primary AMP index, the system returns an error or failure message, depending on the session mode.
  • If you do not specify a HASH BY clause, the system does not redistribute the source rows before inserting into the target NoPI table.

    If a NoPI target table has row partitioning that is different than the source table, the system sorts the source rows locally on the internal partition number, then copies the rows locally into the target table. The internal partition number is computed based on the row-partitioning of the target table.

  • If you specify a HASH BY clause with a hash list, the system first redistributes the selected rows by a hash value based on the hash list.

    If you also specify a LOCAL ORDER BY clause or if the target table is row-partitioned, the system orders the rows locally and inserts them locally into the target table or underlying target view table.

    This is useful if the result of the subquery does not provide an even distribution of the rows. If the target table or underlying target view table is also column-partitioned, the locally ordered hash redistribution might also be useful to distribute equal values of a column to the same AMP, which might then enable effective autocompression of the column partitions with the columns on which the hash value is calculated.

    Because the object you specify for a HASH BY clause is an expression, the expression can be a function call such as RANDOM(n, m).

    HASH BY RANDOM(1, 2000000000) is useful to redistribute each individual selected row when there is no particular column set on which to hash distribute the rows, and when a more even distribution is needed than the HASH BY RANDOM clause provides.

    A poor choice of a hash expression can lead to a very uneven distribution, similar to when a poor choice is made for primary index columns.

  • If you specify a HASH BY RANDOM clause, the system first redistributes data blocks of selected rows randomly.

    If you also specify a LOCAL ORDER BY clause, the system orders the rows locally and inserts them locally into the target table or underlying target view table.

    This is useful if the result of the subquery does not provide an even distribution of rows. Distributing data blocks is more efficient than distributing individual rows and usually provides a nearly even distribution. However, distributing individual rows using an expression like HASH BY RANDOM(1,2000000000) can provide a more even distribution of rows, which might be necessary in some cases.

  • You can specify a LOCAL ORDER BY clause if the target table or the underlying target view table is a PI, PA, or NoPI table. If you specify a LOCAL ORDER BY clause, the system orders the selected rows locally before inserting them locally into the target table or underlying target view table. The local ordering is determined as follows: if the target table is row partitioned, the rows are ordered first by their internal partition numbers, computed from the row partitioning of the target table (with the column partition number, if the target table or underlying target view table is column partitioned, taken as 1), and then by the ordering expressions, applied after redistribution if you have also specified a HASH BY clause.

    If the target table or underlying target view table is also column-partitioned, this might allow for more effective autocompression of the column partitions with the columns on which the ordering is done.

  • Teradata Database first resolves column references specified in a HASH BY hash list to the select expression in the subquery expression list that corresponds to a matching column name of the target table or view. If it does not find a matching column, it resolves the reference to a result or underlying column in the subquery, following the existing rules for resolving column references in an ORDER BY clause of a SELECT request.
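For example, the following sketch, based on the sales and newsales tables defined earlier, redistributes individual selected rows randomly and orders them locally before the insert; the hash expression shown is illustrative only:

     INSERT INTO newsales
     SELECT ItemNbr, SaleDate, ItemCount
     FROM sales
     HASH BY RANDOM(1, 2000000000)
     LOCAL ORDER BY ItemNbr;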

Multistatement and Iterated INSERT Requests

Teradata Database provides statement independence for multistatement INSERT requests and iterated INSERT requests. Statement independence enables multistatement or iterated INSERT requests to roll back only the statements that fail within a transaction or multistatement request and not all of the individual statements within the transaction.

These forms of multistatement requests support statement independence:

  • INSERT; INSERT; INSERT;
  • BT; INSERT; INSERT; ET;
  • BT; INSERT; INSERT;
  • INSERT; INSERT; ET;
  • INSERT; COMMIT;
  • INSERT; INSERT; COMMIT;
The client software you are using must support statement independence to prevent all multistatement requests and iterated INSERT requests in a transaction or multistatement request from being rolled back. Refer to the documentation for your client software for information on support for statement independence.

Most statement data errors resulting from multistatement INSERT requests roll back only the failed requests within the transaction or multistatement request; all of the statements within the transaction or request are aborted only if every INSERT request within the multistatement request aborts because of data-related errors. Even in that case, Teradata Database rolls back only the INSERT requests the transaction contains, not the transaction itself.

Statement independence supports the following multistatement INSERT data error types:

  • Column-level CHECK constraint violations
  • Data translation errors
  • Duplicate row errors for SET tables
  • Primary index uniqueness violations
  • Referential integrity violations
  • Secondary index uniqueness violations

Statement independence is not enabled for multistatement INSERT requests into tables defined with the following options:

  • Triggers
  • Hash indexes
  • Join indexes

Statement independence is also not supported for the following SQL features:

  • INSERT … SELECT requests
  • SQL stored procedures

Inserting When Using a DEFAULT Function

The following rules apply when using a DEFAULT function to insert rows into a table:

  • The DEFAULT function takes a single argument that identifies a relation column by name. The function evaluates to a value equal to the current default value for the column. For cases where the default value of the column is specified as a current built-in system function, the DEFAULT function evaluates to the current value of system variables at the time the request is executed.

    The resulting data type of the DEFAULT function is the data type of the constant or built-in function specified as the default unless the default is NULL. If the default is NULL, the resulting data type of the DEFAULT function is the same as the data type of the column or expression for which the default is being requested.

  • The DEFAULT function has two forms. It can be specified as DEFAULT or DEFAULT ( column_name ) . When no column name is specified, the system derives the column based on context. If the column context cannot be derived, the request aborts and an error is returned to the requestor.
  • The DEFAULT function without a column name can be specified in the expression list. If the INSERT request has a column list specified, the column name for the DEFAULT function is the column in the corresponding position of the column list. If the request does not have a column name list, the column name is derived from the position in the VALUES list.
  • The DEFAULT function without a column name in the INSERT request cannot be a part of an expression; it must be specified as a standalone element. This form of usage is ANSI compliant.
  • The DEFAULT function with a column name can be specified in an expression list. This is a Teradata extension to ANSI.
  • The DEFAULT function with a column name can be specified anywhere in the expression. This is a Teradata extension.
  • When there is no explicit default value associated with the column, the DEFAULT function evaluates to null.

For more information about the DEFAULT function, see SQL Functions, Operators, Expressions, and Predicates .

Using Scalar Subqueries in INSERT Requests

The following rules and restrictions apply to specifying scalar subqueries in INSERT and INSERT … SELECT requests:

  • You can specify a scalar subquery as a parameterized value in the value list of a simple INSERT request, but Teradata Database always interprets it as a noncorrelated scalar subquery. See Example: INSERT Using a Scalar Subquery.
  • You cannot specify a noncorrelated scalar subquery as a value in a value list that is assigned to an identity column in a simple INSERT request.
  • You can specify an INSERT request with scalar subqueries in the body of a trigger.

    However, you cannot specify a simple INSERT request with a scalar subquery in its value list in the body of a row trigger.

  • You can specify a scalar subquery in the SELECT component of an INSERT … SELECT request.
  • Teradata Database processes any noncorrelated scalar subquery specified in the SELECT component of an INSERT … SELECT in a row trigger as a single-column single-row spool instead of as a parameterized value.
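For example, assuming hypothetical tables region_totals and sales_2007, the following simple INSERT request supplies one value in its value list from a scalar subquery, which Teradata Database interprets as noncorrelated:

     INSERT INTO region_totals (region_name, total_count)
     VALUES ('West', (SELECT SUM(item_count) FROM sales_2007));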

Using INSERT … SELECT With Subqueries

You cannot specify an ORDER BY clause in the SELECT component of an INSERT … SELECT request when the SELECT is a subquery.

Using INSERT … SELECT With Tables That Have Row-Level Security

You can use INSERT … SELECT requests with tables that have row-level security if all the tables have exactly the same row-level security constraints.

Teradata Database does not execute the security policy UDF for row-level security constraints on the target table.
  • If the session executing the request does not have the appropriate OVERRIDE privilege to use the DML statement on the target table, Teradata Database takes the values for all row-level security constraint columns from the source table.
  • If the session has the appropriate OVERRIDE privilege, Teradata Database takes the constraint values from the source table unless they are provided as part of the INSERT … SELECT request.

Example: Application of Row-Level Security INSERT Constraint Functions for Single Statement INSERT Requests

This example shows how the INSERT constraint functions are applied for a single-statement INSERT request on a table that has the row-level security INSERT constraint.

An EXPLAIN statement is used to show the steps involved in the execution of the request and the outcome of the application of the constraint functions.

The statement used to create the table in this example is:

     CREATE TABLE rls_tbl(
       col1 INT,
       col2 INT, 
       classification_levels   CONSTRAINT,
       classification_categories CONSTRAINT);

The user’s session constraint values are:

     Constraint1Name LEVELS
     Constraint1Value 2
     Constraint3Name CATEGORIES
     Constraint3Value '90000000'xb

This EXPLAIN statement is used to show the steps involved in the execution of the INSERT request and the outcome of the application of the INSERT constraint functions.

       EXPLAIN INSERT rls_tbl(1,1,,);

The system returns this EXPLAIN text.

 *** Help information returned. 6 rows.
 *** Total elapsed time was 1 second.
Explanation
---------------------------------------------------------------------------
  1) First, we do an INSERT into RS.rls_tbl constrained by (
     RS.rls_tbl.levels = SYSLIB.INSERTLEVEL (2)), (
     RS.rls_tbl.categories = SYSLIB.INSERTCATEGORIES ('90000000'XB)).
     The estimated time for this step is 0.07 seconds.
  -> No rows are returned to the user as the result of statement 1.
     The total estimated time is 0.07 seconds.

Example: Row-Level Security INSERT and SELECT Constraints When User Lacks Required Privileges (INSERT...SELECT Request)

This example shows how the INSERT and SELECT constraints are applied when a user who does not have the required OVERRIDE privileges attempts to execute an INSERT...SELECT request on a table that has the row-level security INSERT and SELECT constraints.

The statements used to create the tables in this example are:

     CREATE TABLE rls_src_tbl(
       col1 INT,
       col2 INT, 
       classification_levels   CONSTRAINT,
       classification_categories CONSTRAINT);

     CREATE TABLE rls_tgt_tbl(
       col1 INT,
       col2 INT, 
       classification_levels   CONSTRAINT,
       classification_categories CONSTRAINT);

The user’s session constraint values are:

     Constraint1Name LEVELS
     Constraint1Value 2
     Constraint3Name CATEGORIES
     Constraint3Value '90000000'xb

Following is the INSERT statement:

       INSERT rls_tgt_tbl SELECT * FROM rls_src_tbl;

An EXPLAIN shows the INSERT and SELECT constraints applied during RETRIEVE step from rls_src_tbl with a condition of ("((SYSLIB.SELECTCATEGORIES ( '90000000'XB, RS.rls_src_tbl.categories ))= 'T') AND ((SYSLIB.SELECTLEVEL (2, rls_src_tbl.levels ))= 'T')").

Using INSERT … SELECT With a DEFAULT Function

The following rules apply when using a DEFAULT function to load a table using an INSERT … SELECT operation:
  • All of the rules listed for DEFAULT Function in SELECT Statements also hold for the SELECT subquery in an INSERT … SELECT request.
  • The DEFAULT function cannot be specified without a column name as its argument within the SELECT subquery of an INSERT … SELECT request.

For more information about the DEFAULT function, see SQL Functions, Operators, Expressions, and Predicates .

Using a PERIOD Value Constructor With INSERT

For the rules on using PERIOD value constructors, see SQL Functions, Operators, Expressions, and Predicates . For examples of how to use PERIOD value constructors in INSERT requests, see Example: INSERT Using a PERIOD Value Constructor.

INSERT and Scalar UDFs

You can specify a scalar UDF as a column value in the VALUES clause of an INSERT request. The rules for the invocation of a scalar UDF in a VALUES clause are as follows.

  • A scalar UDF that passes a value in the VALUES list of an INSERT request must return a value expression.
  • The arguments passed to a scalar UDF must be constants, USING values, or parameters that resolve to a constant.
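For example, assuming a hypothetical table daily_temps and a hypothetical scalar UDF fahrenheit_to_celsius that returns an INTEGER, the following request passes a constant argument and inserts the value the UDF returns:

     INSERT INTO daily_temps
     VALUES (DATE '2007-11-01', fahrenheit_to_celsius(72));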

INSERT, UDTs, and Stored Procedures

The following rules apply to insert operations, UDTs, and stored procedures:

  • You can insert into a UDT local variable of a stored procedure.
  • You can insert a UDT local variable into a table.
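A minimal sketch of both rules, reusing the distinct type euro and the table table_1 from the earlier examples; the procedure name and logic are illustrative only:

     CREATE PROCEDURE insert_euro_row (IN amount DECIMAL(8,2), IN id INTEGER)
     BEGIN
       DECLARE local_amount euro;
       -- insert into the UDT local variable
       SET local_amount = CAST(amount AS euro);
       -- insert the UDT local variable into a table
       INSERT INTO table_1 VALUES (local_amount, id);
     END;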

Inserting Rows Into Queue Tables

The first column of a queue table is defined as a Queue Insertion TimeStamp (QITS) column. The values in the column determine the order of the rows in the queue, resulting in approximate first-in-first-out (FIFO) ordering.

If you want the QITS value of a row to indicate the time that the row was inserted into the queue table, then you can use the default value, the result of CURRENT_TIMESTAMP, instead of supplying a value. If you want to control the placement of a row in the FIFO order, you can supply a TIMESTAMP value for the QITS column.

For a multistatement request containing multiple INSERT requests that do not supply values for the QITS column, the QITS values are the same for every row inserted.

If you want unique QITS values for every row in a queue table, you can do any of the following things:

  • Supply a TIMESTAMP value for the QITS column in every INSERT request.
  • Avoid multistatement requests containing multiple INSERT statements that do not supply values for the QITS column.
  • Add incremental offsets to the current timestamp for the QITS column value in each INSERT request.

    For example:

         INSERT shopping_cart(CURRENT_TIMESTAMP + INTERVAL '0.001', 100)
        ;INSERT shopping_cart(CURRENT_TIMESTAMP + INTERVAL '0.002', 200)
        ;INSERT shopping_cart(CURRENT_TIMESTAMP + INTERVAL '0.003', 300);

Regarding performance, an INSERT operation into a queue table has the following effects:

  • Does not affect response time when the system is not CPU-bound.
  • Is more expensive than an INSERT into a base table because it requires the update of an internal in-memory queue.

For details on queue tables and the queue table cache, see “CREATE TABLE” in SQL Data Definition Language .

Inserting Into Queue Tables Using Iterated Requests

If you use an INSERT request in an iterated request to insert rows into a queue table, you might have to limit the number of data records packed with each request. Limiting the pack factor minimizes the number of rowhash-level WRITE locks placed on the table and reduces the likelihood of deadlocks between those locks and the all-AMPs table-level READ lock placed by the internal row collection processing that queue tables use to update the internal queue table cache.

IF you use an INSERT request in an iterated request to insert rows into a queue table and all of the following conditions are true:
  • the queue table is not empty
  • either an INSERT request or a SELECT AND CONSUME request has already been performed on the queue table since the last system reset
  • rows are not updated or deleted from the queue table during the insert operation
THEN the number of data records that you pack with each request is not an issue.

IF any of the following conditions are true:
  • the queue table is empty
  • neither an INSERT request nor a SELECT AND CONSUME request has been performed on the queue table since the last system reset
  • rows are updated or deleted from the queue table during the insert operation
THEN pack a maximum of four data records with each request.

For example, if you use BTEQ to import rows of data into a queue table, use a maximum value of 4 with the BTEQ .SET PACK command.
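As a sketch, a BTEQ import script that honors this limit might look like the following. The data file name and the queue table layout are hypothetical:

        .SET PACK 4
        .IMPORT DATA FILE = cart_rows.dat
        .REPEAT *
        USING (qits TIMESTAMP(6), item INTEGER)
        INSERT INTO shopping_cart (:qits, :item);

The .SET PACK 4 command limits each iterated request to four data records.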

These conditions trigger the internal row collection processing used by queue tables to update the internal queue table cache.

For details on queue tables and the queue table cache, see “CREATE TABLE” in SQL Data Definition Language.

Valid INSERT Operations

An INSERT operation does not return an error message if any of the following is true:

  • In Teradata session mode, the operation attempts to insert a character string that is longer or shorter than that declared for the column. The string is automatically adjusted and inserted. This could result in improper strings for the Kanji1 character data type.
  • The operation uses a query and no rows are returned.

INSERT Operations That Are Not Valid

An INSERT operation causes an error or failure message to be returned if any of the following are true:

  • The operation attempts to assign a value that will result in a violation of a unique index specification.
  • The operation attempts to insert a row with no value for a column that has no default and is defined as NOT NULL.
  • The operation attempts to assign a nonnull value that violates a CHECK constraint declared for a column.
  • The operation attempts to assign a value that is of a different numeric type than that declared for the column and the assigned value cannot be converted correctly.
  • The operation attempts to assign a character value that is not in the repertoire of the destination character data type.
  • The operation attempts to insert a character string with trailing pad characters into a VARCHAR field, and the insert would make the row identical to another row except for the number of trailing pad characters.
  • In ANSI session mode, the operation inserts character data that must be truncated to fit the maximum length of the target column, and the truncation discards non-pad characters from the source data.
KANJI1 support is deprecated. KANJI1 is not allowed as a default character set. The system changes the KANJI1 default character set to the UNICODE character set. Creation of new KANJI1 objects is highly restricted. Although many KANJI1 queries and applications may continue to operate, sites using KANJI1 should convert to another character set as soon as possible.
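For example, an insert that supplies no value for a NOT NULL column that has no default fails. The table and column names here are illustrative only:

        CREATE TABLE orders
          (order_id INTEGER NOT NULL,
           qty      INTEGER NOT NULL);

        INSERT INTO orders (order_id) VALUES (100);  /* fails: no value for qty */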

Large Objects and INSERT

The behavior of truncated LOB inserts differs in ANSI and Teradata session modes. The following table explains the differences in truncation behavior.

Session Mode Result When Non-Pad Bytes Are Truncated on Insertion
ANSI An exception condition is raised. The INSERT fails.
Teradata An exception condition is not raised. The INSERT succeeds: the truncated LOB is stored.

Duplicate Rows and INSERT

When an insert operation would create a duplicate row, the outcome of the operation depends on how the table is defined. The system ignores trailing pad characters in character strings when comparing values for field or row duplication.

Table Definition Duplicate Row Insert Action
MULTISET with no UNIQUE constraints Inserted.
SET, or MULTISET with UNIQUE constraints Not inserted. An error is returned to the requestor.

When you insert into a table with an identity column, the error is returned only if the column is defined WITH NO CYCLE; an error can also occur in this case if an attempt is made to cycle. The system can return errors for other reasons, such as uniqueness violations, before a duplicate row violation can be reported.
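A minimal sketch of the SET table case, using hypothetical names:

        CREATE SET TABLE parts_set
          (part_no INTEGER,
           qty     INTEGER);

        INSERT INTO parts_set VALUES (1, 10);
        INSERT INTO parts_set VALUES (1, 10);  /* not inserted: duplicate row in a SET table */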

Duplicate Rows and INSERT … SELECT

If an INSERT using a SELECT subquery will create duplicate rows, the result depends on the table definition:

  • Duplicate rows are permitted in a MULTISET table only if no UNIQUE constraints or UNIQUE indexes are defined on the table.
  • Duplicate rows are not permitted in a SET table.

The following table summarizes the restrictions on duplicate rows and INSERT … SELECT:

Type of Table Duplicate Rows
MULTISET with no unique constraints Permitted.

Inserted duplicate rows are stored in the table.

MULTISET with unique constraints Not permitted. An error message is returned to the requestor.

The following constraints are considered to be unique in Teradata Database:

  • Unique primary index
  • Unique secondary index
  • Primary key
  • UNIQUE column constraint
  • GENERATED ALWAYS identity column constraint.

For nontemporal tables, unique secondary indexes, primary keys, and UNIQUE constraints are all implemented internally as unique secondary indexes.

For information about how USIs, PKs, and UNIQUE constraints are implemented for temporal tables, see ANSI Temporal Table Support and Temporal Table Support.

SET Not permitted.
  • In ANSI session mode, the system rejects the request and returns an error message to the requestor.

    If some other error, such as violating a uniqueness constraint, occurs first, then the system returns that error to the requestor rather than a duplicate row message.

  • In Teradata session mode, Teradata Database rejects the duplicate rows in the transaction, inserts the non-duplicate rows into the table, and does not return an error message to the requestor.

In Teradata session mode, if an INSERT … SELECT request specifies a SAMPLE clause that selects a set of rows from a source MULTISET table, and then inserts them into a target SET table, and the sampled row set contains duplicates, the number of rows inserted into the target SET table might be fewer than the number requested in the SAMPLE clause.

In Teradata session mode, the condition occurs because SET tables reject attempts to insert duplicate rows into them. The result is that the INSERT portion of the INSERT … SELECT operation inserts only distinct rows into SET target tables. As a result, the number of rows inserted into the target table can be fewer than the number specified in the SAMPLE clause.

For example, if an INSERT … SELECT request SAMPLE clause requests a sample size of 10 rows, and there are duplicate rows from the MULTISET source table in the collected sample, the system rejects the duplicate instances when it attempts to insert the sampled rows into the SET table and inserts only the distinct rows from the sample set. That is, you could request a sample of 10 rows, but the actual number of rows inserted into the target table could be fewer than 10 if there are duplicate rows in the sampled row set. Teradata Database does not return any warning or information message when this condition occurs.
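A sketch of this situation, with hypothetical table names:

        INSERT INTO target_set_tab
        SELECT *
        FROM   source_multiset_tab
        SAMPLE 10;

If the 10 sampled rows include duplicates, only the distinct rows are inserted into target_set_tab, and no warning or information message is returned.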

INSERT Process

An INSERT process performs the following actions:
  1. Sets a WRITE lock on the rowkey, partition, or table, as appropriate.
  2. Performs the entire INSERT operation as an all-or-nothing operation in which every row is inserted successfully or no rows are inserted.

    This is to prevent a partial insert from occurring.

Session Mode Unsuccessful INSERT Result
ANSI Rolls back the erring request only.
Teradata Rolls back the entire containing transaction.

The rules for rolling back multistatement INSERT requests for statement independence frequently enable a more relaxed handling of INSERT errors within a transaction or multistatement request. See Multistatement and Iterated INSERT Requests for the details of how Teradata Database handles failed INSERT operations in situations that involve statement independence.

The INSERT operation takes more processing time on a table defined with FALLBACK or a secondary, join, or hash index, because the FALLBACK copy of the table or index also must be changed.

Inserting Rows Through Views

Use caution when granting the privilege to insert data through a view because data in fields not visible to the user might be inserted when a row is inserted through a view.

The following rules apply to inserting rows through a view:

  • Both you and the immediate owner of the view must have the appropriate privileges.
  • The view must reference columns in only one table or view.
  • None of the columns in the view can be derived by using an expression to change values in the underlying table.
  • Each column in the view must correspond one to one with a column in the underlying table or view.
  • The view must include any column in the underlying table or view that is declared as NOT NULL.
  • No two view columns can reference the same column in the underlying table.
  • If the request used to define a view contains a WHERE clause, and WITH CHECK OPTION, all values inserted through that view must satisfy constraints specified in the WHERE clause.

    If a view includes a WHERE clause but does not include the WITH CHECK OPTION, then data can be inserted that is not visible through that view.
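For example, with a hypothetical view defined using a WHERE clause and WITH CHECK OPTION:

        CREATE VIEW west_emp AS
          SELECT emp_id, region
          FROM   employee
          WHERE  region = 'WEST'
          WITH CHECK OPTION;

        INSERT INTO west_emp VALUES (101, 'EAST');  /* rejected: fails the WHERE clause check */
        INSERT INTO west_emp VALUES (102, 'WEST');  /* accepted */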

Subqueries In INSERT Requests

An INSERT operation that uses a subquery, referred to as an INSERT … SELECT request, differs from a simple INSERT in that many rows can be inserted in a single operation, and the row values can come from more than one table.

The query specification must always include a FROM table_name clause.

Also, an INSERT request that includes a subquery must define a column name list when the following conditions exist:

  • The number of columns listed in the query specification differs from the number of columns in the table receiving new rows.
  • The order of columns listed in the query specification differs from the order the columns were defined in the table receiving new rows.
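For example, a column name list is required in this sketch because the query specification supplies values for only two of the target table columns. All names are illustrative:

        INSERT INTO monthly_totals (cust_id, total_amt)
        SELECT cust_id, SUM(amt)
        FROM   daily_sales
        GROUP BY cust_id;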

Also see Using Scalar Subqueries in INSERT Requests.

SELECT AND CONSUME Subqueries in INSERT Requests

An INSERT operation can use a SELECT AND CONSUME subquery to insert data from the row with the lowest value in the QITS column in the specified queue table and delete the row from the queue table.

The target table for the INSERT operation can be a base table or a queue table. If the target is a queue table, it can be the same queue table that is referenced in the SELECT AND CONSUME subquery.

The SELECT AND CONSUME part of the request behaves like a regular SELECT AND CONSUME request. For details, see SELECT AND CONSUME.

When the target table is a queue table, the INSERT part of the request behaves like an INSERT into a queue table. For details, see Inserting Rows Into Queue Tables.

Certain restrictions apply to triggers. An INSERT operation that uses a SELECT AND CONSUME subquery cannot be used as either of the following:

  • Triggering event statement that fires a trigger.
  • Triggered action statement fired by a triggering event.
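A minimal sketch of such an operation, assuming a queue table shopping_cart and an ordinary target table cart_archive:

        INSERT INTO cart_archive
        SELECT AND CONSUME TOP 1 *
        FROM shopping_cart;

The consumed row, the one with the lowest QITS value, is deleted from shopping_cart as part of the same operation.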

Data Takes the Attributes of the New Table

If the column attributes defined for a new table differ from those of the columns whose data is being inserted using INSERT, then the data takes on the attributes of the new table.

The source column names can differ from the destination column names in the new table, but the data is inserted correctly as long as the SELECT request lists the source column names in the same order as the corresponding destination columns in the CREATE TABLE request. This is true even if the new table has a column that is to contain data derived (either arithmetically or by aggregate operation) from the column data in the existing table.

Logging Errors For INSERT … SELECT Requests

Normally, an INSERT … SELECT request with error logging completes without any USI or RI errors. Exceptions to normal completion of an INSERT … SELECT request are processed as follows:

  • Not all types of errors are logged when you specify the LOGGING ERRORS option for an INSERT … SELECT request.
    • All local errors, also called data errors, are logged. These are errors that occur during row merge step processing, such as CHECK constraint, duplicate row, and UPI violation errors.
    • Errors that occur before the row merge step, such as data conversion errors detected in the RET AMP step before the MRG or MRM AMP steps, are not logged. See SQL Request and Transaction Processing.

  • When Teradata Database encounters USI or RI errors (or both) in the INSERT … SELECT operation, the following events occur in sequence:
    1. The transaction or request runs to completion.
    2. The system writes all error-causing rows into the error table.
    3. The system aborts the transaction or request.
    4. The system rolls back the transaction or request.

      Note that the system does not invalidate indexes, nor does it roll error table rows back, enabling you to determine which rows in the INSERT set are problematic and to determine how to correct them.

    If the number of errors in the request is large, running it to completion and then rolling back all the INSERT operations can significantly affect performance. To limit this problem, consider using the default error limit or specifying a WITH LIMIT OF error_limit clause with a relatively small value for error_limit. Unless you have a good reason for doing so, avoid specifying WITH NO LIMIT.

  • When an INSERT … SELECT operation encounters data errors only, their occurrence does not abort the transaction or request, the non-erring INSERT operations complete successfully, and the erring rows are logged in the error table so you can correct them.

The following rules and guidelines apply to logging errors in an error table for INSERT … SELECT loads:

  • Before you can log errors for INSERT … SELECT loads, you must first create an error table for the base data table into which you intend to do an INSERT … SELECT load. See “CREATE ERROR TABLE” in SQL Data Definition Language.
  • If error logging is not enabled and you submit an INSERT … SELECT bulk loading operation with the LOGGING ERRORS option specified, the system aborts the request and returns an error message to the requestor.
  • The LOGGING ERRORS option is not valid in a multistatement request.
  • Two basic types of errors can be logged when you specify the LOGGING ERRORS option:
    • Local errors
      Local errors are defined as errors that occur on the same AMP that inserts the data row. The following types of errors are classified as local errors:
      • Duplicate row errors, which occur only in ANSI session mode.

        The system silently ignores duplicate row errors that occur from an INSERT … SELECT into a SET table in Teradata session mode.

        Duplicate rows can also arise from these INSERT … SELECT insert situations:
        • The source table has duplicate rows.
        • An insert is not well-behaved, meaning that the insert is made on a different AMP than the failed update.
      • Duplicate primary key errors
      • CHECK constraint violations
    • Nonlocal errors
      Nonlocal errors are defined as errors that occur on an AMP that does not own the data row. The following types of errors are classified as nonlocal errors:
      • Referential integrity violations
      • USI violations

        An exception to this is the case where a USI violation is local because the USI is on the same set of columns as the primary index of a row-partitioned table. The system treats such an error as a nonlocal error, even though it is local in the strict definition of a local error.

    The response of the system to the two error types differs, as the following table explains.

IF this kind of error occurs … THEN the system records it in the error table, rejects the error-causing rows from the target table, and …
local completes the request or transaction successfully.
nonlocal lets the request or transaction run to completion in order to record all the error causing rows in the INSERT … SELECT load, then aborts the request or transaction and rolls back its inserts and updates.
both local and nonlocal lets the request or transaction run to completion in order to record all the error causing rows, then aborts the request or transaction and rolls back its inserts and updates.
  • The system does not handle batch referential integrity violations for INSERT … SELECT error logging. Because batch referential integrity checks are all-or-nothing operations, a batch referential integrity violation causes the system to respond in the following session mode-specific ways:
IF this session mode is in effect … THEN the erring …
ANSI request aborts and rolls back.
Teradata transaction aborts and rolls back.
  • The system does not handle error conditions that do not allow useful recovery information to be logged in the error table. Such errors typically occur during intermediate processing of input data, before it is built into a row format that corresponds to the target table.

    The system detects this type of error before the start of data row inserts and updates. The following are examples of these types of error:

    • UDT, UDF, and table function errors
    • Version change errors
    • Nonexistent table errors
    • Down AMP request against nonfallback table errors
    • Data conversion errors

      Conversion errors that occur during row inserts are treated as local data errors.

    The way the system handles these errors depends on the current session mode:

IF this session mode is in effect … THEN the erring …
ANSI request aborts and rolls back.
Teradata transaction aborts and rolls back.

The system preserves error table rows logged by the aborted request or transaction and does not roll them back.

The system inserts a marker row into the error table at the end of a successfully completed INSERT … SELECT request with logged errors.

Marker rows have a value of 0 in the ETC_ErrorCode column of the error table, and their ETC_ErrSeq column stores the total number of errors logged. All other columns in a marker row except for ETC_DBQL_QID and ETC_TimeStamp are set to null.
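Assuming a hypothetical error table named ET_sales_hist, a marker row check might look like this sketch:

        SELECT ETC_ErrSeq AS total_errors
        FROM   ET_sales_hist
        WHERE  ETC_ErrorCode = 0;

A returned row confirms that the INSERT … SELECT request completed; its ETC_ErrSeq value is the total number of errors logged.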

If no marker row is recorded, the request or transaction was aborted and rolled back because of one or more of the following reasons:

  • The specified error limit was reached.
  • The system detected an error that it cannot handle.
  • The system detected a nonlocal (RI or USI) violation.

    Teradata Database preserves the error rows that belong to the aborted request or transaction.

  • In addition to the previously listed errors, the system does not handle the following types of errors, though it preserves logged error table rows if any one of the listed errors is detected:
    • Out of permanent space or out of spool space errors
    • Duplicate row errors in Teradata session mode (because the system ignores such errors in Teradata session mode)
    • Trigger errors
    • Join index maintenance errors
    • Identity column errors
  • The LOGGING ERRORS option is applicable to INSERT … SELECT load requests whose target tables are permanent data tables only.

    Other kinds of target tables, such as volatile and global temporary tables, are not supported.

    Teradata Database returns a warning message to the requestor if it logs an error.

  • You can specify either logging of all errors or logging of no errors; you cannot specify which types of errors to log. The WITH LIMIT OF error_limit option, however, enables you to terminate error logging when the number of errors logged reaches the limit you specify.

    If you do not specify a LOGGING ERRORS option, and an error table is defined for the target data table of the INSERT … SELECT request, the system does no error handling for INSERT … SELECT operations against that data table.

    In this case, the request or transaction containing the erring INSERT … SELECT request behaves as follows when an error occurs:

IF this session mode is in effect … THEN the erring …
ANSI request aborts and rolls back.
Teradata transaction aborts and rolls back.
  • If you specify neither the WITH NO LIMIT option, nor the WITH LIMIT OF error_limit option, the system defaults to an error limit of 10.

    Teradata Database logs errors up to the limit of 10, and then the request or transaction containing the INSERT … SELECT request behaves as follows when the tenth error occurs:

IF this session mode is in effect … THEN the erring …
ANSI request aborts and rolls back.
Teradata transaction aborts and rolls back.

Teradata Database preserves error table rows logged by the aborted request or transaction and does not roll them back.

  • WITH NO LIMIT

    Teradata Database places no limit on the number of error rows that can accumulate in the error table associated with the target data table for the INSERT … SELECT operation.

  • WITH LIMIT OF error_limit

    Teradata Database logs errors up to the limit of error_limit, and then the request or transaction containing the INSERT … SELECT request behaves as follows when the error_limit is reached:

IF this session mode is in effect … THEN the erring …
ANSI request aborts and rolls back.
Teradata transaction aborts and rolls back.

Teradata Database preserves error table rows logged by the aborted request or transaction and does not roll them back.

  • The activity count returned for an INSERT … SELECT … LOGGING ERRORS request is the same as that returned for an INSERT … SELECT operation without a LOGGING ERRORS option: a count of the total number of rows inserted into the target data table.
  • LOGGING ERRORS does not support LOB data. LOBs in the source table are not copied to the error table. They are represented in the error table by nulls.
  • An index violation error does not cause the associated index to be invalidated.
  • For referential integrity validation errors, you can use the IndexId value with the RI_Child_TablesVX view to identify the violated Reference index. For information about Reference indexes, see Database Design.

    You can distinguish whether an index error is a USI or referential integrity error by the code stored in the ETC_IdxErrType error table column.

ETC_IdxErrType Value Error
R Foreign key insert violation.
r Parent key delete violation.
U USI validation error.
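The pieces described above fit together as in the following sketch. The table names are hypothetical, and CREATE ERROR TABLE creates an error table named ET_sales_hist by default:

        CREATE ERROR TABLE FOR sales_hist;

        INSERT INTO sales_hist
        SELECT * FROM sales_stage
        LOGGING ERRORS WITH LIMIT OF 50;

        SELECT ETC_ErrorCode, ETC_ErrSeq
        FROM   ET_sales_hist;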

Fast Path INSERT … SELECT Requests

Multistatement INSERT … SELECT requests are optimized so that each statement except the last in the series returns a ‘zero row inserted’ message as its response. The retrieved rows for each SELECT are sent to an empty target table. The last INSERT … SELECT statement returns the total number of rows inserted for the entire request and sorts and merges the spool table into the target table.

Columns defined with the COMPRESS option do not participate in fast path optimizations, so if you perform an INSERT … SELECT operation on compressed columns, fast path optimization is not specified by the Optimizer when it creates an access plan.

INSERT … SELECT Performance and Target Table Identity Column Primary Indexes

For cases where INSERT … SELECT is optimized so that a direct merge from the source table to the target table is possible, such as when the source and target tables have identical structures, performance can degrade by as much as a factor of three when the target table has an identity column primary index, because of the extra step needed to spool and redistribute the rows.

Fast Path INSERT … SELECT Requests

There are several restrictions that must be observed in order for the high performance fast path optimization to succeed in a multistatement request.

These rules are as follows:
  • The target table must be empty.
  • All INSERT statements in the multistatement request must have the same target table.
  • Only INSERT statements can be included in the request.

If you include other statement types in the multistatement request, fast path optimization does not occur (only the first INSERT … SELECT in the series is optimized) and performance degrades accordingly.
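A multistatement request that satisfies these rules might look like the following sketch, where all three statements target the same, initially empty, table. The table names are hypothetical:

        INSERT INTO summary_tab SELECT * FROM region1_sales
        ;INSERT INTO summary_tab SELECT * FROM region2_sales
        ;INSERT INTO summary_tab SELECT * FROM region3_sales;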

General Rules for INSERT in Embedded SQL and Stored Procedures

The following rules apply to both forms of INSERT in embedded SQL and stored procedures.

The row to be inserted is constructed by building a candidate row as follows:
  • Define each column value of the row to be NULL.
  • Assign each inserted value to the corresponding column.
Result Row Value for Any NOT NULL Column Insertion Result
Non-null Succeeds.
Null Fails. SQLCODE is set to -1002.

Insert values are set in the corresponding row column values according to the rules for defining host variables.

If the table identified by table_name is a view that was defined WITH CHECK OPTION, then the row to be inserted must be in the set of rows selected by the view.

Valued INSERT in Embedded SQL

When using the valued form of INSERT with embedded SQL, the colon is optional with host variables in the VALUES clause.
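For example, both of the following embedded SQL forms are accepted. The host variable and column names are hypothetical:

        EXEC SQL
          INSERT INTO employee (emp_id, dept)
          VALUES (:emp_id_var, :dept_var);

        EXEC SQL
          INSERT INTO employee (emp_id, dept)
          VALUES (emp_id_var, dept_var);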

Rules for INSERT … SELECT in Embedded SQL and Stored Procedures

The following rules apply to using the selected form of INSERT with embedded SQL and stored procedures:
  • The number of rows in the temporary table returned by subquery determines the number of rows to be inserted.
  • If an error occurs after one or more of the selected rows has been inserted, then the value -1002 is returned to SQLCODE and the current transaction is terminated with rollback.
  • If subquery selects no rows, then the value +100 is returned to SQLCODE and no rows are inserted.

Inserting into Load Isolated Tables

The RowLoadID value in each row of a load isolated table records the LoadID value. The LoadID used in a newly inserted row is derived from the last committed LoadID value for the table as recorded in DBC.TVM.CurrentLoadId. For more information, see SQL Request and Transaction Processing.

TD_ROWLOADID expression

The TD_ROWLOADID expression reads the RowLoadID. The data type of the expression is BIGINT with a default format of '-(19)9'.

The TD_RowLoadID expression is not permitted in a join index definition, a partitioning expression, or a CHECK constraint.

You specify the TD_RowLoadID expression as follows:

    database_name.table_name.TD_ROWLOADID
database_name
Name of the containing database, if other than the default database.
table_name
Name of a load isolated table.
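A sketch of reading the RowLoadID values from a hypothetical load isolated table mydb.orders_li:

        SELECT order_id,
               mydb.orders_li.TD_ROWLOADID
        FROM   mydb.orders_li;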