Example: Inserting a Row
You can use the following request to insert a row for the new employee named Smith in the employee table.
INSERT INTO employee (name, empno, deptno, dob, sex, edlev) VALUES ('SMITH T', 10021, 700, 460729, 'F', 16);
Example: Insert Using a SELECT Subquery
This example uses a SELECT subquery to insert into a new table, promotion, a row for every employee who has more than ten years of experience. The promotion table contains three columns, defined in the order empname, deptnum, and yearsexp.
INSERT INTO promotion (deptnum, empname, yearsexp)
  SELECT deptno, name, yrsexp
  FROM employee
  WHERE yrsexp > 10;
Example: Insert Using a SELECT Subquery Without Target Column List
The INSERT operation performed in the previous example can also be accomplished by the following statement. Note that a column name list is not given for the promotion table; therefore, the employee columns referenced in the SELECT subquery must match the columns of promotion in both number and order.
INSERT INTO promotion
  SELECT name, deptno, yrsexp
  FROM employee
  WHERE yrsexp > 10;
Example: Insert With Named Columns
To add a row to the employee table for new employee Orebo, you could submit the following statement:
INSERT INTO employee (name, empno, deptno, dob, sex, edlev) VALUES ('Orebo B', 10005, 300, 'Nov 17 1957', 'M', 18) ;
In this example, you list the columns that are to receive the data, followed by the data to be inserted, in the same order as the columns are listed.
If you do not specify a value for a named column, the system inserts a null.
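For example, a sketch using the same employee columns as above (the name and employee number here are hypothetical): because dob is not named in the column list, the system inserts a null for it.

```sql
-- dob is omitted from the column list, so the inserted row
-- contains a null for dob
INSERT INTO employee (name, empno, deptno, sex, edlev)
VALUES ('Smythe R', 10022, 700, 'M', 16);
```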
Example: Insert Ordered Entries
You could achieve the same result with the following statement:
INSERT INTO employee (10005,'Orebo B',300,,,,'Nov 17 1957', 'M',,,18,);
Using this form, you do not specify column names for fields because you enter the field data in the same order as the columns are defined in the employee table. In all cases, a comma is required as a place marker for any field for which data is not specified.
For ANSI compliance, use the keyword NULL for nulls, for example, (10005, 'Orebo B', 300, NULL, NULL, NULL, 'Nov 17 1957', 'M', NULL, NULL, 18, NULL).
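Written as a complete request, the ANSI-compliant form of the positional insert looks like this (a sketch; the trailing comma in the positional form corresponds to a final NULL):

```sql
INSERT INTO employee
VALUES (10005, 'Orebo B', 300, NULL, NULL, NULL,
        'Nov 17 1957', 'M', NULL, NULL, 18, NULL);
```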
Example: Bulk Insert
This INSERT operation performs a bulk insert. With partition level locking, the parser generates a static partition elimination list based on condition "PC = 4" (that is, positional matching of slppit1.pc = srct1.b = 4) and uses this list to place a PartitionRange lock.
The table definitions for this example are as follows:
CREATE TABLE HLSDS.SLPPIT1 (
  PI INT,
  PC INT,
  X INT)
PRIMARY INDEX (PI)
PARTITION BY (RANGE_N(PC BETWEEN 1 AND 100 EACH 10));

CREATE TABLE HLSDS.SRCT1 (
  A INT,
  B INT,
  C INT)
PRIMARY INDEX (A);
An EXPLAIN of the INSERT operation shows the condition PC=4 as WHERE B=4:
EXPLAIN INSERT INTO HLSDS.SLPPIT1
  SELECT A, B, C
  FROM HLSDS.SRCT1
  WHERE B = 4;

1) First, we lock HLSDS.srct1 for read on a reserved RowHash to prevent global deadlock.
2) Next, we lock HLSDS.slppit1 for write on a reserved RowHash in a single partition to prevent global deadlock.
3) We lock HLSDS.srct1 for read, and we lock HLSDS.slppit1 for write on a single partition.
4) We do an all-AMPs RETRIEVE step from HLSDS.srct1 by way of an all-rows scan with a condition of ("HLSDS.srct1.b = 4") into Spool 1 (all_amps), which is built locally on the AMPs. Then we do a SORT to partition Spool 1 by rowkey. The size of Spool 1 is estimated with no confidence to be 1 row (25 bytes). The estimated time for this step is 0.03 seconds.
5) We do an all-AMPs MERGE into HLSDS.slppit1 from Spool 1 (Last Use). The size is estimated with no confidence to be 1 row. The estimated time for this step is 0.55 seconds.
6) Finally, we send out an END TRANSACTION step to all AMPs involved in processing the request.
Example: Insert Operation Using Single-Writer Lock
This INSERT operation uses a single-writer lock. The single-writer lock is added for bulk DML operations that require index maintenance on a partition-locked table. The lock prevents concurrent write operations on the table, while allowing concurrent read operations on the partitions that are not locked.
The table definitions for this example are as follows:
CREATE TABLE HLSDS.SLPPIT3 (
  PI INT,
  PC INT,
  USI INT,
  X INT,
  Y INT)
PRIMARY INDEX (PI)
PARTITION BY (RANGE_N(PC BETWEEN 1 AND 1000 EACH 1, NO RANGE, UNKNOWN))
UNIQUE INDEX (USI);

CREATE TABLE HLSDS.SRCT3 (
  A INT,
  B INT,
  C INT,
  D INT,
  E INT);
An EXPLAIN of the INSERT operation shows the condition PC=100 as WHERE B=100:
EXPLAIN INSERT HLSDS.SLPPIT3
  SELECT *
  FROM HLSDS.SRCT3
  WHERE B = 100;

1) First, we lock HLSDS.srct3 for read on a reserved RowHash to prevent global deadlock.
2) Next, we lock HLSDS.slppit3 for single writer on a reserved RowHash in all partitions to serialize concurrent updates on the partition-locked table to prevent global deadlock.
3) We lock HLSDS.slppit3 for write on a reserved RowHash in a single partition to prevent global deadlock.
4) We lock HLSDS.srct3 for read, we lock HLSDS.slppit3 for single writer to serialize concurrent updates on the partition-locked table, and we lock HLSDS.slppit3 for write on a single partition.
5) We do an all-AMPs RETRIEVE step from HLSDS.srct3 by way of an all-rows scan with a condition of ("HLSDS.srct3.b = 100") into Spool 1 (all_amps), which is built locally on the AMPs. Then we do a SORT to partition Spool 1 by rowkey. The size of Spool 1 is estimated with no confidence to be 1 row (33 bytes). The estimated time for this step is 0.03 seconds.
6) We do an all-AMPs MERGE into HLSDS.slppit3 from Spool 1 (Last Use). The size is estimated with no confidence to be 1 row. The estimated time for this step is 1.73 seconds.
7) Finally, we send out an END TRANSACTION step to all AMPs involved in processing the request.
Example: INSERT and GENERATED ALWAYS Identity Columns
Assume that column_1 of newly created table_1 is an identity column defined as GENERATED ALWAYS. The following INSERT statements automatically generate numbers for column_1 of the inserted rows, even if they specify a value for that column.
INSERT INTO table_1 (column_1, column_2, column_3) VALUES (111,111,111);
INSERT INTO table_1 (column_1, column_2, column_3) VALUES (,222,222);
Check the inserts by selecting all the rows from table_1.
SELECT * FROM table_1;

*** Query completed. 2 rows found. 3 columns returned.
*** Total elapsed time was 1 second.

 column_1  column_2  column_3
---------  --------  --------
        2       111       111
        1       222       222
- Even though the value 111 was specified for column_1 in the first insert, the value is rejected and replaced by the generated value 2 because column_1 is defined as GENERATED ALWAYS.
- A warning is returned to the requestor when a user-specified value is overridden by a system-generated number.
- The first number in the identity column sequence is not necessarily allocated to the first row inserted because of parallelism.
Example: INSERT and GENERATED BY DEFAULT Identity Columns
The result of inserting values into the same table defined in Example: INSERT and GENERATED ALWAYS Identity Columns shows the difference in the behavior of the two identity column types. Because GENERATED BY DEFAULT only generates values when an INSERT request does not supply them, the same insert operations produce different results.
Assume that column_1 of newly created table_1 is an identity column defined as GENERATED BY DEFAULT. The following INSERT requests generate numbers for column_1 of the inserted rows only if they do not specify a value for that column.
INSERT INTO table_1 (column_1, column_2, column_3) VALUES (111,111,111);
INSERT INTO table_1 (column_1, column_2, column_3) VALUES (,222,222);
Check the result of the insert operations by selecting all the rows from table_1.
SELECT * FROM table_1;

*** Query completed. 2 rows found. 3 columns returned.
*** Total elapsed time was 1 second.

 column_1  column_2  column_3
---------  --------  --------
      111       111       111
        1       222       222
Example: Identity Columns and INSERT … SELECT
Assume that column_1 of table_1 is a GENERATED BY DEFAULT identity column.
INSERT INTO table_2 (column_1, column_2, column_3) VALUES (111,111,111);
INSERT INTO table_2 (column_1, column_2, column_3) VALUES (222,222,222);
INSERT INTO table_2 (column_1, column_2, column_3) VALUES (1,333,333);
INSERT INTO table_1 (column_1, column_2, column_3) SELECT * FROM table_2;
Check the result of the insert operations by selecting all the rows from table_1.
SELECT * FROM table_1;

*** Query completed. 3 rows found. 3 columns returned.
*** Total elapsed time was 1 second.

 column_1  column_2  column_3
---------  --------  --------
      111       111       111
      222       222       222
        1       333       333
Example: INSERT and Queue Tables
The following request populates a queue table with values from a base table. The value for the QITS column of the queue table is the default timestamp.
INSERT INTO orders_qt (productid, quantity, unitprice) SELECT productid, quantity, unitprice FROM backorders WHERE ordernumber = 1002003;
The following request performs a FIFO pop on the orders_qt queue table and stores it into the backorders base table:
INSERT INTO backorders (productid, quantity, unitprice) SELECT AND CONSUME TOP 1 productid, quantity, unitprice FROM orders_qt;
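Because FIFO ordering is based on the QITS value, you can also supply the QITS value explicitly instead of taking the default timestamp. The following is a sketch that assumes the QITS column of orders_qt is named qits; the product values are hypothetical.

```sql
-- Explicitly supplying the QITS value places the row at a
-- specific position in the FIFO order (hypothetical values)
INSERT INTO orders_qt (qits, productid, quantity, unitprice)
VALUES (TIMESTAMP '2006-01-01 09:00:00.000000', 1007, 5, 19.99);
```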
Example: Non-Valid Use of PARTITION In VALUES Clause of INSERT Request
This example is not valid because PARTITION cannot be referenced in the VALUES clause of an INSERT request.
INSERT INTO Orders VALUES (PARTITION, 10, 'A', 599.99, DATE '2001-02-07','HIGH', 'Jack', 3,'Invalid insert');
Example: Simple INSERT Requests Using a DEFAULT Function
The following example set uses this table definition:
CREATE TABLE table7 (
  col1 INTEGER,
  col2 INTEGER DEFAULT 10,
  col3 INTEGER DEFAULT 20,
  col4 CHARACTER(60));
The following INSERT request is valid:
INSERT INTO table7 VALUES (1, 2, DEFAULT, 'aaa');
The DEFAULT function evaluates to the default value of col3, the third column position in the insert list. The example INSERT request inserts the value 20 into table7 for col3.
The resulting row is as follows:
 col1  col2  col3  col4
-----  ----  ----  ----
    1     2    20  aaa
The following INSERT request is valid:
INSERT INTO table7 (col1, col3, col4) VALUES (1, DEFAULT, 'bbb');
The DEFAULT function evaluates to the default value of col3 because DEFAULT is specified second in the expression list, and the second column in the column list is col3.
The resulting row is as follows:
 col1  col2  col3  col4
-----  ----  ----  ----
    1    10    20  bbb
You can specify a DEFAULT function with a column name in an expression list. This is a Teradata extension to the ANSI SQL:2011 standard.
The following INSERT request is valid:
INSERT INTO table7 VALUES (1, 2, DEFAULT(col2), 'aaa');
The DEFAULT function evaluates to the default value of col2 because col2 is passed as an input argument to the DEFAULT function. The INSERT results in the value 10 for col3 in the row because col3 is the third column position in the insert list.
The resulting row is as follows:
 col1  col2  col3  col4
-----  ----  ----  ----
    1     2    10  aaa
The following INSERT request is valid:
INSERT INTO table7 (col1, col3, col4) VALUES (1, DEFAULT(col2), 'bbb');
Because col2 is passed as an input argument to the DEFAULT function, the function evaluates to the default value of col2. Because the second column in the column list is col3, the system assigns it the value 10, which is the default value of col2.
The resulting row is as follows:
 col1  col2  col3  col4
-----  ----  ----  ----
    1    10    10  bbb
You can specify the DEFAULT function with a column name anywhere in the expression. This is a Teradata extension to the ANSI SQL:2011 standard.
The following INSERT request is valid:
INSERT INTO table7 VALUES (1, 2, DEFAULT (col2)+5, 'aaa');
The DEFAULT function evaluates to the default value of col2 plus 5 (or 10+5). Because col3 is in the third column position in the insert list, the resulting row is as follows:
 col1  col2  col3  col4
-----  ----  ----  ----
    1     2    15  aaa
The following INSERT request is valid:
INSERT INTO table7 (col1, col3, col4) VALUES (1, DEFAULT(col2)+5, 'bbb');
The DEFAULT function evaluates to the default value of col2 plus 5 (or 10+5). Because the second column in the column list is col3, the system assigns the value 15 to it. The resulting row is as follows:
 col1  col2  col3  col4
-----  ----  ----  ----
    1    10    15  bbb
When there is no explicit default value associated with a column, the DEFAULT function evaluates to null.
Assume the following table definition for this example:
CREATE TABLE table9 (
  col1 INTEGER,
  col2 INTEGER NOT NULL,
  col3 INTEGER NOT NULL DEFAULT NULL,
  col4 INTEGER CHECK (col4 > 100) DEFAULT 99);
The following INSERT request is valid:
INSERT INTO table9 VALUES (DEFAULT, 99, 101, 101);
In this example, col1 is nullable and does not have an explicit default value associated with it. Therefore, the DEFAULT function evaluates to null.
The resulting row is as follows:
 col1  col2  col3  col4
-----  ----  ----  ----
    ?    99   101   101
Assume the following table definition for the examples below:
CREATE TABLE table10 (
  col1 INTEGER,
  col2 INTEGER DEFAULT 55,
  col3 INTEGER NOT NULL DEFAULT 99);
The following examples are correct and use Teradata extensions to the DEFAULT function:
INSERT INTO table10 VALUES (1, DEFAULT(col2), DEFAULT(col3));
INSERT INTO table10 (col1, col2) VALUES (1, DEFAULT(col2));
INSERT INTO table10 (col1, col2, col3) VALUES (1, DEFAULT(col2), DEFAULT(col3));
The following examples are correct and use ANSI SQL:2011-compliant syntax for the DEFAULT function:
INSERT INTO table10 VALUES (1, DEFAULT, DEFAULT);
INSERT INTO table10 (col1, col2) VALUES (1, DEFAULT);
INSERT INTO table10 (col1, col2, col3) VALUES (1, DEFAULT, DEFAULT);
The resulting row for all the above insert operations is the following:
 col1  col2  col3
-----  ----  ----
    1    55    99
The following INSERT requests are all equivalent to one another. The first example uses ANSI SQL:2011-compliant syntax:
INSERT INTO table10 VALUES (5, DEFAULT, 99);
The following example uses Teradata extensions to the ANSI SQL:2011 syntax:
INSERT INTO table10 VALUES (5, DEFAULT(col2), 99);
The resulting row for both of these insert operations is as follows:
 col1  col2  col3
-----  ----  ----
    5    55    99
Example: Using the DEFAULT Function With INSERT … SELECT
Assume the following table definitions for this example:
CREATE TABLE tbl_source (
  col1 INTEGER,
  col2 INTEGER DEFAULT 10,
  col3 INTEGER DEFAULT 20,
  col4 CHARACTER(60));

CREATE TABLE tbl_destination (
  col1 INTEGER,
  col2 INTEGER DEFAULT 10,
  col3 INTEGER DEFAULT 20,
  col4 CHARACTER(60));
The following example shows correct use of the DEFAULT function within an INSERT … SELECT request:
INSERT INTO tbl_destination (col1)
  SELECT COALESCE(col3, DEFAULT(col3))
  FROM tbl_source
  WHERE col3 <> DEFAULT;
In this example, the DEFAULT function evaluates to a constant value, which is the default value of col3 of tbl_source.
Example: Logging Errors With INSERT … SELECT
The following examples show various types of error logging with INSERT … SELECT requests:
The following request logs all data, referential integrity, and USI errors to the default limit of 10 errors.
INSERT INTO t SELECT * FROM s LOGGING ERRORS;
The following request logs all data, referential integrity, and USI errors to the default limit of 10 errors.
INSERT INTO t SELECT * FROM s LOGGING ALL ERRORS;
The following request logs data, referential integrity, and USI errors with no error limit. This does not mean that there is no limit on the number of errors the system can log; instead, errors continue to be logged until the system-determined limit of 16,000,000 errors has been reached. See “CREATE ERROR TABLE” in SQL Data Definition Language.
INSERT INTO t SELECT * FROM s LOGGING ERRORS WITH NO LIMIT;
The following request logs data row, referential integrity, and USI errors to a limit of 100 errors.
INSERT INTO t SELECT * FROM s LOGGING ERRORS WITH LIMIT OF 100;
Example: Using the ST_Geometry Data Type to Represent Other Geospatial Data Types for INSERTs
You can use the ST_Geometry data type to implicitly represent all of the geospatial data types defined by the ANSI SQL:2011 standard, as the following set of examples indicates. See SQL Geospatial Types. Note that in every case, the geospatial values are actually stored using the ST_Geometry data type, not the type specified in the INSERT request.
The POINT type has a 0-dimensional geometry and represents a single location in two-dimensional coordinate space.
The following example inserts such a point into table tab1:
INSERT INTO tab1 VALUES (0, 'POINT(10 20)');
The LINESTRING type has a 1-dimensional geometry and is usually stored as a sequence of points with a linear interpolation between the individual points.
The following example inserts such a point sequence into table tab1:
INSERT INTO tab1 VALUES (0, 'LINESTRING(1 1, 2 2, 3 3, 4 4)');
The POLYGON type has a 2-dimensional geometry consisting of one exterior boundary and 0 or more interior boundaries, where each interior boundary defines a hole.
The following example inserts such a polygon into table tab1:
INSERT INTO tab1 VALUES (0, 'POLYGON((0 0, 0 20, 20 20, 20 0, 0 0), (5 5, 5 10, 10 10, 10 5, 5 5)) ');
The GEOMCOLLECTION type is a collection of 0 or more ST_Geometry values.
The following example inserts such a geometric collection into table tab1:
INSERT INTO tab1 VALUES (0, 'GEOMETRYCOLLECTION( POINT(10 10), POINT(30 30), LINESTRING(15 15, 20 20 ) )');
The MULTIPOINT type is a 0-dimensional geometry collection whose elements are restricted to POINT values.
The following example inserts such a geometric collection into table tab1:
INSERT INTO tab1 VALUES (0,'MULTIPOINT(1 1, 1 3, 6 3, 10 5, 20 1)');
The MULTILINESTRING type is a 1-dimensional geometry collection whose elements are restricted to LINESTRING values.
The following example inserts such a geometric collection into table tab1:
INSERT INTO tab1 VALUES (0, 'MULTILINESTRING((1 1, 1 3, 6 3), (10 5, 20 1))');
The MULTIPOLYGON type is a 2-dimensional geometry collection whose elements are restricted to POLYGON values.
The following example inserts such a geometric collection into table tab1:
INSERT INTO tab1 VALUES (0,'MULTIPOLYGON( ((1 1, 1 3, 6 3, 6 0, 1 1)), ((10 5, 10 10, 20 10, 20 5, 10 5)))');
Example: Using the ST_Geometry Data Type When Inserting Geospatial Data Into Tables
This is a more detailed example of using the ST_Geometry data type to represent other geospatial data types when inserting geospatial data into tables. Create two tables in the test database: cities, which represents a list of cities, and streets, which represents a list of streets. The cities are Polygons and the streets are LineStrings; both are implemented using the ST_Geometry type, as the table definitions for cities and streets show. Insert some cities and streets into these tables using the well-known text format for these geospatial types, and then submit a query to determine whether any of the streets are within any of the cities.
Here are the table definitions:
CREATE TABLE test.cities (
  pkey INTEGER,
  CityName VARCHAR(40),
  CityShape ST_Geometry);

CREATE TABLE test.streets (
  pkey INTEGER,
  StreetName VARCHAR(40),
  StreetShape ST_Geometry);
First insert three rows into the cities table:
INSERT INTO test.cities VALUES (0, 'San Diego', 'POLYGON((1 1, 1 3, 6 3, 6 0, 1 1))');
INSERT INTO test.cities VALUES (1, 'Los Angeles', 'POLYGON((10 10, 10 20, 20 20, 20 15, 10 10))');
INSERT INTO test.cities VALUES (2, 'Chicago', 'POLYGON((100 100, 100 200, 300 200, 400 0, 100 100))');
Then insert three rows into the streets table:
INSERT INTO test.streets VALUES (1, 'Lake Shore Drive', 'LINESTRING(110 180, 300 150)');
INSERT INTO test.streets VALUES (1, 'Via Del Campo', 'LINESTRING(2 2, 3 2, 4 1)');
INSERT INTO test.streets VALUES (1, 'Sepulveda Blvd', 'LINESTRING(12 12, 18 17)');
Now join cities and streets on StreetShape.ST_Within(CityShape)=1 and select the StreetName and CityName column values from them:
SELECT StreetName, CityName
  FROM test.cities, test.streets
  WHERE StreetShape.ST_Within(CityShape) = 1
  ORDER BY CityName;
If you use BTEQ, the result looks like this:
StreetName                                CityName
----------------------------------------  ----------------
Lake Shore Drive                          Chicago
Sepulveda Blvd                            Los Angeles
Via Del Campo                             San Diego
Example: INSERT Using a Scalar Subquery
A scalar subquery can be specified as a parameterized value in the value list of a simple INSERT request, but Teradata Database always interprets it as a noncorrelated scalar subquery.
For example, Teradata Database processes the scalar subquery SELECT d2 FROM t2 WHERE a2=t1.a1 in the value list of the following INSERT request as if it were SELECT d2 FROM t2, t1 WHERE a2=t1.a1.
In other words, Teradata Database interprets t1 in the scalar subquery as a distinct instance of t1 rather than as the target table t1 of the insert operation.
INSERT INTO t1 VALUES (1, 2, 3, (SELECT d2 FROM t2 WHERE a2=t1.a1));
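Per the interpretation described above, the request behaves as if the subquery had been written in its noncorrelated form:

```sql
-- t1 inside the subquery is a separate instance of t1,
-- not the target table of the insert
INSERT INTO t1 VALUES (1, 2, 3, (SELECT d2 FROM t2, t1 WHERE a2=t1.a1));
```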
Example: INSERT Using a PERIOD Value Constructor
The following examples use tables t1 and t2, which are defined as follows:
CREATE TABLE t1 (
  c1 INTEGER,
  c2 PERIOD(DATE))
UNIQUE PRIMARY INDEX (c1);

CREATE TABLE t2 (
  a INTEGER,
  b DATE,
  c DATE)
UNIQUE PRIMARY INDEX (a);
The following two INSERT requests both use a PERIOD value constructor:
INSERT INTO t1 VALUES (1, PERIOD(DATE '2005-02-03', DATE '2006-02-04'));
INSERT INTO t1 SELECT a, PERIOD(b, c) FROM t2;
Example: Passing an SQL UDF to a Single-Row Request
This example passes an SQL UDF into a single-row INSERT request. Note that the arguments must be passed as constants.
INSERT INTO t1 VALUES (1, test.value_expression(2, 3), 4);
Example: Using the HASH BY Option With a NoPI Table
Assume that you have created the following NoPI tables for this example:
CREATE TABLE orders (
  o_orderkey INTEGER NOT NULL,
  o_custkey INTEGER,
  o_orderstatus CHAR(1) CASESPECIFIC,
  o_totalprice DECIMAL(13,2) NOT NULL,
  o_ordertsz TIMESTAMP(6) WITH TIME ZONE NOT NULL,
  o_comment VARCHAR(79))
PARTITION BY COLUMN,
UNIQUE INDEX (o_orderkey);

CREATE TABLE orders_staging AS orders WITH NO DATA
NO PRIMARY INDEX;
The following INSERT … SELECT request redistributes rows by the hash value of o_orderkey to distribute the data selected from orders_staging evenly, and orders the rows locally on o_ordertsz to obtain better run-length compression before copying them into orders.
INSERT INTO orders
  SELECT * FROM orders_staging
  HASH BY o_orderkey
  LOCAL ORDER BY o_ordertsz;
For the HASH BY clause, o_orderkey resolves to o_orderkey in the orders table. Because this corresponds to the first expression in the select expression list of the SELECT request, which is o_orderkey from orders_staging, Teradata Database distributes the spool generated for the SELECT request on o_orderkey. Similarly, Teradata Database uses orders_staging.o_ordertsz to order the spool before locally inserting into orders.
This example uses the same tables.
INSERT INTO orders
  SELECT o_orderkey, o_custkey, o_orderstatus, o_totalprice + 10,
         o_ordertsz, o_comment
  FROM orders_staging
  HASH BY o_totalprice;
For the HASH BY clause, o_totalprice resolves to o_totalprice in the orders table. Because this corresponds to the fourth expression in the select expression list of the SELECT request, which is o_totalprice + 10, Teradata Database distributes the spool generated for the SELECT request on the value of this expression, not on the value of orders_staging.o_totalprice.
To distribute on the values of orders_staging.o_totalprice, qualify the reference in the HASH BY clause as the following revision of the previous INSERT … SELECT request does.
INSERT INTO orders
  SELECT o_orderkey, o_custkey, o_orderstatus, o_totalprice + 10,
         o_ordertsz, o_comment
  FROM orders_staging
  HASH BY orders_staging.o_totalprice;
Example: Using the LOCAL ORDER BY Option
This example uses the same tables as the previous example except that it uses a new version of orders_staging that has an integer column named o_sequence, which it uses to order the rows it selects locally.
ALTER TABLE orders_staging ADD o_sequence INTEGER;
The following INSERT … SELECT request locally orders the rows of the columns it selects from orders_staging by o_sequence before inserting them into orders.
INSERT INTO orders
  SELECT o_orderkey, o_custkey, o_orderstatus, o_totalprice + 10,
         o_ordertsz, o_comment
  FROM orders_staging
  LOCAL ORDER BY o_sequence;
For the LOCAL ORDER BY clause, o_sequence does not resolve to a column in the target table orders, so Teradata Database resolves it, using the standard rules for resolution, to o_sequence in the underlying table of the SELECT request, orders_staging. Note that orders_staging.o_sequence is not included in the select expression list; the spool is generated locally and sorted on the value of orders_staging.o_sequence before being locally inserted into orders.
Example: Using the RANDOM Option to Randomly Redistribute Data Blocks of Rows and Individual Rows Before Copying Them Locally
Assume that you created the following NoPI tables for this example.
CREATE TABLE tnopi1 (
  a INT,
  b INT,
  c INT)
NO PRIMARY INDEX;

CREATE TABLE tnopi2 (
  a INT,
  b INT,
  c INT)
NO PRIMARY INDEX;
The following INSERT … SELECT request uses the RANDOM option to redistribute data blocks of rows randomly from tnopi1 before locally copying them into tnopi2.
INSERT INTO tnopi2 SELECT * FROM tnopi1 HASH BY RANDOM;
The following INSERT … SELECT request uses the RANDOM function to redistribute individual rows randomly from tnopi1 before locally copying them into tnopi2:
INSERT INTO tnopi2 SELECT * FROM tnopi1 HASH BY RANDOM(1, 2000000000);
Example: Inserting into a Table with an Implicit Isolated Load Operation
For information on defining a load isolated table, see the WITH ISOLATED LOADING option for CREATE TABLE and ALTER TABLE in SQL Data Definition Language - Syntax and Examples.
Following are the table definitions for the example.
CREATE TABLE ldi_table1,
WITH CONCURRENT ISOLATED LOADING FOR ALL (
  a INTEGER,
  b INTEGER,
  c INTEGER)
PRIMARY INDEX (a);

CREATE TABLE t1 (
  c1 INTEGER,
  c2 INTEGER,
  c3 INTEGER)
PRIMARY INDEX (c1);
The following request performs the insert as an implicit isolated load operation:

INSERT WITH ISOLATED LOADING INTO ldi_table1
  SELECT * FROM t1
  WHERE c1 > 10;
Example: Inserting into a Table with an Explicit Isolated Load Operation
For information on defining a load isolated table and performing an explicit isolated load operation, see the WITH ISOLATED LOADING option for CREATE TABLE and ALTER TABLE, in addition to the Load Isolation Statements chapter in SQL Data Definition Language - Syntax and Examples.
Following are the table definitions for the example.
CREATE TABLE ldi_table1,
WITH CONCURRENT ISOLATED LOADING FOR ALL (
  a INTEGER,
  b INTEGER,
  c INTEGER)
PRIMARY INDEX (a);

CREATE TABLE t1 (
  c1 INTEGER,
  c2 INTEGER,
  c3 INTEGER)
PRIMARY INDEX (c1);
BEGIN ISOLATED LOADING ON ldi_table1 USING QUERY_BAND 'LDILoadGroup=Load1;';
SET QUERY_BAND='LDILoadGroup=Load1;' FOR SESSION;
INSERT INTO ldi_table1 SELECT * FROM t1 WHERE c1 > 10;
END ISOLATED LOADING FOR QUERY_BAND 'LDILoadGroup=Load1;';
SET QUERY_BAND = 'LDILoadGroup=NONE;' FOR SESSION;