For use with the AddColumn function of the schema object. The following table defines the data type constant for each column in the schema. See Teradata Vantage™ - Data Types and Literals, B035-1143, for detailed information on each data type, including an explanation of precision and scale for the DECIMAL type and of length and precision for the PERIOD data types.
Constant | Data Type |
---|---|
TD_BIGINT | big integer |
TD_BLOB | inline BLOB (Binary Large Object) |
TD_BLOB_AS_DEFERRED_BY_NAME | deferred BLOB |
TD_BYTE | byte |
TD_BYTEINT | byte integer |
TD_CHAR | character |
TD_CLOB | inline CLOB (Character Large Object) |
TD_CLOB_AS_DEFERRED_BY_NAME | deferred CLOB |
TD_DATE | date |
TD_DATE_ANSI | ANSI date |
TD_DECIMAL | decimal |
TD_FLOAT | float |
TD_GRAPHIC | graphic |
TD_INTEGER | integer |
TD_JSON | inline JSON (JavaScript Object Notation) |
TD_JSON_AS_DEFERRED_BY_NAME | deferred JSON |
TD_LONGVARCHAR | long variable length character |
TD_LONGVARGRAPHIC | long variable length graphic |
TD_NUMBER | fixed / floating point decimal |
TD_PERIOD_DATE | period(date) |
TD_PERIOD_TIME | period(time) |
TD_PERIOD_TIME_TZ | period(time with time zone) |
TD_PERIOD_TS | period(timestamp) |
TD_PERIOD_TS_TZ | period(timestamp with time zone) |
TD_SMALLINT | small integer |
TD_VARBYTE | variable length byte |
TD_VARCHAR | variable length character |
TD_VARGRAPHIC | variable length graphic |
TD_XML | inline XML |
TD_XML_AS_DEFERRED_BY_NAME | deferred XML |
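As an illustration of how these constants are used with AddColumn, the following sketch mimics the call shapes with a stand-in Schema class. The real class, method signatures, and TD_* constants come from the Teradata PT API headers; everything below is a hypothetical stub for illustration only.

```cpp
#include <string>
#include <vector>

// Stand-in data type constants; the real ones come from the Teradata PT
// API headers (this enum is a hypothetical stub for illustration).
enum TD_DataType { TD_INTEGER, TD_VARCHAR, TD_NUMBER };

// Stand-in Schema class mimicking the AddColumn call shapes used in this
// section: fixed-length types need no length, character types take a
// length, and NUMBER/DECIMAL columns also take precision and scale.
class Schema {
public:
    struct Column { std::string name; TD_DataType type; int length, precision, scale; };
    void AddColumn(const std::string& n, TD_DataType t)          { cols.push_back({n, t, 0, 0, 0}); }
    void AddColumn(const std::string& n, TD_DataType t, int len) { cols.push_back({n, t, len, 0, 0}); }
    void AddColumn(const std::string& n, TD_DataType t, int len, int prec, int scl) {
        cols.push_back({n, t, len, prec, scl});
    }
    std::vector<Column> cols;  // recorded for inspection in this stub
};
```

With a real schema object, the calls look the same: `schema -> AddColumn("Associate_Id", TD_INTEGER);`, `schema -> AddColumn("Name", TD_VARCHAR, 30);`, `schema -> AddColumn("Amount", TD_NUMBER, 18, 5, 2);`.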
On Specifying Data Types
- The length of any CHAR or VARCHAR column must be an even number when using the UTF16 session character set. If the length of either of these data types is an odd number under the UTF16 session character set, the Initiate method returns an error.
- Do not use the TD_LONGVARCHAR data type when the server storage character set or the client session character set is a multibyte character set. KANJISJIS_OS, UTF8, and UTF16 are examples of multibyte character sets.
- The PERIOD data types for date and time take the following forms:
- PERIOD(DATE)
- PERIOD(TIME(n))
- PERIOD(TIME(n) WITH TIME ZONE)
- PERIOD(TIMESTAMP(n))
- PERIOD(TIMESTAMP(n) WITH TIME ZONE)
The (n) is the precision of the column, an integer from zero to six; six is the default if no precision is specified. The PERIOD(DATE) data type has no precision.
- PERIOD(DATE), PERIOD(TIME(n)), and PERIOD(TIME(n) WITH TIME ZONE) are fixed length.
- PERIOD(TIMESTAMP(n)) and PERIOD(TIMESTAMP(n) WITH TIME ZONE) are variable length.
- When using the TIME, TIMESTAMP, and INTERVAL data types, see Teradata® Parallel Transporter Reference, B035-2436.
- The NUMBER data type, compatible with Oracle's NUMBER data type, stores both fixed-point and floating-point decimal numbers of up to 38 digits. The possible syntaxes for the NUMBER data type are:
- NUMBER
- NUMBER(*)
- NUMBER(*,scale)
- NUMBER(precision)
- NUMBER(precision,scale)
The NUMBER data type is a variable length numeric column with the following external specification:
<1-byte Length><2-bytes Scale><1 to 17 bytes 2's complement representation of the unscaled value in the client-appropriate endianness>
- User applications should specify the length of the NUMBER type as 18 bytes (recommended) when defining the schema. For example:
schema -> AddColumn("Number_dt", TD_NUMBER, 18, 5, 2); Length = 18, Precision = 5, Scale = 2.
- For the NUMBER(*) forms, a precision of -128 should be specified in the schema. For example, NUMBER(*,4) is represented as:
schema -> AddColumn("Number_dt", TD_NUMBER, 18, -128, 4); Length = 18, Precision = -128, Scale = 4.
If NUMBER is... | Precision Passed to Operator is... | Scale Passed to Operator is... |
---|---|---|
NUMBER | -128 | -128 |
NUMBER(*) | -128 | -128 |
NUMBER(*,scale) | -128 | scale |
NUMBER(precision) | precision | 0 |
NUMBER(precision,scale) | precision | scale |
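As a sketch of how the external NUMBER layout above can be read back, the following hypothetical decoder (not part of the Teradata PT API) converts a field to a double. It assumes little-endian client byte order, that the Length byte counts the bytes following it (2-byte scale plus unscaled value), and that the unscaled value fits in 64 bits; real NUMBERs may need the full 17 bytes.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Hypothetical decoder for the external NUMBER layout described above:
// <1-byte Length><2-byte Scale><1..17-byte two's-complement unscaled value>.
// Assumes little-endian byte order, Length counting the bytes after it,
// and an unscaled value that fits in 64 bits (illustration only).
double decodeNumber(const std::vector<uint8_t>& field) {
    int valueBytes = field[0] - 2;  // unscaled-value bytes after the 2-byte scale
    int16_t scale = static_cast<int16_t>(field[1] | (field[2] << 8));
    // Sign-extend, then accumulate the little-endian two's-complement value.
    uint64_t acc = (field[3 + valueBytes - 1] & 0x80) ? ~0ULL : 0ULL;
    for (int i = valueBytes - 1; i >= 0; --i)
        acc = (acc << 8) | field[3 + i];
    return static_cast<int64_t>(acc) / std::pow(10.0, scale);
}
```

For example, a field of {4, 2, 0, 0x39, 0x30} carries the unscaled value 12345 at scale 2, i.e. 123.45.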
- In Teradata PT API, the Array data type, a Teradata UDT, is implemented as a VARCHAR. To implement the Array type in Teradata PT API:
- Create ARRAY data type as follows:
CREATE TYPE pnum AS CHAR(10) ARRAY[2];
- Create the target table as follows:
CREATE TABLE emp(Associate_Id INTEGER, phone_nums pnum);
- Add the column to the schema in Teradata PT API application:
schema -> AddColumn("phone_nums", TD_VARCHAR, 47);
The INSERT statement for the Array data type is: INSERT INTO target_table ( :Associate_Id, :phone_nums );
- The JSON data types are supported with the UPDATE and STREAM operators only.
- There are two types of JSON data types (Large Object data types):
- inline JSON
- deferred JSON
- When an inline JSON data type is specified, its column data is stored in the data rows defined by the schema, in the same way the data is stored for non-JSON columns. The maximum size supported for an inline JSON is 64,000 bytes.
- When a deferred JSON data type is specified, its column data is contained in files rather than in the data rows defined by a Teradata PT API schema. The maximum size supported for deferred JSONs is 2,000,000,000 bytes.
- The names of the files that contain deferred JSON data are stored in data rows exactly like VARCHAR column values, with a 2-byte length prefix. The row value is either a fully qualified path name or the name of a file that Teradata PT API looks for in the directory from which it executes. The data type TD_JSON_AS_DEFERRED_BY_NAME indicates that JSON file identifiers are expected in the data stream instead of the JSON data itself.
- JSON data types are TD_JSON and TD_JSON_AS_DEFERRED_BY_NAME.
- The LOB data types are supported with UPDATE and STREAM operators only.
- There are two types of LOB data types (Large Object data types):
- inline LOB
- deferred LOB
- When an inline LOB data type is specified, its column data is stored in the data rows defined by the schema, in the same way the data is stored for non-LOB columns. The maximum size supported for an inline LOB is 64,000 bytes.
- When a deferred LOB data type is specified, its column data is contained in files rather than in the data rows defined by a Teradata PT schema. The maximum size supported for deferred LOBs is 2,000,000,000 bytes.
- The names of the files that contain deferred LOB data are stored in data rows exactly like VARCHAR column values, with a 2-byte length prefix. The row value is either a fully qualified path name or the name of a file that Teradata PT looks for in the directory from which it executes. The data types TD_BLOB_AS_DEFERRED_BY_NAME and TD_CLOB_AS_DEFERRED_BY_NAME indicate that LOB file identifiers are expected in the data stream instead of the LOB data itself.
- LOB data types are TD_BLOB, TD_CLOB, TD_BLOB_AS_DEFERRED_BY_NAME, and TD_CLOB_AS_DEFERRED_BY_NAME.
- XML data types:
- The XML data types are supported with UPDATE and STREAM operators only.
- There are two types of XML data types (Large Object data types): inline XML and deferred XML.
- When an inline XML data type is specified, its column data is stored in the data rows defined by the schema, in the same way the data is stored for non-XML columns. The maximum size supported for an inline XML is 64,000 bytes.
- When a deferred XML data type is specified, its column data is contained in files rather than in the data rows defined by a Teradata PT API schema. The maximum size supported for deferred XML is 2,000,000,000 bytes.
- The names of the files that contain deferred XML data are stored in data rows exactly like VARCHAR column values, with a 2-byte length prefix. The row value is either a fully qualified path name or the name of a file that Teradata PT API looks for in the directory from which it executes. The data type TD_XML_AS_DEFERRED_BY_NAME indicates that XML file identifiers are expected in the data stream instead of the XML data itself.
- XML data types are TD_XML and TD_XML_AS_DEFERRED_BY_NAME.
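The file-name rows described above for deferred JSON, LOB, and XML columns share one layout: a 2-byte length prefix followed by the file name. The following hypothetical helper (not part of the Teradata PT API) sketches that layout, assuming a little-endian length prefix:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical helper (not part of the Teradata PT API) that lays out a
// deferred JSON/LOB/XML column value as described above: a 2-byte length
// prefix (little-endian assumed here) followed by the name of the file
// whose contents hold the actual large-object data.
std::vector<uint8_t> deferredFileField(const std::string& fileName) {
    std::vector<uint8_t> field;
    uint16_t len = static_cast<uint16_t>(fileName.size());
    field.push_back(static_cast<uint8_t>(len & 0xFF));  // low byte of length
    field.push_back(static_cast<uint8_t>(len >> 8));    // high byte of length
    field.insert(field.end(), fileName.begin(), fileName.end());
    return field;
}
```

The file name may be a fully qualified path (e.g. a hypothetical "/tmp/doc1.xml") or bare, in which case the file is resolved against the directory from which Teradata PT executes.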
- Mismatching DECIMAL data will be allowed as long as the job adheres to these three rules:
- The source precision must be less than the schema precision.
- The storage size on the source and schema must be equal.
- The scale values must match.
Since the database stores DECIMAL data as scaled integers, a different scale results in incorrect values being stored in the database.
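The three rules above can be sketched as a validity check. The storage sizes used here are the standard Teradata DECIMAL sizes (precision 1-2 stored in 1 byte, 3-4 in 2, 5-9 in 4, 10-18 in 8, 19-38 in 16); the function itself is illustrative, not part of the API.

```cpp
// Storage size in bytes of a Teradata DECIMAL with the given precision
// (1-2 digits: 1 byte, 3-4: 2, 5-9: 4, 10-18: 8, 19-38: 16).
int decimalStorageBytes(int precision) {
    if (precision <= 2)  return 1;
    if (precision <= 4)  return 2;
    if (precision <= 9)  return 4;
    if (precision <= 18) return 8;
    return 16;
}

// Illustrative check of the three rules above for mismatched DECIMAL data:
// source precision below schema precision, equal storage size, equal scale.
bool decimalMismatchAllowed(int srcPrecision, int srcScale,
                            int schemaPrecision, int schemaScale) {
    return srcPrecision < schemaPrecision
        && decimalStorageBytes(srcPrecision) == decimalStorageBytes(schemaPrecision)
        && srcScale == schemaScale;
}
```

For example, DECIMAL(7,2) source data fits a DECIMAL(9,2) schema column (both use 4-byte storage and the scales match), but not a DECIMAL(10,2) column, whose storage size is 8 bytes.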