Open Table Format Data Types

Teradata® VantageCloud Lake

Supported Compression Types

| Compression Type | Parquet | ORC | AVRO |
| --- | --- | --- | --- |
| Snappy | Yes | Yes | Yes |
| ZSTD (zstandard) | Yes | Yes | Yes |
| ZLIB | No | Yes | No |
| LZ4 | No | Yes | No |
| GZIP (deflate) | Yes | No | Yes |

Supported Data Types

| Teradata Type | Iceberg Type | Delta Type |
| --- | --- | --- |
| BYTEINT | BOOLEAN | BOOLEAN |
| SMALLINT | INTEGER | SMALLINT |
| INTEGER | INTEGER | INTEGER |
| BIGINT | LONG | BIGINT |
| REAL | DOUBLE | DOUBLE |
| DATE | DATE | DATE |
| DECIMAL1 | DECIMAL | DECIMAL |
| DECIMAL2 | DECIMAL | DECIMAL |
| DECIMAL4 | DECIMAL | DECIMAL |
| DECIMAL8 | DECIMAL | DECIMAL |
| DECIMAL16 | DECIMAL | DECIMAL |
| NUMBER_DT | DECIMAL | DECIMAL |
| VARCHAR | STRING | STRING |
| TIME | TIME | N/A |
| TIMESTAMP | TIMESTAMP | TIMESTAMP |
| TIMESTAMP_WTZ | TIMESTAMP_WTZ | TIMESTAMP (UTC) |
| BYTE(n) | FIXED(n) | BINARY |
| VARBYTE(n) | BINARY | BINARY |
| CHAR(n) - LATIN | UUID | STRING |
| INTERVAL_YTM_DT | STRING | STRING |
| INTERVAL_MONTH_DT | STRING | STRING |
| INTERVAL_YEAR_DT | STRING | STRING |
| INTERVAL_DAY_DT | STRING | STRING |
| INTERVAL_DTH_DT | STRING | STRING |
| INTERVAL_DTM_DT | STRING | STRING |
| INTERVAL_DTS_DT | STRING | STRING |
| INTERVAL_HOUR_DT | STRING | STRING |
| INTERVAL_HTM_DT | STRING | STRING |
| INTERVAL_HTS_DT | STRING | STRING |
| INTERVAL_MINUTE_DT | STRING | STRING |
| INTERVAL_MTS_DT | STRING | STRING |
| INTERVAL_SECOND_DT | STRING | STRING |
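
As a quick illustration of the mapping, the following sketch creates an Iceberg table; the datalake object (my_datalake), namespace (my_db), and table name are hypothetical placeholders, and the comments show the Iceberg type each Teradata column maps to per the table above:

```sql
-- Hypothetical names throughout; column types follow the mapping table above.
CREATE TABLE my_datalake.my_db.sensor_readings (
    reading_id    BIGINT,        -- Iceberg LONG
    device_name   VARCHAR(64),   -- Iceberg STRING
    reading_value DECIMAL(8,2),  -- Iceberg DECIMAL
    reading_date  DATE,          -- Iceberg DATE
    reading_ts    TIMESTAMP      -- Iceberg TIMESTAMP
);
```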

Data Type Limitations

The following limitations apply, listed by table format and impacted data type.

Delta: DECIMAL

Reading decimal values for partition columns has incorrect validation: trailing zeros in rescaled decimal values count toward total precision, so a validation error is raised when the calculated precision exceeds the maximum allowed precision, even though the actual value fits. Decimal values written into a partition field must have precision + scale not exceeding the maximum precision of the target Delta schema.

Workaround: When creating a Delta schema with decimal partition fields, allocate more precision digits for partition columns than the maximum value strictly requires. For example, if a decimal partition field is expected to hold the value 2.1, define it as DECIMAL(3,2) rather than DECIMAL(2,2), so that the rescaled value being validated (2.10) fits within the maximum precision (3).
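
A minimal sketch of that workaround, using hypothetical names throughout (my_datalake, my_db, measurements); the PARTITION BY clause is illustrative, and the exact partition-declaration syntax depends on how the Delta schema is created:

```sql
-- Hypothetical names; partition-declaration syntax is illustrative.
-- The partition column gets one more precision digit than the value needs,
-- so the rescaled value 2.10 (3 significant digits) fits precision 3.
CREATE TABLE my_datalake.my_db.measurements (
    item_id INTEGER,
    rate    DECIMAL(3,2)  -- not DECIMAL(2,2): validation checks 2.10 against max precision
)
PARTITION BY (rate);
```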

Decimal values are not supported in expressions, such as the WHERE clause, when the column is a partition column. No workaround is available.

Decimal values on INSERT and UPDATE are evaluated to fit into the range calculated from precision and scale.* Write operations with decimal values that do not fit into the range are expected to fail with either a numeric overflow error (for most INSERTs):

    Failed [2616 : 22003] Numeric overflow occurred during computation.

or a decimal range validation error (for some INSERTs and all UPDATEs)**:

    Decimal value for column 'Column_1' is out of range. Try increasing the column precision to extend the range.

* The precision - scale integer digits dictate the maximum absolute value; for example, for DECIMAL(4,4) the range is [0, .9999], for DECIMAL(4,2) it is [0, 99.99], and for DECIMAL(8,4) it is [0, 9999.9999].

** Error messages for INSERT operations do not include the column name and show a COLUMN_X alias instead. For UPDATE operations, the actual column name is shown in the message.

No workaround is available.
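
As an illustration of the range rule, the following sketch (hypothetical datalake, namespace, and table names) shows an INSERT that fits DECIMAL(4,2) and one that is expected to overflow:

```sql
-- Hypothetical Delta table; DECIMAL(4,2) accepts absolute values up to 99.99.
CREATE TABLE my_datalake.my_db.prices (
    item_id INTEGER,
    price   DECIMAL(4,2)
);

INSERT INTO my_datalake.my_db.prices VALUES (1, 99.99);  -- fits the range
INSERT INTO my_datalake.my_db.prices VALUES (2, 123.45); -- expected: error 2616, numeric overflow
```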

Iceberg, Delta: ARRAY

Delta Read and Iceberg Read support reading array data, such as arrays of VARCHAR elements. Writing array values (INSERT, UPDATE, DELETE) with Delta Write and Iceberg Write is currently not supported. No workaround is available.

Delta: VARCHAR, CHAR, VARBYTE, BYTE, BLOB

Columns of these character and byte types do not use a user-defined length. The Databricks API for Delta write functionality does not expose length as an option; Spark has a data type that allows a length, but it is not compatible with the Databricks API. No workaround is available.

Delta: TIME

Delta does not support the TIME data type, so TIME values are not supported for Delta Write.

Workaround: Use TIMESTAMP instead of TIME when creating Delta tables. If data needs to be inserted into a Delta table from a source containing TIME fields (for example, a local Teradata table or a remote Iceberg table), you must cast the TIME data to a TIMESTAMP value so it can be written to the Delta table, as in the sketch below.
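
A minimal sketch of that cast, assuming a hypothetical local source table (src_events) with a TIME column and a Delta target whose corresponding column was created as TIMESTAMP; the date portion the cast supplies follows Teradata casting rules:

```sql
-- Hypothetical names throughout. The Delta column event_ts was created as
-- TIMESTAMP because Delta has no TIME type; the TIME value is cast on insert.
INSERT INTO my_datalake.my_db.events (event_id, event_ts)
SELECT event_id,
       CAST(event_time AS TIMESTAMP(6))  -- TIME cast to TIMESTAMP for Delta Write
FROM src_events;
```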

Iceberg, Delta: BLOB

BLOB objects are not supported for write operations; reading these values is still supported.

Workaround: Use VARCHAR to store a link to the large binary object; the client then reads the object from that location.
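
A sketch of that pattern with hypothetical names; the row stores only an object-store URL, and the client fetches the binary itself:

```sql
-- Hypothetical table: the binary payload stays in object storage and the
-- row stores only its location as a VARCHAR link.
CREATE TABLE my_datalake.my_db.documents (
    doc_id  INTEGER,
    doc_url VARCHAR(1024)  -- link to the large binary object
);

INSERT INTO my_datalake.my_db.documents
VALUES (1, 's3://my-bucket/docs/report.pdf');
```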