Supported Data Types

Teradata® Open Table Format for Apache Iceberg and Delta Lake User Guide

Deployment: VantageCloud Lake; VantageCore (VMware, Enterprise, IntelliFlex)
Product: Teradata Vantage
Release Number: 20.00
Published: October 2025
Teradata → Iceberg Type Mapping
| Teradata Type | Iceberg Type | Read/Write |
| --- | --- | --- |
| BYTEINT | BOOLEAN | Read, Write |
| SMALLINT | INTEGER | Read, Write |
| INTEGER | INTEGER | Read, Write |
| BIGINT | LONG | Read, Write |
| REAL | DOUBLE | Read, Write |
| REAL | FLOAT | Read |
| DATE | DATE | Read, Write |
| DECIMALXX | DECIMAL | Read, Write |
| TIME | TIME | Read, Write |
| TIMESTAMP | TIMESTAMP | Read, Write |
| TIMESTAMP_WTZ | TIMESTAMP_WTZ | Read, Write |
| BYTE(n) for n < 64K | FIXED(n) | Read, Write |
| BLOB | BINARY | Read, Write |
| VARBYTE(n) | UUID | Read, Write |
| CHAR(n) | STRING | Write |
| VARCHAR(n), where UNICODE maxlength is 32000 | STRING | Read, Write |
| VARCHAR(n), where UNICODE maxlength is 32000 | LIST | Read |
| VARCHAR(n), where UNICODE maxlength is 32000 | MAP | Read |
| VARCHAR(n), where UNICODE maxlength is 32000 | STRUCT | Read |
| INTERVAL_YTM_DT | STRING | Write |
| INTERVAL_MONTH_DT | STRING | Write |
| INTERVAL_YEAR_DT | STRING | Write |
| INTERVAL_DAY_DT | STRING | Write |
| INTERVAL_DTH_DT | STRING | Write |
| INTERVAL_DTM_DT | STRING | Write |
| INTERVAL_DTS_DT | STRING | Write |
| INTERVAL_HOUR_DT | STRING | Write |
| INTERVAL_HTM_DT | STRING | Write |
| INTERVAL_HTS_DT | STRING | Write |
| INTERVAL_MINUTE_DT | STRING | Write |
| INTERVAL_MTS_DT | STRING | Write |
| INTERVAL_SECOND_DT | STRING | Write |
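As a sketch of how the mapping surfaces in practice, the following hypothetical Teradata DDL uses only types with a Read, Write Iceberg mapping from the table above. The table and column names are illustrative, and the exact Open Table Format CREATE TABLE options depend on your catalog configuration:

```sql
-- Hypothetical Teradata table whose columns all have a Read, Write
-- Iceberg mapping per the table above (names are illustrative):
CREATE TABLE sales_otf (
    is_active   BYTEINT,         -- maps to Iceberg BOOLEAN
    qty         INTEGER,         -- maps to Iceberg INTEGER
    total_cents BIGINT,          -- maps to Iceberg LONG
    sale_date   DATE,            -- maps to Iceberg DATE
    amount      DECIMAL(10, 2),  -- maps to Iceberg DECIMAL
    note        VARCHAR(32000) CHARACTER SET UNICODE  -- maps to Iceberg STRING
);
```

Note that CHAR(n) columns map to Iceberg STRING for writes only, and REAL reads back from either DOUBLE or FLOAT, so round-tripping those types is not symmetric.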

Limitations for Iceberg

Teradata → Delta Lake Type Mapping
| Teradata Type | Delta Lake Type | Read/Write |
| --- | --- | --- |
| BYTEINT | BOOLEAN | Read, Write |
| SMALLINT | SMALLINT \| SHORT | Read, Write |
| INTEGER | INTEGER | Read, Write |
| BIGINT | BIGINT \| LONG | Read, Write |
| REAL | DOUBLE | Read, Write |
| DATE | DATE | Read, Write |
| DECIMALXX | DECIMAL | Read, Write |
| TIMESTAMP | TIMESTAMP | Read, Write |
| TIMESTAMP_WTZ | TIMESTAMP | Read, Write |
| BYTE(n) for n < 64K | BINARY | Read, Write |
| BLOB | BINARY | Read, Write |
| CHAR(n) | STRING | Write |
| VARCHAR(n), where UNICODE maxlength is 32000 | STRING | Read, Write |
| VARCHAR(n), where UNICODE maxlength is 32000 | ARRAY <elementType> | Read |
| VARCHAR(n), where UNICODE maxlength is 32000 | MAP <keyType, valueType> | Read |
| VARCHAR(n), where UNICODE maxlength is 32000 | STRUCT < [fieldName: fieldType [NOT NULL] [COMMENT str] [, ...]] > | Read |
| INTERVAL_YTM_DT | STRING | Write |
| INTERVAL_MONTH_DT | STRING | Write |
| INTERVAL_YEAR_DT | STRING | Write |
| INTERVAL_DAY_DT | STRING | Write |
| INTERVAL_DTH_DT | STRING | Write |
| INTERVAL_DTM_DT | STRING | Write |
| INTERVAL_DTS_DT | STRING | Write |
| INTERVAL_HOUR_DT | STRING | Write |
| INTERVAL_HTM_DT | STRING | Write |
| INTERVAL_HTS_DT | STRING | Write |
| INTERVAL_MINUTE_DT | STRING | Write |
| INTERVAL_MTS_DT | STRING | Write |
| INTERVAL_SECOND_DT | STRING | Write |

Limitations for Delta Lake

General Limitation: In Delta tables, columns of the following data types do not use the user-defined length: VARCHAR, CHAR, VARBYTE, BYTE, BLOB.
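To illustrate the general limitation above (a hedged sketch; the table and column names are hypothetical): a length declared in Teradata DDL is accepted, but on the Delta side the column is stored as an unbounded type, so the declared length is not used:

```sql
-- Hypothetical DDL: the declared lengths below are accepted by Teradata,
-- but the Delta table stores these columns as unbounded STRING/BINARY
-- types, so the user-defined lengths (10, 16) are not used.
CREATE TABLE delta_demo (
    code CHAR(10),     -- length 10 not used in the Delta table
    name VARCHAR(10),  -- length 10 not used in the Delta table
    tag  VARBYTE(16)   -- length 16 not used in the Delta table
);
```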

  • Reading decimal values for partition columns has incorrect validation.
    • Trailing zeroes in rescaled decimal values count toward total precision during validation, resulting in validation errors when the calculated precision exceeds the maximum allowed precision, even when the actual value would fit. The workaround is to define decimal partition fields with precision and scale that accommodate the rescaled value within the target Delta schema's maximum precision. For example, if a decimal partition field is expected to hold the value 2.1, define it as DECIMAL(3, 2) rather than DECIMAL(2, 2), so that the rescaled value being validated (2.10) fits into the maximum precision (3).
    • Decimal values are not supported in expressions (for example, in a WHERE clause) when the column is a partition column.
    • Decimal values on INSERT/UPDATE are validated against the range implied by the column's precision and scale. Write operations with decimal values that do not fit into the expected range result in either a numeric overflow error (for most INSERTs):
       Failed [2616 : 22003] Numeric overflow occurred during computation.
      or a decimal range validation error (for some INSERTs and all UPDATEs):
      Decimal value for column 'Column_1' is out of range.
      The workaround is to increase the precision of the target Delta field so that the value fits into the range. For example, if both .4444 and 4444 must be accepted by a DECIMAL(4, 4) field, modify the field to DECIMAL(8, 4).
  • Reading TIMESTAMP values from partition columns is not supported. Delta Kernel error: Reading partition columns of TimestampType is unsupported.
  • For any data type used for a partition column, reading data from a Delta table whose partition column was renamed immediately before a SELECT query results in a DeltaBatchReadException. The workaround is to avoid renaming partition columns in Delta tables. Instead, follow these steps:
    1. Drop the table, preserving the data: DROP TABLE <table_name> NO PURGE;
    2. Recreate the table with the desired partition column name: CREATE TABLE <table_name> (fields..) PARTITION BY <new_partition_name>;
  • DATE and TIMESTAMP values in the WHERE clause of UPDATE or DELETE SQL queries cause errors, and the operation fails.
  • Expressions containing a string representation of complex Delta data types that are mapped to Teradata VARCHAR types (ARRAY, MAP, STRUCT) are not supported. For example, the following query with a string representation of a map value:
    SELECT * FROM delta_unity_test.delta_test_db.meteorite_landings
    WHERE map_col = '{"red":1,"green":2}';
    fails with the following error:
    ** Failure 7825 in UDF/XSP/UDM TD_OTFDB.TD_DELTA_READ: SQLSTATE [38001] [TD-delta-read]: Execution: 
     (column(`map_col`) = {"red":1,"green":2}): operands are of different types which are not comparable: left type=map[string, integer], right type=string
    Similar errors are expected for array and struct values represented as strings in Delta expressions.
  • See Working with INTERVAL Data Type.
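The decimal range workaround described above can be sketched as follows (a hedged example; the table and column names are hypothetical): define the target field with enough precision up front for every value you expect to write, rather than relying on the minimal precision of any single value.

```sql
-- Hypothetical sketch of the decimal range workaround above.
-- DECIMAL(4, 4) accepts .4444 but rejects 4444 (out of range);
-- DECIMAL(8, 4) accepts both values.
CREATE TABLE delta_decimals (
    d DECIMAL(8, 4)  -- was DECIMAL(4, 4); widened so 4444.0000 fits
);
INSERT INTO delta_decimals (d) VALUES (.4444);  -- fits either definition
INSERT INTO delta_decimals (d) VALUES (4444);   -- needs precision 8, scale 4
```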