Supported Data Types - Teradata Vantage

Apache Iceberg and Delta Lake Open Table Format on VantageCloud Lake Getting Started

Deployment: VantageCloud
Edition: Lake
Product: Teradata Vantage
Published: December 2024
Locale: en-US
Last edited: 2025-01-03
Teradata → Iceberg Type Mapping
| Teradata Type | Iceberg Type | Read/Write |
| --- | --- | --- |
| BYTEINT | BOOLEAN | Read, Write |
| SMALLINT | INTEGER | Read, Write |
| INTEGER | INTEGER | Read, Write |
| BIGINT | LONG | Read, Write |
| REAL | DOUBLE | Read, Write |
| REAL | FLOAT | Read |
| DATE | DATE | Read, Write |
| DECIMALXX | DECIMAL | Read, Write |
| TIME | TIME (2) | Read, Write |
| TIMESTAMP | TIMESTAMP | Read, Write |
| TIMESTAMP_WTZ | TIMESTAMP_WTZ | Read, Write |
| BYTE(n) for n < 64K | FIXED(n) | Read, Write |
| BLOB (1) | BINARY | Read, Write |
| VARBYTE(n) | UUID | Read, Write |
| CHAR(n) | STRING | Write |
| VARCHAR(n), where UNICODE maxlength is 32000 | STRING | Read, Write |
| VARCHAR(n), where UNICODE maxlength is 32000 | LIST | Read |
| VARCHAR(n), where UNICODE maxlength is 32000 | MAP | Read |
| VARCHAR(n), where UNICODE maxlength is 32000 | STRUCT | Read |
| INTERVAL_YTM_DT (3) | STRING | Write |
| INTERVAL_MONTH_DT (3) | STRING | Write |
| INTERVAL_YEAR_DT (3) | STRING | Write |
| INTERVAL_DAY_DT (3) | STRING | Write |
| INTERVAL_DTH_DT (3) | STRING | Write |
| INTERVAL_DTM_DT (3) | STRING | Write |
| INTERVAL_DTS_DT (3) | STRING | Write |
| INTERVAL_HOUR_DT (3) | STRING | Write |
| INTERVAL_HTM_DT (3) | STRING | Write |
| INTERVAL_HTS_DT (3) | STRING | Write |
| INTERVAL_MINUTE_DT (3) | STRING | Write |
| INTERVAL_MTS_DT (3) | STRING | Write |
| INTERVAL_SECOND_DT (3) | STRING | Write |

Limitations for Iceberg

  • (1) BLOB objects are not supported for write operations.
  • (2) TIME is not supported by Iceberg on the Unity catalog.
  • (3) For more information on reading and writing the INTERVAL type, see 6. Working with INTERVAL Data Type.
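A hedged sketch of the INTERVAL mapping above; the database, table, and column names are hypothetical. Per the table, an INTERVAL value written to an Iceberg table lands in a STRING column, and because the mapping is write-only the column reads back as a character value:

```sql
-- Hypothetical database, table, and column names, for illustration only.
-- Per the mapping above, an INTERVAL DAY value writes to an Iceberg STRING column.
INSERT INTO iceberg_demo_db.retention_policies (policy_id, retention_period)
VALUES (1, INTERVAL '30' DAY);

-- The INTERVAL -> STRING mapping is write-only, so on read the column
-- surfaces as a character (VARCHAR) value rather than an INTERVAL:
SELECT policy_id, retention_period
FROM iceberg_demo_db.retention_policies;
```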
Teradata → Delta Lake Type Mapping
| Teradata Type | Delta Lake Type | Read/Write |
| --- | --- | --- |
| BYTEINT | BOOLEAN | Read, Write |
| SMALLINT | SMALLINT \| SHORT | Read, Write |
| INTEGER | INTEGER | Read, Write |
| BIGINT | BIGINT \| LONG | Read, Write |
| REAL | DOUBLE | Read, Write |
| DATE | DATE (4) | Read, Write |
| DECIMALXX | DECIMAL (1) | Read, Write |
| TIMESTAMP | TIMESTAMP (2) (4) | Read, Write |
| TIMESTAMP_WTZ | TIMESTAMP (2) (4) | Read, Write |
| BYTE(n) for n < 64K | BINARY | Read, Write |
| BLOB | BINARY | Read, Write |
| CHAR(n) | STRING | Write |
| VARCHAR(n), where UNICODE maxlength is 32000 | STRING | Read, Write |
| VARCHAR(n), where UNICODE maxlength is 32000 | ARRAY <elementType> (5) | Read |
| VARCHAR(n), where UNICODE maxlength is 32000 | MAP <keyType, valueType> (1) (5) | Read |
| VARCHAR(n), where UNICODE maxlength is 32000 | STRUCT < [fieldName: fieldType [NOT NULL] [COMMENT str] [, ...]] > (5) | Read |
| INTERVAL_YTM_DT (6) | STRING | Write |
| INTERVAL_MONTH_DT (6) | STRING | Write |
| INTERVAL_YEAR_DT (6) | STRING | Write |
| INTERVAL_DAY_DT (6) | STRING | Write |
| INTERVAL_DTH_DT (6) | STRING | Write |
| INTERVAL_DTM_DT (6) | STRING | Write |
| INTERVAL_DTS_DT (6) | STRING | Write |
| INTERVAL_HOUR_DT (6) | STRING | Write |
| INTERVAL_HTM_DT (6) | STRING | Write |
| INTERVAL_HTS_DT (6) | STRING | Write |
| INTERVAL_MINUTE_DT (6) | STRING | Write |
| INTERVAL_MTS_DT (6) | STRING | Write |
| INTERVAL_SECOND_DT (6) | STRING | Write |
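Since ARRAY, MAP, and STRUCT columns are read into Teradata as VARCHAR, complex Delta values surface as string representations. A hedged sketch, reusing the meteorite_landings table and map_col column that appear in the limitations below; the exact string format shown is illustrative:

```sql
-- map_col is a complex (MAP) column in the Delta table; per the mapping
-- above it is read as a VARCHAR holding a string representation of the map.
SELECT map_col
FROM delta_unity_test.delta_test_db.meteorite_landings;
-- Values come back as text such as '{"red":1,"green":2}', not as a MAP type.
```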

Limitations for Delta Lake

General limitation: In Delta tables, columns of the following data types do not use the user-defined length: VARCHAR, CHAR, VARBYTE, BYTE, BLOB.

  • Reading decimal values for partition columns has incorrect validation.
    • Trailing zeroes introduced when decimal values are rescaled count towards the calculated precision, resulting in a validation error when that precision exceeds the maximum allowed precision, even when the actual value would fit. The workaround is to ensure that decimal values written into partition fields have a precision and scale that do not exceed the maximum precision of the target Delta schema. For example, if a decimal partition field is expected to hold the value 2.1, define it as DECIMAL(3, 2) rather than DECIMAL(2, 2) so that the rescaled value being validated (2.10) fits into the maximum precision (3).
    • Decimal values are not supported in expressions (WHERE clause) if the column is a partition column.
    • Decimal values on INSERT/UPDATE are validated against the range implied by the column's precision and scale. Write operations with decimal values that do not fit into the expected range result in either a numeric overflow error (for most INSERTs):
       Failed [2616 : 22003] Numeric overflow occurred during computation.
      or a decimal range validation error (for some INSERTs and all UPDATEs):
      Decimal value for column 'Column_1' is out of range.
      The workaround is to increase the precision of the target Delta field so that the value fits into the range. For example, if both .4444 and 4444 must be insertable into a DECIMAL(4, 4) field, change the field's type to DECIMAL(8, 4).
  • Reading TIMESTAMP values from partition columns is not supported. Delta Kernel error: Reading partition columns of TimestampType is unsupported.
  • For any data type used for a partition column, reading data from a Delta table whose partition column was renamed right before a SELECT query results in a DeltaBatchReadException. The workaround is to not rename partition columns in Delta tables. Instead, follow these steps:
    1. Drop the table, preserving the data: DROP TABLE <table_name> NO PURGE;
    2. Recreate the table with the desired partition column name: CREATE TABLE <table_name> (fields..) PARTITION BY <new_partition_name>;
  • DATE and TIMESTAMP values in the WHERE clause of UPDATE or DELETE SQL queries cause errors, and the operation does not succeed.
  • Expressions containing a string representation of complex Delta data types that map to Teradata VARCHAR (ARRAY, MAP, STRUCT) are not supported. For example, the following query with a string representation of a map value:
    SELECT * FROM delta_unity_test.delta_test_db.meteorite_landings
    WHERE map_col = '{"red":1,"green":2}';
    is expected to fail with the following error:
    ** Failure 7825 in UDF/XSP/UDM TD_OTFDB.TD_DELTA_READ: SQLSTATE [38001] [TD-delta-read]: Execution: 
     (column(`map_col`) = {"red":1,"green":2}): operands are of different types which are not comparable: left type=map[string, integer], right type=string
    Similar errors are expected for array and struct values represented as strings in Delta expressions.
  • For more information on reading and writing INTERVAL type, see 6. Working with INTERVAL Data Type.
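The decimal-range workaround described earlier in this section can be sketched as follows; the table and column names are hypothetical:

```sql
-- Hypothetical Delta table where amount is defined as DECIMAL(4, 4);
-- that type only holds values strictly between -1 and 1.
INSERT INTO delta_demo_db.payments (payment_id, amount) VALUES (1, 0.4444);  -- fits
INSERT INTO delta_demo_db.payments (payment_id, amount) VALUES (2, 4444);    -- numeric overflow

-- After widening the target Delta field to DECIMAL(8, 4), both values fit:
INSERT INTO delta_demo_db.payments (payment_id, amount) VALUES (2, 4444);    -- fits
```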