Global Types to Spark SQL Data Types
Global Data Types denoted with an asterisk (*) are only available with Teradata Database 16.20 Feature Update 1 and later.
Latin data type mappings, denoted with a double asterisk (**), apply only to data types using ISO_8859_1 or US_ASCII encoding.
Global Data Type | Spark SQL Data Type |
---|---|
G_Array | Array |
G_Array_VC_UTF16 / G_Array_VC_Latin * | Array |
G_BigInt | Bigint |
G_Blob | Binary |
G_Boolean | Boolean |
G_Byte | Binary |
G_ByteInt | Tinyint |
G_Char_Latin ** | String |
G_Char_UTF16 | String |
G_Clob_Latin ** | String |
G_Clob_UTF16 | String |
G_Date | Date |
G_Decimal | Decimal |
G_Double | Double |
G_Float | Float |
G_Integer | Integer |
G_JSON_UTF16 / G_JSON_Latin * | String |
G_Map | Map |
G_Number | Decimal |
G_Row | Struct |
G_SmallInt | Smallint |
G_STGeometry * | String |
G_TimeStamp | Timestamp |
G_Varbyte | Binary |
G_Varchar_Latin ** | String |
G_Varchar_UTF16 | String |
G_XML * | String |
Others | Currently not supported |
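As an orientation to this mapping direction, the following is a minimal PySpark sketch for inspecting the Spark-side schema of a table populated from Teradata through QueryGrid. The table name `sales_export` and its columns are hypothetical; the types named in the comments follow the mapping table above.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("querygrid-type-check")
    .enableHiveSupport()
    .getOrCreate()
)

# Per the mapping table above, a G_BigInt column arrives as bigint,
# G_TimeStamp as timestamp, G_Decimal as decimal, and G_Clob_UTF16 as string.
# The table name (sales_export) is hypothetical.
spark.sql("DESCRIBE TABLE sales_export").show(truncate=False)

# The same information is available programmatically through the schema object.
for field in spark.table("sales_export").schema.fields:
    print(field.name, field.dataType.simpleString())
```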
Spark SQL Data Types to Global Types
Latin data type mappings, denoted with a double asterisk (**), apply only to data types using ISO_8859_1 or US_ASCII encoding.
Spark SQL Data Type | Global Data Type |
---|---|
Array | G_Array |
Bigint | G_BigInt |
Binary | G_Blob |
Boolean | G_Boolean |
Char | G_Char_Latin ** |
Char | G_Char_UTF16 |
Date | G_Date |
Decimal | G_Decimal |
Double | G_Double |
Float | G_Float |
Integer | G_Integer |
Map | G_Map |
Smallint | G_SmallInt |
String | G_Clob_Latin ** |
String | G_Clob_UTF16 |
Struct | G_Row |
Timestamp | G_TimeStamp |
Tinyint | G_ByteInt |
Varchar | G_Varchar_Latin ** |
Varchar | G_Varchar_UTF16 |
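To illustrate the reverse direction, the following is a minimal PySpark sketch of a Spark-side table whose column types appear in the left-hand column above. The table name `event_log` and its columns are hypothetical; the trailing comment on each column names the Global Data Type it would map to per the table above.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("querygrid-ddl-sketch")
    .enableHiveSupport()
    .getOrCreate()
)

# Table and column names are hypothetical; each Spark SQL type below appears
# in the left-hand column of the mapping table above.
spark.sql("""
    CREATE TABLE IF NOT EXISTS event_log (
        event_id  BIGINT,               -- G_BigInt
        event_ts  TIMESTAMP,            -- G_TimeStamp
        amount    DECIMAL(18, 2),       -- G_Decimal
        payload   STRING,               -- G_Clob_Latin or G_Clob_UTF16
        raw_bytes BINARY,               -- G_Blob
        tags      ARRAY<STRING>,        -- G_Array
        attrs     MAP<STRING, STRING>   -- G_Map
    )
""")
```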
Spark SQL String and Binary Types Considerations
Spark SQL String and Binary columns are restricted to a maximum of 1 GB. However, because these types are held in memory, significant resources are required on the Spark side as their size approaches the 1 GB limit. Therefore, use caution when inserting large Teradata CLOB or BLOB columns into Spark SQL String or Binary columns through QueryGrid.
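The following is a minimal PySpark sketch for screening String values that approach the 1 GB limit before moving CLOB data. The table `staged_documents`, the column `doc_text`, and the 900 MB threshold are hypothetical illustrations, not QueryGrid settings.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("lob-size-check")
    .enableHiveSupport()
    .getOrCreate()
)

# Flag rows whose String payload approaches the 1 GB limit before inserting
# CLOB data moved from Teradata. Table, column, and threshold are hypothetical.
df = spark.table("staged_documents")
oversized = df.filter(F.expr("octet_length(doc_text)") > 900 * 1024 * 1024)

print("rows approaching the 1 GB limit:", oversized.count())
```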