Known Limitations (Teradata-to-TargetConnector) - Teradata QueryGrid

QueryGrid™ Installation and User Guide - 3.06

Deployment: VantageCloud, VantageCore
Edition: Enterprise, IntelliFlex, Lake, VMware
Product: Teradata QueryGrid
Release Number: 3.06
Published: December 2024
Product Category: Analytical Ecosystem

The following limitations affect use of QueryGrid connectors in Teradata-to-TargetConnector links. The connector abbreviations used below are T2T (Teradata-to-Teradata), T2P (Teradata-to-Presto), T2H (Teradata-to-Hive), T2S (Teradata-to-Spark), T2O (Teradata-to-Oracle), T2B (Teradata-to-BigQuery), and T2G (Teradata-to-Generic JDBC).

• Dataset CSV storage is not supported. (Applies to: all connectors)
• Transaction semantics between systems are not supported. (Applies to: all connectors)
• Use of the RETURNS clause is not supported for Teradata-to-Teradata links. (Applies to: T2T)
• When using EXPORT clause queries, the LOB UDT data type cannot be exported if no LOBs or LOB UDTs are imported in the FOREIGN TABLE query. (Applies to: all connectors)
• The WITH clause cannot be used inside the FOREIGN TABLE pushdown query. (Applies to: T2P, T2H, T2S, T2O, T2B, T2G)
• The EXPORT clause does not support CHAR/VARCHAR with the Kanji1 character set. (Applies to: all connectors)
• Date literals used in WHERE clauses are not converted to the time zone of the remote system if the remote system time zone differs from the initiator system time zone. (Applies to: all connectors)
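As a hedged illustration of the date-literal limitation above (the table name orders and foreign server name remote_fs are hypothetical):

```sql
-- Hypothetical foreign server query. The literal DATE '2024-12-01' is
-- evaluated in the initiating Teradata system's time zone and sent to the
-- remote system unchanged; if the remote system runs in a different time
-- zone, rows near a day boundary may match unexpectedly.
SELECT *
FROM orders@remote_fs
WHERE order_date = DATE '2024-12-01';
```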
• The maximum size supported for BLOB and CLOB is less than 2 GB (2,097,088,000 bytes). (Applies to: all connectors)
• The maximum size of VARCHAR is 64K. (Applies to: T2T)
• The temporary database name NVP is not supported on Teradata Database version 15.10. (Applies to: T2T)
• When using SQL Engine 17.05 or earlier, a maximum of 8 Teradata connector properties can be overridden during an individual session for a foreign server. (Applies to: all connectors)
• When using SQL Engine 17.05 or earlier, the maximum supported query band length is 1024 bytes. (Applies to: all connectors)
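The query band limit above can be sketched with standard Teradata syntax; the key-value pairs are illustrative:

```sql
-- The entire query band string (keys, values, and separators) counts toward
-- the 1024-byte limit on SQL Engine 17.05 and earlier.
SET QUERY_BAND = 'org=finance;app=nightly_etl;' FOR SESSION;
```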
• The Foreign Function Execution (FFE) feature is not supported for target connectors. (Applies to: T2P, T2H, T2S, T2O, T2B, T2G)
• Presto is limited to queries that can be performed in memory, so some queries that run in Hive may not run in Presto. (Applies to: T2P)
• QueryGrid does not support the TimeWithTimeZone and TimestampWithTimeZone data types with Presto connectors. (Applies to: T2P)
• The following Hive speculative-execution properties are not supported and are disabled by default unless the Support Hive Task Retries parameter is set to True: (Applies to: T2H)
  • mapreduce.map.speculative=false
  • mapreduce.reduce.speculative=false
  • hive.mapred.reduce.tasks.speculative.execution=false
  • tez.am.speculation.enabled=false
• By default, the Hive target connector returns 1 as the number of rows exported, regardless of how many rows were exported during a successful export query. Setting the Collect Approximate Activity Count connector property to true returns the number of rows exported, with the following limitations: (Applies to: T2H)
  • If the Hive table statistics are inaccurate (this is uncommon), enabling this property can add performance overhead to the insert query.
  • If there are concurrent inserts on the Hive table, an inaccurate number of rows may be displayed, resulting in an approximate rather than a precise count.
• If Hive is upgraded or the location of the standard Hive JARs changes, a tdqg-node restart is required. (Applies to: T2H)
• UTF-16 supplementary characters longer than 2 bytes in a table cause data truncation. (Applies to: T2H, T2S, T2O, T2B, T2G)
• IMPORT is not supported on the VARCHAR, STRING, and CHAR columns of a table if the table character set is something other than Latin or UTF-16. (Applies to: T2H, T2S, T2O)
• The Spark connector does not support ACID or transactional tables. (Applies to: T2S)

• After data has been exported and committed to a remote system, any subsequent errors or aborts on the local system do not roll back the remote request. (Applies to: all connectors)
• The Spark SQL connector does not support roles, since roles are not supported by Spark. (Applies to: T2S)
• By default, the Spark SQL target connector returns 1 as the number of rows exported, regardless of how many rows were exported during a successful export query. Setting the Collect Approximate Activity Count connector property to true returns the number of rows exported, with a slight performance overhead. If there are concurrent inserts on the Spark SQL table, an inaccurate number of rows might be displayed, resulting in an approximate rather than a precise count. (Applies to: T2S)
• The following result from possible Apache Spark limitations: (Applies to: T2S)
  • Spark 2.1 and later: When using the Spark initiator, if the schema of a target table changes after a non-native table representing that target table has been created, the non-native table must be re-created to reflect the schema change.
  • Spark 2.2 and later: When importing DATE data using the Spark target connector or exporting DATE data using the Spark initiator, the data value from Spark can be incorrect.
  • Spark 2.2 and later: Spark does not support Char/Varchar; when using the Spark target connector to insert data from QueryGrid into a target table that contains char/varchar columns, the data from QueryGrid may be incorrect. To avoid possibly incorrect data, use String instead of Char/Varchar.
  • If Spark is upgraded or the location of the standard Spark JARs changes, a tdqg-node restart is required.
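The Char/Varchar recommendation above can be sketched as follows; the table and column names are hypothetical:

```sql
-- Hypothetical Spark SQL target table. Declaring text columns as STRING
-- rather than CHAR(n)/VARCHAR(n) avoids the possible incorrect data noted
-- above when inserting from QueryGrid.
CREATE TABLE sales_target (
  order_id BIGINT,
  region   STRING   -- STRING instead of VARCHAR(16)
);
```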
• IMPORT is not supported on the VARCHAR, STRING, and CHAR columns of a Spark table if the table character set is something other than Latin or UTF-16. (Applies to: T2S)
• Condition pushdown of the LIMIT clause is not supported. (Applies to: T2T, T2P, T2H, T2S, T2O, T2B)
• Case-sensitive column names are not supported. (Applies to: T2O, T2B, T2G)
• Comparisons with DATE in the WHERE clause may yield incorrect results. (Applies to: all connectors)
• When converting the UTF-16 character set to Latin, set the NVP to WE8ISO8859P1. (Applies to: T2O)
• The BigQuery connector uses the Storage Read API and Storage Write API, which have some documented limitations. For example, when writing to BigQuery there is a maximum row size of 10 MB and a maximum of 100 concurrent threads when not using a multi-region. See the BigQuery Quotas and Limits documentation. (Applies to: T2B)
• Due to a Google limitation, the HELP FOREIGN SERVER query only returns datasets in the US location. (Applies to: T2B)
• Due to a limitation of the Storage Write API, writing more than 100 GB to BigQuery in a single query risks a premature commit of the data before the query is complete. (Applies to: T2B)
• BigQuery federated data sources are read-only. As a result, the QueryGrid BigQuery connector can read from federated data sources but cannot write to them. (Applies to: T2B)
• The BigQuery connector cannot access datasets in other projects using the following format: (Applies to: T2B)

  select col1 from project_name.dataset_name.table_name@fs_name

  Use the following workaround instead:

  select * from foreign table (select col1 from project_name.dataset_name.table_name)@fs_name ft;
• Due to a Google limitation, exporting JSON numeric data types from Teradata to BigQuery STRUCT is not supported by QueryGrid. (Applies to: T2B)
• The maximum size of VARCHAR is 32K. (Applies to: T2G)
• UDTs are not supported. (Applies to: T2G)
• EXPORT clause queries involving types that are not available on the target database return an error. (Applies to: T2G)
• EXPORT clause queries involving the TIME WITH TIME ZONE and TIMESTAMP WITH TIME ZONE data types are not supported. (Applies to: T2G)
• Export of temporal tables is not supported. (Applies to: all connectors)
• Import of temporal tables is not supported. (Applies to: T2T)