Native Object Store Limitations

SQL Data Definition Language Syntax and Examples

Product
Advanced SQL Engine
Teradata Database
Release Number
17.05
Published
January 2021
Language
English (United States)
Last Update
2021-01-22
Product Category
Teradata Vantage™

CSV and JSON Payload Size

  • The maximum payload size for READ_NOS is 16,776,192 bytes for the CSV and JSON data formats.
    • For UNICODE, that is equivalent to 8,388,096 characters.
    • For LATIN, that is equivalent to 16,776,192 characters.

The actual payload size is the size of an individual record in the dataset.
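
For context, a minimal READ_NOS invocation over JSON data looks like the following sketch. The bucket path is hypothetical, and the query assumes a publicly readable bucket; supply an authorization otherwise. Each returned row carries one record in its Payload column, which is where the size limit above applies.

SELECT TOP 5 payload
FROM READ_NOS (
     USING
     LOCATION ('/s3/your-bucket.s3.amazonaws.com/JSONDATA/')
) AS d;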

Parquet Page Size

The maximum Parquet page size supported for both READ_NOS and WRITE_NOS is 16,776,192 bytes (including LOB and LOB-based UDT columns).

If some of the data in the Parquet record is binary, the maximum number of characters is proportionately reduced.

Foreign Table Limitations

  • The following are not supported on foreign tables:
    • Primary indexes
    • Secondary indexes
    • Join indexes
    • Hash indexes
    • Row or column partitioning (except for Parquet foreign tables, which require column partitioning)
    • Collecting PARTITION statistics on Parquet foreign tables
    • ALTER TABLE operations
    • DML operations that change table contents (INSERT, UPDATE, DELETE); see the sketch following this list
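
For instance, given a hypothetical foreign table named riverflow, statements such as the following are rejected (a sketch; the exact error message varies by operation):

UPDATE riverflow SET payload = NULL;          -- fails: DML that changes contents
DELETE FROM riverflow;                        -- fails: DML that changes contents
ALTER TABLE riverflow ADD extra_col INTEGER;  -- fails: ALTER TABLE is not supported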

Parquet Format Limitations

  • The READ_NOS table operator does not support Parquet. However, READ_NOS can be used to view the Parquet schema, using RETURNTYPE('NOSREAD_PARQUET_SCHEMA'). This is helpful in creating the foreign table when you do not know the schema of your Parquet data beforehand; a sample query follows this list.
  • Certain complex data types are not supported, including STRUCT, MAP, LIST, and ENUM.
  • Because support for the STRUCT data type is not available, nested Parquet object stores cannot be processed by Native Object Store.
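
For example, a schema-discovery query might look like the following sketch (the LOCATION path is hypothetical; add an authorization if the bucket is not public):

SELECT * FROM READ_NOS (
     USING
     LOCATION ('/s3/your-bucket.s3.amazonaws.com/PARQUETDATA/')
     RETURNTYPE ('NOSREAD_PARQUET_SCHEMA')
) AS d;

The output describes the columns in the Parquet data, which you can use to write the column list of the corresponding CREATE FOREIGN TABLE statement.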

Bad Values in a Numeric Field

As with any CSV or JSON dataset, a NOS table can become unusable if a newly uploaded Amazon S3 file contains a single bad value in a numeric field. If this happens, you can identify the bad record and field by using TRYCAST. First upload new Amazon S3 files to a temporary location in the object store, and use TRYCAST to verify that the numeric data is not corrupted; then move the data to its permanent location in the object store. See Teradata Vantage™ - SQL Functions, Expressions, and Predicates, B035-1145.

For example:

Create an authorization object, if you have not already done so:

CREATE AUTHORIZATION DefAuth
AS DEFINER TRUSTED
USER 'YOUR-ACCESS-KEY-ID'
PASSWORD 'YOUR-SECRET-ACCESS-KEY';

Create an example foreign table:

CREATE MULTISET FOREIGN TABLE bad_numeric_fields
, EXTERNAL SECURITY DEFINER TRUSTED DefAuth
USING (
     LOCATION  ('/s3/td-usgs.s3.amazonaws.com/DATA-BAD/bad_numeric_data')
);

Select:

SELECT payload FROM bad_numeric_fields;

Result:

Payload
----------------------------------------------------------------------------
{ "site_no":"09396100", "datetime":"2018-07-16 00:00", "Flow":"44.7", "GageHeight":"1.50", "Precipitation":"0.00", "GageHeight2":"1.50"}
{ "site_no":"09396100", "datetime":"2018-07-14 00:00", "Flow":"232", "GageHeight":"2.16", "Precipitation":"0.00", "GageHeight2":"2.16"}
{ "site_no":"09396100", "datetime":"2018-07-14 00:14", "Flow":"186", "GageHeight":"2.05", "Precipitation":"", "GageHeight2":"2.05"}
{ "site_no":"09400812", "datetime":"2018-07-12 00:09", "Flow":"", "GageHeight":"jjjj", "Precipitation":"bcde", "BatteryVoltage":"12.5"}
{ "site_no":"09400815", "datetime":"2018-07-12 00:00", "Flow":"0.00", "GageHeight":"-0.01", "Precipitation":"0.00", "BatteryVoltage":""}
{ "site_no":"09400815", "datetime":"2018-07-12 00:07", "Flow":"", "GageHeight":"", "Precipitation":"", "BatteryVoltage":"12.5"}

Select from the table, casting GageHeight to an integer:

SELECT payload.site_no, payload.GageHeight(INT) FROM bad_numeric_fields;

Result: The query returns an error because some GageHeight values are not numeric.

Use TRYCAST to figure out which data is bad:

SELECT payload.site_no (CHAR(20)), TRYCAST(payload.GageHeight AS INT) FROM bad_numeric_fields;

Sample result:

Payload.site_no       TRYCAST(Payload.GageHeight)
--------------------  ---------------------------
09396100                                        1
09396100                                        2
09396100                                        2
09400812                                        ?
09400815                                        0
09400815                                        0

The results show there is bad data in the record with site_no 09400812.
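
As a follow-on sketch, the same TRYCAST expression can serve as a filter so that only the offending records are returned:

SELECT payload.site_no (CHAR(20)), payload.GageHeight (CHAR(20))
FROM bad_numeric_fields
WHERE TRYCAST(payload.GageHeight AS INT) IS NULL;

Consistent with the sample result above, all-blank values convert to 0 rather than NULL, so only non-numeric text such as "jjjj" is returned by this filter.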

WRITE_NOS Limitations

  • The maximum Parquet page size supported for both READ_NOS and WRITE_NOS is 16,776,192 bytes (including LOB and LOB-based UDT columns).
  • You are responsible for cleaning up object store files after interrupted write operations. Write operations can be interrupted by transaction aborts or system resets, among other causes.
  • Concurrent WRITE_NOS requests to the same location are likely to return an error. If a request receives an error, any objects already written by the request are not deleted from external storage and must be cleaned up manually using tools provided by the external object store vendor, such as s3cmd. To avoid this, Teradata recommends specifying a unique location for each concurrent request.
  • An error is reported if you attempt to write an object with the same path name in the same location.
  • Manifest files are not cumulative. If you want to add entries to a manifest, you must create a new manifest that includes the original entries plus the ones you want to add.
  • An error is reported if you attempt to write another manifest file in the same location. Use the OVERWRITE('TRUE') and MANIFESTONLY('TRUE') options to replace a manifest in the same location; see the sketch below.
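
For example, replacing a manifest in place might look like the following sketch. The manifest path is hypothetical, and the ON clause is assumed to supply the names of the objects the manifest should list; see the WRITE_NOS documentation for the exact input requirements.

SELECT * FROM WRITE_NOS (
     ON ( SELECT 'DATA/object_1.parquet' AS ObjectName
          UNION ALL
          SELECT 'DATA/object_2.parquet' )
     USING
     MANIFESTFILE ('/s3/your-bucket.s3.amazonaws.com/DATA/manifest.json')
     MANIFESTONLY ('TRUE')
     OVERWRITE ('TRUE')
) AS d;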