
JSON Data Type

Product: Advanced SQL Engine (Teradata Database)
Release Number: 17.05, 17.00
Published: September 2020
Language: English (United States)
Last Update: 2021-01-23
Product Category: Teradata Vantage™

About JSON Shredding

Vantage lets you shred JSON documents to extract the data and store it in relational format. The extracted data can be used to insert or update data in existing database tables.

For simple cases of shredding JSON data, you can use the INSERT...JSON statement. This statement only supports shredding JSON documents stored in text format.
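
For example, a minimal sketch of such an insert, assuming an existing table named emp with columns emp_id and emp_name (the table and column names are illustrative, and the JSON field names are written to match the column names):

    INSERT INTO emp JSON '{"emp_id": 101, "emp_name": "Ana"}';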

For more complex cases of shredding JSON data, Vantage provides the following:

JSON_TABLE
This table operator shreds some or all of the data in a JSON document and creates a derived database table from the shredded data. For faster shredding of large JSON documents (those with many attributes or that shred into many database records), TD_JSONSHRED is recommended instead. A sketch of a JSON_TABLE call appears immediately after this list.
TD_JSONSHRED
This table operator is similar to JSON_TABLE: it shreds some or all of the data in a JSON document and creates a derived database table from the shredded data. It accepts larger JSON documents and shreds the data significantly faster, but it does not support JSONPath expressions and supports fewer output data types. A sketch of a TD_JSONSHRED call appears after the comparison table later in this section.
JSON_SHRED_BATCH and JSON_SHRED_BATCH_U
These stored procedures use successive calls to JSON_TABLE to shred multiple JSON documents and create a conglomerate derived database table that can be used to load or update one or more existing database tables.
JSON_SHRED_BATCH operates on LATIN character set data and JSON_SHRED_BATCH_U operates on UNICODE data; otherwise, the two procedures function identically. The general shape of a JSON_SHRED_BATCH call is sketched later in this section.
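
For example, the following is a minimal, hypothetical sketch of a JSON_TABLE call. It assumes a source table emp_source with an integer key column emp_id and a JSON column emp_doc whose documents have the shape {"employees": [{"name": "Ana", "age": 34}, ...]}; all of these names are illustrative.

    SELECT *
    FROM JSON_TABLE (
        ON (SELECT emp_id, emp_doc FROM emp_source)
        USING
            ROWEXPR('$.employees[*]')
            COLEXPR('[ {"jsonpath" : "$.name", "type" : "VARCHAR(20)"},
                       {"jsonpath" : "$.age",  "type" : "INTEGER"} ]')
    ) AS jt(emp_id, emp_name, emp_age);

In this sketch, ROWEXPR is a JSONPath expression that selects the array elements to be shredded into rows, and COLEXPR maps a JSONPath expression and a data type to each output column.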

The table operators and stored procedures operate on JSON documents stored in text or binary format.
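
As a rough illustration only, the general shape of a JSON_SHRED_BATCH call is sketched below. The input query, the table and column names, and the placeholder shred statement (the JSON document that maps shredded values to target table columns) are all assumptions; the shred statement syntax is defined in the JSON_SHRED_BATCH reference and is not reproduced here.

    CALL SYSLIB.JSON_SHRED_BATCH(
        'SELECT emp_id, emp_doc FROM emp_source',  -- query that retrieves the JSON documents to shred
        '[ { ... shred statement ... } ]',         -- mapping specification (elided placeholder)
        :result_code                               -- output result code
    );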

Differences Between JSON_TABLE and TD_JSONSHRED

Speed
    JSON_TABLE: Slower
    TD_JSONSHRED: Faster

Supports CLOB data type for input and output data
    JSON_TABLE: No. Input is limited to 16 MB (the JSON data type maximum size); limits on output are determined by the data type.
    TD_JSONSHRED: Yes. The CLOB type (2 GB maximum size) is allowed for input and output, in addition to other data types.

Case-sensitive string matching
    JSON_TABLE: Always
    TD_JSONSHRED: Optional

JSONPath support
    JSON_TABLE: Yes
    TD_JSONSHRED: No

Return data types
    JSON_TABLE: More
    TD_JSONSHRED: Fewer. Output can be returned as VARCHAR or CLOB and cast to other types.

Pass-through columns (columns in the input table that do not contain JSON data)
    JSON_TABLE: Appear after the shredded JSON data in the returned table.
    TD_JSONSHRED: Appear before the shredded JSON data in the returned table.

Handling of oversized shredded string data
    JSON_TABLE: Truncates shredded JSON data that is longer than the data "type" specification in the COLEXPR parameter.
    TD_JSONSHRED: Truncates shredded JSON data that is longer than the RETURNTYPE data size specification. Optionally, you can have TD_JSONSHRED fail with an error if the shredded data exceeds the type specification; the error indicates the problematic column.
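
For comparison, the following is a hypothetical sketch of a roughly equivalent TD_JSONSHRED call against the same illustrative emp_source table used in the JSON_TABLE sketch above. The parameter names shown (ROWEXPR, FIELDEXPR, RETURNTYPES) and the exact input requirements should be verified against the TD_JSONSHRED reference for your release. Note that ROWEXPR here names the array to shred rather than taking a JSONPath expression, and that the pass-through emp_id column appears before the shredded columns in the result.

    SELECT *
    FROM TD_JSONSHRED (
        ON (SELECT emp_id, emp_doc FROM emp_source)
        USING
            ROWEXPR('employees')
            FIELDEXPR('name', 'age')
            RETURNTYPES('VARCHAR(20)', 'INTEGER')
    ) AS d(emp_id, emp_name, emp_age);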