NOS enables you to do the following:
- Analyze data stored in external object storage
- Read data in CSV, JSON, or Parquet format from external object storage
- Join or aggregate external data with relational data stored in the Analytics Database
- Query cold data offloaded to external object storage
- Load data from external object storage into the database using a single SQL request
- Write Analytics Database data to external object storage. The data to be written can come from a table, derived results, another object store, a QueryGrid federated query, and so on.
Foreign Tables
Users with the CREATE TABLE privilege can create a foreign table inside the database, point this virtual table to an external storage location, and use SQL to translate the external data into a form useful for business.
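As a sketch of the idea, a foreign table over an object store bucket might be declared as follows. The table name, authorization object, and bucket path are illustrative, not taken from this document:

```sql
-- Illustrative only: the authorization object name and bucket path are examples.
CREATE FOREIGN TABLE riverflow_ft
, EXTERNAL SECURITY DEFINER TRUSTED MyAuthObj
USING (
    LOCATION ('/s3/mybucket.s3.amazonaws.com/JSONDATA/')
);

-- Once created, the foreign table is queried with ordinary SQL:
SELECT TOP 10 * FROM riverflow_ft;
```

The foreign table itself stores no data; each query reads the objects at the LOCATION path at execution time.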
READ_NOS
READ_NOS allows you to do the following:
- Perform an ad hoc query on all supported data formats, with the data in place on external object storage
- List all the objects and the path structure of an object store
- Discover the schema of the data
- Read CSV, JSON, and Parquet data
- Bypass creating a foreign table in the Analytics Database
- Load data into the database with INSERT ... SELECT, where the SELECT references READ_NOS
- Use a foreign table to query data stored by READ_NOS
- Use Delta Lake manifest files
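The capabilities above can be sketched with the READ_NOS table operator. The locations, authorization object, and target table are placeholders, not values from this document:

```sql
-- Illustrative only: location, authorization object, and table names are placeholders.

-- Ad hoc query of external data in place:
SELECT TOP 10 *
FROM READ_NOS (
    USING
        LOCATION ('/s3/mybucket.s3.amazonaws.com/csvdata/')
        AUTHORIZATION (MyAuthObj)
        RETURNTYPE ('NOSREAD_RECORD')
) AS d;

-- RETURNTYPE ('NOSREAD_KEYS') lists the objects and path structure instead,
-- and RETURNTYPE ('NOSREAD_SCHEMA') discovers the schema of the data.

-- Load external data into a permanent table in one statement:
INSERT INTO local_sales
SELECT *
FROM READ_NOS (
    USING
        LOCATION ('/s3/mybucket.s3.amazonaws.com/csvdata/')
        AUTHORIZATION (MyAuthObj)
) AS d;
```

Because READ_NOS is invoked directly in the FROM clause, no foreign table needs to be created first.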
Writing Data to External Object Storage
WRITE_NOS
WRITE_NOS allows you to write data from database tables to external object storage and store it in Parquet format. Data stored by WRITE_NOS can be queried using a foreign table or READ_NOS. WRITE_NOS allows you to do the following:
- Extract selected columns or all columns from an Analytics Database table or from derived results and write them to external object storage in Parquet format
- Write to Teradata-supported external object storage, such as Amazon S3
- Load data into the database with INSERT ... SELECT, where the SELECT references WRITE_NOS
- Use a foreign table to query data stored by WRITE_NOS
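A minimal sketch of a WRITE_NOS call follows. The source table, log table, location, and authorization object are illustrative placeholders; WRITE_NOS returns one row of metadata per object written, which the INSERT ... SELECT captures in a database table:

```sql
-- Illustrative only: table names, location, and authorization object are placeholders.
-- Write selected columns to external object storage as Parquet, and capture
-- the list of written objects via INSERT ... SELECT:
INSERT INTO written_objects_log
SELECT *
FROM WRITE_NOS (
    ON (SELECT order_id, order_date, amount FROM sales)
    USING
        LOCATION ('/s3/mybucket.s3.amazonaws.com/sales-export/')
        AUTHORIZATION (MyAuthObj)
        STOREDAS ('PARQUET')
) AS w;
```

The exported Parquet objects can then be read back with READ_NOS or through a foreign table pointing at the same LOCATION.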
Supported External Object Storage Platforms
- Amazon S3
- Azure Data Lake Storage Gen2
- Cloudera
- Cloudian HyperStore
- Dell EMC/ECS
- Google Cloud Storage
- Hitachi Content Platform
- IBM Cloud Object Store (IBM COS)
- Microsoft Azure Blob storage
- MinIO
- NetApp StorageGRID
- RedHat Ceph
- Scality Ring
- VAST Data
- Vcinity VAccess
Supported Compression Formats
External data may arrive from an object store in compressed form. In that case, the data is decompressed inside the Analytics Database; any decryption is completed on the object store before the data is transmitted. GZIP is the only compression format supported for CSV and JSON; Brotli, Snappy, and Zstd are supported for Parquet. The database recognizes the ".gz" suffix on incoming files and performs the decompression automatically. Note that compression involves trade-offs, such as CPU overhead in exchange for reduced network bandwidth.
Encryption
To encrypt files written to an object store, configure the destination bucket to encrypt all objects using server-side encryption. Server-side encryption at the bucket level is supported by WRITE_NOS, READ_NOS, and foreign tables.
Note that all data is transmitted between the Vantage platform and external object storage using TLS encryption, regardless of whether the data is encrypted at rest in the object store.