Related Strategies - Teradata VantageCloud Lake

Lake - Database Reference

Deployment: VantageCloud
Edition: Lake
Product: Teradata VantageCloud Lake
Release Number:
Published: February 2025
Other, functionally similar strategies can also solve this problem. Only prototyping can determine which of the possible choices is best for your application environment and hardware configuration.
  • You can create a global temporary table definition and then populate a materialized instance of it with aggregated result sets. This is less an alternate strategy than a strategy designed for a different application.

    Global temporary tables are private to the session that materializes them, and their data does not persist beyond the end of that session. Unless you specify the ON COMMIT PRESERVE ROWS option when you create the temporary table definition, the contents do not even persist across individual database transactions. Unless provisions are made to write their contents to a persistent base table before the session ends, their data is not saved. Without safeguards built into the process, this method provides no assurance about the integrity of the data it produces. While the contents of any global temporary table are private to the session in which it is materialized, the definition of the table is global within a database, so different sessions and users can each materialize their own instance of the table, populate it, and write the results to the same base table.

    The containing database or user for a global temporary table must have a minimum of 512 bytes of available PERM space to contain the table header for the GTT.
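
    As a sketch of this approach, the following Teradata SQL creates a global temporary table definition, populates a session-private instance with an aggregated result set, and copies the result to a persistent base table. All database, table, and column names here (sales_db, daily_sales_gtt, sales_detail, daily_sales_summary) are hypothetical, not from this reference.

    ```sql
    -- Hypothetical names: sales_db, daily_sales_gtt, sales_detail,
    -- and daily_sales_summary are illustrative only.
    CREATE GLOBAL TEMPORARY TABLE sales_db.daily_sales_gtt (
        sale_date    DATE,
        store_id     INTEGER,
        total_amount DECIMAL(18,2)
    )
    PRIMARY INDEX (sale_date, store_id)
    ON COMMIT PRESERVE ROWS;   -- without this, rows vanish at end of transaction

    -- Populating the table materializes a session-private instance
    INSERT INTO sales_db.daily_sales_gtt
    SELECT sale_date, store_id, SUM(amount)
    FROM sales_db.sales_detail
    GROUP BY sale_date, store_id;

    -- Copy the aggregated rows to a persistent base table before the
    -- session ends; otherwise the data is lost.
    INSERT INTO sales_db.daily_sales_summary
    SELECT sale_date, store_id, total_amount
    FROM sales_db.daily_sales_gtt;
    ```

    The final INSERT ... SELECT is the safeguard the text describes: it is the only step that moves the session-private data into permanent storage.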

  • You can create a volatile table with aggregate expressions defined on some or all of its columns. The drawbacks of global temporary tables for this application apply equally to volatile tables.
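
    A minimal sketch of the volatile table variant, again using hypothetical names (daily_sales_vt, sales_db.sales_detail), might look like this:

    ```sql
    -- Hypothetical names; a volatile table needs no PERM space and its
    -- definition, like its contents, is discarded when the session ends.
    CREATE VOLATILE TABLE daily_sales_vt AS (
        SELECT sale_date, store_id, SUM(amount) AS total_amount
        FROM sales_db.sales_detail
        GROUP BY sale_date, store_id
    ) WITH DATA
    ON COMMIT PRESERVE ROWS;
    ```

    Unlike a global temporary table, a volatile table has no persistent definition, so every session that needs it must issue the full CREATE statement.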
  • You can create a denormalized base table and populate it with an aggregated result set.

    Denormalization reduces the generality of the database for any ad hoc queries or data mining operations you may want to undertake and introduces problematic update anomalies. While a relatively mild degree of denormalization is standard in physically implemented databases, the sort of denormalization called for by this solution is probably beyond what most DBAs find acceptable.

    Because there is no mechanism for keeping such a denormalized table synchronized with the detail tables it summarizes, it can quickly become outdated.

    Nonetheless, for some applications this approach may be the only high-performing solution.
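
    As a sketch of the denormalized base table approach, the following creates a permanent summary table and populates it with an aggregated result set. The names (sales_db, daily_sales_summary, sales_detail) are hypothetical; the refresh comment reflects the synchronization caveat above, not a built-in mechanism.

    ```sql
    -- Hypothetical names; this is a permanent base table, so it
    -- consumes PERM space and survives across sessions.
    CREATE TABLE sales_db.daily_sales_summary (
        sale_date    DATE NOT NULL,
        store_id     INTEGER NOT NULL,
        total_amount DECIMAL(18,2)
    )
    PRIMARY INDEX (sale_date, store_id);

    -- Populate with an aggregated result set. Nothing keeps this table
    -- synchronized with sales_detail, so it must be refreshed explicitly
    -- (for example, by a scheduled DELETE followed by INSERT ... SELECT).
    INSERT INTO sales_db.daily_sales_summary
    SELECT sale_date, store_id, SUM(amount)
    FROM sales_db.sales_detail
    GROUP BY sale_date, store_id;
    ```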