Optimizing the Aggregate Cache - Analytics Database - Teradata Vantage

SQL Data Definition Language Detailed Topics
Analytics Database, Teradata Vantage™, June 2022
Appropriate management of the aggregate cache used by aggregate UDFs is very important for optimizing their performance. Excessive cache paging can have an enormous negative effect on the performance of an aggregate UDF.

You can specify the interim aggregate storage size for a UDF, that is, the number of cache bytes the function uses for each interim aggregation group, as any value from a minimum of 1 byte to a maximum of 64 KB (64,000 bytes).

Choosing the optimal aggregate storage size is important because it determines how many interim group entries the fixed-size aggregate cache can hold. For example, if you specify an aggregate storage size of 64 KB, then the maximum number of interim group cache entries is 15, which is determined as follows: 1 MB / 64 KB = 1,000,000 bytes / 64,000 bytes = 15 entries (fractional entries are discarded).

If an aggregation requires more than 15 group cache entries, then those entries must be flushed to disk more frequently than would be necessary with a smaller aggregate storage allocation.

Aggregate UDFs use the same aggregate processing functionality as system-defined aggregate functions like SUM and MAX (see Teradata Vantage™ - SQL Functions, Expressions, and Predicates, B035-1145). Aggregate processing creates interim group entries to maintain the aggregation information between rows associated with the same aggregation group. The system caches these interim group entries to enhance performance, but in those cases where the number of interim groups exceeds the capacity of the aggregate cache to contain them, the least recently used entries are paged to a spool on disk. The system does not attempt to retrieve or combine interim group entries from spool until the last step of aggregate processing.

To achieve optimum performance, your goal is to keep all of the interim group entries in cache by minimizing the number of aggregation phases per group. The best way to improve aggregate cache usage for an aggregate UDF is to adjust the size of the aggregate interim group (the aggregate storage size) to exactly what is needed: no more and no less. Aggregation works correctly no matter what aggregate storage size you specify, but you can optimize its performance by adjusting the interim storage size using the CLASS AGGREGATE interim_size option when you create or replace an aggregate UDF.
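As a minimal sketch of the DDL, the interim_size value is set in the CLASS AGGREGATE clause when the function is created or replaced. The function name, parameter list, and external name below are hypothetical placeholders; only the CLASS AGGREGATE (interim_size) value, in bytes, is the point:

```sql
-- Sketch only: function name, parameter, and external name are hypothetical.
-- CLASS AGGREGATE (interim_size) sets the aggregate storage size in bytes.
-- Here 16 bytes is assumed to be enough for this function's interim state,
-- for example a running sum (FLOAT) plus a row count (INTEGER).
REPLACE FUNCTION running_avg (x FLOAT)
RETURNS FLOAT
CLASS AGGREGATE (16)
LANGUAGE C
NO SQL
PARAMETER STYLE SQL
EXTERNAL NAME 'CS!running_avg!running_avg.c';
```

With a 16-byte interim size, the fixed 1 MB aggregate cache can hold on the order of 62,500 interim group entries (1,000,000 / 16) before any paging to spool occurs.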

The task is to optimize the trade-off between maximizing the number of entries that can share the aggregate cache and allocating enough memory per entry to handle the aggregation work each group requires.

As long as the aggregate cache does not overflow, there are at most (number_of_AMPs - 1) aggregation phases for each group. But if the aggregate cache overflows, a large number of aggregation phases can occur, which has a negative impact on performance.

Three variables must be considered when adjusting the aggregate storage size:

Variable Description
Aggregate cache size This value is fixed in the database at 1 MB.
Aggregate storage size This value is specified by the CLASS AGGREGATE interim_size option. It can range from a minimum of 1 byte to a maximum of 64 KB (64,000 bytes), with a default value of 64 bytes.
Number of interim group cache entries This value depends on the aggregate storage size, and is calculated as follows: aggregate cache size / aggregate storage size, truncated to an integer.

Note that the number of interim aggregation groups for an aggregate UDF is a direct function of the size of the aggregate storage specified by the UDF definition. The following table shows two extreme examples of this ranging over three orders of magnitude.

Aggregate Storage Size Number of Interim Groups That Can Be Contained in Cache
64 bytes 15,625
64 KB 15
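Both rows of the table follow directly from dividing the fixed 1 MB cache by the aggregate storage size. The arithmetic can be checked with a simple integer division (the 1,000,000-byte cache figure and the truncation of fractional entries are the assumptions here):

```sql
-- Integer division reproduces both entry counts from the table above.
SELECT 1000000 / 64     AS entries_64_bytes,  -- 15625 interim groups
       1000000 / 64000  AS entries_64_kb;     -- 15 interim groups
```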

As noted previously, when the number of interim groups exceeds the number that can be contained in the aggregate cache defined for the function, the cache overflows and some number of the least recently used entries are flushed to a spool. This paging of aggregation groups permits an unlimited number of groups to be processed, but at the cost of an additional aggregation phase for each aggregate cache entry written to spool plus the performance impact of writing to, and reading from, the spool on disk.