Compute Profiles

Teradata® VantageCloud Lake

Deployment: VantageCloud
Edition: Lake
Product: Teradata Vantage
Published: January 2023
Language: English (United States)
Last Update: 2024-04-03

A compute profile is a scaling policy for the compute clusters in a compute group that are the same size. Because they share a size, these compute clusters also share the same compute map.

A compute group can have one or more compute profiles (for example, a compute profile for large compute clusters and a compute profile for small compute clusters). A compute profile controls the size of the compute group using the INSTANCE clause, which specifies a compute map. Map entries range from TD_COMPUTE_XSMALL (1 node) to TD_COMPUTE_2XL (32 nodes). See compute_map in CREATE COMPUTE PROFILE Syntax Elements.
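The shape of such a definition can be sketched as follows. This is an illustrative sketch only: the group and profile names are hypothetical, and the authoritative clause syntax is in CREATE COMPUTE PROFILE Syntax Elements.

```sql
-- Hypothetical names; consult CREATE COMPUTE PROFILE Syntax Elements for exact syntax.
CREATE COMPUTE PROFILE analytics_medium IN analytics_wg,  -- profile in a compute group
    INSTANCE = TD_COMPUTE_MEDIUM,  -- compute map, from TD_COMPUTE_XSMALL to TD_COMPUTE_2XL
    INSTANCE TYPE = STANDARD
USING
    MIN_COMPUTE_COUNT (1)
    MAX_COMPUTE_COUNT (1);
```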

If a compute group has two or more compute profiles, queries use the compute profile with the largest available compute clusters. Although smaller compute profiles may be defined for the compute group and shown as active, queries do not run on them until the compute profiles with larger compute clusters are suspended by their schedule or the SUSPEND COMPUTE statement.

Because you pay for active compute profiles, Teradata recommends having only one active compute profile at a time, except for a brief overlap when one is taking over from another (the cooldown period). (To see how your organization consumes compute and storage resources, see Review Consumption Usage.)

Switching between compute profiles can be accomplished using a schedule (for examples, see Example: Schedule a Small Compute Cluster for 5:00 PM - 8:00 AM and a Large Compute Cluster for 8:00 AM - 5:00 PM and Example: Schedule a Compute Cluster for Monday-Friday 8:00 AM - 5:00 PM). Queries running on the outgoing compute profile complete only if they finish within its cooldown period; new queries run on the new compute profile.
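The takeover itself can be sketched roughly as below. The names and the cron-style schedule strings are illustrative assumptions, not verbatim from this page; see the scheduling examples referenced above for exact syntax.

```sql
-- Hypothetical names; a scheduled profile that is active overnight.
CREATE COMPUTE PROFILE analytics_small IN analytics_wg,
    INSTANCE = TD_COMPUTE_SMALL
USING
    START_TIME ('0 17 * * *')  -- activate at 5:00 PM
    END_TIME ('0 8 * * *');    -- suspend at 8:00 AM

-- Manually suspending the larger profile lets queries fall back to the smaller one.
SUSPEND COMPUTE FOR COMPUTE PROFILE analytics_large IN COMPUTE GROUP analytics_wg;
```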

Specifying Automatic Scaling

Automatic scaling manages compute clusters based on workload. By providing only the necessary compute clusters, automatic scaling reduces cost.

To create a compute profile that specifies automatic scaling, specify the scaling range with MIN_COMPUTE_COUNT and MAX_COMPUTE_COUNT. For descriptions of these arguments, see CREATE COMPUTE PROFILE Syntax Elements.
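A minimal sketch of such a profile, assuming hypothetical group and profile names; the authoritative argument syntax is in CREATE COMPUTE PROFILE Syntax Elements.

```sql
-- Autoscaling between 1 and 4 small compute clusters; names are illustrative.
CREATE COMPUTE PROFILE analytics_auto IN analytics_wg,
    INSTANCE = TD_COMPUTE_SMALL
USING
    MIN_COMPUTE_COUNT (1)   -- scale-in floor
    MAX_COMPUTE_COUNT (4);  -- scale-out ceiling
```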

How Automatic Scaling Works

Automatic scaling monitors the workload of a compute group, specifically the following values:

  • CPU: Average CPU utilization of active compute clusters in the compute group.
  • Memory: Estimated memory available for active and new queries in the compute group.
  • Count: Number of concurrent queries running on a compute cluster in the compute group.

Based on the monitored values, automatic scaling scales out or scales in.

Scaling out: Activates more compute clusters, up to MAX_COMPUTE_COUNT. Occurs when any of the following is true for two minutes:
  • CPU exceeds 95% for active compute clusters.
  • Memory is at 100% for active compute clusters.
  • Count exceeds 60 concurrent queries for active compute clusters.

Scaling in: Hibernates some active compute clusters, down to MIN_COMPUTE_COUNT. Existing work continues until the cooldown period expires, and any unfinished work restarts on the remaining active compute clusters. Occurs when CPU, Memory, and Count for two or more active compute clusters stay below capacity for two minutes, so that compute clusters can be hibernated.

After scaling out or scaling in (which takes minutes), new work is distributed among the active compute clusters.