Monitoring CPU Usage

Vantage records both normalized and raw CPU measures. Normalized CPU time is derived by applying a CPU scaling factor to the node-level raw CPU time.

The standard coexistence scaling factors for all node types are predefined in the Open PDE startup parameter file (startup.txt). PDE startup adds the per-node values to the vconfig and tosgetpma() structures for use by other components. In this way, Vantage provides accurate performance statistics for mixed-node systems, particularly for CPU skewing and for capacity planning (that is, usable capacity).

The formula for normalized node-level CPU time is:

      Node_Scaling_Factor * Node_CPU_Time
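
For example, on a hypothetical node with a scaling factor of 2.0, 500 seconds of raw node CPU time is reported as 2.0 * 500 = 1,000 normalized CPU seconds.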

The scaling factor values are derived from:
  • Measuring the raw capability of the Teradata compute node unconstrained by I/O or environment.
  • Basing values on basic functionality, including data access rate, data load rate, row processing rate, and raw transaction rate.

    Currently, the 5100 node is the base, with a scaling factor value of 1.00.

For AMP-level reporting in DBQL and AMPUsage, the formula is:

      Node_Scaling_Factor * AMP_CPU_Time
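
As an illustration only, the following query compares raw and normalized AMP CPU time by user from the DBQL log. It assumes that DBQL query logging is enabled and that the DBC.QryLogV view exposes the AMPCPUTime and AMPCPUTimeNorm columns on this release; adjust the view and column names to your environment.

      SELECT UserName,
             SUM(AMPCPUTime)     AS RawAMPCPUSeconds,
             SUM(AMPCPUTimeNorm) AS NormalizedAMPCPUSeconds
      FROM DBC.QryLogV
      WHERE CAST(StartTime AS DATE) = CURRENT_DATE
      GROUP BY UserName
      ORDER BY NormalizedAMPCPUSeconds DESC;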

ResUsage and DBC.AMPUsage View

The DBC.AMPUsage view presents CPU usage information differently from the way it is presented in ResUsage data.

This facility…        Provides…
ResUsage              Metrics on the system, without making distinctions by individual user or account ID.
DBC.AMPUsage view     AMP usage by individual user or account ID.

AMPUsage counts logical I/Os, not physical.

Some CPU seconds used by the system cannot be accounted for in AMPUsage. Therefore, ResUsage CPU metrics will always be larger than AMPUsage metrics. Typically, AMPUsage captures about 70-90% of ResUsage CPU time.
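
As an illustrative sketch, the following query totals CPU seconds and logical I/Os by user and account from DBC.AMPUsage. The column names (UserName, AccountName, CpuTime, DiskIO) are assumed to match the view on this release.

      SELECT UserName,
             AccountName,
             SUM(CpuTime) AS TotalCPUSeconds,
             SUM(DiskIO)  AS TotalLogicalIO
      FROM DBC.AMPUsage
      GROUP BY UserName, AccountName
      ORDER BY TotalCPUSeconds DESC;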

For more information on the DBC.AMPUsage view, see Teradata Vantage™ - Data Dictionary, B035-1092.

Using the ResNode Macro Set

For capacity planning, generally only ResUsageSpma is required. This is the only table required to make use of the ResNode macro set.

Important information from ResNode includes:
  • CPU utilization
  • Parallel efficiency to show hot nodes or AMPs
  • CPU to I/O balance
  • OS busy versus DBS busy to see the characteristics of the workload
  • Memory utilization
  • Availability and process swapping (paging)
  • Network traffic

Observing Trends

ResUsage data is most useful in seeing trends when there is a reasonably long history (more than a year) available for comparison. Use this data to answer questions, such as:
  • How heavily is the system used at different times of the day or week?
  • When are there peaks or available cycles in utilization?
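
One way to answer these questions is to summarize node CPU busy by hour of day directly from the SPMA data. The sketch below is illustrative only: it assumes the DBC.ResUsageSpma table with CPUUServ, CPUUExec, CPUIdle, and CPUIoWait columns, and that TheTime is a numeric HHMMSS value; the busy ratio is an approximation rather than the exact ResNode calculation.

      SELECT TheDate,
             CAST(TheTime / 10000 AS INTEGER) AS HourOfDay,
             AVG(100 * (CPUUServ + CPUUExec)
                 / NULLIFZERO(CPUUServ + CPUUExec + CPUIdle + CPUIoWait))
                 AS AvgCPUBusyPct
      FROM DBC.ResUsageSpma
      GROUP BY 1, 2
      ORDER BY 1, 2;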

CPU Busy

The ResNode macro reports average CPU busy for each log period. Use it for general system analysis.

The macro contains the Avg CPU Busy column, which is the average CPU utilization for all nodes. Avg CPU Busy % is a measure of how often the multiple CPUs in a node were busy during a log period.
  • In a DSS environment, a small number of jobs can easily bring the CPU close to 100% utilization.
  • High CPU utilization corresponds to an Avg CPU Busy % of 70% or higher over significant periods of time.
  • The CPU is stressed differently in DSS and transaction processing environments.

For ResNode columns, see Teradata Vantage™ - Resource Usage Macros and Tables, B035-1099.

CPU Tasks in a DSS Environment

The following steps describe how CPU tasks are carried out on the node during DSS operations.

  1. Prepare for read:
    • Memory management allocates memory for the data block.
    • Database software communicates with the file system.
    • File system communicates with the disk controller.
  2. Qualify rows. Determine if the row satisfies the WHERE clause condition(s).

    Most DSS operations require full table scans, in which the WHERE clause condition check is relatively time-consuming. Full table scans generally result from SQL statements whose WHERE clause does not provide an index value or enable partition elimination (an EXPLAIN sketch follows this list).

  3. Process rows:
    • Join
    • Sort
    • Aggregate
  4. Format qualifying rows for spool output.
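
The cost of qualifying rows is easiest to see in an EXPLAIN. The sketch below uses a hypothetical Orders table with its primary index on order_id; the table and column names are illustrative only.

      EXPLAIN SELECT * FROM Orders WHERE order_total > 1000;  /* no index value: all-AMP full table scan */
      EXPLAIN SELECT * FROM Orders WHERE order_id = 12345;    /* PI value: single-AMP row access */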

CPU Tasks During Transaction and Batch Maintenance Processing

The following describes how CPU tasks are carried out on the node during transaction and batch maintenance processing.

Notice that the qualify rows activity is missing from these steps. In transaction processing, it is more common for the WHERE clause to provide a value for the PI or USI, so the read itself qualifies the rows and further condition checks against non-indexed columns are typically avoided (see the sketch after this list). All of these CPU tasks occur on the nodes.

  1. Prepare for read:
    • Memory management allocates memory for the data block.
    • Database communicates with the file system.
    • File system communicates with the disk controller.
  2. Update row:
    • Database locates row to be updated.
    • Memory management allocates memory for the new data block to be built.
    • Database updates the changed row and copies the old rows.
    • Database communicates with the file system.
    • File system communicates with the disk controller.
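
For illustration, the following request is a typical transaction-style update against a hypothetical Accounts table whose unique primary index is account_id. Because the WHERE clause supplies the PI value, the read itself qualifies the row on a single AMP.

      UPDATE Accounts
      SET    balance = balance - 100.00
      WHERE  account_id = 12345;   /* UPI access: the read itself qualifies the row */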

Parallel Node Efficiency

Parallel node efficiency is a measure of how evenly the workload is shared among the nodes. The more evenly the nodes are utilized, the higher the parallel efficiency.

Parallel node efficiency is calculated by dividing average node utilization by maximum node utilization. Parallel node efficiency does not consider the heaviness of the workload. It only looks at how evenly the nodes share that workload.

The closer parallel node efficiency is to 100%, the better the nodes work together. When the percentage falls below 100%, one or a few nodes are working much harder than the others in the time period. If node parallel efficiency is below 60% for more than one or two 10-minute log periods, Vantage is not getting the best performance from the parallel architecture.
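
For example, if the average node is 60% busy while the busiest node is 80% busy, parallel node efficiency is 60 / 80 = 75%. As a sketch only, the query below computes this per log period from the assumed DBC.ResUsageSpma columns, using the same approximate busy ratio as the earlier hour-of-day sketch.

      SELECT TheDate,
             TheTime,
             100 * AVG(NodeBusyPct) / NULLIFZERO(MAX(NodeBusyPct))
                 AS ParallelNodeEfficiencyPct
      FROM (
            SELECT TheDate,
                   TheTime,
                   NodeID,
                   100 * (CPUUServ + CPUUExec)
                   / NULLIFZERO(CPUUServ + CPUUExec + CPUIdle + CPUIoWait)
                       AS NodeBusyPct
            FROM DBC.ResUsageSpma
           ) AS NodeBusy
      GROUP BY TheDate, TheTime
      ORDER BY TheDate, TheTime;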

Poor Parallel Efficiency

Possible causes of poor node parallel efficiency include:
  • Down node
  • Uneven number of AMPs per node
  • Skewed table distribution
  • Skewed join or aggregation processing
  • Non-Vantage application running on a TPA node
  • Coexistence system with different speed nodes
Poor parallel efficiency can also occur at the AMP level. Common causes of poor AMP parallel efficiency include:
  • Poor table distribution (You can check this in DBC.TableSizeV; see the sketch after this list.)
  • Skewed processing of an SQL statement:
    • User CPU in seconds (You can check this in DBC.AMPUsage.)
    • Spool (You can check this in DBC.DiskSpaceV.)
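
As a hedged example, the query below estimates per-table distribution skew from DBC.TableSizeV, which returns one row per table per AMP (Vproc). The column names are assumed to match the view on this release; a large gap between the average and maximum per-AMP CurrentPerm points to poor distribution.

      SELECT DataBaseName,
             TableName,
             SUM(CurrentPerm) AS TotalPermBytes,
             AVG(CurrentPerm) AS AvgPermPerAMP,
             MAX(CurrentPerm) AS MaxPermPerAMP,
             100 * AVG(CurrentPerm) / NULLIFZERO(MAX(CurrentPerm))
                 AS DistributionEfficiencyPct
      FROM DBC.TableSizeV
      GROUP BY DataBaseName, TableName
      ORDER BY DistributionEfficiencyPct;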

CPU Use

The following macros provide information on CPU use.

Macro          Table    Purpose
ResCPUByNode   SPMA     Report of how each individual node is utilizing CPUs
ResCPUByPE     SVPR     Report of how each Parsing Engine (PE) utilizes the CPUs on the respective node
ResCPUByAMP    SVPR     Report of how each AMP utilizes the CPUs on the respective node

For information on the above macros, see Teradata Vantage™ - Resource Usage Macros and Tables, B035-1099.