How Does Deep Drill Down Analysis Work? - Teradata Workload Analyzer

Teradata Workload Analyzer User Guide

Product: Teradata Workload Analyzer
Release Number: 16.20
Published: October 2018
Language: English (United States)
Last Update: 2018-10-12

Deep drill-down analysis is a recursive process for analyzing correlation and distribution parameters in greater depth. If the current analysis parameters do not satisfy the DBA, more appropriate parameters can be selected by reviewing the distinct values and ranges of the other parameters.

For example, with respect to the distinct value counts, one particular workload could display the following characteristics:
  • UserName (24)
  • Applications (1)
  • Account Name (1)
  • Client Addresses (2)
  • Queryband (3)
    • Function (3)
    • Urgency (1)
    • AggLevel (8)
  • Estimated Processing Time (zero to 1000 seconds)
  • AMP Count (zero to 1)
See Configuring Application Options for information on viewing distinct value counts in workloads.

In this example, the DBA now knows that there is only one distinct Application and one distinct Account, and that all requests run at the same urgency. Attempting to identify a correlation against different Application, Account, or Urgency values would be wasted effort. However, the opportunity for correlation does exist with User Name, Function, and AggLevel, and the DBA could pursue those correlation options. For the distribution parameter ranges, an Estimated Processing Time range from zero to 1000 seconds suggests that a wide variety of requests is included in this workload. The opportunity for identifying clusters is higher with this range than if the Estimated Processing Time range were just zero to one second.
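
This reasoning can be sketched in a few lines of Python. The distinct counts and ranges below mirror the example above; the code and its names are illustrative only and are not part of Teradata WA, which surfaces these counts in its GUI.

    # A minimal sketch of the reasoning above, using the distinct value
    # counts from the example. Illustration only, not a product API.

    distinct_counts = {
        "UserName": 24,
        "Applications": 1,
        "Account Name": 1,
        "Client Addresses": 2,
        "Queryband.Function": 3,
        "Queryband.Urgency": 1,
        "Queryband.AggLevel": 8,
    }

    # Distribution parameter ranges (low, high) in their native units.
    distribution_ranges = {
        "Estimated Processing Time (s)": (0, 1000),
        "AMP Count": (0, 1),
    }

    # A parameter with only one distinct value cannot correlate anything:
    # every request in the workload shares that value.
    candidates = [p for p, n in distinct_counts.items() if n > 1]
    dead_ends = [p for p, n in distinct_counts.items() if n == 1]

    print("Worth correlating on:", candidates)
    print("Wasted effort:", dead_ends)

    # A wide range hints that distinct clusters may exist inside the workload.
    for name, (low, high) in distribution_ranges.items():
        spread = high - low
        print(f"{name}: range {spread} -> "
              f"{'likely clusters' if spread > 1 else 'little variation'}")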

The DBA may add clusters to the current workload for deeper analysis, or split clusters off into a new workload. The DBA may repeat this process until a good set of workloads is defined, or until all unassigned clusters are assigned to workloads.

Teradata WA uses an assigned and unassigned cluster concept. Each cluster (for example, Accounts, Users, QueryBands) found during analysis is initially unassigned. Clusters become assigned when they are added to the current workload for deeper analysis, or when they are split out into a new workload. The unassigned clusters remain available for subsequent action by the DBA, if desired. If the same analysis parameter is clicked again, Teradata WA displays an informational message and then brings back all unassigned clusters.
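
The following Python sketch models this assigned/unassigned bookkeeping. The ClusterTracker class and its method names are hypothetical; they only illustrate the concept and are not a Teradata WA interface.

    # A minimal sketch of the assigned/unassigned cluster concept.
    # Class and method names are hypothetical.

    class ClusterTracker:
        def __init__(self, clusters):
            # Every cluster found during analysis starts out unassigned.
            self.unassigned = set(clusters)
            self.assigned = {}          # cluster -> workload name

        def add_to_workload(self, cluster, workload):
            """Assign a cluster, either to the current workload or to a
            new workload split off from it."""
            if cluster in self.unassigned:
                self.unassigned.remove(cluster)
                self.assigned[cluster] = workload

        def undo(self, workload):
            """Reverse a previous add/split: clusters return to the
            unassigned pool for new add/split operations."""
            for cluster, wd in list(self.assigned.items()):
                if wd == workload:
                    del self.assigned[cluster]
                    self.unassigned.add(cluster)

        def reanalyze(self):
            """Re-running the same analysis parameter brings back all
            clusters that are still unassigned."""
            return sorted(self.unassigned)


    tracker = ClusterTracker(["Account=A", "Account=B", "Account=C"])
    tracker.add_to_workload("Account=A", "WD-AccountA")
    print(tracker.reanalyze())   # ['Account=B', 'Account=C'] remain unassigned
    tracker.undo("WD-AccountA")
    print(tracker.reanalyze())   # all three clusters are unassigned again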



If unassigned clusters are not used by the DBA, the associated requests are relegated to a different workload after the ruleset changes are saved. For example, consider the following set of six workloads that were generated after the first level of analysis on Accounts, where Workload A is defined for classification Account=A:

Workload A with classification Account=A

The DBA decides to analyze workload A, which consumes 35% of the CPU. Based on some criterion (for example, Client User), the DBA determines that one element should be isolated and treated differently than the other elements. The DBA can either split the existing workload or add classification to the existing workload.

If the DBA splits off that element, the result is a new workload, A2, with classification Account=A and Client User=xyz. Workload A2 automatically has a higher evaluation order than the original workload A, which ensures that requests from client user xyz execute within workload A2 while all other client users execute within workload A. The CPU distribution divided between the old workload (A) and the new workload (A2) is shown in the following figure:

Workload A and Workload A2 with CPU Distribution Division

Alternatively, if the DBA chooses instead to add classification to the existing workload (so that the classification of workload A is now Account=A and Client User=xyz), the unselected elements are designated “unassigned,” as depicted below. If not acted upon further, the unassigned elements end up executing within WD-Default, because no other workload exists that would capture requests with classification Account=A and a client user other than xyz.

Unassigned Elements after Classification is added to Existing Workload
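
The following Python sketch contrasts the two scenarios. The evaluate() helper and the workload definitions are hypothetical simplifications of ruleset classification, intended only to show why the split keeps other client users in workload A while the added classification sends them to WD-Default.

    # A minimal sketch of request routing in the two scenarios above.
    # The helper and workload definitions are hypothetical simplifications.

    def evaluate(request, workloads):
        """Return the first workload (in evaluation order) whose
        classification criteria all match the request; otherwise fall
        back to WD-Default."""
        for name, criteria in workloads:
            if all(request.get(k) == v for k, v in criteria.items()):
                return name
        return "WD-Default"

    request_xyz   = {"Account": "A", "ClientUser": "xyz"}
    request_other = {"Account": "A", "ClientUser": "abc"}

    # Scenario 1: split. A2 has a higher evaluation order than A.
    split = [
        ("A2", {"Account": "A", "ClientUser": "xyz"}),
        ("A",  {"Account": "A"}),
    ]
    print(evaluate(request_xyz, split))    # A2
    print(evaluate(request_other, split))  # A: other client users stay in A

    # Scenario 2: classification added to workload A itself.
    added = [
        ("A", {"Account": "A", "ClientUser": "xyz"}),
    ]
    print(evaluate(request_xyz, added))    # A
    print(evaluate(request_other, added))  # WD-Default: nothing else matches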

To avoid accidental relegation of unassigned clusters to WD-Default, or to some other unexpected workload, drill-down probes should begin the first analysis step with the Split Workloads option (see Splitting and Merging Workloads for Analysis for more information). Additional refinements are done using the Add (selected parameter) as classification to workload option against that new workload, so that unassigned requests are relegated back to the original workload (see Adding Existing Classifications to New Workloads for more information). See Example 2: Deep Drill Down Analysis with Queryband Parameters for a demonstration of this technique.

The DBA selects correlation parameters (“Who” and “Where”) and distribution parameters (“What” and “Exception”) at each depth of analysis (see Supported Analysis Parameters for the list of supported parameters). The DBA can also review the workload by viewing the classification list after each level of analysis and click Undo Classification, if needed. The Undo operation reverses any previous analysis: it deletes assigned clusters from a workload classification and brings them back as unassigned clusters for new add or split operations.

After PSA migration, when a CPU distribution pie chart is generated for 250 or more workloads, Teradata WA 16.20 displays the distribution of the top 10 workloads as determined by the percentage of CPU processing required for each workload. The remaining workloads are grouped into a segment labeled SUM of CPU % as shown below.
Distribution of Top 10 Workloads Represented as a Percentage of CPU Processing When More Than 250 Workloads are Represented after PSA Migration
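
The grouping can be sketched as follows. The CPU percentages here are randomly generated placeholders for a hypothetical set of 250 workloads; only the top-10 selection and the SUM of CPU % rollup reflect the behavior described above.

    # A minimal sketch of the top-10 grouping described above, using
    # randomly generated CPU percentages as placeholder data.

    import random

    random.seed(0)
    cpu_by_workload = {f"WD-{i:03d}": random.random() for i in range(250)}
    total = sum(cpu_by_workload.values())
    cpu_pct = {wd: 100 * v / total for wd, v in cpu_by_workload.items()}

    # Keep the ten workloads with the highest CPU percentage ...
    ranked = sorted(cpu_pct.items(), key=lambda kv: kv[1], reverse=True)
    top_10 = ranked[:10]

    # ... and collapse the rest into a single chart segment.
    remainder = sum(pct for _, pct in ranked[10:])
    segments = top_10 + [("SUM of CPU %", remainder)]

    for name, pct in segments:
        print(f"{name:12s} {pct:6.2f}%")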