Recommended DBQL Logging Scenarios - Advanced SQL Engine - Teradata Database

Database Administration

Product
Advanced SQL Engine
Teradata Database
Release Number
17.05
17.00
Published
June 2020
Language
English (United States)
Last Update
2021-01-22
dita:mapPath
rgu1556127906220.ditamap
dita:ditavalPath
lze1555437562152.ditaval
dita:id
B035-1093
lifecycle
previous
Product Category
Teradata Vantage™

User Type 1: Short Subsecond Known Work (Tactical Work)

  • Log as Summary
  • BEGIN QUERY LOGGING LIMIT SUMMARY = 1,3,5 ON ALL ACCOUNT = 'acctname';

    The numbers 1, 3, and 5 are clock seconds, not CPU seconds.

  • No SQL gets logged
  • No Objects get logged

For this workload, high-speed performance and minimal response time are the primary objectives. This workload tends to be very predictable, with queries typically designed to be single-AMP retrievals.

Logging of this workload at the request level may be unnecessary for two reasons:

  • The transactions are well-tuned, known, and repeated over and over again.
  • The additional overhead required to record SQL for each request would represent a meaningful portion of the overall work performed on behalf of the transaction, that is, the additional overhead could materially impact request response time.

The objective is to capture only summarized information about these SQL requests. Because this workload is expected to be predictable and repeatable, with little variation, summary logging is sufficient. If, however, the ID could be used in unauthorized ways, or a longer-running query occasionally runs, threshold logging can be used instead.
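As an illustration, the buckets produced by SUMMARY = 1,3,5 logging can be reviewed from the summary view. This is a sketch only: it assumes the DBC.QryLogSummaryV view and its typical columns (LowHist, HighHist, QueryCount, QuerySeconds); verify the view and column names against the data dictionary for your release.

```sql
-- Review the response-time histogram produced by SUMMARY = 1,3,5 logging.
-- Each row counts the requests that fell into one elapsed-time bucket
-- (0-1s, 1-3s, 3-5s, >5s). View and column names are assumptions; check
-- your release's data dictionary.
SELECT  LowHist,                        -- lower bound of the bucket (seconds)
        HighHist,                       -- upper bound of the bucket (seconds)
        SUM(QueryCount)   AS Requests,  -- requests that fell in this bucket
        SUM(QuerySeconds) AS TotalSeconds
FROM    DBC.QryLogSummaryV
WHERE   AcctString = 'acctname'
GROUP BY LowHist, HighHist
ORDER BY LowHist;
```

If most requests land in the lowest bucket, the workload is behaving as expected; a growing count in the highest bucket is a signal to consider threshold logging for this ID.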

User Type 2: Mostly Long Running Work

  • Log detail with SQL and objects
  • BEGIN QUERY LOGGING WITH SQL, OBJECTS LIMIT SQLTEXT=0 ON ALL ACCOUNT = 'acctname';

    If tens of thousands of subsecond requests run under this account, this detailed logging incurs additional overhead.

Teradata recommends capturing a high degree of detail for this workload category. This DBQL logging option generates the critical detailed information needed for effective performance management and tuning.

The logging captures the entire SQL text for each request, along with the objects used in processing the request. The SQL text and object data are critical for query access path analysis. The DBQL detail data provides not only CPU and I/O measurements, but also the data needed to determine whether a query is skewed, performs a large scan, or has the characteristics of a large product join, which are the keys to high-impact performance tuning.
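The skew calculation mentioned above can be sketched against the detail view. This is an illustration under stated assumptions: it uses the DBC.QryLogV view with the AMPCPUTime, MaxAMPCPUTime, and AcctString columns and the HASHAMP and NULLIFZERO functions, which are typical for recent releases; verify against your system before use.

```sql
-- Find high-impact tuning candidates: queries whose busiest AMP used far
-- more CPU than the average AMP. A skew factor near 1.0 means evenly
-- spread work; large values indicate skew. View and column names are
-- assumptions; check your release's data dictionary.
SELECT  QueryID,
        AMPCPUTime,                     -- total CPU seconds across all AMPs
        MaxAMPCPUTime,                  -- CPU seconds on the busiest AMP
        (MaxAMPCPUTime * (HASHAMP() + 1))
          / NULLIFZERO(AMPCPUTime) AS CPUSkewFactor
FROM    DBC.QryLogV
WHERE   AcctString = 'acctname'
  AND   AMPCPUTime > 1                  -- ignore trivial requests
ORDER BY CPUSkewFactor DESC;
```

HASHAMP() + 1 returns the number of AMPs in the system, so the expression compares the busiest AMP's CPU to the per-AMP average.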

User Type 3: Mostly Short Subsecond Requests with Occasional Long Running or Unknown Work

  • Log Threshold
  • BEGIN QUERY LOGGING WITH SQL, OBJECTS LIMIT THRESHOLD=100 CPUTIME AND SQLTEXT=10000 ON ALL ACCOUNT = 'acctname';
  • The threshold value 100 is in hundredths of CPU seconds, so queries that use more than one second of CPU time are logged in detail in DBQLogTbl, DBQLSQLTbl, and DBQLObjTbl. Queries using less than one second of CPU time are summarized.

    With threshold logging, DBQL cannot log to the separate Explain and XML tables, even for queries that exceed the specified criteria. SQL, STEPINFO, and OBJECTS data can be logged during threshold logging for queries that exceed the specified threshold.

This logging scenario captures data at a more detailed level for unknown or long running requests, while still logging the bulk of the work, the subsecond requests, at a summary level.
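When moving an account from one scenario to another (for example, from summary logging to threshold logging), the existing rule must be ended before a new one is defined. A minimal sketch, reusing the account name from the examples above:

```sql
-- End the existing logging rule for this account.
END QUERY LOGGING ON ALL ACCOUNT = 'acctname';

-- Re-establish logging with the threshold option from User Type 3.
BEGIN QUERY LOGGING WITH SQL, OBJECTS
  LIMIT THRESHOLD=100 CPUTIME AND SQLTEXT=10000
  ON ALL ACCOUNT = 'acctname';
```

Recent releases also provide REPLACE QUERY LOGGING, which ends and re-creates the rule in a single statement; check whether your release supports it.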