Configuring Query Log Collection

Teradata Viewpoint User Guide

Brand: Analytical Ecosystem
Product: Teradata Viewpoint
Release: 16.00
Category: User Guide
Feature number: B035-2206-106K
The Query Log data collector can be enabled to collect DBQL data from the PDCRDATA.DBQLogTbl_Hst table in the Teradata Database. This data lets users see Teradata Database usage by users and applications, as well as suspect queries, in the Query Log and Application Queries portlets.

Up to 10 million rows of DBQL data are collected per day. When the number of DBQL rows exceeds 10 million, you can specify the name of a database view that loads a subset of the query log data, ensuring only the most important data is collected.

  1. From the Systems list, click the name of the system you want to update.
  2. From the Setup list, click Data Collectors.
  3. Click the Query Log data collector.
  4. [Optional] Select the Enable Query Log Collector check box to enable the data collector to begin processing data.
  5. If Login appears, select the login you want to use to run the system task.
  6. Under Collection Time, select a time zone to define the 24-hour period (midnight to midnight) used for summarizing data.
  7. Select one of the following to specify when the data collector begins collecting data:
    After PDCR daily job: Select to collect data within an hour of when the daily PDCR job runs.
    Daily at: Select, then specify a time to start the daily data collection, relative to the specified time zone.
  8. Under Delete Summary Data, select one of the following options to set the calendar or file size retention period. Summary data refers to data aggregated per day at the system, user, or application level, and is displayed on the Trends tab in the Query Log and Application Queries portlets.
    Date
    1. Select the After check box to set the calendar retention period.
    2. Enter a number and select a calendar value.
    Size
    1. Select the Over check box to set the file size retention period.
    2. Enter a number and select a file size restriction.
      The data actually retained by the collector is always larger than the specified limit, because data cleanup only drops table partitions that exceed it. Sizes use 1024-based units, so 1 GB is 1,073,741,824 bytes.
  9. Under Delete Detail Data, type the number of days (up to 999) of detailed data to keep. Detail data refers to the individual queries logged to DBQL. This setting controls how many days of detailed query information you can view in the Query Log and Application Queries portlets.
  10. Under Suspect Query Definition, set any of the following options to define the criteria that classify a query as suspect. This setting controls which queries are flagged as suspect in the Query Log and Application Queries portlets.
    CPU Skew: Type a percentage (up to 100) to flag as suspect queries whose CPU skew exceeds this percentage.
    I/O Skew: Type a percentage (up to 100) to flag as suspect queries whose I/O skew exceeds this percentage.
    Product Join Indicator: Type a number (up to 100) to flag as suspect queries whose product join indicator exceeds this number.
    Unnecessary I/O: Type a number (up to 100) to flag as suspect queries whose unnecessary I/O exceeds this number.
    1. For Tolerance, type a number between 0 and 99999 to set the threshold used to classify queries as suspect. Increasing the Tolerance decreases the number of suspect queries by filtering out queries that consume smaller amounts of resources. For more information, see Suspect Query Formulas.
  11. Under Row Limiting View, enter a database view, in the format databaseName.viewName, that can be used to filter out less important data when the number of rows exceeds 10 million.
  12. Click Apply.
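For reference, the 1024-based size arithmetic behind the retention note in step 8 can be sketched in Python. The constant and function names here are illustrative, not part of Viewpoint:

```python
# 1024-based (binary) units, matching the note in step 8.
KIB = 1024
MIB = 1024 * KIB
GIB = 1024 * MIB  # 1,073,741,824 bytes

def to_bytes(amount: float, unit: str) -> int:
    """Convert a retention limit such as (1, 'GB') to bytes using 1024-based units."""
    factors = {"KB": KIB, "MB": MIB, "GB": GIB}
    return int(amount * factors[unit])

print(to_bytes(1, "GB"))  # 1073741824
```

This is why a 1 GB limit corresponds to 1,073,741,824 bytes rather than 1,000,000,000.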
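The suspect-query criteria in step 10 amount to threshold checks against per-query statistics. A minimal sketch, assuming hypothetical field names and a simplified reading of how Tolerance excludes low-resource queries (the actual logic is described in Suspect Query Formulas):

```python
from dataclasses import dataclass

# Hypothetical field names; Viewpoint's internal representation is not documented here.
@dataclass
class QueryStats:
    cpu_skew_pct: float
    io_skew_pct: float
    product_join_indicator: float
    unnecessary_io: float
    resource_usage: float  # consumption compared against Tolerance

def is_suspect(q: QueryStats, cpu_skew=50, io_skew=50,
               pji=10, unnecessary_io=10, tolerance=100) -> bool:
    # Queries consuming fewer resources than the Tolerance threshold are
    # never flagged, so raising Tolerance yields fewer suspect queries.
    if q.resource_usage < tolerance:
        return False
    return (q.cpu_skew_pct > cpu_skew
            or q.io_skew_pct > io_skew
            or q.product_join_indicator > pji
            or q.unnecessary_io > unnecessary_io)
```

Raising any of the four thresholds, or the Tolerance, narrows the set of flagged queries.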
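The Row Limiting View in step 11 must be entered as databaseName.viewName. A small, hypothetical format check; the regex deliberately simplifies Teradata's actual object-name rules:

```python
import re

# Simplified name pattern: letter or underscore, then letters, digits, underscores.
_NAME = r"[A-Za-z_][A-Za-z0-9_]*"
_VIEW_FORMAT = re.compile(rf"{_NAME}\.{_NAME}")

def is_valid_row_limiting_view(name: str) -> bool:
    """Return True if name looks like databaseName.viewName."""
    return bool(_VIEW_FORMAT.fullmatch(name))

print(is_valid_row_limiting_view("PDCRDATA.MyFilteredView"))  # True
print(is_valid_row_limiting_view("MyFilteredView"))           # False: database missing
```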