15.00 - schmon -m - Teradata Database

Teradata Database Utilities

Product
Teradata Database
Release Number
15.00
Content Type
Configuration
Publication ID
B035-1102-015K
Language
English (United States)
Last Update
2018-09-25

schmon -m

Purpose  

The schmon -m option displays or monitors Priority Scheduler resource usage statistics.

Syntax  

 

Syntax element

Description

-m

Displays or monitors Priority Scheduler resource usage statistics. If no additional options are specified, schmon -m displays current statistics for the current node.

-S

Displays or monitors statistics for all nodes in an MPP system.

-L

Includes information related to the CPU usage limits imposed on Allocation Groups.

CPU usage limits can be set at the AG, RP, or Teradata Database system level using the schmon -a, -b, and -l options, respectively. AG tasks can be delayed by the system, if necessary, to keep the CPU usage within those limits.

-P

Displays a separate line of information for each Performance Group.

Note: Performance Groups with no CPU usage are not shown.

-b

Displays resource usage over the last second. Without the -b option, schmon shows resource usage information for the last age time period. The age time period is defined by the schmon -t command.

-d

Displays the differences in data between sequential runs of schmon. Use the delay [reps] option to control timing of automatic schmon runs.

The following symbols precede numbers that appear in the output from the -d option.

  • - (a minus sign) indicates the value has decreased by the indicated amount since the previous schmon run.
  • An unsigned number indicates the value has increased by the indicated amount since the previous schmon run.

    Note: The output of the -d option shows only those items for which data has changed since the previous schmon run.
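The sign convention for -d output can be sketched as a small difference computation over two successive statistics snapshots. This is an illustration only; the field names are hypothetical and schmon performs this comparison internally:

```python
# Illustrative sketch of the -d sign convention: compare two hypothetical
# snapshots of statistics and report only the values that changed.
# Field names here are examples, not schmon internals.

def diff_stats(previous, current):
    """Return only changed fields, formatted like -d output:
    a leading minus sign for decreases, an unsigned number for increases."""
    changes = {}
    for field, new_value in current.items():
        delta = new_value - previous.get(field, 0)
        if delta != 0:
            # Negative deltas keep their minus sign; positive are unsigned.
            changes[field] = str(delta) if delta < 0 else str(abs(delta))
    return changes

prev = {"cpu_msec": 1200, "sblks": 500, "tasks": 40}
curr = {"cpu_msec": 1150, "sblks": 500, "tasks": 44}
print(diff_stats(prev, curr))  # {'cpu_msec': '-50', 'tasks': '4'}
```

Note that sblks, which did not change between the runs, is omitted, matching the -d behavior of showing only changed items.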

    delay [reps]

    Monitors resource usage over time by causing schmon to run again automatically after a specified delay using the current -m options. delay is a positive integer that specifies the number of seconds between schmon executions.

    Use the optional reps argument, a positive integer, to specify the number of times schmon should run. If reps is not specified, schmon runs indefinitely, with delay seconds between executions to continually monitor resource usage.

    Note: The difference between time stamps of successive information displays may not precisely match the specified delay value due to the time required for the collection activity itself.

    Usage Notes  

    The statistics are described in the following table.

     

    Statistics category

    Description

    Stats Collection

    The date and time of the statistics displayed, as well as repetition number, if you specify multiple repetitions.

    Resource Partitions

    The statistics for each active Resource Partition, including the following:

  • RP specifies the Resource Partition ID Number.
  • Rel Wgt specifies the weight of the Resource Partition relative to the active Resource Partitions.
    Note: The sum of relative weights might be greater or less than 100 due to the following considerations:
      • Fractions are truncated as the final step in relative weight calculations.
      • Any relative weight calculated to be less than one is converted to one.
  • Avg CPU specifies the resource usage during the preceding age period for a single CPU. The CPU data is shown in two columns.
  • The % column is the percentage of CPU consumed by the Resource Partition. When statistics are collected from all nodes in an MPP system, this is the average percentage per Resource Partition for all nodes.
  • The msec column is the milliseconds of CPU time consumed by the Resource Partition.
  • Avg I/O specifies a normalized number of data blocks transferred by the Resource Partition. When statistics are collected from all nodes in an MPP system, this is the average normalized resource use per Resource Partition for all nodes on the Teradata Database system. I/O data is shown in two columns.
  • The % column is the percentage of disk consumed by the Resource Partition. When statistics are collected from all nodes in an MPP system, this is the average percentage per Resource Partition for all nodes.
  • The sblks column is the number of blocks read and written by the Resource Partition.
  • # of Tasks specifies the number of tasks assigned to the Resource Partition at the end of the preceding collection period. When statistics are collected from all nodes in an MPP system, this is the average per Resource Partition for all nodes.
  • # of Sets specifies the number of Scheduling Sets associated with the Resource Partition at the end of the preceding collection period. There is one Scheduling Set per session.
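The truncation and minimum-of-one rules for Rel Wgt can be illustrated with a short calculation over a set of assigned weights. The weight values below are hypothetical, and the exact calculation Priority Scheduler uses is internal; this sketch only demonstrates why the displayed relative weights may not sum to 100:

```python
import math

def relative_weights(assigned):
    """Compute relative weights: each weight as a truncated percentage of
    the total, with any result below one raised to one."""
    total = sum(assigned.values())
    rel = {}
    for name, weight in assigned.items():
        pct = math.floor(weight * 100 / total)  # fractions are truncated
        rel[name] = max(pct, 1)                 # values below one become one
    return rel

# Hypothetical RP weights; the truncated results sum to 99, not 100.
print(relative_weights({"RP0": 20, "RP1": 40, "RP2": 60}))
# {'RP0': 16, 'RP1': 33, 'RP2': 50}
```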
    Allocation Groups

    The statistics for each active Allocation Group, including the following:

  • AG specifies the Allocation Group ID number.
  • Rel Wgt specifies the weight of the Allocation Group relative to the active Allocation Groups of the Resource Partition and the active Resource Partitions.
    Note: The sum of relative weights might be greater or less than 100 due to the following considerations:
      • Fractions are truncated as the final step in relative weight calculations.
      • Any relative weight calculated to be less than one is converted to one.
  • Avg CPU specifies resource usage by the Allocation Group during the preceding age period for a single CPU. The CPU data is shown in two columns.
  • The % column is the percentage of total available CPU on the node consumed by the Allocation Group. When statistics are collected from all nodes in an MPP system, this is the average percentage for all nodes.
  • The msec column is the milliseconds of CPU usage consumed by the group.
  • Avg I/O specifies a normalized number of data blocks transferred by the Allocation Group. When statistics are collected from all nodes in an MPP system, this is the average normalized resource use per Allocation Group for all nodes on the Teradata Database system. I/O data is shown in two columns.
  • The % column is the percentage of total I/O blocks transferred by the Allocation Group. When statistics are collected from all nodes in an MPP system, this is the average percentage for all nodes.
  • The sblks column is the number of blocks read and written by the group.
  • # of Tasks specifies the number of tasks assigned to the Allocation Group at the end of the preceding collection period. When statistics are collected from all nodes in an MPP system, this is the average per Allocation Group for all nodes.
  • # of Sets specifies the number of Scheduling Sets associated with the Allocation Group at the end of the preceding collection period. When statistics are collected from all nodes in an MPP system, this is the total per Allocation Group for all nodes. There is one Scheduling Set per session.
  • Performance Groups Affected specifies all Performance Groups by name that reference the Allocation Group at that time. Since Allocation Groups can be shared among Performance Groups, this information is useful for resource usage traceback. (This information is not displayed when statistics are collected from all nodes in an MPP system, unless you use the -p option.)
    Note: System information is listed under AG 200. This AG has Rel Wgt set to MAX, which indicates that system work receives the maximum priority, but it is not included in any Rel Wgt calculations.

    Work Requests

    Displays work request statistics for each active Allocation Group. This data refers to the preceding age period and is the average for all nodes on a multi-node Teradata Database system. The data includes the following:

  • AG specifies the Allocation Group number.
  • # of requests specifies the number of work requests received.
  • Avg queue wait specifies the average time, in milliseconds, that a work request waited on an input queue before being serviced.
  • Avg queue length specifies the average number of work requests waiting on the input queue for service.
  • Avg service time specifies average time, in milliseconds, that a work request required for service.
  • The following apply to MPP configurations:

  • For the -S option:
  • The node count is displayed, for both single and multiple repetitions.
  • Time information is displayed.
  • If a node is offline, it is not included in the output, and the node count reflects this; however, no message identifying the excluded node is displayed.
  • The following apply to SMP configurations:

  • Node count is not shown.
  • A status message that includes time of day regardless of the number of repetitions is displayed.
  • No transient status messages are displayed.
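The three Work Requests averages are related by the standard queueing identity (Little's law): average queue length equals arrival rate times average queue wait. The sketch below checks this against the AG 11 row of a later example, assuming a 60-second age period (the age period value is an assumption here; it is set with schmon -t):

```python
# Sanity check of the relationship between the Work Requests averages,
# using Little's law (queue length = arrival rate x wait time).
# The age period of 60 seconds is assumed for illustration.

age_period_sec = 60          # collection (age) period, assumed 60 s
num_requests = 2375          # "# of requests" over the period
avg_wait_msec = 346.68       # "Avg queue wait"

arrival_rate = num_requests / age_period_sec          # requests per second
avg_queue_length = arrival_rate * (avg_wait_msec / 1000.0)

print(round(avg_queue_length, 2))  # 13.72, matching "Avg queue length"
```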
    Example  

    The following example shows a four-node MPP system.

    To monitor current Priority Scheduler statistics for the current node, type:

    schmon -m

    The following appears:

      Stats: Tue Jan 20 07:48:23 2004
     
        Rel   Avg CPU     Avg I/O   # of  # of 
     RP Wgt  %  (msec)   %  (sblks) Tasks Sets  
    === === === ======= === ======= ===== ===== ==============================
      0 100   0      58   0       0    13     3
     
        Rel   Avg CPU     Avg I/O   # of  # of 
     AG Wgt  %  (msec)   %  (sblks) Tasks Sets  Performance Groups Affected
    === === === ======= === ======= ===== ===== ==============================
      1   9   0      33   0       0    10     1 L
      2  18   0       5   0       0     1     1 M
      4  72   0      20   0       0     2     1 R, PGFive
    200 MAX   0     217   0      15    27     1 System
     
                   Avg queue  Avg queue  Avg service
     AG #requests  wait(msec) length     time(msec)
    === ========== ========== ========== ===========
      2         16          0          0          76
      4          6          0          0           0

    Example  

    The following example shows a four-node MPP system.

    Note: PDE was not up on one of the nodes.

    To monitor Priority Scheduler statistics for all nodes, with a five-second delay and one repetition of the display, type:

    schmon -m -S 5 1

    The following appears:

     Stats: 3 node(s)  Mon Mar  8 15:32:33 2004

                                    Avg   Avg            Node Resource Usage
        Rel   Avg CPU     Avg I/O   # of  # of     Minimum             Maximum
 RP Wgt  %  (msec)   %  (sblks) Tasks Sets     CPU    I/O Tasks    CPU    I/O Tasks
=== === === ======= === ======= ===== ===== =================== ===================
      0 100   0      11   0       0     2     0    22       0     4    22       0     4

                                    Avg   Avg            Node Resource Usage
        Rel   Avg CPU     Avg I/O   # of  # of     Minimum             Maximum
     AG Wgt  %  (msec)   %  (sblks) Tasks Sets     CPU    I/O Tasks    CPU    I/O Tasks
    === === === ======= === ======= ===== ===== =================== ===================
      2  18   0      11   0       0     2     0    22       0     4    22       0     4

    Example  

    The following output is from an SMP configuration. The -L option shows information on CPU usage limits placed on AGs by the System/RP/AG limit feature. This information includes a count of the delays imposed on each AG to keep CPU usage within the limits.

    To monitor current Priority Scheduler statistics for the current node with information related to System/RP/AG limits, type:

    schmon -m -L

    The following appears:

     Stats: Wed Jan 13 10:53:20 2010
     
         Rel   Avg CPU     Avg I/O   # of  # of
     RP  Wgt  %  (msec)   %  (sblks) Tasks Sets
    === ==== === ======= === ======= ===== ===== ==============================
      0  100  24   14950   0       0    22     2
     
         Rel   Avg CPU     Avg I/O   # of  # of    Delay   Delay      Total
     AG  Wgt  %  (msec)   %  (sblks) Tasks Sets    Count Skipped      Delay
    === ==== === ======= === ======= ===== ===== ======= ======= ==========
     10   33   8    5202   0       0    12     1     599       0     447628
     11   66  16    9748   0       0    10     1     596       0     445184
    200  MAX   0      62   0       0    74     1       0       0          0
     
                   Avg queue  Avg queue  Avg service
     AG #requests  wait(msec) length     time(msec)
    === ========== ========== ========== ===========
     10         20      28.80       0.01    24975.40
     11         42      11.90       0.01    14268.19

    The following table explains the additional columns related to CPU limits and delays.

     

    Column

    Explanation

    Delay Count

    The number of delays imposed on tasks in the AG over the Age Period. The Age Period can be viewed and set using schmon -t.

    Delay Skipped

    The number of delays skipped due to tasks being in non-delayable states.

    Total Delay

    The total number of milliseconds that all tasks in the AG have been delayed over the Age Period. The Age Period can be viewed and set using schmon -t.
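From these columns, the average length of a single imposed delay can be derived by dividing Total Delay by Delay Count. Using the AG 10 values from the example output above:

```python
# Average length of one imposed delay, derived from the -L columns.
# Values are taken from the AG 10 row in the example output above.

delay_count = 599          # Delay Count over the Age Period
total_delay_msec = 447628  # Total Delay in milliseconds

avg_delay_msec = total_delay_msec / delay_count
print(round(avg_delay_msec, 1))  # 747.3 ms per imposed delay
```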

    Example  

    The following output is from an SMP configuration.

    To monitor current Priority Scheduler statistics for the current node with a monitoring interval of five seconds and two repetitions, type:

    schmon -m 5 2

    The following appears:

    Stats Collection #1: Mon Mar 20 18:02:31 2006
     
        Rel   Avg CPU     Avg I/O   # of  # of 
     RP Wgt  %  (msec)   %  (sblks) Tasks Sets  
    === === === ======= === ======= ===== ===== ==============================
      0 100   0       8   0       0     1     1
     
        Rel   Avg CPU     Avg I/O   # of  # of 
     AG Wgt  %  (msec)   %  (sblks) Tasks Sets  Performance Groups Affected
    === === === ======= === ======= ===== ===== ==============================
      2  18   0       8   0       0     1     1 M
     
     
     
      Stats Collection #2: Mon Mar 20 18:02:36 2006
     
        Rel   Avg CPU     Avg I/O   # of  # of 
     RP Wgt  %  (msec)   %  (sblks) Tasks Sets  
    === === === ======= === ======= ===== ===== ==============================
      0 100   0      38   0       0     1     1
     
        Rel   Avg CPU     Avg I/O   # of  # of 
     AG Wgt  %  (msec)   %  (sblks) Tasks Sets  Performance Groups Affected
    === === === ======= === ======= ===== ===== ==============================
      2  18   0      38   0       0     1     1 M

    Example  

    The -P option shows a separate line of information for each Performance Group. The following example compares the output from schmon -m and schmon -m -P.

    >schmon -m

      Stats: Mon Oct 30 14:33:19 2006

        Rel   Avg CPU     Avg I/O   # of  # of
     RP Wgt  %  (msec)   %  (sblks) Tasks Sets
    === === === ======= === ======= ===== ===== ==============================
      0  66   0      95   0       0     2     1
      1  33  24   14911  95     701   144   201

        Rel   Avg CPU     Avg I/O   # of  # of
     AG Wgt  %  (msec)   %  (sblks) Tasks Sets  Performance Groups Affected
    === === === ======= === ======= ===== ===== ==============================
      4  53   0      95   0       0     2     1 H, R
     11  11  11    7180  70     519    49     1 PG10, PG11, PG12
     12  22  12    7731  24     182    95   200 PG11, PG12, PG13
    200 MAX   2    1709   4      31   140     1 System
                   Avg queue  Avg queue  Avg service
     AG #requests  wait(msec) length     time(msec)
    === ========== ========== ========== ===========
      4        157       1.82       0.00       43.15
     11       2375     346.68      13.72      259.41
     12       4136     206.94      14.27      155.91


    >schmon -m -P

      Stats: Mon Oct 30 14:33:19 2006
              Avg CPU     Avg I/O   # of  # of
     RP  PG  %  (msec)   %  (sblks) Tasks Sets
    === === === ======= === ======= ===== ===== ==============================
      0   3   0      95   0       0     1     1
      1  10   9    5623  51     379    19    36
      1  11  10    6256  24     182    51    61
      1  12   3    2020  19     140    30    98
      1  13   1    1012   0       0    22   111

              Avg CPU     Avg I/O   # of  # of
     AG  PG  %  (msec)   %  (sblks) Tasks Sets  Performance Groups Affected
    === === === ======= === ======= ===== ===== ==============================
      4   3   0      95   0       0     1     1 R
     11  10   9    5623  51     379    19    36 PG10
     11  11   0     487   0       0    19    47 PG11
     11  12   1    1070  19     140     0    43 PG12
     12  11   9    5769  24     182    32    14 PG11
     12  12   1     950   0       0    30    55 PG12
     12  13   1    1012   0       0    22   111 PG13
    200  40   2    1709   4      31   140     1 System

                       Avg queue  Avg queue  Avg service
     AG  PG #requests  wait(msec) length     time(msec)
    === === ========== ========== ========== ===========
      4   3        157       1.82       0.00       43.15
     11  10       1205     266.38       5.35      818.12
     11  11       1158     433.82       8.37      525.99
     12  11         63      31.75       0.03     5313.52
     12  12       1946     216.07       7.01      262.16
     12  13       2113     205.12       7.22      300.01

    Example  

    The following example displays changes in data over time using the default delay time of five seconds.

    > schmon -m -d
    Using default 5 second delay.
      Stats: Thu Feb 08 18:24:13 2007

        Rel   Avg CPU     Avg I/O   # of  # of
     RP Wgt  %  (msec)   %  (sblks) Tasks Sets
    === === === ======= === ======= ===== ===== ==============================
      0   0   0      -3   0       0     4     0
      1   0   1     652   1     123  -143     0

        Rel   Avg CPU     Avg I/O   # of  # of
     AG Wgt  %  (msec)   %  (sblks) Tasks Sets  Performance Groups Affected
    === === === ======= === ======= ===== ===== ==============================
      2   0   0      -1   0       0     0     0 M
      4   0   0      -2   0       0     4     0 R
     10   0  -2   -1006  -4    -125    21     0 PG10, PG11, PG12
     11   0   3    1781   6     296  -230     0 PG10, PG11, PG12
     12   0   1     661  -1      15    36     0 PG10, PG11, PG12
     13   0  -1    -784  -2     -63    30     0 PG10, PG11, PG12
    200 MAX  -1    -232  -1     -13     0     0 System
                   Avg queue  Avg queue  Avg service
     AG #requests  wait(msec) length     time(msec)
    === ========== ========== ========== ===========
      4         -1     -17.71      -0.09       -5.79
     10       -809       3.56       0.14       11.41
     11        -14       0.03      -0.00     1668.90
     12       -624       3.17      -0.03        8.24
     13       -418      21.21       1.20        2.81