15.00 - schmon -M - Teradata Database

Teradata Database Utilities

Product
Teradata Database
Release Number
15.00
Content Type
Configuration
Publication ID
B035-1102-015K
Language
English (United States)
Last Update
2018-09-25

schmon -M

Purpose  

The schmon -M option displays or monitors Priority Scheduler resource usage statistics for all nodes of an MPP Teradata Database.

Syntax  

 

Syntax element

Description

-M

Displays or monitors Priority Scheduler resource usage statistics for all nodes of Teradata Database. If no additional options are specified, schmon -M displays the current statistics once.

-n

Shows the names of the nodes that have the minimum and maximum resource usage and work requests. The displayed names come from the mpplist file. Due to space limitations, names longer than nine characters may be truncated.

-p

Displays the Performance Group names associated with the Allocation Groups that are shown by the -M command.

-s

Shows Resource Partition and Allocation Group node resource usage statistics. Displays median and standard deviation values, and a frequency table that shows the number of nodes falling within each of ten usage ranges. If no nodes fall within a range, that range is not included in the table.
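The per-group summary that -s produces can be approximated as follows. This is a sketch only: the split of the observed value range into ten equal-width buckets is an assumption for illustration, since the exact bucket edges schmon uses are not documented here.

```python
import statistics

def usage_summary(values, buckets=10):
    """Summarize per-node usage values in the style of schmon -M -s:
    median, standard deviation, and a frequency table over ten usage
    ranges, with empty ranges omitted.
    ASSUMPTION: ten equal-width buckets over [min, max]; schmon's
    exact bucket edges may differ."""
    med = statistics.median(values)
    sd = statistics.pstdev(values)
    lo, hi = min(values), max(values)
    width = max((hi - lo) / buckets, 1)
    freq = {}
    for v in values:
        i = min(int((v - lo) / width), buckets - 1)  # clamp the maximum into the last bucket
        edge = lo + i * width
        key = (round(edge), round(edge + width))
        freq[key] = freq.get(key, 0) + 1             # ranges with no nodes never appear
    return med, sd, freq

# Four nodes' CPU milliseconds, similar in shape to the example output below
med, sd, freq = usage_summary([997, 1100, 1224, 17120])
```

For these four values the median is 1162.0 and only two of the ten ranges are populated, mirroring how the -s frequency tables in the examples omit empty ranges.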

-V

Verbose mode includes inactive nodes in statistical calculations and in the display. Inactive nodes are nodes that report zero usage, which can change the values calculated for median and standard deviations.
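The shift that -V causes is easy to see numerically. In the verbose example later on this page, AG 3 has 92 msec of CPU on one node and zero on the other three; including the zero-usage nodes moves the median to 0 and the standard deviation to about 39.84 (a sketch using Python's statistics module):

```python
import statistics

# One active node reports 92 msec of CPU; three inactive nodes report zero.
active = [92]
all_nodes = [92, 0, 0, 0]

# Without -V, only the active node is counted:
assert statistics.median(active) == 92
assert statistics.pstdev(active) == 0.0

# With -V, the zero-usage nodes change both statistics:
assert statistics.median(all_nodes) == 0
assert round(statistics.pstdev(all_nodes), 2) == 39.84
```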

-P

Shows a separate line of information for each Performance Group.

Note: Performance Groups with no CPU usage are not shown.

-b

Displays resource usage over the last second. Without the -b option, schmon shows resource usage information for the last age time period. The age time period is defined by the schmon -t command.

-d

Displays the differences in data between sequential runs of schmon. Use the delay [reps] option to control the timing of automatic schmon runs.

The following symbols precede numbers that appear in the output from the -d option.

  • - (a minus sign) indicates the value has decreased by the indicated amount since the previous schmon run.
  • An unsigned number indicates the value has increased by the indicated amount since the previous schmon run.
  • For examples of the -d option output, see “schmon -m”.

    Note: The output of the -d option shows only those items for which data has changed since the previous schmon run.
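The sign convention above amounts to printing the raw difference and suppressing unchanged items. A minimal sketch follows; the dictionary-based diff is illustrative, not schmon's implementation:

```python
def delta_report(prev, curr):
    """Format value changes the way schmon -M -d presents them: a
    minus sign marks a decrease, an unsigned number an increase,
    and items whose data did not change are omitted."""
    out = {}
    for key, value in curr.items():
        diff = value - prev.get(key, 0)
        if diff:                      # unchanged items are not shown
            out[key] = f"{diff}"      # negatives carry "-"; increases are unsigned
    return out

# CPU fell, I/O rose, the task count is unchanged (and therefore omitted)
print(delta_report({"cpu_msec": 5059, "sblks": 1362, "tasks": 10},
                   {"cpu_msec": 4800, "sblks": 1500, "tasks": 10}))
# {'cpu_msec': '-259', 'sblks': '138'}
```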

    delay [reps]

    Monitors resource usage over time by causing schmon to run again automatically after a specified delay using the current -M options. delay is a positive integer that specifies the number of seconds between schmon executions.

    Use the optional reps argument, a positive integer, to specify the number of times schmon should run. If reps is not specified, schmon runs indefinitely, with delay seconds between executions.

    Note: The difference between time stamps of successive information displays may not precisely match the specified delay value due to the time required for the collection activity itself.
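The delay [reps] behavior corresponds to a loop of the following shape; collect_stats here is a hypothetical stand-in for one schmon -M collection pass:

```python
import time

def monitor(collect_stats, delay, reps=None):
    """Run a collection pass every `delay` seconds, `reps` times, or
    indefinitely when reps is omitted, mirroring schmon -M delay [reps].
    Timestamps of successive passes can drift beyond `delay` because
    the collection itself takes time."""
    n = 0
    while reps is None or n < reps:
        collect_stats()
        n += 1
        if reps is None or n < reps:  # no trailing sleep after the final pass
            time.sleep(delay)

# Two repetitions with a short delay, analogous to `schmon -M 5 2`
runs = []
monitor(lambda: runs.append(time.monotonic()), delay=0.01, reps=2)
```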

    Usage Notes  

    You can type this command from any node in an MPP system. The statistics are described in the following table.

     

    Statistics category

    Description

    Stats

    The date and time of the statistics displayed, as well as repetition number, if you specify multiple repetitions.

    Resource Partitions

    The statistics for each active Resource Partition, including the following:

  • RP specifies the Resource Partition ID Number.
  • Rel Wgt specifies the weight of the Resource Partition relative to the active Resource Partitions.
  • Note: The sum of relative weights might be greater or less than 100 due to the following considerations:

  • Fractions are truncated as the final step in relative weight calculations.
  • Any relative weight calculated to be less than one is converted to one.
  • Avg CPU specifies the resource usage during the preceding age period for a single CPU. The CPU data is shown in two columns.
  • The % column is the percentage of CPU consumed by the Resource Partition. When statistics are collected from all nodes in an MPP system, this is the average percentage per Resource Partition for all nodes.
  • The msec column is the milliseconds of CPU time consumed by the Resource Partition.
  • Avg I/O specifies a normalized number of data blocks transferred by the Resource Partition. When statistics are collected from all nodes in an MPP system, this is the average normalized resource use per Resource Partition for all nodes on the Teradata Database system. I/O data is shown in two columns.
  • The % column is the percentage of disk consumed by the Resource Partition. When statistics are collected from all nodes in an MPP system, this is the average percentage per Resource Partition for all nodes.
  • The sblks column is the number of blocks read and written by the Resource Partition.
  • Avg # of Tasks specifies the number of tasks assigned to the Resource Partition at the end of the preceding collection period. When statistics are collected from all nodes in an MPP system, this is the average per Resource Partition for all nodes.
  • Avg # of Sets specifies the number of Scheduling Sets associated with the Resource Partition at the end of the preceding collection period. There is one Scheduling Set per session.
  • Minimum CPU specifies the lowest value in milliseconds of CPU resource usage from all nodes.
  • Minimum I/O specifies the lowest value of I/O data blocks transferred from all nodes in sblks.
  • Minimum Tasks specifies the lowest number of tasks from all nodes.
  • Maximum CPU specifies the highest value in milliseconds of CPU resource usage from all nodes.
  • Maximum I/O specifies the highest value of I/O data blocks transferred from all nodes in sblks.
  • Maximum Tasks specifies the highest number of tasks from all nodes.
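The two rounding rules in the Rel Wgt note above can be sketched as follows. The scaling of each assigned weight against the sum of assigned weights is an assumed illustration of how a relative weight is derived; only the truncation and floor-of-one rules come from the note.

```python
def relative_weights(assigned):
    """Derive relative weights from assigned weights, applying the two
    rules from the note: fractions are truncated as the final step,
    and any result below one is raised to one. Because of both rules,
    the relative weights need not sum to exactly 100."""
    total = sum(assigned)
    return [max(int(w * 100 / total), 1) for w in assigned]

print(relative_weights([7, 7, 7]))     # [33, 33, 33] -> sums to 99 (truncation)
print(relative_weights([200, 1, 1]))   # [99, 1, 1]   -> sums to 101 (floor of one)
```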
    Allocation Groups

    The statistics for each active Allocation Group, including the following:

  • AG specifies the Allocation Group ID Number.
  • Rel Wgt specifies the weight of the Allocation Group relative to the active Allocation Groups of the Resource Partition and the active Resource Partitions.
  • Note: The sum of relative weights might be greater or less than 100 due to the following considerations:

  • Fractions are truncated as the final step in relative weight calculations.
  • Any relative weight calculated to be less than one is converted to one.
  • Avg CPU specifies the resource usage by the Allocation Group during the preceding age period for a single CPU. The CPU data is shown in two columns.
  • The % column is the percentage of total available CPU on the node consumed by the Allocation Group. When statistics are collected from all nodes in an MPP system, this is the average percentage for all nodes.
  • The msec column is the milliseconds of CPU usage consumed by the Allocation Group.
  • Avg I/O specifies a normalized number of data blocks transferred by the Allocation Group. When statistics are collected from all nodes in an MPP system, this is the average normalized resource use per Allocation Group for all nodes on the Teradata Database system. I/O data is shown in two columns.
  • The % column is the percentage of total I/O blocks transferred by the Allocation Group. When statistics are collected from all nodes in an MPP system, this is the average percentage for all nodes.
  • The sblks column is the number of blocks read and written by the Allocation Group.
  • Avg # of Tasks specifies the number of tasks assigned to the Allocation Group at the end of the preceding collection period. When statistics are collected from all nodes in an MPP system, this is the average per Allocation Group for all nodes.
  • Avg # of Sets specifies the number of Scheduling Sets associated with the Allocation Group at the end of the preceding collection period. When statistics are collected from all nodes in an MPP system, this is the average per Allocation Group for all nodes. There is one Scheduling Set per session.
  • When you submit the -M command on an MPP system, the following minimum and maximum values are displayed:

  • Minimum CPU specifies the lowest value in milliseconds of CPU resource usage from all nodes.
  • Minimum I/O specifies the lowest value of I/O data blocks transferred from all nodes in sblks.
  • Minimum Tasks specifies the lowest number of tasks from all nodes.
  • Maximum CPU specifies the highest value in milliseconds of CPU resource usage from all nodes.
  • Maximum I/O specifies the highest value of I/O data blocks transferred from all nodes in sblks.
  • Maximum Tasks specifies the highest number of tasks from all nodes.
  • When you submit the -M command or specify the -p option on an SMP system, the affected Performance Groups are displayed instead of the minimum and maximum values:

  • Performance Groups Affected specifies all Performance Groups by name that reference the Allocation Group at that time. Since Allocation Groups can be shared among Performance Groups, this information is useful for resource usage traceback. (This information is not displayed when statistics are collected from all nodes in an MPP system unless you use the -p option.)
  • Note: System information is listed under AG 200. This AG has Rel Wgt set to MAX, which indicates that system work receives the maximum priority; it is not included in any Rel Wgt calculations.

    Work Requests

    Displays work request statistics for each active Allocation Group. This data refers to the preceding age period and is the average for all nodes on a multi-node Teradata Database system. The data includes the following:

  • AG specifies the Allocation Group number.
  • # of requests specifies the number of work requests received.
  • Avg queue wait specifies the average time, in milliseconds, that a work request waited on an input queue before being serviced.
  • Avg service time specifies the average time, in milliseconds, that a work request required for service.
  • The following apply to MPP configurations:

  • For the -M option:
  • The count of nodes for a single repetition or for multiple repetitions is displayed.
  • Time information is displayed.
  • If a node is offline, it is not included in the output, and the node count reflects this; however, information about this node being excluded is not displayed.
  • The following apply to SMP configurations:

  • Node count is not shown.
  • A status message that includes the time of day is displayed, regardless of the number of repetitions.
  • No transient status messages are displayed.
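On an MPP system the work-request tables report a cross-node average plus per-node minimum and maximum rows, as in the examples that follow. The aggregation can be sketched like this; the per-node tuple layout is an assumption for illustration:

```python
def aggregate_work_requests(per_node):
    """Fold per-node work-request statistics into the average, minimum,
    and maximum rows shown by schmon -M. Each value of `per_node` is a
    (requests, avg_queue_wait_msec, avg_service_time_msec) tuple; this
    layout is assumed for the sketch."""
    cols = list(zip(*per_node.values()))            # transpose to per-column tuples
    avg = [round(sum(c) / len(c), 2) for c in cols]
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return avg, lo, hi

avg, lo, hi = aggregate_work_requests({
    "node1": (1129, 0.76, 48.16),
    "node2": (1150, 1.10, 55.00),
    "node3": (1160, 1.34, 81.36),
})
```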
    Example  

    schmon -M
    Stats: 4 node(s)  Tue Oct 25 14:10:18 2005

                                    Avg   Avg             Node Resource Usage
        Rel   Avg CPU     Avg I/O   # of  # of      Minimum               Maximum
     RP Wgt  %  (msec)   %  (sblks) Tasks Sets   CPU     I/O Tasks     CPU     I/O Tasks
     == === === ======= === ======= ===== ===== ====== ===== ===== =============== ======
      0 100   8    5059  94    1362    10     1  997    1224     9   17120    1611    11

                                    Avg   Avg             Node Resource Usage
        Rel   Avg CPU     Avg I/O   # of  # of      Minimum               Maximum
     AG Wgt  %  (msec)   %  (sblks) Tasks Sets   CPU     I/O Tasks     CPU     I/O Tasks
     == === === ======= === ======= ===== ===== ====== ===== ===== =============== ======
      2  13   8    5059  94    1362    10     1  997    1224     9   17120    1611    11
    200 MAX   2    1577   5      73    83     1  211      63    82    5569      87    85

                    Avg queue  Avg queue  Avg service
      AG #requests  wait(msec) length     time(msec)
     === ========== ========== ========== ===========
       2       1147       1.04       0.02       59.15
     200          7       5.03       0.00      195.83
     
                          Minimum
                    Avg queue  Avg queue  Avg service
      AG #requests  wait(msec) length     time(msec)
     === ========== ========== ========== ===========
       2       1129       0.76       0.01       48.16
     200         14       0.00       0.00      186.07
     
                          Maximum
                    Avg queue  Avg queue  Avg service
      AG #requests  wait(msec) length     time(msec)
     === ========== ========== ========== ===========
       2       1160       1.34       0.03       81.36
     200         16       9.44       0.00      204.38

    Example  

    The following example shows a four-node MPP system with one node where PDE is NULL.

    Note: PDE was not up on one of the nodes.

    To monitor Priority Scheduler statistics for all nodes with a five-second delay between two repetitions of the display, type:

    schmon -M 5 2

    The following appears:

      Stats Collection #1: 3 node(s)  Sat Aug 30 08:44:31 2003
     
                                     Avg   Avg            Node Resource Usage
        Rel   Avg CPU     Avg I/O   # of  # of       Minimum           Maximum
     RP Wgt  %  (msec)   %  (sblks) Tasks Sets  CPU     I/O Tasks CPU      I/O Tasks
    === === === ======= === ======= ===== ===== ================= ==================
      0 100   0      29   0       0    20     1   4       0     1  85        0    57
     
                                     Avg   Avg            Node Resource Usage
        Rel   Avg CPU     Avg I/O   # of  # of       Minimum           Maximum
     AG Wgt  %  (msec)   %  (sblks) Tasks Sets  CPU     I/O Tasks CPU      I/O Tasks
    === === === ======= === ======= ===== ===== ================= ==================
      2  18   0      29   0       0     1     0   4       0     1  83        0     3
      4  72   0       0   0       0    19     0   2       0    57   2        0    57
     
      Stats Collection #2: 3 node(s)  Sat Aug 30 08:44:36 2003
     
                                     Avg   Avg            Node Resource Usage
        Rel   Avg CPU     Avg I/O   # of  # of       Minimum           Maximum
     RP Wgt  %  (msec)   %  (sblks) Tasks Sets  CPU     I/O Tasks CPU      I/O Tasks
    === === === ======= === ======= ===== ===== ================= ==================
      0 100   0      54   0       0    20     1   1       0     1 154        0    57
     
                                     Avg   Avg            Node Resource Usage
        Rel   Avg CPU     Avg I/O   # of  # of       Minimum           Maximum
     AG Wgt  %  (msec)   %  (sblks) Tasks Sets  CPU     I/O Tasks CPU      I/O Tasks
    === === === ======= === ======= ===== ===== ================= ==================
      2  18   0      54   0       0     1     1   1       0     1 152        0     3
      4  72   0       0   0       0    19     0   2       0    57   2        0    57

    Example  

    The following example shows a four-node MPP system with one node where PDE is NULL. This display also appears if you execute the -M command (with or without the -p option) on an SMP system.

    Note: PDE was not up on one of the nodes.

    To display Performance Group names associated with the Allocation Groups that are displayed with the -M command on an MPP system, type:

    schmon -M -p

    The following appears:

      Stats: 3 node(s)  Sat Aug 30 08:45:35 2003

                                    Avg   Avg
        Rel    Avg CPU    Avg I/O   # of  # of
     RP Wgt  %  (msec)   %  (sblks) Tasks Sets
    === === === ======= === ======= ===== ===== ==============================
      0 100   0      44   0       0     1     1

                                    Avg   Avg
        Rel    Avg CPU    Avg I/O   # of  # of
 AG Wgt  %  (msec)   %  (sblks) Tasks Sets  Performance Groups Affected
    === === === ======= === ======= ===== ===== ==============================
      2  18   0      44   0       0     1     1 M

    Example  

    To display the names of the nodes that have the minimum and maximum resource usage, type:

    schmon -M -n

    The following appears:

    Stats: 4 node(s)  Tue Oct 25 14:10:18 2005
     
                                     Avg   Avg                Node Resource Usage
         Rel   Avg CPU     Avg I/O   # of  # of         Minimum               Maximum
      RP Wgt  %  (msec)   %  (sblks) Tasks Sets      CPU     I/O Tasks     CPU     I/O Tasks
     === === === ======= === ======= ===== ===== ===================== ===================
       0 100   8    5059  94    1362    10     1     997    1224     9   17120    1611    11
                                     Avg   Avg                Node Resource Usage
         Rel   Avg CPU     Avg I/O   # of  # of         Minimum               Maximum
      AG Wgt  %  (msec)   %  (sblks) Tasks Sets      CPU     I/O Tasks     CPU     I/O Tasks
     === === === ======= === ======= ===== ===== ===================== ===================
       2  13   8    5059  94    1362    10     1     997    1224     9   17120    1611    11
     200 MAX   2    1577   5      73    83     1     211      63    82    5569      87    85
     
     Node Names:
      RP/AG CPU_min    CPU_max    I/O_min    I/O_max    Tasks_min  Tasks_max
      ===== ========== ========== ========== ========== ========== ==========
      RP0   tnt45_byne tnt47_byne tnt47_byne tnt46_byne tnt45_byne tnt44_byne
      AG2   tnt45_byne tnt47_byne tnt47_byne tnt46_byne tnt45_byne tnt44_byne
      AG200 tnt44_byne tnt47_byne tnt45_byne tnt47_byne tnt45_byne tnt44_byne
     
                    Avg queue  Avg queue  Avg service
      AG #requests  wait(msec) length     time(msec)
     === ========== ========== ========== ===========
       2       1147       1.04       0.02       59.15
     200          7       5.03       0.00      195.83
     
                          Minimum
                    Avg queue  Avg queue  Avg service
      AG #requests  wait(msec) length     time(msec)
     === ========== ========== ========== ===========
       2       1129       0.76       0.01       48.16
     200         14       0.00       0.00      186.07
     
                          Maximum
                    Avg queue  Avg queue  Avg service
      AG #requests  wait(msec) length     time(msec)
     === ========== ========== ========== ===========
       2       1160       1.34       0.03       81.36
     200         16       9.44       0.00      204.38
     
     Node Names:
    RP/AG Rqsts_min Rqsts_max QWait_min QWait_max QLen_min  QLen_max  STime_min STime_max
    ===== ========= ========= ========= ========= ========= ========= ========= ========
    AG2   tnt44_byn tnt47_byn tnt46_byn tnt47_byn tnt46_byn tnt47_byn tnt44_byn tnt47_byn
    AG200 tnt46_byn tnt47_byn tnt46_byn tnt47_byn tnt46_byn tnt47_byn tnt46_byn tnt47_byn

    Example  

    To show RP and AG resource usage statistics, type:

    schmon -M -s

    The following appears:

    Stats: 4 node(s)  Tue Oct 25 14:11:30 2005
     
                                     Avg   Avg                Node Resource Usage
         Rel   Avg CPU     Avg I/O   # of  # of         Minimum               Maximum
      RP Wgt  %  (msec)   %  (sblks) Tasks Sets      CPU     I/O Tasks     CPU     I/O Tasks
     === === === ======= === ======= ===== ===== ===================== ===================
       0 100   7    4637  95    1425     9     1     850    1203     9   15811    1608    11
     
                                     Avg   Avg                Node Resource Usage
         Rel   Avg CPU     Avg I/O   # of  # of         Minimum               Maximum
      AG Wgt  %  (msec)   %  (sblks) Tasks Sets      CPU     I/O Tasks     CPU     I/O Tasks
     === === === ======= === ======= ===== ===== ===================== ===================
       2  15   7    4637  95    1425     9     1     850    1203     9   15811    1608    11
     200 MAX   2    1778   4      68    83     1      88      57    82    6652      81    85
     
     Statistics:
      RP 0 (4 Active Nodes):
       CPU Median: 943.50        I/O Median: 1444.50       Tasks Median: 9.50
           StdDev: 6451.43           StdDev: 171.08              StdDev: 0.83
     
              Range       Freq          Range       Freq          Range       Freq
         ======================    ======================    ======================
             850-2346         3       1203-1243         1          9-9            2
           14323-15819        1       1285-1325         1         10-10           1
                                      1572-1612         2         11-11           1
     
      AG 2 (4 Active Nodes):
       CPU Median: 943.50        I/O Median: 1444.50       Tasks Median: 9.50
           StdDev: 6451.43           StdDev: 171.08              StdDev: 0.83
     
              Range       Freq          Range       Freq          Range       Freq
         ======================    ======================    ======================
             850-2346         3       1203-1243         1          9-9            2
           14323-15819        1       1285-1325         1         10-10           1
                                      1572-1612         2         11-11           1
     
      AG 200 (4 Active Nodes):
       CPU Median: 187.00        I/O Median: 68.50         Tasks Median: 82.50
           StdDev: 2814.08           StdDev: 10.01               StdDev: 1.22
     
              Range       Freq          Range       Freq          Range       Freq
         ======================    ======================    ======================
              88-744          3         57-59           1         82-82           2
            6001-6657         1         60-62           1         83-83           1
                                        75-77           1         85-85           1
                                        81-83           1
     
     
                    Avg queue  Avg queue  Avg service
      AG #requests  wait(msec) length     time(msec)
     === ========== ========== ========== ===========
       2       1157       1.13       0.02       58.27
     200          1      22.33       0.00      651.83
     
                          Minimum
                    Avg queue  Avg queue  Avg service
      AG #requests  wait(msec) length     time(msec)
     === ========== ========== ========== ===========
       2       1152       0.78       0.01       49.87
     200          1       0.00       0.00      372.20
     
                          Maximum
                    Avg queue  Avg queue  Avg service
      AG #requests  wait(msec) length     time(msec)
     === ========== ========== ========== ===========
       2       1172       1.76       0.03       77.50
     200          5     134.00       0.00     2050.00
     
     Statistics:
      AG 2 (4 Active Nodes):
       Requests Median: 1152.00       Q Length Median: 0.00
                StdDev: 8.66                   StdDev: 0.00
     
              Range       Freq               Range       Freq
         ======================         ======================
            1152-1154         3               0-0            4
            1170-1172         1
     
       Que Wait Median: 1.00          Srv Time Median: 52.90
                StdDev: 0.38                   StdDev: 11.16
     
              Range       Freq               Range       Freq
         ======================         ======================
               0-0            2              49-51           1
               1-1            2              52-54           2
                                             76-78           1
     
      AG 200 (2 Active Nodes):
       Requests Median: 3.00          Q Length Median: 0.00
                StdDev: 2.83                   StdDev: 0.00
     
              Range       Freq               Range       Freq
         ======================         ======================
               1-1            1               0-0            2
               5-5            1
     
       Que Wait Median: 67.00         Srv Time Median: 1211.10
                StdDev: 94.75                  StdDev: 1186.38
     
              Range       Freq               Range       Freq
         ======================         ======================
               0-13           1             372-539          1
             126-139          1            1884-2051         1

    Example  

    To show statistics in verbose mode, which includes inactive nodes, type:

    schmon -M -s -V

    The following appears:

     Stats: 4 node(s)  Tue Oct 25 14:13:53 2005
                                     Avg   Avg                Node Resource Usage
         Rel   Avg CPU     Avg I/O   # of  # of         Minimum               Maximum
      RP Wgt  %  (msec)   %  (sblks) Tasks Sets      CPU     I/O Tasks     CPU     I/O Tasks
     === === === ======= === ======= ===== ===== ===================== ===================
       0 100  11    6999  93    2422    11     1    2559    2273     9   19741    2602    12
     
                                     Avg   Avg                Node Resource Usage
         Rel   Avg CPU     Avg I/O   # of  # of         Minimum               Maximum
      AG Wgt  %  (msec)   %  (sblks) Tasks Sets      CPU     I/O Tasks     CPU     I/O Tasks
     === === === ======= === ======= ===== ===== ===================== ===================
       2  13  11    6973  93    2422    10     1    2559    2273     9   19638    2602    12
       3  26   0      92   0       0     2     1      92       0     2      92       0     2
       4  53   0      11   0       0     0     1      11       0     0      11       0     0
     200 MAX   3    2196   6     174    83     1     428     160    82    7351     189    85
     
     Statistics:
      RP 0 (4 Active Nodes):
       CPU Median: 2849.00       I/O Median: 2406.50       Tasks Median: 11.50
           StdDev: 7359.65           StdDev: 142.57              StdDev: 1.22
     
              Range       Freq          Range       Freq          Range       Freq
         ======================    ======================    ======================
            2559-4277         3       2273-2305         2          9-9            1
           18030-19748        1       2504-2536         1         11-11           1
                                      2570-2602         1         12-12           2
     
      AG 2 (4 Active Nodes):
       CPU Median: 2849.00       I/O Median: 2406.50       Tasks Median: 10.50
           StdDev: 7315.07           StdDev: 142.57              StdDev: 1.12

               Range       Freq          Range       Freq          Range       Freq
         ======================    ======================    ======================
            2559-4266         3       2273-2305         2          9-9            1
           17931-19638        1       2504-2536         1         10-10           1
                                      2570-2602         1         11-11           1
                                                                  12-12           1
     
      AG 3 (1 Active Node):
       CPU Median: 92.00         I/O Median: 0.00          Tasks Median: 2.00
           StdDev: 0.00              StdDev: 0.00                StdDev: 0.00
     
              Range       Freq          Range       Freq          Range       Freq
         ======================    ======================    ======================
              92-92           1          0-0            1          2-2            1
     
      AG 3 (All Nodes):
       CPU Median: 0.00          I/O Median: 0.00          Tasks Median: 0.00
           StdDev: 39.84             StdDev: 0.00                StdDev: 0.87
     
      AG 4 (1 Active Node):
       CPU Median: 11.00         I/O Median: 0.00          Tasks Median: 0.00
           StdDev: 0.00              StdDev: 0.00                StdDev: 0.00
     
              Range       Freq          Range       Freq          Range       Freq
         ======================    ======================    ======================
              11-11           1          0-0            1          0-0            1
     
      AG 4 (All Nodes):
       CPU Median: 0.00          I/O Median: 0.00          Tasks Median: 0.00
           StdDev: 4.76              StdDev: 0.00                StdDev: 0.00
     
      AG 200 (4 Active Nodes):
       CPU Median: 503.50        I/O Median: 173.50        Tasks Median: 82.50
           StdDev: 2976.18           StdDev: 12.67               StdDev: 1.22
     
              Range       Freq          Range       Freq          Range       Freq
         ======================    ======================    ======================
             428-1120         3        160-162          1         82-82           2
            6665-7357         1        163-165          1         83-83           1
                                       184-186          1         85-85           1