This single operational view API aggregates node-specific RSS (Resource Sampling Subsystem) data across VantageCloud Lake clusters.
Syntax
REPLACE FUNCTION SYSLIB.MonitorPhysicalResourceSV (
) RETURNS TABLE (
ProcId INTEGER,
Status CHAR CHARACTER SET LATIN,
AmpCount SMALLINT,
PECount SMALLINT,
CPUUse FLOAT,
PrcntKernel FLOAT,
PrcntService FLOAT,
PrcntUser FLOAT,
DiskUse FLOAT,
DiskReads FLOAT,
DiskWrites FLOAT,
DiskOutReqAvg FLOAT,
NetAUse FLOAT,
NetReads FLOAT,
NetWrites FLOAT,
HstBlkRds FLOAT,
HstBlkWrts FLOAT,
MemAllocates FLOAT,
MemAllocateKB FLOAT,
MemFailures FLOAT,
MemAgings FLOAT,
NetAUp CHAR(1) CHARACTER SET LATIN,
NetBUp CHAR(1) CHARACTER SET LATIN,
Type VARCHAR,
Id VARCHAR,
Name VARCHAR,
Group VARCHAR
)
...
;
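The function is invoked as a table function in the FROM clause. A minimal sketch of a call (the column selection and alias are illustrative):

```sql
-- Sample each node's headline metrics. The TABLE (...) invocation
-- follows standard Teradata table-function syntax.
SELECT ProcId, Status, AmpCount, CPUUse, DiskUse
FROM TABLE (SYSLIB.MonitorPhysicalResourceSV()) AS t1
ORDER BY ProcId;
```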
Syntax Elements
- ProcId
- ID associated with a node.
- Status
- Status of the node associated with this record:
- U = Up/online
- D = Down/offline
- AmpCount
- Number of AMPs executing on the associated node.
- PECount
- Number of active PEs on the associated node.
- CPUUse
- % of CPU usage not spent idle. The node-level value is computed from ResUsageSpma table data as PercntUser + PercntService. For information on the ResUsageSpma table, see resusagespmaV.
- PrcntKernel
- % of CPU resources spent idle and waiting for I/O completion. This value is computed from ResUsageSpma table data.
- PrcntService
- % of CPU resources spent in PDE user service processing. This value is computed from ResUsageSpma table data.
- PrcntUser
- % of CPU resources spent in non-service user code processing. This value is computed from ResUsageSpma table data.
- DiskUse
- % of disk usage per node.
- DiskReads
- Total number of physical disk reads per node during the collection period. This value is computed from ResUsageSldv table data across the logical device (ldv) devices used by this node.
- DiskWrites
- Total number of physical disk writes per node during the collection period. This value is computed from ResUsageSldv table data.
- DiskOutReqAvg
- Average number of outstanding disk requests.
- NetAUse
- % of BYNET A usage (actual BYNET receiver usage). BYNET transmitter usage is maintained separately in resource usage and is typically lower than receiver usage because of multicasts, where one transmitter sends a message to multiple receivers. This value is computed from ResUsageSpma table data.
- NetReads
- Number of reads from the BYNET to the node. This value is computed from ResUsageSpma table data.
- NetWrites
- Number of messages written from the node to the BYNET during the collection period.
- HstBlkRds
- Number of message blocks (one or more messages sent in one physical group) received from all clients.
- HstBlkWrts
- Number of message blocks (that is, one or more messages sent in one physical group) sent to all hosts.
- MemAllocates
- This column is deprecated and returns zero or NULL.
- MemAllocateKB
- The change in node-level memory allocation. MemAllocateKB is a delta from the previous reporting period; negative values indicate that less memory is in use than in the previous period.
- MemFailures
- This column is deprecated and returns zero or NULL.
- MemAgings
- This column is deprecated and returns zero or NULL.
- NetAUp
- NetBUp
- Status of the BYNETs (if there are more than two, the first two) on a system-wide basis:
- U = All node BYNETs are up/online.
- D = One or more node BYNETs are down/offline.
- "" = A temporary condition where the BYNET data is not available.
- Type
- Only available in the detailed view. Identifies the group type (compute cluster or primary cluster).
- Id
- Only available in the detailed view. Identifies the ID of the group (useful for identifying compute clusters).
- Name
- Only available in the detailed view. Provides the name of the compute cluster.
- Group
- Only available in the detailed view. Provides the name of the compute group.
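Taken together, the node and BYNET status columns support a quick health check. A hypothetical sketch:

```sql
-- Flag nodes that are offline or whose BYNETs are not fully up.
-- ('D' marks down/offline in Status, NetAUp, and NetBUp.)
SELECT ProcId, Status, NetAUp, NetBUp
FROM TABLE (SYSLIB.MonitorPhysicalResourceSV()) AS t1
WHERE Status = 'D'
   OR NetAUp = 'D'
   OR NetBUp = 'D';
```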
Usage Notes
MonitorPhysicalResourceSV returns detailed resource usage information for each node. Because the function reports per node, you can isolate performance concerns to a specific node.
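Because each row is one node, skew appears as outliers across rows. A sketch of one way to surface it (the 1.5x threshold is illustrative, not a documented recommendation):

```sql
-- Nodes whose CPU utilization is well above the cluster average,
-- a common sign of processing skew. QUALIFY filters on the
-- window-function result.
SELECT ProcId, CPUUse
FROM TABLE (SYSLIB.MonitorPhysicalResourceSV()) AS t1
QUALIFY CPUUse > 1.5 * AVG(CPUUse) OVER ();
```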