Note: Not all OpenView Operations Embedded Performance Component metrics are available on every supported platform and OS version. The Embedded Performance Component contains a subset of the metrics available from the OpenView Performance Agent (OVPA).
GLOBAL Metrics:
GBL_ACTIVE_CPU
GBL_BOOT_TIME
GBL_COLLECTOR
GBL_CPU_CLOCK
GBL_CPU_IDLE_TIME
GBL_CPU_IDLE_UTIL
GBL_CPU_SYS_MODE_TIME
GBL_CPU_SYS_MODE_UTIL
GBL_CPU_TOTAL_TIME
GBL_CPU_TOTAL_UTIL
GBL_CPU_USER_MODE_TIME
GBL_CPU_USER_MODE_UTIL
GBL_DISK_PHYS_BYTE
GBL_DISK_PHYS_BYTE_RATE
GBL_DISK_PHYS_IO
GBL_DISK_PHYS_IO_RATE
GBL_FS_SPACE_UTIL_PEAK
GBL_GMTOFFSET
GBL_INTERRUPT
GBL_INTERRUPT_RATE
GBL_INTERVAL
GBL_MACHINE
GBL_MACHINE_MODEL
GBL_MEM_PAGEIN
GBL_MEM_PAGEIN_RATE
GBL_MEM_PAGEOUT
GBL_MEM_PAGEOUT_RATE
GBL_MEM_PG_SCAN
GBL_MEM_PG_SCAN_RATE
GBL_MEM_PHYS
GBL_MEM_SWAPIN_BYTE
GBL_MEM_SWAPIN_BYTE_RATE
GBL_MEM_SWAPOUT_BYTE
GBL_MEM_SWAPOUT_BYTE_RATE
GBL_MEM_UTIL
GBL_NET_COLLISION
GBL_NET_COLLISION_RATE
GBL_NET_ERROR
GBL_NET_ERROR_RATE
GBL_NET_IN_PACKET
GBL_NET_IN_PACKET_RATE
GBL_NET_OUT_PACKET
GBL_NET_OUT_PACKET_RATE
GBL_NUM_CPU
GBL_NUM_DISK
GBL_NUM_NETWORK
GBL_NUM_USER
GBL_OSNAME
GBL_OSRELEASE
GBL_OSVERSION
GBL_RUN_QUEUE
GBL_STARTED_PROC
GBL_STARTED_PROC_RATE
GBL_STATTIME
GBL_SWAP_SPACE_AVAIL
GBL_SWAP_SPACE_USED
GBL_SWAP_SPACE_UTIL
GBL_SYSCALL
GBL_SYSCALL_RATE
GBL_SYSTEM_ID
TBL_FILE_TABLE_AVAIL
TBL_FILE_TABLE_USED
TBL_FILE_TABLE_UTIL
TBL_MSG_TABLE_AVAIL
TBL_MSG_TABLE_USED
TBL_MSG_TABLE_UTIL
TBL_PROC_TABLE_AVAIL
TBL_PROC_TABLE_USED
TBL_PROC_TABLE_UTIL
TBL_SEM_TABLE_AVAIL
TBL_SEM_TABLE_USED
TBL_SEM_TABLE_UTIL
TBL_SHMEM_TABLE_AVAIL
TBL_SHMEM_TABLE_USED
TBL_SHMEM_TABLE_UTIL
CPU (Processor) Metrics:
BYCPU_CPU_CLOCK
BYCPU_CPU_SYS_MODE_TIME
BYCPU_CPU_SYS_MODE_UTIL
BYCPU_CPU_TOTAL_TIME
BYCPU_CPU_TOTAL_UTIL
BYCPU_CPU_USER_MODE_TIME
BYCPU_CPU_USER_MODE_UTIL
BYCPU_ID
BYCPU_INTERRUPT
BYCPU_INTERRUPT_RATE
DISK Metrics:
BYDSK_BUSY_TIME
BYDSK_DEVNAME
BYDSK_ID
BYDSK_PHYS_BYTE
BYDSK_PHYS_BYTE_RATE
BYDSK_PHYS_IO
BYDSK_PHYS_IO_RATE
BYDSK_PHYS_READ
BYDSK_PHYS_READ_BYTE
BYDSK_PHYS_READ_BYTE_RATE
BYDSK_PHYS_READ_RATE
BYDSK_PHYS_WRITE
BYDSK_PHYS_WRITE_BYTE
BYDSK_PHYS_WRITE_BYTE_RATE
BYDSK_PHYS_WRITE_RATE
BYDSK_UTIL
NETIF (Network Interface) Metrics:
BYNETIF_COLLISION
BYNETIF_COLLISION_RATE
BYNETIF_ERROR
BYNETIF_ERROR_RATE
BYNETIF_ID
BYNETIF_IN_BYTE
BYNETIF_IN_BYTE_RATE
BYNETIF_IN_PACKET
BYNETIF_IN_PACKET_RATE
BYNETIF_NAME
BYNETIF_OUT_BYTE
BYNETIF_OUT_BYTE_RATE
BYNETIF_OUT_PACKET
BYNETIF_OUT_PACKET_RATE
FS (File System) Metrics:
FS_BLOCK_SIZE
FS_DEVNAME
FS_DEVNO
FS_DIRNAME
FS_FRAG_SIZE
FS_INODE_UTIL
FS_MAX_INODES
FS_MAX_SIZE
FS_SPACE_RESERVED
FS_SPACE_USED
FS_SPACE_UTIL
FS_TYPE
Metric Definitions
An ASCII field containing the collector name and version.
The length of the measurement interval, in seconds.
This measured interval is slightly larger than the desired or configured interval if the collection program is delayed by a higher priority process and cannot sample the data immediately.
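As an illustration, here is a minimal Python sketch of deriving a per-second rate from a cumulative counter using the measured interval rather than the configured one; the read_counter callable and the 300-second default are hypothetical stand-ins, not part of the product.

import time

def sample_rate(read_counter, configured_interval=300):
    # Take two samples of a cumulative counter one interval apart.
    t0, c0 = time.time(), read_counter()
    time.sleep(configured_interval)
    t1, c1 = time.time(), read_counter()
    # The measured interval can exceed the configured one if sampling
    # was delayed by higher-priority work.
    measured_interval = t1 - t0
    return (c1 - c0) / measured_interval  # events per second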
A string representing the name of the operating system. This is the same as the output from the "uname -s" command.
A string representing the version of the operating system. This is the same as the output from the "uname -v" command. This string is limited to 20 characters, therefore the complete version name might be truncated.
On Windows, this is a string representing the service pack installed on the operating system.
The current release of the operating system.
On most Unix systems, this is the same as the output from the "uname -r" command.
On AIX, this is the actual patch level of the operating system. This is similar to what is returned by the command "lslpp -l bos.rte" as the most recent level of the COMMITTED Base OS Runtime. For example, "5.2.0".
The network node hostname of the system. This is the same as the output from the "uname -n" command.
On Windows, this is the name obtained from GetComputerName.
On most Unix systems, this is a text string representing the type of computer. This is similar to what is returned by the command "uname -m".
On AIX, this is a text string representing the model number of the computer. This is similar to what is returned by the command "uname -M". For example, "7043-150".
On Windows, this is a text string representing the type of the computer. For example, "80686".
The number of CPUs physically on the system. This includes all CPUs, either online or offline.
For HP-UX and certain versions of Linux, the sar(1M) command allows you to check the status of the system CPUs.
For SUN and DEC, the commands psrinfo(1M) and psradm(1M) allow you to check or change the status of the system CPUs.
The amount of physical memory in the system (in MBs unless otherwise specified).
On HP-UX, banks with bad memory are not counted. Note that on some machines, the Processor Dependent Code (PDC) uses the upper 1MB of memory and thus reports less than the actual physical memory of the system. Thus, on a system with 256MB of physical memory, this metric and dmesg(1M) might only report 267,386,880 bytes (255MB). This is all the physical memory that software on the machine can access.
On Windows, this is the total memory available, which may be slightly less than the total amount of physical memory present in the system. This value is also reported in the Control Panel's About Windows help topic.
The number of disks on the system.
Only local disk devices are counted in this metric.
The number of network interfaces on the system. This includes the loopback interface. On certain platforms, this also includes FDDI, Hyperfabric, ATM, serial software interfaces such as SLIP or PPP, and Wide Area Network (WAN) interfaces such as ISDN or X.25. The "netstat -i" command also displays the list of network interfaces on the system.
The clock speed of the CPUs in MHz if all of the processors have the same clock speed. Otherwise, "N/A" is shown if the processors have different clock speeds.
The CPU model. This is similar to the information returned by the GBL_MACHINE metric and the uname command. However, this metric returns more information on some processors.
On HP-UX, this is the same information returned by the "model" command.
An ASCII string representing the time at the end of the interval, based on local time.
The date and time when the system was last booted.
The difference, in minutes, between local time and GMT (Greenwich Mean Time).
The number of CPUs online on the system.
For HP-UX and certain versions of Linux, the sar(1M) command allows you to check the status of the system CPUs.
For SUN and DEC, the commands psrinfo(1M) and psradm(1M) allow you to check or change the status of the system CPUs.
For AIX, the pstat(1) command allows you to check the status of the system CPUs.
On Unix systems, this is the average number of "runnable" threads over all processors during the interval. The value shown for the run queue represents the average of the 1-minute load averages for all processors.
On Windows, this is approximately the average Processor Queue Length during the interval.
On Unix systems, GBL_RUN_QUEUE will typically be a small number. Larger than normal values for this metric indicate CPU contention among processes. This CPU bottleneck is also normally indicated by 100 percent GBL_CPU_TOTAL_UTIL. It may be OK to have GBL_CPU_TOTAL_UTIL be 100 percent if no other processes are waiting for the CPU. However, if GBL_CPU_TOTAL_UTIL is 100 percent and GBL_RUN_QUEUE is greater than the number of processors, it indicates a CPU bottleneck.
On Windows, the Processor Queue reflects a count of process threads which are ready to execute. A thread is ready to execute (in the Ready state) when the only resource it is waiting on is the processor. The Windows operating system itself has many system threads which intermittently use small amounts of processor time. Several low priority threads intermittently wake up and execute for very short intervals. Depending on when the collection process samples this queue, there may be none or several of these low-priority threads trying to execute. Therefore, even on an otherwise quiescent system, the Processor Queue Length can be high. High values for this metric during intervals where the overall CPU utilization (GBL_CPU_TOTAL_UTIL) is low do not indicate a performance bottleneck. Relatively high values for this metric during intervals where the overall CPU utilization is near 100% can indicate a CPU performance bottleneck.
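The Unix rule of thumb above can be written as a small check; this is an illustrative sketch that treats the metric names as plain variables, not as a real collection API.

def cpu_bottleneck(gbl_cpu_total_util, gbl_run_queue, gbl_active_cpu):
    # 100 percent utilization alone may be acceptable; it indicates a CPU
    # bottleneck only when runnable threads exceed the online processors.
    return gbl_cpu_total_util >= 100.0 and gbl_run_queue > gbl_active_cpu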
The number of users logged in at the time of the interval sample. This is the same as the output of the command "who | wc -l".
For Unix systems, the information for this metric comes from the utmp file which is updated by the login command. For more information, read the man page for utmp. Some applications may create users on the system without using login and updating the utmp file. These users are not reflected in this count.
This metric can be a general indicator of system usage. In a networked environment, however, users may maintain inactive logins on several systems.
On Windows systems, the information for this metric comes from the Server Sessions counter in the Performance Libraries Server object. It is a count of the number of users using this machine as a file server.
The number of system calls during the interval.
High system call rates are normal on busy systems, especially with IO intensive applications. Abnormally high system call rates may indicate problems such as a "hung" terminal that is stuck in a loop generating read system calls.
The average number of system calls per second during the interval.
High system call rates are normal on busy systems, especially with IO intensive applications. Abnormally high system call rates may indicate problems such as a "hung" terminal that is stuck in a loop generating read system calls.
The number of processes that started during the interval.
The number of processes that started per second during the interval.
The time, in seconds, that the CPU was in system mode during the interval.
A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine's privileged protection mode and runs in system mode.
On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available.
The time, in seconds, that the CPU was in user mode during the interval.
User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority.
On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available.
The total time, in seconds, that the CPU was not idle in the interval.
This is calculated as
GBL_CPU_TOTAL_TIME =
GBL_CPU_USER_MODE_TIME + GBL_CPU_SYS_MODE_TIME
On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available.
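A worked sketch of the normalization rule, assuming per-processor busy seconds have already been collected by some means:

def normalized_cpu_time(per_cpu_busy_seconds):
    # Divide the busy time summed over all online processors by the
    # processor count, expressing use of the total processing capacity.
    return sum(per_cpu_busy_seconds) / len(per_cpu_busy_seconds)

# Example: on a 2-CPU system where one CPU was busy for all of a
# 60-second interval, the normalized value is 30 seconds (half capacity).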
The time, in seconds, that the CPU was idle during the interval. This is the total idle time, including waiting for I/O.
On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online.
Percentage of time the CPU was in system mode during the interval.
A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine's privileged protection mode and runs in system mode.
This metric is a subset of the GBL_CPU_TOTAL_UTIL percentage.
This is NOT a measure of the amount of time used by system daemon processes, since most system daemons spend part of their time in user mode and part in system calls, like any other process.
On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available.
High system mode CPU percentages are normal for IO intensive applications. Abnormally high system mode CPU percentages can indicate that a hardware problem is causing a high interrupt rate. It can also indicate programs that are not calling system calls efficiently.
The percentage of time the CPU was in user mode during the interval.
User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority.
This metric is a subset of the GBL_CPU_TOTAL_UTIL percentage.
On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available.
High user mode CPU percentages are normal for computation-intensive applications. Low values of user CPU utilization compared to relatively high values for GBL_CPU_SYS_MODE_UTIL can indicate an application or hardware problem.
Percentage of time the CPU was not idle during the interval.
This is calculated as
GBL_CPU_TOTAL_UTIL =
GBL_CPU_USER_MODE_UTIL + GBL_CPU_SYS_MODE_UTIL
On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online. This represents the usage of the total processing capacity available.
GBL_CPU_TOTAL_UTIL + GBL_CPU_IDLE_UTIL = 100%
This metric varies widely on most systems, depending on the workload. A consistently high CPU utilization can indicate a CPU bottleneck, especially when other indicators such as GBL_RUN_QUEUE are also high. High CPU utilization can also occur on systems that are bottlenecked on memory, because the CPU spends more time paging and swapping.
The percentage of time that the CPU was idle during the interval. This is the total idle time, including waiting for I/O.
On Unix systems, this is the same as the sum of the "%idle" and "%wio" fields reported by the "sar -u" command.
On a system with multiple CPUs, this metric is normalized. That is, the CPU used over all processors is divided by the number of processors online.
The number of physical IOs during the interval. Only local disks are counted in this measurement. NFS devices are excluded.
On Unix systems, this includes all types of physical disk IO, including file system, virtual memory, and raw IO.
The number of physical IOs per second during the interval. Only local disks are counted in this measurement. NFS devices are excluded.
On Unix systems, this includes all types of physical reads and writes to and from disk, including file system IO, virtual memory IO and raw IO.
The number of KBs transferred to and from disks during the interval. The bytes for all types of physical IOs are counted. Only local disks are counted in this measurement. NFS devices are excluded.
It is not directly related to the number of IOs, since IO requests can be of differing lengths.
On Unix systems, this includes file system IO, virtual memory IO, and raw IO.
On Windows, all types of physical IOs are counted.
The average number of KBs per second at which data was transferred to and from disks during the interval. The bytes for all types of physical IOs are counted. Only local disks are counted in this measurement. NFS devices are excluded.
This is a measure of the physical data transfer rate. It is not directly related to the number of IOs, since IO requests can be of differing lengths.
This is an indicator of how much data is being transferred to and from disk devices. Large spikes in this metric can indicate a disk bottleneck.
On Unix systems, this includes file system IO, virtual memory IO, and raw IO.
The percentage of occupied disk space to total disk space for the fullest file system found during the interval. Only locally mounted file systems are counted in this metric.
This metric can be used as an indicator that at least one file system on the system is running out of disk space.
On Unix systems, CDROM and PC file systems are also excluded. This metric can exceed 100 percent, because a portion of the file system space is reserved as a buffer and can only be used by root. If the root user has made the file system grow beyond the reserved buffer, the utilization will be greater than 100 percent. This is a dangerous situation, because if the root user completely fills the file system, the system may crash.
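The reserved-buffer effect can be observed with statvfs(2). The following Python sketch mirrors the metric's definition rather than its actual implementation; the path argument is only an example.

import os

def fs_space_util(path="/"):
    # statvfs() distinguishes blocks free to root (f_bfree) from blocks
    # free to ordinary users (f_bavail); the difference is the reserve.
    s = os.statvfs(path)
    used = s.f_blocks - s.f_bfree
    user_capacity = used + s.f_bavail  # capacity visible to non-root users
    # On platforms where f_bavail goes negative once root dips into the
    # reserve, this ratio exceeds 100 percent, matching the note above.
    return 100.0 * used / user_capacity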
The number of IO interrupts during the interval.
The average number of IO interrupts per second during the interval.
On SUN, this value includes clock interrupts, which occur at a rate of 100 per second. To get the non-clock device interrupt rate, subtract 100 from the value.
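For example, the adjustment as a one-line helper (illustrative only):

def sun_device_interrupt_rate(gbl_interrupt_rate):
    # Exclude the fixed 100-per-second clock interrupts on SUN.
    return gbl_interrupt_rate - 100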
On most Unix systems, this is the total number of page ins from the disk during the interval. This includes pages paged in from paging space and from the file system.
On AIX, this is the total number of page ins from the disk during the interval. This includes pages paged in from paging space.
On some Unix systems, this is the same as the "page ins" value from the "vmstat -s" command. Remember that "vmstat -s" reports cumulative counts.
On AIX, this metric cannot be compared to the "pi" value from the "vmstat" command. The "pi" value only reports the number of pages paged in from paging space.
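Because "vmstat -s" reports cumulative counts, an interval figure is the difference of two snapshots. A rough Python sketch follows; the "page ins" label varies across platforms, so the parsing is illustrative only.

import subprocess
import time

def cumulative_page_ins():
    out = subprocess.run(["vmstat", "-s"], capture_output=True,
                         text=True).stdout
    for line in out.splitlines():
        if "page ins" in line:  # label differs by platform
            return int(line.split()[0])
    raise RuntimeError("no page-in counter found")

before = cumulative_page_ins()
time.sleep(60)  # one measurement interval
print("page ins this interval:", cumulative_page_ins() - before)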
The total number of page ins from the disk per second during the interval. This includes pages paged in from paging space and from the file system.
On some Unix systems, this is the same as the "page ins" value from the "vmstat -s" command. Remember that "vmstat -s" reports cumulative counts.
On AIX, this metric cannot be compared to the "pi" value from the "vmstat" command. The "pi" value only reports the number of pages paged in from paging space.
On SunOS 5.7 and 5.8, this includes page-ins from paging space, but does not include file system page-ins (fpi). For SunOS 5.7, this is the same as the sum of "epi" and "api" values from the "memstat" command, divided by the page size in KB. For SunOS 5.8, this is the same as the sum of "epi" and "api" values from the "vmstat -p" command, divided by the page size in KB.
The total number of page outs to the disk during the interval. This includes pages paged out to paging space and to the file system.
For SunOS 5.7 and 5.8, as well as for AIX, the number of page outs does not include file system page outs.
On HP-UX 11i, the value shown reflects forced page outs initiated by vhand due to memory pressure. For HP-UX 11.0, the page out activity may include memory mapped IOs on some file systems (for example, VxFS).
On some Unix systems, this is the same as the "page outs" value from the "vmstat -s" command. Remember that "vmstat -s" reports cumulative counts.
On AIX, this metric cannot be compared to the "po" value from the "vmstat" command. The "po" value only reports the number of pages paged out to paging space.
The total number of page outs to the disk per second during the interval. This includes pages paged out to paging space and to the file system.
On HP-UX 11i, the value shown reflects forced page outs initiated by vhand due to memory pressure. For HP-UX 11.0, the page out activity may include memory mapped IOs on some file systems (for example, VxFS).
On some Unix systems, this is the same as the "page outs" value from the "vmstat -s" command. Remember that "vmstat -s" reports cumulative counts.
On AIX, this metric cannot be compared to the "po" value from the "vmstat" command. The "po" value only reports the number of pages paged out to paging space.
On SunOS 5.7 and 5.8, this includes page-outs to paging space, but does not include file system page-outs (fpo). For SunOS 5.7, this is the same as the sum of "epo" and "apo" values from the "memstat" command, divided by the page size in KB. For SunOS 5.8, this is the same as the sum of "epo" and "apo" values from the "vmstat -p" command, divided by the page size in KB.
On Windows, this counter also includes paging traffic on behalf of the system cache to access file data for applications and so may be high when there is no memory pressure.
The percentage of physical memory in use during the interval. This includes system memory (occupied by the kernel), buffer cache, and user memory.
On HP-UX, this calculation is done using the byte values for physical memory and used memory, and is therefore more accurate than comparing the reported kilobyte values for physical memory and used memory.
On SUN, high values for this metric may not indicate a true memory shortage. This metric can be influenced by the VMM (Virtual Memory Management) system.
The number of pages scanned by the pageout daemon (or by the Clock Hand on AIX) during the interval. The clock hand algorithm is used to control page aging on the system.
The number of pages scanned per second by the pageout daemon (or by the Clock Hand on AIX) during the interval. The clock hand algorithm is used to control page aging on the system.
The total amount of potential swap space, in MB.
On HP-UX, this is the sum of the device swap areas enabled by the swapon command, the allocated size of any file system swap areas, and the allocated size of pseudo swap in memory if enabled. Note that this is potential swap space. This is the same as (AVAIL: total) as reported by the "swapinfo -mt" command.
On SUN, this is the total amount of swap space available from the physical backing store devices (disks) plus the amount currently available from main memory. This is the same as (used + available) /1024, reported by the "swap -s" command.
On Linux, this is the same as (Swap: total) as reported by the "free -m" command.
The amount of swap space used, in MB.
On HP-UX, "Used" indicates written to disk (or locked in memory), rather than reserved. This is the same as (USED: total - reserve) as reported by the "swapinfo -mt" command.
On SUN, "Used" indicates amount written to disk (or locked in memory), rather than reserved. Swap space is reserved (by decrementing a counter) when virtual memory for a program is created. This is the same as (bytes allocated)/1024, reported by the "swap -s" command.
On Linux, this is the same as (Swap: used) as reported by the "free -m" command.
The percentage of available swap space that was being used by running processes during the interval.
On Windows, this is the percentage of virtual memory, which is available to user processes, that is in use at the end of the interval. It is not an average over the entire interval. It reflects the ratio of committed memory to the current commit limit. The limit may be increased by the operating system if the paging file is extended. This is the same as (Committed Bytes / Commit Limit) * 100 when comparing the results to Performance Monitor.
On HP-UX, swap space must be reserved (but not allocated) before virtual memory can be created. If all of available swap is reserved, then no new processes or virtual memory can be created. Swap space locations are actually assigned (used) when a page is actually written to disk or locked in memory (pseudo swap in memory). This is the same as (PCT USED: total) as reported by the "swapinfo -mt" command.
This metric is a measure of capacity rather than performance. As this metric nears 100 percent, processes are not able to allocate any more memory and new processes may not be able to run. Very low swap utilization values may indicate that too much area has been allocated to swap, and better use of disk space could be made by reallocating some swap partitions to be user filesystems.
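On Linux, the ratio can be approximated from the "free -m" correspondence noted above; a hedged sketch:

import subprocess

def swap_space_util_linux():
    out = subprocess.run(["free", "-m"], capture_output=True,
                         text=True).stdout
    for line in out.splitlines():
        if line.startswith("Swap:"):
            total_mb, used_mb = (int(f) for f in line.split()[1:3])
            return 100.0 * used_mb / total_mb  # percent of swap in use
    raise RuntimeError("no Swap line in free output")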
The number of KBs transferred in from disk due to swap ins (or reactivations on HP-UX) during the interval.
The number of KBs transferred out to disk due to swap outs (or deactivations on HP-UX) during the interval.
The number of KBs per second transferred in from disk due to swap ins (or reactivations on HP-UX) during the interval.
The number of KBs per second transferred out to disk due to swap outs (or deactivations on HP-UX) during the interval.
The number of successful packets received through all network interfaces during the interval. Successful packets are those that have been processed without errors.
For HP-UX, this will be the same as the sum of the "Inbound Unicast Packets" and "Inbound Non-Unicast Packets" values from the output of the "lanadmin" utility for the network interface. Remember that "lanadmin" reports cumulative counts. As of the HP-UX 11.0 release and beyond, "netstat -i" shows network activity on the logical level (IP) only.
For all other Unix systems, this is the same as the sum of the "Ipkts" column (RX-OK on Linux) from the "netstat -i" command for a network device. See also netstat(1).
On Windows systems, the packet size for NBT connections is defined as 1 KB.
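A sketch of the "netstat -i" correspondence for the inbound packet count; BSD-style column names are assumed (Linux labels the column RX-OK), so this is illustrative rather than portable.

import subprocess

def total_inbound_packets():
    lines = subprocess.run(["netstat", "-i"], capture_output=True,
                           text=True).stdout.splitlines()
    col = lines[0].split().index("Ipkts")  # header row gives the column
    total = 0
    for row in lines[1:]:
        fields = row.split()
        if len(fields) > col and fields[col].isdigit():
            total += int(fields[col])  # sum over all interfaces
    return total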
The number of successful packets per second received through all network interfaces during the interval. Successful packets are those that have been processed without errors or collisions.
On Windows systems, the packet size for NBT connections is defined as 1 KB.
The number of successful packets sent through all network interfaces during the last interval. Successful packets are those that have been processed without errors or collisions.
For HP-UX, this will be the same as the sum of the "Outbound Unicast Packets" and "Outbound Non-Unicast Packets" values from the output of the "lanadmin" utility for the network interface. Remember that "lanadmin" reports cumulative counts. As of the HP-UX 11.0 release and beyond, "netstat -i" shows network activity on the logical level (IP) only.
For all other Unix systems, this is the same as the sum of the "Opkts" column (TX-OK on Linux) from the "netstat -i" command for a network device. See also netstat(1).
On Windows systems, the packet size for NBT connections is defined as 1 KB.
The number of successful packets per second sent through the network interfaces during the interval. Successful packets are those that have been processed without errors or collisions.
On Windows systems, the packet size for NBT connections is defined as 1 KB.
The number of collisions that occurred on all network interfaces during the interval. A rising rate of collisions versus outbound packets is an indication that the network is becoming increasingly congested. This metric does not include deferred packets.
For HP-UX, this will be the same as the sum of the "Single Collision Frames", "Multiple Collision Frames", "Late Collisions", and "Excessive Collisions" values from the output of the "lanadmin" utility for the network interface. Remember that "lanadmin" reports cumulative counts. As of the HP-UX 11.0 release and beyond, "netstat -i" shows network activity on the logical level (IP) only.
For all other Unix systems, this is the same as the sum of the "Coll" column from the "netstat -i" command ("collisions" from the "netstat -i -e" command on Linux) for a network device. See also netstat(1).
AIX does not support the collision count for the ethernet interface. The collision count is supported for the token ring (tr) and loopback (lo) interfaces. For more information, please refer to the netstat(1) man page.
The number of collisions per second that occurred on all network interfaces during the interval. A rising rate of collisions versus outbound packets is an indication that the network is becoming increasingly congested. This metric does not include deferred packets.
AIX does not support the collision count for the ethernet interface. The collision count is supported for the token ring (tr) and loopback (lo) interfaces. For more information, please refer to the netstat(1) man page.
The number of errors that occurred on all network interfaces during the interval.
For HP-UX, this will be the same as the sum of the "Inbound Errors" and "Outbound Errors" values from the output of the "lanadmin" utility for the network interface. Remember that "lanadmin" reports cumulative counts. As of the HP-UX 11.0 release and beyond, "netstat -i" shows network activity on the logical level (IP) only.
For all other Unix systems, this is the same as the sum of "Ierrs" (RX-ERR on Linux) and "Oerrs" (TX-ERR on Linux) from the "netstat -i" command for a network device. See also netstat(1).
The number of errors per second on all network interfaces during the interval.
The configured maximum number of proc table entries used by the kernel to manage processes. This number includes both free and used entries.
On HP-UX, this is set by the NPROC value during system generation.
AIX has a "dynamic" proc table, which means the available count has been set higher than should ever be needed.
The number of entries in the proc table currently used by processes.
The percentage of proc table entries currently used by processes.
The configured number of shared memory segments that can be allocated on the system.
On HP-UX, this is the number of shared memory segments currently in use.
On all other Unix systems, this is the number of shared memory segments that have been built. This includes shared memory segments with no processes attached to them.
A shared memory segment is allocated by a program using the shmget(2) call. Also refer to ipcs(1).
The percentage of configured shared memory segments currently in use.
The configured maximum number of message queues that can be allocated on the system. A message queue is allocated by a program using the msgget(2) call. Also refer to ipcs(1).
On HP-UX, this is the number of message queues currently in use.
On all other Unix systems, this is the number of message queues that have been built.
A message queue is allocated by a program using the msgget(2) call. See ipcs(1) to list the message queues.
The percentage of configured message queues currently in use.
The configured number of semaphore identifiers (sets) that can be allocated on the system.
On HP-UX, this is the number of semaphore identifiers currently in use.
On all other Unix systems, this is the number of semaphore identifiers that have been built.
A semaphore identifier is allocated by a program using the semget(2) call. See ipcs(1) to list semaphores.
The percentage of configured semaphore identifiers currently in use.
The number of entries in the file table.
On HP-UX and AIX, this is the configured maximum number of the file table entries used by the kernel to manage open file descriptors.
On HP-UX, this is the sum of the "nfile" and "file_pad" values used in kernel generation.
On SUN, this is the number of entries in the file cache. All entries are not always in use. The cache size is dynamic. Entries in this cache are used to manage open file descriptors. They are reused as files are closed and new ones are opened. The size of the cache will go up or down in chunks as more or less space is required in the cache.
On AIX, the file table entries are dynamically allocated by the kernel if there is no entry available. These entries are allocated in chunks.
The number of entries in the file table currently used by file descriptors.
On SUN, this is the number of file cache entries currently used by file descriptors.
The percentage of file table entries currently used by file descriptors.
On SUN, this is the percentage of file cache entries currently used by file descriptors.
The ID number of this CPU. On some Unix systems, such as SUN, CPUs are not sequentially numbered.
The clock speed, in MHz, of the CPU in the current slot.
The number of device interrupts for this CPU during the interval.
The average number of device interrupts per second for this CPU during the interval.
On HP-UX, a value of "N/A" is displayed on a system with multiple CPUs.
The time, in seconds, during the interval that this CPU was in user mode.
User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority.
The time, in seconds, that this CPU was in system mode during the interval.
A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine's privileged protection mode and runs in system mode.
The total time, in seconds, that this CPU was not idle during the interval.
The percentage of time that this CPU was in user mode during the interval.
User CPU is the time spent in user mode at a normal priority, at real-time priority (on HP-UX, AIX, and Windows systems), and at a nice priority.
The percentage of time that this CPU was in system mode during the interval.
A process operates in either system mode (also called kernel mode on Unix or privileged mode on Windows) or user mode. When a process requests services from the operating system with a system call, it switches into the machine's privileged protection mode and runs in system mode.
The percentage of time that this CPU was not idle during the interval.
The ID number of the network interface.
The name of the network interface.
For HP-UX 11.0 and beyond, these are the same names that appear in the "Description" field of the "lanadmin" command output.
On all other Unix systems, these are the same names that appear in the "Name" column of the "netstat -i" command.
Some examples of device names are:
lo - loop-back driver
ln - Standard Ethernet driver
en - Standard Ethernet driver
le - Lance Ethernet driver
ie - Intel Ethernet driver
tr - Token-Ring driver
et - Ether Twist driver
bf - fiber optic driver
All of the device names will have the unit number appended to the name. For example, a loop-back device in unit 0 will be "lo0".
The number of KBs received from the network via this interface during the interval. Only the bytes in packets that carry data are counted.
Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show "N/A" for the physical statistics since there is no network driver activity.
Logical statistics are packets seen only by the Internet Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network-generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics.
The number of KBs sent to the network via this interface during the interval. Only the bytes in packets that carry data are counted.
Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show "N/A" for the physical statistics since there is no network driver activity.
Logical statistics are packets seen only by the Internet Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network-generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics.
The number of successful physical packets received through the network interface during the interval. Successful packets are those that have been processed without errors or collisions.
For HP-UX, this will be the same as the sum of the "Inbound Unicast Packets" and "Inbound Non-Unicast Packets" values from the output of the "lanadmin" utility for the network interface. Remember that "lanadmin" reports cumulative counts. As of the HP-UX 11.0 release and beyond, "netstat -i" shows network activity on the logical level (IP) only.
For all other Unix systems, this is the same as the sum of the "Ipkts" column (RX-OK on Linux) from the "netstat -i" command for a network device. See also netstat(1).
Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show "N/A" for the physical statistics since there is no network driver activity.
Logical statistics are packets seen only by the Internet Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network-generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics.
The number of successful physical packets per second received through the network interface during the interval. Successful packets are those that have been processed without errors or collisions.
Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show "N/A" for the physical statistics since there is no network driver activity.
Logical statistics are packets seen only by the Internet Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network-generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics.
The number of successful physical packets sent through the network interface during the interval. Successful packets are those that have been processed without errors or collisions.
For HP-UX, this will be the same as the sum of the "Outbound Unicast Packets" and "Outbound Non-Unicast Packets" values from the output of the "lanadmin" utility for the network interface. Remember that "lanadmin" reports cumulative counts. As of the HP-UX 11.0 release and beyond, "netstat -i" shows network activity on the logical level (IP) only.
For all other Unix systems, this is the same as the sum of the "Opkts" column (TX-OK on Linux) from the "netstat -i" command for a network device. See also netstat(1).
Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show "N/A" for the physical statistics since there is no network driver activity.
Logical statistics are packets seen only by the Internet Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network-generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics.
The number of successful physical packets per second sent through the network interface during the interval. Successful packets are those that have been processed without errors or collisions.
Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show "N/A" for the physical statistics since there is no network driver activity.
Logical statistics are packets seen only by the Internet Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network-generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics.
The number of physical collisions that occurred on the network interface during the interval. A rising rate of collisions versus outbound packets is an indication that the network is becoming increasingly congested. This metric does not currently include deferred packets.
This data is not collected for non-broadcasting devices, such as loopback (lo), and is always zero for such devices.
For HP-UX, this will be the same as the sum of the "Single Collision Frames", "Multiple Collision Frames", "Late Collisions", and "Excessive Collisions" values from the output of the "lanadmin" utility for the network interface. Remember that "lanadmin" reports cumulative counts. As of the HP-UX 11.0 release and beyond, "netstat -i" shows network activity on the logical level (IP) only.
For most other Unix systems, this is the same as the sum of the "Coll" column from the "netstat -i" command ("collisions" from the "netstat -i -e" command on Linux) for a network device. See also netstat(1).
AIX does not support the collision count for the ethernet interface. The collision count is supported for the token ring (tr) and loopback (lo) interfaces. For more information, please refer to the netstat(1) man page.
Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show "N/A" for the physical statistics since there is no network driver activity.
Logical statistics are packets seen only by the Internet Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network-generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics.
The number of physical collisions per second on the network interface during the interval. A rising rate of collisions versus outbound packets is an indication that the network is becoming increasingly congested. This metric does not currently include deferred packets.
This data is not collected for non-broadcasting devices, such as loopback (lo), and is always zero for such devices.
Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show "N/A" for the physical statistics since there is no network driver activity.
Logical statistics are packets seen only by the Internet Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network-generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics.
The number of physical errors that occurred on the network interface during the interval. An increasing number of errors may indicate a hardware problem in the network.
On Unix systems, this data is not available for loop-back (lo) devices and is always zero for such devices.
For HP-UX, this will be the same as the sum of the "Inbound Errors" and "Outbound Errors" values from the output of the "lanadmin" utility for the network interface. Remember that "lanadmin" reports cumulative counts. As of the HP-UX 11.0 release and beyond, "netstat -i" shows network activity on the logical level (IP) only.
For all other Unix systems, this is the same as the sum of "Ierrs" (RX-ERR on Linux) and "Oerrs" (TX-ERR on Linux) from the "netstat -i" command for a network device. See also netstat(1).
Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show "N/A" for the physical statistics since there is no network driver activity.
Logical statistics are packets seen only by the Internet Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network-generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics.
The number of physical errors per second on the network interface during the interval.
On Unix systems, this data is not available for loop-back (lo) devices and is always zero for such devices.
Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show "N/A" for the physical statistics since there is no network driver activity.
Logical statistics are packets seen only by the Internet Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network-generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics.
The number of KBs per second received from the network via this interface during the interval. Only the bytes in packets that carry data are included in this rate.
Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show "N/A" for the physical statistics since there is no network driver activity.
Logical statistics are packets seen only by the Internet Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network-generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics.
The number of KBs per second sent to the network via this interface during the interval. Only the bytes in packets that carry data are included in this rate.
Physical statistics are packets recorded by the network drivers. These numbers most likely will not be the same as the logical statistics. The values returned for the loopback interface will show "N/A" for the physical statistics since there is no network driver activity.
Logical statistics are packets seen only by the Internet Protocol (IP) layer of the networking subsystem. Not all packets seen by IP will go out and come in through a network driver. An example is the loopback interface (127.0.0.1). Pings or other network-generating commands (ftp, rlogin, and so forth) to 127.0.0.1 will not change physical driver statistics. Pings to IP addresses on remote systems will change physical driver statistics.
On Unix systems, this is the major and minor number of the file system.
On Windows, this is the unit number of the disk device on which the logical disk resides.
On Unix systems, this is the path name string of the current device.
On Windows, this is the disk drive string of the current device.
On HP-UX, this is the "fsname" parameter in the mount(1M) command. For NFS devices, this includes the name of the node exporting the file system.
On SUN, this is the path name string of the current device, or "tmpfs" for memory based file systems. See tmpfs(7).
On Unix systems, this is the path name of the mount point of the file system. See mount(1M) and mnttab(4), tmpfs(7) on SUN, and filesystems(4) on AIX.
On Windows, this is the drive letter associated with the selected disk partition.
A string indicating the file system type. On Unix systems, some of the possible types are:
hfs - user file system
ufs - user file system
ext2 - user file system
cdfs - CD-ROM file system
vxfs - Veritas (vxfs) file system
nfs - network file system
nfs3 - network file system Version 3
On Windows, some of the possible types are:
NTFS - New Technology File System
FAT - 16-bit File Allocation Table
FAT32 - 32-bit File Allocation Table
FAT uses a 16-bit file allocation table entry (2^16 clusters).
FAT32 uses a 32-bit file allocation table entry. However, Windows 2000 reserves the first 4 bits of a FAT32 file allocation table entry, which means FAT32 has a theoretical maximum of 2^28 clusters. NTFS is the native file system of Windows NT and later.
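Restated as arithmetic:

fat16_clusters = 2 ** 16        # 65,536 clusters with a 16-bit entry
fat32_clusters = 2 ** (32 - 4)  # 268,435,456 clusters; 4 bits reserved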
The maximum block size of this file system, in bytes.
The fundamental file system block size, in bytes.
The maximum size, in MB, that this file system can reach if full.
Note that this is the user space capacity - it is the file system space accessible to non-root users. On most Unix systems, the df command shows the total file system capacity, which includes the extra file system space accessible only to root users.
The equivalent df fields to look at are "used" and "avail", reported in KB. For the target file system, calculate the maximum size in MB as
FS Max Size = (used + avail) / 1024
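For example, with illustrative df values of used = 1,500,000 KB and avail = 500,000 KB:

used_kb, avail_kb = 1_500_000, 500_000
fs_max_size_mb = (used_kb + avail_kb) / 1024  # 1953.125 MB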
Percentage of the file system space in use during the interval.
Note that this is the user space capacity - it is the file system space accessible to non-root users. On most Unix systems, the df command shows the total file system capacity, which includes the extra file system space accessible only to root users.
The number of configured file system inodes.
Percentage of the inodes for this file system that were in use during the interval.
The amount of file system space in MBs that is being used.
The amount of file system space in MBs reserved for superuser allocation.
The ID of the current disk device.
The name of this disk device.
On HP-UX, the name identifying the specific disk spindle is the hardware path which specifies the address of the hardware components leading to the disk device.
On SUN, these names are the same disk names displayed by "iostat".
On AIX, this is the path name string of this disk device. This is the fsname parameter in the mount(1M) command. If more than one file system is contained on a device (that is, the device is partitioned), this is indicated by an asterisk ("*") at the end of the path name.
On OSF1, this is the path name string of this disk device. This is the file-system parameter in the mount(1M) command.
On Windows, this is the unit number of this disk device.
The number of physical reads for this disk device during the interval.
On AIX, this is an estimated value based on the ratio of read bytes to total bytes transferred. The actual number of reads is not tracked by the kernel. This is computed as
BYDSK_PHYS_READ =
BYDSK_PHYS_IO * (BYDSK_PHYS_READ_BYTE / BYDSK_PHYS_IO_BYTE)
The number of physical writes for this disk device during the interval.
On AIX, this is an estimated value based on the ratio of write bytes to total bytes transferred because the actual number of writes is not tracked by the kernel. This is computed as
BYDSK_PHYS_WRITE =
BYDSK_PHYS_IO * (BYDSK_PHYS_WRITE_BYTE / BYDSK_PHYS_IO_BYTE)
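Both AIX estimates apportion the IO count by the byte ratio; a minimal sketch:

def estimate_phys_writes(phys_io, write_bytes, total_bytes):
    # The kernel tracks bytes but not the read/write split of the IO
    # count, so the writes are estimated from the byte ratio.
    return phys_io * (write_bytes / total_bytes) if total_bytes else 0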
The KBs transferred from this disk device during the interval.
On Unix systems, all types of physical disk reads are counted, including file system, virtual memory, and raw reads.
The KBs transferred to this disk device during the interval.
On Unix systems, all types of physical disk writes are counted, including file system, virtual memory, and raw writes.
The KBs transferred to or from this disk device during the interval.
On Unix systems, all types of physical disk IOs are counted, including file system, virtual memory, and raw IOs.
The number of physical IOs for this disk device during the interval.
On Unix systems, all types of physical disk IOs are counted, including file system, virtual memory, and raw IOs.
The average KBs per second transferred from this disk device during the interval.
On Unix systems, all types of physical disk reads are counted, including file system, virtual memory, and raw reads.
The average KBs per second transferred to this disk device during the interval.
On Unix systems, all types of physical disk writes are counted, including file system, virtual memory, and raw writes.
The average KBs per second transferred to or from this disk device during the interval.
On Unix systems, all types of physical disk IOs are counted, including file system, virtual memory, and raw IOs.
The average number of physical IO requests per second for this disk device during the interval.
On Unix systems, all types of physical disk IOs are counted, including file system, virtual memory, and raw IOs.
The average number of physical reads per second for this disk device during the interval.
On Unix systems, all types of physical disk reads are counted, including file system, virtual memory, and raw reads.
On AIX, this is an estimated value based on the ratio of read bytes to total bytes transferred. The actual number of reads is not tracked by the kernel. This is calculated as
BYDSK_PHYS_READ_RATE =
BYDSK_PHYS_IO_RATE * (BYDSK_PHYS_READ_BYTE / BYDSK_PHYS_IO_BYTE)
The average number of physical writes per second for this disk device during the interval.
On Unix systems, all types of physical disk writes are counted, including file system, virtual memory, and raw writes.
On AIX, this is an estimated value based on the ratio of write bytes to total bytes transferred. The actual number of writes is not tracked by the kernel. This is calculated as
BYDSK_PHYS_WRITE_RATE =
BYDSK_PHYS_IO_RATE * (BYDSK_PHYS_WRITE_BYTE / BYDSK_PHYS_IO_BYTE)
The time, in seconds, that this disk device was busy transferring data during the interval.
On HP-UX, this is the time, in seconds, during the interval that the disk device had IO in progress from the point of view of the Operating System. In other words, the time, in seconds, the disk was busy servicing requests for this device.
On HP-UX, this is the percentage of the time during the interval that the disk device had IO in progress from the point of view of the Operating System. In other words, the utilization or percentage of time the disk was busy servicing requests for this device.
On the non-HP-UX systems, this is the percentage of the time that this disk device was busy transferring data during the interval.
Some Linux kernels, typically 2.2 and older kernels, do not support the instrumentation needed to provide values for this metric. This metric will be "N/A" on the affected kernels. The "sar -d" command will also not be present on these systems. Distributions and OS releases that are known to be affected include: TurboLinux 7, SuSE 7.2, and Debian 3.0.
This is a measure of the ability of the IO path to meet the transfer demands being placed on it. Slower disk devices may show a higher utilization with lower IO rates than faster disk devices such as disk arrays. A value greater than 50% utilization over time may indicate that this device or its IO path is a bottleneck, and that the access pattern of the workload, database, or files may need reorganizing for better balance of disk IO load.
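As a closing sketch, the rule of thumb above expressed over a series of interval samples; the threshold and sample handling are illustrative.

def disk_bottleneck(bydsk_util_samples, threshold=50.0):
    # Sustained utilization above the threshold across every sampled
    # interval suggests the device or its IO path is a bottleneck.
    return bool(bydsk_util_samples) and all(
        u > threshold for u in bydsk_util_samples)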