Oracle® Clusterware Administration and Deployment Guide 11g Release 1 (11.1), Part Number B28255-01
Oracle provides Cluster Verification Utility (CVU) to perform system checks in preparation for installation, patch updates, or other system changes. Learning how to use CVU can ensure that you have completed the required system configuration and preinstallation steps so that your installation, update, or patch operation completes successfully.
See Also:
Your platform-specific Oracle Clusterware and Oracle RAC installation guide for information about how to manually install CVU

The CVU can verify the primary cluster components during an operational phase or stage. A component can be basic, such as free disk space, or it can be complex, such as checking Oracle Clusterware integrity. For example, CVU can verify multiple Oracle Clusterware subcomponents across Oracle Clusterware layers. Additionally, CVU can check disk space, memory, processes, and other important cluster components. A stage could be, for example, database installation, for which CVU can verify whether your system meets the criteria for an Oracle RAC installation. Other stages range from the initial hardware setup and the establishment of system requirements through the fully operational cluster setup.
When verifying stages, CVU uses entry and exit criteria. In other words, each stage has entry criteria that define a specific set of verification tasks to be performed before initiating that stage. This check prevents you from beginning a stage, such as installing Oracle Clusterware, unless you meet the Oracle Clusterware stage's prerequisites.
The exit criteria for a stage define another set of verification tasks that you need to perform after the completion of the stage. Post-checks ensure that the activities for that stage have been completed. Post-checks identify stage-specific problems before they propagate to subsequent stages.
The node list that you use with CVU commands should be a comma-delimited list of host names without a domain. The CVU ignores domains while processing node lists. If a CVU command entry has duplicate node entries after removing domain information, then CVU eliminates the duplicate node entries. Wherever supported, you can use the -n all option to verify all of your cluster nodes that are part of a specific Oracle RAC installation. You do not have to be the root user to use the CVU, and the CVU assumes that the current user is the oracle user.
Note:
The CVU only supports an English-based syntax and English online help.

For network connectivity verification, the CVU discovers all of the available network interfaces if you do not specify an interface on the CVU command line. For storage accessibility verification, the CVU discovers shared storage for all of the supported storage types if you do not specify a particular storage identification on the command line. The CVU also discovers the Oracle Clusterware home if one is available.
Run the CVU command-line tool using the cluvfy command. Using cluvfy does not adversely affect your cluster environment or your installed software. You can run cluvfy commands at any time, even before the Oracle Clusterware installation. In fact, the CVU is designed to assist you as soon as your hardware and operating system are operational. If you run a command that requires Oracle Clusterware on a node, then the CVU reports an error if Oracle Clusterware is not yet installed on that node.
You can enable tracing by setting the environment variable SRVM_TRACE to true. For example, in tcsh an entry such as setenv SRVM_TRACE true enables tracing. The CVU trace files are created in the CV_HOME/cv/log directory. Oracle automatically rotates the log files, and the most recently created log file has the name cvutrace.log.0. You should remove unwanted log files or archive them to reclaim disk space if needed. The CVU does not generate trace files unless you enable tracing.
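For example, the equivalent entry in a Bourne-style shell such as sh or bash would be the following sketch (the cluvfy invocation is only an illustration; any CVU command run with SRVM_TRACE set produces trace files):

export SRVM_TRACE=true
cluvfy comp nodecon -n all -verbose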
The CVU has the following requirements:
At least 30 MB of free space for the CVU software on the node from which you run the CVU
A location for the current JDK, Java 1.4.1 or later
A work directory with at least 25 MB of free space on each node
Note:
When using the CVU, the CVU attempts to copy any needed information to the CVU work directory. Make sure that the CVU work directory exists on all of the nodes in your cluster database and that the directory on each node has write permissions established for the CVU user. Set this directory using the CV_DESTLOC environment variable. If you do not set this variable, then the CVU uses /tmp as the work directory on Linux and UNIX systems, and C:\temp on Windows systems.
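For example, on Linux or UNIX systems running tcsh, you could direct the CVU to a dedicated work area before running a check; the directory name here is only an illustration, and it must exist with write permission on every node:

setenv CV_DESTLOC /u01/cvu_work
cluvfy comp nodecon -n all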
The cluvfy commands have context-sensitive help that shows their usage based on the command-line arguments that you enter. For example, if you enter cluvfy, then the CVU displays high-level generic usage text describing the stage and component syntax. If you enter cluvfy comp -list, then the CVU shows the valid components with a brief description of each. If you enter cluvfy comp -help, then the CVU shows detailed syntax for each of the valid component checks. Similarly, cluvfy stage -list and cluvfy stage -help display the valid stages and the syntax for their checks, respectively. If you enter an invalid CVU command, then the CVU shows the correct usage for that command. For example, if you type cluvfy stage -pre dbinst, then the CVU shows the correct syntax for the precheck commands for the dbinst stage. Enter the cluvfy -help command to see detailed CVU command information.
Although by default the CVU reports in nonverbose mode by only reporting the summary of a test, you can obtain detailed output by using the -verbose argument. The -verbose argument produces detailed output of individual checks and, where applicable, shows results for each node in a tabular layout.
If a cluvfy command responds with UNKNOWN for a particular node, then this is because the CVU cannot determine whether a check passed or failed. The cause could be a loss of reachability to the node, the failure of user equivalence to that node, or any system problem that was occurring on the node at the time that the CVU was performing a check. The following is a list of possible causes for an UNKNOWN response:
The node is down
Executables that the CVU requires are missing in CRS_home/bin or the Oracle home directory
The user account that ran the CVU does not have privileges to run common operating system executables on the node
The node is missing an operating system patch or a required package
The node has exceeded the maximum number of processes or maximum number of open files, or there is a problem with IPC segments, such as shared memory or semaphores
You can use the following nodelist shortcuts:
To provide the CVU a list of all of the nodes of a cluster, enter -n all. The CVU attempts to obtain the node list in the following order:
If vendor clusterware is available, then the CVU selects all of the configured nodes from the vendor clusterware using the lsnodes utility.
If Oracle Clusterware is installed, then the CVU selects all of the configured nodes from Oracle Clusterware using the olsnodes utility.
If neither vendor clusterware nor Oracle Clusterware is installed, then the CVU searches for a value for the CV_NODE_ALL key in the configuration file.
If vendor clusterware and Oracle Clusterware are not installed and no key named CV_NODE_ALL exists in the configuration file, then the CVU searches for a value for the CV_NODE_ALL environment variable.
If you have not set this variable, then the CVU reports an error.
To provide a partial node list, you can set an environment variable and use it in the CVU command. For example, on Linux or UNIX systems you can enter:
setenv MYNODES node1,node3,node5
cluvfy comp nodecon -n $MYNODES [-verbose]
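In a Bourne-style shell such as sh or bash, the equivalent entry would be:

MYNODES=node1,node3,node5
cluvfy comp nodecon -n $MYNODES [-verbose]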
You can use the CVU configuration file to define specific inputs for the execution of the CVU. The path for the configuration file is CV_HOME/cv/admin/cvu_config. You can modify this file using a text editor. The inputs to the tool are defined in the form of key entries. You must follow these rules when modifying the CVU configuration file:
Key entries have the syntax name=value
Each key entry and the value assigned to the key only defines one property
Lines beginning with the number sign (#) are comment lines and are ignored
Lines that do not follow the syntax name=value are ignored
The following is the list of keys supported by CVU:
CV_NODE_ALL—If set, it specifies the list of nodes that should be picked up when Oracle Clusterware is not installed and a -n all option has been used in the command line. By default, this entry is commented out.
CV_RAW_CHECK_ENABLED—If set to TRUE, it enables the check for accessibility of shared disks on Red Hat release 3.0. This shared disk accessibility check requires that you install the cvuqdisk rpm on all of the nodes. By default, this key is set to TRUE and the shared disk check is enabled.
CV_XCHK_FOR_SSH_ENABLED—If set to TRUE, it enables the X-Windows check for verifying user equivalence with ssh. By default, this entry is commented out and the X-Windows check is disabled.
ORACLE_SRVM_REMOTESHELL—If set, it specifies the location for the ssh or rsh command to override the CVU default value. By default, this entry is commented out and the tool uses /usr/sbin/ssh and /usr/sbin/rsh.
ORACLE_SRVM_REMOTECOPY—If set, it specifies the location for the scp or rcp command to override the CVU default value. By default, this entry is commented out and the CVU uses /usr/bin/scp and /usr/sbin/rcp.
If the CVU does not find a key entry defined in the configuration file, then the CVU searches for the environment variable that matches the name of the key. If the environment variable is set, then the CVU uses its value; otherwise, the CVU uses a default value for that entity.
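For illustration, a cvu_config file that defines a default node list and enables the X-Windows check might contain entries such as the following (the node names are examples only):

# Nodes to use when Oracle Clusterware is not installed and -n all is specified
CV_NODE_ALL=node1,node2,node3
# Enable the X-Windows check during user equivalence verification
CV_XCHK_FOR_SSH_ENABLED=TRUE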
You can perform the following tests using CVU as described under the following topics:
Cluster Verification Utility System Requirements Verifications
Cluster Verification Utility User and Permissions Verifications
Cluster Verification Utility Node Comparisons and Verifications
Cluster Verification Utility Oracle Clusterware Component Verifications
Cluster Verification Utility Cluster Integrity Verifications
Cluster Verification Utility Argument and Option Definitions
See Also:
Table A-1 for details about the arguments and options used in the following CVU examples

To verify the minimal system requirements on the nodes prior to installing Oracle Clusterware or Oracle RAC, use the sys component verification command as follows:
cluvfy comp sys [ -n node_list ] -p { crs | database } [-r { 10gR1 | 10gR2 | 11gR1 } ] [ -osdba osdba_group ] [ -orainv orainventory_group ] [-verbose]
To check the system requirements for installing Oracle RAC, use the -p database argument, and to check the system requirements for installing Oracle Clusterware, use the -p crs argument. To check the system requirements for installing Oracle Clusterware or Oracle RAC from Oracle Database 11g release 1 (11.1), use the -r 11gR1 argument. For example, verify the system requirements for installing Oracle Clusterware on the cluster nodes known as node1, node2, and node3 by running the following command:
cluvfy comp sys -n node1,node2,node3 -p crs -verbose
To verify whether storage is shared among the nodes in your cluster database or to identify all of the storage that is available on the system and can be shared across the cluster nodes, use the component verification command ssa as follows:
cluvfy comp ssa [ -n node_list ] [ -s storageID_list ] [-verbose]
See Also:
"Known Issues for the Cluster Verification Utility" for the types of storage that CVU supportsFor example, discover all of the shared storage systems available on your system by running the following command:
cluvfy comp ssa -n all -verbose
You can verify the accessibility of a specific storage location, such as /dev/sda, across the cluster nodes by running the following command:
cluvfy comp ssa -n all -s /dev/sda
To verify whether a certain amount of free space is available on a specific location in the nodes of your cluster database, use the component verification command space as follows:
cluvfy comp space [ -n node_list ] -l storage_location -z disk_space {B|K|M|G} [-verbose]
For example, you can verify the availability of at least 2 GB of free space at the location /home/dbadmin/products on all of the cluster nodes by running the following command:
cluvfy comp space -n all -l /home/dbadmin/products -z 2G -verbose
To verify the integrity of your Oracle Cluster File System (OCFS) on platforms on which OCFS is available, use the component verification command cfs as follows:
cluvfy comp cfs [ -n node_list ] -f file_system [-verbose]
For example, you can verify the integrity of the cluster file system /oradbshare on all of the nodes by running the following command:
cluvfy comp cfs -f /oradbshare -n all -verbose
Note:
The sharedness check for the file system is supported for Oracle Cluster File System version 1.0.14 or higher.

To verify that the cluster nodes can be reached from the local node or from any other cluster node, use the component verification command nodereach as follows:
cluvfy comp nodereach -n node_list [ -srcnode node ] [-verbose]
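For example, you can verify that the nodes node1, node2, and node3 (illustrative names) can be reached from the local node by running:

cluvfy comp nodereach -n node1,node2,node3 -verbose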
To verify the connectivity between the cluster nodes through all of the available network interfaces or through specific network interfaces, use the component verification command nodecon as follows:
cluvfy comp nodecon -n node_list [ -i interface_list ] [-verbose]
Use the nodecon command without the -i option, as shown in the command following this list, to have the CVU:
Discover all of the network interfaces that are available on the cluster nodes
Review the interfaces' corresponding IP addresses and subnets
Obtain the list of interfaces that are suitable for use as VIPs and the list of interfaces to private interconnects
Verify the connectivity between all of the nodes through those interfaces
cluvfy comp nodecon -n all [-verbose]
You can run this command in verbose mode to identify the mappings between the interfaces, IP addresses, and subnets. To verify the connectivity between all of the nodes through specific network interfaces, use the comp nodecon command with the -i option. For example, you can verify the connectivity between the nodes node1, node2, and node3 through interface eth0 by running the following command:
cluvfy comp nodecon -n node1,node2,node3 -i eth0 -verbose
To verify user accounts and administrative permissions-related issues, use the component verification command admprv as follows:
cluvfy comp admprv [ -n node_list ] [-verbose] | -o user_equiv [-sshonly] | -o crs_inst [-orainv orainventory_group ] | -o db_inst [-orainv orainventory_group ] [-osdba osdba_group ] | -o db_config -d oracle_home
To verify whether user equivalence exists on specific nodes, use the -o user_equiv argument. On Linux and UNIX platforms, this command verifies user equivalence first using ssh and then using rsh, if the ssh check fails. To verify the equivalence only through ssh, use the -sshonly option. By default, the equivalence check does not verify X-Windows configurations, such as whether you have disabled X-forwarding, whether you have the proper setting for the DISPLAY environment variable, and so on.

To verify X-Windows aspects during user equivalence checks, set the CV_XCHK_FOR_SSH_ENABLED key to TRUE in the configuration file that resides in the path CV_HOME/cv/admin/cvu_config before you run the admprv -o user_equiv command. Use the -o crs_inst argument to verify whether you have permissions to install Oracle Clusterware.
You can use the -o db_inst argument to verify the permissions that are required for installing Oracle RAC and the -o db_config argument to verify the permissions that are required for creating an Oracle RAC database or for modifying an Oracle RAC database's configuration. For example, you can verify user equivalence for all of the nodes by running the following command:
cluvfy comp admprv -n all -o user_equiv -verbose
To verify the existence of node applications, namely VIP, ONS, and GSD, on all of the nodes, use the component verification command nodeapp as follows:
cluvfy comp nodeapp [ -n node_list ] [-verbose]
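For example, you can verify the existence of the node applications on all of the cluster nodes by running:

cluvfy comp nodeapp -n all -verbose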
Use the component verification peer command to compare the nodes as follows:
cluvfy comp peer [ -refnode node ] -n node_list [-r { 10gR1 | 10gR2 | 11gR1} ] [ -orainv orainventory_group ] [ -osdba osdba_group ] [-verbose]
The following command lists the values of several preselected properties on different nodes from Oracle Database 11g release 1 (11.1):
cluvfy comp peer -n node_list [-r 11gR1] [-verbose]
You can also use the comp peer command with the -refnode argument to compare the properties of other nodes against those of the reference node.
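For example, you could compare node2 and node3 against a reference node named node1 (the node names are illustrative) by running:

cluvfy comp peer -refnode node1 -n node2,node3 -verbose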
To verify whether your system meets all of the criteria for an Oracle Clusterware installation, use the -pre crsinst command for the Oracle Clusterware installation stage as follows:
cluvfy stage -pre crsinst -n node_list [ -c ocr_location ] [-r { 10gR1 | 10gR2 | 11gR1} ][ -q voting_disk ] [ -osdba osdba_group ] [ -orainv orainventory_group ] [-verbose]
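For example, a minimal precheck on two prospective cluster nodes (the node names are illustrative) would be:

cluvfy stage -pre crsinst -n node1,node2 -verbose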
After you have completed phase one, verify that Oracle Clusterware is functioning properly before proceeding with phase two of your Oracle RAC installation by running the -post crsinst command for the Oracle Clusterware installation stage:
cluvfy stage -post crsinst -n node_list [-verbose]
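For example, a typical post-installation check across two nodes (the node names are illustrative) would be:

cluvfy stage -post crsinst -n node1,node2 -verbose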
To verify whether your system meets all of the criteria for an Oracle RAC installation, use the -pre dbinst command for the database installation stage:
cluvfy stage -pre dbinst -n node_list [-r { 10gR1 | 10gR2 | 11gR1} ] [ -osdba osdba_group ] [ -orainv orainventory_group ] [-verbose]
To verify whether your system meets all of the criteria for creating a database or for making a database configuration change, use the -pre dbcfg command for the database configuration stage:
cluvfy stage -pre dbcfg -n node_list -d oracle_home [-verbose]
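For example, assuming the Oracle home path is available in the ORACLE_HOME environment variable, a typical invocation might be:

cluvfy stage -pre dbcfg -n node1,node2 -d $ORACLE_HOME -verbose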
To check the integrity of your entire cluster, which means to verify that all of the nodes in the cluster have the same view of the cluster configuration, use the component verification command clu, as follows:
cluvfy comp clu
To verify the integrity of all of the Oracle Clusterware components, use the component verification command crs:
cluvfy comp crs [ -n node_list ] [-verbose]
To verify the integrity of each individual Cluster Manager subcomponent, use the component verification command clumgr:
cluvfy comp clumgr [ -n node_list ] [-verbose]
To verify the integrity of the Oracle Cluster Registry, use the component verification command ocr:
cluvfy comp ocr [ -n node_list ] [-verbose]
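For example, to check the Oracle Cluster Registry on all of the nodes, you might run:

cluvfy comp ocr -n all -verbose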
Table A-1 describes the CVU arguments and options used in the previous examples:
Table A-1 Cluster Verification Utility Arguments and Options
Argument or Option | Definition
---|---
-n node_list | The comma-delimited list of nondomain qualified node names on which the test should be conducted. If all is specified, then all of the nodes in the cluster are used for verification.
-i interface_list | The comma-delimited list of interface names.
-f file_system | The name of the file system.
-s storageID_list | The comma-delimited list of storage identifiers.
-l storage_location | The storage path.
-z disk_space | The required disk space, in units of bytes (B), kilobytes (K), megabytes (M), or gigabytes (G).
-osdba osdba_group | The name of the OSDBA group. The default is dba.
-orainv orainventory_group | The name of the Oracle inventory group. The default is oinstall.
-verbose | Makes the CVU print detailed output.
-o user_equiv | Checks user equivalence between the nodes.
-sshonly | Checks user equivalence for ssh setup only.
-o crs_inst | Checks administrative privileges for installing Oracle Clusterware.
-o db_inst | Checks administrative privileges for installing Oracle RAC.
-o db_config | Checks administrative privileges for creating or configuring a database.
-refnode node | The node that will be used as a reference for checking compatibility with other nodes.
-srcnode node | The node from which the reachability to other nodes should be checked.
-r | The release of the Oracle Database for which the requirements for installation of Oracle Clusterware or Oracle RAC are to be verified; the valid values are 10gR1, 10gR2, and 11gR1. If this option is not specified, then Oracle Database 11g release 1 (11.1) is assumed.
This section describes the following known limitations for CVU:
The current CVU release supports only Oracle Database 10g or higher, Oracle RAC, and Oracle Clusterware; CVU is not backward compatible. In other words, CVU cannot check or verify Oracle Database products prior to Oracle Database 10g.
The current release of cluvfy has the following limitations on Linux regarding shared storage accessibility checks. Currently, NAS storage (r/w, no attribute caching) and OCFS (version 1.0.14 or higher) are supported.
For sharedness checks on NAS, cluvfy commands require you to have write permission on the specified path. If the cluvfy user does not have write permission, then cluvfy reports the path as not shared.
To perform discovery and shared storage accessibility checks for SCSI disks on Red Hat Linux 3.0 (or higher) and SUSE Linux Enterprise Server, CVU requires the CVUQDISK package. If you attempt to use CVU and the CVUQDISK package is not installed on all of the nodes in your Oracle RAC environment, then CVU responds with an error.
Perform the following procedure to install the CVUQDISK package:
Log in as the root user.
Copy the rpm, cvuqdisk-1.0.1-1.rpm, to a local directory. You can find this rpm in the rpm subdirectory of the top-most directory in the Oracle Clusterware installation media. For example, you can find cvuqdisk-1.0.1-1.rpm in the directory /mountpoint/clusterware/rpm/, where mountpoint is the mounting point for the disk on which the directory is located.
Set the CVUQDISK_GRP environment variable to the group that should own the CVUQDISK package binaries. If CVUQDISK_GRP is not set, then by default the oinstall group is the owner's group.
Determine whether previous versions of the CVUQDISK package are installed by running the command rpm -q cvuqdisk. If you find previous versions of the CVUQDISK package, then remove them by running the command rpm -e cvuqdisk previous_version, where previous_version is the identifier of the previous CVUQDISK version.
Install the latest CVUQDISK package by running the command rpm -iv cvuqdisk-1.0.1-1.rpm.
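Put together, the procedure might look like the following Bourne-shell session run as the root user, assuming the installation media is mounted at /mnt (a hypothetical mount point):

export CVUQDISK_GRP=oinstall    # group that should own the package binaries
rpm -q cvuqdisk                 # check for a previously installed version
rpm -e cvuqdisk                 # remove the old package, only if the query found one
rpm -iv /mnt/clusterware/rpm/cvuqdisk-1.0.1-1.rpm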