Setting up I/O fencing
Setting up I/O fencing involves the tasks described in the following sections.
Setting Up Shared Storage for I/O Fencing
Note that to use I/O fencing you must:
- Have installed the VRTSvxfen package when you installed VCS
- Have installed a version of VERITAS Volume Manager (VxVM) that supports SCSI-III persistent group reservations (SCSI-III PGR). Refer to the installation guide accompanying the Storage Foundation product you are using.
The shared storage you add for use with VCS software must support SCSI-III persistent group reservations, a functionality that enables the use of I/O fencing.
Adding Disks
After you physically add shared disks to cluster systems, you must initialize them as VxVM disks. Use the following examples. The VERITAS Volume Manager Administrator's Guide has more information about adding and configuring disks.
- The disks you add can be listed by the command:
# lsdev -C disk
- Use the vxdisk scandisks command to scan all disk drives and their attributes, to update the VxVM device list, and to reconfigure DMP with the new devices. For example:
# vxdisk scandisks
- To initialize the disks as VxVM disks, use either of two methods:
- Use the interactive vxdiskadm utility to initialize the disks as VxVM disks. Refer to a disk by its VxVM name, such as EMC0_17.
- You can also use the vxdisksetup command to initialize a disk as a VxVM disk. The syntax, specifying the CDS format, is:
vxdisksetup -i VxVM_device_name format=cdsdisk
For example, to initialize the disk named EMC0_17:
# vxdisksetup -i EMC0_17 format=cdsdisk
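Taken together, the complete sequence on one node might resemble the following minimal sketch, where EMC0_17 is the example VxVM device name used above (substitute the names reported on your system):
# lsdev -C disk
# vxdisk scandisks
# vxdisksetup -i EMC0_17 format=cdsdisk
# vxdisk list EMC0_17
The final vxdisk list command is optional; it simply confirms that the disk is now under VxVM control.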
Verifying that Systems See the Same Disk
To perform the test that determines whether a given disk (or LUN) supports SCSI-III persistent group reservations, two systems must simultaneously have access to the same disks. Because a given shared disk is likely to have a different name on each system, a way is needed to make sure of the identity of the disk.
The method to check the identity of a given disk, or LUN, is to check its serial number. You can use the vxfenadm command with the -i option to verify that the same serial number for a LUN is returned on all paths to the LUN.
For example, an EMC array is accessible by the path /dev/rdsk/c1t12d0 on node A and by the path /dev/rdsk/c1t13d0 on node B. From node A, enter:
# vxfenadm -i /dev/rdsk/c1t12d0
Vendor id : EMC
Product id : SYMMETRIX
Revision : 5567
Serial Number : 42031000a
The same serial number information should be returned when the equivalent command is entered on node B using the path /dev/rdsk/c1t13d0.
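For example, from node B you would enter the following command and expect the same Vendor id, Product id, and Serial Number values shown above:
# vxfenadm -i /dev/rdsk/c1t13d0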
On a disk from another manufacturer, Hitachi Data Systems, for example, the output is different. It may resemble:
# vxfenadm -i /dev/rdsk/c2t13d0
Vendor id : HITACHI
Product id : OPEN-3
Revision : 0117
Serial Number : 0401EB6F0002
The output is different on a disk from another manufacturer, HP, for example:
# vxfenadm -i /dev/rdsk/c5t0d0
Vendor id : HP
Product id : OPEN-E
Revision : 2101
Serial Number : R450 00013154 0088
Refer to the vxfenadm(1M) manual page.
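If you have several paths to compare, a small script can gather the serial numbers for you. The following is a minimal sketch, not part of the product; it assumes rsh root access between the nodes (as required later for vxfentsthdw), that vxfenadm is in root's PATH on each node, and that the node names (north and south, used in the examples later in this section) and device paths match your configuration:
#!/bin/sh
# Compare the serial number a shared disk reports on each node.
# Replace the node names and device paths with your own.
SN_NORTH=`rsh north vxfenadm -i /dev/rdsk/c1t12d0 | grep "Serial Number"`
SN_SOUTH=`rsh south vxfenadm -i /dev/rdsk/c1t13d0 | grep "Serial Number"`
echo "north: $SN_NORTH"
echo "south: $SN_SOUTH"
if [ "$SN_NORTH" = "$SN_SOUTH" ]
then
        echo "Serial numbers match; both paths refer to the same disk."
else
        echo "Serial numbers differ; these paths do NOT refer to the same disk."
fi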
Testing Data Storage Disks Using vxfentsthdw
Use the vxfentsthdw utility to test the shared storage arrays that are to be used for data. The utility verifies that the disks support SCSI-III persistent group reservations and I/O fencing.
Note
Disks used as coordinator disks must also be tested. See Setting Up Coordinator Disks.
General Guidelines for Using vxfentsthdw
- Connect the shared storage to be used for data to two cluster systems.
Caution
The tests overwrite and destroy data on the disks, unless you use the -r option.
- The two systems must have rsh permission set so that each node has root user access to the other. Temporarily modify the /.rhosts file to enable cluster communications for the vxfentsthdw utility, placing a "+" character in the first line of the file (a sketch follows this list). You can also limit the remote access to specific systems. Refer to the manual page for the /.rhosts file for more information. See Removing rsh Permissions and Restoring Public Network Connections when you complete testing.
- To ensure both systems are connected to the same disk during the testing, use the vxfenadm -i diskpath command to verify a disk's serial number. See Verifying that Systems See the Same Disk.
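For example, on a node that has no existing /.rhosts file, you could grant the temporary access described above with the following minimal sketch (repeat it on the other node, and adjust it to your site's security policy):
# echo "+" > /.rhosts
Remove the file, or restore your original settings, as soon as testing is complete; see Removing rsh Permissions and Restoring Public Network Connections.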
Running vxfentsthdw
This section describes how to set up and test data disks for your initial installation, using the vxfentsthdw utility with its default options. The utility and its options are described in detail in the section vxfentsthdw Options.
The vxfentsthdw utility indicates that a disk can be used for I/O fencing with a message resembling:
The disk /dev/rdsk/c1t13d0 is ready to be configured for I/O Fencing on node south
If the utility does not show a message stating a disk is ready, verification has failed.
For the following example, assume you must check a shared device known by two systems as /dev/rdsk/c1t12d0. (Each system could use a different name for the same device.)
- Make sure system-to-system communication is set up. See Enabling Communication Between Systems.
- On one system, start the utility:
# /opt/VRTSvcs/vxfen/bin/vxfentsthdw
The utility begins by providing an overview of its function and behavior. It warns you that its tests overwrite any data on the disks you check:
******** WARNING!!!!!!!! ********
THIS UTILITY WILL DESTROY THE DATA ON THE DISK!!
Do you still want to continue : [y/n] (default: n)
y
Enter the first node of the cluster:
north
Enter the second node of the cluster:
south
- Enter the name of the disk you are checking. For each node, the disk may be known by the same name, as in our example.
Enter the disk name to be checked for SCSI-III PGR on node north in
the format: /dev/
/dev/rdsk/c1t12d0
Enter the disk name to be checked for SCSI-III PGR on node south in
the format: /dev/
Make sure it's the same disk as seen by nodes north and south
/dev/rdsk/c1t12d0
Note that the disk names, whether or not they are identical, must refer to the same physical disk.
- The utility starts to perform the check and report its activities. For example:
Testing north /dev/rdsk/c1t12d0 south /dev/rdsk/c1t12d0
Registering keys on disk /dev/rdsk/c1t12d0 from node north
.........................................................Passed.
Verifying registrations for disk /dev/rdsk/c1t12d0 on node
north ...................................................Passed.
Reads from disk /dev/rdsk/c1t12d0 on node north .........Passed.
Writes to disk /dev/rdsk/c1t12d0 from node north ........Passed.
Reads from disk /dev/rdsk/c1t12d0 on node south .........Passed.
Writes to disk /dev/rdsk/c1t12d0 from node south ........Passed.
Reservations to disk /dev/rdsk/c1t12d0 from node north ..Passed.
Verifying reservation for disk /dev/rdsk/c1t12d0 on node
north ...................................................Passed.
.
.
- For a disk that is ready to be configured for I/O fencing on each system, the utility reports success. For example:
ALL tests on the disk /dev/rdsk/c1t12d0 have PASSED
The disk is now ready to be configured for I/O Fencing on node
north
ALL tests on the disk /dev/rdsk/c1t12d0 have PASSED
The disk is now ready to be configured for I/O Fencing on node
south
Cleaning up...
Removing temporary files...
Done.
- Run the vxfentsthdw utility for each disk you intend to verify.
Note
The vxfentsthdw utility has additional options suitable for testing many disks. The options for testing disk groups (-g) and disks listed in a file (-f) are described in detail in vxfentsthdw Options. You can also test disks without destroying data using the -r option.
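For example, to check a disk that already contains data you want to preserve, you could start the utility in non-destructive mode with the -r option (a sketch; see vxfentsthdw Options for details):
# /opt/VRTSvcs/vxfen/bin/vxfentsthdw -r
The utility then prompts for the node and disk names as in the example above, but omits the tests that would overwrite data on the disk.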