Verifying the Configuration Files
After a successful installation, you can inspect the contents of the configuration files that the installation process created and modified. These files reflect the configuration based on the information you supplied.
To verify the configuration files
- Log in as root to any system in the cluster.
- Set up your environment PATH variable:
# export PATH=$PATH:/sbin:/usr/sbin:/opt/VRTS/bin
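To confirm that the directories are on your PATH, you can check that a VCS command resolves. The path shown is illustrative and may differ on your system:
# which hastatus
/opt/VRTS/bin/hastatus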
LLT Configuration Files
The following files are required by the VCS communication services for Low Latency Transport (LLT).
/etc/llthosts
The file llthosts(4) is a database, containing one entry per system, that links the LLT system ID (in the first column) with the LLT host name. This file is identical on each system in the cluster.
For example, the file /etc/llthosts contains entries that resemble:
0 system01
1 system02
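Because /etc/llthosts must be identical on every system in the cluster, one quick check is to compare checksums across the nodes. This is a minimal sketch, assuming remote shell access (remsh or ssh) from system01 to system02; the two checksums must match:
# cksum /etc/llthosts
# remsh system02 cksum /etc/llthosts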
/etc/llttab
The file llttab(4) contains information that is derived during installation and used by the utility lltconfig(1M). After installation, this file lists the network links that correspond to the specific system.
For example, the file /etc/llttab contains entries that resemble:
set-node system01
set-cluster 100
link lan1 /dev/lan:1 - ether - -
link lan2 /dev/lan:2 - ether - -
The first line identifies the system. The second line identifies the cluster (that is, the cluster ID you entered during installation). The next two lines, beginning with the link command, identify the two network cards used by the LLT protocol.
See the llttab(4) manual page for details about how the LLT configuration may be modified. The manual page describes the ordering of the directives in the llttab file.
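To confirm that LLT is running with the configuration in /etc/llttab, you can run the lltconfig utility with no arguments:
# lltconfig
LLT is running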
Checking LLT Operation
Use the lltstat command to verify that links are active for LLT. This command returns information about the links for LLT for the system on which it is typed. Refer to the lltstat(1M) manual page for more information. In the following example, lltstat -n is typed on each system in the cluster.
To check LLT operation
- Log into system01.
# lltstat -n
Output resembles:
LLT node information:
    Node           State    Links
  * 0 system01     OPEN     2
    1 system02     OPEN     2
- Log into system02.
# lltstat -n
Output resembles:
LLT node information:
    Node           State    Links
    0 system01     OPEN     2
  * 1 system02     OPEN     2
Note
Each system has two links, and each system is in the OPEN state. An asterisk (*) denotes the system on which the command is typed.
With LLT configured correctly, the output of lltstat -n shows all of the systems in the cluster and two links for each system. If the output shows otherwise, you can use the verbose option of lltstat. For example, type lltstat -nvv | more on a system to view additional information about LLT. In the following example, lltstat -nvv | more is typed on a system in a two-node cluster.
- Log into system01.
# lltstat -nvv | more
Output resembles:
    Node           State      Link   Status   Address
  * 0 system01     OPEN       lan1   UP       08:00:20:93:0E:34
                              lan2   UP       08:00:20:93:0E:34
    1 system02     OPEN       lan1   UP       08:00:20:8F:D1:F2
                              lan2   DOWN     08:00:20:8F:D1:F2
    2              CONNWAIT   lan1   DOWN
                              lan2   DOWN
.
.
.
   31              CONNWAIT   lan1   DOWN
                              lan2   DOWN
Note
The output lists 32 nodes. It reports on the two cluster nodes, system01 and system02, plus non-existent nodes. For each correctly configured system, the information shows a state of OPEN, a status for each link of UP, and an address for each link. However, in the example above, the output for node system02 shows one link with a status of DOWN, which indicates that the private network connection may have failed or that the information in /etc/llttab may be incorrect.
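If a link reports DOWN, one first step is to compare /etc/llttab on the affected node against a working node; the device entries must match the interfaces that are actually cabled to the private network. Using the sample configuration shown earlier:
# cat /etc/llttab
set-node system02
set-cluster 100
link lan1 /dev/lan:1 - ether - -
link lan2 /dev/lan:2 - ether - -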
To obtain information about the ports open for LLT, type lltstat -p on any system. In the following example, lltstat -p is typed on one system in the cluster.
- Log into system01.
# lltstat -p
Output resembles:
LLT port information:
  Port   Usage   Cookie
  0      gab     0x0
         opens:    0 1 3 4 5 6 7 8 9 10 11 12 13...
         connects: 0 1
Note
The two systems (0 and 1) are connected.
GAB Configuration Files
The following files are required by the VCS communication services for Group Membership and Atomic Broadcast (GAB).
/etc/gabtab
After installation, the file /etc/gabtab contains a gabconfig(1M) command that configures the GAB driver for use.
The file /etc/gabtab contains a line that resembles:
/sbin/gabconfig -c -n N
where the -c option configures the driver for use and -n N specifies that the cluster will not be formed until at least N systems are ready to form the cluster. N is the number of systems in the cluster.
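For example, in a two-node cluster, N is 2 and the file contains:
/sbin/gabconfig -c -n 2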
Checking GAB Operation
This section describes how to check GAB operation.
To check GAB operation
- Enter the following command on each node in the cluster.
# /sbin/gabconfig -a
If GAB is operational, the following output displays with GAB port membership information:
GAB Port Memberships
===============================================================
Port a gen 1bbf01 membership 01
Port b gen 1bbf06 membership 01
Port f gen 1bbf0f membership 01
Port h gen 1bbf03 membership 01
Port v gen 1bbf0b membership 01
Port w gen 1bbf0d membership 01
If GAB is not operational, the following output displays, with no GAB port membership information:
GAB Port Memberships
===============================================================
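In that case, you can start GAB by running the command stored in /etc/gabtab. This is a hedged sketch; verify first that the -n value in the file matches the number of systems in your cluster:
# sh /etc/gabtab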
For more information on GAB, refer to the VERITAS Cluster Server User's Guide.
Checking Cluster Operation
This section describes how to check cluster operation.
To check cluster operation
- Enter the following command on any system:
# hastatus -summary
The output for an SFCFS HA installation resembles:
-- SYSTEM STATE
-- System          State     Frozen
A  system01        RUNNING   0
A  system02        RUNNING   0

-- GROUP STATE
-- Group           System    Probed   AutoDisabled   State
B  ClusterService  system01  Y        N              ONLINE
B  ClusterService  system02  Y        N              OFFLINE
Note
If the State value is RUNNING, VCS is successfully installed and running on that node. The group state lists the ClusterService group, which is ONLINE on system01 and OFFLINE on system02. Refer to the hastatus(1M) manual page. Appendix A, "System States," in the VERITAS Cluster Server User's Guide describes system states and the transitions between them.
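To query the state of a single group rather than the full summary, you can use hagrp -state with the group name from the output above; the exact output format may vary by VCS release:
# hagrp -state ClusterService
#Group          Attribute  System    Value
ClusterService  State      system01  |ONLINE|
ClusterService  State      system02  |OFFLINE|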
- Enter the following command on any system:
# hasys -display
The following example shows the output for system01; the list continues with similar information for system02 (not shown) and any other systems in the cluster. The output should be similar on each system.
For more information on the hasys -display command, see the hasys(1M) manual page. Also refer to the chapter in the VERITAS Cluster Server User's Guide, "Administering VCS From the Command Line."
#System    Attribute            Value
system01   AgentsStopped        0
system01   AvailableCapacity    1
system01   Capacity             1
system01   ConfigBlockCount     54
system01   ConfigCheckSum       29776
system01   ConfigDiskState      CURRENT
system01   ConfigFile           /etc/VRTSvcs/conf/config
system01   ConfigInfoCnt        0
system01   ConfigModDate        Tues June 25 23:00:00 2004
system01   CurrentLimits
system01   DiskHbStatus
system01   DynamicLoad          0
system01   Frozen               0
system01   GUIIPAddr
system01   LLTNodeId            0
system01   Limits
system01   LoadTimeCounter      1890
system01   LoadTimeThreshold    600
system01   LoadWarningLevel     80
system01   MajorVersion         2
system01   MinorVersion         0
system01   NodeId               0
system01   OnGrpCnt             1
system01   ShutdownTimeout      60
system01   SourceFile           ./main.cf
system01   SysName              system01
system01   SysState             RUNNING
system01   SystemLocation
system01   SystemOwner
system01   TFrozen              0
system01   TRSE                 0
system01   UpDownState          Up
system01   UserInt              0
system01   UserStr
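To retrieve a single attribute instead of the full display, hasys -display accepts an attribute filter on most VCS releases; the following sketch queries only SysState:
# hasys -display system01 -attribute SysState
#System    Attribute   Value
system01   SysState    RUNNING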