Oracle® Clusterware Installation Guide 11g Release 1 (11.1) for Linux
Part Number B28263-01
This chapter describes the storage configuration tasks that you must complete before you start Oracle Universal Installer to install Oracle Clusterware.
This section describes the supported options for storing Oracle Clusterware files.
There are two ways of storing Oracle Clusterware files:
A supported shared file system: Supported file systems include the following:
A supported cluster file system
Note:
For information about how to download and configure Oracle Cluster File System 2 (OCFS2), refer to the following URL:
http://oss.oracle.com/projects/ocfs/documentation/
OCFS (version 1) is designed for the 2.4 kernel. You must use OCFS2 with this release.
See Also:
The Certify page on OracleMetaLink for supported cluster file systems
Network File System (NFS): A file-level protocol that enables access and sharing of files
See Also:
The Certify page on OracleMetaLink for supported Network Attached Storage (NAS) devices
Block or raw devices: Oracle Clusterware files can be placed on either block or raw devices based on shared disk partitions. Oracle recommends using block devices for ease of use.
For all installations, you must choose the storage option that you want to use for Oracle Clusterware files and Oracle Database files, or for Oracle Clusterware with Oracle Real Application Clusters databases (Oracle RAC). You do not have to use the same storage option for each file type.
Oracle Clusterware files include voting disks, used to monitor cluster node status, and Oracle Cluster Registry (OCR) which contains configuration information about the cluster. The voting disks and OCR are shared files on a cluster or network file system environment. If you do not use a cluster file system, then you must place these files on shared block devices or shared raw devices. Oracle Universal Installer (OUI) automatically initializes the OCR during the Oracle Clusterware installation.
For voting disk file placement, Oracle recommends that you configure each voting disk so that it does not share a hardware device, disk, or other single point of failure with the other voting disks. Any node that cannot access an absolute majority (more than half) of the configured voting disks is restarted.
The following table shows the storage options supported for storing Oracle Clusterware files. Oracle Clusterware files include the Oracle Cluster Registry (OCR), a mirrored OCR file (optional), the Oracle Clusterware voting disk, and additional voting disk files (optional).
Note:
For information about Oracle Cluster File System version 2 (OCFS2), refer to the following Web site:
http://oss.oracle.com/projects/ocfs2/
For OCFS2 certification status, refer to the Certify page on OracleMetaLink.
Table 3-1 Supported Storage Options for Oracle Clusterware
Storage Option | OCR and Voting Disks | Oracle Software
---|---|---
Automatic Storage Management | No | No
OCFS2 | Yes | Yes
Red Hat Global File System (GFS); for Red Hat Enterprise Linux and Oracle Enterprise Linux | Yes | Yes
Local storage | No | Yes
NFS file system (Note: requires a certified NAS device) | Yes | Yes
Shared disk partitions (block devices or raw devices) | Yes | No
Use the following guidelines when choosing the storage options that you want to use for Oracle Clusterware:
You can choose any combination of the supported storage options for each file type provided that you satisfy all requirements listed for the chosen storage options.
You cannot use ASM to store Oracle Clusterware files, because these files must be accessible before any ASM instance starts.
If you do not have a storage option that provides external file redundancy, then you must configure at least three voting disk areas to provide voting disk redundancy.
When upgrading your Oracle9i release 9.2 Oracle RAC environment to Oracle Database 11g release 1 (11.1), you are prompted to specify one or more voting disks during the Oracle Clusterware installation. You must specify a new location for the voting disk in Oracle Database 11g release 1 (11.1). You cannot reuse the old Oracle9i release 9.2 quorum disk for this purpose.
When you have determined your disk storage options, you must perform the following tasks in the order listed:
1: Check for available shared storage with CVU
Refer to Checking for Available Shared Storage with CVU.
2: Configure shared storage for Oracle Clusterware files
To use a file system (NFS, OCFS2, or GFS) for Oracle Clusterware files, refer to Configuring Storage for Oracle Clusterware Files on a Supported Shared File System.
To use block devices for Oracle Clusterware files, refer to Configuring Disk Devices for Oracle Clusterware Files.
To check for all shared file systems available across all nodes in the cluster, log in as the installation owner user (oracle or crs), and use the following syntax:
/mountpoint/runcluvfy.sh comp ssa -n node_list
If you want to check the shared accessibility of a specific shared storage type to specific nodes in your cluster, then use the following command syntax:
/mountpoint/runcluvfy.sh comp ssa -n node_list -s storageID_list
In the preceding syntax examples, the variable mountpoint is the mountpoint path of the installation media, the variable node_list is the list of nodes you want to check, separated by commas, and the variable storageID_list is the list of paths for the storage devices that you want to check.
For example, if you want to check the shared accessibility from node1 and node2 of storage devices /dev/sdb and /dev/sdc, and your mountpoint is /mnt/dvdrom/, then enter the following command:
$ /mnt/dvdrom/runcluvfy.sh comp ssa -n node1,node2 -s /dev/sdb,/dev/sdc
If you do not specify storage device IDs in the command, then the command searches for all available storage devices connected to the nodes on the list.
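For example, to have CVU discover and check all shared storage devices visible to both nodes, you can simply omit the -s option; the node names node1 and node2 and the mountpoint /mnt/dvdrom are the same example values used above:
$ /mnt/dvdrom/runcluvfy.sh comp ssa -n node1,node2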
Oracle Universal Installer (OUI) does not suggest a default location for the Oracle Cluster Registry (OCR) or the Oracle Clusterware voting disk. If you choose to create these files on a file system, then review the following sections to complete storage requirements for Oracle Clusterware files:
Requirements for Using a File System for Oracle Clusterware Files
Deciding to Use a Cluster File System for Oracle Clusterware Files
Creating Required Directories for Oracle Clusterware Files on Shared File Systems
Note:
The OCR is a file that contains the configuration information and status of the cluster. Oracle Universal Installer (OUI) automatically initializes the OCR during the Oracle Clusterware installation. Database Configuration Assistant uses the OCR for storing the configurations for the cluster databases that it creates.

The OCR is a shared file in a cluster file system environment. If you do not use a cluster file system, then you must place this file on a shared storage device.
To use a file system for Oracle Clusterware files, the file system must comply with the following requirements:
To use a cluster file system, it must be a supported cluster file system, as listed in the section "Deciding to Use a Cluster File System for Oracle Clusterware Files".
To use an NFS file system, it must be on a certified NAS device. Log in to OracleMetaLink (https://metalink.oracle.com), and click the Certify tab to find a list of certified NAS devices.
If you choose to place your Oracle Cluster Registry (OCR) files on a shared file system, then Oracle recommends that one of the following is true:
The disks used for the file system are on a highly available storage device (for example, a RAID device that implements file redundancy).
At least two file systems are mounted, and you use the OCR mirroring feature of Oracle Clusterware to provide redundancy for the OCR.
The user account with which you perform the installation (oracle or crs) must have write permissions to create the files in the path that you specify.
Note:
If you are upgrading from Oracle9i release 2, then you can continue to use the raw device or shared file that you used for the SRVM configuration repository instead of creating a new file for the OCR.

Use Table 3-2 to determine the partition size for shared file systems.
Table 3-2 Shared File System Volume Size Requirements
File Types Stored | Number of Volumes | Volume Size
---|---|---
Oracle Clusterware files (OCR and voting disks) with external redundancy | 1 | At least 280 MB for each OCR volume; at least 280 MB for each voting disk volume
Oracle Clusterware files (OCR and voting disks) with redundancy provided by Oracle software | 1 | At least 280 MB for each OCR volume; at least 280 MB for each voting disk volume
In Table 3-2, the total required volume size is cumulative. For example, to store all Oracle Clusterware files on the shared file system with normal redundancy, you should have at least 1.3 GB of storage available over a minimum of three volumes (two separate volume locations for the OCR and OCR mirror, and one voting disk on each volume).
Note:
When you create partitions with fdisk by specifying a device size, such as +256M, the actual device created may be smaller than the size requested, based on the cylinder geometry of the disk. This is due to current fdisk restrictions.

Oracle configuration software checks to ensure that devices contain a minimum of 256 MB of available disk space. Therefore, Oracle recommends using at least 280 MB for the device size. You can check partition sizes by using the command syntax fdisk -s partition. For example:
[root@node1]$ fdisk -s /dev/sdb1
281106
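If you have several candidate partitions, a small shell loop around fdisk -s can report all of their sizes in one pass, so that you can compare them against the 280 MB recommendation. This is a sketch only; the device names are examples, and the sizes are reported in 1024-byte blocks:
# for part in /dev/sdb1 /dev/sdc1; do echo "$part: $(/sbin/fdisk -s $part) blocks"; done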
For Linux x86 (32-bit) and x86 (64-bit) platforms, Oracle provides Oracle Cluster File System 2 (OCFS2). Use OCFS2 rather than OCFS version 1 (OCFS), because OCFS2 is designed for the Linux 2.6 kernel. You can have a shared Oracle home on OCFS2.
If you have an existing Oracle installation, then use the following command to determine if OCFS2 is installed:
# rpm -qa | grep ocfs
To ensure that OCFS2 is loaded, enter the following command:
/etc/init.d/ocfs status
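As an additional check (not part of the documented procedure), you can confirm that the OCFS2 kernel module is loaded:
# /sbin/lsmod | grep ocfs2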
If you want to install Oracle Clusterware files on an OCFS2 file system, and the packages are not installed, then download them from the following Web site. Follow the instructions listed with the kit to install the packages and configure the file system:
OCFS2:
http://oss.oracle.com/projects/ocfs2/
Note:
For OCFS2 certification status, refer to the Certify page on OracleMetaLink:
https://metalink.oracle.com
If you are using NFS, then you must set the values for the NFS buffer size parameters rsize and wsize to at least 16384. Oracle recommends that you use the value 32768.
For example, if you decide to use rsize and wsize buffer settings with the value 32768, then update the /etc/fstab file on each node with an entry similar to the following:
nfs_server:/vol/DATA/oradata /home/oracle/netapp nfs\
rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600
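After updating /etc/fstab, you can mount the file system and confirm that the options are in effect. A minimal check, assuming the example mount point above:
# mount /home/oracle/netapp
# mount | grep /home/oracle/netapp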
Note:
Refer to your storage vendor documentation for additional information about mount options.

Use the following instructions to create directories for Oracle Clusterware files. You can also configure shared file systems for the Oracle Database and recovery files.
Note:
For both NFS and OCFS2 storage, you must complete this procedure only if you want to place the Oracle Clusterware files on a separate file system from the Oracle base directory.

To create directories for the Oracle Clusterware files on separate file systems from the Oracle base directory, follow these steps:
If necessary, configure the shared file systems that you want to use and mount them on each node.
Note:
The mount point that you use for the file system must be identical on each node. Ensure that the file systems are configured to mount automatically when a node restarts.

Use the df -h command to determine the free disk space on each mounted file system.
From the display, identify the file systems that you want to use. Choose a file system with a minimum of 512 MB of free disk space (one OCR and one voting disk, with external redundancy).
If you are using the same file system for more than one type of file, then add the disk space requirements for each type to determine the total disk space requirement.
Note the names of the mount point directories for the file systems that you identified.
If the user performing installation (typically, crs or oracle) has permissions to create directories on the disks where you plan to install Oracle Clusterware and Oracle Database, then OUI creates the Oracle Clusterware file directory, and DBCA creates the Oracle Database file directory and the recovery file directory.

If the user performing installation does not have write access, then you must create these directories manually. Use commands similar to the following to create the recommended subdirectories in each of the mount point directories, and to set the appropriate owner, group, and permissions on the Oracle Clusterware home (or CRS home). For example, where the user is oracle and the CRS home is oracrs:
# mkdir /mount_point/oracrs
# chown oracle:oinstall /mount_point/oracrs
# chmod 640 /mount_point/oracrs
Note:
After installation, directories in the installation path for the Oracle Cluster Registry (OCR) files should be owned by root, and not writable by any account other than root.

When you have completed creating a subdirectory in the mount point directory, and set the appropriate owner, group, and permissions, you have completed OCFS2 or NFS configuration for Oracle Clusterware.
On Linux systems, O_DIRECT enables direct reads and writes to block devices, avoiding kernel overhead. With Oracle Clusterware release 10.2 and later, Oracle Clusterware files are configured by default to use direct input/output.
With the 2.6 kernel or later for Red Hat Enterprise Linux, Oracle Enterprise Linux, and SUSE Linux Enterprise Server, you must create a permissions file to maintain permissions on Oracle Cluster Registry (OCR) and voting disk partitions. If you do not create this permissions file, then permissions on disk devices revert to their default values, root:disk, and Oracle Clusterware will fail to start.
On Asianux 2, Red Hat Enterprise Linux 4, and Oracle Enterprise Linux 4, the permissions file name must begin with a number lower than 50 (for example, 49-oracle.permissions).
On Asianux 3, Red Hat Enterprise Linux 5, Oracle Enterprise Linux 5, and SUSE Linux Enterprise Server 10, the permissions file name must begin with a number higher than 50 (for example, 51-oracle.permissions).
To configure a permissions file for disk devices, complete the following tasks:
Create a permissions file in /etc/udev/permissions.d to change the permissions from the default root ownership to root and members of the oinstall group. Name the file 49-oracle.permissions or 51-oracle.permissions, depending on your Linux distribution. In either case, the contents of the xx-oracle.permissions file are as follows:
devicepartition:root:oinstall:0640
For example, to set permissions for an OCR partition on block device /dev/sda1, create the following entry:
sda1:root:oinstall:0640
Use the section "Example of Creating a Udev Permissions File for Oracle Clusterware" for a step-by-step example of how to perform this task.
Configure the block devices from the local node with the required partition space for Oracle Clusterware files. Use the section "Example of Configuring Block Device Storage for Oracle Clusterware" to help you configure block devices, if you are unfamiliar with creating partitions.
Load updated block device partition tables on all member nodes of the cluster, using /sbin/partprobe devicename. You must do this as root.
Change the ownership of OCR partitions to the installation owner on all member nodes of the cluster. In the session where you run the Installer, the OCR partitions must be owned by the installation owner (crs or oracle) that performs the Oracle Clusterware installation. The installation owner must own the OCR partitions so that the Installer can write to them. During installation, the Installer changes ownership of the OCR partitions back to root. With subsequent system restarts, ownership is set correctly by the oracle.permissions file. A sample chown command is shown after this list.
Enter the command /sbin/udevstart. This command should assign the permissions set in the oracle.permissions file. Check to ensure that your system is configured correctly.
The procedure to create a permissions file to grant oinstall group members write privileges to block devices is as follows:
Log in as root.
Change to the /etc/udev/permissions.d directory:
# cd /etc/udev/permissions.d
Start a text editor, such as vi, and enter the partition information where you want to place the OCR and voting disk files, using the syntax device[partitions]:root:oinstall:0640. Note that Oracle recommends that you place the OCR and the voting disk files on separate physical disks. For example, to grant oinstall members access to SCSI disks to place OCR files on sda1 and sdb2, and to grant the Oracle Clusterware owner (in this example crs) permissions to place voting disks on sdb3, sdc1 and sda2, add the following information to the file:
# OCR disks
sda1:root:oinstall:0640
sdb2:root:oinstall:0640
# Voting disks
sda2:crs:oinstall:0640
sdb3:crs:oinstall:0640
sdc1:crs:oinstall:0640
Save the file:
On Red Hat and Oracle Enterprise Linux 4 systems, save the file as 49-oracle.permissions.
On SUSE Linux Enterprise Server 10 systems, save the file as 51-oracle.permissions.
Using the following command, assign the permissions in the udev file to the devices:
# /sbin/udevstart
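As an optional verification step (not part of the documented procedure), you can list the devices named in the file and confirm that the owner, group, and mode match its entries, for example:
# ls -l /dev/sda1 /dev/sda2 /dev/sdb2 /dev/sdb3 /dev/sdc1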
The procedure to create partitions for Oracle Clusterware files on block devices is as follows:
Log in as root.
Enter the fdisk command to partition a specific storage disk (for example, /sbin/fdisk /dev/sdb).
Create a new partition, and make the partition 280 MB in size for both OCR and voting disk partitions.
Use the command syntax /sbin/partprobe diskpath on each node in the cluster to update the kernel partition table for the shared storage device.
The following is an example of how to use fdisk to create one partition on a shared storage block disk device for an OCR file:
[crs@localnode /]$ su
Password:
[root@localnode /]# /sbin/fdisk /dev/sdb
The number of cylinders for this disk is set to 1024.
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1024, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1024, default 1024): +280M
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@localnode /]# exit
[crs@localnode /]$ ssh remotenode
Last login Wed Feb 21 20:23:01 from localnode
[crs@remotenode ~]$ su
Password:
[root@remotenode /]# /sbin/partprobe /dev/sdb1
Note:
Oracle recommends that you create partitions for Oracle Clusterware files on physically separate disks.