This chapter describes the storage configuration tasks that you must complete before you start Oracle Universal Installer. It includes information about the following tasks:
Reviewing Storage Options for Oracle Clusterware, Database, and Recovery Files
Checking for Available Shared Storage with CVU
Configuring Storage for Oracle Clusterware Files on a Supported Shared File System
Configuring Storage for Oracle Clusterware Files on Raw Devices
Configuring Disks for Automatic Storage Management
Configuring Database File Storage on ASM and Raw Devices
Configuring Database File Storage on Raw Devices
Reviewing Storage Options for Oracle Clusterware, Database, and Recovery Files

This section describes supported options for storing Oracle Clusterware files, Oracle Database files, and recovery files. Use the information in this overview to help you select your storage option.
There are two ways of storing Oracle Clusterware files:
A supported shared file system: Supported file systems include the following:
Oracle Cluster File System (OCFS): A cluster file system Oracle provides for the Linux community
Oracle Cluster File System 2 (OCFS2): A cluster file system Oracle provides for the Linux community, which allows shared Oracle homes
Network File System (NFS): A file-level protocol that enables access and sharing of files
Raw partitions: Raw partitions are disk partitions that are not mounted and written to using the Linux file system, but instead are accessed directly by the application.
There are three ways of storing Oracle Database and recovery files:
Automatic Storage Management: Automatic Storage Management (ASM) is an integrated, high-performance database file system and disk manager for Oracle Database files.
A supported shared file system: Supported file systems include the following:
Oracle Cluster File System 1 and 2 (OCFS and OCFS2): Note that if you intend to use OCFS or OCFS2 for your data files, then you should create partitions large enough for the database files when you create partitions for Oracle Clusterware. If you intend to store Oracle Clusterware files on OCFS, then you must ensure that OCFS volume sizes are at least 500 MB each.
Network File System (NFS) on OSCP-certified network attached storage (NAS): Note that if you intend to use NFS for your data files, then you should create partitions large enough for the database files when you create partitions for Oracle Clusterware.
Raw partitions (database files only): A raw partition is required for each database file.
See Also: For information about certified compatible storage options, refer to the Oracle Storage Compatibility Program (OSCP) Web site.
For all installations, you must choose the storage option that you want to use for Oracle Clusterware files and Oracle Database files. If you want to enable automated backups during the installation, then you must also choose the storage option that you want to use for recovery files (the flash recovery area). You do not have to use the same storage option for each file type.
For single-instance Oracle Database installations using Oracle Clusterware for failover, you must use OCFS, ASM, or shared raw disks if you do not want the failover processing to include dismounting and remounting disks.
The following table shows the storage options supported for storing Oracle Clusterware files, Oracle Database files, and Oracle Database recovery files. Oracle Database files include data files, control files, redo log files, the server parameter file, and the password file. Oracle Clusterware files include the Oracle Cluster Registry (OCR), a mirrored OCR file (optional), the Oracle Clusterware voting disk, and additional voting disk files (optional).
Note: For the most up-to-date information about supported storage options for RAC installations, refer to the Certify pages on the OracleMetaLink Web site: http://metalink.oracle.com. For information about Oracle Cluster File System version 2, refer to the following Web site: http://oss.oracle.com/projects/ocfs2/
Table 3-1 Supported Storage Options for Oracle Clusterware, Database, and Recovery Files
Storage Option | OCR and Voting Disks | Oracle Software | Database Files | Recovery Files
---|---|---|---|---
Automatic Storage Management | No | No | Yes | Yes
OCFS | Yes | No | Yes | Yes
OCFS2 | Yes | Yes | Yes | Yes
Local storage | No | Yes | No | No
NFS file system (requires a certified NAS device) | Yes | Yes | Yes | Yes
Shared raw partitions | Yes | No | Yes | No
Use the following guidelines when choosing the storage options that you want to use for each file type:
You can choose any combination of the supported storage options for each file type provided that you satisfy all requirements listed for the chosen storage options.
Oracle recommends that you choose Automatic Storage Management (ASM) as the storage option for database and recovery files.
For Standard Edition RAC installations, ASM is the only supported storage option for database or recovery files.
You cannot use ASM to store Oracle Clusterware files, because these files must be accessible before any ASM instance starts.
If you intend to use ASM with RAC, and you are configuring a new ASM instance, then your system must meet the following conditions:
All nodes on the cluster have the release 2 (10.2) version of Oracle Clusterware installed.
Any existing ASM instance on any node in the cluster is shut down.
If you intend to upgrade an existing RAC database, or a RAC database with ASM instances, then you must ensure that your system meets the following conditions:
Oracle Universal Installer (OUI) and Database Configuration Assistant (DBCA) are run on the node where the RAC database or RAC database with ASM instance is located.
The RAC database or RAC database with an ASM instance is running on the same nodes that you intend to make members of the new cluster installation. For example, if you have an existing RAC database running on a three-node cluster, then you must install the upgrade on all three nodes. You cannot upgrade only two nodes of the cluster and remove the third instance in the upgrade.
See Also: Oracle Database Upgrade Guide for information about how to prepare for upgrading an existing database
If you do not have a storage option that provides external file redundancy, then you must configure at least three voting disk areas to provide voting disk redundancy.
When you have determined your disk storage options, you must perform the following tasks in the following order:
1: Check for available shared storage with CVU
Refer to Checking for Available Shared Storage with CVU.
2: Configure shared storage for Oracle Clusterware files
To use a file system (NFS, OCFS, OCFS2) for Oracle Clusterware files, refer to Configuring Storage for Oracle Clusterware Files on a Supported Shared File System.
To use raw devices (partitions) for Oracle Clusterware files, refer to "Configuring Storage for Oracle Clusterware Files on Raw Devices".
3: Configure storage for Oracle Database files and recovery files
To use a file system for database or recovery file storage, refer to Configuring Storage for Oracle Clusterware Files on a Supported Shared File System, and ensure that in addition to the volumes you create for Oracle Clusterware files, you also create additional volumes with sizes sufficient to store database files.
To use Automatic Storage Management for database or recovery file storage, refer to "Configuring Database File Storage on ASM and Raw Devices".
To use raw devices (partitions) for database file storage, refer to "Configuring Database File Storage on Raw Devices".
Checking for Available Shared Storage with CVU

To check for all shared file systems available across all nodes on the cluster, use the following command:
/mountpoint/crs/Disk1/cluvfy/runcluvfy.sh comp ssa -n node_list
If you want to check the shared accessibility of a specific shared storage type to specific nodes in your cluster, then use the following command syntax:
/mountpoint/crs/Disk1/cluvfy/runcluvfy.sh comp ssa -n node_list -s storageID_list
In the preceding syntax examples, the variable mountpoint is the mountpoint path of the installation media, the variable node_list is the list of nodes you want to check, separated by commas, and the variable storageID_list is the list of storage device IDs for the storage devices managed by the file system type that you want to check.

For example, if you want to check the shared accessibility from node1 and node2 of storage devices /dev/sdb and /dev/sdc, and your mountpoint is /dev/dvdrom/, then enter the following command:
/dev/dvdrom/crs/Disk1/cluvfy/runcluvfy.sh comp ssa -n node1,node2 -s /dev/sdb,/dev/sdc
If you do not specify storage device IDs in the command, then the command searches for all available storage devices connected to the nodes on the list.
Configuring Storage for Oracle Clusterware Files on a Supported Shared File System

Oracle Universal Installer (OUI) does not suggest a default location for the Oracle Cluster Registry (OCR) or the Oracle Clusterware voting disk. If you choose to create these files on a file system, then review the following sections to complete storage requirements for Oracle Clusterware files:
Requirements for Using a File System for Oracle Clusterware Files
Creating Required Directories for Oracle Clusterware Files on Shared File Systems
Requirements for Using a File System for Oracle Clusterware Files

To use a file system for Oracle Clusterware files, the file system must comply with the following requirements:
To use a cluster file system, it must be a supported cluster file system, as listed in the section "Deciding to Use a Cluster File System for Data Files".
To use an NFS file system, it must be on a certified NAS device.
Note: If you are using a shared file system on a NAS device to store a shared Oracle home directory for Oracle Clusterware or RAC, then you must use the same NAS device for Oracle Clusterware file storage.
If you choose to place your Oracle Cluster Registry (OCR) files on a shared file system, then one of the following must be true:

The disks used for the file system are on a highly available storage device (for example, a RAID device that implements file redundancy).

At least two file systems are mounted, and you use the features of Oracle Clusterware 10g release 2 (10.2) to provide redundancy for the OCR.
If you intend to use a shared file system to store database files, then use at least two independent file systems, with the database files on one file system, and the recovery files on a different file system.
The oracle user must have write permissions to create the files in the path that you specify.
Note: If you are upgrading from Oracle9i release 2, then you can continue to use the raw device or shared file that you used for the SRVM configuration repository instead of creating a new file for the OCR.
Use Table 3-2 to determine the partition size for shared file systems.
Table 3-2 Shared File System Volume Size Requirements
File Types Stored | Number of Volumes | Volume Size
---|---|---
Oracle Clusterware files (OCR and voting disks) with external redundancy | 1 | At least 120 MB for each volume
Oracle Clusterware files (OCR and voting disks) with redundancy provided by Oracle software | 1 | At least 120 MB for each volume
Redundant Oracle Clusterware files with redundancy provided by Oracle software (mirrored OCR and two additional voting disks) | 1 | At least 140 MB (100 MB for the mirrored OCR, and 20 MB each for the additional voting disks)
Oracle Database files | 1 | At least 1.2 GB for each volume
Recovery files (must be on a different volume than database files) | 1 | At least 2 GB for each volume
In Table 3-2, the total required volume size is cumulative. For example, to store all files on the shared file system, you should have at least 3.4 GB of storage available over a minimum of two volumes.
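To see where the 3.4 GB figure comes from, add the per-volume minimums from Table 3-2: about 0.14 GB for redundant Oracle Clusterware files, plus 1.2 GB for database files, plus 2 GB for recovery files. Because recovery files must be on a different volume than database files, at least two volumes are required.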
If you have an existing Oracle installation, then use the following command to determine if OCFS or OCFS2 is installed:
# rpm -qa | grep ocfs
If you want to install the Oracle Database files on an OCFS or OCFS2 file system, and the packages are not installed, then download them from the following Web site. Follow the instructions listed with the kit to install the packages and configure the file system:
OCFS:
http://oss.oracle.com/projects/ocfs/
OCFS2:
http://oss.oracle.com/projects/ocfs2/
If you are using NFS, then you must set the values for the NFS buffer size parameters rsize and wsize to at least 16384. Oracle recommends that you use the value 32768.

For example, if you decide to use rsize and wsize buffer settings with the value 32768, then update the /etc/fstab file on each node with an entry similar to the following:
nfs_server:/vol/DATA/oradata /home/oracle/netapp nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600
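After remounting the file system, you can confirm that the buffer size options took effect. The following is an illustrative check using the example mount point above; your server, path, and option list will differ:

# mount | grep /home/oracle/netapp
nfs_server:/vol/DATA/oradata on /home/oracle/netapp type nfs (rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600)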
Note: Refer to your storage vendor documentation for additional information about mount options.
Creating Required Directories for Oracle Clusterware Files on Shared File Systems

Use the following instructions to create directories for Oracle Clusterware files. You can also configure shared file systems for the Oracle Database and recovery files.
Note: For both NFS and OCFS storage, you must complete this procedure only if you want to place the Oracle Clusterware files on a separate file system from the Oracle base directory.
To create directories for the Oracle Clusterware files on separate file systems from the Oracle base directory, follow these steps:
If necessary, configure the shared file systems that you want to use and mount them on each node.
Note: The mount point that you use for the file system must be identical on each node. Ensure that the file systems are configured to mount automatically when a node restarts.
Use the df -h command to determine the free disk space on each mounted file system. From the display, identify the file systems that you want to use:
File Type | File System Requirements
---|---
Oracle Clusterware files | Choose a file system with at least 120 MB of free disk space.
Database files | Choose either a single file system with at least 1.2 GB of free disk space, or two or more file systems with at least 1.2 GB of free disk space in total.
Recovery files | Choose a file system with at least 2 GB of free disk space.
If you are using the same file system for more than one type of file, then add the disk space requirements for each type to determine the total disk space requirement.
Note the names of the mount point directories for the file systems that you identified.
If the user performing installation (typically, oracle) has permissions to create directories on the disks where you plan to install Oracle Clusterware and Oracle Database, then OUI creates the Oracle Clusterware file directory, and DBCA creates the Oracle Database file directory and the recovery file directory.
If the user performing installation does not have write access, then you must create these directories manually using commands similar to the following to create the recommended subdirectories in each of the mount point directories and set the appropriate owner, group, and permissions on them:
Oracle Clusterware file directory:
# mkdir /mount_point/oracrs
# chown oracle:oinstall /mount_point/oracrs
# chmod 775 /mount_point/oracrs
Database file directory:
# mkdir /mount_point/oradata
# chown oracle:oinstall /mount_point/oradata
# chmod 775 /mount_point/oradata
Recovery file directory (flash recovery area):
# mkdir /mount_point/flash_recovery_area
# chown oracle:oinstall /mount_point/flash_recovery_area
# chmod 775 /mount_point/flash_recovery_area
Making the oracle user the owner of these directories permits them to be read by multiple Oracle homes, including those with different OSDBA groups.
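Before proceeding, you can verify the ownership and permissions of the directories you created. The output below is illustrative; substitute your actual mount point:

# ls -ld /mount_point/oracrs
drwxrwxr-x 2 oracle oinstall 4096 Jun 1 12:05 /mount_point/oracrs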
When you have completed creating subdirectories in each of the mount point directories, and set the appropriate owner, group, and permissions, you have completed OCFS or NFS configuration.
Configuring Storage for Oracle Clusterware Files on Raw Devices

The following subsections describe how to configure Oracle Clusterware files on raw partitions.
Clusterware File Restrictions for Logical Volume Manager on Linux
Creating the Required Raw Partitions on IDE, SCSI, or RAID Devices
Clusterware File Restrictions for Logical Volume Manager on Linux

The procedures contained in this section describe how to create raw partitions for Oracle Clusterware. Although Red Hat Enterprise Linux and SUSE Linux Enterprise Server provide a Logical Volume Manager (LVM), this LVM is not cluster-aware. For this reason, Oracle does not support the use of logical volumes with RAC for either Oracle Clusterware or database files.
Note: Oracle supports the use of logical volumes for raw devices only for single-instance databases. Their use is not supported for RAC databases.
Creating the Required Raw Partitions on IDE, SCSI, or RAID Devices

Table 3-3 lists the number and size of the raw partitions that you must configure for Oracle Clusterware files.
Table 3-3 Raw Partitions Required for Oracle Clusterware Files on Linux

Number | Size for Each Partition | Purpose
---|---|---
1 (or 2, if you want Oracle software to mirror the OCR) | 100 MB | Oracle Cluster Registry
1 (or 3, if you want Oracle software to provide redundant voting disks) | 20 MB | Oracle Clusterware voting disk

Note: If you put voting disk and OCR files on Oracle Cluster File System (OCFS or OCFS2), then you should ensure that the volumes are at least 500 MB in size, because OCFS requires partitions of at least 500 MB.
If you intend to use IDE, SCSI, or RAID devices for the raw devices, then follow these steps:
If necessary, install or configure the shared disk devices that you intend to use for the raw partitions and restart the system.
Note: Because the number of partitions that you can create on a single device is limited, you might need to create the required raw partitions on more than one device.
To identify the device name for the disks that you want to use, enter the following command:
# /sbin/fdisk -l
Depending on the type of disk, the device name can vary:

IDE disks: device names take the form /dev/hdxn, where x is a letter that identifies the IDE disk and n is the partition number (for example, /dev/hda1).

SCSI disks: device names take the form /dev/sdxn (for example, /dev/sdb1).

RAID arrays: depending on the controller, device names typically take the form /dev/rd/cxdypz or /dev/ida/cxdypz, where x is the controller number, y is the disk number, and z is the partition number (for example, /dev/rd/c0d1p1).
You can create the required raw partitions either on new devices that you added or on previously partitioned devices that have unpartitioned free space. To identify devices that have unpartitioned free space, examine the start and end cylinder numbers of the existing partitions and determine whether the device contains unused cylinders.
To create raw partitions on a device, enter a command similar to the following:
# /sbin/fdisk devicename
When creating partitions:

Use the p command to list the partition table of the device.

Use the n command to create a partition.

After you have created the required partitions on this device, use the w command to write the modified partition table to the device.

Refer to the fdisk man page for more information about creating partitions.
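If you prefer to script this step, the following is a minimal sketch of an fdisk session that creates one primary partition spanning the whole disk. It assumes /dev/sdb is the shared disk to partition, and the two blank input lines accept the default first and last cylinders; run it on one node only:

# /sbin/fdisk /dev/sdb <<EOF
n
p
1


w
EOF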
After you have created the required partitions, you must bind the partitions to raw devices on every node. However, you must first determine what raw devices are already bound to other devices. The procedure that you must follow to complete this task varies, depending on the Linux distribution that you are using:
Note: If the nodes are configured differently, then the disk device names might be different on some nodes. In the following procedure, be sure to specify the correct disk device names on each node.

On Red Hat Enterprise Linux systems, complete the following steps:

1. To determine what raw devices are already bound to other devices, enter the following command on every node:

# /usr/bin/raw -qa

Raw devices have device names in the form /dev/raw/rawn, where n is a number that identifies the raw device. For each device that you want to use, identify a raw device name that is unused on all nodes.

2. Open the /etc/sysconfig/rawdevices file in any text editor and add a line similar to the following for each partition that you created, specifying an unused raw device for each partition:

/dev/raw/raw1 /dev/sdb1

3. For the raw device that you created for the Oracle Cluster Registry (OCR), enter commands similar to the following to set the owner, group, and permissions on the device file:

# chown root:oinstall /dev/raw/rawn
# chmod 640 /dev/raw/rawn

Making the oinstall group the owner of the OCR permits the OCR to be read by multiple Oracle homes, including those with different OSDBA groups.

4. To bind the partitions to the raw devices, enter the following command:

# /sbin/service rawdevices restart

The system automatically binds the devices listed in the rawdevices file when it restarts.

5. Repeat step 2 through step 4 on each node in the cluster.
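To confirm that the bindings took effect, query the raw bindings again. The major and minor numbers below are illustrative (8, 17 corresponds to /dev/sdb1); yours will depend on your devices:

# /usr/bin/raw -qa
/dev/raw/raw1:  bound to major 8, minor 17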
On SUSE Linux Enterprise Server systems, complete the following steps:

1. To determine what raw devices are already bound to other devices, enter the following command on every node:

# /usr/sbin/raw -qa

Raw devices have device names in the form /dev/raw/rawn, where n is a number that identifies the raw device. For each device that you want to use, identify a raw device name that is unused on all nodes.

2. Open the /etc/raw file in any text editor and add a line similar to the following to associate each partition with an unused raw device:

raw1:sdb1

3. For the raw device that you created for the Oracle Cluster Registry, enter commands similar to the following to set the owner, group, and permissions on the device file:

# chown root:oinstall /dev/raw/rawn
# chmod 660 /dev/raw/rawn

4. To bind the partitions to the raw devices, enter the following command:

# /etc/init.d/raw start

5. To ensure that the raw devices are bound when the system restarts, enter the following command:

# /sbin/chkconfig raw on

6. Repeat step 2 through step 5 on the other nodes in the cluster.
Database files consist of the files that make up the database and the recovery area files. There are four options for storing database files:
Oracle Cluster File System (OCFS and OCFS2)
Network File System (NFS)
Automatic Storage Management (ASM)
Raw partitions (database files only; not for the recovery area)
During configuration of Oracle Clusterware, if you selected OCFS or NFS, and the volumes that you created are large enough to hold the database files and recovery files, then you have completed required pre-installation steps. You can proceed to Chapter 4, "Installing Oracle Clusterware".
If you want to place your database files on ASM, then proceed to Configuring Disks for Automatic Storage Management.
If you want to place your database files on raw devices, and manually provide storage management for your database and recovery files, then proceed to "Configuring Database File Storage on Raw Devices".
Note: Databases can consist of a mixture of ASM files and non-ASM files. Refer to Oracle Database Administrator's Guide for additional information about ASM.
Configuring Disks for Automatic Storage Management

This section describes how to configure disks for use with Automatic Storage Management. Before you configure the disks, you must determine the number of disks and the amount of free disk space that you require. The following sections describe how to identify the requirements and configure the disks:
Identifying Storage Requirements for Automatic Storage Management
Configuring Disks for Automatic Storage Management with ASMLIB
Identifying Storage Requirements for Automatic Storage Management

To identify the storage requirements for using Automatic Storage Management, you must determine the number of devices and the amount of free disk space that you require. To complete this task, follow these steps:
Determine whether you want to use Automatic Storage Management for Oracle Database files, recovery files, or both.
Note: You do not have to use the same storage mechanism for database files and recovery files. You can use the file system for one file type and Automatic Storage Management for the other. If you choose to enable automated backups and you do not have a shared file system available, then you must choose Automatic Storage Management for recovery file storage.
If you enable automated backups during the installation, you can choose Automatic Storage Management as the storage mechanism for recovery files by specifying an Automatic Storage Management disk group for the flash recovery area. Depending on how you choose to create a database during the installation, you have the following options:
If you select an installation method that runs Database Configuration Assistant in interactive mode (for example, by choosing the Advanced database configuration option) then you can decide whether you want to use the same Automatic Storage Management disk group for database files and recovery files, or use different disk groups for each file type.
The same choice is available to you if you use Database Configuration Assistant after the installation to create a database.
If you select an installation method that runs Database Configuration Assistant in noninteractive mode, then you must use the same Automatic Storage Management disk group for database files and recovery files.
Choose the Automatic Storage Management redundancy level that you want to use for the Automatic Storage Management disk group.
The redundancy level that you choose for the Automatic Storage Management disk group determines how Automatic Storage Management mirrors files in the disk group and determines the number of disks and amount of free disk space that you require, as follows:
External redundancy
An external redundancy disk group requires a minimum of one disk device. The effective disk space in an external redundancy disk group is the sum of the disk space in all of its devices.
Because Automatic Storage Management does not mirror data in an external redundancy disk group, Oracle recommends that you use only RAID or similar devices that provide their own data protection mechanisms as disk devices in this type of disk group.
Normal redundancy
In a normal redundancy disk group, Automatic Storage Management uses two-way mirroring by default, to increase performance and reliability. A normal redundancy disk group requires a minimum of two disk devices (or two failure groups). The effective disk space in a normal redundancy disk group is half the sum of the disk space in all of its devices.
For most installations, Oracle recommends that you select normal redundancy disk groups.
High redundancy
In a high redundancy disk group, Automatic Storage Management uses three-way mirroring to increase performance and provide the highest level of reliability. A high redundancy disk group requires a minimum of three disk devices (or three failure groups). The effective disk space in a high redundancy disk group is one-third the sum of the disk space in all of its devices.
While high redundancy disk groups do provide a high level of data protection, you should consider the greater cost of additional storage devices before deciding to select high redundancy disk groups.
Determine the total amount of disk space that you require for the database files and recovery files.
Use the following table to determine the minimum number of disks and the minimum disk space requirements for installing the starter database:
Redundancy Level | Minimum Number of Disks | Database Files | Recovery Files | Both File Types |
---|---|---|---|---|
External | 1 | 1.15 GB | 2.3 GB | 3.45 GB |
Normal | 2 | 2.3 GB | 4.6 GB | 6.9 GB |
High | 3 | 3.45 GB | 6.9 GB | 10.35 GB |
For RAC installations, you must also add additional disk space for the Automatic Storage Management metadata. You can use the following formula to calculate the additional disk space requirements (in MB):
15 + (2 * number_of_disks) + (126 * number_of_Automatic_Storage_Management_instances)
For example, for a four-node RAC installation, using three disks in a high redundancy disk group, you require an additional 525 MB of disk space:
15 + (2 * 3) + (126 * 4) = 525
If an Automatic Storage Management instance is already running on the system, then you can use an existing disk group to meet these storage requirements. If necessary, you can add disks to an existing disk group during the installation.
The following section describes how to identify existing disk groups and determine the free disk space that they contain.
Optionally, identify failure groups for the Automatic Storage Management disk group devices.
Note: You need to complete this step only if you intend to use an installation method that runs Database Configuration Assistant in interactive mode, for example, if you intend to choose the Custom installation type or the Advanced database configuration option. Other installation types do not enable you to specify failure groups.
If you intend to use a normal or high redundancy disk group, then you can further protect your database against hardware failure by associating a set of disk devices in a custom failure group. By default, each device comprises its own failure group. However, if two disk devices in a normal redundancy disk group are attached to the same SCSI controller, then the disk group becomes unavailable if the controller fails. The controller in this example is a single point of failure.
To protect against failures of this type, you could use two SCSI controllers, each with two disks, and define a failure group for the disks attached to each controller. This configuration would enable the disk group to tolerate the failure of one SCSI controller.
Note: If you define custom failure groups, then you must specify a minimum of two failure groups for normal redundancy disk groups and three failure groups for high redundancy disk groups.
If you are sure that a suitable disk group does not exist on the system, then install or identify appropriate disk devices to add to a new disk group. Use the following guidelines when identifying appropriate disk devices:
All of the devices in an Automatic Storage Management disk group should be the same size and have the same performance characteristics.
Do not specify more than one partition on a single physical disk as a disk group device. Automatic Storage Management expects each disk group device to be on a separate physical disk.
Although you can specify a logical volume as a device in an Automatic Storage Management disk group, Oracle does not recommend this. Logical volume managers can hide the physical disk architecture, preventing Automatic Storage Management from optimizing I/O across the physical devices. Logical volumes are not supported with RAC.
See Also: The "Configuring Disks for Automatic Storage Management" section for information about completing this task |
If you want to store either database or recovery files in an existing Automatic Storage Management disk group, then you have the following choices, depending on the installation method that you select:
If you select an installation method that runs Database Configuration Assistant in interactive mode (for example, by choosing the Advanced database configuration option), then you can decide whether you want to create a disk group, or to use an existing one.
The same choice is available to you if you use Database Configuration Assistant after the installation to create a database.
If you select an installation method that runs Database Configuration Assistant in noninteractive mode, then you must choose an existing disk group for the new database; you cannot create a disk group. However, you can add disk devices to an existing disk group if it has insufficient free space for your requirements.
Note: The Automatic Storage Management instance that manages the existing disk group can be running in a different Oracle home directory.
To determine if an existing Automatic Storage Management disk group exists, or to determine if there is sufficient disk space in a disk group, you can use Oracle Enterprise Manager Grid Control or Database Control. Alternatively, you can use the following procedure:
View the contents of the oratab file to determine whether an Automatic Storage Management instance is configured on the system:

$ more /etc/oratab

If an Automatic Storage Management instance is configured on the system, then the oratab file should contain a line similar to the following:

+ASM2:oracle_home_path

In this example, +ASM2 is the system identifier (SID) of the Automatic Storage Management instance, with the node number appended, and oracle_home_path is the Oracle home directory where it is installed. By convention, the SID for an Automatic Storage Management instance begins with a plus sign.
Set the ORACLE_SID and ORACLE_HOME environment variables to specify the appropriate values for the Automatic Storage Management instance that you want to use.
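For example, using the Bourne or Bash shell, where +ASM2 is the SID from your oratab entry and the Oracle home path shown is hypothetical:

$ ORACLE_SID=+ASM2 ; export ORACLE_SID
$ ORACLE_HOME=/u01/app/oracle/product/10.2.0/asm ; export ORACLE_HOME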
Connect to the Automatic Storage Management instance as the SYS user with SYSDBA privilege and start the instance if necessary:
$ $ORACLE_HOME/bin/sqlplus "SYS/SYS_password as SYSDBA"
SQL> STARTUP
Enter the following command to view the existing disk groups, their redundancy level, and the amount of free disk space in each one:
SQL> SELECT NAME,TYPE,TOTAL_MB,FREE_MB FROM V$ASM_DISKGROUP;
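The output is similar to the following; the disk group name and sizes shown here are illustrative only:

NAME       TYPE     TOTAL_MB    FREE_MB
---------- ------ ---------- ----------
DATA       NORMAL      12288       8192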
From the output, identify a disk group with the appropriate redundancy level and note the free space that it contains.
If necessary, install or identify the additional disk devices required to meet the storage requirements listed in the previous section.
Note: If you are adding devices to an existing disk group, then Oracle recommends that you use devices that have the same size and performance characteristics as the existing devices in that disk group.
Configuring Disks for Automatic Storage Management with ASMLIB

The Automatic Storage Management library driver (ASMLIB) simplifies the configuration and management of disk devices by eliminating the need to rebind the raw devices used with ASM each time the system is restarted.
A disk that is configured for use with Automatic Storage Management is known as a candidate disk.
If you intend to use Automatic Storage Management for database storage on Linux, then Oracle recommends that you install the ASMLIB driver and associated utilities, and use them to configure candidate disks.
Note: If you do not use the Automatic Storage Management library driver, then you must bind each disk device that you want to use to a raw device, as described in Configuring Database File Storage on ASM and Raw Devices.
To use the Automatic Storage Management library driver (ASMLIB) to configure Automatic Storage Management devices, complete the following tasks.
Installing and Configuring the Automatic Storage Management Library Driver Software
Configuring the Disk Devices to Use the Automatic Storage Management Library Driver
Administering the Automatic Storage Management Library Driver and Disks
Installing and Configuring the Automatic Storage Management Library Driver Software
To install and configure the ASMLIB driver software, follow these steps:
Enter the following command to determine the kernel version and architecture of the system:
# uname -rm
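For example, the command might return output similar to the following, indicating a 2.6.9-11 enterprise kernel on an x86_64 system; your kernel version and architecture will differ:

2.6.9-11.ELsmp x86_64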
If necessary, download the required ASMLIB packages from the OTN Web site:
http://www.oracle.com/technology/tech/linux/asmlib/index.html
Note: ASMLIB driver packages for some kernel versions are available in the Oracle Clusterware directory on the 10g Release 2 (10.2) DVD-ROM, in the crs/RPMS/asmlib directory. However, Oracle recommends that you check the OTN Web site for the most up-to-date packages.
You must install the following packages, where version is the version of the ASMLIB driver, arch is the system architecture, and kernel is the version of the kernel that you are using:

oracleasm-support-version.arch.rpm
oracleasm-kernel-version.arch.rpm
oracleasmlib-version.arch.rpm
Switch user to the root user:
$ su -
Enter a command similar to the following to install the packages:
# rpm -Uvh oracleasm-support-version.arch.rpm \
oracleasm-kernel-version.arch.rpm \
oracleasmlib-version.arch.rpm
For example, if you are using the Red Hat Enterprise Linux AS 4 enterprise kernel on an AMD64 system, then enter a command similar to the following:
# rpm -Uvh oracleasm-support-2.0.0-1.i386.rpm \
oracleasmlib-2.0.0-1.x86_64.rpm \
oracleasm-2.6.9-11.EL-2.0.0-1.x86_64.rpm
Enter the following command to run the oracleasm initialization script with the configure option:
# /etc/init.d/oracleasm configure
Enter the following information in response to the prompts that the script displays:
Prompt | Suggested Response |
---|---|
Default user to own the driver interface: | Specify the Oracle software owner user (typically, oracle ). |
Default group to own the driver interface: | Specify the OSDBA group (typically dba ). |
Start Oracle Automatic Storage Management Library driver on boot (y/n): | Enter y to start the Oracle Automatic Storage Management library driver when the system starts. |
The script completes the following tasks:

Creates the /etc/sysconfig/oracleasm configuration file

Creates the /dev/oracleasm mount point

Loads the oracleasm kernel module

Mounts the ASMLIB driver file system
Note: The ASMLIB driver file system is not a regular file system. It is used only by the Automatic Storage Management library to communicate with the Automatic Storage Management driver.
Repeat this procedure on all nodes in the cluster where you want to install Oracle Real Application Clusters.
Configuring the Disk Devices to Use the Automatic Storage Management Library Driver
To configure the disk devices that you want to use in an Automatic Storage Management disk group, follow these steps:
If you intend to use IDE, SCSI, or RAID devices in the Automatic Storage Management disk group, then follow these steps:
If necessary, install or configure the shared disk devices that you intend to use for the disk group and restart the system.
To identify the device name for the disks that you want to use, enter the following command:
# /sbin/fdisk -l
Depending on the type of disk, the device name can vary (for example, /dev/hdxn for IDE disks, /dev/sdxn for SCSI disks, or /dev/rd/cxdypz for disks on RAID controllers).
To include devices in a disk group, you can specify either whole-drive device names or partition device names.
Note: Oracle recommends that you create a single whole-disk partition on each disk that you want to use.
Use either fdisk or parted to create a single whole-disk partition on the disk devices that you want to use.
Enter a command similar to the following to mark a disk as an Automatic Storage Management disk:
# /etc/init.d/oracleasm createdisk DISK1 /dev/sdb1
In this example, DISK1 is a name that you want to assign to the disk.
Note: The disk names that you specify can contain uppercase letters, numbers, and the underscore character. They must start with an uppercase letter. If you are using a multi-pathing disk driver with Automatic Storage Management, then make sure that you specify the correct logical device name for the disk.
To make the disk available on the other nodes in the cluster, enter the following command as root on each node:
# /etc/init.d/oracleasm scandisks
This command identifies shared disks attached to the node that are marked as Automatic Storage Management disks.
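To verify that a node sees the marked disks, you can list them; the disk name shown below is the one assigned with createdisk:

# /etc/init.d/oracleasm listdisks
DISK1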
Administering the Automatic Storage Management Library Driver and Disks
To administer the Automatic Storage Management library driver and disks, use the oracleasm initialization script with different options, as follows:
Table 3-4 ORACLEASM Script Options
configure: Use the configure option to reconfigure the Automatic Storage Management library driver, if necessary:

# /etc/init.d/oracleasm configure

enable, disable: Use the disable and enable options to change the actions of the Automatic Storage Management library driver when the system starts. The enable option causes the Automatic Storage Management library driver to load when the system starts:

# /etc/init.d/oracleasm enable

start, stop, restart: Use the start, stop, and restart options to load or unload the Automatic Storage Management library driver without restarting the system:

# /etc/init.d/oracleasm restart

createdisk: Use the createdisk option to mark a disk device for use with the Automatic Storage Management library driver and give it a name:

# /etc/init.d/oracleasm createdisk DISKNAME devicename

deletedisk: Use the deletedisk option to unmark a named disk device:

# /etc/init.d/oracleasm deletedisk DISKNAME

Caution: Do not use this option to unmark disks that are being used by an Automatic Storage Management disk group. You must delete the disk from the Automatic Storage Management disk group before you unmark it.

querydisk: Use the querydisk option to determine whether a disk device or disk name is being used by the Automatic Storage Management library driver:

# /etc/init.d/oracleasm querydisk {DISKNAME | devicename}

listdisks: Use the listdisks option to list the disk names of marked Automatic Storage Management library driver disks:

# /etc/init.d/oracleasm listdisks

scandisks: Use the scandisks option to enable cluster nodes to identify which shared disks have been marked as Automatic Storage Management library driver disks on another node:

# /etc/init.d/oracleasm scandisks
When you have completed creating and configuring Automatic Storage Management with ASMLIB, proceed to Chapter 4, "Installing Oracle Clusterware".
Configuring Database File Storage on ASM and Raw Devices

Note: For improved performance and easier administration, Oracle recommends that you use the Automatic Storage Management library driver (ASMLIB) instead of raw devices to configure Automatic Storage Management disks.
To configure disks for Automatic Storage Management (ASM) using raw devices, complete the following tasks:
To use ASM with raw partitions, you must create sufficient partitions for your data files, and then bind the partitions to raw devices. To do this, follow the instructions provided for Oracle Clusterware in the section "Configuring Storage for Oracle Clusterware Files on Raw Devices".
Make a list of the raw device names you create for the data files, and have the list available during database installation.
When you have completed creating and configuring ASM with raw partitions, proceed to Chapter 4, "Installing Oracle Clusterware".
Configuring Database File Storage on Raw Devices

The following sections describe how to configure raw partitions for database files.
Database File Restrictions for Logical Volume Manager on Linux
Creating Required Raw Partitions for Database Files on IDE, SCSI, or RAID Devices
Creating the Database Configuration Assistant Raw Device Mapping File
Database File Restrictions for Logical Volume Manager on Linux

The procedures contained in this section describe how to create raw partitions for Oracle Clusterware and database file storage. Although Red Hat Enterprise Linux 3 and SUSE Linux Enterprise Server provide a Logical Volume Manager (LVM), this LVM is not cluster-aware. For this reason, on x86, Oracle does not support the use of logical volumes with RAC for either Oracle Clusterware or database files.
Note: The use of logical volumes for raw devices is supported only for single-instance databases. Their use is not supported for RAC databases.
Creating Required Raw Partitions for Database Files on IDE, SCSI, or RAID Devices

Table 3-5 lists the number and size of the raw partitions that you must configure for database files.
Table 3-5 Raw Partitions Required for Database Files on Linux
Note: If you prefer to use manual undo management, instead of automatic undo management, then, instead of the UNDOTBSn raw devices, you must create a single rollback segment tablespace (RBS) raw device that is at least 500 MB in size.
If you intend to use IDE, SCSI, or RAID devices for the database raw devices, then follow these steps:
If necessary, install or configure the shared disk devices that you intend to use for the raw partitions and restart the system.
Note: Because the number of partitions that you can create on a single device is limited, you might need to create the required raw partitions on more than one device.
To identify the device name for the disks that you want to use, enter the following command:
# /sbin/fdisk -l
Depending on the type of disk, the device name can vary (for example, /dev/hdxn for IDE disks, /dev/sdxn for SCSI disks, or /dev/rd/cxdypz for disks on RAID controllers).
You can create the required raw partitions either on new devices that you added or on previously partitioned devices that have unpartitioned free space. To identify devices that have unpartitioned free space, examine the start and end cylinder numbers of the existing partitions and determine if the device contains unused cylinders.
To create raw partitions on a device, enter a command similar to the following:
# /sbin/fdisk devicename
Use the following guidelines when creating partitions:

Use the p command to list the partition table of the device.

Use the n command to create a partition.

After you have created the required partitions on this device, use the w command to write the modified partition table to the device.

Refer to the fdisk man page for more information about creating partitions.
After you have created the required partitions for database files, you must bind the partitions to raw devices on every node. However, you must first determine what raw devices are already bound to other devices. The procedure that you must follow to complete this task varies, depending on the Linux distribution that you are using:
Note: If the nodes are configured differently, then the disk device names might be different on some nodes. In the following procedure, be sure to specify the correct disk device names on each node.

On Red Hat Enterprise Linux systems, complete the following steps:

1. To determine what raw devices are already bound to other devices, enter the following command on every node:

# /usr/bin/raw -qa

Raw devices have device names in the form /dev/raw/rawn, where n is a number that identifies the raw device. For each device that you want to use, identify a raw device name that is unused on all nodes.

2. Open the /etc/sysconfig/rawdevices file in any text editor and add a line similar to the following for each partition that you created, specifying an unused raw device for each partition:

/dev/raw/raw1 /dev/sdb1

3. For each raw device that you specified in the rawdevices file, enter commands similar to the following to set the owner, group, and permissions on the device file:

# chown oracle:dba /dev/raw/rawn
# chmod 660 /dev/raw/rawn

4. To bind the partitions to the raw devices, enter the following command:

# /sbin/service rawdevices restart

The system automatically binds the devices listed in the rawdevices file when it restarts.

5. Repeat step 2 through step 4 on the other nodes in the cluster.
On SUSE Linux Enterprise Server systems, complete the following steps:

1. To determine what raw devices are already bound to other devices, enter the following command on every node:

# /usr/sbin/raw -qa

Raw devices have device names in the form /dev/raw/rawn, where n is a number that identifies the raw device. For each device that you want to use, identify a raw device name that is unused on all nodes.

2. Open the /etc/raw file in any text editor and add a line similar to the following to associate each partition with an unused raw device:

raw1:sdb1

3. For each raw device that you specified in the /etc/raw file, enter commands similar to the following to set the owner, group, and permissions on the device file:

# chown oracle:dba /dev/raw/rawn
# chmod 660 /dev/raw/rawn

4. To bind the partitions to the raw devices, enter the following command:

# /etc/init.d/raw start

5. To ensure that the raw devices are bound when the system restarts, enter the following command:

# /sbin/chkconfig raw on

6. Repeat step 2 through step 5 on the other nodes in the cluster.
Creating the Database Configuration Assistant Raw Device Mapping File

Note: You must complete this procedure only if you are using raw devices for database files. You do not specify the raw devices for the Oracle Clusterware files in the Database Configuration Assistant raw device mapping file.
To allow Database Configuration Assistant to identify the appropriate raw device for each database file, you must create a raw device mapping file, as follows:
Set the ORACLE_BASE environment variable to specify the Oracle base directory that you identified or created previously:
Bourne, Bash, or Korn shell:
$ ORACLE_BASE=/u01/app/oracle ; export ORACLE_BASE
C shell:
% setenv ORACLE_BASE /u01/app/oracle
Create a database file subdirectory under the Oracle base directory and set the appropriate owner, group, and permissions on it:
# mkdir -p $ORACLE_BASE/oradata/dbname
# chown -R oracle:oinstall $ORACLE_BASE/oradata
# chmod -R 775 $ORACLE_BASE/oradata
In this example, dbname is the name of the database that you chose previously.

Change directory to the $ORACLE_BASE/oradata/dbname directory.

Using any text editor, create a dbname_raw.conf file similar to the following example:
Note: The following example shows a sample mapping file for a two-instance RAC cluster.
system=/dev/raw/raw1
sysaux=/dev/raw/raw2
example=/dev/raw/raw3
users=/dev/raw/raw4
temp=/dev/raw/raw5
undotbs1=/dev/raw/raw6
undotbs2=/dev/raw/raw7
redo1_1=/dev/raw/raw8
redo1_2=/dev/raw/raw9
redo2_1=/dev/raw/raw10
redo2_2=/dev/raw/raw11
control1=/dev/raw/raw12
control2=/dev/raw/raw13
spfile=/dev/raw/raw14
pwdfile=/dev/raw/raw15
Use the following guidelines when creating or editing this file:
Each line in the file must have the following format:
database_object_identifier=raw_device_path
For a single-instance database, the file must specify one automatic undo tablespace data file (undotbs1), and at least two redo log files (redo1_1, redo1_2).

For a RAC database, the file must specify one automatic undo tablespace data file (undotbsn) and two redo log files (redon_1, redon_2) for each instance.

Specify at least two control files (control1, control2).

To use manual instead of automatic undo management, specify a single rollback segment tablespace data file (rbs) instead of the automatic undo management tablespace data files.
Save the file, and note the file name that you specified.
If you are using raw devices for database storage, then set the DBCA_RAW_CONFIG environment variable to specify the full path to the raw device mapping file:
Bourne, Bash, or Korn shell:
$ DBCA_RAW_CONFIG=$ORACLE_BASE/oradata/dbname/dbname_raw.conf
$ export DBCA_RAW_CONFIG
C shell:
% setenv DBCA_RAW_CONFIG $ORACLE_BASE/oradata/dbname/dbname_raw.conf