Oracle® Real Application Clusters Administration and Deployment Guide 11g Release 1 (11.1) Part Number B28254-01
This chapter describes how to clone Oracle Automatic Storage Management (ASM) and Oracle Real Application Clusters (Oracle RAC) database homes on Linux and UNIX systems to nodes in a new cluster. To extend Oracle RAC to nodes in an existing cluster, see Chapter 8.
This chapter describes a noninteractive cloning technique that you implement through the use of scripts. The cloning techniques described in this chapter are best suited for performing multiple simultaneous cluster installations. Creating the scripts is a manual process and can be error prone. If you only have one cluster to install, then you should use the traditional automated and interactive installation methods, such as Oracle Universal Installer (OUI), or the Provisioning Pack feature of Oracle Enterprise Manager.
Note:
Cloning is not a replacement for the Oracle Enterprise Manager cloning that is part of the Provisioning Pack. During Enterprise Manager cloning, the provisioning process interactively asks you for details about the Oracle home (such as the location to which you want to deploy the clone, the name of the Oracle Database home, a list of the nodes in the cluster, and so on). The Provisioning Pack feature of Oracle Grid Control provides a framework that makes it easy for you to automate the provisioning of new nodes and clusters. For data centers with many Oracle RAC clusters, the investment in creating a cloning procedure to easily provision new clusters and new nodes to existing clusters is worth the effort.
This chapter contains the following topics:
Cloning is the process of copying an existing Oracle installation to a different location and updating the copied files to work in the new environment. Changes made by one-off patches applied to the source Oracle home are also present after the clone operation. The source and destination paths (on the host to be cloned) need not be the same.
Some situations in which cloning is useful are:
Cloning provides a way to prepare an ASM home and an Oracle RAC home once and deploy them to many hosts simultaneously. You can complete the installation silently, as a noninteractive process. You do not need to use a graphical user interface (GUI) console, and you can perform cloning from a Secure Shell (SSH) terminal session, if required.
Cloning enables you to create a new installation (copy of a production, test, or development installation) with all patches applied to it in a single step. Once you have performed the base installation and applied all patchsets and patches on the source system, the clone performs all of these individual steps as a single procedure. This is in contrast to going through the installation process to perform the separate steps to install, configure, and patch the installation on each node in the cluster.
Installing ASM and Oracle RAC by cloning is a very quick process. For example, cloning an Oracle RAC home to a new cluster of more than two nodes requires a few minutes to install the Oracle base software, plus a few minutes more for each node (approximately the amount of time it takes to run the root.sh script).
Cloning provides a guaranteed method of repeating the same Oracle installation on multiple clusters.
The cloned installation behaves the same as the source installation. For example, the cloned Oracle home can be removed using OUI or patched using OPatch. You can also use the cloned Oracle home as the source for another cloning operation. You can create a cloned copy of a test, development, or production installation by using the command-line cloning scripts. The default cloning procedure is adequate for most usage cases. However, you can also customize various aspects of cloning, for example, to specify custom port assignments, or to preserve custom settings.
The cloning process works by copying all of the files from the source Oracle home to the destination Oracle home. Thus, any files used by the source instance that are located outside the source Oracle home's directory structure are not copied to the destination location.
The size of the binaries at the source and the destination may differ because the binaries are relinked as part of the clone operation, and the operating system patch levels may also differ between the two locations. Additionally, the number of files in the cloned home increases because several files copied from the source, specifically those being instantiated, are backed up as part of the clone operation.
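For illustration only (a sketch that assumes the example home paths used elsewhere in this chapter), you can compare the size and file count of the source and cloned homes after the operation completes:

du -sh /opt/oracle/product/11g/db_1                  # size of the source home
du -sh /opt/oracle/product/11g/db                    # size of the cloned home
find /opt/oracle/product/11g/db_1 -type f | wc -l    # file count in the source home
find /opt/oracle/product/11g/db -type f | wc -l      # file count in the cloned home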
In the preparation phase, you create a copy of an Oracle Database home that you then use to perform the cloning procedure on one or more nodes, and you install Oracle Clusterware.
Step 1 Install the Oracle Database software
Use the detailed instructions in your platform-specific Oracle Real Application Clusters installation guide to install the Oracle Database software and patches:
Install Oracle Database Release 11g and choose the Software only installation option.
Patch the release to the required level (for example, 11.1.0.n).
Apply one-off patches, if necessary.
Step 2 Create a backup of the source home
Create a copy of the Oracle Database home. You will use this file to copy the Oracle Database home to each node in the cluster (as described in the "Deploying ASM and Oracle RAC to Other Nodes in the Cluster" section).
When creating the backup (tar) file, the best practice is to include the release number in the name of the file. For example:
# cd /opt/oracle/product/11g/db_1
# tar -zcvf /pathname/db11101.tgz .
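A minimal sketch (assuming passwordless SSH as root and the hypothetical node names node1 and node2) of staging the backup file on the nodes of the new cluster:

for NODE in node1 node2
do
  scp /pathname/db11101.tgz root@$NODE:/pathname/
done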
Step 3 Install and start Oracle Clusterware
Before you can use cloning to create a new Oracle RAC home, Oracle Clusterware must be installed and started on the new nodes. In other words, you extend the software onto the new nodes in the same order that you installed the Oracle Clusterware and Oracle database software components on the original nodes.
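For example (a sketch only, assuming a hypothetical Oracle Clusterware home of /opt/oracle/product/11g/crs), you can start Oracle Clusterware on each new node as the root user and confirm that the stack is healthy before cloning the database homes:

[root@node1 root]# /opt/oracle/product/11g/crs/bin/crsctl start crs
[root@node1 root]# /opt/oracle/product/11g/crs/bin/crsctl check crs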
See Also:
Oracle Clusterware Administration and Deployment Guide for information about cloning Oracle Clusterware homes to create new clusters, and starting Oracle Clusterware by issuing the crsctl start crs command.
After you complete the prerequisite tasks described in the "Preparing to Clone ASM and Oracle RAC" section, you can deploy new ASM and Oracle RAC homes. The deployment steps in this section follow the best practice of deploying two Oracle Database homes on each node: one home for the ASM instance and the other home for the Oracle RAC database instance.
The following sections provide step-by-step instructions for:
You can script the multiple-step processes described in these sections to run automatically, as a silent installation.
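For example, the following is a minimal outline of such a driver script for the ASM home. It is a sketch only, not part of the documented procedure: the node names node1 and node2, passwordless SSH for the root and oracle users, the archive location, and the assumption that the start.sh script described later in this chapter has been staged in the clone directory of the restored home are all hypothetical.

#!/bin/sh
# Outline of a silent, noninteractive ASM home deployment on every new node
NODES="node1 node2"
ASM_HOME=/opt/oracle/product/11g/asm

for NODE in $NODES
do
  # Step 2: restore the archived home and set ownership (as root)
  ssh root@$NODE "mkdir -p $ASM_HOME && cd $ASM_HOME && \
    tar -zxvf /pathname/asm11101.tgz && chown -R oracle:oinstall $ASM_HOME"

  # Step 3: run the cloning script (the clone.pl wrapper) as the oracle user
  ssh oracle@$NODE "$ASM_HOME/clone/start.sh"

  # Step 4: complete the clone as root
  ssh root@$NODE "export LD_LIBRARY_PATH=$ASM_HOME/lib:\$LD_LIBRARY_PATH; \
    $ASM_HOME/root.sh -silent"
done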
You must deploy the ASM home before you can deploy the new Oracle RAC database home. This section provides step-by-step instructions that describe how to:
See Also:
Oracle Database Storage Administrator's Guide for complete information about ASM
Step 1 Prepare the new cluster nodes
Perform the Oracle Database preinstallation steps, including such things as:
Specify the kernel parameters.
Use short, nondomain-qualified names for all names in the Hosts file.
Test whether or not the interconnect interfaces are reachable using the ping command (a brief example follows this checklist).
See your platform-specific Oracle RAC installation guide for a complete preinstallation checklist.
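For example, a quick sketch of the connectivity checks, assuming the hypothetical public names node1 and node2 and the private interconnect names node1-priv and node2-priv:

[oracle@node1 oracle]$ ping -c 1 node2
[oracle@node1 oracle]$ ping -c 1 node2-priv
[oracle@node1 oracle]$ grep node2 /etc/hosts    # confirm short, nondomain-qualified names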
Note:
Unlike traditional methods of installation, the cloning process does not validate your input during the preparation phase. (By comparison, during the traditional method of installation using the OUI, various checks take place during the interview phase.) Thus, if you make any mistakes during the hardware setup or in the preparation phase, then the cloned installation will fail.
Step 2 Deploy the ASM software
To deploy the ASM software, you need to:
Restore the ASM home to all nodes in either the original home directory or to a different directory path. For example:
[root@node1 root]# mkdir -p /opt/oracle/product/11g/asm
[root@node1 root]# cd /opt/oracle/product/11g/asm
[root@node1 asm]# tar -zxvf /pathname/asm11101.tgz
Note that the ASM home location does not have to be in the same directory path as the original source home directory that you used to create the tar file.
Change the ownership of all files to the oracle user and the oinstall group. For example:
[root@node1 asm]# chown -R oracle:oinstall /opt/oracle/product/11g/asm
Note:
You can perform this step at the same time you perform steps 3 and 4 to run the clone.pl and ASM_home/root.sh scripts on each cluster node.
Step 3 Run the clone.pl script on each node
To run the clone.pl script, which performs the main ASM cloning tasks, you must:
Supply environment variables and cloning parameters in a start.sh script, as described in Table 7-1 and Table 7-2. Because the clone.pl script is sensitive to the parameters being passed to it, you must be accurate in your use of brackets, single quotes, and double quotes.
Invoke the script as the oracle operating system user.
Example 7-1 shows an excerpt from the start.sh script that calls the clone.pl script.
Example 7-1 Excerpt From the start.sh Script to Clone ASM
ORACLE_BASE=/opt/oracle
ASM_home=/opt/oracle/product/11g/asm
cd $ASM_home/clone
THISNODE=`hostname -s`

E01=ORACLE_HOME=${ASM_home}
E02=ORACLE_HOME_NAME=OraDBASM
E03=ORACLE_BASE=/opt/oracle
C01="-O'\"CLUSTER_NODES={node1, node2}\"'"
C02="'-O\"LOCAL_NODE=$THISNODE\"'"

perl $ASM_home/clone/bin/clone.pl $E01 $E02 $E03 $C01 $C02
Table 7-1 describes the environment variables E01, E02, and E03 that are shown in bold typeface in Example 7-1.
Table 7-1 Environment Variables Passed to the clone.pl Script
Symbol | Variable | Description
---|---|---
E01 | ORACLE_HOME | The location of the ASM home. This directory location must exist and must be owned by the oracle operating system group.
E02 | ORACLE_HOME_NAME | The name of the Oracle home for the ASM home. This is stored in the Oracle Inventory.
E03 | ORACLE_BASE | The location of the Oracle Base directory.
Table 7-2 describes the cloning parameters C01 and C02, which are shown in bold typeface in Example 7-1.
Table 7-2 Cloning Parameters Passed to the clone.pl Script
Variable | Name | Parameter | Description
---|---|---|---
C01 | CLUSTER_NODES | -O"CLUSTER_NODES={node1, node2}" | Lists the nodes in the cluster.
C02 | LOCAL_NODE | -O"LOCAL_NODE=$THISNODE" | The name of the local node.
Step 4 Run the ASM_home/root.sh script on each node
Run the ASM_home/root.sh script as the root operating system user as soon as the clone.pl procedure completes on the node.
You should set the LD_LIBRARY_PATH environment variable before running the root.sh script. For example:
[root@node1 root]# export LD_LIBRARY_PATH=ASM_home/lib:$LD_LIBRARY_PATH
[root@node1 root]# /opt/oracle/product/11g/asm/root.sh -silent
Note that you can run the script on each node simultaneously:
[root@node2 root]# export LD_LIBRARY_PATH=ASM_home/lib:$LD_LIBRARY_PATH
[root@node2 root]# /opt/oracle/product/11g/asm/root.sh -silent
Ensure the script has completed on each node before proceeding to the next step.
Step 5 Run NETCA to create the listeners
At the end of the Oracle ASM home installation, the OUI creates a listener on each node and registers it as a CRS resource with Oracle Clusterware.
The following example shows how to run NETCA in silent mode to create the listeners. In the response file, provide your node names in place of the variables node1 and node2. NETCA uses the response file to create the listener.ora entries on each node, start the listeners, and add the listeners to the Oracle Cluster Registry (OCR).
[oracle@node1 oracle]$ cd $ORACLE_HOME/bin/
[oracle@node1 bin]$ ./netca /silent \
/responseFile $ORACLE_HOME/network/install/netca_typ.rsp \
/inscomp server \
/nodeinfo node1,node2
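After NETCA completes, you can confirm the result. The following is a sketch only: the Oracle Clusterware home path and the listener name (NETCA typically names Oracle RAC listeners LISTENER_nodename) are assumptions, not values taken from this procedure.

[oracle@node1 bin]$ /opt/oracle/product/11g/crs/bin/crs_stat -t | grep -i lsnr
[oracle@node1 bin]$ ./lsnrctl status LISTENER_NODE1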
Step 6 Run DBCA to create the ASM instances on each node
This step shows how to invoke the DBCA in silent mode and provide response file input to create the ASM instances and the first ASM disk group.
The following example creates ASM instances on each node, registers them in the Oracle Cluster Registry, and creates an ASM disk group called +DATA that is made up of four disks: sde1, sdf1, sdg1, and sdh1. It also sets the SYS password to mypassword:
[oracle@node1 oracle]$ export ORACLE_HOME=/opt/oracle/product/11g/asm
[oracle@node1 oracle]$ cd $ORACLE_HOME/bin/
[oracle@node1 bin]$ ./dbca -silent -configureASM -gdbName NO -sid NO \
-emConfiguration NONE \
-diskList "/dev/sde1,/dev/sdf1,/dev/sdg1,/dev/sdh1" \
-diskString "/dev/sd[e-h]1" \
-diskGroupName "DATA" \
-datafileJarLocation $ORACLE_HOME/assistants/dbca/templates \
-nodeinfo node1,node2 \
-obfuscatedPasswords false \
-oratabLocation /etc/oratab \
-asmSysPassword mypassword \
-redundancy EXTERNAL
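To confirm the result, you can query the new disk group from one of the nodes. This is a sketch only; the instance name +ASM1 is the typical default for the first node but is an assumption here.

[oracle@node1 bin]$ export ORACLE_SID=+ASM1
[oracle@node1 bin]$ ./sqlplus / as sysdba
SQL> SELECT name, state, total_mb FROM v$asm_diskgroup;
SQL> EXIT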
See Also:
Oracle Database 2 Day DBA for information about using DBCA to create and configure a database
Deployment of the Oracle RAC database home to a new cluster is a multiple-step process. The steps are almost identical to the steps you performed to create the ASM home, described in "Deploying ASM Instance Homes". The differences are that when deploying an Oracle RAC database home, you do not create listeners inside the database home and you provide different DBCA parameters.
This section provides step-by-step instructions that describe how to:
Step 1 Prepare the new cluster nodes
Perform the Oracle Database preinstallation steps, including such things as:
Specify the kernel parameters.
Use short, nondomain-qualified names for all names in the Hosts file.
Verify that you can ping the public and interconnect names.
Ensure Oracle Clusterware is active.
Ensure ASM is active and there is at least one ASM disk group configured.
See your platform-specific Oracle RAC installation guide for a complete preinstallation checklist.
Step 2 Deploy the Oracle RAC database software
To deploy the Oracle RAC database software, you need to:
Restore the Oracle Database home to all nodes. For example:
[root@node1 root]# mkdir -p /opt/oracle/product/11g/db
[root@node1 root]# cd /opt/oracle/product/11g/db
[root@node1 db]# tar -zxvf /pathname/db11101.tgz
When providing the home location and pathname:
If you are cloning Oracle RAC to a new cluster, then the home location can be in the same directory path or in a different directory path from the source home that you used to create the tar.
If you are cloning Oracle RAC to an existing cluster, then the home location and directory paths must be the same.
Change the ownership of all files to the oracle user and the oinstall group. For example:
[root@node1 db]# chown -R oracle:oinstall /opt/oracle/product/11g/db
Note:
You can perform this step at the same time you perform steps 3 and 4 to run the clone.pl and ORACLE_HOME/root.sh scripts on each cluster node.
Step 3 Run the clone.pl script on each node
To run the clone.pl script, which performs the main Oracle RAC cloning tasks, you must:
Supply the environment variables and cloning parameters in the start.sh script, as described in Table 7-3 and Table 7-4. Because the clone.pl script is sensitive to the parameters being passed to it, you must be accurate in your use of brackets, single quotes, and double quotes.
Invoke the script as the oracle operating system user.
Example 7-2 shows an excerpt from the start.sh script that calls the clone.pl script.
Example 7-2 Excerpt From the start.sh Script to Clone the Oracle RAC Database
ORACLE_BASE=/opt/oracle
ORACLE_HOME=/opt/oracle/product/11g/db
cd $ORACLE_HOME/clone
THISNODE=`hostname -s`

E01=ORACLE_HOME=${ORACLE_HOME}
E02=ORACLE_HOME_NAME=OraDBRAC
E03=ORACLE_BASE=/opt/oracle
C01="-O'\"CLUSTER_NODES={node1, node2}\"'"
C02="'-O\"LOCAL_NODE=$THISNODE\"'"

perl $ORACLE_HOME/clone/bin/clone.pl $E01 $E02 $E03 $C01 $C02
Table 7-3 describes the environment variables E01, E02, and E03 that are shown in bold typeface in Example 7-2.
Table 7-3 Environment Variables Passed to the clone.pl Script
Symbol | Variable | Description
---|---|---
E01 | ORACLE_HOME | The location of the Oracle RAC database home. This directory location must exist and must be owned by the oracle operating system group.
E02 | ORACLE_HOME_NAME | The name of the Oracle home for the Oracle RAC database. This is stored in the Oracle Inventory.
E03 | ORACLE_BASE | The location of the Oracle Base directory.
Table 7-4 describes the cloning parameters C01 and C02, which are shown in bold typeface in Example 7-2.
Table 7-4 Cloning Parameters Passed to the clone.pl Script
Variable | Name | Parameter | Description
---|---|---|---
C01 | CLUSTER_NODES | -O"CLUSTER_NODES={node1, node2}" | Lists the nodes in the cluster.
C02 | LOCAL_NODE | -O"LOCAL_NODE=$THISNODE" | The name of the local node.
Step 4 Run the $ORACLE_HOME/root.sh script on each node
Run the $ORACLE_HOME/root.sh script as the root operating system user as soon as the clone.pl procedure completes on the node.
You should set the LD_LIBRARY_PATH environment variable before running the root.sh script. For example:
[root@node1 root]# export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
[root@node1 root]# /opt/oracle/product/11g/db/root.sh -silent
Note that you can run the script on each node simultaneously:
[root@node2 root]# export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
[root@node2 root]# /opt/oracle/product/11g/db/root.sh -silent
Ensure the script has completed on each node before proceeding to the next step.
Step 5 Run DBCA to create the Oracle RAC instances on each node
This step shows how to invoke the DBCA in silent mode and provide response file input to create the Oracle RAC instances.
The following example creates Oracle RAC instances on each node, registers the instances in the OCR, creates the database files in the ASM disk group called DATA, and creates sample schemas. It also sets the sys, system, sysman, and dbsnmp passwords to mypassword:
[oracle@node1 oracle]$ export ORACLE_HOME=/opt/oracle/product/11g/db
[oracle@node1 oracle]$ cd $ORACLE_HOME/bin/
[oracle@node1 bin]$ ./dbca -silent -createDatabase -templateName General_Purpose.dbc \
-gdbName ERI -sid ERI \
-sysPassword mypassword -systemPassword mypassword \
-sysmanPassword mypassword -dbsnmpPassword mypassword \
-emConfiguration LOCAL \
-storageType ASM -diskGroupName DATA \
-datafileJarLocation $ORACLE_HOME/assistants/dbca/templates \
-nodeinfo node1,node2 -characterset WE8ISO8859P1 \
-obfuscatedPasswords false -sampleSchema true \
-oratabLocation /etc/oratab
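To confirm that the new database and its instances are registered in the OCR and running, you can use srvctl from the Oracle RAC database home. This is a sketch only, using the example database name ERI:

[oracle@node1 bin]$ ./srvctl status database -d ERI
[oracle@node1 bin]$ ./srvctl config database -d ERI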
See Also:
Oracle Database 2 Day DBA for information about using DBCA to create and configure a database
The cloning script runs multiple tools, each of which may generate its own log files. After the clone.pl script finishes running, you can view log files to obtain more information about the cloning process.
The following log files that are generated during cloning are the key log files of interest for diagnostic purposes:
Central_Inventory/logs/cloneActions timestamp.log
Contains a detailed log of the actions that occur during the OUI part of the cloning.
Central_Inventory/logs/oraInstall timestamp.err
Contains information about errors that occur when OUI is running.
Central_Inventory/logs/oraInstall timestamp.out
Contains other miscellaneous messages generated by OUI.
$ORACLE_HOME/clone/logs/clone timestamp.log
Contains a detailed log of the actions that occur prior to cloning as well as during the cloning operations.
$ORACLE_HOME/clone/logs/error timestamp.log
Contains information about errors that occur prior to cloning as well as during cloning operations.
Table 7-5 describes how to find the location of the Oracle inventory directory.
Table 7-5 Finding the Location of the Oracle Inventory Directory
Type of System | Location of the Oracle Inventory Directory
---|---
All UNIX computers except Linux and IBM AIX | /var/opt/oracle/oraInst.loc
IBM AIX and Linux | /etc/oraInst.loc
Windows | Obtain the location from the Windows Registry key HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\inst_loc
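For example, on Linux you can locate the central inventory from /etc/oraInst.loc and then open the most recent cloning logs. This is a sketch only; the inventory path shown in the output is just an example, and the actual log file names end with a timestamp.

[oracle@node1 oracle]$ cat /etc/oraInst.loc
inventory_loc=/opt/oracle/oraInventory
inst_group=oinstall
[oracle@node1 oracle]$ ls -t /opt/oracle/oraInventory/logs/cloneActions*.log | head -1
[oracle@node1 oracle]$ ls -t $ORACLE_HOME/clone/logs/clone*.log | head -1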