Oracle® Data Guard Concepts and Administration 11g Release 1 (11.1) Part Number B28294-01
This chapter steps you through the process of creating a logical standby database. It includes the following main topics:
Prerequisite Conditions for Creating a Logical Standby Database
Step-by-Step Instructions for Creating a Logical Standby Database
See Also:
Oracle Database Administrator's Guide for information about creating and using server parameter files
Oracle Data Guard Broker and the Oracle Enterprise Manager online help system for information about using the graphical user interface to automatically create a logical standby database
Before you create a logical standby database, you must first ensure the primary database is properly configured. Table 4-1 provides a checklist of the tasks that you perform on the primary database to prepare for logical standby database creation.
Table 4-1 Preparing the Primary Database for Logical Standby Database Creation
Reference | Task |
---|---|
Section 4.1.1 | Determine Support for Data Types and Storage Attributes for Tables |
Section 4.1.2 | Ensure Table Rows in the Primary Database Can Be Uniquely Identified |
Note that a logical standby database uses standby redo logs (SRLs) for redo received from the primary database, and also writes to online redo logs (ORLs) as it applies changes to the standby database. Thus, logical standby databases often require additional ARCn processes to simultaneously archive SRLs and ORLs. Additionally, because archiving of ORLs takes precedence over archiving of SRLs, a greater number of SRLs may be needed on a logical standby during periods of very high workload.
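For example, the following sketch increases the number of archiver processes and adds standby redo log groups on the logical standby database; the group numbers, file names, and 500M size are illustrative assumptions, and the standby redo logs should be at least as large as the largest online redo log of the primary database:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=8 SCOPE=BOTH;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 10 ('/oracle/dbs/slog10.rdo') SIZE 500M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 11 ('/oracle/dbs/slog11.rdo') SIZE 500M;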
Before setting up a logical standby database, ensure the logical standby database can maintain the data types and tables in your primary database. See Appendix C for a complete list of data type and storage type considerations.
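One way to identify tables that SQL Apply cannot maintain is to query the DBA_LOGSTDBY_UNSUPPORTED view on the primary database; for example:
SQL> SELECT DISTINCT OWNER, TABLE_NAME FROM DBA_LOGSTDBY_UNSUPPORTED
  2> ORDER BY OWNER, TABLE_NAME;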
The physical organization in a logical standby database is different from that of the primary database, even though the logical standby database is created from a backup copy of the primary database. Thus, ROWIDs contained in the redo records generated by the primary database cannot be used to identify the corresponding row in the logical standby database.
Oracle uses primary-key or unique-constraint/index supplemental logging to logically identify a modified row in the logical standby database. When database-wide primary-key and unique-constraint/index supplemental logging is enabled, each UPDATE
statement also writes the column values necessary in the redo log to uniquely identify the modified row in the logical standby database.
If a table has a primary key defined, then the primary key is logged along with the modified columns as part of the UPDATE
statement to identify the modified row.
In the absence of a primary key, the shortest nonnull unique-constraint/index is logged along with the modified columns as part of the UPDATE
statement to identify the modified row.
In the absence of both a primary key and a nonnull unique constraint/index, all columns of bounded size are logged as part of the UPDATE
statement to identify the modified row. In other words, all columns except those with the following types are logged: LONG
, LOB
, LONG RAW
, object type, and collections.
A function-based index, even though it is declared as unique, cannot be used to uniquely identify a modified row. However, logical standby databases support replication of tables that have function-based indexes defined, as long as modified rows can be uniquely identified.
Oracle recommends that you add a primary key or a nonnull unique index to tables in the primary database, whenever possible, to ensure that SQL Apply can efficiently apply redo data updates to the logical standby database.
Perform the following steps to ensure SQL Apply can uniquely identify rows of each table being replicated in the logical standby database.
Step 1 Find tables without a unique logical identifier in the primary database.
Query the DBA_LOGSTDBY_NOT_UNIQUE
view to display a list of tables that SQL Apply may not be able to uniquely identify. For example:
SQL> SELECT OWNER, TABLE_NAME FROM DBA_LOGSTDBY_NOT_UNIQUE
  2> WHERE (OWNER, TABLE_NAME) NOT IN
  3> (SELECT DISTINCT OWNER, TABLE_NAME FROM DBA_LOGSTDBY_UNSUPPORTED)
  4> AND BAD_COLUMN = 'Y';
Step 2 Add a disabled primary-key RELY constraint.
If your application ensures the rows in a table are unique, you can create a disabled primary key RELY
constraint on the table. This avoids the overhead of maintaining a primary key on the primary database.
To create a disabled RELY
constraint on a primary database table, use the ALTER TABLE
statement with a RELY DISABLE
clause. The following example creates a disabled RELY
constraint on a table named mytab
, for which rows can be uniquely identified using the id
and name
columns:
SQL> ALTER TABLE mytab ADD PRIMARY KEY (id, name) RELY DISABLE;
When you specify the RELY
constraint, the system will assume that rows are unique. Because you are telling the system to rely on the information, but are not validating it on every modification done to the table, you must be careful to select columns for the disabled RELY
constraint that will uniquely identify each row in the table. If such uniqueness is not present, then SQL Apply will not correctly maintain the table.
To improve the performance of SQL Apply, add a unique-constraint/index to the columns to identify the row on the logical standby database. Failure to do so results in full table scans during UPDATE
or DELETE
statements carried out on the table by SQL Apply.
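As a sketch only (the index name is illustrative and reuses the mytab example above), once the logical standby database has been created you could add such an index there by stopping SQL Apply and temporarily disabling the database guard for your session:
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
SQL> ALTER SESSION DISABLE GUARD;
SQL> CREATE UNIQUE INDEX mytab_uidx ON mytab (id, name);
SQL> ALTER SESSION ENABLE GUARD;
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;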
See Also:
Oracle Database Reference for information about the DBA_LOGSTDBY_NOT_UNIQUE
view
Oracle Database SQL Language Reference for information about the ALTER TABLE
statement syntax and creating RELY
constraints
Section 10.6.1, "Create a Primary Key RELY Constraint" for information about RELY
constraints and actions you can take to increase performance on a logical standby database
This section describes the tasks you perform to create a logical standby database.
Table 4-2 provides a checklist of the tasks that you perform to create a logical standby database and specifies on which database you perform each task. There is also a reference to the section that describes the task in more detail.
Table 4-2 Creating a Logical Standby Database
Reference | Task | Database |
---|---|---|
Section 4.2.1 | Create a Physical Standby Database | Primary |
Section 4.2.2 | Stop Redo Apply on the Physical Standby Database | Standby |
Section 4.2.3 | Prepare the Primary Database to Support a Logical Standby Database | Primary |
Section 4.2.4 | Transition to a Logical Standby Database | Standby |
Section 4.2.5 | Open the Logical Standby Database | Standby |
Section 4.2.6 | Verify the Logical Standby Database Is Performing Properly | Standby |
You create a logical standby database by first creating a physical standby database and then transitioning it to a logical standby database. Follow the instructions in Chapter 3, "Creating a Physical Standby Database" to create a physical standby database.
You can run Redo Apply on the new physical standby database for any length of time before converting it to a logical standby database. However, before converting to a logical standby database, stop Redo Apply on the physical standby database. Stopping Redo Apply is necessary to avoid applying changes past the redo that contains the LogMiner dictionary (described in Section 4.2.3.2, "Build a Dictionary in the Redo Data").
To stop Redo Apply, issue the following statement on the physical standby database. If the database is an Oracle RAC database comprised of multiple instances, then you must first stop all Oracle RAC instances except one before issuing this statement:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
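To confirm that Redo Apply has stopped, you can query the V$MANAGED_STANDBY view on the physical standby database and verify that no managed recovery (MRP0) process remains; for example:
SQL> SELECT PROCESS, STATUS FROM V$MANAGED_STANDBY;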
This section contains the following topics:
Prepare the Primary Database for Role Transitions
Build a Dictionary in the Redo Data
In Section 3.1.4, "Set Primary Database Initialization Parameters", you set up several standby role initialization parameters to take effect when the primary database is transitioned to the physical standby role.
Note:
This step is necessary only if you plan to perform switchovers.
If you plan to transition the primary database to the logical standby role, then you must also modify the parameters shown in Example 4-1, so that no parameters need to change after a role transition:
Change the VALID_FOR
attribute in the original LOG_ARCHIVE_DEST_1
destination to archive redo data only from the online redo log and not from the standby redo log.
Include the LOG_ARCHIVE_DEST_3
destination on the primary database. This parameter only takes effect when the primary database is transitioned to the logical standby role.
Example 4-1 Primary Database: Logical Standby Role Initialization Parameters
LOG_ARCHIVE_DEST_1=
 'LOCATION=/arch1/chicago/
  VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES)
  DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_3=
 'LOCATION=/arch2/chicago/
  VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)
  DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_STATE_3=ENABLE
To dynamically set these initialization parameters, use the SQL ALTER SYSTEM SET
statement and include the SCOPE=BOTH
clause so that the changes take effect immediately and persist after the database is shut down and started up again.
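For example, the LOG_ARCHIVE_DEST_3 destination from Example 4-1 could be added on the primary database as follows:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_3='LOCATION=/arch2/chicago/ VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=chicago' SCOPE=BOTH;
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_3=ENABLE SCOPE=BOTH;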
The following table describes the archival processing defined by the changed initialization parameters shown in Example 4-1.
 | When the Chicago Database Is Running in the Primary Role | When the Chicago Database Is Running in the Logical Standby Role |
---|---|---|
LOG_ARCHIVE_DEST_1 | Directs archiving of redo data generated by the primary database from the local online redo log files to the local archived redo log files in /arch1/chicago/. | Directs archiving of redo data generated by the logical standby database from the local online redo log files to the local archived redo log files in /arch1/chicago/. |
LOG_ARCHIVE_DEST_3 | Is ignored; LOG_ARCHIVE_DEST_3 is valid only when chicago is running in the standby role. | Directs archiving of redo data from the standby redo log files to the local archived redo log files in /arch2/chicago/. |
A LogMiner dictionary must be built into the redo data so that the LogMiner component of SQL Apply can properly interpret changes it sees in the redo. As part of building the LogMiner dictionary, supplemental logging is automatically set up to log primary key and unique-constraint/index columns. The supplemental logging information ensures each update contains enough information to logically identify each row that is modified by the statement.
To build the LogMiner dictionary, issue the following statement:
SQL> EXECUTE DBMS_LOGSTDBY.BUILD;
The DBMS_LOGSTDBY.BUILD
procedure waits for all existing transactions to complete. Long-running transactions executed on the primary database will affect the timeliness of this command.
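To confirm that the build enabled the required supplemental logging, you can query the V$DATABASE view on the primary database; both columns should return YES:
SQL> SELECT SUPPLEMENTAL_LOG_DATA_PK, SUPPLEMENTAL_LOG_DATA_UI
  2> FROM V$DATABASE;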
See Also:
The DBMS_LOGSTDBY.BUILD
PL/SQL package in Oracle Database PL/SQL Packages and Types Reference
The UNDO_RETENTION
initialization parameter in Oracle Database Reference
This section describes how to prepare the physical standby database to transition to a logical standby database. It contains the following topics:
Convert to a Logical Standby Database
Adjust Initialization Parameters for the Logical Standby Database
The redo logs contain the information necessary to convert your physical standby database to a logical standby database. To continue applying redo data to the physical standby database until it is ready to convert to a logical standby database, issue the following SQL statement:
SQL> ALTER DATABASE RECOVER TO LOGICAL STANDBY db_name;
For db_name, specify a database name to identify the new logical standby database. If you are using a server parameter file (spfile) at the time you issue this statement, then the database will update the file with appropriate information about the new logical standby database. If you are not using an spfile, then the database issues a message reminding you to set the name of the DB_NAME
parameter after shutting down the database.
Note:
If you are creating a logical standby database in the context of performing a rolling upgrade of Oracle software with a physical standby database, you should issue the following command instead:
SQL> ALTER DATABASE RECOVER TO LOGICAL STANDBY KEEP IDENTITY;
A logical standby database created with the KEEP IDENTITY
clause retains the same DB_NAME
and DBID
as that of its primary database. Such a logical standby database can only participate in one switchover operation, and thus should only be created in the context of a rolling upgrade with a physical standby database.
The statement waits, applying redo data until the LogMiner dictionary is found in the log files. This may take several minutes, depending on how long it takes redo generated in Section 4.2.3.2, "Build a Dictionary in the Redo Data" to be transmitted to the standby database, and how much redo data needs to be applied. If a dictionary build is not successfully performed on the primary database, this command will never complete. You can cancel the SQL statement by issuing the ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL
statement from another SQL session.
Caution:
In earlier releases, you needed to create a new password file before you opened the logical standby database. This is no longer needed. Creating a new password file at the logical standby database will cause redo transport services to not work properly.
On the logical standby database, shut down the instance and issue the STARTUP MOUNT statement to start and mount the database. Do not open the database; it should remain closed to user access until later in the creation process. For example:
SQL> SHUTDOWN;
SQL> STARTUP MOUNT;
You need to modify the LOG_ARCHIVE_DEST_n parameters because, unlike physical standby databases, logical standby databases are open databases that generate redo data and have multiple log files (online redo log files, archived redo log files, and standby redo log files). It is good practice to specify separate local destinations for:
Archived redo log files that store redo data generated by the logical standby database. In Example 4-2, this is configured as the LOG_ARCHIVE_DEST_1=LOCATION=/arch1/boston
destination.
Archived redo log files that store redo data received from the primary database. In Example 4-2, this is configured as the LOG_ARCHIVE_DEST_3=LOCATION=/arch2/boston
destination.
Example 4-2 shows the initialization parameters that were modified for the logical standby database. The parameters shown are valid for the Boston logical standby database when it is running in either the primary or standby database role.
Example 4-2 Modifying Initialization Parameters for a Logical Standby Database
LOG_ARCHIVE_DEST_1=
 'LOCATION=/arch1/boston/
  VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES)
  DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_2=
 'SERVICE=chicago ASYNC
  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
  DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_3=
 'LOCATION=/arch2/boston/
  VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)
  DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_DEST_STATE_3=ENABLE
Note:
If database compatibility is set to 11.1, you can also use the Flash Recovery Area to store the remote archived logs. To do this, set the following parameters (assuming you have already appropriately set DB_RECOVERY_FILE_DEST
and DB_RECOVERY_FILE_DEST_SIZE
):
LOG_ARCHIVE_DEST_1=
 'LOCATION=USE_DB_RECOVERY_FILE_DEST
  VALID_FOR=(ONLINE_LOGFILES, ALL_ROLES)
  DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_3=
 'LOCATION=USE_DB_RECOVERY_FILE_DEST
  VALID_FOR=(STANDBY_LOGFILES, STANDBY_ROLE)
  DB_UNIQUE_NAME=boston'
The following table describes the archival processing defined by the initialization parameters shown in Example 4-2.
 | When the Boston Database Is Running in the Primary Role | When the Boston Database Is Running in the Logical Standby Role |
---|---|---|
LOG_ARCHIVE_DEST_1 | Directs archival of redo data generated by the primary database from the local online redo log files to the local archived redo log files in /arch1/boston/. | Directs archival of redo data generated by the logical standby database from the local online redo log files to the local archived redo log files in /arch1/boston/. |
LOG_ARCHIVE_DEST_2 | Directs transmission of redo data to the remote logical standby database chicago. | Is ignored; LOG_ARCHIVE_DEST_2 is valid only when boston is running in the primary role. |
LOG_ARCHIVE_DEST_3 | Is ignored; LOG_ARCHIVE_DEST_3 is valid only when boston is running in the standby role. | Directs archival of redo data received from the primary database to the local archived redo log files in /arch2/boston/. |
Note:
The DB_FILE_NAME_CONVERT initialization parameter is not honored once a physical standby database is converted to a logical standby database. If necessary, you should register a skip handler and provide SQL Apply with a replacement DDL string to execute by converting the path names of the primary database datafiles to the standby datafile path names. See the DBMS_LOGSTDBY package in Oracle Database PL/SQL Packages and Types Reference for information about the SKIP procedure.
To open the new logical standby database, you must open it with the RESETLOGS option by issuing the following statement:
SQL> ALTER DATABASE OPEN RESETLOGS;
Because this is the first time the database is being opened, the database's global name is adjusted automatically to match the new DB_NAME
initialization parameter.
Issue the following statement to begin applying redo data to the logical standby database. For example:
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
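To verify that the logical standby database is receiving and registering redo from the primary database, you can query the DBA_LOGSTDBY_LOG view; for example:
SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME, DICT_BEGIN, DICT_END
  2> FROM DBA_LOGSTDBY_LOG ORDER BY SEQUENCE#;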
At this point, the logical standby database is running and can provide the maximum performance level of data protection. The following list describes additional preparations you can take on the logical standby database:
Upgrade the data protection mode
The Data Guard configuration is initially set up in the maximum performance mode (the default). You can upgrade it to a higher protection mode if your business requirements warrant it; a sketch of one way to do this follows this list.
Enable Flashback Database
Flashback Database removes the need to re-create the primary database after a failover. Flashback Database enables you to return a database to its state at a time in the recent past much faster than traditional point-in-time recovery, because it does not require restoring datafiles from backup nor the extensive application of redo data. You can enable Flashback Database on the primary database, the standby database, or both. See Section 13.2, "Converting a Failed Primary Into a Standby Database Using Flashback Database" and Section 13.3, "Using Flashback Database After Issuing an Open Resetlogs Statement" for scenarios showing how to use Flashback Database in a Data Guard environment. Also, see Oracle Database Backup and Recovery User's Guide for more information about Flashback Database.
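As a sketch of upgrading the configuration to the maximum availability mode, the following statements would be issued on the primary database. The assumption here is that LOG_ARCHIVE_DEST_2 on the primary is the remote destination that ships redo to the boston standby; also, depending on the protection mode you are moving from, a restart of the primary database may be required, so review the protection-mode documentation before making this change:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=boston SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=boston' SCOPE=BOTH;
SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;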