Oracle® Streams Replication Administrator's Guide 11g Release 1 (11.1) Part Number B28322-01
This chapter contains instructions for managing Oracle Streams capture processes, synchronous captures, propagations, and Oracle Streams apply processes in an Oracle Streams replication environment. This chapter also includes instructions for managing Oracle Streams tags, and for performing database point-in-time recovery at a destination database in an Oracle Streams environment.
This chapter contains these topics:
Managing Staging and Propagation for Oracle Streams Replication
Resynchronizing a Source Database in a Multiple-Source Environment
Performing Database Point-in-Time Recovery in an Oracle Streams Environment
A capture process or a synchronous capture typically starts the process of replicating a database change by capturing the change, converting the change into a logical change record (LCR), and enqueuing the change into an ANYDATA queue. From there, the LCR can be propagated to other databases and applied at these databases to complete the replication process.
The following sections describe management tasks for a capture process in an Oracle Streams replication environment:
You also might need to perform other management tasks.
See Also:
Oracle Streams Concepts and Administration for more information about managing a capture process
You can create a capture process that captures changes to the local source database, or you can create a capture process that captures changes remotely at a downstream database. If a capture process runs on a downstream database, then redo data from the source database is copied to the downstream database, and the capture process captures changes in the redo data at the downstream database.
You can use any of the following procedures to create a local capture process:
Note:
To create a capture process, a user must be granted the DBA role.
If Oracle Database Vault is installed, then the user who creates the capture process must be granted the BECOME USER system privilege. Granting this privilege to the user is not required if Oracle Database Vault is not installed. You can revoke the BECOME USER system privilege from the user after the capture process is created, if necessary.
Before you create a capture process, create an ANYDATA queue to associate with the capture process, if one does not exist.
The following example runs the ADD_SCHEMA_RULES procedure in the DBMS_STREAMS_ADM package to create a local capture process:
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name        => 'hr',
    streams_type       => 'capture',
    streams_name       => 'strep01_capture',
    queue_name         => 'strep01_queue',
    include_dml        => TRUE,
    include_ddl        => TRUE,
    include_tagged_lcr => FALSE,
    source_database    => NULL,
    inclusion_rule     => TRUE);
END;
/
Running this procedure performs the following actions:
Creates a capture process named strep01_capture. The capture process is created only if it does not already exist. If a new capture process is created, then this procedure also sets the start SCN to the point in time of creation.
Associates the capture process with an existing queue named strep01_queue.
Creates a positive rule set and associates it with the capture process, if the capture process does not have a positive rule set, because the inclusion_rule parameter is set to TRUE. The rule set uses the SYS.STREAMS$_EVALUATION_CONTEXT evaluation context. The rule set name is system generated.
Creates two rules. One rule evaluates to TRUE for DML changes to the hr schema and the database objects in the hr schema, and the other rule evaluates to TRUE for DDL changes to the hr schema and the database objects in the hr schema. The rule names are system generated.
Adds the two rules to the positive rule set associated with the capture process. The rules are added to the positive rule set because the inclusion_rule parameter is set to TRUE.
Specifies that the capture process captures a change in the redo log only if the change has a NULL tag, because the include_tagged_lcr parameter is set to FALSE. This behavior is accomplished through the system-created rules for the capture process.
Creates a capture process that captures local changes to the source database because the source_database parameter is set to NULL. For a local capture process, you can also specify the global name of the local database for this parameter.
Prepares all of the database objects in the hr schema, and all of the database objects added to the hr schema in the future, for instantiation.
Caution:
When a capture process is started or restarted, it might need to scan redo log files with a FIRST_CHANGE# value that is lower than the start SCN. Removing required redo log files before they are scanned by a capture process causes the capture process to abort. You can query the DBA_CAPTURE data dictionary view to determine the first SCN, start SCN, and required checkpoint SCN. A capture process needs the redo log file that includes the required checkpoint SCN, and all subsequent redo log files.
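For example, one way to display these SCN values for each capture process is a query such as the following:

SELECT capture_name, first_scn, start_scn, required_checkpoint_scn
  FROM DBA_CAPTURE;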
See Also:
Oracle Streams Concepts and Administration for more information about preparing for a capture process and creating a capture process, including information about creating a downstream capture process, and for more information about the first SCN and start SCN for a capture process
You can use the following procedures to create a synchronous capture:
Before you create a synchronous capture, create the following Oracle Streams components if they do not exist:
An ANYDATA queue to associate with the synchronous capture
A propagation (only required if the captured changes must be sent to another database)
An apply process to apply the captured changes
The following example runs the ADD_TABLE_RULES procedure in the DBMS_STREAMS_ADM package to create a synchronous capture:
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'hr.departments',
    streams_type => 'sync_capture',
    streams_name => 'sync_capture',
    queue_name   => 'strmadmin.streams_queue');
END;
/
Running this procedure performs the following actions:
Creates a synchronous capture named sync_capture at the current database. A synchronous capture with the same name must not exist.
Enables the synchronous capture. A synchronous capture cannot be disabled.
Associates the synchronous capture with an existing queue named streams_queue.
Creates a positive rule set for synchronous capture sync_capture. The rule set has a system-generated name.
Creates a rule that captures DML changes to the hr.departments table, and adds the rule to the positive rule set for the synchronous capture. The rule has a system-generated name.
Configures the user who ran the ADD_TABLE_RULES procedure as the capture user.
Prepares the hr.departments table for instantiation by automatically running the DBMS_CAPTURE_ADM.PREPARE_SYNC_INSTANTIATION function for the table.
Note:
To create a synchronous capture, a user must be granted the DBA role.
When the CREATE_SYNC_CAPTURE procedure creates a synchronous capture, the procedure must obtain an exclusive lock on each table for which it will capture changes. The rules in the specified rule set for the synchronous capture determine these tables. Similarly, when the ADD_TABLE_RULES or the ADD_SUBSET_RULES procedure adds rules to a synchronous capture rule set, the procedure must obtain an exclusive lock on the specified table. In these cases, if there are outstanding transactions on a table for which the synchronous capture will capture changes, then the procedure waits until it can obtain a lock.
If Oracle Database Vault is installed, then the user who creates the synchronous capture must be granted the BECOME USER system privilege. Granting this privilege to the user is not required if Oracle Database Vault is not installed. You can revoke the BECOME USER system privilege from the user after the synchronous capture is created, if necessary.
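For reference, the following is a minimal sketch of creating a synchronous capture explicitly with the CREATE_SYNC_CAPTURE procedure in the DBMS_CAPTURE_ADM package. It assumes that the rule set strmadmin.sync_rule_set already exists; rules can then be added to the synchronous capture with the ADD_TABLE_RULES or ADD_SUBSET_RULES procedure.

BEGIN
  DBMS_CAPTURE_ADM.CREATE_SYNC_CAPTURE(
    queue_name    => 'strmadmin.streams_queue',
    capture_name  => 'sync_capture',
    rule_set_name => 'strmadmin.sync_rule_set');  -- assumed pre-existing rule set
END;
/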
See Also:
Oracle Streams Concepts and Administration for more information about preparing for a synchronous capture and creating a synchronous capture
Oracle Database 2 Day + Data Replication and Integration Guide for an example that configures a synchronous capture replication environment
When you use a capture process to capture changes, supplemental logging must be specified for certain columns at a source database for changes to the columns to be applied successfully at a destination database. The following sections illustrate how to manage supplemental logging at a source database:
Specifying Table Supplemental Logging Using Unconditional Log Groups
Specifying Table Supplemental Logging Using Conditional Log Groups
Dropping Database Supplemental Logging of Key Columns
Note:
LOB, LONG, LONG RAW, user-defined type, and Oracle-supplied type columns cannot be part of a supplemental log group.
In addition to the methods described in this section, supplemental logging can also be enabled when database objects are prepared for instantiation.
Supplemental logging is not required when synchronous capture is used to capture changes to database objects.
See Also:
"Supplemental Logging for Oracle Streams Replication" for information about when supplemental logging is required
"Preparing Database Objects for Instantiation at a Source Database"
The following sections describe creating an unconditional log group:
Specifying an Unconditional Supplemental Log Group for Primary Key Column(s)
Specifying an Unconditional Supplemental Log Group for All Table Columns
Specifying an Unconditional Supplemental Log Group that Includes Selected Columns
To specify an unconditional supplemental log group that only includes the primary key column(s) for a table, use an ALTER TABLE statement with the PRIMARY KEY option in the ADD SUPPLEMENTAL LOG DATA clause.
For example, the following statement adds the primary key column of the hr.regions table to an unconditional log group:
ALTER TABLE hr.regions ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
The log group has a system-generated name.
To specify an unconditional supplemental log group that includes all of the columns in a table, use an ALTER TABLE statement with the ALL option in the ADD SUPPLEMENTAL LOG DATA clause.
For example, the following statement adds all of the columns in the hr.departments table to an unconditional log group:
ALTER TABLE hr.departments ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
The log group has a system-generated name.
To specify an unconditional supplemental log group that contains columns that you select, use an ALTER TABLE statement with the ALWAYS specification for the ADD SUPPLEMENTAL LOG GROUP clause. These log groups can include key columns, if necessary.
For example, the following statement adds the department_id column and the manager_id column of the hr.departments table to an unconditional log group named log_group_dep_pk:
ALTER TABLE hr.departments ADD SUPPLEMENTAL LOG GROUP log_group_dep_pk (department_id, manager_id) ALWAYS;
The ALWAYS specification makes this log group an unconditional log group.
The following sections describe creating a conditional log group:
Specifying a Conditional Log Group Using the ADD SUPPLEMENTAL LOG DATA Clause
Specifying a Conditional Log Group Using the ADD SUPPLEMENTAL LOG GROUP Clause
You can use the following options in the ADD SUPPLEMENTAL LOG DATA clause of an ALTER TABLE statement:
The FOREIGN KEY option creates a conditional log group that includes the foreign key column(s) in the table.
The UNIQUE option creates a conditional log group that includes the unique key column(s) and bitmap index column(s) in the table.
If you specify more than one option in a single ALTER TABLE statement, then a separate conditional log group is created for each option.
For example, the following statement creates two conditional log groups:
ALTER TABLE hr.employees ADD SUPPLEMENTAL LOG DATA (UNIQUE, FOREIGN KEY) COLUMNS;
One conditional log group includes the unique key columns and bitmap index columns for the table, and the other conditional log group includes the foreign key columns for the table. Both log groups have system-generated names.
Note:
Specifying the UNIQUE option does not enable supplemental logging of bitmap join index columns.
To specify a conditional supplemental log group that includes any columns you choose to add, you can use the ADD SUPPLEMENTAL LOG GROUP clause in the ALTER TABLE statement. To make the log group conditional, do not include the ALWAYS specification.
For example, suppose the min_salary and max_salary columns in the hr.jobs table are included in a column list for conflict resolution at a destination database. The following statement adds the min_salary and max_salary columns to a conditional log group named log_group_jobs_cr:
ALTER TABLE hr.jobs ADD SUPPLEMENTAL LOG GROUP log_group_jobs_cr (min_salary, max_salary);
To drop a conditional or unconditional supplemental log group, use the DROP SUPPLEMENTAL LOG GROUP clause in the ALTER TABLE statement. For example, to drop a supplemental log group named log_group_jobs_cr, run the following statement:
ALTER TABLE hr.jobs DROP SUPPLEMENTAL LOG GROUP log_group_jobs_cr;
You also have the option of specifying supplemental logging for all primary key, unique key, bitmap index, and foreign key columns in a source database. You might choose this option if you configure a capture process to capture changes to an entire database. To specify supplemental logging for all primary key, unique key, bitmap index, and foreign key columns in a source database, issue the following SQL statement:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE, FOREIGN KEY) COLUMNS;
If your primary key, unique key, bitmap index, and foreign key columns are the same at all source and destination databases, then running this command at the source database provides the supplemental logging needed for primary key, unique key, bitmap index, and foreign key columns at all destination databases. When you specify the PRIMARY KEY option, all columns of a row's primary key are placed in the redo log file any time the table is modified (unconditional logging). When you specify the UNIQUE option, any columns in a row's unique key and bitmap index are placed in the redo log file if any column belonging to the unique key or bitmap index is modified (conditional logging). When you specify the FOREIGN KEY option, all columns of a row's foreign key are placed in the redo log file if any column belonging to the foreign key is modified (conditional logging).
You can omit one or more of these options. For example, if you do not want to supplementally log all of the foreign key columns in the database, then you can omit the FOREIGN KEY option, as in the following example:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE) COLUMNS;
In addition to PRIMARY KEY, UNIQUE, and FOREIGN KEY, you can also use the ALL option. The ALL option specifies that, when a row is changed, all the columns of that row (except for LOB, LONG, LONG RAW, user-defined type, and Oracle-supplied type columns) are placed in the redo log file (unconditional logging).
Supplemental logging statements are cumulative. If you issue two consecutive ALTER DATABASE ADD SUPPLEMENTAL LOG DATA commands, each with a different identification key, then both keys are supplementally logged.
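For example, the following two statements together enable supplemental logging of both primary key columns (unconditionally) and unique key and bitmap index columns (conditionally), because the second statement adds to, rather than replaces, the logging enabled by the first:

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;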
Note:
Specifying the UNIQUE option does not enable supplemental logging of bitmap join index columns.
To drop supplemental logging for all primary key, unique key, bitmap index, and foreign key columns in a source database, issue the ALTER DATABASE DROP SUPPLEMENTAL LOG DATA statement. To drop database supplemental logging for all primary key, unique key, bitmap index, and foreign key columns, issue the following SQL statement:
ALTER DATABASE DROP SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE, FOREIGN KEY) COLUMNS;
Note:
Dropping database supplemental logging of key columns does not affect any existing table-level supplemental log groups.
The following sections describe management tasks for LCR staging and propagation in an Oracle Streams replication environment:
You also might need to perform other management tasks.
See Also:
Oracle Streams Concepts and Administration for more information about managing message staging and propagation
In an Oracle Streams replication environment, ANYDATA queues stage LCRs that encapsulate captured changes. These queues can be used by capture processes, synchronous captures, propagations, and apply processes as an LCR goes through a stream from a source database to a destination database.
You use the SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package to create an ANYDATA queue. This procedure enables you to specify the following for the ANYDATA queue it creates:
The queue table for the queue
A storage clause for the queue table
The queue name
A queue user that will be configured as a secure queue user of the queue and granted ENQUEUE and DEQUEUE privileges on the queue
A comment for the queue
This procedure creates a queue that is both a secure queue and a transactional queue and starts the newly created queue.
For example, to create an ANYDATA queue named strep01_queue in the strmadmin schema with a queue table named strep01_queue_table, run the following procedure:
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strmadmin.strep01_queue_table',
    queue_name  => 'strmadmin.strep01_queue');
END;
/
You can also use procedures in the DBMS_AQADM package to create an ANYDATA queue, as in the sketch that follows.
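The following is a minimal sketch that creates and starts an ANYDATA queue with DBMS_AQADM; the queue name and queue table name are illustrative:

BEGIN
  -- Create a queue table with an ANYDATA payload and multiple consumers
  DBMS_AQADM.CREATE_QUEUE_TABLE(
    queue_table        => 'strmadmin.streams_queue_table',
    queue_payload_type => 'SYS.ANYDATA',
    multiple_consumers => TRUE);
  -- Create the queue in the queue table
  DBMS_AQADM.CREATE_QUEUE(
    queue_name  => 'strmadmin.streams_queue',
    queue_table => 'strmadmin.streams_queue_table');
  -- Enable enqueue and dequeue on the queue
  DBMS_AQADM.START_QUEUE(
    queue_name => 'strmadmin.streams_queue');
END;
/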
To replicate LCRs between databases, you must propagate the LCRs from the database where they were first staged in a queue to the database where they are applied. To accomplish this goal, you can use any number of separate propagations.
You can use any of the following procedures to create a propagation:
The following tasks must be completed before you create a propagation:
Create a source queue and a destination queue for the propagation, if they do not exist. See "Creating an ANYDATA Queue to Stage LCRs" for instructions.
Create a database link between the database containing the source queue and the database containing the destination queue. See Oracle Streams Concepts and Administration for more information about creating database links for propagations.
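For example, assuming the destination database global name is rep2.net, a database link created in the Oracle Streams administrator's schema at the source database might look like the following (the user name, password, and service name are illustrative):

CREATE DATABASE LINK rep2.net CONNECT TO strmadmin
  IDENTIFIED BY password USING 'rep2.net';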
The following example runs the ADD_SCHEMA_PROPAGATION_RULES procedure in the DBMS_STREAMS_ADM package to create a propagation:
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name            => 'hr',
    streams_name           => 'strep01_propagation',
    source_queue_name      => 'strmadmin.strep01_queue',
    destination_queue_name => 'strmadmin.strep02_queue@rep2.net',
    include_dml            => TRUE,
    include_ddl            => TRUE,
    include_tagged_lcr     => FALSE,
    source_database        => 'rep1.net',
    inclusion_rule         => TRUE,
    queue_to_queue         => TRUE);
END;
/
Running this procedure performs the following actions:
Creates a propagation named strep01_propagation. The propagation is created only if it does not already exist.
Specifies that the propagation propagates LCRs from strep01_queue in the current database to strep02_queue in the rep2.net database.
Specifies that the propagation uses the rep2.net database link to propagate the LCRs, because the destination_queue_name parameter contains @rep2.net.
Creates a positive rule set and associates it with the propagation, if the propagation does not have a positive rule set, because the inclusion_rule parameter is set to TRUE. The rule set uses the SYS.STREAMS$_EVALUATION_CONTEXT evaluation context. The rule set name is system generated.
Creates two rules. One rule evaluates to TRUE for row LCRs that contain the results of DML changes to the tables in the hr schema, and the other rule evaluates to TRUE for DDL LCRs that contain DDL changes to the hr schema or to the database objects in the hr schema. The rule names are system generated.
Adds the two rules to the positive rule set associated with the propagation. The rules are added to the positive rule set because the inclusion_rule parameter is set to TRUE.
Specifies that the propagation propagates an LCR only if it has a NULL tag, because the include_tagged_lcr parameter is set to FALSE. This behavior is accomplished through the system-created rules for the propagation.
Specifies that the source database for the LCRs being propagated is rep1.net, which might or might not be the current database. This propagation does not propagate LCRs in the source queue that have a different source database.
Creates a propagation job for the queue-to-queue propagation.
Note:
To use queue-to-queue propagation, the compatibility level must be 10.2.0 or higher for each database that contains a queue involved in the propagation.
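One way to check the compatibility level of a database is to query the COMPATIBLE initialization parameter:

SELECT value FROM V$PARAMETER WHERE name = 'compatible';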
See Also:
Oracle Streams Concepts and Administration for information about creating propagations
When an apply process applies a logical change record (LCR) or sends an LCR to an apply handler that executes it, the replication process for the LCR is complete. That is, the database change that is encapsulated in the LCR is shared with the database where the LCR is applied.
The following sections describe management tasks for an apply process in an Oracle Streams replication environment:
You also might need to perform other management tasks.
See Also:
Oracle Streams Concepts and Administration for more information about managing an apply process
This section contains instructions for creating an apply process that applies captured logical change records (LCRs). Captured LCRs are LCRs that were captured by a capture process.
You can use any of the following procedures to create an apply process that applies captured LCRs:
Note:
To create an apply process, a user must be granted the DBA role.
If Oracle Database Vault is installed, then the user who creates the apply process must be granted the BECOME USER system privilege. Granting this privilege to the user is not required if Oracle Database Vault is not installed. You can revoke the BECOME USER system privilege from the user after the apply process is created, if necessary.
Before you create an apply process, create an ANYDATA queue to associate with the apply process, if one does not exist.
The following example runs the ADD_SCHEMA_RULES procedure in the DBMS_STREAMS_ADM package to create an apply process that applies captured LCRs:
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name        => 'hr',
    streams_type       => 'apply',
    streams_name       => 'strep01_apply',
    queue_name         => 'strep02_queue',
    include_dml        => TRUE,
    include_ddl        => TRUE,
    include_tagged_lcr => FALSE,
    source_database    => 'rep1.net',
    inclusion_rule     => TRUE);
END;
/
Running this procedure performs the following actions:
Creates an apply process named strep01_apply that applies captured LCRs to the local database. The apply process is created only if it does not already exist. To create an apply process that applies persistent LCRs, you must use the CREATE_APPLY procedure in the DBMS_APPLY_ADM package.
Associates the apply process with an existing queue named strep02_queue.
Creates a positive rule set and associates it with the apply process, if the apply process does not have a positive rule set, because the inclusion_rule parameter is set to TRUE. The rule set uses the SYS.STREAMS$_EVALUATION_CONTEXT evaluation context. The rule set name is system generated.
Creates two rules. One rule evaluates to TRUE for row LCRs that contain the results of DML changes to the tables in the hr schema, and the other rule evaluates to TRUE for DDL LCRs that contain DDL changes to the hr schema or to the database objects in the hr schema. The rule names are system generated.
Adds the rules to the positive rule set associated with the apply process because the inclusion_rule parameter is set to TRUE.
Sets the apply_tag for the apply process to a value that is the hexadecimal equivalent of '00' (double zero). Redo entries generated by the apply process have a tag with this value.
Specifies that the apply process applies an LCR only if it has a NULL tag, because the include_tagged_lcr parameter is set to FALSE. This behavior is accomplished through the system-created rules for the apply process.
Specifies that the LCRs applied by the apply process originate at the rep1.net source database. The rules in the apply process rule sets determine which LCRs are dequeued by the apply process. If the apply process dequeues an LCR with a source database other than rep1.net, then an error is raised.
Note:
Depending on the configuration of the apply process you create, supplemental logging might be required at the source database on columns in the tables for which an apply process applies changes.
If you use the CREATE_APPLY procedure in the DBMS_APPLY_ADM package to create an apply process, set the apply_captured parameter to TRUE to configure the apply process to apply captured LCRs.
See Also:
Oracle Streams Concepts and Administration for information about creating apply processes
This section contains instructions for creating an apply process that applies persistent LCRs and persistent user messages. Persistent LCRs are row logical change records (row LCRs) that were captured by a synchronous capture or constructed and enqueued by an application. Persistent user messages can be messages of any type that are enqueued by an application.
You must use the DBMS_APPLY_ADM.CREATE_APPLY procedure to create an apply process that applies persistent LCRs and persistent user messages. Specifically, you must run this procedure with the apply_captured parameter set to FALSE.
Note:
To create an apply process, a user must be granted the DBA role.
If Oracle Database Vault is installed, then the user who creates the apply process must be granted the BECOME USER system privilege. Granting this privilege to the user is not required if Oracle Database Vault is not installed. You can revoke the BECOME USER system privilege from the user after the apply process is created, if necessary.
Before you create an apply process, create an ANYDATA queue to associate with the apply process, if one does not exist.
The following example runs the CREATE_APPLY procedure in the DBMS_APPLY_ADM package to create an apply process that applies persistent LCRs and persistent user messages:
BEGIN
  DBMS_APPLY_ADM.CREATE_APPLY(
    queue_name             => 'streams_queue',
    apply_name             => 'sync_apply',
    rule_set_name          => 'strmadmin.sync_rule_set',
    message_handler        => NULL,
    ddl_handler            => NULL,
    apply_user             => NULL,
    apply_database_link    => NULL,
    apply_tag              => NULL,
    apply_captured         => FALSE,
    precommit_handler      => NULL,
    negative_rule_set_name => NULL);
END;
/
Running this procedure performs the following actions:
Creates an apply process named sync_apply. An apply process with the same name must not exist.
Associates the apply process with an existing queue named streams_queue.
Associates the apply process with an existing rule set named sync_rule_set. This rule set is the positive rule set for the apply process.
Specifies that the apply process does not use a message handler.
Specifies that the apply process does not use a DDL handler.
Specifies that the user who applies the changes is the user who runs the CREATE_APPLY procedure, because the apply_user parameter is NULL.
Specifies that the apply process applies changes to the local database, because the apply_database_link parameter is set to NULL.
Specifies that each redo entry generated by the apply process has a NULL tag.
Specifies that the apply process does not apply captured LCRs. Therefore, the apply process can apply persistent LCRs or persistent user messages that are in the persistent queue portion of the apply process queue.
Specifies that the apply process does not use a precommit handler.
Specifies that the apply process does not use a negative rule set.
After creating the apply process, add rules to the apply process rule set by running one or more procedures in the DBMS_STREAMS_ADM package, as in the sketch that follows. These rules direct the apply process to apply LCRs for the specified database objects.
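For example, the following is a minimal sketch that uses the ADD_TABLE_RULES procedure to add a rule for DML changes to the hr.departments table to the positive rule set of the sync_apply apply process; the table and queue names continue the earlier examples:

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'hr.departments',
    streams_type => 'apply',
    streams_name => 'sync_apply',
    queue_name   => 'strmadmin.streams_queue',
    include_dml  => TRUE,
    include_ddl  => FALSE);
END;
/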
See Also:
Oracle Streams Concepts and Administration for information about creating apply processes
This section contains instructions for setting and removing the substitute key columns for a table.
When an apply process applies changes to a table, substitute key columns can either replace the primary key columns for a table that has a primary key or act as the primary key columns for a table that does not have a primary key. Set the substitute key columns for a table using the SET_KEY_COLUMNS procedure in the DBMS_APPLY_ADM package. This setting applies to all of the apply processes that apply local changes to the database.
For example, to set the substitute key columns for the hr.employees table to the first_name, last_name, and hire_date columns, replacing the employee_id column, run the following procedure:
BEGIN
  DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name => 'hr.employees',
    column_list => 'first_name,last_name,hire_date');
END;
/
Note:
You must specify an unconditional supplemental log group at the source database for all of the columns specified as substitute key columns in the column_list or column_table parameter at the destination database. In this example, you would specify an unconditional supplemental log group including the first_name, last_name, and hire_date columns in the hr.employees table.
If an apply process applies changes to a remote non-Oracle database, then it can use different substitute key columns for the same table. You can run the SET_KEY_COLUMNS procedure in the DBMS_APPLY_ADM package to specify substitute key columns for changes that will be applied to a remote non-Oracle database by setting the apply_database_link parameter to a non-NULL value.
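For example, the following is a minimal sketch of setting substitute key columns for changes applied through a database link to a non-Oracle database; the database link name mydb.net is hypothetical:

BEGIN
  DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name         => 'hr.employees',
    column_list         => 'first_name,last_name,hire_date',
    apply_database_link => 'mydb.net');  -- hypothetical database link
END;
/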
See Also:
"Managing Supplemental Logging in an Oracle Streams Replication Environment"
"Apply Process Configuration in an Oracle to Non-Oracle Environment" for information about setting a setting key columns for a table in a remote non-Oracle database
You remove the substitute key columns for a table by specifying NULL for the column_list or column_table parameter in the SET_KEY_COLUMNS procedure in the DBMS_APPLY_ADM package. If the table has a primary key, then the table's primary key is used by any apply process for local changes to the database after you remove the substitute primary key.
For example, to remove the substitute key columns for the hr.employees table, run the following procedure:
BEGIN
  DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name => 'hr.employees',
    column_list => NULL);
END;
/
This section contains instructions for creating, setting, and removing a DML handler.
See Also:
"Apply Processing Options for LCRs"A DML handler must have the following signature:
PROCEDURE user_procedure (
   parameter_name  IN  ANYDATA);
Here, user_procedure stands for the name of the procedure and parameter_name stands for the name of the parameter passed to the procedure. The parameter passed to the procedure is an ANYDATA encapsulation of a row LCR.
The following restrictions apply to the user procedure:
Do not execute COMMIT or ROLLBACK statements. Doing so can endanger the consistency of the transaction that contains the LCR.
If you are manipulating a row using the EXECUTE member procedure for the row LCR, then do not attempt to manipulate more than one row in a row operation. You must manually construct and execute any DML statements that manipulate more than one row.
If the command type is UPDATE or DELETE, then row operations resubmitted using the EXECUTE member procedure for the LCR must include the entire key in the list of old values. The key is the primary key or the smallest unique key that has at least one NOT NULL column, unless a substitute key has been specified by the SET_KEY_COLUMNS procedure. If there is no specified key, then the key consists of all non-LOB, non-LONG, and non-LONG RAW columns.
If the command type is INSERT, then row operations resubmitted using the EXECUTE member procedure for the LCR should include the entire key in the list of new values. Otherwise, duplicate rows are possible. The key is the primary key or the smallest unique key that has at least one NOT NULL column, unless a substitute key has been specified by the SET_KEY_COLUMNS procedure. If there is no specified key, then the key consists of all non-LOB, non-LONG, and non-LONG RAW columns.
A DML handler can be used for any customized processing of row LCRs. For example, the handler can modify an LCR and then execute it using the EXECUTE member procedure for the LCR. When you execute a row LCR in a DML handler, the apply process applies the LCR without calling the DML handler again.
You can also use a DML handler for recording the history of DML changes. For example, a DML handler can insert information about an LCR it processes into a table and then apply the LCR using the EXECUTE member procedure. To create such a DML handler, first create a table to hold the history information:
CREATE TABLE strmadmin.history_row_lcrs(
  timestamp             DATE,
  source_database_name  VARCHAR2(128),
  command_type          VARCHAR2(30),
  object_owner          VARCHAR2(32),
  object_name           VARCHAR2(32),
  tag                   RAW(10),
  transaction_id        VARCHAR2(10),
  scn                   NUMBER,
  commit_scn            NUMBER,
  old_values            SYS.LCR$_ROW_LIST,
  new_values            SYS.LCR$_ROW_LIST)
  NESTED TABLE old_values STORE AS old_values_ntab
  NESTED TABLE new_values STORE AS new_values_ntab;
Then create the procedure that records information about each row LCR in the history_row_lcrs table and executes the row LCR:

CREATE OR REPLACE PROCEDURE history_dml(in_any IN ANYDATA) IS
  lcr  SYS.LCR$_ROW_RECORD;
  rc   PLS_INTEGER;
BEGIN
  -- Access the LCR
  rc := in_any.GETOBJECT(lcr);
  -- Insert information about the LCR into the history_row_lcrs table
  INSERT INTO strmadmin.history_row_lcrs VALUES
    (SYSDATE, lcr.GET_SOURCE_DATABASE_NAME(), lcr.GET_COMMAND_TYPE(),
     lcr.GET_OBJECT_OWNER(), lcr.GET_OBJECT_NAME(), lcr.GET_TAG(),
     lcr.GET_TRANSACTION_ID(), lcr.GET_SCN(), lcr.GET_COMMIT_SCN,
     lcr.GET_VALUES('old'), lcr.GET_VALUES('new', 'n'));
  -- Apply the row LCR
  lcr.EXECUTE(TRUE);
END;
/
Note:
You must specify an unconditional supplemental log group at the source database for any columns needed by a DML handler at the destination database. This example DML handler does not require any additional supplemental logging because it simply records information about the row LCR and does not manipulate the row LCR in any other way.
To test a DML handler before using it, or to debug a DML handler, you can construct row LCRs and run the DML handler procedure outside the context of an apply process.
See Also:
"Managing Supplemental Logging in an Oracle Streams Replication Environment"
"Executing Row LCRs" for an example that constructs and executes row LCRs outside the context of an apply process
Chapter 11, "Managing Logical Change Records (LCRs)" for information about and restrictions regarding DML handlers and LOB, LONG
, and LONG
RAW
data types
"Are There Any Apply Errors in the Error Queue?" for information about common apply errors that you might want to handle in a DML handler
Oracle Streams Concepts and Administration for an example of a precommit handler that can be used with this DML handler to record commit information for applied transactions
A DML handler processes each row LCR dequeued by any apply process that contains a specific operation on a specific table. You can specify multiple DML handlers on the same table, to handle different operations on the table. All apply processes that apply changes to the specified table in the local database use the specified DML handler.
Set the DML handler using the SET_DML_HANDLER procedure in the DBMS_APPLY_ADM package. For example, the following procedure sets the DML handler for UPDATE operations on the hr.locations table. Therefore, when any apply process that applies changes locally dequeues a row LCR containing an UPDATE operation on the hr.locations table, the apply process sends the row LCR to the history_dml PL/SQL procedure in the strmadmin schema for processing. The apply process does not apply a row LCR containing such a change directly.
In this example, the apply_name parameter is set to NULL. Therefore, the DML handler is a general DML handler that is used by all of the apply processes in the database.
BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name         => 'hr.locations',
    object_type         => 'TABLE',
    operation_name      => 'UPDATE',
    error_handler       => FALSE,
    user_procedure      => 'strmadmin.history_dml',
    apply_database_link => NULL,
    apply_name          => NULL);
END;
/
Note:
If an apply process applies changes to a remote non-Oracle database, then it can use a different DML handler for the same table. You can run the SET_DML_HANDLER procedure in the DBMS_APPLY_ADM package to specify a DML handler for changes that will be applied to a remote non-Oracle database by setting the apply_database_link parameter to a non-NULL value.
You unset a DML handler using the SET_DML_HANDLER procedure in the DBMS_APPLY_ADM package. When you run that procedure, set the user_procedure parameter to NULL for a specific operation on a specific table. After the DML handler is unset, any apply process that applies changes locally will apply a row LCR containing such a change directly.
For example, the following procedure unsets the DML handler for UPDATE operations on the hr.locations table:
BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name    => 'hr.locations',
    object_type    => 'TABLE',
    operation_name => 'UPDATE',
    error_handler  => FALSE,
    user_procedure => NULL,
    apply_name     => NULL);
END;
/
This section contains instructions for creating, specifying, and removing the DDL handler for an apply process.
Note:
All applied DDL LCRs commit automatically. Therefore, if a DDL handler calls the EXECUTE member procedure of a DDL LCR, then a commit is performed automatically.
See Also:
Oracle Database PL/SQL Packages and Types Reference for more information about the EXECUTE member procedure for LCR types
A DDL handler must have the following signature:
PROCEDURE handler_procedure (
   parameter_name  IN  ANYDATA);
Here, handler_procedure stands for the name of the procedure and parameter_name stands for the name of the parameter passed to the procedure. The parameter passed to the procedure is an ANYDATA encapsulation of a DDL LCR.
A DDL handler can be used for any customized processing of DDL LCRs. For example, the handler can modify the LCR and then execute it using the EXECUTE member procedure for the LCR. When you execute a DDL LCR in a DDL handler, the apply process applies the LCR without calling the DDL handler again.
You can also use a DDL handler to record the history of DDL changes. For example, a DDL handler can insert information about an LCR it processes into a table and then apply the LCR using the EXECUTE member procedure.
To create such a DDL handler, first create a table to hold the history information:
CREATE TABLE strmadmin.history_ddl_lcrs(
  timestamp             DATE,
  source_database_name  VARCHAR2(128),
  command_type          VARCHAR2(30),
  object_owner          VARCHAR2(32),
  object_name           VARCHAR2(32),
  object_type           VARCHAR2(18),
  ddl_text              CLOB,
  logon_user            VARCHAR2(32),
  current_schema        VARCHAR2(32),
  base_table_owner      VARCHAR2(32),
  base_table_name       VARCHAR2(32),
  tag                   RAW(10),
  transaction_id        VARCHAR2(10),
  scn                   NUMBER);
Then create the procedure that records information about each DDL LCR in the history_ddl_lcrs table and executes the DDL LCR:

CREATE OR REPLACE PROCEDURE history_ddl(in_any IN ANYDATA) IS
  lcr       SYS.LCR$_DDL_RECORD;
  rc        PLS_INTEGER;
  ddl_text  CLOB;
BEGIN
  -- Access the LCR
  rc := in_any.GETOBJECT(lcr);
  DBMS_LOB.CREATETEMPORARY(ddl_text, TRUE);
  lcr.GET_DDL_TEXT(ddl_text);
  -- Insert DDL LCR information into the history_ddl_lcrs table
  INSERT INTO strmadmin.history_ddl_lcrs VALUES(
    SYSDATE, lcr.GET_SOURCE_DATABASE_NAME(), lcr.GET_COMMAND_TYPE(),
    lcr.GET_OBJECT_OWNER(), lcr.GET_OBJECT_NAME(), lcr.GET_OBJECT_TYPE(),
    ddl_text, lcr.GET_LOGON_USER(), lcr.GET_CURRENT_SCHEMA(),
    lcr.GET_BASE_TABLE_OWNER(), lcr.GET_BASE_TABLE_NAME(), lcr.GET_TAG(),
    lcr.GET_TRANSACTION_ID(), lcr.GET_SCN());
  -- Apply the DDL LCR
  lcr.EXECUTE();
  -- Free the temporary LOB space
  DBMS_LOB.FREETEMPORARY(ddl_text);
END;
/
A DDL handler processes all DDL LCRs dequeued by an apply process. Set the DDL handler for an apply process using the ddl_handler parameter in the ALTER_APPLY procedure in the DBMS_APPLY_ADM package. For example, the following procedure sets the DDL handler for an apply process named strep01_apply to the history_ddl procedure in the strmadmin schema.
BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name  => 'strep01_apply',
    ddl_handler => 'strmadmin.history_ddl');
END;
/
A DDL handler processes all DDL LCRs dequeued by an apply process. You remove the DDL handler for an apply process by setting the remove_ddl_handler parameter to TRUE in the ALTER_APPLY procedure in the DBMS_APPLY_ADM package. For example, the following procedure removes the DDL handler from an apply process named strep01_apply.
BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name         => 'strep01_apply',
    remove_ddl_handler => TRUE);
END;
/
A virtual dependency definition is a description of a dependency that is used by an apply process to detect dependencies between transactions being applied at a destination database. Virtual dependency definitions are useful when apply process parallelism is greater than 1 and dependencies are not described by constraints in the data dictionary at the destination database. There are two types of virtual dependency definitions: value dependencies and object dependencies.
A value dependency defines a table constraint, such as a unique key, or a relationship between the columns of two or more tables. An object dependency defines a parent-child relationship between two objects at a destination database.
The following sections describe using virtual dependency definitions:
See Also:
"Apply Processes and Dependencies" for more information about virtual dependency definitionsUse the SET_VALUE_DEPENDENCY
procedure in the DBMS_APPLY_ADM
package to set or unset a value dependency. The following sections describe scenarios for using value dependencies:
This scenario involves an environment that shares many tables between a source database and destination database, but the schema that owns the tables is different at these two databases. Also, in this replication environment, the source database is in the United States and the destination database is in England. A design firm uses dozens of tables to describe product designs, but the tables use United States measurements (inches, feet, and so on) in the source database and metric measurements in the destination database. The name of the schema that owns the database objects at the source database is us_designs, while the name of the schema at the destination database is uk_designs. Therefore, the schema name of the shared database objects must be changed before apply, and all of the measurements must be converted from United States measurements to metric measurements. Both databases use the same constraints to enforce dependencies between database objects.
Rule-based transformations could make the required changes, but the goal is to apply multiple LCRs in parallel. Rule-based transformations must apply LCRs serially. So, a DML handler is configured at the destination database to make the required changes to the LCRs, and apply process parallelism is set to 5. In this environment, the destination database has no information about the schema us_designs in the LCRs being sent from the source database. Because an apply process calculates dependencies before passing LCRs to apply handlers, the apply process must be informed about the dependencies between LCRs. Value dependencies can be used to describe these dependencies.
In this scenario, suppose a number of tables describe different designs, and each of these tables has a primary key. One of these tables is design_53, and the primary key column is key_53. Also, a table named all_designs_summary includes a summary of all of the individual designs, and this table has a foreign key column for each design table. The all_designs_summary table includes a key_53 column, which is a foreign key of the primary key in the design_53 table. To inform an apply process about the relationship between these tables, run the following procedures to create a value dependency at the destination database:
BEGIN
  DBMS_APPLY_ADM.SET_VALUE_DEPENDENCY(
    dependency_name => 'key_53_foreign_key',
    object_name     => 'us_designs.design_53',
    attribute_list  => 'key_53');
END;
/

BEGIN
  DBMS_APPLY_ADM.SET_VALUE_DEPENDENCY(
    dependency_name => 'key_53_foreign_key',
    object_name     => 'us_designs.all_designs_summary',
    attribute_list  => 'key_53');
END;
/
Notice that the value dependencies use the schema at the source database (us_designs) because LCRs contain the source database schema. The schema will be changed to uk_designs by the DML handler after the apply process passes the row LCRs to the handler.
To unset a value dependency, run the SET_VALUE_DEPENDENCY procedure, and specify the name of the value dependency in the dependency_name parameter and NULL in the object_name parameter. For example, to unset the key_53_foreign_key value dependency that was set previously, run the following procedure:
BEGIN
  DBMS_APPLY_ADM.SET_VALUE_DEPENDENCY(
    dependency_name => 'key_53_foreign_key',
    object_name     => NULL,
    attribute_list  => NULL);
END;
/
See Also:
"Managing a DML Handler"This scenarios involves an environment in which foreign key constraints are used for shared tables at the source database, but no constraints are used for these tables at the destination database. In the replication environment, the destination database is used as a data warehouse where data is written to the database far more often than it is queried. To optimize write operations, no constraints are defined at the destination database.
In such an environment, apply processes running on the destination database must be informed about the constraints to apply transactions consistently. Value dependencies can be used to inform the apply processes about these constraints.
For example, assume that the orders and order_items tables in the oe schema are shared between the source database and the destination database in this environment. On the source database, the order_id column is a primary key in the orders table, and the order_id column in the order_items table is a foreign key that matches the primary key column in the orders table. At the destination database, these constraints have been removed. Run the following procedures to create a value dependency at the destination database that informs apply processes about the relationship between the columns in these tables:
BEGIN
  DBMS_APPLY_ADM.SET_VALUE_DEPENDENCY(
    dependency_name => 'order_id_foreign_key',
    object_name     => 'oe.orders',
    attribute_list  => 'order_id');
END;
/

BEGIN
  DBMS_APPLY_ADM.SET_VALUE_DEPENDENCY(
    dependency_name => 'order_id_foreign_key',
    object_name     => 'oe.order_items',
    attribute_list  => 'order_id');
END;
/
Also, in this environment, the following actions should be performed so that apply processes can apply transactions consistently:
Value dependencies should be set for each column that has a unique key or bitmap index at the source database.
The DBMS_APPLY_ADM.SET_KEY_COLUMNS procedure should set substitute key columns for the columns that are primary key columns at the source database.
To unset the value dependency that was set previously, run the following procedure:
BEGIN
  DBMS_APPLY_ADM.SET_VALUE_DEPENDENCY(
    dependency_name => 'order_id_foreign_key',
    object_name     => NULL,
    attribute_list  => NULL);
END;
/
Use the CREATE_OBJECT_DEPENDENCY and DROP_OBJECT_DEPENDENCY procedures in the DBMS_APPLY_ADM package to create or drop an object dependency. The following sections provide detailed instructions for creating and dropping object dependencies.
An object dependency can be used when row LCRs for a particular table always should be applied before the row LCRs for another table, and the data dictionary of the destination database does not contain a constraint to enforce this relationship. When you define an object dependency, the table whose row LCRs should be applied first is the parent table and the table whose row LCRs should be applied second is the child table.
For example, consider an Oracle Streams replication environment with the following characteristics:
The following tables in the ord schema are shared between a source and destination database:
The customers table contains information about customers, including each customer's shipping address.
The orders table contains information about each order.
The order_items table contains information about the items ordered in each order.
The ship_orders table contains information about orders that are ready to ship, but it does not contain detailed information about the customer or information about individual items to ship with each order.
The ship_orders table has no relationships, defined by constraints, with the other tables.
Information about orders is entered into the source database and propagated to the destination database, where it is applied.
The destination database site is a warehouse where orders are shipped to customers. At this site, a DML handler uses the information in the ship_orders, customers, orders, and order_items tables to generate a report that includes the customer's shipping address and the items to ship.
The information in the report generated by the DML handler must be consistent with the time when the ship order record was created. An object dependency at the destination database can accomplish this goal. In this case, the ship_orders table is the parent table of the following child tables: customers, orders, and order_items. Because ship_orders is the parent of these tables, any changes to these tables made after a record in the ship_orders table was entered will not be applied until the DML handler has generated the report for the ship order.
To create these object dependencies, run the following procedures at the destination database:
BEGIN
  DBMS_APPLY_ADM.CREATE_OBJECT_DEPENDENCY(
    object_name        => 'ord.customers',
    parent_object_name => 'ord.ship_orders');
END;
/

BEGIN
  DBMS_APPLY_ADM.CREATE_OBJECT_DEPENDENCY(
    object_name        => 'ord.orders',
    parent_object_name => 'ord.ship_orders');
END;
/

BEGIN
  DBMS_APPLY_ADM.CREATE_OBJECT_DEPENDENCY(
    object_name        => 'ord.order_items',
    parent_object_name => 'ord.ship_orders');
END;
/
See Also:
"Managing a DML Handler"To drop the object dependencies created in "Creating an Object Dependency", run the following procedure:
BEGIN
  DBMS_APPLY_ADM.DROP_OBJECT_DEPENDENCY(
    object_name        => 'ord.customers',
    parent_object_name => 'ord.ship_orders');
END;
/

BEGIN
  DBMS_APPLY_ADM.DROP_OBJECT_DEPENDENCY(
    object_name        => 'ord.orders',
    parent_object_name => 'ord.ship_orders');
END;
/

BEGIN
  DBMS_APPLY_ADM.DROP_OBJECT_DEPENDENCY(
    object_name        => 'ord.order_items',
    parent_object_name => 'ord.ship_orders');
END;
/
This section describes the following tasks:
Set an update conflict handler using the SET_UPDATE_CONFLICT_HANDLER procedure in the DBMS_APPLY_ADM package. You can use one of the following prebuilt methods when you create an update conflict resolution handler:
OVERWRITE
DISCARD
MAXIMUM
MINIMUM
For example, suppose an Oracle Streams environment captures changes to the hr.jobs table at dbs1.net and propagates these changes to the dbs2.net destination database, where they are applied. In this environment, applications can perform DML changes on the hr.jobs table at both databases, but, if there is a conflict for a particular DML change, then the change at the dbs1.net database should always overwrite the change at the dbs2.net database. In this environment, you can accomplish this goal by specifying an OVERWRITE handler at the dbs2.net database.
To specify an update conflict handler for the hr.jobs table in the hr schema at the dbs2.net database, run the following procedure at dbs2.net:
DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1) := 'job_title';
  cols(2) := 'min_salary';
  cols(3) := 'max_salary';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'hr.jobs',
    method_name       => 'OVERWRITE',
    resolution_column => 'job_title',
    column_list       => cols);
END;
/
All apply processes running on a database that apply changes to the specified table locally use the specified update conflict handler.
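To verify which update conflict handlers are set in a database, one option is to query the DBA_APPLY_CONFLICT_COLUMNS data dictionary view:

SELECT object_owner, object_name, method_name, resolution_column, column_name
  FROM DBA_APPLY_CONFLICT_COLUMNS;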
Note:
The resolution_column is not used for OVERWRITE and DISCARD methods, but one of the columns in the column_list still must be specified.
You must specify a conditional supplemental log group at the source database for all of the columns in the column_list at the destination database. In this example, you would specify a conditional supplemental log group including the job_title, min_salary, and max_salary columns in the hr.jobs table at the dbs1.net database.
Prebuilt update conflict handlers do not support LOB, LONG, LONG RAW, user-defined type, and Oracle-supplied type columns. Therefore, you should not include these types of columns in the column_list parameter when running the SET_UPDATE_CONFLICT_HANDLER procedure.
See Also:
"Managing Supplemental Logging in an Oracle Streams Replication Environment"
Chapter 21, "N-Way Replication Example" for an example Oracle Streams environment that illustrates using the MAXIMUM
prebuilt method for time-based conflict resolution
You can modify an existing update conflict handler by running the SET_UPDATE_CONFLICT_HANDLER procedure in the DBMS_APPLY_ADM package. To update an existing conflict handler, specify the same table and resolution column as the existing conflict handler.
To modify the update conflict handler created in "Setting an Update Conflict Handler", you specify the hr.jobs table and the job_title column as the resolution column. You can modify this update conflict handler by specifying a different type of prebuilt method or a different column list, or both. However, if you want to change the resolution column for an update conflict handler, then you must remove and re-create the handler.
For example, suppose the environment changes, and you want changes from dbs1.net to be discarded in the event of a conflict, whereas previously changes from dbs1.net overwrote changes at dbs2.net. You can accomplish this goal by specifying a DISCARD handler at the dbs2.net database.
To modify the existing update conflict handler for the hr.jobs table in the hr schema at the dbs2.net database, run the following procedure:
DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1) := 'job_title';
  cols(2) := 'min_salary';
  cols(3) := 'max_salary';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'hr.jobs',
    method_name       => 'DISCARD',
    resolution_column => 'job_title',
    column_list       => cols);
END;
/
You can remove an existing update conflict handler by running the SET_UPDATE_CONFLICT_HANDLER procedure in the DBMS_APPLY_ADM package. To remove an existing conflict handler, specify NULL for the method, and specify the same table, column list, and resolution column as the existing conflict handler.
For example, suppose you want to remove the update conflict handler created in "Setting an Update Conflict Handler" and then modified in "Modifying an Existing Update Conflict Handler". To remove this update conflict handler, run the following procedure:
DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1) := 'job_title';
  cols(2) := 'min_salary';
  cols(3) := 'max_salary';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'hr.jobs',
    method_name       => NULL,
    resolution_column => 'job_title',
    column_list       => cols);
END;
/
You can stop conflict detection for nonkey columns using the COMPARE_OLD_VALUES procedure in the DBMS_APPLY_ADM package.
For example, suppose you configure a time column for conflict resolution for the hr.employees table, as described in "MAXIMUM". In this case, you can decide to stop conflict detection for the other nonkey columns in the table. After adding the time column and creating the trigger as described in that section, add the columns in the hr.employees table to the column list for an update conflict handler:
DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1)  := 'first_name';
  cols(2)  := 'last_name';
  cols(3)  := 'email';
  cols(4)  := 'phone_number';
  cols(5)  := 'hire_date';
  cols(6)  := 'job_id';
  cols(7)  := 'salary';
  cols(8)  := 'commission_pct';
  cols(9)  := 'manager_id';
  cols(10) := 'department_id';
  cols(11) := 'time';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'hr.employees',
    method_name       => 'MAXIMUM',
    resolution_column => 'time',
    column_list       => cols);
END;
/
This example does not include the primary key for the table in the column list because it assumes that the primary key is never updated. However, other key columns are included in the column list.
To stop conflict detection for all nonkey columns in the table for both UPDATE and DELETE operations at a destination database, run the following procedure:
DECLARE
  cols DBMS_UTILITY.LNAME_ARRAY;
BEGIN
  cols(1) := 'first_name';
  cols(2) := 'last_name';
  cols(3) := 'email';
  cols(4) := 'phone_number';
  cols(5) := 'hire_date';
  cols(6) := 'job_id';
  cols(7) := 'salary';
  cols(8) := 'commission_pct';
  DBMS_APPLY_ADM.COMPARE_OLD_VALUES(
    object_name  => 'hr.employees',
    column_table => cols,
    operation    => '*',
    compare      => FALSE);
END;
/
The asterisk (*
) specified for the operation
parameter means that conflict detection is stopped for both UPDATE
and DELETE
operations. After you run this procedure, all apply processes running on the database that apply changes to the specified table locally do not detect conflicts on the specified columns. Therefore, in this example, the time
column is the only column used for conflict detection.
Note:
The example in this section sets an update conflict handler before stopping conflict detection for nonkey columns. However, an update conflict handler is not required before you stop conflict detection for nonkey columns.
See Also:
Chapter 21, "N-Way Replication Example" for a detailed example that uses time-based conflict resolution
Oracle Database PL/SQL Packages and Types Reference for more information about the COMPARE_OLD_VALUES
procedure
You can set or get the value of the tags generated by the current session or by an apply process. The following sections describe how to set and get tag values.
This section contains instructions for setting and getting the tag for the current session.
You can set the tag for all redo entries generated by the current session using the SET_TAG
procedure in the DBMS_STREAMS
package. For example, to set the tag to the hexadecimal value of '1D'
in the current session, run the following procedure:
BEGIN DBMS_STREAMS.SET_TAG( tag => HEXTORAW('1D')); END; /
After running this procedure, each redo entry generated by DML or DDL statements in the current session will have a tag value of 1D
. Running this procedure affects only the current session.
The following are considerations for the SET_TAG
procedure:
This procedure is not transactional. That is, the effects of SET_TAG
cannot be rolled back.
If the SET_TAG
procedure is run to set a non-NULL
session tag before a data dictionary build has been performed on the database, then the redo entries for a transaction that started before the dictionary build might not include the specified tag value for the session. Therefore, perform a data dictionary build before using the SET_TAG
procedure in a session. A data dictionary build happens when the DBMS_CAPTURE_ADM.BUILD
procedure is run. The BUILD
procedure can be run automatically when a capture process is created.
You can get the tag for all redo entries generated by the current session using the GET_TAG
function in the DBMS_STREAMS
package. For example, to get the hexadecimal value of the tags generated in the redo entries for the current session, run the following procedure:
SET SERVEROUTPUT ON DECLARE raw_tag RAW(2048); BEGIN raw_tag := DBMS_STREAMS.GET_TAG(); DBMS_OUTPUT.PUT_LINE('Tag Value = ' || RAWTOHEX(raw_tag)); END; /
You can also display the tag value for the current session by querying the DUAL
table:
SELECT DBMS_STREAMS.GET_TAG FROM DUAL;
This section contains instructions for setting and removing the tag for an apply process.
See Also:
"Tags and an Apply Process" for conceptual information about how tags are used by an apply process and apply handlers
An apply process generates redo entries when it applies changes to a database or invokes handlers. You can set the default tag for all redo entries generated by an apply process when you create the apply process using the CREATE_APPLY
procedure in the DBMS_APPLY_ADM
package, or when you alter an existing apply process using the ALTER_APPLY
procedure in the DBMS_APPLY_ADM
package. In both of these procedures, set the apply_tag
parameter to the value you want to specify for the tags generated by the apply process.
For example, to set the value of the tags generated in the redo log by an existing apply process named strep01_apply
to the hexadecimal value of '7'
, run the following procedure:
BEGIN DBMS_APPLY_ADM.ALTER_APPLY( apply_name => 'strep01_apply', apply_tag => HEXTORAW('7')); END; /
After running this procedure, each redo entry generated by the apply process will have a tag value of 7
.
You remove the apply tag for an apply process by setting the remove_apply_tag
parameter to TRUE
in the ALTER_APPLY
procedure in the DBMS_APPLY_ADM
package. Removing the apply tag means that each redo entry generated by the apply process has a NULL
tag. For example, the following procedure removes the apply tag from an apply process named strep01_apply
.
BEGIN DBMS_APPLY_ADM.ALTER_APPLY( apply_name => 'strep01_apply', remove_apply_tag => TRUE); END; /
The following sections describe how to split and merge streams and provide examples that do so:
Splitting and merging an Oracle Streams destination is useful under the following conditions:
A capture process captures changes that are sent to two or more destination databases.
A destination queue stops accepting propagated changes captured by the capture process. The destination queue might stop accepting changes if, for example, the database that contains the queue goes down, there is a problem with the destination queue, or the computer system running the database that contains the queue fails.
When these conditions are met, captured changes that cannot be sent to a destination queue remain in the source queue, causing the source queue size to increase. Eventually, the source queue will spill captured LCRs to hard disk, and the performance of the Oracle Streams replication environment will suffer.
Figure 9-1 shows an Oracle Streams replication environment with a problem destination. Destination database A is down, and messages intended for destination database A are building up in the queue at the capture database.
Figure 9-1 Problem Destination in an Oracle Streams Replication Environment
You can use the following data dictionary views to determine when there is a problem with a propagation:
Query the V$BUFFERED_QUEUES
view to identify how many messages are in a buffered queue and how many of these messages have spilled to hard disk.
Query the DBA_PROPAGATION
and V$PROPAGATION_SENDER
views to show the propagations in a database and the status of each propagation.
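For example, the following queries are a minimal sketch of these checks; the columns shown are standard in these views, but adapt the queries to your environment:
SELECT QUEUE_SCHEMA, QUEUE_NAME, NUM_MSGS, SPILL_MSGS
  FROM V$BUFFERED_QUEUES;

SELECT PROPAGATION_NAME, STATUS
  FROM DBA_PROPAGATION;
A large or growing SPILL_MSGS value, or a propagation with a STATUS of DISABLED or ABORTED, typically indicates a problem destination.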
To avoid degraded performance in this situation, use the SPLIT_STREAMS
, MERGE_STREAMS_JOB
, and MERGE_STREAMS
procedures in the DBMS_STREAMS_ADM
package. The SPLIT_STREAMS
procedure splits off the stream for the problem propagation destination from all of the other streams flowing from a capture process to other destinations. The SPLIT_STREAMS
procedure clones the capture process, queue, and propagation. The cloned versions of these components are used by the stream that is split off. While the stream that cannot propagate changes is split off, the streams to other destinations proceed as usual.
Figure 9-2 shows the cloned stream created by the SPLIT_STREAMS
procedure.
When the problem destination becomes available again, you start the cloned capture process, and the cloned stream begins to send messages to the destination database again.
Figure 9-3 shows a destination database A that is up and running and a cloned capture process that is enabled at the capture database. The cloned stream begins to flow and starts to catch up to the original streams.
Figure 9-3 Cloned Stream Begins Flowing and Starts to Catch Up to One of the Original Oracle Streams
When the cloned propagation catches up to one of the original propagations, you can run one of the following procedures to merge the streams:
The MERGE_STREAMS
procedure merges the stream that was split off back into the other streams flowing from the original capture process.
The MERGE_STREAMS_JOB
procedure determines whether the streams are within a user-specified merge threshold. If they are, then the MERGE_STREAMS_JOB
procedure runs the MERGE_STREAMS
procedure. If the streams are not within the merge threshold, then the MERGE_STREAMS_JOB
procedure does nothing.
Typically, it is best to run the MERGE_STREAMS_JOB
procedure instead of running the MERGE_STREAMS
procedure directly, because the MERGE_STREAMS_JOB
procedure determines whether the streams are ready to merge before merging them.
Figure 9-4 shows the results of running the MERGE_STREAMS
procedure. The Oracle Streams replication environment has its original components, and all of the streams are flowing normally.
When you split streams with the SPLIT_STREAMS
procedure, the auto_merge_threshold
parameter gives you the option of automatically merging the stream back to the original capture process when the problem at the destination is corrected. After the destination queue for the cloned propagation is accepting messages, you can start the cloned capture process and wait for the cloned capture process to catch up to the original capture process. When the cloned capture process nearly catches up, the auto_merge_threshold
parameter setting determines whether the split stream is merged automatically or manually:
When auto_merge_threshold
is set to a positive number, the SPLIT_STREAMS
procedure creates an Oracle Scheduler job with a schedule. The job runs the MERGE_STREAMS_JOB
procedure and specifies a merge threshold equal to the value specified in the auto_merge_threshold
parameter. You can modify the schedule for a job after it is created.
In this case, the split stream is merged back with the original streams automatically when the difference, in seconds, between CAPTURE_MESSAGE_CREATE_TIME
in the GV$STREAMS_CAPTURE
view of the cloned capture process and the original capture process is less than or equal to the value specified for the auto_merge_threshold
parameter. The CAPTURE_MESSAGE_CREATE_TIME
column records the time when a captured change was recorded in the redo log.
When auto_merge_threshold
is set to NULL
or 0
(zero), the split stream is not merged back with the original streams automatically. To merge the split stream with the original streams, run the MERGE_STREAMS_JOB
or MERGE_STREAMS
procedure manually.
The SPLIT_STREAMS
and MERGE_STREAMS
procedures can perform actions directly or generate a script that performs the actions when the script is run. Using a procedure to perform actions directly is simpler than running a script, and the split or merge operation is performed immediately. However, you might choose to generate a script for the following reasons:
You want to review the actions performed by the procedure before splitting or merging streams.
You want to modify the script to customize its actions.
For example, you might choose to modify the script if you want to change the rules in the rule set for the cloned capture process. In some Oracle Streams replication environments, only a subset of the changes made to the source database are sent to each destination database, and each destination database might receive a different subset of the changes. In such an environment, you can modify the rule set for the cloned capture process so that it only captures changes that are propagated by the cloned propagation.
The perform_actions
parameter in each procedure controls whether the procedure performs actions directly:
To split or merge streams directly when you run one of these procedures, set the perform_actions
parameter to TRUE
. The default value for this parameter is TRUE
.
To generate a script when you run one of these procedures, set the perform_actions
parameter to FALSE
, and use the script_name
and script_directory_object
parameters to specify the name and location of the script.
The following sections provide instructions for splitting and merging streams:
Splitting and Merging an Oracle Streams Destination Directly and Automatically
Splitting and Merging an Oracle Streams Destination Manually With Scripts
In both examples, the Oracle Streams replication environment has the following properties:
A single capture process named strms_capture
captures changes that are sent to three destination databases.
The propagations that send these changes to the destination queues at the destination databases are the following:
strms_prop_a
strms_prop_b
strms_prop_c
A queue named streams_queue
is the source queue for all three propagations.
There is a problem at the destination for the strms_prop_a
propagation. This propagation cannot send messages to the destination queue, and the retained messages are causing the source queue streams_queue
size to increase.
The other two propagations (strms_prop_b
and strms_prop_c
) are propagating messages normally.
See Also:
Oracle Database PL/SQL Packages and Types Reference for more information about the SPLIT_STREAMS
procedure and the MERGE_STREAMS
procedure.
The example in this section splits and merges streams directly. That is, the perform_actions
parameter is set to TRUE
in the SPLIT_STREAMS
procedure. Also, the example merges the streams automatically at the appropriate time. That is, the auto_merge_threshold
parameter is set to a positive number (60) in the SPLIT_STREAMS
procedure.
Complete the following steps to split streams directly and merge streams automatically:
Connect as the Oracle Streams administrator:
CONNECT strmadmin/user-password
Run the following procedure to split the stream flowing through propagation strms_prop_a
from the other propagations flowing from the strms_capture
capture process:
DECLARE schedule_name VARCHAR2(30); job_name VARCHAR2(30); BEGIN schedule_name := 'merge_job1_schedule'; job_name := 'merge_job1'; DBMS_STREAMS_ADM.SPLIT_STREAMS( propagation_name => 'strms_prop_a', cloned_propagation_name => 'cloned_prop_a', cloned_queue_name => 'cloned_queue', cloned_capture_name => 'cloned_capture', perform_actions => TRUE, auto_merge_threshold => 60, schedule_name => schedule_name, merge_job_name => job_name); END; /
Running this procedure performs the following actions:
Creates a new queue called cloned_queue
.
Creates a new propagation called cloned_prop_a
that propagates messages from the cloned_queue
queue to the existing destination queue used by the strms_prop_a
propagation. The cloned propagation cloned_prop_a
uses the same rule set as the original propagation strms_prop_a
.
Stops the capture process strms_capture
.
Queries the acknowledged SCN for the original propagation strms_prop_a
. The acknowledged SCN is the last SCN acknowledged by the apply process that applies the changes sent by the propagation. The ACKED_SCN
value in the DBA_PROPAGATION
view shows the acknowledged SCN for a propagation.
Creates a new capture process called cloned_capture
. The start SCN for cloned_capture
is set to the value of the acknowledged SCN for the strms_prop_a
propagation. The cloned capture process cloned_capture
uses the same rule set as the original capture process strms_capture
.
Drops the original propagation strms_prop_a
.
Starts the original capture process strms_capture
with the start SCN set to the value of the acknowledged SCN for the strms_prop_a
propagation.
Creates an Oracle Scheduler job named merge_job1
with a schedule named merge_job1_schedule
. Both the job and the schedule are owned by the user who ran the SPLIT_STREAMS
procedure. The schedule starts to run when the SPLIT_STREAMS
procedure completes. The system defines the initial schedule, but you can modify it in the same way that you would modify any Oracle Scheduler job. See Oracle Database Administrator's Guide for instructions.
Correct the problem with the destination of cloned_prop_a
. The problem is corrected when the destination queue for cloned_prop_a
can accept propagated messages and an apply process at the destination database can dequeue and process these messages.
While connected as the Oracle Streams administrator, start the cloned capture process by running the following procedure:
exec DBMS_CAPTURE_ADM.START_CAPTURE('cloned_capture');
After the cloned capture process cloned_capture
starts running, it captures changes that satisfy its rule sets from the acknowledged SCN forward. These changes are propagated by the cloned_prop_a
propagation and processed by an apply process at the destination database.
During this time, the Oracle Scheduler job runs the MERGE_STREAMS_JOB
procedure according to its schedule. When the difference between CAPTURE_MESSAGE_CREATE_TIME
in the GV$STREAMS_CAPTURE
view of the cloned capture process cloned_capture
and the original capture process strms_capture
is less than or equal to 60 seconds, the MERGE_STREAMS_JOB
procedure determines that the streams are ready to merge. The MERGE_STREAMS_JOB
procedure runs the MERGE_STREAMS
procedure automatically to merge the streams back together.
The MERGE_STREAMS
procedure performs the following actions:
Stops the cloned capture process cloned_capture
.
Stops the original capture process strms_capture
.
Re-creates the propagation called strms_prop_a
. This propagation propagates messages from the streams_queue
queue to the existing destination queue used by the cloned_prop_a
propagation. The re-created propagation strms_prop_a
uses the same rule set as the cloned propagation cloned_prop_a
.
Starts the original capture process strms_capture
from the lower SCN value of these two SCN values:
The acknowledged SCN of the cloned propagation cloned_prop_a
.
The lowest acknowledged SCN of the other propagations that propagate changes captured by the original capture process (propagations strms_prop_b
and strms_prop_c
in this example).
When the strms_capture
capture process is started, it might recapture changes that it already captured, or it might capture changes that were already captured by the cloned capture process cloned_capture
. In either case, the relevant apply processes will discard any duplicate changes they receive.
Drops the cloned propagation cloned_prop_a
.
Drops the cloned capture process cloned_capture
.
Drops the cloned queue cloned_queue
.
After the streams are merged, the Oracle Streams replication environment has the same components as it had before the split and merge operation.
The example in this section splits and merges streams by generating and running scripts. That is, the perform_actions
parameter is set to FALSE
in the SPLIT_STREAMS
procedure. Also, the example merges the streams manually at the appropriate time. That is, the auto_merge_threshold
parameter is set to NULL
in the SPLIT_STREAMS
procedure.
Complete the following steps to use scripts to split and merge streams:
Connect as the Oracle Streams administrator:
CONNECT strmadmin/user-password
If it does not already exist, then create a directory object named db_dir
to hold the scripts generated by the procedures:
CREATE DIRECTORY db_dir AS '/usr/db_files';
Run the following procedure to generate a script to split the streams:
DECLARE schedule_name VARCHAR2(30); job_name VARCHAR2(30); BEGIN DBMS_STREAMS_ADM.SPLIT_STREAMS( propagation_name => 'strms_prop_a', cloned_propagation_name => 'cloned_prop_a', cloned_queue_name => 'cloned_queue', cloned_capture_name => 'cloned_capture', perform_actions => FALSE, script_name => 'split.sql', script_directory_object => 'db_dir', auto_merge_threshold => NULL, schedule_name => schedule_name, merge_job_name => job_name); END; /
Running this procedure generates the split.sql
script. The script contains the actions that will split the stream flowing through propagation strms_prop_a
from the other propagations flowing from the strms_capture
capture process.
Go to the directory used by the db_dir
directory object, and open the split.sql
script with a text editor.
Examine the script and make modifications, if necessary.
Save and close the script.
While connected as the Oracle Streams administrator in SQL*Plus, run the script:
@/usr/db_files/split.sql
Running the script performs the following actions:
Runs the SET_UP_QUEUE
procedure in the DBMS_STREAMS_ADM
package to create a new queue called cloned_queue
.
Runs the CREATE_PROPAGATION
procedure in the DBMS_PROPAGATION_ADM
package to create a new propagation called cloned_prop_a
. This new propagation propagates messages from the cloned_queue
queue to the existing destination queue used by the strms_prop_a
propagation. The cloned propagation cloned_prop_a
uses the same rule set as the original propagation strms_prop_a
.
The CREATE_PROPAGATION
procedure sets the original_propagation_name
parameter to strms_prop_a
and the auto_merge_threshold
parameter to NULL
.
Runs the STOP_CAPTURE
procedure in the DBMS_CAPTURE_ADM
package to stop the capture process strms_capture
.
Queries the acknowledged SCN for the original propagation strms_prop_a
. The acknowledged SCN is the last SCN acknowledged by the apply process that applies the changes sent by the propagation. The ACKED_SCN
value in the DBA_PROPAGATION
view shows the acknowledged SCN for a propagation.
Runs the CREATE_CAPTURE
procedure in the DBMS_CAPTURE_ADM
package to create a new capture process called cloned_capture
. The start SCN for cloned_capture
is set to the value of the acknowledged SCN for the strms_prop_a
propagation. The cloned capture process cloned_capture
uses the same rule set as the original capture process strms_capture
.
Runs the DROP_PROPAGATION
procedure in the DBMS_PROPAGATION_ADM
package to drop the original propagation strms_prop_a
.
Runs the START_CAPTURE
procedure in the DBMS_CAPTURE_ADM
package to start the original capture process strms_capture
with the start SCN set to the value of the acknowledged SCN for the strms_prop_a
propagation.
Correct the problem with the destination of cloned_prop_a
. The problem is corrected when the destination queue for cloned_prop_a
can accept propagated messages and an apply process at the destination database can dequeue and process these messages.
While connected as the Oracle Streams administrator, start the cloned capture process by running the following procedure:
exec DBMS_CAPTURE_ADM.START_CAPTURE('cloned_capture');
Monitor the Oracle Streams replication environment until the cloned capture process catches up to, or nearly catches up to, the original capture process. Specifically, query the CAPTURE_MESSAGE_CREATE_TIME
column in the GV$STREAMS_CAPTURE
view for each capture process.
Run the following query to check the CAPTURE_MESSAGE_CREATE_TIME
for each capture process periodically:
SELECT CAPTURE_NAME, TO_CHAR(CAPTURE_MESSAGE_CREATE_TIME, 'HH24:MI:SS MM/DD/YY') FROM GV$STREAMS_CAPTURE;
Do not move on to the next step until the difference between CAPTURE_MESSAGE_CREATE_TIME
of the cloned capture process cloned_capture
and the original capture process strms_capture
is relatively small.
Run the following procedure to generate a script to merge the streams:
BEGIN DBMS_STREAMS_ADM.MERGE_STREAMS( cloned_propagation_name => 'cloned_prop_a', perform_actions => FALSE, script_name => 'merge.sql', script_directory_object => 'db_dir'); END; /
Running this procedure generates the merge.sql
script. The script contains the actions that will merge the stream flowing through propagation cloned_prop_a
with the other propagations flowing from the strms_capture
capture process.
Go to the directory used by the db_dir
directory object, and open the merge.sql
script with a text editor.
Examine the script and make modifications, if necessary.
Save and close the script.
While connected as the Oracle Streams administrator in SQL*Plus, run the script:
@/usr/db_files/merge.sql
Running the script performs the following actions:
Runs the STOP_CAPTURE
procedure in the DBMS_CAPTURE_ADM
package to stop the cloned capture process cloned_capture
.
Runs the STOP_CAPTURE
procedure in the DBMS_CAPTURE_ADM
package to stop the original capture process strms_capture
.
Runs the CREATE_PROPAGATION
procedure in the DBMS_PROPAGATION_ADM
package to re-create the propagation called strms_prop_a
. This propagation propagates messages from the streams_queue
queue to the existing destination queue used by the cloned_prop_a
propagation. The re-created propagation strms_prop_a
uses the same rule set as the cloned propagation cloned_prop_a
.
Starts the original capture process strms_capture
from the lower SCN value of these two SCN values:
The acknowledged SCN of the cloned propagation cloned_prop_a
.
The lowest acknowledged SCN of the other propagations that propagate changes captured by the original capture process (propagations strms_prop_b
and strms_prop_c
in this example).
When the strms_capture
capture process is started, it might recapture changes that it already captured, or it might capture changes that were already captured by the cloned capture process cloned_capture
. In either case, the relevant apply processes will discard any duplicate changes they receive.
Runs the DROP_PROPAGATION
procedure in the DBMS_PROPAGATION_ADM
package to drop the cloned propagation cloned_prop_a
.
Runs the DROP_CAPTURE
procedure in the DBMS_CAPTURE_ADM
package to drop the cloned capture process cloned_capture
.
Runs the REMOVE_QUEUE
procedure in the DBMS_STREAMS_ADM
package to drop the cloned queue cloned_queue
.
After the script runs successfully, the streams are merged, and the Oracle Streams replication environment has the same components as it had before the split and merge operation.
Typically, database administrators change the DBID
and global name of a database when it is a clone of another database. You can view the DBID
of a database by querying the DBID
column in the V$DATABASE
dynamic performance view, and you can view the global name of a database by querying the GLOBAL_NAME
static data dictionary view. When you change the DBID
or global name of a source database, any existing capture processes that capture changes originating at this source database become unusable. The capture processes can be local capture processes or downstream capture processes that capture changes that originated at the source database. Also, any existing apply processes that apply changes from the source database become unusable. However, existing synchronous captures and propagations do not need to be re-created, although modifications to propagation rules might be necessary.
If a capture process or synchronous capture is capturing changes to a source database for which you have changed the DBID
or global name, then complete the following steps:
Shut down the source database.
Restart the source database with RESTRICTED
SESSION
enabled using STARTUP
RESTRICT
.
Drop the capture process using the DROP_CAPTURE
procedure in the DBMS_CAPTURE_ADM
package. The capture process can be a local capture process at the source database or a downstream capture process at a remote database. Synchronous captures do not need to be dropped.
At the source database, run the ALTER
SYSTEM
SWITCH
LOGFILE
statement on the database.
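For example:
ALTER SYSTEM SWITCH LOGFILE;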
If any changes have been captured from the source database, then manually resynchronize the data at all destination databases that apply changes originating at this source database. If the database never captured any changes, then this step is not necessary.
Modify any rules that use the source database name as a condition. The source database name should be changed to the new global name of the source database where appropriate in these rules. You might need to modify capture process rules, propagation rules, and apply process rules at the local database and at remote databases in the environment. Typically, synchronous capture rules do not contain a condition for the source database.
Drop the apply processes that apply changes from the capture process that you dropped in Step 3. Use the DROP_APPLY
procedure in the DBMS_APPLY_ADM
package to drop an apply process. Apply processes that apply changes captured by synchronous capture do not need to be dropped.
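For example, the following call is a sketch of dropping one such apply process, where strm01_apply is a placeholder name:
exec DBMS_APPLY_ADM.DROP_APPLY('strm01_apply');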
At each destination database that applies changes from the source database, re-create the apply processes you dropped in Step 7. You might want to associate each apply process with the same rule sets it used before it was dropped. See "Creating an Apply Process That Applies Captured LCRs" for instructions.
Re-create the capture process you dropped in Step 3, if necessary. You might want to associate the capture process with the same rule sets used by the capture process you dropped in Step 3. See "Creating a Capture Process" for instructions.
At the source database, prepare database objects whose changes will be captured by the re-created capture process for instantiation. See "Preparing Database Objects for Instantiation at a Source Database".
At each destination database that applies changes from the source database, set the instantiation SCN for all database objects to which changes from the source database will be applied. See "Setting Instantiation SCNs at a Destination Database" for instructions.
Disable the restricted session using the ALTER
SYSTEM
DISABLE
RESTRICTED
SESSION
statement.
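For example:
ALTER SYSTEM DISABLE RESTRICTED SESSION;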
At each destination database that applies changes from the source database, start the apply processes you created in Step 8.
At the source database, start the capture process you created in Step 9.
See Also:
Oracle Database Utilities for more information about changing the DBID
of a database using the DBNEWID utility
A multiple-source environment is one in which there is more than one source database for any of the shared data. If a source database in a multiple-source environment cannot be recovered to the current point in time, then you can use the method described in this section to resynchronize the source database with the other source databases in the environment. Some reasons why a database cannot be recovered to the current point in time include corrupted archived redo logs or the media failure of an online redo log group.
For example, a bidirectional Oracle Streams environment is one in which exactly two databases share the replicated database objects and data. In this example, assume that database A is the database that must be resynchronized and that database B is the other source database in the environment. To resynchronize database A in this bidirectional Oracle Streams environment, complete the following steps:
Verify that database B has applied all of the changes sent from database A. You can query the V$BUFFERED_SUBSCRIBERS
data dictionary view at database B to determine whether the apply process that applies these changes has any unapplied changes in its queue. See the example on viewing propagations dequeuing LCRs from each buffered queue in Oracle Streams Concepts and Administration for an example of such a query. Do not continue until all of these changes have been applied.
Remove the Oracle Streams configuration from database A by running the REMOVE_STREAMS_CONFIGURATION
procedure in the DBMS_STREAMS_ADM
package. See Oracle Database PL/SQL Packages and Types Reference for more information about this procedure.
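For example, run the following while connected to database A as the Oracle Streams administrator. Note that this procedure removes the entire local Oracle Streams configuration, not just a single component:
exec DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION();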
At database B, drop the apply process that applies changes from database A. Do not drop the rule sets used by this apply process because you will re-create the apply process in a subsequent step.
Complete the steps in "Adding a New Database to an Existing Multiple-Source Environment" to add database A back into the Oracle Streams environment.
Point-in-time recovery is the recovery of a database to a specified noncurrent time, SCN, or log sequence number. The following sections discuss performing point-in-time recovery in an Oracle Streams replication environment:
Performing Point-in-Time Recovery on the Source in a Single-Source Environment
Performing Point-in-Time Recovery in a Multiple-Source Environment
See Also:
Oracle Database Backup and Recovery User's Guide for more information about point-in-time recovery
A single-source Oracle Streams replication environment is one in which there is only one source database for shared data. If database point-in-time recovery is required at the source database in a single-source Oracle Streams environment, and any capture processes that capture changes generated at the source database are running, then you must stop these capture processes before you perform the recovery operation. Both local and downstream capture processes that capture changes generated at the source database must be stopped. Typically, database administrators reset the log sequence number of a database during point-in-time recovery. The ALTER
DATABASE
OPEN
RESETLOGS
statement is an example of a statement that resets the log sequence number.
The instructions in this section assume that the single-source replication environment has the following characteristics:
Only one capture process named strm01_capture
, which can be a local or downstream capture process
Only one destination database with the global name dest.net
Only one apply process named strm01_apply
at the destination database
If point-in-time recovery must be performed on the source database, then you can follow these instructions to recover as many transactions as possible at the source database by using transactions applied at the destination database. These instructions assume that you can identify the transactions applied at the destination database after the source point-in-time SCN and execute these transactions at the source database.
Note:
Oracle recommends that you set the apply process parameter COMMIT_SERIALIZATION
to FULL
when performing point-in-time recovery in a single-source Oracle Streams replication environment.
Complete the following steps to perform point-in-time recovery on the source database in a single-source Oracle Streams replication environment:
Perform point-in-time recovery on the source database if you have not already done so. Note the point-in-time recovery SCN because it is needed in subsequent steps.
Ensure that the source database is in restricted mode.
Stop the capture process using the STOP_CAPTURE
procedure in the DBMS_CAPTURE_ADM
package.
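For example:
exec DBMS_CAPTURE_ADM.STOP_CAPTURE('strm01_capture');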
At the source database, perform a data dictionary build:
SET SERVEROUTPUT ON DECLARE scn NUMBER; BEGIN DBMS_CAPTURE_ADM.BUILD( first_scn => scn); DBMS_OUTPUT.PUT_LINE('First SCN Value = ' || scn); END; /
Note the SCN value returned because it is needed in Step 13.
At the destination database, wait until all of the transactions from the source database in the apply process queue have been applied. The apply processes should become idle when these transactions have been applied. You can query the STATE
column in both the V$STREAMS_APPLY_READER
and V$STREAMS_APPLY_SERVER
views. The state should be IDLE
for the apply process in both views before you continue.
Perform a query at the destination database to determine the highest SCN for a transaction that was applied.
If the apply process is running, then perform the following query:
SELECT HWM_MESSAGE_NUMBER FROM V$STREAMS_APPLY_COORDINATOR WHERE APPLY_NAME = 'STRM01_APPLY';
If the apply process is disabled, then perform the following query:
SELECT APPLIED_MESSAGE_NUMBER FROM DBA_APPLY_PROGRESS WHERE APPLY_NAME = 'STRM01_APPLY';
Note the highest apply SCN returned by the query because it is needed in subsequent steps.
If the highest apply SCN obtained in Step 6 is less than the point-in-time recovery SCN noted in Step 1, then proceed to Step 8. Otherwise, if the highest apply SCN obtained in Step 6 is greater than or equal to the point-in-time recovery SCN noted in Step 1, then the apply process has applied some transactions from the source database after the point-in-time recovery SCN. In this case, complete the following steps:
Manually execute transactions applied after the point-in-time SCN at the source database. When you execute these transactions at the source database, ensure that you set an Oracle Streams tag in the session so that the transactions will not be captured by the capture process. If no such Oracle Streams session tag is set, then these changes can be cycled back to the destination database. See "Managing Oracle Streams Tags for the Current Session" for instructions.
Disable the restricted session at the source database.
If you completed the actions in Step 7, then proceed to Step 12. Otherwise, if the highest apply SCN obtained in Step 6 is less than the point-in-time recovery SCN noted in Step 1, then the apply process has not applied any transactions from the source database after the point-in-time recovery SCN. In this case, complete the following steps:
Disable the restricted session at the source database.
Ensure that the apply process is running at the destination database.
Set the maximum_scn
capture process parameter of the original capture process to the point-in-time recovery SCN using the SET_PARAMETER
procedure in the DBMS_CAPTURE_ADM
package.
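For example, the following call is a sketch that uses 750338 as a stand-in for the point-in-time recovery SCN noted in Step 1 (the value parameter is passed as a string):
BEGIN
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'strm01_capture',
    parameter    => 'maximum_scn',
    value        => '750338');
END;
/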
Set the start SCN of the original capture process to the oldest SCN of the apply process. You can determine the oldest SCN of a running apply process by querying the OLDEST_SCN_NUM
column in the V$STREAMS_APPLY_READER
dynamic performance view at the destination database. To set the start SCN of the capture process, specify the start_scn
parameter when you run the ALTER_CAPTURE
procedure in the DBMS_CAPTURE_ADM
package.
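For example, assuming the oldest SCN returned for the apply process is 750000, the following call resets the start SCN:
BEGIN
  DBMS_CAPTURE_ADM.ALTER_CAPTURE(
    capture_name => 'strm01_capture',
    start_scn    => 750000);
END;
/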
Ensure that the capture process writes information to the alert log by running the following procedure:
BEGIN DBMS_CAPTURE_ADM.SET_PARAMETER( capture_name => 'strm01_capture', parameter => 'write_alert_log', value => 'Y'); END; /
Start the original capture process using the START_CAPTURE
procedure in the DBMS_CAPTURE_ADM
package.
Ensure that the original capture process has captured all changes up to the maximum_scn
setting by querying the CAPTURED_SCN
column in the DBA_CAPTURE
data dictionary view. When the value returned by the query is equal to or greater than the maximum_scn
value, the capture process should stop automatically. When the capture process is stopped, proceed to the next step.
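For example, run the following query periodically until the capture process stops:
SELECT CAPTURE_NAME, CAPTURED_SCN FROM DBA_CAPTURE
  WHERE CAPTURE_NAME = 'STRM01_CAPTURE';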
Find the value of the LAST_ENQUEUE_MESSAGE_NUMBER
in the alert log. Note this value because it is needed in subsequent steps.
At the destination database, wait until all the changes are applied. You can monitor the applied changes for the apply process strm01_apply
by running the following queries at the destination database:
SELECT DEQUEUED_MESSAGE_NUMBER
FROM V$STREAMS_APPLY_READER
WHERE APPLY_NAME = 'STRM01_APPLY' AND
DEQUEUED_MESSAGE_NUMBER = last_enqueue_message_number;
Substitute the LAST_ENQUEUE_MESSAGE_NUMBER
found in the alert log in Step h for last_enqueue_message_number on the last line of the query. When this query returns a row, all of the changes from the capture database have been applied at the destination database.
Also, ensure that the state of the apply process reader server and each apply server is IDLE
. For example, run the following queries for an apply process named strm01_apply
:
SELECT STATE FROM V$STREAMS_APPLY_READER WHERE APPLY_NAME = 'STRM01_APPLY'; SELECT STATE FROM V$STREAMS_APPLY_SERVER WHERE APPLY_NAME = 'STRM01_APPLY';
When both of these queries return IDLE
, move on to the next step.
At the destination database, drop the apply process using the DROP_APPLY
procedure in the DBMS_APPLY_ADM
package.
At the destination database, create a new apply process. The new apply process should use the same queue and rule sets used by the original apply process.
At the destination database, start the new apply process using the START_APPLY
procedure in the DBMS_APPLY_ADM
package.
Drop the original capture process using the DROP_CAPTURE
procedure in the DBMS_CAPTURE_ADM
package.
Create a new capture process using the CREATE_CAPTURE
procedure in the DBMS_CAPTURE_ADM
package to replace the capture process you dropped in Step 12. Specify the SCN returned by the data dictionary build in Step 4 for both the first_scn
and start_scn
parameters. The new capture process should use the same queue and rule sets as the original capture process.
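For example, the following call is a sketch only; the queue name, rule set name, and the SCN value 750338 (standing in for the value returned by the build in Step 4) are placeholders for the values in your environment:
BEGIN
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name    => 'strmadmin.strm01_queue',
    capture_name  => 'strm01_capture',
    rule_set_name => 'strmadmin.strm01_rule_set',
    first_scn     => 750338,
    start_scn     => 750338);
END;
/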
Start the new capture process using the START_CAPTURE
procedure in the DBMS_CAPTURE_ADM
package.
A multiple-source environment is one in which there is more than one source database for any of the shared data. If database point-in-time recovery is required at a source database in a multiple-source Oracle Streams environment, then you can use another source database in the environment to recapture the changes made to the recovered source database after the point-in-time recovery.
For example, in a multiple-source Oracle Streams environment, one source database can become unavailable at time T2 and undergo point-in-time recovery to an earlier time T1. After recovery to T1, transactions performed at the recovered database between T1 and T2 are lost at the recovered database. However, before the recovered database became unavailable, assume that these transactions were propagated to another source database and applied. In this case, this other source database can be used to restore the lost changes to the recovered database.
Specifically, to restore changes made to the recovered database after the point-in-time recovery, you configure a capture process to recapture these changes from the redo logs at the other source database, a propagation to propagate these changes from the database where changes are recaptured to the recovered database, and an apply process at the recovered database to apply these changes.
Changes originating at the other source database that were applied at the recovered database between T1 and T2 also have been lost and must be recovered. To accomplish this, alter the capture process at the other source database to start capturing changes at an earlier SCN. This SCN is the oldest SCN for the apply process at the recovered database.
The following SCN values are required to restore lost changes to the recovered database:
Point-in-time SCN: The SCN for the point-in-time recovery at the recovered database.
Instantiation SCN: The SCN value to which the instantiation SCN must be set for each database object involved in the recovery at the recovered database while changes are being reapplied. At the other source database, this SCN value corresponds to one less than the commit SCN of the first transaction that was applied at the other source database and lost at the recovered database.
Start SCN: The SCN value to which the start SCN is set for the capture process created to recapture changes at the other source database. This SCN value corresponds to the earliest SCN at which the apply process at the other source database started applying a transaction that was lost at the recovered database. This capture process can be a local or downstream capture process that uses the other source database for its source database.
Maximum SCN: The SCN value to which the maximum_scn
parameter for the capture process created to recapture lost changes should be set. The capture process stops capturing changes when it reaches this SCN value. The current SCN for the other source database is used for this value.
You should record the point-in-time SCN when you perform point-in-time recovery on the recovered database. You can use the GET_SCN_MAPPING
procedure in the DBMS_STREAMS_ADM
package to determine the other necessary SCN values.
See Also:
Oracle Database PL/SQL Packages and Types Reference for more information about the GET_SCN_MAPPING
procedure
If database point-in-time recovery is required at a destination database in an Oracle Streams environment, then you must reapply the captured changes that had already been applied after the point-in-time recovery.
For each relevant capture process, you can choose either of the following methods to perform point-in-time recovery at a destination database in an Oracle Streams environment:
Reset the start SCN for the existing capture process that captures the changes that are applied at the destination database.
Create a new capture process to capture the changes that must be reapplied at the destination database.
Resetting the start SCN for the capture process is simpler than creating a new capture process. However, if the capture process captures changes that are applied at multiple destination databases, then the changes are resent to all the destination databases, including the ones that did not perform point-in-time recovery. If a change is already applied at a destination database, then it is discarded by the apply process, but you might not want to use the network and computer resources required to resend the changes to multiple destination databases. In this case, you can create and temporarily use a new capture process and a new propagation that propagates changes only to the destination database that was recovered.
The following sections provide instructions for each task:
If there are multiple apply processes at the destination database where you performed point-in-time recovery, then complete one of the tasks in this section for each apply process.
Neither of these methods should be used if any of the following conditions are true regarding the destination database you are recovering:
A propagation propagates persistent LCRs to the destination database. Both of these methods reapply only captured LCRs at the destination database, not persistent LCRs.
In a directed networks configuration, the destination database is used to propagate LCRs from a capture process to other databases, but the destination database does not apply LCRs from this capture process.
The oldest message number for an apply process at the destination database is lower than the first SCN of a capture process that captures changes for this apply process. The following query at a destination database lists the oldest message number (oldest SCN) for each apply process:
SELECT APPLY_NAME, OLDEST_MESSAGE_NUMBER FROM DBA_APPLY_PROGRESS;
The following query at a source database lists the first SCN for each capture process:
SELECT CAPTURE_NAME, FIRST_SCN FROM DBA_CAPTURE;
The archived log files that contain the intended start SCN are no longer available.
If any of these conditions are true in your environment, then you cannot use the methods described in this section. Instead, you must manually resynchronize the data at all destination databases.
Note:
If you are using combined capture and apply in a single-source replication environment, and the destination database has undergone point-in-time recovery, then the Oracle Streams capture process automatically detects where to capture changes upon restart, and no extra steps are required. See Oracle Streams Concepts and Administration for more information.
See Also:
Oracle Streams Concepts and Administration for more information about SCN values relating to a capture process and directed networks
If you decide to reset the start SCN for the existing capture process to perform point-in-time recovery, then complete the following steps:
If the destination database is also a source database in a multiple-source Oracle Streams environment, then complete the actions described in "Performing Point-in-Time Recovery in a Multiple-Source Environment".
If you are not using directed networks between the source database and destination database, then drop the propagation that propagates changes from the source queue at the source database to the destination queue at the destination database. Use the DROP_PROPAGATION
procedure in the DBMS_PROPAGATION_ADM
package to drop the propagation.
If you are using directed networks, and there are intermediate databases between the source database and destination database, then drop the propagation at each intermediate database in the path to the destination database, including the propagation at the source database.
Do not drop the rule sets used by the propagations you drop.
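For example, the following call is a sketch of dropping a propagation, where strm01_propagation is a placeholder name. The rule sets are preserved because the drop_unused_rule_sets parameter defaults to FALSE:
exec DBMS_PROPAGATION_ADM.DROP_PROPAGATION('strm01_propagation');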
Note:
You must drop the appropriate propagation(s). Disabling them is not sufficient. You will re-create the propagation(s) in Step 7, and dropping them now ensures that only LCRs created after resetting the start SCN for the capture process are propagated.
See Also:
Oracle Streams Concepts and Administration for more information about directed networks
Perform the point-in-time recovery at the destination database.
Query for the oldest message number (oldest SCN) from the source database for the apply process at the destination database. Make a note of the results of the query. The oldest message number is the earliest system change number (SCN) that might need to be applied.
The following query at a destination database lists the oldest message number for each apply process:
SELECT APPLY_NAME, OLDEST_MESSAGE_NUMBER FROM DBA_APPLY_PROGRESS;
Stop the existing capture process using the STOP_CAPTURE
procedure in the DBMS_CAPTURE_ADM
package.
Reset the start SCN of the existing capture process.
To reset the start SCN for an existing capture process, run the ALTER_CAPTURE
procedure in the DBMS_CAPTURE_ADM
package and set the start_scn
parameter to the value you recorded from the query in Step 4. For example, to reset the start SCN for a capture process named strm01_capture
to the value 829381993
, run the following ALTER_CAPTURE
procedure:
BEGIN DBMS_CAPTURE_ADM.ALTER_CAPTURE( capture_name => 'strm01_capture', start_scn => 829381993); END; /
If you are not using directed networks between the source database and destination database, then create a new propagation to propagate changes from the source queue to the destination queue using the CREATE_PROPAGATION
procedure in the DBMS_PROPAGATION_ADM
package. Specify any rule sets used by the original propagation when you create the propagation.
If you are using directed networks, and there are intermediate databases between the source database and destination database, then create a new propagation at each intermediate database in the path to the destination database, including the propagation at the source database.
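For example, the following call is a sketch of the non-directed-network case; every name here (the propagation, queues, database link, and rule set) is a placeholder for the values in your environment:
BEGIN
  DBMS_PROPAGATION_ADM.CREATE_PROPAGATION(
    propagation_name   => 'strm01_propagation',
    source_queue       => 'strmadmin.strm01_queue',
    destination_queue  => 'strmadmin.strm01_queue',
    destination_dblink => 'dest.net',
    rule_set_name      => 'strmadmin.strm01_rule_set');
END;
/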
Start the existing capture process using the START_CAPTURE
procedure in the DBMS_CAPTURE_ADM
package.
If you decide to create a new capture process to perform point-in-time recovery, then complete the following steps:
If the destination database is also a source database in a multiple-source Oracle Streams environment, then complete the actions described in "Performing Point-in-Time Recovery in a Multiple-Source Environment".
If you are not using directed networks between the source database and destination database, then drop the propagation that propagates changes from the source queue at the source database to the destination queue at the destination database. Use the DROP_PROPAGATION
procedure in the DBMS_PROPAGATION_ADM
package to drop the propagation.
If you are using directed networks, and there are intermediate databases between the source database and destination database, then drop the propagation that propagates LCRs between the last intermediate database and the destination database. You do not need to drop the propagations at the other intermediate databases nor at the source database.
Note:
You must drop the appropriate propagation. Disabling it is not sufficient.
See Also:
Oracle Streams Concepts and Administration for more information about directed networks
Perform the point-in-time recovery at the destination database.
Query for the oldest message number (oldest SCN) from the source database for the apply process at the destination database. Make a note of the results of the query. The oldest message number is the earliest system change number (SCN) that might need to be applied.
The following query at a destination database lists the oldest message number for each apply process:
SELECT APPLY_NAME, OLDEST_MESSAGE_NUMBER FROM DBA_APPLY_PROGRESS;
Create a queue at the source database to be used by the capture process using the SET_UP_QUEUE
procedure in the DBMS_STREAMS_ADM
package.
If you are using directed networks, and there are intermediate databases between the source database and destination database, then create a queue at each intermediate database in the path to the destination database, including the new queue at the source database. Do not create a new queue at the destination database.
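For example, the following call is a sketch of creating the queue at the source database; the queue table and queue names are placeholders:
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strmadmin.rec_queue_table',
    queue_name  => 'strmadmin.rec_queue');
END;
/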
If you are not using directed networks between the source database and destination database, then create a new propagation to propagate changes from the source queue created in Step 5 to the destination queue using the CREATE_PROPAGATION
procedure in the DBMS_PROPAGATION_ADM
package. Specify any rule sets used by the original propagation when you create the propagation.
If you are using directed networks, and there are intermediate databases between the source database and destination database, then create a propagation at each intermediate database in the path to the destination database, including the propagation from the source database to the first intermediate database. These propagations propagate changes captured by the capture process you will create in Step 7 between the queues created in Step 5.
Create a new capture process at the source database using the CREATE_CAPTURE
procedure in the DBMS_CAPTURE_ADM
package. Set the source_queue
parameter to the local queue you created in Step 5 and the start_scn
parameter to the value you recorded from the query in Step 4. Also, specify any rule sets used by the original capture process. If the rule sets used by the original capture process instruct the capture process to capture changes that should not be sent to the destination database that was recovered, then you can create and use smaller, customized rule sets that share some rules with the original rule sets.
Start the capture process you created in Step 7 using the START_CAPTURE
procedure in the DBMS_CAPTURE_ADM
package.
When the oldest message number of the apply process at the recovered database is approaching the capture number of the original capture process at the source database, stop the original capture process using the STOP_CAPTURE
procedure in the DBMS_CAPTURE_ADM
package.
At the destination database, you can use the following query to determine the oldest message number from the source database for the apply process:
SELECT APPLY_NAME, OLDEST_MESSAGE_NUMBER FROM DBA_APPLY_PROGRESS;
At the source database, you can use the following query to determine the capture number of the original capture process:
SELECT CAPTURE_NAME, CAPTURE_MESSAGE_NUMBER FROM V$STREAMS_CAPTURE;
When the oldest message number of the apply process at the recovered database is beyond the capture number of the original capture process at the source database, drop the new capture process created in Step 7.
If you are not using directed networks between the source database and destination database, then drop the new propagation created in Step 6.
If you are using directed networks, and there are intermediate databases between the source database and destination database, then drop the new propagation at each intermediate database in the path to the destination database, including the new propagation at the source database.
If you are not using directed networks between the source database and destination database, then remove the queue created in Step 5.
If you are using directed networks, and there are intermediate databases between the source database and destination database, then drop the new queue at each intermediate database in the path to the destination database, including the new queue at the source database. Do not drop the queue at the destination database.
If you are not using directed networks between the source database and destination database, then create a propagation that propagates changes from the original source queue at the source database to the destination queue at the destination database. Use the CREATE_PROPAGATION
procedure in the DBMS_PROPAGATION_ADM
package to create the propagation. Specify any rule sets used by the original propagation when you create the propagation.
If you are using directed networks, and there are intermediate databases between the source database and destination database, then re-create the propagation from the last intermediate database to the destination database. You dropped this propagation in Step 2.
Start the capture process you stopped in Step 9.
All of the steps after Step 8 can be deferred to a later time, or they can be done as soon as the condition described in Step 9 is met.