Oracle® Database 2 Day + Data Replication and Integration Guide 11g Release 1 (11.1), Part Number B28324-01
This chapter contains conceptual information about replication using Oracle Streams and describes how to replicate data continuously between databases.
This chapter contains the following sections:
About Oracle Streams Replication
Preparing for Oracle Streams Replication
Example: Configuring Latest Time Conflict Resolution for a Table
Example: Configuring Read-Only Hub-and-Spoke Replication with Local Capture
Example: Configuring Read-Only Hub-and-Spoke Replication with Downstream Capture
Example: Configuring Read/Write Hub-and-Spoke Replication with Capture Processes
Example: Configuring Read/Write Hub-and-Spoke Replication with Synchronous Captures
Replication is the process of sharing database objects and data at multiple databases. To maintain the database objects and data at multiple databases, a change to one of these database objects at a database is shared with the other databases. In this way, the database objects and data are kept synchronized at all of the databases in the replication environment.
Some replication environments must continually replicate changes made to shared database objects. Oracle Streams is the Oracle Database feature for continuous replication. Typically, in such environments, the databases that contain the shared database objects are connected to the network nearly all the time and continually push database changes over these network connections.
When a change is made to one shared database object, Oracle Streams performs the following actions to ensure that the same change is made to the corresponding shared database object at each of the other databases:
Oracle Streams automatically captures the change and stages it in a queue.
Oracle Streams automatically pushes the change to a queue in each of the other databases that contain the shared database object.
Oracle Streams automatically consumes the change at each of the other databases. During consumption, Oracle Streams dequeues the change and applies the change to the shared database object.
Figure 4-1 shows the Oracle Streams information flow:
Figure 4-1 Oracle Streams Information Flow
You can use Oracle Streams replication to share data at multiple databases and efficiently keep the data current at these databases. For example, a company with several call centers throughout the world might want to store customer information in a local database at each call center. In such an environment, continuous replication with Oracle Streams can ensure that a change made to customer data at one location is pushed to all of the other locations as soon as possible.
When you use Oracle Streams to capture changes to database objects, the changes are formatted into logical change records (LCRs). An LCR is a message with a specific format that describes a database change. If the change was a data manipulation language (DML) operation, then a row LCR encapsulates each row change resulting from the DML operation. One DML operation might result in multiple row changes, and so one DML operation might result in multiple row LCRs. If the change was a data definition language (DDL) operation, then a single DDL LCR encapsulates the DDL change.
The following topics describe Oracle Streams replication in more detail:
About Rules for Controlling the Behavior of Capture, Propagation, and Apply
About the Common Types of Oracle Streams Replication Environments
Oracle Streams provides two ways to capture database changes automatically:
About Change Capture with a Capture Process: A capture process should be used to capture data manipulation language (DML) changes to a relatively large number of tables, an entire schema, or a database. Also, a capture process must be used to capture data definition language (DDL) changes to tables and other database objects.
About Change Capture with a Synchronous Capture: A synchronous capture should be used to capture DML changes to a relatively small number of tables.
A single capture process or a single synchronous capture can capture changes made to only one database. The database where a change originated is called the source database for the change.
Note:
The examples in this guide replicate DML changes only. You should understand the implications of replicating DDL changes before doing so. See Oracle Streams Replication Administrator's Guide and Oracle Database PL/SQL Packages and Types Reference for information about replicating DDL changes.

A capture process is an optional Oracle Database background process that asynchronously captures changes recorded in the redo log. When a capture process captures a database change, it converts the change into a logical change record (LCR) and enqueues the LCR.
A capture process is always associated with a single queue of ANYDATA type (called an ANYDATA queue), and it enqueues LCRs into this queue only. For improved performance, captured LCRs are always stored in a buffered queue, which is System Global Area (SGA) memory associated with a queue.
Figure 4-2 shows how a capture process works.
A capture process can run on the source database or on a remote database. When a capture process runs on the source database, the capture process is a local capture process. When a capture process runs on a remote database, the capture process is called a downstream capture process.
With downstream capture, redo transport services use the log writer process (LGWR) at the source database to send redo data to the database that runs the downstream capture process. A downstream capture process requires fewer resources at the source database because a different database captures the changes. A local capture process, however, is easier to configure and manage than a downstream capture process. Local capture processes also provide more flexibility in replication environments with different platforms or different versions of Oracle Database.
See Also:
Instead of asynchronously mining the redo log, a synchronous capture uses an internal mechanism to capture data manipulation language (DML) changes when they are made to tables. A single DML change can result in changes to one or more rows in the table. A synchronous capture captures each row change, converts it into a row LCR, and enqueues it.
A synchronous capture is always associated with a single queue of ANYDATA type (called an ANYDATA queue), and it enqueues messages into this queue only. Synchronous capture always enqueues row LCRs into the persistent queue. The persistent queue is the portion of a queue that stores messages on hard disk in a queue table, not in memory.
It is usually best to use synchronous capture in a replication environment that captures changes to a relatively small number of tables. If you must capture changes to many tables, to an entire schema, or to an entire database, then you should use a capture process instead of a synchronous capture.
Figure 4-3 shows how a synchronous capture works.
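For example, the following PL/SQL block sketches how a synchronous capture is typically created: specifying sync_capture as the streams type in DBMS_STREAMS_ADM.ADD_TABLE_RULES creates the synchronous capture if it does not already exist. The capture name sync_capture_hr and the queue strmadmin.streams_queue are assumptions for illustration:

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'hr.departments',
    streams_type => 'sync_capture',             -- creates the synchronous capture
    streams_name => 'sync_capture_hr',          -- hypothetical capture name
    queue_name   => 'strmadmin.streams_queue',  -- hypothetical queue
    include_dml  => TRUE);
END;
/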
Note:
If you are using Oracle Database 11g Standard Edition, then synchronous capture is the only Oracle Streams component that can capture database changes automatically. To use capture processes, you must have Oracle Database 11g Enterprise Edition.

A propagation sends messages from one queue to another. You can use Oracle Streams to configure message propagation between two queues in the same database or in different databases. Oracle Streams uses a database link and Oracle Scheduler jobs to send messages. A propagation is always between a source queue and a destination queue. In an Oracle Streams replication environment, a propagation typically sends database changes (in the form of LCRs) from a source queue in the local database to a destination queue in a remote database.
Figure 4-4 shows a propagation.
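For example, the following sketch adds schema propagation rules with DBMS_STREAMS_ADM, creating a propagation that sends LCRs for the hr schema from a local queue to a queue at a remote database. The propagation name, queue names, and database link (spoke.net) are assumptions for illustration:

BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name            => 'hr',
    streams_name           => 'hub_to_spoke',  -- hypothetical propagation name
    source_queue_name      => 'strmadmin.streams_queue',
    destination_queue_name => 'strmadmin.streams_queue@spoke.net',
    include_dml            => TRUE,
    include_ddl            => FALSE,
    source_database        => 'hub.net');
END;
/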
See Also:
Oracle Database Administrator's Guide for information about Oracle Scheduler
After database changes have been captured and propagated, they reside in a queue and are ready to be applied to complete the replication process. An apply process is an optional Oracle Database background process that dequeues logical change records (LCRs) and other types of messages from a specific queue. In a simple Oracle Streams replication environment, an apply process typically applies the changes in the LCRs that it dequeues directly to the database objects in the local database.
An apply process is always associated with a single queue of ANYDATA type (called an ANYDATA queue), and it dequeues messages from this queue only. A single apply process can dequeue messages either from the buffered queue or from the persistent queue, but not both. Therefore, if an apply process will apply changes that were captured by a capture process, then the apply process must be configured to dequeue LCRs from the buffered queue. However, if an apply process will apply changes that were captured by a synchronous capture, then the apply process must be configured to dequeue LCRs from the persistent queue.
Figure 4-5 shows how an apply process works.
When an apply process cannot apply an LCR successfully, it moves the LCR, and all of the other LCRs in the transaction, to a special queue called the error queue. The error queue contains all of the current apply errors for a database. If there are multiple apply processes in a database, then the error queue contains the apply errors for each apply process. You can correct the condition that caused an error and then reexecute the corresponding transaction in the error queue to apply its changes. For example, you might modify a row in a table to correct the condition that caused an error in a transaction and then reexecute the transaction.
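For example, the following sketch queries the DBA_APPLY_ERROR view to find the failed transaction and then reexecutes it with DBMS_APPLY_ADM.EXECUTE_ERROR; the transaction identifier 1.17.2478 is hypothetical:

SELECT APPLY_NAME, LOCAL_TRANSACTION_ID, ERROR_MESSAGE FROM DBA_APPLY_ERROR;

BEGIN
  DBMS_APPLY_ADM.EXECUTE_ERROR(
    local_transaction_id => '1.17.2478');  -- hypothetical transaction identifier
END;
/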
For an apply process to apply changes to a database object, an instantiation system change number (SCN) must be set for the database object. An instantiation SCN is the SCN for a database object that specifies that only changes that were committed after the SCN at the source database are applied by an apply process. Typically, the instantiation SCN is set automatically when you configure the Oracle Streams replication environment.
Capture processes, synchronous captures, propagations, and apply processes are called Oracle Streams clients because they are clients of an internal rules engine that is part of Oracle Database. Rules can be organized into rule sets, and the behavior of each Oracle Streams client is determined by the rules in the rule sets that are associated with the Oracle Streams client. You can associate a positive rule set and a negative rule set with a capture process, a propagation, and an apply process, but a synchronous capture can only have a positive rule set.
In a replication environment, an Oracle Streams client performs its task if a database change satisfies its rule sets. In general, a change satisfies the rule sets for an Oracle Streams client if no rules in the negative rule set evaluate to TRUE for the change, and at least one rule in the positive rule set evaluates to TRUE for the change. The negative rule set is always evaluated first.
Specifically, you use rule sets in an Oracle Streams replication environment to do the following:
Specify the changes that a capture process captures from the redo log or discards. That is, if a change found in the redo log satisfies the rule sets for a capture process, then the capture process captures the change. If a change found in the redo log does not satisfy the rule sets for a capture process, then the capture process discards the change.
Specify the changes that a synchronous capture captures. That is, if a data manipulation language (DML) change satisfies the rule set for a synchronous capture, then the synchronous capture captures the change immediately after the change is committed. If a DML change made to a table does not satisfy the rule set for a synchronous capture, then the synchronous capture does not capture the change.
Specify the changes (encapsulated in LCRs) that a propagation sends from one queue to another or discards. That is, if an LCR in a queue satisfies the rule sets for a propagation, then the propagation sends the LCR. If an LCR in a queue does not satisfy the rule sets for a propagation, then the propagation discards the LCR.
Specify the LCRs that an apply process dequeues or discards. That is, if an LCR in a queue satisfies the rule sets for an apply process, then the apply process dequeues and processes the LCR. If an LCR in a queue does not satisfy the rule sets for an apply process, then the apply process discards the LCR.
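For example, the following sketch uses the inclusion_rule parameter of DBMS_STREAMS_ADM.ADD_TABLE_RULES to add a rule to the negative rule set of a capture process so that changes to one table are discarded. The capture process name and queue name are assumptions for illustration:

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name     => 'hr.job_history',
    streams_type   => 'capture',
    streams_name   => 'capture_hns',             -- hypothetical capture process
    queue_name     => 'strmadmin.streams_queue', -- hypothetical queue
    include_dml    => TRUE,
    include_ddl    => FALSE,
    inclusion_rule => FALSE);  -- FALSE adds the rule to the negative rule set
END;
/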
See Also:
Oracle Streams Concepts and Administration for detailed information about rules
When a database object is not identical at the databases that share the object, Oracle Streams enables you to modify changes to the database object so that the changes can be applied successfully at each database. These modifications are called rule-based transformations. A rule-based transformation is any modification to a message when a rule in a positive rule set evaluates to TRUE.
For example, suppose a table has five columns at the database where changes are captured, but the shared table at a different database only has four of the five columns. When a data manipulation language (DML) operation is performed on the table at the capture database, the row changes are captured and formatted as row LCRs. A rule-based transformation can delete the extra column in these row LCRs so that they can be applied successfully at the other database. If the row LCRs are not transformed, then the apply process at the other database will raise errors because the row LCRs have an extra column.
There are two types of rule-based transformations: declarative and custom. Declarative rule-based transformations include a set of common transformation scenarios for row changes resulting from DML changes (row LCRs). Custom rule-based transformations require a user-defined PL/SQL function to perform the transformation. This guide discusses only declarative rule-based transformations.
The following declarative rule-based transformations are available:
An add column transformation adds a column to a row LCR.
A delete column transformation deletes a column from a row LCR.
A rename column transformation renames a column in a row LCR.
A rename schema transformation renames the schema in a row LCR.
A rename table transformation renames the table in a row LCR.
When you add one of these declarative rule-based transformations, you specify the rule to associate with the transformation. When the specified rule evaluates to TRUE for a row LCR, Oracle Streams performs the declarative transformation internally on the row LCR. Typically, rules and rule sets are created automatically when you configure your Oracle Streams replication environment.
A transformation can occur at any stage in the Oracle Streams information flow: during capture, propagation, or apply. When a transformation occurs depends on the rule with which the transformation is associated. For example, to perform a transformation during propagation, associate the transformation with a rule in the positive rule set for a propagation.
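For example, a delete column transformation like the one described above can be added with the DBMS_STREAMS_ADM.DELETE_COLUMN procedure. In the following sketch, the rule name departments12 and the extra column job_notes are hypothetical:

BEGIN
  DBMS_STREAMS_ADM.DELETE_COLUMN(
    rule_name   => 'strmadmin.departments12',  -- hypothetical rule name
    table_name  => 'hr.departments',
    column_name => 'job_notes');               -- hypothetical extra column
END;
/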
See Also:
Oracle Streams Concepts and Administration for detailed information about rule-based transformations
Supplemental logging is the process of adding additional column data to the redo log whenever an operation is performed. A capture process captures this additional information and places it in logical change records (LCRs). Apply processes that apply these LCRs might need this additional information to apply database changes properly.
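For example, the following statement is a sketch of adding supplemental logging for the primary key columns of the hr.employees table; the columns that must be logged depend on your configuration:

ALTER TABLE hr.employees ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;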
See Also:
Oracle Streams Replication Administrator's Guide for detailed information about supplemental logging
Conflicts occur when two different databases that are sharing data in a table modify the same row in the table at nearly the same time. When these changes are captured at one of these databases and sent to the other database, an apply process detects the conflict when it attempts to apply a row LCR to the table. By default, apply errors are placed in the error queue, where they can be resolved manually. To avoid apply errors, you must configure conflict resolution so that apply processes handle conflicts in the best way for your environment.
Oracle Database supplies prebuilt conflict handlers that provide conflict resolution when a conflict results from an UPDATE on a row. These handlers are called prebuilt update conflict handlers.
When an apply process encounters an update conflict for a row LCR that it has dequeued, it must either apply the row LCR or discard it to keep the data in the two databases consistent. The most common way to resolve update conflicts is to keep the change with the most recent time stamp and discard the older change. The "Example: Configuring Latest Time Conflict Resolution for a Table" contains instructions for adding a time column to a table and configuring a trigger to update the column when a row is changed.
The following topics discuss how to configure conflict resolution in a particular type of replication environment:
Example: Configuring Read/Write Hub-and-Spoke Replication with Capture Processes
Example: Configuring Read/Write Hub-and-Spoke Replication with Synchronous Captures
Note:
Conflicts are not possible in a single-source replication environment if the replicas of each source table are read-only. Therefore, conflicts are not possible in the examples described in "Example: Configuring Read-Only Hub-and-Spoke Replication with Local Capture" and "Example: Configuring Read-Only Hub-and-Spoke Replication with Downstream Capture".

See Also:
"Example: Configuring Latest Time Conflict Resolution for a Table"
Oracle Streams Replication Administrator's Guide for detailed information about conflicts and conflict resolution
Change cycling means sending a change back to the database where it originated. Typically, change cycling should be avoided because it can result in each database change going through endless loops to the database where it originated. Such loops can result in unintended data in the database and tax the networking and computer resources of an environment. By default, Oracle Streams is designed to avoid change cycling.
A tag is additional information in a change record. Each redo entry that records a database change and each logical change record (LCR) that encapsulates a database change includes a tag. The data type of the tag is RAW.
By default, change records have the following tag values:
When a user or application generates redo entries, the value of the tag is NULL for each redo entry. This default can be changed for a particular database session.
When an apply process generates redo entries by applying changes to database objects, the tag value for each redo entry is the hexadecimal equivalent of '00' (double zero). This default can be changed for a particular apply process.
The tag value in an LCR depends on how the LCR was captured:
An LCR captured by a capture process has the tag value of the redo record that was captured.
An LCR captured by a synchronous capture has the tag value of the database session that made the change.
Rules for Oracle Streams clients can include conditions for tag values. For example, the rules for a capture process can determine whether a change in the redo log is captured based on the tag value of the redo record. In an Oracle Streams replication environment, Oracle Streams clients use tags and rules to avoid change cycling.
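For example, a session can change its default tag with the DBMS_STREAMS.SET_TAG procedure so that rules can recognize changes made in that session. In the following sketch, the tag value '1D' is hypothetical:

BEGIN
  DBMS_STREAMS.SET_TAG(tag => HEXTORAW('1D'));  -- hypothetical tag value
END;
/

SELECT DBMS_STREAMS.GET_TAG FROM DUAL;  -- returns the current session tag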
The following topics discuss how change cycling is avoided in a particular type of replication environment:
Example: Configuring Read/Write Hub-and-Spoke Replication with Capture Processes
Example: Configuring Read/Write Hub-and-Spoke Replication with Synchronous Captures
Note:
Change cycling is not possible in a single-source replication environment because changes to the shared database objects are captured in only one location. Therefore, change cycling is not possible in the examples described in "Example: Configuring Read-Only Hub-and-Spoke Replication with Local Capture" and "Example: Configuring Read-Only Hub-and-Spoke Replication with Downstream Capture".

See Also:
Oracle Streams Replication Administrator's Guide for detailed information about tags
Oracle Streams enables you to configure many different types of custom replication environments. However, two types of replication environments are the most common: hub-and-spoke and n-way.
The following topics describe these common types of replication environments and help you decide which one is best for you:
A hub-and-spoke replication environment is one in which a central database, or hub, communicates with secondary databases, or spokes. The spokes do not communicate directly with each other. In a hub-and-spoke replication environment, the spokes might or might not allow changes to the replicated database objects.
If the spokes do not allow changes, then they contain read-only replicas of the database objects at the hub. This type of hub-and-spoke replication environment typically has the following basic components:
The hub has a capture process or synchronous capture to capture changes to the replicated database objects.
The hub has propagations that send the captured changes to each of the spokes.
Each spoke has an apply process to apply changes from the hub.
For the best performance, each capture process and apply process has its own queue.
Figure 4-6 shows a hub-and-spoke replication environment with read-only spokes.
Figure 4-6 Hub-and-Spoke Replication Environment with Read-Only Spokes
If the spokes allow changes to the database objects, then typically the changes are captured and sent back to the hub, and the hub replicates the changes with the other spokes. This type of hub-and-spoke replication environment typically has the following basic components:
The hub has a capture process or synchronous capture to capture changes to the replicated database objects.
The hub has propagations that send the captured changes to each of the spokes.
Each spoke has a capture process or synchronous capture to capture changes to the replicated database objects.
Each spoke has a propagation that sends changes made at the spoke back to the hub.
Each spoke has an apply process to apply changes from the hub and from the other spokes.
The hub has a separate apply process to apply changes from each spoke.
For the best performance, each capture process and apply process has its own queue.
Figure 4-7 shows a hub-and-spoke replication environment with read/write spokes.
Figure 4-7 Hub-and-Spoke Replication Environment with Read/Write Spokes
Some hub-and-spoke replication environments allow changes to the replicated database objects at some spokes but not at others.
The easiest way to configure a hub-and-spoke replication environment is by running one of the following procedures in the DBMS_STREAMS_ADM package:
MAINTAIN_GLOBAL configures an Oracle Streams environment that replicates changes at the database level between two databases.
MAINTAIN_SCHEMAS configures an Oracle Streams environment that replicates changes to specified schemas between two databases.
MAINTAIN_SIMPLE_TTS clones a simple tablespace from a source database at a destination database and uses Oracle Streams to maintain this tablespace at both databases.
MAINTAIN_TABLES configures an Oracle Streams environment that replicates changes to specified tables between two databases.
MAINTAIN_TTS clones a set of tablespaces from a source database at a destination database and uses Oracle Streams to maintain these tablespaces at both databases.
You run the appropriate procedure once for each spoke that you want to add to the hub-and-spoke environment.
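For example, the following sketch calls MAINTAIN_TABLES to replicate two tables from hub.net to spoke.net. The directory object names are assumptions for illustration, and the procedure accepts additional parameters that are omitted here:

DECLARE
  tables DBMS_UTILITY.UNCL_ARRAY;
BEGIN
  tables(1) := 'hr.departments';
  tables(2) := 'hr.employees';
  DBMS_STREAMS_ADM.MAINTAIN_TABLES(
    table_names                  => tables,
    source_directory_object      => 'source_dir',  -- hypothetical directory object
    destination_directory_object => 'dest_dir',    -- hypothetical directory object
    source_database              => 'hub.net',
    destination_database         => 'spoke.net');
END;
/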
Note:
Currently, these procedures configure only replication environments that use capture processes to capture changes. You cannot use these procedures to configure a replication environment that uses synchronous captures.

See Also:
"Example: Configuring Read-Only Hub-and-Spoke Replication with Downstream Capture"
"Example: Configuring Read/Write Hub-and-Spoke Replication with Capture Processes"
"Example: Configuring Read/Write Hub-and-Spoke Replication with Synchronous Captures"
Oracle Database PL/SQL Packages and Types Reference for more information about the procedures in the DBMS_STREAMS_ADM package
An n-way replication environment is one in which each database communicates directly with each other database in the environment. The changes made to replicated database objects at one database are captured and sent directly to each of the other databases in the environment, where they are applied.
An n-way replication environment typically has the following basic components:
Each database has one or more capture processes or synchronous captures to capture changes to the replicated database objects.
Each database has propagations that send the captured changes to each of the other databases.
Each database has apply processes that apply changes from each of the other databases. A different apply process must apply changes from each source database.
For the best performance, each capture process and apply process has its own queue.
Figure 4-8 shows an n-way replication environment.
You can configure an n-way replication environment by using the following Oracle-supplied packages:
DBMS_STREAMS_ADM can be used to perform most of the configuration actions, including setting up queues, creating capture processes or synchronous captures, creating propagations, creating apply processes, and configuring rules and rule sets for the replication environment.
DBMS_CAPTURE_ADM can be used to start any capture processes you configured in the replication environment.
DBMS_APPLY_ADM can be used to configure apply processes, configure conflict resolution, start apply processes, and perform other configuration tasks.
Configuring an n-way replication environment is beyond the scope of this guide. See Oracle Streams Replication Administrator's Guide for a detailed example that configures an n-way replication environment.
Before configuring Oracle Streams replication, prepare the databases that will participate in the replication environment.
To prepare for Oracle Streams replication:
Set initialization parameters properly before you configure a replication environment with Oracle Streams:
Global Names: Set the GLOBAL_NAMES initialization parameter to TRUE at each database that will participate in the Oracle Streams replication environment. See "Setting the GLOBAL_NAMES Initialization Parameter to TRUE".
Compatibility: To use the latest features of Oracle Streams, it is best to set the COMPATIBLE initialization parameter as high as you can. If possible, then set this parameter to 11.0.0 or higher.
System Global Area (SGA) and the Oracle Streams pool: Ensure that the Oracle Streams pool is large enough to accommodate the Oracle Streams components created for the replication environment. The Oracle Streams pool is part of the System Global Area (SGA). You can manage the Oracle Streams pool by setting the MEMORY_TARGET initialization parameter (Automatic Memory Management), the SGA_TARGET initialization parameter (Automatic Shared Memory Management), or the STREAMS_POOL_SIZE initialization parameter. See Oracle Streams Concepts and Administration for more information about the Oracle Streams pool.
The memory requirements for Oracle Streams components are:
Each queue requires at least 10 MB of memory.
Each capture process parallelism requires at least 10 MB of memory. The parallelism capture process parameter controls the number of processes used by the capture process to capture changes. You might be able to improve capture process performance by adjusting capture process parallelism.
Each propagation requires at least 1 MB of memory.
Each apply process parallelism requires at least 1 MB of memory. The parallelism apply process parameter controls the number of processes used by the apply process to apply changes. You might be able to improve apply process performance by adjusting apply process parallelism. A worked sizing sketch appears at the end of this step.
Processes and Sessions: Oracle Streams capture processes, propagations, and apply processes use processes that run in the background. You might need to increase the value of the PROCESSES and SESSIONS initialization parameters to accommodate these processes.
Review the best practices for Oracle Streams replication environments and follow the best practices when you configure the environment. See Oracle Streams Replication Administrator's Guide for information about best practices.
Following the best practices ensures that your environment performs optimally and avoids problems. The MAINTAIN_ procedures follow the best practices automatically. However, if you plan to configure an Oracle Streams replication environment without using a MAINTAIN_ procedure, then learn about the best practices and follow them whenever possible.
See Also:
Oracle Streams Concepts and Administration for information about initialization parameters that are important in an Oracle Streams environment
Conflict resolution automatically resolves conflicts in a replication environment. See "About Conflicts and Conflict Resolution" for more information about conflict resolution.
The most common way to resolve update conflicts is to keep the change with the most recent time stamp and discard the older change. With this method, when a conflict is detected during apply, the apply process applies the change if the time-stamp column value for the change is more recent than the time-stamp column value in the corresponding row in the table. If the time-stamp column value in the table is more recent, then the apply process discards the change.
The example in this topic configures latest time conflict resolution for the hr.departments table by completing the following actions:
Adds a time column of the TIMESTAMP data type to the table
Configures a trigger to update the time column in a row with the current time when the row is changed
Adds supplemental logging for the columns in the table
Runs the SET_UPDATE_CONFLICT_HANDLER procedure in the DBMS_APPLY_ADM package to configure conflict resolution for the table
You can use the steps in this topic to configure conflict resolution for any table. To do so, substitute your schema name for hr and your table name for departments. Also, substitute the columns in your table for the columns in the hr.departments table when you run the SET_UPDATE_CONFLICT_HANDLER procedure.
To configure latest time conflict resolution for the hr.departments table:
Add a time column to the table.
In Oracle Enterprise Manager, log in to the database as an administrative user, such as the Oracle Streams administrator or SYSTEM.
Go to the Database Home page.
Click Schema to open the Schema subpage.
Click Tables in the Database Objects section.
On the Tables page, enter the schema that contains the table in the Schema field and click Go.
If there are many tables in the schema, then you can also enter the table in the Object Name field before you click Go.
In this example, enter hr in the Schema field and click Go. The result list shows all of the tables in the hr schema, including the hr.departments table.
Select the table to which you want to add a column. In this example, select the hr.departments table.
Click Edit.
The Edit Table page appears, showing the General subpage.
In the first available Name field that is blank, enter time as the new column name. You might need to click Next to see a row with a blank Name field.
For the new time column, select TIMESTAMP in the Data Type list.
For the new time column, enter SYSDATE in the Default Value field.
For the new time column, leave the remaining fields in the row blank.
Click Apply to add the column.
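Alternatively, if you prefer SQL*Plus to Enterprise Manager for this step, the following statement is a sketch of the equivalent DDL:

ALTER TABLE hr.departments ADD (time TIMESTAMP DEFAULT SYSDATE);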
Create a trigger to update the time column in each master table with the current time when a change occurs.
In Oracle Enterprise Manager, log in to the database as an administrative user, such as the Oracle Streams administrator or SYSTEM.
Go to the Database Home page.
Click Schema to open the Schema subpage.
Click Triggers in the Programs section.
On the Triggers page, click Create.
The Create Trigger page appears, showing the General subpage.
Enter the name of the trigger in the Name field. In this example, enter insert_departments_time.
Enter the schema that owns the table in the Schema field. In this example, enter hr in the Schema field.
Enter the following in the Trigger Body field:
BEGIN
  -- Consider time synchronization problems. The previous update to this
  -- row might have originated from a site with a clock time ahead of the
  -- local clock time.
  IF :OLD.TIME IS NULL OR :OLD.TIME < SYSTIMESTAMP THEN
    :NEW.TIME := SYSTIMESTAMP;
  ELSE
    :NEW.TIME := :OLD.TIME + 1 / 86400;
  END IF;
END;
Click Event to open the Event subpage.
Ensure that Table is selected in the Trigger On list.
Enter the table name in the form schema.table in the Table (Schema.Table) field, or use the flashlight icon to find the database object. In this example, enter hr.departments.
Ensure that Before is selected for Fire Trigger.
Select Insert and Update of Columns for Event.
The columns in the table appear.
Select every column in the table except for the new time column.
Click Advanced to open the Advanced subpage.
Select Trigger for each row.
Click OK to create the trigger.
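Alternatively, the following statement is a sketch of the DDL equivalent to the Enterprise Manager steps above, creating the same trigger in SQL*Plus:

CREATE OR REPLACE TRIGGER hr.insert_departments_time
  BEFORE INSERT OR UPDATE OF department_id, department_name,
    manager_id, location_id ON hr.departments
  FOR EACH ROW
BEGIN
  -- Consider time synchronization problems. The previous update to this
  -- row might have originated from a site with a clock time ahead of the
  -- local clock time.
  IF :OLD.TIME IS NULL OR :OLD.TIME < SYSTIMESTAMP THEN
    :NEW.TIME := SYSTIMESTAMP;
  ELSE
    :NEW.TIME := :OLD.TIME + 1 / 86400;
  END IF;
END;
/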
In SQL*Plus, connect to the database as the Oracle Streams administrator:
sqlplus strmadmin/user-password
See Oracle Database 2 Day DBA for more information about starting SQL*Plus.
Add supplemental logging for the columns in the table:
ALTER TABLE hr.departments ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
Supplemental logging is required for conflict resolution during apply.
Run the SET_UPDATE_CONFLICT_HANDLER procedure to configure latest time conflict resolution for the table.
For example, run the following procedure to configure latest time conflict resolution for the hr.departments table:
DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1) := 'department_id';
  cols(2) := 'department_name';
  cols(3) := 'manager_id';
  cols(4) := 'location_id';
  cols(5) := 'time';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'hr.departments',
    method_name       => 'MAXIMUM',
    resolution_column => 'time',
    column_list       => cols);
END;
/
Include all of the columns in the table in the cols column list.
Repeat these steps for any tables that require conflict resolution in your replication environment. You might need to configure conflict resolution for the tables at several databases.
If you are completing an example that configures or extends a replication environment, then configure latest time conflict resolution for the appropriate tables:
For "Example: Configuring Read/Write Hub-and-Spoke Replication with Capture Processes", configure conflict resolution for all of the tables in the hr
schema at the hub.net
, spoke1.net
, and spoke2.net
databases. This schema includes the countries
, departments
, employees
, jobs
, job_history
, locations
, and regions
tables.
For "Example: Configuring Read/Write Hub-and-Spoke Replication with Synchronous Captures", configure conflict resolution for the hr.departments
and hr.employees
tables at the hub.net
and spoke.net
databases.
For "Example: Adding Database Objects to a Replication Environment", configure conflict resolution for the oe.orders
and oe.order_items
tables at the hub.net
, spoke1.net
, and spoke2.net
databases.
For "Example: Adding Databases to a Replication Environment", configure conflict resolution for all of the tables in the hr
schema at the spoke3.net
database. This schema includes the countries
, departments
, employees
, jobs
, job_history
, locations
, and regions
tables.
If you were directed to this section from an example, then go back to the example now.
The example in this topic configures an Oracle Streams replication environment that replicates data manipulation language (DML) changes to all of the tables in the hr schema. This example configures a hub-and-spoke replication environment with a single spoke. Hub-and-spoke replication means that a central hub database replicates changes with one or more databases. The spoke databases do not communicate with each other directly.
In addition, this example configures a local capture process to capture changes. The local capture process runs on the hub database.
In a hub-and-spoke replication environment, the spoke databases might or might not allow changes to the replicated database objects. In this example, the replicated database objects are read-only at the spoke databases. This type of hub-and-spoke replication environment is common when the spoke databases are used for reporting or for analysis of the data.
The hub-and-spoke replication environment configured in this example includes the following databases and Oracle Streams components:
A read/write hub with the global name hub.net. A local capture process captures DML changes to the replicated database objects from the redo log.
One read-only spoke database with the global name spoke.net. This database receives changes made to the replicated database objects from the hub database, and an apply process applies the changes. The replicated database objects are read-only at the spoke database.
This example uses the MAINTAIN_SCHEMAS procedure in the DBMS_STREAMS_ADM package to configure the hub-and-spoke replication environment. This procedure is the fastest and simplest way to configure an Oracle Streams environment that replicates one or more schemas. In addition, the procedure follows established best practices for Oracle Streams replication environments.
This example uses the global database names hub.net and spoke.net. However, you can substitute databases in your environment to complete the example.
Figure 4-9 provides an overview of the environment created in this example.
Figure 4-9 Sample Hub-and-Spoke Environment with a Local Capture Process and a Read-Only Spoke
To configure this hub-and-spoke replication environment:
Complete the following tasks to prepare for the hub-and-spoke replication environment:
Configure network connectivity so that the hub.net database can communicate with the spoke.net database.
See Oracle Database 2 Day DBA for information about configuring network connectivity between databases.
Configure an Oracle Streams administrator at each database that will participate in the replication environment. See "Example: Creating an Oracle Streams Administrator" for instructions. This example assumes that the Oracle Streams administrator is strmadmin.
Create a database link from the hub database to the spoke database. In this example, create a database link from the hub.net database to the spoke.net database.
The database link should be created in the Oracle Streams administrator's schema. Also, the database link should connect to the Oracle Streams administrator at the destination database. See "Example: Creating a Database Link" for instructions.
Set initialization parameters properly at each database that will participate in the Oracle Streams replication environment. See "Preparing for Oracle Streams Replication" for instructions.
Configure each source database to run in ARCHIVELOG mode. For a capture process to capture changes generated at a source database, the source database must be running in ARCHIVELOG mode. In this example, the hub.net database must be running in ARCHIVELOG mode. See Oracle Database Administrator's Guide for information about configuring a database to run in ARCHIVELOG mode.
In SQL*Plus, connect to the spoke.net database as the Oracle Streams administrator:
CONNECT strmadmin/user-password@spoke.net
Create a directory object to hold files that will be generated by the MAINTAIN_SCHEMAS procedure. The directory object can point to any accessible directory on the computer system. For example, the following statement creates a directory object named hns_dir that points to the /usr/db_files directory:
CREATE DIRECTORY hns_dir AS '/usr/db_files';
In SQL*Plus, connect to the hub.net database as the Oracle Streams administrator:
CONNECT strmadmin/user-password@hub.net
Create a directory object to hold files that will be generated by the MAINTAIN_SCHEMAS procedure. The directory object can point to any accessible directory on the computer system. For example, the following statement creates a directory object named hns_dir that points to the /usr/db_files directory:
CREATE DIRECTORY hns_dir AS '/usr/db_files';
Run the MAINTAIN_SCHEMAS procedure to configure the replication between the hub.net database and the spoke.net database:
BEGIN
  DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS(
    schema_names                 => 'hr',
    source_directory_object      => 'hns_dir',
    destination_directory_object => 'hns_dir',
    source_database              => 'hub.net',
    destination_database         => 'spoke.net');
END;
/
The MAINTAIN_SCHEMAS procedure can take some time to run because it is performing many configuration tasks.
If you encounter any errors when you run the MAINTAIN_SCHEMAS procedure, then see Oracle Streams Replication Administrator's Guide for instructions about using the RECOVER_OPERATION procedure in the DBMS_STREAMS_ADM package to recover from these errors.
When you complete the example, a hub-and-spoke replication environment with the following characteristics is configured:
At the hub database hub.net, supplemental logging is configured for the tables in the hr schema.
The hub database hub.net has the following components:
A capture process with a system-generated name. The capture process captures DML changes to the hr schema.
A queue with a system-generated name. This queue is for the capture process at the database.
A propagation with a system-generated name that sends changes from the queue at the hub.net database to the queue at the spoke.net database.
The spoke database spoke.net has the following components:
A queue with a system-generated name that receives the changes sent from the hub.net database. This queue is for the apply process at the local database.
An apply process with a system-generated name. The apply process dequeues changes from its queue and applies them to the hr schema.
To replicate changes:
At the hub database, make DML changes to any table in the hr schema.
After some time has passed to allow for replication of the changes, use SQL*Plus to query the modified table at the spoke database to view the DML changes.
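For example, the following sketch makes a change at the hub and then checks it at the spoke; the row and value are hypothetical:

-- At hub.net:
UPDATE hr.departments SET location_id = 2400 WHERE department_id = 270;
COMMIT;

-- At spoke.net, after allowing time for the change to be replicated:
SELECT location_id FROM hr.departments WHERE department_id = 270;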
Note:
The MAINTAIN_ procedures do not configure the replicated tables to be read-only at the spoke databases. If they should be read-only, then configure privileges at the spoke databases accordingly. However, the apply user for the apply process must be able to make DML changes to the replicated database objects. In this example, the apply user is the Oracle Streams administrator. See Oracle Database Security Guide for information about configuring privileges.

See Also:
Chapter 5, "Administering an Oracle Streams Replication Environment"
Oracle Streams Replication Administrator's Guide for more detailed instructions about using the MAINTAIN_SCHEMAS procedure
Oracle Database PL/SQL Packages and Types Reference for reference information about the MAINTAIN_SCHEMAS procedure
The example in this topic configures an Oracle Streams replication environment that replicates data manipulation language (DML) changes to all of the tables in the hr schema. This example configures a hub-and-spoke replication environment with a single spoke. Hub-and-spoke replication means that a central hub database replicates changes with one or more databases. The spoke databases do not communicate with each other directly.
In addition, this example configures a downstream capture process to capture changes. The downstream capture process runs on the single spoke database. Therefore, the resources required to capture changes are freed at the source database. This example configures a real-time downstream capture process, not an archived-log downstream capture process. The advantage of real-time downstream capture over archived-log downstream capture is that real-time downstream capture reduces the amount of time required to capture changes made at the source database. The time is reduced because the real-time capture process does not need to wait for the redo log file to be archived before it can capture data from it.
In a hub-and-spoke replication environment, the spoke databases might or might not allow changes to the replicated database objects. In this example, the replicated database objects are read-only at the spoke databases. This type of hub-and-spoke replication environment is common when the spoke databases are used for reporting or for analysis of the data.
The hub-and-spoke replication environment configured in this example includes the following databases and Oracle Streams components:
A read/write hub with the global name hub.net. Redo transport services send the redo log information that contains database changes for the hub database to the spoke database.
One read-only spoke database with the global name spoke.net. This database receives the redo log information sent from the hub database. A downstream capture process captures changes to the hr schema from the redo log information, and an apply process applies the changes to the replicated database objects. The replicated database objects are read-only at the spoke database.
This example uses the MAINTAIN_SCHEMAS procedure in the DBMS_STREAMS_ADM package to configure the hub-and-spoke replication environment. This procedure is the fastest and simplest way to configure an Oracle Streams environment that replicates one or more schemas. In addition, the procedure follows established best practices for Oracle Streams replication environments.
This example uses the global database names hub.net and spoke.net. However, you can substitute databases in your environment to complete the example.
Figure 4-10 provides an overview of the environment created in this example.
Figure 4-10 Sample Hub-and-Spoke Environment with a Downstream Capture Process and a Read-Only Spoke
Note:
Local capture processes provide more flexibility in replication environments with different platforms or different versions of Oracle Database. See Oracle Streams Concepts and Administration for more information.

To configure this hub-and-spoke replication environment:
Complete the following tasks to prepare for the hub-and-spoke replication environment:
Configure network connectivity so that the hub.net database and the spoke.net database can communicate with each other.
See Oracle Database 2 Day DBA for information about configuring network connectivity between databases.
Configure an Oracle Streams administrator at each database that will participate in the replication environment. See "Example: Creating an Oracle Streams Administrator" for instructions. This example assumes that the Oracle Streams administrator is strmadmin.
Create a database link from the hub database to the spoke database and from the spoke database to the hub database. In this example, create the following database links:
From the hub.net database to the spoke.net database
From the spoke.net database to the hub.net database
The database link from the spoke database to the hub database is necessary because the hub database is the source database for the downstream capture process at the spoke database. This database link simplifies the creation and configuration of the capture process.
Each database link should be created in the Oracle Streams administrator's schema. Also, each database link should connect to the Oracle Streams administrator at the destination database. See "Example: Creating a Database Link" for instructions.
Set initialization parameters properly at each database that will participate in the Oracle Streams replication environment. See "Preparing for Oracle Streams Replication" for instructions.
Configure each source database and each database that runs a downstream capture process to run in ARCHIVELOG mode. For a downstream capture process to capture changes generated at a source database, both the source database and the downstream database must be running in ARCHIVELOG mode. In this example, the hub.net and spoke.net databases must be running in ARCHIVELOG mode. See Oracle Database Administrator's Guide for information about configuring a database to run in ARCHIVELOG mode.
Configure authentication at both databases to support the transfer of redo data.
Redo transport sessions are authenticated using either the Secure Sockets Layer (SSL) protocol or a remote login password file. If the source database has a remote login password file, then copy it to the appropriate directory on the downstream capture database system. In this example, the source database is hub.net and the downstream capture database is spoke.net. See Oracle Data Guard Concepts and Administration for detailed information about authentication requirements for redo transport.
At the source database hub.net, set the following initialization parameters to configure redo transport services to use the log writer process (LGWR) to copy redo data from the online redo log at the source database to the standby redo log at the downstream database spoke.net:
Set at least one archive log destination in the LOG_ARCHIVE_DEST_n initialization parameter to the computer system running the downstream database. To do this, set the following attributes of this parameter:
SERVICE - Specify the network service name of the downstream database.
LGWR ASYNC - Specify these attributes so that the log writer process (LGWR) will send redo data asynchronously to the downstream database.
NOREGISTER - Specify this attribute so that the downstream database location is not recorded in the downstream database control file.
VALID_FOR - Specify either (ONLINE_LOGFILE,PRIMARY_ROLE) or (ONLINE_LOGFILE,ALL_ROLES).
The following example is a LOG_ARCHIVE_DEST_n setting at the source database that specifies a real-time downstream capture database:
LOG_ARCHIVE_DEST_2='SERVICE=SPOKE.NET LGWR ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE)'
You can specify other attributes in the LOG_ARCHIVE_DEST_n initialization parameter if necessary.
Set the LOG_ARCHIVE_DEST_STATE_n initialization parameter that corresponds with the LOG_ARCHIVE_DEST_n parameter for the downstream database to ENABLE.
For example, if the LOG_ARCHIVE_DEST_2 initialization parameter is set for the downstream database, then set the LOG_ARCHIVE_DEST_STATE_2 parameter in the following way:
LOG_ARCHIVE_DEST_STATE_2=ENABLE
Ensure that the setting for the LOG_ARCHIVE_CONFIG initialization parameter includes the send value. The default value for this parameter includes send.
See Also:
Oracle Database Reference and Oracle Data Guard Concepts and Administration for more information about these initialization parameters

At the downstream database spoke.net, set the following initialization parameters to configure the downstream database to receive redo data from the source database LGWR and write the redo data to the standby redo log at the downstream database:
Set at least one archive log destination in the LOG_ARCHIVE_DEST_n initialization parameter to a directory on the computer system running the downstream database. To do this, set the following attributes of this parameter:
LOCATION - Specify a valid path name for a disk directory on the system that hosts the downstream database. Each destination that specifies the LOCATION attribute must identify a unique directory path name. This is the local destination for archived redo log files written from the standby redo logs. Log files from a remote source database should be kept separate from local database log files. In addition, if the downstream database contains log files from multiple source databases, then the log files from each source database should be kept separate from each other.
MANDATORY - Specify this attribute so that a standby redo log file must be archived successfully before it can be overwritten.
VALID_FOR - Specify either (STANDBY_LOGFILE,PRIMARY_ROLE) or (STANDBY_LOGFILE,ALL_ROLES).
The following example is a LOG_ARCHIVE_DEST_n setting at the real-time downstream capture database:
LOG_ARCHIVE_DEST_2='LOCATION=/home/arc_dest/srl_spoke MANDATORY VALID_FOR=(STANDBY_LOGFILE,PRIMARY_ROLE)'
You can specify other attributes in the LOG_ARCHIVE_DEST_n initialization parameter if necessary.
Ensure that the setting for the LOG_ARCHIVE_CONFIG initialization parameter includes the receive value. The default value for this parameter includes receive.
Optionally set the LOG_ARCHIVE_FORMAT initialization parameter to generate the filenames in a specified format for the archived redo log files. The following example is a valid LOG_ARCHIVE_FORMAT setting:
LOG_ARCHIVE_FORMAT=log%t_%s_%r.arc
Set the LOG_ARCHIVE_DEST_STATE_n initialization parameter that corresponds with the LOG_ARCHIVE_DEST_n parameter for the downstream database to ENABLE.
For example, if the LOG_ARCHIVE_DEST_2 initialization parameter is set for the downstream database, then set the LOG_ARCHIVE_DEST_STATE_2 parameter in the following way:
LOG_ARCHIVE_DEST_STATE_2=ENABLE
If you set other archive destinations at the downstream database, then, to keep archived standby redo log files separate from archived online redo log files from the downstream database, explicitly specify ONLINE_LOGFILE or STANDBY_LOGFILE, instead of ALL_LOGFILES, in the VALID_FOR attribute.
For example, if the LOG_ARCHIVE_DEST_1 parameter specifies the archive destination for the online redo log files at the downstream database, then avoid the ALL_LOGFILES keyword in the VALID_FOR attribute when you set the LOG_ARCHIVE_DEST_1 parameter.
See Also:
Oracle Database Reference and Oracle Data Guard Concepts and Administration for more information about these initialization parameters

If you reset any initialization parameters while an instance was running at a database in Step 2 or 3, then you might want to reset them in the relevant initialization parameter file as well, so that the new values are retained when the database is restarted.
If you did not reset the initialization parameters while an instance was running, but instead reset them in the initialization parameter file in Step 2 or 3, then restart the database. The source database must be open when it sends redo data to the downstream database, because the global name of the source database is sent to the downstream database only if the source database is open.
At the downstream database spoke.net, connect as an administrative user and create standby redo log files.
Note:
The following steps outline the general procedure for adding standby redo log files to the downstream database. The specific steps and SQL statements used to add standby redo log files depend on your environment. For example, in an Oracle Real Application Clusters (Oracle RAC) environment, the steps are different. See Oracle Data Guard Concepts and Administration for detailed instructions about adding standby redo log files to a database.

Determine the log file size used on the source database hub.net. The standby log file size must be the same size as (or larger than) the source database log file size. For example, if the source database log file size is 500 MB, then the standby log file size must be 500 MB or larger. You can determine the size of the redo log files at the source database (in bytes) by querying the V$LOG view at the source database.
For example, in SQL*Plus, connect as an administrative user to hub.net and query the V$LOG view:
sqlplus system/user-password@hub.net
SELECT BYTES FROM V$LOG;
Determine the number of standby log file groups required on the downstream database spoke.net. The number of standby log file groups must be at least one more than the number of online log file groups on the source database. For example, if the source database has two online log file groups, then the downstream database must have at least three standby log file groups. You can determine the number of source database online log file groups by querying the V$LOG view at the source database.
For example, while still connected in SQL*Plus as an administrative user to hub.net, query the V$LOG view:
SELECT COUNT(GROUP#) FROM V$LOG;
Use the SQL statement ALTER DATABASE ADD STANDBY LOGFILE to add the standby log file groups to the downstream database spoke.net.
For example, assume that the source database has two online redo log file groups and is using a log file size of 500 MB. In this case, connect as an administrative user in SQL*Plus to spoke.net and use the following statements to create the appropriate standby log file groups:
sqlplus system/user-password@spoke.net
ALTER DATABASE ADD STANDBY LOGFILE GROUP 3
('/oracle/dbs/slog3a.rdo', '/oracle/dbs/slog3b.rdo') SIZE 500M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4
('/oracle/dbs/slog4a.rdo', '/oracle/dbs/slog4b.rdo') SIZE 500M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 5
('/oracle/dbs/slog5a.rdo', '/oracle/dbs/slog5b.rdo') SIZE 500M;
Ensure that the standby log file groups were added successfully by running the following query at spoke.net:
SELECT GROUP#, THREAD#, SEQUENCE#, ARCHIVED, STATUS FROM V$STANDBY_LOG;
Your output should be similar to the following:
    GROUP#    THREAD#  SEQUENCE# ARC STATUS
---------- ---------- ---------- --- ----------
         3          0          0 YES UNASSIGNED
         4          0          0 YES UNASSIGNED
         5          0          0 YES UNASSIGNED
Ensure that log files from the source database are appearing in the directory specified in Step 3. You might need to switch the log file at the source database to see files in the directory.
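For example, the following sketch switches the log file at the source database and then checks for newly arrived archived log files at the downstream database by querying the V$ARCHIVED_LOG view. It assumes administrative access to both databases:
-- At the source database hub.net:
ALTER SYSTEM SWITCH LOGFILE;

-- At the downstream database spoke.net:
SELECT NAME, DEST_ID, COMPLETION_TIME
  FROM V$ARCHIVED_LOG
  ORDER BY COMPLETION_TIME;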
In SQL*Plus, connect to the hub.net database as the Oracle Streams administrator:
CONNECT strmadmin/user-password@hub.net
Create a directory object to hold files that will be generated by the MAINTAIN_SCHEMAS procedure. The directory object can point to any accessible directory on the computer system. For example, the following statement creates a directory object named hns_dir that points to the /usr/db_files directory:
CREATE DIRECTORY hns_dir AS '/usr/db_files';
In SQL*Plus, connect to the spoke.net database as the Oracle Streams administrator:
CONNECT strmadmin/user-password@spoke.net
Create a directory object to hold files that will be generated by the MAINTAIN_SCHEMAS procedure. The directory object can point to any accessible directory on the computer system. For example, the following statement creates a directory object named hns_dir that points to the /usr/db_files directory:
CREATE DIRECTORY hns_dir AS '/usr/db_files';
While still connected to the spoke.net database as the Oracle Streams administrator, run the MAINTAIN_SCHEMAS procedure to configure the replication between the hub.net database and the spoke.net database:
BEGIN
  DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS(
    schema_names                 => 'hr',
    source_directory_object      => 'hns_dir',
    destination_directory_object => 'hns_dir',
    source_database              => 'hub.net',
    destination_database         => 'spoke.net',
    capture_name                 => 'capture_hns',
    capture_queue_table          => 'streams_queue_qt',
    capture_queue_name           => 'streams_queue',
    apply_name                   => 'apply_spoke',
    apply_queue_table            => 'streams_queue_qt',
    apply_queue_name             => 'streams_queue',
    bi_directional               => FALSE);
END;
/
The MAINTAIN_SCHEMAS procedure can take some time to run because it is performing many configuration tasks.
If you encounter any errors when you run the MAINTAIN_SCHEMAS procedure, then see Oracle Streams Replication Administrator's Guide for instructions about using the RECOVER_OPERATION procedure in the DBMS_STREAMS_ADM package to recover from these errors.
While still connected to the spoke.net database as the Oracle Streams administrator, set the downstream_real_time_mine capture process parameter to Y:
BEGIN
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'capture_hns',
    parameter    => 'downstream_real_time_mine',
    value        => 'Y');
END;
/
If you would rather set the capture process parameter using Enterprise Manager, then see "Setting a Capture Process Parameter" for instructions.
When you complete the example, a hub-and-spoke replication environment with the following characteristics is configured:
At the hub database hub.net, supplemental logging is configured for the tables in the hr schema.
The spoke database spoke.net has the following components:
A capture process named capture_hns. The capture process captures changes to the hr schema in the redo log information sent from the source database hub.net.
A queue named streams_queue that uses a queue table named streams_queue_qt. This queue is for the capture process and apply process at the database.
An apply process named apply_spoke. The apply process applies changes to the hr schema.
To replicate changes:
At the hub database, make DML changes to any table in the hr schema.
After some time has passed to allow for replication of the changes, use SQL*Plus to query the modified table at the spoke database to view the DML changes.
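For example, the following sketch makes a change at the hub and checks for it at the spoke; the new salary value is arbitrary, chosen only for this illustration:
-- At hub.net:
UPDATE hr.employees SET salary = 7500 WHERE employee_id = 100;
COMMIT;

-- At spoke.net, after allowing time for replication:
SELECT salary FROM hr.employees WHERE employee_id = 100;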
Note:
The MAINTAIN_ procedures do not configure the replicated tables to be read only at the spoke databases. If they should be read only, then configure privileges at the spoke databases accordingly, as sketched below. However, the apply user for the apply process must be able to make DML changes to the replicated database objects. In this example, the apply user is the Oracle Streams administrator. See Oracle Database Security Guide for information about configuring privileges.
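For example, one way to keep a replicated table read only for an application user is to grant that user query access only. The following sketch assumes a hypothetical application user hr_app at the spoke database; the apply user (the Oracle Streams administrator in this example) retains its DML privileges:
-- hr_app is a hypothetical application user at the spoke database.
GRANT SELECT ON hr.employees TO hr_app;
-- Do not grant INSERT, UPDATE, or DELETE on the replicated tables to hr_app.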
The downstream capture process captures the changes after the archived redo log files are sent to the downstream database. Optionally, you can switch the log file at the source database to speed up the process. To do so, connect to the source database in SQL*Plus as an administrative user, and enter the following statement:
ALTER SYSTEM SWITCH LOGFILE;
See Also:
Chapter 5, "Administering an Oracle Streams Replication Environment"
Oracle Streams Concepts and Administration for more information about downstream capture processes
Oracle Streams Replication Administrator's Guide for more detailed instructions about using the MAINTAIN_SCHEMAS procedure
Oracle Database PL/SQL Packages and Types Reference for reference information about the MAINTAIN_SCHEMAS procedure
The example in this topic configures an Oracle Streams replication environment that replicates data manipulation language (DML) changes to all of the tables in the hr schema. This example uses a capture process at each database to capture these changes. Hub-and-spoke replication means that a central hub database replicates changes with one or more spoke databases. The spoke databases do not communicate with each other directly. However, the hub database might send changes generated at one spoke database to another spoke database.
This example uses the MAINTAIN_SCHEMAS procedure in the DBMS_STREAMS_ADM package to configure the hub-and-spoke replication environment. This procedure is the fastest and simplest way to configure an Oracle Streams environment that replicates one or more schemas. In addition, the procedure follows established best practices for Oracle Streams replication environments.
In this example, the global name of the hub database is hub.net. This example configures two spoke databases with the global names spoke1.net and spoke2.net. However, you can substitute databases in your environment to complete the example.
Figure 4-11 provides an overview of the environment created in this example.
Figure 4-11 Sample Hub-and-Spoke Environment with Capture Processes and Read/Write Spokes
To configure this hub-and-spoke replication environment with read/write spokes:
Complete the following tasks to prepare for the hub-and-spoke replication environment:
Configure an Oracle Streams administrator at each database that will participate in the replication environment. See "Example: Creating an Oracle Streams Administrator" for instructions. This example assumes that the Oracle Streams administrator is strmadmin.
Configure latest time conflict resolution for all of the tables in the hr schema at the hub.net, spoke1.net, and spoke2.net databases. This schema includes the countries, departments, employees, jobs, job_history, locations, and regions tables. See "Example: Configuring Latest Time Conflict Resolution for a Table" for instructions.
The MAINTAIN_SCHEMAS procedure instantiates the replicated schemas and database objects at the destination database. There are two scenarios to consider when you configure conflict resolution for your replication environment:
Replicated schemas and database objects do not exist at the spoke databases: During instantiation, the triggers in the schema and the supplemental logging specifications are configured at the destination database. Therefore, you do not need to create the triggers or add supplemental logging manually at the destination databases.
Replicated schemas and database objects exist at the spoke databases: During instantiation, the triggers in the schema and the supplemental logging specifications are not configured at the destination database. Therefore, you must create the triggers and add supplemental logging manually at the destination databases.
In either case, the instantiation does not configure the update conflict handlers at the destination database. Therefore, you must run the SET_UPDATE_CONFLICT_HANDLER procedure to configure the update conflict handlers, as in the sketch after this list.
In this example, the hr schema already exists at the hub.net, spoke1.net, and spoke2.net databases. Therefore, you must create the triggers, add supplemental logging, and configure the update conflict handlers at all of the databases.
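For example, the following sketch configures a MAXIMUM (latest time) update conflict handler for the hr.regions table. It assumes the table has been extended with a time column, as described in "Example: Configuring Latest Time Conflict Resolution for a Table"; adjust the column list for each replicated table:
DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  -- List the columns covered by the handler; the resolution column must be included.
  cols(1) := 'region_name';
  cols(2) := 'time';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'hr.regions',
    method_name       => 'MAXIMUM',
    resolution_column => 'time',
    column_list       => cols);
END;
/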
Configure network connectivity so that the following databases can communicate with each other:
The hub.net database and the spoke1.net database
The hub.net database and the spoke2.net database
See Oracle Database 2 Day DBA for information about configuring network connectivity between databases.
Create a database link from the hub database to each spoke database and from each spoke database to the hub database. In this example, create the following database links:
From the hub.net database to the spoke1.net database
From the hub.net database to the spoke2.net database
From the spoke1.net database to the hub.net database
From the spoke2.net database to the hub.net database
Each database link should be created in the Oracle Streams administrator's schema. Also, each database link should connect to the Oracle Streams administrator at the destination database. See "Example: Creating a Database Link" for instructions.
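For example, the following statement, run as strmadmin at the hub.net database, is a minimal sketch of creating the link to the spoke1.net database; the password and the service name in the USING clause are placeholders for your environment:
CREATE DATABASE LINK spoke1.net
  CONNECT TO strmadmin IDENTIFIED BY user-password
  USING 'spoke1.net';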
Set initialization parameters properly at each database that will participate in the Oracle Streams replication environment. See "Preparing for Oracle Streams Replication" for instructions.
Configure each source database to run in ARCHIVELOG mode. For a capture process to capture changes generated at a source database, the source database must be running in ARCHIVELOG mode. In this example, all databases must be running in ARCHIVELOG mode. See Oracle Database Administrator's Guide for information about configuring a database to run in ARCHIVELOG mode.
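For example, the following SQL*Plus sketch, run as a user with SYSDBA privileges, checks the current log mode and, if necessary, enables ARCHIVELOG mode:
SELECT LOG_MODE FROM V$DATABASE;

SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;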
In SQL*Plus, connect to the spoke1.net database as the Oracle Streams administrator:
CONNECT strmadmin/user-password@spoke1.net
Create a directory object to hold files that will be generated by the MAINTAIN_SCHEMAS procedure. The directory object can point to any accessible directory on the computer system. For example, the following statement creates a directory object named hns_dir that points to the /usr/db_files directory:
CREATE DIRECTORY hns_dir AS '/usr/db_files';
In SQL*Plus, connect to the spoke2.net database as the Oracle Streams administrator:
CONNECT strmadmin/user-password@spoke2.net
Create a directory object to hold files that will be generated by the MAINTAIN_SCHEMAS procedure. The directory object can point to any accessible directory on the computer system. For example, the following statement creates a directory object named hns_dir that points to the /usr/db_files directory:
CREATE DIRECTORY hns_dir AS '/usr/db_files';
In SQL*Plus, connect to the hub.net database as the Oracle Streams administrator:
CONNECT strmadmin/user-password@hub.net
Create a directory object to hold files that will be generated by the MAINTAIN_SCHEMAS procedure. The directory object can point to any accessible directory on the computer system. For example, the following statement creates a directory object named hns_dir that points to the /usr/db_files directory:
CREATE DIRECTORY hns_dir AS '/usr/db_files';
While still connected in SQL*Plus to the hub.net database as the Oracle Streams administrator, run the MAINTAIN_SCHEMAS procedure to configure the replication between the hub.net database and the spoke1.net database:
BEGIN
  DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS(
    schema_names                 => 'hr',
    source_directory_object      => 'hns_dir',
    destination_directory_object => 'hns_dir',
    source_database              => 'hub.net',
    destination_database         => 'spoke1.net',
    capture_name                 => 'capture_hns',
    capture_queue_table          => 'source_hns_qt',
    capture_queue_name           => 'source_hns',
    propagation_name             => 'propagation_spoke1',
    apply_name                   => 'apply_spoke1',
    apply_queue_table            => 'destination_spoke1_qt',
    apply_queue_name             => 'destination_spoke1',
    bi_directional               => TRUE);
END;
/
The MAINTAIN_SCHEMAS procedure can take some time to run because it is performing many configuration tasks. Do not allow data manipulation language (DML) or data definition language (DDL) changes to the replicated database objects at the destination database while the procedure is running.
If you encounter any errors when you run the MAINTAIN_SCHEMAS procedure, then see Oracle Streams Replication Administrator's Guide for instructions about using the RECOVER_OPERATION procedure in the DBMS_STREAMS_ADM package to recover from these errors.
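For example, the following sketch finds the identifier of the failed configuration operation in the DBA_RECOVERABLE_SCRIPT view and rolls the operation forward; the script identifier shown is a placeholder that you must replace with the value returned by the query:
SELECT SCRIPT_ID, CREATION_TIME, STATUS FROM DBA_RECOVERABLE_SCRIPT;

BEGIN
  DBMS_STREAMS_ADM.RECOVER_OPERATION(
    script_id      => 'replace_with_script_id',
    operation_mode => 'FORWARD');
END;
/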
While still connected in SQL*Plus to the hub.net database as the Oracle Streams administrator, run the MAINTAIN_SCHEMAS procedure to configure the replication between the hub.net database and the spoke2.net database:
BEGIN
  DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS(
    schema_names                 => 'hr',
    source_directory_object      => 'hns_dir',
    destination_directory_object => 'hns_dir',
    source_database              => 'hub.net',
    destination_database         => 'spoke2.net',
    capture_name                 => 'capture_hns',
    capture_queue_table          => 'source_hns_qt',
    capture_queue_name           => 'source_hns',
    propagation_name             => 'propagation_spoke2',
    apply_name                   => 'apply_spoke2',
    apply_queue_table            => 'destination_spoke2_qt',
    apply_queue_name             => 'destination_spoke2',
    bi_directional               => TRUE);
END;
/
The MAINTAIN_SCHEMAS procedure can take some time to run because it is performing many configuration tasks. Do not allow data manipulation language (DML) or data definition language (DDL) changes to the replicated database objects at the destination database while the procedure is running.
If you encounter any errors when you run the MAINTAIN_SCHEMAS procedure, then see Oracle Streams Replication Administrator's Guide for instructions about using the RECOVER_OPERATION procedure in the DBMS_STREAMS_ADM package to recover from these errors.
Note:
In this example, you configured the update conflict handlers at the spoke databases in Step 1b. However, if the replicated schema did not exist at the spoke databases before you ran the MAINTAIN_SCHEMAS procedure, then you should configure the update conflict handlers now. See Step 5 in "Example: Configuring Latest Time Conflict Resolution for a Table" for instructions.
When you complete the example, a hub-and-spoke replication environment with the following characteristics is configured:
Supplemental logging is configured for the tables in the hr schema at each database.
Each database has a capture process named capture_hns. The capture process captures changes to the hr schema at the database.
Each database has a queue named source_hns that uses a queue table named source_hns_qt. This queue is for the capture process at the database.
The hub database hub.net has the following additional components:
An apply process named apply_spoke1. The apply process applies changes to the hr schema that were sent from the spoke1.net database.
A queue named destination_spoke1 that uses a queue table named destination_spoke1_qt. This queue is for the apply_spoke1 apply process at the database.
An apply process named apply_spoke2. The apply process applies changes to the hr schema that were sent from the spoke2.net database.
A queue named destination_spoke2 that uses a queue table named destination_spoke2_qt. This queue is for the apply_spoke2 apply process at the database.
A propagation named propagation_spoke1. The propagation sends changes to the hr schema from the source_hns queue to the destination_spoke1 queue at the spoke1.net database.
A propagation named propagation_spoke2. The propagation sends changes to the hr schema from the source_hns queue to the destination_spoke2 queue at the spoke2.net database.
The spoke database spoke1.net has the following additional components:
An apply process named apply_spoke1. The apply process applies changes to the hr schema that were sent from the hub.net database.
A queue named destination_spoke1 that uses a queue table named destination_spoke1_qt. This queue is for the apply_spoke1 apply process at the database.
A propagation named propagation_spoke1. The propagation sends changes to the hr schema from the source_hns queue to the destination_spoke1 queue at the hub.net database.
The spoke database spoke2.net has the following additional components:
An apply process named apply_spoke2. The apply process applies changes to the hr schema that were sent from the hub.net database.
A queue named destination_spoke2 that uses a queue table named destination_spoke2_qt. This queue is for the apply_spoke2 apply process at the database.
A propagation named propagation_spoke2. The propagation sends changes to the hr schema from the source_hns queue to the destination_spoke2 queue at the hub.net database.
Tags are used to avoid change cycling in the following way:
Each apply process uses an apply tag that is unique in the replication environment, and redo records for changes applied by the apply process include the tag.
Each capture process captures all of the changes to the replicated database objects, regardless of the tag in the redo record. Therefore, each capture process captures the changes applied by the apply processes on its source database.
Each propagation sends all changes made to the replicated database objects to another database in the replication environment, except for changes that originated at the other database. The propagation rules instruct the propagation to discard these changes.
See "About Tags for Avoiding Change Cycling" and Oracle Database PL/SQL Packages and Types Reference for more information about how the replication environment avoids change cycling.
To replicate changes:
At one of the databases, make DML changes to any table in the hr schema.
After some time has passed to allow for replication of the changes, use SQL*Plus to query the modified table at the other databases to view the DML changes.
See Also:
Chapter 5, "Administering an Oracle Streams Replication Environment"
Oracle Streams Replication Administrator's Guide for more detailed instructions about using the MAINTAIN_SCHEMAS procedure
Oracle Database PL/SQL Packages and Types Reference for reference information about the MAINTAIN_SCHEMAS procedure
The example in this topic configures an Oracle Streams replication environment that replicates data manipulation language (DML) changes to two tables in the hr schema. This example uses a synchronous capture at each database to capture these changes. Hub-and-spoke replication means that a central hub database replicates changes with one or more spoke databases. The spoke databases do not communicate with each other directly. However, the hub database might send changes generated at one spoke database to another spoke database.
Specifically, this example configures the following Oracle Streams replication environment:
The hr.employees table is shared at the hub.net and spoke.net databases.
The hr.departments table is shared at the hub.net and spoke.net databases.
The two databases replicate all of the DML changes to these tables. In this example, the global names of the databases in the Oracle Streams replication environment are hub.net and spoke.net. The hub.net database is the hub, and the spoke.net database is the single spoke in this hub-and-spoke configuration. However, you can substitute any two databases in your environment to complete the example.
Also, this example uses tables in the hr sample schema. The hr sample schema is installed by default with Oracle Database.
Figure 4-12 provides an overview of the environment created in this example.
Figure 4-12 Sample Hub-and-Spoke Environment with Synchronous Captures and a Read/Write Spoke
Complete the following tasks before you start this example:
Configure network connectivity so that the two databases can communicate with each other. See Oracle Database 2 Day DBA for information about configuring network connectivity between databases.
Configure an Oracle Streams administrator at each database that will participate in the replication environment. See "Example: Creating an Oracle Streams Administrator" for instructions. This example assumes that the Oracle Streams administrator is strmadmin.
Set initialization parameters properly at each database that will participate in the Oracle Streams replication environment. See "Preparing for Oracle Streams Replication" for instructions.
Configure latest time conflict resolution for the hr.departments and hr.employees tables at the hub.net and spoke.net databases. See "Example: Configuring Latest Time Conflict Resolution for a Table" for instructions.
Ensure that the hr.employees and hr.departments tables exist at the two databases and are consistent at these databases. If the database objects exist at only one database, then you can use export/import to create and populate them at the other database, as in the sketch below. See Oracle Database Utilities for information about export/import.
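For example, the following operating system commands sketch an Oracle Data Pump export and import of the two tables. The directory object dp_dir is an assumption for this illustration; it must exist at each database, and the dump file must be copied between the corresponding directories:
expdp system/user-password@hub.net TABLES=hr.employees,hr.departments DIRECTORY=dp_dir DUMPFILE=hr_tabs.dmp
impdp system/user-password@spoke.net TABLES=hr.employees,hr.departments DIRECTORY=dp_dir DUMPFILE=hr_tabs.dmp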
To configure this replication environment with synchronous capture:
Create two ANYDATA queues at each database. See "Creating an ANYDATA Queue" for instructions. For this example, create the following two queues at each database (see the sketch after this list):
A queue named capture_queue owned by the Oracle Streams administrator strmadmin. This queue will be used by the synchronous capture at the database.
A queue named apply_queue owned by the Oracle Streams administrator strmadmin. This queue will be used by the apply process at the database.
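For example, the following sketch, run as strmadmin at each database, creates both queues with the SET_UP_QUEUE procedure; the queue table names are assumptions chosen for this illustration:
BEGIN
  -- Queue for the synchronous capture at the database.
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strmadmin.capture_queue_qt',
    queue_name  => 'strmadmin.capture_queue');
  -- Queue for the apply process at the database.
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strmadmin.apply_queue_qt',
    queue_name  => 'strmadmin.apply_queue');
END;
/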
Create a database link from each database to the other database:
Create a database link from the hub.net database to the spoke.net database. The database link should be created in the Oracle Streams administrator's schema. Also, the database link should connect to the Oracle Streams administrator at the spoke.net database.
Create a database link from the spoke.net database to the hub.net database. The database link should be created in the Oracle Streams administrator's schema. Also, the database link should connect to the Oracle Streams administrator at the hub.net database.
See "Example: Creating a Database Link" for instructions.
Configure an apply process at the hub.net database. This apply process will apply changes to the shared tables that were captured at the spoke.net database and propagated to the hub.net database.
Open SQL*Plus and connect to the hub.net database as the Oracle Streams administrator:
sqlplus strmadmin/user-password@hub.net
See Oracle Database 2 Day DBA for more information about starting SQL*Plus.
Create a rule set for the new apply process:
BEGIN
  DBMS_RULE_ADM.CREATE_RULE_SET(
    rule_set_name      => 'strmadmin.apply_rules',
    evaluation_context => 'SYS.STREAMS$_EVALUATION_CONTEXT');
END;
/
Create the apply process:
BEGIN
  DBMS_APPLY_ADM.CREATE_APPLY(
    queue_name     => 'strmadmin.apply_queue',
    apply_name     => 'apply_emp_dep',
    rule_set_name  => 'strmadmin.apply_rules',
    apply_captured => FALSE);
END;
/
The apply_captured parameter is set to FALSE because the apply process applies changes in the persistent queue. These are changes that were captured by a synchronous capture. The apply_captured parameter should be set to TRUE only when the apply process applies changes captured by a capture process.
Do not start the apply process.
Alter the apply process so that the redo records generated by the apply process have a unique tag in the replication environment:
BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name => 'apply_emp_dep',
    apply_tag  => HEXTORAW('1'));
END;
/
To avoid change cycling, this example configures the propagation at the current hub.net database to discard changes with a tag value that is the hexadecimal equivalent of '1'. Therefore, the changes that originated at the spoke.net database are not sent back to the spoke.net database. If you have multiple spokes in a hub-and-spoke replication environment, then a different apply process applies changes from each spoke, and each apply process uses a different apply tag. See "About Tags for Avoiding Change Cycling".
Add a rule to the apply process rule set:
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name      => 'hr.employees',
    streams_type    => 'apply',
    streams_name    => 'apply_emp_dep',
    queue_name      => 'strmadmin.apply_queue',
    source_database => 'spoke.net');
END;
/
This rule instructs the apply process apply_emp_dep to apply all DML changes to the hr.employees table that appear in the apply_queue queue. The rule also specifies that the apply process applies only changes that were captured at the spoke.net source database.
Add additional rules to the apply process rule set:
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name      => 'hr.departments',
    streams_type    => 'apply',
    streams_name    => 'apply_emp_dep',
    queue_name      => 'strmadmin.apply_queue',
    source_database => 'spoke.net');
END;
/
This rule instructs the apply process apply_emp_dep to apply all DML changes to the hr.departments table that appear in the apply_queue queue. The rule also specifies that the apply process applies only changes that were captured at the spoke.net source database.
Configure an apply process at the spoke.net database. This apply process will apply changes that were captured at the hub.net database and propagated to the spoke.net database.
In SQL*Plus, connect to the spoke.net database as the Oracle Streams administrator:
CONNECT strmadmin/user-password@spoke.net
Create a rule set for the new apply process:
BEGIN
  DBMS_RULE_ADM.CREATE_RULE_SET(
    rule_set_name      => 'strmadmin.apply_rules',
    evaluation_context => 'SYS.STREAMS$_EVALUATION_CONTEXT');
END;
/
Create the apply process:
BEGIN
  DBMS_APPLY_ADM.CREATE_APPLY(
    queue_name     => 'strmadmin.apply_queue',
    apply_name     => 'apply_emp_dep',
    rule_set_name  => 'strmadmin.apply_rules',
    apply_captured => FALSE);
END;
/
The apply_captured parameter is set to FALSE because the apply process applies changes in the persistent queue. These changes were captured by a synchronous capture. The apply_captured parameter should be set to TRUE only when the apply process applies changes captured by a capture process.
You do not need to alter the apply tag for this apply process. If you use a procedure in the DBMS_STREAMS_ADM package to create an apply process, then the apply process generates non-NULL tags with a value of '00' in the redo log by default. Typically, spoke databases in hub-and-spoke replication environments contain only one apply process. So, in this case, the default '00' non-NULL tags are sufficient for changes that originated at the hub database. See "About Tags for Avoiding Change Cycling".
Do not start the apply process.
Add a rule to the apply process rule set:
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name      => 'hr.employees',
    streams_type    => 'apply',
    streams_name    => 'apply_emp_dep',
    queue_name      => 'strmadmin.apply_queue',
    source_database => 'hub.net');
END;
/
This rule instructs the apply process apply_emp_dep to apply all DML changes that appear in the apply_queue queue to the hr.employees table. The rule also specifies that the apply process applies only changes that were captured at the hub.net source database.
Add additional rules to the apply process rule set:
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name      => 'hr.departments',
    streams_type    => 'apply',
    streams_name    => 'apply_emp_dep',
    queue_name      => 'strmadmin.apply_queue',
    source_database => 'hub.net');
END;
/
This rule instructs the apply process apply_emp_dep to apply all DML changes that appear in the apply_queue queue to the hr.departments table. The rule also specifies that the apply process applies only changes that were captured at the hub.net source database.
Create a propagation to send changes from a queue at the hub.net database to a queue at the spoke.net database:
In SQL*Plus, connect to the hub.net database as the Oracle Streams administrator:
CONNECT strmadmin/user-password@hub.net
Create the propagation that sends changes to the spoke.net database:
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name             => 'hr.employees',
    streams_name           => 'send_emp_dep',
    source_queue_name      => 'strmadmin.capture_queue',
    destination_queue_name => 'strmadmin.apply_queue@spoke.net',
    include_tagged_lcr     => TRUE,
    source_database        => 'hub.net',
    inclusion_rule         => TRUE,
    and_condition          => ':lcr.get_tag() IS NULL OR ' ||
                              ':lcr.get_tag()!=HEXTORAW(''1'')',
    queue_to_queue         => TRUE);
END;
/
The ADD_TABLE_PROPAGATION_RULES procedure creates the propagation and its positive rule set. This procedure also adds a rule to the propagation rule set that instructs it to send DML changes to the hr.employees table to the apply_queue queue in the spoke.net database.
The condition specified in the and_condition parameter is added to the rule condition to avoid change cycling. Specifically, it ensures that a change is not sent to the spoke.net database if it originated at the spoke.net database. See "About Tags for Avoiding Change Cycling".
Add additional rules to the propagation rule set:
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name             => 'hr.departments',
    streams_name           => 'send_emp_dep',
    source_queue_name      => 'strmadmin.capture_queue',
    destination_queue_name => 'strmadmin.apply_queue@spoke.net',
    include_tagged_lcr     => TRUE,
    source_database        => 'hub.net',
    inclusion_rule         => TRUE,
    and_condition          => ':lcr.get_tag() IS NULL OR ' ||
                              ':lcr.get_tag()!=HEXTORAW(''1'')',
    queue_to_queue         => TRUE);
END;
/
The ADD_TABLE_PROPAGATION_RULES procedure adds a rule to the propagation rule set that instructs it to send DML changes to the hr.departments table to the apply_queue queue in the spoke.net database.
The condition specified in the and_condition parameter is added to the rule condition to avoid change cycling. Specifically, it ensures that a change is not sent to the spoke.net database if it originated at the spoke.net database. See "About Tags for Avoiding Change Cycling".
Create a propagation to send changes from a queue at the spoke.net database to a queue at the hub.net database:
In SQL*Plus, connect to the spoke.net database as the Oracle Streams administrator:
CONNECT strmadmin/user-password@spoke.net
Create the propagation that sends changes to the hub.net database:
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name             => 'hr.employees',
    streams_name           => 'send_emp_dep',
    source_queue_name      => 'strmadmin.capture_queue',
    destination_queue_name => 'strmadmin.apply_queue@hub.net',
    include_tagged_lcr     => FALSE,
    source_database        => 'spoke.net',
    inclusion_rule         => TRUE,
    queue_to_queue         => TRUE);
END;
/
The ADD_TABLE_PROPAGATION_RULES procedure creates the propagation and its positive rule set. This procedure also adds a rule to the propagation rule set that instructs it to send DML changes to the hr.employees table to the apply_queue queue in the hub.net database.
The include_tagged_lcr parameter is set to FALSE to avoid change cycling. Any changes applied by the apply process at the spoke.net database have a tag that is the hexadecimal equivalent of '00', but the propagation sends only changes with NULL tags. Therefore, any changes that originated at other databases are not sent back to the hub database. See "About Tags for Avoiding Change Cycling".
Add additional rules to the propagation rule set:
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name             => 'hr.departments',
    streams_name           => 'send_emp_dep',
    source_queue_name      => 'strmadmin.capture_queue',
    destination_queue_name => 'strmadmin.apply_queue@hub.net',
    include_tagged_lcr     => FALSE,
    source_database        => 'spoke.net',
    inclusion_rule         => TRUE,
    queue_to_queue         => TRUE);
END;
/
The ADD_TABLE_PROPAGATION_RULES procedure adds a rule to the propagation rule set that instructs it to send DML changes to the hr.departments table to the apply_queue queue in the hub.net database.
The include_tagged_lcr parameter is set to FALSE to avoid change cycling. Any changes applied by the apply process at the spoke.net database have a tag that is the hexadecimal equivalent of '00', but the propagation sends only changes with NULL tags. Therefore, any changes that originated at other databases are not sent back to the hub database. See "About Tags for Avoiding Change Cycling".
Configure a synchronous capture at the hub.net database:
In SQL*Plus, connect to the hub.net database as the Oracle Streams administrator:
CONNECT strmadmin/user-password@hub.net
Run the ADD_TABLE_RULES procedure to create the synchronous capture and add a rule to instruct it to capture changes to the hr.employees table:
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'hr.employees',
    streams_type => 'sync_capture',
    streams_name => 'capture_emp_dep',
    queue_name   => 'strmadmin.capture_queue');
END;
/
Run the ADD_TABLE_RULES procedure again to add a rule that instructs the synchronous capture to capture changes to the hr.departments table:
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'hr.departments',
    streams_type => 'sync_capture',
    streams_name => 'capture_emp_dep',
    queue_name   => 'strmadmin.capture_queue');
END;
/
Running these procedures performs the following actions:
Creates a synchronous capture named capture_emp_dep at the current database. A synchronous capture with the same name must not exist.
Enables the synchronous capture. A synchronous capture cannot be disabled.
Associates the synchronous capture with an existing queue named capture_queue owned by strmadmin.
Creates a positive rule set for synchronous capture capture_emp_dep. The rule set has a system-generated name.
Creates a rule that captures DML changes to the hr.employees table and adds the rule to the positive rule set for the synchronous capture. The rule has a system-generated name.
Prepares the hr.employees table for instantiation by running the DBMS_CAPTURE_ADM.PREPARE_SYNC_INSTANTIATION function for the table automatically.
Creates a rule that captures DML changes to the hr.departments table and adds the rule to the positive rule set for the synchronous capture. The rule has a system-generated name.
Prepares the hr.departments table for instantiation by running the DBMS_CAPTURE_ADM.PREPARE_SYNC_INSTANTIATION function for the table automatically.
Configure a synchronous capture at the spoke.net database:
In SQL*Plus, connect to the spoke.net database as the Oracle Streams administrator:
CONNECT strmadmin/user-password@spoke.net
Run the ADD_TABLE_RULES procedure to create the synchronous capture and add a rule to instruct it to capture changes to the hr.employees table:
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'hr.employees',
    streams_type => 'sync_capture',
    streams_name => 'capture_emp_dep',
    queue_name   => 'strmadmin.capture_queue');
END;
/
Run the ADD_TABLE_RULES procedure again to add a rule that instructs the synchronous capture to capture changes to the hr.departments table:
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'hr.departments',
    streams_type => 'sync_capture',
    streams_name => 'capture_emp_dep',
    queue_name   => 'strmadmin.capture_queue');
END;
/
Step 7 describes the actions performed by these procedures at the current database.
Set the instantiation SCN for the tables at the spoke.net database:
In SQL*Plus, connect to the hub.net database as the Oracle Streams administrator:
CONNECT strmadmin/user-password@hub.net
Set the instantiation SCN for the hr.employees table at the spoke.net database:
DECLARE
  iscn NUMBER; -- Variable to hold instantiation SCN value
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN@spoke.net(
    source_object_name   => 'hr.employees',
    source_database_name => 'hub.net',
    instantiation_scn    => iscn);
END;
/
Set the instantiation SCN for the hr.departments table at the spoke.net database:
DECLARE
  iscn NUMBER; -- Variable to hold instantiation SCN value
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN@spoke.net(
    source_object_name   => 'hr.departments',
    source_database_name => 'hub.net',
    instantiation_scn    => iscn);
END;
/
An instantiation SCN is the lowest SCN for which an apply process can apply changes to a table. Before the apply process can apply changes to the shared tables at the spoke.net database, an instantiation SCN must be set for each table.
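For example, after setting the instantiation SCNs, you can verify them with a query such as the following sketch, run at the database where they were set:
SELECT SOURCE_DATABASE, SOURCE_OBJECT_OWNER, SOURCE_OBJECT_NAME, INSTANTIATION_SCN
  FROM DBA_APPLY_INSTANTIATED_OBJECTS;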
Set the instantiation SCN for the tables at the hub.net database:
In SQL*Plus, connect to the spoke.net database as the Oracle Streams administrator:
CONNECT strmadmin/user-password@spoke.net
Set the instantiation SCN for the hr.employees table at the hub.net database:
DECLARE
  iscn NUMBER; -- Variable to hold instantiation SCN value
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN@hub.net(
    source_object_name   => 'hr.employees',
    source_database_name => 'spoke.net',
    instantiation_scn    => iscn);
END;
/
Set the instantiation SCN for the hr.departments table at the hub.net database:
DECLARE
  iscn NUMBER; -- Variable to hold instantiation SCN value
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN@hub.net(
    source_object_name   => 'hr.departments',
    source_database_name => 'spoke.net',
    instantiation_scn    => iscn);
END;
/
Start the apply process at each database:
In SQL*Plus, connect to the hub.net database as the Oracle Streams administrator:
CONNECT strmadmin/user-password@hub.net
Start the apply process:
BEGIN
  DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'apply_emp_dep');
END;
/
In SQL*Plus, connect to the spoke.net database as the Oracle Streams administrator:
CONNECT strmadmin/user-password@spoke.net
Start the apply process:
BEGIN
  DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'apply_emp_dep');
END;
/
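For example, as a quick verification sketch, you can confirm at each database that the apply process is enabled by querying the DBA_APPLY view:
SELECT APPLY_NAME, STATUS FROM DBA_APPLY;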
A hub-and-spoke replication environment with the following characteristics is configured:
Each database has a synchronous capture named capture_emp_dep. The synchronous capture captures all DML changes to the hr.employees and hr.departments tables.
Each database has a queue named capture_queue. This queue is for the synchronous capture at the database.
Each database has an apply process named apply_emp_dep. The apply process applies all DML changes to the hr.employees and hr.departments tables.
Each database has a queue named apply_queue. This queue is for the apply process at the database.
Each database has a propagation named send_emp_dep. The propagation sends changes from the capture_queue in the local database to the apply_queue in the other database. The propagation sends all DML changes to the hr.employees and hr.departments tables.
Tags are used to avoid change cycling in the following way:
Each apply process uses an apply tag that is unique in the replication environment, and redo records for changes applied by the apply process include the tag.
Each synchronous capture captures all of the changes to the replicated database objects, regardless of the tag in the redo record. Therefore, each synchronous capture captures the changes applied by the apply processes on its source database.
Each propagation sends all changes made to the replicated database objects to the other database in the replication environment, except for changes that originated at the other database. The propagation rules instruct the propagation to discard these changes.
See "About Tags for Avoiding Change Cycling" and Oracle Streams Replication Administrator's Guide for more information about how hub-and-spoke replication environments avoid change cycling.
To replicate changes:
At one of the databases, make DML changes to the hr.employees table or hr.departments table.
After some time has passed to allow for replication of the changes, use SQL*Plus to query the hr.employees or hr.departments table at the other database to view the changes.