Oracle® Streams Concepts and Administration 10g Release 1 (10.1) Part Number B10727-01
This chapter contains information about identifying and resolving common problems in a Streams environment.
See Also:
Oracle Streams Replication Administrator's Guide for more information about troubleshooting Streams replication environments
If a capture process is not capturing changes as expected, or if you are having other problems with a capture process, then use the following checklist to identify and resolve capture problems:
A capture process captures changes only when it is enabled. You can check whether a capture process is enabled, disabled, or aborted by querying the DBA_CAPTURE data dictionary view.

For example, to check whether a capture process named CAPTURE is enabled, run the following query:
SELECT STATUS FROM DBA_CAPTURE WHERE CAPTURE_NAME = 'CAPTURE';
If the capture process is disabled, then your output looks similar to the following:
STATUS
--------
DISABLED
If the capture process is disabled, then try restarting it. If the capture process is aborted, then you may need to correct an error before you can restart it successfully. To determine why the capture process aborted, query the DBA_CAPTURE data dictionary view or check the trace file for the capture process.
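If the problem is only that the capture process is disabled, restarting it may be enough. A minimal sketch, assuming the capture process is named capture as in the query above:

BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'capture');
END;
/

If the capture process aborted because of an error, correct the error before restarting, or the capture process is likely to abort again.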
The following query shows when the capture process aborted and the error that caused it to abort:
COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A10
COLUMN STATUS_CHANGE_TIME HEADING 'Abort Time'
COLUMN ERROR_NUMBER HEADING 'Error Number' FORMAT 99999999
COLUMN ERROR_MESSAGE HEADING 'Error Message' FORMAT A40

SELECT CAPTURE_NAME, STATUS_CHANGE_TIME, ERROR_NUMBER, ERROR_MESSAGE
  FROM DBA_CAPTURE WHERE STATUS='ABORTED';
If a capture process has not captured recent changes, then the cause may be that the capture process has fallen behind. To check, you can query the V$STREAMS_CAPTURE dynamic performance view. If capture process latency is high, then you may be able to improve performance by adjusting the setting of the parallelism capture process parameter.
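For example, the following sketch estimates capture latency in seconds and then raises parallelism with the SET_PARAMETER procedure in the DBMS_CAPTURE_ADM package. The capture process name strm01_capture and the value 3 are illustrative:

SELECT CAPTURE_NAME,
       (SYSDATE - CAPTURE_MESSAGE_CREATE_TIME)*86400 LATENCY_SECONDS
  FROM V$STREAMS_CAPTURE
  WHERE CAPTURE_NAME = 'STRM01_CAPTURE';

BEGIN
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'strm01_capture',
    parameter    => 'parallelism',
    value        => '3');
END;
/

Higher parallelism consumes more parallel execution servers, so increase the value gradually and measure the effect.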
When a capture process is started or restarted, it may need to scan redo log files that were generated before the log file that contains the start SCN. A capture process must scan these records to keep track of DDL changes to database objects. You can query the DBA_CAPTURE data dictionary view to determine the first SCN and start SCN for a capture process. Removing required redo log files before they are scanned by a capture process causes the capture process to abort and results in the following error in a capture process trace file:
ORA-01291: missing logfile
If you see this error, then try restoring any missing redo log file and restarting the capture process. You can check the V$LOGMNR_LOGS dynamic performance view to determine the missing SCN range, and add the relevant redo log files. A capture process needs the redo log file that includes the required checkpoint SCN, and all subsequent redo log files. You can query the REQUIRED_CHECKPOINT_SCN column in the DBA_CAPTURE data dictionary view to determine the required checkpoint SCN for a capture process.
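For example, the following query shows the relevant SCN values for each capture process; redo log files containing changes at or above the required checkpoint SCN must be retained:

SELECT CAPTURE_NAME, FIRST_SCN, START_SCN, REQUIRED_CHECKPOINT_SCN
  FROM DBA_CAPTURE;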
If a downstream capture process is not capturing changes, then it may be waiting for redo log files to scan. Redo log files may be registered implicitly or explicitly for a downstream capture process. If redo log files are registered implicitly, then, typically, log transport services transfers the redo log files from the source database to the downstream database. If redo log files are registered explicitly, then you must manually transfer the redo log files to the downstream database and register them with the downstream capture process. In either case, the downstream capture process can capture changes made to the source database only if the appropriate redo log files are registered with the downstream capture process.
You can query the V$STREAMS_CAPTURE dynamic performance view to determine whether a downstream capture process is waiting for a redo log file. For example, run the following query for a downstream capture process named strm05_capture:
SELECT STATE FROM V$STREAMS_CAPTURE WHERE CAPTURE_NAME='STRM05_CAPTURE';
If the capture process state is either WAITING FOR DICTIONARY REDO or WAITING FOR REDO, then verify that the redo log files have been registered with the downstream capture process by querying the DBA_REGISTERED_ARCHIVED_LOG and DBA_CAPTURE data dictionary views. For example, the following query lists the redo log files currently registered with the strm05_capture downstream capture process:
COLUMN SOURCE_DATABASE HEADING 'Source|Database' FORMAT A15
COLUMN SEQUENCE# HEADING 'Sequence|Number' FORMAT 9999999
COLUMN NAME HEADING 'Archived Redo Log|File Name' FORMAT A30
COLUMN DICTIONARY_BEGIN HEADING 'Dictionary|Build|Begin' FORMAT A10
COLUMN DICTIONARY_END HEADING 'Dictionary|Build|End' FORMAT A10

SELECT r.SOURCE_DATABASE,
       r.SEQUENCE#,
       r.NAME,
       r.DICTIONARY_BEGIN,
       r.DICTIONARY_END
  FROM DBA_REGISTERED_ARCHIVED_LOG r, DBA_CAPTURE c
  WHERE c.CAPTURE_NAME = 'STRM05_CAPTURE'
    AND r.CONSUMER_NAME = c.CAPTURE_NAME;
If this query does not return any rows, then no redo log files are registered with the capture process currently. If you configured log transport services to transfer redo log files from the source database to the downstream database for this capture process, then make sure log transport services is configured correctly. If log transport services is configured correctly, then run the ALTER SYSTEM ARCHIVE LOG CURRENT statement at the source database to archive a log file. If you did not configure log transport services to transfer the log files, then make sure the method you are using for log file transfer and registration is working properly. You can register log files explicitly using an ALTER DATABASE REGISTER LOGICAL LOGFILE statement.
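For example, the following sketch first archives the current log file at the source database, then registers an archived log file explicitly at the downstream database. The file name and capture process name are illustrative:

ALTER SYSTEM ARCHIVE LOG CURRENT;

ALTER DATABASE REGISTER LOGICAL LOGFILE
  '/oracle/arch/dbs1/1_45_412355299.arc' FOR 'strm05_capture';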
Also, if you plan to use a downstream capture process to capture changes to historical data, then consider the following additional issues:
You must use the CREATE_CAPTURE procedure in the DBMS_CAPTURE_ADM package to create a downstream capture process. If you try to create a capture process using a procedure in the DBMS_STREAMS_ADM package and specify a source database name that does not match the global name of the local database, then Oracle returns the following error:
ORA-26678: Streams capture process must be created first
To correct the problem, use the CREATE_CAPTURE procedure in the DBMS_CAPTURE_ADM package to create the downstream capture process.

If you are trying to create a local capture process using a procedure in the DBMS_STREAMS_ADM package, and you encounter this error, then make sure the database name specified in the source_database parameter of the procedure you are running matches the global name of the local database.
When downstream capture is configured with a database link, the database link can be used to perform operations at the source database and obtain information from the source database automatically. When downstream capture is configured without a database link, these actions must be performed manually, and the information must be obtained manually. If you do not complete these actions manually, then errors result when you try to create the downstream capture process.
Specifically, the following actions must be performed manually when you configure downstream capture without a database link:
- You must run the DBMS_CAPTURE_ADM.BUILD procedure at the source database to extract the data dictionary at the source database to the redo log before a capture process is created.
- You must specify the first SCN using the first_scn parameter when you create the capture process with the CREATE_CAPTURE procedure in the DBMS_CAPTURE_ADM package.
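The two manual steps above can be sketched as follows. The queue name, capture process name, and source database name are illustrative; the first SCN printed by BUILD at the source database is the value to pass to CREATE_CAPTURE at the downstream database:

At the source database:

SET SERVEROUTPUT ON
DECLARE
  scn NUMBER;
BEGIN
  DBMS_CAPTURE_ADM.BUILD(first_scn => scn);
  DBMS_OUTPUT.PUT_LINE('First SCN: ' || scn);
END;
/

At the downstream database:

BEGIN
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name        => 'strmadmin.streams_queue',
    capture_name      => 'strm05_capture',
    source_database   => 'dbs1.net',
    use_database_link => false,
    first_scn         => 384639);  -- value returned by BUILD
END;
/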
If a propagation is not propagating changes as expected, then use the following checklist to identify and resolve propagation problems:
If events are not appearing in the destination queue for a propagation as expected, then the propagation may not be configured to propagate events from the correct source queue to the correct destination queue.
For example, to check the source queue and destination queue for a propagation named dbs1_to_dbs2, run the following query:
COLUMN SOURCE_QUEUE HEADING 'Source Queue' FORMAT A35
COLUMN DESTINATION_QUEUE HEADING 'Destination Queue' FORMAT A35

SELECT p.SOURCE_QUEUE_OWNER||'.'||
       p.SOURCE_QUEUE_NAME||'@'||
       g.GLOBAL_NAME SOURCE_QUEUE,
       p.DESTINATION_QUEUE_OWNER||'.'||
       p.DESTINATION_QUEUE_NAME||'@'||
       p.DESTINATION_DBLINK DESTINATION_QUEUE
  FROM DBA_PROPAGATION p, GLOBAL_NAME g
  WHERE p.PROPAGATION_NAME = 'DBS1_TO_DBS2';
Your output looks similar to the following:
Source Queue                        Destination Queue
----------------------------------- -----------------------------------
STRMADMIN.STREAMS_QUEUE@DBS1.NET    STRMADMIN.STREAMS_QUEUE@DBS2.NET
If the propagation is not using the correct queues, then create a new propagation. You may need to remove the existing propagation if it is not appropriate for your environment.
For a propagation job to propagate events, the propagation schedule for the propagation job must be enabled. If events are not being propagated by a propagation as expected, then the propagation's propagation job schedule may not be enabled.
You can find information about the schedule for a propagation job, including whether the schedule is enabled, the name of the job queue process used, the number of failures, and the last error, by running the query in this section.

For example, to check whether a propagation job used by a propagation named dbs1_to_dbs2 is enabled, run the following query:
COLUMN DESTINATION_DBLINK HEADING 'Destination|DB Link' FORMAT A15
COLUMN SCHEDULE_DISABLED HEADING 'Schedule' FORMAT A8
COLUMN PROCESS_NAME HEADING 'Process' FORMAT A7
COLUMN FAILURES HEADING 'Number of|Failures' FORMAT 9999
COLUMN LAST_ERROR_TIME HEADING 'Last Error Time' FORMAT A15
COLUMN LAST_ERROR_MSG HEADING 'Last Error Message' FORMAT A18

SELECT p.DESTINATION_DBLINK,
       DECODE(s.SCHEDULE_DISABLED,
              'Y', 'Disabled',
              'N', 'Enabled') SCHEDULE_DISABLED,
       s.PROCESS_NAME,
       s.FAILURES,
       s.LAST_ERROR_TIME,
       s.LAST_ERROR_MSG
  FROM DBA_QUEUE_SCHEDULES s, DBA_PROPAGATION p
  WHERE p.PROPAGATION_NAME = 'DBS1_TO_DBS2'
    AND p.DESTINATION_DBLINK = s.DESTINATION
    AND s.SCHEMA = p.SOURCE_QUEUE_OWNER
    AND s.QNAME = p.SOURCE_QUEUE_NAME;
If the schedule is enabled currently for the propagation job, then your output looks similar to the following:
Destination                       Number of
DB Link         Schedule Process   Failures Last Error Time Last Error Message
--------------- -------- ------- --------- --------------- ------------------
DBS2.NET        Enabled  J001            0
If there is a problem, then try the following actions to correct it:

- Enable the propagation schedule using the ENABLE_PROPAGATION_SCHEDULE procedure in the DBMS_AQADM package, if you have not done so already.
- If there were failures, check the error message for the propagation job process shown in the query output, for example J001, and correct the cause of each error.
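For example, assuming the source queue and destination from the earlier query, the schedule can be enabled as follows (the queue name and database link are illustrative):

BEGIN
  DBMS_AQADM.ENABLE_PROPAGATION_SCHEDULE(
    queue_name  => 'strmadmin.streams_queue',
    destination => 'DBS2.NET');
END;
/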
Propagation jobs use job queue processes to propagate events. Make sure the JOB_QUEUE_PROCESSES initialization parameter is set to 2 or higher in each database instance that runs propagations. It should be set to a value that is high enough to accommodate all of the jobs that run simultaneously.
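For example, the following statement sets the parameter dynamically; the value 10 is illustrative and should reflect the number of jobs in your environment:

ALTER SYSTEM SET JOB_QUEUE_PROCESSES = 10;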
SYS.AnyData queues are secure queues, and security must be configured properly for users to be able to perform operations on them. If you use the SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package to configure a secure SYS.AnyData queue, then an error is raised if the agent that SET_UP_QUEUE tries to create already exists and is associated with a user other than the user specified by queue_user in this procedure. In this case, rename or remove the existing agent using the ALTER_AQ_AGENT or DROP_AQ_AGENT procedure, respectively, in the DBMS_AQADM package. Next, retry SET_UP_QUEUE.
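For example, the following sketch drops an existing agent named explicit_dq before SET_UP_QUEUE is retried (the agent name is illustrative):

BEGIN
  DBMS_AQADM.DROP_AQ_AGENT(
    agent_name => 'explicit_dq');
END;
/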
In addition, you may encounter one of the following errors if security is not configured properly for a SYS.AnyData queue:
Secure queue access must be granted to an AQ agent explicitly for both enqueue and dequeue operations. You grant the agent these privileges using the ENABLE_DB_ACCESS procedure in the DBMS_AQADM package.

For example, to grant an agent named explicit_dq the privileges of the database user oe, run the following procedure:
BEGIN
  DBMS_AQADM.ENABLE_DB_ACCESS(
    agent_name  => 'explicit_dq',
    db_username => 'oe');
END;
/
To check the privileges of the agents in a database, run the following query:
SELECT AGENT_NAME "Agent", DB_USERNAME "User" FROM DBA_AQ_AGENT_PRIVS;
Your output looks similar to the following:
Agent                          User
------------------------------ ------------------------------
EXPLICIT_ENQ                   OE
APPLY_OE                       OE
EXPLICIT_DQ                    OE
See Also:
"Enabling a User to Perform Operations on a Secure Queue" for a detailed example that grants privileges to an agent
To enqueue into a secure queue, the SENDER_ID in the message properties must be set to an AQ agent with secure queue privileges for the queue.
See Also:
"Wrapping User Message Payloads in a SYS.AnyData Wrapper and Enqueuing Them" for an example that sets the SENDER_ID for an enqueue operation
If an apply process is not applying changes as expected, then use the following checklist to identify and resolve apply problems:
An apply process applies changes only when it is enabled. You can check whether an apply process is enabled, disabled, or aborted by querying the DBA_APPLY data dictionary view.

For example, to check whether an apply process named APPLY is enabled, run the following query:
SELECT STATUS FROM DBA_APPLY WHERE APPLY_NAME = 'APPLY';
If the apply process is disabled, then your output looks similar to the following:
STATUS
--------
DISABLED
If the apply process is disabled, then try restarting it. If the apply process is aborted, then you may need to correct an error before you can restart it successfully. To determine why the apply process aborted, query the DBA_APPLY data dictionary view or check the trace files for the apply process.
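A minimal restart sketch, assuming the apply process is named apply as in the query above:

BEGIN
  DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'apply');
END;
/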
The following query shows when the apply process aborted and the error that caused it to abort:
COLUMN APPLY_NAME HEADING 'APPLY|Process|Name' FORMAT A10
COLUMN STATUS_CHANGE_TIME HEADING 'Abort Time'
COLUMN ERROR_NUMBER HEADING 'Error Number' FORMAT 99999999
COLUMN ERROR_MESSAGE HEADING 'Error Message' FORMAT A40

SELECT APPLY_NAME, STATUS_CHANGE_TIME, ERROR_NUMBER, ERROR_MESSAGE
  FROM DBA_APPLY WHERE STATUS='ABORTED';
If an apply process has not applied recent changes, then the cause may be that the apply process has fallen behind. You can check apply process latency by querying the V$STREAMS_APPLY_COORDINATOR dynamic performance view. If apply process latency is high, then you may be able to improve performance by adjusting the setting of the parallelism apply process parameter.
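For example, the following sketch estimates apply latency in seconds and raises parallelism with the SET_PARAMETER procedure in the DBMS_APPLY_ADM package. The apply process name strm01_apply and the value 4 are illustrative:

SELECT APPLY_NAME,
       (HWM_TIME - HWM_MESSAGE_CREATE_TIME)*86400 LATENCY_SECONDS
  FROM V$STREAMS_APPLY_COORDINATOR;

BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'strm01_apply',
    parameter  => 'parallelism',
    value      => '4');
END;
/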
An apply process can apply either captured events or user-enqueued events, but not both types of events. If an apply process is not applying events of a certain type, then it may be because the apply process was configured to apply the other type of events. You can check the type of events applied by an apply process by querying the DBA_APPLY data dictionary view.

For example, to check whether an apply process named APPLY applies captured or user-enqueued events, run the following query:
COLUMN APPLY_CAPTURED HEADING 'Type of Events Applied' FORMAT A25

SELECT DECODE(APPLY_CAPTURED,
              'YES', 'Captured',
              'NO',  'User-Enqueued') APPLY_CAPTURED
  FROM DBA_APPLY
  WHERE APPLY_NAME = 'APPLY';
If the apply process applies captured events, then your output looks similar to the following:
Type of Events Applied
-------------------------
Captured
If an apply process is not applying the expected type of events, then you may need to create a new apply process to apply the events.
An apply process must receive events in its queue before it can apply these events. Therefore, if an apply process is applying captured events, then the capture process that captures these events must be enabled, and it must be configured properly. Similarly, if events are propagated from one or more databases before reaching the apply process, then each propagation must be enabled and must be configured properly. If a capture process or a propagation on which the apply process depends is not enabled or is not configured properly, then the events may never reach the apply process queue.
The rule sets used by all Streams clients, including capture processes and propagations, determine the behavior of these Streams clients. Therefore, make sure the rule sets for any capture processes or propagations on which an apply process depends contain the correct rules. If the rules for these Streams clients are not configured properly, then the apply process queue may never receive the appropriate events. Also, an event travelling through a stream is the composition of all of the transformations done along the path. For example, if a capture process uses subset rules and performs row migration during capture of an event, and a propagation uses a rule-based transformation on the event to change the table name, then, when the event reaches an apply process, the apply process rules must account for these transformations.
In an environment where a capture process captures changes that are propagated and applied at multiple databases, you can use the following guidelines to determine whether a problem is caused by a capture process or a propagation on which an apply process depends, or the problem is caused by the apply process itself:
You can use PL/SQL procedures to handle events dequeued by an apply process in a customized way. These handlers include DML handlers, DDL handlers, precommit handlers, and message handlers. If an apply process is not behaving as expected, then check the handler procedures used by the apply process, and correct any flaws. You can find the names of these procedures by querying the DBA_APPLY_DML_HANDLERS and DBA_APPLY data dictionary views. You may need to modify a handler procedure or remove it to correct an apply problem.
If an apply process is not applying events, but there are events that satisfy the apply process rule sets in the apply process queue, then make sure the AQ_TM_PROCESSES initialization parameter is not set to zero at the destination database. If this parameter is set to zero, then unset this parameter or set it to a nonzero value and monitor the apply process to see if it begins to apply events.

The AQ_TM_PROCESSES initialization parameter controls time monitoring on queue messages and controls processing of messages with delay and expiration properties specified. In Oracle Database 10g, the database automatically controls these activities when the AQ_TM_PROCESSES initialization parameter is not set.
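For example, the following statement sets the parameter to a nonzero value dynamically; the value 1 is illustrative:

ALTER SYSTEM SET AQ_TM_PROCESSES = 1;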
To determine whether there are captured events in a buffered queue, you can query the V$BUFFERED_QUEUES and V$BUFFERED_SUBSCRIBERS dynamic performance views. To determine whether there are user-enqueued events in a queue, you can query the queue table for the queue.
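For example, the following query shows, for each buffered queue, how many captured events are in memory and how many have spilled to disk:

SELECT QUEUE_SCHEMA, QUEUE_NAME, NUM_MSGS, SPILL_MSGS
  FROM V$BUFFERED_QUEUES;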
When an apply process cannot apply an event, it moves the event and all of the other events in the same transaction into the error queue. You should check for apply errors periodically to see if there are any transactions that could not be applied. You can check for apply errors by querying the DBA_APPLY_ERROR data dictionary view. Also, you can reexecute a particular transaction from the error queue or all of the transactions in the error queue.
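For example, the following sketch lists the error transactions and then reexecutes one of them after the underlying problem has been corrected. The transaction identifier is illustrative:

SELECT APPLY_NAME, LOCAL_TRANSACTION_ID, ERROR_NUMBER, ERROR_MESSAGE
  FROM DBA_APPLY_ERROR;

BEGIN
  DBMS_APPLY_ADM.EXECUTE_ERROR(
    local_transaction_id => '1.17.2563',
    execute_as_user      => false);
END;
/

To retry every transaction in the error queue, you can use the EXECUTE_ALL_ERRORS procedure in the same package.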
If a capture process, a propagation, an apply process, or a messaging client is not behaving as expected, then the problem may be that rules or rule-based transformations for the Streams client are not configured properly. Use the following checklist to identify and resolve problems with rules and rule-based transformations:
If a capture process, a propagation, an apply process, or a messaging client is behaving in an unexpected way, then the problem may be that the rules in either the positive or negative rule set for the Streams client are not configured properly. For example, if you expect a capture process to capture changes made to a particular table, but the capture process is not capturing these changes, then the cause may be that the rules in the rule sets used by the capture process do not instruct the capture process to capture changes to the table.
You can check the rules for a particular Streams client by querying the DBA_STREAMS_RULES data dictionary view. If you use both positive and negative rule sets in your Streams environment, then it is important to know whether a rule returned by this view is in the positive or negative rule set for a particular Streams client. A Streams client performs an action, such as capture, propagation, apply, or dequeue, for events that satisfy its rule sets. In general, an event satisfies the rule sets for a Streams client if no rules in the negative rule set evaluate to TRUE for the event, and at least one rule in the positive rule set evaluates to TRUE for the event. "Rule Sets and Rule Evaluation of Events" contains more detailed information about how an event satisfies the rule sets for a Streams client, including information about Streams client behavior when one or more rule sets are not specified.
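For example, the following query lists each rule for a Streams client named strm01_capture along with its rule set type, so you can see at a glance whether a rule is in the positive or negative rule set (the client name is illustrative):

COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A20
COLUMN RULE_SET_TYPE HEADING 'Rule Set|Type' FORMAT A10
COLUMN RULE_CONDITION HEADING 'Rule Condition' FORMAT A40

SELECT RULE_NAME, RULE_SET_TYPE, RULE_CONDITION
  FROM DBA_STREAMS_RULES
  WHERE STREAMS_NAME = 'STRM01_CAPTURE';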
This section includes the following subsections:
Schema and global rules in the positive rule set for a Streams client instruct the Streams client to perform its task for all of the events relating to a particular schema or database, respectively. Schema and global rules in the negative rule set for a Streams client instruct the Streams client to discard all of the events relating to a particular schema or database, respectively. If a Streams client is not behaving as expected, then it may be because schema or global rules are not configured properly for the Streams client.
For example, suppose a database is running an apply process named strm01_apply, and you want this apply process to apply LCRs containing changes to the hr schema. If the apply process uses a negative rule set, then make sure there are no schema rules that evaluate to TRUE for this schema in the negative rule set. Such rules cause the apply process to discard LCRs containing changes to the schema. See "Determining the Rules in the Negative Rule Set for a Streams Client" for an example of a query that shows such rules.

If the query returns any such rules, then the rules returned may be causing the apply process to discard changes to the schema. If this query returns no rows, then make sure there are schema rules in the positive rule set for the apply process that evaluate to TRUE for the schema. See "Determining the Rules in the Positive Rule Set for a Streams Client" for an example of a query that shows such rules.
Table rules in the positive rule set for a Streams client instruct the Streams client to perform its task for the events relating to one or more particular tables. Table rules in the negative rule set for a Streams client instruct the Streams client to discard the events relating to one or more particular tables.
If a Streams client is not behaving as expected for a particular table, then it may be for one of the following reasons:
If you are sure that no global or schema rules are causing the unexpected behavior, then you can check for table rules in the rule sets for a Streams client. For example, if you expect a capture process to capture changes to a particular table, but the capture process is not capturing these changes, then the cause may be that the rules in the positive and negative rule sets for the capture process do not instruct it to capture changes to the table.
Suppose a database is running a capture process named strm01_capture, and you want this capture process to capture changes to the hr.departments table. If the capture process uses a negative rule set, then make sure there are no table rules that evaluate to TRUE for this table in the negative rule set. Such rules cause the capture process to discard changes to the table. See "Determining the Rules in the Negative Rule Set for a Streams Client" for an example of a query that shows such rules.

If the query returns any such rules, then the rules returned may be causing the capture process to discard changes to the table. If this query returns no rules, then make sure there are one or more table rules in the positive rule set for the capture process that evaluate to TRUE for the table. See "Determining the Rules in the Positive Rule Set for a Streams Client" for an example of a query that shows such rules.
See Also:
"Table Rules Example" for more information about specifying table rules
A subset rule may be in the rule set used by a capture process, propagation, apply process, or messaging client. A subset rule evaluates to TRUE only if a DML operation contains a change to a particular subset of rows in the table. For example, to check for table rules that evaluate to TRUE for an apply process named strm01_apply when there are changes to the hr.departments table, run the following query:
COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A20
COLUMN RULE_TYPE HEADING 'Rule Type' FORMAT A20
COLUMN DML_CONDITION HEADING 'Subset Condition' FORMAT A30

SELECT RULE_NAME, RULE_TYPE, DML_CONDITION
  FROM DBA_STREAMS_RULES
  WHERE STREAMS_NAME = 'STRM01_APPLY'
    AND STREAMS_TYPE = 'APPLY'
    AND SCHEMA_NAME = 'HR'
    AND OBJECT_NAME = 'DEPARTMENTS';
Your output looks similar to the following:

Rule Name            Rule Type            Subset Condition
-------------------- -------------------- ------------------------------
DEPARTMENTS5         DML                  location_id=1700
DEPARTMENTS6         DML                  location_id=1700
DEPARTMENTS7         DML                  location_id=1700
Notice that this query returns any subset condition for the table in the DML_CONDITION column, which is labeled "Subset Condition" in the output. In this example, subset rules are specified for the hr.departments table. These subset rules evaluate to TRUE only if an LCR contains a change that involves a row where the location_id is 1700. So, if you expected the apply process to apply all changes to the table, then these subset rules cause the apply process to discard changes that involve rows where the location_id is not 1700.
A message rule may be in the rule set used by a propagation, apply process, or messaging client. Message rules pertain only to user-enqueued events of a specific message type, not captured events. A message rule evaluates to TRUE if a user-enqueued event in a queue is of the type specified in the message rule and satisfies the rule condition of the message rule.
If you expect a propagation, apply process, or messaging client to perform its task for certain user-enqueued events, but the Streams client is not performing its task for these events, then the cause may be that the rules in the positive and negative rule sets for the Streams client do not instruct it to perform its task for these events. Similarly, if you expect a propagation, apply process, or messaging client to discard certain user-enqueued events, but the Streams client is not discarding these events, then the cause may be that the rules in the positive and negative rule sets for the Streams client do not instruct it to discard these events.
For example, suppose you want a messaging client named oe to dequeue messages of type oe.user_msg that satisfy the following condition:
:"VAR$_2".OBJECT_OWNER = 'OE' AND :"VAR$_2".OBJECT_NAME = 'ORDERS'
If the messaging client uses a negative rule set, then make sure there are no message rules that evaluate to TRUE for this message type in the negative rule set. Such rules cause the messaging client to discard these messages. For example, to determine whether there are any such rules in the negative rule set for the messaging client, run the following query:
COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A30
COLUMN RULE_CONDITION HEADING 'Rule Condition' FORMAT A30

SELECT RULE_NAME, RULE_CONDITION
  FROM DBA_STREAMS_RULES
  WHERE STREAMS_NAME = 'OE'
    AND MESSAGE_TYPE_OWNER = 'OE'
    AND MESSAGE_TYPE_NAME = 'USER_MSG'
    AND RULE_SET_TYPE = 'NEGATIVE';
If this query returns any rules, then the rules returned may be causing the messaging client to discard messages. Examine the rule condition of the returned rules to determine whether these rules are causing the messaging client to discard the messages that it should be dequeuing. If this query returns no rules, then make sure there are message rules in the positive rule set for the messaging client that evaluate to TRUE for this message type and condition.

For example, to determine whether there are any message rules that evaluate to TRUE for this message type in the positive rule set for the messaging client, run the following query:
COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A35
COLUMN RULE_CONDITION HEADING 'Rule Condition' FORMAT A35

SELECT RULE_NAME, RULE_CONDITION
  FROM DBA_STREAMS_RULES
  WHERE STREAMS_NAME = 'OE'
    AND MESSAGE_TYPE_OWNER = 'OE'
    AND MESSAGE_TYPE_NAME = 'USER_MSG'
    AND RULE_SET_TYPE = 'POSITIVE';
If you have message rules that evaluate to TRUE for this message type in the positive rule set for the messaging client, then these rules are returned. In this case, your output looks similar to the following:
Rule Name                           Rule Condition
----------------------------------- -----------------------------------
RULE$_3                             :"VAR$_2".OBJECT_OWNER = 'OE' AND
                                    :"VAR$_2".OBJECT_NAME = 'ORDERS'
Examine the rule condition for the rules returned to determine whether they instruct the messaging client to dequeue the proper messages. Based on these results, the messaging client named oe should dequeue messages of the oe.user_msg type that satisfy the condition shown in the output. In other words, no rule in the negative messaging client rule set discards these messages, and a rule exists in the positive messaging client rule set that evaluates to TRUE when the messaging client finds a message in its queue of the oe.user_msg type that satisfies the rule condition.
If you determine that a Streams capture process, propagation, apply process, or messaging client is not behaving as expected because one or more rules must be added to the rule set for the Streams client, then you can use one of the following procedures in the DBMS_STREAMS_ADM package to add appropriate rules:
ADD_GLOBAL_PROPAGATION_RULES
ADD_GLOBAL_RULES
ADD_SCHEMA_PROPAGATION_RULES
ADD_SCHEMA_RULES
ADD_SUBSET_PROPAGATION_RULES
ADD_SUBSET_RULES
ADD_TABLE_PROPAGATION_RULES
ADD_TABLE_RULES
ADD_MESSAGE_PROPAGATION_RULE
ADD_MESSAGE_RULE
You can use the DBMS_RULE_ADM package to add customized rules, if necessary.
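For example, the following sketch adds table rules for the hr.departments table to the positive rule set of a capture process (the capture process and queue names are illustrative):

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'hr.departments',
    streams_type => 'capture',
    streams_name => 'strm01_capture',
    queue_name   => 'strmadmin.streams_queue',
    include_dml  => true,
    include_ddl  => false);
END;
/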
It is also possible that the Streams capture process, propagation, apply process, or messaging client is not behaving as expected because one or more rules should be altered or removed from a rule set.
If you have the correct rules, and the relevant events are still filtered out by a Streams capture process, propagation, or apply process, then check your trace files and alert log for a warning about a missing "multi-version data dictionary", which is a Streams data dictionary. The following information may be included in such warning messages:
- gdbnm: Global name of the source database of the missing object
- scn: SCN for the transaction that has been missed

If you find such messages, and you are using custom capture process rules or reusing existing capture process rules for a new destination database, then make sure you run the appropriate procedure to prepare for instantiation:
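For example, assuming the problem involves a single table, the following sketch prepares it for instantiation at the source database; the corresponding schema-level and database-level procedures are PREPARE_SCHEMA_INSTANTIATION and PREPARE_GLOBAL_INSTANTIATION:

BEGIN
  DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
    table_name => 'hr.departments');
END;
/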
Also, make sure propagation is working from the source database to the destination database. Streams data dictionary information is propagated to the destination database and loaded into the dictionary at the destination database.
A rule-based transformation is any user-defined modification to an event that results when a rule in a positive rule set evaluates to TRUE
. A rule-based transformation is specified in the action context of a rule, and these action contexts contain a name-value pair with STREAMS$_TRANSFORM_FUNCTION
for the name and a user-created function name for the value. This user-created function performs the transformation. If the user-created function contains any flaws, then unexpected behavior may result.
If a Streams capture process, propagation, apply process, or messaging client is not behaving as expected, then check the rule-based transformation functions specified for the Streams client and correct any flaws. You can find the names of these functions by querying the DBA_STREAMS_TRANSFORM_FUNCTION data dictionary view. You may need to modify a transformation function or remove a rule-based transformation to correct the problem. Make sure the name of the function is spelled correctly.
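For example, a query along the following lines lists each rule and its transformation function. The exact column list of this view can vary by release, so check the view definition in your database before relying on it:

```sql
COLUMN RULE_OWNER HEADING 'Rule Owner' FORMAT A15
COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A20
COLUMN TRANSFORM_FUNCTION_NAME HEADING 'Transformation Function' FORMAT A30

SELECT RULE_OWNER, RULE_NAME, TRANSFORM_FUNCTION_NAME
  FROM DBA_STREAMS_TRANSFORM_FUNCTION;
```

Verify that each function listed exists, is valid, and is spelled exactly as intended.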
Rule evaluation occurs before a rule-based transformation runs. For example, if you have a transformation that changes the name of a table from emps to employees, then make sure each rule that uses the transformation specifies the table name emps, rather than employees, in its rule condition.
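To illustrate, a transformation function for this rename might look like the following sketch. The schema name strmadmin and function name rename_emps are placeholders, and error handling is omitted; the function accepts and returns an ANYDATA payload, as Streams expects:

```sql
CREATE OR REPLACE FUNCTION strmadmin.rename_emps(in_any IN ANYDATA)
RETURN ANYDATA
IS
  lcr SYS.LCR$_ROW_RECORD;
  rc  NUMBER;
BEGIN
  -- Act only on row LCRs; pass other event types through unchanged
  IF in_any.GETTYPENAME() = 'SYS.LCR$_ROW_RECORD' THEN
    rc := in_any.GETOBJECT(lcr);
    IF lcr.GET_OBJECT_NAME() = 'EMPS' THEN
      -- Rename the table in the LCR from emps to employees
      lcr.SET_OBJECT_NAME('EMPLOYEES');
      RETURN ANYDATA.CONVERTOBJECT(lcr);
    END IF;
  END IF;
  RETURN in_any;
END;
/
```

Because the rule condition is evaluated against the original LCR, the condition must reference emps; only after the rule evaluates to TRUE does this function change the name to employees.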
Messages about each capture process, propagation job, and apply process are recorded in trace files for the database in which the process or propagation job is running. A local capture process runs on a source database, a downstream capture process runs on a downstream database, a propagation job runs on the database containing the source queue in the propagation, and an apply process runs on a destination database. These trace file messages can help you to identify and resolve problems in a Streams environment.
All trace files for background processes are written to the destination directory specified by the initialization parameter BACKGROUND_DUMP_DEST. The names of trace files are operating system specific, but each file usually includes the name of the process writing the file.
For example, on some operating systems, the trace file name for a process is sid_xxxxx_iiiii.trc, where:
sid is the system identifier for the database

xxxxx is the name of the process

iiiii is the operating system process number

Also, you can set the write_alert_log parameter to y for both a capture process and an apply process. When this parameter is set to y, which is the default setting, the alert log for the database contains messages about why the capture process or apply process stopped.
You can control the information in the trace files by setting the trace_level capture process or apply process parameter using the SET_PARAMETER procedure in the DBMS_CAPTURE_ADM and DBMS_APPLY_ADM packages.
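For example, the following sketch raises the tracing level for a capture process named capture. The capture process name is a placeholder, and the meaning of specific trace_level values is release specific, so these values are normally set under the guidance of Oracle Support:

```sql
BEGIN
  -- Increase tracing for the capture process; 0 (the default) disables extra tracing
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'capture',
    parameter    => 'trace_level',
    value        => '1');
END;
/
```

An apply process is configured the same way using DBMS_APPLY_ADM.SET_PARAMETER. Remember to set the parameter back to its default when you finish troubleshooting, because tracing adds overhead.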
Use the following checklist to check the trace files related to Streams:
A capture process is an Oracle background process named cnnn, where nnn is the capture process number. For example, on some operating systems, if the system identifier for a database running a capture process is hqdb and the capture process number is 001, then the trace file for the capture process starts with hqdb_c001.
See Also:
"Displaying General Information About Each Capture Process" for a query that displays the capture process number of a capture process
Each propagation uses a propagation job that depends on the job queue coordinator process and a job queue process. The job queue coordinator process is named cjqnn, where nn is the job queue coordinator process number, and a job queue process is named jnnn, where nnn is the job queue process number.
For example, on some operating systems, if the system identifier for a database running a propagation job is hqdb and the job queue coordinator process number is 01, then the trace file for the job queue coordinator process starts with hqdb_cjq01. Similarly, on the same database, if a job queue process number is 001, then the trace file for that job queue process starts with hqdb_j001. You can check the process name by querying the PROCESS_NAME column in the DBA_QUEUE_SCHEDULES data dictionary view.
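For example, a query along the following lines shows the job queue process handling each propagation schedule. The column selection here is illustrative; check the DBA_QUEUE_SCHEDULES view definition in your release for the full column list:

```sql
COLUMN QNAME HEADING 'Source Queue' FORMAT A20
COLUMN DESTINATION HEADING 'Destination' FORMAT A25
COLUMN PROCESS_NAME HEADING 'Process' FORMAT A8

SELECT QNAME, DESTINATION, PROCESS_NAME
  FROM DBA_QUEUE_SCHEDULES;
```

A NULL PROCESS_NAME for a schedule usually means no job queue process is currently servicing that propagation.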
See Also:
"Is the Propagation Job Used by a Propagation Enabled?" for a query that displays the job queue process used by a propagation job
An apply process is an Oracle background process named annn, where nnn is the apply process number. For example, on some operating systems, if the system identifier for a database running an apply process is hqdb and the apply process number is 001, then the trace file for the apply process starts with hqdb_a001.
An apply process also uses parallel execution servers. Information about an apply process may be recorded in the trace file for one or more parallel execution servers. The process name of a parallel execution server is pnnn, where nnn is the process number. So, on some operating systems, if the system identifier for a database running an apply process is hqdb and the process number is 001, then the trace file that may contain information about a parallel execution server used by an apply process starts with hqdb_p001.