PL/SQL Packages and Types Reference 10g Release 1 (10.1) Part Number B10802-01
The DBMS_DATAPUMP
package is used to move all, or part of, a database between databases, including both data and metadata.
See Also: Oracle Database Utilities for more information on the concepts behind this API.
This chapter contains the following topics:
The support and functionality provided by DBMS_DATAPUMP
are as follows:
Security for the DBMS_DATAPUMP
package is implemented through roles.
The existing EXP_FULL_DATABASE
and IMP_FULL_DATABASE
roles will be used to allow privileged users to take full advantage of the API. The Data Pump API will use these roles to determine whether privileged application roles should be assigned to the processes comprising the job.
The EXP_FULL_DATABASE
role affects only Export operations. It allows users running these operations to do the following:
Although the SYS
schema does not have the EXP_FULL_DATABASE
role assigned to it, all security checks performed by Data Pump that require the EXP_FULL_DATABASE
role will also grant access to the SYS schema.
The IMP_FULL_DATABASE
role affects only Import and SQL_FILE operations. It allows users running these operations to do the following:
Although the SYS
schema does not have the IMP_FULL_DATABASE
role assigned to it, all security checks performed by Data Pump that require the IMP_FULL_DATABASE
role will also grant access to the SYS schema.
There are several public constants defined for use with the DBMS_DATAPUMP
.GET_STATUS
procedure. All such constants are defined as part of the DBMS_DATAPUMP
package. Any references to these constants must be prefixed by DBMS_DATAPUMP
. and followed by the symbols in the following lists:
The following mask bit definitions are used for controlling the return of data through the DBMS_DATAPUMP
.GET_STATUS
procedure.
KU$_STATUS_WIP        CONSTANT BINARY_INTEGER := 1;
KU$_STATUS_JOB_DESC   CONSTANT BINARY_INTEGER := 2;
KU$_STATUS_JOB_STATUS CONSTANT BINARY_INTEGER := 4;
KU$_STATUS_JOB_ERROR  CONSTANT BINARY_INTEGER := 8;
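Because these values are distinct powers of two, they can be added together to request several kinds of information in a single GET_STATUS call. A minimal sketch (the variable name is illustrative only):

```sql
DECLARE
  -- Request work-in-progress messages, detailed job status, and errors:
  -- 1 + 4 + 8 = 13
  my_mask CONSTANT BINARY_INTEGER := DBMS_DATAPUMP.KU$_STATUS_WIP
                                   + DBMS_DATAPUMP.KU$_STATUS_JOB_STATUS
                                   + DBMS_DATAPUMP.KU$_STATUS_JOB_ERROR;
BEGIN
  NULL;  -- my_mask would be passed as the mask argument to GET_STATUS
END;
/
```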
The following definitions are used for identifying types of dump files returned through the DBMS_DATAPUMP
.GET_STATUS
procedure.
KU$_DUMPFILE_TYPE_DISK     CONSTANT BINARY_INTEGER := 0;
KU$_DUMPFILE_TYPE_TEMPLATE CONSTANT BINARY_INTEGER := 3;
The types described in this section are defined in the SYS
schema for use by the GET_STATUS
function. The way in which these types are defined and used may be different than what you are accustomed to. Be sure to read this section carefully.
The collection of types defined for use with the GET_STATUS
procedure are version-specific and include version information in the names of the types. Once introduced, these types will always be provided and supported in future versions of Oracle Database and will not change. However, in future releases of Oracle Database, new versions of these types might be created that provide new or different information. The new versions of these types will have different version information embedded in the type names.
For example, in Oracle Database 10g, release 1 (10.1), there is a sys.ku$_Status1010
type, and in the next Oracle Database release, there could be a sys.ku$_Status1110
type defined. Both types could be used with the GET_STATUS
procedure.
Public synonyms have been defined for each of the types used with the GET_STATUS
procedure. This makes it easier to use the types and means that you do not have to be concerned with changes to the actual type names or schemas where they reside. Oracle recommends that you use these synonyms whenever possible.
For each of the types, there is a version-specific synonym and a generic synonym. For example, the version-specific synonym ku$_Status1010
is defined for the sys.ku$_Status1010
type.
The generic synonym always describes the latest version of that type. For example, in Oracle Database 10g, release 1, the generic synonym ku$_Status
is defined as ku$_Status1010
. In a future release, there might be a ku$_Status1110
synonym for sys.ku$_Status1110
. Because the ku$_Status
generic synonym always points to the latest definition, it would now point to ku$_Status1110
rather than to ku$_Status1010
.
The choice of whether to use version-specific synonyms or generic synonyms makes a significant difference in how you work. Using version-specific names protects your code from changes in future releases of Oracle Database because those types will continue to exist and be supported. However, access to new information will require code changes to use new synonym names for each of the types. Using the generic names implies that you always want the latest definition of the types and are prepared to deal with changes in different releases of Oracle Database.
When the version of Oracle Database that you are using changes, any C code that accesses types through generic synonym names will need to be recompiled.
Note: Languages other than PL/SQL must ensure that their type definitions are properly aligned with the version-specific definitions.
See Also: GET_STATUS Procedure for additional information about how types are used.
The worker status types describe what each worker process in a job is doing. The schema, object name, and object type of an object being processed will be provided. For workers processing user data, the partition name for a partitioned table (if any), the number of bytes processed in the partition, and the number of rows processed in the partition are also returned. Workers processing metadata provide status on the last object that was processed. No status for idle threads is returned.
The percent_done attribute refers to the amount completed for the current data item being processed. It is not updated for metadata objects.
The worker status types are defined as follows:
CREATE TYPE sys.ku$_WorkerStatus1010 AS OBJECT (
  worker_number     NUMBER,
  process_name      VARCHAR2(30),
  state             VARCHAR2(30),
  schema            VARCHAR2(30),
  name              VARCHAR2(4000),
  object_type       VARCHAR2(200),
  partition         VARCHAR2(30),
  completed_objects NUMBER,
  total_objects     NUMBER,
  completed_rows    NUMBER,
  completed_bytes   NUMBER,
  percent_done      NUMBER
)
/
CREATE PUBLIC SYNONYM ku$_WorkerStatus1010 FOR sys.ku$_WorkerStatus1010;

CREATE TYPE sys.ku$_WorkerStatusList1010 AS TABLE OF sys.ku$_WorkerStatus1010
/
CREATE PUBLIC SYNONYM ku$_WorkerStatusList1010 FOR sys.ku$_WorkerStatusList1010;
These types provide informational and error text to attached clients and the log stream. The ku$_LogLine.errorNumber attribute is set to NULL for informational messages but is specified for error messages. Each log entry may contain several lines of text messages.
The log entry and error types are defined as follows:
CREATE TYPE sys.ku$_LogLine1010 AS OBJECT (
  logLineNumber NUMBER,
  errorNumber   NUMBER,
  LogText       VARCHAR2(2000)
)
/
CREATE PUBLIC SYNONYM ku$_LogLine1010 FOR sys.ku$_LogLine1010;

CREATE TYPE sys.ku$_LogEntry1010 AS TABLE OF sys.ku$_LogLine1010
/
CREATE PUBLIC SYNONYM ku$_LogEntry1010 FOR sys.ku$_LogEntry1010;
The job status type returns status about a job. Usually the status concerns a running job, but it can also describe a stopped job when a client attaches. It is typically requested at attach time, when the client explicitly requests status from interactive mode, and every N seconds when the client has requested status periodically.
The job status types are defined as follows (percent_done
applies to data only):
CREATE TYPE sys.ku$_DumpFile1010 AS OBJECT (
  file_name          VARCHAR2(4000),
  file_type          NUMBER,
  file_size          NUMBER,
  file_bytes_written NUMBER
)
/
CREATE TYPE sys.ku$_DumpFileSet1010 AS TABLE OF sys.ku$_DumpFile1010
/
CREATE TYPE sys.ku$_JobStatus1010 AS OBJECT (
  job_name           VARCHAR2(30),
  operation          VARCHAR2(30),
  job_mode           VARCHAR2(30),
  bytes_processed    NUMBER,
  total_bytes        NUMBER,
  percent_done       NUMBER,
  degree             NUMBER,
  error_count        NUMBER,
  state              VARCHAR2(30),
  phase              NUMBER,
  restart_count      NUMBER,
  worker_status_list ku$_WorkerStatusList1010,
  files              ku$_DumpFileSet1010
)
/
CREATE PUBLIC SYNONYM ku$_JobStatus1010 FOR sys.ku$_JobStatus1010;
The job description type holds all the environmental information about the job such as parameter settings and dump file set members. There are a couple of subordinate types required as well.
The job description types are defined as follows:
CREATE TYPE sys.ku$_ParamValue1010 AS OBJECT (
  param_name    VARCHAR2(30),
  param_op      VARCHAR2(30),
  param_type    VARCHAR2(30),
  param_length  NUMBER,
  param_value_n NUMBER,
  param_value_t VARCHAR2(2000)
)
/
CREATE TYPE sys.ku$_ParamValues1010 AS TABLE OF sys.ku$_ParamValue1010
/
CREATE TYPE sys.ku$_JobDesc1010 AS OBJECT (
  job_name      VARCHAR2(30),
  guid          RAW(16),
  operation     VARCHAR2(30),
  job_mode      VARCHAR2(30),
  remote_link   VARCHAR2(4000),
  owner         VARCHAR2(30),
  instance      VARCHAR2(16),
  db_version    VARCHAR2(30),
  creator_privs VARCHAR2(30),
  start_time    DATE,
  max_degree    NUMBER,
  log_file      VARCHAR2(4000),
  sql_file      VARCHAR2(4000),
  params        ku$_ParamValues1010
)
/
The status type is an aggregate of some of the previously defined types and is the return value for the GET_STATUS
call. The mask attribute indicates which types of information are being returned to the caller. It is created by a client's shadow process from information it retrieves off the status queue or directly from the master table.
For errors, the ku$_LogEntry
that is returned has already had its log lines ordered for proper output. That is, the original ku$_LogEntry
objects have been ordered from outermost context to innermost.
The status types are defined as follows:
CREATE TYPE sys.ku$_Status1010 AS OBJECT (
  mask            NUMBER,            /* Indicates which status types are present */
  wip             ku$_LogEntry1010,  /* Work-In-Progress: std. exp/imp msgs */
  job_description ku$_JobDesc1010,   /* Complete job description */
  job_status      ku$_JobStatus1010, /* Detailed job status + per-worker sts */
  error           ku$_LogEntry1010   /* Multi-level contextual errors */
)
/
This procedure adds files to the dump file set for an Export, Import, or SQL_FILE operation or specifies the log file or the output file for a SQL_FILE operation.
DBMS_DATAPUMP.ADD_FILE (
  handle    IN NUMBER,
  filename  IN VARCHAR2,
  directory IN VARCHAR2,
  filesize  IN VARCHAR2 DEFAULT NULL,
  filetype  IN NUMBER   DEFAULT DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
Parameter | Description
---|---
handle | The handle of a job. The current session must have previously attached to the handle through a call to the OPEN or ATTACH function.
filename | The name of the file being added.
directory | The name of a directory object within the database that is used to locate the file.
filesize | The size of the dump file that is being added. It may be specified as the number of bytes, number of kilobytes (if followed by K), number of megabytes (if followed by M), or number of gigabytes (if followed by G). An Export operation will write no more than the specified number of bytes to the file. Once the file is full, it will be closed. If there is insufficient space on the device to write the specified number of bytes, the Export operation will fail, but it can be restarted. If not specified, there is no maximum size limit on the file.
filetype | The type of the file to be added. The legal values are KU$_FILE_TYPE_DUMP_FILE, KU$_FILE_TYPE_LOG_FILE, and KU$_FILE_TYPE_SQL_FILE, and must be preceded by DBMS_DATAPUMP.
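As an illustrative sketch (the DMPDIR directory object and the file names are hypothetical, and the caller is assumed to have appropriate privileges), a dump file specification with a substitution variable plus a log file might be added to a new export job like this:

```sql
DECLARE
  h NUMBER;
BEGIN
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', mode => 'SCHEMA');

  -- Dump files: %U expands to a generated two-digit sequence
  DBMS_DATAPUMP.ADD_FILE(
    handle    => h,
    filename  => 'exp%U.dmp',
    directory => 'DMPDIR',
    filesize  => '100M');

  -- Log file for the job
  DBMS_DATAPUMP.ADD_FILE(
    handle    => h,
    filename  => 'exp.log',
    directory => 'DMPDIR',
    filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
END;
/
```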
- INVALID_HANDLE. The specified handle is not attached to a Data Pump job.
- INVALID_ARGVAL. An invalid value was supplied for an input parameter.
- INVALID_STATE. The job is completing, the job is past the defining state for an Import or SQL_FILE job, or the job is past the defining state for LOG and SQL files.
- FILE_ERROR. Oracle does not have the requested operating system access to the specified file, or the file has already been specified for the current operation.
- INVALID_OPERATION. A dump file was specified for a Network Import or ESTIMATE_ONLY Export operation.
- SUCCESS_WITH_INFO. The procedure succeeded, but further information is available through the GET_STATUS procedure.
- NO_SUCH_JOB. The specified job does not exist.

For Export operations, new dump files are created as they are needed. Enough dump files are created to allow all of the processes specified by the current SET_PARALLEL value to be active. If one of the dump files fills, it is closed and a new dump file (with a new generated name) is created to take its place. If multiple ADD_FILE calls with substitution variables have been specified for dump files in a job, they are used to generate dump files in a round-robin fashion. For example, if expa%U, expb%U, and expc%U were all specified for a job having a parallelism of 6, the initial dump files created would be expa01, expb01, expc01, expa02, expb02, and expc02.

Given the same specifications of expa%U, expb%U, and expc%U, an Import or SQL_FILE operation will begin by attempting to open the dump files expa01, expb01, and expc01. If the dump file containing the master table is not found in this set, the operation expands its search for dump files by incrementing the substitution variable and looking up the new filenames (for example, expa02, expb02, and expc02). The Data Pump API keeps expanding the search until it locates the dump file containing the master table. If the Data Pump API determines that a dump file does not exist or is not part of the current dump set at any iteration, it stops incrementing the substitution variable for the dump file specification that was in error. Once the master table is found, it is used to ascertain when all of the dump files in the dump file set have been located.

This function gains access to a previously-created job.
DBMS_DATAPUMP.ATTACH (
  job_name  IN VARCHAR2 DEFAULT NULL,
  job_owner IN VARCHAR2 DEFAULT NULL)
RETURN NUMBER;
An opaque handle for the job. This handle is used as input to the following procedures: ADD_FILE, DATA_FILTER, DETACH, STOP_JOB, GET_STATUS, LOG_ENTRY, METADATA_FILTER, METADATA_REMAP, METADATA_TRANSFORM, SET_PARALLEL, and START_JOB.
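A minimal sketch of attaching to an existing job (the job name is hypothetical; job_owner defaults to the caller's schema):

```sql
DECLARE
  h NUMBER;
BEGIN
  h := DBMS_DATAPUMP.ATTACH(job_name => 'SYS_EXPORT_SCHEMA_01');
  -- h can now be passed to GET_STATUS, SET_PARALLEL, STOP_JOB, and so on.
END;
/
```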
- INVALID_ARGVAL. An invalid value was supplied for an input parameter.
- INVALID_OPERATION. The specified operation is not valid in this context.
- SUCCESS_WITH_INFO. The procedure succeeded, but further information is available through the GET_STATUS procedure.
- NO_SUCH_JOB. The specified job does not exist.

If the ATTACH succeeds, you can monitor the progress of the job or control the job. The stream of KU$_STATUS_WIP and KU$_STATUS_JOB_ERROR messages returned through the GET_STATUS procedure begins at the approximate time of the client's attachment. Status and error messages that were processed before the client attached to the job are not repeated.

If the ATTACH fails, use a null handle in a subsequent call to GET_STATUS for more information about the failure.

This procedure specifies restrictions on the rows that are to be retrieved.
DBMS_DATAPUMP.DATA_FILTER (
  handle      IN NUMBER,
  name        IN VARCHAR2,
  value       IN NUMBER,
  table_name  IN VARCHAR2 DEFAULT NULL,
  schema_name IN VARCHAR2 DEFAULT NULL);

DBMS_DATAPUMP.DATA_FILTER (
  handle      IN NUMBER,
  name        IN VARCHAR2,
  value       IN VARCHAR2,
  table_name  IN VARCHAR2 DEFAULT NULL,
  schema_name IN VARCHAR2 DEFAULT NULL);
- INVALID_HANDLE. The specified handle is not attached to a Data Pump job.
- INVALID_ARGVAL. There can be several reasons for this message.
- INVALID_STATE. The user called DATA_FILTER when the job was not in the Defining state.
- INCONSISTENT_ARGS. The value parameter is missing or its datatype does not match the filter name, or a schema name was supplied without a table name.
- PRIVILEGE_ERROR. A schema name was supplied, but the user did not have the EXP_FULL_DATABASE or IMP_FULL_DATABASE role.
- SUCCESS_WITH_INFO. The procedure succeeded, but further information is available through the GET_STATUS procedure.
- NO_SUCH_JOB. The specified job does not exist.

A data filter can be applied once to a specific table (for example, by applying SUBQUERY filters to a table) or once in each job. If different filters using the same name are applied to both a particular table and to the whole job, the filter parameter supplied for the specific table takes precedence.

With the exception of the INCLUDE_ROWS filter, data filters are not supported on tables having nested tables or domain indexes defined upon them. Data filters are not supported in jobs performed in Transportable Tablespace mode.

The available data filters are described in Table 25-5.
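As a sketch (the handle, schema, and table names are hypothetical), a SUBQUERY filter restricting the exported rows of one table might look like this:

```sql
DECLARE
  h NUMBER;
BEGIN
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', mode => 'TABLE');
  -- Export only rows of SCOTT.EMP in department 10
  DBMS_DATAPUMP.DATA_FILTER(
    handle      => h,
    name        => 'SUBQUERY',
    value       => 'WHERE deptno = 10',
    table_name  => 'EMP',
    schema_name => 'SCOTT');
END;
/
```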
This procedure specifies that the user has no further interest in using the handle.
DBMS_DATAPUMP.DETACH( handle IN NUMBER);
Parameter | Description
---|---
handle | The handle of the job. The current session must have previously attached to the handle through a call to the OPEN or ATTACH function.
- INVALID_HANDLE. The specified handle is not attached to a Data Pump job.
- SUCCESS_WITH_INFO. The procedure succeeded, but further information is available through the GET_STATUS procedure.
- NO_SUCH_JOB. The specified job does not exist.

A detach does not undo a STOP_JOB that was applied to the job referenced by the handle. All previously allocated DBMS_DATAPUMP handles are released when an instance is restarted.

This procedure monitors the status of a job or waits for the completion of a job.
DBMS_DATAPUMP.GET_STATUS (
  handle    IN  NUMBER,
  mask      IN  BINARY_INTEGER,
  timeout   IN  NUMBER DEFAULT NULL,
  job_state OUT VARCHAR2,
  status    OUT ku$_Status1010);
Parameter | Description
---|---
handle | The handle of a job. The current session must have previously attached to the handle through a call to the OPEN or ATTACH function.
mask | A bit mask that indicates which of the four types of information to return: KU$_STATUS_WIP, KU$_STATUS_JOB_DESC, KU$_STATUS_JOB_STATUS, and KU$_STATUS_JOB_ERROR. Each status has a numerical value. You can request multiple types of information by adding together different combinations of values. See Types.
timeout | Maximum number of seconds to wait before returning to the user. A value of 0 requests an immediate return. A value of -1 requests an infinite wait. If KU$_STATUS_WIP or KU$_STATUS_JOB_ERROR information is requested and becomes available during the timeout period, then the procedure returns before the timeout period is over.
job_state | Current state of the job. If only the job state is needed, it is much more efficient to use this parameter than to retrieve the full ku$_Status1010 structure.
status | A ku$_Status is returned. The ku$_Status mask indicates what kind of information is included. This could be none if only KU$_STATUS_WIP or KU$_STATUS_JOB_ERROR information is requested and the timeout period expires.
- INVALID_HANDLE. The specified handle is not attached to a Data Pump job.
- INVALID_VALUE. The mask or timeout contains an illegal value.
- SUCCESS_WITH_INFO. The procedure succeeded, but further information is available through the GET_STATUS procedure.
- NO_SUCH_JOB. The specified job does not exist.

The GET_STATUS procedure is used to monitor the progress of an ongoing job and to receive error notification. You can request various types of information using the mask parameter. The KU$_STATUS_JOB_DESC and KU$_STATUS_JOB_STATUS values are classified as synchronous information because the information resides in the master table. The KU$_STATUS_WIP and KU$_STATUS_JOB_ERROR values are classified as asynchronous because the messages that embody these types of information can be generated at any time by various layers in the Data Pump architecture.

There are two types of error scenarios that need to be handled using the GET_STATUS procedure:

- An exception raised by a call to one of the API procedures. For example, a call to the SET_PARAMETER procedure may produce an INCONSISTENT_ARGS exception. The client should immediately call GET_STATUS with mask=8 (errors) and timeout=0. The returned ku$_Status.error will contain a ku$_LogEntry that describes the inconsistency in more detail.
- An error that occurs while the job is running, for example, "Table already exists" when trying to create a table. The ku$_Status.error will contain a ku$_LogEntry with all error lines (from all processing layers that added context about the error) properly ordered.

After a job has begun, a client's main processing loop will typically consist of a call to GET_STATUS with an infinite timeout (-1), "listening" for KU$_STATUS_WIP and KU$_STATUS_JOB_ERROR messages. If status was requested, then JOB_STATUS information will also be in the request.
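Such a processing loop might be sketched as follows (the job name is hypothetical, and the exit condition assumes the job reports a terminal state such as COMPLETED or STOPPED in job_state):

```sql
DECLARE
  h     NUMBER;
  state VARCHAR2(30);
  sts   ku$_Status1010;  -- the generic ku$_Status synonym could also be used
BEGIN
  h := DBMS_DATAPUMP.ATTACH(job_name => 'SYS_EXPORT_SCHEMA_01');
  LOOP
    -- Wait indefinitely for WIP or error messages (mask 1 + 8 = 9)
    DBMS_DATAPUMP.GET_STATUS(h,
                             DBMS_DATAPUMP.KU$_STATUS_WIP
                             + DBMS_DATAPUMP.KU$_STATUS_JOB_ERROR,
                             -1, state, sts);
    -- A real client would display sts.wip and sts.error lines here
    EXIT WHEN state = 'COMPLETED' OR state = 'STOPPED';
  END LOOP;
  DBMS_DATAPUMP.DETACH(h);
END;
/
```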
When the ku$_Status is interpreted, the following guidelines should be used:

ku$_Status.ku$_JobStatus.percent_done refers only to the amount of data that has been processed in a job. Metadata is not considered in the calculation. It is determined using one of the following formulas, depending on the type of job:

- (bytes_processed / estimated_bytes) * 100
- (bytes_processed / total_expected_bytes) * 100
- 0.00 if not done, or 100.00 if done

The effects of the QUERY and PARTITION_EXPR data filters are not considered in computing percent_done.
It is expected that the status returned will be transformed by the caller into a more user-friendly status. For example, when the percent done is not zero, an estimate of completion time could be produced using the following formula:

((SYSDATE - start_time) / ku$_Status.ku$_JobStatus.percent_done) * 100

The caller should not use ku$_Status.ku$_JobStatus.percent_done for determining whether the job has completed. Instead, the caller should rely only on the state of the job as found in job_state.

This procedure inserts a message into the log file.
DBMS_DATAPUMP.LOG_ENTRY (
  handle        IN NUMBER,
  message       IN VARCHAR2,
  log_file_only IN NUMBER DEFAULT 0);
- INVALID_HANDLE. The specified handle is not attached to a Data Pump job.
- SUCCESS_WITH_INFO. The procedure succeeded, but further information is available through the GET_STATUS procedure.
- NO_SUCH_JOB. The specified job does not exist.

The message is added to the log file. If log_file_only is zero (the default), the message is also broadcast as a KU$_STATUS_WIP message through the GET_STATUS procedure to all users attached to the job.

The LOG_ENTRY procedure allows applications to tailor the log stream to match the abstractions provided by the application. For example, the command-line interface supports INCLUDE and EXCLUDE parameters defined by the user. Identifying these values as calls to the underlying METADATA_FILTER procedure would be confusing to users. Instead, the command-line interface can enter text into the log describing the settings for the INCLUDE and EXCLUDE parameters.

Lines entered in the log stream from LOG_ENTRY are prefixed by the string ";;; ".
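A minimal sketch of writing an application-level message to a job's log (the job name and message are hypothetical):

```sql
DECLARE
  h NUMBER;
BEGIN
  h := DBMS_DATAPUMP.ATTACH(job_name => 'SYS_EXPORT_SCHEMA_01');
  -- log_file_only => 1 writes to the log file only; with the default 0
  -- the message would also be broadcast as a KU$_STATUS_WIP message.
  DBMS_DATAPUMP.LOG_ENTRY(
    handle        => h,
    message       => 'Starting nightly schema export',
    log_file_only => 1);
END;
/
```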
This procedure provides filters that allow you to restrict the items that are included in a job.
DBMS_DATAPUMP.METADATA_FILTER (
  handle      IN NUMBER,
  name        IN VARCHAR2,
  value       IN VARCHAR2,
  object_path IN VARCHAR2 DEFAULT NULL);
Parameter | Description
---|---
handle | The handle returned from the OPEN function.
name | The name of the filter. See Table 25-10 for descriptions of the available filters.
value | The value of the filter.
object_path | The object path to which the filter applies. If the default is used, the filter applies to all applicable objects. Lists of the object paths supported for each mode are contained in the DATABASE_EXPORT_OBJECTS, SCHEMA_EXPORT_OBJECTS, and TABLE_EXPORT_OBJECTS catalog views. For an import operation, object paths reference the mode used to create the dump file rather than the mode being used for the import.
Table 25-10 describes the name, object type, datatype, operations that support the filter, and the meaning of the filters available with the METADATA_FILTER
procedure. The datatype for all the filters is a text expression. All operations support all filters.
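As a sketch (the schema name is hypothetical; SCHEMA_EXPR and EXCLUDE_PATH_EXPR are assumed to be among the filters in Table 25-10), a schema-mode export might be limited to one schema while excluding its indexes:

```sql
DECLARE
  h NUMBER;
BEGIN
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', mode => 'SCHEMA');
  -- Restrict the job to the SCOTT schema
  DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_EXPR', 'IN (''SCOTT'')');
  -- Leave index definitions out of the dump file set
  DBMS_DATAPUMP.METADATA_FILTER(h, 'EXCLUDE_PATH_EXPR', 'IN (''INDEX'')');
END;
/
```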
- INVALID_HANDLE. The specified handle is not attached to a Data Pump job.
- INVALID_ARGVAL. This exception can indicate any of several conditions.
- INVALID_STATE. The user called the METADATA_FILTER procedure after the job left the defining state.
- INCONSISTENT_ARGS. The filter value is of the wrong datatype or is missing.
- SUCCESS_WITH_INFO. The procedure succeeded, but further information is available through the GET_STATUS procedure.
- NO_SUCH_JOB. The specified job does not exist.

With the exception of EXCLUDE_PATH_EXPR and INCLUDE_PATH_EXPR, dependent objects of an identified object are processed along with the identified object. For example, if an index is identified for inclusion by a filter, grants upon that index will also be included by the filter. Likewise, if a table is excluded by a filter, then indexes, constraints, grants, and triggers upon the table will also be excluded by the filter.

Multiple filters with the same name are combined as an AND condition. For example, specifying NAME_EXPR as '!=''EMP''' and NAME_EXPR as '!=''DEPT''' on a TABLE-mode export would produce a file set containing all of the tables except for EMP and DEPT.

This procedure specifies a remapping to be applied to objects as they are processed in the specified job.
DBMS_DATAPUMP.METADATA_REMAP (
  handle      IN NUMBER,
  name        IN VARCHAR2,
  old_value   IN VARCHAR2,
  value       IN VARCHAR2,
  object_type IN VARCHAR2 DEFAULT NULL);
Parameter | Description
---|---
handle | The handle for the current job. The current session must have previously attached to the handle through a call to the OPEN function.
name | The name of the remap. See Table 25-12 for descriptions of the available remaps.
old_value | Specifies which value in the dump file set should be reset to value.
value | The value of the parameter for the remap. This signifies the new value that old_value should be translated into.
object_type | Designates the object type to which the remap applies. The list of object types supported for each mode is contained in the DATABASE_EXPORT_OBJECTS, SCHEMA_EXPORT_OBJECTS, and TABLE_EXPORT_OBJECTS catalog views. By default, the remap applies to all applicable objects within the job. The object_type parameter allows a caller to specify different remap parameters for different object types within a job.
Table 25-12 describes the remaps provided by the METADATA_REMAP
procedure.
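A minimal sketch (the schema names are hypothetical; REMAP_SCHEMA is assumed to be among the remaps in Table 25-12), re-creating one schema's objects under another schema on import:

```sql
DECLARE
  h NUMBER;
BEGIN
  h := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', mode => 'SCHEMA');
  -- Objects exported from SCOTT are created under BLAKE instead
  DBMS_DATAPUMP.METADATA_REMAP(h, 'REMAP_SCHEMA', 'SCOTT', 'BLAKE');
END;
/
```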
- INVALID_HANDLE. The specified handle is not attached to a Data Pump job.
- INVALID_ARGVAL. This message can indicate any of several conditions.
- INVALID_OPERATION. Remaps are only supported for SQL_FILE and Import operations. The job's operation was Export, which does not support the use of metadata remaps.
- INVALID_STATE. The user called METADATA_REMAP after the job had started (that is, the job was not in the defining state).
- INCONSISTENT_ARGS. There was no value supplied or it was of the wrong datatype for the remap.
- SUCCESS_WITH_INFO. The procedure succeeded, but further information is available through the GET_STATUS procedure.
- NO_SUCH_JOB. The specified job does not exist.

The METADATA_REMAP procedure is only supported for Import and SQL_FILE operations. It enables you to apply commonly desired, predefined remappings to the definition of objects as part of the transfer. If you need remaps that are not supported within this procedure, you should do a preliminary SQL_FILE operation to produce a SQL script corresponding to the dump file set. By editing the DDL directly and then executing it, you can produce any remappings that you need.

Remaps are implemented through the DBMS_METADATA.SET_TRANSFORM_PARAMETER API. Multiple remaps can be defined for a single job. However, each remap defined must be unique according to its parameters. That is, two remaps cannot specify conflicting or redundant remaps.

This procedure specifies transformations to be applied to objects as they are processed in the specified job.
DBMS_DATAPUMP.METADATA_TRANSFORM (
  handle      IN NUMBER,
  name        IN VARCHAR2,
  value       IN VARCHAR2,
  object_type IN VARCHAR2 DEFAULT NULL);
Parameter | Description
---|---
handle | The handle for the current job. The current session must have previously attached to the handle through a call to the OPEN function.
name | The name of the transformation. See Table 25-14 for descriptions of the available transforms.
value | The value of the parameter for the transform.
object_type | Designates the object type to which the transform applies. The list of object types supported for each mode is contained in the DATABASE_EXPORT_OBJECTS, SCHEMA_EXPORT_OBJECTS, and TABLE_EXPORT_OBJECTS catalog views. By default, the transform applies to all applicable objects within the job. The object_type parameter allows a caller to specify different transform parameters for different object types within a job.
Table 25-14 describes the transforms provided by the METADATA_TRANSFORM
procedure.
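A minimal sketch (SEGMENT_ATTRIBUTES is assumed to be among the transforms in Table 25-14), suppressing storage clauses on imported table DDL so objects inherit the target defaults:

```sql
DECLARE
  h NUMBER;
BEGIN
  h := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', mode => 'SCHEMA');
  -- A value of 0 disables the segment-attributes clause for tables
  DBMS_DATAPUMP.METADATA_TRANSFORM(h, 'SEGMENT_ATTRIBUTES', '0', 'TABLE');
END;
/
```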
- INVALID_HANDLE. The specified handle is not attached to a Data Pump job.
- INVALID_ARGVAL. This message can indicate any of several conditions.
- INVALID_OPERATION. Transforms are only supported for SQL_FILE and Import operations. The job's operation was Export, which does not support the use of metadata transforms.
- INVALID_STATE. The user called METADATA_TRANSFORM after the job had started (that is, the job was not in the defining state).
- INCONSISTENT_ARGS. There was no value supplied or it was of the wrong datatype for the transform.
- SUCCESS_WITH_INFO. The procedure succeeded, but further information is available through the GET_STATUS procedure.
- NO_SUCH_JOB. The specified job does not exist.

The METADATA_TRANSFORM procedure is only supported for Import and SQL_FILE operations. It enables you to apply commonly desired, predefined transformations to the definition of objects as part of the transfer. If you need transforms that are not supported within this procedure, you should do a preliminary SQL_FILE operation to produce a SQL script corresponding to the dump file set. By editing the DDL directly and then executing it, you can produce any transformations that you need.

Transforms are implemented through the DBMS_METADATA.SET_TRANSFORM_PARAMETER API. Multiple transforms can be defined for a single job. However, each transform defined must be unique according to its parameters. That is, two transforms cannot specify conflicting or redundant transformations.

This function is used to declare a new job using the Data Pump API. The handle that is returned is used as a parameter for calls to all other procedures except ATTACH.
DBMS_DATAPUMP.OPEN (
  operation   IN VARCHAR2,
  mode        IN VARCHAR2,
  remote_link IN VARCHAR2 DEFAULT NULL,
  job_name    IN VARCHAR2 DEFAULT NULL,
  version     IN VARCHAR2 DEFAULT 'COMPATIBLE')
RETURN NUMBER;
Parameter | Meaning
---|---
operation | The type of operation to be performed. Table 25-16 contains descriptions of valid operation types.
mode | The scope of the operation to be performed. Table 25-17 contains descriptions of valid modes. Specifying NULL generates an error.
remote_link | If the value of this parameter is non-null, it provides the name of a database link to the remote database that will be the source of data and metadata for the current job.
job_name | The name of the job. The name is limited to 30 characters; it will be truncated if more than 30 characters are used. It may consist of printable characters and spaces. It is implicitly qualified by the schema of the user executing the OPEN function. The name is used to identify the job both within the API and with other database components, such as identifying the job in the DBA_DATAPUMP_JOBS view. A default job name is formed from the operation and mode if no name is supplied.
version | The version of database objects to be extracted. This option is only valid for Export, network Import, and SQL_FILE operations. Database objects or attributes that are incompatible with the version will not be extracted. Legal values for this parameter are COMPATIBLE (the default, in which the version of the metadata corresponds to the database compatibility level), LATEST (the version of the metadata corresponds to the database version), and a specific database version (for example, '10.0.0').
Table 25-16 describes the valid operation types for the OPEN
procedure.
Table 25-17 describes the valid modes for the OPEN
procedure.
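Declaring a new job might be sketched as follows (the job name is hypothetical; operation and mode values come from Tables 25-16 and 25-17):

```sql
DECLARE
  h NUMBER;
BEGIN
  -- Declare a schema-mode export job
  h := DBMS_DATAPUMP.OPEN(
         operation => 'EXPORT',
         mode      => 'SCHEMA',
         job_name  => 'NIGHTLY_EXPORT');
  -- h is now used for all subsequent calls that configure and run the job.
END;
/
```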
An opaque handle for the job. This handle is used as input to the following procedures: SET_PARALLEL, ADD_FILE, DETACH, STOP_JOB, GET_STATUS, LOG_ENTRY, METADATA_FILTER, DATA_FILTER, METADATA_TRANSFORM, METADATA_REMAP, SET_PARAMETER, and START_JOB.

- INVALID_ARGVAL. An invalid operation or mode was specified, or a NULL or invalid value was supplied for an input parameter. The error message text identifies the parameter.
- JOB_EXISTS. A table already exists with the specified job name.
- PRIVILEGE_ERROR. The user does not have the necessary privileges or roles to use the specified mode.
- INTERNAL_ERROR. The job was created under the wrong schema or the master table is of the wrong format.
- SUCCESS_WITH_INFO. The procedure succeeded, but further information is available through the GET_STATUS procedure.
- NO_SUCH_JOB. The specified job does not exist.

A different session can gain access to the same job by using the ATTACH procedure.

If the OPEN fails, call GET_STATUS with a null handle to retrieve additional information about the failure.

This procedure adjusts the degree of parallelism within a job.
DBMS_DATAPUMP.SET_PARALLEL (
  handle IN NUMBER,
  degree IN NUMBER);
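A minimal sketch of raising a job's parallelism (the job name and degree are hypothetical):

```sql
DECLARE
  h NUMBER;
BEGIN
  h := DBMS_DATAPUMP.ATTACH(job_name => 'NIGHTLY_EXPORT');
  -- Raise the job to four worker processes; the dump file set should
  -- contain at least four files (or use a %U substitution variable).
  DBMS_DATAPUMP.SET_PARALLEL(handle => h, degree => 4);
END;
/
```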
- INVALID_HANDLE. The specified handle is not attached to a Data Pump job.
- INVALID_OPERATION. The SET_PARALLEL procedure is only valid for Export and Import operations.
- INVALID_ARGVAL. An invalid value was supplied for an input parameter.
- SUCCESS_WITH_INFO. The procedure succeeded, but further information is available through the GET_STATUS procedure.
- NO_SUCH_JOB. The specified job does not exist.

The SET_PARALLEL procedure is only available in the Enterprise Edition of the Oracle database.

The SET_PARALLEL procedure can be executed by any session attached to a job. The job must be in one of the following states: Defining, Idling, or Executing.

For a running job, it may take some time before a new degree specified by SET_PARALLEL can take effect.

The degree of parallelism actually achieved may be constrained by the PROCESSES or SESSIONS initialization parameters in the init.ora file.

To parallelize an Export job to degree n, the user should supply n files in the dump file set or specify a substitution variable in a file specification. Otherwise, some of the worker processes will be idle while waiting for files.

This procedure is used to specify job-processing options.
DBMS_DATAPUMP.SET_PARAMETER (
  handle IN NUMBER,
  name   IN VARCHAR2,
  value  IN VARCHAR2);

DBMS_DATAPUMP.SET_PARAMETER (
  handle IN NUMBER,
  name   IN VARCHAR2,
  value  IN NUMBER);
Parameter | Description
---|---
handle | The handle of a job. The current session must have previously attached to the handle through a call to the OPEN function.
name | The name of the parameter. Table 25-20 describes the valid parameter names.
value | The value for the specified parameter.
Table 25-20 describes the valid options for the name
parameter of the SET_PARAMETER
procedure.
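A minimal sketch (the parameter name ESTIMATE and its value BLOCKS are assumed to be among the options in Table 25-20), choosing block-based size estimation for an export job:

```sql
DECLARE
  h NUMBER;
BEGIN
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', mode => 'SCHEMA');
  -- Base the size estimate on block counts rather than statistics
  DBMS_DATAPUMP.SET_PARAMETER(h, 'ESTIMATE', 'BLOCKS');
END;
/
```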
- INVALID_HANDLE. The specified handle is not attached to a Data Pump job.
- INVALID_ARGVAL. This exception could be due to any of the following causes:
  - The value was invalid.
  - The value was not supplied.
  - The value was not allowed for the specified parameter name.
- INVALID_OPERATION. The operation specified is invalid in this context.
- INVALID_STATE. The specified job is not in the Defining state.
- INCONSISTENT_ARGS. Either the specified parameter is not supported for the current operation type or it is not supported for the current mode.
- PRIVILEGE_ERROR. The user does not have the EXP_FULL_DATABASE or IMP_FULL_DATABASE role required for the specified parameter.
- SUCCESS_WITH_INFO. The procedure succeeded, but further information is available through the GET_STATUS procedure.
- NO_SUCH_JOB. The specified job does not exist.

The SET_PARAMETER procedure is used to specify optional features for the current job. See Table 25-20 for a list of supported options.

This procedure begins or resumes execution of a job.
DBMS_DATAPUMP.START_JOB (
  handle       IN NUMBER,
  skip_current IN NUMBER DEFAULT 0);
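Restarting a previously stopped job might be sketched as follows (the job name is hypothetical):

```sql
DECLARE
  h NUMBER;
BEGIN
  h := DBMS_DATAPUMP.ATTACH(job_name => 'NIGHTLY_EXPORT');
  -- Resume the job, skipping the action that was in progress
  -- when it stopped (skip_current applies to restarts)
  DBMS_DATAPUMP.START_JOB(handle => h, skip_current => 1);
END;
/
```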
- INVALID_HANDLE. The specified handle is not attached to a Data Pump job.
- INVALID_STATE. The causes of this exception can be any of the following:
  - The ADD_FILE procedure has not been called to define the output for a SQL_FILE job.
  - A TABLESPACE_DATAFILE parameter has not been defined for a Transportable Import job.
  - A TABLESPACE_EXPR metadata filter has not been defined for a Transportable or Tablespace-mode Export or Network job.
- INVALID_OPERATION. Unable to restore the master table from a dump file set.
- INTERNAL_ERROR. An inconsistency was detected when the job was started. Additional information may be available through the GET_STATUS procedure.
- SUCCESS_WITH_INFO. The procedure succeeded, but further information is available through the GET_STATUS procedure.
- NO_SUCH_JOB. The specified job does not exist.

If the SET_PARALLEL procedure was not called prior to the START_JOB procedure, the initial level of parallelism used in the job is 1. If SET_PARALLEL was called prior to the job starting, the degree specified by the last SET_PARALLEL call determines the parallelism for the job. On restarts, the parallelism is determined by the previous parallel setting for the job, unless it is overridden by another SET_PARALLEL call.

To restart a stopped job, an ATTACH must be performed before executing the START_JOB procedure.

This procedure terminates a job but, optionally, preserves the state of the job.
DBMS_DATAPUMP.STOP_JOB (
  handle      IN NUMBER,
  immediate   IN NUMBER DEFAULT 0,
  keep_master IN NUMBER DEFAULT NULL,
  delay       IN NUMBER DEFAULT 0);
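Stopping a job in a restartable way might be sketched as follows (the job name is hypothetical):

```sql
DECLARE
  h NUMBER;
BEGIN
  h := DBMS_DATAPUMP.ATTACH(job_name => 'NIGHTLY_EXPORT');
  -- Stop in an orderly fashion (immediate => 0) and keep the master
  -- table (keep_master => 1) so the job can be restarted later
  -- through ATTACH and START_JOB.
  DBMS_DATAPUMP.STOP_JOB(
    handle      => h,
    immediate   => 0,
    keep_master => 1);
END;
/
```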
- INVALID_HANDLE. The specified handle is not attached to a Data Pump job.
- INVALID_STATE. The job is already in the process of being stopped or completed.
- SUCCESS_WITH_INFO. The procedure succeeded, but further information is available through the GET_STATUS procedure.
- NO_SUCH_JOB. The specified job does not exist.

After a job is stopped, attached clients should release their handles through the DETACH procedure.

A stopped job can be restarted through the ATTACH and START_JOB procedures, provided the master table and the dump file set are left intact.

If the KEEP_MASTER parameter is not specified, and the job is in the Defining state or has a mode of Transportable, the master table is dropped. Otherwise, the master table is retained.