Oracle® Database Utilities 10g Release 1 (10.1), Part Number B10825-01
This chapter describes how to use the original Export and Import utilities, invoked with the exp and imp commands, respectively. These are called the original Export and Import utilities to differentiate them from the new Oracle Data Pump Export and Import utilities available as of Oracle Database 10g. The new utilities are invoked with the expdp and impdp commands, respectively. In general, Oracle recommends that you use the new Data Pump Export and Import utilities because they support all Oracle Database 10g features; original Export and Import do not.
However, you should still use the original Export and Import utilities in the following situations:
You want to import files that were created using the original Export utility (exp).
You want to export files that will be imported using the original Import utility (imp). An example of this would be if you wanted to export data from Oracle Database 10g and then import it into an earlier database release.
The Export and Import utilities provide a simple way for you to transfer data objects between Oracle databases, even if they reside on platforms with different hardware and software configurations.
When you run Export against an Oracle database, objects (such as tables) are extracted, followed by their related objects (such as indexes, comments, and grants), if any. The extracted data is written to an export dump file. The Import utility reads the object definitions and table data from the dump file.
An export file is an Oracle binary-format dump file that is typically located on disk or tape. The dump files can be transferred using FTP or physically transported (in the case of tape) to a different site. The files can then be used with the Import utility to transfer data between databases that are on systems not connected through a network. The files can also be used as backups in addition to normal backup procedures.
Export dump files can only be read by the Oracle Import utility. The version of the Import utility cannot be earlier than the version of the Export utility used to create the dump file.
You can also display the contents of an export file without actually performing an import. To do this, use the Import SHOW parameter. See SHOW for more information.
To load data from ASCII fixed-format or delimited files, use the SQL*Loader utility.
Before you begin using Export and Import, be sure you take care of the following items (described in detail in the following sections):
Run the catexp.sql or catalog.sql script
Ensure there is sufficient disk or tape storage to write the export file
Verify that you have the required access privileges
To use Export and Import, you must run the script catexp.sql or catalog.sql (which runs catexp.sql) after the database has been created or migrated to Oracle Database 10g. A sample invocation follows the task list below.
The catexp.sql or catalog.sql script needs to be run only once on a database. The script performs the following tasks to prepare the database for export and import operations:
Creates the necessary export and import views in the data dictionary
Creates the EXP_FULL_DATABASE role
Assigns all necessary privileges to the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles
Assigns EXP_FULL_DATABASE and IMP_FULL_DATABASE to the DBA role
Records the version of catexp.sql that has been installed
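For illustration, the script is typically run from SQL*Plus while connected with administrator privileges. The following sketch assumes a default installation layout in which the script resides under $ORACLE_HOME/rdbms/admin:
sqlplus "/ AS SYSDBA"
SQL> @?/rdbms/admin/catexp.sql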
Before you run Export, ensure that there is sufficient disk or tape storage space to write the export file. If there is not enough space, Export terminates with a write-failure error.
You can use table sizes to estimate the maximum space needed. You can find table sizes in the USER_SEGMENTS view of the Oracle data dictionary. The following query displays disk usage for all tables:
SELECT SUM(BYTES) FROM USER_SEGMENTS WHERE SEGMENT_TYPE='TABLE';
The result of the query does not include disk space used for data stored in LOB (large object) or VARRAY columns or in partitioned tables.
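As a rough illustration only, you could broaden the query to count partition and LOB segments as well; the segment type names below are assumptions based on the standard data dictionary values:
SELECT SUM(BYTES) FROM USER_SEGMENTS
  WHERE SEGMENT_TYPE IN ('TABLE', 'TABLE PARTITION', 'TABLE SUBPARTITION', 'LOBSEGMENT');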
To use Export and Import, you must have the CREATE SESSION privilege on an Oracle database. This privilege belongs to the CONNECT role established during database creation. To export tables owned by another user, you must have the EXP_FULL_DATABASE role enabled. This role is granted to all database administrators (DBAs).
If you do not have the system privileges contained in the EXP_FULL_DATABASE role, you cannot export objects contained in another user's schema. For example, you cannot export a table in another user's schema, even if you created a synonym for it.
The following schema names are reserved and will not be processed by Export:
ORDSYS
MDSYS
CTXSYS
ORDPLUGINS
LBACSYS
You can perform an import operation even if you did not create the export file. However, keep in mind that if the export file was created by a user with the EXP_FULL_DATABASE role, then you must have the IMP_FULL_DATABASE role to import it. Both of these roles are typically assigned to database administrators (DBAs).
You can invoke Export and Import, and specify parameters by using any of the following methods:
Command-line entries
Parameter files
Interactive mode
Before you use one of these methods, be sure to read the descriptions of the available parameters. See Export Parameters and Import Parameters.
SYSDBA is used internally and has specialized functions; its behavior is not the same as for general users. Therefore, you should not typically need to invoke Export or Import as SYSDBA, except in the following situations:
At the request of Oracle technical support
When importing a transportable tablespace set
To invoke Export or Import as SYSDBA, use the following syntax (substitute exp for imp if you are using Export). Add any desired parameters or parameter filenames:
imp \'username/password AS SYSDBA\'
Optionally, you could also specify an instance name:
imp \'username/password@instance AS SYSDBA\'
If either the username or password is omitted, you will be prompted for it.
This example shows the entire connect string enclosed in quotation marks and backslashes. This is because the string AS SYSDBA contains a blank, a situation for which most operating systems require that the entire connect string be placed in quotation marks or marked as a literal by some method. Some operating systems also require that quotation marks on the command line be preceded by an escape character. In this example, backslashes are used as the escape character. If the backslashes were not present, the command-line parser that Export and Import use would not understand the quotation marks and would remove them.
You can specify all valid parameters and their values from the command line using the following syntax:
exp username/password PARAMETER=value
or
exp username/password PARAMETER=(value1,value2,...,valuen)
The number of parameters cannot exceed the maximum length of a command line on the system. Note that the examples could use imp to invoke Import rather than exp to invoke Export.
The information in this section applies to both Export and Import, but the examples show use of the Export command, exp.
You can specify all valid parameters and their values in a parameter file. Storing the parameters in a file allows them to be easily modified or reused, and is the recommended method for invoking Export. If you use different parameters for different databases, you can have multiple parameter files.
Create the parameter file using any flat file text editor. The command-line option PARFILE=filename tells Export to read the parameters from the specified file rather than from the command line. For example:
exp PARFILE=filename
exp username/password PARFILE=filename
The first example does not specify the username/password on the command line to illustrate that you can specify them in the parameter file, although, for security reasons, this is not recommended.
The syntax for parameter file specifications is one of the following:
PARAMETER=value
PARAMETER=(value)
PARAMETER=(value1, value2, ...)
The following example shows a partial parameter file listing:
FULL=y
FILE=dba.dmp
GRANTS=y
INDEXES=y
CONSISTENT=y
You can add comments to the parameter file by preceding them with the pound (#) sign. Export ignores all characters to the right of the pound (#) sign.
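For example, a parameter file might contain the following lines (the schema and filename shown are illustrative only):
# Export the scott schema; adjust the filename as needed
OWNER=scott
FILE=scott.dmp
GRANTS=y
CONSISTENT=y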
You can specify a parameter file at the same time that you are entering parameters on the command line. In fact, you can specify the same parameter in both places. The position of the PARFILE parameter and other parameters on the command line determines which parameters take precedence. For example, assume the parameter file params.dat contains the parameter INDEXES=y and Export is invoked with the following line:
exp username/password PARFILE=params.dat INDEXES=n
In this case, because INDEXES=n occurs after PARFILE=params.dat, INDEXES=n overrides the value of the INDEXES parameter in the parameter file.
If you prefer to be prompted for the value of each parameter, you can use the following syntax to start Export (or Import, if you specify imp) in interactive mode:
exp username/password
Commonly used parameters are displayed with a request for you to enter a value. The command-line interactive method does not provide prompts for all functionality and is provided only for backward compatibility. If you want to use an interactive interface, Oracle recommends that you use the Oracle Enterprise Manager Export or Import Wizard.
If you do not specify a username/password combination on the command line, then you are prompted for this information.
Keep in mind the following points when you use the interactive method:
In user mode, Export prompts for all usernames to be included in the export before exporting any data. To indicate the end of the user list and begin the current Export session, press Enter.
In table mode, if you do not specify a schema prefix, Export defaults to the exporter's schema or the schema containing the last table exported in the current session.
For example, if beth is a privileged user exporting in table mode, Export assumes that all tables are in the beth schema until another schema is specified. Only a privileged user (someone with the EXP_FULL_DATABASE role) can export tables in another user's schema.
If you specify a null table list to the prompt "Table to be exported," the Export utility exits.
Table 20-1 lists the privileges required to import objects into your own schema. All of these privileges initially belong to the RESOURCE role.
Table 20-1 Privileges Required to Import Objects into Your Own Schema
Object | Required Privilege (Privilege Type, If Applicable) |
---|---|
Clusters | CREATE CLUSTER (System) or UNLIMITED TABLESPACE (System). The user must also be assigned a tablespace quota. |
Database links | CREATE DATABASE LINK (System) and CREATE SESSION (System) on remote database |
Triggers on tables | CREATE TRIGGER (System) |
Triggers on schemas | CREATE ANY TRIGGER (System) |
Indexes | CREATE INDEX (System) or UNLIMITED TABLESPACE (System). The user must also be assigned a tablespace quota. |
Integrity constraints | ALTER TABLE (Object) |
Libraries | CREATE ANY LIBRARY (System) |
Packages | CREATE PROCEDURE (System) |
Private synonyms | CREATE SYNONYM (System) |
Sequences | CREATE SEQUENCE (System) |
Snapshots | CREATE SNAPSHOT (System) |
Stored functions | CREATE PROCEDURE (System) |
Stored procedures | CREATE PROCEDURE (System) |
Table data | INSERT TABLE (Object) |
Table definitions (including comments and audit options) | CREATE TABLE (System) or UNLIMITED TABLESPACE (System). The user must also be assigned a tablespace quota. |
Views | CREATE VIEW (System) and SELECT (Object) on the base table, or SELECT ANY TABLE (System) |
Object types | CREATE TYPE (System) |
Foreign function libraries | CREATE LIBRARY (System) |
Dimensions | CREATE DIMENSION (System) |
Operators | CREATE OPERATOR (System) |
Indextypes | CREATE INDEXTYPE (System) |
To import the privileges that a user has granted to others, the user initiating the import must either own the objects or have object privileges with the WITH GRANT OPTION. Table 20-2 shows the required conditions for the authorizations to be valid on the target system.
Table 20-2 Privileges Required to Import Grants
Grant | Conditions |
---|---|
Object privileges | The object must exist in the user's schema, or the user must have the object privileges with the WITH GRANT OPTION, or the user must have the IMP_FULL_DATABASE role enabled. |
System privileges | The user must have the system privilege, as well as the WITH ADMIN OPTION. |
To import objects into another user's schema, you must have the IMP_FULL_DATABASE role enabled.
To import system objects from a full database export file, the IMP_FULL_DATABASE role must be enabled. The parameter FULL specifies that the following system objects are included in the import when the export file is a full export:
Profiles
Public database links
Public synonyms
Roles
Rollback segment definitions
Resource costs
Foreign function libraries
Context objects
System procedural objects
System audit options
System privileges
Tablespace definitions
Tablespace quotas
User definitions
Directory aliases
System event triggers
The following restrictions apply when you process data with the Export and Import utilities:
Java classes, resources, and procedures that are created using Enterprise JavaBeans (EJB) are not placed in the export file.
Constraints that have been altered using the RELY keyword lose the RELY attribute when they are exported.
When a type definition has evolved and then data referencing that evolved type is exported, the type definition on the import system must have evolved in the same manner.
The table compression attribute of tables and partitions is preserved during export and import. However, the import process does not use the direct path API, hence the data will not be stored in the compressed format when imported. Use the new Data Pump Export and Import utilities to enable compression during import.
Table objects are imported as they are read from the export file. The export file contains objects in the following order:
Type definitions
Table definitions
Table data
Table indexes
Integrity constraints, views, procedures, and triggers
Bitmap, function-based, and domain indexes
The order of import is as follows: new tables are created, data is imported and indexes are built, triggers are imported, integrity constraints are enabled on the new tables, and any bitmap, function-based, and/or domain indexes are built. This sequence prevents data from being rejected due to the order in which tables are imported. This sequence also prevents redundant triggers from firing twice on the same data (once when it is originally inserted and again during the import).
For example, if the emp table has a referential integrity constraint on the dept table and the emp table is imported first, all emp rows that reference departments that have not yet been imported into dept would be rejected if the constraints were enabled.
When data is imported into existing tables, however, the order of import can still produce referential integrity failures. In the situation just given, if the emp table already existed and referential integrity constraints were in force, many rows could be rejected.
A similar situation occurs when a referential integrity constraint on a table references itself. For example, if scott's manager in the emp table is drake, and drake's row has not yet been loaded, scott's row will fail, even though it would be valid at the end of the import.
Note: For the reasons mentioned previously, it is a good idea to disable referential constraints when importing into an existing table. You can then reenable the constraints after the import is completed. |
This section describes factors to take into account when you import data into existing tables.
When you choose to create tables manually before importing data into them from an export file, you should use either the same table definition previously used or a compatible format. For example, although you can increase the width of columns and change their order, you cannot do the following:
Add NOT NULL columns
Change the datatype of a column to an incompatible datatype (LONG to NUMBER, for example)
Change the definition of object types used in a table
Change DEFAULT column values
Note: When tables are manually created before data is imported, the CREATE TABLE statement in the export dump file will fail because the table already exists. To avoid this failure and continue loading data into the table, set the Import parameter IGNORE=y. Otherwise, no data will be loaded into the table because of the table creation error. |
In the normal import order, referential constraints are imported only after all tables are imported. This sequence prevents errors that could occur if a referential integrity constraint exists for data that has not yet been imported.
These errors can still occur when data is loaded into existing tables. For example, if table emp has a referential integrity constraint on the mgr column that verifies that the manager number exists in emp, a legitimate employee row might fail the referential integrity constraint if the manager's row has not yet been imported.
When such an error occurs, Import generates an error message, bypasses the failed row, and continues importing other rows in the table. You can disable constraints manually to avoid this.
Referential constraints between tables can also cause problems. For example, if the emp table appears before the dept table in the export file, but a referential check exists from the emp table into the dept table, some of the rows from the emp table may not be imported due to a referential constraint violation.
To prevent errors like these, you should disable referential integrity constraints when importing data into existing tables.
When the constraints are reenabled after importing, the entire table is checked, which may take a long time for a large table. If the time required for that check is too long, it may be beneficial to order the import manually.
To do so, perform several imports from an export file instead of one. First, import tables that are the targets of referential checks. Then, import the tables that reference them. This option works if tables do not reference each other in a circular fashion, and if a table does not reference itself.
Triggers that are defined to trigger on DDL events for a specific schema, or on DDL-related events for the database, are system triggers. These triggers can have detrimental effects on certain import operations. For example, they can prevent successful re-creation of database objects, such as tables. This causes errors to be returned that give no indication that a trigger caused the problem.
Database administrators and anyone creating system triggers should verify that such triggers do not prevent users from performing database operations for which they are authorized. To test a system trigger, take the following steps:
Define the trigger.
Create some database objects.
Export the objects in table or user mode.
Delete the objects.
Import the objects.
Verify that the objects have been successfully re-created.
Note: A full export does not export triggers owned by schema SYS. You must manually re-create SYS triggers either before or after the full import. Oracle recommends that you re-create them after the import in case they define actions that would impede progress of the import. |
The Export and Import utilities support four modes of operation:
Full: Exports and imports a full database. Only users with the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles can use this mode. Use the FULL parameter to specify this mode.
Tablespace: Enables a privileged user to move a set of tablespaces from one Oracle database to another. Use the TRANSPORT_TABLESPACE parameter to specify this mode.
User: Enables you to export and import all objects that belong to you (such as tables, grants, indexes, and procedures). A privileged user importing in user mode can import all objects in the schemas of a specified set of users. Use the OWNER parameter to specify this mode in Export, and use the FROMUSER parameter to specify this mode in Import.
Table: Enables you to export and import specific tables and partitions. A privileged user can qualify the tables by specifying the schema that contains them. Use the TABLES parameter to specify this mode.
See Table 20-3 for a list of objects that are exported and imported in each mode.
A user with the IMP_FULL_DATABASE role must specify one of these modes. Otherwise, an error results. If a user without the IMP_FULL_DATABASE role fails to specify one of these modes, a user-level Import is performed.
You can use conventional path Export or direct path Export to export in any mode except tablespace mode. The differences between conventional path Export and direct path Export are described in Conventional Path Export Versus Direct Path Export.
Table 20-3 Objects Exported and Imported in Each Mode
Object | Table Mode | User Mode | Full Database Mode | Tablespace Mode |
---|---|---|---|---|
Analyze cluster | No | Yes | Yes | No |
Analyze tables/statistics | Yes | Yes | Yes | Yes |
Application contexts | No | No | Yes | No |
Auditing information | Yes | Yes | Yes | No |
B-tree, bitmap, domain function-based indexes | Yes (footnote 1) | Yes | Yes | Yes |
Cluster definitions | No | Yes | Yes | Yes |
Column and table comments | Yes | Yes | Yes | Yes |
Database links | No | Yes | Yes | No |
Default roles | No | No | Yes | No |
Dimensions | No | Yes | Yes | No |
Directory aliases | No | No | Yes | No |
External tables (without data) | Yes | Yes | Yes | No |
Foreign function libraries | No | Yes | Yes | No |
Indexes owned by users other than table owner | Yes (Privileged users only) | Yes | Yes | Yes |
Index types | No | Yes | Yes | No |
Java resources and classes | No | Yes | Yes | No |
Job queues | No | Yes | Yes | No |
Nested table data | Yes | Yes | Yes | Yes |
Object grants | Yes (Only for tables and indexes) | Yes | Yes | Yes |
Object type definitions used by table | Yes | Yes | Yes | Yes |
Object types | No | Yes | Yes | No |
Operators | No | Yes | Yes | No |
Password history | No | No | Yes | No |
Postinstance actions and objects | No | No | Yes | No |
Postschema procedural actions and objects | No | Yes | Yes | No |
Posttable actions | Yes | Yes | Yes | Yes |
Posttable procedural actions and objects | Yes | Yes | Yes | Yes |
Preschema procedural objects and actions | No | Yes | Yes | No |
Pretable actions | Yes | Yes | Yes | Yes |
Pretable procedural actions | Yes | Yes | Yes | Yes |
Private synonyms | No | Yes | Yes | No |
Procedural objects | No | Yes | Yes | No |
Profiles | No | No | Yes | No |
Public synonyms | No | No | Yes | No |
Referential integrity constraints | Yes | Yes | Yes | No |
Refresh groups | No | Yes | Yes | No |
Resource costs | No | No | Yes | No |
Role grants | No | No | Yes | No |
Roles | No | No | Yes | No |
Rollback segment definitions | No | No | Yes | No |
Security policies for table | Yes | Yes | Yes | Yes |
Sequence numbers | No | Yes | Yes | No |
Snapshot logs | No | Yes | Yes | No |
Snapshots and materialized views | No | Yes | Yes | No |
System privilege grants | No | No | Yes | No |
Table constraints (primary, unique, check) | Yes | Yes | Yes | Yes |
Table data | Yes | Yes | Yes | Yes |
Table definitions | Yes | Yes | Yes | Yes |
Tablespace definitions | No | No | Yes | No |
Tablespace quotas | No | No | Yes | No |
Triggers | Yes | Yes (footnote 2) | Yes (footnote 3) | Yes |
Triggers owned by other users | Yes (Privileged users only) | No | No | No |
User definitions | No | No | Yes | No |
User proxies | No | No | Yes | No |
User views | No | Yes | Yes | No |
User-stored procedures, packages, and functions | No | Yes | Yes | No |
You can export tables, partitions, and subpartitions in the following ways:
Table-level Export: exports all data from the specified tables
Partition-level Export: exports only data from the specified source partitions or subpartitions
In all modes, partitioned data is exported in a format such that partitions or subpartitions can be imported selectively.
In table-level Export, you can export an entire table (partitioned or nonpartitioned) along with its indexes and other table-dependent objects. If the table is partitioned, all of its partitions and subpartitions are also exported. This applies to both direct path Export and conventional path Export. You can perform a table-level export in any Export mode.
In partition-level Export, you can export one or more specified partitions or subpartitions of a table. You can only perform a partition-level export in table mode.
For information about how to specify table-level and partition-level Exports, see TABLES.
You can import tables, partitions, and subpartitions in the following ways:
Table-level Import: Imports all data from the specified tables in an export file.
Partition-level Import: Imports only data from the specified source partitions or subpartitions.
You must set the parameter IGNORE=y when loading data into an existing table. See IGNORE for more information.
For each specified table, table-level Import imports all rows of the table. With table-level Import:
All tables exported using any Export mode (except TRANSPORT_TABLESPACES) can be imported.
Users can import the entire (partitioned or nonpartitioned) table, partitions, or subpartitions from a table-level export file into a (partitioned or nonpartitioned) target table with the same name.
If the table does not exist, and if the exported table was partitioned, table-level Import creates a partitioned table. If the table creation is successful, table-level Import reads all source data from the export file into the target table. After Import, the target table contains the partition definitions of all partitions and subpartitions associated with the source table in the export file. This operation ensures that the physical and logical attributes (including partition bounds) of the source partitions are maintained on import.
Partition-level Import can only be specified in table mode. It lets you selectively load data from specified partitions or subpartitions in an export file. Keep the following guidelines in mind when using partition-level Import.
Import always stores the rows according to the partitioning scheme of the target table.
Partition-level Import inserts only the row data from the specified source partitions or subpartitions.
If the target table is partitioned, partition-level Import rejects any rows that fall above the highest partition of the target table.
Partition-level Import cannot import a nonpartitioned exported table. However, a partitioned table can be imported from a nonpartitioned exported table using table-level Import.
Partition-level Import is legal only if the source table (that is, the table called tablename at export time) was partitioned and exists in the export file.
If the partition or subpartition name is not a valid partition in the export file, Import generates a warning.
The partition or subpartition name in the parameter refers to only the partition or subpartition in the export file, which may not contain all of the data of the table on the export source system.
If ROWS=y (the default) and the table does not exist in the import target system, the table is created and all rows from the source partition or subpartition are inserted into the partition or subpartition of the target table.
If ROWS=y (the default) and IGNORE=y, but the table already existed before import, all rows for the specified partition or subpartition in the table are inserted into the table. The rows are stored according to the existing partitioning scheme of the target table.
If ROWS=n, Import does not insert data into the target table and continues to process other objects associated with the specified table and partition or subpartition in the file.
If the target table is nonpartitioned, the partitions and subpartitions are imported into the entire table. Import requires IGNORE=y to import one or more partitions or subpartitions from the export file into a nonpartitioned table on the import target system.
If you specify a partition name for a composite partition, all subpartitions within the composite partition are used as the source.
In the following example, the partition specified by the partition name is a composite partition. All of its subpartitions will be imported:
imp SYSTEM/password FILE=expdat.dmp FROMUSER=scott TABLES=b:py
The following example causes row data of partitions qc and qd of table scott.e to be imported into the table scott.e:
imp scott/tiger FILE=expdat.dmp TABLES=(e:qc, e:qd) IGNORE=y
If table e does not exist in the import target database, it is created and data is inserted into the same partitions. If table e existed on the target system before import, the row data is inserted into the partitions whose range allows insertion. The row data can end up in partitions with names other than qc and qd.
Note: With partition-level Import to an existing table, you must set up the target partitions or subpartitions properly and use IGNORE=y. |
This section contains descriptions of the Export command-line parameters.
Default: operating system-dependent. See your Oracle operating system-specific documentation to determine the default value for this parameter.
Specifies the size, in bytes, of the buffer used to fetch rows. As a result, this parameter determines the maximum number of rows in an array fetched by Export. Use the following formula to calculate the buffer size:
buffer_size = rows_in_array * maximum_row_size
If you specify zero, the Export utility fetches only one row at a time.
Tables with columns of type LOB, LONG, BFILE, REF, ROWID, LOGICAL ROWID, or DATE are fetched one row at a time.
Note: The BUFFER parameter applies only to conventional path Export. It has no effect on a direct path Export. For direct path Exports, use the RECORDLENGTH parameter to specify the size of the buffer that Export uses for writing to the export file. |
This section shows an example of how to calculate buffer size.
The following table is created:
CREATE TABLE sample (name varchar(30), weight number);
The maximum size of the name column is 30, plus 2 bytes for the indicator. The maximum size of the weight column is 22 (the size of the internal representation for Oracle numbers), plus 2 bytes for the indicator.
Therefore, the maximum row size is 56 (30+2+22+2).
To perform array operations for 100 rows, a buffer size of 5600 should be specified.
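As an illustration only (the connect string, table name, and filename are assumptions), an export of this table using a 100-row fetch array could be invoked as follows:
exp scott/tiger TABLES=sample BUFFER=5600 FILE=sample.dmp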
Specifies how Export and Import manage the initial extent for table data.
The default, COMPRESS=y, causes Export to flag table data for consolidation into one initial extent upon import. If extent sizes are large (for example, because of the PCTINCREASE parameter), the allocated space will be larger than the space required to hold the data.
If you specify COMPRESS=n, Export uses the current storage parameters, including the values of initial extent size and next extent size. The values of the parameters may be the values specified in the CREATE TABLE or ALTER TABLE statements or the values modified by the database system. For example, the NEXT extent size value may be modified if the table grows and if the PCTINCREASE parameter is nonzero.
Note: Although the actual consolidation is performed upon import, you can specify the COMPRESS parameter only when you export, not when you import. The Export utility, not the Import utility, generates the data definitions, including the storage parameter definitions. Therefore, if you specify COMPRESS=y when you export, you can import the data in consolidated form only. |
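For example (a sketch; the schema, table, and filename are assumptions), to preserve the current storage parameters when exporting a table:
exp scott/tiger TABLES=emp COMPRESS=n FILE=emp.dmp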
Specifies whether or not Export uses the SET TRANSACTION READ ONLY statement to ensure that the data seen by Export is consistent to a single point in time and does not change during the execution of the exp command. You should specify CONSISTENT=y when you anticipate that other applications will be updating the target data after an export has started.
If you use CONSISTENT=n, each table is usually exported in a single transaction. However, if a table contains nested tables, the outer table and each inner table are exported as separate transactions. If a table is partitioned, each partition is exported as a separate transaction.
Therefore, if nested tables and partitioned tables are being updated by other applications, the data that is exported could be inconsistent. To minimize this possibility, export those tables at a time when updates are not being done.
Table 20-4 shows a sequence of events by two users: user1 exports partitions in a table and user2 updates data in that table.
Table 20-4 Sequence of Events During Updates by Two Users
Time Sequence | User1 | User2 |
---|---|---|
1 | Begins export of TAB:P1 | No activity |
2 | No activity | Updates TAB:P2; updates TAB:P1; commits transaction |
3 | Ends export of TAB:P1 | No activity |
4 | Exports TAB:P2 | No activity |
If the export uses CONSISTENT=y, none of the updates by user2 are written to the export file.
If the export uses CONSISTENT=n, the updates to TAB:P1 are not written to the export file. However, the updates to TAB:P2 are written to the export file, because the update transaction is committed before the export of TAB:P2 begins. As a result, the user2 transaction is only partially recorded in the export file, making it inconsistent.
If you use CONSISTENT=y and the volume of updates is large, the rollback segment usage will be large. In addition, the export of each table will be slower, because the rollback segment must be scanned for uncommitted transactions.
Keep in mind the following points about using CONSISTENT=y:
CONSISTENT=y is unsupported for exports that are performed when you are connected as user SYS or you are using AS SYSDBA, or both.
Export of certain metadata may require the use of the SYS schema within recursive SQL. In such situations, the use of CONSISTENT=y will be ignored. Oracle recommends that you avoid making metadata changes during an export process in which CONSISTENT=y is selected.
To minimize the time and space required for such exports, you should export tables that need to remain consistent separately from those that do not. For example, export the emp and dept tables together in a consistent export, and then export the remainder of the database in a second pass.
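A sketch of the first, consistent pass (the username and filename are assumptions); the remainder of the database could then be exported in a second, nonconsistent run:
exp scott/tiger TABLES=(emp,dept) CONSISTENT=y FILE=empdept.dmp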
A "snapshot too old" error occurs when rollback space is used up, and space taken up by committed transactions is reused for new transactions. Reusing space in the rollback segment allows database integrity to be preserved with minimum space requirements, but it imposes a limit on the amount of time that a read-consistent image can be preserved.
If a committed transaction has been overwritten and the information is needed for a read-consistent view of the database, a "snapshot too old" error results.
To avoid this error, you should minimize the time taken by a read-consistent export. (Do this by restricting the number of objects exported and, if possible, by reducing the database transaction rate.) Also, make the rollback segment as large as possible.
Note: Rollback segments will be deprecated in a future Oracle database release. Oracle recommends that you use automatic undo management instead. |
Default: n
Specifies the use of direct path Export.
Specifying DIRECT=y causes Export to extract data by reading the data directly, bypassing the SQL command-processing layer (evaluating buffer). This method can be much faster than a conventional path Export.
For information about direct path Exports, including security and performance considerations, see Invoking a Direct Path Export.
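For example (illustrative only; the username and filename are assumptions), a direct path export of a user's schema might be invoked as follows:
exp scott/tiger OWNER=scott DIRECT=y FILE=scott.dmp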
Default: 0 (zero)
Specifies that Export should display a progress meter in the form of a period for n number of rows exported. For example, if you specify FEEDBACK=10, Export displays a period each time 10 rows are exported. The FEEDBACK value applies to all tables being exported; it cannot be set individually for each table.
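For instance (a sketch; the names shown are assumptions), the following displays a period for every 1000 rows exported:
exp scott/tiger TABLES=emp FEEDBACK=1000 FILE=emp.dmp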
Specifies the names of the export dump files. The default extension is .dmp, but you can specify any extension. Because Export supports multiple export files, you can specify multiple filenames to be used. For example:
exp scott/tiger FILE = dat1.dmp, dat2.dmp, dat3.dmp FILESIZE=2048
When Export reaches the value you have specified for the maximum FILESIZE, Export stops writing to the current file, opens another export file with the next name specified by the FILE parameter, and continues until complete or the maximum value of FILESIZE is again reached. If you do not specify sufficient export filenames to complete the export, Export will prompt you to provide additional filenames.
Default: Data is written to one file until the maximum size, as specified in Table 20-5, is reached.
Export supports writing to multiple export files, and Import can read from multiple export files. If you specify a value (byte limit) for the FILESIZE parameter, Export will write only the number of bytes you specify to each dump file.
When the amount of data Export must write exceeds the maximum value you specified for FILESIZE, it will get the name of the next export file from the FILE parameter (see FILE for more information) or, if it has used all the names specified in the FILE parameter, it will prompt you to provide a new export filename. If you do not specify a value for FILESIZE (note that a value of 0 is equivalent to not specifying FILESIZE), then Export will write to only one file, regardless of the number of files specified in the FILE parameter.
Note: If the space requirements of your export file exceed the available disk space, Export will terminate, and you will have to repeat the Export after making sufficient disk space available. |
The FILESIZE parameter has a maximum value equal to the maximum value that can be stored in 64 bits.
Table 20-5 shows that the maximum size for dump files depends on the operating system you are using and on the release of the Oracle database that you are using.
Table 20-5 Maximum Size for Dump Files
Operating System | Release of Oracle Database | Maximum Size |
---|---|---|
Any | Prior to 8.1.5 | 2 gigabytes |
32-bit | 8.1.5 | 2 gigabytes |
64-bit | 8.1.5 and later | Unlimited |
32-bit with 32-bit files | Any | 2 gigabytes |
32-bit with 64-bit files | 8.1.6 and later | Unlimited |
The maximum value that can be stored in a file is dependent on your operating system. You should verify this maximum value in your Oracle operating system-specific documentation before specifying FILESIZE. You should also ensure that the file size you specify for Export is supported on the system on which Import will run.
The FILESIZE value can also be specified as a number followed by KB (number of kilobytes). For example, FILESIZE=2KB is the same as FILESIZE=2048. Similarly, MB specifies megabytes (1024 * 1024) and GB specifies gigabytes (1024**3). B remains the shorthand for bytes; the number is not multiplied to obtain the final file size (FILESIZE=2048B is the same as FILESIZE=2048).
Default: none
Specifies the system change number (SCN) that Export will use to enable flashback. The export operation is performed with data consistent as of this specified SCN.
See Also: Oracle Database Application Developer's Guide - Fundamentals for more information about using flashback |
The following is an example of specifying an SCN. When the export is performed, the data will be consistent as of SCN 3482971.
> exp system/password FILE=exp.dmp FLASHBACK_SCN=3482971
Default: none
Enables you to specify a timestamp. Export finds the SCN that most closely matches the specified timestamp. This SCN is used to enable flashback. The export operation is performed with data consistent as of this SCN.
You can specify the time in any format that the DBMS_FLASHBACK.ENABLE_AT_TIME procedure accepts. This means that you can specify it in either of the following ways:
> exp system/password FILE=exp.dmp FLASHBACK_TIME="TIMESTAMP '2002-05-01 11:00:00'"
> exp system/password FILE=exp.dmp FLASHBACK_TIME="TO_TIMESTAMP('12-02-2001 14:35:00', 'DD-MM-YYYY HH24:MI:SS')"
Also, the old format, as shown in the following example, will continue to be accepted to ensure backward compatibility:
> exp system/password FILE=exp.dmp FLASHBACK_TIME="'2002-05-01 11:00:00'"
Indicates that the export is a full database mode export (that is, it exports the entire database). Specify FULL=y to export in full database mode. You need to have the EXP_FULL_DATABASE role to export in this mode.
A full database export and import can be a good way to replicate or clean up a database. However, to avoid problems be sure to keep the following points in mind:
A full export does not export triggers owned by schema SYS. You must manually re-create SYS triggers either before or after the full import. Oracle recommends that you re-create them after the import in case they define actions that would impede progress of the import.
If possible, before beginning, make a physical copy of the exported database and the database into which you intend to import. This ensures that any mistakes are reversible.
Before you begin the export, it is advisable to produce a report that includes the following information:
A list of tablespaces and datafiles
A list of rollback segments
A count, by user, of each object type such as tables, indexes, and so on
This information lets you ensure that tablespaces have already been created and that the import was successful.
If you are creating a completely new database from an export, remember to create an extra rollback segment in SYSTEM and to make it available in your initialization parameter file (init.ora) before proceeding with the import.
When you perform the import, ensure you are pointing at the correct instance. This is very important because on some UNIX systems, just the act of entering a subshell can change the database against which an import operation was performed.
Do not perform a full import on a system that has more than one database unless you are certain that all tablespaces have already been created. A full import creates any undefined tablespaces using the same datafile names as the exported database. This can result in problems in the following situations:
If the datafiles belong to any other database, they will become corrupted. This is especially true if the exported database is on the same system, because its datafiles will be reused by the database into which you are importing.
If the datafiles have names that conflict with existing operating system files.
Specifies whether or not the Export utility exports object grants. The object grants that are exported depend on whether you use full database mode or user mode. In full database mode, all grants on a table are exported. In user mode, only those granted by the owner of the table are exported. System privilege grants are always exported.
Displays a description of the Export parameters. Enter exp help=y on the command line to invoke it.
Specifies a filename to receive informational and error messages. For example:
exp SYSTEM/password LOG=export.log
If you specify this parameter, messages are logged in the log file and displayed on the terminal.
Default: n
Specifies whether or not the Export utility uses the SET TRANSACTION READ ONLY statement to ensure that the data exported is consistent to a single point in time and does not change during the export. If OBJECT_CONSISTENT is set to y, each object is exported in its own read-only transaction, even if it is partitioned. In contrast, if you use the CONSISTENT parameter, then there is only one read-only transaction.
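For example (illustrative only; the username and filename are assumptions):
exp scott/tiger OWNER=scott OBJECT_CONSISTENT=y FILE=scott.dmp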
Indicates that the export is a user-mode export and lists the users whose objects will be exported. If the user initiating the export is the database administrator (DBA), multiple users can be listed.
User-mode exports can be used to back up one or more database users. For example, a DBA may want to back up the tables of deleted users for a period of time. User mode is also appropriate for users who want to back up their own data or who want to move objects from one owner to another.
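For example, a DBA might export the objects of two users in a single run (the usernames and filename shown are examples only):
exp system/password OWNER=(scott,hr) FILE=users.dmp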
Specifies a filename for a file that contains a list of Export parameters. For more information about using a parameter file, see Invoking Export and Import.
Default: none
This parameter enables you to select a subset of rows from a set of tables when doing a table mode export. The value of the QUERY parameter is a string that contains a WHERE clause for a SQL SELECT statement that will be applied to all tables (or table partitions) listed in the TABLES parameter.
For example, if user scott wants to export only those employees whose job title is SALESMAN and whose salary is less than 1600, he could do the following (this example is UNIX-based):
exp scott/tiger TABLES=emp QUERY=\"WHERE job=\'SALESMAN\' and sal \<1600\"
Note: Because the value of the QUERY parameter contains blanks, most operating systems require that the entire string WHERE job=\'SALESMAN\' and sal\<1600 be placed in double quotation marks or marked as a literal by some method. Operating system reserved characters also need to be preceded by an escape character. See your Oracle operating system-specific documentation for information about special and reserved characters on your system. |
When executing this query, Export builds a SQL SELECT statement similar to the following:
SELECT * FROM emp WHERE job='SALESMAN' and sal <1600;
The values specified for the QUERY parameter are applied to all tables (or table partitions) listed in the TABLES parameter. For example, the following statement will unload rows in both emp and bonus that match the query:
exp scott/tiger TABLES=emp,bonus QUERY=\"WHERE job=\'SALESMAN\' and sal\<1600\"
Again, the SQL statements that Export executes are similar to the following:
SELECT * FROM emp WHERE job='SALESMAN' and sal <1600;
SELECT * FROM bonus WHERE job='SALESMAN' and sal <1600;
If a table is missing the columns specified in the QUERY clause, an error message will be produced, and no rows will be exported for the offending table.
The QUERY parameter cannot be specified for full, user, or tablespace-mode exports.
The QUERY parameter must be applicable to all specified tables.
The QUERY parameter cannot be specified in a direct path Export (DIRECT=y).
The QUERY parameter cannot be specified for tables with inner nested tables.
You cannot determine from the contents of the export file whether the data is the result of a QUERY export.
Default: operating system-dependent
Specifies the length, in bytes, of the file record. The RECORDLENGTH parameter is necessary when you must transfer the export file to another operating system that uses a different default value.
If you do not define this parameter, it defaults to your platform-dependent value for buffer size.
You can set RECORDLENGTH to any value equal to or greater than your system's buffer size. (The highest value is 64 KB.) Changing the RECORDLENGTH parameter affects only the size of data that accumulates before writing to the disk. It does not affect the operating system file block size.
Note: You can use this parameter to specify the size of the Export I/O buffer. |
Default: n
The RESUMABLE parameter is used to enable and disable resumable space allocation. Because this parameter is disabled by default, you must set RESUMABLE=y in order to use its associated parameters, RESUMABLE_NAME and RESUMABLE_TIMEOUT.
Default: 'User USERNAME (USERID), Session SESSIONID, Instance INSTANCEID'
The value for this parameter identifies the statement that is resumable. This value is a user-defined text string that is inserted in either the USER_RESUMABLE or DBA_RESUMABLE view to help you identify a specific resumable statement that has been suspended.
This parameter is ignored unless the RESUMABLE parameter is set to y to enable resumable space allocation.
Default: 7200 seconds (2 hours)
The value of the parameter specifies the time period during which an error must be fixed. If the error is not fixed within the timeout period, execution of the statement is terminated.
This parameter is ignored unless the RESUMABLE parameter is set to y to enable resumable space allocation.
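A sketch of the three resumable parameters used together (the names and values shown are examples only):
exp scott/tiger TABLES=emp FILE=emp.dmp RESUMABLE=y RESUMABLE_NAME=exp_emp RESUMABLE_TIMEOUT=3600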
Specifies the type of database optimizer statistics to generate when the exported data is imported. Options are ESTIMATE, COMPUTE, and NONE. See the Import parameter STATISTICS and Importing Statistics.
In some cases, Export will place the precalculated statistics in the export file, as well as the ANALYZE statements to regenerate the statistics.
However, the precalculated optimizer statistics will not be used at export time if a table has columns with system-generated names.
The precalculated optimizer statistics are flagged as questionable at export time if:
There are row errors while exporting
The client character set or NCHAR character set does not match the server character set or NCHAR character set
A QUERY clause is specified
Only certain partitions or subpartitions are exported
Note: Specifying ROWS=n does not preclude saving the precalculated statistics in the export file. This enables you to tune plan generation for queries in a nonproduction database using statistics from a production database. |
Specifies that the export is a table-mode export and lists the table names and partition and subpartition names to export. You can specify the following when you specify the name of the table:
schemaname specifies the name of the user's schema from which to export the table or partition. The schema names ORDSYS, MDSYS, CTXSYS, LBACSYS, and ORDPLUGINS are reserved by Export.
tablename specifies the name of the table or tables to be exported. Table-level export lets you export entire partitioned or nonpartitioned tables. If a table in the list is partitioned and you do not specify a partition name, all its partitions and subpartitions are exported.
The table name can contain any number of '%' pattern matching characters, which can each match zero or more characters in the table name against the table objects in the database. All the tables in the relevant schema that match the specified pattern are selected for export, as if the respective table names were explicitly specified in the parameter.
partition_name indicates that the export is a partition-level Export. Partition-level Export lets you export one or more specified partitions or subpartitions within a table.
The syntax you use to specify the preceding is in the form:
schemaname.tablename:partition_name
schemaname.tablename:subpartition_name
If you use tablename:partition_name, the specified table must be partitioned, and partition_name must be the name of one of its partitions or subpartitions. If the specified table is not partitioned, the partition_name is ignored and the entire table is exported.
See Example Export Session Using Partition-Level Export for several examples of partition-level Exports.
The following restrictions apply to table names:
By default, table names in a database are stored as uppercase. If you have a table name in mixed-case or lowercase, and you want to preserve case-sensitivity for the table name, you must enclose the name in quotation marks. The name must exactly match the table name stored in the database.
Some operating systems require that quotation marks on the command line be preceded by an escape character. The following are examples of how case-sensitivity can be preserved in the different Export modes.
In command-line mode:
TABLES='\"Emp\"'
In interactive mode:
Table(T) to be exported: "Emp"
In parameter file mode:
TABLES='"Emp"'
Table names specified on the command line cannot include a pound (#) sign, unless the table name is enclosed in quotation marks. Similarly, in the parameter file, if a table name includes a pound (#) sign, the Export utility interprets the rest of the line as a comment, unless the table name is enclosed in quotation marks.
For example, if the parameter file contains the following line, Export interprets everything on the line after emp# as a comment and does not export the tables dept and mydata:
TABLES=(emp#, dept, mydata)
However, given the following line, the Export utility exports all three tables, because emp# is enclosed in quotation marks:
TABLES=("emp#", dept, mydata)
Note: Some operating systems require single quotation marks rather than double quotation marks, or the reverse. Different operating systems also have other restrictions on table naming. |
Default: none
The TABLESPACES parameter specifies that all tables in the specified tablespace be exported to the Export dump file. This includes all tables contained in the list of tablespaces and all tables that have a partition located in the list of tablespaces. Indexes are exported with their tables, regardless of where the index is stored.
You must have the EXP_FULL_DATABASE role to use TABLESPACES to export all tables in the tablespace.
When TABLESPACES is used in conjunction with TRANSPORT_TABLESPACE=y, you can specify a limited list of tablespaces to be exported from the database to the export file.
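For example (the tablespace names and filename shown are illustrative only):
exp system/password TABLESPACES=(tbs_1,tbs_2) FILE=tbs.dmp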
Default: n
When specified as y, this parameter enables the export of transportable tablespace metadata.
Default: n
When TTS_FULL_CHECK is set to y, Export verifies that a recovery set (set of tablespaces to be recovered) has no dependencies (specifically, IN pointers) on objects outside the recovery set, and the reverse.
Specifies the username/password (and optional connect string) of the user performing the export. If you omit the password, Export will prompt you for it.
USERID can also be:
username/password AS SYSDBA
or
username/password@instance AS SYSDBA
If you connect as user SYS, you must also specify AS SYSDBA in the connect string. Your operating system may require you to treat AS SYSDBA as a special string, in which case the entire string would be enclosed in quotation marks. See Invoking Export and Import for more information.
Default: none
Specifies the maximum number of bytes in an export file on each volume of tape.
The VOLSIZE parameter has a maximum value equal to the maximum value that can be stored in 64 bits on your platform.
The VOLSIZE value can be specified as a number followed by KB (number of kilobytes). For example, VOLSIZE=2KB is the same as VOLSIZE=2048. Similarly, MB specifies megabytes (1024 * 1024) and GB specifies gigabytes (1024**3). B remains the shorthand for bytes; the number is not multiplied to get the final file size (VOLSIZE=2048B is the same as VOLSIZE=2048).
This section contains descriptions of the Import command-line parameters.
Default: operating system-dependent
The integer specified for BUFFER is the size, in bytes, of the buffer through which data rows are transferred.
BUFFER determines the number of rows in the array inserted by Import. The following formula gives an approximation of the buffer size that inserts a given array of rows:
buffer_size = rows_in_array * maximum_row_size
For tables containing LOB, LONG, BFILE, REF, ROWID, UROWID, or DATE columns, rows are inserted individually. The size of the buffer must be large enough to contain the entire row, except for LOB and LONG columns. If the buffer cannot hold the longest row in a table, Import attempts to allocate a larger buffer.
Note: See your Oracle operating system-specific documentation to determine the default value for this parameter. |
Specifies whether Import should commit after each array insert. By default, Import commits only after loading each table, and Import performs a rollback when an error occurs, before continuing with the next object.
If a table has nested table columns or attributes, the contents of the nested tables are imported as separate tables. Therefore, the contents of the nested tables are always committed in a transaction distinct from the transaction used to commit the outer table.
If COMMIT=n and a table is partitioned, each partition and subpartition in the Export file is imported in a separate transaction.
Specifying COMMIT=y prevents rollback segments from growing inordinately large and improves the performance of large imports. Specifying COMMIT=y is advisable if the table has a uniqueness constraint. If the import is restarted, any rows that have already been imported are rejected with a recoverable error.
If a table does not have a uniqueness constraint, Import could produce duplicate rows when you reimport the data.
For tables containing LOB, LONG, BFILE, REF, ROWID, or UROWID columns, array inserts are not done. If COMMIT=y, Import commits these tables after each row.
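For example (a sketch; the connect string, filename, and table are assumptions), to commit after each array insert while loading into an existing table:
imp scott/tiger FILE=expdat.dmp TABLES=emp IGNORE=y COMMIT=y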
Default: y
Specifies whether or not Import should compile packages, procedures, and functions as they are created.
If COMPILE
=n
, these units are compiled on their first use. For example, packages that are used to build domain indexes are compiled when the domain indexes are created.
Default: y
Specifies whether or not table constraints are to be imported. The default is to import constraints. If you do not want constraints to be imported, you must set the parameter value to n.
Note that primary key constraints for index-organized tables (IOTs) and object tables are always imported.
Default: none
When TRANSPORT_TABLESPACE
is specified as y
, use this parameter to list the datafiles to be transported into the database.
Specifies whether or not the existing datafiles making up the database should be reused. That is, specifying DESTROY=y
causes Import to include the REUSE
option in the datafile clause of the SQL CREATE TABLESPACE
statement, which causes Import to reuse the original database's datafiles after deleting their contents.
Note that the export file contains the datafile names used in each tablespace. If you specify DESTROY=y
and attempt to create a second database on the same system (for testing or other purposes), the Import utility will overwrite the first database's datafiles when it creates the tablespace. In this situation you should use the default, DESTROY=n,
so that an error occurs if the datafiles already exist when the tablespace is created. Also, when you need to import into the original database, you will need to specify IGNORE=y
to add to the existing datafiles without replacing them.
Caution: If datafiles are stored on a raw device, DESTROY=n does not prevent files from being overwritten. |
Default: 0
(zero)
Specifies that Import should display a progress meter in the form of a period for every n
rows imported. For example, if you specify FEEDBACK=10,
Import displays a period each time 10 rows have been imported. The FEEDBACK
value applies to all tables being imported; it cannot be individually set for each table.
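For example, to display a period for every 1000 rows imported from an illustrative dump file:
imp scott/tiger FILE=scott.dmp TABLES=(emp,dept) FEEDBACK=1000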
Specifies the names of the export files to import. The default extension is .dmp
. Because Export supports multiple export files (see the following description of the FILESIZE
parameter), you may need to specify multiple filenames to be imported. For example:
imp scott/tiger IGNORE=y FILE = dat1.dmp, dat2.dmp, dat3.dmp FILESIZE=2048
You need not be the user who exported the export files; however, you must have read access to the files. If you were not the exporter of the export files, you must also have the IMP_FULL_DATABASE
role granted to you.
Default: operating system-dependent
Export supports writing to multiple export files, and Import can read from multiple export files. If, on export, you specify a value (byte limit) for the Export FILESIZE
parameter, Export will write only the number of bytes you specify to each dump file. On import, you must use the Import parameter FILESIZE
to tell Import the maximum dump file size you specified on export.
Note: The maximum size allowed is operating system-dependent. You should verify this maximum value in your Oracle operating system-specific documentation before specifying FILESIZE. |
The FILESIZE
value can be specified as a number followed by KB (number of kilobytes). For example, FILESIZE=2KB
is the same as FILESIZE=2048.
Similarly, MB specifies megabytes (1024 * 1024) and GB specifies gigabytes (1024**3). B remains the shorthand for bytes; the number is not multiplied to obtain the final file size (FILESIZE=2048B
is the same as FILESIZE=2048
).
For information about the maximum size of dump files, see Table 20-5.
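For example, if the export was taken with an (illustrative) 10 MB limit per dump file, the matching import must specify the same limit:
> exp scott/tiger OWNER=scott FILE=scott1.dmp,scott2.dmp,scott3.dmp FILESIZE=10MB
> imp scott/tiger FILE=scott1.dmp,scott2.dmp,scott3.dmp FILESIZE=10MB TABLES=(emp,dept)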
A comma-delimited list of schemas to import. This parameter is relevant only to users with the IMP_FULL_DATABASE
role. The parameter enables you to import a subset of schemas from an export file containing multiple schemas (for example, a full export dump file or a multischema, user-mode export dump file).
Schema names that appear inside function-based indexes, functions, procedures, triggers, type bodies, views, and so on, are not affected by FROMUSER
or TOUSER
processing. Only the name of the object is affected. After the import has completed, items in any TOUSER
schema should be manually checked for references to old (FROMUSER
) schemas, and corrected if necessary.
You will typically use FROMUSER
in conjunction with the Import parameter TOUSER
, which you use to specify a list of usernames whose schemas will be targets for import (see TOUSER). The user that you specify with TOUSER
must exist in the target database prior to the import operation; otherwise an error is returned.
If you do not specify TOUSER
, Import will do the following:
Import objects into the FROMUSER
schema if the export file is a full dump or a multischema, user-mode export dump file
Create objects in the importer's schema (regardless of the presence of or absence of the FROMUSER
schema on import) if the export file is a single-schema, user-mode export dump file created by an unprivileged user
Note: Specifying FROMUSER=SYSTEM causes only schema objects belonging to user SYSTEM to be imported; it does not cause system objects to be imported. |
Default: y
Specifies whether to import object grants.
By default, the Import utility imports any object grants that were exported. If the export was a user-mode export, the export file contains only first-level object grants (those granted by the owner).
If the export was a full database mode export, the export file contains all object grants, including lower-level grants (those granted by users given a privilege with the WITH GRANT OPTION
). If you specify GRANTS=n,
the Import utility does not import object grants. (Note that system grants are imported even if GRANTS=n.
)
Note: Export does not export grants on data dictionary views for security reasons that affect Import. If such grants were exported, access privileges would be changed and the importer would not be aware of this. |
Displays a description of the Import parameters. Enter imp
HELP=y
on the command line to invoke it.
Specifies how object creation errors should be handled. If you accept the default, IGNORE=n
, Import logs or displays object creation errors before continuing.
If you specify IGNORE=y
, Import overlooks object creation errors when it attempts to create database objects, and continues without reporting the errors.
Note that only object creation errors are ignored; other errors, such as operating system, database, and SQL errors, are not ignored and may cause processing to stop.
In situations where multiple refreshes from a single export file are done with IGNORE=y
, certain objects can be created multiple times (although they will have unique system-defined names). You can prevent this for certain objects (for example, constraints) by doing an import with CONSTRAINTS=n
. If you do a full import with CONSTRAINTS=n
, no constraints for any tables are imported.
If a table already exists and IGNORE=y
, then rows are imported into existing tables without any errors or messages being given. You might want to import data into tables that already exist in order to use new storage parameters or because you have already created the table in a cluster.
If a table already exists and IGNORE=n,
then errors are reported and the table is skipped with no rows inserted. Also, objects dependent on tables, such as indexes, grants, and constraints, will not be created.
Caution: When you import into existing tables, if no column in the table is uniquely indexed, rows could be duplicated. |
Specifies whether or not to import indexes. System-generated indexes such as LOB indexes, OID indexes, or unique constraint indexes are re-created by Import regardless of the setting of this parameter.
You can postpone all user-generated index creation until after Import completes, by specifying INDEXES=n
.
If indexes for the target table already exist at the time of the import, Import performs index maintenance when data is inserted into the table.
Specifies a file to receive index-creation statements.
When this parameter is specified, index-creation statements for the requested mode are extracted and written to the specified file, rather than used to create indexes in the database. No database objects are imported.
If the Import parameter CONSTRAINTS
is set to y
, Import also writes table constraints to the index file.
The file can then be edited (for example, to change storage parameters) and used as a SQL script to create the indexes.
To make it easier to identify the indexes defined in the file, the export file's CREATE
TABLE
statements and CREATE
CLUSTER
statements are included as comments.
Perform the following steps to use this feature:
Import using the INDEXFILE
parameter to create a file of index-creation statements.
Edit the file, making certain to add a valid password to the connect
strings.
Rerun Import, specifying INDEXES=n
.
(This step imports the database objects while preventing Import from using the index definitions stored in the export file.)
Execute the file of index-creation statements as a SQL script to create the indexes.
The INDEXFILE
parameter can be used only with the FULL=y
, FROMUSER
, TOUSER
, or TABLES
parameters.
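A hypothetical sequence that implements these steps (the dump file and script names are illustrative; edit scott_idx.sql between the first and second commands as described in Step 2) might look like this:
> imp scott/tiger FILE=scott.dmp TABLES=(emp,dept) INDEXFILE=scott_idx.sql
> imp scott/tiger FILE=scott.dmp TABLES=(emp,dept) INDEXES=n
> sqlplus scott/tiger @scott_idx.sql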
Specifies a file to receive informational and error messages. If you specify a log file, the Import utility writes all information to the log in addition to the terminal display.
Specifies a filename for a file that contains a list of Import parameters. For more information about using a parameter file, see Parameter Files.
Default: operating system-dependent
Specifies the length, in bytes, of the file record. The RECORDLENGTH
parameter is necessary when you must transfer the export file to another operating system that uses a different default value.
If you do not define this parameter, it defaults to your platform-dependent value for BUFSIZ.
You can set RECORDLENGTH
to any value equal to or greater than your system's BUFSIZ
. (The highest value is 64 KB.) Changing the RECORDLENGTH
parameter affects only the size of data that accumulates before writing to the database. It does not affect the operating system file block size.
You can also use this parameter to specify the size of the Import I/O buffer.
Default: n
The RESUMABLE
parameter is used to enable and disable resumable space allocation. Because this parameter is disabled by default, you must set RESUMABLE=y
in order to use its associated parameters, RESUMABLE_NAME
and RESUMABLE_TIMEOUT
.
See Also:
|
Default: 'User USERNAME (USERID), Session SESSIONID, Instance INSTANCEID'
The value for this parameter identifies the statement that is resumable. This value is a user-defined text string that is inserted in either the USER_RESUMABLE
or DBA_RESUMABLE
view to help you identify a specific resumable statement that has been suspended.
This parameter is ignored unless the RESUMABLE
parameter is set to y
to enable resumable space allocation.
Default: 7200
seconds (2 hours)
The value of the parameter specifies the time period during which an error must be fixed. If the error is not fixed within the timeout period, execution of the statement is terminated.
This parameter is ignored unless the RESUMABLE
parameter is set to y
to enable resumable space allocation.
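For example, to allow a suspended import up to four hours for a space problem to be corrected (the file name, statement name, and timeout value are illustrative):
imp SYSTEM/password FILE=big.dmp FULL=y RESUMABLE=y RESUMABLE_NAME='nightly_import' RESUMABLE_TIMEOUT=14400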
Default: n
When SHOW=y,
the contents of the export dump file are listed to the display and not imported. The SQL statements contained in the export are displayed in the order in which Import will execute them.
The SHOW
parameter can be used only with the FULL=y
, FROMUSER
, TOUSER
, or TABLES
parameter.
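For example, to list the contents of an illustrative dump file without importing anything:
imp scott/tiger FILE=scott.dmp SHOW=y TABLES=(emp,dept)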
Default: the value of the Oracle database configuration parameter, SKIP_UNUSABLE_INDEXES
, as specified in the initialization parameter file
Both Import and the Oracle database provide a SKIP_UNUSABLE_INDEXES
parameter. The Import SKIP_UNUSABLE_INDEXES
parameter is specified at the Import command line. The Oracle database SKIP_UNUSABLE_INDEXES
parameter is specified as a configuration parameter in the initialization parameter file. It is important to understand how they affect each other.
If you do not specify a value for SKIP_UNUSABLE_INDEXES
at the Import command line, then Import uses the database setting for the SKIP_UNUSABLE_INDEXES
configuration parameter, as specified in the initialization parameter file.
If you do specify a value for SKIP_UNUSABLE_INDEXES
at the Import command line, it overrides the value of the SKIP_UNUSABLE_INDEXES
configuration parameter in the initialization parameter file.
A value of y
means that Import will skip building indexes that were set to the Index Unusable state (by either system or user). Other indexes (not previously set to Index Unusable) continue to be updated as rows are inserted.
This parameter enables you to postpone index maintenance on selected index partitions until after row data has been inserted. You then have the responsibility to rebuild the affected index partitions after the Import.
Note: Indexes that are unique and marked Unusable are not allowed to skip index maintenance. Therefore, the SKIP_UNUSABLE_INDEXES parameter has no effect on unique indexes. |
You can use the INDEXFILE
parameter in conjunction with INDEXES=n
to provide the SQL scripts for re-creating the index. If the SKIP_UNUSABLE_INDEXES
parameter is not specified, row insertions that attempt to update unusable indexes will fail.
Default: ALWAYS
Specifies what is done with the database optimizer statistics at import time.
The options are:
ALWAYS
Always import database optimizer statistics regardless of whether or not they are questionable.
NONE
Do not import or recalculate the database optimizer statistics.
SAFE
Import database optimizer statistics only if they are not questionable. If they are questionable, recalculate the optimizer statistics.
RECALCULATE
Do not import the database optimizer statistics. Instead, recalculate them on import. This requires that the original export operation that created the dump file generated the necessary ANALYZE
statements (that is, the export was not performed with STATISTICS
=NONE
). These ANALYZE
statements are included in the dump file and used by the import operation for recalculation of the table's statistics.
See Also:
|
Default: y
Specifies whether or not to import any general Streams metadata that may be present in the export dump file.
Default: n
Specifies whether or not to import Streams instantiation metadata that may be present in the export dump file. Specify y
if the import is part of an instantiation in a Streams environment.
Specifies that the import is a table-mode import and lists the table names and partition and subpartition names to import. Table-mode import lets you import entire partitioned or nonpartitioned tables. The TABLES
parameter restricts the import to the specified tables and their associated objects, as listed in Table 20-3. You can specify the following values for the TABLES
parameter:
tablename
specifies the name of the table or tables to be imported. If a table in the list is partitioned and you do not specify a partition name, all its partitions and subpartitions are imported. To import all the exported tables, specify an asterisk (*) as the only table name parameter.
tablename
can contain any number of '%' pattern matching characters, which can each match zero or more characters in the table names in the export file. All the tables whose names match all the specified patterns of a specific table name in the list are selected for import. A table name in the list that consists of all pattern matching characters and no partition name results in all exported tables being imported.
partition_name
and subpartition_name
let you restrict the import to one or more specified partitions or subpartitions within a partitioned table.
The syntax you use to specify the preceding is in the form:
tablename:partition_name tablename:subpartition_name
If you use tablename
:
partition_name
, the specified table must be partitioned, and partition_name
must be the name of one of its partitions or subpartitions. If the specified table is not partitioned, the partition_name
is ignored and the entire table is imported.
The number of tables that can be specified at the same time is dependent on command-line limits.
As the export file is processed, each table name in the export file is compared against each table name in the list, in the order in which the table names were specified in the parameter. To avoid ambiguity and excessive processing time, specific table names should appear at the beginning of the list, and more general table names (those with patterns) should appear at the end of the list.
Although you can qualify table names with schema names (as in scott
.emp
) when exporting, you cannot do so when importing. In the following example, the TABLES
parameter is specified incorrectly:
imp SYSTEM/password TABLES=(jones.accts, scott.emp, scott.dept)
The valid specification to import these tables is as follows:
imp SYSTEM/password FROMUSER=jones TABLES=(accts)
imp SYSTEM/password FROMUSER=scott TABLES=(emp,dept)
For a more detailed example, see Example Import Using Pattern Matching to Import Various Tables.
The following restrictions apply to table names:
By default, table names in a database are stored as uppercase. If you have a table name in mixed-case or lowercase, and you want to preserve case-sensitivity for the table name, you must enclose the name in quotation marks. The name must exactly match the table name stored in the database.
Some operating systems require that quotation marks on the command line be preceded by an escape character. The following are examples of how case-sensitivity can be preserved in the different Import modes.
In command-line mode:
tables='\"Emp\"'
In interactive mode:
Table(T) to be imported: "Emp"
In parameter file mode:
tables='"Emp"'
Table names specified on the command line cannot include a pound (#) sign, unless the table name is enclosed in quotation marks. Similarly, in the parameter file, if a table name includes a pound (#) sign, the Import utility interprets the rest of the line as a comment, unless the table name is enclosed in quotation marks.
For example, if the parameter file contains the following line, Import interprets everything on the line after emp#
as a comment and does not import the tables dept
and mydata:
TABLES=(emp#, dept, mydata)
However, given the following line, the Import utility imports all three tables because emp#
is enclosed in quotation marks:
TABLES=("emp#", dept, mydata)
Note: Some operating systems require single quotation marks rather than double quotation marks, or the reverse; see your Oracle operating system-specific documentation. Different operating systems also have other restrictions on table naming. For example, the UNIX C shell attaches a special meaning to a dollar sign ($) or pound sign (#) (or certain other special characters). You must use escape characters to get such characters in the name past the shell and into Import. |
Default: none
When TRANSPORT_TABLESPACE
is specified as y
, use this parameter to provide a list of tablespaces to be transported into the database.
See TRANSPORT_TABLESPACE for more information.
Default: none
When you import a table that references a type, but a type of that name already exists in the database, Import attempts to verify that the preexisting type is, in fact, the type used by the table (rather than a different type that just happens to have the same name).
To do this, Import compares the type's unique identifier (TOID) with the identifier stored in the export file. Import will not import the table rows if the TOIDs do not match.
In some situations, you may not want this validation to occur on specified types (for example, if the types were created by a cartridge installation). You can use the TOID_NOVALIDATE
parameter to specify types to exclude from TOID comparison.
The syntax is as follows:
TOID_NOVALIDATE=([schemaname.]typename [, ...])
For example:
imp scott/tiger TABLES=jobs TOID_NOVALIDATE=typ1
imp scott/tiger TABLES=salaries TOID_NOVALIDATE=(fred.typ0,sally.typ2,typ3)
If you do not specify a schema name for the type, it defaults to the schema of the importing user. For example, in the first preceding example, the type typ1
defaults to scott.typ1.
Note that TOID_NOVALIDATE
deals only with table column types. It has no effect on table types.
The output of a typical import with excluded types would contain entries similar to the following:
[...] . importing IMP3's objects into IMP3 . . skipping TOID validation on type IMP2.TOIDTYP0 . . importing table "TOIDTAB3" [...]
Caution: When you inhibit validation of the type identifier, it is your responsibility to ensure that the attribute list of the imported type matches the attribute list of the existing type. If these attribute lists do not match, results are unpredictable. |
Default: none
Specifies a list of user names whose schemas will be targets for Import. The user names must exist prior to the import operation; otherwise an error is returned. The IMP_FULL_DATABASE
role is required to use this parameter. To import to a different schema than the one that originally contained the object, specify TOUSER.
For example:
imp SYSTEM/password FROMUSER=scott TOUSER=joe TABLES=emp
If multiple schemas are specified, the schema names are paired. The following example imports scott's objects into joe's schema, and fred's objects into ted's schema:
imp SYSTEM/password FROMUSER=scott,fred TOUSER=joe,ted
If the FROMUSER
list is longer than the TOUSER
list, the remaining schemas will be imported into either the FROMUSER
schema, or into the importer's schema, based on normal defaulting rules. You can use the following syntax to ensure that any extra objects go into the TOUSER
schema:
imp SYSTEM/password FROMUSER=scott,adams TOUSER=ted,ted
Note that user ted
is listed twice.
Default: n
When specified as y
, instructs Import to import transportable tablespace metadata from an export file.
Default: none
When TRANSPORT_TABLESPACE
is specified as y
, use this parameter to list the users who own the data in the transportable tablespace set.
See TRANSPORT_TABLESPACE.
Default: none
Specifies the username/password (and optional connect string) of the user performing the import.
USERID
can also be:
username/password AS SYSDBA
or
username/password@instance
or
username/password@instance AS SYSDBA
If you connect as user SYS,
you must also specify AS SYSDBA
in the connect string. Your operating system may require you to treat AS SYSDBA
as a special string, in which case the entire string would be enclosed in quotation marks.
See Also:
|
Default: none
Specifies the maximum number of bytes in a dump file on each volume of tape.
The VOLSIZE
parameter has a maximum value equal to the maximum value that can be stored in 64 bits on your platform.
The VOLSIZE
value can be specified as number followed by KB (number of kilobytes). For example, VOLSIZE=2KB
is the same as VOLSIZE=2048.
Similarly, MB specifies megabytes (1024 * 1024) and GB specifies gigabytes (1024**3). The shorthand for bytes remains B; the number is not multiplied to get the final file size (VOLSIZE=2048B
is the same as VOLSIZE=2048
).
This section provides examples of the following types of Export sessions:
In each example, you are shown how to use both the command-line method and the parameter file method. Some examples use vertical ellipses to indicate sections of example output that were too long to include.
Only users with the DBA
role or the EXP_FULL_DATABASE
role can export in full database mode. In this example, an entire database is exported to the file dba.dmp
with all GRANTS
and all data.
> exp SYSTEM/password PARFILE=params.dat
The params.dat
file contains the following information:
FILE=dba.dmp GRANTS=y FULL=y ROWS=y
> exp SYSTEM/password FULL=y FILE=dba.dmp GRANTS=y ROWS=y
Information is displayed about the release of Export you are using and the release of Oracle Database that you are connected to. Status messages are written out as the entire database is exported. A final completion message is returned when the export completes successfully, without warnings.
User-mode exports can be used to back up one or more database users. For example, a DBA may want to back up the tables of deleted users for a period of time. User mode is also appropriate for users who want to back up their own data or who want to move objects from one owner to another. In this example, user scott
is exporting his own tables.
> exp scott/tiger PARFILE=params.dat
The params.dat
file contains the following information:
FILE=scott.dmp OWNER=scott GRANTS=y ROWS=y COMPRESS=y
> exp scott/tiger FILE=scott.dmp OWNER=scott GRANTS=y ROWS=y COMPRESS=y
Information is displayed about the release of Export you are using and the release of Oracle Database that you are connected to. Then, status messages similar to the following are shown:
. . . about to export SCOTT's tables via Conventional Path ... . . exporting table BONUS 0 rows exported . . exporting table DEPT 4 rows exported . . exporting table EMP 14 rows exported . . exporting table SALGRADE 5 rows exported . . . Export terminated successfully without warnings.
In table mode, you can export table data or the table definitions. (If no rows are exported, the CREATE TABLE
statement is placed in the export file, with grants and indexes, if they are specified.)
A user with the EXP_FULL_DATABASE
role can use table mode to export tables from any user's schema by specifying TABLES=schemaname.tablename.
If schemaname
is not specified, Export defaults to the previous schema name from which an object was exported. If there is not a previous object, Export defaults to the exporter's schema. In the following example, Export defaults to the SYSTEM
schema for table a
and to scott
for table c
:
> exp SYSTEM/password TABLES=(a, scott.b, c, mary.d)
A user with the EXP_FULL_DATABASE
role can also export dependent objects that are owned by other users. A nonprivileged user can export only dependent objects for the specified tables that the user owns.
Exports in table mode do not include cluster definitions. As a result, the data is exported as unclustered tables. Thus, you can use table mode to uncluster tables.
In this example, a DBA exports specified tables for two users.
> exp SYSTEM/password PARFILE=params.dat
The params.dat
file contains the following information:
FILE=expdat.dmp TABLES=(scott.emp,blake.dept) GRANTS=y INDEXES=y
> exp SYSTEM/password FILE=expdat.dmp TABLES=(scott.emp,blake.dept) GRANTS=y INDEXES=y
Information is displayed about the release of Export you are using and the release of Oracle Database that you are connected to. Then, status messages similar to the following are shown:
. . . About to export specified tables via Conventional Path ... Current user changed to SCOTT . . exporting table EMP 14 rows exported Current user changed to BLAKE . . exporting table DEPT 8 rows exported Export terminated successfully without warnings.
In this example, user blake
exports selected tables that he owns.
> exp blake/paper PARFILE=params.dat
The params.dat
file contains the following information:
FILE=blake.dmp TABLES=(dept,manager) ROWS=y COMPRESS=y
> exp blake/paper FILE=blake.dmp TABLES=(dept, manager) ROWS=y COMPRESS=y
Information is displayed about the release of Export you are using and the release of Oracle Database that you are connected to. Then, status messages similar to the following are shown:
. . . About to export specified tables via Conventional Path ... . . exporting table DEPT 8 rows exported . . exporting table MANAGER 4 rows exported Export terminated successfully without warnings.
In this example, pattern matching is used to export various tables for users scott
and blake
.
> exp SYSTEM/password PARFILE=params.dat
The params.dat
file contains the following information:
FILE=misc.dmp TABLES=(scott.%P%,blake.%,scott.%S%)
> exp SYSTEM/password FILE=misc.dmp TABLES=(scott.%P%,blake.%,scott.%S%)
Information is displayed about the release of Export you are using and the release of Oracle Database that you are connected to. Then, status messages similar to the following are shown:
. . . About to export specified tables via Conventional Path ... Current user changed to SCOTT . . exporting table DEPT 4 rows exported . . exporting table EMP 14 rows exported Current user changed to BLAKE . . exporting table DEPT 8 rows exported . . exporting table MANAGER 4 rows exported Current user changed to SCOTT . . exporting table BONUS 0 rows exported . . exporting table SALGRADE 5 rows exported Export terminated successfully without warnings.
In partition-level Export, you can specify the partitions and subpartitions of a table that you want to export.
Assume emp
is a table that is partitioned on employee name. There are two partitions, m
and z.
As this example shows, if you export the table without specifying a partition, all of the partitions are exported.
> exp scott/tiger PARFILE=params.dat
The params.dat
file contains the following:
TABLES=(emp) ROWS=y
> exp scott/tiger TABLES=emp rows=y
Information is displayed about the release of Export you are using and the release of Oracle Database that you are connected to. Then, status messages similar to the following are shown:
. . . About to export specified tables via Conventional Path ... . . exporting table EMP . . exporting partition M 8 rows exported . . exporting partition Z 6 rows exported Export terminated successfully without warnings.
Assume emp
is a table that is partitioned on employee name. There are two partitions, m
and z.
As this example shows, if you export the table and specify a partition, only the specified partition is exported.
> exp scott/tiger PARFILE=params.dat
The params.dat
file contains the following:
TABLES=(emp:m) ROWS=y
> exp scott/tiger TABLES=emp:m rows=y
Information is displayed about the release of Export you are using and the release of Oracle Database that you are connected to. Then, status messages similar to the following are shown:
. . . About to export specified tables via Conventional Path ... . . exporting table EMP . . exporting partition M 8 rows exported Export terminated successfully without warnings.
Assume emp
is a partitioned table with two partitions, m
and z.
Table emp
is partitioned using the composite method. Partition m
has subpartitions sp1
and sp2,
and partition z
has subpartitions sp3
and sp4.
As the example shows, if you export the composite partition m,
all its subpartitions (sp1
and sp2
) will be exported. If you export the table and specify a subpartition (sp4
), only the specified subpartition is exported.
> exp scott/tiger PARFILE=params.dat
The params.dat
file contains the following:
TABLES=(emp:m,emp:sp4) ROWS=y
> exp scott/tiger TABLES=(emp:m, emp:sp4) ROWS=y
Information is displayed about the release of Export you are using and the release of Oracle Database that you are connected to. Then, status messages similar to the following are shown:
. . . About to export specified tables via Conventional Path ... . . exporting table EMP . . exporting composite partition M . . exporting subpartition SP1 1 rows exported . . exporting subpartition SP2 3 rows exported . . exporting composite partition Z . . exporting subpartition SP4 1 rows exported Export terminated successfully without warnings.
This section gives some examples of import sessions that show you how to use the parameter file and command-line methods. The examples illustrate the following scenarios:
In this example, using a full database export file, an administrator imports the dept
and emp
tables into the scott
schema.
> imp SYSTEM/password PARFILE=params.dat
The params
.dat
file contains the following information:
FILE=dba.dmp SHOW=n IGNORE=n GRANTS=y FROMUSER=scott TABLES=(dept,emp)
> imp SYSTEM/password FILE=dba.dmp FROMUSER=scott TABLES=(dept,emp)
Information is displayed about the release of Import you are using and the release of Oracle Database that you are connected to. Then, status messages similar to the following are shown:
. . . Export file created by EXPORT:V10.00.00 via conventional path import done in WE8DEC character set and AL16UTF16 NCHAR character set . importing SCOTT's objects into SCOTT . . importing table "DEPT" 4 rows imported . . importing table "EMP" 14 rows imported Import terminated successfully without warnings.
This example illustrates importing the unit
and manager
tables from a file exported by blake
into the scott
schema.
> imp SYSTEM/password PARFILE=params.dat
The params
.dat
file contains the following information:
FILE=blake.dmp SHOW=n IGNORE=n GRANTS=y ROWS=y FROMUSER=blake TOUSER=scott TABLES=(unit,manager)
> imp SYSTEM/password FROMUSER=blake TOUSER=scott FILE=blake.dmp - TABLES=(unit,manager)
Information is displayed about the release of Import you are using and the release of Oracle Database that you are connected to. Then, status messages similar to the following are shown:
. . . Warning: the objects were exported by BLAKE, not by you import done in WE8DEC character set and AL16UTF16 NCHAR character set . . importing table "UNIT" 4 rows imported . . importing table "MANAGER" 4 rows imported Import terminated successfully without warnings.
In this example, a database administrator (DBA) imports all tables belonging to scott into user blake's account.
> imp SYSTEM/password PARFILE=params.dat
The params
.dat
file contains the following information:
FILE=scott.dmp FROMUSER=scott TOUSER=blake TABLES=(*)
> imp SYSTEM/password FILE=scott.dmp FROMUSER=scott TOUSER=blake TABLES=(*)
Information is displayed about the release of Import you are using and the release of Oracle Database that you are connected to. Then, status messages similar to the following are shown:
. . . Warning: the objects were exported by SCOTT, not by you import done in WE8DEC character set and AL16UTF16 NCHAR character set . importing SCOTT's objects into BLAKE . . importing table "BONUS" 0 rows imported . . importing table "DEPT" 4 rows imported . . importing table "EMP" 14 rows imported . . importing table "SALGRADE" 5 rows imported Import terminated successfully without warnings.
This section describes an import of a table with multiple partitions, a table with partitions and subpartitions, and repartitioning a table on different columns.
In this example, emp
is a partitioned table with three partitions: P1
, P2
, and P3
.
A table-level export file was created using the following command:
> exp scott/tiger TABLES=emp FILE=exmpexp.dat ROWS=y
Information is displayed about the release of Export you are using and the release of Oracle Database that you are connected to. Then, status messages similar to the following are shown:
. . . About to export specified tables via Conventional Path ... . . exporting table EMP . . exporting partition P1 7 rows exported . . exporting partition P2 12 rows exported . . exporting partition P3 3 rows exported Export terminated successfully without warnings.
In a partition-level Import you can specify the specific partitions of an exported table that you want to import. In this example, these are P1
and P3
of table emp:
> imp scott/tiger TABLES=(emp:p1,emp:p3) FILE=exmpexp.dat ROWS=y
Information is displayed about the release of Import you are using and the release of Oracle Database that you are connected to. Then, status messages similar to the following are shown:
. . . Export file created by EXPORT:V10.00.00 via conventional path import done in WE8DEC character set and AL16UTF16 NCHAR character set . importing SCOTT's objects into SCOTT . . importing partition "EMP":"P1" 7 rows imported . . importing partition "EMP":"P3" 3 rows imported Import terminated successfully without warnings.
This example demonstrates that the partitions and subpartitions of a composite partitioned table are imported. emp
is a partitioned table with two composite partitions: P1
and P2
. Partition P1
has three subpartitions: P1_SP1
, P1_SP2,
and P1_SP3
. Partition P2
has two subpartitions: P2_SP1
and P2_SP2
.
A table-level export file was created using the following command:
> exp scott/tiger TABLES=emp FILE=exmpexp.dat ROWS=y
Information is displayed about the release of Export you are using and the release of Oracle Database that you are connected to. Then, status messages similar to the following are shown:
. . . About to export specified tables via Conventional Path ... . . exporting table EMP . . exporting composite partition P1 . . exporting subpartition P1_SP1 2 rows exported . . exporting subpartition P1_SP2 10 rows exported . . exporting subpartition P1_SP3 7 rows exported . . exporting composite partition P2 . . exporting subpartition P2_SP1 4 rows exported . . exporting subpartition P2_SP2 2 rows exported Export terminated successfully without warnings.
The following Import command results in the import of subpartitions P1_SP2
and P1_SP3
of composite partition P1
in table emp
and all subpartitions of composite partition P2
in table emp.
> imp scott/tiger TABLES=(emp:p1_sp2,emp:p1_sp3,emp:p2) FILE=exmpexp.dat ROWS=y
Information is displayed about the release of Import you are using and the release of Oracle Database that you are connected to. Then, status messages similar to the following are shown:
. . . . importing SCOTT's objects into SCOTT . . importing subpartition "EMP":"P1_SP2" 10 rows imported . . importing subpartition "EMP":"P1_SP3" 7 rows imported . . importing subpartition "EMP":"P2_SP1" 4 rows imported . . importing subpartition "EMP":"P2_SP2" 2 rows imported Import terminated successfully without warnings.
This example assumes the emp
table has two partitions based on the empno
column. This example repartitions the emp
table on the deptno
column.
Perform the following steps to repartition a table on a different column:
Export the table to save the data.
Drop the table from the database.
Create the table again with the new partitions.
Import the table data.
The following example illustrates these steps.
> exp scott/tiger table=emp file=empexp.dat . . . About to export specified tables via Conventional Path ... . . exporting table EMP . . exporting partition EMP_LOW 4 rows exported . . exporting partition EMP_HIGH 10 rows exported Export terminated successfully without warnings. SQL> connect scott/tiger Connected. SQL> drop table emp cascade constraints; Statement processed. SQL> create table emp 2 ( 3 empno number(4) not null, 4 ename varchar2(10), 5 job varchar2(9), 6 mgr number(4), 7 hiredate date, 8 sal number(7,2), 9 comm number(7,2), 10 deptno number(2) 11 ) 12 partition by range (deptno) 13 ( 14 partition dept_low values less than (15) 15 tablespace tbs_1, 16 partition dept_mid values less than (25) 17 tablespace tbs_2, 18 partition dept_high values less than (35) 19 tablespace tbs_3 20 ); Statement processed. SQL> exit > imp scott/tiger tables=emp file=empexp.dat ignore=y . . . import done in WE8DEC character set and AL16UTF16 NCHAR character set . importing SCOTT's objects into SCOTT . . importing partition "EMP":"EMP_LOW" 4 rows imported . . importing partition "EMP":"EMP_HIGH" 10 rows imported Import terminated successfully without warnings.
The following SQL SELECT
statements show that the data is partitioned on the deptno
column:
SQL> connect scott/tiger Connected. SQL> select empno, deptno from emp partition (dept_low); EMPNO DEPTNO ---------- ---------- 7782 10 7839 10 7934 10 3 rows selected. SQL> select empno, deptno from emp partition (dept_mid); EMPNO DEPTNO ---------- ---------- 7369 20 7566 20 7788 20 7876 20 7902 20 5 rows selected. SQL> select empno, deptno from emp partition (dept_high); EMPNO DEPTNO ---------- ---------- 7499 30 7521 30 7654 30 7698 30 7844 30 7900 30 6 rows selected. SQL> exit;
In this example, pattern matching is used to import various tables for user scott
.
imp SYSTEM/password PARFILE=params.dat
The params
.dat
file contains the following information:
FILE=scott.dmp IGNORE=n GRANTS=y ROWS=y FROMUSER=scott TABLES=(%d%,b%s)
imp SYSTEM/password FROMUSER=scott FILE=scott.dmp TABLES=(%d%,b%s)
Information is displayed about the release of Import you are using and the release of Oracle Database that you are connected to. Then, status messages similar to the following are shown:
. . . import done in US7ASCII character set and AL16UTF16 NCHAR character set import server uses JA16SJIS character set (possible charset conversion) . importing SCOTT's objects into SCOTT . . importing table "BONUS" 0 rows imported . . importing table "DEPT" 4 rows imported . . importing table "SALGRADE" 5 rows imported Import terminated successfully without warnings.
The Export and Import utilities are the only method that Oracle supports for moving an existing Oracle database from one hardware platform to another. This includes moving between UNIX and NT systems and also moving between two NT systems running on different platforms.
The following steps present a general overview of how to move a database between platforms.
As a DBA user, issue the following SQL query to get the exact name of all tablespaces. You will need this information later in the process.
SQL> SELECT tablespace_name FROM dba_tablespaces;
As a DBA user, perform a full export from the source database, for example:
> exp system/manager FULL=y FILE=expdat.dmp
Move the dump file to the target database server. If you use FTP, be sure to copy it in binary format (by entering binary
at the FTP prompt) to avoid file corruption.
Create a database on the target server.
Before importing the dump file, you must first create your tablespaces, using the information obtained in Step 1. Otherwise, the import will create the corresponding datafiles in the same file structure as at the source database, which may not be compatible with the file structure on the target system.
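For example, assuming the query in Step 1 reported a tablespace named TBS_1 that does not yet exist on the target database, you might pre-create it with a statement such as the following (the datafile path and size are illustrative):
SQL> CREATE TABLESPACE tbs_1 DATAFILE '/u02/oradata/target/tbs_1_01.dbf' SIZE 100M;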
As a DBA user, perform a full import with the IGNORE
parameter enabled:
> imp system/manager FULL=y IGNORE=y FILE=expdat.dmp
Using IGNORE=y
instructs Oracle to ignore any creation errors during the import and permit the import to complete.
Perform a full backup of your new database.
This section describes the different types of messages issued by Export and Import and how to save them in a log file.
You can capture all Export and Import messages in a log file, either by using the LOG
parameter or, for those systems that permit it, by redirecting the output to a file. A log of detailed information is written about successful unloads and loads and any errors that may have occurred.
Export and Import do not terminate after recoverable errors. For example, if an error occurs while exporting a table, Export displays (or logs) an error message, skips to the next table, and continues processing. These recoverable errors are known as warnings.
Export and Import also issue warnings when invalid objects are encountered.
For example, if a nonexistent table is specified as part of a table-mode Export, the Export utility exports all other tables. Then it issues a warning and terminates successfully.
Some errors are nonrecoverable and terminate the Export or Import session. These errors typically occur because of an internal problem or because a resource, such as memory, is not available or has been exhausted. For example, if the catexp.sql
script is not executed, Export issues the following nonrecoverable error message:
EXP-00024: Export views not installed, please notify your DBA
When an export or import completes without errors, a message to that effect is displayed, for example:
Export terminated successfully without warnings
If one or more recoverable errors occur but the job continues to completion, a message similar to the following is displayed:
Export terminated successfully with warnings
If a nonrecoverable error occurs, the job terminates immediately and displays a message stating so, for example:
Export terminated unsuccessfully
Export and Import provide the results of an operation immediately upon completion. Depending on the platform, the outcome may be reported in a process exit code and the results recorded in the log file. This enables you to check the outcome from the command line or script. Table 20-6 shows the exit codes that get returned for various results.
Table 20-6 Exit Codes for Export and Import
Result | Exit Code
---|---
Export terminated successfully without warnings / Import terminated successfully without warnings | EX_SUCC
Export terminated successfully with warnings / Import terminated successfully with warnings | EX_OKWARN
Export terminated unsuccessfully / Import terminated unsuccessfully | EX_FAIL
For UNIX, the exit codes are as follows:
EX_SUCC 0 EX_OKWARN 0 EX_FAIL 1
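A minimal UNIX shell sketch that acts on the exit code might look like the following (the export command, dump file, and log file names are illustrative):
exp scott/tiger OWNER=scott FILE=scott.dmp LOG=scott.log
if [ $? -ne 0 ]; then
    echo "Export failed; see scott.log" >&2
    exit 1
fi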
This section describes factors to take into account when using Export and Import across a network.
Because the export file is in binary format, use a protocol that supports binary transfers to prevent corruption of the file when you transfer it across a network. For example, use FTP or a similar file transfer protocol to transmit the file in binary mode. Transmitting export files in character mode causes errors when the file is imported.
With Oracle Net, you can perform exports and imports over a network. For example, if you run Export locally, you can write data from a remote Oracle database into a local export file. If you run Import locally, you can read data into a remote Oracle database.
To use Export or Import with Oracle Net, include the connection qualifier string @connect_string when entering the username/password in the exp or imp command. For the exact syntax of this clause, see the user's guide for your Oracle Net protocol.
The following sections describe the globalization support behavior of Export and Import with respect to character set conversion of user data and data definition language (DDL).
The Export utility always exports user data, including Unicode data, in the character sets of the Export server. (Character sets are specified at database creation.) If the character sets of the source database are different than the character sets of the import database, a single conversion is performed to automatically convert the data to the character sets of the Import server.
If the export character set has a different sorting order than the import character set, then tables that are partitioned on character columns may yield unpredictable results. For example, consider the following table definition, which is produced on a database having an ASCII character set:
CREATE TABLE partlist ( part VARCHAR2(10), partno NUMBER(2) ) PARTITION BY RANGE (part) ( PARTITION part_low VALUES LESS THAN ('Z') TABLESPACE tbs_1, PARTITION part_mid VALUES LESS THAN ('z') TABLESPACE tbs_2, PARTITION part_high VALUES LESS THAN (MAXVALUE) TABLESPACE tbs_3 );
This partitioning scheme makes sense because z
comes after Z
in ASCII character sets.
When this table is imported into a database based upon an EBCDIC character set, all of the rows in the part_mid
partition will migrate to the part_low
partition because z
comes before Z
in EBCDIC character sets. To obtain the desired results, the owner of partlist
must repartition the table following the import.
Up to three character set conversions may be required for data definition language (DDL) during an export/import operation:
Export writes export files using the character set specified in the NLS_LANG
environment variable for the user session. A character set conversion is performed if the value of NLS_LANG
differs from the database character set.
If the export file's character set is different than the import user session character set, then Import converts the character set to its user session character set. Import can only perform this conversion for single-byte character sets. This means that for multibyte character sets, the import file's character set must be identical to the export file's character set.
A final character set conversion may be performed if the target database's character set is different from the character set used by the import user session.
To minimize data loss due to character set conversions, ensure that the export database, the export user session, the import user session, and the import database all use the same character set.
Some 8-bit characters can be lost (that is, converted to 7-bit equivalents) when you import an 8-bit character set export file. This occurs if the system on which the import occurs has a native 7-bit character set, or the NLS_LANG
operating system environment variable is set to a 7-bit character set. Most often, this is apparent when accented characters lose the accent mark.
To avoid this unwanted conversion, you can set the NLS_LANG
operating system environment variable to be that of the export file character set.
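For example, assuming (for illustration only) that the export file was written in the WE8ISO8859P1 character set, you could set the variable before running Import.
In a Bourne or Korn shell:
export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
In a C shell:
setenv NLS_LANG AMERICAN_AMERICA.WE8ISO8859P1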
During character set conversion, any characters in the export file that have no equivalent in the target character set are replaced with a default character. (The default character is defined by the target character set.) To guarantee 100% conversion, the target character set must be a superset (or equivalent) of the source character set.
Caution: When the character set width differs between the Export client and the Export server, truncation of data can occur if conversion causes expansion of data. If truncation occurs, Export displays a warning message. |
The three interrelated objects in a snapshot system are the master table, optional snapshot log, and the snapshot itself. The tables (master table, snapshot log table definition, and snapshot tables) can be exported independently of one another. Snapshot logs can be exported only if you export the associated master table. You can export snapshots using full database or user-mode export; you cannot use table-mode export.
See Also: Oracle Database Advanced Replication for Import-specific information about migration and compatibility and for more information about snapshots and snapshot logs |
The snapshot log in a dump file is imported if the master table already exists for the database to which you are importing and it has a snapshot log.
When a ROWID
snapshot log is exported, ROWID
s stored in the snapshot log have no meaning upon import. As a result, each ROWID
snapshot's first attempt to do a fast refresh fails, generating an error indicating that a complete refresh is required.
To avoid the refresh error, do a complete refresh after importing a ROWID
snapshot log. After you have done a complete refresh, subsequent fast refreshes will work properly. In contrast, when a primary key snapshot log is exported, the values of the primary keys do retain their meaning upon import. Therefore, primary key snapshots can do a fast refresh after the import.
A snapshot that has been restored from an export file has reverted to a previous state. On import, the time of the last refresh is imported as part of the snapshot table definition. The function that calculates the next refresh time is also imported.
Each refresh leaves a signature. A fast refresh uses the log entries that date from the time of that signature to bring the snapshot up to date. When the fast refresh is complete, the signature is deleted and a new signature is created. Any log entries that are not needed to refresh other snapshots are also deleted (all log entries with times before the earliest remaining signature).
When you restore a snapshot from an export file, you may encounter a problem under certain circumstances.
Assume that a snapshot is refreshed at time A, exported at time B, and refreshed again at time C. Then, because of corruption or other problems, the snapshot needs to be restored by dropping the snapshot and importing it again. The newly imported version has the last refresh time recorded as time A. However, log entries needed for a fast refresh may no longer exist. If the log entries do exist (because they are needed for another snapshot that has yet to be refreshed), they are used, and the fast refresh completes successfully. Otherwise, the fast refresh fails, generating an error that says a complete refresh is required.
Snapshots and related items are exported with the schema name explicitly given in the DDL statements. To import them into a different schema, use the FROMUSER
and TOUSER
parameters. This does not apply to snapshot logs, which cannot be imported into a different schema.
The transportable tablespace feature enables you to move a set of tablespaces from one Oracle database to another.
To move or copy a set of tablespaces, you must make the tablespaces read-only, copy the datafiles of these tablespaces, and use Export and Import to move the database information (metadata) stored in the data dictionary. Both the datafiles and the metadata export file must be copied to the target database. The transport of these files can be done using any facility for copying flat binary files, such as the operating system copying facility, binary-mode FTP, or publishing on CD-ROMs.
After copying the datafiles and exporting the metadata, you can optionally put the tablespaces in read/write mode.
Export and Import provide the following parameters to enable movement of transportable tablespace metadata.
TABLESPACES
TRANSPORT_TABLESPACE
See TABLESPACES and TRANSPORT_TABLESPACE for more information about using these parameters during an export operation.
See TABLESPACES and TRANSPORT_TABLESPACE for information about using these parameters during an import operation.
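As a rough sketch only (the tablespace, datafile, and owner names are assumptions, the tablespace must be made read-only before the export, and the exact quoting of the SYSDBA string depends on your shell), the metadata movement might look like this:
> exp \'sys/password AS SYSDBA\' TRANSPORT_TABLESPACE=y TABLESPACES=ts1 FILE=ts1.dmp
> imp \'sys/password AS SYSDBA\' TRANSPORT_TABLESPACE=y FILE=ts1.dmp DATAFILES='/u02/oradata/target/ts1_01.dbf' TTS_OWNERS=scott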
See Also:
|
Read-only tablespaces can be exported. On import, if the tablespace does not already exist in the target database, the tablespace is created as a read/write tablespace. If you want read-only functionality, you must manually make the tablespace read-only after the import.
If the tablespace already exists in the target database and is read-only, you must make it read/write before the import.
You can drop a tablespace by redefining the objects to use different tablespaces before the import. You can then issue the imp
command and specify IGNORE=y.
In many cases, you can drop a tablespace by doing a full database export, then creating a zero-block tablespace with the same name (before logging off) as the tablespace you want to drop. During import, with IGNORE=y,
the relevant CREATE TABLESPACE
statement will fail and prevent the creation of the unwanted tablespace.
All objects from that tablespace will be imported into their owner's default tablespace with the exception of partitioned tables, type tables, and tables that contain LOB or VARRAY
columns or index-only tables with overflow segments. Import cannot determine which tablespace caused the error. Instead, you must first create a table and then import the table again, specifying IGNORE=y.
Objects are not imported into the default tablespace if the tablespace does not exist, or you do not have the necessary quotas for your default tablespace.
If a user's quota allows it, the user's tables are imported into the same tablespace from which they were exported. However, if the tablespace no longer exists or the user does not have the necessary quota, the system uses the default tablespace for that user as long as the table is unpartitioned, contains no LOB or VARRAY
columns, is not a type table, and is not an index-only table with an overflow segment. This scenario can be used to move a user's tables from one tablespace to another.
For example, you need to move joe
's tables from tablespace A
to tablespace B
after a full database export. Follow these steps:
If joe
has the UNLIMITED
TABLESPACE
privilege, revoke it. Set joe
's quota on tablespace A
to zero. Also revoke all roles that might have such privileges or quotas.
When you revoke a role, it does not have a cascade effect. Therefore, users who were granted other roles by joe
will be unaffected.
Export joe
's tables.
Drop joe
's tables from tablespace A
.
Give joe
a quota on tablespace B
and make it the default tablespace for joe
.
Import joe's tables. (By default, Import puts joe's tables into tablespace B.)
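A rough sketch of the supporting SQL for the quota-related steps, assuming (for illustration) that tablespaces A and B are named tbs_a and tbs_b:
SQL> REVOKE UNLIMITED TABLESPACE FROM joe;
SQL> ALTER USER joe QUOTA 0 ON tbs_a;
SQL> ALTER USER joe DEFAULT TABLESPACE tbs_b QUOTA UNLIMITED ON tbs_b;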
You can export and import tables with fine-grained access control policies enabled. When doing so, consider the following:
To restore the fine-grained access control policies, the user who imports from an export file containing such tables must have the EXECUTE
privilege on the DBMS_RLS
package, so that the security policies on the tables can be reinstated. If a user without the correct privileges attempts to export a table with fine-grained access policies enabled, only those rows that the user has privileges to read will be exported.
If a user without the correct privileges attempts to import from an export file that contains tables with fine-grained access control policies, a warning message will be issued. Therefore, it is advisable for security reasons that the exporter and importer of such tables be the DBA.
If fine-grained access control is enabled on a SELECT statement, then conventional path Export may not export the entire table, because fine-grained access may rewrite the query.
Only user SYS, or a user with the EXP_FULL_DATABASE role enabled or who has been granted the EXEMPT ACCESS POLICY privilege, can perform direct path Exports on tables having fine-grained access control.
See Also: Oracle Database Application Developer's Guide - Fundamentals for more information about fine-grained access control
You can use instance affinity to associate jobs with instances in databases you plan to export and import. Be aware that there may be some compatibility issues if you are using a combination of releases.
A database with many noncontiguous, small blocks of free space is said to be fragmented. A fragmented database should be reorganized to make space available in contiguous, larger blocks. You can reduce fragmentation by performing a full database export and import as follows:
Do a full database export (FULL=y) to back up the entire database.
Shut down the Oracle database after all users are logged off.
Delete the database. See your Oracle operating system-specific documentation for information about how to delete a database.
Re-create the database using the CREATE DATABASE statement.
Do a full database import (FULL=y) to restore the entire database.
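A minimal sketch of the export and import commands used in the first and last steps follows; the file name fulldb.dmp and the system account credentials are placeholders.
exp system/password FULL=y FILE=fulldb.dmp
imp system/password FULL=y FILE=fulldb.dmp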
By default, a table is imported into its original tablespace.
If the tablespace no longer exists, or the user does not have sufficient quota in the tablespace, the system uses the default tablespace for that user, unless the table:
Is partitioned
Is a type table
Contains LOB, VARRAY, or OPAQUE type columns
Has an index-organized table (IOT) overflow segment
If the user does not have sufficient quota in the default tablespace, the user's tables are not imported. See Reorganizing Tablespaces to see how you can use this to your advantage.
The storage parameter OPTIMAL for rollback segments is not preserved during export and import.
Tables are exported with their current storage parameters. For object tables, the OIDINDEX is created with its current storage parameters and name, if given. For tables that contain LOB, VARRAY, or OPAQUE type columns, the LOB, VARRAY, or OPAQUE type data is created with its current storage parameters.
If you alter the storage parameters of existing tables prior to export, the tables are exported using those altered storage parameters. Note, however, that storage parameters for LOB data cannot be altered prior to export (for example, the chunk size for a LOB column, or whether a LOB column is CACHE or NOCACHE).
Note that LOB data might not reside in the same tablespace as the containing table. The tablespace for that data must be read/write at the time of import or the table will not be imported.
If LOB data resides in a tablespace that does not exist at the time of import, or the user does not have the necessary quota in that tablespace, the table will not be imported. Because there can be multiple tablespace clauses, including one for the table, Import cannot determine which tablespace clause caused the error.
Before using the Import utility to import data, you may want to create large tables with different storage parameters. If so, you must specify IGNORE=y on the command line or in the parameter file.
By default at export time, storage parameters are adjusted to consolidate all data into its initial extent. To preserve the original size of an initial extent, you must specify at export time that extents are not to be consolidated (by setting COMPRESS=n). See COMPRESS.
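For example, the following command (a sketch; the credentials, table name, and file name are placeholders) exports a table without consolidating extents, so the original initial extent size is preserved:
exp scott/tiger TABLES=(emp) FILE=emp.dmp COMPRESS=n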
The material presented in this section is specific to the original Export utility. The following topics are discussed:
Export provides two methods for exporting table data:
Conventional path Export uses the SQL SELECT statement to extract data from tables. Data is read from disk into a buffer cache, and rows are transferred to the evaluating buffer. The data, after passing expression evaluation, is transferred to the Export client, which then writes the data into the export file.
Direct path Export is much faster than conventional path Export because data is read from disk into the buffer cache and rows are transferred directly to the Export client. The evaluating buffer (that is, the SQL command-processing layer) is bypassed. The data is already in the format that Export expects, thus avoiding unnecessary data conversion. The data is transferred to the Export client, which then writes the data into the export file.
To use direct path Export, specify the DIRECT=y parameter on the command line or in the parameter file. The default is DIRECT=n, which extracts the table data using the conventional path. The rest of this section discusses the following topics:
Restrictions for Direct Path Exports
Note: When you export a table in direct path, be sure that no other transaction is updating the same table and that the size of the rollback segment is sufficient. Otherwise, you may receive the following error:
ORA-01555 snapshot too old; rollback segment number string with name "string" too small
This error will cause the export to terminate unsuccessfully.
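For example, a direct path export of a single table might be invoked as follows. This is a sketch only; the credentials, table name, and file name are placeholders.
exp scott/tiger TABLES=(emp) FILE=emp.dmp DIRECT=y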
Oracle Virtual Private Database (VPD) and Oracle Label Security are not enforced during direct path Exports.
The following users are exempt from Virtual Private Database and Oracle Label Security enforcement regardless of the export mode, application, or utility used to extract data from the database:
The database user SYS
Database users granted the EXEMPT ACCESS POLICY privilege, either directly or through a database role
This means that any user who is granted the EXEMPT ACCESS POLICY privilege is completely exempt from enforcement of VPD and Oracle Label Security. This is a powerful privilege and should be carefully managed. This privilege does not affect the enforcement of traditional object privileges such as SELECT, INSERT, UPDATE, and DELETE. These privileges are enforced even if a user has been granted the EXEMPT ACCESS POLICY privilege.
You may be able to improve performance by increasing the value of the RECORDLENGTH parameter when you invoke a direct path Export. Your exact performance gain depends upon the following factors:
DB_BLOCK_SIZE
The types of columns in your table
Your I/O layout (The drive receiving the export file should be separate from the disk drive where the database files reside.)
The following values are generally recommended for RECORDLENGTH:
Multiples of the file system I/O block size
Multiples of DB_BLOCK_SIZE
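For example, on a system with an 8 KB DB_BLOCK_SIZE, a direct path export might specify a 32 KB record length, which is a multiple of both the block size and typical file system I/O block sizes. This is a sketch only; the value, credentials, and file name are illustrative assumptions.
exp scott/tiger TABLES=(emp) FILE=emp.dmp DIRECT=y RECORDLENGTH=32768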
An export file that is created using direct path Export will take the same amount of time to import as an export file created using conventional path Export.
Keep the following restrictions in mind when you are using direct path mode:
To invoke a direct path Export, you must use either the command-line method or a parameter file. You cannot invoke a direct path Export using the interactive method.
The Export parameter BUFFER applies only to conventional path Exports. For direct path Export, use the RECORDLENGTH parameter to specify the size of the buffer that Export uses for writing to the export file.
You cannot use direct path when exporting in tablespace mode (TRANSPORT_TABLESPACES=y).
The QUERY parameter cannot be specified in a direct path Export.
A direct path Export can only export data when the NLS_LANG environment variable of the session invoking the export is equal to the database character set. If NLS_LANG is not set or if it is different from the database character set, a warning is displayed and the export is discontinued. The default value for the NLS_LANG environment variable is AMERICAN_AMERICA.US7ASCII.
To extract metadata from a source database, Export uses queries that contain ordering clauses (sort operations). For these queries to succeed, the user performing the export must be able to allocate sort segments. For these sort segments to be allocated in a read-only database, the user's temporary tablespace should be set to point at a temporary, locally managed tablespace.
See Also: Oracle Data Guard Concepts and Administration for more information about setting up this environment
The following sections describe points you should consider when you export particular database objects.
If transactions continue to access sequence numbers during an export, sequence numbers might be skipped. The best way to ensure that sequence numbers are not skipped is to ensure that the sequences are not accessed during the export.
Sequence numbers can be skipped only when cached sequence numbers are in use. When a cache of sequence numbers has been allocated, they are available for use in the current database. The exported value is the next sequence number (after the cached values). Sequence numbers that are cached, but unused, are lost when the sequence is imported.
On export, LONG datatypes are fetched in sections. However, enough memory must be available to hold all of the contents of each row, including the LONG data.
LONG columns can be up to 2 gigabytes in length.
Unlike LONG data, the data in a LOB column does not all need to be held in memory at the same time; LOB data is loaded and unloaded in sections.
Note: Oracle also recommends that you convert existing LONG columns to LOB columns. LOB columns are subject to far fewer restrictions than LONG columns. Further, LOB functionality is enhanced in every release, whereas LONG functionality has been static for several releases.
The contents of foreign function libraries are not included in the export file. Instead, only the library specification (name, location) is included in full database mode and user-mode export. You must move the library's executable files and update the library specification if the database is moved to a new location.
If the data you are exporting contains offline locally managed tablespaces, Export will not be able to export the complete tablespace definition and will display an error message. You can still import the data; however, you must create the offline locally managed tablespaces before importing to prevent DDL commands that may reference the missing tablespaces from failing.
Directory alias definitions are included only in a full database mode export. To move a database to a new location, the database administrator must update the directory aliases to point to the new location.
Directory aliases are not included in user-mode or table-mode export. Therefore, you must ensure that the directory alias has been created on the target system before the directory alias is used.
The export file does not hold the contents of external files referenced by BFILE columns or attributes. Instead, only the names and directory aliases for files are copied on Export and restored on Import. If you move the database to a location where the old directories cannot be used to access the included files, the database administrator (DBA) must move the directories containing the specified files to a new location where they can be accessed.
The contents of external tables are not included in the export file. Instead, only the table specification (name, location) is included in full database mode and user-mode export. You must manually move the external data and update the table specification if the database is moved to a new location.
In all Export modes, the Export utility includes information about object type definitions used by the tables being exported. The information, including object name, object identifier, and object geometry, is needed to verify that the object type on the target system is consistent with the object instances contained in the export file. This ensures that the object types needed by a table are created with the same object identifier at import time.
Note, however, that in table mode, user mode, and tablespace mode, the export file does not include a full object type definition needed by a table if the user running Export does not have execute access to the object type. In this case, only enough information is written to verify that the type exists, with the same object identifier and the same geometry, on the Import target system.
The user must ensure that the proper type definitions exist on the target system, either by working with the DBA to create them, or by importing them from full database mode or user-mode exports performed by the DBA.
It is important to perform a full database mode export regularly to preserve all object type definitions. Alternatively, if object type definitions from different schemas are used, the DBA should perform a user mode export of the appropriate set of users. For example, if table1 belonging to user scott contains a column on blake's type type1, the DBA should perform a user mode export of both blake and scott to preserve the type definitions needed by the table.
Inner nested table data is exported whenever the outer containing table is exported. Although inner nested tables can be named, they cannot be exported individually.
Queues are implemented on tables. The export and import of queues constitutes the export and import of the underlying queue tables and related dictionary tables. You can export and import queues only at queue table granularity.
When you export a queue table, both the table definition information and queue data are exported. Because the queue table data is exported as well as the table definition, the user is responsible for maintaining application-level data integrity when queue table data is imported.
You should be cautious when exporting compiled objects that reference a name used as a synonym and as another object. Exporting and importing these objects will force a recompilation that could result in changes to the object definitions.
The following example helps to illustrate this problem:
CREATE PUBLIC SYNONYM emp FOR scott.emp;
CONNECT blake/paper;
CREATE TRIGGER t_emp BEFORE INSERT ON emp BEGIN NULL; END;
CREATE VIEW emp AS SELECT * FROM dual;
If the database in the preceding example were exported, the reference to emp in the trigger would refer to blake's view rather than to scott's table. This would cause an error when Import tried to reestablish the t_emp trigger.
If an export operation attempts to export a synonym named DBMS_JAVA when there is no corresponding DBMS_JAVA package, or when Java is either not loaded or loaded incorrectly, the export will terminate unsuccessfully. The error messages that are generated include, but are not limited to, the following: EXP-00008, ORA-00904, and ORA-29516.
If Java is enabled, make sure that both the DBMS_JAVA synonym and the DBMS_JAVA package are created and valid before rerunning the export.
If Java is not enabled, remove Java-related objects before rerunning the export.
The material in this section is specific to the original Import utility. The following topics are discussed:
This section describes errors that can occur when you import database objects.
If a row is rejected due to an integrity constraint violation or invalid data, Import displays a warning message but continues processing the rest of the table. Some errors, such as "tablespace full," apply to all subsequent rows in the table. These errors cause Import to stop processing the current table and skip to the next table.
A "tablespace full" error can suspend the import if the RESUMABLE=y
parameter is specified.
Errors can occur for many reasons when you import database objects, as described in this section. When these errors occur, import of the current database object is discontinued. Import then attempts to continue with the next database object in the export file.
If a database object to be imported already exists in the database, an object creation error occurs. What happens next depends on the setting of the IGNORE parameter.
If IGNORE=n (the default), the error is reported, and Import continues with the next database object. The current database object is not replaced. For tables, this behavior means that rows contained in the export file are not imported.
If IGNORE=y, object creation errors are not reported. The database object is not replaced. If the object is a table, rows are imported into it. Note that only object creation errors are ignored; all other errors (such as operating system, database, and SQL errors) are reported and processing may stop.
Caution: Specifying IGNORE=y can cause duplicate rows to be entered into a table unless one or more columns of the table are specified with the UNIQUE integrity constraint. This could occur, for example, if Import were run twice.
If sequence numbers need to be reset to the value in an export file as part of an import, you should drop sequences. If a sequence is not dropped before the import, it is not set to the value captured in the export file, because Import does not drop and re-create a sequence that already exists. If the sequence already exists, the export file's CREATE SEQUENCE statement fails and the sequence is not imported.
Resource limitations can cause objects to be skipped. When you are importing tables, for example, resource errors can occur as a result of internal problems, or when a resource such as memory has been exhausted.
If a resource error occurs while you are importing a row, Import stops processing the current table and skips to the next table. If you have specified COMMIT=y, Import commits the partial import of the current table. If not, a rollback of the current table occurs before Import continues. See the description of COMMIT.
Domain indexes can have associated application-specific metadata that is imported using anonymous PL/SQL blocks. These PL/SQL blocks are executed at import time, prior to the CREATE INDEX statement. If a PL/SQL block causes an error, the associated index is not created because the metadata is considered an integral part of the index.
This section describes the behavior of Import with respect to index creation and maintenance.
Import provides you with the capability of delaying index creation and maintenance services until after completion of the import and insertion of exported data. Performing index creation, re-creation, or maintenance after Import completes is generally faster than updating the indexes for each row inserted by Import.
Index creation can be time consuming, and therefore can be done more efficiently after the import of all other objects has completed. You can postpone creation of indexes until after the import completes by specifying INDEXES=n. (INDEXES=y is the default.) You can then store the missing index definitions in a SQL script by running Import while using the INDEXFILE parameter. The index-creation statements that would otherwise be issued by Import are instead stored in the specified file.
After the import is complete, you must create the indexes, typically by using the contents of the file (specified with INDEXFILE) as a SQL script after specifying passwords for the connect statements.
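As an illustration, the following pair of commands (a sketch; the credentials and file names are placeholders) first writes the index-creation statements to a script without importing any data, and then imports the data without building indexes. After the import, index.sql can be edited to supply passwords and run as a SQL script.
imp scott/tiger FILE=expdat.dmp FULL=y INDEXFILE=index.sql
imp scott/tiger FILE=expdat.dmp FULL=y INDEXES=n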
If SKIP_UNUSABLE_INDEXES=y, the Import utility postpones maintenance on all indexes that were set to Index Unusable before the import. Other indexes (not previously set to Index Unusable) continue to be updated as rows are inserted. This approach saves on index updates during the import of existing tables.
Delayed index maintenance may cause a violation of an existing unique integrity constraint supported by the index. The existence of a unique integrity constraint on a table does not prevent existence of duplicate keys in a table that was imported with INDEXES=n. The supporting index will be in an UNUSABLE state until the duplicates are removed and the index is rebuilt.
For example, assume that partitioned table t with partitions p1 and p2 exists on the import target system. Assume that local indexes p1_ind on partition p1 and p2_ind on partition p2 also exist. Assume that partition p1 contains a much larger amount of data in the existing table t, compared with the amount of data to be inserted by the export file (expdat.dmp). Assume that the reverse is true for p2.
Consequently, performing index updates for p1_ind during table data insertion time is more efficient than at partition index rebuild time. The opposite is true for p2_ind.
Users can postpone local index maintenance for p2_ind during import by using the following steps:
Issue the following SQL statement before import:
ALTER TABLE t MODIFY PARTITION p2 UNUSABLE LOCAL INDEXES;
Issue the following Import command:
imp scott/tiger FILE=expdat.dmp TABLES = (t:p1, t:p2) IGNORE=y SKIP_UNUSABLE_INDEXES=y
This example executes the ALTER SESSION SET SKIP_UNUSABLE_INDEXES=y statement before performing the import.
Issue the following SQL statement after import:
ALTER TABLE t MODIFY PARTITION p2 REBUILD UNUSABLE LOCAL INDEXES;
In this example, local index p1_ind on p1 will be updated when table data is inserted into partition p1 during import. Local index p2_ind on p2 will be updated at index rebuild time, after import.
If statistics are requested at export time and analyzer statistics are available for a table, Export will include the ANALYZE statement used to recalculate the statistics for the table into the dump file. In most circumstances, Export will also write the precalculated optimizer statistics for tables, indexes, and columns to the dump file. See the description of the Export parameter STATISTICS and the Import parameter STATISTICS.
Because of the time it takes to perform an ANALYZE statement, it is usually preferable for Import to use the precalculated optimizer statistics for a table (and its indexes and columns) rather than execute the ANALYZE statement saved by Export. By default, Import will always use the precalculated statistics that are found in the export dump file.
The Export utility flags certain precalculated statistics as questionable. The importer might want to import only unquestionable statistics, not precalculated statistics, in the following situations:
Character set translations between the dump file and the import client and the import database could potentially change collating sequences that are implicit in the precalculated statistics.
Row errors occurred while importing the table.
A partition level import is performed (column statistics will no longer be accurate).
Note: Specifying ROWS=n will not prevent the use of precalculated statistics. This feature allows plan generation for queries to be tuned in a nonproduction database using statistics from a production database. In these cases, the import should specify STATISTICS=SAFE.
In certain situations, the importer might want to always use ANALYZE statements rather than precalculated statistics. For example, the statistics gathered from a fragmented database may not be relevant when the data is imported in a compressed form. In these cases, the importer should specify STATISTICS=RECALCULATE to force the recalculation of statistics.
If you do not want any statistics to be established by Import, you should specify STATISTICS=NONE.
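For example, an import that uses only statistics not flagged as questionable could be invoked as follows (a sketch; the credentials and file name are placeholders). Substituting STATISTICS=RECALCULATE would force the ANALYZE statements to be executed instead, and STATISTICS=NONE would suppress statistics entirely.
imp system/password FILE=expdat.dmp FULL=y STATISTICS=SAFE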
This section discusses some ways to possibly improve the performance of an import operation. The information is categorized as follows:
The following suggestions about system-level options may help to improve performance of an import operation:
Create and use one large rollback segment and take all other rollback segments offline. Generally a rollback segment that is one half the size of the largest table being imported should be big enough. It can also help if the rollback segment is created with the minimum number of two extents, of equal size.
Note: Rollback segments will be deprecated in a future Oracle Database release. Oracle recommends that you use automatic undo management instead.
Put the database in NOARCHIVELOG mode until the import is complete. This will reduce the overhead of creating and managing archive logs.
Create several large redo files and take any small redo log files offline. This will result in fewer log switches being made.
If possible, have the rollback segment, table data, and redo log files all on separate disks. This will reduce I/O contention and increase throughput.
If possible, do not run any other jobs at the same time that may compete with the import operation for system resources.
Make sure there are no statistics on dictionary tables.
Set TRACE_LEVEL_CLIENT=OFF in the sqlnet.ora file.
If possible, increase the value of DB_BLOCK_SIZE when you re-create the database. The larger the block size, the smaller the number of I/O cycles needed. This change is permanent, so be sure to carefully consider all effects it will have before making it.
The following suggestions about settings in your initialization parameter file may help to improve performance of an import operation.
Set LOG_CHECKPOINT_INTERVAL to a number that is larger than the size of the redo log files. This number is in operating system blocks (512 on most UNIX systems). This reduces checkpoints to a minimum (at log switching time).
Increase the value of SORT_AREA_SIZE. The amount you increase it depends on other activity taking place on the system and on the amount of free memory available. (If the system begins swapping and paging, the value is probably set too high.)
Increase the values for DB_BLOCK_BUFFERS and SHARED_POOL_SIZE.
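As a purely illustrative sketch of these adjustments, an initialization parameter file might contain entries like the following. The specific values are assumptions only; appropriate settings depend on your redo log file size, available memory, and overall system activity.
LOG_CHECKPOINT_INTERVAL = 1000000
SORT_AREA_SIZE = 10485760
DB_BLOCK_BUFFERS = 20000
SHARED_POOL_SIZE = 104857600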
The following suggestions about usage of import options may help to improve performance. Be sure to also read the individual descriptions of all the available options in Import Parameters.
Set COMMIT=N. This causes Import to commit after each object (table), not after each buffer. This is why one large rollback segment is needed. (Because rollback segments will be deprecated in future releases, Oracle recommends that you use automatic undo management instead.)
Specify a large value for BUFFER or RECORDLENGTH, depending on system activity, database size, and so on. A larger size reduces the number of times that the export file has to be accessed for data. Several megabytes is usually enough. Be sure to check your system for excessive paging and swapping activity, which can indicate that the buffer size is too large.
Consider setting INDEXES=N because indexes can be created at some point after the import, when time is not a factor. If you choose to do this, you need to use the INDEXFILE parameter to extract the DDL for the index creation, or to rerun the import with INDEXES=Y and ROWS=N.
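Combining these options, an import invocation might look like the following. This is only a sketch; the credentials, file name, and BUFFER value are illustrative assumptions, and INDEXES=n presumes the indexes will be created separately afterward.
imp system/password FILE=expdat.dmp FULL=y COMMIT=n BUFFER=10485760 INDEXES=n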
Keep the following in mind when you are importing large amounts of LOB data:
Eliminating indexes significantly reduces total import time. This is because LOB data requires special consideration during an import because the LOB locator has a primary key that cannot be explicitly dropped or ignored during an import.
Make sure there is enough space available in large contiguous chunks to complete the data load.
Keep in mind that importing a table with a LONG column may cause a higher rate of I/O and disk usage, resulting in reduced performance of the import operation. There are no specific parameters that will improve performance during an import of large amounts of LONG data, although some of the more general tuning suggestions made in this section may help overall performance.
The following sections describe restrictions and points you should consider when you import particular database objects.
The Oracle database assigns object identifiers to uniquely identify object types, object tables, and rows in object tables. These object identifiers are preserved by Import.
When you import a table that references a type, but a type of that name already exists in the database, Import attempts to verify that the preexisting type is, in fact, the type used by the table (rather than a different type that just happens to have the same name).
To do this, Import compares the type's unique identifier (TOID) with the identifier stored in the export file. If those match, Import then compares the type's unique hashcode with that stored in the export file. Import will not import table rows if the TOIDs or hashcodes do not match.
In some situations, you may not want this validation to occur on specified types (for example, if the types were created by a cartridge installation). You can use the parameter TOID_NOVALIDATE to specify types to exclude from the TOID and hashcode comparison. See TOID_NOVALIDATE for more information.
Caution: Be very careful about using TOID_NOVALIDATE, because type validation provides an important capability that helps avoid data corruption. Be sure you are confident of your knowledge of type validation and how it works before attempting to perform an import operation with this feature disabled.
Import uses the following criteria to decide how to handle object types, object tables, and rows in object tables:
For object types, if IGNORE=y, the object type already exists, and the object identifiers, hashcodes, and type descriptors match, no error is reported. If the object identifiers or hashcodes do not match and the parameter TOID_NOVALIDATE has not been set to ignore the object type, an error is reported and any tables using the object type are not imported.
For object types, if IGNORE=n and the object type already exists, an error is reported. If the object identifiers, hashcodes, or type descriptors do not match and the parameter TOID_NOVALIDATE has not been set to ignore the object type, any tables using the object type are not imported.
For object tables, if IGNORE=y, the table already exists, and the object identifiers, hashcodes, and type descriptors match, no error is reported. Rows are imported into the object table. Import of rows may fail if rows with the same object identifier already exist in the object table. If the object identifiers, hashcodes, or type descriptors do not match, and the parameter TOID_NOVALIDATE has not been set to ignore the object type, an error is reported and the table is not imported.
For object tables, if IGNORE=n and the table already exists, an error is reported and the table is not imported.
Because Import preserves object identifiers of object types and object tables, consider the following when you import objects from one schema into another schema using the FROMUSER and TOUSER parameters:
If the FROMUSER object types and object tables already exist on the target system, errors occur because the object identifiers of the TOUSER object types and object tables are already in use. The FROMUSER object types and object tables must be dropped from the system before the import is started.
If an object table was created using the OID AS option to assign it the same object identifier as another table, both tables cannot be imported. You can import one of the tables, but the second table receives an error because the object identifier is already in use.
Users frequently create tables before importing data to reorganize tablespace usage or to change a table's storage parameters. The tables must be created with the same definitions as were previously used or a compatible format (except for storage parameters). For object tables and tables that contain columns of object types, format compatibilities are more restrictive.
For object tables and for tables containing columns of objects, each object the table references has its name, structure, and version information written out to the export file. Export also includes object type information from different schemas, as needed.
Import verifies the existence of each object type required by a table prior to importing the table data. This verification consists of a check of the object type's name followed by a comparison of the object type's structure and version from the import system with that found in the export file.
If an object type name is found on the import system, but the structure or version do not match that from the export file, an error message is generated and the table data is not imported.
The Import parameter TOID_NOVALIDATE can be used to disable the verification of the object type's structure and version for specific objects.
Inner nested tables are exported separately from the outer table. Therefore, situations may arise where data in an inner nested table might not be properly imported:
Suppose a table with an inner nested table is exported and then imported without dropping the table or removing rows from the table. If the IGNORE=y parameter is used, there will be a constraint violation when inserting each row in the outer table. However, data in the inner nested table may be successfully imported, resulting in duplicate rows in the inner table.
If nonrecoverable errors occur inserting data in outer tables, the rest of the data in the outer table is skipped, but the corresponding inner table rows are not skipped. This may result in inner table rows not being referenced by any row in the outer table.
If an insert to an inner table fails after a recoverable error, its outer table row will already have been inserted in the outer table and data will continue to be inserted in it and any other inner tables of the containing table. This circumstance results in a partial logical row.
If nonrecoverable errors occur inserting data in an inner table, Import skips the rest of that inner table's data but does not skip the outer table or other nested tables.
You should always carefully examine the log file for errors in outer tables and inner tables. To be consistent, table data may need to be modified or deleted.
Because inner nested tables are imported separately from the outer table, attempts to access data from them while importing may produce unexpected results. For example, if an outer row is accessed before its inner rows are imported, an incomplete row may be returned to the user.
REF columns and attributes may contain a hidden ROWID that points to the referenced type instance. Import does not automatically recompute these ROWIDs for the target database. You should execute the following statement to reset the ROWIDs to their proper values:
ANALYZE TABLE [schema.]table VALIDATE REF UPDATE;
Export and Import do not copy data referenced by BFILE columns and attributes from the source database to the target database. Export and Import only propagate the names of the files and the directory aliases referenced by the BFILE columns. It is the responsibility of the DBA or user to move the actual files referenced through BFILE columns and attributes.
When you import table data that contains BFILE columns, the BFILE locator is imported with the directory alias and filename that was present at export time. Import does not verify that the directory alias or file exists. If the directory alias or file does not exist, an error occurs when the user accesses the BFILE data.
For directory aliases, if the operating system directory syntax used in the export system is not valid on the import system, no error is reported at import time. The error occurs when the user seeks subsequent access to the file data. It is the responsibility of the DBA or user to ensure the directory alias is valid on the import system.
Import does not verify that the location referenced by the foreign function library is correct. If the formats for directory and filenames used in the library's specification on the export file are invalid on the import system, no error is reported at import time. Subsequent usage of the callout functions will receive an error.
It is the responsibility of the DBA or user to manually move the library and ensure the library's specification is valid on the import system.
The behavior of Import when a local stored procedure, function, or package is imported depends upon whether the COMPILE parameter is set to y or to n.
When a local stored procedure, function, or package is imported and COMPILE=y, the procedure, function, or package is recompiled upon import and retains its original timestamp specification. If the compilation is successful, it can be accessed by remote procedures without error.
If COMPILE=n, the procedure, function, or package is still imported, but the original timestamp is lost. The compilation takes place the next time the procedure, function, or package is used.
When you import Java objects into any schema, the Import utility leaves the resolver unchanged. (The resolver is the list of schemas used to resolve Java full names.) This means that after an import, all user classes are left in an invalid state until they are either implicitly or explicitly revalidated. An implicit revalidation occurs the first time the classes are referenced. An explicit revalidation occurs when the SQL statement ALTER JAVA CLASS...RESOLVE is used. Both methods result in the user classes being resolved successfully and becoming valid.
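For example, an explicit revalidation of a single imported class, issued in the schema that owns the class, might look like the following (the class name Hello is hypothetical):
ALTER JAVA CLASS "Hello" RESOLVE;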
Import does not verify that the location referenced by the external table is correct. If the formats for directory and filenames used in the table's specification on the export file are invalid on the import system, no error is reported at import time. Subsequent usage of the callout functions will result in an error.
It is the responsibility of the DBA or user to manually move the table and ensure the table's specification is valid on the import system.
Importing a queue table also imports any underlying queues and the related dictionary information. A queue can be imported only at the granularity level of the queue table. When a queue table is imported, export pretable and posttable action procedures maintain the queue dictionary.
LONG columns can be up to 2 gigabytes in length. In importing and exporting, the LONG columns must fit into memory with the rest of each row's data. The memory used to store LONG columns, however, does not need to be contiguous, because LONG data is loaded in sections.
Import can be used to convert LONG columns to CLOB columns. To do this, first create a table specifying the new CLOB column. When Import is run, the LONG data is converted to CLOB format. The same technique can be used to convert LONG RAW columns to BLOB columns.
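For instance, to convert a LONG column to a CLOB during import, you could pre-create the table with the column defined as CLOB and then import with IGNORE=y so that the existing definition is used. This is only a sketch; the table name, columns, credentials, and file name are hypothetical.
CREATE TABLE scott.doc_table (id NUMBER, text CLOB);
imp scott/tiger FILE=expdat.dmp TABLES=(doc_table) IGNORE=y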
Note: Oracle recommends that you convert existing LONG columns to LOB columns. LOB columns are subject to far fewer restrictions than LONG columns. Further, LOB functionality is enhanced in every release, whereas LONG functionality has been static for several releases.
As of Oracle Database 10g, LOB handling has been improved to ensure that triggers work properly and that performance remains high when LOBs are being loaded. To achieve these improvements, the Import utility automatically changes all LOBs that were empty at export time to be NULL after they are imported.
If you have applications that expect the LOBs to be empty rather than NULL, then after the import you can issue a SQL UPDATE statement for each LOB column. Depending on whether the LOB column type was a BLOB or a CLOB, the syntax would be one of the following:
UPDATE <tablename> SET <lob column> = EMPTY_BLOB() WHERE <lob column> IS NULL;
UPDATE <tablename> SET <lob column> = EMPTY_CLOB() WHERE <lob column> IS NULL;
It is important to note that once the import is performed, there is no way to distinguish between LOB columns that are NULL versus those that are empty. Therefore, if that information is important to the integrity of your data, be sure you know which LOB columns are NULL and which are empty before you perform the import.
Views are exported in dependency order. In some cases, Export must determine the ordering, rather than obtaining the order from the database. In doing so, Export may not always be able to duplicate the correct ordering, resulting in compilation warnings when a view is imported, and the failure to import column comments on such views.
In particular, if viewa uses the stored procedure procb, and procb uses the view viewc, Export cannot determine the proper ordering of viewa and viewc. If viewa is exported before viewc and procb already exists on the import system, viewa receives compilation warnings at import time.
Grants on views are imported even if a view has compilation errors. A view could have compilation errors if an object it depends on, such as a table, procedure, or another view, does not exist when the view is created. If a base table does not exist, the server cannot validate that the grantor has the proper privileges on the base table with the GRANT OPTION. Access violations could occur when the view is used if the grantor does not have the proper privileges after the missing tables are created.
Importing views that contain references to tables in other schemas requires that the importer have SELECT ANY TABLE privilege. If the importer has not been granted this privilege, the views will be imported in an uncompiled state. Note that granting the privilege to a role is insufficient. For the view to be compiled, the privilege must be granted directly to the importer.
Import attempts to create a partitioned table with the same partition or subpartition names as the exported partitioned table, including names of the form SYS_Pnnn. If a table with the same name already exists, Import processing depends on the value of the IGNORE parameter.
Unless SKIP_UNUSABLE_INDEXES=y, inserting the exported data into the target table fails if Import cannot update a nonpartitioned index or index partition that is marked Indexes Unusable or is otherwise not suitable.
When you use the Export and Import utilities to migrate a large database, it may be more efficient to partition the migration into multiple export and import jobs. If you decide to partition the migration, be aware of the following advantages and disadvantages.
Partitioning a migration has the following advantages:
Time required for the migration may be reduced, because many of the subjobs can be run in parallel.
The import can start as soon as the first export subjob completes, rather than waiting for the entire export to complete.
Partitioning a migration has the following disadvantages:
The export and import processes become more complex.
Support of cross-schema references for certain types of objects may be compromised. For example, if a schema contains a table with a foreign key constraint against a table in a different schema, you may not have the required parent records when you import the table into the dependent schema.
To perform a database migration in a partitioned manner, take the following steps:
For all top-level metadata in the database, issue the following commands:
exp dba/password FILE=full FULL=y CONSTRAINTS=n TRIGGERS=n ROWS=n INDEXES=n
imp dba/password FILE=full FULL=y
For each schema n in the database, issue the following commands (where schema_n stands for the name of each individual schema):
exp dba/password OWNER=schema_n FILE=schema_n
imp dba/password FILE=schema_n FROMUSER=schema_n TOUSER=schema_n IGNORE=y
All exports can be done in parallel. When the import of full.dmp completes, all remaining imports can also be done in parallel.
This section describes compatibility issues that relate to using different releases of Export and the Oracle database.
Whenever you are moving data between different releases of the Oracle database, the following basic rules apply:
The Import utility and the database to which data is being imported (the target database) must be the same version.
The version of the Export utility must be equal to the earliest version of the source or target database.
For example, to create an export file for an import into a later release database, use a version of the Export utility that is equal to the version of the source database. Conversely, to create an export file for an import into an earlier release database, use a version of the Export utility that is equal to the version of the target database.
In general, you can use the Export utility from any Oracle8 release to export from an Oracle9i server and create an Oracle8 export file. See Creating Oracle Release 8.0 Export Files from an Oracle9i Database.
The following restrictions apply when you are using different releases of Export and Import:
Export dump files can be read only by the Import utility because they are stored in a special binary format.
Any export dump file can be imported into a later release of the Oracle database.
The Import utility cannot read export dump files created by the Export utility of a later maintenance release or version. For example, a release 9.2 export dump file cannot be imported by a release 9.0.1 Import utility.
Whenever a lower version of the Export utility runs with a later version of the Oracle database, categories of database objects that did not exist in the earlier version are excluded from the export.
Export files generated by Oracle9i Export, either direct path or conventional path, are incompatible with earlier releases of Import and can be imported only with Oracle9i Import. When backward compatibility is an issue, use the earlier release or version of the Export utility against the Oracle9i database.
Table 20-7 shows some examples of which Export and Import releases to use when moving data between different releases of the Oracle database.
Table 20-7 Using Different Releases of Export and Import
Export from -> Import to | Use Export Release | Use Import Release
---|---|---
8.1.6 -> 8.1.6 | 8.1.6 | 8.1.6
8.1.5 -> 8.0.6 | 8.0.6 | 8.0.6
8.1.7 -> 8.1.6 | 8.1.6 | 8.1.6
9.0.1 -> 8.1.6 | 8.1.6 | 8.1.6
9.0.1 -> 9.0.2 | 9.0.1 | 9.0.2
9.0.2 -> 10.1.0 | 9.0.2 | 10.1.0
10.1.0 -> 9.0.2 | 9.0.2 | 9.0.2
You do not need to take any special steps to create an Oracle release 8.0 export file from an Oracle9i database. However, the following features are not supported when you use Export release 8.0 on an Oracle9i database:
Export does not export rows from tables containing objects and LOBs when you have specified a direct path load (DIRECT=y).
Export does not export dimensions.
Function-based indexes and domain indexes are not exported.
Secondary objects (tables, indexes, sequences, and so on, created in support of a domain index) are not exported.
Views, procedures, functions, packages, type bodies, and types containing references to new Oracle9i features may not compile.
Objects whose DDL is implemented as a stored procedure rather than SQL are not exported.
Triggers whose action is a CALL statement are not exported.
Tables containing logical ROWID columns, primary key refs, or user-defined OID columns are not exported.
Temporary tables are not exported.
Index-organized tables (IOTs) revert to an uncompressed state.
Partitioned IOTs lose their partitioning information.
Index types and operators are not exported.
Bitmapped, temporary, and UNDO tablespaces are not exported.
Java sources, classes, and resources are not exported.
Varying-width CLOBs, collection enhancements, and LOB-storage clauses for VARRAY columns or nested table enhancements are not exported.
Fine-grained access control policies are not preserved.
External tables are not exported.