This chapter describes how to use the Export
utility to write data from an Oracle database into an operating system file in
binary format. This file is stored outside the database, and it can be read
into another Oracle database using the Import utility.
- What Is the Export Utility?
- Before Using Export
- Invoking Export
- Export Modes
- Getting Online Help
- Export Parameters
- Example Export Sessions
- Using the Interactive Method
- Warning, Error, and Completion Messages
- Exit Codes for Inspection and Display
- Conventional Path Export Versus Direct Path Export
- Invoking a Direct Path Export
- Network Considerations
- Character Set and Globalization Support Considerations
- Instance Affinity and Export
- Considerations When Exporting Database Objects
- Transportable Tablespaces
- Exporting from a Read-Only Database
- Using Export and Import to Partition a Database Migration
- Using Different Releases and Versions of Export
The Export utility provides a simple way for you
to transfer data objects between Oracle databases, even if they reside on
platforms with different hardware and software configurations.
When you run Export against an Oracle database,
objects (such as tables) are extracted, followed by their related objects (such
as indexes, comments, and grants), if any. The extracted data is written to an
Export file, as illustrated in Figure 1-1.
An Export file is an Oracle binary-format dump
file that is typically located on disk or tape. The dump files can be
transferred using FTP or physically transported (in the case of tape) to a
different site. The files can then be used with the Import utility to transfer
data between databases that are on systems not connected through a network. The
files can also be used as backups in addition to normal backup procedures.
Export dump files can only be read by the Oracle
Import utility. The version of the Import utility cannot be earlier than the
version of the Export utility used to create the dump file.
You can also display the contents of an export
file without actually performing an import. To do this, use the Import SHOW parameter. See SHOW for more information.
Before you begin using Export, be sure you take
care of the following items (described in detail in the following sections):
- Run
the catexp.sql or catalog.sql script
- Ensure
there is sufficient disk or tape storage to write the export file
- Verify
that you have the required access privileges
To use Export, you must run the script catexp.sql or catalog.sql (which runs catexp.sql) after the database has been created.
The actual names of the script
files depend on your operating system. The script filenames and the method
for running them are described in your Oracle operating system-specific
documentation.
catexp.sql or catalog.sql needs to be run only once on a database. You do not need to run it
again before you perform the export. The script performs the following tasks to
prepare the database for Export:
- Creates
the necessary export views in the data dictionary
- Creates
the EXP_FULL_DATABASE role
- Assigns
all necessary privileges to the EXP_FULL_DATABASE role
- Assigns EXP_FULL_DATABASE to
the DBA role
- Records
the version of catexp.sql that has been installed
Before you run Export, ensure that there is
sufficient disk or tape storage space to write the export file. If there is not
enough space, Export terminates with a write-failure error.
You can use table sizes to estimate the maximum
space needed. You can find table sizes in the USER_SEGMENTS view of the Oracle data dictionary. The following query displays
disk usage for all tables:
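A minimal sketch of such a query (the statement from the original example is not reproduced here, so treat this as illustrative):

   SELECT SUM(BYTES)
     FROM USER_SEGMENTS
    WHERE SEGMENT_TYPE = 'TABLE';

Summing the BYTES column of USER_SEGMENTS for table segments gives a rough upper bound on the space the export file may need.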
The result of the query does not include disk
space used for data stored in LOB (large object) or VARRAY columns or in partitioned tables.
See Also: Oracle9i Database Reference for more information about dictionary views
To use Export, you must have the CREATE
SESSION privilege on an Oracle database. To export
tables owned by another user, you must have the EXP_FULL_DATABASE role enabled. This role is granted to all DBAs.
If you do not have the system privileges
contained in the EXP_FULL_DATABASE role, you cannot export objects contained in
another user's schema. For example, you cannot export a table in another user's
schema, even if you created a synonym for it.
Before you use one of these methods to invoke
Export, be sure to read the descriptions of the available parameters.
You can specify all valid parameters and their
values from the command line using the following syntax:
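The general form is as follows (a sketch of the standard exp command-line syntax; the parameter names and values vary by session):

   exp username/password PARAMETER=value

or, for parameters that accept a list of values:

   exp username/password PARAMETER=(value1,value2,...,valuen)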
If you prefer to let Export prompt you for the
value of each parameter, you can use the following syntax to start Export in
interactive mode:
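For example (a minimal sketch; Export then prompts you for the remaining values):

   exp username/password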
Export will display commonly used parameters
with a request for you to enter a value. This method exists for backward
compatibility and is not recommended because it provides less functionality
than the other methods.
You can specify all valid parameters and their
values in a parameter file. Storing the parameters in a file allows them to be
easily modified or reused, and is the recommended method for invoking Export.
If you use different parameters for different databases, you can have multiple
parameter files.
Create the parameter file using any flat file
text editor. The command-line option PARFILE=filename tells Export to read the parameters from the specified file rather
than from the command line. For example:
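Two sketches of the invocation (the file name params.dat is illustrative):

   exp PARFILE=params.dat
   exp username/password PARFILE=params.dat

The parameter file itself is simply a text file of NAME=value lines, for example (the specific parameters shown here are assumed for the sketch):

   FILE=dba.dmp
   GRANTS=y
   INDEXES=y
   CONSISTENT=y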
The first example does not specify the username/password on the command line to illustrate that you can
specify them in the parameter file, although, for security reasons, this is not
recommended.
You can add comments to the parameter file by
preceding them with the pound (#) sign. Export ignores all characters to the
right of the pound (#) sign.
You can specify a parameter file at the same
time that you are entering parameters on the command line. In fact, you can
specify the same parameter in both places. The position of the PARFILE parameter and other parameters on the command line determines which
parameters take precedence. For example, assume the parameter file params.dat contains the parameter INDEXES=y and Export is invoked with the following line:
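A sketch of such an invocation (username/password is a placeholder):

   exp username/password PARFILE=params.dat INDEXES=n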
In this case, because INDEXES=n occurs after PARFILE=params.dat, INDEXES=n overrides the value of the INDEXES parameter in the parameter file.
SYSDBA is used internally and has specialized
functions; its behavior is not the same as for generalized users. Therefore,
you typically do not need to invoke Export as SYSDBA, except in a few special situations, usually at the request of Oracle technical support.
To invoke Export as SYSDBA, use the following syntax, adding any desired parameters or
parameter filenames:
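A sketch of the command (any additional parameters or a PARFILE specification would follow on the same line):

   exp \'username/password AS SYSDBA\'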
This example shows the entire connect string
enclosed in single quotation marks and backslashes. This is because the string, AS
SYSDBA, contains a blank, a situation for which most
operating systems require that the entire connect string be placed in quotation
marks or marked as a literal by some method. Some operating systems also
require that quotation marks on the command line be preceded by an escape
character. In this example, backslashes are used as the escape character. If
the backslashes were not present, the command-line parser that Export uses
would not understand the quotation marks and would remove them before calling
Export.
See your Oracle operating system-specific
documentation for more information about special and reserved characters on
your system.
All users can export in table mode and user
mode. Users with the EXP_FULL_DATABASE role (privileged users) can export in all modes.
To
specify one of these modes, use the appropriate parameter (FULL, OWNER, TABLES, or TABLESPACES) when you invoke Export.
You can use conventional path Export or direct
path Export to export in any of the first three modes.
Table 1-1 Objects Exported and Imported in Each Mode
1 Nonprivileged users can export and import only
indexes they own on tables they own. They cannot export indexes they own that
are on tables owned by other users, nor can they export indexes owned by
other users on their own tables. Privileged users can export and import
indexes on the specified users' tables, even if the indexes are owned by
other users. Indexes owned by the specified user on other users' tables are
not included, unless those other users are included in the list of users to
export.
2 Nonprivileged and privileged users can export and import all triggers owned by the user, even if they are on tables owned by other users.
3 A full export does not export triggers owned by schema SYS. You must manually re-create SYS triggers either before or after the full import. Oracle Corporation recommends that you re-create them after the import in case they define actions that would impede progress of the import.
- Table-level
Export: exports all data from the specified tables
- Partition-level
Export: exports only data from the specified source partitions
or subpartitions
In all modes, partitioned data is exported in a
format such that partitions or subpartitions can be imported selectively.
In table-level Export, you can export an entire
table (partitioned or nonpartitioned) along with its indexes and other
table-dependent objects. If the table is partitioned, all of its partitions and
subpartitions are also exported. This applies to both direct path Export and
conventional path Export. You can perform a table-level export in any Export
mode.
In partition-level Export, you can export one or
more specified partitions or subpartitions of a table. You can only perform a
partition-level export in Table mode.
- Java
classes, resources, and procedures that are created using Enterprise Java
Beans (EJBs) are not placed in the export file.
- Constraints
that have been altered using the RELY keyword lose the RELY attribute when they are exported.
- When
a type definition has evolved and then data referencing that evolved type
is exported, the type definition on the import system must have evolved in
the same manner.
The following sections describe the parameters that
you can specify in the parameter file or on the command line.
Default: operating system-dependent. See your
Oracle operating system-specific documentation to determine the default value
for this parameter.
Specifies the size, in bytes, of the buffer used
to fetch rows. As a result, this parameter determines the maximum number of
rows in an array fetched by Export. Use the following formula to calculate the
buffer size:
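The formula (also noted in the quick-reference table at the end of this chapter) is:

   buffer_size = rows_in_array * maximum_row_size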
Tables with columns of type LONG, LOB, BFILE, REF, ROWID, LOGICAL ROWID, or DATE are fetched one row at a time.
The BUFFER parameter
applies only to conventional path Export. It has no effect on a direct path
Export.
For example, consider a table that has a name column declared as CHAR(30) or VARCHAR2(30) and a weight column declared as NUMBER. The maximum size of the name column is 30, plus 2 bytes for the indicator. The maximum size of
the weight column is 22 (the size of the internal
representation for Oracle numbers), plus 2 bytes for the indicator.
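Putting these numbers together (a worked illustration based on the assumed column definitions above): maximum_row_size = (30 + 2) + (22 + 2) = 56 bytes, so setting BUFFER=5600 would allow Export to fetch up to 100 rows in each array fetch.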
The default, COMPRESS=y, causes Export to flag table data for consolidation into one
initial extent upon Import. If extent sizes are large (for example, because of
the PCTINCREASE parameter), the allocated space will be larger
than the space required to hold the data.
If you specify COMPRESS=n, Export uses the current storage parameters, including the values
of initial extent size and next extent size. The values of the parameters may
be the values specified in the CREATE TABLE or ALTER TABLE statements or the values modified by the database system. For
example, the NEXT extent size value may be modified if the table
grows and if the PCTINCREASE parameter is nonzero.
LOB data is not compressed. For
LOB data, values of initial extent size and next extent size at the time of
export are used.
Specifies whether or not Export uses the SET
TRANSACTION READ ONLY statement to ensure that the data seen by Export
is consistent to a single point in time and does not change during the
execution of the exp command. You should specify CONSISTENT=y when you anticipate that other applications will be updating the
target data after an export has started.
If you use CONSISTENT=n, each table is usually exported in a single
transaction. However, if a table contains nested tables, the outer table and
each inner table are exported as separate transactions. If a table is
partitioned, each partition is exported as a separate transaction.
Therefore, if nested tables and partitioned
tables are being updated by other applications, the data that is exported could
be inconsistent. To minimize this possibility, export those tables at a time
when updates are not being done.
Table 1-2 shows a sequence of
events by two users: user1 exports partitions in a table and user2 updates data in that table.
If the export uses CONSISTENT=n, the updates to TAB:P1 are not written to the export file. However,
the updates to TAB:P2 are written to the export file because the update
transaction is committed before the export of TAB:P2 begins. As a result, the user2 transaction is only partially recorded in the export file, making
it inconsistent.
If you use CONSISTENT=y and the volume of updates is large, the rollback segment usage
will be large. In addition, the export of each table will be slower because the
rollback segment must be scanned for uncommitted transactions.
- CONSISTENT=y is
unsupported for exports that are performed when you are connected as user SYS or
you are using AS SYSDBA, or both.
- Export
of certain metadata may require the use of the SYS schema
within recursive SQL. In such situations, the use of CONSISTENT=y will
be ignored. Oracle Corporation recommends that you avoid making metadata
changes during an export process in which CONSISTENT=y is selected.
- To
minimize the time and space required for such exports, you should export
tables that need to remain consistent separately from those that do not.
For example, export the emp and dept tables together in a consistent export, and then
export the remainder of the database in a second pass.
- A
"snapshot too old" error occurs when rollback space is used up,
and space taken up by committed transactions is reused for new
transactions. Reusing space in the rollback segment allows database
integrity to be preserved with minimum space requirements, but it imposes
a limit on the amount of time that a read-consistent image can be
preserved.
If a committed transaction has been overwritten
and the information is needed for a read-consistent view of the database, a
"snapshot too old" error results.
To avoid this error, you should minimize the
time taken by a read-consistent export. (Do this by restricting the number of
objects exported and, if possible, by reducing the database transaction rate.)
Also, make the rollback segment as large as possible.
Specifying DIRECT=y causes Export to extract data by reading the data directly,
bypassing the SQL command-processing layer (evaluating buffer). This method can
be much faster than a conventional path Export.
Specifies that Export should display a progress
meter in the form of a period for n number of rows exported. For example, if you specify FEEDBACK=10, Export displays a period each time 10 rows are exported. The FEEDBACK value applies to all tables being exported; it cannot be set on a
per-table basis.
Specifies the names of the export files. The
default extension is .dmp, but you can specify any extension. Because
Export supports multiple export files (see the parameter FILESIZE), you can specify
multiple filenames to be used. For example:
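A sketch of a multiple-file specification (the file names and FILESIZE value are illustrative):

   exp scott/tiger FILE=dat1.dmp,dat2.dmp,dat3.dmp FILESIZE=2048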
When Export reaches the value you have specified
for the maximum FILESIZE, Export stops writing to the current file, opens
another export file with the next name specified by the FILE parameter, and continues until complete or the maximum value of FILESIZE is again reached. If you do not specify sufficient export
filenames to complete the export, Export will prompt you to provide additional
filenames.
Export supports writing to multiple export
files, and Import can read from multiple export files. If you specify a value
(byte limit) for the FILESIZE parameter, Export will write only the number of
bytes you specify to each dump file.
When the amount of data Export must write
exceeds the maximum value you specified for FILESIZE, it will get the name of the next export file from the FILE parameter or, if it has used all the names specified in the FILE parameter, it will prompt you to provide a new export filename. If
you do not specify a value for FILESIZE (note that a value of 0 is equivalent to not
specifying FILESIZE), then Export will
write to only one file, regardless of the number of files specified in the FILE parameter.
If the space requirements of your
export file exceed the available disk space, Export will abort, and you will
have to repeat the Export after making sufficient disk space available.
The FILESIZE parameter has a maximum value equal to the maximum value that can
be stored in 64 bits.
Table 1-3 shows that the maximum
size for dump files depends on the operating system you are using and on the
release of the Oracle database server that you are using.
The maximum value that can be
stored in a file is dependent on your operating system. You should verify
this maximum value in your Oracle operating system-specific documentation
before specifying FILESIZE. You should also ensure that the file size you specify
for Export is supported on the system on which Import will run.
The FILESIZE value can also be specified as a number followed by KB (number of
kilobytes). For example, FILESIZE=2KB is the same as FILESIZE=2048. Similarly, MB specifies megabytes (1024 * 1024) and GB specifies
gigabytes (1024**3). B remains the shorthand for bytes; the number is not
multiplied to obtain the final file size (FILESIZE=2048B is the same as FILESIZE=2048).
FLASHBACK_SCN
Specifies the system change number (SCN) that
Export will use to enable flashback. The export operation is performed with
data consistent as of this specified SCN.
FLASHBACK_TIME
Specifies a time. Export finds the SCN that most
closely matches the specified time. This SCN is used to enable flashback. The
export operation is performed with data consistent as of this SCN.
See Also: Oracle9i Application Developer's Guide - Fundamentals for more information about using flashback
Indicates that the Export is a full database
mode Export (that is, it exports the entire database). Specify FULL=y to export in full database mode. You need to have the EXP_FULL_DATABASE role to export in this mode.
Specifies whether or not the Export utility
exports object grants. The object grants that are exported depend on whether
you use full database mode or user mode. In full database mode, all grants on a
table are exported. In user mode, only those granted by the owner of the table
are exported. System privilege grants are always exported.
If you specify this parameter (for example, LOG=export.log), Export writes informational and error
messages to the specified log file in addition to displaying them on the
terminal.
Specifies whether or not the Export utility uses
the SET TRANSACTION READ ONLY statement to ensure that the data exported is
consistent to a single point in time and does not change during the export. If OBJECT_CONSISTENT is set to y, each object is
exported in its own read-only transaction, even if it is partitioned. In
contrast, if you use the CONSISTENT parameter, then there is only one read-only
transaction.
Indicates that the Export is a user-mode Export
and lists the users whose objects will be exported. If the user initiating the
export is the DBA, multiple users may be listed.
Specifies the name of a file that contains a
list of Export parameters. For more information, see the discussion of the parameter file method earlier in this chapter.
This parameter allows you to select a subset of
rows from a set of tables when doing a table mode export. The value of the
query parameter is a string that contains a WHERE clause for a SQL SELECT statement that will be applied to all tables (or
table partitions) listed in the TABLES parameter.
For example, if user scott wants to export only those employees whose job title is SALESMAN and whose salary is less than 1600, he could do the following
(this example is UNIX-based):
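A sketch of such a command (the exact escape characters required depend on your shell; the backslashes shown assume a UNIX shell):

   exp scott/tiger TABLES=emp QUERY=\"WHERE job=\'SALESMAN\' and sal \<1600\"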
The values specified for the QUERY parameter are applied to all tables (or table partitions) listed
in the TABLES parameter. For example, the following statement
will unload rows in both emp and bonus that match the query:
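A sketch (same escaping assumptions as the previous example):

   exp scott/tiger TABLES=emp,bonus QUERY=\"WHERE job=\'SALESMAN\' and sal \<1600\"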
If a table is missing the columns specified in
the QUERY clause, an error message will be produced, and
no rows will be exported for the offending table.
- The
parameter QUERY cannot be specified for full, user, or tablespace mode
exports.
- The
parameter QUERY must be applicable to all specified tables.
- The
parameter QUERY cannot be specified in a direct path export (DIRECT=y)
- The
parameter QUERY cannot be specified for tables with inner nested
tables.
- You
cannot determine from the contents of the export file whether the data is
the result of a QUERY export.
Specifies the length, in bytes, of the file
record. The RECORDLENGTH parameter is necessary when you must transfer
the export file to another operating system that uses a different default
value.
If you do not define this parameter, it defaults
to your platform-dependent value for buffer size. For more information about
the buffer size default value, see your Oracle operating system-specific
documentation.
You can set RECORDLENGTH to any value equal to or greater than your system's buffer size.
(The highest value is 64 KB.) Changing the RECORDLENGTH parameter affects only the size of data that accumulates before
writing to the disk. It does not affect the operating system file block size.
See your Oracle operating system-specific
documentation to determine the proper value or to create a file with a
different record size.
The RESUMABLE parameter is used to enable and disable resumable space
allocation. Because this parameter is disabled by default, you must set RESUMABLE=y in order to use its associated parameters, RESUMABLE_NAME and RESUMABLE_TIMEOUT.
The value for this parameter identifies the
statement that is resumable. This value is a user-defined text string that is
inserted in either the USER_RESUMABLE or DBA_RESUMABLE view to help you identify a specific resumable statement that has
been suspended.
This parameter is ignored unless the RESUMABLE parameter is set to y to enable resumable space allocation.
The value of the parameter specifies the time
period during which an error must be fixed. If the error is not fixed within
the timeout period, execution of the statement is aborted.
This parameter is ignored unless the RESUMABLE parameter is set to y to enable resumable space allocation.
Specifies the type of database optimizer
statistics to generate when the exported data is imported. Options are ESTIMATE, COMPUTE, and NONE.
In some cases, Export will place the precalculated
statistics in the export file as well as the ANALYZE statements to regenerate the statistics.
However, the precalculated optimizer statistics
will not be used at export time if a table has columns with system-generated
names. In addition, the precalculated optimizer statistics are flagged as questionable at export time if any of the following is true:
- There
are row errors while exporting
- The
client character set or NCHAR character set does not match the server character set
or NCHAR character set
- A QUERY clause
is specified
- Only
certain partitions or subpartitions are exported
Specifies that the Export is a table-mode Export
and lists the table names and partition and subpartition names to export. You
can specify the following when you specify the name of the table:
- schemaname specifies
the name of the user's schema from which to export the table or partition.
The schema names ORDSYS, MDSYS, CTXSYS, and ORDPLUGINS are reserved by Export.
- tablename specifies
the name of the table or tables to be exported. Table-level export lets
you export entire partitioned or nonpartitioned tables. If a table in the
list is partitioned and you do not specify a partition name, all its
partitions and subpartitions are exported.
The table name can contain any number of '%'
pattern matching characters, which can each match zero or more characters in
the table name against the table objects in the database. All the tables in the
relevant schema that match the specified pattern are selected for export, as if
the respective table names were explicitly specified in the parameter.
- partition_name indicates
that the export is a partition-level Export. Partition-level Export lets
you export one or more specified partitions or subpartitions within a
table.
If you use tablename:partition_name, the specified table must
be partitioned, and partition_name must be the name of one of its partitions or
subpartitions. If the specified table is not partitioned, the partition_name is ignored and the entire table is exported.
- By
default, table names in a database are stored as uppercase. If you have a
table name in mixed-case or lowercase, and you want to preserve
case-sensitivity for the table name, you must enclose the name in
quotation marks. The name must exactly match the table name stored in the
database.
Some operating systems require that quotation
marks on the command line be preceded by an escape character. The following are
examples of how case-sensitivity can be preserved in the different Export
modes.
- Table
names specified on the command line cannot include a pound (#) sign,
unless the table name is enclosed in quotation marks. Similarly, in the
parameter file, if a table name includes a pound (#) sign, the Export
utility interprets the rest of the line as a comment, unless the table
name is enclosed in quotation marks.
For example, if the parameter file contains the
following line, Export interprets everything on the line after emp# as a comment and does not export the tables dept and mydata:
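For illustration, such a line would look like this:

   TABLES=(emp#, dept, mydata)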
However, given the following line, the Export
utility exports all three tables because emp# is enclosed in quotation marks:
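For illustration:

   TABLES=("emp#", dept, mydata)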
For a TABLES parameter that specifies multiple schema.tablename:(sub)partition_name arguments, Export attempts to purge duplicates before processing
the list of objects.
The TABLESPACES parameter specifies that all tables in the tablespace be exported
to the Export dump file. This includes all tables contained in the list of
tablespaces and all tables that have a partition located in the list of
tablespaces. Indexes are exported with their tables, regardless of where the
index is stored.
When TABLESPACES is used in conjunction with TRANSPORT_TABLESPACE=y, you can specify a limited list of tablespaces to be exported from
the database to the export file.
When TTS_FULL_CHECK is set to TRUE, Export verifies that a recovery set (set of
tablespaces to be recovered) has no dependencies (specifically, IN pointers) on
objects outside the recovery set, and vice versa.
Specifies the username/password (and optional connect string) of the user
performing the export. If you omit the password, Export will prompt you for it.
If you connect as user SYS, you must also specify AS SYSDBA in the connect string. Your operating system may require you to
treat AS SYSDBA as a special string, in which case the entire
string would be enclosed in quotation marks.
The VOLSIZE parameter has a maximum value equal to the maximum value that can
be stored in 64 bits. See your Oracle operating system-specific documentation
for more information.
The VOLSIZE value can be specified as a number followed by KB (number of
kilobytes). For example, VOLSIZE=2KB is the same as VOLSIZE=2048. Similarly, MB specifies megabytes (1024 * 1024) and GB specifies
gigabytes (1024**3). B remains the shorthand for bytes; the number is not
multiplied to get the final file size (VOLSIZE=2048B is the same as VOLSIZE=2048).
Certain parameters can conflict with each other.
For example, because specifying TABLES can conflict with an OWNER specification, the following command causes Export to terminate
with an error:
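A sketch of such a conflicting command (the owner and table names are illustrative):

   exp SYSTEM/password OWNER=jones TABLES=scott.emp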
Only users with the DBA role or the EXP_FULL_DATABASE role can export in full database mode. In this
example, an entire database is exported to the file dba.dmp with all GRANTS and all data.
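A sketch of such a full database export command (the password is a placeholder):

   exp SYSTEM/password FULL=y FILE=dba.dmp GRANTS=y ROWS=y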
User mode exports can be used to back up one or
more database users. For example, a DBA may want to back up the tables of
deleted users for a period of time. User mode is also appropriate for users who
want to back up their own data or who want to move objects from one owner to
another. In this example, user scott is exporting his own tables.
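A sketch of such a user mode export (the dump file name is illustrative):

   exp scott/tiger FILE=scott.dmp OWNER=scott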
In table mode, you can export table data or the
table definitions. (If no rows are exported, the CREATE TABLE statement is placed in the export file, with grants and indexes,
if they are specified.)
A user with the EXP_FULL_DATABASE role can use table mode to export tables from any user's schema by
specifying TABLES=schemaname.tablename.
If schemaname is not specified, Export defaults to the previous schema name from
which an object was exported. If there is not a previous object, Export
defaults to the exporter's schema. In the following example, Export defaults to
the SYSTEM schema for table a and to scott for table c:
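A sketch of the command being described (the dump file name is assumed):

   exp SYSTEM/password FILE=expdat.dmp TABLES=(a, scott.b, c)

Table a has no schema prefix and no previous object, so it defaults to SYSTEM; table c follows scott.b, so it defaults to scott.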
A user with the EXP_FULL_DATABASE role can also export dependent objects that are owned by other
users. A nonprivileged user can export only dependent objects for the specified
tables that the user owns.
Exports in table mode do not include cluster
definitions. As a result, the data is exported as unclustered tables. Thus, you
can use table mode to uncluster tables.
In partition-level export, you can specify the
partitions and subpartitions of a table that you want to export.
Assume emp is a table that is partitioned on employee name. There are two
partitions, m and z. As this example shows, if you export the table without specifying
a partition, all of the partitions are exported.
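A sketch of such a table-level export of emp (the dump file name is illustrative):

   exp scott/tiger TABLES=emp FILE=exp.dmp ROWS=y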
Assume emp is a table that is partitioned on employee name. There are two
partitions, m and z. As this example shows, if you export the table and specify a
partition, only the specified partition is exported.
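A sketch exporting only partition m (the dump file name is illustrative):

   exp scott/tiger TABLES=emp:m FILE=exp.dmp ROWS=y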
Assume emp is a partitioned table with two partitions, m and z. Table emp is partitioned using the composite method. Partition m has subpartitions sp1 and sp2, and partition z has subpartitions sp3 and sp4. As the example shows, if you export the
composite partition m, all its subpartitions (sp1 and sp2) will be exported. If you export the table and
specify a subpartition (sp4), only the specified subpartition is exported.
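A sketch combining both cases (the dump file name is illustrative):

   exp scott/tiger TABLES=(emp:m, emp:sp4) FILE=exp.dmp ROWS=y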
Starting Export from the command line with no
parameters initiates the interactive method. The command-line interactive
method does not provide prompts for all Export functionality and is provided
only for backward compatibility. If you want to use an interactive interface to
the Export utility, it is recommended that you use the Oracle Enterprise
Manager (OEM) Export Wizard.
If you do not specify a username/password combination on the command line, the Export utility prompts you
for this information.
When you invoke Export interactively, the
response given by Export depends on what you enter at the command line. Table 1-4 shows the possibilities.
Table 1-4 Invoking Export Using the
Interactive Method
In Export interactive mode, you are not prompted
to specify whether you want to connect as SYSDBA or @instance.
You must specify AS SYSDBA and/or @instance with the username.
Additionally, if you omit the password and allow
Export to prompt you for it, you cannot specify the @instance string as well. You can specify @instance only with username.
After Export is invoked, it displays the
following prompts. You may not see all prompts in a given Export session
because some prompts depend on your responses to other prompts. Some prompts
show a default answer. If the default is acceptable, press Enter.
- In
user mode, Export prompts for all usernames to be included in the export
before exporting any data. To indicate the end of the user list and begin
the current Export session, press Enter.
- In
table mode, if you do not specify a schema prefix, Export defaults to the
exporter's schema or the schema containing the last table exported in the
current session.
For example, if beth is a privileged user exporting in table mode, Export assumes that
all tables are in the beth schema until another schema is specified. Only a
privileged user (someone with the EXP_FULL_DATABASE role) can export tables in another user's schema.
This section describes the different types of
messages issued by Export and how to save them in a log file.
You can capture all Export messages in a log
file, either by using the LOG parameter or, for those systems that permit it,
by redirecting the Export output to a file. The Export utility writes a log of
detailed information about successful unloads and any errors that may occur.
Refer to your Oracle operating system-specific documentation for information on
redirecting output.
Export does not terminate after recoverable
errors. For example, if an error occurs while exporting a table, Export
displays (or logs) an error message, skips to the next table, and continues
processing. These recoverable errors are known as warnings.
For example, if a nonexistent table is specified
as part of a table-mode export, the Export utility exports all other tables.
Then it issues a warning and terminates successfully.
Some errors are nonrecoverable and terminate the
Export session. These errors typically occur because of an internal problem or
because a resource, such as memory, is not available or has been exhausted. For
example, if the catexp.sql script is not executed, Export issues the
following nonrecoverable error message:
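The message is similar to the following (exact wording may vary by release):

   EXP-00024: Export views not installed, please notify your DBA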
If one or more recoverable errors occurs but
Export is able to continue to completion, Export displays the following
message:
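The message is similar to the following (exact wording may vary by release):

   Export terminated successfully with warnings.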
Export provides the results of an export
operation immediately upon completion. Depending on the platform, Export may
report the outcome in a process exit code as well as recording the results in
the log file. This enables you to check the outcome from the command line or
script. Table 1-5 shows the exit codes
that get returned for various results.
Conventional path Export uses the SQL SELECT statement to extract data from tables. Data is read from disk into
a buffer cache, and rows are transferred to the evaluating buffer. The data,
after passing expression evaluation, is transferred to the Export client, which
then writes the data into the export file.
Direct path Export is much faster than
conventional path Export because data is read from disk into the buffer cache
and rows are transferred directly to the Export client.
The evaluating buffer is bypassed. The data is already in the format that
Export expects, thus avoiding unnecessary data conversion. The data is
transferred to the Export client, which then writes the data into the export
file.
Figure 1-2 illustrates how data
extraction differs between conventional path Export and direct path Export.
To invoke a direct path Export, you must use
either the command-line method or a parameter file. You cannot invoke a direct
path Export using the interactive method.
To use direct path Export, specify the DIRECT=y parameter on the command line or in the parameter file. The
default is DIRECT=n, which extracts the table data using the
conventional path.
Additionally, be aware that the Export parameter BUFFER applies only to conventional path Exports. For direct path Export,
use the RECORDLENGTH parameter to specify the size of the buffer that
Export uses for writing to the export file.
In versions of Export prior to 8.1.5, you could
not use direct path Export for tables containing objects and LOBs. If you tried
to, their rows were not exported. This behavior has changed. Rows in tables
that contain objects and LOBs will now be exported using conventional path,
even if direct path was specified. Import will correctly handle these
conventional path tables within direct path dump files.
Virtual Private Database (VPD) and Oracle Label
Security are not enforced during direct path Exports.
The following users are exempt from Virtual
Private Database and Oracle Label Security enforcement regardless of the export
mode, application, or utility used to extract data from the database:
- The
database user SYS
- Database
users granted the Oracle9i EXEMPT ACCESS POLICY privilege, either directly or through a database role
This means that any user who is granted the EXEMPT ACCESS POLICY privilege is completely exempt from enforcement of VPD and Oracle
Label Security. This is a powerful privilege and should be carefully managed.
This privilege does not affect the enforcement of traditional object privileges
such as SELECT, INSERT, UPDATE, and DELETE. These privileges are enforced even if a user
has been granted the EXEMPT ACCESS POLICY privilege.
You may be able to improve performance by
increasing the value of the RECORDLENGTH parameter when you invoke a direct path Export.
Your exact performance gain depends upon the following factors:
- DB_BLOCK_SIZE
- The
types of columns in your table
- Your
I/O layout (The drive receiving the export file should be separate from
the disk drive where the database files reside.)
This section describes factors to take into
account when you use Export and Import across a network.
Because the export file is in binary format, use
a protocol that supports binary transfers to prevent corruption of the file
when you transfer it across a network. For example, use FTP or a similar file
transfer protocol to transmit the file in binary mode. Transmitting export files in character mode causes errors
when the file is imported.
With Oracle Net, you can perform exports and
imports over a network. For example, if you run Export locally, you can write
data from a remote Oracle database into a local export file. If you run Import
locally, you can read data into a remote Oracle database.
To use Export or Import with Oracle Net, include the
connection qualifier string @connect_string when entering the username/password in the exp or imp command. For the exact syntax of this clause,
see the user's guide for your Oracle Net protocol.
The Export utility always exports user data,
including Unicode data, in the character sets of the Export server. The
character sets are specified at database creation.
Some 8-bit characters can be lost (that is,
converted to 7-bit equivalents) when you import an 8-bit character set export
file. This occurs if the client system has a native 7-bit character set or if
the NLS_LANG operating system environment variable is set to
a 7-bit character set. Most often, you notice that accented characters lose
their accent mark.
Both Export and Import provide descriptions of
any required character set conversion before exporting or importing the data.
Data definition language (DDL),
such as a CREATE TABLE command, is exported in the client character set.
If the export character set has a different sorting
order than the import character set, then tables that are partitioned on
character columns may yield unpredictable results. For example, consider the
following table definition, which is produced on a database having an ASCII
character set:
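A sketch of such a definition (the column names, datatypes, and tablespace names are assumed; only the table name partlist and the partitions part_low and part_mid are referenced by the discussion that follows):

   CREATE TABLE partlist
     (part    VARCHAR2(10),
      partno  NUMBER(2))
   PARTITION BY RANGE (part)
     (PARTITION part_low VALUES LESS THAN ('Z') TABLESPACE tbs_1,
      PARTITION part_mid VALUES LESS THAN ('z') TABLESPACE tbs_2,
      PARTITION part_hi  VALUES LESS THAN (MAXVALUE) TABLESPACE tbs_3);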
When this table is imported into a database
based upon an EBCDIC character set, all of the rows in the part_mid partition will migrate to the part_low partition because z comes before Z in EBCDIC character sets. To obtain the desired results, the owner
of partlist must repartition the table following the import.
You can use instance affinity to associate jobs
with instances in databases you plan to export and import. Be aware that there
may be some compatibility issues if you are using a combination of releases
8.0, 8.1, and 9i.
The following sections describe points you
should consider when you export particular database objects.
If transactions continue to access sequence
numbers during an export, sequence numbers can be skipped. The best way to
ensure that sequence numbers are not skipped is to ensure that the sequences
are not accessed during the export.
Sequence numbers can be skipped only when cached
sequence numbers are in use. When a cache of sequence numbers has been
allocated, they are available for use in the current database. The exported
value is the next sequence number (after
the cached values). Sequence numbers that are cached, but unused, are lost when
the sequence is imported.
On export, LONG datatypes are fetched in sections. However, enough memory must be
available to hold all of the contents of each row, including the LONG data.
All data in a LOB column does not need to be held in memory at the same time. LOB
data is loaded and unloaded in sections.
The contents of foreign function libraries are
not included in the export file. Instead, only the library specification (name,
location) is included in full database and user mode export. You must move the
library's executable files and update the library specification if the database
is moved to a new location.
If the data you are exporting contains offline
bitmapped tablespaces, Export will not be able to export the complete
tablespace definition and will display an error message. You can still import
the data; however, you must first create the offline bitmapped tablespaces
before importing to prevent DDL commands that may reference the missing
tablespaces from failing.
Directory alias definitions are included only in
a full database mode Export. To move a database to a new location, the database
administrator must update the directory aliases to point to the new location.
Directory aliases are not included in user or
table mode Export. Therefore, you must ensure that the directory alias has been
created on the target system before the directory alias is used.
The export file does not hold the contents of
external files referenced by BFILE columns or attributes. Instead, only the names
and directory aliases for files are copied on Export and restored on Import. If
you move the database to a location where the old directories cannot be used to
access the included files, the database administrator (DBA) must move the
directories containing the specified files to a new location where they can be
accessed.
The contents of external tables are not included
in the export file. Instead, only the table specification (name, location) is
included in full database and user mode export. You must manually move the
external data and update the table specification if the database is moved to a
new location.
In all Export modes, the Export utility includes
information about object type definitions used by the tables being exported.
The information, including object name, object identifier, and object geometry,
is needed to verify that the object type on the target system is consistent
with the object instances contained in the export file. This ensures that the
object types needed by a table are created with the same object identifier at
import time.
Note, however, that in table, user, and
tablespace mode, the export file does not include a full object type definition
needed by a table if the user running Export does not have execute access to
the object type. In this case, only enough information is written to verify
that the type exists, with the same object identifier and the same geometry, on
the import target system.
The user must ensure that the proper type
definitions exist on the target system, either by working with the DBA to
create them, or by importing them from full database or user mode exports
performed by the DBA.
It is important to perform a full database mode
export regularly to preserve all object type definitions. Alternatively, if
object type definitions from different schemas are used, the DBA should perform
a user mode export of the appropriate set of users. For example, if table1 belonging to user scott contains a column on blake's type type1, the DBA should perform a user mode export of both blake and scott to preserve the type definitions needed by the
table.
Inner nested table data is exported whenever the
outer containing table is exported. Although inner nested tables can be named,
they cannot be exported individually.
Queues are implemented on tables. The export and
import of queues constitutes the export and import of the underlying queue
tables and related dictionary tables. You can export and import queues only at
queue table granularity.
When you export a queue table, both the table
definition information and queue data are exported. Because the queue table
data is exported as well as the table definition, the user is responsible for
maintaining application-level data integrity when queue table data is imported.
You should be cautious when exporting compiled
objects that reference a name used as a synonym and as another object.
Exporting and importing these objects will force a recompilation that could
result in changes to the object definitions.
The following example illustrates the problem (a sketch; it assumes that user scott owns a table named emp and that a public synonym emp resolves to it at the time user blake creates the trigger):

CREATE PUBLIC SYNONYM emp FOR scott.emp;

CONNECT blake/paper
CREATE TRIGGER t_emp BEFORE INSERT ON emp BEGIN NULL; END;
CREATE VIEW emp AS SELECT * FROM dual;
If the database in the preceding example were
exported, the reference to emp in the trigger would refer to blake's view rather than to scott's table. This would cause an error when Import
tried to reestablish the t_emp trigger.
If an export operation attempts to export a
synonym named DBMS_JAVA when there is no corresponding DBMS_JAVA package or when Java is either not loaded or loaded incorrectly,
the export will terminate unsuccessfully. The error messages that are generated
include, but are not limited to, the following: EXP-00008, ORA-00904, and
ORA-29516.
If Java is enabled, make sure that both the DBMS_JAVA synonym and DBMS_JAVA package are created and valid before rerunning
the export.
You can export tables with fine-grained access
control policies enabled. When doing so, consider the following:
- The
user who imports from an export file containing such tables must have the
appropriate privileges (specifically, the EXECUTE privilege on the DBMS_RLS package so that the tables' security policies can be
reinstated). If a user without the correct privileges attempts to export a
table with fine-grained access policies enabled, only those rows that the
exporter is privileged to read will be exported.
- If
fine-grained access control is enabled on a SELECT statement, then conventional path Export may not export
the entire table because fine-grained access may rewrite the query.
- Only
user SYS, or a user with the EXP_FULL_DATABASE role enabled or who has been granted EXEMPT ACCESS
POLICY, can perform direct path
Exports on tables having fine-grained access control.
The transportable tablespace feature enables you
to move a set of tablespaces from one Oracle database to another.
To move or copy a set of tablespaces, you must
make the tablespaces read-only, copy the datafiles of these tablespaces, and
use Export and Import to move the database information (metadata) stored in the
data dictionary. Both the datafiles and the metadata export file must be copied
to the target database. The transport of these files can be done using any
facility for copying flat binary files, such as the operating system copying
facility, binary-mode FTP, or publishing on CD-ROMs.
After copying the datafiles and exporting the
metadata, you can optionally put the tablespaces in read/write mode.
To extract metadata from a source database,
Export uses queries that contain ordering clauses (sort operations). For these
queries to succeed, the user performing the export must be able to allocate
on-disk sort segments. For these sort segments to be allocated in a read-only
database, the user's temporary tablespace should be set to point at a
temporary, locally managed tablespace.
When you use the Export and Import utilities to
migrate a large database, it may be more efficient to partition the migration
into multiple export and import jobs. If you decide to partition the migration,
be aware of the following advantages and disadvantages.
- Time
required for the migration may be reduced because many of the subjobs can
be run in parallel.
- The
import can start as soon as the first export subjob completes, rather than
waiting for the entire export to complete.
- The
export and import processes become more complex.
- Support
of cross-schema references for certain types of objects may be
compromised. For example, if a schema contains a table with a foreign key
constraint against a table in a different schema, you may not have the
required parent records when you import the table into the dependent
schema.
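As an illustration of one way to partition such a migration (the usernames and all file names other than full.dmp are assumed for this sketch), you might run a metadata-only full export plus one data export per schema, and then import full.dmp first:

   exp SYSTEM/password FILE=full.dmp FULL=y ROWS=n
   exp scott/tiger FILE=scott.dmp OWNER=scott
   exp blake/paper FILE=blake.dmp OWNER=blake

   imp SYSTEM/password FILE=full.dmp FULL=y
   imp scott/tiger FILE=scott.dmp IGNORE=y
   imp blake/paper FILE=blake.dmp IGNORE=y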
All exports can be done in parallel. When the
import of full.dmp completes, all remaining imports can also be
done in parallel.
This section describes compatibility issues that
relate to using different releases of Export and the Oracle database server.
Whenever you are moving data between different
releases of the Oracle database server, the following basic rules apply:
- The
Import utility and the database to which data is being imported (the
target database) must be the same version.
- The
version of the Export utility must be equal to the lowest version of the
source or target database.
For example, to create an export file for an
import into a higher release database, use a version of the Export utility that
is equal to the source database. Conversely, to create an export file for an
import into a lower release database, use a version of the Export utility that
is equal to the version of the target database. The following information is
for specific versions:
- When
you create an Oracle version 6 export file from an Oracle7 database by
running the Oracle version 6 Export utility against the Oracle7 database
server, you must first run the catexp6.sql script on the Oracle7 database. This script creates
the export views that make the database look, to Export, like an Oracle
version 6 database.
- When
you create an Oracle7 export file from an Oracle8i database
by running the Oracle7 Export utility against the Oracle8i database,
you must first run the catexp7.sql script on the Oracle8i database.
This script creates the export views that make the database look, to
Export, like an Oracle7 database.
The catexp6.sql and catexp7.sql scripts
must be run by user SYS. These scripts only need to be run once.
- Export
dump files can be read only by the Import utility because they are stored
in a special binary format.
- Any
export dump file can be imported into a higher release of the Oracle
database server.
- The
Import utility can read export dump files created by Export release 5.1.22
and higher.
- The
Import utility cannot read export dump files created by the Export utility
of a higher maintenance release or version. For example, a release 8.1
export dump file cannot be imported by a release 8.0 Import utility, and a
version 8 export dump file cannot be imported by a version 7 Import
utility.
- The
Oracle version 6 (or earlier) Export utility cannot be used against an
Oracle8 or higher database.
- Whenever
a lower version of the Export utility runs with a higher version of the
Oracle database server, categories of database objects that did not exist
in the lower version are excluded from the export. For example,
partitioned tables did not exist in the Oracle database server version 7.
So, if you need to move a version 8 partitioned table to a version 7
database, you must first reorganize the table into a nonpartitioned table.
- Export
files generated by Oracle9i Export, either direct path or conventional path, are
incompatible with earlier releases of Import and can be imported only with
Oracle9i Import. When backward compatibility is an issue, use
the earlier release or version of the Export utility against the Oracle9i database.
Table 1-6 shows some examples of
which Export and Import releases to use when moving data between different
releases of the Oracle database server.
1 If catexp7.sql has never been run on the 8.1.7
database, then you must do that first to create the Oracle7 data dictionary
views. Only then can you successfully use Export release 7.3.4 on the release
8.1.7 database.
Creating Oracle Release 8.0 Export Files from
an Oracle9i Database
You do not need to take any special steps to
create an Oracle release 8.0 export file from an Oracle9i database. However, the following features are not supported when
you use Export release 8.0 on an Oracle9i database:
- Export
does not export rows from tables containing objects and LOBs when you have
specified a direct path load (DIRECT=y).
- Export
does not export dimensions.
- Functional
and domain indexes are not exported.
- Secondary
objects (tables, indexes, sequences, and so on, created in support of a
domain index) are not exported.
- Views,
procedures, functions, packages, type bodies, and types containing references
to new Oracle9i features may not compile.
- Objects
whose DDL is implemented as a stored procedure rather than SQL are not
exported.
- Triggers
whose action is a CALL statement are not exported.
- Tables
containing logical ROWID columns, primary key refs, or user-defined OID columns
will not be exported.
- Temporary
tables are not exported.
- Index-organized
tables (IOTs) revert to an uncompressed state.
- Partitioned
IOTs lose their partitioning information.
- Index
types and operators are not exported.
- Bitmapped,
temporary, and UNDO tablespaces are not exported.
- Java
sources, classes, and resources are not exported.
- Varying-width CLOBs, collection enhancements, and LOB-storage clauses for VARRAY columns
or nested table enhancements are not exported.
- Fine-grained
access control policies are not preserved.
- External
tables are not exported.
Exporting Schemas | |
Note: Deprecated ... replaced by DataPump | |
Export help | exp -help |
exp -help | |
Export Authority | exp userid=<schema_owner/password> |
exp uwclass/uwclass | |
Export File Name | exp userid=<schema_owner/password> FILE=<file_name> |
exp uwclass/uwclass file=c:\temp\uw_test.dmp | |
Log File Name | exp userid=<schema_owner/password> LOG=<file_name> |
exp uwclass/uwclass log=c:\temp\uw.log | |
Buffer size | -- O/S dependent and can usually be ignored exp userid=<schema_owner/password> BUFFER=<integer> |
-- rows_in_array * maximum_row_size exp uwclass/uwclass buffer=64000 | |
Compress (default is Y) | -- The default is Y and it is best to override it. exp userid=<schema_owner/password> COMPRESS=<Y|N> |
exp uwclass/uwclass compress=N | |
Consistent (default is N) | -- Implements SET TRANSACTION READ ONLY exp userid=<schema_owner/password> CONSISTENT=<Y|N> |
exp uwclass/uwclass file=c:\temp\cnsstnt.dmp consistent=Y | |
Constraints (default is Y) | exp userid=<schema_owner/password> CONSTRAINTS=<Y|N> |
exp uwclass/uwclass file=c:\temp\cnstrnt.dmp constraints=N | |
Direct Path (default is N) | exp userid=<schema_owner/password> DIRECT=<Y|N> |
exp uwclass/uwclass file=c:\temp\dirpath.dmp direct=Y | |
Feedback (default is 0) | exp userid=<schema_owner/password> FEEDBACK=<integer DEFAULT 0> |
exp uwclass/uwclass file=c:\temp\fback.dmp feedback=100 | |
File Size (default unlimited) | exp userid=<schema_owner/password> FILESIZE=<integer> [KB|MB] |
exp uwclass/uwclass file=c:\temp\fsize.dmp filesize=100MB | |
Flashback By SCN | exp userid=<schema_owner/password> FLASHBACK_SCN=<SCN> |
exp uwclass/uwclass file=c:\temp\fbscn.dmp flashback_scn=7632619 | |
Flashback By Timestamp | exp userid=<schema_owner/password> FLASHBACK_TIME=<timestamp> |
exp uwclass/uwclass FLASHBACK_TIME="TIMESTAMP '2002-05-01 11:00:00'" or exp uwclass/uwclass flashback_time="TO_TIMESTAMP('12-02-2001 14:35:00', 'DD-MM-YYYY HH24:MI:SS')" | |
Full | -- FULL=Y requires the user to have the EXP_FULL_DATABASE role exp userid=<schema_owner/password> FULL=<Y|N> |
exp uwclass/uwclass full=Y | |
Grants | exp userid=<schema_owner/password> GRANTS=<Y|N> |
exp uwclass/uwclass grants=N | |
Indexes | exp userid=<schema_owner/password> INDEXES=<Y|N> |
exp uwclass/uwclass indexes=N | |
Object Consistent | exp userid=<schema_owner/password> OBJECT_CONSISTENT=<Y|N> |
exp uwclass/uwclass object_consistent=Y | |
Owner | exp userid=<schema_owner/password> OWNER=(o1, o2, ... o#) |
exp uwclass/uwclass owner=(uwclass, ids, webapps) | |
Parameter File | exp userid=<schema_owner/password> PARFILE=<parameter_file_name> |
exp uwclass/uwclass parfile=c:\temp\uwparfile.ctl | |
Query | exp userid=<schema_owner/password> QUERY=<query_string> |
exp uwclass/uwclass owner=SCOTT tables=emp query=\" WHERE job=\'MANAGER\' AND sal \>50000\" | |
Record Length | exp userid=<schema_owner/password> RECORDLENGTH=<bytes> |
exp uwclass/uwclass recordlength=32000 | |
Resumable | exp userid=<schema_owner/password> RESUMABLE=<Y|N> |
exp uwclass/uwclass resumable=Y | |
Resumable Name | exp userid=<schema_owner/password> RESUMABLE_NAME = 'User USERNAME (USERID), Session SESSIONID, Instance INSTANCEID' |
exp uwclass/uwclass resumable_name='UWCLASS' | |
Resumable Time Out | exp userid=<schema_owner/password> RESUMABLE_TIMEOUT=<seconds> DEFAULT 7200 (2 hours) |
exp uwclass/uwclass resumable_timeout=18000 | |
Rows | exp userid=<schema_owner/password> ROWS=<Y|N> |
exp uwclass/uwclass rows=N | |
Statistics | exp userid=<schema_owner/password> STATISTICS=<ESTIMATE | COMPUTE | NONE> |
exp uwclass/uwclass statistics=COMPUTE | |
Tables | exp userid=<schema_owner/password> TABLES=(t1, t2, ... t#) |
exp uwclass/uwclass tables=(emp, dept, bonus) | |
Tablespaces | exp userid=<schema_owner/password> TABLESPACES=(tbsp1, tbsp2, ... tbsp#) |
exp uwclass/uwclass tablespaces=(uwdata, user_data) | |
Transportable Tablespaces | exp userid=<schema_owner/password> TRANSPORT_TABLESPACE=<Y|N> |
exp uwclass/uwclass transport_tablespace=Y | |
Triggers | exp userid=<schema_owner/password> TRIGGERS=<Y|N> |
exp uwclass/uwclass triggers=N | |
TTS Full Check | exp userid=<schema_owner/password> TTS_FULL_CHECK=<Y|N> |
exp uwclass/uwclass tts_full_check=Y | |
Volume Size | exp userid=<schema_owner/password> VOLSIZE=<integer> [KB|MB|GB] |
exp uwclass/uwclass volsize=10GB |