This chapter describes how to use the Import
utility to read an export file into an Oracle database. Import reads only files
created by the Export utility. For information on how to export a database, see the Export documentation.
This chapter discusses the following topics:
- What Is the Import Utility?
- Before Using Import
- Importing into Existing Tables
- Effect of Schema and Database Triggers on Import Operations
- Invoking Import
- Import Modes
- Getting Online Help
- Import Parameters
- Example Import Sessions
- Using the Interactive Method
- Warning, Error, and Completion Messages
- Exit Codes for Inspection and Display
- Error Handling During an Import
- Table-Level and Partition-Level Import
- Controlling Index Creation and Maintenance
- Reducing Database Fragmentation
- Network Considerations
- Character Set and Globalization Support Considerations
- Considerations When Importing Database Objects
- Materialized Views and Snapshots
- Transportable Tablespaces
- Storage Parameters
- Dropping a Tablespace
- Reorganizing Tablespaces
- Importing Statistics
- Using Export and Import to Partition a Database Migration
- Using Export Files from a Previous Oracle Release
The Import utility reads the object definitions
and table data from an Export dump file. It inserts the data objects into an
Oracle database.
Export dump files can only be read by the Oracle
Import utility. The version of the Import utility cannot be earlier than the
version of the Export utility used to create the dump file.
Table objects are imported as they are read from
the export file, and the export file stores objects in a fixed order.
First, new tables are created. Then, data is
imported and indexes are built. Then triggers are imported, integrity
constraints are enabled on the new tables, and any bitmap, functional, and/or
domain indexes are built. This sequence prevents data from being rejected due
to the order in which tables are imported. It also prevents
redundant triggers from firing twice on the same data (once when it is
originally inserted and again during the import).
For example, if the emp table has a referential integrity constraint on the dept table and the emp table is imported first, all emp rows that reference departments that have not yet been imported
into dept would be rejected if the constraints were
enabled.
When data is imported into existing tables,
however, the order of import can still produce referential integrity failures.
In the situation just given, if the emp table already existed and referential integrity constraints were
in force, many rows could be rejected.
A similar situation occurs when a referential
integrity constraint on a table references itself. For example, if scott's manager in the emp table is drake, and drake's row has not yet been loaded, scott's row will fail, even though it would be valid
at the end of the import.
For the reasons mentioned
previously, it is a good idea to disable referential constraints when
importing into an existing table. You can then reenable the constraints after
the import is completed.
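For example, a minimal sketch (the constraint name emp_dept_fk and the imp invocation are illustrative):

ALTER TABLE emp DISABLE CONSTRAINT emp_dept_fk;
-- run the import here, for example: imp scott/tiger FILE=emp.dmp TABLES=(emp) IGNORE=y
ALTER TABLE emp ENABLE CONSTRAINT emp_dept_fk;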
To use Import, you must run either the script catexp.sql or catalog.sql (which runs catexp.sql) after the database has been created or
migrated to Oracle9i.
The actual names of the script
files depend on your operating system. The script filenames and the method
for running them are described in your Oracle operating system-specific
documentation.
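For example, on many platforms the scripts are located in the rdbms/admin directory under the Oracle home (an assumption; check your platform documentation) and can be run from SQL*Plus while connected with SYSDBA privileges:

sqlplus "/ as sysdba"
SQL> @?/rdbms/admin/catalog.sql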
The catexp.sql or catalog.sql script needs to be run only once on a database. You do not need to
run either script again before performing future import operations. Both
scripts perform the following tasks to prepare the database for Import:
- Assign
all necessary privileges to the IMP_FULL_DATABASE role.
- Assign IMP_FULL_DATABASE to
the DBA role.
- Create
required views of the data dictionary.
This section describes the privileges you need
to use the Import utility and to import objects into your own and others'
schemas.
To use Import, you need the privilege CREATE
SESSION to log on to the Oracle database server. This
privilege belongs to the CONNECT role established during database creation.
You can do an import even if you did not create
the export file. However, keep in mind that if the export file was created by a
user with EXP_FULL_DATABASE privilege, then you must have IMP_FULL_DATABASE privilege to import it. Both of these privileges are typically
assigned to DBAs.
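For example, a DBA could grant the required role to an importing user (scott is an illustrative account name):

GRANT IMP_FULL_DATABASE TO scott;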
The privileges required to import objects
into your own schema all initially belong to the RESOURCE role.
To import the privileges that a user has granted
to others, the user initiating the import must either own the objects or have
object privileges with the WITH GRANT OPTION; these authorizations must also be
valid on the target system.
To import objects into another user's schema, the user must have the IMP_FULL_DATABASE role
enabled.
To import system objects from a full database
export file, the role IMP_FULL_DATABASE must be enabled. The parameter FULL specifies that these system objects are included in the import
when the export file is a full export:
- Profiles
- Public
database links
- Public
synonyms
- Roles
- Rollback
segment definitions
- Resource
costs
- Foreign
function libraries
- Context
objects
- System
procedural objects
- System
audit options
- System
privileges
- Tablespace
definitions
- Tablespace
quotas
- User
definitions
- Directory
aliases
- System
event triggers
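A minimal full-mode invocation might look like the following sketch (user name, password, and file names are illustrative):

imp system/manager FULL=y FILE=dba.dmp LOG=dba_imp.log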
When you choose to create tables manually before
importing data into them from an export file, you should use either the same
table definition previously used or a compatible format. For example, although
you can increase the width of columns and change their order, you cannot do the
following:
- Add NOT NULL columns
- Change
the datatype of a column to an incompatible datatype (LONG to NUMBER, for example)
- Change
the definition of object types used in a table
- Change DEFAULT column
values
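For example, a sketch of pre-creating a compatible table and then importing its rows into it (table, column, and file names are illustrative); IGNORE=y tells Import to use the existing definition rather than stop on the object-already-exists error:

CREATE TABLE emp (
  empno  NUMBER(4) PRIMARY KEY,
  ename  VARCHAR2(20),
  deptno NUMBER(2)
);

imp scott/tiger FILE=emp.dmp TABLES=(emp) IGNORE=y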
In the normal import order, referential
constraints are imported only after all tables are imported. This sequence
prevents errors that could occur if a referential integrity constraint existed
for data that has not yet been imported.
These errors can still occur when data is loaded
into existing tables. For example, if table emp has a referential integrity constraint on the mgr column that verifies that the manager number exists in emp, a perfectly legitimate employee row might fail the referential
integrity constraint if the manager's row has not yet been imported.
When such an error occurs, Import generates an
error message, bypasses the failed row, and continues importing other rows in
the table. You can disable constraints manually to avoid this.
Referential constraints between tables can also
cause problems. For example, if the emp table appears before the dept table in the export file, but a referential check exists from the emp table into the dept table, some of the rows from the emp table may not be imported due to a referential constraint
violation.
To prevent errors like these, you should disable
referential integrity constraints when importing data into existing tables.
When the constraints are reenabled after
importing, the entire table is checked, which may take a long time for a large
table. If the time required for that check is too long, it may be beneficial to
order the import manually.
To do so, perform several imports from an export
file instead of one. First, import tables that are the targets of referential
checks. Then, import the tables that reference them. This option works if
tables do not reference each other in a circular fashion, and if a table does
not reference itself.
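For example, a sketch of such an ordered import (file and table names are illustrative): dept, the target of the referential check, is imported first, then emp:

imp scott/tiger FILE=exp.dmp TABLES=(dept) IGNORE=y
imp scott/tiger FILE=exp.dmp TABLES=(emp) IGNORE=y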
Triggers that are defined to trigger on DDL
events for a specific schema or on DDL-related events for the database are
system triggers. These triggers can have detrimental effects on certain Import
operations. For example, they can prevent successful re-creation of database
objects, such as tables. This causes errors to be returned that give no
indication that a trigger caused the problem.
Database administrators and anyone creating
system triggers should verify that such triggers do not prevent users from
performing database operations for which they are authorized. Test any system
trigger against a trial export and import of the affected objects before relying on it.
You can invoke Import from the command line, interactively, or by using a
parameter file. Before you use one of these methods to invoke
Import, be sure to read the descriptions of the available parameters.
You can specify all valid parameters and their
values from the command line using the following syntax:
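The general form is:

imp username/password PARAMETER=value
imp username/password PARAMETER=(value1,value2,...,valuen)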
If you prefer to let Import prompt you for the
value of each parameter, you can use the following syntax to start Import in
interactive mode:
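For example, start Import with only the connect string and answer the prompts:

imp username/password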
Import will display each parameter with a
request for you to enter a value. This method exists for backward compatibility
and is not recommended because it provides less functionality than the other
methods.
You can specify all valid parameters and their
values in a parameter file. Storing the parameters in a file allows them to be
easily modified or reused, and is the recommended method for invoking Import.
If you use different parameters for different databases, you can have multiple parameter
files.
Create the parameter file using any flat file
text editor. The command-line option PARFILE=filename tells Import to read the parameters from the specified file rather
than from the command line. For example:
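A sketch of a parameter file and two ways of using it (the file name and parameter values are illustrative). Suppose params.dat contains:

USERID=username/password
FILE=dba.dmp
GRANTS=y
INDEXES=y
FULL=y

Import could then be invoked in either of the following ways:

imp PARFILE=params.dat
imp username/password PARFILE=params.dat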
The first example does not specify the username/password on the command line to illustrate that you can
specify them in the parameter file, although, for security reasons, this is not
recommended.
You can add comments to the parameter file by
preceding them with the pound (#) sign. Import ignores all characters to the
right of the pound (#) sign.
You can specify a parameter file at the same
time that you are entering parameters on the command line. In fact, you can
specify the same parameter in both places. The position of the PARFILE parameter and other parameters on the command line determines
which parameters take precedence. For example, assume the parameter file params.dat contains the parameter INDEXES=y and Import is invoked with the following line:
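imp system/manager PARFILE=params.dat INDEXES=n

(The system/manager credentials shown are illustrative.)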
In this case, because INDEXES=n occurs after PARFILE=params.dat, INDEXES=n overrides the value of the INDEXES parameter in the parameter file.
SYSDBA is used internally and has specialized
functions; its behavior is not the same as for generalized users. Therefore,
you should not typically need to invoke Import as SYSDBA, except in the following situations:
To invoke Import as SYSDBA, use the following syntax, adding any desired parameters or
parameter filenames:
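imp \'username/password AS SYSDBA\'

or, to connect to a specific instance:

imp \'username/password@instance AS SYSDBA\'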
This example shows the entire connect string
enclosed in quotation marks and backslashes. This is because the string, AS
SYSDBA, contains a blank, a situation for which most
operating systems require that the entire connect string be placed in quotation
marks or marked as a literal by some method. Some operating systems also
require that quotation marks on the command line be preceded by an escape
character. In this example, backslashes are used as the escape character. If
the backslashes were not present, the command-line parser that Import uses
would not understand the quotation marks and would remove them before calling
Import.
See your Oracle operating system-specific
documentation for more information about special and reserved characters on
your system.
- Full--Only
users with the IMP_FULL_DATABASE role can import in this mode, which imports a full
database export dump file. Use the FULL parameter to specify this mode.
- Tablespace--allows
a privileged user to move a set of tablespaces from one Oracle database to
another. Use the TRANSPORT_TABLESPACE parameter to specify this mode.
- User
(Owner)--allows you to import all objects that belong to you (such as
tables, grants, indexes, and procedures). A privileged user importing in
user mode can import all objects in the schemas of a specified set of
users. Use the FROMUSER parameter to specify this mode.
- Table--allows
you to import specific tables and partitions. A privileged user can
qualify the tables by specifying the schema that contains them. Use the TABLES parameter
to specify this mode.
All users can import in table mode and user
mode. Users with the
IMP_FULL_DATABASE role (privileged users) can import in all modes.
A user with the IMP_FULL_DATABASE role must specify one of these modes. Otherwise, an error results.
If a user without the IMP_FULL_DATABASE role fails to specify one of these modes, a user-level
import is performed.
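The following sketch shows one invocation for each mode (user names, passwords, file names, and the datafile path are illustrative):

Full mode:        imp system/manager FULL=y FILE=dba.dmp
User mode:        imp system/manager FROMUSER=scott FILE=scott.dmp
Table mode:       imp scott/tiger TABLES=(emp,dept) FILE=scott.dmp
Tablespace mode:  imp system/manager TRANSPORT_TABLESPACE=y DATAFILES='/u01/oradata/users01.dbf' FILE=ts.dmp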
The objects that are imported depend on the
Import mode you choose and the mode that was used during the export.
The following sections describe the parameters that you can specify in the
parameter file or on the command line.
The integer specified for BUFFER is the size, in bytes, of the buffer through which data rows are
transferred.
BUFFER determines the number of rows in the array
inserted by Import. The following formula gives an approximation of the buffer
size that inserts a given array of rows:
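buffer_size = rows_in_array * maximum_row_size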
For tables containing LONG, LOB, BFILE, REF, ROWID, UROWID, or DATE columns, rows are inserted individually. The
size of the buffer must be large enough to contain the entire row, except for LOB and LONG columns. If the buffer cannot hold the longest
row in a table, Import attempts to allocate a larger buffer.
See your Oracle operating
system-specific documentation to determine the default value for this
parameter.
The CHARSET parameter applies to Oracle version 5 and 6
export files only. Use of this parameter is not recommended. It is provided only for compatibility with previous
versions. Eventually, it will no longer be supported.
COMMIT specifies whether Import should commit after
each array insert. By default, Import commits only after loading each table,
and Import performs a rollback when an error occurs, before continuing with the
next object.
If a table has nested table columns or
attributes, the contents of the nested tables are imported as separate tables. Therefore,
the contents of the nested tables are always committed in a transaction
distinct from the transaction used to commit the outer table.
If COMMIT=n and a table is partitioned, each partition and subpartition in the
Export file is imported in a separate transaction.
Specifying COMMIT=y prevents rollback segments from growing inordinately large and
improves the performance of large imports. Specifying COMMIT=y is advisable if the table has a uniqueness constraint. If the
import is restarted, any rows that have already been imported are rejected with
a recoverable error.
If a table does not have a uniqueness
constraint, Import could produce duplicate rows when you reimport the data.
For tables containing LONG, LOB, BFILE, REF, ROWID, UROWID, or DATE columns, array inserts are not done. If COMMIT=y, Import commits these tables after each row.
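For example (illustrative names and values), committing after each array insert while using a 1 MB buffer:

imp scott/tiger FILE=big_tables.dmp COMMIT=y BUFFER=1048576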
COMPILE specifies whether or not Import should compile
packages, procedures, and functions as they are created.
If COMPILE=n, these units are compiled on their first use.
For example, packages that are used to build domain indexes are compiled when
the domain indexes are created.
CONSTRAINTS specifies whether or not table constraints are
to be imported. The default is to import constraints. If you do not want
constraints to be imported, you must set the parameter value to n.
Note that primary key constraints for
index-organized tables (IOTs) and object tables are always imported.
When TRANSPORT_TABLESPACE is specified as y, use the DATAFILES parameter to
list the datafiles to be transported into the database.
DESTROY specifies whether or not the existing datafiles
making up the database should be reused. That is, specifying DESTROY=y causes Import to include the REUSE option in the datafile clause of the CREATE TABLESPACE statement, which causes Import to reuse the original database's
datafiles after deleting their contents.
Note that the export file contains the datafile
names used in each tablespace. If you specify DESTROY=y and attempt to create a second database on the same system (for
testing or other purposes), the Import utility will overwrite the first
database's datafiles when it creates the tablespace. In this situation you
should use the default, DESTROY=n, so that an error occurs if the datafiles already
exist when the tablespace is created. Also, when you need to import into the
original database, you will need to specify IGNORE=y to add to the existing datafiles without replacing them.
FEEDBACK specifies that Import should display a progress
meter in the form of a period for n number of rows imported. For example, if you specify FEEDBACK=10, Import displays a period each time 10 rows have been imported. The FEEDBACK value applies to all tables being imported; it cannot be set on a
per-table basis.
FILE specifies the names of the export files to
import. The default extension is .dmp. Because Export supports multiple export files
(see the following description of the FILESIZE parameter), you may need to specify multiple filenames to be
imported. For example:
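imp scott/tiger FILE=(dat1.dmp, dat2.dmp, dat3.dmp) FILESIZE=2048

(The user name and file names shown are illustrative.)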
You need not be the user who exported the export
files; however, you must have read access to the files. If you were not the
exporter of the export files, you must also have the IMP_FULL_DATABASE role granted to you.
Export supports writing to multiple export
files, and Import can read from multiple export files. If, on export, you
specify a value (byte limit) for the Export FILESIZE parameter, Export will write only the number of bytes you specify
to each dump file. On import, you must use the Import parameter FILESIZE to tell Import the maximum dump file size you specified on export.
The maximum value that can be
stored in a file is operating system-dependent. You should verify this
maximum value in your Oracle operating system-specific documentation before
specifying FILESIZE.
The FILESIZE value can be specified as a number followed by KB (number of
kilobytes). For example, FILESIZE=2KB is the same as FILESIZE=2048. Similarly, MB specifies megabytes (1024 * 1024) and GB specifies
gigabytes (1024**3). B remains the shorthand for bytes; the number is not
multiplied to obtain the final file size (FILESIZE=2048B is the same as FILESIZE=2048).
FROMUSER specifies a comma-separated list of schemas to import.
This parameter is relevant only to users with the IMP_FULL_DATABASE role. The parameter enables you to import a subset of schemas from
an export file containing multiple schemas (for example, a full export dump
file or a multischema, user-mode export dump file).
Schema
names that appear inside functional indexes, functions, procedures, triggers,
type bodies, views, and so on, are not affected by FROMUSER or TOUSER processing. Only the name of the object is affected.
Grants on views are imported even if a view has
compilation errors. A view could have compilation errors if an object it
depends on, such as a table, procedure, or another view, does not exist when
the view is created. If a base table does not exist, the server cannot validate
that the grantor has the proper privileges on the base table with the GRANT OPTION. Access violations could occur when the view is
used if the grantor does not have the proper privileges after the missing
tables are created.
Importing views that contain references to
tables in other schemas requires that the importer have SELECT ANY TABLE privilege. If the importer has not been granted this privilege,
the views will be imported in an uncompiled state. Note that granting the
privilege to a role is insufficient. For the view to be compiled, the privilege
must be granted directly to the importer.
Import attempts to create a partitioned table
with the same partition or subpartition names as the exported partitioned table,
including names of the form SYS_Pnnn. If a table with the same name already exists,
Import processing depends on the value of the IGNORE parameter.
Unless SKIP_UNUSABLE_INDEXES=y, inserting the exported data into the target
table fails if Import cannot update a nonpartitioned index or index partition
that is marked Indexes Unusable or is otherwise not suitable.
Support for Fine-Grained Access Control
You can export tables with fine-grained access
control policies enabled. When doing so, keep the following considerations in
mind:
To restore the fine-grained access control
policies, the user who imports from an export file containing such tables must
have the following privileges:
- EXECUTE privilege
on the DBMS_RLS package so that the tables' security policies can be
reinstated.
- EXPORT_FULL_DATABASE role
enabled or EXEMPT ACCESS POLICY granted
If a user without the correct privileges
attempts to import from an export file that contains tables with fine-grained
access control policies, a warning message will be issued. Therefore, it is
advisable for security reasons that the exporter and importer of such tables be
the DBA.
The three interrelated objects in a snapshot
system are the master table, optional snapshot log, and the snapshot itself.
The tables (master table, snapshot log table definition, and snapshot tables)
can be exported independently of one another. Snapshot logs can be exported
only if you export the associated master table. You can export snapshots using
full database or user-mode Export; you cannot use table-mode Export.
The snapshot log in a dump file is imported if
the master table already exists for the database to which you are importing and
it has a snapshot log.
When a ROWID snapshot log is exported, ROWIDs stored in the snapshot log have no meaning
upon import. As a result, each ROWID snapshot's first attempt to do a fast refresh
fails, generating an error indicating that a complete refresh is required.
To avoid the refresh error, do a complete
refresh after importing a ROWID snapshot log. After you have done a complete
refresh, subsequent fast refreshes will work properly. In contrast, when a primary
key snapshot log is exported, the values of the primary keys do retain their
meaning upon Import. Therefore, primary key snapshots can do a fast refresh
after the import.
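For example (the snapshot name is illustrative), a complete refresh can be requested with the DBMS_MVIEW package, where 'C' requests a complete refresh:

EXECUTE DBMS_MVIEW.REFRESH('scott.emp_snap', 'C');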
A snapshot that has been restored from an export
file has reverted to a previous state. On import, the time of the last refresh
is imported as part of the snapshot table definition. The function that
calculates the next refresh time is also imported.
Each refresh leaves a signature. A fast refresh
uses the log entries that date from the time of that signature to bring the
snapshot up to date. When the fast refresh is complete, the signature is
deleted and a new signature is created. Any log entries that are not needed to
refresh other snapshots are also deleted (all log entries with times before the
earliest remaining signature).
When you restore a snapshot from an export file,
you may encounter a problem under certain circumstances.
Assume that a snapshot is refreshed at time A,
exported at time B, and refreshed again at time C. Then, because of corruption
or other problems, the snapshot needs to be restored by dropping the snapshot
and importing it again. The newly imported version has the last refresh time
recorded as time A. However, log entries needed for a fast refresh may no
longer exist. If the log entries do exist (because they are needed for another
snapshot that has yet to be refreshed), they are used, and the fast refresh
completes successfully. Otherwise, the fast refresh fails, generating an error
that says a complete refresh is required.
Snapshots, snapshot logs, and related items are
exported with the schema name explicitly given in the DDL statements;
therefore, snapshots and their related items cannot be imported into a
different schema.
If you attempt to use FROMUSER and TOUSER to import snapshot data, an error will be
written to the Import log file and the items will not be imported.
To move a set of tablespaces between databases (transportable tablespaces), you must make the tablespaces
read-only, copy the datafiles of these tablespaces, and use Export and Import
to move the database information (metadata) stored in the data dictionary. Both
the datafiles and the metadata export file must be copied to the target
database. The transport of these files can be done using any facility for
copying flat binary files, such as the operating system copying facility,
binary-mode FTP, or publishing on CD-ROMs.
After copying the datafiles and importing the
metadata, you can optionally put the tablespaces in read/write mode.
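A sketch of the sequence (tablespace, file, and account names are illustrative):

ALTER TABLESPACE ts1 READ ONLY;

exp system/manager TRANSPORT_TABLESPACE=y TABLESPACES=(ts1) FILE=ts1.dmp

-- copy ts1.dmp and the tablespace's datafiles to the target system, then:

imp system/manager TRANSPORT_TABLESPACE=y DATAFILES='/u01/oradata/ts1_01.dbf' FILE=ts1.dmp

ALTER TABLESPACE ts1 READ WRITE;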
By default, a table is imported into its original tablespace.
If the tablespace no longer exists, or the user
does not have sufficient quota in the tablespace, the system uses the default
tablespace for that user, unless the table:
- Is
partitioned
- Is
a type table
- Contains LOB, VARRAY, or OPAQUE type
columns
- Has
an index-organized table (IOT) overflow segment
If the user does not have sufficient quota in
the default tablespace, the user's tables are not imported.
Tables are exported with their current storage
parameters. For object tables, the OIDINDEX is created with its current storage
parameters and name, if given. For tables that contain LOB, VARRAY, or OPAQUE type columns, LOB, VARRAY, or OPAQUE type data is created with their current storage parameters.
If you alter the storage parameters of existing
tables prior to export, the tables are exported using those altered storage
parameters. Note, however, that storage parameters for LOB data cannot be
altered prior to export (for example, chunk size for a LOB column, whether a LOB column is CACHE or NOCACHE, and so forth).
Note that LOB data might not reside in the same
tablespace as the containing table. The tablespace for that data must be
read/write at the time of import or the table will not be imported.
If LOB data resides in a tablespace that does
not exist at the time of import or the user does not have the necessary quota
in that tablespace, the table will not be imported. Because there can be
multiple tablespace clauses, including one for the table, Import cannot
determine which tablespace clause caused the error.
Before using the Import utility to import data,
you may want to create large tables with different storage parameters. If so,
you must specify IGNORE=y on the command line or in the parameter file.
By default at export time, storage parameters
are adjusted to consolidate all data into its initial extent. To preserve the
original size of an initial extent, you must specify at export time that
extents are not to be consolidated (by
setting COMPRESS=n).
Read-only tablespaces can be exported. On
import, if the tablespace does not already exist in the target database, the
tablespace is created as a read/write tablespace. If you want read-only
functionality, you must manually make the tablespace read-only after the
import.
If the tablespace already exists in the target
database and is read-only, you must make it read/write before the import.
You can drop a tablespace by redefining the
objects to use different tablespaces before the import. You can then issue the imp command and specify IGNORE=y.
In many cases, you can drop a tablespace by
doing a full database export, then creating a zero-block tablespace with the
same name (before logging off) as the tablespace you want to drop. During
import, with IGNORE=y, the relevant CREATE TABLESPACE statement will fail and prevent the creation of the unwanted
tablespace.
All objects from that tablespace will be
imported into their owner's default tablespace with the exception of
partitioned tables, type tables, and tables that contain LOB or VARRAY columns or index-only tables with overflow
segments. Import cannot determine which tablespace caused the error. Instead,
you must first create a table and then import the table again, specifying IGNORE=y.
Objects are not imported into the default
tablespace if the tablespace does not exist or you do not have the necessary
quotas for your default tablespace.
If a user's quota allows it, the user's tables
are imported into the same tablespace from which they were exported. However,
if the tablespace no longer exists or the user does not have the necessary
quota, the system uses the default tablespace for that user as long as the
table is unpartitioned, contains no LOB or VARRAY columns, is not a type table, and is not an
index-only table with an overflow segment. This scenario can be used to move a
user's tables from one tablespace to another.
For example, suppose you need to move joe's tables from tablespace A to tablespace B after a full database export. Follow these
steps:
1. If joe has the UNLIMITED TABLESPACE privilege, revoke it. Set joe's quota on tablespace A to zero. Also revoke all roles that might have such privileges or
quotas.
Role revokes do not cascade. Therefore, users
who were granted other roles by joe will be unaffected.
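A sketch of the SQL for step 1 and the remaining workflow, reconstructed from the description above rather than taken verbatim from the original step list (object names are illustrative):

REVOKE UNLIMITED TABLESPACE FROM joe;
ALTER USER joe QUOTA 0 ON a;

-- export joe's schema (or use the full export already taken), then drop
-- joe's tables from tablespace A and point joe at tablespace B:

ALTER USER joe DEFAULT TABLESPACE b QUOTA UNLIMITED ON b;

-- finally, import joe's tables; by default they go into joe's default tablespace:
-- imp system/manager FROMUSER=joe TOUSER=joe FILE=full.dmp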
If statistics are requested at export time and
analyzer statistics are available for a table, Export will place the ANALYZE statement to recalculate the statistics for the table into the
dump file. In most circumstances, Export will also write the precalculated
optimizer statistics for tables, indexes, and columns to the dump file. See the
description of the Export parameter STATISTICS and the Import parameter STATISTICS.
Because of the time it takes to perform an ANALYZE statement, it is usually preferable for Import to use the
precalculated optimizer statistics for a table (and its indexes and columns)
rather than executing the ANALYZE statement saved by Export. By default, Import
will always use the precalculated statistics that are found in the export dump
file.
The Export utility flags certain precalculated
statistics as questionable. See the Export parameter STATISTICS for more information. The
importer might want to import only unquestionable statistics, and might not want
to import precalculated statistics, in the following situations:
- Character
set translations between the dump file and the import client and the
import database could potentially change collating sequences that are
implicit in the precalculated statistics.
- Row
errors occurred while importing the table.
- A
partition level import is performed (column statistics will no longer be
accurate).
In certain situations, the importer might want
to always use ANALYZE statements rather than precalculated statistics.
For example, the statistics gathered from a fragmented database may not be
relevant when the data is imported in a compressed form. In these cases, the
importer should specify STATISTICS=RECALCULATE to force the
recalculation of statistics.
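For example (illustrative names):

imp scott/tiger FILE=scott.dmp TABLES=(emp) STATISTICS=RECALCULATE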
When you use the Export and Import utilities to
migrate a large database, it may be more efficient to partition the migration
into multiple export and import jobs. If you decide to partition the migration,
be aware of the following advantages and disadvantages.
- Time
required for the migration may be reduced because many of the subjobs can
be run in parallel.
- The
import can start as soon as the first export subjob completes, rather than
waiting for the entire export to complete.
- The
export and import processes become more complex.
- Support
of cross-schema references for certain types of objects may be
compromised. For example, if a schema contains a table with a foreign key
constraint against a table in a different schema, you may not have all
required parent records when you import the table into the dependent
schema.
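One possible way to partition the jobs (a sketch; schema and file names are illustrative) is a definitions-only full export, named full.dmp here, plus a separate data export for each schema:

exp system/manager FULL=y ROWS=n FILE=full.dmp
exp system/manager OWNER=scott FILE=scott.dmp
exp system/manager OWNER=hr FILE=hr.dmp

imp system/manager FULL=y FILE=full.dmp
imp system/manager FROMUSER=scott FILE=scott.dmp IGNORE=y
imp system/manager FROMUSER=hr FILE=hr.dmp IGNORE=y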
All exports can be done in parallel. When the
import of full.dmp completes, all remaining imports can also be
done in parallel.
The following sections describe considerations
when you import data from earlier versions of the Oracle database server into
an Oracle9i server.
This section describes guidelines and
restrictions that apply when you import data from an Oracle version 7 database
into an Oracle9i server.
In Oracle9i, check constraints on DATE columns must use the TO_DATE function to specify the format of the date. Because this function
was not required in versions prior to Oracle8i, data imported from an
earlier Oracle database might not have used the TO_DATE function. In such cases, the constraints are imported into the
Oracle9i database, but they are flagged in the dictionary
as invalid.
The catalog views DBA_CONSTRAINTS, USER_CONSTRAINTS, and ALL_CONSTRAINTS can be used to identify such constraints. Import
issues a warning message if invalid date constraints are in the database.
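For example (table and constraint names are illustrative), a date check constraint written the way Oracle9i expects:

ALTER TABLE emp ADD CONSTRAINT hiredate_ck
  CHECK (hiredate >= TO_DATE('01-JAN-1990', 'DD-MON-YYYY'));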
This section describes guidelines and
restrictions that apply when you import data from an Oracle version 6 database
into an Oracle9i server.
When user definitions are imported into an
Oracle database, they are created with the CREATE USER statement. So, when importing from export files created by
previous versions of Export, users are not granted CREATE SESSION privileges automatically.
A table with a default column value that is
longer than the maximum size of that column generates the following error on
import to Oracle9i:
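The error is the standard "value too large" error, typically:

ORA-01401: inserted value too large for column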
Oracle version 6 did not check the columns in a CREATE
TABLE statement to be sure they were long enough to
hold their default values so these tables could be imported into a version 6
database. The Oracle9i server does make this check, however. As a
result, column defaults that could be imported into a version 6 database may
not import into Oracle9i.
If the default is a value returned by a
function, the column must be large enough to hold the maximum value that can be
returned by that function. Otherwise, the CREATE TABLE statement recorded in the export file produces an error on import.
Oracle9i Import reads Export dump
files created by Oracle release 5.1.22 and higher. Keep in mind the following:
- CHAR columns
are automatically converted to VARCHAR2.
- NOT NULL constraints
are imported as ENABLED.
- Import
automatically creates an index on any clusters to be imported.
- Export
dump files can be read only by the Import utility because they are stored
in a special binary format.
- Any
export dump file can be imported into a higher release of the Oracle
database server.
- Export
dump files cannot be read by previous versions and releases of the Import
utility. Therefore, a release 8.1 export file cannot be imported by a
release 8.0 Import utility and a version 8 export dump file cannot be
imported by a version 7 Import utility.
- The
Import utility can read export dump files created by Export release 5.1.22
and higher.
- The
Import utility cannot read export dump files created by the Export utility
of a higher maintenance release or version. For example, a release 8.1
export dump file cannot be imported by a release 8.0 Import utility, and a
version 8 export dump file cannot be imported by a version 7 Import
utility.
- The
Oracle version 6 (or earlier) Export utility cannot be used against an
Oracle8 or higher database.
- Whenever
a lower version of the Export utility runs with a higher version of the
Oracle database server, categories of database objects that did not exist
in the lower version are excluded from the export. For example,
partitioned tables did not exist in the Oracle database server version 7.
So, if you need to move a version 8 partitioned table to a version 7
database, you must first reorganize the table into a nonpartitioned table.
- Export
files generated by Oracle9i Export, either direct path or conventional path, are
incompatible with earlier releases of Import and can be imported only with
Oracle9i Import. When backward compatibility is an issue, use
the earlier release or version of the Export utility against the Oracle9i database.
- You
cannot import job queues from a release 8.1.7 database into earlier
releases of the database. Therefore, you must manually restart your jobs
after the import is finished.
The CHARSET parameter applies to Oracle version 5 and 6
export files only. Use of this parameter is not recommended. It is provided only for compatibility with previous
versions. Eventually, it will no longer be supported.
Oracle version 5 and 6 export files do not
contain the database character set identifier. However, a version 5 or 6 export
file does indicate whether the user session character set was ASCII or EBCDIC.
Use this parameter to indicate the actual
character set used at export time. The Import utility will verify whether the
specified character set is ASCII or EBCDIC based on the character set in the
export file.
If you do not specify a value for the CHARSET parameter and the export file is ASCII, Import will verify that
the user session character set is ASCII. Or, if the export file is EBCDIC,
Import will verify that the user session character set is EBCDIC.
If you are using a version of Oracle greater
than version 5 or 6, the character set is specified within the export file, and
conversion to the current database's character set is automatic. Specification
of this parameter serves only as a check to ensure that the export file
character set matches the character set specified by the user. If not, an
error results.
Importing Schemas

Note: Deprecated; replaced by Data Pump.

- Import help
  Syntax:  imp -help
  Example: imp -help
- Import Authority
  Syntax:  imp userid=<schema_owner/password>
  Example: imp uwclass/uwclass
- Import File Name
  Syntax:  imp userid=<schema_owner/password> FILE=<file_name>
  Example: imp uwclass/uwclass file=c:\temp\uw_test.dmp
- Log File Name
  Syntax:  imp userid=<schema_owner/password> LOG=<file_name>
  Example: imp uwclass/uwclass log=c:\temp\uw.log
- Buffer Size (O/S dependent and can usually be ignored; buffer = rows_in_array * maximum_row_size)
  Syntax:  imp userid=<schema_owner/password> BUFFER=<integer>
  Example: imp uwclass/uwclass buffer=64000
- Commit
  Syntax:  imp userid=<schema_owner/password> COMMIT=<Y|N>
  Example: imp uwclass/uwclass commit=Y
- Compile
  Syntax:  imp userid=<schema_owner/password> COMPILE=<Y|N>
  Example: imp uwclass/uwclass compile=N
- Constraints
  Syntax:  imp userid=<schema_owner/password> CONSTRAINTS=<Y|N>
  Example: imp uwclass/uwclass constraints=N
- Data Files
  Syntax:  imp userid=<schema_owner/password> DATAFILES=(df1, df2, ... df#)
  Example: imp uwclass/uwclass datafiles=('c:\oradata\datasml1.dbf')
- Destroy
  Syntax:  imp userid=<schema_owner/password> DESTROY=<Y|N>
  Example: imp uwclass/uwclass destroy=Y
- Feedback
  Syntax:  imp userid=<schema_owner/password> FEEDBACK=<integer> (default 0)
  Example: imp uwclass/uwclass feedback=100
- File Size (if FILESIZE was used when exporting, the value must match the export specification)
  Syntax:  imp userid=<schema_owner/password> FILESIZE=<integer>[KB|MB]
  Example: imp uwclass/uwclass filesize=100MB
- From User
  Syntax:  imp userid=<schema_owner/password> FROMUSER=<schema_name>
  Example: imp uwclass/uwclass fromuser=uwclass touser=abc
- Full
  Syntax:  imp userid=<schema_owner/password> FULL=<Y|N>
  Example: imp uwclass/uwclass full=N
- Grants
  Syntax:  imp userid=<schema_owner/password> GRANTS=<Y|N>
  Example: imp uwclass/uwclass grants=N
- Ignore
  Syntax:  imp userid=<schema_owner/password> IGNORE=<Y|N>
  Example: imp uwclass/uwclass ignore=Y
- Indexes
  Syntax:  imp userid=<schema_owner/password> INDEXES=<Y|N>
  Example: imp uwclass/uwclass indexes=N
- Index File (creates a file of index creation statements; the INDEXES parameter must be N)
  Syntax:  imp userid=<schema_owner/password> INDEXFILE=<file_name>
  Example: imp uwclass/uwclass indexfile=c:\temp\index_dml.sql
- Parameter File
  Syntax:  imp userid=<schema_owner/password> PARFILE=<parameter_file_name>
  Example: imp uwclass/uwclass parfile=c:\temp\uwparfile.ctl
- Record Length
  Syntax:  imp userid=<schema_owner/password> RECORDLENGTH=<bytes>
  Example: imp uwclass/uwclass recordlength=32000
- Resumable
  Syntax:  imp userid=<schema_owner/password> RESUMABLE=<Y|N>
  Example: imp uwclass/uwclass resumable=Y
- Resumable Name (default: 'User USERNAME (USERID), Session SESSIONID, Instance INSTANCEID')
  Syntax:  imp userid=<schema_owner/password> RESUMABLE_NAME=<text>
  Example: imp uwclass/uwclass resumable_name='UWCLASS'
- Resumable Timeout (default 7200 seconds = 2 hours)
  Syntax:  imp userid=<schema_owner/password> RESUMABLE_TIMEOUT=<seconds>
  Example: imp uwclass/uwclass resumable_timeout=18000
- Rows
  Syntax:  imp userid=<schema_owner/password> ROWS=<Y|N>
  Example: imp uwclass/uwclass rows=N
- Show
  Syntax:  imp userid=<schema_owner/password> SHOW=<Y|N>
  Example: imp uwclass/uwclass show=Y
- Skip Unusable Indexes (default is taken from the corresponding spfile parameter)
  Syntax:  imp userid=<schema_owner/password> SKIP_UNUSABLE_INDEXES=<Y|N>
  Example: imp uwclass/uwclass skip_unusable_indexes=N
- Statistics
  Syntax:  imp userid=<schema_owner/password> STATISTICS=<ALWAYS|NONE|RECALCULATE|SAFE>
  Example: imp uwclass/uwclass statistics=RECALCULATE
- Streams Configuration
  Syntax:  imp userid=<schema_owner/password> STREAMS_CONFIGURATION=<Y|N>
  Example: imp uwclass/uwclass streams_configuration=N
- Streams Instantiation
  Syntax:  imp userid=<schema_owner/password> STREAMS_INSTANTIATION=<Y|N>
  Example: imp uwclass/uwclass streams_instantiation=Y
- Tables
  Syntax:  imp userid=<schema_owner/password> TABLES=(t1, t2, ... t#)
  Example: imp uwclass/uwclass tables=(emp, dept, bonus)
- Tablespaces
  Syntax:  imp userid=<schema_owner/password> TABLESPACES=(ts1, ts2, ... ts#)
  Example: imp uwclass/uwclass tablespaces=(ts1, ts2)
- Table Object ID No Validate
  Syntax:  imp userid=<schema_owner/password> TOID_NOVALIDATE=([schemaname.]typename [, ...])
  Example: imp uwclass/uwclass TABLES=jobs TOID_NOVALIDATE=typ1
- To User
  Syntax:  imp userid=<schema_owner/password> TOUSER=<schema_name>
  Example: imp uwclass/uwclass fromuser=uwclass touser=abc
- Transport Tablespace
  Syntax:  imp userid=<schema_owner/password> TRANSPORT_TABLESPACE=<Y|N>
  Example: imp uwclass/uwclass transport_tablespace=Y
- TTS Owners
  Syntax:  imp userid=<schema_owner/password> TTS_OWNERS=(o1, o2, ... o#)
  Example: imp uwclass/uwclass tts_owners=(uwclass, ids)
- Volume Size
  Syntax:  imp userid=<schema_owner/password> VOLSIZE=<integer>[KB|MB|GB]
  Example: imp uwclass/uwclass volsize=10GB
- Write DDL From DMP file
  Syntax:  imp userid=<schema_owner/password> full=Y show=Y 2> <path_and_file>
  Example: imp uwclass/uwclass file=c:\temp\uwclass.dmp full=Y show=Y 2> c:\temp\uwclass.sql