
Friday, 5 May 2017

Oracle Import Utility

This chapter describes how to use the Import utility to read an export file into an Oracle database. Import reads only files created by the Export utility; for information on how to export a database, see the companion Export documentation.
This chapter discusses the following topics:
  • What Is the Import Utility?
  • Before Using Import
  • Importing into Existing Tables
  • Effect of Schema and Database Triggers on Import Operations
  • Invoking Import
  • Import Modes
  • Getting Online Help
  • Import Parameters
  • Example Import Sessions
  • Using the Interactive Method
  • Warning, Error, and Completion Messages
  • Exit Codes for Inspection and Display
  • Error Handling During an Import
  • Table-Level and Partition-Level Import
  • Controlling Index Creation and Maintenance
  • Reducing Database Fragmentation
  • Network Considerations
  • Character Set and Globalization Support Considerations
  • Considerations When Importing Database Objects
  • Materialized Views and Snapshots
  • Transportable Tablespaces
  • Storage Parameters
  • Dropping a Tablespace
  • Reorganizing Tablespaces
  • Importing Statistics
  • Using Export and Import to Partition a Database Migration
  • Using Export Files from a Previous Oracle Release
What Is the Import Utility?
The Import utility reads the object definitions and table data from an Export dump file. It inserts the data objects into an Oracle database.
Figure 2-1 illustrates the process of importing from an Export dump file.
Figure 2-1 Importing an Export File
Export dump files can only be read by the Oracle Import utility. The version of the Import utility cannot be earlier than the version of the Export utility used to create the dump file.
Import can read export files created by Export release 5.1.22 and higher.
To load data from ASCII fixed-format or delimited files, use the SQL*Loader utility.
Table Objects: Order of Import
Table objects are imported as they are read from the export file. The export file contains objects in the following order:
1.     Type definitions
2.     Table definitions
3.     Table data
4.     Table indexes
5.     Integrity constraints, views, procedures, and triggers
6.     Bitmap, functional, and domain indexes
First, new tables are created. Then, data is imported and indexes are built. Then triggers are imported, integrity constraints are enabled on the new tables, and any bitmap, functional, and/or domain indexes are built. This sequence prevents data from being rejected due to the order in which tables are imported. This sequence also prevents redundant triggers from firing twice on the same data (once when it is originally inserted and again during the import).
For example, if the emp table has a referential integrity constraint on the dept table and the emp table is imported first, all emp rows that reference departments that have not yet been imported into dept would be rejected if the constraints were enabled.
When data is imported into existing tables, however, the order of import can still produce referential integrity failures. In the situation just given, if the emp table already existed and referential integrity constraints were in force, many rows could be rejected.
A similar situation occurs when a referential integrity constraint on a table references itself. For example, if scott's manager in the emp table is drake, and drake's row has not yet been loaded, scott's row will fail, even though it would be valid at the end of the import.

Note:
For the reasons mentioned previously, it is a good idea to disable referential constraints when importing into an existing table. You can then reenable the constraints after the import is completed.
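For example, a minimal sketch (the table and constraint names are illustrative):

ALTER TABLE emp DISABLE CONSTRAINT emp_mgr_fk;
-- perform the import, then:
ALTER TABLE emp ENABLE CONSTRAINT emp_mgr_fk;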

Before Using Import
Before you begin using Import, be sure you take care of the following items:
  • Run the catexp.sql or catalog.sql script
  • Verify that you have the required access privileges
Running catexp.sql or catalog.sql
To use Import, you must run either the script catexp.sql or catalog.sql (which runs catexp.sql) after the database has been created or migrated to Oracle9i.

Note:
The actual names of the script files depend on your operating system. The script filenames and the method for running them are described in your Oracle operating system-specific documentation.

The catexp.sql or catalog.sql script needs to be run only once on a database. You do not need to run either script again before performing future import operations. Both scripts perform the following tasks to prepare the database for Import:
  • Assign all necessary privileges to the IMP_FULL_DATABASE role.
  • Assign IMP_FULL_DATABASE to the DBA role.
  • Create required views of the data dictionary.
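For example, on many platforms the scripts are found under rdbms/admin in the Oracle home and can be run from SQL*Plus as a privileged user (a sketch; check your operating system-specific documentation for the exact location):

CONNECT / AS SYSDBA
@?/rdbms/admin/catalog.sql    -- in SQL*Plus, ? expands to ORACLE_HOME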
Verifying Access Privileges
This section describes the privileges you need to use the Import utility and to import objects into your own and others' schemas.
To use Import, you need the privilege CREATE SESSION to log on to the Oracle database server. This privilege belongs to the CONNECT role established during database creation.
You can do an import even if you did not create the export file. However, keep in mind that if the export file was created by a user with EXP_FULL_DATABASE privilege, then you must have IMP_FULL_DATABASE privilege to import it. Both of these privileges are typically assigned to DBAs.
Importing Objects into Your Own Schema
Table 2-1 lists the privileges required to import objects into your own schema. All of these privileges initially belong to the RESOURCE role.
Table 2-1  Privileges Required to Import Objects into Your Own Schema (Object: Required Privilege, with Privilege Type If Applicable)
  • Clusters: CREATE CLUSTER (System) and tablespace quota, or UNLIMITED TABLESPACE (System)
  • Database links: CREATE DATABASE LINK (System) and CREATE SESSION (System) on remote database
  • Triggers on tables: CREATE TRIGGER (System)
  • Triggers on schemas: CREATE ANY TRIGGER (System)
  • Indexes: CREATE INDEX (System) and tablespace quota, or UNLIMITED TABLESPACE (System)
  • Integrity constraints: ALTER TABLE (Object)
  • Libraries: CREATE ANY LIBRARY (System)
  • Packages: CREATE PROCEDURE (System)
  • Private synonyms: CREATE SYNONYM (System)
  • Sequences: CREATE SEQUENCE (System)
  • Snapshots: CREATE SNAPSHOT (System)
  • Stored functions: CREATE PROCEDURE (System)
  • Stored procedures: CREATE PROCEDURE (System)
  • Table data: INSERT TABLE (Object)
  • Table definitions (including comments and audit options): CREATE TABLE (System) and tablespace quota, or UNLIMITED TABLESPACE (System)
  • Views: CREATE VIEW (System) and SELECT (Object) on the base table, or SELECT ANY TABLE (System)
  • Object types: CREATE TYPE (System)
  • Foreign function libraries: CREATE LIBRARY (System)
  • Dimensions: CREATE DIMENSION (System)
  • Operators: CREATE OPERATOR (System)
  • Indextypes: CREATE INDEXTYPE (System)
Importing Grants
To import the privileges that a user has granted to others, the user initiating the import must either own the objects or have object privileges with the WITH GRANT OPTION. Table 2-2 shows the required conditions for the authorizations to be valid on the target system.
Table 2-2  Privileges Required to Import Grants (Grant: Conditions)
  • Object privileges: The object must exist in the user's schema, or the user must have the object privileges with the WITH GRANT OPTION, or the user must have the IMP_FULL_DATABASE role enabled.
  • System privileges: The user must have the system privilege as well as the WITH ADMIN OPTION.
Importing Objects into Other Schemas
To import objects into another user's schema, you must have the IMP_FULL_DATABASE role enabled.
Importing System Objects
To import system objects from a full database export file, the role IMP_FULL_DATABASE must be enabled. The parameter FULL specifies that these system objects are included in the import when the export file is a full export:
  • Profiles
  • Public database links
  • Public synonyms
  • Roles
  • Rollback segment definitions
  • Resource costs
  • Foreign function libraries
  • Context objects
  • System procedural objects
  • System audit options
  • System privileges
  • Tablespace definitions
  • Tablespace quotas
  • User definitions
  • Directory aliases
  • System event triggers
Importing into Existing Tables
This section describes factors to take into account when you import data into existing tables.
Manually Creating Tables Before Importing Data
When you choose to create tables manually before importing data into them from an export file, you should use either the same table definition previously used or a compatible format. For example, although you can increase the width of columns and change their order, you cannot do the following:
  • Add NOT NULL columns
  • Change the datatype of a column to an incompatible datatype (LONG to NUMBER, for example)
  • Change the definition of object types used in a table
  • Change DEFAULT column values

Note:
When tables are manually created before data is imported, the CREATE TABLE statement in the export dump file will fail because the table already exists. To avoid this failure and continue loading data into the table, set the import parameter IGNORE=y. Otherwise, no data will be loaded into the table because of the table creation error.
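For example, a sketch in which a table is pre-created with wider columns and the exported rows are then loaded into it (all names are illustrative):

CREATE TABLE emp (
  empno  NUMBER(4),
  ename  VARCHAR2(40),   -- widened relative to the exported definition
  deptno NUMBER(2)
);

imp scott/tiger FILE=expdat.dmp TABLES=(emp) IGNORE=y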

Disabling Referential Constraints
In the normal import order, referential constraints are imported only after all tables are imported. This sequence prevents errors that could occur if a referential integrity constraint existed for data that has not yet been imported.
These errors can still occur when data is loaded into existing tables. For example, if table emp has a referential integrity constraint on the mgr column that verifies that the manager number exists in emp, a perfectly legitimate employee row might fail the referential integrity constraint if the manager's row has not yet been imported.
When such an error occurs, Import generates an error message, bypasses the failed row, and continues importing other rows in the table. You can disable constraints manually to avoid this.
Referential constraints between tables can also cause problems. For example, if the emp table appears before the dept table in the export file, but a referential check exists from the emp table into the dept table, some of the rows from the emp table may not be imported due to a referential constraint violation.
To prevent errors like these, you should disable referential integrity constraints when importing data into existing tables.
Manually Ordering the Import
When the constraints are reenabled after importing, the entire table is checked, which may take a long time for a large table. If the time required for that check is too long, it may be beneficial to order the import manually.
To do so, perform several imports from an export file instead of one. First, import tables that are the targets of referential checks. Then, import the tables that reference them. This option works if tables do not reference each other in a circular fashion, and if a table does not reference itself.
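For example, assuming emp has a referential constraint on dept, a two-pass sketch (credentials, file, and table names are illustrative):

imp scott/tiger FILE=expdat.dmp TABLES=(dept) IGNORE=y
imp scott/tiger FILE=expdat.dmp TABLES=(emp) IGNORE=y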
Effect of Schema and Database Triggers on Import Operations
Triggers that are defined to trigger on DDL events for a specific schema or on DDL-related events for the database are system triggers. These triggers can have detrimental effects on certain Import operations. For example, they can prevent successful re-creation of database objects, such as tables. This causes errors to be returned that give no indication that a trigger caused the problem.
Database administrators and anyone creating system triggers should verify that such triggers do not prevent users from performing database operations for which they are authorized. To test a system trigger, take the following steps:
1.     Define the trigger.
2.     Create some database objects.
3.     Export the objects in table or user mode.
4.     Delete the objects.
5.     Import the objects.
6.     Verify that the objects have been successfully re-created.

Note:
A full export does not export triggers owned by schema SYS. You must manually re-create SYS triggers either before or after the full import. Oracle Corporation recommends that you re-create them after the import in case they define actions that would impede progress of the import.

Invoking Import
You can invoke Import and specify parameters by using any of the following methods:
  • Command-line entries
  • Interactive Import prompts
  • Parameter files
Before you use one of these methods to invoke Import, be sure to read the descriptions of the available parameters.
Command-Line Entries
You can specify all valid parameters and their values from the command line using the following syntax:
imp username/password PARAMETER=value

or
imp username/password PARAMETER=(value1,value2,...,valuen)

The number of parameters cannot exceed the maximum length of a command line on the system.
Interactive Import Prompts
If you prefer to let Import prompt you for the value of each parameter, you can use the following syntax to start Import in interactive mode:
imp username/password

Import will display each parameter with a request for you to enter a value. This method exists for backward compatibility and is not recommended because it provides less functionality than the other methods.
Parameter Files
You can specify all valid parameters and their values in a parameter file. Storing the parameters in a file allows them to be easily modified or reused, and is the recommended method for invoking Import. If you use different parameters for different databases, you can have multiple parameter files.
Create the parameter file using any flat file text editor. The command-line option PARFILE=filename tells Import to read the parameters from the specified file rather than from the command line. For example:
imp PARFILE=filename
imp username/password PARFILE=filename

The first example does not specify the username/password on the command line to illustrate that you can specify them in the parameter file, although, for security reasons, this is not recommended.
The syntax for parameter file specifications is one of the following:
PARAMETER=value
PARAMETER=(value)
PARAMETER=(value1, value2, ...)

The following example shows a partial parameter file listing:
FULL=y
FILE=dba.dmp
INDEXES=y
CONSISTENT=y

Note:
The maximum size of the parameter file may be limited by the operating system. The name of the parameter file is subject to the file-naming conventions of the operating system. See your Oracle operating system-specific documentation for more information.

You can add comments to the parameter file by preceding them with the pound (#) sign. Import ignores all characters to the right of the pound (#) sign.
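For example, a small parameter file using comments (all values are illustrative):

# import scott's schema without rebuilding indexes
FILE=scott.dmp
FROMUSER=scott
TOUSER=scott
INDEXES=n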
You can specify a parameter file at the same time that you are entering parameters on the command line. In fact, you can specify the same parameter in both places. The position of the PARFILE parameter and other parameters on the command line determines which parameters take precedence. For example, assume the parameter file params.dat contains the parameter INDEXES=y and Import is invoked with the following line:
imp username/password PARFILE=params.dat INDEXES=n

In this case, because INDEXES=n occurs after PARFILE=params.dat, INDEXES=n overrides the value of the INDEXES parameter in the parameter file.
Invoking Import As SYSDBA
SYSDBA is used internally and has specialized functions; its behavior is not the same as for generalized users. Therefore, you should not typically need to invoke Import as SYSDBA, except in the following situations:
  • At the request of Oracle technical support
  • When importing a transportable tablespace set
To invoke Import as SYSDBA, use the following syntax, adding any desired parameters or parameter filenames:
imp \'username/password AS SYSDBA\'

Optionally, you could also specify an instance name:
imp \'username/password@instance AS SYSDBA\'

If either the username or password is omitted, Import will prompt you for it.
This example shows the entire connect string enclosed in quotation marks and backslashes. This is because the string, AS SYSDBA, contains a blank, a situation for which most operating systems require that the entire connect string be placed in quotation marks or marked as a literal by some method. Some operating systems also require that quotation marks on the command line be preceded by an escape character. In this example, backslashes are used as the escape character. If the backslashes were not present, the command-line parser that Import uses would not understand the quotation marks and would remove them before calling Import.
See your Oracle operating system-specific documentation for more information about special and reserved characters on your system.
If you prefer to use the Import interactive mode, see Using the Interactive Method.
Import Modes
The Import utility provides four modes of import.
  • Full--Only users with the IMP_FULL_DATABASE role can import in this mode, which imports a full database export dump file. Use the FULL parameter to specify this mode.
  • Tablespace--allows a privileged user to move a set of tablespaces from one Oracle database to another. Use the TRANSPORT_TABLESPACE parameter to specify this mode.
  • User (Owner)--allows you to import all objects that belong to you (such as tables, grants, indexes, and procedures). A privileged user importing in user mode can import all objects in the schemas of a specified set of users. Use the FROMUSER parameter to specify this mode.
  • Table--allows you to import specific tables and partitions. A privileged user can qualify the tables by specifying the schema that contains them. Use the TABLES parameter to specify this mode.
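For example, sketches of invoking the full, user, and table modes (credentials, schema, and file names are illustrative; a tablespace-mode example appears under Transportable Tablespaces):

imp SYSTEM/password FULL=y FILE=full.dmp
imp SYSTEM/password FROMUSER=scott FILE=scott.dmp
imp scott/tiger TABLES=(emp, dept) FILE=scott.dmp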

Caution:
When you use table mode to import tables that have columns of type ANYDATA, you may receive the following error:
ORA-22370: Incorrect usage of method. Nonexistent type.
This indicates that the ANYDATA column depends on other types that are not present in the database. You must manually create dependent types in the target database before you use table mode to import tables that use the ANYDATA type.

All users can import in table mode and user mode. Users with the IMP_FULL_DATABASE role (privileged users) can import in all modes.
A user with the IMP_FULL_DATABASE role must specify one of these modes. Otherwise, an error results. If a user without the IMP_FULL_DATABASE role fails to specify one of these modes, a user-level import is performed.
The objects that are imported depend on the Import mode you choose and the mode that was used during the export.
Getting Online Help
Import provides online help. Enter imp HELP=y on the command line to invoke it.
Import Parameters
The following diagrams show the syntax for the parameters that you can specify in the parameter file or on the command line. Following the diagrams are descriptions of each parameter.
The original documentation presents these parameters as syntax diagrams: Import_start, ImpModes, ImpUserOpts, ImpTableOpts, ImpTTSOpts, ImpTTSFiles, ImpOpts, ImpOpts_continued, and ImpFileOpts.
The following sections describe parameter functionality and default values.
BUFFER
Default: operating system-dependent
The integer specified for BUFFER is the size, in bytes, of the buffer through which data rows are transferred.
BUFFER determines the number of rows in the array inserted by Import. The following formula gives an approximation of the buffer size that inserts a given array of rows:
buffer_size = rows_in_array * maximum_row_size

For tables containing LONG, LOB, BFILE, REF, ROWID, UROWID, or DATE columns, rows are inserted individually. The size of the buffer must be large enough to contain the entire row, except for LOB and LONG columns. If the buffer cannot hold the longest row in a table, Import attempts to allocate a larger buffer.
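For example, to insert arrays of roughly 1,000 rows into a table whose longest row is about 256 bytes, the formula suggests a buffer of 1000 * 256 = 256,000 bytes (a sketch; the credentials and file name are illustrative):

imp scott/tiger FILE=expdat.dmp BUFFER=256000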

Note:
See your Oracle operating system-specific documentation to determine the default value for this parameter.

CHARSET
This parameter applies to Oracle version 5 and 6 export files only. Use of this parameter is not recommended. It is provided only for compatibility with previous versions. Eventually, it will no longer be supported.
COMMIT
Default: n
Specifies whether Import should commit after each array insert. By default, Import commits only after loading each table, and Import performs a rollback when an error occurs, before continuing with the next object.
If a table has nested table columns or attributes, the contents of the nested tables are imported as separate tables. Therefore, the contents of the nested tables are always committed in a transaction distinct from the transaction used to commit the outer table.
If COMMIT=n and a table is partitioned, each partition and subpartition in the Export file is imported in a separate transaction.
Specifying COMMIT=y prevents rollback segments from growing inordinately large and improves the performance of large imports. Specifying COMMIT=y is advisable if the table has a uniqueness constraint. If the import is restarted, any rows that have already been imported are rejected with a recoverable error.
If a table does not have a uniqueness constraint, Import could produce duplicate rows when you reimport the data.
For tables containing LONG, LOB, BFILE, REF, ROWID, UROWID, or DATE columns, array inserts are not done. If COMMIT=y, Import commits these tables after each row.
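For example, a large import that commits after each array insert might look like this (a sketch; all values are illustrative):

imp scott/tiger FILE=big.dmp COMMIT=y BUFFER=1048576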
COMPILE
Default: y
Specifies whether or not Import should compile packages, procedures, and functions as they are created.
If COMPILE=n, these units are compiled on their first use. For example, packages that are used to build domain indexes are compiled when the domain indexes are created.
CONSTRAINTS
Default: y
Specifies whether or not table constraints are to be imported. The default is to import constraints. If you do not want constraints to be imported, you must set the parameter value to n.
Note that primary key constraints for index-organized tables (IOTs) and object tables are always imported.
DATAFILES
Default: none
When TRANSPORT_TABLESPACE is specified as y, use this parameter to list the datafiles to be transported into the database.
DESTROY
Default: n
Specifies whether or not the existing datafiles making up the database should be reused. That is, specifying DESTROY=y causes Import to include the REUSE option in the datafile clause of the CREATE TABLESPACE statement, which causes Import to reuse the original database's datafiles after deleting their contents.
Note that the export file contains the datafile names used in each tablespace. If you specify DESTROY=y and attempt to create a second database on the same system (for testing or other purposes), the Import utility will overwrite the first database's datafiles when it creates the tablespace. In this situation you should use the default, DESTROY=n, so that an error occurs if the datafiles already exist when the tablespace is created. Also, when you need to import into the original database, you will need to specify IGNORE=y to add to the existing datafiles without replacing them.

Caution:
If datafiles are stored on a raw device, DESTROY=n does not prevent files from being overwritten.

FEEDBACK
Default: 0 (zero)
Specifies that Import should display a progress meter in the form of a period for n number of rows imported. For example, if you specify FEEDBACK=10, Import displays a period each time 10 rows have been imported. The FEEDBACK value applies to all tables being imported; it cannot be set on a per-table basis.
FILE
Default: expdat.dmp
Specifies the names of the export files to import. The default extension is .dmp. Because Export supports multiple export files (see the following description of the FILESIZE parameter), you may need to specify multiple filenames to be imported. For example:
imp scott/tiger IGNORE=y FILE = dat1.dmp, dat2.dmp, dat3.dmp FILESIZE=2048

You need not be the user who exported the export files; however, you must have read access to the files. If you were not the exporter of the export files, you must also have the IMP_FULL_DATABASE role granted to you.
FILESIZE
Default: operating-system dependent
Export supports writing to multiple export files, and Import can read from multiple export files. If, on export, you specify a value (byte limit) for the Export FILESIZE parameter, Export will write only the number of bytes you specify to each dump file. On import, you must use the Import parameter FILESIZE to tell Import the maximum dump file size you specified on export.

Note:
The maximum value that can be stored in a file is operating system-dependent. You should verify this maximum value in your Oracle operating system-specific documentation before specifying FILESIZE.

The FILESIZE value can be specified as a number followed by KB (number of kilobytes). For example, FILESIZE=2KB is the same as FILESIZE=2048. Similarly, MB specifies megabytes (1024 * 1024) and GB specifies gigabytes (1024**3). B remains the shorthand for bytes; the number is not multiplied to obtain the final file size (FILESIZE=2048B is the same as FILESIZE=2048).
FROMUSER
Default: none
A comma-separated list of schemas to import. This parameter is relevant only to users with the IMP_FULL_DATABASE role. The parameter enables you to import a subset of schemas from an export file containing multiple schemas (for example, a full export dump file or a multischema, user-mode export dump file).
Schema names that appear inside functional indexes, functions, procedures, triggers, type bodies, views, and so on, are not affected by FROMUSER or TOUSER processing. Only the name of the object is affected. After the import has completed, items in any imported schema should be checked manually for remaining references to the old schema name and corrected if necessary.

EMPNO      DEPTNO   
---------- ----------
      7369         20
      7566         20
      7788         20
      7876         20
      7902         20
5 rows selected.
SQL> select empno, deptno from emp partition (dept_high);
EMPNO      DEPTNO   
---------- ----------
      7499         30
      7521         30
      7654         30
      7698         30
      7844         30
      7900         30
6 rows selected.
SQL> exit;
Example Import of Using Pattern Matching to Import Various Tables
In this example, pattern matching is used to import various tables for user scott.
Parameter File Method
imp SYSTEM/password PARFILE=params.dat

The params.dat file contains the following information:
FILE=scott.dmp
IGNORE=n
GRANTS=y
ROWS=y
FROMUSER=scott
TABLES=(%d%,b%s)
Command-Line Method
imp SYSTEM/password FROMUSER=scott FILE=scott.dmp TABLES=(%d%,b%s)
Import Messages
Import: Release 9.2.0.1.0 - Production on Wed Feb 27 17:22:25 2002

Grants on views are imported even if a view has compilation errors. A view could have compilation errors if an object it depends on, such as a table, procedure, or another view, does not exist when the view is created. If a base table does not exist, the server cannot validate that the grantor has the proper privileges on the base table with the GRANT OPTION. Access violations could occur when the view is used if the grantor does not have the proper privileges after the missing tables are created.
Importing views that contain references to tables in other schemas requires that the importer have SELECT ANY TABLE privilege. If the importer has not been granted this privilege, the views will be imported in an uncompiled state. Note that granting the privilege to a role is insufficient. For the view to be compiled, the privilege must be granted directly to the importer.
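For example, a sketch of granting the privilege directly to the importing user (the username is illustrative); a grant made only through a role would not allow the views to compile:

GRANT SELECT ANY TABLE TO blake;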
Importing Partitioned Tables
Import attempts to create a partitioned table with the same partition or subpartition names as the exported partitioned table, including names of the form SYS_Pnnn. If a table with the same name already exists, Import processing depends on the value of the IGNORE parameter.
Unless SKIP_UNUSABLE_INDEXES=y, inserting the exported data into the target table fails if Import cannot update a nonpartitioned index or index partition that is marked Indexes Unusable or is otherwise not suitable.

Support for Fine-Grained Access Control
You can export tables with fine-grained access control policies enabled. When doing so, keep the following considerations in mind:
To restore the fine-grained access control policies, the user who imports from an export file containing such tables must have the following privileges:
  • EXECUTE privilege on the DBMS_RLS package so that the tables' security policies can be reinstated.
  • EXP_FULL_DATABASE role enabled or the EXEMPT ACCESS POLICY privilege granted
If a user without the correct privileges attempts to import from an export file that contains tables with fine-grained access control policies, a warning message will be issued. Therefore, it is advisable for security reasons that the exporter and importer of such tables be the DBA.
Materialized Views and Snapshots
The three interrelated objects in a snapshot system are the master table, optional snapshot log, and the snapshot itself. The tables (master table, snapshot log table definition, and snapshot tables) can be exported independently of one another. Snapshot logs can be exported only if you export the associated master table. You can export snapshots using full database or user-mode Export; you cannot use table-mode Export.
This section discusses how fast refreshes are affected when these objects are imported.
Snapshot Log
The snapshot log in a dump file is imported if the master table already exists for the database to which you are importing and it has a snapshot log.
When a ROWID snapshot log is exported, ROWIDs stored in the snapshot log have no meaning upon import. As a result, each ROWID snapshot's first attempt to do a fast refresh fails, generating an error indicating that a complete refresh is required.
To avoid the refresh error, do a complete refresh after importing a ROWID snapshot log. After you have done a complete refresh, subsequent fast refreshes will work properly. In contrast, when a primary key snapshot log is exported, the values of the primary keys do retain their meaning upon Import. Therefore, primary key snapshots can do a fast refresh after the import.
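For example, a complete refresh can be requested through the DBMS_SNAPSHOT package, where 'C' requests a complete refresh (the snapshot name is illustrative):

EXECUTE DBMS_SNAPSHOT.REFRESH('scott.emp_snap', 'C');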
Snapshots
A snapshot that has been restored from an export file has reverted to a previous state. On import, the time of the last refresh is imported as part of the snapshot table definition. The function that calculates the next refresh time is also imported.
Each refresh leaves a signature. A fast refresh uses the log entries that date from the time of that signature to bring the snapshot up to date. When the fast refresh is complete, the signature is deleted and a new signature is created. Any log entries that are not needed to refresh other snapshots are also deleted (all log entries with times before the earliest remaining signature).
Importing a Snapshot
When you restore a snapshot from an export file, you may encounter a problem under certain circumstances.
Assume that a snapshot is refreshed at time A, exported at time B, and refreshed again at time C. Then, because of corruption or other problems, the snapshot needs to be restored by dropping the snapshot and importing it again. The newly imported version has the last refresh time recorded as time A. However, log entries needed for a fast refresh may no longer exist. If the log entries do exist (because they are needed for another snapshot that has yet to be refreshed), they are used, and the fast refresh completes successfully. Otherwise, the fast refresh fails, generating an error that says a complete refresh is required.
Importing a Snapshot into a Different Schema
Snapshots, snapshot logs, and related items are exported with the schema name explicitly given in the DDL statements; therefore, snapshots and their related items cannot be imported into a different schema.
If you attempt to use FROMUSER and TOUSER to import snapshot data, an error will be written to the Import log file and the items will not be imported.
Transportable Tablespaces
Transportable tablespaces let you move a set of tablespaces from one Oracle database to another.
To do this, you must make the tablespaces read-only, copy the datafiles of these tablespaces, and use Export and Import to move the database information (metadata) stored in the data dictionary. Both the datafiles and the metadata export file must be copied to the target database. The transport of these files can be done using any facility for copying flat binary files, such as the operating system copying facility, binary-mode FTP, or publishing on CD-ROMs.
After copying the datafiles and importing the metadata, you can optionally put the tablespaces in read/write mode.
Import provides the following parameters to enable import of transportable tablespaces metadata.
  • TRANSPORT_TABLESPACE
  • TABLESPACES
  • DATAFILES
  • TTS_OWNERS
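For example, a minimal sketch of importing transportable tablespace metadata (the datafile, tablespace, and owner names are illustrative):

imp SYSTEM/password TRANSPORT_TABLESPACE=y DATAFILES=('/db/users01.dbf') TABLESPACES=(users) TTS_OWNERS=(scott)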
Storage Parameters
By default, a table is imported into its original tablespace.
If the tablespace no longer exists, or the user does not have sufficient quota in the tablespace, the system uses the default tablespace for that user, unless the table:
  • Is partitioned
  • Is a type table
  • Contains LOB, VARRAY, or OPAQUE type columns
  • Has an index-organized table (IOT) overflow segment
If the user does not have sufficient quota in the default tablespace, the user's tables are not imported.
The OPTIMAL Parameter
The storage parameter OPTIMAL for rollback segments is not preserved during export and import.
Storage Parameters for OID Indexes and LOB Columns
Tables are exported with their current storage parameters. For object tables, the OIDINDEX is created with its current storage parameters and name, if given. For tables that contain LOB, VARRAY, or OPAQUE type columns, LOB, VARRAY, or OPAQUE type data is created with their current storage parameters.
If you alter the storage parameters of existing tables prior to export, the tables are exported using those altered storage parameters. Note, however, that storage parameters for LOB data cannot be altered prior to export (for example, chunk size for a LOB column, whether a LOB column is CACHE or NOCACHE, and so forth).
Note that LOB data might not reside in the same tablespace as the containing table. The tablespace for that data must be read/write at the time of import or the table will not be imported.
If LOB data resides in a tablespace that does not exist at the time of import or the user does not have the necessary quota in that tablespace, the table will not be imported. Because there can be multiple tablespace clauses, including one for the table, Import cannot determine which tablespace clause caused the error.
Overriding Storage Parameters
Before using the Import utility to import data, you may want to create large tables with different storage parameters. If so, you must specify IGNORE=y on the command line or in the parameter file.
The Export COMPRESS Parameter
By default at export time, storage parameters are adjusted to consolidate all data into its initial extent. To preserve the original size of an initial extent, you must specify at export time that extents are not to be consolidated (by setting COMPRESS=n).
Read-Only Tablespaces
Read-only tablespaces can be exported. On import, if the tablespace does not already exist in the target database, the tablespace is created as a read/write tablespace. If you want read-only functionality, you must manually make the tablespace read-only after the import.
If the tablespace already exists in the target database and is read-only, you must make it read/write before the import.
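For example (the tablespace name is illustrative):

ALTER TABLESPACE users READ WRITE;   -- before the import
ALTER TABLESPACE users READ ONLY;    -- to restore read-only status afterward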
Dropping a Tablespace
You can drop a tablespace by redefining the objects to use different tablespaces before the import. You can then issue the imp command and specify IGNORE=y.
In many cases, you can drop a tablespace by doing a full database export, then creating a zero-block tablespace with the same name (before logging off) as the tablespace you want to drop. During import, with IGNORE=y, the relevant CREATE TABLESPACE statement will fail and prevent the creation of the unwanted tablespace.
All objects from that tablespace will be imported into their owner's default tablespace with the exception of partitioned tables, type tables, and tables that contain LOB or VARRAY columns or index-only tables with overflow segments. Import cannot determine which tablespace caused the error. Instead, you must first create a table and then import the table again, specifying IGNORE=y.
Objects are not imported into the default tablespace if the tablespace does not exist or you do not have the necessary quotas for your default tablespace.
Reorganizing Tablespaces
If a user's quota allows it, the user's tables are imported into the same tablespace from which they were exported. However, if the tablespace no longer exists or the user does not have the necessary quota, the system uses the default tablespace for that user as long as the table is unpartitioned, contains no LOB or VARRAY columns, is not a type table, and is not an index-only table with an overflow segment. This scenario can be used to move a user's tables from one tablespace to another.
For example, you need to move joe's tables from tablespace A to tablespace B after a full database export. Follow these steps:
1.     If joe has the UNLIMITED TABLESPACE privilege, revoke it. Set joe's quota on tablespace A to zero. Also revoke all roles that might have such privileges or quotas.
Role revokes do not cascade. Therefore, users who were granted other roles by joe will be unaffected.
2.     Export joe's tables.
3.     Drop joe's tables from tablespace A.
4.     Give joe a quota on tablespace B and make it the default tablespace for joe.
5.     Import joe's tables. (By default, Import puts joe's tables into tablespace B.)
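For example, a sketch of the quota changes in steps 1 and 4, using the tablespace names from the scenario above:

REVOKE UNLIMITED TABLESPACE FROM joe;
ALTER USER joe QUOTA 0 ON a;
-- after the export and drop:
ALTER USER joe DEFAULT TABLESPACE b QUOTA UNLIMITED ON b;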
Importing Statistics
If statistics are requested at export time and analyzer statistics are available for a table, Export will place the ANALYZE statement to recalculate the statistics for the table into the dump file. In most circumstances, Export will also write the precalculated optimizer statistics for tables, indexes, and columns to the dump file. See the description of the Export parameter STATISTICS and the Import parameter STATISTICS.
Because of the time it takes to perform an ANALYZE statement, it is usually preferable for Import to use the precalculated optimizer statistics for a table (and its indexes and columns) rather than executing the ANALYZE statement saved by Export. By default, Import will always use the precalculated statistics that are found in the export dump file.
The Export utility flags certain precalculated statistics as questionable. See the Export parameter, STATISTICS for more information. In certain situations, the importer might want to import only unquestionable statistics, and may not want to import precalculated statistics in the following situations:
  • Character set translations between the dump file and the import client and the import database could potentially change collating sequences that are implicit in the precalculated statistics.
  • Row errors occurred while importing the table.
  • A partition level import is performed (column statistics will no longer be accurate).

Note:
Specifying ROWS=n will not prevent the use of precalculated statistics. This feature allows plan generation for queries to be tuned in a nonproduction database using statistics from a production database. In these cases, the importer should specify STATISTICS=SAFE.

In certain situations, the importer might want to always use ANALYZE statements rather than precalculated statistics. For example, the statistics gathered from a fragmented database may not be relevant when the data is imported in a compressed form. In these cases, the importer should specify STATISTICS=RECALCULATE to force the recalculation of statistics.
If you do not want any statistics to be established by Import, you should specify STATISTICS=NONE.
Using Export and Import to Partition a Database Migration
When you use the Export and Import utilities to migrate a large database, it may be more efficient to partition the migration into multiple export and import jobs. If you decide to partition the migration, be aware of the following advantages and disadvantages.
Advantages of Partitioning a Migration
Partitioning a migration has the following advantages:
  • Time required for the migration may be reduced because many of the subjobs can be run in parallel.
  • The import can start as soon as the first export subjob completes, rather than waiting for the entire export to complete.
Disadvantages of Partitioning a Migration
Partitioning a migration has the following disadvantages:
  • The export and import processes become more complex.
  • Support of cross-schema references for certain types of objects may be compromised. For example, if a schema contains a table with a foreign key constraint against a table in a different schema, you may not have all required parent records when you import the table into the dependent schema.
How to Use Export and Import to Partition a Database Migration
To perform a database migration in a partitioned manner, take the following steps:
1.     For all top-level metadata in the database, issue the following commands:
a.      exp dba/password FILE=full FULL=y CONSTRAINTS=n TRIGGERS=n ROWS=n INDEXES=n
b.     imp dba/password FILE=full FULL=y
2.     For each schema_n in the database, issue the following commands:
a.      exp dba/password OWNER=schema_n FILE=schema_n
b.     imp dba/password FILE=schema_n FROMUSER=schema_n TOUSER=schema_n IGNORE=y
All exports can be done in parallel. When the import of full.dmp completes, all remaining imports can also be done in parallel.
Using Export Files from a Previous Oracle Release
The following sections describe considerations when you import data from earlier versions of the Oracle database server into an Oracle9i server.
Using Oracle Version 7 Export Files
This section describes guidelines and restrictions that apply when you import data from an Oracle version 7 database into an Oracle9i server.
Check Constraints on DATE Columns
In Oracle9i, check constraints on DATE columns must use the TO_DATE function to specify the format of the date. Because this function was not required in versions prior to Oracle8i, data imported from an earlier Oracle database might not have used the TO_DATE function. In such cases, the constraints are imported into the Oracle9i database, but they are flagged in the dictionary as invalid.
The catalog views DBA_CONSTRAINTS, USER_CONSTRAINTS, and ALL_CONSTRAINTS can be used to identify such constraints. Import issues a warning message if invalid date constraints are in the database.
Using Oracle Version 6 Export Files
This section describes guidelines and restrictions that apply when you import data from an Oracle version 6 database into an Oracle9i server.
User Privileges
When user definitions are imported into an Oracle database, they are created with the CREATE USER statement. So, when importing from export files created by previous versions of Export, users are not granted CREATE SESSION privileges automatically.
CHAR Columns
Oracle version 6 CHAR columns are automatically converted into the Oracle VARCHAR2 datatype.
Status of Integrity Constraints
NOT NULL constraints are imported as ENABLED. All other constraints are imported as DISABLED.
Length of Default Column Values
A table with a default column value that is longer than the maximum size of that column generates the following error on import to Oracle9i:
ORA-1401: inserted value too large for column

Oracle version 6 did not check the columns in a CREATE TABLE statement to be sure they were long enough to hold their default values, so these tables could be imported into a version 6 database. The Oracle9i server does make this check, however. As a result, column defaults that could be imported into a version 6 database may not import into Oracle9i.
If the default is a value returned by a function, the column must be large enough to hold the maximum value that can be returned by that function. Otherwise, the CREATE TABLE statement recorded in the export file produces an error on import.

Note:
The maximum value of the USER function increased in Oracle7, so columns with a default of USER may not be long enough. To determine the maximum size that the USER function returns, execute the following SQL statement:
DESCRIBE user_sys_privs

The length shown for the USERNAME column is the maximum length returned by the USER function.


Using Oracle Version 5 Export Files
Oracle9i Import reads Export dump files created by Oracle release 5.1.22 and higher. Keep in mind the following:
  • CHAR columns are automatically converted to VARCHAR2.
  • NOT NULL constraints are imported as ENABLED.
  • Import automatically creates an index on any clusters to be imported.
Restrictions When Using Different Releases and Versions of Export and Import
The following restrictions apply when you are using different releases of Export and Import:
  • Export dump files can be read only by the Import utility because they are stored in a special binary format.
  • Any export dump file can be imported into a higher release of the Oracle database server.
  • The Import utility can read export dump files created by Export release 5.1.22 and higher.
  • The Import utility cannot read export dump files created by the Export utility of a higher maintenance release or version. For example, a release 8.1 export dump file cannot be imported by a release 8.0 Import utility, and a version 8 export dump file cannot be imported by a version 7 Import utility.
  • The Oracle version 6 (or earlier) Export utility cannot be used against an Oracle8 or higher database.
  • Whenever a lower version of the Export utility runs with a higher version of the Oracle database server, categories of database objects that did not exist in the lower version are excluded from the export. For example, partitioned tables did not exist in the Oracle database server version 7. So, if you need to move a version 8 partitioned table to a version 7 database, you must first reorganize the table into a nonpartitioned table.
  • Export files generated by Oracle9i Export, either direct path or conventional path, are incompatible with earlier releases of Import and can be imported only with Oracle9i Import. When backward compatibility is an issue, use the earlier release or version of the Export utility against the Oracle9i database.
  • You cannot import job queues from a release 8.1.7 database into earlier releases of the database. Therefore, you must manually restart your jobs after the import is finished.
The CHARSET Parameter
Default: none
This parameter applies to Oracle version 5 and 6 export files only. Use of this parameter is not recommended. It is provided only for compatibility with previous versions. Eventually, it will no longer be supported.
Oracle version 5 and 6 export files do not contain the database character set identifier. However, a version 5 or 6 export file does indicate whether the user session character set was ASCII or EBCDIC.
Use this parameter to indicate the actual character set used at export time. The Import utility will verify whether the specified character set is ASCII or EBCDIC based on the character set in the export file.
If you do not specify a value for the CHARSET parameter and the export file is ASCII, Import will verify that the user session character set is ASCII. Or, if the export file is EBCDIC, Import will verify that the user session character set is EBCDIC.
If you are using a version of Oracle greater than version 5 or 6, the character set is specified within the export file, and conversion to the current database's character set is automatic. Specification of this parameter is then unnecessary.



Reference: https://docs.oracle.com/cd/B10501_01/server.920/a96652/ch02.htm 




Importing Schemas
Note: Deprecated; replaced by Data Pump.

Import Help
imp -help

Import Authority
imp userid=<schema_owner/password>
imp uwclass/uwclass

Import File Name
imp userid=<schema_owner/password> FILE=<file_name>
imp uwclass/uwclass file=c:\temp\uw_test.dmp

Log File Name
imp userid=<schema_owner/password> LOG=<file_name>
imp uwclass/uwclass log=c:\temp\uw.log

Buffer Size
-- O/S dependent and can usually be ignored
-- buffer = rows_in_array * maximum_row_size
imp userid=<schema_owner/password> BUFFER=<integer>
imp uwclass/uwclass buffer=64000

Commit
imp userid=<schema_owner/password> COMMIT=<Y|N>
imp uwclass/uwclass commit=Y

Compile
imp userid=<schema_owner/password> COMPILE=<Y|N>
imp uwclass/uwclass compile=N

Constraints
imp userid=<schema_owner/password> CONSTRAINTS=<Y|N>
imp uwclass/uwclass constraints=N

Data Files
imp userid=<schema_owner/password> DATAFILES=(df1, df2, ... df#)
imp uwclass/uwclass datafiles=('c:\oradata\datasml1.dbf')

Destroy
imp userid=<schema_owner/password> DESTROY=<Y|N>
imp uwclass/uwclass destroy=Y

Feedback
imp userid=<schema_owner/password> FEEDBACK=<integer DEFAULT 0>
imp uwclass/uwclass feedback=100

File Size
-- if used when exporting, must match the export specification
imp userid=<schema_owner/password> FILESIZE=<integer>[KB|MB]
imp uwclass/uwclass filesize=100MB

From User
imp userid=<schema_owner/password> FROMUSER=<schema_name>
imp uwclass/uwclass fromuser=uwclass touser=abc

Full
imp userid=<schema_owner/password> FULL=<Y|N>
imp uwclass/uwclass full=N

Grants
imp userid=<schema_owner/password> GRANTS=<Y|N>
imp uwclass/uwclass grants=N

Ignore
imp userid=<schema_owner/password> IGNORE=<Y|N>
imp uwclass/uwclass ignore=Y

Indexes
imp userid=<schema_owner/password> INDEXES=<Y|N>
imp uwclass/uwclass indexes=N

Index File
-- creates a file of index creation statements; the INDEXES parameter must be N
imp userid=<schema_owner/password> INDEXFILE=<file_name>
imp uwclass/uwclass indexfile=c:\temp\index_dml.sql

Parameter File
imp userid=<schema_owner/password> PARFILE=<parameter_file_name>
imp uwclass/uwclass parfile=c:\temp\uwparfile.ctl

Record Length
imp userid=<schema_owner/password> RECORDLENGTH=<bytes>
imp uwclass/uwclass recordlength=32000

Resumable
imp userid=<schema_owner/password> RESUMABLE=<Y|N>
imp uwclass/uwclass resumable=Y

Resumable Name
imp userid=<schema_owner/password> RESUMABLE_NAME='User USERNAME (USERID), Session SESSIONID, Instance INSTANCEID'
imp uwclass/uwclass resumable_name='UWCLASS'

Resumable Timeout
-- default 7200 seconds (2 hours)
imp userid=<schema_owner/password> RESUMABLE_TIMEOUT=<seconds>
imp uwclass/uwclass resumable_timeout=18000

Rows
imp userid=<schema_owner/password> ROWS=<Y|N>
imp uwclass/uwclass rows=N

Show
imp userid=<schema_owner/password> SHOW=<Y|N>
imp uwclass/uwclass show=Y

Skip Unusable Indexes
-- default is the SKIP_UNUSABLE_INDEXES value from the spfile
imp userid=<schema_owner/password> SKIP_UNUSABLE_INDEXES=<Y|N>
imp uwclass/uwclass skip_unusable_indexes=N

Statistics
imp userid=<schema_owner/password> STATISTICS=<ALWAYS | NONE | RECALCULATE | SAFE>
imp uwclass/uwclass statistics=RECALCULATE

Streams Configuration
imp userid=<schema_owner/password> STREAMS_CONFIGURATION=<Y|N>
imp uwclass/uwclass streams_configuration=N

Streams Instantiation
imp userid=<schema_owner/password> STREAMS_INSTANTIATION=<Y|N>
imp uwclass/uwclass streams_instantiation=Y

Tables
imp userid=<schema_owner/password> TABLES=(t1, t2, ... t#)
imp uwclass/uwclass tables=(emp, dept, bonus)

Tablespaces
imp userid=<schema_owner/password> TABLESPACES=(ts1, ts2, ... ts#)
imp uwclass/uwclass tablespaces=(uwdata)

Table Object ID No Validate
imp userid=<schema_owner/password> TOID_NOVALIDATE=([schemaname.]typename [, ...])
imp uwclass/uwclass tables=jobs toid_novalidate=typ1

To User
imp userid=<schema_owner/password> TOUSER=<schema_name>
imp uwclass/uwclass fromuser=uwclass touser=abc

Transport Tablespace
imp userid=<schema_owner/password> TRANSPORT_TABLESPACE=<Y|N>
imp uwclass/uwclass transport_tablespace=Y

TTS Owners
imp userid=<schema_owner/password> TTS_OWNERS=(o1, o2, ... o#)
imp uwclass/uwclass tts_owners=(uwclass, ids)

Volume Size
imp userid=<schema_owner/password> VOLSIZE=<integer>[KB|MB|GB]
imp uwclass/uwclass volsize=10GB

Write DDL From DMP File
imp userid=<schema_owner/password> full=Y show=Y 2><path_and_file>
imp uwclass/uwclass file=c:\temp\uwclass.dmp full=Y show=Y 2> c:\temp\uwclass.sql
