Data Pump Export (expdp) and Data Pump Import (impdp) differ in several important ways from the original Export (exp) and Import (imp) utilities. In particular:
- Data Pump Export and Import operate on a group of files called a dump file set rather than on a single sequential dump file (a multifile dump file set appears in the parallel export sketch after this list).
- Data Pump Export and Import access files on the server rather than on the client. This results in improved performance. It also means that directory objects are required when you specify file locations (see the directory object sketch after this list).
- The Data Pump Export and Import modes operate symmetrically, whereas original Export and Import did not always exhibit this behavior. For example, an export with FULL=Y followed by an import with SCHEMAS=HR produces the same results as an export with SCHEMAS=HR followed by an import with FULL=Y (see the symmetric modes sketch after this list).
- Data Pump Export and Import use parallel execution rather than a single stream of execution, for improved performance. This means that the order of data within dump file sets and the information in the log files is more variable (see the parallel export sketch after this list).
- Data Pump Export and Import represent metadata in the dump file set as XML documents rather than as DDL commands. This provides improved flexibility for transforming the metadata at import time (see the metadata transform sketch after this list).
- Data Pump Export and Import are self-tuning utilities. Tuning parameters that were used in original Export and Import, such as BUFFER and RECORDLENGTH, are neither required nor supported by Data Pump Export and Import.
- At import time there is no option to perform interim commits during the restoration of a partition. This was provided by the COMMIT parameter in original Import.
- There is no option to merge extents when you re-create tables. In original Import, this was provided by the COMPRESS parameter. Instead, extents are reallocated according to the storage parameters for the target table.
- Sequential media, such as tapes and pipes, are not supported.
- The Data Pump method for moving data between different database versions is different from the method used by original Export/Import. With original Export, you had to run an older version of Export (exp) to produce a dump file that was compatible with an older database version. With Data Pump, you can use the current Export (expdp) and simply use the VERSION parameter to specify the target database version (see the VERSION sketch after this list).
- When you are importing data into an existing table using either TABLE_EXISTS_ACTION=APPEND or TABLE_EXISTS_ACTION=TRUNCATE, if any row violates an active constraint, the load is discontinued and no data is loaded. This is different from original Import, which logs any rows that are in violation and continues with the load (see the existing table sketch after this list).
- Data Pump Export and Import consume more undo tablespace than original Export and Import. This is due to additional metadata queries during export and some relatively long-running master table queries during import. As a result, for databases with large amounts of metadata, you may receive an ORA-01555 (snapshot too old) error. To avoid this, consider adding undo tablespace or increasing the value of the UNDO_RETENTION initialization parameter for the database (see the undo sketch after this list).
- If a table has compression enabled, Data Pump Import attempts to compress the data being loaded. In contrast, the original Import utility loaded data in such a way that even if the table had compression enabled, the data was not compressed upon import.
- Data Pump supports character set conversion for both direct path and external tables. Most of the restrictions that exist for character set conversions in the original Import utility do not apply to Data Pump. The one case in which character set conversion is not supported by Data Pump is when you use transportable tablespaces.
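The sketches below illustrate several of the points above. They are illustrative only: the credentials, directory name and path, schema, table, and file names (system, hr, dpump_dir, /u01/app/oracle/dpump, and so on) are assumptions for this post, not values you must use.

Directory object sketch. Because Data Pump reads and writes files on the database server, you first create a directory object as a DBA, grant access on it, and then refer to it by name on the command line:

sqlplus / as sysdba <<EOF
-- hypothetical directory name, server path, and grantee
CREATE OR REPLACE DIRECTORY dpump_dir AS '/u01/app/oracle/dpump';
GRANT READ, WRITE ON DIRECTORY dpump_dir TO hr;
EXIT;
EOF

# file locations are given through the directory object, not as client-side OS paths
expdp hr/hr_pwd DIRECTORY=dpump_dir DUMPFILE=hr.dmp LOGFILE=hr_exp.log SCHEMAS=hr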
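Parallel export sketch. With PARALLEL greater than 1 you normally add the %U substitution variable so that each worker process can write its own file, which is what turns the output into a dump file set:

# four workers; %U expands to 01, 02, ... so the export produces hr01.dmp, hr02.dmp, ...
expdp system/sys_pwd DIRECTORY=dpump_dir DUMPFILE=hr%U.dmp LOGFILE=hr_par.log SCHEMAS=hr PARALLEL=4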
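Symmetric modes sketch. The two sequences below should end up loading the same HR objects, which is the symmetry described above:

# full export, then import only the HR schema from that dump file ...
expdp system/sys_pwd DIRECTORY=dpump_dir DUMPFILE=full.dmp LOGFILE=full_exp.log FULL=Y
impdp system/sys_pwd DIRECTORY=dpump_dir DUMPFILE=full.dmp LOGFILE=hr_imp.log SCHEMAS=hr

# ... gives the same result as exporting only HR and importing everything in that file
expdp system/sys_pwd DIRECTORY=dpump_dir DUMPFILE=hr.dmp LOGFILE=hr_exp.log SCHEMAS=hr
impdp system/sys_pwd DIRECTORY=dpump_dir DUMPFILE=hr.dmp LOGFILE=full_imp.log FULL=Y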
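Metadata transform sketch. Because the metadata travels as XML, impdp can rewrite it on the way in; REMAP_SCHEMA and TRANSFORM are two common examples (the hr_test target schema is an assumption):

# load HR's objects into HR_TEST and strip segment attributes (storage, tablespace,
# and so on) from the generated DDL so the target database defaults apply
impdp system/sys_pwd DIRECTORY=dpump_dir DUMPFILE=hr.dmp LOGFILE=remap_imp.log REMAP_SCHEMA=hr:hr_test TRANSFORM=SEGMENT_ATTRIBUTES:N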
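VERSION sketch. To move data to an older database you run the current expdp and set VERSION to the target release; 11.2 below is only an example value:

# write a dump file that an 11.2 database can import with its own impdp
expdp system/sys_pwd DIRECTORY=dpump_dir DUMPFILE=hr_v112.dmp LOGFILE=hr_v112.log SCHEMAS=hr VERSION=11.2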
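Existing table sketch. Loading into a table that already exists is controlled by TABLE_EXISTS_ACTION; with APPEND (or TRUNCATE), a row that violates an active constraint stops the load of that table:

# append into the existing EMPLOYEES table (table name is hypothetical)
impdp hr/hr_pwd DIRECTORY=dpump_dir DUMPFILE=hr.dmp LOGFILE=emp_imp.log TABLES=employees TABLE_EXISTS_ACTION=APPEND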
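Undo sketch. If a long Data Pump job hits ORA-01555, the usual remedies are more undo space or a longer UNDO_RETENTION; the tablespace name, datafile path, size, and retention value below are placeholders:

sqlplus / as sysdba <<EOF
-- give the undo tablespace more room
ALTER TABLESPACE undotbs1 ADD DATAFILE '/u01/app/oracle/oradata/orcl/undotbs02.dbf' SIZE 4G AUTOEXTEND ON;
-- and/or keep committed undo around longer (value is in seconds)
ALTER SYSTEM SET UNDO_RETENTION = 3600;
EXIT;
EOF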
Can we import the data using imp when the file was created using expdp?
No, you cannot. A dump file set written by expdp can only be read by impdp; it is not compatible with the original imp utility.