
ORACLE 11g

Overview of Data Pump:


What is Data Pump?
Features of Data Pump
Architecture
Enhancements
Things to keep in mind

What is Data Pump?


Data Pump is an Oracle Database utility that allows fast and easy data transfer. The Data Pump Export and Import utilities are much faster than the original Export and Import commands. Whether a Data Pump job is stopped voluntarily or involuntarily, it can be restarted without any data loss. Data Pump jobs support fine-grained object selection. Data Pump supports Network Import, the ability to load one instance directly from another, as well as Network Export, which is used to unload a remote instance.
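As a quick illustration of the restart capability (a minimal sketch; SYS_EXPORT_SCHEMA_01 is an assumed system-generated job name, not one taken from this document):

# Re-attach to the stopped job, then resume it from the interactive prompt
expdp test/test attach=SYS_EXPORT_SCHEMA_01
Export> START_JOB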

Features of Data Pump


Data Pump supports all the features of the original Export and Import; in addition, Data Pump includes many new features, two of which are illustrated below:
- dump file encryption and compression
- checkpoint restart
- job size estimation
- very flexible, fine-grained object selection
- direct loading of one instance from another
- detailed job monitoring
- the ability to move individual table partitions using transportable tablespaces
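For instance, job size estimation and fine-grained object selection can both be driven from the expdp command line (a sketch, reusing the TEST schema and TEST_DIR directory object from the examples later in this document):

# Estimate the dump file size without writing any data (ESTIMATE_ONLY excludes DUMPFILE)
expdp test/test schemas=TEST directory=TEST_DIR estimate_only=y

# Export only the tables whose names start with TAB (quotes may need escaping in some shells)
expdp test/test schemas=TEST directory=TEST_DIR dumpfile=TEST.dmp include=TABLE:"LIKE 'TAB%'"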

Architecture of Data Pump


Master process
- Manages and controls the operation

Worker process(es)
- Responsible for data movement
- One for each degree of parallelism

Master table
- Created in the invoker's schema at job start
- Maintained during job execution
- Dropped after successful completion
- Used to resume a paused/failed job

Control & status queues

Architecture: Block Diagram


[Block diagram: the expdp and impdp clients, Enterprise Manager, and other clients (Data Mining, etc.) all call the Data Pump API, DBMS_DATAPUMP. DBMS_DATAPUMP drives the data/metadata movement engine, which in turn uses the Direct Path API, the External Table API (with the ORACLE_LOADER and ORACLE_DATAPUMP access drivers), and the Metadata API, DBMS_METADATA.]

Utilities
The three client utilities which are included in Oracle Database are:
- Command-line export (expdp)
- Command-line import (impdp)
- Web-based Oracle Enterprise Manager export/import interface
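Both command-line utilities can also list all of their available parameters directly, which is a quick way to explore them:

expdp help=y
impdp help=y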


Data Pump Enhancements


- COMPRESSION
- Encryption parameters: ENCRYPTION and ENCRYPTION_PASSWORD, ENCRYPTION_ALGORITHM, ENCRYPTION_MODE
- TRANSPORTABLE
- PARTITION_OPTIONS
- REUSE_DUMPFILES
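The encryption parameters receive no example later in this document, so here is a brief sketch (assuming password-based encryption; the password value is illustrative only):

# Encrypt both data and metadata in the dump file with AES-256, protected by a password
expdp test/test schemas=TEST directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log encryption=all encryption_password=MyPassword1 encryption_algorithm=AES256 encryption_mode=password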

Data Pump Enhancements (continued)

- REMAP_TABLE
- DATA_OPTIONS: SKIP_CONSTRAINT_ERRORS, XML_CLOBS
- REMAP_DATA
- Miscellaneous enhancements
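Of these, DATA_OPTIONS is worth a quick sketch, since it receives no example later in this document (assuming TAB1 already exists on the target with constraints that some incoming rows violate):

# Append into the existing table, logging and skipping constraint-violating rows
impdp test/test tables=TEST.TAB1 directory=TEST_DIR dumpfile=TEST.dmp logfile=impdpTEST.log table_exists_action=append data_options=skip_constraint_errors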

Compression
Syntax:
COMPRESSION={ALL | DATA_ONLY | METADATA_ONLY | NONE}

The available options are:
- ALL: Both metadata and data are compressed.
- DATA_ONLY: Only data is compressed.
- METADATA_ONLY: Only metadata is compressed. This is the default setting.
- NONE: Nothing is compressed.

Compression {Example}
expdp test/test schemas=TEST directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log compression=all

impdp test/test schemas=TEST directory=TEST_DIR dumpfile=TEST.dmp logfile=impdpTEST.log

Note that COMPRESSION is an export-only parameter; on import, a compressed dump file is decompressed automatically.

TRANSPORTABLE parameter
The TRANSPORTABLE parameter is similar to the previously available TRANSPORT_TABLESPACES parameter in that it exports and imports only metadata about a table, relying on you to manually transfer the relevant tablespace datafiles. The export operation lists the tablespaces that must be transferred.

Syntax:
TRANSPORTABLE = {ALWAYS | NEVER}

Restrictions using the TRANSPORTABLE parameter during export:

- This parameter is only applicable during table-level exports.
- The user performing the operation must have the EXP_FULL_DATABASE privilege.
- Tablespaces containing the source objects must be read-only.
- The COMPATIBLE initialization parameter must be set to 11.0.0 or higher.
- The default tablespace of the user performing the export must not be the same as any of the tablespaces being transported.

Restrictions using the TRANSPORTABLE parameter during import:

- The NETWORK_LINK parameter must be specified during the import operation, set to a valid database link to the source schema.
- The schema performing the import must have both the EXP_FULL_DATABASE and IMP_FULL_DATABASE privileges.
- The TRANSPORT_DATAFILES parameter is used to identify the datafiles holding the table data.

Examples:
Export using the TRANSPORTABLE parameter:
expdp system tables=TEST1.TAB1 directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log transportable=ALWAYS

Import using the TRANSPORTABLE parameter (a network import, so no dump file is involved):
impdp system tables=TEST1.TAB1 directory=TEST_DIR logfile=impdpTEST.log transportable=ALWAYS network_link=DB11G transport_datafiles='/u01/oradata/DB11G/test01.dbf'

PARTITION_OPTIONS parameter
The PARTITION_OPTIONS parameter determines how partitions will be handled during export and import operations.

SYNTAX:
PARTITION_OPTIONS={none | departition | merge}

The allowable values are:
- NONE: The partitions are created exactly as they were on the system the export was taken from.
- DEPARTITION: Each partition and sub-partition is created as a separate table, named using a combination of the table and (sub-)partition name.
- MERGE: Combines all partitions into a single table.

Example:
Export using the PARTITION_OPTIONS parameter:
expdp test/test directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log tables=test.tab1 partition_options=merge
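The DEPARTITION value is not demonstrated above; a corresponding import might look like this (a sketch, reusing the dump file from the export example, with each partition of TAB1 created as its own table):

impdp test/test directory=TEST_DIR dumpfile=TEST.dmp logfile=impdpTEST.log partition_options=departition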

REUSE_DUMPFILES parameter
The REUSE_DUMPFILES parameter can be used to prevent errors from being issued if the export attempts to write to a dump file that already exists.

SYNTAX:
REUSE_DUMPFILES={Y | N}

When set to "Y", any existing dump files are overwritten. When the default value of "N" is used, an error is issued if the dump file already exists.

Example:
Export using the REUSE_DUMPFILES parameter:
expdp test/test schemas=TEST directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log reuse_dumpfiles=y

REMAP_TABLE parameter
This parameter allows a table to be renamed during import operations performed using the TRANSPORTABLE method. It can also be used to alter the base table name used during PARTITION_OPTIONS imports.

SYNTAX:
REMAP_TABLE=[schema.]old_tablename[.partition]:new_tablename

Example using the REMAP_TABLE parameter:
impdp test/test tables=TAB1 directory=TEST_DIR dumpfile=TEST.dmp logfile=impdpTEST.log remap_table=TEST.TAB1:TAB2

An important thing to keep in mind is that only tables created by the import are renamed; existing tables are not.
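The [.partition] form of the syntax pairs naturally with a departitioned import (a sketch, assuming TAB1 has a partition named PART1; the new table name TAB1_PART1 is illustrative):

impdp test/test tables=TEST.TAB1 directory=TEST_DIR dumpfile=TEST.dmp logfile=impdpTEST.log partition_options=departition remap_table=TEST.TAB1.PART1:TAB1_PART1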
