
DATAPUMP

Filed under: DATAPUMP, Oracle 10g by Deepak


December 14, 2009
DATAPUMP IN ORACLE
To use Data Pump through the Enterprise Manager Database Console, see:
http://www.oracle.com/technology/obe/obe10gdb/storage/datapump/datapump.htm
There are two new concepts in Oracle Data Pump that are different from original Export and
Import.
Directory Objects
Data Pump differs from original Export and Import in that all jobs run primarily on the server
using server processes. These server processes access files for the Data Pump jobs using
directory objects that identify the location of the files. The directory objects enforce a security
model that can be used by DBAs to control access to these files.
Interactive Command-Line Mode
Besides the regular operating system command-line mode, there is now a very powerful interactive
command-line mode which allows the user to monitor and control Data Pump Export and Import
operations.
Changing from Original Export/Import to Oracle Data Pump
Creating Directory Objects
In order to use Data Pump, the database administrator must create a directory object and grant
privileges to the user on that directory object. If a directory object is not specified, a default
directory object called data_pump_dir is provided. The default data_pump_dir is available only
to privileged users unless access is granted by the DBA.
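A quick way to see which directory objects already exist and where they point is to query the
DBA_DIRECTORIES dictionary view:
SQL> SELECT directory_name, directory_path FROM dba_directories;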
The following SQL statement creates a directory object named dpump_dir1 that is mapped to a
directory located at /usr/apps/datafiles.
Create a directory.
1. SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to
other users. For example, to allow the Oracle database to read and to write to files on behalf of
user scott in the directory named by dpump_dir1, you must execute the following command:
1. SQL> GRANT READ,WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database
will read or write that file on your behalf. You are not given direct access to those files outside of
the Oracle database unless you have the appropriate operating system privileges. Similarly, the
Oracle database requires permission from the operating system to read and write files in the
directories. Once the directory access is granted, the user scott can export his database objects
with command arguments:
1. > expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp

Comparison of command-line parameters from Original Export and Import to Data Pump
Data Pump commands have a similar look and feel to the original Export and Import
commands, but are different. Below are a few examples that demonstrate some of these
differences.
1) Example import of tables from scott's account to jim's account
Original Import:
> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)
Data Pump Import:
> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp
TABLES=scott.emp REMAP_SCHEMA=scott:jim
Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.
2) Example export of an entire database to a dump file with all GRANTS,
INDEXES, and data

Original Export:
> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y
Data Pump Export:
> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX
DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL
Data Pump offers much greater metadata filtering than original Export and Import. The
INCLUDE parameter allows you to specify which object (and its dependent objects) you want to
keep in the export job. The EXCLUDE parameter allows you to specify which object (and its
dependent objects) you want to keep out of the export job. You cannot mix the two parameters in
one job.
Both parameters work with Data Pump Import as well, and you can use different INCLUDE and
EXCLUDE options for different operations on the same dump file.
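For example, to export the hr schema while leaving out its indexes and statistics, a command
along these lines could be used (the schema and dump file names are illustrative):
> expdp username/password SCHEMAS=hr EXCLUDE=INDEX EXCLUDE=STATISTICS
DIRECTORY=dpump_dir1 DUMPFILE=hr_no_idx.dmp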
3) Tuning Parameters
Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS,
CONSISTENT, DIRECT, and RECORDLENGTH parameters, Data Pump needs no tuning to
achieve maximum performance. Data Pump chooses the best method to ensure that data and
metadata are exported and imported in the most efficient manner. Initialization parameters should
be sufficient upon installation.
4) Moving data between versions
The Data Pump method for moving data between different database versions is different from the
method used by original Export and Import. With original Export, you had to run an older
version of Export to produce a dump file that was compatible with an older database
version. With Data Pump, you use the current Export version and simply use the VERSION
parameter to specify the target database version. You cannot specify versions earlier than Oracle
Database 10g (since Data Pump did not exist before 10g).
Example:
> expdp username/password TABLES=hr.employees VERSION=10.1
DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp
Data Pump Import can always read dump file sets created by older versions of Data Pump
Export.
Note that Data Pump Import cannot read dump files produced by original Export.
Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data
Pump, there are new capabilities that you will want to explore.
Parallelism
Data Pump Export and Import operations are processed in the database as a Data Pump job,
which is much more efficient than the client-side execution of original Export and Import. Now
Data Pump operations can take advantage of the server's parallel processes to read or write
multiple data streams simultaneously. (PARALLEL is only available in the Enterprise Edition of
Oracle Database.)
The number of parallel processes can be changed on the fly using Data Pump's interactive
command-line mode. You may have a certain number of processes running during the day and
decide to change that number if more system resources become available at night (or vice versa).
For best performance, you should do the following:
Make sure your system is well balanced across CPU, memory, and I/O.
Have at least one dump file for each degree of parallelism. If there aren't enough dump files,
performance will not be optimal because multiple threads of execution will be trying to access
the same dump file.
Put files that are members of a dump file set on separate disks so that they will be written and
read in parallel.
For export operations, use the %U variable in the DUMPFILE parameter so multiple dump
files can be automatically generated.
Example:
> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr
DUMPFILE=par_exp%u.dmp PARALLEL=4
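Because the job above is explicitly named (JOB_NAME=hr), you can attach to it later from
another session and change the degree of parallelism on the fly; a minimal sketch (the new value
of 8 is illustrative):
> expdp username/password ATTACH=hr
Export> PARALLEL=8
Export> CONTINUE_CLIENT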
REMAP
REMAP_TABLESPACE: This allows you to easily import a table into a different
tablespace from the one it was originally exported from. The databases have to be 10.1 or later.
Example:
> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6

DIRECTORY=dpumpdir1 DUMPFILE=employees.dmp
REMAP_DATAFILE: This is a very useful feature when you move databases between
platforms that have different file naming conventions. This parameter changes the source datafile
name to the target datafile name in all SQL statements where the source datafile is referenced.
Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter
within a parameter file.
Example:
The parameter file, payroll.par, has the following content:
DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"
You can then issue the following command:
> impdp username/password PARFILE=payroll.par
Even More Advanced Features of Oracle Data Pump
Beyond the command-line and performance features of Oracle Data Pump are new capabilities
that DBAs will find invaluable. A couple of prominent features are described here.
Interactive Command-Line Mode
You have much more control in monitoring and controlling Data Pump jobs with interactive
command-line mode. Because Data Pump jobs run entirely on the server, you can start an export
or import job, detach from it, and later reconnect to the job to monitor its progress. Here are
some of the things you can do while in this mode:
See the status of the job. All of the information needed to monitor the job's execution is
available.

Add more dump files if there is insufficient disk space for an export file.

Change the default size of the dump files.

Stop the job (perhaps it is consuming too many resources) and later restart it (when more
resources become available).

Restart the job. If a job was stopped for any reason (system failure, power outage), you
can attach to the job and then restart it.

Increase or decrease the number of active worker processes for the job. (Enterprise
Edition only.)

Attach to a job from a remote site (such as from home) to monitor status.
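A sketch of a typical interactive session against the same hr job (these are the standard
interactive-mode commands; the extra dump file name is illustrative):
> expdp username/password ATTACH=hr
Export> STATUS
Export> ADD_FILE=par_exp_extra.dmp
Export> STOP_JOB=IMMEDIATE
> expdp username/password ATTACH=hr
Export> START_JOB
Export> EXIT_CLIENT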

Network Mode
Data Pump gives you the ability to pass data between two databases over a network (via a
database link), without creating a dump file on disk. This is very useful if you're moving data
between databases, like data marts to data warehouses, and disk space is not readily available.
Note that if you are moving large volumes of data, network mode is probably going to be slower
than file mode. Network export creates the dump file set on the instance where the Data Pump
job is running and extracts the metadata and data from the remote instance. Network export gives
you the ability to export read-only databases. (Data Pump Export cannot run locally on a
read-only instance because the job requires write operations on the instance.) This is useful when
there is a need to export data from a standby database.
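A minimal sketch of a network mode import (the database link name source_db is an assumption
and must already exist, pointing at the source database):
> impdp username/password TABLES=hr.employees DIRECTORY=dpump_dir1 NETWORK_LINK=source_db
No DUMPFILE is specified because the data is moved directly over the link; the directory object
is still used for the log file.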
Generating SQLFILES
In original Import, the INDEXFILE parameter generated a text file which contained the SQL
commands necessary to recreate tables and indexes that you could then edit to get a workable
DDL script. With Data Pump, it's a lot easier to get a workable DDL script. When you run Data
Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary
DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes.
Although this output file is ready for execution, the DDL statements are not actually executed, so
the target system will not be changed.
SQLFILEs can be particularly useful when pre-creating tables and objects in a new database.
Note that the INCLUDE and EXCLUDE parameters can be used for tailoring SQLFILE output. For
example, if you want to create a database that contains all the tables and indexes of the source
database, but that does not include the same constraints, grants, and other metadata, you would
issue a command as follows:
>impdp username/password DIRECTORY=dpumpdir1 DUMPFILE=expfull.dmp
SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX
The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that
could be executed in another database to create the tables and indexes as desired.

----------------------------------------------------------------------------------------------------------------------

Encryption with Oracle Data Pump


Filed under: Encryption with Oracle Datapump by Deepak
December 14, 2009
Encryption with Oracle Data Pump
- from Oracle White paper

Introduction
The security and compliance requirements in today's business world present manifold
challenges. As incidents of data theft increase, protecting data privacy continues to be of
paramount importance. Data encryption, now a de facto solution for meeting regulatory
compliance requirements, is one of a number of security tools in use. The Oracle Advanced Security features
built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files
from unauthorized access. Oracle Database 10g release 2 introduced the Oracle Advanced
Security Transparent Data Encryption (TDE) feature that enables column data to be encrypted
while stored in the database. Customers who take advantage of this feature can use Oracle Data
Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of
this whitepaper is to explain how the Oracle Data Pump TDE Encrypted Column feature works.
Please note that this paper does not apply to the Original Export/Import utilities. For information
regarding the Oracle Data Pump Encrypted Dump File feature that was released with Oracle
Database 11g release 1 and that provides the ability to encrypt all exported data as it is written to
the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support
whitepaper.
The TDE column encryption feature transparently encrypts and decrypts data written to and read
from application table columns for any columns that are marked with the ENCRYPT key word.
Once a table column is marked with this keyword, encryption and decryption are performed
automatically, without the need for any further user or application intervention. The column
encryption key used by TDE is taken from randomly generated data or is derived from a
password provided during the creation of the table containing the encrypted column. When an
authorized user inserts new data into such a column, TDE column encryption encrypts this data
prior to storing it in the database. Conversely, when the user selects the column from the
database, TDE column encryption transparently decrypts this data back to its original clear text
format. Column data encrypted using TDE remains protected while it resides in the database.
However, the protection offered by TDE does not extend beyond the database and so this
protection is lost if the sensitive column data is extracted in clear text format and stored outside
of the database. Oracle Data Pump export extends the protection that TDE offers by taking the
extracted clear text column data and re-encrypting it, using a dump file encryption key derived
from a user-provided password, before it is written to the export dump file set. Column data
encrypted using the Oracle Data Pump encrypted column feature now remains protected outside of
the database while it resides in the export dump file set. Whenever Oracle Data Pump unloads or
loads tables containing encrypted columns, it uses the external tables mechanism instead of the
direct path mechanism. The use of external tables creates a correspondence between the database
table data and the export dump file while using the SQL engine to perform the data transfer.
The steps involved in exporting a table with encrypted columns are as follows:
1. Data Pump performs a SELECT operation on the table that contains the encrypted columns
from the database.
2. As part of the SELECT operation, TDE automatically decrypts the encrypted column data
back to clear text format using the column encryption key.
3. Data Pump re-encrypts the clear text column data using the dump file encryption key and then
writes this encrypted data to the export dump file set.
To load an export dump file set containing encrypted column data into a target database, the same
encryption password used at export time must be provided to Oracle Data Pump import. After
verifying that the correct password has been given, the corresponding dump file decryption key
is derived from this password.
The steps involved in importing a table with encrypted columns are as follows:
1. Data Pump reads the encrypted column data from the export dump file set and decrypts the
data back to clear text format using the dump file encryption key.
2. Data Pump performs an INSERT operation of the clear text column data into the table that
contains the encrypted column.
3. As part of the INSERT operation, TDE automatically encrypts the column data using the
column encryption key and then writes it to the database.
Exporting and importing encrypted columns may have a slightly negative impact on the overall
performance of the Data Pump job. Although the data being processed is stored in memory
buffers, encryption and decryption are typically CPU intensive operations. Furthermore,
additional disk I/O is incurred due to space overhead added to the encrypted data in order to
perform data integrity checks and to safeguard against brute force attacks.
Keep in mind that in Oracle Data Pump 10g release 2, the ENCRYPTION_PASSWORD
parameter applies only to TDE encrypted columns. Support for the encryption of the entire dump
file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different
section.
Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an
Oracle Encryption Wallet, which is a repository for holding entities like authentication and
signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM
statement is used to create a new encryption wallet and set the database master key. In the
following example, the password used in the IDENTIFIED BY clause is required and is used
solely for gaining access to the wallet.
SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";
Next, create a table with an encrypted column. The password used below in the IDENTIFIED
BY clause is optional, and TDE uses it to derive the table's column encryption key. If the
IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on
random data.
SQL> CREATE TABLE DP.EMP (empid NUMBER(6), empname VARCHAR2(100), salary
NUMBER(8,2) ENCRYPT IDENTIFIED BY column_pwd);
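To confirm that the SALARY column is now tracked as a TDE encrypted column, you can query
the DBA_ENCRYPTED_COLUMNS dictionary view:
SQL> SELECT table_name, column_name, encryption_alg FROM dba_encrypted_columns WHERE owner = 'DP';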
Using Oracle Data Pump to Export Encrypted Columns
Oracle Data Pump can now be used to export the table. In the following example, the password
provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's
encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump
file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced
Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128).
Note that there is absolutely no connection between the password specified by the
Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL
ALTER SYSTEM and CREATE TABLE statements.
$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp \
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd
Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption
Wallet is closed results in an error. This is shown in the following example, in which the Oracle
Wallet is manually closed and then the export command is re-issued.
SQL> ALTER SYSTEM SET WALLET CLOSE;
$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp \
ENCRYPTION_PASSWORD=dump_pwd
Export: Release 10.2.0.4.0 Production on Monday, 09 July, 2009
8:21:23
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release
10.2.0.4.0 Production
With the Partitioning, Data Mining and Real Application Testing
options
ORA-39001: invalid argument value
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open
Although the ENCRYPTION_PASSWORD is an optional parameter, it is always prudent to
export encrypted columns using a password. In the event that the password is not specified,
Oracle Data Pump writes the encrypted column data as clear text in the dump file. In such a case,
a warning message (ORA-39173) is displayed, as shown in the following example.
$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp
Export: Release 10.2.0.4.0 Production on Wednesday, 09 July, 2009
8:48:43
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release
10.2.0.4.0 Production
With the Partitioning, Data Mining and Real Application Testing
options
Starting DP.SYS_EXPORT_TABLE_01: dp/******** directory=dpump_dir
dumpfile=emp tables=emp
Estimate in progress using BLOCKS method
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported DP.EMP 6.25 KB 3 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file
set.
Master table DP.SYS_EXPORT_TABLE_01 successfully loaded/unloaded
*********************************************************************
Dump file set for DP.SYS_EXPORT_TABLE_01 is:
/ade/jkaloger_lx9/oracle/work/emp.dmp
Job DP.SYS_EXPORT_TABLE_01 completed with 1 error(s) at 08:48:57
Restriction with Transportable Tablespace Export Mode
Exporting encrypted columns is not limited to table mode exports, as used in the previous
examples. If a schema, tablespace, or full mode export is performed, then all encrypted columns
in any of the exported tables selected for that mode are re-encrypted before being written to the
export dump file set. This is true even when these export modes are used in network mode via
the Oracle Data Pump NETWORK_LINK parameter.
There is, however, one exception: transportable tablespace export mode does not support
encrypted columns. An attempt to perform an export using this mode when the tablespace
contains tables with encrypted columns yields the following error:
$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp \
TRANSPORT_TABLESPACES=dp
Export: Release 10.2.0.4.0 Production on Thursday, 09 July, 2009
8:55:07
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release
10.2.0.4.0 Production
With the Partitioning, Data Mining and Real Application Testing
options
Starting SYSTEM.SYS_EXPORT_TRANSPORTABLE_01: system/********
directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29341: The transportable set is not self-contained
Job SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 stopped due to fatal error
at 08:55:25
The ORA-29341 error in the previous example is not very informative. If the same transportable
tablespace export is executed using Oracle Database 11g release 1, that version does a better job
at pinpointing the problem via the information in the ORA-39929 error:
$ expdp system/password DIRECTORY=dpump_dir dumpfile=dp.dmp \
TRANSPORT_TABLESPACES=dp
Export: Release 11.1.0.7.0 Production on Thursday, 09 July, 2009
9:09:00
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release
11.1.0.7.0 Production
With the Partitioning, Data Mining and Real Application Testing
options
Starting SYSTEM.SYS_EXPORT_TRANSPORTABLE_01: system/********
directory=dpump_dir dumpfile=dp transport_tablespaces=dp
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list
is ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which
are not supported.
Job SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 stopped due to fatal error
at 09:09:21
Using Oracle Data Pump to Import Encrypted Columns
Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and
open on the target database before attempting to import a dump file set containing encrypted
column data. Otherwise, an ORA-28365: wallet not open error is returned. Note that the
wallet on the target database does not require that the same master key be present as the one used
on the source database where the export originally took place. Of course, the same password
must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the
export.
$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp \
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd
If the encryption attributes for all columns do not exactly match between the source and target
tables, then an ORA-26033 exception is raised when you try to import the export dump file set.
In the example of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on
both the source and target tables between the time that the export dump file is created and the
import of that file is performed. For example, assume that the DP.EMP table on the target system
has been created exactly as it is on the source system except that the ENCRYPT attribute has not
been assigned to the SALARY column. The output and resulting error messages would look as
follows:
$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp \
ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND
Import: Release 10.2.0.4.0 Production on Thursday, 09 July, 2009
10:55:40
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 Production
With the Partitioning, Data Mining and Real Application Testing options
Master table DP.SYS_IMPORT_TABLE_01 successfully loaded/unloaded
Starting DP.SYS_IMPORT_TABLE_01: dp/******** directory=dpump_dir
dumpfile=emp.dmp tables=emp encryption_password=********
table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table DP.EMP exists. Data will be appended to existing
table but all dependent metadata will be skipped due to
table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object DP.EMP failed to load/unload and is being
skipped due to error:
ORA-02354: error in exporting/importing data
ORA-26033: column EMP.SALARY encryption properties differ for source or
target table
Job DP.SYS_IMPORT_TABLE_01 completed with 2 error(s) at 10:55:48
Restriction Using Import Network Mode
A network mode import uses a database link to extract data from a remote database and load it
into the connected database instance. There are no export dump files involved in a network
mode import and therefore there is no re-encrypting of TDE column data. Thus the use of the
ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the
following example:
$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote \
TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd
Import: Release 10.2.0.4.0 Production on Friday, 09 July, 2009
11:00:57
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release
10.2.0.4.0 Production
With the Partitioning, Data Mining and Real Application Testing
options
ORA-39005: inconsistent arguments
ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link
By removing the ENCRYPTION_PASSWORD parameter you can perform the network mode
import. However, it is important to understand that any TDE column data will be transmitted in
clear-text format. If you are concerned about the security of the information being transmitted,
then consider using Oracle Net Services to configure Oracle Advanced Security Network Data
Encryption.
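Network Data Encryption is typically switched on through sqlnet.ora settings on the database
server (and, optionally, matching client-side settings); a minimal sketch, with the algorithm
choice purely illustrative:
SQLNET.ENCRYPTION_SERVER = REQUIRED
SQLNET.ENCRYPTION_TYPES_SERVER = (AES128)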
When the ENCRYPTION_PASSWORD Parameter Is Not Needed
It should be pointed out that when importing from an export dump file set that includes
encrypted column data, the encryption password and the Oracle Wallet are required only when
the encrypted column data is being accessed. The following are cases in which the encryption
password and Oracle Wallet are not needed:

A full metadata-only import

A schema-mode import in which the referenced schemas do not include tables with
encrypted columns

A table-mode import in which the referenced tables do not include encrypted columns
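For the first of these cases, for example, a metadata-only import of the dump file used earlier can
be run without supplying the password (a sketch that reuses the earlier file and user names):
$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp CONTENT=METADATA_ONLY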

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it
were inside a table residing in the database. An external table definition is created using the SQL
syntax CREATE TABLE ORGANIZATION EXTERNAL and specifying the
ORACLE_DATAPUMP access driver in the TYPE clause.
The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data.
Note that this external table export dump file is not the same export dump file as produced by the
Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns,
the Oracle Wallet must first be open before creating the external table. The following example
creates an external table called DP.XEMP and populates it using the data in the DP.EMP table.
Notice that datatypes for the columns are not specified. This is because they are determined by
the column datatypes in the source table in the SELECT subquery.
SQL> CREATE TABLE DP.XEMP (
empid,
empname,
salary ENCRYPT IDENTIFIED BY column_pwd)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_DATAPUMP
DEFAULT DIRECTORY dpump_dir
LOCATION ('xemp.dmp')
)
REJECT LIMIT UNLIMITED
AS SELECT * FROM DP.EMP;
The steps involved in creating an external table with encrypted columns are as follows:
1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the
table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the
column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If
any columns in the external table are marked as encrypted, as one of its columns is, then TDE
encrypts this column data as part of the insert operation.
3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to
write the data to the external export dump file. The data in an external table can be written only
once when the CREATE TABLE ORGANIZATION EXTERNAL statement is executed.
However, the data in the external table can be selected any number of times using a simple SQL
SELECT statement:
SQL> SELECT * FROM DP.XEMP;
The steps involved in selecting data with encrypted columns from an external table are as follows:
1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the
ORACLE_DATAPUMP access driver is called to read the data from the external export file.
2. The data is passed back to the SQL engine. If any columns in the external table are marked as
encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation.
The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan
to move the dump file to another database. In that case, the same encryption password must be
used for the encrypted columns in the dump file and in the table definition on both the source and
target database in order to read the data in the dump file.
Encryption Parameter Change in 11g Release 1
As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be
encrypted by Oracle Data Pump, and the only encryption-related parameter available was
ENCRYPTION_PASSWORD. So, by default, if ENCRYPTION_PASSWORD is present on
the command line, then it applies only to TDE encrypted columns (if there are no such columns
being exported, then the parameter is ignored).
Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set
is introduced and with it, several new encryption-related parameters. A new ENCRYPTION
parameter supplies options for encrypting part or all of the data written to an export dump file
set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect
to encryption. The presence of only the ENCRYPTION_PASSWORD parameter no longer
means that TDE columns will be encrypted by Oracle Data Pump but instead means that the
entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data
Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword
ENCRYPTED_COLUMNS_ONLY. So, the 10g example previously shown becomes the
following in 11g:
$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp \
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd \
ENCRYPTION=ENCRYPTED_COLUMNS_ONLY
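Conversely, under the 11g default described above, supplying only the ENCRYPTION_PASSWORD
parameter, or adding ENCRYPTION=ALL explicitly, encrypts the entire dump file set; a sketch using
the same illustrative names:
$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp \
TABLES=emp ENCRYPTION_PASSWORD=dump_pwd ENCRYPTION=ALL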
