
Oracle Database Utilities

This chapter describes Oracle Database utilities for data transfer, data maintenance, and database
administration.
This chapter contains the following topics:

Introduction to Oracle Database Utilities
Overview of Data Pump Export and Import
Overview of the Data Pump API
Overview of the Metadata API
Overview of SQL*Loader
Overview of External Tables
Overview of LogMiner
Overview of DBVERIFY Utility
Overview of DBNEWID Utility
ADRCI: ADR Command Interpreter

Introduction to Oracle Database Utilities


Oracle Database utilities let you perform the following tasks:

Move data and metadata at high speed from one database to another, using Data Pump Export and Import

Extract and manipulate complete representations of the metadata for database objects, using the Metadata API

Move all or part of the data and metadata for a site from one database to another, using the Data Pump API

Load data into Oracle Database tables from operating system files using SQL*Loader, or from external sources using external tables

Manage Oracle Database diagnostic data, using the ADR Command Interpreter (ADRCI)

Query redo log files through a SQL interface, using LogMiner

Perform physical data structure integrity checks on an offline (for example, backup) database or datafile, using DBVERIFY

Maintain the internal database identifier (DBID) and the database name (DBNAME) for an operational database, using the DBNEWID utility

See Also:
Oracle Database Utilities for more information on all of the utilities described in this chapter

Overview of Data Pump Export and Import


Oracle Data Pump technology enables very high-speed movement of data and metadata
from one database to another. This technology is the basis for Oracle Database data
movement utilities, Data Pump Export and Data Pump Import.
Data Pump enables you to specify whether a job should move a subset of the data and
metadata. This is done using data filters and metadata filters, which are implemented
through Export and Import parameters.
This section includes the following topics:

Data Pump Export
Data Pump Import

Data Pump Export


Data Pump Export (hereinafter referred to as Export for ease of reading) is a utility for
unloading data and metadata into a set of operating system files called a dump file set. The
dump file set can be moved to another system and loaded by the Data Pump Import utility.
The dump file set is made up of one or more disk files that contain table data, database
object metadata, and control information. The files are written in a proprietary, binary
format, which can be read only by Data Pump Import. During an import operation, the Data
Pump Import utility uses these files to locate each database object in the dump file set.
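For example, the following command line is a minimal sketch of a schema-mode export; the hr user, the dpump_dir1 directory object, and the file names are illustrative assumptions rather than required values:

    # Unload the HR schema into a dump file set (names are assumptions)
    expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp LOGFILE=hr_exp.log

The DIRECTORY parameter names a database directory object that maps to an operating system path where the dump file set is written; data and metadata filters such as QUERY and EXCLUDE can be added to move only a subset of the schema.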

Data Pump Import


Data Pump Import (hereinafter referred to as Import for ease of reading) is a utility for
loading an export dump file set into a target system. The dump file set is made up of one or
more disk files that contain table data, database object metadata, and control information.
The files are written in a proprietary, binary format.
Import can also be used to load a target database directly from a source database with no
intervening files, which allows export and import operations to run concurrently, minimizing
total elapsed time. This is known as network import.
Import also enables you to see all of the SQL DDL that the Import job will be executing,
without actually executing the SQL. This is implemented through the
Import SQLFILE parameter.
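As a sketch, the first command below loads the dump file set produced by the Export example, the second uses the SQLFILE parameter to write the job's DDL to a file without executing it, and the third performs a network import over an assumed database link named source_db; all names are illustrative:

    # Load the dump file set into the target database
    impdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp LOGFILE=hr_imp.log

    # Write the SQL DDL the job would run to hr_ddl.sql without executing it
    impdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp SQLFILE=hr_ddl.sql

    # Network import: pull directly from the source database, with no dump files
    impdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 NETWORK_LINK=source_db LOGFILE=net_imp.log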

Overview of the Data Pump API

The Data Pump API provides a high-speed mechanism to move all or part of the data and
metadata for a site from one database to another. To use the Data Pump API, you use the
procedures provided in the DBMS_DATAPUMP PL/SQL package. The Data Pump Export and Data
Pump Import utilities are based on the Data Pump API.
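The anonymous block below is a minimal sketch of a schema-mode export job driven through DBMS_DATAPUMP; the HR schema, the DPUMP_DIR1 directory object, and the dump file name are assumptions:

    DECLARE
      h  NUMBER;
      js VARCHAR2(30);
    BEGIN
      -- Create a handle for a schema-mode export job
      h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');
      -- Write the dump file to the assumed DPUMP_DIR1 directory object
      DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'hr_api.dmp',
                             directory => 'DPUMP_DIR1');
      -- A metadata filter restricting the job to the HR schema
      DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'SCHEMA_EXPR',
                                    value  => 'IN (''HR'')');
      DBMS_DATAPUMP.START_JOB(h);
      -- Block until the job finishes, then report its final state
      DBMS_DATAPUMP.WAIT_FOR_JOB(h, js);
      DBMS_OUTPUT.PUT_LINE('Job state: ' || js);
    END;
    /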

See Also:

Oracle Database Utilities for information about how the Data Pump API works

Oracle Database PL/SQL Packages and Types Reference for a description of the DBMS_DATAPUMP package

Overview of the Metadata API


The Metadata application programming interface (API) provides a means for you to do the
following:

Retrieve an object's metadata as XML

Transform the XML in a variety of ways, including transforming it into SQL DDL

Submit the XML to re-create the object extracted by the retrieval

To use the Metadata API, you use the procedures provided in the DBMS_METADATA PL/SQL
package. For the purposes of the Metadata API, every entity in the database is modeled as
an object that belongs to an object type. For example, the table scott.emp is an object and
its object type is TABLE. When you fetch an object's metadata, you must specify the object
type.
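For example, the following query is a minimal sketch that fetches the metadata for scott.emp transformed into SQL DDL; SET LONG is a SQL*Plus setting that keeps the returned CLOB from being truncated on display:

    -- Retrieve the CREATE TABLE statement for scott.emp
    SET LONG 100000
    SELECT DBMS_METADATA.GET_DDL('TABLE', 'EMP', 'SCOTT') FROM DUAL;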

See Also:

Oracle Database Utilities for information about how to use the Metadata API

Oracle Database PL/SQL Packages and Types Reference for a description of the DBMS_METADATA package

Overview of SQL*Loader
SQL*Loader loads data from external files into tables of an Oracle database. It has a
powerful data parsing engine that places few limitations on the format of the data in the
datafile. You can use SQL*Loader to do the following:

Load data from multiple datafiles during the same load session.

Load data into multiple tables during the same load session.

Specify the character set of the data.

Selectively load data (you can load records based on the records' values).

Manipulate the data before loading it, using SQL functions.

Generate unique sequential key values in specified columns.

Use the operating system's file system to access the datafiles.

Load data from disk, tape, or named pipe.

Generate sophisticated error reports, which greatly aid troubleshooting.

Load arbitrarily complex object-relational data.

Use secondary datafiles for loading LOBs and collections.

Use either conventional or direct path loading. While conventional path loading is
very flexible, direct path loading provides superior loading performance.

A typical SQL*Loader session takes as input a control file, which controls the behavior of
SQL*Loader, and one or more datafiles. The output of SQL*Loader is an Oracle database
(where the data is loaded), a log file, a bad file, and potentially, a discard file.
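As a minimal sketch, the control file below loads comma-separated records into a hypothetical emp table, and the command line that follows runs SQL*Loader with it; the file, table, and column names are all assumptions:

    -- emp.ctl: load comma-separated records from emp.dat into emp
    LOAD DATA
    INFILE 'emp.dat'
    INTO TABLE emp
    FIELDS TERMINATED BY ','
    (empno, ename, sal)

    # Invoke SQL*Loader as the assumed hr user, producing log and bad files
    sqlldr hr CONTROL=emp.ctl LOG=emp.log BAD=emp.bad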

See Also:
Oracle Database Utilities to learn more about SQL*Loader

Overview of External Tables


The external tables feature is a complement to existing SQL*Loader functionality. It lets you
access data in external sources as if it were in a table in the database. External tables can
be written to using the ORACLE_DATAPUMP access driver. Neither data manipulation language
(DML) operations nor index creation is allowed on an external table. Therefore, SQL*Loader
may be the better choice in data loading situations that require additional indexing of the
staging table.
To use the external tables feature, you must have some knowledge of the file format and
record format of the datafiles on your platform. You must also know enough about SQL to be
able to create an external table and perform queries against it.
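The DDL below is a minimal sketch of an external table over the same kind of comma-separated file, using the ORACLE_LOADER access driver; the ext_dir directory object and the file and column names are assumptions:

    -- Query emp.dat in place as if it were a table; no data is loaded
    CREATE TABLE emp_ext (
      empno NUMBER,
      ename VARCHAR2(10),
      sal   NUMBER
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY ext_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
      )
      LOCATION ('emp.dat')
    );

    SELECT * FROM emp_ext;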

See Also:
"External Tables"

Overview of LogMiner
Oracle LogMiner enables you to query redo log files through a SQL interface. All changes
made to user data or to the database dictionary are recorded in the Oracle Database redo
log files. Therefore, redo log files contain all the necessary information to perform recovery
operations.
LogMiner functionality is available through a command-line interface or through the Oracle
LogMiner Viewer graphical user interface (GUI). The LogMiner Viewer is a part of Oracle
Enterprise Manager.
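From the command-line interface, a session follows the pattern sketched below; the redo log file path is an assumption, and DICT_FROM_ONLINE_CATALOG tells LogMiner to resolve object names with the database's own dictionary:

    BEGIN
      -- Register a redo log file for analysis (the path is an assumption)
      DBMS_LOGMNR.ADD_LOGFILE(logfilename => '/u01/oradata/orcl/redo01.log',
                              options     => DBMS_LOGMNR.NEW);
      -- Start the session, resolving names with the online catalog
      DBMS_LOGMNR.START_LOGMNR(
        options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
    END;
    /

    -- Query the mined changes through the SQL interface, then end the session
    SELECT username, sql_redo, sql_undo FROM V$LOGMNR_CONTENTS;
    EXECUTE DBMS_LOGMNR.END_LOGMNR;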
The following are some of the potential uses for data contained in redo log files:

Pinpointing when a logical corruption to a database, such as errors made at the application level, may have begun. This enables you to restore the database to the state it was in just before corruption.

Detecting and, whenever possible, correcting user error, which is a more likely
scenario than logical corruption. User errors include deleting the wrong rows because
of incorrect values in a WHERE clause, updating rows with incorrect values, dropping
the wrong index, and so forth.

Determining what actions you would have to take to perform fine-grained recovery at
the transaction level. If you fully understand and consider existing dependencies, it
may be possible to perform a table-based undo operation to roll back a set of
changes.

Performance tuning and capacity planning through trend analysis. You can determine
which tables get the most updates and inserts. That information provides a historical
perspective on disk access statistics, which can be used for tuning purposes.

Performing post-auditing. The redo log files contain all the information necessary to
track any DML and DDL statements run on the database, the order in which they
were run, and who executed them.

Overview of DBVERIFY Utility


DBVERIFY is an external command-line utility that performs a physical data structure
integrity check. It can be used on offline or online databases, as well as on backup files. You
use DBVERIFY primarily when you must ensure that a backup database (or datafile) is valid
before it is restored or as a diagnostic aid when you have encountered data corruption
problems.
Because DBVERIFY can be run against an offline database, integrity checks are significantly
faster.

DBVERIFY checks are limited to cache-managed blocks (that is, data blocks). Because
DBVERIFY is only for use with datafiles, it will not work against control files or redo logs.
There are two command-line interfaces to DBVERIFY. With the first interface, you specify disk
blocks of a single datafile for checking. With the second interface, you specify a segment for
checking.
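For example, the first command below checks the blocks of a single datafile (the path and block size are assumptions), and the second checks one segment identified as tsn.segfile.segblock; the credentials shown are placeholders:

    # Check the disk blocks of a single datafile
    dbv FILE=/u01/oradata/orcl/users01.dbf BLOCKSIZE=8192

    # Check a single segment (requires a connection; credentials are placeholders)
    dbv USERID=sys/password SEGMENT_ID=5.2.130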

Overview of DBNEWID Utility


DBNEWID is a database utility that can change the internal, unique database identifier
(DBID) and the database name (DBNAME) for an operational database. The DBNEWID utility
lets you change any of the following:

Only the DBID of a database

Only the DBNAME of a database

Both the DBNAME and DBID of a database

Therefore, you can manually create a copy of a database and give it a new DBNAME and
DBID by re-creating the control file, and you can register a seed database and a manually
copied database together in the same RMAN repository.
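As a sketch, DBNEWID is run against a mounted (not open) database; the credentials and the new name below are placeholders:

    # Change both the DBNAME and the DBID of the mounted database
    nid TARGET=SYS/password DBNAME=newdb

    # Change only the DBNAME, leaving the DBID intact
    nid TARGET=SYS/password DBNAME=newdb SETNAME=YES

After the DBID changes, the database must be opened with the RESETLOGS option.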

ADRCI: ADR Command Interpreter


ADRCI is a command-line tool that is part of the fault diagnosability infrastructure introduced
in Oracle Database 11g. ADRCI enables you to:

View diagnostic data within the Automatic Diagnostic Repository (ADR)

Package incident and problem information into a zip file for transmission to Oracle
Support

Diagnostic data includes incident and problem descriptions, trace files, dumps, health
monitor reports, alert log entries, and more.
ADRCI has a rich command set, and can be used in interactive mode or within scripts. In
addition, ADRCI can execute scripts of ADRCI commands in the same way that SQL*Plus
executes scripts of SQL and PL/SQL commands.
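For example, the following interactive session is a minimal sketch; the problem ID and the output path are assumptions:

    # Inspect diagnostic data, then package problem 1 for Oracle Support
    adrci> show alert -tail 20
    adrci> show incident
    adrci> ips create package problem 1 correlate basic
    adrci> ips generate package 1 in /tmp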
