
ORACLE DBA BASICS

Configuring Kernel Parameters For Oracle Installation


This section documents the checks and modifications to the Linux kernel that should be made by the
DBA to support Oracle Database 10g. Before detailing these individual kernel parameters, it is
important to fully understand the key kernel components that are used to support the Oracle Database
environment.
The kernel parameters and shell limits presented in this section are recommended values only as
documented by Oracle. For production database systems, Oracle recommends that we tune these
values to optimize the performance of the system.
Verify that the kernel parameters shown in this section are set to values greater than or equal to the
recommended values.
Shared Memory: Shared memory allows processes to access common structures and data by
placing them in a shared memory segment. This is the fastest form of Inter-Process
Communications (IPC) available - mainly due to the fact that no kernel involvement occurs when data
is being passed between the processes. Data does not need to be copied between processes.
Oracle makes use of shared memory for its System Global Area (SGA), an area of memory that
is shared by all Oracle background and foreground processes. Adequate sizing of the SGA is critical to
Oracle performance since it is responsible for holding the database buffer cache, shared SQL, access
paths, and so much more.
To determine all current shared memory limits, use the following:
# ipcs -lm
------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 4194303
max total shared memory (kbytes) = 1073741824
min seg size (bytes) = 1
The following list describes the kernel parameters that can be used to change the shared memory
configuration for the server:
1.) shmmax - Defines the maximum size (in bytes) for a shared memory segment. The Oracle SGA
is comprised of shared memory and it is possible that incorrectly setting shmmax could limit the size
of the SGA. When setting shmmax, keep in mind that the size of the SGA should fit within one shared
memory segment. An inadequate shmmax setting could result in the following:
ORA-27123: unable to attach to shared memory segment
We can determine the value of shmmax by performing the following:
# cat /proc/sys/kernel/shmmax
4294967295
For most Linux systems, the default value for shmmax is 32MB. This size is often too small to
configure the Oracle SGA. The default value for shmmax in CentOS 5 is 4GB, which is more than
enough for the Oracle configuration. Note that this value of 4GB is not the "normal" default value for
shmmax in a Linux environment; CentOS 5 inserts the following two entries in the file /etc/sysctl.conf:
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 4294967295
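The "SGA should fit within one shared memory segment" rule above is easy to script as a sanity check. A minimal sketch, using a hypothetical 1.5 GB SGA (not a value from this article) against the CentOS 5 default shmmax:

```shell
# Sketch: check whether a planned SGA fits in one shared memory segment.
# SGA_BYTES is a hypothetical 1.5 GB SGA; SHMMAX mirrors the CentOS 5 default.
SGA_BYTES=1610612736
SHMMAX=4294967295
if [ "$SGA_BYTES" -le "$SHMMAX" ]; then
    echo "SGA fits in one shared memory segment"
else
    echo "raise kernel.shmmax to at least $SGA_BYTES"
fi
```

On a live system the current limit would come from `cat /proc/sys/kernel/shmmax` instead of a hard-coded value.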
2.) shmmni : This kernel parameter is used to set the maximum number of shared memory
segments system wide. The default value for this parameter is 4096. This value is sufficient and
typically does not need to be changed. We can determine the value of shmmni by performing the
following:
# cat /proc/sys/kernel/shmmni
4096


3.) shmall : This parameter controls the total amount of shared memory (in pages) that can be
used at one time on the system. The value of this parameter should always be at least shmmax divided
by the system page size (rounded up). We can determine the value of shmall by performing the following:
# cat /proc/sys/kernel/shmall
268435456
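The minimum shmall can be derived from shmmax and the page size. A sketch with hypothetical values (a 2 GB shmmax and the common 4 KB page size; on a real system, `getconf PAGE_SIZE` reports the actual page size):

```shell
# Sketch: minimum shmall (in pages) needed to back a given shmmax.
# Hypothetical values: 2 GB shmmax, 4 KB pages.
SHMMAX=2147483648
PAGE_SIZE=4096
MIN_SHMALL=$(( (SHMMAX + PAGE_SIZE - 1) / PAGE_SIZE ))
echo "$MIN_SHMALL"   # 524288 pages
```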
For most Linux systems, the default value for shmall is 2097152 and is adequate for most
configurations. The default value for shmall in CentOS 5 is 268435456 (see above), which is more than
enough for the Oracle configuration described in this article. Note that this value of 268435456 is not
the "normal" default value for shmall in a Linux environment; CentOS 5 inserts the following two entries in the
file /etc/sysctl.conf:
# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 268435456
4.) shmmin : This parameter controls the minimum size (in bytes) for a shared memory segment.
The default value for shmmin is 1 and is adequate for the Oracle configuration described in this
article. We can determine the value of shmmin by performing the following:
# ipcs -lm | grep "min seg size"
min seg size (bytes) = 1
Semaphores :
After the DBA has configured the shared memory settings, it is time to take care of configuring the
semaphores. The best way to describe a semaphore is as a counter that is used to provide
synchronization between processes (or threads within a process) for shared resources like shared
memory. Semaphore sets are supported in System V where each one is a counting semaphore. When
an application requests semaphores, it does so using "sets". To determine all current semaphore
limits, use the following:
# ipcs -ls
------ Semaphore Limits --------
max number of arrays = 128
max semaphores per array = 250
max semaphores system wide = 32000
max ops per semop call = 32
semaphore max value = 32767
We can also use the following command:
# cat /proc/sys/kernel/sem
250 32000 32 128
The following list describes the kernel parameters that can be used to change the semaphore
configuration for the server:
i.) semmsl - This kernel parameter is used to control the maximum number of semaphores per
semaphore set. Oracle recommends setting semmsl to the largest PROCESSES instance parameter
setting in the init.ora file for all databases on the Linux system, plus 10. Also, Oracle recommends
setting semmsl to a value of no less than 100.
ii.) semmni - This kernel parameter is used to control the maximum number of semaphore sets in
the entire Linux system. Oracle recommends setting semmni to a value of no less than 100.
iii.) semmns - This kernel parameter is used to control the maximum number of semaphores (not
semaphore sets) in the entire Linux system. Oracle recommends setting the semmns to the sum of
the PROCESSES instance parameter setting for each database on the system, adding the largest
PROCESSES twice, and then finally adding 10 for each Oracle database on the system. Use the
following calculation to determine the maximum number of semaphores that can be allocated on a
Linux system. It will be the lesser of:
SEMMNS -or- (SEMMSL * SEMMNI)
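The semmns recommendation above can be worked through with hypothetical numbers. A sketch for two databases with PROCESSES set to 100 and 200 (these values are illustrative, not from this article):

```shell
# Sketch: Oracle's semmns formula with hypothetical PROCESSES settings.
# semmns = sum of PROCESSES + 2 * largest PROCESSES + 10 per database.
SUM_PROCESSES=$(( 100 + 200 ))
LARGEST=200
DB_COUNT=2
SEMMNS=$(( SUM_PROCESSES + 2 * LARGEST + 10 * DB_COUNT ))
echo "$SEMMNS"   # 720
```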



iv.) semopm - This kernel parameter is used to control the number of semaphore operations that can
be performed per semop system call. The semop system call (function) provides the ability to do
operations for multiple semaphores with one semop system call. Because a semaphore set can have
a maximum of semmsl semaphores, it is recommended in some situations to set
semopm equal to semmsl. Oracle recommends setting semopm to a value of no
less than 100.
File Handles :
When configuring the Linux server, it is critical to ensure that the maximum number of file handles is
large enough. The setting for file handles denotes the number of open files that you can have on the
Linux system. Use the following command to determine the maximum number of file handles for the
entire system:
# cat /proc/sys/fs/file-max
102312
Oracle recommends that the file handles for the entire system be set to at least 65536. We can query
the current usage of file handles by using the following:
# cat /proc/sys/fs/file-nr
3072 0 102312
The file-nr file displays three parameters:
* Total allocated file handles
* Currently used file handles
* Maximum file handles that can be allocated
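The three fields can be read into named shell variables directly from the file. A sketch, assuming a Linux /proc filesystem (on 2.6 kernels the second field reports unused, rather than used, handles):

```shell
# Sketch: read the three file-nr fields into named variables.
# Assumes Linux; fields are allocated, in-use/free, and maximum.
read ALLOCATED SECOND MAX_HANDLES < /proc/sys/fs/file-nr
echo "allocated=$ALLOCATED max=$MAX_HANDLES"
```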


If we need to increase the value in /proc/sys/fs/file-max, then make sure that the ulimit is set
properly. Usually for Linux 2.4 and 2.6 it is set to unlimited. Verify the ulimit setting by issuing the
ulimit command:
# ulimit
unlimited
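Note that a bare `ulimit` reports the file-size limit (`-f`) in most shells; the per-process open-file limit, which is what matters for file handles, is queried with `-n`. A sketch:

```shell
# Sketch: the open-file limits for the current shell.
# `ulimit -n` shows the soft limit; `ulimit -Hn` the hard limit.
SOFT=$(ulimit -n)
HARD=$(ulimit -Hn)
echo "soft=$SOFT hard=$HARD"
```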
IP Local Port Range :
Oracle strongly recommends setting the local port range ip_local_port_range for outgoing messages to
"1024 65000", which is needed for high-usage systems. This kernel parameter defines the local
port range from which TCP and UDP traffic choose ports.
The default value for ip_local_port_range is ports 32768 through 61000, which is inadequate for a
successful Oracle configuration. Use the following command to determine the value of
ip_local_port_range:
# cat /proc/sys/net/ipv4/ip_local_port_range
32768 61000
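The difference matters because each outgoing connection consumes one local port. Comparing the two ranges (illustrative arithmetic, not a command from the article):

```shell
# Sketch: number of usable local ports in the default vs recommended range.
DEFAULT_PORTS=$(( 61000 - 32768 + 1 ))
RECOMMENDED_PORTS=$(( 65000 - 1024 + 1 ))
echo "default=$DEFAULT_PORTS recommended=$RECOMMENDED_PORTS"
```

The recommended range roughly doubles the number of ports available for outgoing connections.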
Networking Settings :
With Oracle 9.2.0.1 and later, Oracle makes use of UDP as the default protocol on Linux for interprocess communication (IPC), such as Cache Fusion and Cluster Manager buffer transfers between
instances within the RAC cluster.
Oracle strongly suggests adjusting the default and maximum receive buffer size (SO_RCVBUF socket
option) to 1MB and the default and maximum send buffer size (SO_SNDBUF socket option) to
256KB. The receive buffers are used by TCP and UDP to hold received data until it is read by the
application. With TCP, the receive buffer cannot overflow because the peer is not allowed to send data
beyond the advertised window. UDP has no such flow control: datagrams that do not fit in the socket
receive buffer are discarded, so a fast sender can overwhelm a slow receiver. Use the following commands to determine the current
buffer size (in bytes) of each of the IPC networking parameters:
# cat /proc/sys/net/core/rmem_default
109568
# cat /proc/sys/net/core/rmem_max
131071
# cat /proc/sys/net/core/wmem_default
109568



# cat /proc/sys/net/core/wmem_max
131071
Setting Kernel Parameters for Oracle
If the value of any kernel parameter differs from the recommended value, it will need to be
modified. For this article, the following values need to be added to the
/etc/sysctl.conf file, which is read during the boot process.
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.shmall = 2097152
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 1048576
net.core.rmem_max = 1048576
net.core.wmem_default = 262144
net.core.wmem_max = 262144
After adding the above lines to the /etc/sysctl.conf file, they persist each time the system reboots.
If we would like to apply these kernel parameter changes to the running system without having
to reboot first, enter the following command:
# /sbin/sysctl -p
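After loading, the live values can be read back to confirm they took effect. A sketch for the semaphore settings, assuming a Linux /proc filesystem:

```shell
# Sketch: read back the four kernel.sem fields after running sysctl -p.
# The order in /proc/sys/kernel/sem is semmsl, semmns, semopm, semmni.
read SEMMSL SEMMNS SEMOPM SEMMNI < /proc/sys/kernel/sem
echo "semmsl=$SEMMSL semmns=$SEMMNS semopm=$SEMOPM semmni=$SEMMNI"
```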

HOW ORACLE WORKS?


An instance is running on the computer executing Oracle, called the database server.
A computer running an application (the local machine) runs the application in a user process.
The client application attempts to establish a connection to the server using the proper Net8 driver.
When the Oracle server detects the connection request from the client, it checks the client's
authentication; if authentication passes, the Oracle server creates a (dedicated) server process on behalf of the user
process. The user then executes a SQL statement and commits the transaction. For example, the user
changes a name in a row of a table. The server process receives the statement and checks the shared
pool for any shared SQL area that contains an identical SQL statement. If a shared SQL area is found,
the server process checks the user's access privileges to the requested data and the previously
existing shared SQL area is used to process the statement; if not, a new shared SQL area is allocated
for the statement so that it can be parsed and processed. The server process retrieves any necessary
data values from the actual datafile or those stored in the system global area. The server process
modifies data block in the system global area. The DBWn process writes modified blocks permanently
to disk when doing so is efficient. Because the transaction committed, the LGWR process immediately
records the transaction in the online redo log file. If the transaction is successful, the server process
sends a message across the network to the application. If it is not successful, an appropriate error
message is transmitted. Throughout this entire procedure, the other background processes run,
watching for conditions that require intervention.
Basics of Oracle Architecture
What is An Oracle Database?
Basically, there are two main components of Oracle: the database instance and the database itself. An
instance consists of memory structures (the SGA) and background processes.
Instance



An instance consists of memory structures and background processes. The memory structures
consist of the System Global Area (SGA) and the Program Global Area (PGA). The mandatory
background processes are Database Writer (DBWn), Log Writer (LGWR), Checkpoint (CKPT), System
Monitor (SMON), and Process Monitor (PMON). Optional background processes include
Archiver (ARCn), Recoverer (RECO), etc.
System Global Area
The SGA is the primary memory structure. It is broken into several parts: the Buffer
Cache, Shared Pool, Redo Log Buffer, Large Pool, and Java Pool.
Buffer Cache
The buffer cache stores copies of data blocks retrieved from datafiles. That is, when a user
retrieves data from the database, the data is stored in the buffer cache. Its size can be manipulated via the
DB_CACHE_SIZE parameter in the init.ora initialization parameter file.
Shared Pool
The shared pool is broken into two smaller memory areas: the Library Cache and the Dictionary Cache. The
library cache stores information about the commonly used SQL and PL/SQL statements and
is managed by a Least Recently Used (LRU) algorithm. It also enables the sharing of those statements
among users. The dictionary cache, on the other hand, stores information about object definitions
in the database, such as columns, tables, indexes, users, privileges, etc.
The shared pool size can be set via SHARED_POOL_SIZE parameter in init.ora initialization parameter
file.
Redo Log Buffer
Each DML statement (insert, update, and delete) executed by a user generates a redo entry.
What is a redo entry? It is information about all data changes made by users. That redo entry is
stored in the redo log buffer before it is written into the redo log files. To manipulate the size of the redo log
buffer, you can use the LOG_BUFFER parameter in the init.ora initialization parameter file.
Large Pool
The large pool is an optional area of memory in the SGA. It is used to relieve the burden placed on the
shared pool. It is also used for I/O processes. The large pool size can be set by the LARGE_POOL_SIZE
parameter in the init.ora initialization parameter file.
Java Pool
As its name suggests, the Java pool services parsing of Java commands. Its size can be set by the
JAVA_POOL_SIZE parameter in the init.ora initialization parameter file.
Oracle Background Processes
Oracle background processes are the processes behind the scenes that work together with the memory structures.
DBWn
The database writer (DBWn) process writes data from the buffer cache into the datafiles. Historically,
the database writer was named DBWR. But since some Oracle versions allow us to have more than
one database writer, the name was changed to DBWn, where n is a number from 0 to 9.
LGWR



Log writer (LGWR) process is similar to DBWn. It writes the redo entries from redo log buffer into the
redo log files.
CKPT
Checkpoint (CKPT) is a process that signals DBWn to write the data in the buffer cache into the
datafiles. It also updates the datafile and control file headers when a log file switch occurs.
SMON
The System Monitor (SMON) process recovers from a system crash or instance failure by applying
the entries in the redo log files to the datafiles.
PMON
The Process Monitor (PMON) process cleans up after failed processes by rolling back their
transactions and releasing other resources.
Database
A database can be broken up into two main structures: logical structures and physical structures.

Logical Structures
The logical units are tablespace, segment, extent, and data block.
Tablespace
A tablespace is a logical grouping of database objects. A database must have one or more tablespaces.
In Figure 3, we have three tablespaces: the SYSTEM tablespace, Tablespace 1, and Tablespace 2.
A tablespace is composed of one or more datafiles.
Segment
A tablespace is further broken into segments. A segment is used to store objects of the same type. That
is, every table in the database is stored in a specific segment (named a Data Segment) and every
index in the database is also stored in its own segment (named an Index Segment). The other segment
types are the Temporary Segment and the Rollback Segment.
Extent
A segment is further broken into extents. An extent consists of one or more data blocks. When a
database object is enlarged, an extent is allocated. Unlike a tablespace or a segment, an extent
cannot be named.
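The storage hierarchy is easy to quantify. With the 8 KB block size used later in this document, a hypothetical 1 MB extent holds:

```shell
# Sketch: data blocks per extent for an 8 KB block size and a hypothetical 1 MB extent.
BLOCK_SIZE=8192
EXTENT_SIZE=1048576
echo $(( EXTENT_SIZE / BLOCK_SIZE ))   # 128 blocks
```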
Data Block
A data block is the smallest unit of storage in the Oracle database. The data block size is a specific
number of bytes; every block within a tablespace has the same size.
Physical Structures
The physical structures are structures of an Oracle database (in this case the disk files) that are not
directly manipulated by users. The physical structure consists of datafiles, redo log files, and control
files.



Datafiles
A datafile is a file that corresponds with a tablespace. One datafile can be used by only one tablespace,
but one tablespace can have more than one datafile.
Redo Log Files
Redo log files are the files that store the redo entries generated by DML statements. They can be used for
recovery processes.
Control Files
Control files are used to store information about physical structure of database, such as datafiles size
and location, redo log files location, etc.
Starting up a database
First Stage: The Oracle engine starts an Oracle instance
When Oracle starts an instance, it reads the initialization parameter file to determine the values of
initialization parameters. Then, it allocates an SGA, which is a shared area of memory used for
database information, and creates background processes. At this point, no database is associated with
these memory structures and processes.
Second Stage: Mount the Database
To mount the database, the instance finds the database control files and opens them. Control files are
specified in the CONTROL_FILES initialization parameter in the parameter file used to start the
instance. Oracle then reads the control files to get the names of the database's datafiles and redo log
files.
At this point, the database is still closed and is accessible only to the database administrator. The
database administrator can keep the database closed while completing specific maintenance
operations. However, the database is not yet available for normal operations.
Final Stage: Database open for normal operation
Opening a mounted database makes it available for normal database operations. Usually, a database
administrator opens the database to make it available for general use.
When you open the database, Oracle opens the online datafiles and online redo log files. If a
tablespace was offline when the database was previously shut down, the tablespace and its
corresponding datafiles will still be offline when you reopen the database.
If any of the datafiles or redo log files are not present when you attempt to open the database, then
Oracle returns an error. You must perform recovery on a backup of any damaged or missing files
before you can open the database.
Open a Database in Read-Only Mode
You can open any database in read-only mode to prevent its data from being modified by user
transactions. Read-only mode restricts database access to read-only transactions, which cannot write
to the datafiles or to the redo log files.
Disk writes to other files, such as control files, operating system audit trails, trace files, and alert files,
can continue in read-only mode. Temporary tablespaces for sort operations are not affected by the
database being open in read-only mode. However, you cannot take permanent tablespaces offline
while a database is open in read-only mode. Also, job queues are not available in read-only mode.


Read-only mode does not restrict database recovery or operations that change the database's state
without generating redo data. For example, in read-only mode:
* Datafiles can be taken offline and online
* Offline datafiles and tablespaces can be recovered
* The control file remains available for updates about the state of the database
Shutdown the database
The three steps to shutting down a database and its associated instance are:
*Close the database.
*Unmount the database.
*Shut down the instance.
Close a Database
When you close a database, Oracle writes all database data and recovery data in the SGA to the
datafiles and redo log files, respectively. Next, Oracle closes all online datafiles and online redo log
files. At this point, the database is closed and inaccessible for normal operations. The control files
remain open after a database is closed but still mounted.
Close the Database by Terminating the Instance
In rare emergency situations, you can terminate the instance of an open database to close and
completely shut down the database instantaneously. This process is fast, because the operation of
writing all data in the buffers of the SGA to the datafiles and redo log files is skipped. The subsequent
reopening of the database requires recovery, which Oracle performs automatically.
Unmount a Database
After the database is closed, Oracle unmounts the database to disassociate it from the instance. At
this point, the instance remains in the memory of your computer.
After a database is unmounted, Oracle closes the control files of the database.
Shut Down an Instance
The final step in database shutdown is shutting down the instance. When you shut down an instance,
the SGA is removed from memory and the background processes are terminated.
Abnormal Instance Shutdown
In unusual circumstances, shutdown of an instance might not occur cleanly; all memory structures
might not be removed from memory or one of the background processes might not be terminated.
When remnants of a previous instance exist, a subsequent instance startup most likely will fail. In
such situations, the database administrator can force the new instance to start up by first removing
the remnants of the previous instance and then starting a new instance, or by issuing a SHUTDOWN
ABORT statement in Enterprise Manager.
Managing an Oracle Instance
When Oracle engine starts an instance, it reads the initialization parameter file to determine the
values of initialization parameters. Then, it allocates an SGA and creates background processes. At
this point, no database is associated with these memory structures and processes.
Type of initialization file:

Static (PFILE)                    Persistent (SPFILE)
Text file                         Binary file
Modified with an OS editor        Cannot be modified with an OS editor
Modifications made manually       Maintained by the server
Initialization parameter file content:
* Instance parameters
* Name of the database
* Memory structures of the SGA
* Name and location of control files
* Information about undo segments
* Location of udump, bdump and cdump files

Creating an SPFILE:
CREATE SPFILE=..ORA FROM PFILE=..ORA;
Note:
* Requires the SYSDBA privilege.
* Can be executed before or after instance startup.
Oracle Background Processes
An Oracle instance runs two types of processes
Server Process
Background Process
Before doing any work, a user must connect to an instance. When a user logs on to the Oracle server,
the Oracle engine creates a process called a server process. The server process communicates with the
Oracle instance on behalf of the user process.
Each background process is useful for a specific purpose and its role is well defined.
Background processes are invoked automatically when the instance is started.
Database Writer (DBWn)
Process Name: DBW0 through DBW9 and DBWa through DBWj
Max Processes: 20
This process writes the dirty buffers from the database buffer cache to the data files. One database writer
process is sufficient for most systems; more can be configured if essential. The initialisation
parameter, DB_WRITER_PROCESSES, specifies the number of database writer processes to start.
The DBWn process writes dirty buffers to disk under the following conditions:
*When a checkpoint is issued.
*When a server process cannot find a clean reusable buffer after scanning a threshold number of
buffers.
*Every 3 seconds.
*When a normal or temporary tablespace is placed offline or made read-only.
*When a table is dropped or truncated.
*When a tablespace is put into backup mode.
Log Writer (LGWR)
Process Name: LGWR
Max Processes: 1
The log writer process writes data from the redo log buffers to the redo log files on disk.
The writer is activated under the following conditions:
*When a transaction is committed, a System Change Number (SCN) is generated and tagged to it. Log
writer puts a commit record in the redo log buffer and writes it to disk immediately along with the
transaction's redo entries.
*Every 3 seconds.
*When the redo log buffer is 1/3 full.
*When DBWn signals the writing of redo records to disk. All redo records associated with changes in
the block buffers must be written to disk first (The write-ahead protocol). While writing dirty buffers, if
the DBWn process finds that some redo information has not been written, it signals the LGWR to write
the information and waits until the control is returned.
*Log writer will write synchronously to the redo log groups in a circular fashion. If any damage is
identified with a redo log file, the log writer will log an error in the LGWR trace file and the system
Alert Log. Sometimes, when additional redo log buffer space is required, the LGWR will even write
uncommitted redo log entries to release the held buffers. LGWR can also use group commits (multiple
committed transaction's redo entries taken together) to write to redo logs when a database is
undergoing heavy write operations.
The log writer must always be running for an instance.
System Monitor
Process Name: SMON
Max Processes: 1
This process is responsible for instance recovery, if necessary, at instance startup. SMON also cleans
up temporary segments that are no longer in use. SMON wakes up about every 5 minutes to perform
housekeeping activities. SMON must always be running for an instance.
Process Monitor
Process Name: PMON
Max Processes: 1


This process is responsible for performing recovery if a user process fails. It will roll back uncommitted
transactions. PMON is also responsible for cleaning up the database buffer cache and freeing resources
that were allocated to a process. PMON also registers information about the instance and dispatcher
processes with the network listener.
PMON wakes up every 3 seconds to perform housekeeping activities. PMON must always be running
for an instance.
Checkpoint Process
Process Name: CKPT
Max processes: 1
The checkpoint process signals the synchronization of all database files with the checkpoint information. It
ensures data consistency and faster database recovery in case of a crash.
CKPT ensures that all database changes present in the buffer cache at that point are written to the
data files; the actual writing is done by the Database Writer process. The datafile headers and the
control files are then updated with the latest SCN (when the checkpoint occurred).
The CKPT process is invoked under the following conditions:
*When a log switch is done.
*When the time specified by the initialization parameter LOG_CHECKPOINT_TIMEOUT (in seconds)
has passed between the incremental checkpoint and the tail of the log.
*When the number of OS blocks specified by the initialization parameter LOG_CHECKPOINT_INTERVAL
exists between the incremental checkpoint and the tail of the log.
*When the number of buffers specified by the initialization parameter FAST_START_IO_TARGET
required to perform roll-forward is reached.
*From Oracle 9i onwards, when the time specified by the initialization parameter
FAST_START_MTTR_TARGET is reached; this is in seconds and specifies the time allowed for a crash
recovery. The parameter FAST_START_MTTR_TARGET replaces LOG_CHECKPOINT_INTERVAL and
FAST_START_IO_TARGET, but these parameters can still be used.
*When the ALTER SYSTEM SWITCH LOGFILE command is issued.
*When the ALTER SYSTEM CHECKPOINT command is issued.
Incremental Checkpoints initiate the writing of recovery information to datafile headers and
controlfiles. Database writer is not signaled to perform buffer cache flushing activity here.
Archiver
Process Name: ARC0 through ARC9
Max Processes: 10
The ARCn process is responsible for writing the online redo log files to the mentioned archive log
destination after a log switch has occurred. ARCn is present only if the database is running in
archivelog mode and automatic archiving is enabled. The log writer process is responsible for starting
multiple ARCn processes when the workload increases. Unless ARCn completes the copying of a redo
log file, it is not released to log writer for overwriting.
The number of Archiver processes that can be invoked initially is specified by the initialization
parameter LOG_ARCHIVE_MAX_PROCESSES. The actual number of Archiver processes in use may
vary based on the workload.
Lock Monitor
Process Name: LMON
Max Processes: 1
Meant for Parallel server setups, Lock Monitor manages global locks and resources. It handles the
redistribution of instance locks whenever instances are started or shutdown. Lock Monitor also
recovers instance lock information prior to the instance recovery process. Lock Monitor co-ordinates
with the Process Monitor to recover dead processes that hold instance locks.
Lock processes
Process Name: LCK0 through LCK9
Max Processes: 10
Meant for Parallel server setups, the instance locks that are used to share resources between
instances are held by the lock processes.
Block Server Process
Process Name: BSP0 through BSP9
Max processes: 10
Meant for Parallel server setups, Block server Processes have to do with providing a consistent read
image of a buffer that is requested by a process of another instance, in certain circumstances.
Queue Monitor
Process Name: QMN0 through QMN9
Max Processes: 10
This is the Advanced Queuing time manager process. QMNn monitors the message queues. Failure of
a QMNn process will not cause the instance to fail.
Event Monitor
Process Name: EMN0/EMON
Max Processes: 1
This process is also related to Advanced Queuing, and is meant for allowing a publish/subscribe style
of messaging between applications.
Recoverer
Process Name: RECO
Max processes: 1
Intended for distributed recovery. All in-doubt transactions are recovered by this process in the
distributed database setup. RECO will connect to the remote database to resolve pending transactions.
Job Queue Processes
Process Name: J000 through J999 (Originally called SNPn processes)



Max Processes: 1000

Job queue processes carry out batch processing. All scheduled jobs are executed by these processes.
The initialization parameter JOB_QUEUE_PROCESSES specifies the maximum job processes that can
be run concurrently. If a job fails with some Oracle error, it is recorded in the alert file and a process
trace file is generated. Failure of the Job queue process will not cause the instance to fail.
Dispatcher
Process Name: Dnnn
Max Processes: -
Intended for Shared server setups (MTS). Dispatcher processes listen to and receive requests from
connected sessions and place them in the request queue for further processing. Dispatcher processes
also pick up outgoing responses from the result queue and transmit them back to the clients. Dnnn are
mediators between the client processes and the shared server processes. The maximum number of
Dispatcher processes can be specified using the initialization parameter MAX_DISPATCHERS.
Shared Server Processes
Process Name: Snnn
Max Processes: -
Intended for Shared server setups (MTS). These processes pick up requests from the call request
queue, process them and then return the results to a result queue. The number of shared server
processes to be created at instance startup can be specified using the initialization parameter
SHARED_SERVERS.
Parallel Execution Slaves
Process Name: Pnnn
Max Processes: set by PARALLEL_MAX_SERVERS
These processes are used for parallel processing, such as parallel execution of SQL statements or
parallel recovery. The maximum number of parallel processes that can be invoked is specified by
the initialization parameter PARALLEL_MAX_SERVERS.
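As a sketch (the value 8 is illustrative), the slave pool limit can be changed dynamically, and the slaves currently alive can be inspected:

```sql
-- Cap the number of parallel execution slaves (illustrative value)
ALTER SYSTEM SET parallel_max_servers = 8;
-- List the parallel slaves currently running in the instance
SELECT server_name, status FROM v$px_process;
```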
Trace Writer
Process Name: TRWR
Max Processes: 1
Trace writer writes trace files from an Oracle internal tracing facility.
Input/Output Slaves
Process Name: Innn
Max Processes: set by DBWR_IO_SLAVES
These processes are used to simulate asynchronous I/O on platforms that do not support it. The
initialization parameter DBWR_IO_SLAVES is set for this purpose.
Wakeup Monitor Process
Process Name: WMON
Max Processes: -


This process was available in older versions of Oracle to wake up other processes that were suspended
while waiting for an event to occur. This process is obsolete and has been removed.
Conclusion
With every release of Oracle, new background processes have been added and some existing ones
modified. These processes are the key to the proper working of the database. Any issues related to
background processes should be monitored and analyzed from the trace files generated and the alert
log.
Create Stand-alone 10g Database Manually
Step 1 Create an initSID.ora (Example: initTEST.ora) file in the $ORACLE_HOME/dbs/ directory.
Example: $ORACLE_HOME/dbs/initTEST.ora
Put the following entries in the initTEST.ora file:
##############################################################
background_dump_dest=<put BDUMP log destination>
core_dump_dest=<put CDUMP log destination>
user_dump_dest=<put UDUMP log destination>
control_files = (/<Destination>/control1.ctl, /<Destination>/control2.ctl, /<Destination>/control3.ctl)
undo_management = AUTO
undo_tablespace = UNDOTBS1
db_name = test
db_block_size = 8192
sga_max_size = 1073741824
sga_target = 1073741824
####################################################
Step 2 Create a password file
$ORACLE_HOME/bin/orapwd file=$ORACLE_HOME/dbs/pwd<sid>.ora password=<password>
entries=5
Step 3 Set your ORACLE_SID
$ export ORACLE_SID=test
$ export ORACLE_HOME=/<Destination>
Step 4 Run the following sqlplus command to connect to the database and start up the instance.
$sqlplus '/ as sysdba'
SQL> startup nomount
Step 5 Create the database using the following script.
create database test
logfile group 1 ('<Destination>/redo1.log') size 100M,
group 2 ('<Destination>/redo2.log') size 100M,
group 3 ('<Destination>/redo3.log') size 100M


character set WE8ISO8859P1


national character set utf8
datafile '<Destination>/system.dbf' size 500M autoextend on next 10M maxsize unlimited extent
management local
sysaux datafile '<Destination>/sysaux.dbf' size 100M autoextend on next 10M maxsize unlimited
undo tablespace undotbs1 datafile '<Destination>/undotbs1.dbf' size 100M
default temporary tablespace temp tempfile '<Destination>/temp01.dbf' size 100M;
Step 6 Run the scripts necessary to build views, synonyms, etc.:
CATALOG.SQL -- creates the views over the data dictionary tables and the dynamic performance
views.
CATPROC.SQL -- establishes PL/SQL functionality and creates many of the Oracle-supplied PL/SQL
packages.
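As in the OMF variant later in this document, both scripts can be run from SQL*Plus while connected as SYSDBA (the ? shorthand expands to $ORACLE_HOME):

```sql
@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql
```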

Create 10g OMF Database Manually


Step 1 Create an initSID.ora (Example: initTEST.ora) file in the $ORACLE_HOME/dbs/ directory.
Example: $ORACLE_HOME/dbs/initTEST.ora
Put the following entries in the initTEST.ora file:
##############################################################
background_dump_dest=<put BDUMP log destination>
core_dump_dest=<put CDUMP log destination>
user_dump_dest=<put UDUMP log destination>
control_files = (/<Destination>/control1.ctl, /<Destination>/control2.ctl, /<Destination>/control3.ctl)
undo_management = AUTO
undo_tablespace = UNDOTBS1
db_name = test
db_block_size = 8192
sga_max_size = 1073741824
sga_target = 1073741824
db_create_file_dest = /<Put DB File Destination> #OMF
db_create_online_log_dest_1 = /<Put first redo and control file destination> #OMF
db_create_online_log_dest_2 = /<Put second redo and control file destination> #OMF
db_recovery_file_dest = /<put flash recovery area destination> #OMF



###############################################################
Step 2 Create a password file
$ORACLE_HOME/bin/orapwd file=$ORACLE_HOME/dbs/pwd<sid>.ora password=<password>
entries=5
Step 3 Set your ORACLE_SID
export ORACLE_SID=test
export ORACLE_HOME=/<oracle home path>
Step 4 Run the following sqlplus command to connect to the database and start up the instance.
sqlplus '/ as sysdba'
SQL> startup nomount
Step 5 Create the database
create database test
character set WE8ISO8859P1
national character set utf8
undo tablespace undotbs1
default temporary tablespace temp;
Step 6 Run catalog and catproc
@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql

Managing Data Files


What is a data file?
Data files are physical files of the operating system that store the data of all logical structures in the
database. At least one data file must be created for each tablespace.
How to determine the number of datafiles?
At least one datafile is required for the SYSTEM tablespace. We can create separate datafiles for other
tablespaces. When we create a database, MAXDATAFILES may or may not be specified in the CREATE DATABASE



statement. Oracle assigns DB_FILES a default value of 200. We can also specify the number of
datafiles in the init parameter file.
When we start the Oracle instance, the DB_FILES initialization parameter determines the maximum
number of datafiles and reserves space for that many datafile entries in the SGA. We can change the
value of DB_FILES (by changing the initialization parameter setting), but the new value does not take
effect until you shut down and restart the instance.
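A minimal sketch of such a change, assuming the instance uses an spfile (the value 400 is illustrative):

```sql
-- Check the current limit
SHOW PARAMETER db_files
-- DB_FILES is static, so the change goes to the spfile only
ALTER SYSTEM SET db_files = 400 SCOPE = SPFILE;
-- The new value takes effect only after a restart
SHUTDOWN IMMEDIATE
STARTUP
```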

Important:
If the value of DB_FILES is too low, you cannot add datafiles beyond the DB_FILES limit.
Example: if the init parameter DB_FILES is set to 2, you cannot add more than 2 datafiles to your database.
If the value of DB_FILES is too high, memory is unnecessarily consumed.
When you issue a CREATE DATABASE or CREATE CONTROLFILE statement, the MAXDATAFILES
parameter specifies an initial size for the datafiles section of the control file. However, if you attempt
to add a new file whose number is greater than MAXDATAFILES but less than or equal to DB_FILES,
the control file will expand automatically so that the datafiles section can accommodate more files.
Note:
If you add new datafiles to a tablespace and do not fully specify the filenames, the database creates
the datafiles in the default database directory. Oracle recommends you always specify a fully qualified
name for a datafile. Unless you want to reuse existing files, make sure the new filenames do not
conflict with other files. Old files that have been previously dropped will be overwritten.
How to add a datafile to an existing tablespace?
alter tablespace <Tablespace_Name> add datafile '/............../......./file01.dbf' size 10m autoextend
on;
How to resize the datafile?
alter database datafile '/............../......./file01.dbf' resize 100M;
How to bring datafile online and offline?
alter database datafile '/............../......./file01.dbf' online;
alter database datafile '/............../......./file01.dbf' offline;
How to rename a datafile in a single tablespace?
Step:1 Take the tablespace that contains the datafiles offline. The database must be open.
alter tablespace <Tablespace_Name> offline normal;
Step:2 Rename the datafiles using the operating system.
Step:3 Use the ALTER TABLESPACE statement with the RENAME DATAFILE clause to change the
filenames within the database.
alter tablespace <Tablespace_Name> rename datafile '/...../..../..../user.dbf' to
'/..../..../.../users1.dbf';



Step 4: Back up the database. After making any structural changes to a database, always perform an
immediate and complete backup.
How to relocate datafiles in a single tablespace?
Step:1 Use the following query to find the specific file names and sizes.
select file_name,bytes from dba_data_files where tablespace_name='<tablespace_name>';
Step:2 Take the tablespace containing the datafiles offline:
alter tablespace <Tablespace_Name> offline normal;
Step:3 Copy the datafiles to their new locations and rename them using the operating system.
Step:4 Rename the datafiles within the database.
ALTER TABLESPACE <Tablespace_Name> RENAME DATAFILE
'/u02/oracle/rbdb1/users01.dbf', '/u02/oracle/rbdb1/users02.dbf'
TO '/u03/oracle/rbdb1/users01.dbf','/u04/oracle/rbdb1/users02.dbf';
Step:5 Back up the database. After making any structural changes to a database, always perform an
immediate and complete backup.
How to rename and relocate datafiles in multiple tablespaces?
Step:1 Ensure that the database is mounted but closed.
Step:2 Copy the datafiles to be renamed to their new locations and new names, using the operating
system.
Step:3 Use ALTER DATABASE to rename the file pointers in the database control file.
ALTER DATABASE
RENAME FILE
'/u02/oracle/rbdb1/sort01.dbf',
'/u02/oracle/rbdb1/user3.dbf'
TO '/u02/oracle/rbdb1/temp01.dbf',
'/u02/oracle/rbdb1/users03.dbf';
Step:4 Back up the database. After making any structural changes to a database, always perform an
immediate and complete backup.
How to drop a datafile from a tablespace



Important : Oracle does not provide an interface for dropping datafiles in the same way you would
drop a schema object such as a table or a user.

Reasons why you want to remove a datafile from a tablespace:


You may have mistakenly added a file to a tablespace.
You may have made the file much larger than intended and now want to remove it.
You may be involved in a recovery scenario and the database won't start because a datafile is
missing.
Important : Once the DBA creates a datafile for a tablespace, the datafile cannot be removed. If you
want to do any critical operation like dropping datafiles, ensure you have a full backup of the
database.
Step: 1 Determining how many datafiles make up a tablespace
To determine how many and which datafiles make up a tablespace, you can use the following query:
SELECT file_name, tablespace_name FROM dba_data_files WHERE tablespace_name ='<name of
tablespace>';
Case 1
If you have only one datafile in the tablespace and you want to remove it, you can simply drop the
entire tablespace using the following:
DROP TABLESPACE <tablespace name> INCLUDING CONTENTS;
The above command will remove the tablespace, the datafile, and the tablespace's contents from the
data dictionary.
Important : Oracle will not drop the physical datafile after the DROP TABLESPACE command. This
action needs to be performed at the operating system.
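In 10g, the DROP TABLESPACE statement can also remove the operating system files itself, as the temporary tablespace section later in this document does:

```sql
DROP TABLESPACE <tablespace name> INCLUDING CONTENTS AND DATAFILES;
```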
Case 2
If you have more than one datafile in the tablespace, you want to remove all the datafiles, and you do
not need the information contained in that tablespace, then use the same command as above:
DROP TABLESPACE <tablespace name> INCLUDING CONTENTS;
Case 3
If you have more than one datafile in the tablespace and you want to remove only some (not all) of
the datafiles, or you want to keep the objects that reside in the other datafile(s) that are part of this
tablespace, then you must export all the objects inside the tablespace.
Step: 1 Gather information on the current datafiles within the tablespace by running the following
query in SQL*Plus:
SELECT file_name, tablespace_name FROM dba_data_files WHERE tablespace_name ='<name of
tablespace>';



Step: 2 You now need to identify which objects are inside the tablespace for the purpose of running
an export. To do this, run the following query:
SELECT owner, segment_name, segment_type FROM dba_segments WHERE
tablespace_name='<name of tablespace>';
Step : 3 Now, export all the objects that you wish to keep.
Step : 4 Once the export is done, issue:
DROP TABLESPACE <tablespace name> INCLUDING CONTENTS;
Step : 5 Delete the datafiles belonging to this tablespace using the operating system.
Step : 6 Recreate the tablespace with the datafile(s) desired, then import the objects into that
tablespace.

Case : 4
If you do not want to follow any of these procedures, there are other things that can be done besides
dropping the tablespace.
If the reason you wanted to drop the file is because you mistakenly created the file of the
wrong size, then consider using the RESIZE command.
If you really added the datafile by mistake, and Oracle has not yet allocated any space within
this datafile, then you can use the ALTER DATABASE DATAFILE '<filename>' RESIZE <size>; command to make
the file smaller than 5 Oracle blocks. If the datafile is resized to smaller than 5 Oracle blocks, then it
will never be considered for extent allocation. At some later date, the tablespace can be rebuilt to
exclude the incorrect datafile.
Important : The ALTER DATABASE DATAFILE <datafile name> OFFLINE DROP command is not
meant to allow you to remove a datafile. What the command really means is that you are offlining the
datafile with the intention of dropping the tablespace.
Important : If you are running in archivelog mode, you can also use: ALTER DATABASE DATAFILE
<datafile name> OFFLINE; instead of OFFLINE DROP. Once the datafile is offline, Oracle no longer
attempts to access it, but it is still considered part of that tablespace. This datafile is marked only as
offline in the controlfile and there is no SCN comparison done between the controlfile and the datafile
during startup (This also allows you to startup a database with a non-critical datafile missing). The
entry for that datafile is not deleted from the controlfile to give us the opportunity to recover that
datafile.
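A sketch of bringing such an offline datafile back, assuming the database runs in archivelog mode and the required archived logs are still available:

```sql
-- Apply redo to the offline datafile, then bring it back online
RECOVER DATAFILE '<datafile name>';
ALTER DATABASE DATAFILE '<datafile name>' ONLINE;
```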
Managing Control Files
A control file is a small binary file that records the physical structure of the database: the database
name, the names and locations of associated datafiles and online redo log files, the timestamp of
database creation, the current log sequence number, and checkpoint information.
Note:

Without the control file, the database cannot be mounted.

You should create two or more copies of the control file during database creation.
Role of Control File:



When the database instance mounts, Oracle recognizes all files listed in the control file and later opens
them. Oracle writes to and maintains all listed control files during database operation.
Important:

If you do not specify files for CONTROL_FILES before database creation, and you are not using
the Oracle Managed Files feature, Oracle creates a control file in the <DISK>:\ORACLE_HOME\DATABASE\
location and uses a default filename. The default name is operating system specific.

Every Oracle database should have at least two control files, each stored on a different disk. If
a control file is damaged due to a disk failure, the associated instance must be shut down.

Oracle writes to all filenames listed for the initialization parameter CONTROL_FILES in the
database's initialization parameter file.

The first file listed in the CONTROL_FILES parameter is the only file read by the Oracle
database server during database operation.

If any of the control files become unavailable during database operation, the instance becomes
inoperable and should be aborted.

How to create control files at the time of database creation:


The initial control files of an Oracle database are created when you issue the CREATE DATABASE
statement. The names of the control files are specified by the CONTROL_FILES parameter in the
initialization parameter file used during database creation.
How to Create Additional Copies, Renaming, and Relocating Control Files
Step:1 Shut down the database.
Step:2 Copy an existing control file to a different location, using operating system commands.
Step:3 Edit the CONTROL_FILES parameter in the database's initialization parameter file to add the
new control file's name, or to change the existing control filename.
Step:4 Restart the database.
When do you create new control files?

All control files for the database have been permanently damaged and you do not have a
control file backup.

You want to change one of the permanent database parameter settings originally specified in
the CREATE DATABASE statement. These settings include the database's name and the following
parameters: MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY, MAXDATAFILES, and
MAXINSTANCES.



Steps for Creating New Control Files
Step:1 Make a list of all datafiles and online redo log files of the database.
SELECT MEMBER FROM V$LOGFILE;
SELECT NAME FROM V$DATAFILE;
SELECT VALUE FROM V$PARAMETER WHERE NAME = 'CONTROL_FILES';
Step:2 Shut down the database.
Step:3 Back up all datafiles and online redo log files of the database.
Step:4 Start up a new instance, but do not mount or open the database:
STARTUP NOMOUNT
Step:5 Create a new control file for the database using the CREATE CONTROLFILE statement.
Example:
CREATE CONTROLFILE REUSE DATABASE "<DB_NAME>" NORESETLOGS NOARCHIVELOG
MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXDATAFILES 100
MAXINSTANCES 1
MAXLOGHISTORY 226
LOGFILE
GROUP 1 '<DISK>:\Directory\REDO01.LOG' SIZE 5024K,
GROUP 2 '<DISK>:\Directory\REDO02.LOG' SIZE 5024K,
GROUP 3 '<DISK>:\Directory\REDO03.LOG' SIZE 5024K
# STANDBY LOGFILE
DATAFILE
'<DISK>:\Directory\SYSTEM.DBF',



'<DISK>:\Directory\UNDOTBS.DBF'
CHARACTER SET WE8MSWIN1252
;
Step:6 Open the database using one of the following methods:

If you specified NORESETLOGS when creating the control file, use the following command: ALTER
DATABASE OPEN;

If you specified RESETLOGS when creating the control file, use the ALTER DATABASE
statement, indicating RESETLOGS.
ALTER DATABASE OPEN RESETLOGS;

TIPS:
When creating a new control file, select the RESETLOGS option if you have lost any online redo log
groups in addition to control files. In this case, you will need to recover from the loss of the redo logs.
You must also specify the RESETLOGS option if you have renamed the database. Otherwise, select the
NORESETLOGS option.
Backing Up Control Files
Method 1:
Back up the control file to a binary file (duplicate of existing control file) using the following
statement:
ALTER DATABASE BACKUP CONTROLFILE TO '<DISK>:\Directory\control.bkp';
Method 2:
Produce SQL statements that can later be used to re-create your control file:
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
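In 10g, the trace script produced by Method 2 is written to the directory pointed to by USER_DUMP_DEST, which can be located with:

```sql
SELECT value FROM v$parameter WHERE name = 'user_dump_dest';
```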
How to retrieve information related to Control File:
V$DATABASE
Displays database information from the control file
V$CONTROLFILE
Lists the names of control files



V$CONTROLFILE_RECORD_SECTION
Displays information about control file record sections

Managing Redo Log Files


The redo log consists of two or more preallocated files that store all changes made to the database.
Every instance of an Oracle database has an associated online redo log to protect the database in case
of an instance failure.

Main points to consider before creating redo log files:

Members of the same group should be stored on separate disks so that no single disk failure can
cause LGWR and the database instance to fail.
Set the archive destination to a disk separate from the redo log members to avoid contention
between LGWR and ARCn.
With mirrored groups of online redo logs, all members of the same group must be the same
size.
What are the parameters related to redo log files?
Parameters related to redo log files are
MAXLOGFILES
MAXLOGMEMBERS
The MAXLOGFILES and MAXLOGMEMBERS parameters are defined at database creation. You can
increase these parameters by recreating the control file.
How do you create an online redo log group?
Alter database add logfile group <group number> ('<DISK>:\Directory\<LOG_FILE_NAME>.log',
'<DISK>:\Directory\<LOG_FILE_NAME>.log') size 500K;
How to check the status of added redo log group?
Select * from v$log;
Interpretation:
Here you will observe that the status is UNUSED, which means this redo log file has not yet been used
by Oracle. ARC is the archived column in v$log; it is YES by default when you create a redo log file,
and it returns to NO once the file is used while the system is not in archivelog mode. Sequence# 0
also indicates that the file has not been used yet.
How to create an online redo log member?
alter database add logfile member
'<DISK>:\Directory\<LOG_FILE_NAME>.log', '<DISK>:\Directory\<LOG_FILE_NAME>.log' to group
<GROUP NUMBER>;
How to rename and relocate online redo log members?



Important: Take a backup before renaming and relocating.
Step:1 Shut down the database.
Step:2 Start up the database with STARTUP MOUNT.
Step:3 Copy the desired redo log files to the new location. You can change the name of a redo log file
in the new location.
Step:4 Alter database rename file '<DISK>:\Directory\<LOG_FILE_NAME>.log' to
'<New path>\<LOG_FILE_NAME>.log';
Step:5 Alter database open;
Step:6 Shut down the database normally and take a backup.
How to drop online redo log group?

Important:
You must have at least two online groups.
You cannot drop an active online redo log group. If it is active, switch it with alter system switch
logfile before dropping.
Also make sure that the online redo log group is archived (if archiving is enabled).
Syntax:
If you want to drop log group:
Alter database drop logfile group <GROUP_NUMBER>;
If you want to drop a logfile member:
Alter database drop logfile member '<DISK>:\Directory\<LOG_FILE_NAME>.log';
How to view online redo log information?
SELECT * FROM V$LOG;
GROUP# THREAD# SEQ   BYTES   MEMBERS ARC STATUS   FIRST_CHANGE# FIRST_TIM
------ ------- ----- ------- ------- --- -------- ------------- ---------
1      1       10605 1048576 1       YES ACTIVE   11515628      16-APR-00
2      1       10606 1048576 1       NO  CURRENT  11517595      16-APR-00
3      1       10603 1048576 1       YES INACTIVE 11511666      16-APR-00



4 1 10604 1048576 1 YES INACTIVE 11513647 16-APR-00
SELECT * FROM V$LOGFILE;
GROUP# STATUS  MEMBER
------ ------- ----------------------------------
1              D:\ORANT\ORADATA\IDDB2\REDO04.LOG
2              D:\ORANT\ORADATA\IDDB2\REDO03.LOG
3              D:\ORANT\ORADATA\IDDB2\REDO02.LOG
4              D:\ORANT\ORADATA\IDDB2\REDO01.LOG
If STATUS is blank for a member, then the file is in use.
Managing Temporary Tablespace
First we will discuss the use of a temporary tablespace. We use it to manage space for database sort
operations. For example, joining two large tables requires space for the sort operation because Oracle
cannot always do the sorting in memory. Such a sort operation is done in the temporary tablespace.
We must assign a temporary tablespace to each user in the database; if we don't assign a temporary
tablespace to a user, Oracle allocates sort space in the SYSTEM tablespace by default.

Important:
A temporary tablespace cannot contain permanent objects and therefore doesn't need to be
backed up.
When we create a TEMPFILE, Oracle only writes to the header and last block of the file. This is
why it is much quicker to create a TEMPFILE than to create a normal database file.
TEMPFILEs are not recorded in the database's control file.
We cannot remove datafiles from a tablespace until we drop the entire tablespace, but we can
remove a TEMPFILE from a database:
SQL> ALTER DATABASE TEMPFILE '<disk>:\<directory>\<Tablespace Name>.dbf' DROP INCLUDING
DATAFILES;
Except for adding a tempfile, you cannot use the ALTER TABLESPACE statement for a locally
managed temporary tablespace (operations like rename, set to read only, recover, etc. will fail).
How to create a temporary tablespace?
CREATE TEMPORARY TABLESPACE temp
TEMPFILE '<disk>:\<directory>\<Tablespace Name>.dbf' size 20M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 16M;
For best performance, the UNIFORM SIZE must be a multiple of the SORT_AREA_SIZE parameter.
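The current value can be checked before choosing a uniform extent size:

```sql
SELECT name, value FROM v$parameter WHERE name = 'sort_area_size';
```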



How to define a default temporary tablespace?
We can define a Default Temporary Tablespace at database creation time, or by issuing an "ALTER
DATABASE" statement:
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;

Important:
The default Default Temporary Tablespace is SYSTEM.
Each database can be assigned one and only one Default Temporary Tablespace.
Temporary Tablespace is automatically assigned to users.
Restriction:

The following restrictions apply to default temporary tablespaces:


The Default Temporary Tablespace must be of type TEMPORARY
The DEFAULT TEMPORARY TABLESPACE cannot be taken off-line
The DEFAULT TEMPORARY TABLESPACE cannot be dropped until you create another one.
How to see the default temporary tablespace for a database?
SELECT * FROM DATABASE_PROPERTIES where PROPERTY_NAME='DEFAULT_TEMP_TABLESPACE';
How to monitor temporary tablespaces and sorting?
Use the following queries to view temp file information:
Select * from dba_temp_files; or Select * from v$tempfile;
Use the following queries to monitor temporary segments:
Select * from v$sort_segments; or Select * from v$sort_usage;
Use the following query for free space in the tablespace:
select TABLESPACE_NAME, BYTES_USED, BYTES_FREE from V$TEMP_SPACE_HEADER;
How to drop and recreate a temporary tablespace? (Method)
This should be performed during off hours with no users logged on performing work.
If you are working with a temporary tablespace that is NOT the default temporary tablespace for the
database, this process is very simple. Simply drop and recreate the temporary tablespace:
Step:1 Drop the Tablespace
DROP TABLESPACE temp;



Tablespace dropped.
Step: 2 Create new temporary tablespace.
CREATE TEMPORARY TABLESPACE TEMP
TEMPFILE '<disk>:\<directory>\<Tablespace Name>.dbf' SIZE 500M REUSE
AUTOEXTEND ON NEXT 100M MAXSIZE unlimited
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
How to drop and recreate the default temporary tablespace? (Method)
You will know fairly quickly if the tablespace is a default temporary tablespace when you are greeted
with the following exception:
DROP TABLESPACE temp;
drop tablespace temp
*
ERROR at line 1:
ORA-12906: cannot drop default temporary tablespace
Step: 1 Create another temporary tablespace.
CREATE TEMPORARY TABLESPACE temp2
TEMPFILE '<disk>:\<directory>\<Tablespace Name>.dbf' SIZE 5M REUSE
AUTOEXTEND ON NEXT 1M MAXSIZE unlimited
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
Tablespace created.
Step: 2 Make default tablespace.
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp2;
Database altered.
Step: 3 Drop the old default tablespace.
DROP TABLESPACE temp INCLUDING CONTENTS AND DATAFILES;



Tablespace dropped.
Most Important:
You do not need to assign a temporary tablespace while creating a database user. The temporary
tablespace is automatically assigned. The name of the temporary tablespace is given by the
DEFAULT_TEMP_TABLESPACE property in the data dictionary view DATABASE_PROPERTIES.
Example:
Step:1 Create database user
create user test identified by test default TABLESPACE users;
User created.
Step: 2 View information
SELECT USERNAME, DEFAULT_TABLESPACE, TEMPORARY_TABLESPACE FROM
DBA_USERS WHERE USERNAME='TEST';
USERNAME   DEFAULT_TABLESPACE   TEMPORARY_TABLESPACE
--------   ------------------   --------------------
TEST       USERS                TEMP


NOTE: Temporary Tablespace TEMP is automatically assigned to the user TEST.
Certain restrictions:

The default temporary tablespace cannot be dropped.

The default temporary tablespace cannot be taken offline.

Managing UNDO TABLESPACE


Before a transaction commits, Oracle Database keeps a record of the transaction's actions, because
Oracle needs this information to roll back, or undo, the changes.
What are the main init.ora parameters for Automatic Undo Management?
UNDO_MANAGEMENT:
The default value for this parameter is MANUAL. If you want to set the database in an automated
mode, set this value to AUTO. (UNDO_MANAGEMENT = AUTO)



UNDO_TABLESPACE:
UNDO_TABLESPACE defines the tablespace that is to be used as the undo tablespace. If no value is
specified, Oracle uses the system rollback segment at startup. This value is dynamic and can be
changed online (UNDO_TABLESPACE = <Tablespace_Name>).
UNDO_RETENTION:
The default value for this parameter is 900 seconds. This value specifies the amount of time undo is
kept in the tablespace after the transaction commits; the Flashback Query feature introduced in Oracle
needs this information to create a read-consistent copy of the data in the past. (Undo for uncommitted
transactions is always retained, regardless of this setting.)
UNDO_SUPPRESS_ERRORS:
The default value is FALSE. Set this to TRUE to suppress the errors generated when manual undo
management SQL operations are issued while running in automatic undo management mode.
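Putting the parameters above together, a minimal init.ora fragment for automatic undo management might look like this (values illustrative):

```
undo_management = AUTO
undo_tablespace = UNDOTBS1
undo_retention = 900
```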
How to create UNDO tablespaces?
An UNDO tablespace can be created at database creation time or added to an existing database using
the CREATE UNDO TABLESPACE command.
Scripts at the time of Database creation:
CREATE DATABASE <DB_NAME>
MAXINSTANCES 1
MAXLOGHISTORY 1
MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXDATAFILES 100
DATAFILE '<DISK>:\Directory\<FILE_NAME>.DBF' SIZE 204800K REUSE
AUTOEXTEND ON NEXT 20480K MAXSIZE 32767M
UNDO TABLESPACE "<UNDO_TABLESPACE_NAME>"
DATAFILE '<DISK>:\DIRECTORY\<FILE_NAME>.DBF' SIZE 1178624K REUSE
AUTOEXTEND ON NEXT 1024K MAXSIZE 32767M
CHARACTER SET WE8MSWIN1252



NATIONAL CHARACTER SET AL16UTF16
LOGFILE GROUP 1 ('<DISK>:\DIRECTORY\<FILE_NAME>.LOG') SIZE 5024K,
GROUP 2 ('<DISK>:\DIRECTORY\<FILE_NAME>.LOG') SIZE 5024K,
GROUP 3 ('<DISK>:\DIRECTORY\<FILE_NAME>.LOG') SIZE 5024K;
Scripts after creating Database:
CREATE UNDO TABLESPACE "<UNDO_TABLESPACE_NAME>"
DATAFILE '<DISK>:\DIRECTORY\<FILE_NAME>.DBF' SIZE 1178624K REUSE
AUTOEXTEND ON;
How to drop an undo tablespace?
You cannot drop an active undo tablespace; that is, an undo tablespace can only be dropped if it is not
currently used by any instance. Use the DROP TABLESPACE statement to drop an undo tablespace;
all contents of the undo tablespace are removed.
Example:
DROP TABLESPACE <UNDO_TABLESPACE_NAME> including contents;
How to switch undo tablespaces?
We can switch from one undo tablespace to another. Because the UNDO_TABLESPACE initialization
parameter is a dynamic parameter, the ALTER SYSTEM SET statement can be used to assign a new
undo tablespace.
Step 1: Create another UNDO TABLESPACE
CREATE UNDO TABLESPACE "<ANOTHER_UNDO_TABLESPACE>"
DATAFILE '<DISK>:\Directory\<FILE_NAME>.DBF' SIZE 1178624K REUSE
AUTOEXTEND ON;
Step 2: Switch to the new undo tablespace:
alter system set UNDO_TABLESPACE=<UNDO_TABLESPACE>;
Step 3: Drop old UNDO TABLESPACE
drop tablespace <UNDO_TABLESPACE> including contents;
IMPORTANT:



The database is online while the switch operation is performed, and user transactions can be executed
while this command is being executed. When the switch operation completes successfully, all
transactions started after the switch operation began are assigned to transaction tables in the new
undo tablespace.
The switch operation does not wait for transactions in the old undo tablespace to commit. If there are
any pending transactions in the old undo tablespace, the old undo tablespace enters a PENDING
OFFLINE mode (status). In this mode, existing transactions can continue to execute, but undo records
for new user transactions cannot be stored in this undo tablespace.
An undo tablespace can remain in this PENDING OFFLINE mode even after the switch operation
completes successfully. A PENDING OFFLINE undo tablespace cannot be used by another instance, nor
can it be dropped. Eventually, after all active transactions have committed, the undo tablespace
automatically goes from PENDING OFFLINE mode to OFFLINE mode. From then on, the undo
tablespace is available to other instances (in an Oracle Real Application Clusters environment).
If the parameter value for UNDO_TABLESPACE is set to '' (two single quotes), the current undo
tablespace is switched out without switching in any other undo tablespace. This can be used, for
example, to unassign an undo tablespace in the event that you want to revert to manual undo
management mode.
The following example unassigns the current undo tablespace:
ALTER SYSTEM SET UNDO_TABLESPACE = '';
How to Monitor Undo Space?
The V$UNDOSTAT view is useful for monitoring the effects of transaction execution on undo space in
the current instance. Statistics are available for undo space consumption, transaction concurrency, and
length of queries in the instance.
The following example shows the results of a query on the V$UNDOSTAT view.
SELECT BEGIN_TIME, END_TIME, UNDOTSN, UNDOBLKS, TXNCOUNT,
MAXCONCURRENCY AS "MAXCON" FROM V$UNDOSTAT;
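The statistics in V$UNDOSTAT can also be used to estimate how large the undo tablespace needs to be for a given retention. The following is a minimal sketch of the commonly used rule of thumb (undo size ≈ UNDO_RETENTION × peak undo blocks per second × block size); it assumes the standard V$PARAMETER and V$UNDOSTAT views:

```sql
-- Sketch: estimated undo space (MB) =
--   UNDO_RETENTION (sec) * peak undo blocks/sec * DB_BLOCK_SIZE
SELECT (ur.retention * pk.undo_blks_per_sec * bs.block_size) / 1024 / 1024
         AS estimated_undo_mb
FROM   (SELECT TO_NUMBER(value) AS retention
        FROM   v$parameter WHERE name = 'undo_retention') ur,
       (SELECT MAX(undoblks / ((end_time - begin_time) * 86400))
                 AS undo_blks_per_sec
        FROM   v$undostat) pk,
       (SELECT TO_NUMBER(value) AS block_size
        FROM   v$parameter WHERE name = 'db_block_size') bs;
```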

Important Queries related to Tablespaces


How to retrieve tablespace default storage Parameters?
SELECT TABLESPACE_NAME "TABLESPACE",
INITIAL_EXTENT "INITIAL_EXT",
NEXT_EXTENT "NEXT_EXT",
MIN_EXTENTS "MIN_EXT",
MAX_EXTENTS "MAX_EXT",
PCT_INCREASE
FROM DBA_TABLESPACES;
TABLESPACE INITIAL_EXT NEXT_EXT MIN_EXT MAX_EXT PCT_INCREASE
---------- ----------- -------- ------- ------- ------------
RBS            1048576  1048576       2      40            0
SYSTEM          106496   106496       1      99            1
TEMP            106496   106496       1      99            0
TESTTBS          57344    16384       2      10            1
USERS            57344    57344       1      99            1
How to retrieve information about tablespaces and their associated datafiles?

SELECT FILE_NAME, BLOCKS, TABLESPACE_NAME
FROM DBA_DATA_FILES;

FILE_NAME                        BLOCKS TABLESPACE_NAME
------------------------------- ------- ---------------
/U02/ORACLE/IDDB3/RBS01.DBF        1536 RBS
/U02/ORACLE/IDDB3/SYSTEM01.DBF     6586 SYSTEM
/U02/ORACLE/IDDB3/TEMP01.DBF       6400 TEMP
/U02/ORACLE/IDDB3/TESTTBS01.DBF    6400 TESTTBS
/U02/ORACLE/IDDB3/USERS01.DBF       384 USERS
How to retrieve statistics for free space (extents) of each tablespace?
SELECT TABLESPACE_NAME "TABLESPACE", FILE_ID,
COUNT(*) "PIECES",
MAX(blocks) "MAXIMUM",
MIN(blocks) "MINIMUM",
AVG(blocks) "AVERAGE",
SUM(blocks) "TOTAL"
FROM DBA_FREE_SPACE
GROUP BY TABLESPACE_NAME, FILE_ID;

TABLESPACE FILE_ID PIECES MAXIMUM MINIMUM AVERAGE TOTAL
---------- ------- ------ ------- ------- ------- -----
RBS              2      1     955     955     955   955
SYSTEM           1      1     119     119     119   119
TEMP             4      1    6399    6399    6399  6399
TESTTBS          5      5    6364       3    1278  6390
USERS            3      1     363     363     363   363
PIECES shows the number of free space extents in the tablespace file, MAXIMUM and MINIMUM show
the largest and smallest contiguous area of space in database blocks, AVERAGE shows the average
size in blocks of a free space extent, and TOTAL shows the amount of free space in each tablespace
file in blocks. This query is useful when you are going to create a new object or you know that a
segment is about to extend, and you want to make sure that there is enough space in the containing
tablespace.
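A related summary that is often useful alongside the free-space query is total versus free space per tablespace. The following is a minimal sketch assuming the standard DBA_DATA_FILES and DBA_FREE_SPACE dictionary views:

```sql
-- Sketch: total allocated vs. free megabytes per tablespace
SELECT d.tablespace_name,
       ROUND(SUM(d.bytes) / 1024 / 1024)         AS total_mb,
       ROUND(NVL(f.free_bytes, 0) / 1024 / 1024) AS free_mb
FROM   dba_data_files d
       LEFT JOIN (SELECT tablespace_name, SUM(bytes) AS free_bytes
                  FROM   dba_free_space
                  GROUP  BY tablespace_name) f
         ON f.tablespace_name = d.tablespace_name
GROUP  BY d.tablespace_name, f.free_bytes
ORDER  BY d.tablespace_name;
```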
Managing Tablespaces
A tablespace is a logical storage unit. It is called logical because a tablespace is not visible in the
file system; Oracle stores data physically in datafiles. A tablespace consists of one or more datafiles.
Types of tablespaces

System tablespace:
Created with the database
Required in all databases
Contains the data dictionary

Non-system tablespaces:
Hold separate undo, temporary, application data, and application index segments
Control the amount of space allocated to users' objects
Enable more flexibility in database administration
How to Create a Tablespace?
CREATE TABLESPACE "tablespace name"
DATAFILE clause SIZE ... REUSE
MINIMUM EXTENT (this ensures that every used extent size in the tablespace is a multiple of the
integer)
BLOCKSIZE
LOGGING | NOLOGGING (LOGGING: by default, a tablespace has all changes written to redo; NOLOGGING:
the tablespace does not have all changes written to redo)
ONLINE | OFFLINE (OFFLINE: the tablespace is unavailable immediately after creation)
PERMANENT | TEMPORARY (PERMANENT: the tablespace can be used to hold permanent objects; TEMPORARY:
the tablespace can be used to hold temporary objects)
EXTENT MANAGEMENT clause
Example:
CREATE TABLESPACE "USER1"
DATAFILE 'C:\LOCAL\ORADATA\USER_DATA.DBF' SIZE 10m REUSE
BLOCKSIZE 8192
LOGGING
ONLINE
PERMANENT
EXTENT MANAGEMENT LOCAL
How to manage space in a Tablespace?
Tablespaces allocate space in extents.
Locally managed tablespace:
The extents are managed within the tablespace via bitmaps. In a locally managed tablespace, extent
information is stored in the datafile headers, not in the data dictionary tables. The advantage of a
locally managed tablespace is that no recursive DML is generated against the data dictionary (reducing
contention on dictionary tables) and no undo is generated when space allocation or deallocation occurs.

Extent Management [Local | Dictionary]
The storage parameters NEXT, PCTINCREASE, MINEXTENTS, MAXEXTENTS, and DEFAULT STORAGE
are not valid for segments stored in locally managed tablespaces.
To create a locally managed tablespace, you specify LOCAL in the extent management clause of the
CREATE TABLESPACE statement. You then have two options. You can have Oracle manage extents for
you automatically with the AUTOALLOCATE option, or you can specify that the tablespace is managed
with uniform extents of a specific size (UNIFORM SIZE).
If the tablespace is expected to contain objects of varying sizes requiring different extent sizes and
having many extents, then AUTOALLOCATE is the best choice.
If you do not specify either AUTOALLOCATE or UNIFORM with the LOCAL parameter, then
AUTOALLOCATE is the default.
Dictionary Managed tablespace
When we declare a tablespace as dictionary managed, the data dictionary manages the extents.
The Oracle server updates the appropriate tables (sys.fet$ and sys.uet$) in the data dictionary
whenever an extent is allocated or deallocated.
How to Create a Locally Managed Tablespace?
The following statement creates a locally managed tablespace named USERS, where AUTOALLOCATE
causes Oracle to automatically manage extent size.
CREATE TABLESPACE users
DATAFILE 'C:\LOCAL\ORADATA\USER_DATA.DBF' SIZE 50M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
Alternatively, this tablespace could be created specifying the UNIFORM clause. In this example, a 512K
extent size is specified. Each 512K extent (which is equivalent to 64 Oracle blocks of 8K) is
represented by a bit in the bitmap for this file.
CREATE TABLESPACE users
DATAFILE 'C:\LOCAL\ORADATA\USER_DATA.DBF' SIZE 50M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K;
How to Create a Dictionary Managed Tablespace?
The following is an example of creating a DICTIONARY managed tablespace in Oracle9i:
CREATE TABLESPACE users
DATAFILE 'C:\LOCAL\ORADATA\USER_DATA.DBF' SIZE 50M
EXTENT MANAGEMENT DICTIONARY

DEFAULT STORAGE (
INITIAL 64K
NEXT 64K
MINEXTENTS 2
MAXEXTENTS 121
PCTINCREASE 0);
What are the Segment Space Management Options?
There are two choices for segment space management: manual (the default) and auto.
Manual: This is the default option. It uses free lists for managing free space within segments.
What are free lists? Free lists are lists of data blocks that have space available for inserting new rows.
Auto: This option uses bitmaps for managing free space within segments. This is typically
called automatic segment space management.
Example:
CREATE TABLESPACE users
DATAFILE 'C:\LOCAL\ORADATA\USER_DATA.DBF' SIZE 10M REUSE
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K
SEGMENT SPACE MANAGEMENT AUTO
PERMANENT
ONLINE;
How to Convert between LMT and DMT Tablespace?
The DBMS_SPACE_ADMIN package allows DBAs to quickly and easily convert between LMT and DMT
mode. Look at these examples:
SQL> exec dbms_space_admin.Tablespace_Migrate_TO_Local('ts1');
PL/SQL procedure successfully completed.
SQL> exec dbms_space_admin.Tablespace_Migrate_FROM_Local('ts2');
PL/SQL procedure successfully completed.
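To verify the result of such a migration, the tablespace's extent management mode can be checked in the data dictionary. A minimal sketch, reusing the tablespace names from the examples above:

```sql
-- Sketch: confirm whether a tablespace is LOCAL or DICTIONARY managed
SELECT tablespace_name, extent_management, allocation_type
FROM   dba_tablespaces
WHERE  tablespace_name IN ('TS1', 'TS2');
```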
Oracle Tablespace: create, add a file, remove a file, drop, rename
Tablespaces & Datafiles: Overview

Data for Oracle tables, indexes, and other objects is stored in datafiles, but an object is never
associated with a file directly when it is defined. Oracle objects are always "located" in tablespaces.
Tablespaces are logical concepts, and each tablespace is related to one or more physical files.
So, when an object is created in a tablespace, its data is automatically stored in the file(s)
associated with that tablespace.
Tablespace creation
=> Data tablespace (created for data objects like tables, materialized view, indexes)
CREATE TABLESPACE DATA_1_TBS
DATAFILE 'C:\oradata\data_1.dbf'
SIZE 20M AUTOEXTEND ON;
This tablespace named DATA_1_TBS has allocated 20M of space on the file data_1.dbf, and the size of the
file will increase if the tablespace needs more space on the disk.
=> Temporary tablespace (keep temporary data for sort, join operations)
CREATE TEMPORARY TABLESPACE temp_1
TEMPFILE 'c:\temp01.dbf' SIZE 5M AUTOEXTEND ON;
This tablespace named "temp_1" has allocated 5M of space on the file temp01.dbf, and the size of the
file will increase if the tablespace needs more space on the disk. In general, however, temporary
tablespaces are not left to autoextend; instead they are sized with enough room for the database's
needs.
=> UNDO tablespace (keeps the old values for transactions which are not committed)
CREATE UNDO TABLESPACE undo1
DATAFILE 'c:\oradata\undo1.dbf' SIZE 10M AUTOEXTEND ON
RETENTION GUARANTEE;
If you use the "RETENTION GUARANTEE" clause Oracle guarantees that whatever retention period
you have set will be honored.
NOTES:
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K; --> creates a locally managed tablespace in which every extent is 128K
EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO; --> creates a tablespace with automatic segment-space management (ASSM).
Add a file to the tablespace
ALTER TABLESPACE DATA_1_TBS
ADD DATAFILE 'c:\oradata\data_file2.dbf' SIZE 30M AUTOEXTEND OFF;
To get more information on the files which are associated with a tablespace the following query
could be used:
SELECT TABLESPACE_NAME, FILE_NAME, FILE_ID, AUTOEXTENSIBLE, ONLINE_STATUS
FROM DBA_DATA_FILES ORDER BY 1;
Remove a file from a tablespace (Resizing a tablespace)
Removing a file from a tablespace cannot be done directly. First, the objects must be moved in
another tablespace, the initial tablespace will be dropped and recreated. After that the objects could
be moved again in the tablespace which was resized. If the reason you wanted to drop the file is
because you mistakenly created the file of the wrong size, then consider using the RESIZE command.
Add more space to a tablespace without adding a new file
ALTER DATABASE DATAFILE 'C:\oradata\data_1.dbf' RESIZE 25M;
Dropping a tablespace
DROP TABLESPACE DATA_1_TBS; (if the tablespace is empty)
DROP TABLESPACE DATA_1_TBS INCLUDING CONTENTS; (if the objects in the tablespace are no
longer needed)
However, the files must still be deleted at the OS level.
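From Oracle 10g onwards, the datafiles can also be removed as part of the DROP statement, avoiding the manual OS-level cleanup. A minimal sketch, reusing the tablespace name from the example above:

```sql
-- Sketch (10g+): drop the tablespace and delete its datafiles from the OS
DROP TABLESPACE DATA_1_TBS INCLUDING CONTENTS AND DATAFILES;
```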
Rename a tablespace

(in 10g)
ALTER TABLESPACE DATA_1_TBS RENAME TO DATA_10_TBS;
(in 9i)
1. Create a new tablespace NEW_TBLS.
2. Copy all objects from OLD_TBLS to NEW_TBLS.
3. Drop tablespace OLD_TBLS.
Moving the tablespace files
=> for Data files, Log files:
1) Shutdown the database.
2) Rename the physical file on the OS. ==> Win: SQL> HOST MOVE file1.dbf file2.dbf
3) Start the database in MOUNT mode.
ALTER DATABASE RENAME FILE 'C:\OHOME_9I\ORADATA\DB9\REDO01.LOG' TO 'C:\ORACLE\data\REDO01.LOG';
ALTER DATABASE RENAME FILE 'C:\OHOME_9I\ORADATA\DB9\REDO02.LOG' TO 'C:\ORACLE\data\REDO02.LOG';
ALTER DATABASE RENAME FILE 'C:\OHOME_9I\ORADATA\DB9\REDO03.LOG' TO 'C:\ORACLE\data\REDO03.LOG';
ALTER DATABASE OPEN;
=> for Control File (SPFILE is used)
1) Alter control_files initialisation parameter in SPFILE
ALTER SYSTEM SET control_files = 'C:\NEW_PATH\RENAME_CONTROL01.CTL',
'C:\ORACLE\PRODUCT\10.1.0\ORADATA\DB10G\CONTROL02.CTL',
'C:\ORACLE\PRODUCT\10.1.0\ORADATA\DB10G\CONTROL03.CTL'
SCOPE = SPFILE;
2) Shutdown the database.
3) Rename the physical control file on the OS. ==> Win: SQL> HOST MOVE file1.ctl file2.ctl
4) Start the database.
=> for TEMP files
1) CREATE TEMPORARY TABLESPACE TEMP1 TEMPFILE '...\temp_temp1.dbf' SIZE 2M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
2) ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP1;
3) DROP TABLESPACE TEMP INCLUDING CONTENTS; -- TEMP = 1st temporary tablespace
4) CREATE TEMPORARY TABLESPACE TEMP TEMPFILE '...\temp01.dbf' SIZE 2G
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 100M;
5) ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP;
6) DROP TABLESPACE TEMP1 INCLUDING CONTENTS;
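After step 5, the current default temporary tablespace can be verified before TEMP1 is dropped. A minimal sketch using the standard DATABASE_PROPERTIES view:

```sql
-- Sketch: confirm which temporary tablespace is the database default
SELECT property_value AS default_temp_ts
FROM   database_properties
WHERE  property_name = 'DEFAULT_TEMP_TABLESPACE';
```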

SQL Loader in Oracle


SQL*Loader Overview

SQL*Loader is an Oracle-supplied utility that allows you to load data from a flat file
(the flat file must be formatted) into an Oracle database. SQL*Loader supports
various load formats, selective loading, and multi-table loads. The SQL*Loader utility
(which must be run at the OS level) uses a control file, which describes the way the data is
formatted and inserted into the Oracle database. During the load operation, discard,
log, and bad files are created. The log file is a record of SQL*Loader's
activities during a load session. Where a row is not inserted into a table (constraint
violations, not enough disk space, etc.), a record is written to the bad file. Sometimes
the control file contains criteria that a record must meet before it is loaded.
If the criteria are not met, the record is written not to the database but to the discard
file. The discard file is optional.

Invoking SQL*Loader

The SQL*Loader is invoked by running the following command:

sqlldr scott/s control=C:\sqlloader.ctl

In this case SQL*Loader connects to the database as scott and, using the
information provided in the sqlloader.ctl file, inserts the data into the database.

Supposing we have a text file which contains data for SCOTT.DEPT1 table. Here is the
content of the file C:\DEPT.txt :

SCOTT.DEPT1 table has the following description:

For this we create the following control file C:\sqlloader.ctl

load data
infile 'c:\DEPT.txt'
into table DEPT1
fields terminated by "," optionally enclosed by '"'
( DNAME, DEPTNO, LOC )

The following command will insert the data into the DEPT1 table:

sqlldr scott/s control=C:\sqlloader.ctl log=DEPTNO1.log discard=DEPTNO1.dis

The default behavior of SQL*Loader is to insert data into an empty table; if the table is
not empty, an error will occur. If we want to append data, the APPEND parameter must
be added in the control file. If we want to replace the old data with the new, the
REPLACE parameter must be added. Here is an example using the APPEND parameter:

load data
infile 'c:\DEPT.txt'
APPEND
into table DEPT1
fields terminated by "," optionally enclosed by '"'
( DNAME, DEPTNO, LOC )

Also, the WHEN clause could be added to filter the data which will be inserted in the
database:

load data
infile 'c:\DEPT.txt'
INSERT into table DEPT1
WHEN (12:13) = '10'
( DNAME POSITION(1:10),
DEPTNO POSITION(12:13),
LOC POSITION(15:23))

Data transformation

Data transformation is possible during the data load. The control file must be modified
to allow this. Here is an example of a control file which allows data transformation:

load data
infile 'c:\DEPT.txt'
into table DEPT1
fields terminated by "," optionally enclosed by '"'
( DNAME,
DEPTNO,
LOC constant "TORONTO")

Here are other examples where data is transformed at the column level:
-> using sequences:
(...) rec_no "SEQ_NAME.nextval", (...)
-> modifying the data from the file:
(...) hire_date POSITION(1:5) ":hire_date+1", (...)
-> using a constant:
(...) LOC constant "TORONTO" (...)
-> using an Oracle function:
(...) name POSITION(6:15) "upper(:name)" (...)

Load Fixed & Variable length data records

The example which uses FIELDS TERMINATED BY "," allows variable-length records,
because the columns are delimited by "," (any other character could be used). Sometimes
we don't have delimiters and the columns are fixed-length values. In this case the
control file must look like:

load data
infile 'c:\DEPT.txt'
into table DEPT1
( DNAME POSITION(1:10),
DEPTNO POSITION(12:13),
LOC POSITION(15:23))

and the data file must have the content like:

ACOUNTING  10 OTTAWA
MANAGEMENT 20 MONTREAL
RESEARCH   30 HALIFAX
SALES      40 QUEBEC

Loading into multiple tables at once

Here is an example where the insertions are done on 2 tables (in my example DEPT1
and DEPT2 have identical structures):

load data
infile 'c:\DEPT.txt'
into table DEPT1
( DNAME POSITION(1:10),
DEPTNO POSITION(12:13),
LOC POSITION(15:23))
into table DEPT2
( DNAME POSITION(1:10),
DEPTNO POSITION(12:13),
LOC POSITION(15:23))

Improving the SQL*Loader performance

Here are some techniques to improve the load speed:

Using the direct load path (DIRECT=TRUE)

Committing after a large number of insertions (ROWS=<specify a number>)

Automatic Diagnostic Repository (ADR) in Oracle 11g


A special repository, named the ADR (Automatic Diagnostic Repository), is automatically maintained
by Oracle 11g to hold diagnostic information about critical error events. This repository is maintained
on disk, outside the database, which enables database components to capture diagnostic data at the
first failure for critical errors.
In Oracle 11g, init.ora parameters like USER_DUMP_DEST and BACKGROUND_DUMP_DEST are
deprecated. They have been replaced by the single parameter DIAGNOSTIC_DEST, which identifies the
location of the ADR. The ADR is a file-based repository for diagnostic data such as trace files,
process dumps, data structure dumps, etc.
The default location of DIAGNOSTIC_DEST is $ORACLE_HOME/log; if ORACLE_BASE is set in the
environment, then DIAGNOSTIC_DEST is set to $ORACLE_BASE. The ADR can be managed via the
11g Enterprise Manager GUI (Database Control, not Grid Control) or via the ADR command-line
interpreter, adrci.
The new 11g initialization parameter DIAGNOSTIC_DEST decides the location of the ADR root.

The ADR directory structure is designed to use consistent diagnostic data formats across products
and instances, so that an integrated set of tools enables customers and Oracle Support to
correlate and analyze diagnostic data across multiple instances.
In 11g the alert log is saved in two locations: in the alert directory (in XML format) and as an
old-style text alert log in the trace directory. Within the ADR base, there can be many ADR homes,
where each ADR home is the root directory for all diagnostic data for a particular instance. The
location of an ADR home for a database is shown in the pictures above. Both files can be viewed
with EM and the ADRCI utility.
SQL> show parameter diag
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
diagnostic_dest                      string      D:\ORACLE
The table below shows the new locations of the diagnostic trace files:

Data                       Old location             ADR location
-------------------------- ------------------------ ----------------
Core Dump                  CORE_DUMP_DEST           $ADR_HOME/cdump
Alert log data             BACKGROUND_DUMP_DEST     $ADR_HOME/trace
Background process trace   BACKGROUND_DUMP_DEST     $ADR_HOME/trace
User process trace         USER_DUMP_DEST           $ADR_HOME/trace
We can use the V$DIAG_INFO view to list some important ADR locations such as ADR Base, ADR Home,
Diagnostic Trace, Diagnostic Alert, Default Trace file, etc.
SQL> select * from v$diag_info;
   INST_ID NAME                     VALUE
---------- ------------------------ -----------------------------------------
         1 Diag Enabled             TRUE
         1 ADR Base                 d:\oracle
         1 ADR Home                 d:\oracle\diag\rdbms\noida\noida
         1 Diag Trace               d:\oracle\diag\rdbms\noida\noida\trace
         1 Diag Alert               d:\oracle\diag\rdbms\noida\noida\alert
         1 Diag Incident            d:\oracle\diag\rdbms\noida\noida\incident
         1 Diag Cdump               d:\oracle\diag\rdbms\noida\noida\cdump
         1 Health Monitor           d:\oracle\diag\rdbms\noida\noida\hm
         1 Active Problem Count     0
         1 Active Incident Count    0

10 rows selected.
ADRCI (Automatic Diagnostic Repository Command Interpreter):
The ADR Command Interpreter (ADRCI) is a command-line tool for managing Oracle Database
diagnostic data; it is part of the fault diagnosability infrastructure introduced in Oracle Database
11g. ADRCI enables:

Viewing diagnostic data within the Automatic Diagnostic Repository (ADR).

Viewing Health Monitor reports.

Packaging of incident and problem information into a zip file for transmission to Oracle
Support.
Diagnostic data includes incident and problem descriptions, trace files, dumps, health monitor reports,
alert log entries, and more.
ADRCI has a rich command set, and can be used in interactive mode or within scripts. In addition,
ADRCI can execute scripts of ADRCI commands in the same way that SQL*Plus executes scripts of
SQL and PL/SQL commands.
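For example, ADRCI commands can also be passed non-interactively from the OS prompt with the exec option. A minimal sketch (the home path and the commands shown are illustrative, matching the homes listed later in this section):

```text
adrci exec="set homepath diag\rdbms\noida\noida; show alert -tail 20"
```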
To use ADRCI in interactive mode :
Enter the following command at the operating system command prompt:
C:\>adrci
ADRCI: Release 11.1.0.6.0 - Beta on Wed May 18 12:31:40 2011
Copyright (c) 1982, 2007, Oracle. All rights reserved.
ADR base = "d:\oracle"
To get the list of ADRCI commands, type the help command as below:
adrci> help
HELP [topic]

Available Topics:
CREATE REPORT
ECHO
EXIT
HELP
HOST
IPS
PURGE
RUN
SET BASE
SET BROWSER
SET CONTROL
SET ECHO
SET EDITOR
SET HOMES | HOME | HOMEPATH
SET TERMOUT
SHOW ALERT
SHOW BASE
SHOW CONTROL
SHOW HM_RUN
SHOW HOMES | HOME | HOMEPATH
SHOW INCDIR
SHOW INCIDENT
SHOW PROBLEM
SHOW REPORT
SHOW TRACEFILE
SPOOL
There are other commands intended to be used directly by Oracle; type "HELP EXTENDED" to see the
list.
Viewing the Alert Log :
The alert log is written as both an XML-formatted file and as a text file. We can view either format of
the file with any text editor, or we can run an ADRCI command to view the XML-formatted alert log
with the XML tags stripped. By default, ADRCI displays the alert log in your default editor.
The following are variations on the SHOW ALERT command:
adrci > SHOW ALERT -TAIL
This displays the last portion of the alert log (the last 10 entries) in your terminal session.
adrci> SHOW ALERT -TAIL 50
This displays the last 50 entries in the alert log in your terminal session.
adrci> SHOW ALERT -TAIL -F
This displays the last 10 entries in the alert log, and then waits for more messages to arrive in the
alert log. As each message arrives, it is appended to the display. This command enables you to
perform "live monitoring" of the alert log. Press CTRL-C to stop waiting and return to the ADRCI
prompt. Here are a few examples:
adrci> show alert
Choose the alert log from the following homes to view:
1: diag\clients\user_neerajs\host_444208803_11
2: diag\clients\user_system\host_444208803_11
3: diag\clients\user_unknown\host_411310321_11
4: diag\rdbms\delhi\delhi
5: diag\rdbms\noida\noida

6: diag\tnslsnr\ramtech-199\listener
Q: to quit
Please select option: 4
Output the results to file: c:\docume~1\neeraj~1.ram\locals~1\temp\alert_932_4048_delhi_1.ado
'vi' is not recognized as an internal or external command,
operable program or batch file.
Please select option: q
Since we are on the Windows platform, we don't have the vi editor, so we set the editor to, say,
notepad:
adrci> set editor notepad
adrci> SHOW ALERT
Choose the alert log from the following homes to view:
1: diag\clients\user_neerajs\host_444208803_11
2: diag\clients\user_system\host_444208803_11
3: diag\clients\user_unknown\host_411310321_11
4: diag\rdbms\delhi\delhi
5: diag\rdbms\noida\noida
6: diag\tnslsnr\ramtech-199\listener
Q: to quit
Please select option: 4
Output the results to file: c:\docume~1\neeraj~1.ram\locals~1\temp\alert_916_956_noida_7.ado
This opens the alert log file, which we can then check as needed.
If we want to filter the alert log file, we can do so as below:
adrci> show alert -P "message_text LIKE '%ORA-600%'"
This displays only alert log messages that contain the string 'ORA-600'.
Choose the alert log from the following homes to view:
1: diag\clients\user_neerajs\host_444208803_11
2: diag\clients\user_system\host_444208803_11
3: diag\clients\user_unknown\host_411310321_11
4: diag\rdbms\delhi\delhi
5: diag\rdbms\noida\noida
6: diag\tnslsnr\ramtech-199\listener
Q: to quit
Please select option: 5
Here there is no ORA-600 error in the alert log file, so the output is blank.
Finding Trace Files :
ADRCI enables us to view the names of trace files that are currently in the automatic diagnostic
repository (ADR). We can view the names of all trace files in the ADR, or we can apply filters to view a
subset of names. For example, ADRCI has commands that enable us to:

Obtain a list of trace files whose file name matches a search string.

Obtain a list of trace files in a particular directory.

Obtain a list of trace files that pertain to a particular incident.


The following statement lists the name of every trace file that has the string 'mmon' in its file name.
The percent sign (%) is used as a wildcard character, and the search string is case sensitive.
adrci> SHOW TRACEFILE %mmon%
This statement lists the trace files in reverse order of their timestamps (most recently modified
first):
adrci> SHOW TRACEFILE -RT
This statement lists the names of all trace files related to incident number 1681:
adrci> SHOW TRACEFILE -I 1681
Viewing Incidents :
The ADRCI SHOW INCIDENT command displays information about open incidents. For each incident,
the incident ID, problem key, and incident creation time are shown. If the ADRCI homepath is set so
that there are multiple current ADR homes, the report includes incidents from all of them.
adrci> SHOW INCIDENT
ADR Home = d:\oracle\diag\rdbms\noida\noida:
*******************************************************************
0 rows fetched
Purging Alert Log Content :
The ADRCI command purge can be used to purge entries from the alert log. Note that this purge
only applies to the XML-based alert log, not to the text-file-based alert log, which still has to be
maintained using OS commands. The purge command takes its input in minutes and specifies the
number of minutes for which records should be retained.
So to purge all alert log entries older than 7 days the following command will be used:
adrci > purge -age 10080 -type ALERT
ADR Retention can be controlled with ADRCI :
There is retention policy for ADR that allow to specify how long to keep the data ADR incidents are
controlled by two different policies:
The incident metadata retention policy ( default is 1 year )
The incident files and dumps retention policy ( Default is one month)
We can change retention policy using adrci MMON purge data automatically on expired ADR data.
adrci> show control
The above command will show the SHORTP_POLICY and LONGP_POLICY, and these policies can then be
changed as below:
adrci> set control (SHORTP_POLICY = 360 )
adrci> set control (LONGP_POLICY = 4380 )
What Is Spfile ?
There are hundreds of instance parameters that define the way an instance operates. As an
administrator you have to set each of these parameters correctly. All these parameters are stored in a
file called a parameter file. These parameter files are also called initialization files, as they are needed
for an instance to start up. (More information on how a database is opened)
There are two kinds of parameter files: the parameter file (pfile) and the server parameter file (spfile).
Differences between an spfile and pfile
1. Spfiles are binary files while pfiles are plain text files.
2. If you are using an spfile, instance parameters can be changed permanently using SQL*Plus
commands. If you are using a pfile, you have to edit the pfile with an editor to change values permanently.
3. Spfile names should be either spfile<SID>.ora or spfile.ora. Pfile names must be init<SID>.ora.
(More information on spfile naming)

How to find out if you are using pfile or spfile
If you are using spfile, the "spfile" parameter will show the path of spfile. Otherwise the value of
"spfile" parameter will be null.
SQL> show parameter spfile ;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string
In the example above, the value is null, which means I am using a pfile.
Contents Of Parameter Files
Parameter files contain name-value pairs for instance parameters.
<parameter_name> = <parameter_value>
$ cd $ORACLE_HOME/dbs/
$ less inittestdb.ora

testdb.__db_cache_size=1795162112
testdb.__java_pool_size=16777216
testdb.__large_pool_size=16777216
testdb.__oracle_base='/u01'#ORACLE_BASE set from environment
testdb.__pga_aggregate_target=1677721600
testdb.__sga_target=2516582400
testdb.__shared_io_pool_size=0
testdb.__shared_pool_size=637534208
testdb.__streams_pool_size=16777216
*.audit_file_dest='/u01/admin/testdb/adump'
*.audit_trail='db'
*.compatible='11.2.0.0.0'
*.control_files='+ORADATA/testdb/controlfile/current.475.758824101','+FRA/testdb/controlfile/current.257.758824101'
*.db_block_size=8192
*.db_create_file_dest='+ORADATA'
*.db_name='testdb'
*.db_recovery_file_dest='+FRA'
*.db_recovery_file_dest_size=4227858432

**************** Output Truncated ***********************

Pfile and spfiles have the same content except that one is plain text while other is binary.
How To Relocate an Spfile
Oracle always searches for the parameter files under the "$ORACLE_HOME/dbs" folder. You also cannot
change the name of a parameter file; the naming format is mandatory. (More information
on the parameter file naming format)
We usually don't want to change the path of the parameter file under normal circumstances, but if you
are using RAC, you will have to place your spfile in shared storage, different from the usual
"$ORACLE_HOME/dbs" folder.
1. Create a pfile.
$ vi initMYDB.ora
2. Enter the line below into the pfile. By writing this line, you are setting the "spfile" parameter to the
new location of your spfile.
SPFILE='/new_location/spfileMYDB.ora'
3. Shutdown your database.
sql> shutdown immediate ;
4. Copy current spfile to the new location.
$ cp spfileMYDB.ora /new_location/spfileMYDB.ora
5. Startup your instance.
sql> startup ;
6. Verify that your database is using the spfile at the new location.
SQL> show parameter spfile ;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      /new_location/spfileMYDB.ora
Transforming Pfile Into Spfile or Vice Versa
There are SQL commands to transform a pfile into an spfile and vice versa.
SQL> create spfile='/home/oracle/myspfile.ora' from pfile;

File created.
The command above will transform your current pfile into spfile and will store the spfile at the location
you specify. (/home/oracle/myspfile.ora)
If you don't provide a location for spfile, the server parameter file will be created at its default
location. ( $ORACLE_HOME/dbs/spfile<SID>.ora )

SQL> create spfile from pfile ;

File created.

$ file $ORACLE_HOME/dbs/spfiletestdb.ora
/u01/app/oracle/11.2.0.2/dbs/spfiletestdb.ora: data

Now let's assume that you are using an spfile and create a pfile from it.
SQL> create pfile='/home/oracle/mypfile.ora' from spfile;

File created.

$ file /home/oracle/mypfile.ora
/home/oracle/mypfile.ora: ASCII text
The command above will transform your current spfile into pfile and will store the pfile at the location
you specify. (/home/oracle/mypfile.ora)
One thing to mention: in fact, there does not even have to be a database or an instance to
perform a transformation. SQL*Plus is the only required tool.
Loss of an Spfile
If you lose your spfile (for example, you accidentally deleted it) and your instance is up, you may
recreate the spfile. The values of the parameters are read by the instance at startup, so your
instance knows all the values and can create a new spfile from them.

SQL> create spfile='/home/oracle/myspfile.ora' from memory ;

File created.

$ file /home/oracle/myspfile.ora
/home/oracle/myspfile.ora: data

If you lose your spfile and your database is down, you'll have to restore your spfile from a backup.
That subject is related to RMAN (Recovery Manager) and is not covered in this article.
Instance Parameters
There are hundreds of instance parameters that determine the way an instance operates. As an
administrator, you have to set each of these parameters correctly.
All these parameters are stored in a file called a parameter file. (More information on spfile)
How To View Parameters Of An Instance


You can query V$SYSTEM_PARAMETER view to see the parameters of an instance.


SQL> select name, value, description from v$system_parameter ;

NAME              VALUE  DESCRIPTION
----------------- ------ -------------------------------------------------------------------------
lock_name_space          lock name space used for generating lock names for standby/clone database
processes         550    user processes
sessions          848    user and system sessions
timed_statistics  TRUE   maintain internal timing statistics

name        => Name of the parameter.
value       => Value of the parameter.
description => Describes what the parameter is about.

Modifying Parameters
1. Session-wide Parameters
You can change the value of a parameter session-wide using the "ALTER SESSION" command. The scope
is limited to the session, not the instance: the change is valid only for the current session, and at your
next login you'll see that the parameter has been reset. The v$parameter view shows the session-wide parameters.
SQL> SELECT name, value, isses_modifiable
     FROM v$parameter
     WHERE name = 'nls_language' ;

NAME          VALUE    ISSES_MODIFIABLE
------------- -------- ----------------
nls_language  TURKISH  TRUE

The value of the "nls_language" parameter is "TURKISH". The "ISSES_MODIFIABLE" column shows whether
this parameter can be changed using the "ALTER SESSION" command. In this case it is "TRUE", which
means that I can change it.
SQL> alter session set nls_language='ENGLISH' ;

SQL> SELECT name, value, isses_modifiable
     FROM v$parameter
     WHERE name = 'nls_language' ;

NAME          VALUE    ISSES_MODIFIABLE
------------- -------- ----------------
nls_language  ENGLISH  TRUE

2. Instance-wide Parameters
You can change the value of a parameter instance-wide using the "ALTER SYSTEM" command. The scope is
instance-wide: every session is affected. When a user logs in, a session is created, and that session
inherits its parameter values from the instance-wide values. The v$system_parameter view
shows the instance-wide parameters.
SQL> SELECT name, value, issys_modifiable
     FROM v$system_parameter
     WHERE name = 'db_recovery_file_dest_size' ;

NAME                        VALUE       ISSYS_MODIFIABLE
--------------------------- ----------- ----------------
db_recovery_file_dest_size  4227858432  IMMEDIATE

ISSYS_MODIFIABLE => this column shows how a parameter change will affect the instance. If it is
"IMMEDIATE", the change takes effect immediately; such parameters are called dynamic parameters.
If the column is "FALSE", you will have to restart your instance for the change to take effect;
such parameters are called static parameters.
SQL> alter system set db_recovery_file_dest_size=4000000000 scope=both;

SQL> select name, value, issys_modifiable
     from v$system_parameter
     where name = 'db_recovery_file_dest_size' ;

NAME                        VALUE       ISSYS_MODIFIABLE
--------------------------- ----------- ----------------
db_recovery_file_dest_size  4000000000  IMMEDIATE

Scope Option
While changing an instance-wide parameter using the "ALTER SYSTEM" command, you can also set the scope
option to determine the scope of the change. The scope option can take the value "MEMORY", "SPFILE" or
"BOTH".
MEMORY => if you are modifying a dynamic parameter, the change takes effect immediately for the
current instance, but after a restart of the instance it reverts. If you use "MEMORY", the
change is temporary.
You cannot modify a static parameter with "MEMORY".

SQL> alter system set processes=300 scope=MEMORY;
alter system set processes=300 scope=MEMORY
*
ERROR at line 1:
ORA-02095: specified initialization parameter cannot be modified

SPFILE => if the instance was started using an spfile (more information on how a database starts)
and you set the scope option to "SPFILE", the change is recorded in the spfile only. The current
instance keeps operating with the old value; the change takes effect only after a restart.
You cannot use "SPFILE" if the instance was started using a pfile.

SQL> alter system set processes=300 scope=spfile;
alter system set processes=300 scope=spfile
*
ERROR at line 1:
ORA-32001: write to SPFILE requested but no SPFILE is in use

BOTH => if you use "BOTH" for the scope option of a dynamic parameter, the change takes effect
immediately and is also recorded in the spfile, so it is permanent.


Instance Parameters In RAC


In a RAC, every instance can have its own parameter values, but there is only one shared
spfile. The spfile entries have the <instance_name>.<parameter_name>=<parameter_value> format.
For example, in a 2-node RAC (MYDB1 and MYDB2) an spfile may contain entries such as:

MYDB1.processes=150
MYDB2.processes=230

If the value for a parameter is identical on all nodes, then instead of instance names a star (*) can be
used as the prefix.

*.processes=150
"SID" Option When Changing Parameters Using the "ALTER SYSTEM" Command
In a RAC configuration you can change parameters with the "ALTER SYSTEM" command, but you have to
specify which instances will be affected by the change.
You do this by setting the SID option. You can connect to any instance and change the parameters of
any other instance from there. For example:

SQL> alter system set processes=230 scope=spfile SID='MYDB2';

In the example above, I've set the "processes" parameter to 230 for instance MYDB2. To change a
parameter for instance MYDB2, I don't even have to connect to MYDB2; I can do it from any node.
But be careful: Oracle does not verify that an instance named "MYDB2" exists. Be sure you type the
instance name correctly.

SQL> alter system set processes=200 scope=spfile SID='*';

Here, all the instances in the RAC will have "200" for the "processes" parameter.
Default Parameter Values
You don't have to set every parameter explicitly in the spfile. The parameters you haven't set
get the default values determined by Oracle.

SQL> select name, value, isdefault from v$system_parameter ;

NAME                       VALUE  ISDEFAULT
-------------------------- ------ ---------
log_archive_max_processes  4      TRUE
open_cursors               300    FALSE

If a parameter is not set in the spfile, the "ISDEFAULT" column shows "TRUE"; otherwise it shows
"FALSE". I haven't set the "log_archive_max_processes" parameter, so it runs with its default value
of 4. However, I've set the "open_cursors" parameter in the spfile, so it is not at its default value.
$ strings spfileMYDB.ora | grep open_cursors
open_cursors=300



As seen above, the "open_cursors" parameter is set in spfile.
Resetting Default Values
You can always reset a parameter that you've set, using the "ALTER SYSTEM RESET" command. This
command removes the entry from the spfile.

SQL> alter system reset open_cursors;
After restarting the instance, the "open_cursors" parameter is reverted to its default value.
SQL> SELECT name, value, isdefault
     FROM v$system_parameter
     WHERE name = 'open_cursors' ;

NAME          VALUE  ISDEFAULT
------------- ------ ---------
open_cursors  50     TRUE

$ strings spfileMYDB.ora | grep open_cursors

The "open_cursors" parameter is no longer in the spfile.


Finding Modified Parameters
You can find which parameters have been modified with the "ALTER SYSTEM" command since the
instance started. There is a column named "ISMODIFIED" in the v$system_parameter view. If the
parameter has been modified, it shows "MODIFIED"; otherwise it shows "FALSE".
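Following the pattern of the earlier queries, a sketch of how you might list them (the row shown is illustrative; it assumes the db_recovery_file_dest_size change made earlier):

```
SQL> SELECT name, value, ismodified
     FROM v$system_parameter
     WHERE ismodified != 'FALSE' ;

NAME                        VALUE       ISMODIFIED
--------------------------- ----------- ----------
db_recovery_file_dest_size  4000000000  MODIFIED
```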

Deprecated Parameters
As new database versions are developed, some parameters used in earlier versions may become
deprecated. There is a column named "ISDEPRECATED" in the v$system_parameter view. This column
shows whether the parameter is deprecated in the current version.
SQL> SELECT name, value, isdeprecated
     FROM v$system_parameter
     WHERE name = 'background_dump_dest' ;

NAME                  VALUE                                ISDEPRECATED
--------------------- ------------------------------------ ------------
background_dump_dest  /u01/diag/rdbms/testdb/testdb/trace  TRUE

The "background_dump_dest" parameter is deprecated as of 11g. It used to hold the path
of the background dump (trace) files; in 11g the "diagnostic_dest" parameter is used instead. As seen
in the query, the "ISDEPRECATED" column shows "TRUE".
How Is a Database Opened?
An instance can be "started" or "shut down". A database can be "mounted", "opened", "closed" and
"dismounted". However, the instance and the database are so tightly coupled that you may also say
"starting a database" or "shutting down a database".
Your database will be opened automatically after a server reboot if you've configured "Oracle Restart"
or if your database is a RAC.
You may also manually shut your database down and then start it up any time you want. Let's take a
database which is down, start it up using SQL*Plus, and see how Oracle opens a database.
1. As the oracle user, set the ORACLE_HOME environment variable to the home directory of your Oracle
installation and set the ORACLE_SID environment variable to anything you want. Your instance name will
be the value you set as $ORACLE_SID.

$ export ORACLE_HOME=/u01/app/oracle/11.2.0.2/
$ export ORACLE_SID=myins
2. On the server, start SQL*Plus as the oracle user.

$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.2.0 Production on Fri Oct 21 16:05:16 2011
Copyright (c) 1982, 2010, Oracle. All rights reserved.

Connected to an idle instance.

At this point you could simply execute the "startup" command and the database would complete all the
stages and open. However, you may also pass through each stage manually.

3. NOMOUNT State

SQL> startup nomount;
ORACLE instance started.

Total System Global Area 4175568896 bytes
Fixed Size                  2233088 bytes
Variable Size            3137342720 bytes
Database Buffers         1023410176 bytes
Redo Buffers               12582912 bytes

At this stage the instance is started but there is no database yet.
To start an instance, Oracle needs a parameter file. It will search for a parameter file under the
"$ORACLE_HOME/dbs/" directory in the order below and will use the first file it finds:
- spfile<SID>.ora (spfile)
- spfile.ora (spfile)
- init<SID>.ora (pfile)
You will also find a parameter file named "init.ora" under this directory. This is a template parameter
file that comes with an installation. You may use this template file as a starting point for building your
own parameter file. However, Oracle will ignore it until you name it init<SID>.ora .
Although a typical parameter file consists of many parameters, "db_name" is the only
required parameter to start an instance. Your database name will be the value you set for this
parameter.
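For illustration, a minimal pfile could therefore hold a single line (a real parameter file would normally also set memory, control file and destination parameters; the file name follows the init<SID>.ora convention for an instance named myins):

```
# initmyins.ora - minimal sketch; db_name is the only mandatory parameter
db_name=myins
```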
When an instance starts, SGA (System Global Area) is created in RAM and background processes are
started.

At nomount stage, as your instance is started, you should be able to see the background processes
associated with the instance.
$ ps -ef | grep myins
oracle   20227     1  0 18:56 ?     00:00:00 ora_pmon_myins
oracle   20229     1  0 18:56 ?     00:00:00 ora_psp0_myins
oracle   20232     1  0 18:56 ?     00:00:00 ora_vktm_myins
oracle   20236     1  0 18:56 ?     00:00:00 ora_gen0_myins
oracle   20238     1  0 18:56 ?     00:00:00 ora_diag_myins
oracle   20240     1  0 18:56 ?     00:00:00 ora_dbrm_myins
oracle   20242     1  0 18:56 ?     00:00:00 ora_dia0_myins
oracle   20244     1 27 18:56 ?     00:00:02 ora_mman_myins
oracle   20246     1  0 18:56 ?     00:00:00 ora_dbw0_myins
oracle   20248     1  0 18:56 ?     00:00:00 ora_lgwr_myins
oracle   20250     1  0 18:56 ?     00:00:00 ora_ckpt_myins
oracle   20252     1  0 18:56 ?     00:00:00 ora_smon_myins
oracle   20254     1  0 18:56 ?     00:00:00 ora_reco_myins
oracle   20256     1  0 18:56 ?     00:00:00 ora_rbal_myins
oracle   20258     1  0 18:56 ?     00:00:00 ora_asmb_myins
oracle   20260     1  0 18:56 ?     00:00:00 ora_mmon_myins
oracle   20262     1  0 18:56 ?     00:00:00 oracle+ASM_asmb_myins (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
oracle   20264     1  0 18:56 ?     00:00:00 ora_mmnl_myins
oracle   20266     1  0 18:56 ?     00:00:00 ora_d000_myins
oracle   20268     1  0 18:56 ?     00:00:00 ora_s000_myins
oracle   20270     1  0 18:56 ?     00:00:00 ora_mark_myins
oracle   20277     1  0 18:56 ?     00:00:00 ora_ocf0_myins
oracle   20282     1  0 18:56 ?     00:00:00 oracle+ASM_ocf0_myins (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
oracle   20397     1  0 18:57 ?     00:00:00 oraclemyins (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=
oracle   20399 16888  0 18:57 pts/1 00:00:00 grep myins
You can also query the spfile, see parameter values or change them at nomount stage.

SQL> show parameter

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
O7_DICTIONARY_ACCESSIBILITY          boolean     FALSE
active_instance_count                integer
aq_tm_processes                      integer     0
archive_lag_target                   integer     0
asm_diskgroups                       string
asm_diskstring                       string
asm_power_limit                      integer     1
audit_file_dest                      string      /u01/admin/testdb/adump
audit_sys_operations                 boolean     FALSE
audit_syslog_level                   string

********** Output Truncated ****************


SQL> show parameter instance_name;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
instance_name                        string      myins

The "instance_name" parameter shows the name of the instance. However, the value of this parameter is
not stored in the parameter file; it is populated from the value of the ORACLE_SID environment variable
at the time the startup command is executed.
4. MOUNT State

SQL> alter database mount;

Database altered.

To proceed to the mount state, Oracle finds the control file and verifies its syntax. However, the
information found in the control file is not validated. For example, the locations of data files and redo
log files are stored in the control file, but Oracle will not check whether these files actually exist at
mount stage. Still, there must be valid records showing the locations of those files in the control file;
otherwise you cannot proceed to the mount state.
The paths of the control files are stored in the parameter file.
SQL> show parameter control_files;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
control_files                        string      +ORADATA/testdb/controlfile/cu
                                                 rrent.475.758824101, +FRA/test
                                                 db/controlfile/current.257.758
                                                 824101

Here I've got two control files (the paths are separated by a comma) that reside in ASM.
You can find out which stage you are at by querying the v$database view. This view is not available at
nomount stage because at that point there is no database yet. In mount state, however, the database
is associated with the instance.
SQL> select name, open_mode from v$database;

NAME       OPEN_MODE
---------- --------------------
TESTDB     MOUNTED

Here, the name of the database is "TESTDB". This is the value of the "db_name" parameter you've set in
the parameter file. And the database is in mount state.
SQL> show parameter db_name;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_name                              string      testdb

As seen above, the name of the database is determined by the db_name parameter.



The v$log, v$logfile and v$datafile views are also available at mount stage to show you the paths of
online redolog files and data files.
At mount stage you can:
- rename data files,
- enable/disable archive mode of the database (more information on archive mode),
- perform media recovery.
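For example, switching the database into archive mode (covered in the Archivelog section below) is one of these mount-stage operations; a minimal sketch:

```
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
```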
5. OPEN Mode

SQL> alter database open;

Database altered.

While opening the database, Oracle checks the availability of the data files and online redo log files. If a
data file or an online redolog group is missing, the database won't open. This is the stage where the
presence of these files is checked.
If the database was shut down in a consistent way (more information on shutdown types), it will be
opened immediately. If the shutdown was inconsistent, the SMON process will perform instance
recovery and then open the database.
SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ WRITE

Here my database is open in read-write mode (the default). Users can connect to it.
6. Shutting down Database
While opening the database Oracle has passed through each stage respectively:
- start the instance (nomount state),
- mount the database (mount state),
- open the database.
While shutting down, Oracle passes through the stages in reverse order:
- close the database (mount state),
- dismount the database (nomount state),
- shut down the instance.
The only way to shut down a database using SQL*Plus is to enter the "shutdown" command with the
shutdown option appropriate for your needs. (More information on shutdown types)
SQL> shutdown immediate;

Database closed.
Database dismounted.
ORACLE instance shut down.
Most of the time your database will be fully up (open) or completely down (no database or
instance). You will be at the nomount and mount stages during maintenance and installation, or when
there is a problem with your database.
Checkpoint



What is a checkpoint?
A checkpoint is an internal mechanism of Oracle. When a checkpoint occurs, the latest SCN (system
change number) is written to the control file and to all datafile headers. This operation is performed by
the checkpoint process; the process is named ora_ckpt_<SID> on Linux.
Also, during a checkpoint, the CKPT process triggers the database writer process (DBWn) to write dirty
blocks to disk.

How can I view when the last checkpoint happened?


SQL> select checkpoint_change#, current_scn from v$database ;

CHECKPOINT_CHANGE# CURRENT_SCN
------------------ -----------
           2008597     2023173
checkpoint_change# is the SCN written to the control file during the last checkpoint. current_scn is the
SCN of the database at this moment. As there is always something changing in a database, the current
SCN keeps increasing and is always ahead of checkpoint_change#.
SQL> select checkpoint_change#, current_scn from v$database ;

CHECKPOINT_CHANGE# CURRENT_SCN
------------------ -----------
           2008597     2023600
I've re-executed the command. The checkpoint_change# remains the same because no checkpoint
occurred between the two executions. However, the current SCN increased because something changed
in the database during that period.
There is a function called "scn_to_timestamp" which tells you the time at which an SCN was current. The
function is not 100% accurate; there can be a gap of up to about 3 seconds from the actual time.
SQL> SELECT checkpoint_change# checkpoint_scn,
            scn_to_timestamp (checkpoint_change#) checkpoint_time,
            current_scn,
            scn_to_timestamp (current_scn) current_time
     FROM v$database ;

CHECKPOINT_SCN CHECKPOINT_TIME      CURRENT_SCN CURRENT_TIME
-------------- ------------------- ----------- -------------------
       2008597 18.08.2011 15:00:57     2025607 18.08.2011 18:12:24

The query above includes the timestamps and is therefore more readable than bare change numbers.
Notice that 3 hours and 12 minutes have passed since the checkpoint, and there were 17010
(2025607 - 2008597) changes during that period.
What triggers a checkpoint operation?
a) If an active redolog is about to be overwritten, a checkpoint occurs implicitly to write the dirty blocks
associated with the redo records in that active redolog. (More information on redolog states)
SQL> SELECT checkpoint_change# checkpoint_scn,
            scn_to_timestamp (checkpoint_change#) checkpoint_time
     FROM v$database ;

CHECKPOINT_SCN CHECKPOINT_TIME
-------------- -------------------
       2026455 08.18.2011 18:25:36

The last checkpoint was at 18:25:36.
SQL> select group#, status from v$log ;

GROUP# STATUS
------ --------
     1 INACTIVE
     2 ACTIVE
     3 CURRENT

At the moment group 2 is the active and group 3 is the current redolog group.
SQL> alter system switch logfile;

SQL> select group#, status from v$log ;

GROUP# STATUS
------ --------
     1 CURRENT
     2 ACTIVE
     3 ACTIVE

Group 1 is now the current group. At the next log switch group 2 would have to become current, but it
is still active, so an implicit checkpoint will occur to make it available for overwriting.
SQL> alter system switch logfile ;

SQL> SELECT checkpoint_change# checkpoint_scn,
            scn_to_timestamp (checkpoint_change#) checkpoint_time
     FROM v$database ;

CHECKPOINT_SCN CHECKPOINT_TIME
-------------- -------------------
       2027297 08.18.2011 18:38:48

As seen here, the log switch caused a checkpoint.
b) You can manually trigger a checkpoint.

SQL> SELECT checkpoint_change# checkpoint_scn,
            scn_to_timestamp (checkpoint_change#) checkpoint_time
     FROM v$database ;

CHECKPOINT_SCN CHECKPOINT_TIME
-------------- -------------------
       2027550 08.18.2011 18:43:49

SQL> alter system checkpoint;

The command above explicitly caused a checkpoint.
SQL> SELECT checkpoint_change# checkpoint_scn,
            scn_to_timestamp (checkpoint_change#) checkpoint_time
     FROM v$database ;

CHECKPOINT_SCN CHECKPOINT_TIME
-------------- -------------------
       2027918 08.18.2011 18:49:25
c) Consistent shutdown operations cause a checkpoint. (More information on shutdown types)

SQL> SELECT checkpoint_change# checkpoint_scn,
            scn_to_timestamp (checkpoint_change#) checkpoint_time
     FROM v$database ;

CHECKPOINT_SCN CHECKPOINT_TIME
-------------- -------------------
       2027918 08.18.2011 18:49:25

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.

Total System Global Area 4175568896 bytes
Fixed Size                  2233088 bytes
Variable Size            2365590784 bytes
Database Buffers         1795162112 bytes
Redo Buffers               12582912 bytes
Database mounted.
Database opened.

SQL> SELECT checkpoint_change# checkpoint_scn,
            scn_to_timestamp (checkpoint_change#) checkpoint_time
     FROM v$database ;

CHECKPOINT_SCN CHECKPOINT_TIME
-------------- -------------------
       2028465 08.18.2011 18:55:12
d) If you've enabled MTTR (Mean Time To Recover) optimization by setting the
FAST_START_MTTR_TARGET parameter, then Oracle automatically performs regular checkpoints to
keep the MTTR at the level you defined. A checkpoint decreases MTTR because dirty blocks are written
to disk, leaving fewer blocks to recover. Oracle adjusts the checkpoint frequency according
to the FAST_START_MTTR_TARGET value you've set.
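A sketch of how this could look (the 60-second target is just an example value); the v$instance_recovery view shows Oracle's current estimate against the target:

```
SQL> alter system set fast_start_mttr_target=60 scope=both;

SQL> select target_mttr, estimated_mttr from v$instance_recovery ;
```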
e) If you take a datafile offline or make it read only, the dirty blocks belonging to that datafile are
written to disk. This is a partial checkpoint: not all dirty blocks in the database are written.
SQL> SELECT checkpoint_change# checkpoint_scn,
            scn_to_timestamp (checkpoint_change#) checkpoint_time
     FROM v$database ;

CHECKPOINT_SCN CHECKPOINT_TIME
-------------- -------------------
       2118315 08.19.2011 10:07:37

The database-wide checkpoint happened at 10:07:37, and the SCN at that time (2118315) was written
to the control file and all datafile headers. v$database shows the SCN recorded in the control file.
SQL> SELECT name, checkpoint_time, checkpoint_change#
     FROM v$datafile_header
     WHERE tablespace_name = 'USERS' ;

NAME                                          CHECKPOINT_TIME     CHECKPOINT_CHANGE#
--------------------------------------------- ------------------- ------------------
+ORADATA/testdb/datafile/users.435.758824021  19.08.2011 10:07:37            2118315

v$datafile_header shows the SCN recorded in the datafile header. It is the same as the SCN recorded
in the control file. That is what we expect.
SQL> alter tablespace users read only ;

SQL> SELECT name, checkpoint_time, checkpoint_change#
     FROM v$datafile_header
     WHERE tablespace_name = 'USERS' ;

NAME                                          CHECKPOINT_TIME     CHECKPOINT_CHANGE#
--------------------------------------------- ------------------- ------------------
+ORADATA/testdb/datafile/users.435.758824021  19.08.2011 10:49:21            2120846

I made the USERS tablespace read only. A partial checkpoint occurred at 10:49:21, and the SCN
recorded in the datafile header changed.
SQL> SELECT checkpoint_change# checkpoint_scn,
            scn_to_timestamp (checkpoint_change#) checkpoint_time
     FROM v$database ;

CHECKPOINT_SCN CHECKPOINT_TIME
-------------- -------------------
       2118315 08.19.2011 10:07:37

Since no database-wide checkpoint occurred, the SCN recorded in the control file is still 2118315.
Notice that the SCN recorded in the datafile header is now ahead of the one in the control file.

Redolog
What is a redolog?
Literally, re-do means to do it again.
There is always something changing (update, delete, insert) in your database.
Those changes are recorded by your database.
Each record regarding a change in your database is called a redo record or redolog.
These redologs are stored in files called redolog files.
This is an internal mechanism of Oracle; you cannot disable it. Every database must have redolog
files.
Why does Oracle need redologs?


What if your database crashes and you lose data?


With the help of redolog files, Oracle knows what happened in the past and can re-apply the
changes it finds in the redo records.
You can think of redologs as the history of the database.
Every change that happened in the database is recorded there.
If one day you lose data, you can examine your redo records and recover it.
What is a redolog group?
A database can't have just a single redolog file; Oracle requires at least two and recommends at
least three.
Oracle writes to only one redolog file at a time.
When this file is filled, it proceeds to the next file; when all files are filled up, it returns to the
first file.
The files are used in a circular fashion and are overwritten.
In this architecture each file is called a redolog group.

What is a redolog member?

In each group there has to be at least one file.
Oracle recommends that each group contain at least two files.
Each file in a group is called a member.



Every member in a group is identical. They are copies; they contain the same redo records.
Members are used for fault tolerance. If any member is lost or damaged, Oracle can continue to
function using the other members.
What should a typical configuration be?
In a typical configuration there should be at least 3 redolog groups.
Each redolog group should have at least 2 members.
Redologs should be on your fastest disks, as there is constant I/O on these files.
Redologs should not be on RAID-5 disks; RAID-5 has poor write performance.
If possible, place members on different storage controllers or on a completely different storage unit.
This gives you high availability.
At least one member of each group must be available for Oracle to function.
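Putting those recommendations together, a sketch of adding one such group with two members on different disks (the paths and the size are illustrative):

```
SQL> alter database add logfile group 4
     ('/disk1/oradata/testdb/redo04a.log',
      '/disk2/oradata/testdb/redo04b.log') size 100M;
```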

Redolog States

CURRENT:
If a redolog group is in the "current" state, redo records are currently being written to that group.
The group stays "current" until a log switch occurs. The act of moving from one redolog group to
another is called a log switch. There can be only one current redolog group at a time. Example:

SQL> select group#, status from v$log ;

GROUP# STATUS
------ --------
     1 CURRENT
     2 INACTIVE
     3 INACTIVE

Here I've got 3 redolog groups. Group 1 is the current redolog group; the other two are inactive.
ACTIVE:



Redolog files keep the changes made to data blocks. Data blocks are modified in a special area of
the SGA (system global area) called the buffer cache. When a data block is modified in the buffer cache,
that block is called a "dirty buffer". Dirty buffers have to be written to disk: while they sit in RAM they
are volatile, and only once written to disk are they on permanent media. If a redolog group contains
redo records belonging to a dirty buffer, that redolog group is said to be in the "active" state.
A current redolog group is expected to move to the "active" state just after a log switch. Example:
SQL> alter system switch logfile ;

I've triggered a log switch manually with the command above.

SQL> select group#, status from v$log ;

GROUP# STATUS
------ --------
     1 ACTIVE
     2 CURRENT
     3 INACTIVE

Now group 2 is the current group because a log switch occurred. Redo records are no longer being
written to group 1; they are being written to group 2 instead.
There are still some dirty buffers in the buffer cache, and the redo records relevant to those dirty
buffers are in group 1. Because of that, group 1 is in the "active" state. Redolog files belonging to an
active redolog group cannot be overwritten, because the changes haven't been written to disk yet and
those redo records are still needed for recovery.
INACTIVE:
When a redolog group contains no redo records belonging to a dirty buffer, it is in the "inactive" state.
Redolog files belonging to an "inactive" redolog group can be overwritten. Example:

SQL> alter system flush buffer_cache ;

I've manually flushed the buffer cache (and thus the dirty blocks) to disk. There are no more dirty
buffers, therefore the redo records in the "active" redolog group are not needed anymore.
SQL> select group#, status from v$log ;

GROUP# STATUS
------ --------
     1 INACTIVE
     2 CURRENT
     3 INACTIVE

Group 2 is still the current group because no log switch occurred; just the dirty blocks were written to
disk. The redo records in group 1 are no longer needed, so it moved to the inactive state.
We expect an active redolog group to move to the inactive state after a while anyway, because Oracle's
internal mechanisms write dirty buffers to disk on their own. You don't have to flush dirty buffers
manually.
UNUSED:
When you add a new redolog group, its redolog files are empty and the group is in the UNUSED state.
Later, when it is used, it moves into one of the states explained above.

SQL> alter database add logfile group 4 ;

I've added a new redolog group with group number 4.

SQL> select group#, status from v$log ;

GROUP# STATUS
------ --------
     1 INACTIVE
     2 CURRENT
     3 INACTIVE
     4 UNUSED

Group 4 is "unused".
SQL> alter system switch logfile ;

I've triggered a log switch.

SQL> select group#, status from v$log ;

GROUP# STATUS
------ --------
     1 INACTIVE
     2 ACTIVE
     3 INACTIVE
     4 CURRENT

Now group 4 is no longer unused. It has become the current redolog group.

Archivelog
Redolog files are used in a circular fashion. When a redolog file is filled, the log writer process (LGWR)
shifts to the next redolog file. These files are officially called "online redolog files", but the community
usually calls them just "redolog files". When all the redolog files are filled, the first redolog file is
overwritten.

What are archived redolog files?

Your database can operate in one of two modes: archivelog mode and noarchivelog mode.
If your database operates in archivelog mode, then when an online redolog file is filled, it is copied to a
destination outside the database as a regular file.
If it operates in noarchivelog mode, nothing is archived; when that redolog file is later overwritten,
its redo records are lost permanently.
These copies of online redolog files are officially called "archived redolog files", but they are usually
called "archive logfiles" or simply "archivelogs" in the community.
Which mode should my database operate in?
We usually recommend that the database run in archivelog mode, because archivelogs are a vital part
of Oracle's recovery mechanism. If you restore your database from a backup, you can roll it forward
to any time you wish using archivelogs. Without archivelogs, you are stuck at the time your backup
was taken.
Is there any drawback to being in archivelog mode?
There is a process called the archiver (ARCn) which is responsible for archiving the online redolog files.
When a log switch occurs, the archiver process starts copying the filled file to the archiving destination.
This operation is I/O bound; there will be more disk activity on your system in archivelog mode.
Also, if the archiver hasn't finished copying an online redolog file yet, the log writer can't overwrite that
online redolog file and hangs until it is archived. If there is heavy DML (data manipulation language:
insert, update, delete, merge) on your system and your disk I/O speed is low, you may encounter
extra waits.
But as mentioned before, archivelog mode is recommended, and in a properly configured database
archiving should not be a performance issue.
How do you determine the current mode of your database?
1. Using SQL*Plus:

SQL> archive log list;

Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     24
Next log sequence to archive   27
Current log sequence           27

In the first line you can see that the database is in archive mode.
2. Using SQL commands:
a)

SQL> select log_mode from v$database ;

LOG_MODE
------------
ARCHIVELOG

The v$database view shows the log mode of the database. Here my database is in archivelog mode.
b)

SQL> select archiver from v$instance ;

ARCHIVE
-------
STARTED

If the archiver process is started, your database is operating in archivelog mode. You can
query v$instance to see the status of the archiver process. Here my database is in archivelog mode.
Archivelog Settings
Where are my archivelogs stored?
The log_archive_dest_<n> parameters determine the paths where your archivelogs are stored.
In the default configuration, redologs are archived to a single destination specified by the
"log_archive_dest" parameter.
Using SQL*Plus:

SQL> show parameter log_archive_dest;

NAME                  TYPE    VALUE
--------------------- ------- -------
log_archive_dest      string
If the log_archive_dest parameter is not set, the archivelogs will be stored in your fast recovery area.
The path of the fast recovery area is determined by the "db_recovery_file_dest" parameter.
Using SQL*Plus:

SQL> show parameter db_recovery_file_dest;

NAME                  TYPE    VALUE
--------------------- ------- -------
db_recovery_file_dest string  +FRA



Here my fast recovery area points to the ASM disk group "+FRA". My archivelogs will be stored under
the FRA disk group.
Where can I see where each of my archivelogs is stored?
You can query v$archived_log to view where each archivelog is stored.

SQL> select name from v$archived_log;

NAME
---------------------------------------------------------------
+FRA/testdb/archivelog/2011_08_10/thread_1_seq_3.5203.758824381
+FRA/testdb/archivelog/2011_08_10/thread_1_seq_4.5204.758829647
+FRA/testdb/archivelog/2011_08_10/thread_1_seq_5.5205.758831241
My archivelogs are managed by Oracle under the +FRA disk group. "testdb" is the name of my
database. Then follows the date those archivelogs were created (2011_08_10) and finally the name of
each archivelog file.
How does Oracle name archivelogs in ASM?
If your archivelogs are managed by Oracle then you don't have control over how they will be named.
You just set the ASM disk group they will be stored in, and Oracle names the files according to a
certain format.
thread_1_seq_3.5203.758824381
thread_1: A database can have multiple instances. (Ex: real application clusters) Each instance is also
called a thread and each instance has a unique thread number. Here we know that the archivelog
belongs to thread with number 1.
seq_3: When the log writer process (lgwr) starts writing to a redolog file, Oracle assigns that redolog
file a sequence number. Redo records in that redolog file are associated with that sequence number.
The sequence number increments with every log switch, so as redolog files are overwritten, a given
redolog file will have been associated with many sequence numbers over time. In this example the
archivelog file has sequence number 3.
5203: This is the ASM (Automatic Storage Management) file number of the archivelog. Every file in
ASM has a file number; it is not specific to archivelogs.
758824381: When a file is created in ASM, it is assigned an incarnation number to make it unique.
The incarnation number is derived from the timestamp at which the file was created. Every file in ASM
has an incarnation number; it is not specific to archivelogs.
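As a quick illustration of this naming scheme, here is a small Python sketch (a hypothetical helper, not an Oracle tool) that splits an Oracle-managed ASM archivelog name into the three components described above:

```python
import re

def parse_asm_archivelog_name(name):
    """Split an ASM archivelog name of the form
    thread_<t>_seq_<s>.<file#>.<incarnation#> into its components."""
    m = re.fullmatch(r"thread_(\d+)_seq_(\d+)\.(\d+)\.(\d+)", name)
    if m is None:
        raise ValueError("not an ASM-style archivelog name: " + name)
    thread, seq, file_no, incarnation = (int(g) for g in m.groups())
    return {"thread": thread, "sequence": seq,
            "asm_file_number": file_no, "incarnation": incarnation}

# Example taken from the v$archived_log listing above
print(parse_asm_archivelog_name("thread_1_seq_3.5203.758824381"))
```

Running this on the first file from the listing shows thread 1, sequence 3, ASM file number 5203, and incarnation 758824381, matching the breakdown above.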
How to store archivelogs in a regular file system
Besides ASM, archivelogs can also be stored in regular file systems. Set the log_archive_dest
parameter (or log_archive_dest_1 if you have a fast recovery area defined) to a directory on your
operating system. The Oracle user must have write permission on that directory.
SQL> alter system set log_archive_dest_1='LOCATION=/oradata/archivelogs';

SQL> alter system archive log current;

SQL> select name from v$archived_log
     where sequence# = (select max(sequence#) from v$archived_log);

/oradata/archivelogs/1_46_758824103.arc
In the first command I've told Oracle to store archivelogs under "/oradata/archivelogs" folder.



In the second command I've manually archived the current redolog.
The most recent archivelog will have the highest sequence number. In the third command I queried
the most recent archivelog's path. As you see the archivelogs are no longer stored in ASM. They are
stored under /oradata/archivelogs/ folder instead.
How does Oracle name archivelogs in a regular file system?
When naming archivelogs, Oracle uses the "log_archive_format" parameter.
SQL> show parameter log_archive_format;

NAME                  TYPE    VALUE
--------------------- ------- -------------
log_archive_format    string  %t_%s_%r.arc
%t : This is the thread number
%s : This is the sequence number
%r : This is the Resetlogs ID. When a database is recovered to a point in the past, it is opened with
the resetlogs option. After opening with resetlogs, the log sequence is reset and starts again from 1.
To distinguish the new sequence from the past sequence, each archivelog is assigned a Resetlogs ID,
so the past versions and new versions of the archivelogs can be distinguished.
When naming archivelogs, Oracle will concatenate the values of those variables in the sequence you
define in log_archive_format parameter.
%t_%s_%r.arc => 1_46_758824103.arc => thread number 1, sequence 46, Resetlogs ID
758824103
Using the %t, %s and %r variables is mandatory. You can also use the %d and %a variables in the
log_archive_format parameter.
%a : activation ID. This is a concept used in standby databases.
%d : database ID. This variable adds a unique id for the database. If multiple databases store their
archivelogs in the same location, you had better include the %d variable to distinguish archivelogs
from different databases.
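The substitution described above can be sketched in a few lines of Python. This is an illustrative, hypothetical helper only (Oracle itself performs this expansion, and the zero-padded %S/%T variants are not handled here):

```python
def format_archivelog_name(fmt, thread, sequence, resetlogs_id,
                           dbid=None, activation_id=None):
    """Expand the %t/%s/%r (and optional %d/%a) placeholders of a
    log_archive_format-style string, as described in the text."""
    name = (fmt.replace("%t", str(thread))
               .replace("%s", str(sequence))
               .replace("%r", str(resetlogs_id)))
    if dbid is not None:
        name = name.replace("%d", str(dbid))
    if activation_id is not None:
        name = name.replace("%a", str(activation_id))
    return name

print(format_archivelog_name("%t_%s_%r.arc", 1, 46, 758824103))
# 1_46_758824103.arc, matching the example above
```

With the %d variant used later in this section, format_archivelog_name("%d_%t_%s_%r.arc", 1, 50, 758824103, dbid="97862a24") yields 97862a24_1_50_758824103.arc.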
SQL> alter system set log_archive_format='%d_%t_%s_%r.arc' scope=spfile;

System altered.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 4175568896 bytes
Fixed Size                  2233088 bytes
Variable Size            2365590784 bytes
Database Buffers         1795162112 bytes
Redo Buffers               12582912 bytes
Database mounted.
Database opened.

SQL> alter system archive log current;

System altered.

SQL> SELECT name
     FROM v$archived_log
     WHERE sequence# = (SELECT MAX(sequence#) FROM v$archived_log);

NAME
---------------------------------------------------------------
/oradata/archivelogs/97862a24_1_50_758824103.arc
Here I've modified the log_archive_format parameter to include the database ID at the beginning of
the filename, restarted the instance for the change to take effect, and then manually archived the
current redolog. As you can see, the archivelog name now includes 97862a24 (the database ID).
What does Oracle say about the redo log file?
The most crucial and vital structure for recovery operations is the online redo log, which consists of
two or more pre-allocated files that store all changes made to the database as they occur. Every
instance of an Oracle database has an associated online redo log to protect the database in case of an
instance failure.
What is a redo log thread?
Each database instance has its own online redo log groups. These online redo log groups, multiplexed
or not, are called an instance's thread of online redo. In typical configurations, only one database
instance accesses an Oracle database, so only one thread is present. When running Oracle Real
Application Clusters, however, two or more instances concurrently access a single database and each
instance has its own thread. The relationship between Oracle instance and database is many-to-one:
more than one instance can access a database. This kind of configuration is called a Parallel Server
configuration.
What do those files contain?
Online redo log files are filled with redo records. A redo record, also called a redo entry, is made up of
a group of change vectors, each of which is a description of a change made to a single block in the
database. For example, if you change a salary value in an employee table, you generate a redo record
containing change vectors that describe changes to the data segment block for the table, the rollback
segment data block, and the transaction table of the rollback segments.
Redo entries record data that you can use to reconstruct all changes made to the database, including
the rollback segments. Therefore, the online redo log also protects rollback data.



When you recover the database using redo data, Oracle reads the change vectors in the redo records
and applies the changes to the relevant blocks.
Redo records are buffered in a circular fashion in the redo log buffer of the SGA and are written to one
of the online redo log files by the Oracle background process Log Writer (LGWR). Whenever a
transaction is committed, LGWR writes the transaction's redo records from the redo log buffer of the
SGA to an online redo log file, and a system change number (SCN) is assigned to identify the redo
records for each committed transaction. Only when all redo records associated with a given
transaction are safely on disk in the online logs is the user process notified that the transaction has
been committed.
Redo records can also be written to an online redo log file before the corresponding transaction is
committed. If the redo log buffer fills, or another transaction commits, LGWR flushes all of the redo
log entries in the redo log buffer to an online redo log file, even though some redo records may not be
committed. If necessary, Oracle can roll back these changes.
How Oracle Writes to the Online Redo Log?
The online redo log of a database consists of two or more online redo log files. Oracle requires
a minimum of two files to guarantee that one is always available for writing while the other is being
archived (if in ARCHIVELOG mode).
LGWR writes to online redo log files in a circular fashion. When the current online redo log file fills,
LGWR begins writing to the next available online redo log file. When the last available online redo log
file is filled, LGWR returns to the first online redo log file and writes to it, starting the cycle again.
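The circular writing described above can be sketched as a simple round-robin. This is illustrative Python only (the function name is made up for this example); real LGWR behavior also depends on archiving and checkpointing, covered next:

```python
def lgwr_write_order(num_groups, num_switches):
    """Return the sequence of group numbers (1-based) that LGWR would
    write to across num_switches log switches, cycling through the
    groups and returning to the first when the last is filled."""
    return [(i % num_groups) + 1 for i in range(num_switches)]

# With 3 redo log groups and 7 switches, LGWR cycles 1,2,3,1,2,3,1
print(lgwr_write_order(3, 7))
```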
Filled online redo log files are available to LGWR for reuse depending on whether archiving is enabled.
There can be contention between the filling of the online redo log files and the archiving of the redo
log files if the logs fill faster than they can be written to the archived log files. This is partly because
online log files are written in Oracle blocks while archives are written in OS blocks.
If archiving is disabled (NOARCHIVELOG mode), a filled online redo log file is available once the
changes recorded in it have been written to the data files.
If archiving is enabled (ARCHIVELOG mode), a filled online redo log file is available to LGWR once the
changes recorded in it have been written to the datafiles and once the file has been archived.
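The reuse rules in the two preceding paragraphs can be condensed into a small predicate. This is an illustrative Python sketch with a hypothetical helper name, not Oracle code:

```python
def log_group_reusable(archivelog_mode, checkpointed, archived):
    """Per the rules above: a filled online redo log group is reusable by
    LGWR once its changes are written (checkpointed) to the datafiles,
    and in ARCHIVELOG mode only after it has also been archived."""
    if not checkpointed:
        return False          # changes not yet on the datafiles
    if archivelog_mode:
        return archived       # must also be archived by ARCn
    return True               # NOARCHIVELOG: checkpoint alone suffices

# ARCHIVELOG mode: checkpointed but not yet archived -> not reusable
print(log_group_reusable(True, True, False))
```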
What is meant by Active (Current) and Inactive online redo log files?
At any given time, Oracle uses only one of the online redo log files to store redo records written from
the redo log buffer. The online redo log file that LGWR is actively writing to is called the current online
redo log file.
Online redo log files that are required for instance recovery are called active online redo log files.
Online redo log files that are not required for instance recovery are called inactive.
If you have enabled archiving (ARCHIVELOG mode), Oracle cannot reuse or overwrite an active online
log file until ARCn has archived its contents. If archiving is disabled (NOARCHIVELOG mode), when the
last online redo log file fills, writing continues by overwriting the first available inactive file.
Which parameters influence the log switches?
LOG_CHECKPOINT_TIMEOUT specifies the amount of time, in seconds, that has passed since the
incremental checkpoint at the position where the last write to the redo log (sometimes called the tail
of the log) occurred. This parameter also ensures that no buffer remains dirty (in the cache) for more
than that number of seconds. This is time-based switching of the log files.
LOG_CHECKPOINT_INTERVAL specifies the frequency of checkpoints in terms of the number of redo
log file blocks that can exist between an incremental checkpoint and the last block written to the redo
log. This number refers to physical operating system blocks, not database blocks. This is block-based
switching of the log files.
How do I respond to redo log failures?
If LGWR can successfully write to at least one member in a group:
Writing proceeds as normal. LGWR simply writes to the available members of a group and ignores the
unavailable members.

If LGWR cannot access the next group at a log switch because the group needs to be archived:
Database operation temporarily halts until the group becomes available, or until the group is archived.

If all members of the next group are inaccessible to LGWR at a log switch because of media failure:
Oracle returns an error and the database instance shuts down. In this case, you may need to perform
media recovery on the database from the loss of an online redo log file. If the database checkpoint has
moved beyond the lost redo log, media recovery is not necessary since Oracle has saved the data
recorded in the redo log to the data files. Simply drop the inaccessible redo log group. If Oracle did
not archive the bad log, use ALTER DATABASE CLEAR UNARCHIVED LOG to disable archiving before
the log can be dropped.

If all members of a group suddenly become inaccessible to LGWR while it is writing to them:
Oracle returns an error and the database instance immediately shuts down. In this case, you may
need to perform media recovery. If the media containing the log is not actually lost (for example, if
the drive for the log was inadvertently turned off), media recovery may not be needed. In this case,
you only need to turn the drive back on and let Oracle perform instance recovery.

How to add a redo log file group and member?

Suppose you want to add group 5 with 2 members; the command is:
ALTER DATABASE
ADD LOGFILE GROUP 5
('c:\oracle\oradata\whs\redo\redo_05_01.log',
'd:\oracle\oradata\whs\redo\redo_05_02.log')
SIZE 100M;
To add another member to an already existing group:
ALTER DATABASE ADD LOGFILE MEMBER
'c:\oracle\oradata\whs\redo\redo_05_03.log'
TO GROUP 5;
How to move a redo log file from one destination to another?
01. Shut down the database with normal/immediate, but not abort.
Shutdown immediate;
02. Copy the online redo log files to the new location.
On Unix use the mv command; on Windows use the move command.
03. Start up and MOUNT the database, logging in as sysdba (do not open the database).
startup mount pfile=<initialization parameter file with path>
04. Issue the following statement.
Example: you are moving the files from c:\oracle\oradata\redologs to c:\oracle\oradata\whs\redologs
and likewise on the d:\ drive.
ALTER DATABASE
RENAME FILE
'c:\oracle\oradata\redologs\redo_01_01.log',
'd:\oracle\oradata\redologs\redo_01_02.log'
TO
'c:\oracle\oradata\whs\redologs\redo_01_01.log',
'd:\oracle\oradata\whs\redologs\redo_01_02.log'
/



05. Open the database
alter database open;
How to drop a redo log file group and/or member?
To drop an online redo log group, you must have the ALTER DATABASE system privilege. Before
dropping an online redo log group, consider the following restrictions and precautions:
An instance requires at least two groups of online redo log files, regardless of the number of members
in the groups. (A group is one or more members.)
You can drop an online redo log group only if it is inactive. If you need to drop the current group, first
force a log switch to occur.
Make sure an online redo log group is archived (if archiving is enabled) before dropping it. To see
whether this has happened, use the V$LOG view.
SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;

GROUP# ARC STATUS
------ --- --------
     1 YES ACTIVE
     2 NO  CURRENT
     3 YES INACTIVE
     4 YES INACTIVE
Drop an online redo log group with the SQL statement ALTER DATABASE with the DROP LOGFILE
clause.
The following statement drops redo log group number 3:
ALTER DATABASE DROP LOGFILE GROUP 3;
When an online redo log group is dropped from the database, and you are not using the Oracle
Managed Files feature, the operating system files are not deleted from disk. The control files of the
associated database are updated to drop the members of the group from the database structure. After
dropping an online redo log group, make sure that the drop completed successfully, and then use the
appropriate operating system command to delete the dropped online redo log files.
To drop an online redo log member, you must have the ALTER DATABASE system privilege. Consider
the following restrictions and precautions before dropping individual online redo log members:
It is permissible to drop online redo log files so that a multiplexed online redo log becomes temporarily
asymmetric. For example, if you use duplexed groups of online redo log files, you can drop one
member of one group, even though all other groups have two members each. However, you should
rectify this situation immediately so that all groups have at least two members, and thereby eliminate
the single point of failure possible for the online redo log.
An instance always requires at least two valid groups of online redo log files, regardless of the number
of members in the groups. (A group is one or more members.) If the member you want to drop is the
last valid member of the group, you cannot drop the member until the other members become valid.
To see a redo log file's status, use the V$LOGFILE view. A redo log file becomes INVALID if Oracle
cannot access it. It becomes STALE if Oracle suspects that it is not complete or correct. A stale log file
becomes valid again the next time its group is made the active group.

You can drop an online redo log member only if it is not part of an active or current group. If you
want to drop a member of an active group, first force a log switch to occur. Make sure the group to
which an online redo log member belongs is archived (if archiving is enabled) before dropping the
member. To see whether this has happened, use the V$LOG view.
To drop specific inactive online redo log members, use the ALTER DATABASE statement with the DROP
LOGFILE MEMBER clause.
The following statement drops the redo log member 'redo_01_01.log' of group 1:
ALTER DATABASE DROP LOGFILE MEMBER 'c:\oracle\oradata\redologs\redo_01_01.log';
When an online redo log member is dropped from the database, the operating system file is not
deleted from disk. Rather, the control files of the associated database are updated to drop the member



from the database structure. After dropping an online redo log file, make sure that the drop completed
successfully, and then use the appropriate operating system command to delete the dropped online
redo log file.
To drop a member of an active group, you must first force a log switch and as a result that member
becomes inactive.
How can I force a log switch?
ALTER SYSTEM SWITCH LOGFILE;
How can I clear an online redo log file?
ALTER DATABASE CLEAR LOGFILE GROUP 3;
This statement overcomes two situations where dropping redo logs is not possible:
(1) there are only two log groups, or
(2) the corrupt redo log file belongs to the current group.
If the corrupt redo log file has not been archived, use the UNARCHIVED keyword in the statement:
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;
Which metadata views should I refer to for redo log files?

View            Description
V$LOG           Displays the redo log file information from the control file
V$LOGFILE       Identifies redo log groups and members and member status
V$LOG_HISTORY   Contains log history information

What is an archived redo log file?
Oracle enables you to save filled groups of online redo log files to one or more offline destinations,
known collectively as the archived redo log, or more simply archive log. The process of turning online
redo log files into archived redo log files is called archiving. This process is only possible if the
database is running in ARCHIVELOG mode. You can choose automatic or manual archiving.
An archived redo log file is a copy of one of the identical filled members of an online redo log group.
It includes the redo entries present in the identical members of a redo log group and also preserves
the group's unique log sequence number. For example, if you are multiplexing your online redo log
and Group 1 contains the member files redo_log_01_01.log and redo_log_01_02.log, then the
archiver process (ARCn) will archive one of these identical members. Should redo_log_01_01.log
become corrupted, ARCn can still archive the identical redo_log_01_02.log. The archived redo log
contains a copy of every group created since you enabled archiving.
When running in ARCHIVELOG mode, the log writer process (LGWR) is not allowed to reuse and hence
overwrite an online redo log group until it has been archived. The background process ARCn
automates archiving operations when automatic archiving is enabled. Oracle starts multiple archiver
processes as needed to ensure that the archiving of filled online redo logs does not fall behind.
How are they useful to me?
ARCHIVELOG mode enabled databases are also called media recovery enabled databases. This means
that you shall be able to restore and recover the database.
RESTORE-ing the database means restoring from the backup. The backup may be online, offline or a
logical export of a database, schema or table.
RECOVERY means recovering the database to the point in time of failure, to a log sequence, or to a
system change number. To perform these kinds of operations we need the archived redo log files
generated by Oracle since the start of the last backup, which was used to restore the database. The
required number of archived log files depends on the kind of recovery you are going for.
So you can use archived redo logs to:



(1) Recover a database
(2) Update a standby database
(3) Gain information about the history of a database using the LogMiner utility.
(We shall discuss standby database and LogMiner at a later point of time, as that discussion is
irrelevant here).
What if I run my database in NOARCHIVELOG mode?
When you run your database in NOARCHIVELOG mode, you disable the archiving of the online redo
log. The database's control file indicates that filled groups are not required to be archived. Therefore,
when a filled group becomes inactive after a log switch, the group is available for reuse by LGWR.
The choice of whether to enable the archiving of filled groups of online redo log files depends on the
availability and reliability requirements of the application running on the database. If you cannot afford
to lose any data in your database in the event of a disk failure, use ARCHIVELOG mode. The archiving
of filled online redo log files can require you to perform extra administrative operations.
NOARCHIVELOG mode protects a database only from instance failure, but not from media failure. Only
the most recent changes made to the database, which are stored in the groups of the online redo log,
are available for instance recovery. In other words, if a media failure occurs while in NOARCHIVELOG
mode, you can only restore (not recover) the database to the point of the most recent full database
backup. You cannot recover subsequent transactions.
Also, in NOARCHIVELOG mode you cannot perform online tablespace backups. Furthermore, you
cannot use online tablespace backups previously taken while the database operated in ARCHIVELOG
mode.
You can only use whole database backups taken while the database is closed to restore a database
operating in NOARCHIVELOG mode. Therefore, if you decide to operate a database in NOARCHIVELOG
mode, take whole database backups at regular, frequent intervals.
Can I use RMAN to backup my database, which is in NOARCHIVELOG mode?
Yes, you can use RMAN for offline or cold backups by keeping the database in mount mode, because
the data files are not open at that stage.
What if I run the database in ARCHIVELOG mode?
When you run a database in ARCHIVELOG mode, you specify the archiving of the online redo log. The
database control file indicates that a group of filled online redo log files cannot be used by LGWR until
the group is archived. A filled group is immediately available for archiving after a redo log switch
occurs.
The archiving of filled groups has these advantages:
A database backup, together with online and archived redo log files, guarantees that you can recover
all committed transactions in the event of an operating system or disk failure. You can use a backup
taken while the database is open and in normal system use if you keep an archived log.
You can keep a standby database current with its original database by continually applying the
original's archived redo logs to the standby.
Can I ever delete these archived log files from the computer?
If you never delete these files, it is a showstopper: the disks fill up with them and the database gets
stuck. It is always advised to back up these archived log files to tape or any other destination of your
choice that does not affect the functioning of the database, and then delete them. You can determine
the required archived log files based on the latest full backup of the database. There are many factors
to be considered, and we shall discuss them in detail when discussing backup strategies.
How can I set my database to ARCHIVELOG mode from NOARCHIVELOG mode?
(1) Shutdown the database with normal or immediate option but not abort option.
Shutdown immediate;
(2) Back up the database
(3) Edit the initialization parameter file to set the archive log mode parameters:



LOG_ARCHIVE_START=TRUE
LOG_ARCHIVE_DEST= 'c:\oracle\admin\whs\archive\'
(4) Start a new instance and mount, but do not open, the database.
STARTUP MOUNT
To enable or disable archiving, the database must be mounted but not open.
(5) Switch the database's archiving mode. Then open the database for normal operations.
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
(6) Shut down the database.
SHUTDOWN IMMEDIATE
(7) Back up the database:
Changing the database archiving mode updates the control file. After changing the database archiving
mode, you must back up all of your database files and control file. Any previous backup is no longer
usable because it was taken in NOARCHIVELOG mode.
When you set LOG_ARCHIVE_START=TRUE you are setting up automatic archiving of the database. If
you do not set this, archiving is manual and you must archive the redo log files yourself as and when
they are filled. If you don't, Oracle overwrites those online redo log files and they are no longer
available for performing any recovery.
How can I start the archiving after the instance is started?
To start the archiving after the instance is started issue the following statement
ALTER SYSTEM ARCHIVE LOG START;
You can optionally include the archiving destination.
If an instance is shut down and restarted after automatic archiving is enabled using the ALTER
SYSTEM statement, the instance is reinitialized using the settings of the initialization parameter file.
Those settings may or may not enable automatic archiving. If your intent is to always archive redo log
files automatically, then you should include LOG_ARCHIVE_START = TRUE in your initialization
parameters.
How can I control the number of archiver processes?
This is controlled by Oracle if it is left to Oracle, provided you are ready to accept the runtime
overheads. If you want to take control in these kinds of situations, set the
LOG_ARCHIVE_MAX_PROCESSES parameter. The maximum number of processes you can start using
this parameter is 10.
The LOG_ARCHIVE_MAX_PROCESSES is dynamic, and can be changed using the ALTER SYSTEM
statement. The following statement increases (or decreases) the number of ARCn processes currently
running:
ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=3;
There is usually no need to change the LOG_ARCHIVE_MAX_PROCESSES initialization parameter from
its default value of 2, because Oracle will adequately adjust ARCn processes according to system
workload.
How can I disable archiving for my database?
To disable automatic archiving, set this parameter in the initialization parameter file:
LOG_ARCHIVE_START=FALSE
To stop automatic archiving after instance startup:
ALTER SYSTEM ARCHIVE LOG STOP;
To perform manual archiving:
ALTER SYSTEM ARCHIVE LOG ALL;
How can I set the archive destination? And can I archive to multiple destinations?
(These methods are suggested by Oracle.)

Method 1: LOG_ARCHIVE_DEST_n, where n is an integer from 1 to 10
  Host: local or remote
  Example:
    LOG_ARCHIVE_DEST_1 = 'LOCATION=/disk1/arc'
    LOG_ARCHIVE_DEST_2 = 'SERVICE=standby1'

Method 2: LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST
  Host: local only
  Example:
    LOG_ARCHIVE_DEST = '/disk1/arc'
    LOG_ARCHIVE_DUPLEX_DEST = '/disk2/arc'
Method 1
Perform the following steps to set the destination for archived redo logs using the
LOG_ARCHIVE_DEST_n initialization parameter:
(1) Use SQL*Plus to shut down the database with normal or immediate option but not abort.
SHUTDOWN
(2) Edit the LOG_ARCHIVE_DEST_n parameter to specify from one to ten archiving locations. The
LOCATION keyword specifies an operating system specific path name.
For example, enter:
LOG_ARCHIVE_DEST_1 = 'LOCATION = /disk1/archive'
LOG_ARCHIVE_DEST_2 = 'LOCATION = /disk2/archive'
LOG_ARCHIVE_DEST_3 = 'LOCATION = /disk3/archive'
If you are archiving to a standby database, use the SERVICE keyword to specify a valid net service
name from the tnsnames.ora file. For example, enter:
LOG_ARCHIVE_DEST_4 = 'SERVICE = standby1'
(3) Edit the LOG_ARCHIVE_FORMAT initialization parameter, using %s to include the log sequence
number as part of the file name and %t to include the thread number. Use capital letters (%S and
%T) to pad the file name to the left with zeroes. For example, enter:
LOG_ARCHIVE_FORMAT = arch%s.arc
These settings will generate archived logs as follows for log sequence
numbers 100, 101, and 102:
/disk1/archive/arch100.arc,
/disk1/archive/arch101.arc,
/disk1/archive/arch102.arc
/disk2/archive/arch100.arc,
/disk2/archive/arch101.arc,
/disk2/archive/arch102.arc
/disk3/archive/arch100.arc,
/disk3/archive/arch101.arc,
/disk3/archive/arch102.arc
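The fan-out of archived logs across destinations shown above can be reproduced with a short sketch. This is illustrative Python (a hypothetical helper; only the %s placeholder is handled), not an Oracle utility:

```python
def archived_log_paths(destinations, fmt, sequences):
    """For each archiving destination, expand a LOG_ARCHIVE_FORMAT-style
    name (only %s, the log sequence number, handled here) for every
    log sequence number, yielding one copy per destination."""
    return [dest + "/" + fmt.replace("%s", str(seq))
            for dest in destinations for seq in sequences]

# Reproduces the nine paths listed above for sequences 100-102
for p in archived_log_paths(["/disk1/archive", "/disk2/archive", "/disk3/archive"],
                            "arch%s.arc", [100, 101, 102]):
    print(p)
```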
Method 2
The second method, which allows you to specify a maximum of two locations, is to use the
LOG_ARCHIVE_DEST parameter to specify a primary archive destination and the
LOG_ARCHIVE_DUPLEX_DEST to specify an optional secondary archive destination. Whenever Oracle
archives a redo log, it archives it to every destination specified by either set of parameters.
Perform the following steps to use method 2:
(1) Use SQL*Plus to shut down the database.
SHUTDOWN
(2) Specify destinations for the LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST parameter
(you can also specify LOG_ARCHIVE_DUPLEX_DEST dynamically using the ALTER SYSTEM statement).
For example, enter:



LOG_ARCHIVE_DEST = '/disk1/archive'
LOG_ARCHIVE_DUPLEX_DEST = '/disk2/archive'
(3) Edit the LOG_ARCHIVE_FORMAT parameter, using %s to include the log sequence number as part
of the file name and %t to include the thread number. Use capital letters (%S and %T) to pad the file
name to the left with zeroes.
For example, enter:
LOG_ARCHIVE_FORMAT = arch_%t_%s.arc
For example, the above settings generate archived logs as follows for log sequence numbers 100 and
101 in thread 1:
/disk1/archive/arch_1_100.arc, /disk1/archive/arch_1_101.arc
/disk2/archive/arch_1_100.arc, /disk2/archive/arch_1_101.arc
Which metadata views are to be accessed for information?
There are several dynamic performance views that contain useful information about archived redo log
files and redo logs.
Dynamic Performance View  Description

V$ARCHIVE_DEST            Shows the status of each archive destination; describes the current
                          instance, all archive destinations, and the current value, mode, and
                          status of these destinations.
V$ARCHIVE_DEST_STATUS     Enables you to know whether a destination is valid or not.
V$ARCHIVE_GAP             If you find any entries in this view, your standby database is not in
                          sync with production and you must apply all the listed archive log
                          files to fill that gap.
V$ARCHIVE_PROCESSES       Lists the archive processes configured and valid with ACTIVE state,
                          besides displaying information about the state of the various archive
                          processes for an instance.
V$PROXY_ARCHIVEDLOG       Contains descriptions of archived log backups taken with a feature
                          called Proxy Copy. Each row represents a backup of one archived log.
V$ARCHIVED_LOG            Displays historical archived log information from the control file. If
                          you use a recovery catalog, the RC_ARCHIVED_LOG view contains
                          similar information.
V$ARCHIVE                 Lists the redo log files that require archiving.
V$DATABASE                Identifies whether the database is in ARCHIVELOG or NOARCHIVELOG
                          mode.
V$BACKUP_REDOLOG          Contains information about any backups of archived logs. If you use a
                          recovery catalog, the RC_BACKUP_REDOLOG view contains similar
                          information.
V$LOG                     Displays all online redo log groups for the database and indicates
                          which need to be archived.
V$LOG_HISTORY             Contains log history information such as which logs have been
                          archived and the SCN range for each archived log.

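For example, to check the state of each archive destination for the current instance, you can query V$ARCHIVE_DEST directly (the columns shown are a small subset of what the view offers):
SQL> select dest_id, destination, status from v$archive_dest;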
What is ARCHIVE LOG LIST and when can I issue it?
You can issue this command to find the log mode, the archived log file sequence numbers, and related
settings. To issue it, log in as SYS in SQL*Plus (for Oracle 9i or higher), or in SVRMGR for
earlier versions, except Oracle 7, where SQLDBA was used for login.
ARCHIVE LOG LIST
Database log mode Archive Mode
Automatic archival Enabled


Archive destination D:\ORACLE\WHS\oradata\archive
Oldest online log sequence 11160
Next log sequence to archive 11163
Current log sequence 11163
This display tells you all the necessary information regarding the archived redo log settings for the
current instance:
The database is currently operating in ARCHIVELOG mode.
Automatic archiving is enabled.
The archived redo log's destination is D:\ORACLE\WHS\oradata\archive.
The oldest filled online redo log group has a sequence number of 11160.
The next filled online redo log group to archive has a sequence number of 11163.
The current online redo log file has a sequence number of 11163.
Oracle Database Security Tutorial
Authentication
Authentication is the process of validating that the proper user is accessing the database. It is
most commonly done by password authentication.
Password Authentication
A password profile is a mechanism in the database that forces a user to follow guidelines when
creating or changing passwords. The guidelines help provide stronger security in the system by not
allowing weak passwords. You can create your own password verify function and attach it to a profile.
A password verify function is a program written in PL/SQL that examines passwords when they're
chosen and accepts or rejects them based on criteria. If you have special password requirements, you
can write your own password verify function and assign it to your password profile using the
PASSWORD_VERIFY_FUNCTION attribute of the profile.
To use Oracle's provided password verify function, do the following:
$ sqlplus / as sysdba
SQL> @?/rdbms/admin/utlpwdmg.sql
This creates the default password verify function and assigns it to the DEFAULT profile. Using PL/SQL,
you can even take Oracle's example file and modify it to fit your needs.
Creating a password profile
$ sqlplus / as sysdba
SQL>CREATE PROFILE writer_profile LIMIT
FAILED_LOGIN_ATTEMPTS 3
PASSWORD_LOCK_TIME 1/96
PASSWORD_LIFE_TIME 90;
PASSWORD_LOCK_TIME 1/96 means 1 day/96 = 15 min
Now, assign a user in the above profile using following command.
SQL>alter user HR profile writer_profile ;
If you don't assign a user to any profile, the user is assigned to the DEFAULT profile. By default in
Oracle 11g, the DEFAULT profile sets the following limits.
FAILED_LOGIN_ATTEMPTS = 10
PASSWORD_GRACE_TIME = 7 (days)
PASSWORD_LIFE_TIME = 180 (days)
PASSWORD_LOCK_TIME = 1 (day)
PASSWORD_REUSE_MAX = UNLIMITED
PASSWORD_REUSE_TIME = UNLIMITED
You can edit your profile or the DEFAULT profile. For example, to change the failed login attempts
setting to 3 on the DEFAULT profile, type the following.



SQL> alter profile default limit Failed_login_attempts 3;
Operating system authentication
Operating system authentication recognizes a user as logged into the OS and waives the password
requirement. Type the following code to create an OS-authenticated user in Oracle.
SQL> create user oracle_user$solaris_user identified externally;
oracle_user = user name of oracle
solaris_user = OS user by which oracle_user will be identified
System privileges
System privileges are the first privileges any user needs.
The CREATE SESSION privilege gives users access to the database.
SQL> grant create session to oracle_user$solaris_user;
To restrict operating system authentication, revoke the CREATE SESSION privilege from the above user.
SQL> revoke create session from oracle_user$solaris_user;
Object privileges
You can grant only eight object privileges:
SELECT lets the recipient select rows from tables.
INSERT lets the recipient insert rows into tables.
UPDATE lets the recipient change existing rows in tables.
DELETE lets the recipient remove existing rows from tables.
REFERENCES lets a user create a view on, or a foreign key to, another user's table.
INDEX lets one user create an index on another user's table.
ALTER lets one user change or add to the structure of another user's table.
EXECUTE lets the recipient run procedures owned by another user.
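A short sketch combining a few of these object privileges (the schema, table, and procedure names are illustrative, not from the text):
SQL> grant select, insert, update on hr.employees to user1;
SQL> grant execute on hr.calc_bonus to user1 with grant option;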
Role
The following command creates a role.
SQL> create role SALES_ROLE;
Grant system and object privileges to the role:
SQL> grant INSERT on SALES to SALES_ROLE;
SQL> grant UPDATE on INVENTORY to SALES_ROLE;
SQL> grant DELETE on ORDERS to SALES_ROLE;
Grant the role to some users:
SQL> grant SALES_ROLE to user1, user2;
Oracle supplies the following roles during installation.
CONNECT = includes the privileges needed to connect to the database.
RESOURCE = includes many of the privileges a developer might use to create and manage an
application, such as creating and altering many types of objects, including tables, views, and sequences.
EXP_FULL_DATABASE/IMP_FULL_DATABASE = allows the grantee to do logical backups of the
database.
RECOVERY_CATALOG_OWNER = allows the grantee to administer an Oracle Recovery Manager catalog.
SCHEDULER_ADMIN = allows the grantee to manage the Oracle job scheduler.
DBA = gives a user most of the major privileges required to administer a database. These privileges
can manage users, security, space, system parameters, and backups.
Auditing Oracle Database
All the following database actions are automatically audited by default in Oracle 11g.
1. CREATE EXTERNAL JOB
2. CREATE ANY JOB
3. GRANT ANY OBJECT PRIVILEGE
4. EXEMPT ACCESS POLICY
5. CREATE ANY LIBRARY
6. GRANT ANY PRIVILEGE
7. DROP PROFILE
8. ALTER PROFILE
9. DROP ANY PROCEDURE
10. ALTER ANY PROCEDURE
11. CREATE ANY PROCEDURE
12. ALTER DATABASE
13. GRANT ANY ROLE
14. CREATE PUBLIC DATABASE LINK
15. DROP ANY TABLE
16. ALTER ANY TABLE
17. CREATE ANY TABLE
18. DROP USER
19. ALTER USER
20. CREATE USER
21. CREATE SESSION
22. AUDIT SYSTEM
23. ALTER SYSTEM
24. SYSTEM AUDIT
25. ROLE
The database parameter AUDIT_TRAIL is also set to DB by default. Alternatively, it can be changed to
OS for file-based auditing; the audit file location can then be checked as follows.
SQL> show parameter audit_file_dest;
Turn auditing on and off with the AUDIT or NOAUDIT command.
For example, you can turn on auditing for any CREATE TABLE statement. You might want to track how
often and who is creating tables in the database. Auditing CREATE TABLE this way means an audit
entry is generated every time someone creates a table.
SQL> audit create table;
SQL> audit create table by user1;
Turn off auditing for CREATE TABLE statements:
SQL> noaudit create table;
Other examples:
SQL> audit drop any table by user1 whenever not successful;
SQL> audit select on hr.employees by access;
SQL> audit select on hr.employees by session;
View Audit Information
To verify the audits you have already implemented use the following command.
SQL> select * from DBA_PRIV_AUDIT_OPTS;
View the audits turned on for objects owned by HR for the SELECT, INSERT, UPDATE, and DELETE
privileges.
SQL> select OWNER, OBJECT_NAME, OBJECT_TYPE, SEL, INS, UPD, DEL
from DBA_OBJ_AUDIT_OPTS
where owner = 'HR';
DBA_AUDIT_TRAIL = shows all audit entries in the system.
DBA_AUDIT_OBJECT = shows all audit entries in the system for objects.
DBA_AUDIT_STATEMENT = shows audit entries for the statements GRANT, REVOKE, AUDIT, NOAUDIT,
and ALTER SYSTEM.



DBA_AUDIT_SESSION = shows audit entries for the CONNECT and DISCONNECT actions.
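For example, recent audit entries can be inspected through DBA_AUDIT_TRAIL; the columns shown here are a common subset of what the view offers:
SQL> select username, action_name, obj_name, timestamp
     from dba_audit_trail
     order by timestamp desc;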
Oracle Database Encryption
Just because someone can't get into your database doesn't mean she doesn't have access to your
data. A clever hacker who gains access to your Oracle database's raw files can extract plain text data
from these files.
Raw files include the following.
Data files
Recovery Manager backup files
Data Pump export files
Encrypt these files by using Oracle Wallet and an encryption key. Depending on the strength of an
encryption key, you can make these files virtually indecipherable to anyone.
Data encryption can enhance security both inside and outside the database. A user may have a
legitimate need for access to most columns of a table, but if one of the columns is encrypted and the
user does not know the encryption key, the information is not usable. The same concern is true for
information that needs to be sent securely over a network. The techniques I presented so far in this
chapter, including authentication, authorization, and auditing, ensure legitimate access to data from a
database user but do not prevent access to an operating system user that may have access to the
operating system files that compose the database itself. Users can leverage one of two methods for
data encryption: using the package DBMS_CRYPTO, an Oracle Database 10g replacement for the
package DBMS_OBFUSCATION_TOOLKIT found in Oracle9i, and transparent data encryption, which
stores encryption keys globally and includes methods for encrypting entire tablespaces.
DBMS_CRYPTO Package
New to Oracle 10g, the package DBMS_CRYPTO replaces the DBMS_OBFUSCATION_TOOLKIT and
includes the Advanced Encryption Standard (AES) encryption algorithm, which replaces the Data
Encryption Standard (DES). Procedures within DBMS_CRYPTO can generate private keys for you, or
you can specify and store the key yourself. In contrast to DBMS_OBFUSCATION_TOOLKIT, which could
only encrypt RAW or VARCHAR2 datatypes, DBMS_CRYPTO can encrypt BLOB and CLOB types.
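A minimal sketch of encrypting a string with DBMS_CRYPTO, assuming AES-256 in CBC mode with PKCS#5 padding (in practice the key must be stored and managed securely, not generated and discarded as here):
SQL> set serveroutput on
SQL> declare
       l_key   raw(32)   := dbms_crypto.randombytes(32);  -- 256-bit key
       l_plain raw(2000) := utl_i18n.string_to_raw('Top secret', 'AL32UTF8');
       l_enc   raw(2000);
     begin
       l_enc := dbms_crypto.encrypt(
                  src => l_plain,
                  typ => dbms_crypto.encrypt_aes256
                         + dbms_crypto.chain_cbc
                         + dbms_crypto.pad_pkcs5,
                  key => l_key);
       dbms_output.put_line(rawtohex(l_enc));
     end;
     /
Note that EXECUTE on DBMS_CRYPTO must be granted to the calling user; it is not granted to PUBLIC.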
Transparent Data Encryption
Transparent data encryption is a key-based access control system that relies on an external module
for enforcing authorization. Each table with encrypted columns has its own encryption key, which in
turn is encrypted by a master key created for the database and stored encrypted within the database;
the master key is not stored in the database itself. The emphasis is on the word transparent:
authorized users do not have to specify passwords or keys when accessing encrypted columns in a
table or in an encrypted tablespace. Although transparent data encryption has been significantly
enhanced in Oracle Database 11g, there are still a few restrictions on its use; for example, you cannot
encrypt columns used in foreign key constraints, since every table has a unique column encryption key.
This should typically not be an issue, since keys used in foreign key constraints should be
system-generated, unique, and unintelligent. Business keys and other business attributes of a table are more
likely candidates for encryption and usually do not participate in foreign key relationships with other
tables. Other database features and types are also not eligible for transparent data encryption:

 Index types other than B-tree
 Range-scan searching of indexes
 BFILEs (external objects)
 Materialized view logs
 Synchronous Change Data Capture
Creating an Oracle Wallet
You can create a wallet for Transparent Data Encryption using Oracle Enterprise Manager. Select the
Server tab, and then click the Transparent Data Encryption link under the Security Heading. You will
see the page in Figure 9-10. In this example, there is no wallet created yet. The file sqlnet.ora stores



the location of the wallet using the ENCRYPTION_WALLET_LOCATION variable. If this variable does not
exist in sqlnet.ora, the wallet is created in $ORACLE_HOME/admin/database_name/wallet, which in
this example is /u01/app/oracle/admin/dw/wallet. To create the encryption key and place it in the
wallet, create a wallet password that is at least ten characters long, a mix of upper- and lowercase
letters, numbers, and punctuation.
The equivalent SQL commands to create, open, and close a wallet are very straightforward and
probably take less time to type than using Oracle Enterprise Manager! To create a new key, and create
the wallet if it does not already exist, use the alter system command as follows.
SQL> alter system set encryption key identified by "Uni123#Lng";
System altered.
Note the importance of putting the wallet key within double quotes; otherwise, the password loses
its mixed case and the wallet will not open. After the database instance is shut down and
restarted, you need to open the wallet with the alter system command if this task is not automated
otherwise.
SQL> alter system set encryption wallet open identified by "Uni123#Lng";
System altered.
Finally, you can easily disable access to all encrypted columns in the database at any time by closing
the wallet.
SQL> alter system set encryption wallet close;
System altered.
Make frequent backups of your wallet, and don't forget the wallet key (or, if the security administrator
is a separate role from the DBA's role, that administrator should not forget the wallet key), because
losing the wallet or the password to the wallet will prevent decryption of any encrypted columns or
tablespaces.
Encrypting a Table
You can encrypt a column or columns in one or more tables simply by adding the encrypt keyword
after the column's datatype in a create table command, or after the column name when modifying an
existing column. For example, to encrypt the SALARY column of the EMPLOYEES table, use this command.
SQL> alter table employees modify (salary encrypt);
Table altered.
Any users who had the privileges to access this column in the past still have the same access to the
SALARY column; it is completely transparent to the users. The only difference is that the SALARY
column is indecipherable to anyone accessing the operating system file containing the EMPLOYEES
table.
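Column encryption can also be declared at table creation time; a sketch with an illustrative table name, showing a non-default algorithm:
SQL> create table payroll (
       emp_id NUMBER,
       salary NUMBER ENCRYPT USING 'AES192'
     );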
Encrypting a Tablespace
To encrypt an entire database, the COMPATIBLE initialization parameter must be set to 11.1.0.0.0,
the default for Oracle Database 11g. If the database has been upgraded from an earlier release and
you change the COMPATIBLE parameter to 11.1.0.0.0, the change is irreversible. An existing
tablespace cannot be encrypted; to encrypt the contents of an existing tablespace, you must create a
new tablespace with the ENCRYPTION option and copy or move existing objects to the new tablespace.
CREATE SMALLFILE TABLESPACE "USERS_CRYPT"
DATAFILE '+DATA' SIZE 500M LOGGING EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO NOCOMPRESS ENCRYPTION
USING 'AES256' DEFAULT STORAGE(ENCRYPT);
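Existing objects can then be moved into the encrypted tablespace; the object names below are illustrative, and any indexes on a moved table must be rebuilt:
SQL> alter table hr.employees move tablespace users_crypt;
SQL> alter index hr.emp_email_uk rebuild tablespace users_crypt;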
How to create and manage an Oracle user?
The following command creates a new user.
SQL> create user user1 identified by user1 account unlock
default tablespace users
temporary tablespace temp;
To restrict the user with a space quota in a tablespace, use the following command.



SQL> alter user user1 quota 250M on users;
Grant basic privileges to a user for work.
SQL> grant create session, create table to sking;
Drop a user.
SQL> drop user queenb cascade;
Alter a user's properties.
SQL> alter user user1
default tablespace users2
quota 500M on users2;

To debug an application, a DBA sometimes needs to connect as another user to simulate the problem.
Without knowing the actual plain-text password of the user, the DBA can retrieve the encrypted
password from the database, change the password for the user, connect with the changed password,
and then change back the password using an undocumented clause of the alter
user command.
The first step is to retrieve the encrypted password for the user, which is stored in the table
DBA_USERS.
SQL> select password from dba_users where username = 'USER1';
PASSWORD
------------------------------
94b7CBD64A941432
Save this password using cut and paste in a GUI environment, or save it in a text file to retrieve later.
The next step is to temporarily change the users password and then log in using the temporary
password:
SQL> alter user user1 identified by temp_pass;
User altered.
SQL> connect user1/temp_pass@db;
Connected.
At this point, you can debug the application from USER1's point of view. Once you are done
debugging, change the password back using the undocumented by values clause of alter user.
SQL> alter user user1 identified by values '94b7CBD64A941432';
Profiles and Resource Control
The list of resource-control profile options that can appear after CREATE PROFILE profilename LIMIT
is explained below. Each of these parameters can be an integer, UNLIMITED, or DEFAULT. As
with the password-related parameters, UNLIMITED means that there is no bound on how much of the
given resource can be used. DEFAULT means that the parameter takes its value from the DEFAULT
profile. The COMPOSITE_LIMIT parameter allows you to control a group of resource limits when the
types of resources typically used vary widely by type; it allows a user to use a lot of CPU time but
not much disk I/O during one session, and vice versa during another session, without being
disconnected by the policy.
Resource Parameter Description
SESSIONS_PER_USER = The maximum number of sessions a user can simultaneously have
CPU_PER_SESSION = The maximum CPU time allowed per session, in hundredths of a second
CPU_PER_CALL = Maximum CPU time for a statement parse, execute, or fetch operation, in
hundredths of a second
CONNECT_TIME = Maximum total elapsed time, in minutes
IDLE_TIME = Maximum continuous inactive time in a session, in minutes, while a query or other
operation is not in progress
LOGICAL_READS_PER_SESSION = Total number of data blocks read per session, either from memory
or disk



LOGICAL_READS_PER_CALL = Maximum number of data blocks read for a statement parse, execute,
or fetch operation
COMPOSITE_LIMIT = Total resource cost, in service units, as a composite
weighted sum of CPU_PER_SESSION, CONNECT_TIME, LOGICAL_READS_PER_SESSION, and
PRIVATE_SGA
PRIVATE_SGA = Maximum amount of memory a session can allocate in the shared pool, in bytes,
kilobytes, or megabytes
By default, all resource costs are zero:
SQL> select * from resource_cost;
RESOURCE_NAME UNIT_COST
-------------------------------- ----------
CPU_PER_SESSION 0
LOGICAL_READS_PER_SESSION 0
CONNECT_TIME 0
PRIVATE_SGA 0
4 rows selected.
To adjust the resource cost weights, use the ALTER RESOURCE COST command. In this example, we
change the weightings so that CPU_PER_SESSION favors CPU usage over connect time by a factor of
25 to 1; in other words, a user is more likely to be disconnected because of CPU usage than connect
time.
SQL> alter resource cost cpu_per_session 50 connect_time 2;
SQL> select * from resource_cost;
RESOURCE_NAME UNIT_COST
-------------------------------- ----------
CPU_PER_SESSION 50
LOGICAL_READS_PER_SESSION 0
CONNECT_TIME 2
PRIVATE_SGA 0
4 rows selected.
The next step is to create a new profile or modify an existing profile to use a composite limit.
SQL> create profile lim_comp_cpu_conn limit composite_limit 250;
Profile created.
As a result, users assigned to the profile LIM_COMP_CPU_CONN will have their session resources
limited using the following formula to calculate cost.
composite_cost = (50 * CPU_PER_SESSION) + (2 * CONNECT_TIME);
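To make the formula concrete (units are those defined for each underlying parameter): with the weights set above and a COMPOSITE_LIMIT of 250, a session that has consumed 3 units of CPU_PER_SESSION after 50 minutes of connect time reaches a cost of 50*3 + 2*50 = 250 and would be disconnected. The profile is assigned to a user as usual:
SQL> alter user user1 profile lim_comp_cpu_conn;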
The parameters PRIVATE_SGA and LOGICAL_READS_PER_SESSION are not used in this particular
example, so unless they are specified otherwise in the profile definition, they default to whatever their
value is in the DEFAULT profile. The goal of using composite limits is to give users some leeway in the
types of queries or DML they run. On some days, they may run a lot of queries that perform numerous
calculations but don't access a lot of table rows; on other days, they may do a lot of full table scans
but don't stay connected very long. In these situations, we don't want to limit a user by a single
parameter, but instead by total resource usage weighted by the availability of each resource on the
server.
Profiles are a means to limit the resources a user can use. We can set up limits on the system resources
used by setting up profiles with defined limits on resources. Profiles are very useful in large, complex
organizations with many users. They allow us to regulate the amount of resources used by each
database user by creating and assigning profiles to users.
The resource limits can be categorized into two parts:
1.) Kernel Limits
2.) Password Limits
1.) Kernel Limits : Kernel Limits covers the following options to limit kernel resources.


 COMPOSITE_LIMIT : Specifies the total resource cost for a session, expressed in service
units. Oracle Database calculates the total service units as a weighted sum of CPU_PER_SESSION,
CONNECT_TIME, LOGICAL_READS_PER_SESSION, and PRIVATE_SGA.
composite_limit <value | unlimited | default> e.g.;
SQL> alter profile P1 LIMIT composite_limit 5000000;
 CONNECT_TIME : Specifies the total elapsed time limit for a session, expressed in
minutes. connect_time <value | unlimited | default> e.g.;
SQL> alter profile P1 LIMIT connect_time 600;
 CPU_PER_CALL : Specifies the CPU time limit for a call (a parse, execute, or fetch), expressed
in hundredths of seconds. cpu_per_call <value | unlimited | default> e.g.;
SQL> alter profile P1 LIMIT cpu_per_call 3000;
 CPU_PER_SESSION : Specifies the CPU time limit for a session, expressed in hundredths of
seconds. cpu_per_session <value | unlimited | default> e.g.;
SQL> alter profile P1 LIMIT cpu_per_session UNLIMITED;
 IDLE_TIME : Specifies the permitted periods of continuous inactive time during a
session, expressed in minutes. Long-running queries and other operations are not subject to this
limit. idle_time <value | unlimited | default> e.g.;
SQL> alter profile P1 LIMIT idle_time 20;
 LOGICAL_READS_PER_CALL : Specifies the permitted number of data blocks read for a call
to process a SQL statement (a parse, execute, or fetch).
logical_reads_per_call <value | unlimited | default> e.g.;
SQL> ALTER PROFILE P1 LIMIT logical_reads_per_call 1000;
 LOGICAL_READS_PER_SESSION : Specifies the permitted number of data blocks read in a
session, including blocks read from memory and disk.
logical_reads_per_session <value | unlimited | default> e.g.;
SQL> ALTER PROFILE P1 LIMIT logical_reads_per_session UNLIMITED;
 PRIVATE_SGA : Specifies the amount of private space a session can allocate in the shared
pool of the system global area (SGA). private_sga <value | unlimited | default> Only valid with a
TP monitor. e.g.;
SQL> ALTER PROFILE P1 LIMIT private_sga 15K;
 SESSIONS_PER_USER : Specifies the number of concurrent sessions to which you want to
limit the user. sessions_per_user <value | unlimited | default> e.g.;
SQL> ALTER PROFILE P1 LIMIT sessions_per_user 1;

2.) Password Limits : Password Limits covers the following options to restrict
password resources. Parameters that set lengths of time are interpreted in numbers of days.


 FAILED_LOGIN_ATTEMPTS : Specifies the number of failed attempts to log in to the user
account before the account is locked. failed_login_attempts <value | unlimited | default> e.g.;
SQL> ALTER PROFILE P1 LIMIT failed_login_attempts 3;
-- to count failed login attempts, fire the below command
SQL> SELECT name, lcount FROM user$ WHERE lcount <> 0;
 PASSWORD_GRACE_TIME : The number of days during which a login is allowed but a
warning is issued. password_grace_time <value | unlimited | default> e.g.;
SQL> ALTER PROFILE P1 LIMIT password_grace_time 10;
 PASSWORD_LOCK_TIME : The number of days an account will be locked after the
specified number of consecutive failed login attempts defined by FAILED_LOGIN_ATTEMPTS.
password_lock_time <value | unlimited | default> e.g.;
SQL> ALTER PROFILE P1 LIMIT password_lock_time 30;
 PASSWORD_REUSE_MAX : The number of password changes required before a password
can be reused. password_reuse_max <value | unlimited | default> e.g.;
SQL> ALTER PROFILE P1 LIMIT password_reuse_max 0;
 PASSWORD_REUSE_TIME : The number of days before a password can be
reused. password_reuse_time <value | unlimited | default> e.g.;
SQL> ALTER PROFILE P1 LIMIT password_reuse_time 0;

 To view existing profiles, fire the below query:
SQL> select profile, resource_name, limit FROM dba_profiles order by profile, resource_name;
How to create the profile
SQL> CREATE PROFILE <profile_name> LIMIT
<profile_item_name> <value>
<profile_item_name> <value>
...;
e.g.;
SQL> create profile P1 limit
password_life_time 90
password_grace_time 15
password_reuse_time 0
password_reuse_max 0
failed_login_attempts 4
password_lock_time 2
cpu_per_call 2
private_sga 500K
logical_reads_per_call 1000;
How to assign a profile to a user : We can assign the profile to a user while creating the user or by
altering the user. Here is a demo assigning a profile to a user. Say we have profile P1 and we assign it
to user ABC.
SQL> create user abc identified by abc
default tablespace users



quota unlimited on users
profile P1;
or
SQL> alter user abc identified by abc profile P1;
DATA PUMP ARCHITECTURE IN ORACLE 10G
Oracle Data Pump was written from the ground up with an architecture designed to produce high
performance with maximum flexibility. Understanding the architecture of Data Pump will help us to
take advantage of its speed and features.
Data Pump Architecture :

Master Table :
At the heart of every Data Pump operation is the master table. This is a table created in the schema of
the user running a Data Pump job. It is a directory that maintains all details about the job: the
current state of every object being exported or imported, the locations of those objects in the dumpfile
set, the user-supplied parameters for the job, the status of every worker process, the current set of
dump files, restart information, and so on. During a file-based export job, the master table is built
during execution and written to the dumpfile set as the last step. Conversely, loading the master table
into the current user's schema is the first step of a file-based import operation, so that the master
table can be used to sequence the creation of all objects imported. The use of the master table is the
key to the ability of Data Pump to restart a job in the event of a planned or unplanned job stoppage.
Because it maintains the status of every object to be processed by the job, Data Pump knows which
objects were currently being worked on, and whether or not those objects were successfully
completed.

 Process Structure :
A Data Pump job comprises several processes. These processes are described in
the order of their creation.
Client Process : This is the process that makes calls to the Data Pump API. Oracle Database ships
four client utilities of this API. These have a very similar look and feel to the original exp and imp
clients, but have many more capabilities. Because Data Pump is integrated into Oracle Database, a
client is not required once a job is underway. Multiple clients may attach and detach from a job as
necessary for monitoring and control.
Shadow Process : This is the standard Oracle shadow (or foreground) process created when a client
logs into Oracle Database. The shadow process services Data Pump API requests. Upon receipt of a
DBMS_DATAPUMP.OPEN request, the shadow process creates the job, which consists primarily of
creating the master table, the Advanced Queuing (AQ) queues used for communication among the
various processes, and the master control process. Once a job is running, the main task of the shadow
process consists of servicing GET_STATUS requests from the client. If the client detaches, the
shadow process also goes away.
Master Control Process (MCP) : As the name implies, the MCP controls the execution and
sequencing of a Data Pump job. There is one MCP per Data Pump job, maintaining the job state, job
description, restart, and dumpfile information in the master table. A job is divided into various phases
of metadata and data unloading or loading, and the MCP hands out work requests to the worker
processes appropriate for the current phase. The bulk of MCP processing is performed in this work
dispatch loop. The MCP also performs
central file management duties, maintaining the active dumpfile list and handing out file pieces as
requested by processes unloading data or metadata. An MCP has a process name of the form:
<instance>_DMnn_<pid>.
Worker Process : Upon receipt of a START_JOB request, the MCP creates worker processes as
needed, according to the value of the PARALLEL parameter. The worker processes perform the tasks
requested by the MCP (primarily unloading and loading of metadata and data), and maintain the
object rows that make up the bulk of the master table. As database objects are unloaded or loaded,
these rows are written and updated with the current status of these objects: pending, completed,
failed, and so on. The worker processes also
maintain type completion rows, which describe the type of object currently being worked on: tables,
indexes, views, and so on. These type completion rows are used during restart. A worker process has
a name of the form: DWnn.
Parallel Query (PQ) Process : If the External Tables data access method is chosen for loading or
unloading a table or partition, some parallel query processes are created by the worker process that
was given the load or unload assignment, and the worker process then acts as the query coordinator.
These are standard parallel execution slaves that exploit the parallel execution architecture of Oracle
Database, and enable intra-partition loading and unloading. The Data Pump public API is embodied in
the PL/SQL package DBMS_DATAPUMP.
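The processes described above can be observed while a job runs. For example, a common monitoring query from another session (not specific to any one job):
SQL> select owner_name, job_name, operation, state, degree
     from dba_datapump_jobs;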
Data Movement : Data Pump supports four methods of data movement, each of which has different
performance and functional characteristics. In descending order of speed, these four methods are:
 Data File Copying (transportable tablespaces)
 Direct Path load and unload
 External Tables
 Conventional Path
Data Pump will choose the best data movement method for a particular operation. It is also possible
for the user to specify an access method using command-line parameters. The fastest method of
moving data is by copying the database data files that contain the data without interpretation or
altering of the data. This is the method used to move data when transportable mode is specified at
export time. There are some restrictions on the use of data file copying. Some types of data, some
types of tables, and some types of table organization
cannot be moved with this method. For example, tables with encrypted columns cannot be moved
using this access method. In addition, the character sets must be identical on both the source and
target databases in order to use data file copying.
Direct path and external tables are the two main data access methods provided by
Oracle Database 11g. The direct path access method is the faster of the two, but does not
support intra-partition parallelism. The external tables access method does support this function, and
therefore may be chosen to load or unload a very large table or partition. Each access method also
has certain restrictions regarding the use of the other. For example, a table being loaded with active
referential constraints or global indexes cannot be loaded using the direct path access method. A table
with a column of data type LONG cannot be loaded with the external
tables access method. In most cases, you need not be concerned about choosing an access method;
the Data Pump job will make the correct choice based on an array of job characteristics. Both methods
write to the dumpfile set in a compact, binary stream format that is approximately 15 percent smaller
than the original exp data representation.
When neither direct path nor external tables can handle the data to be imported, Data Pump uses a
method called conventional path. For example, a table that contains an encrypted column and a LONG
column would be imported using conventional path because direct path cannot be used to import
encrypted columns and external tables cannot be used to import LONG columns. Data loading with the
conventional access method is much slower than the direct path and external tables methods. So, the
Data Pump uses this method only when it has no other choice.
Metadata Movement : The Metadata API (DBMS_METADATA) is used by worker processes for all
metadata unloading and loading. Unlike the original exp function (which stored object definitions as
SQL DDL), the Metadata API extracts object definitions from the database and writes them to the
dumpfile set as XML documents. This allows great flexibility to apply Extensible Stylesheet Language
Transformations (XSLT) when creating the DDL at import time. For example, an object's ownership,
storage characteristics, and tablespace residence can be changed easily during import. This robust
XML might take up more dumpfile space than the old style SQL
DDL, but it provides more flexibility and features. In addition, the COMPRESSION parameter can be
used to decrease the size of metadata written during a Data Pump export job.
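For example, a metadata-only export with compressed metadata might look like the following (file names are illustrative; in Oracle Database 10g, COMPRESSION accepts METADATA_ONLY or NONE):

```
C:\> expdp scott/tiger directory=datapump dumpfile=meta.dmp
       content=metadata_only compression=metadata_only
```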
Interprocess Communication : Advanced Queuing (AQ) is used for communicating among the
various Data Pump processes. Each Data Pump job has two queues:
- Command and control queue : All processes (except clients) subscribe to this queue. All
API commands, work requests and responses, file requests, and log messages are processed on this
queue.
- Status queue : Only shadow processes subscribe to read from this queue. It is used to
receive work-in-progress and error messages queued by the MCP. The MCP is the only writer to this
queue.
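The queues themselves are internal, but the jobs they serve can be observed from the data dictionary. For example, a query such as the following lists active Data Pump jobs and their state:

```sql
SQL> SELECT owner_name, job_name, operation, state
     FROM   dba_datapump_jobs;
```

The DBA_DATAPUMP_SESSIONS view similarly shows the sessions (MCP, workers, shadows) attached to each job.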
File Management : The file manager is distributed across several parts of the Data Pump job. The
actual creation of new files and allocation of file segments is handled centrally within the MCP.
However, each worker and parallel query process makes local process requests to the file manager to
allocate space, read a file chunk, write to a buffer, or update progress statistics. The local file manager
determines if the request can be handled locally and if not, forwards it to the MCP using the command
and control queue. Reading file chunks and updating file statistics in the master table are handled
locally. Writing to a buffer is typically handled locally, but may result in a request to the MCP for more
file space.



Directory Management : Oracle background server processes handle all dumpfile set I/O,
not the user running the job. This presents a security dilemma because oracle is typically a privileged
account. Therefore, all directory specifications are made using Oracle directory objects with read/write
grants established by the DBA.
For example, the DBA may set up a directory as follows:
Step 1 : Create Directory
SQL> CREATE DIRECTORY datapump AS 'D:\dpump\';
Step 2 : Grant privileges to user
SQL> Grant read, write on directory datapump to scott;
Then scott can specify a dump file on the expdp command line as:
C:\> expdp scott/tiger logfile=hr_emp_log.log dumpfile=hr_emp_tab.dmp tables=hr.employees directory=datapump
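Before running the export, the DBA can confirm the directory object and the operating system path it maps to (the object name here matches the example above):

```sql
SQL> SELECT directory_name, directory_path
     FROM   dba_directories
     WHERE  directory_name = 'DATAPUMP';
```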

ORACLE_HOME AND ORACLE_BASE


ORACLE_HOME specifies the directory containing the Oracle software for a given release.
It corresponds to the environment in which Oracle Database products run. This environment includes
the location of installed product files, the PATH variable pointing to the binary files of installed
products, registry entries, net service names, and program groups. The Optimal Flexible Architecture
(OFA) recommended value is $ORACLE_BASE/product/release/db_1.
For example :
/u01/app/oracle/product/10.2.0/db_1.
ORACLE_BASE specifies the directory at the top of the Oracle software and administrative file
structure. The value recommended for an OFA configuration is software_mount_point/app/oracle.
For example: /u01/app/oracle.
If we are not using an OFA-compliant system, then we do not have to set ORACLE_BASE, but it is
highly recommended that we set it. We can find ORACLE_HOME in the following ways :
Oracle 9i/10g/11g :
SQL> SELECT NVL(SUBSTR(file_spec, 1, INSTR(file_spec, '\', -1, 2) - 1),
            SUBSTR(file_spec, 1, INSTR(file_spec, '/', -1, 2) - 1)) AS folder
     FROM   dba_libraries
     WHERE  library_name = 'DBMS_SUMADV_LIB';
Output :
FOLDER
------------------------------------
C:\app\neerajs\product\11.2.0\dbhome_1

Oracle 10g
SQL> VAR OHM VARCHAR2(100);
SQL> EXEC DBMS_SYSTEM.GET_ENV('ORACLE_HOME', :OHM);
SQL> PRINT OHM;

Linux/Unix : echo $ORACLE_HOME


If the environment variable is not set, echo $ORACLE_HOME prints nothing. In that case, the command
ps -ef | grep tns lists the listener process, and its command line shows the full path of the
Oracle home.


IDENTIFY ORACLE S/W RELEASE


To understand the release nomenclature used by Oracle, examine the following example of an Oracle
Database server labeled "Release 10.1.0.1.0".
Because Oracle Database continues to evolve and can require maintenance, Oracle periodically produces
new releases. Not all customers initially subscribe to a new release or require specific maintenance for
their existing release. As a result, multiple releases of the product exist simultaneously. As many as
five numbers may be required to fully identify a release. The significance of these numbers is described
below.
Release Number Format : major.maintenance.appserver.component.platform (for example, 10.1.0.1.0)

Note:
Starting with release 9.2, maintenance releases of Oracle Database are denoted by a change to the
second digit of a release number. In previous releases, the third digit indicated a particular
maintenance release.
Major Database Release Number : The first digit is the most general identifier. It represents a
major new version of the software that contains significant new functionality.
Database Maintenance Release Number : The second digit represents a maintenance release
level. Some new features may also be included.
Application Server Release Number : The third digit reflects the release level of the Oracle
Application Server (OracleAS) .
Component-Specific Release Number : The fourth digit identifies a release level specific to a
component. Different components can have different numbers in this position depending upon, for
example, component patch sets or interim releases.
Platform-Specific Release Number : The fifth digit identifies a platform-specific release. Usually
this is a patch set. When different platforms require the equivalent patch set, this digit will be the
same across the affected platforms.
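As an illustrative way to split a release string into these five positions, REGEXP_SUBSTR (available from Oracle Database 10g onward) can pick out each dot-separated field:

```sql
SQL> SELECT REGEXP_SUBSTR('10.1.0.1.0', '[^.]+', 1, 1) AS major,
            REGEXP_SUBSTR('10.1.0.1.0', '[^.]+', 1, 2) AS maintenance,
            REGEXP_SUBSTR('10.1.0.1.0', '[^.]+', 1, 3) AS appserver,
            REGEXP_SUBSTR('10.1.0.1.0', '[^.]+', 1, 4) AS component,
            REGEXP_SUBSTR('10.1.0.1.0', '[^.]+', 1, 5) AS platform
     FROM   dual;
```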
Checking The Current Release Number : To identify the release of Oracle Database that is
currently installed and to see the release levels of other database components we are using, query the
data dictionary view product_component_version. A sample query follows. (We can also query the
v$version view to see component-level information.) Other product release levels may increment
independent of the database server.
SQL> select * from product_component_version;
PRODUCT                                  VERSION      STATUS
---------------------------------------- ------------ ------------
NLSRTL                                   10.2.0.1.0   Production
Oracle Database 10g Enterprise Edition   10.2.0.1.0   Production
PL/SQL                                   10.2.0.1.0   Production
When reporting problems with the software to Oracle, it is important to include the results of this
query.
