
KANNA TECHNOLOGIES

OPERATING SYSTEM FUNCTIONALITY

[Diagram: USER <-> RAM <-> HARD DISK]

1. Whenever a user sends a request, a primary search for the data will be done in RAM. If the
information is available, it will be given to the user
2. Otherwise, a secondary search will be done in the hard disk and a copy of that info will be placed in
RAM before giving it to the user
3. The request and response between RAM and hard disk is called an I/O operation
4. Files larger than the available RAM will be fitted into RAM using swapping (flushing old data using the
Least Recently Used algorithm)
Questions:
1. Why copy data to RAM?
a. It benefits the next users who request the same data. Accessing
information from memory is always faster than accessing it from disk
2. What happens if the data size is more than the RAM size?
a. Data will be split based on the RAM size and swapping will take place (in Windows,
this is called paging)
3. How will the data be managed in RAM?
a. Using the Least Recently Used (LRU) algorithm

Note: The primary goal of a DBA is to reduce response time, thereby increasing performance, and also
to avoid I/O (all these 3 are interlinked)
Note: Any request and response between memory and disk is called I/O

ORACLE DATABASE FUNCTIONALITY

[Diagram: USER <-> INSTANCE <-> DATABASE]

1. The functionality of the Oracle database is similar to operating system functionality

2. When a request is placed by a user, a primary search will be done in the instance. If the info is available, it
will be given to the user. Otherwise, a secondary search will be done in the database and a copy will
be placed in the instance and then that info will be given to the user
3. The instance also follows the LRU algorithm.
4. The request and response between an instance and database is also I/O

Instance – it is the way through which users can connect to the database

Database – it is the location where the users' data is stored

ORACLE 10g DATABASE ARCHITECTURE
[Diagram: Oracle 10g architecture – a user process connects to a server process (with its PGA). The SGA contains the shared pool (library cache, data dictionary cache), the database buffer cache (LRU list with MRU and LRU ends, write list), the log buffer cache, the large pool, the java pool and the stream pool. Background processes: SMON, PMON, DBWRn, LGWR, CKPT and ARCHn. The database consists of data files, redolog files and control files, alongside the parameter file, password file and archived redolog files]
INTERNALS OF USER CONNECTIVITY

1. Whenever a user starts an application, a user process will be created on the client side. E.g.: the
sqlplusw.exe process will be started when a user clicks on the sqlplus executable on a Windows
operating system
2. This user process will send a request to establish a connection to the server by providing login
credentials (sometimes even a host string also)
3. On the server side, the listener service will accept all incoming connections and will hand over
user information (like username, password, IP address, network etc.) to a background process
called PMON (process monitor)
4. PMON will then perform the authentication of the user using base tables. For this it will do a
primary search in the data dictionary cache and, if a copy of the base table is not available there, it
will copy it from the database
5. Once authenticated, the user will receive an acknowledgement statement. This can be either a
successful / unsuccessful message
6. If the connection is successful, PMON will create a server process and memory will be allocated to that
server process, which is called the PGA (private global area)
7. The server process is the one which will do work on behalf of the user process

BASE TABLES
1. Base tables store the information that is helpful for database functionality. This info is also called
dictionary information
2. Base tables will be in the form of XXX$ (i.e. name suffixed with a $ sign) and will reside in the
SYSTEM tablespace
3. Information in base tables will be in cryptic format and because of this we can access but cannot
understand the data inside them
4. An attempt to modify base tables (performing DML or DDL operations) may lead to database
corruption. Only Oracle processes have the authority to modify them
5. Base tables will be created at the time of database creation using the SQL.BSQ script

VIEWS ON BASE TABLES


1. Oracle provides 2 types of views to access the information inside base tables
a. Data dictionary views – which will be in the format of dba_XXX (name prefixed with
the dba_ keyword) and provide permanent info of the database
b. Dynamic performance views – which will be in the format of v$XXX (name prefixed with
the v$ sign) and provide ongoing (current) actions of the database
2. These views will be created after database creation by executing the CATALOG.SQL script
3. All procedures and packages helpful for the DBA will be created using the CATPROC.SQL script
4. catalog.sql and catproc.sql scripts should be run after database creation if the database is created
manually. While creating a database using DBCA, Oracle will execute them automatically
5. These two scripts reside in the ORACLE_HOME/rdbms/admin path
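For illustration, one query of each view type (the views and columns are standard; the output naturally depends on the database):
# Data dictionary view – permanent information
SQL> select username, default_tablespace from dba_users;
# Dynamic performance view – current information
SQL> select instance_name, status from v$instance;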

PHASES OF SQL EXECUTION


1. Any SQL statement will undergo the following phases to get executed
a. PARSING : This phase will perform the following actions
i. Syntax checking of the SQL statement
ii. Semantic checking of the statement i.e. checking for the privileges using base
tables
iii. Dividing the statement into literals
b. EXECUTION : This phase will perform the following actions
i. Converting the statement into ASCII format

ii. Compiling the statement
iii. Running or executing the statement
c. FETCH : Data will be retrieved in this phase
Note: For a PL/SQL program, BINDING will happen after the PARSING phase (so it has 4 phases to go through)

SELECT STATEMENT PROCESSING


1. The server process will receive the statement sent by the user process on the server side and will hand it
over to the library cache of the shared pool
2. The 1st phase of SQL execution i.e. parsing will be done in the library cache
3. Then, the OPTIMIZER (brain of the Oracle SQL engine) will generate many execution plans, but chooses
the best one based on time & cost (time – response time, cost – CPU resource utilization)
4. The server process will send the parsed statement with its execution plan to the PGA and the 2nd phase i.e.
EXECUTION will be done there
5. After execution, the server process will start searching for the data from the LRU end of the LRU list and this
search will continue until it finds the data or reaches the MRU end. If the data is found, it will be given to the
user. If no data is found, it means the data is not there in the database buffer cache
6. In such cases, the server process will copy the data from the datafiles to the MRU end of the LRU list of the database
buffer cache
7. From the MRU end, blocks will again be copied to the PGA for filtering the required rows and then the result will be
given to the user (displayed on the user's console)
Note: the server process will not start searching from the MRU end because there is a chance of missing
the data by the time it reaches the LRU end of the search
Note: for statements issued a second time, the parsing and fetch phases are skipped, subject to the
availability of the data and the parsed statement in the instance
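As a small illustration of the execution plan chosen by the optimizer (emp and empno are just sample names, not objects from these notes), the plan can be displayed like this:
SQL> explain plan for select * from emp where empno = 7839;
SQL> select * from table(dbms_xplan.display);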

[Diagram: an INSTANCE consists of memory structures and background processes; a DATABASE consists of logical structures and physical structures]
LOGICAL STRUCTURES OF DATABASE
1. The following are the logical structures of the database and will be helpful in easy manageability of
the database
a. TABLESPACE – an alias name for a group of datafiles (or) a group of segments (or) a space
where tables reside
b. SEGMENT – a group of extents (or) an object that occupies space
c. EXTENT – a group of Oracle data blocks (or) the memory unit allocated to the object
d. ORACLE DATA BLOCK – the basic unit of data storage (or) a group of operating system blocks
2. The following tablespaces are mandatory in a 10g database
a. SYSTEM – stores base tables (dictionary information)
b. SYSAUX – auxiliary tablespace to SYSTEM which also stores base tables required for
reporting purposes
c. TEMP – used for performing sort operations
d. UNDO – used to store before images, helpful for rollback of transactions or instance
recovery
Note: Oracle 9i should have all the above tablespaces except SYSAUX. SYSAUX was introduced in 10g to
avoid burden on the SYSTEM tablespace


DML STATEMENT PROCESSING


1. The server process performs parsing in the library cache by taking the statement information. The optimizer
will generate the best execution plan based on time & cost
2. By following the execution plan, the statement will get executed in the PGA
3. After execution, the server process will search for the data in the LRU list. If it exists, it will copy an undo
block to the LRU list. If the data is not found, then it will copy both the data block and an undo block to the LRU list.
4. From there those blocks will be copied to the PGA where modifications will be done, by which redo
entries will be generated in the PGA; these are copied to the redolog buffer cache by the server process
Note: A single atomic change to the database is called a redo entry or redo record or change
vector. E.g.: if 100 rows are modified, then we will have 200 redo entries
5. Modification is done by copying the previous image from the data block to the undo block, and the new value
will be inserted into the data block, thus making both the blocks DIRTY
6. The dirty blocks will be moved to the write list, from where DBWRn will write them to the corresponding
datafiles. But before DBWRn writes, LGWR writes the content of the log buffer cache to the redolog files
Note : LGWR writing before DBWRn is called the WRITE-AHEAD protocol

DDL STATEMENT PROCESSING
1. DDL statement processing is the same as DML processing, as internally all DDL are DML statements
to the base tables
2. For every DDL statement, base tables will get modified with update/delete/insert statements.
For this reason, in the case of DDL also, undo will be generated

COMPONENTS OF ORACLE DATABASE ARCHITECTURE


USER PROCESS – It is the process which places requests from the client side and will be created when a user
starts any application
SERVER PROCESS – It is the process which will do work on behalf of the user process on the server side
PRIVATE GLOBAL AREA (PGA)
1. It is the memory area allocated to the server process to perform execution of the SQL statement &
to store session information
2. The size of memory allocated will be defined using PGA_AGGREGATE_TARGET
3. Before 9i, the PGA was configured using
a. WORK_AREA_SIZE
b. BITMAP_WORK_AREA
c. SORT_AREA_SIZE
d. HASH_AREA_SIZE etc.
4. Sorting will take place in the PGA if the data is small in size. This is called an in-memory sort.
5. If the data size is larger than the sort area size of the PGA, Oracle will use both the PGA and the TEMP tablespace,
which needs a number of I/Os, and automatically database performance will get degraded.
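A quick illustration of checking and sizing the PGA (the 500M figure is an arbitrary example, not a recommendation):
SQL> show parameter pga_aggregate_target
SQL> alter system set pga_aggregate_target=500M scope=both;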


ORACLE INSTANCE – It is the way through which users will access / modify data in the database. It is a
combination of memory structures and background processes
SHARED GLOBAL AREA (SGA) – It is the memory area which contains several memory caches helpful in
reading and writing data
SHARED POOL
1. The shared pool contains the following components
a. Library cache – it contains shared SQL & PL/SQL statements
b. Data dictionary cache – it contains dictionary information in the form of rows, hence
also called the row cache
2. The size of the shared pool is defined using SHARED_POOL_SIZE
DATABASE BUFFER CACHE
1. It is the memory area where a copy of the data is placed in the LRU list
2. The status of a block in the DBC will be any of the following
a. UNUSED – a block which has never been used
b. FREE – a block which was used already but is currently free
c. PINNED – a block currently in use

d. DIRTY – a block which got modified
3. The DBC contains the LRU list and the WRITE list, which help in separating modified blocks from other blocks
4. The size of the DBC is defined using DB_CACHE_SIZE or DB_BLOCK_BUFFERS
LOG BUFFER CACHE – It is the memory area where a copy of the redo entries is maintained and its size is
defined by LOG_BUFFER
Note: The LBC should be allotted the smallest size of any memory component in the SGA
LARGE POOL
1. The large pool will be used efficiently at the time of RMAN backup
2. The large pool can dedicate some of its memory to the shared pool and get it back whenever the shared pool
is observing less free space
3. Size is defined using LARGE_POOL_SIZE
JAVA POOL – It is the memory area used to run Java executables (like the JDBC driver) and its size is defined using
JAVA_POOL_SIZE
STREAM POOL
1. It is the memory area used when replicating a database using Oracle Streams
2. This pool was introduced in 10g and its size can be defined using STREAMS_POOL_SIZE
3. If the stream pool is not defined and streams are used, then 10% of memory from the shared pool will
be used. This may affect database performance
The below diagrams explain SMON instance recovery in detail
1. We know that LGWR will write redo entries into redolog files. But if more and more
redo entries are generated (for huge transactions), the redolog file size increases and even terabytes of
storage would not be sufficient.
2. To overcome this, Oracle designed its architecture so that LGWR will write into 2 or more
redolog files in a cyclic order (shown in the below diagram)
3. When doing this, certain events will be triggered, which are listed below


[Diagram: LGWR cycling between redolog member 1 and redolog member 2]
LOG SWITCH
LGWR moving from one redolog file to another is called a LOG SWITCH. At the time of a log switch, the
following actions take place:
A checkpoint event will occur – this tells that dirty blocks should be made permanent to the datafiles.
(E.g.: it's just like the automatic saving of an email draft while composing in Gmail)
The CKPT process will update the latest SCN to the datafile headers and controlfiles by taking the info
from the redolog files
DBWRn will write the corresponding dirty blocks from the write list to the datafiles
The ARCHn process will generate archives (copies of the online redolog files) only if the database is in
archivelog mode
Note: The checkpoint event not only occurs at log switch. It can occur at repeated intervals and this is
decided by the parameter LOG_CHECKPOINT_INTERVAL (till 8i) and FAST_START_MTTR_TARGET (from 9i)

SMON – It is the background process responsible for the following actions

1. Instance recovery – this will be done in the following phases

a. Roll forward – compares the SCN between the redolog files and the datafile headers and will
make sure committed data is written to the datafiles
b. Opens the database for user access
c. Rolls back uncommitted transactions with the help of the undo tablespace
2. It will coalesce the tablespaces which are defined with automatic segment space management
3. It will release the temporary segments occupied by transactions when they are completed (a
more detailed post is available @ http://pavandba.wordpress.com/2010/04/20/how-temporarytablespace-works/ )

SMON's instance recovery with examples


EXAMPLE : 1

[Diagram: redolog file 1 contains transactions 1, 2 and 3; redolog file 2 contains transaction 4; the datafiles contain the data of 1, 2 and part of 3; Controlfile – SCN 3]

In the above diagram, assume that transactions 1, 2 and 3 are committed and 4 is going on. Also,
as a checkpoint occurred at the log switch, the complete data of 1 and 2, and also part of 3, was written to the
datafiles

Assume LGWR is writing to the second file for transaction 4 and an instance crash occurred.
During recovery, SMON will start comparing the SCN between the datafile headers and the redolog file.
Also it will check for the commit point.
In the above example, 1 & 2 are written and committed. So there is nothing to recover. But for 3,
only half the data is written. To write the other half, SMON will initiate DBWRn. But DBWRn will be
unable to do that as the dirty blocks were cleaned out from the write list (due to the instance restart)
Now DBWRn will take the help of the server process, which will regenerate the dirty blocks with the
help of the redo entries that were already written to the redolog files

EXAMPLE : 2

[Diagram: redolog file 1 contains transactions 1, 2 and 3 (3 not committed); the datafiles contain part of 3's data; Controlfile – SCN 3]

In the above example, transaction 3 is not yet committed, but because a log switch occurred and the
checkpoint event was triggered, part of its data was written to the datafiles
Assume an instance crash occurred and SMON is performing instance recovery

SMON will start comparing SCNs as usual and when it comes to 3 it identifies that the data is written to the
datafiles, but 3 is actually not committed. So this data needs to be reverted. Again it will ask
DBWRn to take up this job
DBWRn in turn takes help from the server process, which will generate blocks with the old values with
the help of the undo tablespace
Note : For roll forward, redo entries from the redolog files will be used, whereas for rollback, before images
from the undo tablespace will be used

PMON – It is responsible for the following actions

1. releases the locks and resources held by abruptly terminated sessions
Note: whenever any user performs DML transactions on a table, Oracle will apply a lock. This is to
maintain read consistency
2. authenticates the user
3. registers the listener information with the instance
4. restarts dead dispatchers at the time of instance recovery in case of shared server architecture
DBWRn – It is responsible for writing dirty buffers from the write list to the datafiles and it will do this in the
following situations
1. after LGWR writes
2. when the write list reaches a threshold value
3. at every checkpoint
4. when a tablespace is taken offline or placed in read-only mode
5. when the database is shut down cleanly
LGWR – It is responsible for writing redo entries from the log buffer cache to the redolog files and it will perform
this in the following situations
1. before DBWRn writes
2. whenever a commit occurs
3. when the log buffer cache is 1/3rd full

4. when 1 MB of redo is generated
5. every 3 sec
CKPT – it will update the latest SCN to the control files and datafile headers by taking information from the
redolog files. This will happen at every log switch
ARCHn – It will generate offline copies of the redolog files (archives) in the specified location. This will be done only if the database is
in archivelog mode

New Background processes in 10g


MMAN
======
The Automatic Shared Memory Management feature uses a new background process named Memory
Manager (MMAN). MMAN serves as the SGA Memory Broker and coordinates the sizing of the memory
components. This will be used when we are using ASMM feature in 10g.
RVWR
======
Recovery Writer (RVWR) is a new background process which is responsible for writing
flashback logs, which store pre-image(s) of data blocks.
CTWR
=====
This is a new process, Change Tracking Writer (CTWR), which works with the new block change tracking
feature in 10g for fast RMAN incremental backups.
MMNL
=====
The Memory Monitor Light (MMNL) process is a new process in 10g which works with the Automatic
Workload Repository new features (AWR) to write out full statistics buffers to disk as needed.

MMON
======
The Manageability Monitor (MMON) process was introduced in 10g and is associated with the
Automatic Workload Repository new features used for automatic problem detection and self-tuning.
MMON writes out the required statistics for AWR on a scheduled basis.

CJQn
=====
This is the Job Queue monitoring process which is initiated with the job_queue_processes parameter.

New Background processes in 11g


RCBG - this background process is responsible for processing data into the server result cache
DIAG - In 11g we have a single location for all the trace files, alert log and other diagnostic files. DIAG is
the process which performs diagnostic dumps and executes oradebug commands
DIA0 - responsible for hang detection and deadlock resolution
EMNC - Event Monitor Coordinator will coordinate event management and notification activity
FBDA - Flashback Data Archiver process is responsible for all flashback related actions in an 11g database
SMCo - Space management coordinator executes various space management tasks like space
reclaiming, allocation etc. It uses slave processes Wnnn whenever required

DATAFILES – the actual files where user data will be stored

REDOLOG FILES – files containing redo entries which are helpful in database recovery. To avoid space
constraints, Oracle will create two or more redolog files and LGWR will write into them in a cyclic order

[Diagram: LGWR cycling between redo log file 1 and redo log file 2]

CONTROL FILES – These files store crucial database information like
1. database name and creation timestamp
2. latest SCN
3. locations and sizes of redolog files and datafiles
4. parameters that define the size of the controlfile
ARCHIVED REDOLOG FILES – These files will be created by the ARCHn process if archivelog mode is enabled.
The size of an archive will be equal to or less than that of the redolog files
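As a quick illustration (standard dynamic views; the output depends on the database), the controlfile copies listed above and the redolog groups can be checked with:
SQL> select name from v$controlfile;
SQL> select group#, sequence#, status from v$log;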
PARAMETER FILE
1. This file contains parameters that will define the characteristics of the database.
2. It is a text file and will be in the form of init<SID>.ora [SID – instance name] and resides in
ORACLE_HOME/dbs (on unix) and ORACLE_HOME/database (on windows)
Example: if SID is TEST, then the file name will be inittest.ora


3. Parameters are divided into two types
a. Static parameters – the value of these parameters cannot be changed while the
database is up and running
b. Dynamic parameters – the value of these parameters can be changed even when the DB is
up and running
Note: The instance name can be different from the database name. This is to provide security
SERVER PARAMETER FILE (SPFILE)
1. This file is a binary copy of the pfile which provides different scopes (options) for changing parameter
values
a. Scope=spfile -> changes the value from the next startup
b. Scope=memory -> changes the value immediately (the value will revert on the next startup)
c. Scope=both -> changes the value immediately and also makes it permanent across the next
startup
2. The spfile will be in the form of spfile<SID>.ora and resides in ORACLE_HOME/dbs (on unix) and
ORACLE_HOME/database (on windows)
Example: if SID is test, then the spfile name will be spfiletest.ora

3. We can create spfile from pfile or pfile from spfile using below commands
a. SQL> create spfile from pfile;
b. SQL> create pfile from spfile;
4. Order of precedence at database startup: spfile<SID>.ora is used first, then init<SID>.ora

Example: a parameter currently set to 200 is changed to 300 using the different scopes

Scope  | Before shutdown | After shutdown
Memory | 300             | 200
Spfile | 200             | 300
Both   | 300             | 300
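For illustration (the parameter and value are arbitrary examples), the three scopes are used like this:
SQL> alter system set db_cache_size=200M scope=memory;
SQL> alter system set db_cache_size=200M scope=spfile;
SQL> alter system set db_cache_size=200M scope=both;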

ASMM (AUTOMATIC SHARED MEMORY MANAGEMENT)


1. In 9i, the SGA was made dynamic i.e. the sizes of SGA components can be changed without
shutting down the database (not possible in 8i)
2. Many times DBAs faced problems in calculating correct memory sizes, which led to performance
problems at the instance level, and DBAs were more involved in handling instance tuning issues. To
avoid this, Oracle 10g introduced ASMM
3. The following memory components are automatically sized when using ASMM
a. shared pool
b. database buffer cache
c. large pool
d. java pool
e. stream pool
Note: LOG_BUFFER will not be automatically sized in any version. It's a static parameter

4. Using ASMM, we can define the total memory for the SGA and Oracle will decide how much to distribute
to each cache. This is possible by setting the SGA_TARGET parameter (new in 10g)
Note: to enable ASMM, we should define STATISTICS_LEVEL = TYPICAL (default) or ALL
5. The maximum size for the SGA is defined by SGA_MAX_SIZE. Depending on the transaction load, the SGA size will
vary between SGA_TARGET and SGA_MAX_SIZE
6. It's been observed that individual parameters are also defined in some 10g databases, which
means those values act as minimum values, the SGA_TARGET value acts as the medium and
SGA_MAX_SIZE as the maximum value
7. Oracle 10g introduced the new background process MMAN in order to manage the memory for the SGA
components
Note: SGA size is not at all dependent on database size and will be calculated based on the transactions
hitting the database
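A minimal sketch of enabling ASMM (the sizes are placeholders, not recommendations; SGA_MAX_SIZE is static, so a restart applies it):
SQL> alter system set sga_max_size=800M scope=spfile;
SQL> alter system set sga_target=600M scope=spfile;
SQL> shutdown immediate
SQL> startup
SQL> show parameter statistics_level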

Memory parameters by version:

Oracle 8i            Oracle 9i              Oracle 10g             Oracle 11g
SORT_AREA_SIZE       PGA_AGGREGATE_TARGET   PGA_AGGREGATE_TARGET   MEMORY_TARGET
WORK_AREA_SIZE       SHARED_POOL_SIZE       SGA_TARGET             LOG_BUFFER
HASH_AREA_SIZE       DB_CACHE_SIZE          LOG_BUFFER
BITMAP_WORK_AREA     LARGE_POOL_SIZE
SHARED_POOL_SIZE     JAVA_POOL_SIZE
DB_CACHE_SIZE        LOG_BUFFER
LOG_BUFFER
LARGE_POOL_SIZE
JAVA_POOL_SIZE

ALERT LOG & OTHER TRACE FILES


1. The alert log is the monitoring file which records the following information, useful for the DBA in diagnosing
errors in the database
a. All Oracle related errors
b. Every startup and shutdown timestamp

c. Non-default parameters
d. Archivelog generation information
e. Checkpoint information (optional) etc.
2. The alert log file only specifies the error message in brief, but it will point to a file which will have
more information about the error. These are called trace files
3. Trace files are of 3 types – background trace files, core trace files and user trace files
4. If any background process fails to perform, it will throw an error and a trace file will be generated
(called a background trace file) in the ORACLE_HOME/admin/SID/bdump location
5. For all operating system related errors with Oracle, trace files will be generated (called core
trace files) in the ORACLE_HOME/admin/SID/cdump location
6. For all user related errors, trace files (called user trace files) will be generated in the
ORACLE_HOME/admin/SID/udump location
7. The default location of these files can be changed by defining the following parameters
a. BACKGROUND_DUMP_DEST
b. CORE_DUMP_DEST
c. USER_DUMP_DEST
8. The above 3 parameters are replaced with a single parameter DIAGNOSTIC_DEST in Oracle 11g.
The default trace location then becomes <DIAGNOSTIC_DEST>/diag/rdbms/<dbname>/<SID>/trace
9. Oracle 11g contains two versions of the alert log file. One is in text format, which resides in the above
location, and the other one is in XML format
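To see where these files actually live on a given database (illustrative only; the parameters and view shown are standard):
SQL> show parameter background_dump_dest
SQL> show parameter diagnostic_dest
SQL> select name, value from v$diag_info;    -- 11g: lists alert log and trace directories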

OPTIMAL FLEXIBLE ARCHITECTURE (OFA)


1. Reading and writing to a hard disk will be done with the help of I/O header
2. For a hard disk there will be only one I/O header exists
3. As per OFA, oracle recommends to store all the physical files separately in different hard drives
4. In such case, different I/O headers will be working for different hard drives thereby increasing
database performance
5. If not possible, at least we should separate redolog files, archivelog files and controlfiles from
datafiles

DATABASE OPERATING MODES
STARTUP PHASES

NOMOUNT – starts the instance (reads SPFILE or PFILE)

MOUNT – maintenance phase (reads CONTROL FILES)

OPEN – available for user access (opens DATAFILES & REDOLOG FILES)

Commands:

SQL> startup

or

SQL> startup nomount


SQL> alter database mount;
SQL> alter database open;

Note: A database can be started without issuing shutdown command using SQL> startup force
SHUTDOWN TYPES

Mode          | Can new users connect? | Can existing users issue new transactions? | Will the current transactions by existing users complete? | Whether checkpoint occurs?
NORMAL        | No                     | Yes                                        | Yes                                                       | Yes
TRANSACTIONAL | No                     | No                                         | Yes                                                       | Yes
IMMEDIATE     | No                     | No                                         | No (rolled back)                                          | Yes
ABORT         | No                     | No                                         | No (rolled back at next startup)                          | No
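The corresponding SQL*Plus commands, for reference:
SQL> shutdown normal
SQL> shutdown transactional
SQL> shutdown immediate
SQL> shutdown abort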

SHARED SERVER ARCHITECTURE
[Diagram: Shared server architecture – user processes Up1, Up2 and Up3 send requests to a DISPATCHER, which places them in a request queue; shared server processes (using the UGA) pick them up, process them against the SGA, and place the results in a response queue, from where the dispatcher returns them to the corresponding users; the SGA, background processes (SMON, PMON, DBWRn, LGWR, CKPT, ARCHn) and database files are the same as in the dedicated server diagram]
1. Multiple user requests will be received by the dispatcher, which places them in the request
queue
2. Shared server processes will take the information from the request queue and it will be processed
inside the database
3. The results will be placed in the response queue, from where the dispatcher will send them to the
corresponding users
4. Instead of the PGA, statements will get executed in the UGA (user global area) in shared server
architecture
5. Shared server architecture can be enabled by specifying the following parameters (a small
example follows after the notes below)
a. DISPATCHERS
b. MAX_DISPATCHERS
c. SHARED_SERVERS
d. MAX_SHARED_SERVERS
e. CIRCUITS and MAX_CIRCUITS (optional)
6. This architecture should be enabled only if ORA-04030 or ORA-04031 errors are observed
frequently in the alert log file
7. To make shared server architecture effective, SERVER=SHARED should be mentioned in the
client TNSNAMES.ORA file
8. A single dispatcher can handle 20 user requests, whereas a single shared server process
can handle 16 requests concurrently
Note: SMON can have 16 slave processes and DBWRn can have 20 slave processes working
concurrently
Note: startup and shutdown are not possible if sysdba connects through a shared server
connection
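A minimal sketch of enabling shared server (the counts and protocol are arbitrary illustrative values):
SQL> alter system set shared_servers=5 scope=both;
SQL> alter system set dispatchers='(PROTOCOL=TCP)(DISPATCHERS=2)' scope=both;
SQL> show parameter shared_servers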

ORACLE 11g DATABASE ARCHITECTURE
[Diagram: Oracle 11g architecture – the same components as the 10g diagram (user process, server process with PGA, SGA with shared pool, database buffer cache, log buffer cache, large pool, java pool, stream pool, background processes SMON, PMON, DBWRn, LGWR, CKPT, ARCHn, and the database files), with one addition: the RESULT CACHE inside the SGA]
SELECT STATEMENT PROCESSING
1. The server process will receive the statement sent by the user process on the server side and will hand it
over to the library cache of the shared pool
2. The 1st phase of SQL execution i.e. parsing will be done in the library cache
3. Then, the OPTIMIZER (brain of the Oracle SQL engine) will generate many execution plans, but chooses
the best one based on time & cost (time – response time, cost – CPU resource utilization)
4. The server process will send the parsed statement with its execution plan to the PGA and the 2nd phase i.e.
EXECUTION will be done there
5. After execution, the server process will start searching for the data from the LRU end of the LRU list and this
search will continue until it finds the data or reaches the MRU end. If the data is found, it will be given to the
user. If no data is found, it means the data is not there in the database buffer cache
6. In such cases, the server process will copy the data from the datafiles to the MRU end of the LRU list of the database
buffer cache
7. From the MRU end, the rows pertaining to the requested table will be filtered and placed in the SERVER
RESULT CACHE along with the execution plan id and then the result will be given to the user (displayed on the user's
console)
Note : for statements issued a second time, the server process will get the parsed tree and plan id from the
library cache and it goes straight to the server result cache and compares the plan id. If the plan id
matches, the corresponding rows will be given to the user. So, in this case, it skips all 3 phases of SQL
execution, by which response time is much faster than in a 10g database.

SERVER RESULT CACHE


1. It is a new component introduced in 11g.
2. Usage of the result cache is dependent on the parameters RESULT_CACHE_MODE and
RESULT_CACHE_MAX_SIZE
3. The possible values for RESULT_CACHE_MODE are MANUAL or FORCE. When set to MANUAL, the SQL
query should have the hint /*+ result_cache */. When using FORCE, all queries will use the result cache.
4. Even after setting it to FORCE, we can still prevent any query from using the result cache using the hint
/*+ no_result_cache */
5. Oracle recommends enabling the result cache only if the database is hit with a lot of statements
which are frequently repeated. So it must be enabled in an OLTP environment (see the example below)
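For illustration (emp and deptno are made-up sample names), a query that asks for its result to be cached, and one that opts out:
SQL> select /*+ result_cache */ deptno, count(*) from emp group by deptno;
SQL> select /*+ no_result_cache */ deptno, count(*) from emp group by deptno;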

6. If we specify the MEMORY_TARGET parameter, Oracle will allocate 0.25% of MEMORY_TARGET as the
result cache. If we specify SGA_TARGET (as in 10g), the result cache will be 0.5% of SGA_TARGET. If we use
individual parameters (like in 9i), the result cache will be 1% of the shared pool size
7. When any DML/DDL statements modify table data or structure, the data in the result cache becomes
invalid and needs to be processed again
Note: http://pavandba.wordpress.com/2010/07/15/how-result-cache-works/

Using the Result Cache


You can improve the response times of frequently executed SQL queries by using the result cache. The
result cache stores results of SQL queries and PL/SQL functions in a new component of the SGA called
the Result Cache Memory. The first time a repeatable query executes, the database caches its results.
On subsequent executions, the database simply fetches the results from the result cache instead of
executing the query again. The database manages the result cache. You can turn result caching on only
at the database level. If any of the objects that are part of a query are modified, the database invalidates
the cached query results. Ideal candidates for result caching are queries that access many rows to return
a few rows, as in many data warehousing solutions.
The result cache consists of two components, the SQL Query Result Cache that stores SQL query results
and the PL/SQL Function Result Cache that stores the values returned by PL/SQL functions, with both
components sharing the same infrastructure. I discuss the two components of the result cache in the
following sections.

Managing the Result Cache


The result cache is always enabled by default, and its size depends on the memory the database
allocates to the shared pool. You can change the memory allocated to the result cache by setting the
RESULT_CACHE_MAX_SIZE initialization parameter. This parameter can range from a value of zero to a
system-dependent maximum. You disable result caching by setting the parameter to zero, as shown
here:
SQL> ALTER SYSTEM SET result_cache_max_size=0;
Since result caching is enabled by default, it means that the RESULT_CACHE_MAX_SIZE parameter has a
positive default value as well, based on the size of the MEMORY_TARGET parameter (or the SGA_TARGET
parameter if you have that parameter instead). In addition to the
RESULT_CACHE_MAX_SIZE parameter, two other initialization parameters have a bearing on the
functioning of the result cache: the RESULT_CACHE_MAX_RESULT parameter specifies the maximum
amount of the result cache a single result can use. By default, a single cached result can occupy up to 5
percent of the result cache, and you can specify a percentage between 1 and 100. The
RESULT_CACHE_REMOTE_EXPIRATION parameter determines the length of time for which a cached
result that depends on remote objects is valid. By default, this parameter is set to zero, meaning you
aren't supposed to use the result cache for queries involving remote objects. The reason for this is that over
time remote objects could be modified, leading to invalid results in the cache.

Using the RESULT_CACHE and NO_RESULT_CACHE Hints


Using the RESULT_CACHE hint as a part of a query adds the ResultCache operator to a query's execution
plan. The ResultCache operator will search the result cache to see whether there's a stored result in
there for the query. It retrieves the result if it's already in the cache; otherwise, the ResultCache
operator will execute the query and store its results in the result cache. The NO_RESULT_CACHE hint
works the opposite way. If you add this hint to a query, it'll lead the ResultCache operator to bypass the
result cache and reexecute the query to get the results.
Note: The RESULT_CACHE and the NO_RESULT_CACHE hints always take precedence over the value you
set for the RESULT_CACHE_MODE initialization parameter.
You can use the following views to manage the result cache:
V$RESULT_CACHE_STATISTICS: Lists cache settings and memory usage statistics
V$RESULT_CACHE_OBJECTS: Lists all cached objects (both results and dependencies) and their attributes
V$RESULT_CACHE_DEPENDENCY: Lists the dependency information between the cached results and
dependencies
V$RESULT_CACHE_MEMORY: Lists all memory blocks and their statistics
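For example, after running a few /*+ result_cache */ queries, usage could be checked like this (standard views; the output shape depends on the database):
SQL> select name, value from v$result_cache_statistics;
SQL> select type, status, name from v$result_cache_objects where rownum <= 10;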

Restrictions on Using the SQL Query Result Cache
You can't cache results in the SQL Query Result Cache for the following objects:
1. Temporary tables
2. Dictionary tables
3. Nondeterministic PL/SQL functions
4. The currval and nextval pseudo functions
5. The SYSDATE, SYSTIMESTAMP, CURRENT_DATE, CURRENT_TIMESTAMP, LOCALTIMESTAMP,
USERENV, SYS_CONTEXT, and SYS_GUID functions
6. You also won't be able to cache subqueries, but you can use the RESULT_CACHE hint in an inline
view.

AUTOMATIC MEMORY MANAGEMENT


1. This is a new feature in 11g which enables the DBA to manage both SGA and PGA automatically by
setting the MEMORY_TARGET and MEMORY_MAX_TARGET parameters.
2. MEMORY_TARGET = SGA_TARGET + PGA_AGGREGATE_TARGET
3. MEMORY_TARGET is a dynamic parameter, so its value can be changed at any time, whereas
MEMORY_MAX_TARGET is static
4. We can check memory sufficiency and tune it by taking advice from AMM using
V$MEMORY_TARGET_ADVICE
5. We can check the memory allocation to SGA components by the
V$MEMORY_DYNAMIC_COMPONENTS view.
6. Also we can find current and previous resize operations through the
V$MEMORY_CURRENT_RESIZE_OPS and V$MEMORY_RESIZE_OPS views

Note: More about AMM – http://pavandba.wordpress.com/2010/07/21/automatic-memory-management-in-11g/
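A minimal sketch of switching an 11g database to AMM (the 1G figures are placeholders; MEMORY_MAX_TARGET is static, so a restart is needed; zeroing SGA_TARGET and PGA_AGGREGATE_TARGET lets Oracle manage them fully):
SQL> alter system set memory_max_target=1G scope=spfile;
SQL> alter system set memory_target=1G scope=spfile;
SQL> alter system set sga_target=0 scope=spfile;
SQL> alter system set pga_aggregate_target=0 scope=spfile;
SQL> shutdown immediate
SQL> startup
SQL> select * from v$memory_target_advice order by memory_size;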

How to set Environment Variables?
1. When we install Oracle software, sqlplus or any other Oracle commands will not work.
To make them work we need to set environment variables in the .bash_profile file
2. Below are the steps to do the same
[oracle@pc1 ~] $ more /etc/oraInst.loc
[oracle@pc1 ~] $ cd /u02/oraInventory/ContentsXML
[oracle@pc1 ~] $ more inventory.xml
From the above commands, we will get the Oracle home name and path. We need to set
the ORACLE_HOME path and bin location in the .bash_profile file as follows
[oracle@pc1 ~] $ vi .bash_profile
export ORACLE_HOME=/u02/ora11g
export PATH=$PATH:/usr/bin:$ORACLE_HOME/bin
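After editing, the profile has to be re-read in the current shell before sqlplus will work; a quick check (the ORACLE_SID value is just an example):
[oracle@pc1 ~] $ . ~/.bash_profile
[oracle@pc1 ~] $ export ORACLE_SID=prod
[oracle@pc1 ~] $ which sqlplus
[oracle@pc1 ~] $ sqlplus / as sysdba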

STEPS to create database manually


1. Copy any existing database pfile to a new name. If no database exists on this server, use the
pfile of another database which resides on another server
2. Open the pfile with the vi editor, make the necessary changes like changing the database name, dump
locations etc. and save it
3. Create the necessary directories as mentioned in the pfile
4. Copy the database creation script to the server and edit it to your need
5. Export the SID
$ export ORACLE_SID=dev
6. Start the instance in nomount phase using the pfile
SQL> startup nomount

7. Execute the create database script
SQL> @db.sql
8. Once the database is created, it will be opened automatically
9. Execute the catalog.sql and catproc.sql scripts
SQL> @$ORACLE_HOME/rdbms/admin/catalog.sql
SQL> @$ORACLE_HOME/rdbms/admin/catproc.sql
10. Finally add this database entry to the oratab file
Note: Sometimes we may get the error "Oracle Instance terminated. Disconnection forced". This is
due to the reason that the undo tablespace name mentioned in the pfile is different from the one
mentioned in the database creation script

MULTIPLEXING REDOLOG FILES


1. Redolog files are mainly used for recovering a database and also to ensure data commit
2. If a redolog file is lost, it will lead to data loss. To avoid this, we can maintain multiplexed copies
of redolog files in different locations. These copies are together called a redolog group and the
individual files are called redolog members
3. Oracle recommends maintaining a minimum of 2 redolog groups with a minimum of 2 members in each group
4. LGWR will write into the members of the same group in parallel only if ASYNC I/O is enabled at the OS level
5. Redolog files will have 3 states – CURRENT, ACTIVE and INACTIVE. These states always change
in a cyclic order
6. We cannot have different sizes for members in the same group, whereas we can have different
sizes for different groups, though it is not recommended


[Diagram: GROUP 1 with redolog member 1 and redolog member 2; GROUP 2 with redolog member 1 and redolog member 2]

7. The default size of a redolog member is 100MB in 9i and 50MB in 10g

8. In 8i, the LOG_CHECKPOINT_INTERVAL parameter setting specifies when a checkpoint
should occur, whereas from 9i the same can be achieved using FAST_START_MTTR_TARGET

COMMANDS
# To check redolog file members
SQL> select member from v$logfile;
# To check redolog group info,status and size
SQL> select group#,members,status,sum(bytes/1024/1024) from v$log
group by group#,members,status;

# To add a redolog file group
SQL> alter database add logfile group 4 ('/u02/prod/redo04a.log','/u02/prod/redo04b.log') size 50m;
# To add a redolog member
SQL> alter database add logfile member '/u02/prod/redo01b.log' to group 1;
# To drop a redolog group
SQL> alter database drop logfile group 4;
# To drop a redolog member
SQL> alter database drop logfile member '/u02/prod/redo01b.log';
Note: Even after we drop a logfile group or member, the file will still exist at OS level
Note: We cannot drop a member or a group which is in CURRENT status
# Reusing a member
SQL> alter database add logfile member '/u02/prod/redo04a.log' reuse to group 4;

# Steps to rename (or) relocate a redolog member

1. SQL> shutdown immediate;
2. SQL> ! cp /u02/prod/redo01.log /u02/prod/redo01a.log (if relocating, use the source
and destination paths)
3. SQL> startup mount
4. SQL> alter database rename file '/u02/prod/redo01.log' to '/u02/prod/redo01a.log';
The above command will make the server process update the controlfile with the new file
name
5. SQL> alter database open;
Note: We cannot resize a redolog member; instead we need to create a new group with the required
size and drop the old group
# Handling a corrupted redolog file
SQL> alter database clear logfile '/u02/prod/redo01a.log';
Or
SQL> alter database clear unarchived logfile '/u02/prod/redo01a.log';

MULTIPLEXING OF CONTROL FILES


1. The control file contains crucial database information and loss of this file will lead to loss of
important data about the database. So it is recommended to have multiplexed copies of the file in
different locations
2. If the control file is lost in 9i, the database may go for a forced shutdown, whereas the database will continue
to run if it is the 10g version
COMMANDS
# Steps to multiplex control file using spfile
1. SQL> show parameter spfile
2. SQL> show parameter control_files
3. SQL> alter system set
control_files='/u02/prod/control01.ctl','/u02/prod/control02.ctl','/u02/prod/control03.ctl','/u02/prod/control04.ctl' scope=spfile;
Here we have added a 4th control file. This addition can also be done in a different location when
implementing OFA

4. SQL> shutdown immediate
5. SQL> ! cp /u02/prod/control01.ctl /u02/prod/control04.ctl
6. SQL> startup
# Steps to multiplex controlfile using pfile
1. SQL> shutdown immediate
2. [oracle@pc1 ~] $ cd $ORACLE_HOME/dbs
3. [oracle@pc1 ~] $ vi initprod.ora

Edit the control_files parameter, add the new path to it and save the file
4. [oracle@pc1 ~] $ cp /u02/prod/control01.ctl /u02/prod/control04.ctl
5. [oracle@pc1 ~] $ sqlplus / as sysdba
6. SQL> startup
Note: we can create a maximum of 8 copies of controlfiles

ARCHIVELOG FILES
1. Archive log files are offline copies of the online redolog files and are required to recover the
database if we have an old backup
2. The following are the parameters used for archivelog mode, with their description
a. LOG_ARCHIVE_START – till 9i we used to have two types of archiving (manual &
automatic). But from 10g we have only automatic archiving. This parameter enables
automatic archiving and is useful only till 9i (deprecated in 10g)
b. LOG_ARCHIVE_TRACE – it is used to generate a trace file to know how the ARCHn process
is working
c. LOG_ARCHIVE_MIN_SUCCEED_DEST – defines the minimum number of destinations to which the ARCHn
process should complete archiving before LGWR can reuse the online redolog file
d. LOG_ARCHIVE_MAX_PROCESSES – will start multiple ARCH processes and is helpful for
faster writing
e. LOG_ARCHIVE_LOCAL_FIRST – if enabled, the ARCHn process will first generate the archive on the
local machine and then on the remote machine. It is used in case of a dataguard setup
f. LOG_ARCHIVE_FORMAT – defines the archive log file format
g. LOG_ARCHIVE_DUPLEX_DEST – if we want to archive to only 2 locations, we should use this
h. LOG_ARCHIVE_DEST_1...10 – if we want to archive to more than 2 locations, we should enable these
i. LOG_ARCHIVE_DEST_STATE_1...10 – to enable / disable archive locations
j. LOG_ARCHIVE_CONFIG – it enables / disables sending redologs to a remote location. Used
in a dataguard environment

3. When we want to multiplex to only 2 locations, from 10g we should use the LOG_ARCHIVE_DEST
and LOG_ARCHIVE_DUPLEX_DEST parameters
4. The default location for archivelogs in 10g is the Flash Recovery Area (FRA). The archives in this
location are deleted when space pressure arises. The location and size of the FRA can be known
using the DB_RECOVERY_FILE_DEST and DB_RECOVERY_FILE_DEST_SIZE parameters respectively
5. To disable archivelog generation into the FRA, we shouldn't use LOG_ARCHIVE_DEST, but should use
LOG_ARCHIVE_DEST_1
Note: archivelogs can also be deleted based on the RMAN deletion policy

COMMANDS
# How to check if the database is in archivelog mode?
SQL> archive log list
# Enabling archivelog mode in 10g/11g
1. SQL> shutdown immediate
2. SQL> startup mount
3. SQL> alter database archivelog;
4. SQL> alter database open;
Note: When we enable archivelog mode using the above method, the archives will be generated by default
in the Flash Recovery Area (FRA). It is the location where files required for recovery exist and was introduced in
10g
# To change the archive destination from FRA to a customized location
1. SQL> alter system set log_archive_dest_1='LOCATION=/u03/archives' scope=spfile;
2. SQL> shutdown immediate

3. SQL> startup
# Enabling archivelog mode in 9i
1. SQL> alter system set log_archive_start=TRUE scope=spfile;
2. SQL> alter system set log_archive_dest='/u03/archives' scope=spfile;
3. SQL> alter system set log_archive_format='prod_%s.arc' scope=spfile;
4. SQL> shutdown immediate
5. SQL> startup mount
6. SQL> archive log start
7. SQL> alter database archivelog;
8. SQL> alter database open;

# Disabling archivelog mode in 9i/10g/11g


1. SQL> shutdown immediate
2. SQL> startup mount
3. SQL> alter database noarchivelog;
4. SQL> alter database open;

BLOCK SPACE UTILIZATION PARAMETERS

[Diagram: a data block with the block header at the top, the PCTFREE level marked at 20% and the PCTUSED level marked at 40%]

The following are the block space utilization parameters (see the example after the note below)

a. INITRANS and MAXTRANS – represent the number of concurrent transactions that can access a
block. MAXTRANS was fixed at 255 and removed from 10g
b. PCTFREE – it is the space reserved for future updates (an update statement may or may not
increase the row size). In case the row size increases, it will take space from PCTFREE
c. PCTUSED – it is the level which will be compared with the data level for insertion after
deletion
Note: Block space utilization parameters are deprecated for locally managed tablespaces
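For illustration only (the table and tablespace names are made up; in an ASSM tablespace PCTUSED is accepted but ignored):
SQL> create table emp_history (empno number, ename varchar2(30))
pctfree 20 pctused 40 initrans 2
tablespace mydata;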

LOCAL Vs DICTIONARY managed tablespaces


[Diagram: DMT – a freelist in the data dictionary cache (DDC) records the status of each block (121 used, 122 free, 123 used) for blocks 121–123 held in the datafile; the datafile header holds no free-block map]
1. In a DMT, free block information is maintained in the form of a freelist, which will be there
in the data dictionary cache
2. Every time DBWRn requires a free block, the server process will perform an I/O to get the free block
information from the freelist. This will happen for all the free blocks. Because more I/Os
are being performed, it will degrade the performance of the database

[Diagram: locally managed tablespace – a bitmap in the datafile header records the status of each block (121 – 1, 122 – 0, 123 – 1, 124 – 1, where 0 = free and 1 = used)]

1. In a locally managed tablespace, free block information is maintained in the datafile header itself in
the form of bitmap blocks
2. These bitmaps are represented with 0 and 1, where 0 means free and 1 means used
3. When a free block is required, the server process will search the bitmap block and will inform DBWRn,
thus avoiding the I/O and increasing database performance
4. In 8i the default tablespace type is dictionary, but we can still create locally managed tablespaces,
whereas in 9i the default is local
Note: In any version, we can create a dictionary managed tablespace only if the SYSTEM tablespace is
dictionary managed

Tablespace creation syntax
create tablespace mytbs
datafile '/u02/ora10g/prod/mytbs01.dbf' size 50m
autoextend on maxsize 200m
extent management local / dictionary
segment space management auto / manual
initrans 1 maxtrans 255
pctfree 20 pctused 40
initial 1m next 5m
pctincrease / uniform / autoallocate
minextents 1 maxextents 500
logging / nologging
blocksize 8k; - this is optional

Note: Even though we specify a tablespace as NOLOGGING, still all DML transactions will
generate redo entries (this is to help in instance recovery). NOLOGGING is applicable in only
below situations
1. create table B as select * from A nologging;
2. insert into B select * from A nologging;
3. alter index <index_name> rebuild online nologging;
4. create index <index_name> on table_name(column_name) nologging;
5. any DML operations on LOB segments
COMMANDS
# To create a tablespace
SQL> create tablespace mytbs
datafile '/u02/prod/mytbs01.dbf' size 10m;
# To create a tablespace in 9i
SQL> create tablespace mytbs
datafile '/u02/prod/mytbs01.dbf' size 10m
segment space management auto;
# To create a dictionary managed tablespace
SQL> create tablespace mytbs
datafile '/u02/prod/mytbs01.dbf' size 10m
extent management dictionary;
# To view tablespace information
SQL> select allocation_type,extent_management,contents from dba_tablespaces where
tablespace_name='MYDATA';
# To view datafile information
SQL> select file_name,sum(bytes),autoextensible,sum(maxbytes) from dba_data_files where
tablespace_name='MYDATA' group by file_name,autoextensible;
# To check the database size
SQL> select sum(bytes/1024/1024/1024) from dba_data_files;
# To enable/disable autoextend
SQL> alter database datafile '/u02/prod/mytbs01.dbf' autoextend on maxsize 100m;
SQL> alter database datafile '/u02/prod/mytbs01.dbf' autoextend off;
# To resize a datafile
SQL> alter database datafile '/u02/prod/mytbs01.dbf' resize 20m;

# To add a datafile
SQL> alter tablespace mytbs add datafile '/u02/prod/mytbs02.dbf' size 10m;
Note: If we have multiple datafiles, extents will be allocated in round robin fashion
Note: Adding a datafile is the best option for increasing tablespace size if we have multiple
hard disks
# To rename a tablespace (10g/11g)
SQL> alter tablespace mytbs rename to mydata;
# To convert DMT to LMT or vice versa
SQL> exec dbms_space_admin.tablespace_migrate_to_local('MYDATA');
SQL> exec dbms_space_admin.tablespace_migrate_from_local('MYDATA');
Note: Local to dictionary conversion is possible only if the SYSTEM tablespace is not local
# To rename or relocate a datafile
1. SQL> alter tablespace mydata offline;
2. SQL> !mv /u02/prod/mytbs01.dbf /u02/prod/mydata01.dbf
3. SQL> alter tablespace mydata rename datafile '/u02/prod/mytbs01.dbf' to
'/u02/prod/mydata01.dbf';
4. SQL> alter tablespace mydata online;
# To rename or relocate the system datafile
1. SQL> shutdown immediate
2. SQL> !mv /u02/prod/system.dbf /u02/prod/system01.dbf
3. SQL> startup mount
4. SQL> alter database rename file '/u02/prod/system.dbf' to '/u02/prod/system01.dbf';
5. SQL> alter database open;
# To drop a tablespace
SQL> drop tablespace mydata;
This will remove the tablespace info from the base tables, but the datafiles still exist at OS level
Or
SQL> drop tablespace mydata including contents;
This will remove the tablespace info and also clear the datafile (i.e. it will empty the contents)
Or
SQL> drop tablespace mydata including contents and datafiles;
This will remove it at the Oracle level and also at the OS level
# To reuse a datafile
SQL> alter tablespace mydata add datafile '/u02/prod/mydata02.dbf' reuse;

BIGFILE TABLESPACE
1. For managing the datafiles in a VLDB, Oracle introduced the bigfile tablespace in 10g
2. A bigfile tablespace's single datafile can grow into terabytes based on the block size. For
example, for an 8KB block size a single file can grow up to 32TB
3. Bigfile tablespaces should be created only when we have striping and mirroring
implemented at the storage level in real time
4. A bigfile tablespace can have only one datafile, so we can't add another datafile to it
5. Bigfile tablespaces can be created only as LMT and with ASSM
Note: Either in LMT or DMT, ASSM once defined cannot be changed

# To create a bigfile tablespace
SQL> create bigfile tablespace bigtbs
datafile '/u02/prod/bigtbs01.dbf' size 50m;

CAPACITY PLANNING
1. It is the process of estimating the space requirement for future data storage
2. We will do this by collecting free space info for all the tablespaces in the database on either a
daily, weekly or monthly basis
3. We need to observe the difference in free space and should be able to analyze how much
space is required in the future
4. The following query is used to find free space
SQL> select tablespace_name,sum(bytes/1024/1024) from dba_free_space group by
tablespace_name;
5. E.g.: If we observe that tablespace USERS free space is reducing by 500m daily, for the next 1 month
we need 500*30 = 15GB
6. Once we analyze the required space, we need to check the availability for the same in the
mount point. If the space is not there, contact the storage team to get space added

UNDO MANAGEMENT
1. Undo tablespace features are enabled by setting the following parameters (checked in the example below)
a. UNDO_MANAGEMENT
b. UNDO_TABLESPACE
c. UNDO_RETENTION
2. Only one undo tablespace will be in action at a given time
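A quick way to see how these are set on a given database (the values returned will vary per database):
SQL> show parameter undo_management
SQL> show parameter undo_tablespace
SQL> show parameter undo_retention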

Imp points to remember:
1. The undo blocks occupied by a transaction will become expired once the transaction commits.
2. The data will be selected from the undo tablespace if any DML operation is being performed on the table
on which the select query is also fired. This is to maintain read consistency.

ORA-1555 error (snapshot too old error)

[Diagram: Tx1 has updated table A and committed (dirty blocks of A in the DBC, undo blocks used); Tx2 is updating table B; Tx3 is selecting data from A; blocks of B now occupy the undo space while the datafile still holds the old data of A]

1. In the above situation, Tx1 issued an update statement on table A and committed. Because of
this, dirty blocks were generated in the DBC and undo blocks were used from the undo tablespace. Also,
the dirty blocks of A are not yet written to the datafiles
2. Tx2 is updating table B and, because of non-availability of undo blocks, Tx2 has overwritten the expired
undo blocks of Tx1
3. Tx3 is selecting the data from A. This operation will first look for the data in the undo tablespace, but as
the blocks of A are already occupied by B (Tx2), it will not retrieve any data. Then it will check for the
latest data in the datafiles, but as the dirty blocks are not yet written to the datafiles, the transaction
will be unable to get the data. In this situation it will throw the ORA-1555 (snapshot too old) error
Solutions to avoid ORA-1555
1. Re-issuing the SELECT statement is a solution when we are getting ORA-1555 very rarely
2. It may occur due to an undersized undo tablespace. So increasing the undo tablespace size is one
solution
3. Increasing the undo_retention value is also a solution
4. Avoiding frequent commits

5. Using the retention guarantee clause on the undo tablespace. This is only from 10g (see the example below)
Note : Don't ever allow undo & temp tablespaces to be in AUTOEXTEND ON
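For illustration (the tablespace name and retention value are examples only):
SQL> alter system set undo_retention=3600 scope=both;
SQL> alter tablespace undotbs1 retention guarantee;
SQL> select tablespace_name, retention from dba_tablespaces where contents='UNDO';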

COMMANDS
# To create UNDO tablespace
SQL> create undo tablespace undotbs2
datafile '/u02/prod/undotbs2_01.dbf' size 30m;
# To change undo tablespace
SQL> alter system set undo_tablespace='UNDOTBS2' scope=memory/spfile/both;
# To create temporary tablespace
SQL> create temporary tablespace mytemp
tempfile '/u02/prod/mytemp01.dbf' size 30m;
# To add a tempfile
SQL> alter tablespace mytemp add tempfile '/u02/prod/mytemp02.dbf' size 30m;
# To resize a tempfile
SQL> alter database tempfile '/u02/prod/mytemp01.dbf' resize 50m;
# To create temporary tablespace group
SQL> create temporary tablespace mytemp
tempfile '/u02/prod/mytemp01.dbf' size 30m
tablespace group grp1;
# To view tablespace group information
SQL> select * from dba_tablespace_groups;
# To view temp tablespace information
SQL> select file_name,sum(bytes) from dba_temp_files where tablespace_name='MYTEMP'
group by file_name;
# To move a temp tablespace between groups
SQL> alter tablespace mytemp tablespace group grp2;

TABLESPACE ENCRYPTION
Oracle Database 10g introduced Transparent Data Encryption (TDE), which enabled you to
encrypt columns in a table. The feature is called transparent because the database takes care
of all the encryption and decryption details. In Oracle Database 11g, you can also encrypt an
entire tablespace. In fact, tablespace encryption helps you get around some of the restrictions
imposed on encrypting a column in a table through the TDE feature. For example, you can get
around the restriction that makes it impossible for you to encrypt a column that's part of a
foreign key or that's used in another constraint, by using tablespace encryption.
Restrictions on Tablespace Encryption
You must be aware of the following restrictions on encrypting tablespaces. You
1. Can't encrypt a temporary or an undo tablespace.
2. Can't change the security key of an encrypted tablespace.
3. Can't encrypt an external table.
As with TDE, you need to create an Oracle Wallet to implement tablespace encryption.
Therefore, let's first create an Oracle Wallet before exploring how to encrypt a tablespace.

Creating the Oracle Wallet
Tablespace encryption uses Oracle Wallets to store the encryption master keys. Oracle Wallets
can be either encryption wallets or auto-open wallets. When you start the database, the
auto-open wallet opens automatically, but you must open the encryption wallet yourself.
Oracle recommends that you use an encryption wallet for tablespace encryption, unless you're
dealing with a Data Guard setup, where it's better to use the auto-open wallet.
You can create the wallet easily by executing the following command in SQL*Plus:
SQL> alter system set encryption key identified by "password";
The previous command creates an Oracle Wallet if there isn't one already and adds a master
key to that wallet. By default, Oracle stores the Oracle Wallet, which is simply an operating
system file named ewallet.p12, in an operating system-determined location. You can, however,
specify a location for the file by setting the parameter ENCRYPTION_WALLET_LOCATION in the
sqlnet.ora file, as shown here:
ENCRYPTION_WALLET_LOCATION=
 (SOURCE=
  (METHOD=file)
  (METHOD_DATA= (DIRECTORY=/apps/oracle/general/wallet) ) )
You must first create a directory named wallet under the $ORACLE_BASE/admin/$ORACLE_SID
directory. Otherwise, you'll get an error when creating the wallet: ORA-28368: cannot auto-create
wallet. Once you create the directory named wallet, issue the following command to
create the Oracle Wallet:
SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "secure";
System altered.
The ALTER SYSTEM command shown here will create a new Oracle Wallet if you don't have one.
It also opens the wallet and creates a master encryption key. If you have an Oracle Wallet, the
command opens the wallet and re-creates the master encryption key. Once you've created the
Oracle Wallet, you can encrypt your tablespaces.
Creating an Encrypted Tablespace
The following example shows how to encrypt a tablespace:
SQL> CREATE TABLESPACE tbsp1 DATAFILE '/u01/app/oracle/test/tbsp1_01.dbf' SIZE 500m
ENCRYPTION DEFAULT STORAGE (ENCRYPT);
Tablespace created.
The storage clause ENCRYPT tells the database to encrypt the new tablespace. The clause
ENCRYPTION tells the database to use the default encryption algorithm, AES128. You can
specify an alternate algorithm such as 3DES168, AES192, or AES256 through the clause USING,
which you specify right after the ENCRYPTION clause. Since I chose the default encryption
algorithm, I didn't use the USING clause here.
The following example shows how to specify the optional USING clause, to define a non-default
encryption algorithm.
SQL> CREATE TABLESPACE mytbsp2 DATAFILE '/u01/app/oracle/test/mytbsp2_01.dbf' size
500m ENCRYPTION USING '3DES168' DEFAULT STORAGE (ENCRYPT);
Tablespace created.
The example shown here creates an encrypted tablespace, MYTBSP2, that uses the 3DES168
encryption algorithm instead of the default algorithm.
Note: You can check whether a tablespace is encrypted by querying the DBA_TABLESPACES
view
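As a quick illustration of the note above (assuming 11g, where DBA_TABLESPACES exposes the ENCRYPTED column):
# To check which tablespaces are encrypted
SQL> select tablespace_name, encrypted from dba_tablespaces;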
The database automatically encrypts data during writes and decrypts it during reads. Since
both encryption and decryption aren't performed in memory, there's no additional memory
requirement. There is, however, a small additional I/O overhead. The data in the undo
segments and the redo log will keep the encrypted data in the encrypted form. When you
perform operations such as a sort or a join operation that use the temporary tablespace, the
encrypted data remains encrypted in the temporary tablespace.

ORACLE MANAGED FILES (OMF)


1. OMF gives flexibility in managing controlfiles, redologs and datafiles. When using OMF,
Oracle will automatically create the file in the specified location and will even delete the file
at OS level when it is dropped.
2. We can use OMF by setting the below parameters (both are dynamic)
a. db_create_file_dest
b. db_create_online_log_dest_n

COMMANDS
# To configure OMF parameters
SQL> alter system set db_create_file_dest='/u02/prod' scope=memory;
SQL> alter system set db_create_online_log_dest_1='/u02/prod' scope=memory;
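A small usage sketch (the tablespace name omftbs is an illustrative assumption): once db_create_file_dest
is set, a tablespace can be created without naming its datafile and Oracle generates the file under that location.
SQL> create tablespace omftbs datafile size 100m;
SQL> select file_name from dba_data_files where tablespace_name='OMFTBS';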

USER MANAGEMENT
1. User creation should be done after clearly understanding the requirement from the application team
2. A schema is a collection of objects, whereas a user is one who accesses the objects of another schema.
Many times we will interchange these two terms in order to represent a user in the database.
For example, we can refer to scott as a schema or as a user
3. Whenever we create a user, we should assign a default permanent tablespace (which allows the user
to create tables) and a default temporary tablespace (which allows the user to do sorting). At any
moment of time we can change them

4. If we don't assign a default tablespace and temporary tablespace, oracle will take the default values.
To see the default values we can use the DATABASE_PROPERTIES view.
5. If we want a user to create a table, we need to assign quota on that tablespace to the user; only then
can the user create a table.
6. After creating a user, we should grant privileges. Privileges for a user are of 2 types
a. System level privileges - any privilege that is used to modify the system (i.e. database) is
considered as a system level privilege. eg: create table, create view etc
b. Object level privileges - any privilege which is used to access an object in another schema
is called an object level privilege. eg: select on A, update on B etc


7. After creating a user, don't grant the connect and resource roles (in 9i). In 10g, we can grant the
connect role as it contains only the create session privilege
8. The resource role internally contains the unlimited tablespace privilege and because of this, it will
override the quota that was granted initially. So, it should not be granted in real time unless it is
required
9. A role is a set of privileges which will reduce the risk of issuing many commands
10. To find out roles and privileges assigned to a user, use the following views
a. DBA_SYS_PRIVS
b. DBA_TAB_PRIVS
c. DBA_ROLE_PRIVS
d. ROLE_SYS_PRIVS
e. ROLE_TAB_PRIVS
f. ROLE_ROLE_PRIVS

COMMANDS
# To create a user
SQL> create user user1 identified by user1
default tablespace mytbs
temporary tablespace temp;


# To grant permissions to user
SQL> grant create session, create table to user1;
# To revoke any permissions from user
SQL> revoke create table from scott;
# To change password of user
SQL> alter user user1 identified by oracle;
# To allocate quota on tablespace
SQL> alter user user1 quota 10m on mydata;
Note: Allocating quota doesn't mean reserving the space. If 2 or more users are sharing a
tablespace, the quota will be filled up on a first come first served basis
# To change default tablespace or temporary tablespace
SQL> alter user user1 default tablespace test;
SQL> alter user user1 temporary tablespace mytemp;
Note: The objects created in the old tablespace remain unchanged even after changing a default
tablespace for a user
# To check default permanent & temporary tablespace for a user
SQL> select default_tablespace,temporary_tablespace from dba_users where username='SCOTT';
# To lock or unlock a user
SQL> alter user scott account lock;
SQL> alter user scott account unlock;

# To check default permanent tablespace and temporary tablespace
SQL> select property_name,property_value from database_properties where property_name
like 'DEFAULT%';
# To change default permanent tablespace
SQL> alter database default tablespace mydata;
# To change default temporary tablespace
SQL> alter database default temporary tablespace mytemp;
# To check system privileges for a user
SQL> select privilege from dba_sys_privs where grantee='SCOTT';
# To check object level privileges
SQL> select owner,table_name,privilege from dba_tab_privs where grantee='SCOTT';
# To check roles assigned to a user
SQL> select granted_role from dba_role_privs where grantee='SCOTT';
# To check permissions assigned to a role
SQL> select privilege from role_sys_privs where role='MYROLE';
SQL> select owner,table_name,privilege from role_tab_privs where role='MYROLE';
SQL> select granted_role from role_role_privs where role='MYROLE';
# To drop a user
SQL> drop user user1;
Or (to drop the user along with all its objects)
SQL> drop user user1 cascade;

PROFILE MANAGEMENT
1. Profile management is divided into
a. Password management
b. Resource management
2. Profiles are assigned to users to control access to resources and also to provide enhanced
security while logging into the database
3. The following are parameters for password policy management
a. FAILED_LOGIN_ATTEMPTS - it specifies how many times a user can fail to login to the
database
b. PASSWORD_LOCK_TIME - a user who exceeds failed_login_attempts will be locked. This
parameter specifies for how long the account will be locked; the account will get unlocked
after that time automatically. The DBA can also unlock it manually
c. PASSWORD_LIFE_TIME - it specifies after how many days a user needs to change the
password
d. PASSWORD_GRACE_TIME - it specifies the grace period for the user to change the
password. If the user still fails to change the password even after the grace time, the account will
be locked and only the DBA can manually unlock it
e. PASSWORD_REUSE_TIME - this specifies after how many days a user can reuse the
same password
f. PASSWORD_REUSE_MAX - it specifies the maximum number of times previous passwords can be
used again
g. PASSWORD_VERIFY_FUNCTION - it defines rules for setting a new password, like
the password should be 8 characters long, the password should contain alphanumeric values etc
4. The following parameters are used for resource management
a. SESSIONS_PER_USER - it specifies how many concurrent sessions can be opened
b. IDLE_TIME - it specifies how much time a user can stay idle in the database (without
doing any work). The session will be killed if it crosses the idle_time value, but the status in
V$SESSION will be marked as SNIPED. Sniped sessions will still hold resources, which is a
burden on the OS
c. CONNECT_TIME - it specifies how much time a user can stay in the database

COMMANDS
# To create a profile
SQL> create profile my_profile limit
failed_login_attempts 3
password_lock_time 1/24/60
sessions_per_user 1
idle_time 5;
# To assign a profile to user
SQL> alter user scott profile my_profile;
# To alter a profile value
SQL> alter profile my_profile limit sessions_per_user 3;
# To create default password verify function
SQL> @$ORACLE_HOME/rdbms/admin/utlpwdmg.sql

Note: sessions terminated because of idle time are marked as SNIPED in v$session and the DBA needs to
manually kill the related OS process to clear the session
# To kill a session
SQL> select sid,serial# from v$session where username='SCOTT';
SQL> alter system kill session 'sid,serial#' immediate;

Note: Resource management parameters are effective only if RESOURCE_LIMIT is set to TRUE
# To check and change resource_limit value
SQL> show parameter resource_limit
SQL> alter system set resource_limit=TRUE scope=both;
Note: from 11g onwards passwords for all users are case-sensitive

AUDITING
1. It is the process of recording user actions in the database
2. Auditing is enabled by setting AUDIT_TRAIL to
a. NONE - no auditing enabled (default)
b. DB - audited information will be recorded in the AUD$ table and can be viewed using the
DBA_AUDIT_TRAIL view
Note: AUDIT_TRAIL=DB is the default in 11g, i.e. auditing is enabled by default in 11g
c. OS - audited information will be stored in the form of trace files at OS level. For this we
need to set the AUDIT_FILE_DEST parameter
By default the audit file destination will be $ORACLE_HOME/admin/SID/adump
d. DB, EXTENDED - it is the same as the DB option but will additionally record info like SQL_TEXT,
BIND_VALUE etc
e. XML - it will generate XML files to store the auditing information
f. XML, EXTENDED - same as XML but will record much more information

3. Even though we set the AUDIT_TRAIL parameter to some value, oracle will not start auditing until
one of the following types of auditing commands is issued
a. Statement level auditing
b. Schema level auditing
c. Object level auditing
d. Database auditing (this exists only till 9i; due to performance issues it was removed from 10g)
4. By default some activities like startup & shutdown of the database and any structural changes to the
database are audited and recorded in the alert log file
5. If auditing is enabled with DB, then we need to monitor space in the SYSTEM tablespace as there is a
chance of it getting full when more and more information keeps getting recorded
6. SYS user activities can also be captured by setting AUDIT_SYS_OPERATIONS to TRUE
7. Auditing can use the following scopes
a. Whenever successful / not successful
b. By session / By access
8. Enabling auditing at the database level will have an adverse impact on database performance

COMMANDS
# To enable auditing
SQL> alter system set audit_trail=DB/OS/XML scope=spfile;
Note: AUDIT_TRAIL parameter is static and require a restart of database before going to be effective
# To audit what is required
SQL> Audit create table;              -- statement level auditing
SQL> Audit update on SALARY;          -- object level auditing
SQL> Audit all by scott;              -- schema level auditing
SQL> Audit session by scott;
SQL> Audit all privileges;
# To turn off auditing
SQL> Noaudit session;
SQL> Noaudit update on SALARY;
SQL> Noaudit all privileges;
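As a small illustrative check (assuming AUDIT_TRAIL=DB is in effect and some audited actions have already
been recorded; the username is only an example):
# To view recorded audit information
SQL> select username, action_name, timestamp from dba_audit_trail where username='SCOTT';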

NETWORKING WITH ORACLE


1. We need to have oracle client software installed on client machine in order to connect
to server
2. The following files are required to establish a successful connection to the server
a. Client TNSNAMES.ORA and SQLNET.ORA
b. Server LISTENER.ORA, TNSNAMES.ORA and SQLNET.ORA
3. TNSNAMES.ORA file contains the description of the database to which connection
should establish
4. SQLNET.ORA will define the type of connection between client and server
5. Apart from using tnsnames.ora we can also use EZCONNECT, LDAP, bequeath protocol
etc to establish connection to server
6. These files will reside in $ORACLE_HOME/network/admin
7. LISTENER service will run based on LISTENER.ORA file and we can manage listener using
below commands
a. $ lsnrctl start / stop / status / reload
8. We can have host string different from service name i.e instance name or SID and even
it can be different from database name. This is to provide security for the database
9. Tnsping is the command to check the connectivity to the database from client machine
10. Do create separate listeners for multiple databases
11. If the connections are very high, create multiple listeners for the same database
12. Any network related problem should be resolved in the following steps
a. Check whether listener is up and running or not on server side
b. Check whether the output is ok for the tnsping command
c. If the problem still exists, check the firewall on both client and server. If not known, take
the help of the network admin
13. We can know the free port information from netstat command
14. We need to set password for listener so as to provide security
15. TNSNAMES.ORA and SQLNET.ORA files can also be seen on server side because server
will act as client when connecting to another server
16. If the listener is down, existing users will not have any impact. Only new users will not be
able to connect to the instance
17. From 10g, a SYSDBA connection to the database will use the bequeath protocol as it doesn't
require any protocol like TCP/IP
Note: article about LISTENER security can be found @
http://pavandba.files.wordpress.com/2009/11/integrigy_oracle_listener_tns_security.pdf
Note : port number in listener.ora and tnsnames.ora should be same
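For reference, a typical client-side TNSNAMES.ORA entry looks like the sketch below (the alias PROD, the host
server1.kanna.com, port 1521 and service name prod are illustrative values):
PROD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = server1.kanna.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = prod)
    )
  )
# To test the entry from the client
$ tnsping PROD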

Creating new Listener manually
1. Copy the existing entry to end of the listener.ora file
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /u02)
(PROGRAM = extproc)
))
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = server1.kanna.com)(PORT = 1521))
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
))
2. Remove the line mentioning protocol=IPC in the listener description in listener.ora
3. Change the listener name, host and port number to appropriate values
4. Change the listener name in the first line of SID_LIST
5. Change SID_NAME and ORACLE_HOME and remove the EXTPROC line; when finished it
should look like below

SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = prod)
(ORACLE_HOME = /u02)
))
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = server1.kanna.com)(PORT = 1521))
))
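After saving listener.ora, the new listener can be started and verified with lsnrctl. The listener name
LISTENER_PROD below is only an example of a renamed listener; if the default name LISTENER is kept, the
name can be omitted.
$ lsnrctl start LISTENER_PROD
$ lsnrctl status LISTENER_PROD
$ lsnrctl services LISTENER_PROD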

PASSWORD FILE
1. This file contains the SYSDBA password and will be used if any user with SYSDBA permission is trying
to connect to the database remotely.
2. It will be in the form of orapw<SID> and resides in $ORACLE_HOME/dbs (on Unix) and
$ORACLE_HOME/database (on Windows)
3. If we forgot the password of the sys user or lost the password file, it can be recreated using the
ORAPWD utility as follows
$ orapwd file=orapwtest password=oracle entries=5 force=y (here TEST is the SID)
ENTRIES represents how many users can use this password file
FORCE=y will allow the DBA to overwrite an existing password file
4. To make the password file effective, set REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
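A quick way to verify that the password file is in effect (a sketch; the output depends on which users have
been granted SYSDBA/SYSOPER):
# To check which users are present in the password file
SQL> select * from v$pwfile_users;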

Distributed Database Management


1. Having multiple databases connected to each other is called a distributed database
management system
2. Even though manageability is complex, having multiple small databases will give more
benefit
3. DDMS will use a two phase commit mechanism, i.e. a transaction should be committed in
both the databases before the data is made permanent

4. DDMS can have different databases like oracle, db2, sql server etc. In this case they will
talk to each other using the oracle gateway component (which needs to be configured
separately)
5. If we have the same databases in DDMS, it's called homogeneous DDMS. If we have different
databases then it's called heterogeneous DDMS
Database Links
1. It is the object which will pull remote database data to the local database
2. While creating a dblink, we need to know the username and password of the remote database
3. Apart from the username and password of the remote db, we need to have a tns entry in the
tnsnames.ora of the local db and the tnsping command should work

# To create database link


SQL> create database link dev.com
connect to scott identified by tiger
using 'dev';
# To retrieve data from remote database
SQL> select * from scott.emp@dev.com;

Materialized Views
1. It is an object used to pull remote database data periodically at a specified interval, which is
called refreshing the data using materialized views
2. Snapshot is the object which was used to do the same till 8i, but its disadvantage is the time
constraint in pulling a huge number of rows
3. An MV uses an MV log to store information about already transferred rows. The MV log stores the
rowids of table rows, which helps in further refreshes
4. The MV should be created in the database where we store the data and the MV log will be
created in the remote (master) database
5. The MV log is a table, not a file, and its name always starts with MLOG$_
6. MV refresh can happen in the following three modes
a. Complete - pulling the entire data
b. Fast - pulling only non-transferred rows
c. Force - it will do a fast refresh and, in case of any failure, it will go for a complete
refresh
7. When MV refresh is happening very slowly, check the size of the table and compare that with the
MV log size
8. If the MV log size is more than the table size, then drop and recreate only the MV log
Note: A complete refresh is required after the recreation of the MV log
Note: we can use refresh fast on commit in order to transfer the data to the remote database
without waiting
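A minimal sketch of setting up a fast-refreshable MV (assuming the db link dev.com from the previous section,
scott.emp as the remote table with a primary key, and an hourly refresh interval; all names are illustrative):
# On the remote (master) database - create the MV log that tracks changed rows
SQL> create materialized view log on scott.emp;
# On the local database - create the MV that refreshes every hour through the db link
SQL> create materialized view emp_mv
refresh fast start with sysdate next sysdate + 1/24
as select * from scott.emp@dev.com;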

# To check the table size
SQL> select sum(bytes/1024/1024) from dba_segments where segment_name='EMP';

ORACLE UTILITIES

SQL * LOADER
1. It is a utility to load data from a text file into an oracle table
2. The text file is called a flat file. It can be in .txt, .csv, etc format
3. SQL*Loader contains the following components
a. CONTROL FILE - this defines the configuration parameters which tell how to
load the data into the table. It has nothing to do with the database control file
b. INPUT FILE or INFILE - this is the file from which the data should be loaded
c. BADFILE - records which fail to load into the table (due to any reason) will be
stored in this file
d. DISCARDFILE - records which don't satisfy the condition will be placed here
e. LOGFILE - it will record the actions of SQL*Loader and can be used for reference
4. SQL * LOADER can be invoked as follows
[oracle@server1 admin]$ sqlldr userid=system/oracle control=control.lst log=track.log
[Diagram: the INFILE (input file) and the control file are fed to SQL*Loader, which loads the data into the
table and produces a BAD file, a DISCARD file and a LOG file.]
EXPORT & IMPORT
1. It is the utility to transfer the data between two oracle databases
2. The following levels of export/import are possible
a. Database level
b. Schema level
c. Table level
d. Row level
3. Apart from database level, we can perform other levels of export and import within the
same database
4. Whenever a session is running for a long time, we can check its progress from v$session_longops
Note: To avoid the questionable statistics warning, use statistics=none during export
5. Export will convert the command to select statements and the final output will be
written to the dumpfile
exp -> select statement -> datafiles -> DBC -> dumpfile
6. The server process will take the responsibility of writing the data to the dumpfile
7. Export will transfer the data to the dumpfile in units of blocks. To increase the speed of
writing we can set BUFFER = 10 * avg row length. But we will never use this formula in
real time
Note: avg row length can be obtained from dba_tables
8. DIRECT=Y will make the export process faster by performing it in the following way
exp -> select statement -> datafiles -> dumpfile
9. DIRECT=Y is not applicable for
a. A table with LONG datatype
b. A table with LOB datatype
c. Cluster table
d. Partitioned table
Note: when we give direct=y option, if oracle cannot export a table with that option, it will
automatically convert to conventional path
10. By mentioning CONSISTENT=Y, export will take the data from the undo tablespace if a DML
operation is being performed on the table
Note: while using CONSISTENT=Y, there is a chance of getting the ORA-1555 error
11. Import is the utility to load the contents of an export dumpfile into a schema
12. Import internally converts the contents of the export dump file to DDL and DML statements
imp -> create table -> insert the data -> create indexes or other objects -> add constraints and
enable them
13. SHOW=Y can be used to check for corruption in the export dump file. This will not actually
import the contents
14. IGNORE=Y should be used if an object already exists with the same name. It will append
the data if the object already exists
Note: whenever import fails with a warning for constraints or grants, do the import again with the
ROWS=N option
Note: when we are importing tables with LONG or LOB datatypes or partitioned tables, the
destination database should also contain the same tablespace name as the source database
COMMANDS
# To know options of export/import
[oracle@server1 ~]$ exp help=y
[oracle@server1 ~]$ imp help=y
# To take database level export

[oracle@server1 ~]$ exp file=/u01/fullbkp_prod.dmp log=/u01/fullbkp_prod.log full=y
# To take schema level export
[oracle@server1 ~]$ exp file=/u01/scott_bkp.dmp log=/u01/scott_bkp.log owner='SCOTT'
# To take table level export
[oracle@server1 ~]$ exp file=/u01/emp_bkp.dmp log=/u01/emp_bkp.log tables='SCOTT.EMP'
# To take row level export
[oracle@server1 ~]$ exp file=/u01/emp_rows_bkp.dmp log=/u01/emp_rows.log
tables='SCOTT.EMP' query=\"where deptno=10\"
# To import full database
[oracle@server1 ~]$ imp file=/u01/fullprod.dmp log=/u01/imp_fullprod.log full=y
# To import a schema
[oracle@server1 ~]$ imp file=/u01/scott_bkp.dmp log=/u01/imp_schema.log
fromuser='SCOTT' touser='SCOTT'
# To import a table
[oracle@server1 ~]$ imp file=/u01/emp_bkp.dmp log=/u01/imp_emp.log fromuser='SCOTT'
touser='SCOTT' tables='EMP'
# To import a table to another user
[oracle@server1 ~]$ imp file=/u01/emp_bkp.dmp log=/u01/imp_emp.log fromuser='SCOTT'
touser='SYSTEM' tables='EMP'

DATAPUMP
15. Datapump is an extension to traditional exp/imp which provides more advantages like
security, fastness etc
16. During a datapump export, oracle will create a master table in the corresponding schema
and data will be transferred in parallel from the tables to the dumpfile
[Diagram: Table 1, Table 2 and Table 3 feed the master table, which writes the data to the dump file.]
17. During a datapump import this will happen in reverse order, i.e. from the dumpfile a master
table will be created and from that the original tables
18. After finishing either export or import in datapump, oracle will automatically drop the
master table
19. Just like exp/imp, datapump also contains 4 levels (database, schema, table and row)
20. In datapump the dumpfile will reside only on the server and cannot be created on the client side;
its location is controlled with the directory option. This provides security for the dumpfile
21. The DBA_DATAPUMP_JOBS view can be used to find the status of a datapump export or
import process (see the sample query after this list)
Note: whenever a datapump export is done using the PARALLEL option, the import also should be done
with the same option. Otherwise it will affect the time taken for the import
22. Oracle will try to import tables to the tablespace with the same name and if the tablespace
doesn't exist, it will go to the user's default tablespace
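The sample query referenced in point 21 (a sketch; the columns shown are standard columns of
DBA_DATAPUMP_JOBS):
SQL> select owner_name, job_name, operation, job_mode, state from dba_datapump_jobs;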

COMMANDS
# To create a directory
SQL> create directory dpbkp as '/u01/expbkp';
Directory created.
# To grant permissions on directory
SQL> grant read,write on directory dpbkp to scott;
Grant succeeded.
# To view directory information
SQL> select * from dba_directories;
OWNER        DIRECTORY_NAME     DIRECTORY_PATH
------------ ------------------ ------------------
SYS          DPBKP              /u01/expbkp
# To know options of datapump export/import
[oracle@server1 ~]$ expdp help=y
[oracle@server1 ~]$ impdp help=y
# To take database level export
[oracle@server1 ~]$ expdp directory=dpbkp dumpfile=fullprod.dmp logfile=fullprod.log full=y
# To take schema level export
[oracle@server1 ~]$ expdp directory=dpbkp dumpfile=scott_bkp.dmp logfile=scott_bkp.log
schemas='SCOTT'
# To take table level export
[oracle@server1 ~]$ expdp directory=dpbkp dumpfile=emp_bkp.dmp logfile=emp_bkp.log
tables='SCOTT.EMP'
# To take row level export
[oracle@server1 ~]$ expdp directory=dpbkp dumpfile=emprows_bkp.dmp
logfile=emprows_bkp.log tables='SCOTT.EMP' query=\"where deptno=10\"
# To import full database
[oracle@server1 ~]$ impdp directory=dpbkp dumpfile=fullprod.dmp logfile=imp_fullprod.log
full=y
# To import a schema
[oracle@server1 ~]$ impdp directory=dpbkp dumpfile=scott_bkp.dmp logfile=imp_schema.log
remap_schema='SCOTT:SCOTT'
# To import a table
[oracle@server1 ~]$ impdp directory=dpbkp dumpfile=emp_bkp.dmp logfile=imp_emp.log
tables='EMP' remap_schema='SCOTT:SCOTT'
# To import a table to another user
[oracle@server1 ~]$ impdp directory=dpbkp dumpfile=emp_bkp.dmp logfile=imp_emp.log
tables='EMP' remap_schema='SCOTT:SYSTEM'
# To import tables to another tablespace (only in datapump)
[oracle@server1 ~]$ impdp directory=dpbkp dumpfile=emp_bkp.dmp logfile=imp_emp.log
tables='EMP' remap_schema='SCOTT:SCOTT' remap_tablespace=MYDATA:MYTBS

# To import without taking an export (using the network_link option; no dumpfile is used in this mode)
[oracle@server1 ~]$ impdp directory=dpbkp logfile=imp_schema.log
schemas='SCOTT' network_link='source.com'

BACKUP & RECOVERY


COLD BACKUP
1. A backup is a copy of the original data which will be used to recover the database
2. If the data is reproducible and a backup does not exist, we can still recover the data. But it
is a tedious and time consuming task
3. Taking a backup after shutting down the database is called cold backup and, because no
transactions exist, the backup will be consistent
4. In real time, we will perform cold backup very rarely
STEPS to take cold backup
SQL> select name from v$datafile;
SQL> select member from v$logfile;
SQL> select name from v$controlfile;
SQL> shutdown immediate
[oracle@server1 ~]$ mkdir /u03/coldbkp
[oracle@server1 ~]$ cp /datafiles/prod/*.dbf /u03/coldbkp
[oracle@server1 ~]$ cp /datafiles/prod/*.log /u03/coldbkp
[oracle@server1 ~]$ cp /datafiles/prod/*.ctl /u03/coldbkp
[oracle@server1 ~]$ cp $ORACLE_HOME/dbs/*.ora /u03/coldbkp

[oracle@server1 ~]$ sqlplus "/ as sysdba"
SQL> startup
SQL> alter database backup controlfile to trace;
Note: archives are not required to be backed up with a cold backup
HOT BACKUP
1. Taking the backup while the database is up and running is called hot backup
2. During hot backup the database will be in a fuzzy state and users can still perform
transactions, which makes the backup inconsistent
3. Whenever we place a tablespace or database in begin backup mode, the following happens
a. The corresponding datafile headers will be frozen, i.e. the CKPT process will not
update the latest SCN
b. The body of the datafile is still active, i.e. DBWRn will write the dirty blocks to the datafiles
4. After end backup, the datafile header will be unfrozen and the CKPT process will update the latest
SCN immediately by taking that information from the controlfiles
5. During hot backup, we will observe much more redo generated because oracle will copy the
entire data block as a redo entry into the LBC. This is to avoid fractured blocks
6. A block fracture occurs when a block is being read by the backup, and being written to
at the same time by DBWR. Because the OS (usually) reads blocks at a different rate
than Oracle, your OS copy will pull pieces of an Oracle block at a time. What if the OS
copy pulls half a block, and while that is happening, the block is changed by DBWR?
When the OS copy pulls the second half of the block it will result in mismatched halves,
which Oracle would not know how to reconcile.
7. This is also why the SCN of the datafile header does not change when a tablespace
enters hot backup mode. The current SCNs are recorded in redo, but not in the datafile.
This is to ensure that Oracle will always recover over the datafile contents with redo
entries. When recovery occurs, the fractured datafile block will be replaced with a
complete block from redo, making it whole again.
Note: Database should be in archivelog mode to perform hot backup
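A quick sketch for checking and, if required, enabling archivelog mode before taking a hot backup:
SQL> archive log list
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;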

STEPS to take hot backup in 9i


[oracle@server1 ~]$ mkdir /u03/hotbkp
SQL> select name from v$datafile;
SQL> select name from v$controlfile;
SQL> alter tablespace system begin backup;
SQL> !cp /datafiles/prod/system01.dbf /u03/hotbkp
SQL> alter tablespace system end backup;
Repeat above steps for all the tablespaces in the database
SQL> !cp /datafiles/prod/*.ctl /u03/hotbkp
SQL> alter system switch logfile;
SQL> !cp /u03/archives/*.arc /u03/hotbkp/archbkp
Taking archive backup is the important step in hot backup
SQL> alter database backup controlfile to trace;
[oracle@server1 ~]$ cp $ORACLE_HOME/dbs/*.ora /u03/hotbkp

STEPS to take hot backup in 10g


[oracle@server1 ~]$ mkdir /u03/hotbkp
SQL> select name from v$datafile;

SQL> select name from v$controlfile;
SQL> alter database begin backup;
SQL> !cp /datafiles/prod/*.dbf /u03/hotbkp
SQL> alter database end backup;
Since we are placing entire database into begin backup mode, no repetition for all the
tablespaces is required
SQL> !cp /datafiles/prod/*.ctl /u03/hotbkp
SQL> alter system switch logfile;
SQL> !cp /u03/archives/*.arc /u03/hotbkp/archbkp
Taking archive backup is the important step in hot backup
SQL> alter database backup controlfile to trace;
[oracle@server1 ~]$ cp $ORACLE_HOME/dbs/*.ora /u03/hotbkp
Note: In any version, during hot backup we will not take redolog files backup

DATABASE RECOVERY
1. Recovery is of 2 types
a. Complete recovery - recovering the database till the point of failure. No data loss
b. Incomplete recovery - recovering to a certain time or SCN. Has data loss
2. We will perform complete recovery if we lost only datafiles
3. We will perform incomplete recovery if we lost either redolog files, controlfiles or
archivelog files
4. Incomplete recovery is possible with 3 modes
a. UNTIL SCN
b. UNTIL CANCEL - this is the option we always use in real time
c. UNTIL TIME
5. The recovery process involves two phases
a. RESTORE - copying a file from the backup location to the original location as that file is
lost now
b. RECOVER - applying archivelogs and redologs to bring the file SCN on par with the
latest SCN
6. Whenever any file is missing at OS level, we are able to get that information either from the
alert log file or using the V$RECOVER_FILE view
Note: Practically, we can do complete recovery even if we lost controlfiles

STEPS to recover datafile in a noarchivelog mode database


SQL> shutdown immediate
SQL> !cp /u03/coldbkp/*.dbf /datafiles/prod
SQL> !cp /u03/coldbkp/*.ctl /datafiles/prod
SQL> !cp /u03/coldbkp/*.log /datafiles/prod
SQL>startup
STEPS to recover redologfile in a noarchivelog mode database
SQL> shutdown immediate
SQL> !cp /u03/coldbkp/*.dbf /datafiles/prod
SQL> !cp /u03/coldbkp/*.ctl /datafiles/prod
SQL> recover database until cancel;
SQL> alter database open resetlogs;

STEPS to recover controlfile in a noarchivelog mode database


SQL> shutdown immediate
SQL> !cp /u03/coldbkp/*.ctl /datafiles/prod
SQL> startup mount
SQL> recover database using backup controlfile until cancel;
SQL> alter database open resetlogs;

STEPS for recovering tablespace


SQL> alter tablespace mydata offline;
SQL> !cp /u03/hotbkp/mydata01.dbf /datafiles/prod
SQL> recover tablespace mydata;
SQL> alter tablespace mydata online;
STEPS for recovering a single datafile
SQL> alter database datafile '/datafiles/prod/mydata01.dbf' offline;
SQL> !cp /u03/hotbkp/mydata01.dbf /datafiles/prod
SQL> recover datafile '/datafiles/prod/mydata01.dbf';
SQL> alter database datafile '/datafiles/prod/mydata01.dbf' online;
STEPS for recovering system tablespace
SQL> shut immediate

SQL> !cp /u03/hotbkp/system01.dbf /datafiles/prod
SQL> startup mount
SQL> recover tablespace system;
SQL> alter database open;
STEPS for recovering database (we will perform this when we lost more than 50% of datafiles)
SQL> shut immediate
SQL> !cp /u03/hotbkp/*.dbf /datafiles/prod
SQL> startup mount
SQL> recover database;
SQL> alter database open;
Note: we can drop a single datafile using the below command
SQL> alter database datafile '/datafiles/prod/mydata01.dbf' offline drop;
When we use the above command, the datafile is taken offline but the data dictionary will not be
updated, and we can never get back that file even if we have a backup. So don't use this in real
time
STEPS to recover a datafile without backup
SQL> alter tablespace mydata offline;
SQL> alter database create datafile '/datafiles/prod/mydata01.dbf' as
'/datafiles/prod/mydata01.dbf';
SQL> recover tablespace mydata;
SQL> alter tablespace mydata online;
Note: All the archives generated from the date of datafile creation should be available to do
this
STEPS to recover redolog file in archivelog mode
SQL> shutdown immediate
SQL> startup mount
SQL> recover database until cancel;
SQL> alter database open resetlogs;
Using RESETLOGS - when the resetlogs option is used to open the database, it will
1. Create new redolog files at OS level (location and size will be taken from the controlfile) if
they do not already exist
2. Reset the log sequence number (LSN) to 1, 2, 3 etc for the created files
3. Whenever the database is opened with the resetlogs option, we say the database has entered a
new incarnation. If the database is in a new incarnation, the backups which were taken till
now are no longer useful. So, whenever we perform an incomplete recovery we need to
take a full backup of the database immediately
4. We can find the previous incarnation information of a database from the below query
SQL> select resetlogs_change#,resetlogs_time from v$database;
STEPS to recover controlfile (incomplete recovery) in archivelog mode
SQL> shutdown immediate
SQL> !cp /u03/hotbkp/*.ctl /datafiles/prod
SQL> startup mount
SQL> recover database using backup controlfile until cancel;
SQL> alter database open resetlogs;
STEPS for controlfile complete recovery
SQL> alter database backup controlfile to trace;
The above command may not work sometimes, in which case we need to use the trace file already taken
during backup. This command generates a create controlfile script in udump in the form of a
trace file
SQL> shutdown immediate
[oracle@server1 ~]$ go to the udump location and copy the first create controlfile script to a file
called control.sql
SQL> startup nomount
SQL> @control.sql
SQL> alter database open;
Note: After creating the control files using the above procedure, there will be no SCN in them. So the
server process will write the latest SCN to the control files in this situation by taking the info from the
datafile headers
Note: we will perform until time recovery in case we lost a single table and need to recover it.
But to do this we need to have approval from all the users of the database

STEPS for spfile or pfile recovery


When we lose either the spfile or the pfile, we know that we can recreate it using the other file. But when
we lose both the files, then we need to take the help of the alert log to create a new pfile or spfile.
1. Copy all the non-default parameters that are listed in the alert log to a new text file
2. Name that file init<sid>.ora and place it in the $ORACLE_HOME/dbs location

STEPS for spfile or pfile recovery in 11g
Oracle 11g provides a very good and easy option for recovering spfile or pfile using below commands
SQL> create pfile from memory;
SQL> create spfile from memory;
Note: the above commands will work only if the database is still up and running even though we lost the
spfile and pfile. If the database is down, we need to rely on the method we used for 10g

BACKUP MODE        ADVANTAGE                   DISADVANTAGE
COLD               Consistent backup           Requires shutdown
HOT                No shutdown                 Inconsistent backup
EXPORT (LOGICAL)   Restoring a single table    Datafile recovery not possible

RECOVERY MANAGER (RMAN)


1. It is the backup methodology introduced in 8.0 which performs block level backup, i.e.
RMAN will take the backup of only used blocks
2. RMAN will take the information about used blocks from the bitmap block and while
performing this RMAN will make sure DBWRn is not writing into free blocks
3. Advantages of RMAN
a. Block level backup
b. Parallelism
c. Duplexing of archives
d. Detection of corruption in datafiles
e. Validating backup
f. Incremental backup
g. Recovery catalog etc
4. Components of RMAN
a. RMAN executable file - the RMAN prompt from where backup commands are issued
b. Target database - the database which we want to back up
c. Auxiliary database - a cloned copy of the target database
d. Recovery catalog - a repository database to store RMAN backup information
e. Media management layer - it is responsible for interacting with the tape drive while
taking an RMAN backup directly to tape
5. In real time, we will take backups to tapes and to manage these tapes there will be a
separate team, which is called the netbackup team or storage team or backup team.
They will manage the tapes (like inserting a tape, removing a tape etc) through a tool called a
netbackup tool. A very widely used tool is VERITAS 6.5
6. Archivelog mode should be enabled to take an RMAN backup
7. Whenever the archive destination is full, the database will hang. In such scenarios, do the following
a. If time permits, take the backup of the archives using the delete input clause
b. If time doesn't permit, temporarily move the archivelogs to some other mount point
c. If no mount point has free space, then delete the archives and take a full backup
of the database immediately without fail
d. When we delete either backups or archives at OS level, we can make RMAN
understand this by running
i. RMAN> crosscheck backup;
ii. RMAN> crosscheck archivelog all;
e. RMAN will delete obsolete backups automatically if the backup location is the flash
recovery area
8. RMAN configuration parameters
a. RETENTION POLICY - tells till what date our backups will be stored and usable for
recovery. It has 2 values
i. Redundancy - it will tell how many backups are to be retained
ii. Recovery window - it will tell how many days of backups are to be retained
b. BACKUP OPTIMIZATION - it will avoid taking backup of unmodified datafiles
c. CONTROLFILE AUTOBACKUP - includes the controlfile in the backup
d. PARALLELISM - creates multiple processes to speed up the backup
e. ENCRYPTION - to secure the backup
f. ARCHIVELOG DELETION POLICY - archivelogs are deleted automatically based on this

COMMANDS
# To connect to RMAN
[oracle@server1 ~]$ rman target /
# To see configuration parameter values
RMAN> show all;
# To change any configuration parameter
RMAN> configure retention policy to redundancy 5;
# To backup the database
RMAN> backup database;
# To backup archivelogs
RMAN> backup archivelog all;
# To backup both database and archivelogs
RMAN> backup database plus archivelog;
Note: By default in 10g, an RMAN backup will go to the flash recovery area. To override that, use the
format option shown below
# To take compressed backup
RMAN> backup as compressed backupset database plus archivelog;
# To take backup to specified area
RMAN> backup format '/u03/rmanbkp/fulldbbkp_%t.bkp' database;
# To see backup information
RMAN> list backup;
The above command will get the information from controlfile of the database
# To find & delete expired backups
RMAN> crosscheck backup;
RMAN> delete expired backup;
RMAN> delete noprompt expired backup;
# To find and delete expired archivelogs
RMAN> crosscheck archivelog all;
RMAN> delete expired archivelog all;
RMAN> delete noprompt expired archivelog all;
# To find and delete unnecessary backups
RMAN> report obsolete;
RMAN> delete obsolete;
RMAN> delete noprompt obsolete;

# To take physical image copy of database
RMAN> backup as copy database;
# To validate the backup
RMAN> restore database validate;
# To validate the database before backup
RMAN> backup validate database archivelog all;
# To validate a particular backupset
RMAN> validate backupset 1234;
# To take backup when using tape
RMAN> run
{
allocate channel c1 device type sbt_tape;
backup database plus archivelog;
}
# To increase FRA size
SQL> show parameter db_recovery_file_dest_size
SQL> alter system set db_recovery_file_dest_size=10G scope=both;


RMAN RECOVERY SCENARIOS


STEPS to recover a datafile
RMAN> run
{
sql 'alter tablespace mydata offline';
restore tablespace mydata;
recover tablespace mydata;
sql 'alter tablespace mydata online';
}
STEPS to recover a system datafile
RMAN>run
{
shutdown immediate;
startup mount;
restore datafile 1;
recover datafile 1;
sql 'alter database open';
}
STEPS to recover redolog files
RMAN> run
{
shutdown immediate;
startup mount;
set until scn 1234;
(or) set until time "to_date('2011-01-05 11:30:00','YYYY-MM-DD hh24:mi:ss')";
recover database;
sql 'alter database open resetlogs';
}
STEPS to recover controlfiles
RMAN> run
{
shutdown immediate;
startup nomount;
restore controlfile from autobackup;
sql 'alter database mount';
recover database;
sql 'alter database open resetlogs';
}

RECOVERY CATALOG
1. RMAN stores the backup information in the target database controlfile. If we lose this
controlfile and perform either complete or incomplete recovery, we will lose the backup
info even though the backups are physically available
2. To avoid this situation RMAN introduced the recovery catalog. It is a database which stores
the target database backup information
3. A single recovery catalog can support multiple target databases
4. We cannot obtain recovery catalog information from the target, but vice versa is possible
Steps for Configuring Recover Catalog
Below steps need to be done on catalog database
SQL> create tablespace rmantbs
datafile '/datafiles/prod/rmantbs01.dbf' size 50m;
SQL> create user rman_rc identified by rman_rc
default tablespace rmantbs
temporary tablespace temp;
SQL> grant connect,resource,recovery_catalog_owner to rman_rc;
[oracle@server1 ~]$ rman catalog rman_rc/rman_rc@rc
RMAN> create catalog;
[oracle@server1 ~]$ rman target / catalog rman_rc/rman_rc@rc
Below step need to be done on target database side
RMAN> register database;
# To obtain target database information from catalog
SQL> select db_id,name from rc_database;

INCREMENTAL BACKUP
1. Taking a backup of a very large database (VLDB) will take time if the backup size is
increasing frequently
2. In such cases, we can go for incremental backup which will take the backup of any changes
that happened from the last full backup to date
3. Incremental backups are of two types
a. Differential (default)
b. Cumulative
4. Both incremental backup types will have level 0 and level 1 (level 0 - full backup, level 1 -
incremental backup)
5. The first incremental backup will always be a level 0 backup
6. RMAN will perform incremental backup by identifying the changed blocks with the help of the
block SCN
7. We cannot recover the database by applying a level 1 backup on a regular full database backup
(a level 0 backup is required)
8. We can apply level 1 backups on image copies and can recover the database
9. From 10g, RMAN can perform faster incremental backups using block change tracking. With this,
whenever any block changes, the CTWR (change tracking writer) background process will write
that information to a tracking file
10. The change tracking file resides in DB_CREATE_FILE_DEST
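To confirm whether block change tracking is enabled and where the tracking file was created, the
V$BLOCK_CHANGE_TRACKING view can be queried (a sketch):
SQL> select status, filename from v$block_change_tracking;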

COMMANDS
# To take full backup in incremental mode
RMAN> backup incremental level 0 database;
RMAN> backup cumulative incremental level 0 database;
# To take differential backup
RMAN> backup incremental level 1 database;
# To take cumulative backup
RMAN> backup cumulative incremental level 1 database;
# To enable change tracking

SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;


You can also create the change tracking file in a location you choose yourself, using the
following SQL statement:
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE
'/mydir/rman_change_track.f' REUSE;
The REUSE option tells Oracle to overwrite any existing file with the specified name.
# To disable change tracking
SQL> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;

PERFORMANCE TUNING
1. Performance tuning is of 2 types
a. Proactive tuning - it is least preferred because of practical problems
b. Reactive tuning - the most preferred way, which means reacting to the problem instead
of preventing it from occurring
2. Any performance problem with a particular query should be resolved in the following phases
NETWORKING TUNING
1. When a performance problem is reported, first we need to check if the problem is only for
one user or for multiple users
2. If it is for only one user, then it could be because of a network problem, so check
tnsping to the database
3. If the tnsping value is too high then intimate the network admin about this. If not, move to the next
phase
APPLICATION TUNING
1. In this phase we need to find whether any new applications were added or whether there are any
changes in the code
2. If there are any additions or changes, ask the application team to revert them and then check the
performance. If it is working fine, then the problem is with those changes
3. If no additions/changes happened or if the performance problem exists even after reverting, then
proceed to the next phase
SQL TUNING
1. By running some reports like ADDM or ASH, we can know what queries are giving
problems and can send them to the application team (or the team which is responsible for
writing sql queries) for tuning
2. Sometimes DBA help may be required, so a DBA should have expert knowledge of SQL
3. In real time, most (90%) of tuning problems will get resolved in this phase. If not solved,
proceed to the next phase

OBJECT TUNING
1. In this phase, first we need to check what is the last analyzed date for the tables
involved in the query.
# To find out the last analyzed date of a table
SQL> select last_analyzed from dba_tables where table_name='EMP';
2. If we see the last_analyzed date as an old date, then it means that the table statistics haven't been
gathered for a long time. That may be the reason for the performance problem.
3. The optimizer will generate the best execution plan based on these statistics. But if the statistics
are old, the optimizer may go for a worse plan which affects performance. In such cases, we
need to analyze manually using the below commands
# To analyze a table
SQL> analyze table emp compute statistics;                -- 8i
SQL> exec dbms_stats.gather_table_stats('SCOTT','EMP');   -- 9i onwards
4. If a table contains a huge number of rows, analyze will take time as it collects info for each and
every row. In such cases, we can estimate statistics, which means collecting statistics for
some percentage of rows. This can be done using the below command

# To analyze a table using the estimate option
SQL> analyze table emp estimate statistics;                                     -- 8i
SQL> exec dbms_stats.gather_table_stats('SCOTT','EMP',estimate_percent=>40);    -- 9i onwards
# To analyze an index (Oracle also recommends analyzing indexes)
SQL> analyze index pk_emp compute statistics;                                   -- 8i
SQL> exec dbms_stats.gather_index_stats('SCOTT','PK_EMP');                      -- 9i onwards
# To analyze a schema
SQL> exec dbms_stats.gather_schema_stats('SCOTT');
# To analyze the full database (this will not consider base tables)
SQL> exec dbms_stats.gather_database_stats;
# To analyze base tables and dictionary views
SQL> exec dbms_stats.gather_dictionary_stats;
SQL> exec dbms_stats.gather_fixed_objects_stats;
Note: In 10g, oracle automatically collects statistics every night at 10PM server time for tables which are
modified by more than 10%. But due to practical complications, experts recommend disabling that
automated job and creating a new one manually
Note: The statistics gathering job should always run in non-peak hours of server time as it takes
maximum cpu power and memory for processing
5. Tables are divided into 3 categories
a. Static tables - monthly analyze
b. Semi-dynamic tables - weekly analyze
c. Dynamic tables - daily analyze
6. The optimizer works in two modes
a. Rule based optimization (RBO) [deprecated from 10g]
b. Cost based optimization (CBO)
7. CBO and RBO are the internal algorithms based on which the optimizer will generate
execution plans
8. The OPTIMIZER_MODE parameter will actually decide which optimization mode should be
used for query execution
9. In 9i, optimizer_mode has the default value CHOOSE, i.e. the optimizer will choose
whether to use RBO or CBO. In 10g/11g, the default value is ALL_ROWS, which means it
will prepare the execution plan so that it retrieves all rows from the table as fast as possible
10. We can define optimizer_mode=first_rows_n (n is an integer) in order to favour returning the first
n rows from the table. This is useful when you are displaying data in pages
11. If performance problems remain the same even after analyzing, then we need to see if the
query is using indexes or not by generating an explain plan

EXPLAIN PLAN
1. It is a plan which shows the flow of execution for any sql statement
2. To generate an explain plan we require the plan_table in the SYS schema. If it is not there, we can
create it using the $ORACLE_HOME/rdbms/admin/utlxplan.sql script
3. After creating the plan_table, use the below commands to generate an explain plan
SQL> grant select,insert on plan_table to scott;
SQL> conn scott/tiger
SQL> explain plan for select * from emp;
4. To view the content of the explain plan, run the $ORACLE_HOME/rdbms/admin/utlxpls.sql
script
5. The optimizer may sometimes deviate from the best execution plan depending on resource (CPU
or memory) availability

6. If there is no index on the table, create an index on the column used in the WHERE
condition of the query
7. Also choose one of the following types of index to be created
a. B-Tree index - must be used for high cardinality (no. of distinct values) columns
b. Bitmap index - for low cardinality columns
c. Function based index - for columns used with functions
d. Reverse key index - for columns populated from a sequence, to avoid index block contention
8. If we are still facing the performance problem, check if we are using the right type of table
9. The following are the types of tables available
a. General table - the table type which is used regularly
b. Cluster table - a table which shares common columns with other tables
[Diagram: tables A and B share a common column X.]
In the above diagram X is the common column shared by both A & B. The problem in
using a cluster table is that any modification to X cannot be done easily
c. Index organized table (IOT) - it avoids creating indexes separately as the data itself
will be stored in index form. The performance of an IOT is fast but DML and DDL
operations are very costly
d. Partition table - a normal table can be split logically into partitions so that we
can make queries search only in 1 partition, which improves search time. The
following are the types of partitions available
i. Range
ii. List
iii. Hash
We can also have composite partitions of the following types
a. Range-range
b. Range-list
c. Range-hash
d. List-list
e. List-hash
f. List-range (from 11g)
Note: Sample partition table and Index creation script available at
http://pavandba.com/2009/11/11/sample-partition-table-and-partition-index-script/

DATABASE TUNING
Fragmentation
1. The high water mark (HWM) is the level up to which data has been written in a table
2. Generally oracle will not reuse the space which is created by deleting some rows, because the high
water mark will not be reset at that time. This creates many unused free spaces in the
table, which leads to fragmentation
3. The following are the different ways to defragment a table across versions
a. 5,6,7,8,8i - export/import
b. 9i - export/import and move table
# To move a table
SQL> alter table emp move tablespace mydata;
The above command will create a duplicate table and copy the data, then drop the original
table
Note: The above command is used even to normally move a table to another tablespace in case
of a space constraint. Also, we can move a table within the same tablespace, but we need to have free
space of double the size of the table
# To check the size of table
SQL> select sum(bytes/1024/1024) from dba_segments where segment_name='EMP';

Note: After a table move, the corresponding indexes will become UNUSABLE because the row ids
will change. We need to use any of the below commands to rebuild the indexes
# To check which indexes became unusable
SQL> select index_name,status from dba_indexes where table_name='EMP';
# To rebuild the index
SQL> alter index pk_emp rebuild;
SQL> alter index pk_emp rebuild online;
SQL> alter index pk_emp rebuild online nologging;   -- always prefer to use this command as it
executes faster because no redo is generated
c. 10g / 11g - export/import, expdp/impdp, move table & shrink compact
# To shrink a table
SQL> alter table scott.emp enable row movement;
SQL> alter table scott.emp shrink space compact;
SQL> alter table scott.emp disable row movement;
As the row ids don't change with the above commands, it is not necessary to rebuild the indexes. While
doing the shrink, users can still access the table, but it will use a full scan instead of an index scan
Note: Apart from table fragmentation, we have tablespace fragmentation and that will occur
only in DMT or LMT with manual segment space management. The only solution is to export &
import the objects in that tablespace. So, it is always preferred to use LMT with ASSM
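A quick way to confirm which tablespaces are exposed to this problem is to check their extent and segment
space management settings; a simple dictionary query, shown here as a sketch:
# To check extent management and segment space management of tablespaces
SQL> select tablespace_name, extent_management, segment_space_management from dba_tablespaces;
Tablespaces showing DICTIONARY extent management or MANUAL segment space management are the candidates for tablespace fragmentation.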

ROW CHAINING
1. If a row's size is more than the block size, the data spreads into multiple blocks, forming a
chain; this is called row chaining
2. For example, when we store a 20k image in a database with an 8k block size, it spreads
across 3 blocks

# To find row chaining
SQL> select table_name,chain_cnt from dba_tables where table_name='EMP';
3. Because the data is spread across multiple blocks, oracle needs to perform multiple I/Os to
retrieve it, which leads to performance degradation
4. The solution for row chaining is to create a new tablespace with a non-default block size and
move the tables into it
5. We can create tablespaces with non-default block sizes of 2k, 4k, 16k and 32k (8k is
the default)
# To create non-default block size tablespace
SQL> create tablespace nontbs
datafile '/datafiles/prod/nontbs01.dbf' size 10m
blocksize 16k;
6. Blocks of a non-default size cannot be cached in the default database buffer cache, so it is
required to enable a separate buffer cache for them
# To enable a non-default buffer cache
SQL> alter system set db_16k_cache_size=100m scope=both;
Instead of 16k, we can use the 2k, 4k or 32k caches in the same way
Note: Once the database is created, we cannot change its default block size
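Note that CHAIN_CNT in DBA_TABLES is populated only after the table has been analyzed. To list the individual
chained (or migrated) rows, the documented CHAINED_ROWS approach can be used; a minimal sketch, assuming SCOTT.EMP:
# To list chained/migrated rows (the CHAINED_ROWS table is created by utlchain.sql)
SQL> @$ORACLE_HOME/rdbms/admin/utlchain.sql
SQL> analyze table scott.emp list chained rows into chained_rows;
SQL> select owner_name, table_name, head_rowid from chained_rows where table_name='EMP';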
ROW MIGRATION
1. Updating a row may increase its size, and in such a case the row will use the PCTFREE space in the block
2. If PCTFREE is full but the row still requires more space for the update, oracle will move the
entire row to another block
3. If many rows are moved like this, more I/Os have to be performed to retrieve the data, which
degrades performance
4. Solutions to avoid row migration are to increase the PCTFREE percentage or, in some cases,
to create a tablespace with a non-default block size
5. In LMT with ASSM, PCTUSED and freelists are managed automatically, but PCTFREE still
applies, so row migration can still occur if PCTFREE is too low for the workload
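A hedged example of applying the PCTFREE solution (the 20% value is only an assumption; choose it based on how much the rows grow on update):
# To check and raise PCTFREE for a table
SQL> select pct_free from dba_tables where owner='SCOTT' and table_name='EMP';
SQL> alter table scott.emp pctfree 20;
Note that the new PCTFREE applies only to blocks formatted from now on; rows that have already migrated still need a reorganization (for example a move or shrink, as shown in the fragmentation section).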

INSTANCE TUNING
1. If the performance problem still exists even after performing all the database tuning steps,
we need to perform instance level tuning
TKPROF report
1. TKPROF (transient kernel profiler) is a report which shows details like the time taken and CPU utilization in
every phase (parse, execute and fetch) of a SQL execution
# Steps to take TKPROF report
SQL> grant alter session to scott;
SQL> alter session set sql_trace=TRUE;
SQL> select * from emp;
SQL> alter session set sql_trace=FALSE;
The above steps will create a trace file in the udump location
[oracle@server1 udump]$ tkprof prod_ora_7824.trc tkprof_report.lst
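The raw report can be made easier to read by suppressing recursive SYS statements and sorting the most expensive
statements first; a sketch of the same command with optional arguments (the trace file name will differ in your environment):
[oracle@server1 udump]$ tkprof prod_ora_7824.trc tkprof_report.lst sys=no sort=prsela,exeela,fchela explain=scott/tiger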
2. If the TKPROF report shows that a frequently executed statement is getting parsed every time,
the reason could be that the statement is being flushed out of the shared pool because the
pool is too small. Increasing the shared pool size is the solution
3. If we observe that fetches are reading from disk every time, it could be because data is being flushed from
the buffer cache, for which increasing its size is the solution
4. If the size of the database buffer cache is enough to hold the data but data is still being flushed
out, in such cases we can use the keep & recycle caches
# To enable keep & recycle caches
SQL> alter system set db_keep_cache_size=50m scope=both;
SQL> alter system set db_recycle_cache_size=50m scope=both;
# To place table in keep or recycle caches
SQL> alter table scott.emp storage (buffer_pool keep);
SQL> alter table scott.emp storage (buffer_pool recycle);
5. If a table is placed in the KEEP cache, it stays in the instance for its lifetime without being
flushed. If a table is placed in the RECYCLE cache, it is flushed immediately after use, without
waiting for LRU aging to occur
Note: Frequently accessed tables should be placed in the keep cache, whereas tables that are mostly
full scanned should be placed in the recycle cache
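A simple dictionary query (sketch) to verify which segments have been assigned to the non-default buffer pools:
# To list objects placed in the keep or recycle cache
SQL> select owner, segment_name, buffer_pool from dba_segments where buffer_pool <> 'DEFAULT';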

STATSPACK REPORT
1. This report should be generated only if entire database performance is slow
2. It is a report which details database performance during a given period of time
# Steps for generating statspack report
SQL> @$ORACLE_HOME/rdbms/admin/spcreate.sql
This will create a PERFSTAT user who is responsible for storing statistical data
[oracle@server1 udump]$ sqlplus perfstat/perfstat
SQL> exec statspack.snap;      -- begin time
SQL> exec statspack.snap;      -- end time
SQL> @$ORACLE_HOME/rdbms/admin/spreport.sql
3. Statspack snapshots can be taken at levels from 0 to 10 and the default is 5
4. In 8i, the same statistics collection was done using the utlbstat.sql and utlestat.sql
scripts
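Snapshots can also be automated instead of being taken manually; a sketch using the scripts shipped under
$ORACLE_HOME/rdbms/admin (spauto.sql submits an hourly dbms_job snapshot, sppurge.sql removes a range of old snapshots):
SQL> connect perfstat/perfstat
SQL> @$ORACLE_HOME/rdbms/admin/spauto.sql
SQL> @$ORACLE_HOME/rdbms/admin/sppurge.sql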
Automatic Workload Repository (AWR) report
1. It is an extension of the statspack report and was introduced in 10g
2. In 10g, oracle automatically runs a statistics collection job (an AWR snapshot) every hour
# To generate AWR report
SQL> @$ORACLE_HOME/rdbms/admin/awrrpt.sql
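The default snapshot interval (60 minutes) and retention can be changed with the DBMS_WORKLOAD_REPOSITORY
package; a sketch with assumed values (both parameters are in minutes):
SQL> exec dbms_workload_repository.modify_snapshot_settings(interval => 30, retention => 10080);
SQL> select snap_interval, retention from dba_hist_wr_control;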
Note: AWR/Statspack report analysis docs are available in
http://pavandba.files.wordpress.com/2009/11/statspack_opm4.pdf
http://pavandba.files.wordpress.com/2009/11/statspack_tuning_otn_new1.pdf
http://pavandba.files.wordpress.com/2009/11/opdg_slow_database.pdf
Active Session History (ASH) report
1. It is a report that shows the performance of the database in the last 15 minutes, which helps in
providing quick solutions
2. It also provides information about the sessions which are causing the performance problem
# To generate ASH report
SQL> @$ORACLE_HOME/rdbms/admin/ashrpt.sql
Automatic Database Diagnostic Monitor (ADDM) report
1. It is a tool which can be used to get recommendations from oracle on performance
issues
2. It takes the snapshots generated by AWR and, based on them, generates
recommendations
# To generate ADDM report
SQL> @$ORACLE_HOME/rdbms/admin/addmrpt.sql

ENTERPRISE MANAGER (EM)
1. It is a tool through which we can manage the entire database and perform all database
actions in a single click
2. Till 9i, it was called Oracle Enterprise Manager (OEM) and was restricted to use within the
network of the database
3. From 10g it was made browser based so that we can manage the database from
anywhere in the world
4. EM can be configured either through DBCA or manually
# Steps to configure EM manually
SQL> select role from dba_roles where role like 'MGMT%';
SQL> select account_status from dba_users where username in ('SYSMAN','DBSNMP');
SQL> alter user sysman account unlock;      -- if not already unlocked
SQL> alter user sysman identified by oraman;
SQL> alter user dbsnmp account unlock;
SQL> alter user dbsnmp identified by dbsnmp;
SQL> alter user mgmt_view account unlock;
[oracle@server1 ~ ]$ lsnrctl status    (start the listener if it is not already running)
[oracle@server1 ~ ]$ emca -config dbcontrol db -repos create
or
[oracle@server1 ~ ]$ emca -repos create
[oracle@server1 ~ ]$ emca -config dbcontrol db

# To drop the repository
[oracle@server1 ~ ]$ emca -deconfig dbcontrol db -repos drop
# To recreate the repository
[oracle@server1 ~ ]$ emca -config dbcontrol db -repos recreate

# To manage EM
[oracle@server1 ~ ]$ emctl status / start / stop dbconsole

CHANGING DATABASE NAME
Method 1 - by recreating the controlfile
SQL> alter database backup controlfile to trace;
This will generate a script (trace file) in the udump location
[oracle@server1 udump ]$ cp prod_ora_7784.trc control.sql
[oracle@server1 ~ ]$ vi control.sql
Here, change the database name, replace the word REUSE with SET, and make sure the script
uses RESETLOGS
SQL> show parameter control_files
SQL> alter system set db_name=prod123 scope=spfile;
SQL> shutdown immediate
SQL> ! rm /datafiles/prod/*.ctl
SQL> startup nomount
SQL> @control.sql
SQL> alter database open resetlogs;

Method 2 - using nid (DBNEWID utility)


SQL> shutdown immediate
SQL> startup mount
[oracle@server1 ~ ]$ nid target=/ dbname=prod123
SQL> alter system set db_name=prod123 scope=spfile;
SQL> shut immediate
SQL> startup mount
SQL> alter database open resetlogs;
The above steps change the database id (DBID) as well
# To change only the db name, keeping the same DBID
[oracle@server1 ~ ]$ nid target=/ dbname=prod123 setname=yes

How to change instance name


SQL> alter system set instance_name=prod123 scope=spfile;
SQL> shutdown immediate
SQL> ! mv $ORACLE_HOME/dbs/spfileprod.ora $ORACLE_HOME/dbs/spfileprod123.ora
SQL> startup

FLASHBACK FEATURES
FLASHBACK QUERY

1. CREATE TABLE flashback_query_test (id NUMBER(10));

2. SELECT current_scn, TO_CHAR(SYSTIMESTAMP, 'YYYY-MM-DD HH24:MI:SS') FROM v$database;

   CURRENT_SCN TO_CHAR(SYSTIMESTAM
   ----------- -------------------
        722452 2004-03-29 13:34:12

3. INSERT INTO flashback_query_test (id) VALUES (1);
4. COMMIT;
5. SELECT COUNT(*) FROM flashback_query_test;

     COUNT(*)
   ----------
            1

6. SELECT COUNT(*) FROM flashback_query_test AS OF TIMESTAMP
   TO_TIMESTAMP('2004-03-29 13:34:12', 'YYYY-MM-DD HH24:MI:SS');

     COUNT(*)
   ----------
            0

7. SELECT COUNT(*) FROM flashback_query_test AS OF SCN 722452;

     COUNT(*)
   ----------
            0

FLASHBACK VERSION QUERY

1. CREATE TABLE flashback_version_query_test (
       id          NUMBER(10),
       description VARCHAR2(50));

2. INSERT INTO flashback_version_query_test (id, description) VALUES (1, 'ONE');
3. COMMIT;
4. SELECT current_scn, TO_CHAR(SYSTIMESTAMP, 'YYYY-MM-DD HH24:MI:SS') FROM v$database;
5. UPDATE flashback_version_query_test SET description = 'TWO' WHERE id = 1;

6. COMMIT;
7. UPDATE flashback_version_query_test SET description = 'THREE' WHERE id = 1;
8. COMMIT;
9. SELECT current_scn, TO_CHAR(SYSTIMESTAMP, 'YYYY-MM-DD HH24:MI:SS') FROM
v$database;
COLUMN versions_startscn FORMAT 99999999999999999
COLUMN versions_starttime FORMAT A24
COLUMN versions_endscn FORMAT 99999999999999999
COLUMN versions_endtime FORMAT A24
COLUMN versions_xid FORMAT A16
COLUMN versions_operation FORMAT A1
COLUMN description FORMAT A11
SET LINESIZE 200
10. SELECT versions_startscn, versions_starttime,
versions_endscn, versions_endtime,
versions_xid, versions_operation,
description
FROM
flashback_version_query_test
VERSIONS BETWEEN TIMESTAMP TO_TIMESTAMP('2004-03-29 14:59:08', 'YYYY-MM-DD HH24:MI:SS')
AND TO_TIMESTAMP('2004-03-29 14:59:36', 'YYYY-MM-DD HH24:MI:SS')
WHERE id = 1;
11. SELECT versions_startscn, versions_starttime,
versions_endscn, versions_endtime,
versions_xid, versions_operation,
description
FROM
flashback_version_query_test
VERSIONS BETWEEN SCN 725202 AND 725219
WHERE id = 1;

FLASHBACK TRANSACTION QUERY


1. SELECT xid, operation, start_scn, commit_scn, logon_user, undo_sql
   FROM flashback_transaction_query
   WHERE xid = HEXTORAW('0600030021000000');
2. The UNDO_SQL column returns the statement needed to reverse the change, for example:
   update "SCOTT"."FLASHBACK_VERSION_QUERY_TEST" set "DESCRIPTION" = 'ONE'
   where ROWID = 'AAAMP9AAEAAAAAYAAA';

FLASHBACK TABLE

Flashback table requires the following privileges

1) FLASHBACK ANY TABLE or the FLASHBACK object privilege on the table
2) SELECT, INSERT, DELETE and ALTER privileges on the table
3) Row movement must be enabled on the table

CREATE TABLE flashback_table_test (id NUMBER(10));
ALTER TABLE flashback_table_test ENABLE ROW MOVEMENT;

SELECT current_scn FROM v$database;

CURRENT_SCN
-----------
     715315

INSERT INTO flashback_table_test (id) VALUES (1);
COMMIT;

SELECT current_scn FROM v$database;

CURRENT_SCN
-----------
     715340

FLASHBACK TABLE flashback_table_test TO SCN 715315;
SELECT COUNT(*) FROM flashback_table_test;

  COUNT(*)
----------
         0

FLASHBACK TABLE flashback_table_test TO SCN 715340;
SELECT COUNT(*) FROM flashback_table_test;

  COUNT(*)
----------
         1

FLASHBACK DATABASE

The database must be in archivelog mode and flashback must be enabled before performing this. Once flashback
is enabled, we can observe flashback logs being generated in the flash_recovery_area
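A sketch of enabling flashback logging (the destination path, size and retention values below are assumptions;
the database must already be in archivelog mode):
SQL> alter system set db_recovery_file_dest_size=10g scope=both;
SQL> alter system set db_recovery_file_dest='/datafiles/flash' scope=both;
SQL> alter system set db_flashback_retention_target=1440 scope=both;
SQL> shutdown immediate
SQL> startup mount
SQL> alter database flashback on;
SQL> alter database open;
SQL> select flashback_on from v$database;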
-- Create a dummy table.
CONN scott/tiger
CREATE TABLE flashback_database_test (
id NUMBER(10)
);
-- Flashback 5 minutes.
CONN sys/password AS SYSDBA
SHUTDOWN IMMEDIATE
STARTUP MOUNT EXCLUSIVE
FLASHBACK DATABASE TO TIMESTAMP SYSDATE-(1/24/12);
ALTER DATABASE OPEN RESETLOGS;
-- Check that the table is gone.
CONN scott/tiger
DESC flashback_database_test
We can also use the following forms of the flashback database command
FLASHBACK DATABASE TO TIMESTAMP my_date;
FLASHBACK DATABASE TO BEFORE TIMESTAMP my_date;
FLASHBACK DATABASE TO SCN my_scn;
FLASHBACK DATABASE TO BEFORE SCN my_scn;

DATAGUARD
Oracle Data Guard is one of the most effective and comprehensive data availability, data
protection and disaster recovery solutions available today for enterprise data. Oracle Data
Guard is the management, monitoring, and automation software infrastructure that creates,
maintains, and monitors one or more standby databases to protect enterprise data from
failures, disasters, errors, and corruptions.
Data Guard maintains these standby databases as transactionally consistent copies of the
production database. These standby databases can be located at remote disaster recovery sites
thousands of miles away from the production data center, or they may be located in the same
city, same campus, or even in the same building. If the production database becomes
unavailable because of a planned or an unplanned outage, Data Guard can switch any standby
database to the production role, thus minimizing the downtime associated with the outage, and
preventing any data loss.

Available as a feature of the Enterprise Edition of the Oracle Database, Data Guard can be used
in combination with other Oracle High Availability (HA) solutions such as Real Application
Clusters (RAC), Oracle Flashback and Oracle Recovery Manager (RMAN), to provide a very high
level of data protection and data availability that is unprecedented in the industry.

The following diagram presents a high-level overview of Oracle Data Guard.

Overview of Oracle Data Guard Functional Components
Data Guard Configuration:
A Data Guard configuration consists of one production (or primary) database and up to nine
standby databases. The databases in a Data Guard configuration are connected by Oracle Net
and may be dispersed geographically. There are no restrictions on where the databases are
located, provided that they can communicate with each other. However, for disaster recovery,
it is recommended that the standby databases are hosted at sites that are geographically
separated from the primary site.

Redo Apply and SQL Apply:


A standby database is initially created from a backup copy of the primary database. Once
created, Data Guard automatically maintains the standby database as a transactionally consistent
copy of the primary database by transmitting primary database redo data to the standby
system and then applying the redo logs to the standby database. Data Guard provides two
methods to apply this redo data to the standby database and keep it transactionally consistent
with the primary, and these methods correspond to the two types of standby databases
supported by Data Guard.

Redo Apply, used for physical standby databases


SQL Apply, used for logical standby databases
A physical standby database provides a physically identical copy of the primary database, with
on-disk database structures that are identical to the primary database on a block-for-block
basis. The database schemas, including indexes, are the same. The Redo Apply technology
applies redo data on the physical standby database using standard Oracle media recovery
techniques.
A logical standby database contains the same logical information as the production database,
although the physical organization and structure of the data can be different. The SQL apply
technology keeps the logical standby database synchronized with the primary database by
transforming the data in the redo logs received from the primary database into SQL statements
and then executing the SQL statements on the standby database. This makes it possible for the
logical standby database to be accessed for queries and reporting purposes at the same time
the SQL is being applied to it. Thus, a logical standby database can be used concurrently for
data protection and reporting.

Role Management:
Using Data Guard, the role of a database can be switched from a primary role to a standby role
and vice versa, ensuring no data loss in the process, and minimizing downtime. There are two
kinds of role transitions - a switchover and a failover. A switchover is a role reversal between
the primary database and one of its standby databases. This is typically done for planned
maintenance of the primary system. During a switchover, the primary database transitions to a
standby role and the standby database transitions to the primary role. The transition occurs
without having to re-create either database. A failover is an irreversible transition of a standby
database to the primary role. This is only done in the event of a catastrophic failure of the
primary database, which is assumed to be lost. To be used again in the Data Guard
configuration, it must be re-instantiated as a standby from the new primary.

Data Guard Protection Modes:


In some situations, a business cannot afford to lose data at any cost. In other situations, some
applications require maximum database performance and can tolerate a potential loss of data.
Data Guard provides three distinct modes of data protection to satisfy these varied
requirements:
Maximum Protection - This mode offers the highest level of data protection. Data is
synchronously transmitted to the standby database from the primary database and
transactions are not committed on the primary database unless the redo data is available on at
least one standby database configured in this mode. If the last standby database configured in
this mode becomes unavailable, processing stops on the primary database. This mode ensures
no-data-loss.
Maximum Availability - This mode is similar to the maximum protection mode, including zero
data loss. However, if a standby database becomes unavailable (for example, because of
network connectivity problems), processing continues on the primary database. When the fault
is corrected, the standby database is automatically resynchronized with the primary database.
Maximum Performance - This mode offers slightly less data protection on the primary
database, but higher performance than maximum availability mode. In this mode, as the
primary database processes transactions, redo data is asynchronously shipped to the standby
database. The commit operation of the primary database does not wait for the standby
database to acknowledge receipt of redo data before completing write operations on the
primary database. If any standby destination becomes unavailable, processing continues on the
primary database and there is little effect on primary database performance.

Data Guard Broker:
The Oracle Data Guard Broker is a distributed management framework that automates and
centralizes the creation, maintenance, and monitoring of Data Guard configurations. All
management operations can be performed either through Oracle Enterprise Manager, which
uses the Broker, or through the Broker's specialized command-line interface (DGMGRL).
The following diagram shows an overview of the Oracle Data Guard architecture.

What's New in Oracle Data Guard 10g Release 2?


Fast-Start Failover
This capability allows Data Guard to automatically and quickly fail over to a previously chosen,
synchronized standby database in the event of loss of the primary database, without requiring
any manual steps to invoke the failover, and without incurring any data loss. Following a fast-start
failover, once the old primary database is repaired, Data Guard automatically reinstates it
to be a standby database. This act restores high availability to the Data Guard configuration.
Improved Redo Transmission
Several enhancements have been made in the redo transmission architecture to make sure
redo data generated on the primary database can be transmitted as quickly and efficiently as
possible to the standby database(s).
Easy conversion of a physical standby database to a reporting database

A physical standby database can be activated as a primary database, opened read/write for
reporting purposes, and then flashed back to a point in the past to be easily converted back to a
physical standby database. At this point, Data Guard automatically synchronizes the standby
database with the primary database. This allows the physical standby database to be utilized for
read/write reporting and cloning activities.
Automatic deletion of applied archived redo log files in logical standby databases
Archived logs, once they are applied on the logical standby database, are automatically deleted,
reducing storage consumption on the logical standby and improving Data Guard manageability.
Physical standby databases have already had this functionality since Oracle Database 10g
Release 1, with Flash Recovery Area.

Fine-grained monitoring of Data Guard configurations


Oracle Enterprise Manager has been enhanced to provide granular, up-to-date monitoring of
Data Guard configurations, so that administrators may make an informed and expedient
decision regarding managing this configuration.
Real Time Apply:
With this feature, redo data can be applied on the standby database (whether Redo Apply or
SQL Apply) as soon as it has been written to a Standby Redo Log (SRL). Prior releases of Data
Guard required this redo data to be archived at the standby database in the form of archive logs
before it could be applied. The Real Time Apply feature allows standby databases to be closely
synchronized with the primary database, enabling up-to-date and real-time reporting
(especially for Data Guard SQL Apply). This also enables faster switchover and failover times,
which in turn reduces planned and unplanned downtime for the business.
The impact of a disaster is often measured in terms of Recovery Point Objective (RPO - i.e. how
much data a business can afford to lose in the event of a disaster) and Recovery Time Objective
(RTO - i.e. how much time a business can afford to be down in the event of a disaster). With
Oracle Data Guard, when Maximum Protection is used in combination with Real Time Apply,
businesses get the benefits of both zero data loss as well as minimal downtime in the event of a
disaster and this makes Oracle Data Guard the only solution available today with the best RPO
and RTO benefits for a business.
Integration with Flashback Database:
Data Guard in 10g has been integrated with the Flashback family of features to bring the
Flashback feature benefits to a Data Guard configuration. One such benefit is human error
protection. In Oracle9i, administrators may configure Data Guard with an apply delay to protect
standby databases from possible logical data corruptions that occurred on the primary
database. The side-effects of such delays are that any reporting that gets done on the standby
database is done on old data, and switchover/failover gets delayed because the accumulated
logs have to be applied first. In Data Guard 10g, with the Real Time Apply feature, such delayed-reporting
or delayed-switchover/failover issues do not exist, and if logical corruptions do end
up affecting both the primary and standby database, the administrator may decide to use
Flashback Database on both the primary and standby databases to quickly revert the databases
to an earlier point-in-time to back out such user errors.

Another benefit that such integration provides is during failovers. In releases prior to 10g,
following any failover operation, the old primary database must be recreated (as a new standby
database) from a backup of the new primary database, if the administrator intends to bring it
back in the Data Guard configuration. This may be an issue when the database sizes are fairly
large, and the primary/standby databases are hundreds/thousands of miles away. However, in
Data Guard 10g, after the primary server fault is repaired, the primary database may simply be
brought up in mounted mode, flashed back (using flashback database) to the SCN at which
the failover occurred, and then brought back as a standby database in the Data Guard
configuration. No re-instantiation is required.

SQL Apply New Features:


Zero Downtime Instantiation:
Logical standby database can now be created from an online backup of the primary database,
without shutting down or quiescing the primary database, as was the case in prior releases. No
shutdown of the primary system implies production downtime is eliminated, and no quiesce
implies no waiting for quiescing to take effect and no dependence on Resource Manager.

Rolling Upgrades:
Oracle Database 10g supports database software upgrades (from Oracle Database 10g Patchset
1 onwards) in a rolling fashion, with near zero database downtime, by using Data Guard SQL
Apply. The steps involve upgrading the logical standby database to the next release, running in
a mixed mode to test and validate the upgrade, doing a role reversal by switching over to the
upgraded database, and then finally upgrading the old primary database. While running in a
mixed mode for testing purpose, the upgrade can be aborted and the software downgraded,
without data loss. For additional data protection during these steps, a second standby database
may be used.

By supporting rolling upgrades with minimal downtimes, Data Guard reduces the large
maintenance windows typical of many administrative tasks, and enables the 24x7 operation of
the business.

Data Guard Benefits


Disaster recovery and high availability
Data Guard provides an efficient and comprehensive disaster recovery and high availability
solution. Automatic failover and easy-to-manage switchover capabilities allow quick role
reversals between primary and standby databases, minimizing the downtime of the primary
database for planned and unplanned outages.

Complete data protection


A standby database also provides an effective safeguard against data corruptions and user
errors. Storage level physical corruptions on the primary database do not propagate to the
standby database. Similarly, logical corruptions or user errors that cause the primary database
to be permanently damaged can be resolved. Finally, the redo data is validated at the time it is
received at the standby database and further when applied to the standby database.

Efficient utilization of system resources


A physical standby database can be used for backups and read-only reporting, thereby reducing
the primary database workload and saving valuable CPU and I/O cycles. In Oracle Database 10g
Release 2, a physical standby database can also be easily converted back and forth between
being a physical standby database and an open read/write database. A logical standby database
allows its tables to be simultaneously available for read-only access while they are updated
from the primary database. A logical standby database also allows users to perform data
manipulation operations on tables that are not updated from the primary database. Finally,
additional indexes and materialized views can be created in the logical standby database for
better reporting performance.
Protection from communication failures
If network connectivity is lost between the primary and one or more standby databases, redo
data cannot be sent from the primary to those standby databases. Once connectivity is reestablished, the missing redo data is automatically detected by Data Guard and the necessary
archive logs are automatically transmitted to the standby databases. The standby databases are
resynchronized with the primary database, with no manual intervention by the administrator.

Centralized and simple management
Data Guard Broker automates the management and monitoring tasks across the multiple
databases in a Data Guard configuration. Administrators may use either Oracle Enterprise
Manager or the Broker's own specialized command-line interface (DGMGRL) to take advantage
of this integrated management framework.

Additional commands for Data Guard


# Add Standby Redo Log Groups to Standby Database
Create standby redo log groups on standby database (start with next group number; create one more
group than current number of groups) after switching out of managed recovery mode:
SQL> sqlplus / as sysdba
SQL> alter database recover managed standby database cancel;
SQL> alter database open read only;
SQL> select max(group#) maxgroup from v$logfile;
SQL> select max(bytes) / 1024 "size (K)" from v$log;
SQL> alter database add standby logfile group 4
('/orcl/oradata/PROD2/stby_log_PROD_4A.rdo','/orcl/oradata/PROD2/stby_log_PROD_4B.rdo') size
4096K;      -- repeat for additional groups
SQL> column member format a55
SQL> select vs.group#,vs.bytes,vl.member from v$standby_log vs,v$logfile vl where vs.group# =
vl.group# order by vs.group#,vl.member;

# Add Tempfile To Standby


Add a tempfile to the standby database for switchover or read-only access, then, switch back to
managed recovery:
SQL> alter tablespace temp add tempfile '/data/oradata/PROD2/temp_PROD_01.dbf' size 400064K
reuse;
SQL> alter database recover managed standby database disconnect from session;
SQL> select * from v$tempfile;
SQL> exit

# Add Standby Redo Log Groups to Primary Database
Create standby logfile groups on the primary database for switchovers (start with next group number;
create one more group than current number of groups):
$ sqlplus / as sysdba
SQL> select max(group#) maxgroup from v$logfile;
SQL> select max(bytes) / 1024 "size (K)" from v$log;
SQL> alter database add standby logfile group 4 ('/orcl/oradata/PROD/stby_log_PROD_4A.rdo',
'/orcl/oradata/PROD/stby_log_PROD_4B.rdo') size 4096K;      -- repeat for additional groups
SQL> column member format a55
SQL> select vs.group#,vs.bytes,vl.member from v$standby_log vs, v$logfile vl where vs.group# =
vl.group# order by vs.group#,vl.member;

# Switch To Maximum Availability Protection Mode


Switch to the desired maximum availability protection mode on the primary database (from the
default maximum performance):
SQL> select value from v$parameter where name = 'log_archive_dest_2';      -- must show LGWR SYNC
SQL> shutdown normal
SQL> startup mount
SQL> alter database set standby database to maximize availability;
SQL> alter database open;
SQL> select protection_mode from v$database;

# Shutdown and Startup for Standby Database


To shut down a standby database:
If in read-only access, switch back to managed recovery (after terminating any other active sessions):
SQL> alter database recover managed standby database disconnect from session;
Cancel managed recovery and shutdown:
SQL> alter database recover managed standby database cancel;
SQL> shutdown immediate


# To start up a standby database:


SQL> startup nomount
SQL> alter database mount standby database;
SQL> alter database recover managed standby database disconnect from session;

# Switchover - Swapping Primary and Standby


End all activities on the primary and standby database.
On the primary (switchover status should show TO STANDBY):
SQL> select database_role,switchover_status from v$database;
SQL> alter database commit to switchover to physical standby;
SQL> shutdown immediate
SQL> startup nomount
SQL> alter database mount standby database;
On the standby (switchover status should show SWITCHOVER PENDING):
SQL> select database_role,switchover_status from v$database;
SQL> alter database commit to switchover to primary;
SQL> shutdown normal
SQL> startup
On the old primary (now the standby):
SQL> alter database recover managed standby database disconnect from session;
On the old standby (now the primary):
SQL> alter system archive log current;
Change tnsnames.ora entry on all servers to swap the connect strings (myserver_prod and
myserver_prod2).

# Failover - Standby Becomes Primary
End all activities on the standby database.
May need to resolve redo log gaps (not shown here).
On the standby: SQL> alter database recover managed standby database finish;
SQL> alter database commit to switchover to primary;
SQL> shutdown immediate
SQL> startup
Change tnsnames.ora entry on all servers to point the primary connect string to the standby database.
New standby needs to be created. Old primary is no longer functional.
Monitoring Standby Database
select count(*) from v$archive_gap;
This query detects gaps in the logs that have been received. If any rows are returned by this query then
there is a gap in the sequence numbers of the logs that have been received.
This gap must be resolved before logs can be applied.
SELECT decode(count(*),0,0,1) FROM v$managed_standby WHERE (PROCESS='ARCH' AND STATUS NOT
IN ('CONNECTED')) OR (PROCESS='MRP0' AND STATUS NOT IN ('WAIT_FOR_LOG','APPLYING_LOG'))
OR (PROCESS='RFS' AND STATUS NOT IN ('IDLE','RECEIVING'));
This query detects bad statuses. When a bad status is present this query will return a 1.

The ARCH process should always be CONNECTED. The MRP0 process should always be waiting for a
log or applying a log, and when this is not true it will report the error in the status. The RFS process
exists when the Primary is connected to the Standby and should always be IDLE or RECEIVING.
SELECT DECODE(COUNT(DISTINCT PROCESS),3,0,1) FROM v$managed_standby;
This query detects missing processes. If we do not have exactly 3 distinct processes then there is a
problem, and this query will return a 1.

The most likely process to be missing is the RFS which is the connection to the Primary database. You
must resolve the problem preventing the Primary from connecting to the Standby before this process
will start running again.


# Verify all STANDBY PROCESSES are running normally on the STANDBY database.

SELECT PROCESS,STATUS,RESETLOG_ID,SEQUENCE#,ACTIVE_AGENTS FROM V$MANAGED_STANDBY ;


Good results from this query show all processes connected with normal statuses.

SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME, APPLIED FROM V$ARCHIVED_LOG WHERE FIRST_TIME >
TRUNC(SYSDATE) ORDER BY SEQUENCE#;
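On 10g Release 2 standbys, the transport and apply lag can also be checked directly; a sketch:

SELECT name, value, time_computed FROM v$dataguard_stats WHERE name IN ('transport lag','apply lag');

Non-zero and growing lag values indicate that redo is not being shipped or applied in a timely manner.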

Data Guard Related Views


V$DATABASE - PROTECTION_LEVEL: current protection mode setting;
             FS_FAILOVER_STATUS: fast-start failover / synchronization status
DBA_LOGSTDBY_UNSUPPORTED - tables not supported by SQL Apply
DBA_LOGSTDBY_EVENTS - monitor transaction activity
V$LOG - redo log status and changes
V$MANAGED_STANDBY - recovery progress
