[Diagram: USER request served first from RAM (primary search), then from HARD DISK (secondary search)]
1. Whenever a user sends a request, the primary search for the data is done in RAM. If the
information is available, it is given to the user
2. Otherwise, a secondary search is done in the hard disk and a copy of that info is placed in
RAM before giving it to the user
3. The request and response between RAM and hard disk is called an I/O operation
4. Files bigger than the RAM will be fitted into RAM using swapping (flushing old data using the
Least Recently Used algorithm)
Questions:
1. Why copy data to RAM?
a. It benefits the next users who request the same data. Accessing
information from memory is always faster than accessing it from disk
2. What happens if the data size is more than the RAM size?
a. Data will be split based on the RAM size and swapping will take place (in windows,
this is called paging)
3. How will the data be managed in RAM?
a. Using the Least Recently Used (LRU) algorithm
Note: The primary goal of a DBA is to reduce response time, thereby increasing performance, and also
to avoid I/O (all these 3 are interlinked)
Note: Any request and response between memory and disk is called I/O
ORACLE DATABASE FUNCTIONALITY
[Diagram: USER connects through the INSTANCE to access the DATABASE]
ORACLE 10g DATABASE ARCHITECTURE
[Diagram: Oracle 10g architecture - a user process connects to a server process with its PGA; the SGA holds the shared pool (library cache, data dictionary cache), database buffer cache (WRITE list, LRU list, LRU end), log buffer cache, large pool, Java pool and stream pool; background processes SMON, PMON, DBWRn, LGWR, CKPT and ARCHn work against the database files: parameter file, password file, control files, redolog files, archived redolog files and data files]
INTERNALS OF USER CONNECTIVITY
1. Whenever a user starts an application, a user process is created on the client side. E.g.: the
sqlplusw.exe process starts when a user clicks on the sqlplus executable on a windows
operating system
2. This user process sends a request to establish a connection to the server by providing login
credentials (and sometimes a host string)
3. On the server side, the Listener service accepts all incoming connections and hands over the
user information (like username, password, ip address, network etc.) to a background process
called PMON (process monitor)
4. PMON then performs authentication of the user using the base tables. For this it does a
primary search in the data dictionary cache, and if a copy of the base table is not available there, it
copies it from the database
5. Once authenticated, the user receives an acknowledgement statement. This can be either a
successful / unsuccessful message
6. On a successful connection, PMON creates a server process and memory is allocated to that
server process, which is called the PGA (program global area)
7. The server process is the one which does the work on behalf of the user process (a typical
connection is shown below)
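A rough illustration of this flow from a client machine (the host string prod is an assumed tnsnames.ora entry, and scott/tiger is the classic demo account):
[oracle@client ~]$ tnsping prod
[oracle@client ~]$ sqlplus scott/tiger@prod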
BASE TABLES
1. Base tables store the information that is helpful for database functionality. This info is also called
dictionary information
2. Base tables are in the form XXX$ (i.e. the name is suffixed with a $ sign) and reside in the
SYSTEM tablespace
3. Information in base tables is in a cryptic format; because of this we can access the data but cannot
understand it (see the query sketch after this list)
4. An attempt to modify base tables (performing DML or DDL operations) may lead to database
corruption. Only oracle processes have the authority to modify them
5. Base tables are created at the time of database creation using the SQL.BSQ script
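As a hedged illustration, a dictionary base table such as OBJ$ can be queried (never modified) from a sysdba session; the output is cryptic dictionary data:
SQL> select obj#, name from sys.obj$ where rownum <= 5;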
ii. Compiling the statement
iii. Running or executing the statement
c. FETCH : Data will be retrieved in this phase
Note: For a PL/SQL program, BINDING happens after the PARSING phase (so it has 4 phases to go through)
[Diagram: INSTANCE = memory structures + background processes; DATABASE = logical structures + physical structures]
LOGICAL STRUCTURES OF DATABASE
1. The following are the logical structures of the database and are helpful in easy manageability of
the database
a. TABLESPACE - an alias name for a group of datafiles (or) a group of segments (or) a space
where tables reside
b. SEGMENT - a group of extents (or) an object that occupies space
c. EXTENT - a group of oracle data blocks (or) a memory unit allocated to the object
d. ORACLE DATA BLOCK - the basic unit of data storage (or) a group of operating system blocks
2. The following tablespaces are mandatory in a 10g database
a. SYSTEM - stores base tables (dictionary information)
b. SYSAUX - auxiliary tablespace to SYSTEM which also stores base tables required for
reporting purposes
c. TEMP - used for performing sort operations
d. UNDO - used to store before images, helpful for rollback of transactions or instance
recovery
Note: An Oracle 9i database should have all the above tablespaces except SYSAUX. SYSAUX was introduced in 10g to
avoid burden on the SYSTEM tablespace
DDL STATEMENT PROCESSING
1. DDL statement processing is the same as DML processing, as internally all DDLs are DML statements
on the base tables
2. For every DDL statement, base tables get modified with update/delete/insert statements.
For this reason, undo is generated in the case of DDL as well
ORACLE INSTANCE - It is the means through which users access / modify data in the database. It is a
combination of memory structures and background processes
SHARED GLOBAL AREA (SGA) - It is the memory area which contains several memory caches helpful in
reading and writing data
SHARED POOL
1. The shared pool contains the following components
a. Library cache - contains shared SQL & PL/SQL statements
b. Data dictionary cache - contains dictionary information in the form of rows, hence
also called the row cache
2. The size of the shared pool is defined using SHARED_POOL_SIZE (see the check below)
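A quick sketch of checking and resizing the shared pool (the 200m figure is purely illustrative; under ASMM this is normally left to oracle):
SQL> show parameter shared_pool_size
SQL> alter system set shared_pool_size=200m scope=both;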
DATABASE BUFFER CACHE
1. It is the memory area where a copy of the data is placed in the LRU list
2. A block in the DBC will have one of the following statuses
a. UNUSED - a block which has never been used
b. FREE - a block which was used already but is currently free
c. PINNED - a block currently in use
d. DIRTY - a block which got modified
3. The DBC contains an LRU list and a WRITE list, which helps in separating modified blocks from other blocks
4. The size of the DBC is defined using DB_CACHE_SIZE or DB_BLOCK_BUFFERS
LOG BUFFER CACHE - It is the memory area where a copy of the redo entries is maintained; its size is
defined by LOG_BUFFER
Note: The LBC should be allotted the smallest size of any memory component in the SGA
LARGE POOL
1. The large pool is used mainly at the time of RMAN backup
2. The large pool can dedicate some of its memory to the shared pool and take it back whenever the shared pool
is running low on free space
3. Size is defined using LARGE_POOL_SIZE
JAVA POOL - It is the memory area used to run java executables (like the JDBC driver); its size is defined using
JAVA_POOL_SIZE
STREAM POOL
1. It is the memory area used when replicating a database using oracle streams
2. This parameter was introduced in 10g and can be defined using STREAMS_POOL_SIZE
3. If the stream pool is not defined and streams are used, then 10% of memory from the shared pool will
be used. This may affect database performance
The below diagrams explain SMON instance recovery in detail
1. We know that LGWR writes redo entries into the redolog files. But if more and more
redo entries are generated (for huge transactions), the redolog file size increases and even terabytes of
storage would not be sufficient.
2. To overcome this, Oracle designed its architecture so that LGWR writes into 2 or more
redolog files in a cyclic order (shown in the below diagram)
3. When doing this, certain events are triggered, which are listed below
[Diagram: LGWR writing to redolog member 1 and redolog member 2 in cyclic order]
LOGSWITCH
LGWR moving from one redolog file to another is called a LOG SWITCH. At the time of log switch, the
following actions take place
- A checkpoint event occurs - this tells that dirty blocks should be made permanent in the datafiles
(Eg: it's just like the automatic saving of an email draft when composing in gmail)
- The CKPT process updates the latest SCN in the datafile headers and controlfiles by taking the info
from the redolog files
- DBWRn writes the corresponding dirty blocks from the write list to the datafiles
- The ARCHn process generates archives (copies of the online redolog files), but only if the database is in
archivelog mode
Note: The checkpoint event does not occur only at log switch. It can occur at repeated intervals, and this is
decided by the parameter LOG_CHECKPOINT_INTERVAL (till 8i) or FAST_START_MTTR_TARGET (from 9i), as sketched below
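A minimal sketch of controlling checkpoint frequency from 9i onwards (the 300-second target is illustrative, not a recommendation):
SQL> alter system set fast_start_mttr_target=300 scope=both;
SQL> show parameter fast_start_mttr_target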
a. Roll forward - compares the SCN between the redolog files and the datafile headers and
makes sure committed data is written to the datafiles
b. Opens the database for user access
c. Rolls back uncommitted transactions with the help of the undo tablespace
2. It coalesces the tablespaces which are defined with automatic segment space management
3. It releases the temporary segments occupied by transactions when they are completed (a
more detailed post is available @ http://pavandba.wordpress.com/2010/04/20/how-temporarytablespace-works/ )
[Diagram: transactions 1, 2 and 3 spread across two redolog files; Controlfile - 3]
In the above diagram, assume that transactions 1, 2 and 3 are committed and 4 is going on. Also,
as a checkpoint occurred at log switch, the complete data of 1 and 2, and also part of 3, were written to the
datafiles
- Assume LGWR is writing to the second file for transaction 4 and an instance crash occurred.
- During recovery, SMON starts comparing the SCN between the datafile headers and the redolog files.
It also checks for the commit point.
- In the above example, 1 & 2 are written and committed, so there is nothing to recover. But for 3,
only half the data is written. To write the other half, SMON will initiate DBWRn. But DBWRn will be
unable to do that, as the dirty blocks were cleaned out from the write list (due to the instance restart)
- Now DBWRn takes the help of the server process, which will actually regenerate the dirty blocks with the
help of the redo entries that are already written to the redolog files
EXAMPLE : 2
[Diagram: transactions 1, 2 and 3 spread across two redolog files, with transaction 3 spanning the log switch; Controlfile - 3]
In the above example, transaction 3 is not yet committed, but because a log switch occurred and
the checkpoint event triggered, part of its data was written to the datafiles
Assume an instance crash occurred and SMON is performing instance recovery
- SMON starts comparing the SCN as usual, and when it comes to 3 it identifies that its data is written to the
datafiles, but 3 is actually not committed. So this data needs to be reverted. Again it asks
DBWRn to take up this job
- DBWRn in turn takes help from the server process, which will generate blocks with the old values with the
help of the undo tablespace
Note: For roll forward, redo entries from the redolog files are used, whereas for rollback, before images
from the undo tablespace are used
4. when 1 MB of redo is generated
5. every 3 sec
CKPT - it updates the latest SCN in the control files and datafile headers by taking the information from
the redolog files. This happens at every log switch
ARCHn - it generates offline redolog files in the specified location. This is done only if the database is
in archivelog mode
MMON
======
The Manageability Monitor (MMON) process was introduced in 10g and is associated with the
Automatic Workload Repository new features used for automatic problem detection and self-tuning.
MMON writes out the required statistics for AWR on a scheduled basis.
CJQn
=====
This is the Job Queue monitoring process which is initiated with the job_queue_processes parameter.
[Diagram: LGWR writing alternately to redo log file 1 and redo log file 2]
CONTROL FILES - These files store crucial database information like
1. database name and creation timestamp
2. latest SCN
3. locations and sizes of redolog files and datafiles
4. parameters that define the size of the controlfile
ARCHIVED REDOLOG FILES - These files are created by the ARCHn process if archivelog mode is enabled.
The size of the archives will be equal to or less than that of the redolog files
PARAMETER FILE
1. This file contains parameters that will define the characteristics of database.
2. It is a text file in the form init<SID>.ora [SID = instance name] and resides in
ORACLE_HOME/dbs (on unix) and ORACLE_HOME/database (on windows)
3. We can create spfile from pfile or pfile from spfile using below commands
a. SQL> create spfile from pfile;
b. SQL> create pfile from spfile;
4. Order of precedence for database startup: spfile<SID>.ora is used first; if it is not found, init<SID>.ora is used
The effect of the SCOPE clause in ALTER SYSTEM is shown below (assume a parameter currently at 200 is set to 300; example commands below):

Scope     Before shutdown     After shutdown
Memory    300                 200
Spfile    200                 300
Both      300                 300
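A quick sketch of the three scopes above (db_cache_size and the 300m value are just illustrative):
SQL> alter system set db_cache_size=300m scope=memory;
SQL> alter system set db_cache_size=300m scope=spfile;
SQL> alter system set db_cache_size=300m scope=both;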
4. Using ASMM, we can define the total memory for the SGA and oracle will decide how much to distribute
to each cache. This is possible by setting the SGA_TARGET parameter (new in 10g; an example follows the parameter table below)
Note: to enable ASMM, we should define STATISTICS_LEVEL = TYPICAL (default) or ALL
5. The maximum size for the SGA is defined by SGA_MAX_SIZE. Depending on the transaction load, the SGA size will
vary between SGA_TARGET and SGA_MAX_SIZE
6. It has been observed that individual parameters are also defined in some 10g databases, which
means those values act as minimum values, the SGA_TARGET value acts as the medium, and
SGA_MAX_SIZE as the maximum value
7. Oracle 10g introduced the new background process MMAN in order to manage the memory for SGA
components
Note: SGA size is not at all dependent on database size; it is calculated based on the transactions
hitting the database
Oracle 8i:  PGA - SORT_AREA_SIZE, WORK_AREA_SIZE, HASH_AREA_SIZE, BITMAP_WORK_AREA
            SGA - SHARED_POOL_SIZE, DB_CACHE_SIZE, LOG_BUFFER, LARGE_POOL_SIZE, JAVA_POOL_SIZE
Oracle 9i:  PGA - PGA_AGGREGATE_TARGET
            SGA - SHARED_POOL_SIZE, DB_CACHE_SIZE, LARGE_POOL_SIZE, JAVA_POOL_SIZE, LOG_BUFFER
Oracle 10g: PGA - PGA_AGGREGATE_TARGET
            SGA - SGA_TARGET, LOG_BUFFER
Oracle 11g: MEMORY_TARGET (covers both PGA and SGA), LOG_BUFFER still set separately
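To switch a 10g database to ASMM as per the table above, something like the following should work (the sizes are illustrative, not recommendations):
SQL> alter system set sga_max_size=800m scope=spfile;
SQL> alter system set sga_target=600m scope=both;
SQL> show parameter sga_target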
c. Non-default parameters
d. Archivelog generation information
e. Checkpoint information (optional) etc
2. The alert log file only specifies the error message in brief, but it points to a file which has
more information about the error. These are called trace files
3. Trace files are of 3 types - background trace files, core trace files and user trace files
4. If any background process fails to perform, it throws an error and a trace file will be generated
(called background trace files) in the ORACLE_HOME/admin/SID/bdump location
5. For all operating system related errors with oracle, trace files will be generated (called core
trace files) in the ORACLE_HOME/admin/SID/cdump location
6. For all user related errors, trace files (called user trace files) will be generated in the
ORACLE_HOME/admin/SID/udump location
7. The default location of these files can be changed by defining the following parameters
a. BACKGROUND_DUMP_DEST
b. CORE_DUMP_DEST
c. USER_DUMP_DEST
8. The above 3 parameters are replaced with a single parameter DIAGNOSTIC_DEST in oracle 11g.
The default trace location becomes <DIAGNOSTIC_DEST>/diag/rdbms/dbname/SID/trace (see the check below)
9. Oracle 11g contains two versions of the alert log file. One is in text format, which resides in the above
location, and the other one is in XML format
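In 11g the diagnostic locations can be confirmed from the database itself; a quick check (V$DIAG_INFO is a standard 11g view):
SQL> select name, value from v$diag_info;
SQL> show parameter diagnostic_dest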
DATABASE OPERATING MODES
STARTUP PHASES
[Diagram: startup phases - NOMOUNT (SPFILE or PFILE read), MOUNT (control files read), OPEN (datafiles and redolog files opened)]
Commands:
SQL> startup
or
SQL> startup nomount
SQL> alter database mount;
SQL> alter database open;
Note: A database can be restarted without issuing a shutdown command using SQL> startup force
SHUTDOWN TYPES

Mode             Whether checkpoint occurs?
NORMAL           Yes
TRANSACTIONAL    Yes
IMMEDIATE        Yes
ABORT            No
SHARED SERVER ARCHITECTURE
[Diagram: shared server architecture - user processes Up1, Up2 and Up3 connect to a DISPATCHER; requests go into the request queue, shared server processes execute them using the UGA, and results return through the response queue; SGA components (shared pool, database buffer cache with WRITE/LRU lists, log buffer cache, large pool, Java pool, stream pool) and background processes (SMON, PMON, DBWRn, LGWR, CKPT, ARCHn) with the database files are as in the dedicated server diagram]
1. Multiple user requests will be received by the dispatcher and placed in the request
queue
2. Shared server processes take the requests from the request queue and process them
inside the database
3. The results are placed in the response queue, from where the dispatcher sends them to the
corresponding users
4. Instead of the PGA, statements get executed in the UGA (user global area) in shared server
architecture
5. Shared server architecture can be enabled by specifying the following parameters (a sketch
follows this list)
a. DISPATCHERS
b. MAX_DISPATCHERS
c. SHARED_SERVERS
d. MAX_SHARED_SERVERS
e. CIRCUITS and MAX_CIRCUITS (optional)
6. This architecture should be enabled only if ora-04030 or ora-04031 errors are observed
frequently in the alert log file
7. To make shared server architecture effective, SERVER=SHARED should be mentioned in the
client TNSNAMES.ORA file
8. A single dispatcher can handle 20 user requests, whereas a single shared server process
can handle 16 requests concurrently
Note: SMON can have 16 slave processes and DBWRn can have 20 slave processes working
concurrently
Note: startup and shutdown are not possible if sysdba connects through a shared server
connection
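A minimal sketch of enabling shared server with the parameters above (2 dispatchers and the server counts are illustrative values):
SQL> alter system set dispatchers='(PROTOCOL=TCP)(DISPATCHERS=2)' scope=both;
SQL> alter system set shared_servers=5 scope=both;
SQL> alter system set max_shared_servers=20 scope=both;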
ORACLE 11g DATABASE ARCHITECTURE
[Diagram: Oracle 11g architecture - same components as the 10g diagram plus the RESULT CACHE inside the SGA]
SELECT STATEMENT PROCESSING
1. The server process receives the statement sent by the user process on the server side and hands it
over to the library cache of the shared pool
2. The 1st phase of sql execution, i.e. PARSING, will be done in the library cache
3. Then the OPTIMIZER (the brain of the oracle sql engine) will generate many execution plans, but chooses
the best one based on time & cost (time = response time, cost = cpu resource utilization)
4. The server process sends the parsed statement with its execution plan to the PGA, and the 2nd phase, i.e.
EXECUTION, will be done there
5. After execution, the server process starts searching for the data from the LRU end of the LRU list and this
search continues till it finds the data or reaches the MRU end. If it finds the data, it is given to the
user. If it doesn't find any data, it means the data is not there in the database buffer cache
6. In such cases, the server process copies the data from the datafiles to the MRU end of the LRU list of the database
buffer cache
7. From the MRU end the rows pertaining to the requested table will be filtered and placed in the SERVER
RESULT CACHE along with the execution plan id, and then given to the user (displayed on the user's
console)
Note: for statements issued a second time, the server process gets the parsed tree and plan id from the
library cache and goes straight to the server result cache and compares the plan id. If the plan id
matches, the corresponding rows are given to the user. So, in this case, it skips all 3 phases of SQL
execution, by which response time is much faster than a 10g database.
2. The size of the result cache is dependent on the parameters RESULT_CACHE_MODE and RESULT_CACHE_MAX_SIZE
3. The possible values for RESULT_CACHE_MODE are MANUAL or FORCE. When set to MANUAL, the sql
query should have the hint /*+ result_cache */ (illustrated below). When using FORCE, all queries will use the result cache.
4. Even after setting it to FORCE, we can still prevent any query from using the result cache using the hint
/*+ no_result_cache */
5. Oracle recommends enabling the result cache only if the database is hit with a lot of statements
which are frequently repeated. So it must be enabled in an OLTP environment
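A small illustration of the two hints (scott.emp is the usual demo table; any repeated query behaves the same way):
SQL> select /*+ result_cache */ deptno, avg(sal) from scott.emp group by deptno;
SQL> select /*+ no_result_cache */ deptno, avg(sal) from scott.emp group by deptno;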
6. If we specify the MEMORY_TARGET parameter, oracle will allocate 0.25% of the shared pool size as the
result cache. If we specify SGA_TARGET (which is of 10g), the result cache will be 0.5% of the shared
pool. If we use individual parameters (like in 9i), the result cache will be 1% of the shared pool size
7. When any DML/DDL statements modify table data or structure, the data in the result cache becomes
invalid and needs to be processed again
Note: http://pavandba.wordpress.com/2010/07/15/how-result-cache-works/
… SGA_TARGET parameter if you have that parameter instead). In addition to the …
Restrictions on Using the SQL Query Result Cache
You can't cache results in the SQL Query Result Cache for the following objects:
1. Temporary tables
2. Dictionary tables
3. Nondeterministic PL/SQL functions
4. The currval and nextval pseudocolumns
5. The SYSDATE, SYS_TIMESTAMP, CURRENT_DATE, CURRENT_TIMESTAMP, LOCAL_TIMESTAMP,
USERENV, SYS_CONTEXT, and SYS_GUID functions
6. You also won't be able to cache subqueries, but you can use the RESULT_CACHE hint in an inline
view.
4. We can check memory sufficiency and tune it by taking advice from AMM using V$MEMORY_TARGET_ADVICE
5. We can check the memory allocation to SGA components by the V$MEMORY_DYNAMIC_COMPONENTS view.
6. Also we can find current and previous resize operations through the V$MEMORY_RESIZE_OPS view (queries below).
More about AMM: http://pavandba.wordpress.com/2010/07/21/automatic-memory-management-in-11g/
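The views named above can be queried directly; a small sketch (all three are standard 11g views):
SQL> select component, current_size/1024/1024 as size_mb from v$memory_dynamic_components;
SQL> select memory_size, estd_db_time from v$memory_target_advice order by memory_size;
SQL> select component, oper_type, start_time from v$memory_resize_ops;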
How to set Environment Variables?
1. When we install oracle software, sqlplus or any other oracle commands will not work.
To make them work, we need to set environment variables in the .bash_profile file
2. Below are the steps to do the same
[oracle@pc1 ~] $ more /etc/oraInst.loc
[oracle@pc1 ~] $ cd /u02/oraInventory/ContentsXML
[oracle@pc1 ~] $ more inventory.xml
From the above commands, we get the oracle home name and path. We need to set the
ORACLE_HOME path and bin location in the .bash_profile file as follows
[oracle@pc1 ~] $ vi .bash_profile
export ORACLE_HOME=/u02/ora11g
export PATH=$PATH:/usr/bin:$ORACLE_HOME/bin
7. Execute create database script
SQL> @db.sql
8. Once database is created, it will be opened automatically
9. Execute the catalog.sql and catproc.sql scripts
SQL> @$ORACLE_HOME/rdbms/admin/catalog.sql
SQL> @$ORACLE_HOME/rdbms/admin/catproc.sql
10. Finally add this database entry to oratab file
Note: Sometimes, we may get the error "Oracle Instance terminated. Disconnection forced". This is
because the undo tablespace name mentioned in the pfile differs from the one
mentioned in the database creation script
[Diagram: redolog GROUP 1 and GROUP 2, each multiplexed with member 1 and member 2]
COMMANDS
# To check redolog file members
SQL> select member from v$logfile;
# To check redolog group info,status and size
SQL> select group#,members,status,sum(bytes/1024/1024) from v$log
group by group#,members,status;
# To add a redolog file group
SQL> alter database add logfile group 4 ('/u02/prod/redo04a.log','/u02/prod/redo04b.log')
size 50m;
# To add a redolog member
SQL> alter database add logfile member '/u02/prod/redo01b.log' to group 1;
# To drop a redolog group
SQL> alter database drop logfile group 4;
# To drop a redolog member
SQL> alter database drop logfile member '/u02/prod/redo01b.log';
Note: Even after we drop a logfile group or member, the file still exists at OS level
Note: We cannot drop a member or a group which is in CURRENT status
# Reusing a member
SQL> alter database add logfile member '/u02/prod/redo04a.log' reuse to group 4;
The above command will make the server process update the controlfile with the new file
name
5. SQL> alter database open;
Note: We cannot resize a redolog member; instead we need to create a new group with the required
size and drop the old group
# Handling a corrupted redolog file
SQL> alter database clear logfile ('/u02/prod/redo01a.log');
Or
SQL> alter database clear unarchived logfile ('/u02/prod/redo01a.log');
4. SQL> shutdown immediate
5. SQL> ! cp /u02/prod/control01.ctl /u02/prod/control04.ctl
6. SQL> startup
# Steps to multiplex controlfile using pfile
1. SQL> shutdown immediate
2. [oracle@pc1 ~] $ cd $ORACLE_HOME/dbs
3. [oracle@pc1 ~] $ vi initprod.ora
Edit control_files parameter and add new path to it and save the file
4. [oracle@pc1 ~] $ cp /u02/prod/control01.ctl /u02/prod/control04.ctl
5. [oracle@pc1 ~] $ sqlplus / as sysdba
6. SQL> startup
Note: we can create a maximum of 8 copies of controlfiles
ARCHIVELOG FILES
1. Archive log files are offline copies of the online redolog files and are required to recover the
database when we have an old backup
2. The following are the parameters used for archivelog mode, with their descriptions
a. LOG_ARCHIVE_START - till 9i we used to have two types of archiving (manual &
automatic). But from 10g we have only automatic archiving. This parameter enables
automatic archiving and is useful only till 9i (deprecated in 10g)
b. LOG_ARCHIVE_TRACE - used to generate a trace file to see how the ARCHn process is
working
c. LOG_ARCHIVE_MIN_SUCCEED_DEST - defines the minimum destinations to which the ARCHn
process should complete archiving by the time LGWR starts writing to the online redolog file
d. LOG_ARCHIVE_MAX_PROCESSES - starts multiple ARCH processes and helps in
faster archiving
e. LOG_ARCHIVE_LOCAL_FIRST - if enabled, the ARCHn process will first generate the archive on the
local machine and then on the remote machine. It is used in case of a dataguard setup
f.
f.
h. LOG_ARCHIVE_DEST_1 ... LOG_ARCHIVE_DEST_10 - if we want to archive to more than 2 destinations, we should use these
i.
j.
3. When we want to multiplex into only 2 locations, from 10g we should use the LOG_ARCHIVE_DEST
and LOG_ARCHIVE_DUPLEX_DEST parameters
4. The default location for archivelogs in 10g is the Flash Recovery Area (FRA). The archives in this
location are deleted when space pressure arises. The location and size of the FRA can be known
using the DB_RECOVERY_FILE_DEST and DB_RECOVERY_FILE_DEST_SIZE parameters respectively
5. To disable archivelog generation into the FRA, we shouldn't use LOG_ARCHIVE_DEST, but should use
LOG_ARCHIVE_DEST_1
Note: archivelogs can also be deleted based on RMAN deletion policy
COMMANDS
# How to check if database is in archive log mode?
SQL> archive log list;
# Enabling archivelog mode in 10g/11g
1. SQL> shutdown immediate
2. SQL> startup mount
3. SQL> alter database archivelog;
4. SQL> alter database open;
Note: When we enable archivelog mode using the above method, the archives are generated by default
in the Flash Recovery Area (FRA). It is the location where files required for recovery exist; it was introduced in
10g
# To change the archive destination from FRA to customized location
1. SQL> alter system set log_archive_dest_1='LOCATION=/u03/archives' scope=spfile;
2. SQL> shutdown immediate
3. SQL> startup
# Enabling archivelog mode in 9i
1. SQL> alter system set log_archive_start=TRUE scope=spfile;
2. SQL> alter system set log_archive_dest='/u03/archives' scope=spfile;
3. SQL> alter system set log_archive_format='prod_%s.arc' scope=spfile;
4. SQL> shutdown immediate
5. SQL> startup mount
6. SQL> archive log start;
7. SQL> alter database archivelog;
8. SQL> alter database open;
BLOCK SPACE UTILIZATION PARAMETERS
[Diagram: block space utilization - block header, PCTFREE level (20%) and PCTUSED level (40%); a freelist in the data dictionary cache (DDC) tracking block ids 121-123 with Used/Free status, alongside the datafile header]
1. In DMT, free block information is maintained in the form of a freelist which resides
in the data dictionary cache
2. Every time DBWRn requires a free block, the server process performs an I/O to get free block
information from the freelist. This happens for all the free blocks. Because more I/Os
are being performed, it degrades the performance of the database
[Diagram: LMT - a bitmap in the datafile header tracking block ids 121-124 with status bits (0 = free, 1 = used)]
1. In a locally managed tablespace, free block information is maintained in the datafile header itself in
the form of bitmap blocks
2. These bitmaps are represented with 0 and 1, where 0 means free and 1 means used
3. When a free block is required, the server process searches the bitmap block and informs DBWRn,
thus avoiding the I/O and increasing database performance
4. In 8i the default tablespace type is dictionary, but we can still create locally managed tablespaces,
whereas in 9i the default is local (see the query below)
Note: In any version, we can create a dictionary managed tablespace only if the SYSTEM tablespace is
dictionary managed
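To see how existing tablespaces are managed, a quick check against the dictionary (standard view and columns):
SQL> select tablespace_name, extent_management, allocation_type, segment_space_management
from dba_tablespaces;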
Tablespace creation syntax
create tablespace mytbs
datafile '/u02/ora10g/prod/mytbs01.dbf' size 50m
autoextend on maxsize 200m
extent management local / dictionary
segment space management auto / manual
initrans 1 maxtrans 255
pctfree 20 pctused 40
initial 1m next 5m
pctincrease / uniform / autoallocate
minextents 1 maxextents 500
logging / nologging
blocksize 8k; -- this is optional
Note: Even though we specify a tablespace as NOLOGGING, all DML transactions still
generate redo entries (this is to help in instance recovery). NOLOGGING is applicable only in the
below situations
1. create table B as select * from A nologging;
2. insert into B select * from A nologging;
3. alter index <index_name> rebuild online nologging;
4. create index <index_name> on table_name(column_name) nologging;
5. any DML operations on LOB segments
COMMANDS
# To create a tablespace
SQL> create tablespace mytbs
datafile '/u02/prod/mytbs01.dbf' size 10m;
# To create a tablespace in 9i
SQL> create tablespace mytbs
datafile '/u02/prod/mytbs01.dbf' size 10m
segment space management auto;
# To create dictionary managed tablespace
SQL> create tablespace mytbs
datafile '/u02/prod/mytbs01.dbf' size 10m
extent management dictionary;
# To view tablespace information
SQL> select allocation_type,extent_management,contents from dba_tablespaces where
tablespace_name='MYDATA';
# To view datafile information
SQL> select file_name,sum(bytes),autoextensible,sum(maxbytes) from dba_data_files where
tablespace_name='MYDATA' group by file_name,autoextensible;
# To check the database size
SQL> select sum(bytes/1024/1024/1024) from dba_data_files;
# To enable/disable autoextend
SQL> alter database datafile '/u02/prod/mytbs01.dbf' autoextend on maxsize 100m;
SQL> alter database datafile '/u02/prod/mytbs01.dbf' autoextend off;
# To resize a datafile
SQL> alter database datafile '/u02/prod/mytbs01.dbf' resize 20m;
# To add a datafile
SQL> alter tablespace mytbs add datafile '/u02/prod/mytbs02.dbf' size 10m;
Note: If we have multiple datafiles, extents will be allocated in round robin fashion
Note: Adding a datafile to increase tablespace size is the best option if we have multiple
hard disks
# To rename a tablespace (10g/11g)
SQL> alter tablespace mytbs rename to mydata;
# To convert DMT to LMT or vice versa
SQL> exec dbms_space_admin.tablespace_migrate_to_local('MYDATA');
SQL> exec dbms_space_admin.tablespace_migrate_from_local('MYDATA');
Note: Local to dictionary conversion is possible only if the SYSTEM tablespace is not local
# To rename or relocate a datafile
1. SQL> alter tablespace mydata offline;
2. SQL> !mv /u02/prod/mytbs01.dbf /u02/prod/mydata01.dbf
3. SQL> alter tablespace mydata rename datafile '/u02/prod/mytbs01.dbf' to
'/u02/prod/mydata01.dbf';
4. SQL> alter tablespace mydata online;
# To rename or relocate the system datafile
1. SQL> shutdown immediate
2. SQL> !mv /u02/prod/system.dbf /u02/prod/system01.dbf
3. SQL> startup mount
4. SQL> alter database rename file '/u02/prod/system.dbf' to '/u02/prod/system01.dbf';
5. SQL> alter database open;
# To drop a tablespace
SQL> drop tablespace mydata;
This will remove tablespace info from the base tables, but the datafiles still exist at OS level
Or
SQL> drop tablespace mydata including contents;
This will remove tablespace info and also clear the datafiles (i.e. it will empty their contents)
Or
SQL> drop tablespace mydata including contents and datafiles;
This will remove it at the oracle level and also at the OS level
# To reuse a datafile
SQL> alter tablespace mydata add datafile '/u02/prod/mydata02.dbf' reuse;
BIGFILE TABLESPACE
1. For managing the datafiles in VLDB, oracle introduced bigfile tablespaces in 10g
2. A bigfile tablespace's single datafile can grow into terabytes based on the block size. For
example, for an 8KB block size a single file can grow up to 32TB
3. Bigfile tablespaces should be created only when we have striping and mirroring
implemented at the storage level in real time
4. We cannot add another datafile to a bigfile tablespace; it always contains exactly one datafile
5. Bigfile tablespaces can be created only as LMT and with ASSM
Note: Either in LMT or DMT, ASSM once defined cannot be changed
# To create a big file tablespace
SQL> create bigfile tablespace bigtbs
datafile '/u02/prod/bigtbs01.dbf' size 50m;
CAPACITY PLANNING
1. It is the process of estimating the space requirement for future data storage
2. We do this by collecting free space info for all the tablespaces in the database on a
daily, weekly or monthly basis
3. We need to observe the difference in free space and should be able to analyze how much
space is required in future
4. The following query is used to find free space
SQL> select tablespace_name,sum(bytes/1024/1024) from dba_free_space group by
tablespace_name;
5. Eg: If we observe that tablespace USERS free space is reducing by 500m daily, for the next 1 month
we need 500m*30 = 15GB
6. Once we analyze the required space, we need to check the availability for the same in the
mount point. If space is not there, contact the storage team to get space added
UNDO MANAGEMENT
1. Undo tablespace features are enabled by setting following parameters
a. UNDO_MANAGEMENT
b. UNDO_TABLESPACE
c. UNDO_RETENTION
2. Only one undo tablespace will be in action at a given time
Imp points to remember:
1. The undo blocks occupied by a transaction become expired once the transaction commits.
2. Data will be selected from the undo tablespace if a DML operation is being performed on the table
on which the select query is also fired. This is to maintain read consistency.
[Diagram: dirty blocks of table B in the buffer cache and undo tablespace, not yet written to the data file]
1. In the above situation, Tx1 issued an update statement on table A and committed. Because of
this, dirty blocks were generated in the DBC and undo blocks were used from the undo tablespace. Also,
the dirty blocks of A are not yet written to the datafiles
2. Tx2 is updating table B and, because of the non-availability of undo blocks, Tx2 overwrote the expired
undo blocks of Tx1
3. Tx3 is selecting the data from A. This operation will first look for data in the undo tablespace, but
as the blocks of A are already occupied by B (Tx2), it will not retrieve any data. Then it will check for the
latest data in the datafiles, but as the dirty blocks are not yet written to the datafiles, the transaction
will be unable to get the data. In this situation it will throw the ORA-1555 (snapshot too old) error
Solutions to avoid ORA-1555
1. Re-issuing the SELECT statement is a solution when we get ora-1555 very rarely
2. It may occur due to an undersized undo tablespace. So increasing the undo tablespace size is one
solution
3. Increasing the undo_retention value is also a solution
4. Avoiding frequent commits
5. Using the RETENTION GUARANTEE clause on the undo tablespace. This is available only from 10g (see the sketch below)
Note: Don't ever allow undo & temp tablespaces to be in AUTOEXTEND ON
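A hedged sketch of solution 5 and of raising retention (undotbs1 and the 1800-second value are illustrative):
SQL> alter tablespace undotbs1 retention guarantee;
SQL> alter system set undo_retention=1800 scope=both;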
COMMANDS
# To create UNDO tablespace
SQL> create undo tablespace undotbs2
datafile '/u02/prod/undotbs2_01.dbf' size 30m;
# To change undo tablespace
SQL> alter system set undo_tablespace='UNDOTBS2' scope=memory/spfile/both;
# To create temporary tablespace
SQL> create temporary tablespace mytemp
tempfile '/u02/prod/mytemp01.dbf' size 30m;
# To add a tempfile
SQL> alter tablespace mytemp add tempfile '/u02/prod/mytemp02.dbf' size 30m;
# To resize a tempfile
SQL> alter database tempfile '/u02/prod/mytemp01.dbf' resize 50m;
# To create temporary tablespace group
SQL> create temporary tablespace mytemp
tempfile '/u02/prod/mytemp01.dbf' size 30m
tablespace group grp1;
# To view tablespace group information
SQL> select * from dba_tablespace_groups;
# To view temp tablespace information
SQL> select file_name,sum(bytes) from dba_temp_files where tablespace_name='MYTEMP'
group by file_name;
# To move temp tablespace between groups
SQL> alter tablespace mytemp tablespace group grp2;
TABLESPACE ENCRYPTION
Oracle Database 10g introduced Transparent Data Encryption (TDE), which enabled you to
encrypt columns in a table. The feature is called transparent because the database takes care
of all the encryption and decryption details. In Oracle Database 11g, you can also encrypt an
entire tablespace. In fact, tablespace encryption helps you get around some of the restrictions
imposed on encrypting a column in a table through the TDE feature. For example, you can get
around the restriction that makes it impossible for you to encrypt a column that's part of a
foreign key or that's used in another constraint, by using tablespace encryption.
Restrictions on Tablespace Encryption
You must be aware of the following restrictions on encrypting tablespaces. You
1. Can't encrypt a temporary or an undo tablespace.
2. Can't change the security key of an encrypted tablespace.
3. Can't encrypt an external table.
As with TDE, you need to create an Oracle Wallet to implement tablespace encryption.
Therefore, let's first create an Oracle Wallet before exploring how to encrypt a tablespace.
Creating the Oracle Wallet
Tablespace encryption uses Oracle Wallets to store the encryption master keys. Oracle Wallets
can be either encryption wallets or auto-open wallets. When you start the database, the
auto-open wallet opens automatically, but you must open the encryption wallet yourself.
Oracle recommends that you use an encryption wallet for tablespace encryption, unless you're
dealing with a Data Guard setup, where it's better to use the auto-open wallet.
You can create the wallet easily by executing the following command in SQL*Plus:
SQL> alter system set encryption key identified by "password";
The previous command creates an Oracle Wallet if there isn't one already and adds a master
key to that wallet. By default, Oracle stores the Oracle Wallet, which is simply an operating
system file named ewallet.p12, in an operating system-determined location. You can, however,
specify a location for the file by setting the parameter encryption_wallet_location in the
sqlnet.ora file, as shown here:
ENCRYPTION_WALLET_LOCATION=
(SOURCE=
(METHOD=file)
(METHOD_DATA= (DIRECTORY=/apps/oracle/general/wallet) ) )
You must first create a directory named wallet under the $ORACLE_BASE/admin/$ORACLE_SID
directory. Otherwise, you'll get an error when creating the wallet: ORA-28368: cannot auto-create
wallet. Once you create the directory named wallet, issue the following command to
create the Oracle Wallet:
SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "secure";
System altered.
The ALTER SYSTEM command shown here will create a new Oracle Wallet if you don't have one.
It also opens the wallet and creates a master encryption key. If you have an Oracle Wallet, the
command opens the wallet and re-creates the master encryption key. Once you've created the
Oracle Wallet, you can encrypt your tablespaces.
Creating an Encrypted Tablespace
The following example shows how to encrypt a tablespace:
SQL> CREATE TABLESPACE tbsp1 DATAFILE '/u01/app/oracle/test/tbsp1_01.dbf' SIZE 500m
ENCRYPTION DEFAULT STORAGE (ENCRYPT);
Tablespace created.
The storage clause ENCRYPT tells the database to encrypt the new tablespace. The clause
ENCRYPTION tells the database to use the default encryption algorithm, AES128. You can
specify an alternate algorithm such as 3DES168, AES192, or AES256 through the clause USING,
which you specify right after the ENCRYPTION clause. Since I chose the default encryption
algorithm, I didn't use the USING clause here.
The following example shows how to specify the optional USING clause, to define a nondefault
encryption algorithm.
SQL> CREATE TABLESPACE mytbsp2 DATAFILE '/u01/app/oracle/test/mytbsp2_01.dbf' size
500m ENCRYPTION USING '3DES168' DEFAULT STORAGE (ENCRYPT);
Tablespace created.
The example shown here creates an encrypted tablespace, MYTBSP2, that uses the 3DES168
encryption algorithm instead of the default algorithm.
Note: You can check whether a tablespace is encrypted by querying the DBA_TABLESPACES
view
The database automatically encrypts data during writes and decrypts it during reads. Since
both encryption and decryption aren't performed in memory, there's no additional memory
requirement. There is, however, a small additional I/O overhead. The data in the undo
segments and the redo log keep the encrypted data in encrypted form. When you
perform operations such as sort or join operations that use the temporary tablespace, the
encrypted data remains encrypted in the temporary tablespace.
Note: the OMF parameters below are dynamic parameters
COMMANDS
# To configure OMF parameters
SQL> alter system set db_create_file_dest='/u02/prod' scope=memory;
SQL> alter system set db_create_online_log_dest_1='/u02/prod' scope=memory;
USER MANAGEMENT
1. User creation should be done after clearly understanding the requirement from the application team
2. A schema is a collection of objects, whereas a user is one who accesses the tables of another schema.
Many times we interchange these two terms to represent a user in the database.
For example, we can refer to scott as a schema or a user
3. Whenever we create a user, we should assign a default permanent tablespace (which allows
creating tables) and a default temporary tablespace (which allows doing sorting). At any moment
we can change them
4. If we don't assign a default tablespace and temporary tablespace, oracle will take the default values.
To see the default values we can use the DATABASE_PROPERTIES view.
5. If we want a user to create a table, we need to assign quota on that tablespace to the user; only
then can the user create a table.
6. After creating a user, we should grant privileges. Privileges for a user are of 2 types (examples are shown below)
a. System level privileges - any privilege that is used to modify the system (i.e. database) is
considered a system level privilege.
b. Object level privileges - any privilege granted on an object of another schema is
called an object level privilege.
7. Privileges can be grouped into roles; role information can be checked using the ROLE_SYS_PRIVS,
ROLE_TAB_PRIVS and ROLE_ROLE_PRIVS views.
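A small sketch of such grants (user1, mytbs and scott.emp are illustrative names):
SQL> grant create session, create table to user1; -- system level privileges
SQL> grant select, update on scott.emp to user1; -- object level privileges
SQL> alter user user1 quota 50m on mytbs; -- quota as per point 5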
COMMANDS
# To create a user
SQL> create user user1 identified by user1
default tablespace mytbs
temporary tablespace temp;
# To check default permanent tablespace and temporary tablespace
SQL> select property_name,property_value from database_properties where property_name
like 'DEFAULT%';
# To change default permanent tablespace
SQL> alter database default tablespace mydata;
# To change default temporary tablespace
SQL> alter database default temporary tablespace mytemp;
# To check system privileges for a user
SQL> select privilege from dba_sys_privs where grantee='SCOTT';
# To check object level privileges
SQL> select owner,table_name,privilege from dba_tab_privs where grantee='SCOTT';
# To check roles assigned to a user
SQL> select granted_role from dba_role_privs where grantee='SCOTT';
# To check permissions assigned to a role
SQL> select privilege from role_sys_privs where role='MYROLE';
SQL> select owner,table_name,privilege from role_tab_privs where role='MYROLE';
SQL> select granted_role from role_role_privs where role='MYROLE';
# To drop a user
SQL> drop user user1;
Or
SQL> drop user user1 cascade;
PROFILE MANAGEMENT
1. Profile management is divided into
a. Password management
b. Resource management
2. Profiles are assigned to users to control access to resources and also to provide enhanced
security while logging into the database
3. The following are the parameters for password policy management
a. FAILED_LOGIN_ATTEMPTS - specifies how many times a user can fail to login to the
database
b. PASSWORD_LOCK_TIME - a user who exceeds failed_login_attempts will be locked. This
parameter specifies for how long the account remains locked; it gets unlocked
automatically after that time. A DBA can also unlock it manually
c. PASSWORD_LIFE_TIME - specifies after how many days a user needs to change the
password
d. PASSWORD_GRACE_TIME - specifies the grace period for the user to change the
password. If the user still fails to change the password even after the grace time, the account will be
locked and only a DBA can manually unlock it
e. PASSWORD_REUSE_TIME - specifies after how many days a user can reuse the
same password
f.
COMMANDS
# To create a profile
SQL> create profile my_profile limit
failed_login_attempts 3
password_lock_time 1/24/60
sessions_per_user 1
idle_time 5;
# To assign a profile to user
SQL> alter user scott profile my_profile;
# To alter a profile value
SQL> alter profile my_profile limit sessions_per_user 3;
# To create default password verify function
SQL> @$ORACLE_HOME/rdbms/admin/utlpwdmg.sql
Note: sessions terminated because of idle time are marked as SNIPED in v$session, and the DBA needs to
manually kill the related OS process to clear the session
# To kill a session
SQL> select sid,serial# from v$session where username='SCOTT';
SQL> alter system kill session '<sid>,<serial#>' immediate;
Note: Resource management parameters are effective only if RESOURCE_LIMIT is set to TRUE
# To check and change resource_limit value
SQL> show parameter resource_limit
SQL> alter system set resource_limit=TRUE scope=both;
Note: from 11g onwards passwords for all users are case-sensitive
AUDITING
1. It is the process of recording user actions in the database
2. Auditing is enabled by setting AUDIT_TRAIL to
a. NONE - no auditing enabled (default)
b. DB - audited information will be recorded in the AUD$ table and can be viewed using the
DBA_AUDIT_TRAIL view
Note: AUDIT_TRAIL=DB is the default in 11g, i.e. auditing is enabled by default in 11g
c. OS - audited information will be stored in the form of trace files at OS level. For this we
need to set the AUDIT_FILE_DEST parameter.
By default the audit file destination will be $ORACLE_HOME/admin/SID/adump
d. DB,EXTENDED - same as the DB option but additionally records info like SQL_TEXT,
BIND_VALUE etc
e. XML - generates XML files to store the auditing information
f.
XML,EXTENDED - same as XML but records much more information
3. Even though we set the AUDIT_TRAIL parameter to some value, oracle will not start auditing until
one of the following types of auditing commands is issued
a. Statement level auditing
b. Schema level auditing
c. Object level auditing
d. Database auditing (only till 9i; due to performance issues it was removed from 10g)
4. By default some activities like startup & shutdown of the database and any structural changes to the
database are audited and recorded in the alert log file
5. If auditing is enabled with DB, then we need to monitor space in the SYSTEM tablespace, as there is a
chance of it getting full when more and more information keeps getting recorded
6. SYS user activities can also be captured by setting AUDIT_SYS_OPERATIONS to TRUE
7. Auditing can use the following scopes
a. Whenever successful / not successful
b. By session / by access
8. Enabling auditing at database level will have an adverse impact on database performance
COMMANDS
# To enable auditing
SQL> alter system set audit_trail=DB/OS/XML scope=spfile;
Note: the AUDIT_TRAIL parameter is static and requires a restart of the database before it becomes effective
# To audit what is required
SQL> audit create table; -- statement level auditing
SQL> audit update on scott.salary; -- object level auditing
SQL> audit all by scott; -- schema level auditing
SQL> audit session by scott;
SQL> audit all privileges;
# To turn off auditing
SQL> noaudit session;
SQL> noaudit update on scott.salary;
SQL> noaudit all privileges;
5. Apart from using tnsnames.ora, we can also use EZCONNECT, LDAP, the bequeath protocol
etc. to establish a connection to the server
6. These files reside in $ORACLE_HOME/network/admin
7. The LISTENER service runs based on the LISTENER.ORA file and we can manage the listener using the
below commands
a. $ lsnrctl start / stop / status / reload
8. We can have a host string different from the service name (i.e. instance name or SID), and it
can even be different from the database name. This is to provide security for the database
9. Tnsping is the command to check the connectivity to the database from the client machine
10. Do create separate listeners for multiple databases
11. If the connections are very high, create multiple listeners for the same database
12. Any network related problem should be resolved in the following steps
a. Check whether the listener is up and running on the server side
b. Check that the output is ok for the tnsping command
c. If the problem still exists, check the firewall on both client and server. If not known, take
the help of the network admin
13. We can know the free port information from the netstat command
14. We need to set a password for the listener so as to provide security
15. TNSNAMES.ORA and SQLNET.ORA files can also be seen on the server side, because the server
will act as a client when connecting to another server
16. If the listener is down, existing users will not have any impact. Only new users will not be
able to connect to the instance
17. From 10g, a SYSDBA connection to the database uses the bequeath protocol, as it doesn't
require any protocol like TCP/IP
Note: an article about LISTENER security can be found @
http://pavandba.files.wordpress.com/2009/11/integrigy_oracle_listener_tns_security.pdf
Note: the port number in listener.ora and tnsnames.ora should be the same
Creating new Listener manually
1. Copy the existing entry to end of the listener.ora file
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /u02)
(PROGRAM = extproc)
))
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = server1.kanna.com)(PORT = 1521))
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
))
2.
3.
4.
5.
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = prod)
(ORACLE_HOME = /u02)
))
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = server1.kanna.com)(PORT = 1521))
))
PASSWORD FILE
1. This file contains the sysdba password and will be used if any user with sysdba permission tries to
connect to the database remotely.
2. It will be in the form orapw<SID> and resides in ORACLE_HOME/dbs (on unix) and
ORACLE_HOME/database (on windows)
3. If we forget the password of the sys user or lose the password file, it can be recreated using the ORAPWD utility
as follows
$ orapwd file=orapwtest password=oracle entries=5 force=y (here TEST is the SID)
ENTRIES represents how many users can use this password file
2. Even though manageability is complex, having multiple small databases gives more
benefit
3. DDMS uses a two phase commit mechanism, i.e. a transaction should be committed in
both databases before the data is made permanent
4. A DDMS can have different databases like oracle, db2, sql server etc. In this case they will
talk to each other using the oracle gateway component (which needs to be configured
separately)
5. If we have the same databases in a DDMS, it's called a homogeneous DDMS. If we have different
databases, then it's called a heterogeneous DDMS
Database Links
1. It is the object which pulls remote database data to the local database (see the sketch below)
2. While creating a dblink, we need to know the username and password of the remote database
3. Apart from the username and password of the remote db, we need to have a tns entry in the
tnsnames.ora of the local db, and the tnsping command should work
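A minimal sketch of creating and using a dblink (remlink, scott/tiger and the REMOTEDB tns entry are assumed names):
SQL> create database link remlink connect to scott identified by tiger using 'REMOTEDB';
SQL> select count(*) from emp@remlink;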
Materialized Views
1. It is an object used to pull remote database data frequently at a specified time, which is
called refreshing the data using materialized views
2. Snapshot is the object which was used to do the same till 8i, but its disadvantage is the time
constraint in pulling a huge no. of rows
3. An MV uses an MV log to store information about already transferred rows. The MVLOG stores
the rowids of table rows, which helps in further refreshes
4. The MV should be created in the database where we store the data, and the MVLOG will be
created automatically in the remote database
5. The MVLOG is a table, not a file, and its name always starts with MLOG$_
6. MV refresh can happen in the following three modes (a creation sketch follows these notes)
a. Complete - pulling the entire data
b. Fast - pulling non-transferred rows
c. Force - it will do a fast refresh and, in case of any failure, it will go for a complete
refresh
7. When MV refresh is happening very slowly, check the size of the table and compare that with the
MVLOG size
8. If the MVLOG size is more than the table size, then drop and recreate only the MVLOG
Note: A complete refresh is required after the recreation of the MVLOG
Note: we can use refresh fast on commit in order to transfer the data to the remote database
without waiting
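A hedged sketch of a fast-refreshing MV (assumes EMP has a primary key so the default MV log can track changes; the hourly interval is illustrative):
SQL> create materialized view log on emp;
SQL> create materialized view emp_mv
refresh fast
start with sysdate next sysdate + 1/24
as select * from emp;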
# To check the table size
SQL> select sum(bytes/1024/1024) from dba_segments where segment_name='EMP';
ORACLE UTILITIES
SQL * LOADER
1. It is a utility to load data from a text file into an oracle table
2. The text file is called a flat file. It can be in .txt, .csv, etc. format
3. SQL*Loader contains the following components
a. CONTROL FILE - this defines the configuration parameters which tell how to
load the data into the table. It has nothing to do with the database control file
b. INPUT FILE or INFILE - this is the file from which data should be loaded
c. BADFILE - records which fail to load into the table (due to any reason) will be
stored in this file
d. DISCARDFILE - records which don't satisfy the condition will be placed here
e. LOGFILE - it records the actions of sql*loader and can be used for reference
4. SQL*Loader can be invoked as follows (a sample control file follows the diagram)
[oracle@server1 admin]$ sqlldr userid=system/oracle control=control.lst log=track.log
[Diagram: SQL*Loader - the control file and input file (INFILE) feed SQL*Loader, which loads data into the table and produces the bad file, discard file and log file]
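A minimal control file sketch for the flow above (emp.csv and the column list are assumed; real control files are written per table):
LOAD DATA
INFILE 'emp.csv'
INSERT INTO TABLE emp
FIELDS TERMINATED BY ','
(empno, ename, sal)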
EXPORT & IMPORT
1. It is the utility to transfer data between two oracle databases
2. The following levels of export/import are possible
a. Database level
b. Schema level
c. Table level
d. Row level
3. Apart from database level, we can perform the other levels of export and import within the
same database
4. Whenever a session is running a long time, we can check it from v$session_longops
Note: To avoid the questionable statistics warning, use statistics=none during export
5. Export converts the command to select statements and the final output is
written to the dumpfile:
exp -> select statement -> datafiles -> DBC -> dumpfile
6. The server process takes the responsibility of writing the data to the dumpfile
7. Export transfers the data to the dumpfile in units of blocks. To increase the speed of
writing we can set BUFFER = 10 * avg row length. But we will never use this formula in
real time
Note: avg row length can be obtained from dba_tables
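For reference, the average row length mentioned in the note can be read as follows (SCOTT.EMP is the usual demo table):
SQL> select table_name, avg_row_len from dba_tables
where owner='SCOTT' and table_name='EMP';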
8. DIRECT=Y will make the export process faster by performing it in the following way:
exp -> select statement -> datafiles -> dumpfile
9. DIRECT=Y is not applicable for
a. A table with LONG datatype
b. A table with LOB datatype
c. Cluster tables
d. Partitioned tables
Note: when we give the direct=y option, if oracle cannot export a table with that option, it will
automatically convert to the conventional path
10. By mentioning CONSISTENT=Y, export will take data only from the undo tablespace if a DML
operation is being performed on the table
Note: while using CONSISTENT=Y, there is a chance of getting the ORA-1555 error
11. Import is the utility to load the contents of an export dumpfile into a schema
12. Import internally converts the contents of the export dump file to DDL and DML statements:
imp -> create table -> insert the data -> create index and other objects -> add constraints and
enable them
13. SHOW=Y can be used to check for corruption in the export dump file. This will not actually
import the contents
14. IGNORE=Y should be used if an object already exists with the same name. It will append
the data if the object already exists
Note: whenever import fails with a warning for constraints or grants, do the import again with the
ROWS=N option
Note: when we are importing tables with LONG or LOB datatypes, or partitioned tables, the
destination database should also contain the same tablespace name as the source database
COMMANDS
# To know options of export/import
[oracle@server1 ~]$ exp help=y
[oracle@server1 ~]$ imp help=y
# To take database level export
[oracle@server1 ~]$ exp file=/u01/fullbkp_prod.dmp log=/u01/fullbkp_prod.log full=y
# To take schema level export
[oracle@server1 ~]$ exp file=/u01/scott_bkp.dmp log=/u01/scott_bkp.log owner='SCOTT'
# To take table level export
[oracle@server1 ~]$ exp file=/u01/emp_bkp.dmp log=/u01/emp_bkp.log tables='SCOTT.EMP'
# To take row level export
[oracle@server1 ~]$ exp file=/u01/emp_rows_bkp.dmp log=/u01/emp_rows.log
tables='SCOTT.EMP' query=\"where deptno=10\"
# To import full database
[oracle@server1 ~]$ imp file=/u01/fullprod.dmp log=/u01/imp_fullprod.log full=y
# To import a schema
[oracle@server1 ~]$ imp file=/u01/scott_bkp.dmp log=/u01/imp_schema.log
fromuser='SCOTT' touser='SCOTT'
# To import a table
[oracle@server1 ~]$ imp file=/u01/emp_bkp.dmp log=/u01/imp_emp.log fromuser='SCOTT'
touser='SCOTT' tables='EMP'
# To import a table to another user
[oracle@server1 ~]$ imp file=/u01/emp_bkp.dmp log=/u01/imp_emp.log fromuser='SCOTT'
touser='SYSTEM' tables='EMP'
DATAPUMP
15. Datapump is an extension to traditional exp/imp which provides more advantages like
security, fastness etc
16. During datapump export, oracle will create master table in the corresponding schema
and data will be transferred parallely from tables to dumpfile
(Diagram: Table 1, Table 2, Table 3 -> MASTER TABLE -> DUMP FILE)
17. During datapump import the same happens in reverse order, i.e. from the dumpfile a master
table is created and from that the original tables
18. After finishing either export or import, datapump automatically drops the
master table
19. Just like exp/imp, datapump also contains 4 (database, schema, table and row) levels
20. In datapump, the dumpfile resides only on the server (specified with the DIRECTORY
option) and cannot be created on the client side. This provides security for the dumpfile
21. DBA_DATAPUMP_JOBS view can be used to find the status of datapump export or
import process
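For example (a sketch using standard columns of that view):
SQL> select job_name, operation, state from dba_datapump_jobs;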
Note: whenever a datapump export is done using the PARALLEL option, the import should also be
done with the same option; otherwise it will affect the time taken for the import
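A sketch of exporting and importing with the same degree of parallelism (the %U substitution generates one file per parallel process; names are illustrative):
[oracle@server1 ~]$ expdp directory=dpbkp dumpfile=scott_%U.dmp logfile=scott_par.log schemas='SCOTT' parallel=4
[oracle@server1 ~]$ impdp directory=dpbkp dumpfile=scott_%U.dmp logfile=imp_scott_par.log parallel=4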
22. Oracle will try to import tables into the tablespace with the same name; if that tablespace
doesn't exist, they will go to the user's default tablespace
COMMANDS
# To create a directory
SQL> create directory dpbkp as '/u01/expbkp';
Directory created.
# To grant permissions on directory
SQL> grant read,write on directory dpbkp to scott;
Grant succeeded.
# To view directory information
SQL> select * from dba_directories;
OWNER    DIRECTORY_NAME    DIRECTORY_PATH
------   ---------------   ---------------
SYS      DPBKP             /u01/expbkp
# To know options of datapump export/import
[oracle@server1 ~]$ expdp help=y
[oracle@server1 ~]$ impdp help=y
# To take database level export
[oracle@server1 ~]$ expdp directory=dpbkp dumpfile=fullprod.dmp logfile=fullprod.log full=y
# To take schema level export
[oracle@server1 ~]$ expdp directory=dpbkp dumpfile=scott_bkp.dmp logfile=scott_bkp.log
schemas='SCOTT'
# To take table level export
[oracle@server1 ~]$ expdp directory=dpbkp dumpfile=emp_bkp.dmp logfile=emp_bkp.log
tables='SCOTT.EMP'
# To take row level export
[oracle@server1 ~]$ expdp directory=dpbkp dumpfile=emprows_bkp.dmp
logfile=emprows_bkp.log tables='SCOTT.EMP' query=\"where deptno=10\"
# To import full database
[oracle@server1 ~]$ impdp directory=dpbkp dumpfile=fullprod.dmp logfile=imp_fullprod.log
full=y
# To import a schema
[oracle@server1 ~]$ impdp directory=dpbkp dumpfile=scott_bkp.dmp logfile=imp_schema.log
remap_schema='SCOTT:SCOTT'
# To import a table
[oracle@server1 ~]$ impdp directory=dpbkp dumpfile=emp_bkp.dmp logfile=imp_emp.log
tables='EMP' remap_schema='SCOTT:SCOTT'
# To import a table to another user
[oracle@server1 ~]$ impdp directory=dpbkp dumpfile=emp_bkp.dmp logfile=imp_emp.log
tables='EMP' remap_schema='SCOTT:SYSTEM'
# To import tables to another tablespace (only in datapump)
[oracle@server1 ~]$ impdp directory=dpbkp dumpfile=emp_bkp.dmp logfile=imp_emp.log
tables='EMP' remap_schema='SCOTT:SCOTT' remap_tablespace=MYDATA:MYTBS
# To import without taking an export (using the network_link option; no dumpfile is used)
[oracle@server1 ~]$ impdp directory=dpbkp logfile=imp_schema.log
schemas='SCOTT' network_link='source.com'
[oracle@server1 ~]$ sqlplus "/ as sysdba"
SQL> startup
SQL> alter database backup controlfile to trace;
Note: archivelog backups are not required with cold backup
HOT BACKUP
1. Taking the backup while the database is up and running is called hot backup
2. During hot backup the database will be in a fuzzy state and users can still perform
transactions, which makes the backup inconsistent
3. Whenever we place a tablespace or database in begin backup mode, the following happens
a. The corresponding datafile headers will be frozen, i.e. the CKPT process will not
update the latest SCN
b. The body of the datafile is still active, i.e. DBWRn will continue writing dirty blocks to the datafiles
4. After end backup, the datafile headers are unfrozen and the CKPT process immediately updates
the latest SCN by taking that information from the controlfiles
5. During hot backup, we will observe much more redo generated because oracle copies the
entire data block as a redo entry into the LBC (log buffer cache). This is to avoid fractured blocks
6. A block fracture occurs when a block is being read by the backup and being written to
at the same time by DBWR. Because the OS (usually) reads blocks at a different rate
than Oracle, the OS copy will pull pieces of an Oracle block at a time. What if the OS
copy pulls half a block, and while that is happening, the block is changed by DBWR?
When the OS copy pulls the second half of the block, the result is mismatched halves,
which Oracle would not know how to reconcile.
7. This is also why the SCN in the datafile header does not change while a tablespace is in
hot backup mode. The current SCNs are recorded in redo, but not in the datafile.
This ensures that Oracle will always recover over the datafile contents with redo
entries. When recovery occurs, the fractured datafile block is replaced with a
complete block from redo, making it whole again.
Note: Database should be in archivelog mode to perform hot backup
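A sketch of verifying this prerequisite, and of watching files enter backup mode (standard views):
SQL> select log_mode from v$database;
SQL> select file#, status from v$backup; -- ACTIVE means the file is in backup mode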
SQL> select name from v$controlfile;
SQL> alter database begin backup;
SQL> !cp /datafiles/prod/*.dbf /u03/hotbkp
SQL> alter database end backup;
Since we are placing the entire database into begin backup mode, there is no need to repeat
this for each tablespace
SQL> !cp /datafiles/prod/*.ctl /u03/hotbkp
SQL> alter system switch logfile;
SQL> !cp /u03/archives/*.arc /u03/hotbkp/archbkp
Taking the archivelog backup is the most important step in hot backup
SQL> alter database backup controlfile to trace;
[oracle@server1 ~]$ cp $ORACLE_HOME/dbs/*.ora /u03/hotbkp
Note: In any version, we do not take a backup of the redolog files during hot backup
DATABASE RECOVERY
1. Recovery is of 2 types
a. Complete recovery: recovering the database till the point of failure. No data loss
b. Incomplete recovery: recovering to a certain time or SCN. Has data loss
2. We perform complete recovery if we lost only datafiles
3. We perform incomplete recovery if we lost redolog files, controlfiles or
archivelog files
4. Incomplete recovery is possible with 3 modes
a. UNTIL SCN
b. UNTIL CANCEL: this is the option we use in real time
c. UNTIL TIME
5. Recovery involves two phases
a. RESTORE: copying a file from the backup location to the original location, as that file
is now lost
b. RECOVER: applying archivelogs and redologs to bring the file's SCN in par with the
latest SCN
6. Whenever any file is missing at OS level, we can get that information either from the
alert log file or from the V$RECOVER_FILE view
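For example (a sketch):
SQL> select file#, error from v$recover_file;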
Note: Practically, we can do complete recovery even if we lost controlfiles
STEPS for recovering the SYSTEM tablespace
SQL> !cp /u03/hotbkp/system01.dbf /datafiles/prod
SQL> startup mount
SQL> recover tablespace system;
SQL> alter database open;
STEPS for recovering the whole database (we perform this when we have lost more than 50% of the datafiles)
SQL> shut immediate
SQL> !cp /u03/hotbkp/*.dbf /datafiles/prod
SQL> startup mount
SQL> recover database;
SQL> alter database open;
Note: we can drop a single datafile using the below command
SQL> alter database datafile '/datafiles/prod/mydata01.dbf' offline drop;
When we use the above command, the file is dropped from the database, but the data dictionary
will not be updated and we can never get that file back, even if we have a backup. So don't use
this in real time
STEPS to recover a datafile without backup
SQL> alter tablespace mydata offline;
SQL> alter database create datafile '/datafiles/prod/mydata01.dbf' as
'/datafiles/prod/mydata01.dbf';
SQL> recover tablespace mydata;
SQL> alter tablespace mydata online;
Note: all the archives generated since the datafile was created must be available to do
this
STEPS to recover redolog file in archivelog mode
SQL> shutdown immediate
SQL> startup mount
SQL> recover database until cancel;
SQL> alter database open resetlogs;
Using RESETLOGS: when the resetlogs option is used to open the database, it will
1. Create new redolog files at OS level (location and size are taken from the controlfile) if
they do not already exist
2. Reset the log sequence number (LSN) to 1, 2, 3 etc. for the created files
3. Whenever the database is opened with the resetlogs option, we say the database has entered a
new incarnation. Once the database is in a new incarnation, the backups taken until
now are no longer useful. So, whenever we perform an incomplete recovery we need to
take a full backup of the database immediately
4. We can find the previous incarnation information of a database using the below query
select resetlogs_change#,resetlogs_time from v$database;
STEPS to recover controlfile (incomplete recovery) in archivelog mode
SQL> shutdown immediate
SQL> !cp /u03/hotbkp/*.ctl /datafiles/prod/
SQL> startup mount
SQL> recover database using backup controlfile until cancel;
SQL> alter database open resetlogs;
STEPS for controlfile complete recovery
SQL> alter database backup controlfile to trace;
This command generates a create controlfile script in udump in the form of a trace file.
The command may not work sometimes, in which case we need to use the trace file already
taken during the backup
SQL> shutdown immediate
[oracle@server1 ~]$ (go to the udump location and copy the first create controlfile script to a file
called control.sql)
SQL> startup nomount
SQL> @control.sql
SQL> alter database open;
Note: after creating controlfiles using the above procedure, there will be no SCN in them. In this
situation the server process writes the latest SCN to the controlfiles by taking the info from the
datafile headers
Note: we perform until-time recovery when we have lost a single table and need to recover it.
But to do this we need approval from all the users of the database
STEPS for spfile or pfile recovery in 11g
Oracle 11g provides a very easy option for recovering the spfile or pfile using the below commands
SQL> create pfile from memory;
SQL> create spfile from memory;
Note: the above commands work only if the database is still up and running even though we lost the
spfile and pfile. If the database is down, we need to rely on the method used for 10g
BACKUP MODE        ADVANTAGE            DISADVANTAGE
COLD               Consistent backup    Requires shutdown
HOT                No shutdown          Inconsistent backup
EXPORT (LOGICAL)
g. Recovery catalog etc
4. Components of RMAN
a. RMAN executable: the RMAN prompt from where backup commands are issued
b. Target database: the database which we want to back up
c. Auxiliary database: a cloned copy of the target database
d. Recovery catalog: a repository database that stores RMAN backup information
e. Media management layer: responsible for interacting with the tape drive while
taking an RMAN backup directly to tape
5. In real time, we take backups to tapes, and to manage these tapes a separate team
exists, called the netbackup team, storage team or backup team.
They manage tapes (inserting, removing etc.) through a netbackup tool. A very widely
used tool is VERITAS 6.5
6. Archive log mode should be enabled to take RMAN backup
7. Whenever the archive destination is full, the database will hang. In such scenarios, do the following
a. If time permits, take the backup of archives using the delete input clause
b. If time doesn't permit, temporarily move archivelogs to some other mount point
c. If no mount point has free space, then delete the archives and take a full backup
of the database immediately, without fail
d. When we delete either backups or archives at OS level, we can make RMAN
aware of this by running
i. RMAN> crosscheck backup;
ii. RMAN> crosscheck archivelog all;
e. RMAN will delete obsolete backups automatically if the backup location is the flash
recovery area
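A sketch of scenario (a) above, backing up the archives and deleting them in one step:
RMAN> backup archivelog all delete input;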
8. RMAN configuration parameters
a. RETENTION POLICY: defines how long backups are kept and remain usable for
recovery. It has 2 values
i. Redundancy: how many backups to retain
ii. Recovery window: how many days of backups to retain
b. BACKUP OPTIMIZATION: avoids taking a backup of unmodified datafiles
c. CONTROLFILE AUTOBACKUP: includes the controlfile in the backup
d. PARALLELISM: creates multiple processes to speed up the backup
e. ENCRYPTION: secures the backup
f. ARCHIVELOG DELETION POLICY: deletes archivelogs automatically based on this policy
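A few sketches of setting these parameters (values are illustrative):
RMAN> configure retention policy to recovery window of 7 days;
RMAN> configure backup optimization on;
RMAN> configure controlfile autobackup on;
RMAN> configure device type disk parallelism 2;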
COMMANDS
# To connect to RMAN
[oracle@server1 ~]$ rman target /
# To see configuration parameter values
RMAN> show all;
# To change any configuration parameter
RMAN> configure retention policy to redundancy 5;
# To backup the database
RMAN> backup database;
# To backup archivelogs
RMAN> backup archivelog all;
# To backup both database and archivelogs
RMAN> backup database plus archivelog;
Note: by default in 10g, RMAN backups go to the flash recovery area. To override that, use the
format option shown below
# To take compressed backup
RMAN> backup as compressed backupset database plus archivelog;
# To take backup to specified area
RMAN> backup format '/u03/rmanbkp/fulldbbkp_%t.bkp' database;
# To see backup information
RMAN> list backup;
The above command gets its information from the controlfile of the target database
# To find & delete expired backups
RMAN> crosscheck backup;
RMAN> delete expired backup;
RMAN> delete noprompt expired backup;
# To find and delete expired archivelogs
RMAN> crosscheck archivelog all;
RMAN> delete expired archivelog all;
RMAN> delete noprompt expired archivelog all;
# To find and delete unnecessary backups
RMAN> report obsolete;
RMAN> delete obsolete;
RMAN> delete noprompt obsolete;
# To take physical image copy of database
RMAN> backup as copy database;
# To validate the backup
RMAN> restore database validate;
# To validate the database before backup
RMAN> backup validate database archivelog all;
# To validate a particular backupset
RMAN> validate backupset 1234;
# To take backup when using tape
RMAN> run
{
allocate channel c1 device type sbt_tape;
backup database plus archivelog;
}
# To increase FRA size
SQL> show parameter db_recovery_file_dest_size
SQL> alter system set db_recovery_file_dest_size=10G scope=both;
STEPS to recover the database
RMAN> run
{
shutdown immediate;
startup mount;
restore database;
recover database;
sql 'alter database open resetlogs';
}
STEPS to recover controlfiles
RMAN> run
{
shutdown immediate;
startup nomount;
restore controlfile from autobackup;
sql 'alter database mount';
recover database;
sql 'alter database open resetlogs';
}
RECOVERY CATALOG
1. RMAN stores backup information in the target database controlfile. If we lose this
controlfile and perform either complete or incomplete recovery, we will lose the backup
info even though the backups are physically available
2. To avoid this situation, RMAN introduced the recovery catalog. It is a database which stores
the target database's backup information
3. Single recovery catalog can support multiple target databases
4. We cannot obtain recovery catalog information from the target, but vice versa is possible
Steps for Configuring Recovery Catalog
Below steps need to be done on the catalog database
SQL> create tablespace rmantbs
datafile '/datafiles/prod/rmantbs01.dbf' size 50m;
SQL> create user rman_rc identified by rman_rc
default tablespace rmantbs
temporary tablespace temp;
SQL> grant connect,resource,recovery_catalog_owner to rman_rc;
[oracle@server1 ~]$ rman catalog rman_rc/rman_rc@rc
RMAN> create catalog;
[oracle@server1 ~]$ rman target / catalog rman_rc/rman_rc@rc
Below step needs to be done on the target database side
RMAN> register database;
# To obtain target database information from catalog
SQL> select db_id,name from rc_database;
INCREMENTAL BACKUP
1. Taking a backup of a very large database (VLDB) takes a long time when the backup size
keeps increasing
2. In such cases, we can go for incremental backup, which backs up only the changes that
happened since the last backup
3. Incremental backups are two types
a. Differential (default)
b. Cumulative
4. Both incremental backup types have level 0 and level 1 (level 0 = full backup, level 1 = incremental backup)
5. The first incremental backup will always be a level 0 backup
6. RMAN will perform incremental backup by identifying changed blocks with the help of
block SCN
7. We cannot recover the database by applying a level 1 backup on a regular (non-incremental) full database backup
8. We can apply level 1 backups on image copies and recover the database
9. From 10g, RMAN can perform faster incremental backups using block change tracking. With this,
whenever any block changes, the CTWR (change tracking writer) background process writes
that information to a tracking file
10. The change tracking file resides in DB_CREATE_FILE_DEST
COMMANDS
# To take full backup in incremental mode
RMAN> backup incremental level 0 database;
RMAN> backup cumulative incremental level 0 database;
# To take differential backup
RMAN> backup incremental level 1 database;
# To take cumulative backup
RMAN> backup cumulative incremental level 1 database;
# To enable change tracking
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u01/chg_trk.f' REUSE; -- filename illustrative
The REUSE option tells Oracle to overwrite any existing file with the specified name.
# To disable change tracking
SQL> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
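# To check change tracking status (a sketch using the standard view)
SQL> select status, filename from v$block_change_tracking;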
PERFORMANCE TUNING
1. Performance tuning is of 2 types
a. Proactive tuning: least preferred because of practical problems
b. Reactive tuning: the most preferred way, which means reacting to the problem instead
of preventing it from occurring
2. Any performance problem with a particular query should be resolved in the following
phases
NETWORK TUNING
1. When a performance problem is reported, first we need to check whether the problem is for
only one user or for multiple users
2. If it is for only one user, then it could be because of a network problem, so check
tnsping to the database
3. If the tnsping value is too high, then intimate the network admin about this. If not, move to the next
phase
APPLICATION TUNING
1. In this phase we need to find out whether any new applications were added or whether there are any
changes in the code
2. If there were any additions or changes, ask the application team to revert them and then check the
performance. If it is working fine, then the problem is with those changes
3. If no additions/changes happened, or if the performance problem persists after reverting, then
proceed to the next phase
SQL TUNING
1. By running reports like ADDM or ASH, we can find out which queries are causing the
problem and send them to the application team (or whichever team is responsible for
writing the SQL queries) for tuning
2. Sometimes the DBA's help may be required, so a DBA should have expert knowledge of SQL
3. In real time, most (90%) of tuning problems get resolved in this phase. If not solved,
proceed to the next phase
OBJECT TUNING
1. In this phase, first we need to check what is the last analyzed date for the tables
involved in the query.
# To find out last analyzed date of a table
SQL> select last_analyzed from dba_tables where table_name='EMP';
2. If the last_analyzed date is an old date, it means the table statistics haven't been
gathered for a long time. That may be the reason for the performance problem
3. The optimizer generates the best execution plan based on these statistics. But if the statistics
are old, the optimizer may pick a bad plan, which affects performance. In such cases, we
need to analyze manually using the below commands
# To analyze a table
SQL> analyze table emp compute statistics; -- 8i
SQL> exec dbms_stats.gather_table_stats('SCOTT','EMP'); -- 9i onwards
4. If the table contains a huge number of rows, analyze will take time as it collects info for each and
every row. In such cases, we can estimate statistics, which means collecting statistics for
some percentage of rows. This can be done using the below commands
# To analyze a table using estimate option
SQL> analyze table emp estimate statistics; -- 8i
SQL> exec dbms_stats.gather_table_stats('SCOTT','EMP',estimate_percent=>40); -- 9i onwards
# To analyze an index (Oracle recommends analyzing indexes too)
SQL> analyze index pk_emp compute statistics; -- 8i
SQL> exec dbms_stats.gather_index_stats('SCOTT','PK_EMP'); -- 9i onwards
# To analyze a schema
SQL> exec dbms_stats.gather_schema_stats('SCOTT');
6. The optimizer works in two modes
a. Rule based optimization (RBO) [deprecated from 10g]
b. Cost based optimization (CBO)
7. CBO and RBO are the internal algorithms based on which the optimizer generates
execution plans
8. The OPTIMIZER_MODE parameter decides which optimization mode is used for
query execution
9. In 9i, optimizer_mode has a default value of CHOOSE, i.e. the optimizer chooses
whether to use RBO or CBO. In 10g/11g, the default value is ALL_ROWS, which means the
execution plan is prepared so as to return all rows of the query as efficiently as possible
10. We can set optimizer_mode=first_rows_n (where n is 1, 10, 100 or 1000) in order to optimize for
returning the first n rows from the table. This is useful when displaying data in pages
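A sketch of setting this at session level (value illustrative):
SQL> alter session set optimizer_mode=first_rows_10;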
11. If the performance problem remains even after analyzing, we need to see whether the
query is using indexes or not by generating an explain plan
EXPLAIN PLAN
1. It is a plan which shows the flow of execution for any sql statement
2. To generate an explain plan we require the plan_table in the SYS schema. If it is not there, we can
create it using the $ORACLE_HOME/rdbms/admin/utlxplan.sql script
3. After creating plan_table, use below command to generate explain plan
SQL> grant select,insert on plan_table to scott;
SQL> conn scott/tiger
SQL> explain plan for select * from emp;
4. To view the content of explain plan, run $ORACLE_HOME/rdbms/admin/utlxpls.sql
script
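In later versions (9iR2 onwards) the same plan can also be displayed with the DBMS_XPLAN package (an equivalent sketch):
SQL> select * from table(dbms_xplan.display);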
5. The optimizer may sometimes deviate from the best execution plan depending on resource (CPU
or memory) availability
6. If there is no index on the table, create an index on the column used in the WHERE
clause of the query
7. Also choose one of the following types of index to create
a. B-tree index: must be used for high-cardinality (many distinct values) columns
b. Bitmap index: for low-cardinality columns
c. Function-based index: for columns used with defined functions
d. Reverse key index: for selecting the latest data always
8. If we are still facing the performance problem, check whether we are using the right type of table
9. The following types of tables are available
a. General table: the kind used regularly
b. Cluster table: a table which shares common columns with other tables
(Diagram: tables A and B sharing a common column X)
In the above diagram X is the common column shared by both A & B. The problem with
using a cluster table is that modifications to X cannot be done easily
c. Index organized table (IOT): avoids creating indexes separately, as the data itself
is stored in index form. The read performance of an IOT is fast, but DML and DDL
operations are very costly
d. Partition table: a normal table can be split logically into partitions so that queries
search only one partition, which improves search time. The
following types of partitions are available
i. Range
ii. List
iii. Hash
We can also have composite partitions of the following types
a. Range-range
b. Range-list
c. Range-hash
d. List-list
e. List-hash
f. List-range (from 11g)
Note: Sample partition table and Index creation script available at
http://pavandba.com/2009/11/11/sample-partition-table-and-partition-index-script/
DATABASE TUNING
Fragmentation
1. The high water mark (HWM) is the boundary up to which blocks have ever been used in a table
2. Generally oracle does not reuse the space freed by deleting rows, because the high
water mark is not reset at that time. This creates many unused free spaces in the
table, which leads to fragmentation
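A rough sketch of spotting such unused space, comparing allocated blocks against what fresh statistics say the rows need (the 8192 divisor assumes an 8K block size; est_blocks is an illustrative estimate):
SQL> select table_name, blocks, ceil(num_rows*avg_row_len/8192) est_blocks from dba_tables where table_name='EMP';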
3. The following are different ways to defragment a table across versions
a. 5, 6, 7, 8, 8i: export/import
b. 9i: export/import and move table
# To move a table
SQL> alter table emp move tablespace mydata;
The above command creates a duplicate table, copies the data, and then drops the original
table
Note: the above command is also used to simply move a table to another tablespace in case
of space constraints. We can even move the table within the same tablespace, but we need free
space of double the size of the table
# To check the size of table
SQL> select sum(bytes)/1024/1024 from dba_segments where segment_name='EMP';
Note: after a table move, the corresponding indexes become UNUSABLE because the rowids
change. We need to use one of the below commands to rebuild the indexes
# To check which indexes became unusable
SQL> select index_name,status from dba_indexes where table_name='EMP';
# To rebuild the index
SQL> alter index pk_emp rebuild;
SQL> alter index pk_emp rebuild online;
SQL> alter index pk_emp rebuild online nologging; -- prefer this command, as it
executes faster because minimal redo is generated
c. 10g/11g: export/import, expdp/impdp, move table & shrink compact
# To shrink a table
SQL> alter table scott.emp enable row movement;
SQL> alter table scott.emp shrink space compact;
SQL> alter table scott.emp disable row movement;
Because the indexes are maintained during the shrink, it is not necessary to rebuild them. While
shrinking, users can still access the table, but it will use full scans instead of index scans
Note: apart from table fragmentation, we have tablespace fragmentation, which occurs
only in DMT or LMT with manual segment space management. The only solution is to export &
import the objects in that tablespace. So it is always preferred to use LMT with ASSM
ROW CHAINING
1. If the row size is more than the block size, the data spreads into multiple blocks forming a
chain, which is called row chaining
2. For example, when we store a 20K image in a database with an 8K block size, it spreads
across 3 blocks
ROW MIGRATION
1. Updating a row may increase the row size, in which case it uses the PCTFREE space
2. If PCTFREE is full but the row still requires more space for the update, oracle moves the
entire row to another block; this is called row migration
3. If many rows are moved like this, more I/Os must be performed to retrieve the data, which
degrades performance
4. The solution to avoid row migration is to increase the PCTFREE percentage; sometimes
creating a non-default block size also acts as a solution
5. Because PCTFREE is managed automatically in LMT, we will not observe any row
migration in LMT
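A sketch of checking for chained or migrated rows; the chain_cnt column in dba_tables is populated by ANALYZE:
SQL> analyze table scott.emp compute statistics;
SQL> select chain_cnt from dba_tables where owner='SCOTT' and table_name='EMP';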
INSTANCE TUNING
1. If the performance problem still exists even after performing all the database tuning steps,
we need to do instance-level tuning
TKPROF report
1. TKPROF (transient kernel profiler) is a report which shows details like time taken and CPU
utilization in every phase (parse, execute and fetch) of SQL execution
# Steps to take TKPROF report
SQL> grant alter session to scott;
SQL> alter session set sql_trace=TRUE;
SQL> select * from emp;
SQL> alter session set sql_trace=FALSE;
The above steps will create a trace file in udump location
[oracle@server1 udump]$ tkprof prod_ora_7824.trc tkprof_report.lst
2. If we observe from the TKPROF report that a statement is getting parsed every time, and it is a
frequently executed query, the reason could be the statement getting flushed out of the shared pool
because of its small size. Increasing the shared pool size is the solution
3. If we observe that fetching is happening every time, it could be because data is getting flushed from
the buffer cache, for which increasing its size is the solution
4. If the size of the database buffer cache is enough to hold the data but data is still getting flushed
out, we can use the keep & recycle caches
# To enable keep & recycle caches
SQL> alter system set db_keep_cache_size=50m scope=both;
SQL> alter system set db_recycle_cache_size=50m scope=both;
# To place table in keep or recycle caches
SQL> alter table scott.emp storage (buffer_pool keep);
SQL> alter table scott.emp storage (buffer_pool recycle);
5. If a table is placed in the KEEP cache, it stays in the instance for its lifetime without
being flushed. If a table is placed in the RECYCLE cache, it is flushed immediately without
waiting for LRU to occur
Note: frequently used tables should be placed in the keep cache, whereas full-scan tables should be
placed in the recycle cache
STATSPACK REPORT
1. This report should be generated only if the entire database's performance is slow
2. It is a report which details database performance during a given period of time
# Steps for generating statspack report
SQL> @$ORACLE_HOME/rdbms/admin/spcreate.sql
This will create the PERFSTAT user, which is responsible for storing the statistical data
[oracle@server1 udump]$ sqlplus perfstat/perfstat
SQL> exec statspack.snap; -- begin time
SQL> exec statspack.snap; -- end time
SQL> @$ORACLE_HOME/rdbms/admin/spreport.sql
3. Statspack report can have levels from 0 to 10 and the default is 5
4. In 8i, statistics collection was done using the utlbstat.sql and utlestat.sql
scripts
Automatic Workload Repository(AWR) report
1. It is an extension of the statspack report, introduced in 10g
2. In 10g, oracle automatically runs a statistics collection job every hour
# To generate AWR report
SQL> @$ORACLE_HOME/rdbms/admin/awrrpt.sql
Note: AWR/Statspack report analysis docs are available in
http://pavandba.files.wordpress.com/2009/11/statspack_opm4.pdf
http://pavandba.files.wordpress.com/2009/11/statspack_tuning_otn_new1.pdf
http://pavandba.files.wordpress.com/2009/11/opdg_slow_database.pdf
Active Session History(ASH) report
1. It is a report on the performance of the database in the last 15 minutes, which helps in
providing quick solutions
2. It also provides info on the sessions which are causing the performance problem
# To generate ASH report
SQL> @$ORACLE_HOME/rdbms/admin/ashrpt.sql
Automatic Database Diagnostic Monitor (ADDM) report
1. It is a tool which can be used to get recommendations from oracle on performance
issues
2. It takes the snapshots generated by AWR and based on them generates
recommendations
# To generate ADDM report
SQL> @$ORACLE_HOME/rdbms/admin/addmrpt.sql
ENTERPRISE MANAGER(EM)
1. It is a tool through which we can manage the entire database and perform all database
actions in a single click
2. Till 9i, it was called Oracle Enterprise Manager (OEM) and was restricted to use within the
network of the database
3. From 10g it was made browser based, so that we can manage the database from
anywhere in the world
4. EM can be configured either through DBCA or manually
# Steps to configure EM manually
SQL> select role from dba_roles where role like 'MGMT%';
SQL> select account_status from dba_users where username in ('SYSMAN','DBSNMP');
SQL> alter user sysman account unlock; -- if not unlocked already
SQL> alter user sysman identified by oraman;
SQL> alter user dbsnmp account unlock;
SQL> alter user dbsnmp identified by dbsnmp;
SQL> alter user mgmt_view account unlock;
[oracle@server1 ~ ]$ lsnrctl status (start the listener if not already running)
[oracle@server1 ~ ]$ emca -config dbcontrol db -repos create
or
[oracle@server1 ~ ]$ emca -repos create
[oracle@server1 ~ ]$ emca -config dbcontrol db
# To drop the repository
[oracle@server1 ~ ]$ emca -deconfig dbcontrol db -repos drop
# To recreate the repository
[oracle@server1 ~ ]$ emca -config dbcontrol db -repos recreate
# To manage EM
[oracle@server1 ~ ]$ emctl status / start / stop dbconsole
CHANGING DATABASE NAME
Method 1 by recreating controlfile
SQL> alter database backup controlfile to trace;
This will generate script in udump location
[oracle@server1 udump ]$ cp prod_ora_7784.trc control.sql
[oracle@server1 ~ ]$ vi control.sql
Here change the database name, replace the word REUSE with SET, and make sure it has
RESETLOGS
SQL> show parameter control_files
SQL> alter system set db_name=prod123 scope=spfile;
SQL> shutdown immediate
SQL> ! rm /datafiles/prod/*.ctl
SQL> startup nomount
SQL> @control.sql
SQL> alter database open resetlogs;
Method 2 by using the nid (DBNEWID) utility
SQL> shut immediate
SQL> startup mount
[oracle@server1 ~ ]$ nid target=/ dbname=prod123
(update db_name in the parameter file to prod123, then)
SQL> startup mount
SQL> alter database open resetlogs;
The above steps will change the database id also
# To change only the dbname (keeping the same dbid)
[oracle@server1 ~ ]$ nid target=/ dbname=prod123 setname=yes
FLASHBACK FEATURES
FLASHBACK QUERY
CREATE TABLE flashback_version_query_test (
id          NUMBER(10),
description VARCHAR2(50)
);
6. COMMIT;
7. UPDATE flashback_version_query_test SET description = 'THREE' WHERE id
= 1;
8. COMMIT;
9. SELECT current_scn, TO_CHAR(SYSTIMESTAMP, 'YYYY-MM-DD HH24:MI:SS') FROM
v$database;
COLUMN versions_startscn FORMAT 99999999999999999
COLUMN versions_starttime FORMAT A24
COLUMN versions_endscn FORMAT 99999999999999999
COLUMN versions_endtime FORMAT A24
COLUMN versions_xid FORMAT A16
COLUMN versions_operation FORMAT A1
COLUMN description FORMAT A11
SET LINESIZE 200
10. SELECT versions_startscn, versions_starttime,
versions_endscn, versions_endtime,
versions_xid, versions_operation,
description
FROM
flashback_version_query_test
VERSIONS BETWEEN TIMESTAMP TO_TIMESTAMP('2004-03-29 14:59:08', 'YYYY-MM-DD HH24:MI:SS')
AND TO_TIMESTAMP('2004-03-29 14:59:36', 'YYYY-MM-DD HH24:MI:SS')
WHERE id = 1;
11. SELECT versions_startscn, versions_starttime,
versions_endscn, versions_endtime,
versions_xid, versions_operation,
description
FROM
flashback_version_query_test
VERSIONS BETWEEN SCN 725202 AND 725219
WHERE id = 1;
FLASHBACK TABLE
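A minimal sketch of flashback table usage, assuming row movement is enabled on the table (the table name and interval are illustrative):
SQL> alter table scott.emp enable row movement;
SQL> flashback table scott.emp to timestamp systimestamp - interval '15' minute;
SQL> flashback table scott.emp to before drop; -- after an accidental drop; does not need row movement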
FLASHBACK DATABASE
The database must be in archivelog mode and flashback must be enabled to perform this. When placed
in flashback mode, we can observe flashback logs being generated in the flash_recovery_area
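A sketch of checking and enabling flashback (in 10g, enabling requires the database to be in mount mode):
SQL> select flashback_on from v$database;
SQL> alter database flashback on;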
-- Create a dummy table.
CONN scott/tiger
CREATE TABLE flashback_database_test (
id NUMBER(10)
);
-- Flashback 5 minutes.
CONN sys/password AS SYSDBA
SHUTDOWN IMMEDIATE
STARTUP MOUNT EXCLUSIVE
FLASHBACK DATABASE TO TIMESTAMP SYSDATE-(1/24/12);
ALTER DATABASE OPEN RESETLOGS;
-- Check that the table is gone.
CONN scott/tiger
DESC flashback_database_test
We can use following commands also in flashback database
FLASHBACK DATABASE TO TIMESTAMP my_date;
FLASHBACK DATABASE TO BEFORE TIMESTAMP my_date;
FLASHBACK DATABASE TO SCN my_scn;
FLASHBACK DATABASE TO BEFORE SCN my_scn;
DATAGUARD
Oracle Data Guard is one of the most effective and comprehensive data availability, data
protection and disaster recovery solutions available today for enterprise data. Oracle Data
Guard is the management, monitoring, and automation software infrastructure that creates,
maintains, and monitors one or more standby databases to protect enterprise data from
failures, disasters, errors, and corruptions.
Data Guard maintains these standby databases as transactionally consistent copies of the
production database. These standby databases can be located at remote disaster recovery sites
thousands of miles away from the production data center, or they may be located in the same
city, same campus, or even in the same building. If the production database becomes
unavailable because of a planned or an unplanned outage, Data Guard can switch any standby
database to the production role, thus minimizing the downtime associated with the outage, and
preventing any data loss.
Available as a feature of the Enterprise Edition of the Oracle Database, Data Guard can be used
in combination with other Oracle High Availability (HA) solutions such as Real Application
Clusters (RAC), Oracle Flashback and Oracle Recovery Manager (RMAN), to provide a very high
level of data protection and data availability that is unprecedented in the industry.
Overview of Oracle Data Guard Functional Components
Data Guard Configuration:
A Data Guard configuration consists of one production (or primary) database and up to nine
standby databases. The databases in a Data Guard configuration are connected by Oracle Net
and may be dispersed geographically. There are no restrictions on where the databases are
located, provided that they can communicate with each other. However, for disaster recovery,
it is recommended that the standby databases are hosted at sites that are geographically
separated from the primary site.
Role Management:
Using Data Guard, the role of a database can be switched from a primary role to a standby role
and vice versa, ensuring no data loss in the process, and minimizing downtime. There are two
kinds of role transitions a switchover and a failover. A switchover is a role reversal between
the primary database and one of its standby databases. This is typically done for planned
maintenance of the primary system. During a switchover, the primary database transitions to a
standby role and the standby database transitions to the primary role. The transition occurs
without having to re-create either database. A failover is an irreversible transition of a standby
database to the primary role. This is only done in the event of a catastrophic failure of the
primary database, which is assumed to be lost. To be used again in the Data Guard
configuration, it must be re-instantiated as a standby from the new primary.
Data Guard Broker:
The Oracle Data Guard Broker is a distributed management framework that automates and
centralizes the creation, maintenance, and monitoring of Data Guard configurations. All
management operations can be performed either through Oracle Enterprise Manager, which
uses the Broker, or through the Broker's specialized command-line interface (DGMGRL).
The following diagram shows an overview of the Oracle Data Guard architecture.
A physical standby database can be activated as a primary database, opened read/write for
reporting purposes, and then flashed back to a point in the past to be easily converted back to a
physical standby database. At this point, Data Guard automatically synchronizes the standby
database with the primary database. This allows the physical standby database to be utilized for
read/write reporting and cloning activities.
Automatic deletion of applied archived redo log files in logical standby databases
Archived logs, once they are applied on the logical standby database, are automatically deleted,
reducing storage consumption on the logical standby and improving Data Guard manageability.
Physical standby databases have already had this functionality since Oracle Database 10g
Release 1, with Flash Recovery Area.
database is done on old data, and switchover/failover gets delayed because the accumulated
logs have to be applied first. In Data Guard 10g, with the Real Time Apply feature, such delayed-reporting or delayed-switchover/failover issues do not exist, and if logical corruptions do land
up affecting both the primary and standby database, the administrator may decide to use
Flashback Database on both the primary and standby databases to quickly revert the databases
to an earlier point-in-time to back out such user errors.
Another benefit that such integration provides is during failovers. In releases prior to 10g,
following any failover operation, the old primary database must be recreated (as a new standby
database) from a backup of the new primary database, if the administrator intends to bring it
back in the Data Guard configuration. This may be an issue when the database sizes are fairly
large, and the primary/standby databases are hundreds/thousands of miles away. However, in
Data Guard 10g, after the primary server fault is repaired, the primary database may simply be
brought up in mounted mode, flashed back (using flashback database) to the SCN at which
the failover occurred, and then brought back as a standby database in the Data Guard
configuration. No re-instantiation is required.
Rolling Upgrades:
Oracle Database 10g supports database software upgrades (from Oracle Database 10g Patchset
1 onwards) in a rolling fashion, with near zero database downtime, by using Data Guard SQL
Apply. The steps involve upgrading the logical standby database to the next release, running in
a mixed mode to test and validate the upgrade, doing a role reversal by switching over to the
upgraded database, and then finally upgrading the old primary database. While running in a
mixed mode for testing purpose, the upgrade can be aborted and the software downgraded,
without data loss. For additional data protection during these steps, a second standby database
may be used.
By supporting rolling upgrades with minimal downtimes, Data Guard reduces the large
maintenance windows typical of many administrative tasks, and enables the 24x7 operation of
the business.
Centralized and simple management
Data Guard Broker automates the management and monitoring tasks across the multiple
databases in a Data Guard configuration. Administrators may use either Oracle Enterprise
Manager or the Broker's own specialized command-line interface (DGMGRL) to take advantage
of this integrated management framework.
# Add Standby Redo Log Groups to Primary Database
Create standby logfile groups on the primary database for switchovers (start with next group number;
create one more group than current number of groups):
$ sqlplus / as sysdba
SQL> select max(group#) maxgroup from v$logfile;
SQL> select max(bytes)/1024 "size (K)" from v$log;
SQL> alter database add standby logfile group 4 ('/orcl/oradata/PROD/stby_log_PROD_4A.rdo',
'/orcl/oradata/PROD/stby_log_PROD_4B.rdo') size 4096K; -- repeat for further groups
SQL> column member format a55
SQL> select vs.group#,vs.bytes,vl.member from v$standby_log vs, v$logfile vl where vs.group# =
vl.group# order by vs.group#,vl.member;
# Failover Standby Becomes Primary
End all activities on the standby database.
May need to resolve redo log gaps (not shown here).
On the standby: SQL> alter database recover managed standby database finish;
SQL> alter database commit to switchover to primary;
SQL> shutdown immediate
SQL> startup
Change tnsnames.ora entry on all servers to point the primary connect string to the standby database.
New standby needs to be created. Old primary is no longer functional.
Monitoring Standby Database
select count(*) from v$archive_gap;
This query detects gaps in the logs that have been received. If any rows are returned by this query then
there is a gap in the sequence numbers of the logs that have been received.
This gap must be resolved before logs can be applied.
SELECT decode(count(*),0,0,1) FROM v$managed_standby WHERE (PROCESS='ARCH' AND STATUS NOT
IN ('CONNECTED')) OR (PROCESS='MRP0' AND STATUS NOT IN ('WAIT_FOR_LOG','APPLYING_LOG'))
OR (PROCESS='RFS' AND STATUS NOT IN ('IDLE','RECEIVING'));
This query detects bad statuses. When a bad status is present this query will return a 1.
The ARCH process should always be CONNECTED. The MRP0 process should always be waiting for a
log or applying a log, and when this is not true it will report the error in the status. The RFS process
exists when the Primary is connected to the Standby and should always be IDLE or RECEIVING.
SELECT DECODE(COUNT(DISTINCT PROCESS),3,0,1) FROM v$managed_standby;
This query detects missing processes. If we do not have exactly 3 distinct processes then there is a
problem, and this query will return a 1.
The most likely process to be missing is the RFS which is the connection to the Primary database. You
must resolve the problem preventing the Primary from connecting to the Standby before this process
will start running again.
# Verify all STANDBY PROCESSES are running normally on the STANDBY database.
SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME, APPLIED FROM V$ARCHIVED_LOG WHERE FIRST_TIME >
TRUNC(SYSDATE) ORDER BY SEQUENCE#;