3.) shmall : This parameter controls the total amount of shared memory (in pages) that can be
used at one time on the system. The value of this parameter should always be at least
ceil(shmmax / page size). We can determine the current value of shmall by performing the following:
# cat /proc/sys/kernel/shmall
268435456
For most Linux systems, the default value for shmall is 2097152, which is adequate for most
configurations. The default value for shmall in CentOS 5 is 268435456 (see above), which is more than
enough for the Oracle configuration described in this article. Note that this value of 268435456 is not
the "normal" default value for shmall in a Linux environment. To make the setting persistent, insert
the following two entries in the file /etc/sysctl.conf:
# Controls the total amount of shared memory that can be used at one time, in pages
kernel.shmall = 268435456
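Since shmall is expressed in pages while shmmax is in bytes, the relationship between the two can be sketched as follows (the 4 KB page size and 4 GB shmmax are hypothetical illustration values, not taken from this system; on a live Linux box the page size comes from getconf PAGE_SIZE):

```shell
# Hypothetical values for illustration only.
page_size=4096                          # bytes per page
shmmax=$((4 * 1024 * 1024 * 1024))      # assumed shmmax of 4 GB, in bytes
# shmall is counted in pages, so divide the byte limit by the page size.
shmall=$(( shmmax / page_size ))
echo "$shmall"                          # 1048576 pages
```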
4.) shmmin : This parameter controls the minimum size (in bytes) for a shared memory segment.
The default value for shmmin is 1 and is adequate for the Oracle configuration described in this
article. We can determine the value of shmmin by performing the following:
# ipcs -lm | grep "min seg size"
min seg size (bytes) = 1
Semaphores :
After the DBA has configured the shared memory settings, it is time to take care of configuring the
semaphores. The best way to describe a semaphore is as a counter that is used to provide
synchronization between processes (or threads within a process) for shared resources like shared
memory. Semaphore sets are supported in System V where each one is a counting semaphore. When
an application requests semaphores, it does so using "sets". To determine all current semaphore
limits, use the following:
# ipcs -ls
------ Semaphore Limits --------
max number of arrays = 128
max semaphores per array = 250
max semaphores system wide = 32000
max ops per semop call = 32
semaphore max value = 32767
We can also use the following command:
# cat /proc/sys/kernel/sem
250     32000   32      128
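The four fields of /proc/sys/kernel/sem are, in order, semmsl, semmns, semopm, and semmni, which matches the ipcs -ls output above. A small sketch that labels a sample line:

```shell
# Sample values copied from the output above.
sem="250 32000 32 128"
set -- $sem
semmsl=$1; semmns=$2; semopm=$3; semmni=$4
echo "semmsl=$semmsl semmns=$semmns semopm=$semopm semmni=$semmni"
```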
The following list describes the kernel parameters that can be used to change the semaphore
configuration for the server:
i.) semmsl - This kernel parameter is used to control the maximum number of semaphores per
semaphore set. Oracle recommends setting semmsl to the largest PROCESSES instance parameter
setting in the init.ora file across all databases on the Linux system, plus 10. Also, Oracle recommends
setting semmsl to a value of no less than 100.
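That recommendation can be sketched as a small calculation (the PROCESSES value is hypothetical):

```shell
largest_processes=200                 # hypothetical largest PROCESSES setting
semmsl=$(( largest_processes + 10 ))
[ "$semmsl" -lt 100 ] && semmsl=100   # never less than 100, per the recommendation
echo "$semmsl"                        # 210
```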
ii.) semmni - This kernel parameter is used to control the maximum number of semaphore sets in
the entire Linux system. Oracle recommends setting semmni to a value of no less than 100.
iii.) semmns - This kernel parameter is used to control the maximum number of semaphores (not
semaphore sets) in the entire Linux system. Oracle recommends setting semmns to the sum of the
PROCESSES instance parameter settings for each database on the system, plus twice the largest
PROCESSES setting, plus 10 for each Oracle database on the system. The maximum number of
semaphores that can actually be allocated on a Linux system is the lesser of:
SEMMNS -or- (SEMMSL * SEMMNI)
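As a sketch, for a system with three databases whose PROCESSES settings are hypothetical values of 100, 150, and 200, the recommendation works out as:

```shell
# Hypothetical PROCESSES settings for three databases.
p1=100; p2=150; p3=200
largest=200
databases=3
# sum of PROCESSES + twice the largest + 10 per database
semmns=$(( p1 + p2 + p3 + 2 * largest + 10 * databases ))
echo "$semmns"    # 880
```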
Logical Structures
The logical units are tablespace, segment, extent, and data block.
Tablespace
A tablespace is a grouping of logical database objects. A database must have one or more tablespaces.
In Figure 3, we have three tablespaces: the SYSTEM tablespace, Tablespace 1, and Tablespace 2.
A tablespace is composed of one or more datafiles.
Segment
A tablespace is further broken into segments. A segment stores objects of the same type. That
is, every table in the database is stored in its own segment (called a data segment) and every
index in the database is also stored in its own segment (called an index segment). The other segment
types are temporary segments and rollback segments.
Extent
A segment is further broken into extents. An extent consists of one or more data blocks. When a
database object grows, a new extent is allocated to it. Unlike a tablespace or a segment, an extent
cannot be named.
Data Block
A data block is the smallest unit of storage in the Oracle database. The data block size is a specific
number of bytes, and every block within a tablespace has the same size.
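To make the units concrete: with the common 8 KB block size (the same db_block_size used in the init.ora example later in this document), a hypothetical 1 MB extent holds 128 blocks:

```shell
block_size=8192                  # bytes per data block (db_block_size)
extent_size=$((1024 * 1024))     # a hypothetical 1 MB extent
blocks=$(( extent_size / block_size ))
echo "$blocks"                   # 128
```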
Physical Structures
The physical structures are structures of an Oracle database (in this case the disk files) that are not
directly manipulated by users. The physical structure consists of datafiles, redo log files, and control
files.
Read-only mode does not restrict database recovery or operations that change the database's state
without generating redo data. For example, in read-only mode:
* Datafiles can be taken offline and online
* Offline datafiles and tablespaces can be recovered
* The control file remains available for updates about the state of the database
Shutdown the database
The three steps to shutting down a database and its associated instance are:
* Close the database.
* Unmount the database.
* Shut down the instance.
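In SQL*Plus, a single command performs all three steps; a minimal sketch, assuming a session connected AS SYSDBA:

```sql
-- Closes the database, unmounts it, and shuts down the instance.
SHUTDOWN IMMEDIATE
```

The other shutdown modes (NORMAL, TRANSACTIONAL, ABORT) perform the same three steps but differ in how in-flight sessions and transactions are handled.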
Close a Database
When you close a database, Oracle writes all database data and recovery data in the SGA to the
datafiles and redo log files, respectively. Next, Oracle closes all online datafiles and online redo log
files. At this point, the database is closed and inaccessible for normal operations. The control files
remain open while the database is closed but still mounted.
Close the Database by Terminating the Instance
In rare emergency situations, you can terminate the instance of an open database to close and
completely shut down the database instantaneously. This process is fast, because the operation of
writing all data in the buffers of the SGA to the datafiles and redo log files is skipped. The subsequent
reopening of the database requires recovery, which Oracle performs automatically.
Unmount a Database
After the database is closed, Oracle unmounts the database to disassociate it from the instance. At
this point, the instance remains in the memory of your computer.
After a database is unmounted, Oracle closes the control files of the database.
Shut Down an Instance
The final step in database shutdown is shutting down the instance. When you shut down an instance,
the SGA is removed from memory and the background processes are terminated.
Abnormal Instance Shutdown
In unusual circumstances, shutdown of an instance might not occur cleanly; all memory structures
might not be removed from memory or one of the background processes might not be terminated.
When remnants of a previous instance exist, a subsequent instance startup most likely will fail. In
such situations, the database administrator can force the new instance to start up by first removing
the remnants of the previous instance and then starting a new instance, or by issuing a SHUTDOWN
ABORT statement in Enterprise Manager.
Managing an Oracle Instance
When Oracle engine starts an instance, it reads the initialization parameter file to determine the
values of initialization parameters. Then, it allocates an SGA and creates background processes. At
this point, no database is associated with these memory structures and processes.
Types of initialization file:
* Static parameter file (PFILE)
* Persistent server parameter file (SPFILE)
An initialization parameter file typically contains:
* Instance parameters
* Name of the database
* Memory structure sizes of the SGA
* Name and location of the control files
* Information about undo segments
* Location of the udump, bdump and cdump directories
Creating an SPFILE:
CREATE SPFILE='..ORA' FROM PFILE='..ORA';
Note:
* Requires the SYSDBA privilege.
* Can be executed before or after instance startup.
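A concrete sketch of the statement above; the file paths and SID are hypothetical, not taken from this document:

```sql
CREATE SPFILE='/u01/app/oracle/product/10.2.0/dbs/spfileTEST.ora'
  FROM PFILE='/u01/app/oracle/product/10.2.0/dbs/initTEST.ora';
```

If no paths are given, Oracle uses the platform's default names and location (on Unix, typically $ORACLE_HOME/dbs).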
Oracle Background Processes
An Oracle instance runs two types of processes:
* Server processes
* Background processes
Before doing any work, a user must connect to an instance. When a user logs on to the Oracle server,
the Oracle engine creates a server process, which communicates with the Oracle instance on behalf of
the user process.
Each background process serves a specific purpose, and its role is well defined.
Background processes are started automatically when the instance starts.
Database Writer (DBWr)
Process Name: DBW0 through DBW9 and DBWa through DBWj
Max Processes: 20
This process writes the dirty buffers of the database buffer cache to the data files. One database writer
process is sufficient for most systems; more can be configured if needed. The initialisation
parameter DB_WRITER_PROCESSES specifies the number of database writer processes to start.
The DBWn process writes dirty buffer to disk under the following conditions:
When a checkpoint is issued.
Process Monitor (PMON)
Process Name: PMON
Max Processes: 1
This process is responsible for performing recovery if a user process fails. It will roll back uncommitted
transactions. PMON is also responsible for cleaning up the database buffer cache and freeing resources
that were allocated to a failed process. PMON also registers information about the instance and
dispatcher processes with the network listener.
PMON wakes up every 3 seconds to perform housekeeping activities. PMON must always be running
for an instance.
Checkpoint Process
Process Name: CKPT
Max processes: 1
Checkpoint process signals the synchronization of all database files with the checkpoint information. It
ensures data consistency and faster database recovery in case of a crash.
CKPT ensures that all database changes present in the buffer cache at that point are written to the
data files; the actual writing is done by the database writer process. The datafile headers and the
control files are then updated with the latest SCN (the point at which the checkpoint occurred); this
updating is done by the CKPT process itself.
The CKPT process is invoked under the following conditions:
When a log switch is done.
When the time specified by the initialization parameter LOG_CHECKPOINT_TIMEOUT exists between
the incremental checkpoint and the tail of the log; this is in seconds.
When the number of blocks specified by the initialization parameter LOG_CHECKPOINT_INTERVAL
exists between the incremental checkpoint and the tail of the log; these are OS blocks.
The number of buffers specified by the initialization parameter FAST_START_IO_TARGET required to
perform roll-forward is reached.
Oracle 9i onwards, the time specified by the initialization parameter FAST_START_MTTR_TARGET is
reached; this is in seconds and specifies the time required for a crash recovery. The parameter
FAST_START_MTTR_TARGET replaces LOG_CHECKPOINT_INTERVAL and FAST_START_IO_TARGET, but
these parameters can still be used.
* When the ALTER SYSTEM SWITCH LOGFILE command is issued.
* When the ALTER SYSTEM CHECKPOINT command is issued.
Incremental Checkpoints initiate the writing of recovery information to datafile headers and
controlfiles. Database writer is not signaled to perform buffer cache flushing activity here.
Archiver
Process Name: ARC0 through ARC9
Max Processes: 10
The ARCn process is responsible for writing the online redo log files to the mentioned archive log
destination after a log switch has occurred. ARCn is present only if the database is running in
archivelog mode and automatic archiving is enabled. The log writer process is responsible for starting
multiple ARCn processes when the workload increases. Until ARCn completes copying a redo log file
to the archive destination, the log writer cannot reuse that redo log group.
Job Queue Processes
Job queue processes carry out batch processing. All scheduled jobs are executed by these processes.
The initialization parameter JOB_QUEUE_PROCESSES specifies the maximum job processes that can
be run concurrently. If a job fails with some Oracle error, it is recorded in the alert file and a process
trace file is generated. Failure of the Job queue process will not cause the instance to fail.
Dispatcher
Process Name: Dnnn
Max Processes: -
Dispatchers are used only in shared server (MTS) setups. Dispatcher processes listen for and receive
requests from connected sessions and place them in the request queue for further processing. Dispatcher
processes also pick up outgoing responses from the result queue and transmit them back to the clients.
Dnnn processes are mediators between the client processes and the shared server processes. The maximum
number of dispatcher processes can be specified using the initialization parameter MAX_DISPATCHERS.
Shared Server Processes
Process Name: Snnn
Max Processes: -
Shared server processes are used only in shared server (MTS) setups. These processes pick up requests
from the call request queue, process them, and then return the results to a result queue. The number of
shared server processes to be created at instance startup can be specified using the initialization
parameter SHARED_SERVERS.
Parallel Execution Slaves
Process Name: Pnnn
Max Processes: -
These processes are used for parallel processing. They can be used for parallel execution of SQL
statements or for parallel recovery. The maximum number of parallel processes that can be invoked is
specified by the initialization parameter PARALLEL_MAX_SERVERS.
Trace Writer
Process Name: TRWR
Max Processes: 1
Trace writer writes trace files from an Oracle internal tracing facility.
Input/Output Slaves
Process Name: Innn
Max Processes: -
These processes are used to simulate asynchronous I/O on platforms that do not support it. The
initialization parameter DBWR_IO_SLAVES is set for this purpose.
Wakeup Monitor Process
Process Name: WMON
Max Processes: -
This process was available in older versions of Oracle to alarm other processes that are suspended
while waiting for an event to occur. This process is obsolete and has been removed.
Conclusion
With every release of Oracle, new background processes have been added and some existing ones
modified. These processes are the key to the proper working of the database. Any issues related to
background processes should be monitored and analyzed from the trace files generated and the alert
log.
Create Stand-alone 10g Database Manually
Step 1 Create an initSID.ora (example: initTEST.ora) file in the $ORACLE_HOME/dbs/ directory.
Example: $ORACLE_HOME/dbs/initTEST.ora
Put following entry in initTEST.ora file
##############################################################
background_dump_dest=<put BDUMP log destination>
core_dump_dest=<put CDUMP log destination>
user_dump_dest=<put UDUMP log destination>
control_files = (/<Destination>/control1.ctl, /<Destination>/control2.ctl, /<Destination>/control3.ctl)
undo_management = AUTO
undo_tablespace = UNDOTBS1
db_name = test
db_block_size = 8192
sga_max_size = 1073741824
sga_target = 1073741824
####################################################
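The sga_max_size and sga_target values above are simply 1 GB expressed in bytes, which is easy to verify:

```shell
# 1 GB in bytes, matching the sga_max_size/sga_target values above.
sga_bytes=$(( 1024 * 1024 * 1024 ))
echo "$sga_bytes"    # 1073741824
```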
Step 2 Create a password file
$ORACLE_HOME/bin/orapwd file=$ORACLE_HOME/dbs/pwd<sid>.ora password=<password>
entries=5
Step 3 Set your ORACLE_SID
$ export ORACLE_SID=test
$ export ORACLE_HOME=/<Destination>
Step 4 Run the following sqlplus command to connect to the database and startup the instance.
$sqlplus '/ as sysdba'
SQL> startup nomount
Step 5 Create the database using the following script:
create database test
logfile group 1 ('<Destination>/redo1.log') size 100M,
group 2 ('<Destination>/redo2.log') size 100M,
group 3 ('<Destination>/redo3.log') size 100M
Important:
If the value of DB_FILES is too low, you cannot add datafiles beyond the DB_FILES limit.
Example: if the init parameter DB_FILES is set to 2, you cannot add more than 2 datafiles to your database.
If the value of DB_FILES is too high, memory is unnecessarily consumed.
When you issue a CREATE DATABASE or CREATE CONTROLFILE statement, the MAXDATAFILES parameter
specifies an initial size for the datafiles section of the control file. However, if you attempt to add a
new file whose number is greater than MAXDATAFILES but less than or equal to DB_FILES, the control
file will expand automatically so that the datafiles section can accommodate more files.
Note:
If you add new datafiles to a tablespace and do not fully specify the filenames, the database creates
the datafiles in the default database directory. Oracle recommends that you always specify a fully
qualified name for a datafile. Unless you want to reuse existing files, make sure the new filenames do
not conflict with other files. Old files that have been previously dropped will be overwritten.
How to add a datafile to an existing tablespace?
alter tablespace <Tablespace_Name> add datafile '/............../......./file01.dbf' size 10m autoextend
on;
How to resize the datafile?
alter database datafile '/............../......./file01.dbf' resize 100M;
How to bring datafile online and offline?
alter database datafile '/............../......./file01.dbf' online;
alter database datafile '/............../......./file01.dbf' offline;
How to rename a datafile in a single tablespace?
Step:1 Take the tablespace that contains the datafiles offline. The database must be open.
alter tablespace <Tablespace_Name> offline normal;
Step:2 Rename the datafiles using the operating system.
Step:3 Use the ALTER TABLESPACE statement with the RENAME DATAFILE clause to change the
filenames within the database.
alter tablespace <Tablespace_Name> rename datafile '/...../..../..../user.dbf' to
'/..../..../.../users1.dbf';
Case : 4
If you do not want to follow any of these procedures, there are other things that can be done besides
dropping the tablespace.
If the reason you wanted to drop the file is because you mistakenly created the file of the
wrong size, then consider using the RESIZE command.
If you really added the datafile by mistake, and Oracle has not yet allocated any space within
this datafile, then you can use the ALTER DATABASE DATAFILE '<filename>' RESIZE <size>; command to
make the file smaller than 5 Oracle blocks. If the datafile is resized to smaller than 5 Oracle blocks,
then it will never be considered for extent allocation. At some later date, the tablespace can be rebuilt
to exclude the incorrect datafile.
Important : The ALTER DATABASE DATAFILE <datafile name> OFFLINE DROP command is not
meant to allow you to remove a datafile. What the command really means is that you are offlining the
datafile with the intention of dropping the tablespace.
Important : If you are running in archivelog mode, you can also use: ALTER DATABASE DATAFILE
<datafile name> OFFLINE; instead of OFFLINE DROP. Once the datafile is offline, Oracle no longer
attempts to access it, but it is still considered part of that tablespace. This datafile is marked only as
offline in the controlfile and there is no SCN comparison done between the controlfile and the datafile
during startup (This also allows you to startup a database with a non-critical datafile missing). The
entry for that datafile is not deleted from the controlfile to give us the opportunity to recover that
datafile.
Managing Control Files
A control file is a small binary file that records the physical structure of the database, including the
database name, the names and locations of the associated datafiles and online redo log files, the
timestamp of database creation, the current log sequence number, and checkpoint information.
Note:
You should create two or more copies of the control file during database creation.
Role of Control File:
* If you do not specify files for CONTROL_FILES before database creation, and you are not using
the Oracle Managed Files feature, Oracle creates a control file in the <DISK>:\ORACLE_HOME\DATABASE
location and uses a default filename. The default name is operating system specific.
* Every Oracle database should have at least two control files, each stored on a different disk. If
a control file is damaged due to a disk failure, the associated instance must be shut down.
* Oracle writes to all filenames listed for the initialization parameter CONTROL_FILES in the
database's initialization parameter file.
* The first file listed in the CONTROL_FILES parameter is the only file read by the Oracle
database server during database operation.
* If any of the control files become unavailable during database operation, the instance becomes
inoperable and should be aborted.
You need to create a new control file in the following situations:
* All control files for the database have been permanently damaged and you do not have a
control file backup.
* You want to change one of the permanent database parameter settings originally specified in
the CREATE DATABASE statement. These settings include the database's name and the following
parameters: MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY, MAXDATAFILES, and
MAXINSTANCES.
If you specified NORESETLOGS when creating the control file, open the database normally:
ALTER DATABASE OPEN;
If you specified RESETLOGS when creating the control file, use the ALTER DATABASE statement with
the RESETLOGS option:
ALTER DATABASE OPEN RESETLOGS;
TIPS:
When creating a new control file, select the RESETLOGS option if you have lost any online redo log
groups in addition to control files. In this case, you will need to recover from the loss of the redo logs.
You must also specify the RESETLOGS option if you have renamed the database. Otherwise, select the
NORESETLOGS option.
Backing Up Control Files
Method 1:
Back up the control file to a binary file (duplicate of existing control file) using the following
statement:
ALTER DATABASE BACKUP CONTROLFILE TO '<DISK>:\Directory\control.bkp';
Method 2:
Produce SQL statements that can later be used to re-create your control file:
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
How to retrieve information related to Control File:
V$DATABASE
Displays database information from the control file
V$CONTROLFILE
Lists the names of control files
Important:
You must have at least two online redo log groups.
You cannot drop an active online redo log group. If the group is active, switch out of it with
ALTER SYSTEM SWITCH LOGFILE before dropping it.
Also make sure that the online redo log group has been archived (if archiving is enabled).
Syntax:
If you want to drop a log group:
ALTER DATABASE DROP LOGFILE GROUP <GROUP_NUMBER>;
If you want to drop a logfile member:
ALTER DATABASE DROP LOGFILE MEMBER '<DISK>:\Directory\<LOG_FILE_NAME>.log';
How to view online redo log information?
SELECT * FROM V$LOG;
GROUP# THREAD#   SEQ   BYTES MEMBERS ARC STATUS   FIRST_CHANGE# FIRST_TIM
------ ------- ----- ------- ------- --- -------- ------------- ---------
     1       1 10605 1048576       1 YES ACTIVE        11515628 16-APR-00
     2       1 10606 1048576       1 NO  CURRENT       11517595 16-APR-00
     3       1 10603 1048576       1 YES INACTIVE      11511666 16-APR-00
Important:
A temporary tablespace cannot contain permanent objects and therefore does not need to be
backed up.
When we create a TEMPFILE, Oracle only writes to the header and the last block of the file. This is
why it is much quicker to create a TEMPFILE than to create a normal database file.
TEMPFILEs are not recorded in the database's control file.
We cannot remove datafiles from a tablespace until we drop the entire tablespace, but we can
remove a TEMPFILE from a database:
SQL> ALTER DATABASE TEMPFILE '<disk>:\<directory>\<Tablespace Name>.dbf' DROP INCLUDING
DATAFILES;
Except for adding a tempfile, you cannot use the ALTER TABLESPACE statement for a locally
managed temporary tablespace (operations like rename, set to read only, recover, etc. will fail).
How to create a temporary tablespace?
CREATE TEMPORARY TABLESPACE temp
TEMPFILE '<disk>:\<directory>\<Tablespace Name>.dbf' size 20M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 16M;
For best performance, the UNIFORM SIZE must be a multiple of the SORT_AREA_SIZE parameter.
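A quick sanity check that a candidate UNIFORM SIZE is a multiple of SORT_AREA_SIZE (the SORT_AREA_SIZE value here is hypothetical):

```shell
sort_area_size=$(( 1024 * 1024 ))      # hypothetical SORT_AREA_SIZE of 1 MB
uniform_size=$(( 16 * 1024 * 1024 ))   # the 16 MB UNIFORM SIZE from the example above
remainder=$(( uniform_size % sort_area_size ))
test "$remainder" -eq 0 && echo "uniform size is a multiple"
```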
Important:
The default Default Temporary Tablespace is SYSTEM.
Each database can be assigned one and only one Default Temporary Tablespace.
The Default Temporary Tablespace is automatically assigned to users.
PIECES shows the number of free space extents in the tablespace file, MAXIMUM and MINIMUM show
the largest and smallest contiguous area of space in database blocks, AVERAGE shows the average
size in blocks of a free space extent, and TOTAL shows the amount of free space in each tablespace
file in blocks. This query is useful when you are going to create a new object or you know that a
segment is about to extend, and you want to make sure that there is enough space in the containing
tablespace.
Managing Tablespaces
A tablespace is a logical storage unit. It is called logical because a tablespace is not visible in the
file system; Oracle stores data physically in datafiles. A tablespace consists of one or more datafiles.
Types of tablespaces:
System tablespace
Created with the database
Required in all databases
Contains the data dictionary
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K; --> creates a locally managed
tablespace in which every extent is 128K
EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO; --> creates a tablespace
with automatic segment-space management (ASSM).
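A full CREATE TABLESPACE statement using these clauses might look like the following sketch; the tablespace name and datafile path reuse the examples below but are otherwise hypothetical:

```sql
CREATE TABLESPACE DATA_1_TBS
  DATAFILE 'c:\oradata\data_1.dbf' SIZE 20M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;
```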
Add a file to the tablespace
ALTER TABLESPACE DATA_1_TBS
ADD DATAFILE 'c:\oradata\data_file2.dbf' SIZE 30M AUTOEXTEND OFF;
To get more information on the files which are associated with a tablespace the following query
could be used:
SELECT TABLESPACE_NAME, FILE_NAME, FILE_ID, AUTOEXTENSIBLE, ONLINE_STATUS
FROM DBA_DATA_FILES ORDER BY 1;
Remove a file from a tablespace (Resizing a tablespace)
Removing a file from a tablespace cannot be done directly. First, the objects must be moved to
another tablespace; then the initial tablespace is dropped and recreated, after which the objects can
be moved back into the resized tablespace. If the reason you wanted to drop the file is that you
mistakenly created the file with the wrong size, then consider using the RESIZE command instead.
Add more space to a tablespace without adding a new file
ALTER DATABASE DATAFILE 'C:\oradata\data_1.dbf' RESIZE 25M;
Dropping a tablespace
DROP TABLESPACE DATA_1_TBS; (if the tablespace is empty)
DROP TABLESPACE DATA_1_TBS INCLUDING CONTENTS; (if the objects in the tablespace are no
longer needed)
However, the files must be deleted at the OS level.
Rename a tablespace
ALTER TABLESPACE DATA_1_TBS RENAME TO DATA_2_TBS;
SQL*Loader is an Oracle-supplied utility that allows you to load data from a flat file
(the flat file must be formatted) into an Oracle database. SQL*Loader supports
various load formats, selective loading, and multi-table loads. The SQL*Loader utility
(which must be run at the OS level) uses a control file that describes how the data is
formatted and inserted into the Oracle database. During the insert operation,
discard, log, and bad files are created. The log file is a record of SQL*Loader's
activities during a load session. When a row is not inserted into a table (constraint
violations, not enough disk space, etc.), a record is written to the bad file. Sometimes
the control file contains criteria that a record must meet before it is loaded.
If the criteria are not met, the record is written not to the database but to the discard
file. The discard file is optional.
Invoking SQL*Loader
$ sqlldr userid=scott/<password> control=sqlloader.ctl
In this case SQL*Loader connects to the database as scott and, using the
information provided in the sqlloader.ctl file, inserts the data into the database.
Supposing we have a text file which contains data for SCOTT.DEPT1 table. Here is the
content of the file C:\DEPT.txt :
load data
infile 'c:\DEPT.txt'
into table DEPT1
fields terminated by "," optionally enclosed by '"'
( DNAME, DEPTNO, LOC )
The following command will insert the data into the DEPT1 table:
$ sqlldr userid=scott/<password> control=sqlloader.ctl
The default behavior of SQL*Loader is to insert data in an empty table. If the table is
not empty an error will occur. If we want to append data the APPEND parameter must
be added in the control file. If we want to replace the old data with the new one the
REPLACE parameter must be added. Here is an example using APPEND parameter:
load data
infile 'c:\DEPT.txt'
APPEND
into table DEPT1
fields terminated by "," optionally enclosed by '"'
( DNAME, DEPTNO, LOC )
Also, the WHEN clause could be added to filter the data which will be inserted in the
database:
load data
infile 'c:\DEPT.txt'
INSERT into table DEPT1
WHEN (12:13) = '10'
( DNAME POSITION(1:10),
DEPTNO POSITION(12:13),
LOC POSITION(15:23))
Data transformation
Data transformation is possible during the data load. The control file must be modified
to allow this. Here is an example of control file which allow data transformation:
load data
infile 'c:\DEPT.txt'
into table DEPT1
fields terminated by "," optionally enclosed by '"'
( DNAME,
DEPTNO,
LOC constant "TORONTO")
Here are other examples where data is transformed at the column level:
-> using sequences:
-> modifying the data from the file:
(...)
-> using a constant:
-> using an Oracle function:
(...)
The example which use FIELDS TERMINATED BY "," allows variable length records,
because the columns are delimited by "," (could be used any other sign). Sometimes
we don't have delimiters and the columns are fixed length values. In this case the
control file must be like:
load data
infile 'c:\DEPT.txt'
into table DEPT1
ACCOUNTING 10 OTTAWA
MANAGEMENT 20 MONTREAL
RESEARCH   30 HALIFAX
SALES      40 QUEBEC
Here is an example where the insertions are done on 2 tables (in my example DEPT1
and DEPT2 have identical structure):
load data
infile 'c:\DEPT.txt'
into table DEPT1
( DNAME POSITION(1:10),
DEPTNO POSITION(12:13),
LOC POSITION(15:23))
into table DEPT2
( DNAME POSITION(1:10),
DEPTNO POSITION(12:13),
LOC POSITION(15:23))
The ADR directory structure is designed to use consistent diagnostic data formats across products
and instances, and an integrated set of tools enables customers and Oracle Support to correlate and
analyze diagnostic data across multiple instances.
In 11g the alert log is saved in two locations: an XML version in the alert directory and the old-style
text alert file in the trace directory. Within the ADR base there can be many ADR homes, where each
ADR home is the root directory for all diagnostic data for a particular instance. The location of an ADR
home for a database is shown in the pictures above. Both files can be viewed with EM and the
ADRCI utility.
SQL> show parameter diag

NAME                 TYPE     VALUE
-------------------- -------- ---------
diagnostic_dest      string   D:\ORACLE
The table below shows the new location of diagnostic trace files, listing each type of data with its
old location and its ADR location.
Packaging of incident and problem information into a zip file for transmission to Oracle
Support.
Diagnostic data includes incident and problem descriptions, trace files, dumps, health monitor reports,
alert log entries, and more.
ADRCI has a rich command set, and can be used in interactive mode or within scripts. In addition,
ADRCI can execute scripts of ADRCI commands in the same way that SQL*Plus executes scripts of
SQL and PL/SQL commands.
To use ADRCI in interactive mode :
Enter the following command at the operating system command prompt:
C:\>adrci
ADRCI: Release 11.1.0.6.0 - Beta on Wed May 18 12:31:40 2011
Copyright (c) 1982, 2007, Oracle. All rights reserved.
ADR base = "d:\oracle"
To get the list of ADRCI commands, type the HELP command as below:
adrci> help
HELP [topic]
Obtain a list of trace files whose file name matches a search string.
If you are using an spfile, instance parameters can be changed permanently using SQL*Plus
commands. If you are using a pfile, you have to edit the pfile with an editor to change values permanently.
3. Spfile names should be either spfile<SID>.ora or spfile.ora. Pfile names must be init<SID>.ora.
(More information on spfile naming)
Now let's assume that you are using an spfile and create a pfile from it.

SQL> create pfile='/home/oracle/mypfile.ora' from spfile;

File created.

$ file /home/oracle/mypfile.ora
/home/oracle/mypfile.ora: ASCII text
The command above will transform your current spfile into pfile and will store the pfile at the location
you specify. (/home/oracle/mypfile.ora)
One thing to mention is that, in fact, there does not even have to be a database or an instance to
perform the transformation. SQL*Plus is the only required tool.
Loss of an Spfile
If you lose your spfile (for example, you accidentally deleted it) and your instance is up, you can
recreate the spfile. The values of the parameters are read by the instance at startup, so your instance
knows all the values and can create a new spfile from them.

SQL> create spfile='/home/oracle/myspfile.ora' from memory;

File created.

$ file /home/oracle/myspfile.ora
/home/oracle/myspfile.ora: data
If you lose your spfile and your database is down, you will have to restore the spfile from a backup. That subject relates to RMAN (Recovery Manager) and is not covered in this article.
Instance Parameters
There are hundreds of instance parameters that determine the way an instance operates. As an
administrator, you have to set each of these parameters correctly.
All these parameters are stored in a file called the parameter file. (More information on spfile.)
How To View Parameters Of An Instance
A query against the V$PARAMETER view returns output such as:

NAME              VALUE  DESCRIPTION
----------------  -----  -----------------------------------------------------
lock_name_space          lock name space used for generating lock names for
                         standby/clone database
processes         550    user processes
sessions          848
timed_statistics  TRUE
Modifying Parameters
1. Session-wide Parameters
You can change the value of a parameter session-wide using the "ALTER SESSION" command. The scope
is limited to the session, not the instance, and the change is valid only for the current session. At the next
login, you'll see that the parameter has been reset. The V$PARAMETER view shows the session-wide parameters.
SQL> SELECT name, VALUE, isses_modifiable
       FROM V$PARAMETER
      WHERE NAME = 'nls_language';

NAME          VALUE    ISSES_MODIFIABLE
------------  -------  ----------------
nls_language  TURKISH  TRUE
The ISSES_MODIFIABLE column shows whether this parameter can be changed using the "ALTER SESSION" command. In this case it is "TRUE", which means that I can change it.
SQL> alter session set nls_language='ENGLISH';

SQL> SELECT name, VALUE, isses_modifiable
       FROM V$PARAMETER
      WHERE NAME = 'nls_language';

NAME          VALUE    ISSES_MODIFIABLE
------------  -------  ----------------
nls_language  ENGLISH  TRUE
2. Instance-wide Parameters
You can change the value of a parameter instance-wide using the "ALTER SYSTEM" command. The scope is
instance-wide: every session is affected. When a user logs in, a session is created, and that session
inherits its parameter values from the instance-wide values. The V$SYSTEM_PARAMETER view
shows the instance-wide parameters.
SQL> SELECT name, VALUE, issys_modifiable
       FROM V$SYSTEM_PARAMETER
      WHERE NAME = 'db_recovery_file_dest_size';

NAME                        VALUE       ISSYS_MODIFIABLE
--------------------------  ----------  ----------------
db_recovery_file_dest_size  4227858432  IMMEDIATE
ISSYS_MODIFIABLE => this column shows how a parameter change will affect the instance. If it is
"IMMEDIATE", the change takes effect immediately; such parameters are called dynamic parameters.
If the column is "FALSE", you will have to restart your instance for the change to take effect; such
parameters are called static parameters.
SQL> alter system set db_recovery_file_dest_size=4000000000 scope=both;

System altered.

SQL> select name, value, issys_modifiable
       from v$system_parameter
      where name = 'db_recovery_file_dest_size';

NAME                        VALUE       ISSYS_MODIFIABLE
--------------------------  ----------  ----------------
db_recovery_file_dest_size  4000000000  IMMEDIATE
Scope Option
While changing an instance-wide parameter using the "ALTER SYSTEM" command, you can also set the scope
option to determine the scope of the change. The scope option can take the value "MEMORY", "SPFILE", or
"BOTH".
MEMORY => if you are modifying a dynamic parameter, the change takes effect immediately for the
current instance, but it reverts after a restart of the instance. If you use "MEMORY", the
change is temporary.
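A sketch of a memory-only change; the parameter and value here are illustrative:

SQL> alter system set db_recovery_file_dest_size=4000000000 scope=memory;

System altered.

After the next restart, the instance comes back with the value still recorded in the spfile.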
SPFILE => if the instance was started using an spfile (more information on how a database starts)
and you set "SPFILE" for the scope option, the change is recorded in the spfile, but the current
instance keeps operating with the old value. The change takes effect only after a restart.
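A sketch of recording a change in the spfile only; "processes" is a static parameter, so this is also the only scope a running instance will accept for it:

SQL> alter system set processes=300 scope=spfile;

System altered.

The running instance keeps the old value; the new value takes effect at the next startup.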
BOTH => if you use "BOTH" for the scope option of a dynamic parameter, the change takes effect
immediately and is also permanent.
Querying name, value and ISDEFAULT from V$SYSTEM_PARAMETER returns, for example:

NAME                       VALUE  ISDEFAULT
-------------------------  -----  ---------
log_archive_max_processes  4      TRUE
open_cursors               300    FALSE
If a parameter is not defined in the spfile, the "ISDEFAULT" column is "TRUE"; otherwise it is
"FALSE". I haven't set the "log_archive_max_processes" parameter, so it keeps its default value of 4. However,
I've set the "open_cursors" parameter in the spfile, so it is not running at its default value.

$ strings spfileMYDB.ora | grep open_cursors
open_cursors=300
If the parameter is later removed from the spfile, it falls back to its default value and ISDEFAULT becomes TRUE; the strings command then finds no entry:

NAME          VALUE  ISDEFAULT
------------  -----  ---------
open_cursors  50     TRUE

$ strings spfileMYDB.ora | grep open_cursors
Deprecated Parameters
As new database versions are developed, some parameters used in earlier versions may become
deprecated. There is a column named "ISDEPRECATED" in the V$SYSTEM_PARAMETER view. This column
shows whether the parameter is deprecated in the current version.
SQL> SELECT name, VALUE, isdeprecated
       FROM v$system_parameter
      WHERE name = 'background_dump_dest';

NAME                  VALUE                                ISDEPRECATED
--------------------  -----------------------------------  ------------
background_dump_dest  /u01/diag/rdbms/testdb/testdb/trace  TRUE
As of 11g, the "background_dump_dest" parameter is deprecated. It used to show the path
of the background dump files; in version 11g we have the "diagnostic_dest" parameter instead. As seen
in the query, the "ISDEPRECATED" column shows "TRUE".
How Is a Database Opened?
An instance can be "started" or "shutdown". A database can be "mounted", "opened", "closed" and
"dismounted". However, the instance and the database are so tightly attached that you may also say
"starting a database" or "shutting down a database".
Your database will be opened automatically after a server reboot if you've configured "Oracle Restart"
or if your database is a RAC.
You may also manually shut your database down and then start it up anytime you want. Let's take a
closer look at these stages.
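A minimal manual cycle, assuming a SYSDBA connection:

SQL> shutdown immediate;
SQL> startup;

A plain "startup" passes through the nomount and mount states described below before opening the database.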
3. NOMOUNT State
SQL> startup nomount;
ORACLE instance started.

Total System Global Area 4175568896 bytes
Fixed Size                  2233088 bytes
Variable Size            3137342720 bytes
Database Buffers         1023410176 bytes
Redo Buffers               12582912 bytes
At this stage the instance is started but there is no database yet.
To start an instance, Oracle needs a parameter file. It searches the "$ORACLE_HOME/dbs"
directory in the order below and uses the first file it finds:
- spfile<SID>.ora (spfile)
- spfile.ora (spfile)
- init<SID>.ora (pfile)
You will also find a parameter file named "init.ora" under this directory. This is a template parameter
file that comes with the installation. You may use this template as a starting point for building your
own parameter file.
At nomount stage, as your instance is started, you should be able to see the background processes
associated with the instance.
$ ps -ef | grep myins
oracle   20227     1  0 18:56 ?     00:00:00 ora_pmon_myins
oracle   20229     1  0 18:56 ?     00:00:00 ora_psp0_myins
oracle   20232     1  0 18:56 ?     00:00:00 ora_vktm_myins
oracle   20236     1  0 18:56 ?     00:00:00 ora_gen0_myins
oracle   20238     1  0 18:56 ?     00:00:00 ora_diag_myins
oracle   20240     1  0 18:56 ?     00:00:00 ora_dbrm_myins
oracle   20242     1  0 18:56 ?     00:00:00 ora_dia0_myins
oracle   20244     1 27 18:56 ?     00:00:02 ora_mman_myins
oracle   20246     1  0 18:56 ?     00:00:00 ora_dbw0_myins
oracle   20248     1  0 18:56 ?     00:00:00 ora_lgwr_myins
oracle   20250     1  0 18:56 ?     00:00:00 ora_ckpt_myins
oracle   20252     1  0 18:56 ?     00:00:00 ora_smon_myins
oracle   20254     1  0 18:56 ?     00:00:00 ora_reco_myins
oracle   20256     1  0 18:56 ?     00:00:00 ora_rbal_myins
oracle   20258     1  0 18:56 ?     00:00:00 ora_asmb_myins
oracle   20260     1  0 18:56 ?     00:00:00 ora_mmon_myins
oracle   20262     1  0 18:56 ?     00:00:00 oracle+ASM_asmb_myins (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
oracle   20264     1  0 18:56 ?     00:00:00 ora_mmnl_myins
oracle   20266     1  0 18:56 ?     00:00:00 ora_d000_myins
oracle   20268     1  0 18:56 ?     00:00:00 ora_s000_myins
oracle   20270     1  0 18:56 ?     00:00:00 ora_mark_myins
oracle   20277     1  0 18:56 ?     00:00:00 ora_ocf0_myins
oracle   20282     1  0 18:56 ?     00:00:00 oracle+ASM_ocf0_myins (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
oracle   20397     1  0 18:57 ?     00:00:00 oraclemyins (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=
oracle   20399 16888  0 18:57 pts/1 00:00:00 grep myins
You can also query the spfile, see parameter values, or change them at the nomount stage.

SQL> show parameter

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
O7_DICTIONARY_ACCESSIBILITY          boolean     FALSE
active_instance_count                integer
aq_tm_processes                      integer     0
archive_lag_target                   integer     0
asm_diskgroups                       string
asm_diskstring                       string
asm_power_limit                      integer     1
audit_file_dest                      string      /u01/admin/testdb/adump
audit_sys_operations                 boolean     FALSE
audit_syslog_level                   string
...
"instance_name" parameter shows the name of the instance. However, the value of this parameter is
not stored in parameter file. It is populated from the value of ORACLE_SID environment variable at
the time the startup command is executed.
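A quick way to see this; the SID "myins" is illustrative:

$ echo $ORACLE_SID
myins

SQL> show parameter instance_name

NAME           TYPE    VALUE
-------------- ------- ------
instance_name  string  myins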
4. MOUNT State
SQL> alter database mount;
Database altered.
To proceed to the mount state, Oracle finds the control file and verifies its syntax. However, the
information found in the control file is not validated. For example, the locations of data files and redo
log files are stored in the control file, but Oracle will not check whether these files exist at the mount
stage. There must still be valid records showing the locations of those files in the control file;
otherwise you cannot proceed to the mount state.
The path of the control files is stored in the parameter file.

SQL> show parameter control_files;

NAME            TYPE    VALUE
--------------- ------- -------------------------------------------------
control_files   string  +ORADATA/testdb/controlfile/current.475.758824101,
                        +FRA/testdb/controlfile/current.257.758824101
Here I've got two control files (paths are separated by a comma) that reside in ASM.
You can find which stage you are at by querying the v$database view. This view is not available at the
nomount stage because there is no database yet. In the mount state, however, the database is
associated with the instance.
SQL> select name, open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
TESTDB    MOUNTED

Here, the name of the database is "TESTDB". This is the value of the "db_name" parameter you've set in the
parameter file. And the database is in the mount state.
SQL> show parameter db_name;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_name                              string      testdb

As seen above, the name of the database is determined by the db_name parameter.
Redolog
What is Redolog?
Literally, re-do means to do it again. There is always something changing (update, delete, insert) in
your database, and those changes are recorded by your database. Each record regarding a change in
your database is called a redo record, and these redo records are stored in files called redolog files.
This is an internal mechanism of Oracle. You cannot disable it; every database must have redolog
files.
Why does Oracle need redologs?
Redolog States
CURRENT:
If a redolog group is in the "current" state, redo records are currently being written to that group. The
group stays "current" until a log switch occurs; the act of moving from one redolog group to another is
called a log switch. There can be only one current redolog group at a time. Example:
SQL> select group#, status from v$log;

GROUP#  STATUS
------  --------
     1  CURRENT
     2  INACTIVE
     3  INACTIVE
Here I've got 3 redolog groups. Group 1 is the current redolog group. The other two are inactive.
ACTIVE:
Archivelog
Redolog files are used in a circular fashion. When a redolog file is filled, the log writer process (lgwr)
switches to the next redolog file. These files are officially called "online redolog files", but they are
usually just called "redolog files" in the community. When all the redolog files are filled, the first
redolog file is overwritten.
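Redo records in an overwritten redolog file are lost unless they were archived first. A sketch of switching a database into ARCHIVELOG mode (requires a SYSDBA connection; the database must be mounted but not open when the mode is changed):

SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;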
Oracle returns an error and the database instance shuts down. In this
case, you may need to perform media recovery on the database from the
loss of an online redo log file. If the database checkpoint has moved
beyond the lost redo log, media recovery is not necessary since Oracle
has saved the data recorded in the redo log to the data files. Simply drop
the inaccessible redo log group. If Oracle did not archive the bad log, use
ALTER DATABASE CLEAR UNARCHIVED LOG to disable archiving before
the log can be dropped.
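The recovery action described above can be sketched as follows; the group number is illustrative:

SQL> alter database clear unarchived logfile group 2;
SQL> alter database drop logfile group 2;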
The following views display online redolog information:

V$LOG
Displays the redo log file information from the control file.
V$LOGFILE
Contains information about the redo log files.
Methods of specifying the archive destination:

Parameter                 Host             Example
------------------------  ---------------  ---------------------------------------
LOG_ARCHIVE_DEST_n        Local or remote  LOG_ARCHIVE_DEST_1 =
                                           'LOCATION=/disk1/arc'
LOG_ARCHIVE_DEST and      Local only       LOG_ARCHIVE_DEST = '/disk1/arc'
LOG_ARCHIVE_DUPLEX_DEST                    LOG_ARCHIVE_DUPLEX_DEST = '/disk2/arc'
01 Method
Perform the following steps to set the destination for archived redo logs using the
LOG_ARCHIVE_DEST_n initialization parameter:
(1) Use SQL*Plus to shut down the database with normal or immediate option but not abort.
SHUTDOWN
(2) Edit the LOG_ARCHIVE_DEST_n parameter to specify from one to ten archiving locations. The
LOCATION keyword specifies an operating system specific path name.
For example, enter:
LOG_ARCHIVE_DEST_1 = 'LOCATION = /disk1/archive'
LOG_ARCHIVE_DEST_2 = 'LOCATION = /disk2/archive'
LOG_ARCHIVE_DEST_3 = 'LOCATION = /disk3/archive'
If you are archiving to a standby database, use the SERVICE keyword to specify a valid net service
name from the tnsnames.ora file. For example, enter:
LOG_ARCHIVE_DEST_4 = 'SERVICE = standby1'
(3) Edit the LOG_ARCHIVE_FORMAT initialization parameter, using %s to include the log sequence
number as part of the file name and %t to include the thread number. Use capital letters (%S and
%T) to pad the file name to the left with zeroes. For example, enter:
LOG_ARCHIVE_FORMAT = arch%s.arc
These settings will generate archived logs as follows for log sequence
numbers 100, 101, and 102:
/disk1/archive/arch100.arc,
/disk1/archive/arch101.arc,
/disk1/archive/arch102.arc
/disk2/archive/arch100.arc,
/disk2/archive/arch101.arc,
/disk2/archive/arch102.arc
/disk3/archive/arch100.arc,
/disk3/archive/arch101.arc,
/disk3/archive/arch102.arc
02 Method
The second method, which allows you to specify a maximum of two locations, is to use the
LOG_ARCHIVE_DEST parameter to specify a primary archive destination and the
LOG_ARCHIVE_DUPLEX_DEST to specify an optional secondary archive destination. Whenever Oracle
archives a redo log, it archives it to every destination specified by either set of parameters.
Perform the following steps to use method 2:
(1) Use SQL*Plus to shut down the database.
SHUTDOWN
(2) Specify destinations for the LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST parameters
(you can also specify LOG_ARCHIVE_DUPLEX_DEST dynamically using the ALTER SYSTEM statement).
For example, enter:
LOG_ARCHIVE_DEST = '/disk1/archive'
LOG_ARCHIVE_DUPLEX_DEST = '/disk2/archive'
if you find any entries there, your standby database is not in sync with production, and you have to
apply all those archive log files to fill that gap.
V$ARCHIVE_PROCESSES
Lists the archive processes that are configured and valid, with their ACTIVE state, besides displaying
information about the state of the various archive processes for an instance.
V$PROXY_ARCHIVEDLOG
Contains descriptions of archived log backups taken with a feature called Proxy Copy. Each row
represents a backup of one archived log.
V$ARCHIVED_LOG
Displays historical archived log information from the control file. If you
use a recovery catalog, the RC_ARCHIVED_LOG view contains similar
information.
V$ARCHIVE
Displays information about redo log files in need of archiving.
V$DATABASE
Shows, among other things, whether the database is in ARCHIVELOG or NOARCHIVELOG mode.
V$ARCHIVE_DEST
Describes, for the current instance, all archive destinations and the current value, mode, and status of
these destinations.
V$BACKUP_REDOLOG
Contains information about any backups of archived logs.
V$LOG
Displays all online redo log groups for the database and indicates which need to be archived.
V$LOG_HISTORY
Contains log history information such as which logs have been archived
and the SCN range for each archived log.
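As a quick check against the views above, a sketch that lists which online redolog groups still need archiving:

SQL> select group#, archived, status from v$log;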
The database parameter AUDIT_TRAIL is set to DB by default. Alternatively, it can be changed to
OS for file-based auditing. The audit file location can be checked as follows:
SQL> show parameter audit_file_dest;
Turn auditing on and off with the AUDIT or NOAUDIT command.
For example, you can turn on auditing for any CREATE TABLE statement; you might want to track how
often, and by whom, tables are created in the database. Auditing CREATE TABLE this way means an audit
entry is generated every time someone creates a table.
SQL> audit create table;
SQL> audit create table by user1;
Turn off auditing for CREATE TABLE statements:
SQL> noaudit create table;
Other examples:
SQL> audit drop any table by user1 whenever not successful;
SQL> audit select on hr.employees by access;
SQL> audit select on hr.employees by session;
View Audit Information
To verify the audits you have already implemented, use the following command.
SQL> select * from DBA_PRIV_AUDIT_OPTS;
View the audits turned on for objects owned by HR for the SELECT, INSERT, UPDATE, and DELETE
privileges.
SQL> select OWNER, OBJECT_NAME, OBJECT_TYPE, SEL, INS, UPD, DEL
       from DBA_OBJ_AUDIT_OPTS
      where owner = 'HR';
DBA_AUDIT_TRAIL = shows all audit entries in the system.
DBA_AUDIT_OBJECT = shows all audit entries in the system for objects.
DBA_AUDIT_STATEMENT = shows audit entries for the statements GRANT, REVOKE, AUDIT, NOAUDIT,
and ALTER SYSTEM.
To debug an application, a DBA sometimes needs to connect as another user to simulate the problem.
Without knowing the actual plain-text password of the user, the DBA can retrieve the encrypted
password from the database, change the password for the user, connect with the changed password,
and then change back the password using an undocumented clause of the alter
user command.
The first step is to retrieve the encrypted password for the user, which is stored in the table
DBA_USERS.
SQL> select password from dba_users where username = 'USER1';

PASSWORD
------------------------------
94b7CBD64A941432

Save this password using cut and paste in a GUI environment, or save it in a text file to retrieve later.
The next step is to temporarily change the user's password and then log in using the temporary
password:
SQL> alter user user1 identified by temp_pass;
User altered.
SQL> connect user1/temp_pass@db;
Connected.
At this point, you can debug the application from USER1's point of view. Once you are done
debugging, change the password back using the undocumented "by values" clause of alter user.
SQL> alter user user1 identified by values '94b7CBD64A941432';
Profiles and Resource Control
The list of resource-control profile options that can appear after CREATE PROFILE profilename LIMIT
is explained below. Each of these parameters can be an integer, UNLIMITED, or DEFAULT. As
with the password-related parameters, UNLIMITED means that there is no bound on how much of the
given resource can be used. DEFAULT means that the parameter takes its value from the DEFAULT
profile. The COMPOSITE_LIMIT parameter allows you to control a group of resource limits when the
types of resources typically used vary widely; it allows a user to use a lot of CPU time but
not much disk I/O during one session, and vice versa during another session, without being
disconnected by the policy.
Resource Parameter Description
SESSIONS_PER_USER = The maximum number of sessions a user can simultaneously have
CPU_PER_SESSION = The maximum CPU time allowed per session, in hundredths of a second
CPU_PER_CALL = Maximum CPU time for a statement parse, execute, or fetch operation, in
hundredths of a second
CONNECT_TIME = Maximum total elapsed time, in minutes
IDLE_TIME = Maximum continuous inactive time in a session, in minutes, while a query or other
operation is not in progress
LOGICAL_READS_PER_SESSION = Total number of data blocks read per session, either from memory
or disk
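A sketch of creating a profile with some of these limits; the profile name and values are illustrative:

SQL> CREATE PROFILE app_user_profile LIMIT
       SESSIONS_PER_USER          4
       CPU_PER_SESSION            UNLIMITED
       CONNECT_TIME               480
       IDLE_TIME                  30
       LOGICAL_READS_PER_SESSION  DEFAULT;

The profile is then attached to a user with ALTER USER username PROFILE app_user_profile.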
COMPOSITE_LIMIT : specifies the total resource cost for a session, expressed in service
units. Oracle Database calculates the total service units as a weighted sum of
CPU_PER_SESSION, CONNECT_TIME, LOGICAL_READS_PER_SESSION, and PRIVATE_SGA.
composite_limit <value | unlimited | default>
e.g., SQL> alter profile P1 LIMIT composite_limit 5000000;
CONNECT_TIME : specifies the total elapsed time limit for a session, expressed in
minutes. connect_time <value | unlimited | default>
e.g., SQL> alter profile P1 LIMIT connect_time 600;
CPU_PER_CALL : specifies the CPU time limit for a call (a parse, execute, or fetch), expressed
in hundredths of seconds. cpu_per_call <value | unlimited | default>
e.g., SQL> alter profile P1 LIMIT cpu_per_call 3000;
LOGICAL_READS_PER_CALL : specifies the permitted number of data blocks read for a call
to process a SQL statement (a parse, execute, or fetch). logical_reads_per_call <value | unlimited | default>
e.g., SQL> ALTER PROFILE P1 LIMIT logical_reads_per_call 1000;
PRIVATE_SGA : specifies the amount of private space a session can allocate in the shared
pool of the system global area (SGA). private_sga <value | unlimited | default>. Only valid with a
TP monitor.
e.g., SQL> ALTER PROFILE P1 LIMIT private_sga 15K;
Master Table :
At the heart of every Data Pump operation is the master table. This is a table created in the schema of
the user running a Data Pump job. It is a directory that maintains all details about the job: the
current state of every object being exported or imported, the locations of those objects in the dumpfile
set, the user-supplied parameters for the job, the status of every worker process, the current set of
dump files, restart information, and so on. During a file-based export job, the master table is built
during execution and written to the dumpfile set as the last step. Conversely, loading the master table
into the current user's schema is the first step of a file-based import operation, so that the master
table can be used to sequence the creation of all imported objects. The master table is the key to Data
Pump's ability to restart a job after a planned or unplanned stoppage: because it maintains the status
of every object to be processed, Data Pump knows which objects were being worked on and whether
they completed successfully.
Process Structure
External Tables
Conventional Path
Data Pump will choose the best data movement method for a particular operation. It is also possible
for the user to specify an access method using command-line parameters.
Command and control queue: All processes (except clients) subscribe to this queue. All
API commands, work requests and responses, file requests, and log messages are processed on this
queue.
Status queue: Only shadow processes subscribe to read from this queue. It is used to
receive work-in-progress and error messages queued by the MCP. The MCP is the only writer to this
queue.
File Management : The file manager is distributed across several parts of the Data Pump job. The
actual creation of new files and allocation of file segments is handled centrally within the MCP.
However, each worker and parallel query process makes local process requests to the file manager to
allocate space, read a file chunk, write to a buffer, or update progress statistics. The local file manager
determines if the request can be handled locally and if not, forwards it to the MCP using the command
and control queue. Reading file chunks and updating file statistics in the master table are handled
locally. Writing to a buffer is typically handled locally, but may result in a request to the MCP for more
file space.
A Data Pump parameter file for a table-mode export might contain:

dumpfile=hr_emp_tab.dmp
tables=hr.employees
directory=datapump
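Assuming those three lines are saved in a parameter file (the name emp.par and the credentials below are illustrative), the export could be launched from the shell as:

$ expdp hr/hr parfile=emp.par

The directory object named by directory= must already exist in the database and point to a writable OS path.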
Oracle 10g
SQL> VAR OHM VARCHAR2(100);
SQL> EXEC DBMS_SYSTEM.GET_ENV('ORACLE_HOME', :OHM);
SQL> PRINT OHM;
Note:
Starting with release 9.2, maintenance releases of Oracle Database are denoted by a change to the
second digit of a release number. In previous releases, the third digit indicated a particular
maintenance release.
Major Database Release Number : The first digit is the most general identifier. It represents a
major new version of the software that contains significant new functionality.
Database Maintenance Release Number : The second digit represents a maintenance release
level. Some new features may also be included.
Application Server Release Number : The third digit reflects the release level of the Oracle
Application Server (OracleAS) .
Component-Specific Release Number : The fourth digit identifies a release level specific to a
component. Different components can have different numbers in this position depending upon, for
example, component patch sets or interim releases.
Platform-Specific Release Number : The fifth digit identifies a platform-specific release. Usually
this is a patch set. When different platforms require the equivalent patch set, this digit will be the
same across the affected platforms.
Checking The Current Release Number : To identify the release of Oracle Database that is
currently installed, and to see the release levels of the other database components we are using, query
the data dictionary view product_component_version. A sample query follows. (We can also query the
v$version view to see component-level information.) Other product release levels may increment
independently of the database server.
SQL> select * from product_component_version;

PRODUCT                                  VERSION     STATUS
---------------------------------------- ----------- ----------
NLSRTL                                   10.2.0.1.0  Production
Oracle Database 10g Enterprise Edition   10.2.0.1.0  Production
PL/SQL                                   10.2.0.1.0  Production
It is important to convey the results of this query to Oracle when we report problems with the
software.