Database Control
Graphical tool (Java) to manage one database, on the server. If there are multiple
databases created under the same ORACLE_HOME, they can be reached using
different ports. All communication is done over HTTPS. To start Database
Control, use the emctl utility. Environment variables which need to be set are: PATH,
ORACLE_HOME and ORACLE_SID (SID = name of the instance). Three checks on the certificate:
1. the certificate is issued by a certificate authority that your browser trusts
2. the dates used in the certificate are valid
3. the URL is the same as the host name
Monitors:
1. how long recovery would take if the instance fails
Application server control
Graphical tool to manage (group of) instances, on the server, middle tier
Grid Control
Holistic view of environment, manage targets(HTTP or HTTPS)
Oracle enterprise manager (OEM)
Everything that can be done with OEM can also be done with SQL
Grid computing
Grid computing performs a “virtualization” of distributed computing resources
and allows for the automated allocating of resources as system demand changes.
Each server is independent, yet ready to participate in a variety of processing
requests from many types of applications.
Supported languages
Java(3GL), PL/SQL(3GL) and SQL(set-oriented language)
Connectivity
Thick (OCI): an Oracle-aware driver with Oracle-specific functions
Thin: not aware of the database, so it can be deployed in a non-Oracle environment
Oracle applications
Oracle Forms Developer builds applications that run on an Oracle Application
Server middle tier and display in a Java applet
Oracle Reports generates and formats reports, restricted to PDF or HTML
XML Publisher generates and formats reports, restricted to XML
Oracle Discoverer generates reports without the help of a programmer, when
configured correctly
Oracle E-Business Suite (ERP) financials
Oracle Collaboration Suite servers for email, voicemail, fax, file serving, etc.
Relational paradigm
Data is stored in two-dimensional tables (entities, relations). Effective for Online
Transaction Processing (OLTP) and decision support systems (DSS)
Data normalization
1st – remove repeating groups
2nd – remove columns that depend on only part of the primary key
3rd – remove columns that depend on other non-key columns (transitive dependencies)
Documentation:
1. Entity-relationship diagram, connection between tables
Hierarchical structure
Data stored in one unit.
Data manipulation (DML) commands
Select, insert, update, delete, merge
Data definition (DDL) commands
Create, alter, drop, rename, truncate, comment
Data Control (DCL) commands
Grant, Revoke
Transaction control (TCL) commands
Commit, rollback, savepoint
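The transaction control commands above can be combined in one session; a sketch (the table and its columns are illustrative, not from the notes):

```sql
-- Hypothetical table for illustration
CREATE TABLE dept_demo (deptno NUMBER PRIMARY KEY, dname VARCHAR2(14));

INSERT INTO dept_demo VALUES (10, 'ACCOUNTING');
SAVEPOINT after_first_insert;              -- marker inside the transaction

INSERT INTO dept_demo VALUES (20, 'RESEARCH');
ROLLBACK TO SAVEPOINT after_first_insert;  -- undoes only the second insert

COMMIT;                                    -- makes the first insert permanent
```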
DBA Role
Sizing applications and hardware, oracle installation and maintenance, database
physical design, monitoring and tuning performance, assisting developers with
application design and sql tuning, backup, restore, recovery, user and security
management.
Oracle server
Instance memory structures and (background) processes; created first, defined
by the parameter file. On Windows an instance runs as a service.
Database files on disk, opened second
Datafiles
There is no practical limit in Oracle, only a physical one
Architecture
Single instance architecture
Real Application Clusters (RAC)
Multiple instances open one database. RAC gives amazing capabilities for
performance, fault tolerance and scalability. There is a performance benefit for
long-running queries and large batch updates, but NOT when a large number of
small transactions is executed (OLTP)
Streams
Streams is a facility for capturing changes made to tables and applying them to
remote copies of the tables. Streams can be bidirectional and can be used for fault
tolerance.
Data Guard
Oracle Data Guard ensures high availability, data protection, and disaster
recovery for enterprise data. Data Guard provides a comprehensive set of services
that create, maintain, manage, and monitor one or more standby databases to
enable production Oracle databases to survive disasters and data corruptions. Data
Guard maintains these standby databases as transactionally consistent copies of
the production database. Data Guard can switch any standby database to the
production role, minimizing the downtime associated with the outage. Two standby
forms:
1. physical: an exact copy of the primary database, used for fault tolerance
2. logical: may have different data structures, used for queries (data warehouse)
Memory structure
1. System global area (SGA): shared memory segments, shared across all
background and server processes. The SGA is a group of shared memory structures
that contain data and control information for one Oracle database instance
(instance = SGA + background processes).
2. User process: a session/process running locally on the user machine (could be
an ODBC driver); the user process connects to a server process.
3. Server process: connected to the program global area (PGA), the SGA and the
user process; it takes care that SQL code is executed. With a dedicated server,
each user process has its own server process. The PGA is used to process SQL
statements and to hold logon and other session information. A short summary of
its contents:
Sort area – Used for any sorts required by SQL processing
Session information – Includes user privileges
Cursor state – Indicates stage of SQL processing
Stack space – Contains session variables
Extra info: The buffers in the cache are organized in two lists: the write list and
the least recently used (LRU) list. The write list holds dirty buffers, which
contain data that has been modified but has not yet been written to disk. The LRU
list holds free buffers, pinned buffers, and dirty buffers that have not yet been
moved to the write list. Free buffers do not contain any useful data and are
available for use. Pinned buffers are currently being accessed.
Blocks
Blocks have a fixed length, unlike rows, which have a variable length, so a row
may span several blocks
Log Buffer(SGA) / Redo
Stores change vectors: the modifications (insert, update, delete, create, alter,
drop) that are applied to the database. The redo log buffer is kept in memory
until the log writer (LGWR) flushes it (e.g. on commit) and writes it to disk.
The log buffer is only a few megabytes and cannot be resized after startup. In
RAC, each instance has its own log buffer and log writer.
Shared pool(SGA)
Consists of four components: the library cache, the data dictionary cache, the
PL/SQL area, and the SQL query and PL/SQL function result cache. The pool size
is dynamic; a too large pool size can have a negative impact on performance.
Initial size: several hundred MB.
Library cache stores recently executed code in its parsed form (SQL and PL/SQL);
the slightest difference (lower case instead of upper case) means that the
statement must be parsed again. The library cache includes the shared SQL areas,
private SQL areas (user-specific variables), PL/SQL procedures and packages, and
control structures such as locks and library cache handles. Anonymous PL/SQL
(not compiled code) is much slower than stored procedures (compiled code).
Data Dictionary cache centralized repository of information about data such as
meaning, relationships to other data, entity, origin, usage, and format. Oracle
databases store information here about the logical and physical structure of the
database. The data dictionary contains information such as: user information, such
as user privileges, available tables, database files, other database objects
integrity constraints defined for tables in the database,
names and datatypes of all columns in database tables,
information on space allocated and used for schema objects
SQL Query result cache Oracle stores the results of such queries in memory; it is
smart enough to invalidate the result if a table is updated. Disabled by default.
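The result cache can be requested per query with a hint; a sketch (assuming the RESULT_CACHE_MODE parameter is at its default, MANUAL):

```sql
-- Ask the server to cache this query's result set;
-- Oracle invalidates the cached result when hr.employees changes.
SELECT /*+ RESULT_CACHE */ department_id, COUNT(*)
FROM   hr.employees
GROUP  BY department_id;
```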
Large Pool
Dynamic size. The database administrator can configure an optional memory area
called the large pool to provide large memory allocations for:
1.Session memory for the shared server and the Oracle XA interface (used where
transactions interact with more than one database)
2. I/O server processes
3. Oracle backup and restore operations
4. response queues and request queues (shared server)
When the large pool is absent, these allocations are taken from the shared pool
instead.
Java Pool
Dynamic size. Only needed when java-stored procedures are running in the
database, the heap space to instantiate the java objects will be stored in this pool.
The java code itself is stored in the shared pool.
Streams Pool
Dynamic size. Used by Streams to extract change vectors from the redo log and
reconstruct the statements that caused them.
Processes
This section describes the two types of processes that run the Oracle database
server code (server processes and background processes).
Instance background processes are started when an instance is started, there are 8
important background processes.
1. System monitor (SMON), mounts and opens the database. The system monitor
process (SMON) performs recovery, if necessary, at instance
startup. SMON is also responsible for cleaning up temporary segments that are no
longer in use and for coalescing contiguous free extents within dictionary
managed tablespaces. If any terminated transactions were skipped during instance
recovery because of file-read or offline errors, SMON recovers them when the
tablespace or file is brought back online. SMON checks regularly to see whether it
is needed. Other processes can call SMON if they detect a need for it.
(SMON: database management.)
2. Process monitor (PMON) (instance management: SGA, etc.)
Responsible for cleaning up the database buffer cache and freeing resources that
the user process was using. PMON checks the status of the server processes and
restarts them if necessary (when stopped). If a session terminates abnormally,
it will be rolled back by PMON.
3. Database writer (multiple possible, DBWn)
Writes dirty buffers to the datafiles; it writes only the buffers that need to be
written, and only those. Only buffers which are cold (least recently used, so not
used recently) and dirty will be written to disk when there are no free buffers
anymore. Causes of writing buffers to disk:
1. no free buffers
2. too many dirty buffers
3. three-second time-out
4. a checkpoint
4. Log writer(LGWR)
Writes the log buffer to disk (online redo log files). When:
1. 1/3 full
2. if DBWn is about to write dirty buffers
3. if a session does a commit
Contains changes for committed and uncommitted transactions.
5. Checkpoint process (CKPT)
No longer has to signal full checkpoints (in which all dirty buffers are written
to disk), but it does keep track of where in the redo stream the incremental
checkpoint position (RBA) is and, if necessary, instructs DBWn to write out some
dirty buffers to push the current position forward. Partial (incremental)
checkpoints occur automatically. At a full checkpoint:
1. Every dirty block in the buffer cache is written to the datafiles;
that is, the data blocks in the buffer cache are synchronized with the
datafiles on disk. It is DBWn that writes the modified database blocks
back to the datafiles.
2. The latest SCN is written (updated) into the datafile headers.
3. The latest SCN (System Change Number, a sequential counter
identifying precisely a moment in the database) is also written to
the controlfiles.
Full checkpoints occur only at:
• an orderly shutdown
• the DBA’s request
6. Manageability monitor(MMON)
MMON gathers a snapshot and launches the ADDM every hour. ADDM makes
observations and recommendations regarding performance using two snapshots.
MMON also captures statistics for SQL objects which have been changed.
7. Manageability monitor light(MMNL)
Performs frequent and light-weight manageability-related tasks, such as session
history capture
8. Memory Manager(MMAN)
Observes the demand for PGA and SGA memory and allocates it.
9. Archiver (multiple possible, ARCn)
The LGWR writes the online redo log files and the archiver reads them; no other
process touches them. ARCn moves redo information from the online redo log files
to the archive log destination.
10. Recoverer process(RECO)
A distributed transaction is a transaction that involves updates to two or more
databases. RECO is responsible for resolving (rolling back or committing)
in-doubt distributed transactions.
Database files
Control file
Needed to make the connection between instance and database.
The database control file is a small binary file necessary for the database to
start and operate successfully. A control file is updated continuously by
Oracle during database use, so it must be available for writing whenever
the database is open. If for some reason the control file is not accessible,
the database cannot function properly. If the control file is damaged, the
database crashes; the control file can be reconfigured when the database is
in nomount mode. It records, among other things:
• the location of the datafiles
• the location of the online redo log files
Redo log files
Two types: online redo log files and archive log files. Every database must
have at least two online redo log file groups to function and each group
should have at least two members for safety. An online redo log group is a
group of (possibly one) online redo log files. Each of these files in the groups is
called an online redo log member of the group. Only one group at a time is
active.
The purpose of having multiple members in a group is to ensure that if a redo
log file becomes unusable another file of the unusable file's redo log group can
be used instead. The database will not crash if a member of redo group is
damaged or being reconfigured. A log switch occurs when one online redo
log has been filled and the next online redo log is going to be filled. A log
switch always triggers a checkpoint. Two modes at a log switch:
• archivelog mode: ensures that no online redo log file is
overwritten until it has been archived. The archiver process
launches automatically after a log switch.
• noarchivelog mode (default): online files are overwritten
before a copy is made. The database isn’t corrupt after a crash,
but you lose data. If you want to switch to archivelog mode, you
need to do a clean shutdown and change the mode while the
database is in mount mode.
Datafiles
Server processes read from the datafiles and DBWn writes to the
datafiles. Two datafiles are required: system and sysaux. A datafile consists
of multiple Oracle blocks, which have a fixed size.
Database structure
FAT file systems handle files only up to 4 GB; use NTFS or ASM (Automatic Storage Management)
Data Dictionary
The data dictionary is metadata (data about data), stored in segments like any
other table. It describes the database, both physically and logically, and its
contents: user definitions, security information, integrity constraints and
performance monitoring information. It is stored in the SYSTEM and SYSAUX tablespaces.
Prefixes:
USER_ objects owned by the user querying the view
ALL_ tables to which you have access + own tables
DBA_ rows for every object in the database
V$ reflect the internal state of the DBMS and are mainly useful to DBAs for
performance audit and optimisation.
Dictionary tables to know
DBA_DATA_FILES physical database files
DBA_TABLESPACES / USER_TABLESPACES describes all tablespaces in the database
DBA_USERS all users of the database
DBA_TEMP_FILES describes all temporary files in the database
V$TEMPFILE displays tempfile information
V$DATABASE displays information about the database, from the control file
V$DATAFILE contains datafile information from the control file
V$TRANSACTION shows which undo segment has been assigned to each transaction
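A few illustrative queries against these views (run as a suitably privileged user):

```sql
-- Physical datafiles and their tablespaces
SELECT file_name, tablespace_name, bytes FROM dba_data_files;

-- Database name and current open mode, read from the control file
SELECT name, open_mode FROM v$database;

-- Tempfiles are listed separately from ordinary datafiles
SELECT name, bytes FROM v$tempfile;
```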
Mount modes
Nomount the instance is started but isn’t connected to a database; requires the
parameter file
Mounted the control file is successfully opened and the (data)files listed in it
can be found; requires the control file
Open the connection between instance and database is ready for use, so all redo
log files and datafiles can be reached
Shutdown all files are closed and the instance does not exist
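The stages can be walked through one at a time from SQL*Plus; a sketch:

```sql
-- Connect with SYSDBA (or SYSOPER) privileges first
STARTUP NOMOUNT;          -- build the instance from the parameter file
ALTER DATABASE MOUNT;     -- open the control file
ALTER DATABASE OPEN;      -- open the redo log files and datafiles

-- Or go straight to open:
-- STARTUP;
```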
Create a database
1. create parameter file
2. use the parameter file to build instance
3. issue create database command to create database
4. SQL scripts to create data dictionary
5. SQL scripts to generate Enterprise Manager Database Control
Use echo and spool commands to write everything to a log file.
There are defaults for everything except db_name (required); anything not
specified is created using the defaults.
Files created: controlfile, online redo log files, sysaux and system tablespace
Post creation scripts
CreateDBFiles creates the users tablespace
CreateDBCatalog constructs views on the data dictionary, which make it possible
to manage the Oracle database, plus many PL/SQL scripts
emRepository creates the objects needed by Enterprise Manager Database Control
postDBCreation generates the server parameter file
Database listener
Process that monitors the port of a database. Three ways to start:
1. lsnrctl
2. database control
3. windows service
The listener launches new server processes and checks whether the instance is
available. When the listener is down, no new server processes can be created. A
listener needs to be on the same machine as the instance, except when RAC is used.
Instances need to be registered with the listener, in one of two ways: through
listener.ora, or an instance registers itself at startup. The parameter
local_listener tells the instance how to connect to the listener; the PMON
process is responsible for the registration. To register an instance with the
listener manually, use the command: alter system register.
The configuration can be found in listener.ora, which is thus server side. Once a
session is established, the listener can be brought down without affecting it.
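Registration and listener control can be sketched as follows (the OS-level commands are shown as comments; the listener alias is illustrative):

```sql
-- From the OS shell: lsnrctl start | status | stop

-- From SQL*Plus: force the instance to register with the listener now,
-- instead of waiting for PMON's periodic registration attempt
ALTER SYSTEM REGISTER;

-- Tell the instance which listener to register with (alias illustrative)
ALTER SYSTEM SET local_listener = 'LISTENER_ORCL' SCOPE=BOTH;
```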
Connection
Connect user/pass[@connect_alias] as sysdba or sysoper
@connect_alias could be an IP address, hostname, LDAP alias or connect string
(tnsnames.ora, client side).
SYSDBA and SYSOPER are not users; they are privileges that can be assigned to
users. By default the SYS user has these privileges.
SYSOPER has access to the following commands: STARTUP, SHUTDOWN, ALTER
DATABASE, ALTER DATABASE BACKUP and RECOVER.
SYSDBA is the same as SYSOPER but has one more ability: CREATE DATABASE.
Shutdown
Normal no new connections are permitted, but all current connections are allowed
to continue
Transactional no new connections are permitted; existing sessions that are not in
a transaction are terminated
Immediate no new sessions are permitted and all connected sessions are
terminated; active transactions are rolled back
Abort this is the equivalent of a power cut; incomplete transactions are rolled
back during instance recovery at the next startup
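The four modes map to SQL*Plus commands (one would issue only one of these per shutdown); a sketch:

```sql
SHUTDOWN NORMAL;         -- wait for all sessions to disconnect
SHUTDOWN TRANSACTIONAL;  -- wait for active transactions to finish
SHUTDOWN IMMEDIATE;      -- roll back active transactions, then close
SHUTDOWN ABORT;          -- "power cut"; instance recovery at next startup
```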
Instance parameters
The current parameters of the instance can be found in the view V$PARAMETER.
Parameters in the spfile can be found in V$SPPARAMETER.
The parameters can be queried using SQL*Plus or the database control.
It is possible to change parameters; you need to provide the scope of the change
(MEMORY or SPFILE).
V$SPPARAMETER = spfile
V$PARAMETER = memory
When you want to change a parameter both in memory and in the spfile, use SCOPE=BOTH.
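Querying and changing a parameter might look like this (the parameter shown is just an example of a dynamic parameter):

```sql
-- Current in-memory value
SELECT name, value FROM v$parameter WHERE name = 'undo_retention';

-- Change in memory only, in the spfile only, or in both
ALTER SYSTEM SET undo_retention = 1200 SCOPE=MEMORY;
ALTER SYSTEM SET undo_retention = 1200 SCOPE=SPFILE;
ALTER SYSTEM SET undo_retention = 1200 SCOPE=BOTH;
```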
Alert log
All critical operations applied to the database or instance are logged to the
alert log. Changes which don’t affect the database are not logged, e.g. creating
a user or ALTER SESSION.
The location of the trace files is controlled by BACKGROUND_DUMP_DEST.
Oracle Net configuration
Two types of configuration
Dedicated server each user process own server process
Shared server number of user processes make use of server process pool
Network support:
TCP, TCP with secure sockets (TCPS), Named Pipes (NMP), SDP, IPC (when a process
is on the same machine as the server; no TCP headers).
The database listener isn’t required when using IPC.
Oracle Net (the network architecture) is responsible for the connection between
user process and server process.
Tools to manage Oracle Net: database control, Net Manager, Net Configuration Assistant.
When an instance starts, the network listener process opens and establishes a
communication pathway through which users connect to Oracle. Then, each dispatcher
process gives the listener process an address at which the dispatcher listens for
connection requests. At least one dispatcher process must be configured and started for
each network protocol that the database clients will use. The listener returns the address
of the dispatcher process that has the lightest load, and the user process connects to the
dispatcher directly
When a user makes a call, the dispatcher places the request on the request queue, where
it is picked up by the next available shared server process.
The request queue is in the SGA and is common to all dispatcher processes of an
instance. The shared server processes check the common request queue for new requests,
picking up new requests on a first-in-first-out basis. One shared server process picks up
one request in the queue and makes all necessary calls to the database to complete that
request.
When the server completes the request, it places the response on the calling dispatcher's
response queue. Each dispatcher has its own response queue in the SGA. The dispatcher
then returns the completed request to the appropriate user process.
!! A connection between user process and dispatcher lasts for the duration of the
session; the connection between the listener and the session is temporary (transient).
A shared server stores session information in the SGA (large pool) rather than in
the user global area; this memory is called the UGA. Only the stack space is
stored in the PGA.
Datafile storage
4 types of devices to store datafile:
1. files on a local file system files are stored in a normal OS directory
2. files on a clustered file system a system with external disks, mounted on more
than one computer, e.g. OCFS (Oracle Clustered File System)
3. files on raw devices it is possible (but not recommended) to create datafiles
on disks with no file system
4. files on ASM devices A ASM is a volume manager and a file system for Oracle
database files. ASM uses disk groups to store datafiles; an ASM disk group is a collection
of disks that ASM manages as a unit. Within a disk group, ASM exposes a file system
interface for Oracle database files. The content of files that are stored in a disk group are
evenly distributed, or striped(to balance load and to reduce I/O latency), to eliminate hot
spots and to provide uniform performance across the disks. The performance is
comparable to the performance of raw devices. ASM can store database files (but
not alert logs, binary files or trace files), and ORACLE_HOME needs to be
defined. Oracle ASM separates files into stripes and spreads the data evenly
across all of the disks in a disk group; it works on files, not volumes.
Mirroring is optional; striping is not.
Tablespaces
All databases must have a system, a sysaux, temporary and an undo tablespace
Creation options
- a smallfile tablespace consists of multiple (smaller) files; a bigfile
tablespace is one big file
TEMPFILEs are not recorded in the database's control file. This implies that one can just recreate
them whenever you restore the database, or after deleting them by accident. This opens interesting
possibilities like having different TEMPFILE configurations between permanent and standby
databases, or configure TEMPFILEs to be local instead of shared in a RAC environment.
USER ACCOUNTS
User account is used to connect to instance, attribute are:
Username unique name in the database, <30 characters; can contain letters,
digits, dollar signs and underscores. The exception is when you use double
quotes: then create user “john%#” is valid.
Authentication
Tablespace every user has a default tablespace. Here the objects created by the
user will be stored.
Tablespace quota is the amount of space that a user is allowed to occupy; if it
is reduced, the user’s tables survive but cannot get bigger
Temporary tablespace permanent objects are stored in tablespaces and temporary
objects are stored in temporary tablespaces. Temporary tablespaces are used to
manage space for database sort operations and for storing global temporary tables.
CREATE INDEX, ANALYZE, Select DISTINCT, ORDER BY, GROUP BY,
UNION, INTERSECT, MINUS, Sort-Merge joins, etc.
A user doesn’t need a quota on their temporary tablespace.
User profile controls password settings and resource usage.
Account status: open, locked, expired, expired (grace).
To create a table you need the CREATE TABLE privilege.
When a user account is created, a schema is created too.
The SYS user owns the data dictionary and the PL/SQL packages.
The SYSTEM user owns objects used for administration and monitoring.
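Putting the attributes above together, creating an account might look like this (the username, password and tablespace names are illustrative):

```sql
CREATE USER john IDENTIFIED BY secret_pw   -- password authentication
  DEFAULT TABLESPACE users                 -- where john's objects go
  TEMPORARY TABLESPACE temp                -- no quota needed here
  QUOTA 50M ON users                       -- space john may occupy
  ACCOUNT UNLOCK;

GRANT CREATE SESSION TO john;              -- allow john to connect
```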
Authentication
All user sessions must be authenticated. There is no such thing as an anonymous login.
Example:
GRANT CREATE TABLE TO scott WITH ADMIN OPTION; the ADMIN OPTION means that the
grantee is able to pass the privilege on to someone else.
If a system privilege is revoked from a user, any actions already performed
remain intact, as do the privileges the user granted to others.
Object Privileges
The ANY privileges are system privileges only.
Object privileges are:
SELECT, INSERT, UPDATE, DELETE, ALTER, EXECUTE.
Example:
GRANT SELECT ON hr.employees TO scott [WITH GRANT OPTION];
With the GRANT OPTION a user is allowed to pass the object privilege on to someone else.
ROLES
A role can be enabled or disabled, can have system and/or object privilges, can be
password protected
Example:
CREATE ROLE hr_junior [IDENTIFIED BY pass];
GRANT CREATE SESSION TO hr_junior;
GRANT hr_junior TO <user>;
Manage profiles
A profile handles two things:
1. Enforce the password policy; limits are: FAILED_LOGIN_ATTEMPTS,
PASSWORD_LOCK_TIME, PASSWORD_LIFE_TIME, PASSWORD_GRACE_TIME,
PASSWORD_REUSE_TIME, PASSWORD_REUSE_MAX.
2. Restrict the resources a session can take; the instance parameter
RESOURCE_LIMIT needs to be set to TRUE. Limits are: SESSIONS_PER_USER,
CPU_PER_SESSION, CPU_PER_CALL.
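A profile combining both kinds of limits could be sketched as follows (the profile name and values are illustrative; the resource limits only take effect with RESOURCE_LIMIT=TRUE):

```sql
CREATE PROFILE clerk_profile LIMIT
  FAILED_LOGIN_ATTEMPTS 3        -- lock the account after 3 bad passwords
  PASSWORD_LOCK_TIME    1        -- ... for 1 day
  PASSWORD_LIFE_TIME    90       -- force a password change every 90 days
  SESSIONS_PER_USER     2        -- resource limit
  CPU_PER_SESSION       10000;   -- in hundredths of a second

ALTER USER john PROFILE clerk_profile;           -- user name illustrative
ALTER SYSTEM SET resource_limit = TRUE SCOPE=BOTH;
```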
Examples (constraint clauses as they appear in a CREATE TABLE statement):
CONSTRAINT dept_deptno_pk PRIMARY KEY (deptno)
CONSTRAINT dept_deptno_ck CHECK (deptno BETWEEN 10 AND 90)
dname VARCHAR2(14) CONSTRAINT dept_dname_nn NOT NULL
CONSTRAINT emp_mgr_fk FOREIGN KEY (mgr) REFERENCES emp(empno)
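The constraint fragments above fit into complete statements like this (column types follow the classic DEPT/EMP example and are illustrative):

```sql
CREATE TABLE dept (
  deptno NUMBER(2)
         CONSTRAINT dept_deptno_pk PRIMARY KEY
         CONSTRAINT dept_deptno_ck CHECK (deptno BETWEEN 10 AND 90),
  dname  VARCHAR2(14) CONSTRAINT dept_dname_nn NOT NULL,
  loc    VARCHAR2(13)
);

CREATE TABLE emp (
  empno  NUMBER(4) CONSTRAINT emp_empno_pk PRIMARY KEY,
  mgr    NUMBER(4) CONSTRAINT emp_mgr_fk REFERENCES emp(empno),
  deptno NUMBER(2) CONSTRAINT emp_deptno_fk REFERENCES dept(deptno)
);
```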
Undo
A transaction can only be protected by one undo segment (there is no fixed size
because of extents). An undo segment can support multiple transactions if needed.
Three levels of necessity:
1. Active undo cannot be overwritten
2. Expired undo can be overwritten
3. Unexpired undo can be overwritten only if there is a shortage of undo space.
Requirements:
1. sufficient space
2. additional space to store unexpired undo data that might be needed
If the undo space runs out, the statement that failed is rolled back; the rest of the
transaction remains intact and uncommitted.
Parameters:
UNDO_MANAGEMENT: AUTO or MANUAL
UNDO_TABLESPACE: the location of the undo tablespace
UNDO_RETENTION: the number of seconds before undo data becomes expired instead of unexpired
The undo tablespace isn’t autoextend by default; this can be set at any time.
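These settings can be inspected and adjusted as sketched below (the datafile path is illustrative):

```sql
-- Verify automatic undo management and the active undo tablespace
SELECT name, value FROM v$parameter
WHERE  name IN ('undo_management', 'undo_tablespace', 'undo_retention');

-- Allow the undo datafile to grow on demand (off by default)
ALTER DATABASE DATAFILE '/u01/oradata/orcl/undotbs01.dbf'
  AUTOEXTEND ON NEXT 100M MAXSIZE 10G;
```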
Redo(crashes)
Unlike undo segments that are stored in a tablespace and are similar to tables and indexes, redo logs are
stored at the file system level and are used to store changes to the database as they happen.
The primary purpose of the redo logs is to record all changes to the database as
fast as possible. Oracle simply writes all changes to a redo log file: all
changes, no matter what table, no matter what tablespace, no matter where on the
file system the final change will need to be made.
It’s called a redo log because Oracle first writes the change quickly to disk,
and later has to redo that change in the database proper. If the database crashes
or has some other type of failure, Oracle will first read the redo logs and apply
all committed changes to their proper tablespaces/datafiles before opening the
database for regular use. Both the old value and the new value are written to the redo log.
Update
1. read from buffer cache, if data is not available then from datafiles
2. writes to log buffer(old+new)
3. writes to undo segment(old)
4. change is carried out on the buffer cache
5. until the commit, all other sessions read the old value from the undo segment
Insert / delete
1. read from the buffer cache; if the data is not available, then from the datafiles
2. for an insert only the row id needs to be recorded as undo; for a delete the
whole row is stored
Rollback
Rollback is carried out by background processes:
1. PMON, if there is a problem with the session
2. SMON, if there is a problem with the database (e.g. after a crash)
3. manual: it is also possible to issue a rollback yourself
Commit
Nothing happens with DBWn; only the log buffer is written to disk. Your session
hangs until the write is completed. The redo log contains data of both committed
and uncommitted transactions.
Any DDL command, GRANT or REVOKE commits the transaction. With autocommit on, a
commit is done implicitly after each DML statement.
PL/SQL
PL/SQL always executes in the database; Java can run either within the database
or on the client.
Stored PL/SQL is loaded into the database and saved in the data dictionary as a
named PL/SQL object. It is faster because it is already compiled.
Anonymous code is stored outside the database (e.g. on the client) and is
compiled each time it is run.
Procedure carries out some action. The arguments can be IN, OUT or IN OUT.
Function similar to a procedure, but does not have OUT parameters and can’t be
invoked by EXECUTE. A function returns one single value.
Package consists of two objects: the specification (pks) and the body (pkb).
Triggers a trigger runs automatically when an action is carried out or a certain
situation arises. A trigger can only be run by its own triggering event.
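A minimal stored function next to an anonymous block, to illustrate the distinction (names and the 0.79 factor are illustrative):

```sql
-- Stored: compiled once, kept in the data dictionary
CREATE OR REPLACE FUNCTION net_price(p_gross IN NUMBER) RETURN NUMBER IS
BEGIN
  RETURN p_gross * 0.79;   -- single return value, no OUT parameters
END;
/

-- Anonymous: compiled every time it is submitted
BEGIN
  DBMS_OUTPUT.PUT_LINE(net_price(100));
END;
/
```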
Locking
Exclusive lock a lock on one row or table; nobody but the session holding the
lock can change the row. It is still possible for everyone to read the row.
Shared lock on an object (table); multiple sessions can hold the lock. A shared
lock is used to prevent an exclusive lock on the table or row.
DML statements require two locks: exclusive lock on row and shared lock on table.
DDL statements require exclusive lock.
Normally a request waits when the object is already locked; this can be avoided
by using the WAIT <n, number of seconds> or NOWAIT clause in the statement.
A deadlock is resolved by the database automatically. The DBA should report
deadlocks to the developers, because they indicate bad design. If two sessions
wait on each other, one session will be rolled back and receives an ORA-00060 error.
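The WAIT/NOWAIT clauses apply to explicit row locking, e.g. with SELECT ... FOR UPDATE; a sketch against the sample HR schema:

```sql
-- Take an exclusive row lock, but give up after 5 seconds if it is held
SELECT * FROM hr.employees
WHERE  employee_id = 100
FOR UPDATE WAIT 5;

-- Fail immediately ("resource busy") instead of waiting at all
SELECT * FROM hr.employees
WHERE  employee_id = 100
FOR UPDATE NOWAIT;
```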
Flashback query
Flashback query is a facility to see the database as it was in the past, e.g.
SELECT * FROM piet AS OF TIMESTAMP (SYSTIMESTAMP - 1/1440); (one minute ago).
Flashback query makes use of the undo tablespace.
Public privileges
The role PUBLIC is granted to everyone implicitly. The following packages are
granted to PUBLIC by default, but should not be (see also the security-related
instance parameters):
UTL_FILE read and write any file and directory that is accessible to the operating
system Oracle owner
UTL_TCP this allows users to open TCP ports
UTL_SMTP let users send mail messages
UTL_HTTP allows users to send and receive HTTP messages
Instance Parameters
UTL_FILE_DIR the UTL_FILE package has access to all directories listed in
UTL_FILE_DIR
REMOTE_OS_AUTHENT a user can connect to the database from a remote computer
without giving a password; the OS username and password on the local machine
are used
OS_AUTHENT_PREFIX specifies a prefix that is applied to the OS username
O7_DICTIONARY_ACCESSIBILITY controls whether object privileges granted with the
ANY keyword (e.g. GRANT SELECT ANY TABLE TO jon) also give access to the data
dictionary
REMOTE_LOGIN_PASSWORDFILE whether it is possible to connect to the instance as a
user with SYSDBA or SYSOPER privileges over the network. Can be EXCLUSIVE or
SHARED. EXCLUSIVE: the instance looks for a file whose name includes the instance
name: PWDinstance_name.ora. SHARED: all instances running in the same Oracle home
share a common password file.
Auditing
SYSDBA activity the instance parameter AUDIT_SYS_OPERATIONS ensures that all
statements executed as SYSDBA are written to the audit trail.
DATABASE AUDITING instance parameter AUDIT_TRAIL:
NONE: disabled
OS: audit records are written to the OS audit trail
DB: audit records are written to the data dictionary (SYS.AUD$)
DB_EXTENDED: same as DB, but with the SQL statements and bind variables
XML: as OS, only in XML
XML_EXTENDED: same as XML, but with the SQL statements and bind variables
audit command, e.g. audit create any trigger<successful, not successful>, now it will
generate a trail for every trigger which is created. It is possible to add an extra filter
successful or not successful. Based on commands executed against database. Default
view DBA_AUDIT_TRAIL
VALUEBASED When a row is affected, e.g. audit insert on HR.EMPLOYEES
FINE-GRAINED captures access to a table and, unlike standard auditing, can distinguish
between rows via an audit condition. It can run PL/SQL code when the condition is
breached. Create a policy with the DBMS_FGA.ADD_POLICY procedure.
Results: DBA_FGA_AUDIT_TRAIL
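A sketch of creating a fine-grained auditing policy with DBMS_FGA.ADD_POLICY; the policy name, condition and threshold are hypothetical:

```sql
-- Hypothetical policy: record anyone reading salaries above 100000
BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'HR',
    object_name     => 'EMPLOYEES',
    policy_name     => 'SAL_WATCH',        -- name chosen for this example
    audit_condition => 'SALARY > 100000',
    audit_column    => 'SALARY',
    statement_types => 'SELECT');
END;
/

-- Inspect the captured accesses
SELECT db_user, sql_text FROM dba_fga_audit_trail;
```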
Manage Optimizer
Execution plans are developed dynamically by the optimizer, and the optimizer relies on
statistics. The statistics are:
DBA_TABLES number of rows, number of blocks, amount of free space
DBA_TAB_COLUMNS number of distinct values, highest and lowest values, average
column length
DBA_INDEXES The depth of the index tree, the number of distinct key values
INDEX_STATS number of index entries referring to extant rows
Statistics
Object statistics are not real-time; they are static until refreshed by a new analysis
(manually, with the ANALYZE command, the DBMS_STATS package or Database Control, or
automatically). These analyses must be refreshed frequently, otherwise the optimizer may
choose poor execution plans. The level of statistics collection is controlled by the
parameter STATISTICS_LEVEL:
BASIC disables computation of AWR (Automatic Workload Repository) statistics and
disables the daily analysis
TYPICAL default setting; gathers the statistics needed for self-management and
performs the daily analysis task
ALL gathers all information; has an impact on performance
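As a sketch, statistics can be refreshed manually with DBMS_STATS (schema and table names illustrative):

```sql
-- Gather statistics for one table on demand
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'HR', tabname => 'EMPLOYEES');

-- Or for a whole schema
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'HR');

-- Check the current statistics collection level
SHOW PARAMETER statistics_level
```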
Automatic workload repository(AWR)
AWR is a set of tables and other objects in the SYSAUX tablespace. A set of instance
statistics written to disk is called an AWR snapshot. The flushing (writing) to disk is
done by the MMON process: MMON reads the statistics from memory (the SGA) and writes
them to the AWR tables, from which AWR reports are built. MMON takes an AWR snapshot
once an hour, and snapshots are kept by default for 8 days.
AWR is located in SYSAUX and cannot be relocated.
Metrics statistics must be converted into metrics; a metric is two or more statistics
correlated together.
Baseline a stored set of metrics and/or statistics that can be compared across time.
Baselines are kept indefinitely.
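A sketch of driving the AWR manually through the DBMS_WORKLOAD_REPOSITORY package; the interval, retention and snapshot IDs are illustrative:

```sql
-- Take a snapshot on demand instead of waiting for the hourly MMON run
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- Change interval and retention (both in minutes): here 30 min / 15 days
EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS( -
  interval => 30, retention => 15 * 24 * 60);

-- Create a named baseline between two snapshot IDs
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE( -
  start_snap_id => 100, end_snap_id => 105, baseline_name => 'peak_load');
```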
Advisory Framework
Oracle comes with a set of advisors:
ADDM runs in MMON and takes its information from the AWR. It automatically generates a
report covering the period between the current snapshot and the previous one. ADDM
reports are stored for 30 days.
Recommendations:
• Hardware changes
• Database configuration
• Schema changes
• Application changes
Memory advisors predict the effect of varying the size of memory structures (SGA and
PGA). There is no advisor for the large pool. There is also a memory advisor for total
memory (parameter MEMORY_TARGET)
SQL Access, Tuning and Repair three SQL advisors: access, tuning and repair.
• Access: observes a workload of SQL statements. Recommendations could be: create
or drop indexes, create materialized views.
• Tuning: analyzes individual statements and may recommend rewriting them as well
as making schema changes.
• Repair: investigates statements that fail with an error and can recommend a fix
so that a different execution plan is followed and no error is thrown.
Automatic Undo Advisor observes the rate of undo data generation and the length of
queries
Mean Time to Recover estimates how long the downtime for crash recovery will be,
given the current workload
Data Recovery Advisor advises the DBA about the nature and extent of the problem after
a database failure
Segment Advisor recommends appropriate reorganization of segments, because segments
don't shrink automatically
Automatic Maintenance
There are three automated maintenance tasks (run by the Scheduler): gathering optimizer
statistics, the Segment Advisor and the SQL Tuning Advisor. The advisors run
automatically, but their recommendations must be accepted manually. The tasks run in the
maintenance window, which by default opens for four hours every weeknight at 22:00 and
for twenty hours per day in the weekend.
Alerts
Alerts are raised by MMON; Enterprise Manager reads the alerts, as can third-party tools.
Two kinds of alerts:
• Stateful based on conditions that persist and can be fixed.
• Stateless based on events, e.g. "snapshot too old" or two transactions
deadlocking each other.
There are around two hundred metrics (visible in V$METRIC) for which thresholds can be set.
Notification
The default notification mechanism for stateful alerts is displaying them in Enterprise
Manager. When an alert is cleared it is moved from DBA_OUTSTANDING_ALERTS to
DBA_ALERT_HISTORY.
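A sketch of setting a threshold with the DBMS_SERVER_ALERT package; the percentages and tablespace name are illustrative:

```sql
-- Warn at 85% tablespace usage, raise a critical alert at 97%
BEGIN
  DBMS_SERVER_ALERT.SET_THRESHOLD(
    metrics_id              => DBMS_SERVER_ALERT.TABLESPACE_PCT_FULL,
    warning_operator        => DBMS_SERVER_ALERT.OPERATOR_GE,
    warning_value           => '85',
    critical_operator       => DBMS_SERVER_ALERT.OPERATOR_GE,
    critical_value          => '97',
    observation_period      => 1,
    consecutive_occurrences => 1,
    instance_name           => NULL,
    object_type             => DBMS_SERVER_ALERT.OBJECT_TYPE_TABLESPACE,
    object_name             => 'USERS');
END;
/

-- Current (not yet cleared) alerts
SELECT reason FROM dba_outstanding_alerts;
```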
PGA Memory Management
Executing SQL requires memory for session-specific data: temporary tables, sorting rows,
merging bitmaps, variables and the call stack. Every SQL statement uses SGA memory and
PGA memory.
Three sizes of work-area memory allocation:
• Optimal
enough memory to hold all input data
• One-pass
insufficient memory for optimal execution, so one extra pass over the data is needed
• Multipass
the allocation is even smaller, so several passes are necessary
Optimal is the ideal situation, but it is not always realistic because data warehouse
operations can need gigabytes of memory. Example for sorting 10 GB of data:
Optimal: more than 10 GB of memory; One-pass: about 40 MB; Multipass: less than 20 MB
NOLOGGING disables redo generation only for the index rebuild itself; all DML operations
against the index still generate redo information.
Failures
1. Statement failure the server process rolls back the statement. If the statement is
part of a multistatement transaction, the statements that did succeed remain intact.
• constraint violation or a format error
• logic errors in the application, deadlocks
• space management problems
• insufficient privileges
2. User process failure an active transaction will be rolled back automatically
a. terminal rebooting
b. address violation
3. Network failure can be configured so that there is no single point of failure
a. database listener crashes
b. OS or hardware level fails
c. routing problems or localized network failures
4. User errors e.g. dropping a table and committing when this was not the intention
a. Flashback Query query against a version of the database as at some
time in the past
b. Flashback Drop reverses the effect of DROP TABLE
c. LogMiner extracts information from online and archived redo logs;
redo includes all changes made to data blocks
5. Media failure damage to disks and therefore to the files stored on them; after
media recovery an instance recovery is done
a. Data Guard
b. RAID
c. archived redo logs
d. multiple copies of the controlfile, online redo log files and archived redo log files
6. Instance failure a disorderly shutdown of the instance (crash), possibly leaving
committed transactions not yet written to the datafiles and uncommitted changes
already written. Recovery is completely automatic.
a. LGWR ensures that there is always enough information in the redo log stream
and the online redo log files
Database corruption
There is enough data in the redo log to reconstruct all work, and uncommitted work must
never be persisted as committed. The instance recovery mechanism of redo and rollback
makes it impossible to corrupt an Oracle database, as long as there is no physical
damage: through the online redo log files, Oracle guarantees that the database is never
left corrupted.
Instance recovery
Critical for an SLA is the MTTR (mean time to recover). It depends on:
• how much redo must be applied
• how many read/write operations are needed on the datafiles
Frequent checkpointing can guarantee a short mean time to recover, but it can cripple
performance through excessive disk I/O. The FAST_START_MTTR_TARGET parameter (default
zero, so a potentially long recovery) ensures that the database can be recovered within
that number of seconds. Setting the parameter to a nonzero value enables checkpoint
auto-tuning, which uses otherwise idle capacity to write dirty buffers from the buffer
cache to disk. The parameter can also be set in Database Control.
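A sketch of setting the target and checking it against the current estimate (the 60-second value is illustrative):

```sql
-- Target crash recovery within 60 seconds; nonzero enables
-- checkpoint auto-tuning
ALTER SYSTEM SET fast_start_mttr_target = 60;

-- Compare the target with the currently estimated recovery time
SELECT target_mttr, estimated_mttr FROM v$instance_recovery;
```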
Flash recovery Area
The FRA is a disk destination used as the default location for recovery-related files.
Two parameters:
DB_RECOVERY_FILE_DEST location
DB_RECOVERY_FILE_DEST_SIZE limits the amount of space; when the limit is reached,
obsolete backups are deleted
Files written to the FRA:
• Recovery manager backups
• Archive redo log files
• Database flashback logs
• Current control file
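A sketch of configuring and monitoring the flash recovery area; the path and 20 GB cap are illustrative, and the size must be set before the destination:

```sql
-- Cap the flash recovery area, then point it at a directory
ALTER SYSTEM SET db_recovery_file_dest_size = 20G;
ALTER SYSTEM SET db_recovery_file_dest = '/u01/flash_recovery_area';

-- Monitor usage against the limit
SELECT space_limit, space_used, space_reclaimable
FROM   v$recovery_file_dest;
```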
Backup
Offline backup, backup taken while database is closed
Closed
Cold
Consistent
Online backup, backup taken while database is in use:
Open
Hot
Inconsistent
User managed backup
If the database is in noarchivelog mode, you cannot make open backups. Backups may only
be made with operating system commands when the database is shut down, or with RMAN when
it is mounted.
Consistent backup
• Copy controlfile
• Copy datafile
• Copy online redo log files
• Tempfiles (optional)
• Parameter file
Three types:
• Backup set contains one or more files and does not include never-used blocks
• Compressed backup set same as a backup set, only compressed
• Image copy identical to the input file, including never-used blocks
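The three types map onto RMAN commands; a sketch, run inside the RMAN client:

```sql
-- Backup set: multiplexes files, skips never-used blocks
BACKUP AS BACKUPSET DATABASE;

-- Compressed backup set
BACKUP AS COMPRESSED BACKUPSET DATABASE;

-- Image copy: byte-for-byte, including never-used blocks
-- (the datafile number is illustrative)
BACKUP AS COPY DATAFILE 4;
```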
Automate backups
1. User- and server-managed backup jobs can be scheduled with the OS scheduler
2. Server-managed backups can also be scheduled as an Enterprise Manager job.
Manage backups
1. The repository can be checked against reality with the CROSSCHECK command
2. User-managed backups can be brought under RMAN control with the CATALOG command
3. If backing up to the flash recovery area, its usage must be monitored
Data recovery Advisor (DRA)
1. Diagnoses and repairs problems with the database.
2. Part of the RMAN executable; also runnable through Enterprise Manager.
The DRA makes use of information gathered by the health monitor and the automatic
diagnostic repository (ADR, parameter DIAGNOSTIC_DEST) and then generates an RMAN
recovery script.
Flow of the DRA:
1. Assess data failures, recorded by the health monitor in the ADR
2. List failures: list all failures from the ADR
3. Generate the RMAN repair script
4. Run the script
The DRA will not generate any advice if you have not first asked it to list failures.
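The flow above corresponds to this RMAN dialogue, which must start with LIST FAILURE:

```sql
-- Inside the RMAN client
LIST FAILURE;             -- required first step: show failures from the ADR
ADVISE FAILURE;           -- generates the repair script
REPAIR FAILURE PREVIEW;   -- show what the script would do
REPAIR FAILURE;           -- run the script
```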
Health monitor
The health monitor is a set of checks that run automatically when certain errors occur.
Results are stored in files. Checks are performed in different stages:
1. NOMOUNT integrity of the controlfile
2. MOUNT integrity of the controlfile, the online redo logs and the data file
headers; accessibility is checked for the online and archived log files
3. OPEN data block scan, integrity of the data dictionary and the undo segments
Move data
SQL LOADER
Tool for inserting data into an Oracle database; a wide variety of input formats is
supported.
Control file tells SQL*Loader how to interpret the contents of the input data files
Input data the source data that will be loaded into the database
Log file records the result of each run
Bad file receives records that do not conform to the control file or that violate an
integrity constraint
Discard file receives records that do not match the selection criteria
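A minimal sketch with hypothetical file and table names: a control file emp.ctl for a comma-separated input file, and the sqlldr invocation naming the log, bad and discard files:

```sql
-- emp.ctl: tells SQL*Loader how to interpret emp.dat
LOAD DATA
INFILE 'emp.dat'
APPEND
INTO TABLE hr.employees
WHEN (deptno = '10')         -- rows failing this go to the discard file
FIELDS TERMINATED BY ','
(empno, ename, deptno)
```

```shell
# Malformed or constraint-violating rows land in emp.bad
sqlldr hr/secret control=emp.ctl log=emp.log bad=emp.bad discard=emp.dsc
```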
Directory objects
A DIRECTORY object is a logical structure that represents a physical directory on the
server's file system. The package UTL_FILE has procedures to read and write operating
system files; without directory objects, only directories listed in the UTL_FILE_DIR
parameter can be accessed. That parameter is set at system level, so it is not
configurable per user; '*' opens up every directory on the server.
Directory objects are owned by SYS, and you must have the appropriate privilege to
create them.
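A sketch of creating a directory object and granting access to it (the path and grantee are illustrative):

```sql
-- As a suitably privileged user
CREATE DIRECTORY dpump_dir AS '/u01/app/oracle/dpump';
GRANT READ, WRITE ON DIRECTORY dpump_dir TO hr;
```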
External tables
An external table exists as an object defined in the data dictionary, not as a segment.
It enables you to access data in external sources as if it were in a table in the
database.
1. You cannot perform DML operations on it
2. You can perform a write using Data Pump
The CREATE statement contains "organization external"; the access driver type can be
ORACLE_LOADER, for reading files as SQL*Loader does, or ORACLE_DATAPUMP, for populating
the file with a SELECT statement.
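A sketch using the ORACLE_LOADER driver; the table, directory object and file names are hypothetical:

```sql
-- Reads ext_emp.dat through the dpump_dir directory object
CREATE TABLE ext_employees (
  empno NUMBER,
  ename VARCHAR2(30)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY dpump_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('ext_emp.dat')
);

SELECT * FROM ext_employees;  -- queried like any table, but no DML allowed
```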
Data pump
Data Pump extracts data from one database and inserts it into another.
Import and Export are the old client tools, which pass all data through a server process
via the user process. They cannot read Data Pump-generated files, but they work with all
database versions.
Data Pump only works with 10g and later; also, the Data Pump client cannot write to the
local file system, because everything (dump files etc.) lives on the server. Data Pump
is a server-side utility that can access the datafiles directly, which makes it faster
than Import/Export, which must connect to a server process through a user process. A
Data Pump job is independent of the session that started it.
The external table path uses regular commit processing, like any other DML statement;
the direct path does not use ordinary commits but inserts above the high water mark.
The dump files are identical in either case, and they can only be read by Data Pump.
Parallel processing works at two levels: the number of worker processes, and the number
of parallel execution servers each worker process uses.
Network mode the fastest way: it transfers data from one database to another without
staging. (Disk staging means using disks as an additional, temporary stage of the backup
process before finally storing the backup on tape; backups stay on disk typically for a
day or a week before being copied to tape in a background process and deleted
afterwards.)
Database Control and Data Pump: 1. export to export files 2. import from export files
3. import directly from another database 4. monitor export and import jobs.
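The same operations can be sketched on the command line; the credentials, directory object and file names are hypothetical:

```shell
# Export the HR schema to a dump file in a pre-created directory object
expdp hr/secret schemas=HR directory=dpump_dir dumpfile=hr.dmp logfile=hr_exp.log

# Import it into another database; network mode would instead use
# network_link= and need no dump file at all
impdp hr/secret schemas=HR directory=dpump_dir dumpfile=hr.dmp logfile=hr_imp.log
```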