
Dear All,

In this blog, we are going to post a set of questions asked in the Oracle 1Z0-515
Oracle Data Warehousing 11g Essentials certification examination.
Please comment below to let us know in case of any concern or query.
1. Identify the true statement about REF partitions.
A. REF partitions have no impact on partition-wise joins.
B. Changes to partitioning in the parent table are automatically reflected in the child table.
C. Changes in the data in a parent table are reflected in a child table.
D. REF partitions can save storage space in the parent table.
Explanation:
Reference partitioning is a partitioning method introduced in Oracle 11g. Using
reference partitioning, a child table can inherit the partitioning characteristics
from a parent table.
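As a minimal sketch (table and column names are hypothetical), a child table inherits its parent's partitioning through the foreign key it references:

```sql
-- Parent table, range-partitioned by order date
CREATE TABLE orders (
  order_id   NUMBER PRIMARY KEY,
  order_date DATE NOT NULL
)
PARTITION BY RANGE (order_date) (
  PARTITION p2010 VALUES LESS THAN (DATE '2011-01-01'),
  PARTITION p2011 VALUES LESS THAN (DATE '2012-01-01')
);

-- Child table inherits the parent's partitioning via the FK constraint
CREATE TABLE order_items (
  item_id  NUMBER PRIMARY KEY,
  order_id NUMBER NOT NULL,
  CONSTRAINT fk_order FOREIGN KEY (order_id) REFERENCES orders (order_id)
)
PARTITION BY REFERENCE (fk_order);
```

Because the child is partitioned by reference, partition maintenance on the parent is automatically reflected in the child.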

2. Which feature of Oracle Warehouse Builder can be used to help ensure data quality?
A. Data extraction
B. Data profiling
C. Logical mapping
D. Exception reporting
Explanation:
After you connect to your data sources in Oracle Warehouse Builder (including Oracle databases, sources accessed through gateways, and flat file sources), you can apply full-featured data profiling to generate statistics about data quality, and to discover complex patterns, foreign key relationships, and functional dependencies. You can then design complex data rules and create data auditors to monitor compliance with those rules in any source or target system in your landscape, regardless of whether those sources are loaded using Oracle Warehouse Builder or other ETL tools.
For customers who have selected solutions other than Oracle Warehouse Builder for data profiling and data quality, these can be applied independently of Oracle Warehouse Builder ETL and design features.
Note: Oracle Warehouse Builder is a full-featured data integration, data warehousing, data quality and metadata management solution designed for the Oracle database. Oracle Warehouse Builder is an integral part of Oracle Database 11g Release 2 (11.2) and is installed as part of every database installation (other than Oracle Database XE).
The major feature areas of Oracle Warehouse Builder include:
* Data modeling
* Extraction, Transformation, and Load (ETL)
* Data profiling and data quality
* Metadata management
* Business-level integration of ERP application data
* Integration with Oracle business intelligence tools for reporting purposes
* Advanced data lineage and impact analysis
Oracle Warehouse Builder is also an extensible data integration and data quality solutions platform. Oracle Warehouse Builder can be extended to manage metadata specific to any application, can integrate with new data source and target types, implement support for new data access mechanisms and platforms, enforce your organization's best practices, and foster the reuse of components across solutions.
Reference: Oracle Warehouse Builder Concepts, 11g Release 2 (11.2), 2 Introduction to Oracle Warehouse Builder
http://docs.oracle.com/cd/E11882_01/...0581/intro.htm

3. The Analytic Workspace Manager would be used to generate _______.
A. Materialized views
B. Oracle OLAP Option cubes
C. Oracle Data Mining algorithms
D. Oracle SQL Analytic functions

Explanation:
You can use Analytic Workspace Manager for creating measures, dimensions and cubes in the OLAP database if the database was installed with the OLAP option.
Note: Analytic Workspace Manager should not be confused with Oracle Workspace Manager, a separate feature of Oracle Database that enables application developers and DBAs to manage current, proposed and historical versions of data in the same database. Applications and DBA operations often work with more than one version of the data; three common reasons to have multiple data versions are concurrency, auditing and scenario creation. Oracle Workspace Manager provides workspaces as a virtual environment to isolate a collection of changes to production data, keep a history of changes to data and create multiple data scenarios for what-if analysis. It can save money, time and labor over traditional approaches.
4. What would you do to compress data in partitions that are frequently updated in Oracle Database 11g?
A. Use Hybrid Columnar Compression.
B. Use Advanced Compression Option.
C. Use Hybrid Partitions.
D. Avoid compressing any data.
Explanation:
Advanced Compression features in Oracle Database 11g include:
* Online Transaction Processing (OLTP) Table Compression: This breakthrough compression feature compresses table data during all types of data manipulation operations, including conventional INSERT or UPDATE. OLTP Table Compression leverages a sophisticated and intelligent algorithm that minimizes compression overhead during write operations, thereby making it viable for highly transactional workloads.
Reference: Oracle Advanced Compression, Oracle Data Sheet
Hybrid Columnar Compression (HCC) on Exadata, Oracle White Paper
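A hedged sketch of enabling OLTP compression (table and partition names are hypothetical; the COMPRESS FOR OLTP clause is the 11.2 spelling):

```sql
-- OLTP compression keeps rows compressed through conventional
-- INSERT and UPDATE, so it suits frequently updated partitions.
CREATE TABLE sales_history (
  sale_id   NUMBER,
  sale_date DATE
)
COMPRESS FOR OLTP
PARTITION BY RANGE (sale_date) (
  PARTITION p2010 VALUES LESS THAN (DATE '2011-01-01'),
  PARTITION p2011 VALUES LESS THAN (DATE '2012-01-01')
);

-- Or switch an existing partition to OLTP compression
ALTER TABLE sales_history MODIFY PARTITION p2011 COMPRESS FOR OLTP;
```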
5. How much user data should you conservatively estimate this configuration can hold?
A Full Rack Oracle Exadata Database Machine can contain 100 TB of disk space with high performance. How much user data should you conservatively estimate this configuration can hold?
A. 100 TB
B. 75 TB
C. 50 TB
D. 28 TB
E. 20 TB
Explanation:
Without any redundancy (when high performance is the only goal) you can use 100% of the disk space for data. With disk mirroring (at lower performance) you can use 50% of the disk space for data.
6. What should you do in order to most simply implement this change?
You have analyzed your client's workload, and the SQL Access Advisor in Enterprise Manager recommends that you create some materialized views to improve performance. What should you do in order to most simply implement this change?
A. Rewrite all the queries in the application to identify materialized views.
B. Rewrite existing queries. New queries will automatically use the views.
C. Respond positively to the Advisor to create the materialized views.
D. Build virtual views on a third normal form schema.
Explanation:
Enterprise Manager provides a very simple interface for the SQL Access Advisor (Advisor Central > SQL Advisor > SQL Access Advisor). The first page allows you to create tasks to test existing indexes, materialized views and partitions, or create tasks to suggest new structures.
The Workload Source page allows you to define the workload to associate with the task. The basic options allow the workload to be gathered from the cursor cache, an existing SQL tuning set, or a hypothetical workload based on specific schema objects.
The Recommendation Options page allows you to define which type of recommendations you are interested in (Indexes, Materialized Views and Partitioning).
After reviewing the result of the analysis you can decide whether to accept or ignore the suggested recommendations.
Note: The SQL Access Advisor was introduced in Oracle 10g to make suggestions about additional indexes and materialized views which might improve system performance.
Reference: SQL Access Advisor in Oracle Database 11g Release 1
http://www.oracle-base.com/articles/...isor_11gR1.php
7. Which approach is least likely to be used, assuming that the customer does not want the expense of managing views?
Your customer is looking to implement ad-hoc analysis in a data warehouse. Which approach is least likely to be used, assuming that the customer does not want the expense of managing views?
A. Star schema
B. Snowflake schema
C. Third normal form schema
D. OLAP
Explanation:
Data warehouses often use denormalized or partially denormalized schemas (such as a star schema) to optimize query performance. On the other hand, OLTP (Online Transaction Processing) systems often use fully normalized schemas to optimize update/insert/delete performance and to guarantee data consistency.
Reference: Oracle Database Data Warehousing Guide, 11g Release 1 (11.1)
8. What are two ways in which query performance can be improved with partitioning?
A. Partition pruning
B. Partition optimization
C. Partition compression
D. Partition-wise joins
Explanation:
A: Even when you don't name a specific partition in a SQL statement, the fact that a table is partitioned might still influence the manner in which the statement accesses the table. When a SQL statement accesses one or more partitioned tables, the Oracle optimizer attempts to use the information in the WHERE clause to eliminate some of the partitions from consideration during statement execution. This process, called partition pruning, speeds statement execution by ignoring any partitions that cannot satisfy the statement's WHERE clause. To do so, the optimizer uses information from the table definition combined with information from the statement's WHERE clause.
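A minimal sketch of pruning in action (table and column names are hypothetical):

```sql
-- With ORDERS range-partitioned on ORDER_DATE, this predicate lets the
-- optimizer scan only the partitions covering 2011 and skip all others.
SELECT COUNT(*)
FROM   orders
WHERE  order_date >= DATE '2011-01-01'
AND    order_date <  DATE '2012-01-01';
```

The pruning decision is visible in the execution plan, for example in the Pstart/Pstop columns of EXPLAIN PLAN output.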
D: A partition-wise join is a join between (for simplicity) two tables that are partitioned on the same column with the same partitioning scheme. In a shared-nothing system this is effectively hard partitioning, locating data on a specific node/storage combination; in Oracle it is logical partitioning.
If you now join the two tables on that partitioned column, you can break up the join into smaller joins exactly along the partitions in the data. Since the rows are partitioned (grouped) into the same buckets, all values required to do the join live in the equivalent bucket on either side. No need to talk to anyone else, no need to redistribute data to anyone else; in short, the optimal join method for parallel processing of two large data sets.
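Sketched with hypothetical tables, a full partition-wise join only requires that both tables be equipartitioned on the join key:

```sql
-- Both tables hash-partitioned identically on CUST_ID, so the join
-- decomposes into independent per-partition joins.
CREATE TABLE customers (
  cust_id NUMBER PRIMARY KEY,
  name    VARCHAR2(100)
)
PARTITION BY HASH (cust_id) PARTITIONS 8;

CREATE TABLE orders (
  order_id NUMBER PRIMARY KEY,
  cust_id  NUMBER NOT NULL
)
PARTITION BY HASH (cust_id) PARTITIONS 8;

SELECT /*+ PARALLEL(4) */ c.cust_id, COUNT(*)
FROM   customers c JOIN orders o ON o.cust_id = c.cust_id
GROUP  BY c.cust_id;
```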
9. Which two statements are true about the advantages of using a data warehouse?
A. A data warehouse uses fewer database structures, so access to answers is faster and easier.
B. A data warehouse is typically implemented with a different design, making access faster.
C. A data warehouse is optimized for ongoing write activity, making response faster.
D. A data warehouse uses specialized features of the Oracle database, like materialized views and star transformations, making response faster.
Explanation:
Data warehouses often use denormalized or partially denormalized schemas (such as a star schema) to optimize query performance.
Note: A materialized view is a pre-computed table comprising aggregated or joined data from fact and possibly dimension tables. Also known as a summary or aggregate table.
Reference: Oracle Database Data Warehousing Guide, 11g Release 1 (11.1)
10. Identify the statement about ASM that is NOT true.
A. ASM is easier to manage than file systems.
B. ASM delivers the performance of raw partitions.
C. ASM is an extra cost option for Oracle databases.
D. ASM delivers automatic striping and mirroring.
Explanation:
ASM is a management tool, not a raw performance tool.
Note:
Automatic Storage Management (ASM) is a feature introduced in Oracle 10g to simplify the storage of Oracle datafiles, controlfiles and logfiles.
Automatic Storage Management (ASM) simplifies administration of Oracle-related files by allowing the administrator to reference disk groups rather than individual disks and files, which are managed by ASM. The ASM functionality is an extension of the Oracle Managed Files (OMF) functionality that also includes striping and mirroring to provide balanced and secure storage. The ASM functionality can be used in combination with existing raw and cooked file systems, along with OMF and manually managed files.
The ASM functionality is controlled by an ASM instance. This is not a full database instance, just the memory structures, and as such is very small and lightweight.
The main components of ASM are disk groups, each of which comprises several physical disks that are controlled as a single unit. The physical disks are known as ASM disks, while the files that reside on the disks are known as ASM files. The locations and names for the files are controlled by ASM, but user-friendly aliases and directory structures can be defined for ease of reference.
The level of redundancy and the granularity of the striping can be controlled using templates. Default templates are provided for each file type stored by ASM, but additional templates can be defined as needed.
Failure groups are defined within a disk group to support the required level of redundancy. For two-way mirroring you would expect a disk group to contain two failure groups, so individual files are written to two locations.
In summary ASM provides the following functionality:
* Manages groups of disks, called disk groups.
* Manages disk redundancy within a disk group.
* Provides near-optimal I/O balancing without any manual tuning.
* Enables management of database objects without specifying mount points and filenames.
* Supports large files.
Reference: Automatic Storage Management (ASM) in Oracle Database 10g
http://www.oracle-base.com/articles/...agement10g.php
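A hedged sketch of creating a mirrored disk group with two failure groups (disk paths are hypothetical):

```sql
-- Run in an ASM instance: NORMAL REDUNDANCY means two-way mirroring,
-- with each mirror copy written to a different failure group.
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/raw/raw1'
  FAILGROUP fg2 DISK '/dev/raw/raw2';
```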
11. Identify the true statement about a data warehouse.
A. The data warehouse is typically refreshed as often as a transactional system.
B. Data warehouse queries are simpler than OLTP queries.
C. A data warehouse typically contains historical data.
D. Queries against a data warehouse never need summarized information.
Explanation:
A data warehouse is a relational database that is designed for query and analysis rather than for transaction processing. It usually contains historical data derived from transaction data, but it can include data from other sources. It separates analysis workload from transaction workload and enables an organization to consolidate data from several sources.

12. Identify the statement about Oracle OLAP that is NOT true.
A. Oracle OLAP cubes are stored in the Oracle relational database.
B. Oracle OLAP uses standard Oracle database security.
C. Metadata for Oracle OLAP is accessible in an external data dictionary.
D. Oracle OLAP can be deployed using RAC.
Explanation:
All metadata for cubes and dimensions is stored in the Oracle database.
Reference: Oracle OLAP User's Guide, 11g Release 1 (11.1), What's New in Oracle OLAP?
Reference: Oracle OLAP User's Guide, 11g Release 1 (11.1), 8 Security
http://download.oracle.com/docs/cd/B...4/whatsnew.htm
13. Identify the true statement about adaptive parallelism.
A. It is turned on by default.
B. It is turned off by default.
C. You should always leave the default setting.
D. There is no such thing.
Explanation:
Adaptive Parallelism: The adaptive multiuser algorithm, which is enabled by default, reduces the degree of parallelism as the load on the system increases. When using the Oracle Database adaptive parallelism capabilities, the database uses an algorithm at SQL execution time to determine whether a parallel operation should receive the requested DOP or have its DOP lowered to ensure the system is not overloaded.
In a system that makes aggressive use of parallel execution by using a high DOP, the adaptive algorithm adjusts the DOP down when only a few operations are running in parallel. While the algorithm still ensures optimal resource utilization, users may experience inconsistent response times. Using solely the adaptive parallelism capabilities in an environment that requires deterministic response times is not advised. Adaptive parallelism is controlled through the database initialization parameter PARALLEL_ADAPTIVE_MULTI_USER.
Reference: Oracle Database VLDB and Partitioning Guide, 11g Release 2 (11.2), How Parallel Execution Works
http://docs.oracle.com/cd/E11882_01/...2.htm#BEIECCDD
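A quick sketch of inspecting and changing the setting (the SHOW command is SQL*Plus syntax):

```sql
-- Check the current value (TRUE, i.e. enabled, by default)
SHOW PARAMETER parallel_adaptive_multi_user;

-- Disable adaptive parallelism when deterministic response times are required
ALTER SYSTEM SET parallel_adaptive_multi_user = FALSE;
```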

14. How many Exadata Storage Server cells are there in a Full Rack Exadata database machine configuration that has 8 Database Server nodes?
A. 2
B. 14
C. 16
D. 24
Explanation:
15. Which limitation CANNOT be imposed through Database Resource Manager?
Your customer wants to use Database Resource Manager to help ensure consistent performance based on users and operations. In designing this implementation, which limitation CANNOT be imposed through Database Resource Manager?
A. Specifying the maximum number of concurrent operations for a resource group
B. Limiting resource consumption for a resource group
C. Specifying the amount of parallelism for a resource group
D. Limiting access to particular data for a resource group
Explanation:
Database Resource Manager allocates resources (such as CPU, degree of parallelism and the number of concurrent operations) among resource consumer groups, but it does not control which data those groups can see; restricting access to particular data is handled by security features such as Virtual Private Database.
16. What does the Oracle BI tool generate in submitting queries?
Your BI tool (for example, Oracle Business Intelligence Enterprise Edition or Cognos) will be used to query an Oracle database that includes the Oracle OLAP Option. What does the Oracle BI tool generate in submitting queries that might include data stored in cubes?
A. SQL
B. PL/SQL
C. Proprietary API code
D. SQL for relational and proprietary API code for OLAP
Explanation:
Oracle Business Intelligence Enterprise Edition is most commonly used with the Oracle Database using SQL as the query language. Although the OLAP cube is a multidimensional data type, it is represented in the Oracle database as a collection of relational views and is easily queried by SQL.
Note #1: The wording of the question is strange. SQL can be used and is the first choice, so it seems to be the best answer.
Note #2: Oracle Business Intelligence Enterprise Edition (OBI EE) is a product suite based on the OBI EE Server. The OBI EE Server can map a logical business model to many different physical data sources and present the logical model for query to a variety of client applications, including Interactive Dashboards, Answers and the Oracle Business Intelligence Plug-in for Microsoft Office.
Reference: Using Oracle Business Intelligence Enterprise Edition with the OLAP Option to Oracle Database 11g, Oracle Whitepaper
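Since a cube is exposed through relational views, plain SQL works against it; a sketch with hypothetical view and column names:

```sql
-- Query a cube's relational view just like any table
SELECT time_name, product_name, sales
FROM   units_cube_view
WHERE  time_level = 'YEAR';
```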
17. How does compression affect resource utilization?
A. Reduces the amount of CPU and disk utilization
B. Increases the amount of CPU and disk utilization
C. Reduces the amount of disk but increases CPU utilization for loading
D. Increases the amount of disk but reduces CPU utilization for loading
Explanation:
Compression is useful because it helps reduce the consumption of resources such as data space or transmission capacity. Because compressed data must be decompressed to be used, this extra processing imposes computational or other costs through decompression.
18. What are Oracle Data Integrator templates used for?
A. To model SAP applications
B. To define how to transform data
C. As reports to monitor ETL activity
D. None of these
Explanation:
Oracle Data Integrator streamlines the high-performance movement and transformation of data between disparate systems in batch, real-time, synchronous, and asynchronous modes.
Knowledge Modules are at the core of the Oracle Data Integrator architecture. They make all Oracle Data Integrator processes modular, flexible, and extensible. Knowledge Modules implement the actual data flows and define the templates for generating code across the multiple systems involved in each process. Knowledge Modules are generic, because they allow data flows to be generated regardless of the transformation rules. And they are highly specific, because the code they generate and the integration strategy they implement are finely tuned for a given technology. Oracle Data Integrator provides a comprehensive library of Knowledge Modules, which can be tailored to implement existing best practices (for example, for highest performance, for adhering to corporate standards, or for specific vertical know-how).
By helping companies capture and reuse technical expertise and best practices, Oracle Data Integrator's Knowledge Module framework reduces the cost of ownership. It also enables metadata-driven extensibility of product functionality to meet the most demanding data integration challenges.
Reference: Oracle Data Integrator, Oracle Data Sheet
19. Data Guard compresses data:
A. Always
B. When using logical standby
C. When using physical standby
D. When catching up after a network failure
Explanation:
A physical standby database replicates the exact contents of its primary database across the Oracle Net network layer. While the physical storage locations can be different, the data in the database will be exactly the same as on the primary database.
Incorrect answers:
A, B: Logical standby databases convert the redo generated at the primary database into data and SQL and then re-apply those SQL transactions on the logical standby; thus physical structures and organization will be different from the primary database. Users can read from logical standby databases while the changes are being applied and, if the GUARD is set to STANDBY (ALTER DATABASE GUARD STANDBY;), write to tables in the logical standby database that are not being maintained by SQL Apply.
Unfortunately there are a number of unsupported objects (e.g. tables or sequences owned by SYS, tables that use table compression, tables that underlie a materialized view, or global temporary tables (GTTs)) and unsupported data types (e.g. the datatypes BFILE, ROWID, and UROWID, user-defined TYPEs, multimedia data types like Oracle Spatial, ORDDICOM, and Oracle Text, collections (e.g. nested tables, VARRAYs), SecureFile LOBs, object-relational XMLTypes and binary XML). Physical standby may be appropriate in such a case.
20. What does the tool generate in submitting queries that might include data stored in relational tables and OLAP cubes?
Your BI tool (for example, Oracle Business Intelligence Enterprise Edition Plus, Business Objects and Cognos) will be used to query an Oracle database that includes the Oracle OLAP Option. What does the tool generate in submitting queries that might include data stored in relational tables and OLAP cubes?
A. SQL
B. PL/SQL
C. Proprietary API code
D. SQL for relational and proprietary API code for OLAP
Explanation:
DBMS_CUBE PL/SQL Package: In Database 11gR2, a new feature was added that allows cubes and dimensions to be entirely defined via PL/SQL calls, thus making it a much simpler job to automate the creation and refresh of cubes within the context of an application.
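As a minimal sketch (the cube name is hypothetical), a cube refresh can be automated with the DBMS_CUBE package:

```sql
-- Load and aggregate an existing cube from its source tables
BEGIN
  DBMS_CUBE.BUILD('GLOBAL.UNITS_CUBE');
END;
/
```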

21. What is the estimated maximum speed of data loads for a Quarter Rack with the Exadata Storage Server?
A. 1 TB/hr
B. 2 TB/hr
C. 4 TB/hr
D. 5 TB/hr
E. It depends on the number of CPUs in the server.
Explanation:
Reference: http://techsatwork.com/blog/?p=743
22. For which task would you NOT use Oracle Data Mining?
A. Predicting customer behavior
B. Associating factors with a business issue
C. Determining associations within a population
D. Reducing the amount of data used in a data warehouse
Explanation:
Data mining does not reduce the amount of data in the warehouse.
Note:
Data mining (the analysis step of the knowledge discovery in databases process, or KDD), a relatively young and interdisciplinary field of computer science, is the process of discovering new patterns from large data sets involving methods at the intersection of artificial intelligence, machine learning, statistics and database systems. The overall goal of the data mining process is to extract knowledge from a data set in a human-understandable structure; besides the raw analysis step, it involves database and data management aspects, data preprocessing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of found structure, visualization and online updating.
23. Identify the type of refresh that is NOT supported by materialized views.
A. Deferred
B. Incremental
C. Full
D. Heuristic
Explanation:
Use the CREATE MATERIALIZED VIEW statement to create a materialized view. A materialized view is a database object that contains the results of a query.
Incorrect answers:
A: Specify DEFERRED to indicate that the materialized view is to be populated by the next REFRESH operation.
B: Oracle Database uses the default index to speed up incremental (FAST) refresh of the materialized view.
C: By default, Oracle Database creates a primary key materialized view with refresh on demand only. If a materialized view log exists on the table, then the materialized view can be altered to be capable of fast refresh. If no such log exists, then only full refresh is possible.
Reference: Oracle Database SQL Language Reference, 11g Release 1 (11.1), CREATE MATERIALIZED VIEW
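A minimal sketch of the supported refresh options (table and column names are hypothetical):

```sql
-- A log on the base table enables incremental (FAST) refresh;
-- for an aggregate view the log must capture the referenced columns.
CREATE MATERIALIZED VIEW LOG ON sales
  WITH ROWID, SEQUENCE (prod_id, amount)
  INCLUDING NEW VALUES;

-- Deferred population, fast (incremental) refresh on demand
CREATE MATERIALIZED VIEW sales_summary
BUILD DEFERRED
REFRESH FAST ON DEMAND
AS SELECT prod_id, COUNT(*) AS cnt, SUM(amount) AS total
   FROM sales
   GROUP BY prod_id;
```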
24. What do you recommend?
Your customer wants to determine market baskets. What do you recommend?
A. Use Oracle OLAP Option.
B. Use Oracle SQL Analytic Functions.
C. Use associations algorithm in Oracle Data Mining.
D. Use regression analysis in Oracle Data Mining.
Explanation:
Association is a data mining function that discovers the probability of the co-occurrence of items in a collection. The relationships between co-occurring items are expressed as association rules.
Market-Basket Analysis
Association rules are often used to analyze sales transactions. For example, it might be noted that customers who buy cereal at the grocery store often buy milk at the same time. In fact, association analysis might find that 85% of the checkout sessions that include cereal also include milk. This relationship could be formulated as the following rule:
Cereal implies milk with 85% confidence.
This application of association modeling is called market-basket analysis. It is valuable for direct marketing, sales promotions, and for discovering business trends. Market-basket analysis can also be used effectively for store layout, catalog design, and cross-sell.
Association Algorithm
Oracle Data Mining uses the Apriori algorithm to calculate association rules for items in frequent itemsets.
Reference: Oracle Data Mining Concepts 11g Release 2
http://docs.oracle.com/cd/E11882_01/...ket_basket.htm
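A hedged sketch of building an association model with the DBMS_DATA_MINING package (the table, model and settings-table names are hypothetical, and the settings table holding support/confidence thresholds is assumed to exist):

```sql
-- Mine association rules (Apriori) over transaction data
BEGIN
  DBMS_DATA_MINING.CREATE_MODEL(
    model_name          => 'MARKET_BASKET_MODEL',
    mining_function     => DBMS_DATA_MINING.ASSOCIATION,
    data_table_name     => 'SALES_TRANSACTIONS',
    case_id_column_name => 'TRANSACTION_ID',
    settings_table_name => 'ASSOC_SETTINGS');
END;
/
```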
25. How can you implement near real time data integration with Oracle Data Integrator?
A. By accessing Change Data Capture records from logs
B. By using Exchange Partition
C. By mining Oracle UNDO segments
D. By reading operating system logs
Explanation:
Conventional Extract, Transform, Load (ETL) tools closely intermix data transformation rules with integration process procedures, requiring the development of both data transformations and data flow. Oracle Data Integrator (ODI) takes a different approach to integration by clearly separating the declarative rules (the "what") from the actual implementation (the "how"). With ODI, declarative rules describing mappings and transformations are defined graphically, through a drag-and-drop interface, and stored independently from the implementation. ODI automatically generates the data flow, which can be fine-tuned if required. This innovative approach for declarative design has also been applied to ODI's framework for Changed Data Capture. ODI's CDC moves only changed data to the target systems and can be integrated with Oracle GoldenGate, thereby enabling the kind of real time integration that businesses require.
Reference: Best Practices for Real-time Data Warehousing, Oracle White Paper
26. Which can be used in a scenario where there are large data loads of a sensitive nature into a data warehouse?
A. Direct path loading
B. External tables for loading flat files
C. Partition exchange loading
D. Any of these are valid for certain situations.
Explanation:
Instead of filling a bind array buffer and passing it to the Oracle database with a SQL INSERT statement, a direct path load uses the direct path API to pass the data to be loaded to the load engine in the server. The load engine builds a column array structure from the data passed to it.
The direct path load engine uses the column array structure to format Oracle data blocks and build index keys. The newly formatted database blocks are written directly to the database (multiple blocks per I/O request, using asynchronous writes if the host platform supports asynchronous I/O).
Internally, multiple buffers are used for the formatted blocks. While one buffer is being filled, one or more buffers are being written if asynchronous I/O is available on the host platform. Overlapping computation with I/O increases load performance.
http://download.oracle.com/docs/cd/B...s.htm#i1008815
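A hedged sketch of one of these approaches, an external table over a flat file combined with a direct-path insert (the directory object, file and column names are hypothetical):

```sql
-- External table mapped onto a comma-delimited flat file
CREATE TABLE ext_sales (
  sale_id NUMBER,
  amount  NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('sales.csv')
);

-- Direct-path insert from the external table into the warehouse table
INSERT /*+ APPEND */ INTO sales SELECT * FROM ext_sales;
```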
27. Identify two database options that would help in enabling such a strategy.
One goal of your Information Lifecycle Management strategy using Oracle's ILM capabilities is to reduce the cost of online storage. Identify two database options that would help in enabling such a strategy.
A. RAC and Advanced Compression
B. RAC and Partitioning
C. Partitioning and Advanced Compression
D. RAC One and Advanced Compression
Explanation:
Advanced Compression:
Advanced Compression, an option introduced in Oracle Database 11g Enterprise Edition, offers a comprehensive set of compression capabilities to help organizations reduce costs, while maintaining or improving performance. It significantly reduces the storage footprint of databases through compression of structured data (numbers, characters) as well as unstructured data (documents, spreadsheets, XML and other files). It provides enhanced compression for database backups and also includes network compression capabilities for faster synchronization of standby databases.
Archival Compression:
* Built on HCC technology
* Compression algorithm optimized for maximum storage savings
* Benefits any application with data retention requirements
* Best approach for ILM and data archival
Partitioning:
There are a number of benefits to partitioning data. Partitioning provides an easy way to distribute the data across appropriate storage devices depending on its usage, while still keeping the data online and stored on the most cost-effective device. Since partitioning is completely transparent to anyone accessing the data, no application changes are required, thus partitioning can be implemented at any time.
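The two options combine naturally for ILM; a hedged sketch (table, partition and tablespace names are hypothetical):

```sql
-- Older, rarely-updated partitions carry compression and sit in a
-- tablespace on cheaper storage; current data stays uncompressed.
CREATE TABLE orders (
  order_id   NUMBER,
  order_date DATE
)
PARTITION BY RANGE (order_date) (
  PARTITION p_archive VALUES LESS THAN (DATE '2010-01-01')
    TABLESPACE cheap_ts COMPRESS,
  PARTITION p_current VALUES LESS THAN (MAXVALUE)
    TABLESPACE fast_ts
);
```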
Note There is a wide variety of information held in an organization today, for
example it
could be an email message, a picture, or an order in an Online Transaction
Processing System. Therefore, once the type of data being retained has been
identified, you already have an understanding of what its evolution and final
destiny
is likely to be.
The challenge now before all organizations, is to understand how their data
evolves
and grows, monitor how its usage changes over time, and decide how long it
should
survive. In addition, the evolving rules and regulations such as Sarbanes-Oxley,
place additional constraints on the data that is being retained and some
organizations now require that data is deleted when there is no longer a legal
requirement to keep it, to avoid expensive e-discovery when the data is
requested
for a legal matter.
Implementing ILM using Oracle Database 11g: Information Lifecycle Management (ILM) is designed to address these issues with a combination of processes, policies, software, and hardware so that the appropriate technology can be used for each phase of the lifecycle of the data.
Reference: Oracle Database VLDB and Partitioning Guide, 11g Release 1 (11.1), 5 Using Partitioning for Information Lifecycle Management
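As a minimal sketch of ILM through partitioning (not from the exam material; table, column, and tablespace names are illustrative assumptions):

```sql
-- Range partitioning by date lets older data live in a cheaper tablespace
-- while remaining online and transparently queryable.
CREATE TABLE orders (
  order_id   NUMBER,
  order_date DATE,
  amount     NUMBER(10,2)
)
PARTITION BY RANGE (order_date) (
  PARTITION orders_2010 VALUES LESS THAN (TO_DATE('2011-01-01','YYYY-MM-DD'))
    TABLESPACE low_cost_ts,   -- historical data on low-cost storage
  PARTITION orders_2011 VALUES LESS THAN (TO_DATE('2012-01-01','YYYY-MM-DD'))
    TABLESPACE high_perf_ts   -- active data on faster storage
);
```

Because the partitioning is transparent, queries against `orders` need no changes as partitions move between storage tiers.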
28. What would you recommend?
Your customer wants to segment their customers' demographic data into those that do and do not use a loyalty card. What would you recommend?
A. Use Oracle OLAP Option.
B. Use Oracle SQL Analytic Functions.
C. Use a classification algorithm in Oracle Data Mining.
D. Use non-negative matrix factorization in Oracle Data Mining.
Explanation:
Classification is a data mining function that assigns items in a collection to target
categories or classes. The goal of classification is to accurately predict the target
class
for each case in the data. For example, a classification model could be used to
identify
loan applicants as low, medium, or high credit risks.
The simplest type of classification problem is binary classification. In binary
classification, the target attribute has only two possible values: for example, high
credit
rating or low credit rating.
Note:
Oracle Data Mining provides the following algorithms for classification:
* Decision Tree
Decision trees automatically generate rules, which are conditional statements
that
reveal the logic used to build the tree.
* Naive Bayes
Naive Bayes uses Bayes Theorem, a formula that calculates a probability by
counting the frequency of values and combinations of values in the historical
data.
* Generalized Linear Models (GLM)
GLM is a popular statistical technique for linear modeling. Oracle Data Mining
implements GLM for binary classification and for regression.
GLM provides extensive coefficient statistics and model statistics, as well as row
diagnostics. GLM also supports confidence bounds.
* Support Vector Machine
Support Vector Machine (SVM) is a powerful, state-of-the-art algorithm based on
linear and nonlinear regression. Oracle Data Mining implements SVM for binary
and multiclass classification.
Reference: Oracle Data Mining, Concepts 11g Release 1 (11.1)
http://download.oracle.com/docs/cd/B...111/b28129.pdf
29. How can you use Oracle Data Mining with Oracle Warehouse Builder?
How can you use Oracle Data Mining with Oracle Warehouse Builder?
A. To identify records to extract
B. As a standard transform operation
C. To increase write performance
D. To eliminate ETL logging
Explanation:
Data Mining and Data Warehousing
Data can be mined whether it is stored in flat files, spreadsheets, database
tables, or
some other storage format. The important criteria for the data is not the storage
format, but its applicability to the problem to be solved.
Proper data cleansing and preparation are very important for data mining, and a
data
warehouse can facilitate these activities. However, a data warehouse will be of
no use
if it does not contain the data you need to solve your problem.
Oracle Data Mining requires that the data be presented as a case table in single-record
case format. All the data for each record (case) must be contained within a row.
Most
typically, the case table is a view that presents the data in the required format
for
mining
Note: Oracle Warehouse Builder (OWB) enables the design and deployment of
enterprise data
warehouses, data marts, and e-business intelligence applications.
Reference: Oracle Data Mining, Concepts 11g Release 1 (11.1)
http://download.oracle.com/docs/cd/B...111/b28129.pdf
30. You can use Oracle Data Mining on unstructured data.
You can use Oracle Data Mining on unstructured data.
A. TRUE
B. FALSE
Explanation:
Data that cannot be meaningfully interpreted as numerical or categorical is
considered
unstructured for purposes of data mining. It has been estimated that as much as
85%
of enterprise data falls into this category. Extracting meaningful information from
this
unstructured data can be critical to the success of a business.
Unstructured data may be binary objects, such as image or audio files, or text
objects,
which are language-based. Oracle Data Mining supports text objects.
Text must undergo a transformation process before it can be mined. Once the
data has
been properly transformed, the case table can be used for building, testing, or
scoring
data mining models. Most Oracle Data Mining algorithms support text.
Reference: Oracle Data Mining, Concepts 11g Release 1 (11.1)
31. Which statement is true about a configuration with a few large
nodes versus a configuration with many smaller nodes?
You are looking to create a RAC cluster to deliver high performance for your client's data warehouse. Which statement is true about a configuration with a few large nodes versus a configuration with many smaller nodes?
A. A few large nodes always perform better than many small nodes.
B. A few large nodes always perform worse than many small nodes.
C. It depends on the workload specifics and the effect of a node failure.
D. Performance should be the same with either option.
Explanation:
32. Which is NOT an available composite partition in Oracle Database
11g?
Which is NOT an available composite partition in Oracle Database 11g?
A. range-list
B. list-list
C. list-range
D. interval-hash
Explanation:
Extended Composite Partitioning
In previous releases of Oracle, composite partitioning was limited to Range-Hash
and Range-List
partitioning. Oracle 11g Release 1 extends this to allow the following composite
partitioning
schemes:
Range-Hash (available since 8i)
Range-List (available since 9i)
Range-Range
List-Range
List-Hash
List-List
Note: interval-hash is a valid interval composite partitioning scheme.
Reference:
http://www.oracle-base.com/articles/...ents_11gR1.php
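To make the composite schemes concrete, here is a minimal sketch of List-Range partitioning, one of the schemes added in 11g (table and partition names are illustrative, not from the exam material):

```sql
-- List on region, Range on date within each region.
CREATE TABLE sales (
  region    VARCHAR2(10),
  sale_date DATE,
  amount    NUMBER
)
PARTITION BY LIST (region)
SUBPARTITION BY RANGE (sale_date) (
  PARTITION p_east VALUES ('EAST') (
    SUBPARTITION p_east_2010 VALUES LESS THAN (TO_DATE('2011-01-01','YYYY-MM-DD')),
    SUBPARTITION p_east_max  VALUES LESS THAN (MAXVALUE)
  ),
  PARTITION p_west VALUES ('WEST') (
    SUBPARTITION p_west_2010 VALUES LESS THAN (TO_DATE('2011-01-01','YYYY-MM-DD')),
    SUBPARTITION p_west_max  VALUES LESS THAN (MAXVALUE)
  )
);
```

The same shape applies to the other composite schemes by swapping the PARTITION BY and SUBPARTITION BY methods.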
33. Which questions CANNOT be addressed by Oracle Data Mining?
Which questions CANNOT be addressed by Oracle Data Mining?
A. Fraud detection
B. Prediction of customer behavior
C. Root cause de
D. Identify factors associated with a business problem
Explanation:
Data Mining can provide valuable results:
*Predict customer behavior (Classification) (not B)
*Predict or estimate a value (Regression)
*Segment a population (Clustering)
*Identify factors more associated with a business
problem (Attribute Importance) (not D)
* Find profiles of targeted people or items (Decision Trees)
* Determine important relationships and market baskets
within the population (Associations)
* Find fraudulent or rare events (Anomaly Detection) (not A)
Reference: Anomaly and Fraud Detection with Oracle Data Mining 11g Release 2
34. Identify the dimension that appears most often in queries in a data warehouse.
Identify the dimension that appears most often in queries in a data warehouse.
A. Product dimension
B. Time dimension
C. Cost dimension
D. Location dimension
Explanation:
In a data warehouse, a dimension is a data element that categorizes each item in
a
data set into non-overlapping regions. A data warehouse dimension provides the
means to slice
and dice data in a data warehouse. Dimensions provide structured labeling
information to
otherwise unordered numeric measures. For example, Customer, Date, and
Product are all
dimensions that could be applied meaningfully to a sales receipt. A dimensional
data element is
similar to a categorical variable in statistics.
The primary function of dimensions is threefold: to provide filtering, grouping and
labeling. For
example, in a data warehouse where each person is categorized as having a
gender of male,
female or unknown, a user of the data warehouse would then be able to filter or
categorize each
presentation or report by either filtering based on the gender dimension or
displaying results
broken out by the gender.
35. You can perform what-if analysis of potential changes with Oracle Warehouse Builder.
You can perform what-if analysis of potential changes with Oracle Warehouse Builder.
A. TRUE
B. FALSE
Explanation:
The Metadata Dependency Manager (MDM) enables you to plan your project by
previewing the impact of the changes or future changes for what-if analysis.
When
you plan to introduce changes to your source systems, you can gauge the impact
of
that change on your warehouse design. If changes have already been introduced,
then
you can plan the time required to update your ETL design and rebuild your data
warehouse.
Reference: Oracle Warehouse Builder, Concepts, 11g Release 2 (11.2)
36. Which unique method of improving performance is NOT used by the
Oracle Exadata Database Machine?
Which unique method of improving performance is NOT used by the Oracle Exadata Database Machine?
A. Flash to improve query performance
B. Reduces the amount of data required to flow through I/O
C. Increases the I/O using InfiniBand
D. Performs analysis in a special in-memory database
Explanation:
Reference: White paper, Exadata Smart Flash Cache and the Sun Oracle
Database Machine
37. Which Oracle option might be used to encrypt sensitive data in an
Oracle data warehouse?
Which Oracle option might be used to encrypt sensitive data in an Oracle data warehouse?
A. Active Data Guard
B. Total Recall
C. Advanced Security Option
D. Virtual Private Database
Explanation:
Oracle Advanced Security is an option to the Oracle Database 11g Enterprise Edition that helps address privacy and regulatory requirements, including the Payment Card Industry Data Security Standard (PCI-DSS), the Health Insurance Portability and Accountability Act (HIPAA), and numerous breach notification laws. Oracle Advanced Security provides data encryption and strong authentication services to the Oracle database, safeguarding sensitive data against unauthorized access from the network and the operating system. It also protects against theft, loss, and improper decommissioning of storage media and database backups.
Reference: Oracle Advanced Security
http://www.oracle.com/technetwork/da...2-1-129479.pdf
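As a minimal sketch of the Transparent Data Encryption feature provided by this option (assumes an encryption wallet and master key have already been configured; table and column names are illustrative):

```sql
-- Column-level TDE: card_number is stored encrypted on disk but is
-- decrypted transparently for authorized sessions.
CREATE TABLE customers (
  customer_id NUMBER,
  card_number VARCHAR2(19) ENCRYPT USING 'AES256'
);
```

Tablespace-level encryption (new in 11g) is the usual alternative when whole warehouse tables must be protected rather than individual columns.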
38. Knowledge Modules are:
Knowledge Modules are:
A. Reusable code templates for Oracle Data Integrator
B. Prebuilt applications for Oracle Business Intelligence
C. Options for Oracle Enterprise Manager
D. Algorithms for data mining
Explanation:
Knowledge modules (KMs) in Oracle Data Integrator are components that
implement reusable transformation and ELT (extract, load, and transform)
strategies across
different technologies.
Reference: Developing a Knowledge Module in Oracle Data Integrator
http://www.oracle.com/technetwork/ar...di-090881.html
39. What two types of results can be cached in the Result Set Cache?
What two types of results can be cached in the Result Set Cache?
A. Results of an SQL query
B. Results from a PL/SQL function
C. Sequence object results
D. Result sets derived from data dictionary tables
Explanation:
Your applications sometimes send repetitive queries to the database. To improve the response time of repetitive queries, the results of queries, query fragments, and PL/SQL functions can be cached in memory. A result cache stores the results of queries shared across all sessions. When these queries are executed repeatedly, the results are retrieved directly from the cache memory.
You must annotate a query or query fragment with a result cache hint to indicate that results are to be stored in the query result cache.
The query result set can be cached in the following ways:
* Server-side Cache
* Client Result Cache
Oracle Database 11g Release 1 (11.1) provides support for server-side Result Set
caching for
both JDBC types. The server-side result cache is used to cache the results of the
current queries,
query fragments, and PL/SQL functions in memory and then to use the cached
results in future
executions of the query, query fragment, or PL/SQL function. The cached results
reside in the
result cache memory portion of the SGA. A cached result is automatically
invalidated whenever a
database object used in its creation is successfully modified. The server-side
caching can be of
the following two types:
* SQL Query Result Cache (A)
* PL/SQL Function Result Cache (B)
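A minimal sketch of both cache types (table, column, and function names are illustrative assumptions, not from the exam material):

```sql
-- SQL Query Result Cache: the hint asks Oracle to cache this result set.
SELECT /*+ RESULT_CACHE */ region, SUM(amount)
FROM   sales
GROUP  BY region;

-- PL/SQL Function Result Cache: repeated calls with the same argument
-- return the cached value; RELIES_ON (11gR1) declares the dependency.
CREATE OR REPLACE FUNCTION get_region_total (p_region VARCHAR2)
  RETURN NUMBER
  RESULT_CACHE RELIES_ON (sales)
IS
  v_total NUMBER;
BEGIN
  SELECT SUM(amount) INTO v_total FROM sales WHERE region = p_region;
  RETURN v_total;
END;
/
```

Either cache is invalidated automatically when DML modifies the underlying table.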
40. Which feature would enable higher availability during maintenance
operations while also improving query response performance?
Which feature would enable higher availability during maintenance operations while also improving query response performance?
A. Partitioning
B. Materialized views
C. Bitmap Indexing
D. OLAP
Explanation:
Partitioning enhances the performance, manageability, and availability of a wide
variety of applications and helps reduce the total cost of ownership for storing
large amounts of
data. Partitioning allows tables, indexes, and index-organized tables to be
subdivided into smaller
pieces, enabling these database objects to be managed and accessed at a finer
level of
granularity. Oracle provides a rich variety of partitioning strategies and
extensions to address
every business requirement. Moreover, since it is entirely transparent,
partitioning can be applied
to almost any application without the need for potentially expensive and time
consuming
application changes.
Reference: Oracle Database VLDB and Partitioning Guide, 11g Release 1 (11.1), 2 Partitioning Concepts
41. Identify the indexing technique you would use to minimize partition
maintenance.
Identify the indexing technique you would use to minimize partition maintenance.
A. Local indexes
B. Global partitioned indexes
C. Global nonpartitioned indexes
D. Both global partitioned and global nonpartitioned indexes
Explanation:
If your priority is manageability, use a local index.
Local partitioned indexes are easier to manage than other types of partitioned
indexes. They also
offer greater availability and are common in DSS environments. The reason for
this is
equipartitioning: each partition of a local index is associated with exactly one
partition of the table.
This enables Oracle to automatically keep the index partitions in sync with the
table partitions, and
makes each table-index pair independent. Any actions that make one partition's data invalid or unavailable affect only a single partition.
Local partitioned indexes support more availability when there are partition or
subpartition
maintenance operations on the table. A type of index called a local nonprefixed
index is very
useful for historical databases. In this type of index, the partitioning is not on the
left prefix of the
index columns.
Reference: Oracle Database VLDB and Partitioning Guide, 11g Release 1 (11.1), 2
Partitioning
Concepts
Note:
Local indexes are indexes created on each partition in a table. A local index automatically creates
automatically creates
an index partition for each partition in the table. The index is partitioned by the
same key as the
partition key of the table.
A local index is always partitioned by the same partition key as the parent table.
You cannot add
or remove partitions in a local index, or in a global index for that matter. You
must add and remove
partitions from the parent table. A local index does not need to include the
partition key in the list
of indexed columns.

Local indexes provide the best throughput of a query and are used primarily in
OLAP and DSS
type environments.
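A minimal sketch of a local index (table and index names are illustrative assumptions):

```sql
-- The LOCAL keyword equipartitions the index with the table: one index
-- partition per table partition, maintained automatically by Oracle.
CREATE TABLE sales (
  sale_id   NUMBER,
  sale_date DATE
)
PARTITION BY RANGE (sale_date) (
  PARTITION p2010 VALUES LESS THAN (TO_DATE('2011-01-01','YYYY-MM-DD')),
  PARTITION p2011 VALUES LESS THAN (TO_DATE('2012-01-01','YYYY-MM-DD'))
);

CREATE INDEX sales_date_lix ON sales (sale_date) LOCAL;
```

Dropping or exchanging a table partition then maintains only the matching index partition, instead of invalidating a global index.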
42. Exadata uses smart scans, which are executed in ________.
Exadata uses smart scans, which are executed in ________.
A. Exadata Storage Server cells
B. Database Server node memory
C. Database Server node CPUs
D. Exadata does not use smart scans.
Explanation:
The Oracle Exadata Database Machine brings database performance to a whole
new level, but have you ever wondered what exactly makes it so fast? Several
components of the
Oracle Exadata Database Machine, such as Oracle Database 11g Release 2; Oracle Exadata's Smart Flash Cache, Hybrid Columnar Compression, and Smart Scan features; and InfiniBand
interconnect, help deliver high performance. One of the key technologies that
supports this
performance is the storage index, which is not a regular database index. Storage
indexes reside in
the memory of the storage servers (also called storage cells) and significantly reduce unnecessary I/O by excluding irrelevant database blocks in the storage cells.
Oracle Exadata I/O and Smart Scan
Storage in Oracle Exadata changes query processing so that not all blocks have
to go to the
database server for that server to determine which rows might satisfy a query.
Oracle Exadata's
Smart Scan feature enables certain types of query processing to be done in the
storage cell. With
Smart Scan technology, the database nodes send query details to the storage
cells via a protocol
known as iDB (Intelligent Database). With this information, the storage cells can
take over a large
portion of the data-intensive query processing. Oracle Exadata storage cells can
search storage
disks with added intelligence about the query and send only the relevant bytes,
not all the
database blocks, to the database nodes, hence the term smart scan.
Reference: http://www.oracle.com/technetwork/is...ta-354069.html
43. What are three advantages provided by proper partitioning in a
data warehouse?
What are three advantages provided by proper partitioning in a data warehouse?
A. Partition pruning will occur
B. Faster sorting
C. Efficient parallel joins
D. Efficient data loading
E. Reduced disk usage
Explanation:
There are three major advantages of partitioning.
* Partition pruning: Oracle accesses only a limited set of table partitions if the FROM and WHERE clauses permit it.
* Partition-wise joins: where two tables that have compatible partitioning schemes are joined, Oracle improves the efficiency of parallel operations by performing the join between individual partitions of the tables.
* Manageability: partitioning allows DDL operations on a large subset of table rows with some element of commonality defined through the partitioning type.
Reference:
http://www.databasejournal.com/featu...clePart-3.htm
44. What is the overall throughput of the data warehouse?
You are looking to size a data warehouse configuration. If the I/O throughput for the CPUs is 25 GB/s, the I/O throughput for the HBA is 18 GB/s, and the I/O throughput for the disk subsystem is 6 GB/s, what is the overall throughput of the data warehouse?
A. 25 GB/s
B. 18 GB/s
C. 6 GB/s
D. It depends on how many processors are in the servers.
Explanation:
In this scenario the disk subsystem is the bottleneck. It determines the
throughput.
Note: Each of the components must provide sufficient I/O bandwidth to ensure a
well-balanced I/O
system.
The end-to-end I/O system consists of more components than just the CPUs and disks. A well-balanced I/O system must provide approximately the same bandwidth across all components in the I/O system. These components include:
* Host bus adapters (HBAs), the connectors between the server and the storage.
* Switches, in between the servers and a storage area network (SAN) or network attached storage (NAS).
* Ethernet adapters for network connectivity (GigE NIC or Infiniband). In an
Oracle Real
Application Clusters (Oracle RAC) environment, you need an additional private
port for the
interconnect between the nodes that you should not include when sizing the
system for I/O
throughput. The interconnect must be sized separately, taking into account
factors such as
internode parallel execution.
* Wires that connect the individual components.
Reference: http://docs.oracle.com/cd/E11882_01/...em.htm#autoId2
45. What areas can SQL Access Advisor give advice on?
What areas can SQL Access Advisor give advice on?
A. Partitioning advice, index advice, and materialized views advice
B. Index advice and compression advice
C. Index advice and data masking advice
D. Partitioning advice and compression advice
Explanation:
The SQL Access Advisor was introduced in Oracle 10g to make suggestions about
additional indexes and materialized views which might improve system
performance. Oracle 11g
has made two significant changes to the SQL Access Advisor:
The advisor now includes advice on partitioning schemes that may improve
performance.
The original workload manipulation has been deprecated and replaced by SQL
tuning sets.
Reference:
http://www.oracle-base.com/articles/...isor_11gR1.php
46. Which condition can cause a change in the contents of the SQL
Result Set Cache?
Which condition can cause a change in the contents of the SQL Result Set Cache?
A. SQL result sets age out of the Result Set Cache based on the KEEP parameter.
B. SQL result sets are invalidated in the Result Set Cache after DML is performed against any of the tables in the SQL query.
C. SQL result sets are pinned in the Result Set Cache with the KEEP parameter.
D. None of these would cause a change.
Explanation:
The database automatically invalidates a cached result whenever a transaction
modifies the data or metadata of any of the database objects used to construct
that cached result.
Note: DML is abbreviation of Data Manipulation Language. It is used to retrieve,
store, modify,
delete, insert and update data in database.
Reference: From oracle documentation
http://docs.oracle.com/cd/B28359_01/...y.htm#BGBEEFBE
47. Which is NOT an advantage provided by partitioning?
Which is NOT an advantage provided by partitioning?
A. Reduces storage requirements for tables
B. Can add to the benefits of parallelism through parallel partition-wise joins
C. Can improve performance by reducing I/O
D. Provides added flexibility for maintenance operations
Explanation:
Table storage requirements would increase, but the benefits are huge.
Oracle partitioning is a divide-and-conquer approach to improving Oracle
maintenance and SQL
performance. Anyone with un-partitioned databases over 500 gigabytes is
courting disaster.
Databases become unmanageable, and serious problems occur:
* SQL may perform poorly: without Oracle partitioning, SQL queries with full-table scans take hours to complete. In a full scan, the smaller the Oracle partition, the faster the performance. Also, index range scans become inefficient.
* Recovery: file recovery takes days, not minutes.
* Maintenance: rebuilding indexes (important to reclaim space and improve performance).
Oracle partitioning has many benefits to improve performance and
manageability:
* Stable
* Robust
* Faster backups
* Less overhead
* Easier management
Maintenance of Oracle partitioned tables is improved because maintenance can
be focused on
particular portions of tables. For maintenance operations across an entire
database object, it is
possible to perform these operations on a per-partition basis, thus dividing the
maintenance
process into more manageable chunks. (not D)
* Faster SQL: Oracle is partition-aware, and some SQL may improve in speed by several orders of magnitude (over 100x faster).
- Index range scans: Oracle partitioning physically sequences rows in index order, causing a dramatic improvement (over 10x faster) in the speed of partition-key scans.
- Full-table scans: Oracle partition pruning accesses only those data blocks required by the query.
- Table joins: Oracle partition-wise joins take the specific subset of the query partitions, causing huge speed improvements on nested loop and hash joins. (not C)
- You can also improve the performance of massive join operations when large amounts of data (for example, several million rows) are joined together by using partition-wise joins. (not B)
- Updates: Oracle parallel query for partitions improves batch load speed.
Reference: http://www.dba-oracle.com/oracle_tips_partitioning.htm
48. Which is the quickest way to make this determination?
You want to enable result set caching to quickly see if this feature will help the
performance of
your application. Which is the quickest way to make this determination?
A. Set RESULT_CACHE_MODE = FORCE in the initialization file.
B. Set RESULT_CACHE = ENABLED in the initialization file.
C. Set RESULT_CACHE_MAX_SIZE = 0.
D. Set RESULT_CACHE = ENABLED in the initialization file and use a RESULT_CACHE hint in queries.
Explanation:
The RESULT_CACHE_MODE initialization parameter determines the SQL query
result cache mode. The parameter specifies when a ResultCache operator is spliced into a query's execution plan. The parameter accepts the following values:
FORCE
The ResultCache operator is added to the root of all SELECT statements, if that is
possible.
However, if the statement contains a NO_RESULT_CACHE hint, then the hint takes
precedence
over the parameter setting.
MANUAL
The ResultCache operator is added, only if you use the RESULT_CACHE hint in the
SQL query.
Reference: http://www.globusz.com/ebooks/Oracle11g/00000014.htm
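The two modes can be sketched as follows (the sample query is illustrative):

```sql
-- Quickest check: force result caching instance-wide, with no code changes.
ALTER SYSTEM SET RESULT_CACHE_MODE = FORCE;

-- Under the default MANUAL mode, caching is instead opt-in per statement:
SELECT /*+ RESULT_CACHE */ COUNT(*) FROM sales;
```

FORCE lets you measure the effect across the whole application before deciding where to place explicit hints.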
49. What would you use to evenly distribute data across the disk in
your Oracle data warehouse?
What would you use to evenly distribute data across the disk in your Oracle data warehouse?
A. Range Partitioning
B. Automatic Storage Management (ASM)
C. List Partitioning
D. RAC
Explanation:
Automatic Storage Management (ASM) is a feature provided by Oracle
Corporation
within the Oracle Database from release Oracle 10g (revision 1) onwards. ASM
aims to simplify
the management of database files. To do so, it provides tools to manage file
systems and volumes
directly inside the database, allowing database administrators (DBAs) to control
volumes and
disks with familiar SQL statements in standard Oracle environments. Thus DBAs
do not need
extra skills in specific file systems or volume managers (which usually operate at
the level of the
operating system).
With ASM:
* I/O channels can take advantage of data striping and software mirroring
* DBAs can automate online redistribution of data, along with the addition and removal of disks/storage
* the system maintains redundant copies and provides RAID functionality
* Oracle supports third-party multipathing I/O technologies (such as failover or load balancing to SAN access), so the need for hot spares diminishes
Reference: http://en.wikipedia.org/wiki/Automat...age_Management
50. Which statement is true for you to get the benefits of partition-wise
joins?
Which statement is true for you to get the benefits of partition-wise joins?
A. The parent table must be partitioned on the join key and the child table must be partitioned on the join key.
B. The parent table must be partitioned on the primary key and the child table must be partitioned on the join key.
C. The child table must use a reference partition.
D. The parent table must be partitioned on the primary key and the child table must use a ref partition.
Explanation:
Note:
Partition-wise joins reduce query response time by minimizing the amount of
data exchanged
among parallel execution servers when joins execute in parallel. This significantly
reduces
response time and improves the use of both CPU and memory resources. In
Oracle Real
Application Clusters (RAC) environments, partition-wise joins also avoid or at
least limit the data
traffic over the interconnect, which is the key to achieving good scalability for
massive join
operations.
Partition-wise joins can be full or partial. Oracle decides which type of join to use.
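A minimal sketch of the equipartitioning needed for a full partition-wise join (table and column names are illustrative assumptions):

```sql
-- Both tables are hash partitioned on the join key (cust_id) with the
-- same partition count, so each join maps partition-to-partition.
CREATE TABLE customers (
  cust_id NUMBER PRIMARY KEY
)
PARTITION BY HASH (cust_id) PARTITIONS 8;

CREATE TABLE orders (
  order_id NUMBER,
  cust_id  NUMBER
)
PARTITION BY HASH (cust_id) PARTITIONS 8;

-- A join on cust_id can now run as 8 independent partition-pair joins:
SELECT /*+ PARALLEL(c 4) PARALLEL(o 4) */ c.cust_id, COUNT(*)
FROM   customers c JOIN orders o ON c.cust_id = o.cust_id
GROUP  BY c.cust_id;
```

This is what option A describes: both parent and child partitioned on the join key.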
51. Which way of creating a hardware configuration will reduce the
implementation time the most?
You want to create an optimally performing data warehouse hardware configuration for your customer. Which way of creating a hardware configuration will reduce the implementation time the most?
A. Use reference configurations or an appliance-like configuration.
B. Use the existing system and add on relevant components.
C. Customize a configuration from a vendor.
D. Build the system from scratch.
Explanation:
Oracle Optimized Warehouse Reference Configurations are best practice guides
to
choosing the right server, storage and networking components to build an Oracle
data warehouse.
These best practice guides encapsulate years of configuration expertise from
Oracle and its
partners, helping customers take the risk out of implementing a data warehouse.
Reference: ORACLE OPTIMIZED WAREHOUSE REFERENCE CONFIGURATIONS
FREQUENTLY ASKED QUESTIONS
52. The most performant way to load data from an external table that
will also guarantee direct path loading is:
The most performant way to load data from an external table that will also guarantee direct path loading is:
A. Using Create Table as Select (CTAS)
B. Using Data Pump
C. Using Insert as Select (IAS)
D. Using transparent gateways
Explanation:
CTAS refers to a CREATE TABLE ... AS SELECT statement: a new table is created and populated with rows from a specified query.
The most common uses of CTAS are in these scenarios:
* Creating a table identical to another table in structure, but with a filter criteria
applied to its data.
* Creating a table with small structural differences from an existing table.
For best performance use Direct Path Load. The conventional path uses standard
insert
statements whereas the direct path loader loads directly into the Oracle data
files and creates
blocks in Oracle database block format.
During direct-path INSERT operations, the database appends the inserted data
after existing data
in the table. Data is written directly into datafiles, bypassing the buffer cache.
Free space in the
existing data is not reused, and referential integrity constraints are ignored.
These procedures
combined can enhance performance.
Reference: Oracle Database Administrator's Guide 11g Release 1 (11.1), Loading Tables
http://www.filibeto.org/sun/lib/nons...10/tables004.htm
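A minimal CTAS sketch (assumes an external table named ext_sales has already been defined over flat files; names are illustrative):

```sql
-- CTAS always uses a direct-path load; PARALLEL and NOLOGGING
-- speed the load further by parallelizing and skipping redo.
CREATE TABLE sales_stage
  PARALLEL
  NOLOGGING
AS
SELECT * FROM ext_sales;
```

An INSERT /*+ APPEND */ ... SELECT achieves a similar direct-path load into an existing table, but only CTAS guarantees it.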
53. Which best describes Oracle's OLAP Option for Oracle Database 11g Release 2?
Which best describes Oracle's OLAP Option for Oracle Database 11g Release 2?
A. Is stored as relational tables and is considered a ROLAP solution
B. Uses bitmap indexes
C. Physically stores OLAP cubes as objects within the relational database
D. Is available both within the Oracle Database and as a stand-alone solution
Explanation:
Oracle OLAP is a world class multidimensional analytic engine embedded in Oracle Database 11g. Oracle OLAP cubes deliver sophisticated calculations using simple SQL queries, producing results with speed-of-thought response times. This outstanding query performance may be leveraged transparently when deploying OLAP cubes as materialized views, enhancing the performance of summary queries against detail relational tables. Because Oracle OLAP is embedded in Oracle Database 11g, it allows centralized management of data and business rules in a secure, scalable and enterprise-ready platform.
54. What data can you compress using Advanced Compression in Oracle
Database 11g?
What data can you compress using Advanced Compression in Oracle Database 11g?
A. Read only data
B. Data that can be updated, inserted and/or deleted (DML)
C. Only data being archived
D. Data warehousing data
Explanation:
Oracle Database 11g has a new option, Advanced Compression, which aims at reducing the space occupied by data for both OLTP and warehouse databases. This option provides the following types of compression:
* Compression of data tables even for OLTP environments. (Previous versions had a compression option only for tables that are mostly read-only.)
* Compression of unstructured data in SecureFiles.
* Compression of RMAN backups.
* Compression in Data Pump Export files.
* Compression of redo data transmitted to a standby database during redo gap
resolution (when
data guard is configured).
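A minimal sketch of OLTP table compression (table and column names are illustrative assumptions):

```sql
-- Data stays compressed through INSERT, UPDATE, and DELETE activity,
-- unlike the older basic compression, which suited read-mostly tables.
CREATE TABLE orders (
  order_id   NUMBER,
  order_date DATE
)
COMPRESS FOR ALL OPERATIONS;  -- 11gR1 syntax; 11gR2 spells it COMPRESS FOR OLTP
```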
55. Identify the benefit of using bitmap join indexes. Select one.
Identify the benefit of using bitmap join indexes. Select one.
A. Faster query performance for all queries.
B. Reduced space for indexes.
C. Faster query performance for some queries.
D. Lower memory usage.
Explanation:
Oracle benchmarks claim that bitmap join indexes can run a query more than
eight times faster
than traditional indexing methods.
However, this speed improvement is dependent upon many factors, and the
bitmap join is not a
panacea. Some restrictions on using the bitmap join index include:

The indexed columns must be of low cardinality, usually with less than 300 distinct values.
The query must not have any references in the WHERE clause to data columns that are not contained in the index.
The overhead when updating bitmap join indexes is substantial. For practical use, bitmap join indexes are dropped and rebuilt each evening around the daily batch load jobs. This means that bitmap join indexes are useful only for Oracle data warehouses that remain read-only during the processing day.
Reference:
http://www.dba-oracle.com/art_builde...p_join_idx.htm
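For illustration, a bitmap join index on a hypothetical star schema might look
like this; it pre-joins the fact table to the dimension so that queries
filtering on customers.region can avoid the join at query time:

```sql
-- Hypothetical sales fact table and customers dimension.
-- The index stores customers.region values against sales rowids;
-- the WHERE clause defines the join once, at index creation time.
CREATE BITMAP INDEX sales_cust_region_bjix
  ON sales (customers.region)
  FROM sales, customers
  WHERE sales.cust_id = customers.cust_id;
```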
56. What is the difference between an ETL (Extraction Transformation Load)
approach and an ELT (Extraction Load Transformation) approach to data
integration? Select one.
A. ETL can operate between heterogeneous data sources.
B. ELT requires a separate transformation server.
C. ELT transforms data on the target server.
D. ELT cannot be used for incremental data loading.
Explanation:
There are two approaches to consider for data integration: ELT and ETL.
The difference between ETL and ELT lies in the environment in which the data
transformations are
applied. In traditional ETL, the transformation takes place when the data is en
route from the
source to the target system. In ELT, the data is loaded into the target system,
and then
transformed within the target system environment.
Reference:
http://msdn.microsoft.com/en-us/library/aa480064.aspx
57. You are looking for some general design principles that could be used in
designing every large-scale data warehouse you create. Identify the principle
that would have the widest applicability.
A. Partition your tables appropriately to produce partition-wise joins.
B. Always use a star schema or snowflake schema design.
C. Do as much analytics as possible in your BI tools.
D. Always use Oracle OLAP.
Explanation:
Partition-wise joins can be full or partial. Oracle decides which type of join to use.
A full partition-wise join divides a large join into smaller joins between a pair of
partitions from the
two joined tables. To use this feature, you must equipartition both tables on their
join keys, or use
reference partitioning.
Oracle Database can perform partial partition-wise joins only in parallel. Unlike
full partition-wise
joins, partial partition-wise joins require you to partition only one table on the join
key, not both
tables.
Note: Partition-wise joins reduce query response time by minimizing the amount
of data
exchanged among parallel execution servers when joins execute in parallel. This
significantly
reduces response time and improves the use of both CPU and memory resources.
In Oracle Real
Application Clusters (RAC) environments, partition-wise joins also avoid or at
least limit the data
traffic over the interconnect, which is the key to achieving good scalability for
massive join
operations.
Reference: Oracle Database VLDB and Partitioning Guide, 11g Release 1 (11.1), 4
Partitioning
for Availability, Manageability, and Performance
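As a minimal sketch of the equipartitioning requirement for a full
partition-wise join (table and column names are hypothetical), both tables are
hash partitioned identically on the join key:

```sql
-- Both tables are hash partitioned into the same number of partitions
-- on the join key, so a join on cust_id can be divided into eight
-- smaller partition-to-partition joins.
CREATE TABLE customers (
  cust_id NUMBER,
  name    VARCHAR2(100)
)
PARTITION BY HASH (cust_id) PARTITIONS 8;

CREATE TABLE sales (
  sale_id NUMBER,
  cust_id NUMBER,
  amount  NUMBER
)
PARTITION BY HASH (cust_id) PARTITIONS 8;
```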
58. Why does partitioning help parallelism with RAC?
A. The ability to do partition-wise joins reduces interconnect traffic.
B. Partitioning allows you to split data storage across nodes.
C. Partitioning reduces storage requirements.
D. RAC will spawn additional parallel servers to meet the needs of requesting
applications.
Explanation:
Partition-wise joins reduce query response time by minimizing the amount of data
exchanged among parallel execution servers when joins execute in parallel. This
significantly reduces response time and improves the use of both CPU and memory
resources. In Oracle Real Application Clusters (RAC) environments, partition-wise
joins also avoid or at least limit the data traffic over the interconnect, which
is the key to achieving good scalability for massive join operations.
Partition-wise joins can be full or partial. Oracle decides which type of join
to use.
Reference: Oracle Database VLDB and Partitioning Guide, 11g Release 1 (11.1), 4
Partitioning for Availability, Manageability, and Performance
59. Identify the benefit of using interval partitioning.
A. Automatic creation of new partitions based on hash values
B. Automatic creation of new partitions based on the value of data being entered
C. Improved performance compared to range partitions
D. Automatic transfer of older partitions to lower cost storage
Explanation:
Interval partitioning was introduced in 11g; interval partitions are extensions
to range partitioning. They provide automation for equi-sized range partitions.
Partitions are created as metadata and only the start partition is made
persistent. The additional segments are allocated as the data arrives, and the
additional partitions and local indexes are created automatically.
Reference: Partitioning in Oracle 11g, Oracle FAQs
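A minimal interval-partitioned table might be declared as follows (hypothetical
names); only the starting partition is defined, and Oracle creates one partition
per month as rows arrive:

```sql
CREATE TABLE sales (
  sale_id   NUMBER,
  sale_date DATE
)
PARTITION BY RANGE (sale_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(
  -- Only the start partition is made persistent; later monthly
  -- partitions are created automatically when data arrives.
  PARTITION p_start VALUES LESS THAN (DATE '2011-01-01')
);
```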
60. Your customer wants to implement an ILM strategy. The customer must have
which option when deploying Oracle's ILM Assistant to implement this strategy?
A. RAC
B. Partitioning
C. OLAP
D. Oracle Clusterware
Explanation:
Information Lifecycle Management (ILM) is a set of policies and procedures for
managing data during its lifetime.
The ILM Assistant manages information by recommending the correct placement
of data on logical
storage tiers as specified by a lifecycle definition, where a lifecycle definition
describes the stages
and storage tiers that data resides on during its lifetime.
Each stage specifies a retention period during which the data resides on a
logical storage tier. A logical storage tier is a collection of Oracle
tablespaces in which partitions may reside.
Note: Information today comes in a wide variety of types, for example an E-mail
message, a
photograph, or an order in an Online Transaction Processing System. Therefore,
once you know
the type of data and how it will be used, you already have an understanding of
what its evolution
and final destiny is likely to be.
One of the challenges facing each organization is to understand how its data
evolves and grows,
monitor how its usage changes over time, and decide how long it should survive,
while adhering to
all the rules and regulations that now apply to that data. Information Lifecycle
Management (ILM)
is designed to address these issues, with a combination of processes, policies,
software, and
hardware so that the appropriate technology can be used for each stage in the
lifecycle of the
data.
Reference: Implementing Information Lifecycle Management Using the ILM
Assistant
61. You want partitions to be automatically created when data that does not fit
into the current date range is loaded. Which type of partitioning would you
implement?
A. Hash
B. List
C. Invisible
D. Interval
Explanation:
Interval partitioning was introduced in 11g; interval partitions are extensions
to range partitioning. They provide automation for equi-sized range partitions.
Partitions are created as metadata and only the start partition is made
persistent. The additional segments are allocated as the data arrives, and the
additional partitions and local indexes are created automatically.
Note: Partitioning is one of the most sought after options for data warehousing.
Almost all Oracle data warehouses use partitioning to improve the performance of
queries and also to ease the day-to-day maintenance complexities. Starting with
11g, more partitioning options have been provided and these should reduce the
burden of the DBA to a great extent.
Reference: Partitioning in Oracle 11g, Oracle FAQs

62. For data warehousing, identify the benefit that would NOT be provided by the
use of RAC.
A. Distribute workload across all the nodes.
B. Distribute workload to some of the nodes.
C. Provide parallel query servers.
D. Provide high availability for all the operations.
Explanation:
With Oracle RAC the workload can be distributed across all cluster nodes,
parallel query servers can be provided through the Parallel Query tool, and high
availability can be obtained through, for example, Oracle Clusterware.
Note: Oracle RAC (Real Application Clusters) is a cluster database with a shared
cache architecture that overcomes the limitations of traditional shared-nothing
and shared-disk approaches to provide highly scalable and available database
solutions for all your business applications. Oracle RAC is a key component of
Oracle's private cloud architecture. Oracle RAC support is included in the
Oracle Database Standard Edition for higher levels of system uptime.
Reference: Data Warehousing on Oracle RAC Best Practices
63. You think that result set caching might provide some benefits for your
current data warehouse scenario. You perform some analysis on the composition of
the queries used in the scenario. Identify the result of the analysis that would
indicate the most potential for improvement with result set caching.
A. The scenario consists mainly of queries that are used infrequently.
B. The scenario consists mainly of queries that work on data which changes
frequently.
C. The scenario consists mainly of queries with long run times and small result
sets.
D. All data warehouse scenarios will benefit from result set caching.
Explanation:
As its name suggests, the query result cache is used to store the results of SQL
queries for re-use in subsequent executions. By caching the results of queries,
Oracle can avoid having to repeat the potentially time-consuming and intensive
operations that generated the result set in the first place (for example,
sorting/aggregation, physical I/O, joins etc.). The cached results themselves
are available across the instance (i.e. for use by sessions other than the one
that first executed the query) and are maintained by Oracle in a dedicated area
of memory. Unlike our homegrown solutions using associative arrays or global
temporary tables, the query result cache is completely transparent to our
applications. It is also maintained for consistency automatically, unlike our
own caching programs.
Reference: query result cache in oracle 11g, http://www.oracledeveloper.net/display.php?id=503
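A query can ask for its result set to be cached with the RESULT_CACHE hint
(table and column names here are hypothetical); a long-running aggregation with
a small result set, as in answer C, is the typical candidate:

```sql
-- The first execution pays the full aggregation cost; subsequent
-- identical executions can be served from the result cache until
-- the underlying sales data changes.
SELECT /*+ RESULT_CACHE */
       prod_id,
       SUM(amount_sold) AS total_sold
FROM   sales
GROUP  BY prod_id;
```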
64. You will be implementing a data warehouse for one of your customers. In your
design process, which index type is most likely to be used to improve the
performance of some queries where the data is of low cardinality?
A. Bitmap indexes
B. B*-tree indexes
C. Reverse indexes
D. Invisible indexes
Explanation:
Bitmap indexes are a highly compressed index type that tends to be used
primarily for data warehouses.
Characteristics of bitmap indexes:
* Suited for columns with very few unique values (low cardinality).
* Columns that have low cardinality are good candidates (if the cardinality of a
column is <= 0.1% then the column is an ideal candidate; consider also 0.2% -
1%).
* Tables that have no or few inserts/updates are good candidates (static data in
a warehouse).
* Stream of bits: each bit relates to a column value in a single row of the
table.
Reference: The Secrets of Oracle Bitmap Indexes,
http://www.akadia.com/services/ora_bitmapped_index.html
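As a sketch, a bitmap index on a hypothetical low-cardinality column is created
like any other index, with the BITMAP keyword:

```sql
-- gender has very few distinct values, so the bitmaps stay small and
-- AND/OR combinations with other bitmap indexes are cheap to evaluate.
CREATE BITMAP INDEX customers_gender_bix
  ON customers (gender);
```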
65. Identify the action that you CANNOT perform using Database Resource Manager.
A. Define Consumer Groups.
B. Create rules to map sessions to Consumer Groups.
C. Define a Resource Plan.
D. Allocate individual CPUs to Consumer Groups.
Explanation:
Oracle Database Resource Manager (DRM) provides tools that allow any Oracle DBA
to manage a database server's CPU resources effectively for application user
groups and during different resource demand periods.
DRM consists of four basic components:
* Resource Consumer Groups (not A). A resource consumer group is a collection of
users with similar requirements for resource consumption. Users can be assigned
to more than one resource consumer group, but each user's active session can
only be assigned to one resource consumer group at a time.
* Resource Plans (not C). In its simplest form, a resource plan describes the
resources allocated to one or more resource consumer group(s).
* Resource Plan Directives (not B). Resource plan directives allocate resources
among the resource consumer groups in the resource plan. Essentially, directives
connect resource consumer groups or subplans to their resource plans.
* SYSTEM_PLAN. Oracle supplies an initial, default resource plan named
SYSTEM_PLAN. This plan implements a CPU utilization resource allocation method
to divide and prioritize CPU resources to three resource consumer groups.
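The first three components above can be created through the
DBMS_RESOURCE_MANAGER package; the group and plan names below are hypothetical
and the percentage is illustrative:

```sql
BEGIN
  -- All changes are staged in a pending area, then validated and submitted.
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

  -- A consumer group (so answer A is possible).
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'REPORTING',
    comment        => 'Reporting users');

  -- A resource plan (so answer C is possible).
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan    => 'DAYTIME_PLAN',
    comment => 'Daytime resource plan');

  -- A plan directive allocates a share of CPU, not an individual CPU.
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'DAYTIME_PLAN',
    group_or_subplan => 'REPORTING',
    comment          => 'Cap reporting CPU share',
    mgmt_p1          => 30);

  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
```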
66. How many Exadata Storage Server cells can be used in a grid?
A. 7
B. 14
C. 128
D. No practical limit
Explanation:
There is no practical limit to the number of cells that can be in the grid.
Reference: Sun Oracle Exadata and Database Machine Overview
67. Which is NOT among the Oracle SQL analytic functions included in Oracle
Database 11g?
A. Ranking functions
B. Substring functions
C. Window aggregate functions
D. LAG/LEAD functions
Explanation:
Substring functions are not analytic.
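For illustration, the three analytic function families that do ship with 11g
(answers A, C and D) can appear in a single query; the sales table and its
columns are hypothetical:

```sql
SELECT sale_date,
       amount_sold,
       -- Ranking function
       RANK() OVER (ORDER BY amount_sold DESC)    AS amount_rank,
       -- LAG/LEAD function: previous row's value in date order
       LAG(amount_sold) OVER (ORDER BY sale_date) AS prev_amount,
       -- Window aggregate function: 3-row moving sum
       SUM(amount_sold) OVER (ORDER BY sale_date
                              ROWS 2 PRECEDING)   AS moving_sum
FROM   sales;
```

SUBSTR, by contrast, is an ordinary scalar function: it operates on one row at
a time and takes no OVER clause.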
68. For which type of query is the SQL result cache automatically disabled?
A. Queries that access data which changes frequently
B. Queries that return large amounts of data
C. Queries that use SQL functions such as SYSDATE
D. Queries that are used infrequently
Explanation:
SYSDATE produces a new value every time it is used. Caching such a value would
make no sense.
69. Identify the control structure that would NOT be defined as part of a data
flow with Oracle Data Integrator.
A. Loops
B. Conditions
C. Error handling
D. GOTOs
Explanation:
GOTOs cannot be used within the Oracle Data Integrator.
Reference: DIJQR.pdf, Page 7 (Oracle Data Integrator)
70. Identify the true statement about REF partitions.
A. REF partitions have no impact on partition-wise joins.
B. Changes to partitioning in the parent table are automatically reflected in
the child table.
C. Changes in the data in a parent table are reflected in a child table.
D. REF partitions can save storage space in the parent table.
Explanation:
Reference partitioning is a partitioning method introduced in Oracle 11g. Using
reference partitioning, a child table can inherit the partitioning characteristics
from a parent table.
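A minimal reference-partitioning sketch (hypothetical names): the child table
declares no partitioning key of its own and inherits the parent's range
partitions through the foreign key:

```sql
CREATE TABLE orders (
  order_id   NUMBER PRIMARY KEY,
  order_date DATE NOT NULL
)
PARTITION BY RANGE (order_date)
(
  PARTITION p2010 VALUES LESS THAN (DATE '2011-01-01'),
  PARTITION p2011 VALUES LESS THAN (DATE '2012-01-01')
);

-- order_items has no order_date column, yet it is partitioned the same
-- way as orders; the FK column must be NOT NULL for reference partitioning.
CREATE TABLE order_items (
  item_id  NUMBER,
  order_id NUMBER NOT NULL,
  CONSTRAINT order_items_fk
    FOREIGN KEY (order_id) REFERENCES orders (order_id)
)
PARTITION BY REFERENCE (order_items_fk);
```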