
OTECH MAGAZINE
#3 May 2014

Sten Vesterli
The Cost of Technology

Patrick Barel
Absolutely Typical

Simon Haslam
Enterprise Deployment of Oracle Fusion Middleware Products Part 2
Foreword

Rollercoaster
The clicking sounds the rail makes when hauling the carriage up. The moment of silence just before the screaming of the passengers starts. The wind in your hair during that first, breathtaking descent into the unknown.

OTech Magazine is going like a rollercoaster, just like my personal life. In the past few months a lot has happened.

In my personal life I got married to the most wonderful woman in the world (who makes my life the best place in the world - sorry guys). OTech Magazine has some new and exciting partners as well. Thanks to Roland (who does wonders with the graphics of the magazine) and Rob (who makes sure the commercial part of the magazine runs), OTech Magazine is becoming more and more professional.

In my personal life we bought and renovated a new house. It's large and comfortable, and has enough space for expansion. OTech Magazine, as you might have noticed by now, also had quite the makeover. The basis is completely changed and it offers us enough room to expand in the coming months: the magazine will change and improve a bit every issue.

In my personal life I broke my foot. Besides the pain, there's the agony of the best timing in the world. But not everything in life goes the way we plan. This magazine was supposed to be released a few weeks ago, but because of the small humps on the road that I mention above, we didn't make it.

But maybe for the better; this issue of OTech Magazine certainly turned out mighty fine.

I would like to take this opportunity to thank my beautiful wife Simone for her patience with me, Roland for this exciting new magazine look and feel, Rob for the commercial groundwork, our partners AMIS and More Than Code, our sponsors, and most of all the contributors (keep the good stuff coming).

And all our readers. Enjoy the ride!

Douwe Pieter van den Bos
douwepieter@otechmag.com
twitter.com/omebos
www.facebook.com/pages/OTech-Magazine/381818991937657
nl.linkedin.com/in/douwepietervandenbos/

2 OTech Magazine #3 May 2014


Try out our platform and book a free session, for example:

- Migration from OWB to ODI
- Installation of Oracle Enterprise Manager 12c


Content

The Cost of Technology 6
Sten Vesterli

Why and How to use Oracle Database Real Application Testing? 13
Talip Hakan Ozturk

Enterprise Deployment of Oracle Fusion Middleware Products Part 2 26
Simon Haslam

Absolutely Typical 32
Patrick Barel

Step by Step Install Oracle Grid 11.2.0.3 on Solaris 11.1 53
Osama Mustafa

The Relevance of the User Experience 63
Lucas Jellema

Oracle NoSQL Part 2 89
James Anthony

Utility Use Cases: ASM_METRICS.pl 102
Bertrand Drouvot

Build a RAC Database for Free with VirtualBox 109
Christopher Ostrowski

Dinosaurs in Space - Mobilizing Oracle Forms Applications 132
Mia Urman

Provisioning Fusion Middleware using Chef and Puppet Part I 137
Ronald van Luttikhuizen & Simon Haslam

Mobility for WebCenter Content 144
Troy Allen

Analytic Warehouse Picking 150
Kim Berg Hansen

What Does Adaptive in Oracle ACM Mean? 163
Lonneke Dikmans

Oracle Access Manager: Clusters, Connection Resilience and Coherence 174
Robert Honeyman

Qumu & WebCenter - Bringing Video to the Enterprise 182
Jon Chartrand

Oracle Data Guard 12c: New Features 189
Mahir M Quluzade

Introduction to Oracle Technology License Auditing 204
Peter Lorenzen



Do you want to switch from Oracle OWB to ODI? We got the solution!

We are going to be at the next Oracle Open World event, September 28 - October 2, 2014, San Francisco, USA. Booth 115, Moscone South. Why not pay us a visit?

www.owb2odiconverter.com
The Cost of Technology

Sten Vesterli
www.vesterli.com
twitter.com/stenvesterli
www.facebook.com/it.more.than.code
dk.linkedin.com/in/stenvesterli



The Cost
of Technology
1/4 Sten Vesterli

In the United Kingdom, the National Health Service (NHS) has just
given Microsoft more than 5 million pounds (equivalent to 9 million
U.S. dollars). This is money that could have paid for 7,000 eye operations
or 700 heart bypass operations, but now it goes to Microsoft to pay for
extended support for Windows XP. The reason for this cost is that the
NHS is a technological laggard, still running thousands of obsolete
Windows XP installations.

The Technology Adoption Lifecycle


The adoption of new technology follows a classical path called the technology adoption lifecycle. This is a widely applicable model that shows how new practices spread through a population, originally based on studies of new farm practices in the U.S. in the 1950s.

This model divides the population of individuals or organizations into five groups:
Innovators
Early adopters
Early majority
Late majority
Laggards

The distribution of these types follows a normal distribution (bell curve) as shown below.

The innovators are the risk-takers. They are financially strong in order to be able to make the necessary investments in unproven technology and survive any bets that do not pay off.

The early adopters are open to innovation, focusing on using existing technology to achieve improved business outcomes. They are open to redesigning their processes and interactions when new technology makes it attractive.

The early majority is more careful and prefers smaller, incremental improvements to existing processes. They like to see others lead the way before they invest in technology or process improvements.
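As a concrete aside, the bell-curve segmentation mentioned above can be reproduced numerically. The sketch below (Python, illustrative only) uses the conventional cut points of the classic diffusion model, at two, one and zero standard deviations from the mean adoption time; these cut points and the resulting group sizes come from the standard model, not from anything stated explicitly in this article.

```python
import math

def normal_cdf(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# The classic segmentation cuts the bell curve at -2, -1, 0 and +1
# standard deviations from the mean adoption time.
cuts = [("Innovators",     -math.inf, -2.0),
        ("Early adopters", -2.0,      -1.0),
        ("Early majority", -1.0,       0.0),
        ("Late majority",   0.0,       1.0),
        ("Laggards",        1.0,  math.inf)]

shares = {}
for name, lo, hi in cuts:
    lo_p = 0.0 if lo == -math.inf else normal_cdf(lo)
    hi_p = 1.0 if hi == math.inf else normal_cdf(hi)
    shares[name] = hi_p - lo_p

for name, share in shares.items():
    print(f"{name:15s} {share:6.1%}")
```

This yields the familiar shares of roughly 2.5%, 13.5%, 34%, 34% and 16% for the five groups.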


The late majority is conservative and will only implement new technology when it is cheaper than the status quo. Sometimes, financially weak organizations end up in this category even if they have the mindset of one of the higher groups.

The laggards are extremely risk-averse and typically not well-informed about technological trends. They will stick with tried-and-true solutions even when everybody else has moved on.

The cost of being a laggard
Laggards incur a serious cost. They are stuck with unsupported software versions from which there is no upgrade path, and the necessary skills are rare and expensive. Do you know what a COBOL programmer costs these days?

Often, the main cost is financial, but sometimes laggards place the entire organization at risk.

Early in my career, I worked at an organization that was still running their business on a very old mainframe computer. It had a disk drive the size of a washing machine, with 14-inch removable stacks of magnetic disks to store information. Occasionally, some component would fail, which would be evident from actual smoke and flames in the server room. We were paying a very expensive yearly support fee to the vendor, and the vendor actually kept a stock of spare parts specifically for us - parts they had collected as others had retired these ancient systems. Shortly before I left, the second-to-last disc controller in the world for these systems went up in flames, leaving us running a system for which the last spare part had run out.

Considered as a working museum, it was interesting. Considered as a professional IT organization, it was indefensibly irresponsible.

The cost of being an innovator
Innovators spend a lot of money on their technology, betting that they will recoup their investments later. They are risk-takers, and sometimes their bets do not pay off: during the dot-com bubble of the late 1990s, companies like Webvan spent literally billions but ended up going out of business.

Many years ago, when Microsoft had just released the first version of Microsoft Windows, the young Bill Gates went to the most prominent software companies of the day to try to persuade them to build a Windows version of their software. But leading word processor WordPerfect and leading spreadsheet Lotus 1-2-3 were happy with their dominance of the character-based world and rejected Bill Gates. So he decided to build his own applications to showcase what a modern Windows application would look like. Microsoft invested heavily and built Word and Excel, which have come to completely dominate the market for office software and have repaid the investment many, many times.


Companies like Amazon and Tesla have yet to show a significant profit, but have stratospheric stock valuations. Why? Because investors appreciate that they are innovators and have a chance of reaping outsize rewards from dominating new markets.

Cost and Benefit
Every organization tries to achieve the best balance between cost and benefit based on their understanding of the world. The cost is fairly straightforward to calculate, but the benefit is relative. It depends on what your competitors are doing, and the benefit suffers from attrition: it automatically becomes smaller over time as your competitors catch up.

This attrition explains why the laggards end up with the toxic combination of low benefit and high cost. Their application might have provided a significant benefit when it was created 15 years ago, but now it offers no competitive advantage and is very expensive to maintain.

The late majority does not get much benefit because of benefit attrition, but at least they are using well-known software where skills and support resources are available and can easily be purchased offshore.

The early majority is getting a middling benefit from their systems at a cost comparable to the late majority.

The early adopters are getting a significant business benefit from their systems because they continually renew them. Because they keep stepping forward, they do not drift to the left as the majority does. Their cost tends to be slightly higher than the majority's because they pick up new tools and technologies early, before all the kinks have been worked out.

Finally, the innovators are furthest to the right on the benefit scale, but they are also incurring a high cost because they tend to use cutting-edge technology in radically new ways.

Improving yourself
You should make this kind of cost/benefit analysis continually for all of
your major IT systems. The important part is not to classify yourself into
one of the five groups, but to follow where your systems are moving
over time. To track this, you need to gather some data on both costs and
benefits.


Cost is easiest to calculate. You can read your software, hardware and support costs directly from your financial system, and you can allocate your total personnel cost to the various systems you are running. Because many people will be supporting different systems, you need to allocate their cost across the systems they support. A simple ratio (25% of this person's time on that system) is enough. If you already have more detailed time tracking implemented, you can also use this for additional precision.
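As a minimal sketch of this allocation, the following Python snippet (with hypothetical systems, salaries and split ratios) spreads each person's cost across the systems they support using the kind of simple ratio described above, and adds it to the direct costs per system.

```python
# Hypothetical figures: direct costs per system plus a ratio-based
# allocation of each person's salary, as the article suggests.
direct_costs = {"ERP": 120_000, "HR": 40_000, "Webshop": 80_000}

# Fraction of each person's time spent on each system (splits sum to 1).
people = {
    "alice": {"salary": 90_000, "split": {"ERP": 0.75, "HR": 0.25}},
    "bob":   {"salary": 80_000, "split": {"Webshop": 1.0}},
}

# Start from the direct costs, then add each person's allocated share.
total_cost = dict(direct_costs)
for person in people.values():
    for system, ratio in person["split"].items():
        total_cost[system] += person["salary"] * ratio

print(total_cost)
```

With these made-up numbers, ERP carries 75% of Alice's salary, HR the remaining 25%, and the Webshop all of Bob's.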

The benefit is harder to calculate. Some organizations can calculate a direct financial benefit from a system (profit from sales on the web site), but most will have to use other metrics. If your organization is already measuring Key Performance Indicators (KPIs), you can use these as a starting point. There will be other non-financial benefits that you need to figure out a way to measure consistently (customer satisfaction, churn, service request resolution time, etc.).

Plot this measurement for each major system at regular intervals, for example quarterly. This will make you aware of the benefit attrition as it happens, and will prevent you from ending up with low benefits and high costs.
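A trivial way to automate that quarterly check is sketched below (Python; the benefit scores and the 5% threshold are made up for illustration): compare each quarter's benefit score against the first measurement and flag the quarters where attrition has exceeded the threshold.

```python
# Hypothetical quarterly benefit scores for one system (e.g. a KPI index).
quarters = ["2014Q1", "2014Q2", "2014Q3", "2014Q4"]
benefit  = [100.0, 97.0, 92.0, 85.0]

# Flag a quarter when benefit has slipped more than 5% from the first
# measurement -- a simple benefit-attrition alarm.
baseline = benefit[0]
alerts = [q for q, b in zip(quarters, benefit)
          if (baseline - b) / baseline > 0.05]
print(alerts)
```

Here the alarm fires from the third quarter on, which is exactly the kind of drift the regular plot is meant to reveal.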



otech partner:

More Than Code
Copenhagen, Denmark
+45 26 81 86 87
info@more-than-code.com

Sten Vesterli
www.vesterli.com
@stenvesterli
www.facebook.com/it.more.than.code

Success in IT depends on three things: good people, good processes, and good technology. At More Than Code, we work with all three, in any combination.

We help IT organizations build applications their users really love, we help you choose the right technology, we help you operate at maximum efficiency, and we help individual IT developers become as happy and productive as possible. And if you have chosen Oracle ADF as your technology, we have one of the world's leading experts to help you build amazing ADF applications as fast as possible.

Client Case:
The users did not use the functionality of the new HR system, depending on their own shadow systems based on spreadsheets or paper. After interviewing the users, we determined that the new system was too complex for casual users. We suggested and designed a simple, user-friendly front-end to the system focusing only on the tasks relevant for these users, and achieved almost complete data coverage in the central HR system and the elimination of shadow systems.



FOEX Plugin Framework for Oracle APEX

We've enhanced APEX to make it the perfect PL/SQL based alternative to ADF and Forms development, not just for your developers but most importantly for your end users.

- Open multiple Applications within the Desktop Plugin
- Single Page, multiple Components linked together

Build beautiful and powerful Applications using your existing PL/SQL and APEX Skills.

www.tryfoexnow.com email: info@tryfoexnow.com
Why and How to use Oracle Database Real Application Testing?

Talip Hakan Ozturk
www.bankasya.com.tr
www.twitter.com/taliphaknozturk
www.facebook.com/thozturk
www.linkedin.com/in/taliphakanozturk


Why and How to use Oracle
Database Real Application Testing?
1/12 Talip Hakan Ozturk

The importance of competition is increasing day by day. Today, enterprises have to offer service quality to their customers to stay at the forefront. Improvements and investments in IT infrastructure are the foundation of service quality, and the databases at the center of the IT infrastructure have an important place in the quality of services. Any change made to our databases will be reflected directly to our customers, so it must be considered well before it is made, and the necessary testing process must also be carried out.

The Oracle Database Real Application Testing option addresses this testing process. It offers a cost-effective and easy-to-use change assurance solution that enables you to perform real-world testing of your production database. With this comprehensive solution, we can make a change in a test environment, measure the performance degradation or improvement, take any corrective action, and then introduce the change to production systems. Real Application Testing offers two complementary solutions: SQL Performance Analyzer (SPA) and Database Replay.

A. SQL Performance Analyzer (SPA)

System changes that affect the execution plans of SQL statements, such as upgrading the database, changing a parameter, adding a new index, database consolidation testing, configuration changes to the OS, or adding/removing hardware, can have a significant impact on SQL statement performance. These system changes may improve or regress SQL/application performance. For example, we may encounter a surprise after upgrading the production database: a single SQL statement is enough to disrupt the functioning of our database. In this situation, Database Administrators spend enormous amounts of time identifying and fixing regressed SQL statements.

We can predict SQL execution performance problems and prevent service outages using SQL Performance Analyzer. Using it, we can compare the performance degradation of SQLs by running the SQL statements serially before and after the changes. SPA is integrated with the SQL Tuning

Figure 1 Lifecycle of change management


Set, SQL Tuning Advisor, and SQL Plan Management components. You can capture SQL statements into a SQL Tuning Set, then compare the performance of the SQL statements before and after the change by executing SPA on the SQL Tuning Set. Finally, we can tune regressed SQL statements using the SQL Tuning Advisor component.

There are 5 steps to evaluate system changes:

1 Capturing the SQL statement workload. There are several methods to capture the SQL workload, such as AWR and the Cursor Cache. Captured SQL statements are loaded into a SQL Tuning Set. A SQL Tuning Set (STS) is a database object that contains many SQL statements together with their execution statistics and context. The more SQL statements the workload contains, the better it will simulate the state of the application, so we must capture as many SQL statements as possible. It is possible to move an STS from the production system to the test system using the export/import method. You should install and configure the test database environment (platform, hardware, data, etc.) to match the database environment of the production system as closely as possible.

2 Creating a pre-change SQL trial. It is possible to generate the performance data needed for a SQL trial with SQL Performance Analyzer using the following methods.

Test Execute - This method executes SQL statements through SQL Performance Analyzer. This can be done on the database running SQL Performance Analyzer or on a remote database using a database link.

Explain Plan Only - This method generates execution plans only for SQL statements through SQL Performance Analyzer. This can be done on the database running SQL Performance Analyzer or on a remote database using a database link.

Convert SQL Tuning Set - This method converts the execution statistics and plans stored in a SQL tuning set.

3 Making the system change (upgrading the database, changing a parameter, adding a new index, database consolidation testing, configuration changes to the OS, adding/removing hardware, etc.)

4 Creating a post-change SQL trial. It is recommended that you create the post-change SQL trial using the same method as the pre-change SQL trial. After this step, a new SQL trial will be created and new execution statistics/plans stored.

5 Comparing the SQL statement performance. SQL Performance Analyzer compares the pre-change and post-change SQL trials using metrics like CPU time, User I/O time, Buffer gets, Physical I/O, Optimizer cost, and I/O interconnect bytes. It produces a report identifying any changes in execution plans or performance metrics of the SQL statements.
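To make the comparison idea concrete, here is a small Python sketch (illustrative only; the SQL ids, metric values and the 10% threshold are invented) that classifies each statement as improved, regressed or unchanged on a single metric, in the spirit of the SPA comparison report. The real analysis is of course done by DBMS_SQLPA inside the database, not by code like this.

```python
# Optimizer cost per SQL statement in the pre-change and post-change
# trials (hypothetical numbers).
first_trial  = {"sql_1": 120, "sql_2": 300, "sql_3": 45}
second_trial = {"sql_1": 120, "sql_2": 150, "sql_3": 90}

def classify(before: int, after: int, threshold: float = 0.10) -> str:
    """Classify a statement by relative change in a single metric."""
    change = (after - before) / before
    if change <= -threshold:
        return "improved"
    if change >= threshold:
        return "regressed"
    return "unchanged"

report = {sql_id: classify(first_trial[sql_id], second_trial[sql_id])
          for sql_id in first_trial}
print(report)  # sql_2 improved, sql_3 regressed, sql_1 unchanged
```

The regressed statements are the ones that would then be handed to SQL Tuning Advisor, as the article describes.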


Figure 2 SQL Performance Analyzer Workflow

You can use SQL Performance Analyzer through the DBMS_SQLPA API and through the Oracle Enterprise Manager interface. The DBMS_SQLPA package is a command line interface that can be used to test the impact of system changes on SQL performance. How to use SPA through the DBMS_SQLPA API? The following step-by-step example illustrates the impact of a new index creation.

1 Create a SQL Tuning Set (STS) on the production database.

BEGIN
  DBMS_SQLTUNE.create_sqlset (sqlset_name  => 'STS_TALIP',
                              sqlset_owner => 'TALIP');
END;
/

Load SQL statements into the created STS from the cursor cache as below.

DECLARE
  sqlset_cur DBMS_SQLTUNE.sqlset_cursor;
BEGIN
  OPEN sqlset_cur FOR
    SELECT VALUE (p)
      FROM table(DBMS_SQLTUNE.select_cursor_cache (NULL,
                                                   NULL,
                                                   NULL,
                                                   NULL,
                                                   NULL,
                                                   1,
                                                   NULL,
                                                   'TYPICAL')) p;

  DBMS_SQLTUNE.load_sqlset (sqlset_name     => 'STS_TALIP',
                            populate_cursor => sqlset_cur,
                            load_option     => 'MERGE',
                            update_option   => 'ACCUMULATE',
                            sqlset_owner    => 'TALIP');
END;
/


2 Move the created STS to the test database.

a To move the STS, we must create a staging table.

SQL> BEGIN
  DBMS_SQLTUNE.create_stgtab_sqlset (table_name      => 'STG_TABLE',
                                     schema_name     => 'TALIP',
                                     tablespace_name => 'TALIP_TS');
END;
/

b Pack the STS into the staging table.

SQL> BEGIN
  DBMS_SQLTUNE.pack_stgtab_sqlset (sqlset_name          => 'STS_TALIP',
                                   sqlset_owner         => 'TALIP',
                                   staging_table_name   => 'STG_TABLE',
                                   staging_schema_owner => 'TALIP');
END;
/

c Export the staging table and copy the export file to the test system.

# expdp talip@dbtalip directory=export dumpfile=stg_table.dmp logfile=stg_table.log tables=talip.stg_table

d Unpack the staging table into an STS.

BEGIN
  DBMS_SQLTUNE.unpack_stgtab_sqlset (sqlset_name          => 'STS_TALIP',
                                     sqlset_owner         => 'TALIP',
                                     replace              => TRUE,
                                     staging_table_name   => 'STG_TABLE',
                                     staging_schema_owner => 'TALIP');
END;
/

Now, the SQL Tuning Set (STS) is ready for analysis.

3 The first step is to create an analysis task by calling the create_analysis_task procedure of the DBMS_SQLPA package. This procedure creates an advisor task and sets its corresponding parameters according to the user-provided input arguments.

BEGIN
  dbms_sqlpa.create_analysis_task (sqlset_name => 'STS_TALIP',
                                   task_name   => 'talip_spa_task',
                                   description => 'index_creation_test');
END;
/

You can verify the analysis task as follows.

SQL> select task_name, advisor_name, created, status from dba_advisor_tasks where task_name='talip_spa_task';


4 When the analysis task is successfully created, it is in an initial state. Now it is time to build the SQL performance data before making the change.

BEGIN
  dbms_sqlpa.execute_analysis_task
  (
    task_name      => 'talip_spa_task',
    execution_type => 'explain plan',
    execution_name => 'first_trial'
  );
END;
/

When the execute_analysis_task procedure is invoked with the execution_type argument set to 'explain plan', the Analyzer produces execution plans only. If it is invoked with the execution_type argument set to 'test execute', SQL Performance Analyzer executes all SQL statements in the SQL tuning set in order to generate their execution plans as well as their execution statistics.

You can check the task execution status as follows.

SQL> select task_name, execution_name, execution_start, execution_end, status from dba_advisor_executions where task_name='talip_spa_task' order by execution_end;

5 Now we can create the new index on the test database and call the execute_analysis_task procedure again using the same arguments.

BEGIN
  dbms_sqlpa.execute_analysis_task
  (
    task_name      => 'talip_spa_task',
    execution_type => 'explain plan',
    execution_name => 'second_trial'
  );
END;
/

6 We can compare the results of the first and second trials using the same procedure.

BEGIN
  dbms_sqlpa.execute_analysis_task
  (
    task_name        => 'talip_spa_task',
    execution_type   => 'compare performance',
    execution_name   => 'analyze_result',
    execution_params => dbms_advisor.arglist(
                          'execution_name1', 'first_trial',
                          'execution_name2', 'second_trial',
                          'comparison_metric', 'optimizer_cost')
  );
END;
/

7 When the analysis task execution is completed, the comparison results can be generated in HTML/TEXT format by calling the report_analysis_task procedure as follows.

SQL> set heading off long 1000000000 longchunksize 10000 echo off;
set linesize 1000 trimspool on;
spool report.html
select xmltype(dbms_sqlpa.report_analysis_task('talip_spa_task', 'html', top_sql => 500)).getclobval(0,0) from dual;
spool off

You can see the comparison report summary generated in HTML format in figures 3 and 4. In these reports, there are improved and regressed SQL statements after the change. You must analyze the changed execution plans of the regressed SQL statements.

Figure 3 The comparison report summary generated in HTML format.

Figure 4 Regressed SQL statements in the comparison report.

It is also possible to see the changed new plans using the dba_advisor_sqlplans view.

B. Database Replay
The Database Replay solution enables real-world testing of production system changes such as database upgrades, patches, configuration changes (single instance to RAC or the reverse), data storage changes (ASM failgroups, storage SRDF configuration, etc.), file system changes (OCFS2 to


ASM), operating system changes (Windows, Linux, Solaris), and database consolidation testing projects using Oracle 12c Multitenant Databases, etc. It captures the whole production database workload including all concurrency, dependencies and timing. After capturing the real-world production workload, it replays the workload on the test database.

There are 4 steps to evaluate system changes.

1 Capturing the production workload. After the capture process starts, all client activities, including SQL queries, DML statements, DDL statements, PL/SQL blocks and Remote Procedure Calls, are stored in binary files with extensions like wcr_rec*, wcr_capture.wmd and wcr_*.rec. These binary files contain all information about client requests, such as SQL statements, bind variables, etc. The captured files are stored in an Oracle directory that we create in the first step with the CREATE DIRECTORY statement.

2 Preprocessing the workload. After capturing the production workload, you must copy the captured files to the test system. In this step, the captured data stored in the binary files must be prepared for the replay process. This step creates the metadata needed for replaying the workload.

3 Replaying the workload. The data in the production database and the test database must be the same, so you must restore the backup taken before the capture process to the test database. You must also make the necessary system change on the test system. A client program called WRC (Workload Replay Client) replays the preprocessed capture files on the test database. Using the WRC tool in calibration mode, you can determine the number of wrc clients needed for the replay process.

4 Analyzing the result. You can perform detailed analysis of the workload during the capture and replay processes. You can also take AWR (Automatic Workload Repository) reports to compare performance statistics during capture and replay. I think the AWR report is the best method for detailed analysis.

Figure 5 Database Replay Workflow


Database Replay can be used via both the command line interface and Oracle Enterprise Manager. Note that Oracle Database releases 10.2.0.4 and above support Enterprise Manager functionality for capture/replay of workloads. The replay process can only be performed on Oracle Database 11g and higher versions.

The following step-by-step example illustrates using Database Replay from the command line interface (CLI).

1 Before starting the capture process, it is recommended to restart the database to ensure that ongoing and dependent transactions are allowed to complete or roll back before the capture begins. This is recommended but not required, because you cannot always restart a business-critical production database running on a 24x7 basis. First we need to create a directory object in the database where the capture files will be stored.

# mkdir /data1/dbreplay
SQL> CREATE DIRECTORY capturedir AS '/data1/dbreplay';

2 Take an RMAN level 0 backup of the production database to restore to the target test system, and start the capture job.

Note that to capture pre-11g databases you must set the PRE_11G_ENABLE_CAPTURE initialization parameter to TRUE. This parameter can only be used with Oracle Database 10g Release 2 (10.2) and is not valid in subsequent releases. After upgrading the database, you must remove the parameter from the parameter file; otherwise, the database will fail when it starts up.

BEGIN
  dbms_workload_capture.start_capture
  (
    name           => 'CAPTURE',
    dir            => 'CAPTUREDIR',
    default_action => 'INCLUDE'
  );
END;
/

3 You can filter out activities based on instance, program, user, module, and so on. Conversely, you can also record only specific types of activities.

SQL> exec dbms_workload_capture.ADD_FILTER ( fname => 'INSTANCE_NUMBER', fattribute => 'INSTANCE_NUMBER', fvalue => 1);

SQL> exec dbms_workload_capture.ADD_FILTER ( fname => 'USERNAME', fattribute => 'USER', fvalue => 'TALIP');

SQL> exec dbms_workload_capture.DELETE_FILTER ( fname => 'USERNAME');

4 Let the capture process run long enough. For example, for a core banking database, you can capture the database during peak hours through branches and channels, batch job execution and end-of-day operations. After running the capture process long enough, you can stop it as below.

SQL> exec dbms_workload_capture.finish_capture;

5 Navigate to the /data1/dbreplay directory. The captured workload files are located in this directory. Copy these files to the target test system (via ftp, scp, etc.). Before the capture process, an AWR snapshot is taken by the database. After the capture process, another one is taken and exported to the /data1/dbreplay directory automatically.

You can also export the AWR report of the capture process when needed. Select the capture_id from the DBA_WORKLOAD_CAPTURES view and export the necessary AWR snapshots as below; 27 is the capture id.

SQL> exec dbms_workload_capture.export_awr(27);

6 Restore the backup of the production database taken before the capture process on the target test system. You must recover it up to the minute the capture started. You can use the RMAN set until time clause for this, as below.

RMAN> run {
  set until time '2014-02-19 10:08:46';
  restore database;
  recover database;
}

Now the test database is the same as the production database was when the workload capture started.

7 First we need to create a directory object in the database where the copied capture files are stored.

SQL> CREATE DIRECTORY replaydir AS '/data1/dbreplay';

8 We need to preprocess the captured files on the test system. This creates the metadata necessary for replay. Preprocessing is required only once per capture; after preprocessing the captured files, you can replay them many times.

BEGIN
  dbms_workload_replay.process_capture
  (
    capture_dir => 'REPLAYDIR'
  );
END;
/

9 The test database is now ready for the replay process. Let us initialize the replay process as below.

BEGIN


  dbms_workload_replay.initialize_replay
  (
    replay_name => 'REPLAY',
    replay_dir  => 'REPLAYDIR'
  );
END;
/

10 You can specify the parameters below.

synchronization - whether or not the commit order is preserved

connect_time_scale - scales the time elapsed between the start of the replay and the start of each session

think_time_scale - scales the time elapsed between two successive user calls from the same session

think_time_auto_correct - auto-corrects the think time between calls when user calls take longer during the replay than during the capture

BEGIN
  dbms_workload_replay.prepare_replay
  (
    synchronization         => TRUE,
    connect_time_scale      => 100,
    think_time_scale        => 100,
    think_time_auto_correct => FALSE
  );
END;
/

We can also set the scale_up_multiplier parameter, which defines the number of times the workload is scaled up during replay. Each captured session will be replayed concurrently as many times as specified by this parameter. However, only one session in each set of identical replay sessions will execute both queries and updates; the rest of the sessions will only execute queries. For example:

BEGIN
  dbms_workload_replay.prepare_replay
  (
    scale_up_multiplier => 10
  );
END;
/

11 Start a replay client from the command line, using the wrc command.

# $ORACLE_HOME/bin/wrc userid=system password=***** replaydir=/data1/dbreplay

It gives a message like the one below.

Wait for the replay to start (10:11:26)

Note that the number of wrc clients that need to be started depends on the captured workload. To find the number of wrc clients needed, execute the wrc utility in calibrate mode as below.

# $ORACLE_HOME/bin/wrc userid=system password=***** mode=calibrate replaydir=/data1/dbreplay

12. Start the replay process.

SQL> exec dbms_workload_replay.start_replay;

When the replay process starts, the wrc replay client displays the message below.

Replay started (10:12:09)

When the replay process finishes, the wrc replay client displays the message below.

Replay finished (04:53:03)

During the replay process we can pause, resume or cancel the process.

SQL> exec DBMS_WORKLOAD_REPLAY.PAUSE_REPLAY();
SQL> exec DBMS_WORKLOAD_REPLAY.RESUME_REPLAY();
SQL> exec DBMS_WORKLOAD_REPLAY.CANCEL_REPLAY();

13. The last step is analyzing and reporting. We can get the replay report as below.

SQL> SET SERVEROUTPUT ON TRIMSPOOL ON LONG 500000 LINESIZE 200
VAR v_rep_rpt CLOB;
DECLARE
  l_cap_id NUMBER;
  l_rep_id NUMBER;
BEGIN
  l_cap_id := dbms_workload_replay.get_replay_info (dir => 'CAPTUREDIR');
  SELECT MAX (id)
  INTO l_rep_id
  FROM dba_workload_replays
  WHERE capture_id = l_cap_id;
  :v_rep_rpt := dbms_workload_replay.report (replay_id => l_rep_id,
                                             format => dbms_workload_capture.type_html);
END;
/
PRINT :v_rep_rpt

You can also get reports in Oracle Enterprise Manager even if you have used the CLI during the replay process.

Figure 6: Database Replay comparison reports via Oracle Enterprise Manager.
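If you are following the replay from the command line rather than Enterprise Manager, you can also watch its state directly in the same dictionary view the report script above reads. This is a sketch; the columns selected are part of the standard data dictionary, but check the exact column list for your database version:

```sql
-- List replays with their current state and timings (run as a DBA user)
SELECT id
,      name
,      status
,      start_time
,      end_time
FROM   dba_workload_replays
ORDER BY id;
```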

Please check the MetaLink (My Oracle Support) document 463263.1 for Database Capture and Replay common errors and reasons. That is all :-) Test and enjoy it.

Summary:
Changes made to our systems are reflected directly to our customers, so any change must be considered carefully before it is made. Using the Oracle Database Real Application Testing option, you can easily manage system changes with confidence. Real Application Testing helps you lower change risk, reduce system outages, and improve quality of service. You can adopt new technologies with confidence.

Enterprise Deployment of Oracle Fusion Middleware Products
Part 2

Simon Haslam
Consultant at Veriton Ltd
Technical Director, O-box Products Ltd
www.veriton.com
twitter.com/simon_haslam
www.facebook.com/thozturk
uk.linkedin.com/in/simonhaslam
Welcome to the second in a series of articles about building production-grade Fusion Middleware platforms, focussing on the Enterprise Deployment Guides (EDGs). Hopefully you have already read Part 1 from the last issue of OTech Magazine, where I introduced the EDGs and why you might want to use them.

So, to catch up where we left off, an EDG offers a recipe for how to build a secure and highly available system using one of the layered, or "upper stack", product sets, such as SOA Suite or Identity Management. As an Oracle-supplied blueprint it offers a number of well thought out practices though, in my experience, you rarely implement a 100% EDG-compliant system, for reasons which will hopefully become apparent.

There are some areas where I think you may consider deviating from an EDG; in this issue I am going to cover:
Physical versus Virtual Implementations
Licensing Considerations
Failover Approaches
Component Workloads
Lifecycle Requirements
Let's drill into each of these in a bit more depth and, to make it a little easier to follow, here is one of the EDG diagrams I have stuck on the wall in front of my desk!

Diagram 1: SOA EDG diagram of MySOACompany Topology with Oracle Service Bus

The above diagram is taken from the SOA 11g EDG. This EDG actually considers 4 product combinations: SOA (by which we primarily mean BPEL) and OAM alone, plus BAM, plus BPM, and finally SOA with OSB. Please forgive my excessive use of acronyms; hopefully if you are working with Fusion Middleware they will be familiar to you (though the meaning of these abbreviations isn't too important for the purposes of this article). The diagram shows the software components used, the hosts they run on, and the communication channels between both themselves and the outside world. I have zoomed in on the SOA with OSB combination as it's a very common use case and illustrates several possible EDG deviations.

Physical versus Virtual Implementations
It is now several years since I designed or built a production Fusion Middleware environment using operating systems running on bare metal; instead most middleware administrators have to deal with virtual machines (VMs). With modern servers having tens of cores it is hard to imagine many situations, certainly for the sort of mid-sized organisations where I work, where you would use all of that compute power for a single function. The EDGs talk about physical hosts; where they discuss "virtual" they are usually talking about virtual hostnames, i.e. a means of abstracting the service's hostname away from the underlying host. Even though, for middleware, I think you can mostly treat physical hosts and virtual machines the same, if an EDG described both virtual hostnames and the hostnames of virtual machines it could get quite confusing!

There are two reasons that introducing virtual machines alters some design decisions though:

1) When you are using virtual machines you have a lot of flexibility in terms of VM sizing, plus you can have as many of them as you like (within reason). This encourages you to have one machine per function so, where the EDG suggests multiple managed servers per host, you may instead choose to have one managed server per VM as this can be beneficial for administration and tuning.

2) Virtual machines give you a degree of location neutrality, above the physical hardware. This may negate the need to use networking abstractions, such as virtual hostnames and virtual IP addresses (VIPs), which can then simplify the configuration within the VMs. For example, if you put an Admin Server on its own VM you could save having to configure and manage a VIP for this purpose, instead leaving it up to the hypervisor to ensure that the Admin Server VM is always running somewhere. Incidentally this is the approach that Oracle has taken for their Admin Servers in the WebLogic implementation on the Oracle Database Appliance.

So separating out software components onto different (virtual) machines, and reducing the use of virtual (network) hosts, are two areas where you may want to diverge from the EDG's suggestions.

Licensing Considerations
Another consideration for most organisations is Oracle licensing. Products within the technology area described by a single EDG may have different prices. A good example is OSB and SOA Suite ($23,000 and $57,500 per Oracle Processor respectively): whilst SOA Suite includes OSB, it is cheaper to license the cores you need for OSB separately with the cheaper OSB-only licence. If we are not running on bare metal, the options to partition your licences vary according to underlying hardware and software, but in some cases licensing will be a good reason to decompose the products on your physical hosts or VMs and deviate from the EDG.

Failover Approaches
Failover design, to handle the loss (planned or otherwise) of a single hardware or software component, is very important for most production systems. The EDGs suggest a middleware-led approach to failover. This is usually by means of VIPs and Whole Server Migration for WebLogic, or may involve a cluster manager of some sort (e.g. Oracle Clusterware or software supplied by the hardware vendor). However, depending on your requirements, some services may need to be more highly available than others, mostly depending on whether the service is transactional and customer-facing in nature. To avoid the relative complexity of configuring failover in middleware, you could choose a hybrid approach where some services, such as JMS, are failed over by WebLogic and others, like the Admin Server, are failed over by a virtualization feature (like Oracle VM's Live Migration or VMware's HA/vMotion).

Furthermore, when you start considering Disaster Recovery (DR), this is an area that the EDGs don't cover; instead they refer you to the Fusion Middleware Disaster Recovery Guide. There are numerous DR alternatives these days, especially when using virtualization; your ultimate approach will depend on the network connectivity between sites, your RTOs and RPOs, and how much you want to use an Oracle-specific method as compared to something provided by the underlying infrastructure. So it's very important to consider DR from the start of your project as this will probably influence your architecture.

Component Workloads
A more subtle topic where you might want to take a different approach to the EDG is with regards to the relative sizes/locations of the various components. A particular example of this is the Web Services Manager Policy Manager (WSM-PM), which is given its own managed server by the SOA EDG; you might decide that is oversized for your environment and co-locate it in managed servers alongside other products. By and large the EDGs appear to have made carefully considered decisions in this area though so, if you do choose to ignore Oracle's advice, make sure you understand the ramifications.
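To make the licensing comparison above concrete: Oracle Processor licences are typically counted as physical cores multiplied by a core factor (0.5 for most common x86 chips; that factor, the 8-core server and the list prices quoted are assumptions you should check against your own price list and contract). A quick back-of-the-envelope calculation:

```sql
-- 8 cores x 0.5 core factor = 4 Oracle Processors (assumed figures)
SELECT 8 * 0.5 * 23000 AS osb_only_cost    -- 92,000
,      8 * 0.5 * 57500 AS soa_suite_cost   -- 230,000
FROM   dual;
```

On those assumptions, licensing a dedicated OSB-only machine costs well under half of licensing the same cores for full SOA Suite, which is exactly why it can pay to split the products across hosts or VMs.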

Lifecycle Requirements
Patching lifecycle is another factor which could influence how you decide to split out your software components. For example, do you want to patch all components at the same time? If you look at Diagram 1 you will see that SOA (BPEL) and OSB, whilst having their own managed servers, share the same domain; you might decide that the patching timeframes and frequency, as well as the availability requirements, of these services are different and so you'd like to patch them independently, and thus have them in two separate domains. This is a trade-off though between flexibility and complexity; in fact Oracle SOA professionals seem undecided on this with, as far as I can tell, a fairly even mix of both approaches used in production environments.

So, hopefully this has given you some food for thought. In the next article in this series we will cover a few more areas where you need to diverge from the EDG approach, including security, network topology and the occasional documentation error. In the meantime, if you're not too familiar with the EDG for your chosen product set, I encourage you to dive in, spin up a few virtual machines, and try out an EDG configuration for yourself!

Absolutely Typical

Patrick Barel
www.amis.nl
twitter.com/patch72
nl.linkedin.com/in/patrickbarel
This article will convince database developers that types in the Oracle Database are worth their salt - and more. With the recent improvements in 11gR2, the pieces are available to complete the puzzle of structured and modern programming with a touch of OO and, more importantly, to create a decoupled, reusable API that exposes services based on tables and views to clients that speak SQL, AQ, PL/SQL, Types, XML, JSON or RESTful, through SQL*Net, JDBC or HTTP. We will show through many examples how types and collections are defined, how they are used between SQL and PL/SQL, and how they can be converted to and from XML and JSON. Everyone doing PL/SQL programming will benefit immediately from this article.

Every Database Developer should be aware of Types and Collections: for structured programming, for optimal SQL to PL/SQL integration and for interoperability with client applications. This article introduces Types and Collections, their OO capabilities and the conversion to XML and JSON.

Introduction
Types have been available in the Oracle database since day one. Every column in a table is of a specified type. For instance the SAL column of the EMP table is a numeric type, where the column ENAME is a character based type. They are only partially interchangeable. You can put a numeric value into a character based type but you cannot put a character value into a numeric type. These are so-called scalar data types. They can hold exactly one value of a specified type. There are also composite types, like SDO_GEOMETRY, which holds a combination of some scalar types and some collection types. You can even add behavior to the type that will run on the values of that instance of the type.

As you can see, types are implemented the way object oriented languages do this. This has been available since Oracle version 8.0 (which is eight-oh, not eight-zero). User Defined Types (or UDTs) can be helpful for structured programming in PL/SQL.

We will show you how to create a simple User Defined Type (UDT), then a type which holds references to other types, and how to add behavior to the UDT. We will also see how we can convert these types to XML and JSON files. UDTs, in their many forms, can be used for the interaction between SQL and PL/SQL but also for interaction with the outside world, for example through Java programs. They can be used to do OO development in a PL/SQL environment.

Definition
To create a UDT which can be used in the SQL layer of the database you have to create an object using the CREATE (OR REPLACE) TYPE statement. To create a UDT which can only be used in the PL/SQL layer you create a type in a program.

An example of creating an object is shown below:

CREATE OR REPLACE TYPE person_t AS OBJECT
( first_name VARCHAR2(30)
, last_name VARCHAR2(30)
, birthdate DATE
, gender VARCHAR2(1)
)

It looks a lot like the creation of a database table, shown below, but data in the UDT is not persisted unless you use a database table to store the data.

CREATE TABLE t_person
( first_name VARCHAR2(30)
, last_name VARCHAR2(30)
, birthdate DATE
, gender VARCHAR2(1)
)

The UDT can be used as the data type in the creation of a table just as you would use a scalar datatype:

CREATE TABLE t_person
( person person_t
)

Using a UDT doesn't make much sense here, but if we extend the type it will make more sense.

Using the UDT in your PL/SQL code is a bit different from using a scalar data type. Instead of just declaring the variable and then using it, you should instantiate (initialize) it before you can use it. To instantiate the variable you call the constructor of the UDT. When you create a UDT, a default constructor is automatically created. This is called by sending in all the values for the properties to a function which has the same name as the UDT. After instantiating the variable it can be used like any other. To address the different fields in the UDT you use the dot notation (<variable_name>.<field>).

DECLARE
  l_person person_t := person_t
                       ( 'John'
                       , 'Doe'
                       , to_date('12-29-1972','MM-DD-YYYY')
                       , 'M'
                       );
BEGIN
  l_person.first_name := 'Jane';
  l_person.gender := 'F';
END;

Functions
You can use the UDT as a parameter in, for instance, a function. This way you can send in a complete set of values as a single parameter instead of all separate parameters. This can help make your code not only more readable, but also a bit more self-documenting (provided you use a logical name for the UDT). Note that a UDT can be both an IN and an OUT parameter.
So, instead of creating a function like this:

CREATE OR REPLACE FUNCTION display_label ( first_name_in IN VARCHAR2
                                         , last_name_in IN VARCHAR2
                                         , birthdate_in IN DATE
                                         , gender_in IN VARCHAR2) RETURN VARCHAR2
IS
BEGIN
  RETURN CASE gender_in
           WHEN 'M' THEN 'Mr.'
           WHEN 'F' THEN 'Mrs.'
         END ||
         ' ' || first_name_in ||
         ' ' || last_name_in ||

         ' (' || EXTRACT (YEAR FROM birthdate_in) || ')';
END;

we can create a function like this:

CREATE OR REPLACE FUNCTION display_label (person_in IN person_t) RETURN VARCHAR2
IS
BEGIN
  RETURN CASE person_in.gender
           WHEN 'M' THEN 'Mr.'
           WHEN 'F' THEN 'Mrs.'
         END ||
         ' ' || person_in.first_name ||
         ' ' || person_in.last_name ||
         ' (' || EXTRACT (YEAR FROM person_in.birthdate) || ')';
END;

We can also call this function from SQL, like this:

SELECT display_label(person_t
                     ( 'John'
                     , 'Doe'
                     , to_date('12-29-1972','MM-DD-YYYY')
                     , 'M'
                     )
                    )
FROM dual

But it can also be used in PL/SQL, like this:

DECLARE
  l_person person_t := person_t
                       ( 'John'
                       , 'Doe'
                       , to_date('12-29-1972','MM-DD-YYYY')
                       , 'M'
                       );
BEGIN
  l_person.first_name := 'Jane';
  l_person.gender := 'F';
  dbms_output.put_line(display_label(l_person));
END;

Complex types
Types can consist of other types. Suppose we have a UDT with the information for the social profile. It might look something like this:

CREATE OR REPLACE TYPE social_profile_t AS OBJECT
( linkedin_account VARCHAR2(100)
, twitter_account VARCHAR2(100)
, facebook_account VARCHAR2(100)
, personal_blog VARCHAR2(100)
)

We can now extend our person type to include this social profile as an attribute:

CREATE OR REPLACE TYPE person_t AS OBJECT
( first_name VARCHAR2(30)
, last_name VARCHAR2(30)
, birthdate DATE
, gender VARCHAR2(1)
, social_profile social_profile_t
)

The social profile is now nested inside the person type. Creating an instance of the person object gets a bit more complicated because the social profile has to be created as an instance itself:

DECLARE
  l_person person_t := person_t
                       ( 'John'
                       , 'Doe'
                       , to_date('12-29-1972','MM-DD-YYYY')
                       , 'M'
                       , social_profile_t( 'JohnDoe'
                                         , 'JohnTweets'
                                         , 'JohnOnFacebook'
                                         , 'http://johndoe.blogspot.com'
                                         )
                       );
BEGIN
  dbms_output.put_line(display_label(l_person));
  dbms_output.put_line(l_person.social_profile.personal_blog);
END;

As you can see in the example above, the function created earlier still works. To access the values of the nested UDT you use the chained dot notation. First you point to the social profile attribute in the person variable and then, within that social profile, you point to the attribute you want to access.

Collections
Besides record type UDTs you can also create collections of instances of scalar or other, e.g. user defined, types. Collections are either sparse or dense arrays of homogeneous elements. You can think of them as tables (that's why the Associative Array used to be called PL/SQL Table). There are three types of collections available:
Associative Array
Nested Table
VArray

These collection types are similar, though there are some differences, as you can see in this table:

Feature         Associative Array   Nested Table        VArray
SQL / PL/SQL    PL/SQL only         SQL and PL/SQL      SQL and PL/SQL
Dense / Sparse  Sparse              Initially dense,    Dense
                                    can become sparse
Size            Unlimited           Unlimited           Limited
Order           Unordered           Ordered             Ordered
Usage           Any set of data     Any set of data     Small sets of data
Use in Table    No                  Yes                 Yes

The most important difference is that the Associative Array can only be used in PL/SQL, where the Nested Table and the VArray can be used in both SQL and PL/SQL. Because the collections can be used in SQL they can also be stored in tables. This is a bit against the normalization principle but it can make sense in some cases.
An example of this could be a list of phone numbers. You create a type phone_t like this:

CREATE OR REPLACE TYPE phone_t AS OBJECT
( phone_type VARCHAR2(30)
, phone_nr VARCHAR2(30)
)

Then you create a nested table based on this UDT:

CREATE OR REPLACE TYPE phone_ntt AS TABLE OF phone_t
Now that all is in place, the nested table can be used as a column in a normal table:

CREATE TABLE persons
( first_name VARCHAR2(30)
, last_name VARCHAR2(30)
, phone_nrs phone_ntt
)
NESTED TABLE phone_nrs STORE AS phone_nrs_ntt_tab

Since we are using a nested table as a column and there is no way of telling how big this is going to become, you have to tell Oracle where to store the data. If you were using a VArray, then Oracle would know upfront how big it could become at maximum.
To use a VArray instead of a nested table you would use this to create the VArray:

CREATE OR REPLACE TYPE phone_vat AS VARRAY(10) OF phone_t

And this for the table:

CREATE TABLE persons
( first_name VARCHAR2(30)
, last_name VARCHAR2(30)
, phone_nrs phone_vat
)

Storing Nested Tables or VArrays in a table feels a bit strange, especially when you are always normalizing your schema. It does make sense though when you are building a data warehouse or some sort of historical data storage. You could for instance create a database table to hold both the invoice header information and all its invoice lines in a single record. In this case the invoice lines could be a Nested Table in the relational table.

Complex types
Types can be as complex as you would want them to be. They can consist of scalars, other UDTs, Nested Tables and VArrays which in turn, except for the scalars, can consist of everything mentioned before. Consider the person UDT we created earlier, including the social profile. One of the properties can be a list of phone numbers, so we can add the Nested Table to the UDT. The nested table of phone numbers itself consists of UDTs with the phone type and the phone number as properties.

CREATE OR REPLACE TYPE person_t AS OBJECT
( first_name VARCHAR2(30)
, last_name VARCHAR2(30)
, birthdate DATE
, gender VARCHAR2(1)
, social_profile social_profile_t
, phone_numbers phone_ntt
)

INSERT INTO persons ( first_name
                    , last_name
                    , phone_nrs
                    )
VALUES ( 'John'
       , 'Doe'
       , phone_ntt
         ( phone_t
           ( 'business'
           , '555-12345'
           )
         , phone_t
           ( 'private'
           , '555-67890'
           )
         )
       )
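Once rows like the INSERT above are stored, the nested phone numbers can be flattened back into relational rows with the TABLE() operator. A sketch, assuming the persons table with the phone_nrs nested table column:

```sql
-- unnest the collection: one row per person per phone number
SELECT p.first_name
,      p.last_name
,      t.phone_type
,      t.phone_nr
FROM   persons p
,      TABLE(p.phone_nrs) t
```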

The code gets a bit more complicated, but all the data is kept together.

DECLARE
  l_person person_t := person_t( 'John'
                               , 'Doe'
                               , to_date('12-29-1972','MM-DD-YYYY')
                               , 'M'
                               , social_profile_t( 'JohnDoe'
                                                 , 'JohnTweets'
                                                 , 'JohnOnFacebook'
                                                 , 'http://johndoe.blogspot.com'
                                                 )
                               , phone_ntt
                                 ( phone_t
                                   ( 'business'
                                   , '555-12345'
                                   )
                                 , phone_t
                                   ( 'private'
                                   , '555-67890'
                                   )
                                 )
                               );
BEGIN
  dbms_output.put_line(display_label(l_person));
  dbms_output.put_line(l_person.social_profile.personal_blog);
  dbms_output.put_line(l_person.phone_numbers(1).phone_nr);
END;

The last line displays the phone number that is stored in the first entry of the nested table.

Type hierarchy
Types can be created as children of other types. For instance, a person can be just a person or, more specifically, an employee or a customer. They share some of the properties, but some properties are very specific. A customer for instance may have a credit limit. For an employee we want to know what job he or she is in. We can of course create entirely different types based on their different usage, but that would mean we would have to create code that does basically the same thing at least two times. The Object Oriented way of approaching this would be to create a UDT with the common properties and then create children of this UDT with the specific properties added. We create the UDT like we did before, but we add the keywords NOT FINAL, indicating there can be children defined under this type.

CREATE OR REPLACE TYPE person_t AS OBJECT
( first_name VARCHAR2(30)
, last_name VARCHAR2(30)
, birthdate DATE
, gender VARCHAR2(1)
, social_profile social_profile_t
, phone_numbers phone_ntt
) NOT FINAL

Now we create the other two UDTs under this type:

CREATE OR REPLACE TYPE employee_t UNDER person_t
( ID NUMBER(10)
, NAME VARCHAR2(30)
, job VARCHAR2(30)
, department_id NUMBER(10)
, hiredate DATE
, salary NUMBER(10,2)
)

And:

CREATE OR REPLACE TYPE customer_t UNDER person_t
( company_name VARCHAR2(100)
, telephone_number VARCHAR2(15)
)

We use the keyword UNDER <typename> to indicate that this type should inherit all the properties of its specified parent type. The subtype created has multiple identities. For instance: the EMPLOYEE_T is both PERSON_T and EMPLOYEE_T. Any code created that can handle a PERSON_T can also handle an EMPLOYEE_T (and a CUSTOMER_T). But, besides handling the common properties, it is also possible to handle a subtype in a specific manner. As we saw earlier, we can send a subtype as a parameter to a program that expects its supertype. In the program, we can check what was actually sent in and run the appropriate code. Using the IS OF operator you can check what the actual type of the parameter that was sent in is. If you add the keyword ONLY then you check for the parameter being the specified type and not one of its subtypes. A customer is a person, but a person is not necessarily a customer.

Using the TREAT (AS type) operator you cast the instance to a specific subtype. This way you can access the specific attributes that are only available in this subtype.

CREATE OR REPLACE FUNCTION display_label (person_in IN person_t) RETURN VARCHAR2
IS
  l_label    VARCHAR2(32767);
  l_customer customer_t;
  l_employee employee_t;
BEGIN
  l_label := CASE person_in.gender
               WHEN 'M' THEN 'Mr.'
               WHEN 'F' THEN 'Mrs.'
             END ||
             ' ' || person_in.first_name ||
             ' ' || person_in.last_name ||
             ' (' || EXTRACT (YEAR FROM person_in.birthdate) || ')';
  -- check what the actual type is of the parameter sent in
  CASE
    -- when it is a person_t and not one of the subtypes
    WHEN person_in IS OF (ONLY person_t) THEN
      NULL;
    -- when it is actually a customer_t
    WHEN person_in IS OF (customer_t) THEN
      l_customer := TREAT(person_in AS customer_t);
      l_label := l_label || ' of company ' || l_customer.company_name;
    -- when it is actually an employee_t
    WHEN person_in IS OF (employee_t) THEN
      l_employee := TREAT(person_in AS employee_t);
      l_label := l_label || ' function: ' || l_employee.job ||
                 ' in department ' || l_employee.department_id;
  END CASE;
  RETURN l_label;
END;
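A quick sketch of calling this version of display_label with a subtype instance. Note that the default customer_t constructor expects values for all inherited person_t attributes before the customer-specific ones; the company name and telephone number used here are hypothetical:

```sql
DECLARE
  -- six inherited person_t attributes first, then the customer_t ones
  l_customer customer_t := customer_t( 'John'
                                     , 'Doe'
                                     , to_date('12-29-1972','MM-DD-YYYY')
                                     , 'M'
                                     , NULL          -- social_profile
                                     , NULL          -- phone_numbers
                                     , 'Acme Inc.'   -- company_name (hypothetical)
                                     , '555-99999'   -- telephone_number (hypothetical)
                                     );
BEGIN
  -- the IS OF (customer_t) branch fires and appends the company name
  dbms_output.put_line(display_label(l_customer));
END;
```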

Member functions
Besides creating a function that takes a UDT as a parameter, we can also define the function as part of the UDT. It is very OO to combine the data and the behavior of that data. The behavior is defined in the specification of the type and implemented in the body of the type. There are two main types of member functions:
Constructor functions
Normal member functions

Constructor functions
When you instantiate a variable based on a UDT you call the constructor function. A default constructor function is always created for you, but you can add your own, overloaded, constructor functions to the type. The default constructor expects you to send in values for every property in the type. By creating (overloaded) constructors you can control what properties need to be set when initiating an instance. A good practice is to create a constructor without any parameters and to instantiate the variable with NULL values for all properties. But you can also create a constructor that takes just a couple of arguments and instantiates the rest of the properties with NULL values.

Member functions
We created a function that accepts a UDT as a parameter. We can also implement this function as a member function of the type. Instead of accepting a parameter with the instance of the type, the code has access to the values of the properties of this instance. The instance is referenced using the SELF keyword, so a property is referenced as SELF.<propertyname>.

CREATE OR REPLACE TYPE person_t AS OBJECT
( first_name VARCHAR2(30)
, last_name VARCHAR2(30)
, birthdate DATE
, gender VARCHAR2(1)
, social_profile social_profile_t
, phone_numbers phone_ntt
, CONSTRUCTOR FUNCTION person_t RETURN SELF AS RESULT
, CONSTRUCTOR FUNCTION person_t(first_name_in IN VARCHAR2
                               ,last_name_in IN VARCHAR2
                               ,birthdate_in IN DATE
                               ,gender_in IN VARCHAR2) RETURN SELF AS RESULT
, MEMBER FUNCTION display_label RETURN VARCHAR2
) NOT FINAL

This UDT has both constructors and a member function defined. We still have to provide the implementation for these functions, which is done in the BODY of the UDT:

CREATE OR REPLACE TYPE BODY person_t AS
  CONSTRUCTOR FUNCTION person_t RETURN SELF AS RESULT
  IS
  BEGIN
    self.first_name := NULL;
    self.last_name := NULL;
    self.birthdate := NULL;
    self.gender := NULL;
    self.social_profile := NULL;
    self.phone_numbers := NULL;
    RETURN;
  END;

  CONSTRUCTOR FUNCTION person_t(first_name_in IN VARCHAR2
                               ,last_name_in IN VARCHAR2
                               ,birthdate_in IN DATE
                               ,gender_in IN VARCHAR2) RETURN SELF AS

  RESULT IS
  BEGIN
    self.first_name := first_name_in;
    self.last_name := last_name_in;
    self.birthdate := birthdate_in;
    self.gender := gender_in;
    self.social_profile := NULL;
    self.phone_numbers := NULL;
    RETURN;
  END;

  MEMBER FUNCTION display_label RETURN VARCHAR2 IS
  BEGIN
    RETURN CASE self.gender
             WHEN 'M' THEN 'Mr.'
             WHEN 'F' THEN 'Mrs.'
           END ||
           ' ' || self.first_name ||
           ' ' || self.last_name ||
           ' (' || EXTRACT(YEAR FROM self.birthdate) || ')';
  END;
END;

If this UDT is created we can use it pretty much the same way we did earlier, but now we can call the member function display_label and we do not need a stand-alone function anymore.

DECLARE
  l_person person_t := person_t( 'John'
                               , 'Doe'
                               , to_date('12-29-1972','MM-DD-YYYY')
                               , 'M'
                               );
BEGIN
  dbms_output.put_line(l_person.display_label);
END;

If we create a UDT under this base type, then the new UDT automatically inherits the member function. We can also override the behavior of the member function by OVERRIDING the member function. In this overridden function we can reference the function defined in the super type by casting the UDT to its supertype:

CREATE OR REPLACE TYPE customer_t UNDER person_t
( company_name VARCHAR2(100)
, telephone_number VARCHAR2(15)
, OVERRIDING MEMBER FUNCTION display_label RETURN VARCHAR2
)

And then the implementation:

CREATE OR REPLACE TYPE BODY customer_t AS
  OVERRIDING MEMBER FUNCTION display_label RETURN VARCHAR2 IS
  BEGIN
    RETURN (self AS person_t).display_label -- display label of the parent type
           || ' of company ' || self.company_name;
  END;
END;

In this example we use the display_label function as defined on the super type and add some extra info to it.

Map and Order functions
By creating Map or Order functions we can use the UDT in the order by clause of a SQL statement. You can define either a Map or an Order function, not both. In the order function you define the outcome of a comparison with another instance of the UDT. The function takes the other instance as an argument and returns -1 (when this instance comes first), 1 (when this instance comes last) or 0 (when they draw).
If you can come up with a scalar value, based on the properties, that can be used to order the instances, then you can also create a map function,

instead of the order function. This function is more efficient, because the order function has to be called repeatedly since it only compares two objects at a time, where the map function maps the object into a scalar value which is then used in the sort algorithm.

Bulk Processing
Besides creating UDTs that hold data for a specified object, you can also create sets of data. These sets can consist of scalars or even UDTs you created. This way you can work with the data as if it were relational tables, but actually they are in-memory variables or values of the record that is stored in the database. The most important use for collections is probably bulk processing. In traditional programming a cursor is opened, a record is fetched from it, the data is processed and then it is on to the next record, just as long as there are records available in the cursor. Every time a record is fetched, there are two context switches: one from the PL/SQL engine to the SQL engine and then one back. This Row-By-Row approach is also referred to as Slow-By-Slow. Using collections you can minimize the number of context switches because multiple rows are collected and then returned in a single pass. This means all the data you selected in your cursor is available right away in the program you are running. This can have a major impact on the memory usage; that is why you can limit the number of rows returned in one roundtrip. This means a little more coding, but the performance benefits are enormous.

Traditional approach:

DECLARE
  CURSOR c_emp IS
    SELECT ename
    FROM emp;
  r_emp c_emp%ROWTYPE;
BEGIN
  OPEN c_emp;
  FETCH c_emp INTO r_emp;
  WHILE c_emp%FOUND LOOP
    dbms_output.put_line(r_emp.ename);
    FETCH c_emp INTO r_emp;
  END LOOP;
  CLOSE c_emp;
END;

Bulk processing approach:

CREATE OR REPLACE TYPE enames_ntt AS
  TABLE OF VARCHAR2(10);

DECLARE
  CURSOR c_emp IS
    SELECT ename
    FROM emp;
  l_emps enames_ntt;
BEGIN
  OPEN c_emp;
  FETCH c_emp BULK COLLECT INTO l_emps;
  CLOSE c_emp;
  FOR indx IN l_emps.first .. l_emps.last LOOP
    dbms_output.put_line(l_emps(indx));
  END LOOP;
END;

Table Functions
Another application of collections is the creation of table functions. These are functions that return a collection (instead of a single value) and that can be queried from a SQL statement using the TABLE() operator. Using this approach you can leverage all the possibilities of the PL/SQL engine in the SQL engine. Be advised that there are still context switches going on, so if you can solve your issue in plain SQL then that is the preferred way.
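The round-trip arithmetic behind the bulk approach can be sketched outside the database as well. Below is a small Python simulation (illustrative only, not PL/SQL): the counter stands in for the context switches between the PL/SQL and SQL engines, and the limit parameter plays the role of a BULK COLLECT LIMIT clause.

```python
# Simulate fetching 100 rows row-by-row versus in batches of 25.
# Each "visit" to the result set counts as one context switch.

rows = ["EMP%d" % i for i in range(100)]   # pretend result set

def fetch_row_by_row(data):
    switches, out = 0, []
    for row in data:                       # one switch per row
        switches += 1
        out.append(row)
    return out, switches

def fetch_bulk(data, limit=25):
    switches, out = 0, []
    for start in range(0, len(data), limit):   # one switch per batch
        switches += 1
        out.extend(data[start:start + limit])
    return out, switches

slow, slow_switches = fetch_row_by_row(rows)
fast, fast_switches = fetch_bulk(rows, limit=25)
print(slow_switches, fast_switches)        # 100 versus 4 round trips
```

Same data either way; only the number of round trips differs, which is exactly what bulk collecting with a limit buys you at the cost of some memory.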


First you create a collection type in the database. Notice that you can only use VArrays and Nested Tables for this, since these are the only ones available in the SQL layer:

CREATE OR REPLACE TYPE enames_ntt AS TABLE OF VARCHAR2(10)

Then you create the code that returns the collection:

CREATE OR REPLACE FUNCTION scrambled_enames RETURN enames_ntt
IS
  CURSOR c_emp IS
    SELECT ename
    FROM emp;
  l_returnvalue enames_ntt;
BEGIN
  OPEN c_emp;
  FETCH c_emp BULK COLLECT INTO l_returnvalue;
  CLOSE c_emp;
  FOR indx IN l_returnvalue.first .. l_returnvalue.last LOOP
    l_returnvalue(indx) := translate( l_returnvalue(indx)
                                    , 'abcdefghijklmnopqrs'
                                    , 'srqponmlkjihgfedcba'
                                    );
  END LOOP;
  RETURN l_returnvalue;
END;

Then you query this function as if it were a relational table:

SELECT *
FROM TABLE(scrambled_enames)

XML
XML is also stored in a specific type in the Oracle Database. Even though XML is just a plain text/ASCII file, which could be stored in a varchar2 type or (if it gets too big) in a clob type, Oracle now provides us with the XMLType. This is a specialized type for handling XML. Besides storing the XML content it also provides us with a lot of functions to manipulate the XML data. There are functions to retrieve an element at a specific path in the document but also functions to extract a portion of the XML document.

There are numerous ways to construct XML in a database application. It can for instance be loaded from a file or created from a SQL statement using XML specific functions like XMLAgg, XMLElement, XMLForest and others. But it can also be instantiated based on another UDT instance.

DECLARE
  l_person person_t := person_t( 'John'
                               , 'Doe'
                               , to_date('12-29-1972','MM-DD-YYYY')
                               , 'M'
                               );
  l_xml XMLTYPE;
BEGIN
  l_xml := XMLTYPE(l_person);
  dbms_output.put_line(l_xml.getstringval);
END;

The output would be:

<PERSON_T>
<FIRST_NAME>John</FIRST_NAME>
<LAST_NAME>Doe</LAST_NAME>
<BIRTHDATE>29-12-72</BIRTHDATE>
<GENDER>M</GENDER>
</PERSON_T>

If you want to convert a nested table to XML you will need a wrapper object. You cannot convert a Nested Table to XML directly. You can however convert a UDT which holds a nested table to XML.

CREATE OR REPLACE TYPE persons_ntt IS TABLE OF person_t

Create a wrapper UDT to convert the Nested Table to XML:

CREATE OR REPLACE TYPE persons_wrap AS OBJECT
( persons persons_ntt)


DECLARE
  l_persons persons_ntt := persons_ntt(
      person_t( 'John'
              , 'Doe'
              , to_date('12-29-1972','MM-DD-YYYY')
              , 'M')
     ,person_t( 'Jane'
              , 'Doe'
              , to_date('03-06-1976','MM-DD-YYYY')
              , 'F')
    );
  l_xml XMLTYPE;
BEGIN
  l_xml := XMLTYPE(persons_wrap(l_persons));
  dbms_output.put_line(l_xml.getstringval);
END;

The output would be:

<PERSONS_WRAP>
<PERSONS>
<PERSON_T>
<FIRST_NAME>John</FIRST_NAME>
<LAST_NAME>Doe</LAST_NAME>
<BIRTHDATE>29-12-72</BIRTHDATE>
<GENDER>M</GENDER>
</PERSON_T>
<PERSON_T>
<FIRST_NAME>Jane</FIRST_NAME>
<LAST_NAME>Doe</LAST_NAME>
<BIRTHDATE>06-03-76</BIRTHDATE>
<GENDER>F</GENDER>
</PERSON_T>
</PERSONS>
</PERSONS_WRAP>

As you can convert a UDT to XML, it can also be done vice versa.

DECLARE
  l_xml XMLTYPE;
  l_person person_t;
BEGIN
  l_xml := XMLTYPE('<PERSON>
    <FIRST_NAME>John</FIRST_NAME>
    <LAST_NAME>Doe</LAST_NAME>
    <BIRTHDATE>29-12-72</BIRTHDATE>
    <GENDER>M</GENDER>
    <SOCIAL_PROFILE></SOCIAL_PROFILE>
    <PHONE_NUMBERS></PHONE_NUMBERS>
  </PERSON>');
  l_xml.toobject(l_person);
END;

Note that the tags should be in uppercase, otherwise the conversion will fail. Not all properties have to be present in the XML. If a tag doesn't exist in the XML, the corresponding property will be NULL.

JSON
Sometimes XML is a bit heavy. It is quite a verbose method to store the data: every value in the document is surrounded by tags which tell us which field it is. This is where JSON may help. JSON consists of name-value pairs. Where XML is written like this: <FIRST_NAME>John</FIRST_NAME>, the JSON equivalent is: { "FIRST_NAME" : "John" }. Unfortunately there is no support for JSON like there is for XML in the database yet. There is however an open source library available that implements JSON functionality. There is no implementation (yet) to convert a UDT to JSON


directly, but PL/JSON implements functionality to convert XML to JSON. Using XML as an intermediate step we can convert a UDT to JSON.

DECLARE
  l_json json_list;
BEGIN
  l_json :=
    -- convert
    json_ml.xmlstr2json(
      -- a converted XML instance
      XMLTYPE(
        -- of a UDT instance
        person_t( 'John'
                , 'Doe'
                , to_date('12-29-1972','MM-DD-YYYY')
                , 'M'
                )
      ).getstringval()
    );
  l_json.print;
END;

The output:

["PERSON_T", ["FIRST_NAME", "John"], ["LAST_NAME", "Doe"], ["BIRTHDATE", "29-12-72"], ["GENDER", "M"]]

Publishing APIs
Exposing the functionality and data in our database to external consumers is a frequent challenge. Traditionally, many applications and application developers will use SQL to access the database. However, in applying some of the core concepts from Service Oriented Architecture and basic good programming practice we quickly realize that it may not be such a good idea to expose our data model so directly. Any change to the data model may directly impact many users of our database. Yet we do not want to be held back from creating improvements by such external consumers. Additionally, having 3rd parties fire off SQL statements to our database may result in pretty lousy SQL being executed, which may lead to serious performance issues. When it comes to data manipulation there are even more reasons why direct access to our tables is undesirable. Enforcing complex data constraints and coordinating transaction logic are two important ones.

So instead of allowing direct access to our tables, we should be thinking about publishing an API that encapsulates our data model and associated business logic and presents a third-party friendly API. Using views on top of the data model is one way of implementing such an API, and if we use Instead Of triggers along with those views we can route any DML to PL/SQL packages that take care of business logic. Other options for implementing an API include the native database web service option that was introduced in Oracle Database 11g, which allows us to publish SOAP Web Services from the database, or use of the Embedded PL/SQL Gateway to expose simple HTTP services, to be discussed a little bit later on. Note that APEX 4.x provides a lot of help for creating such RESTful services, as they are called.
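The encapsulation argument can be shown in miniature. In the sketch below (Python, purely illustrative; the storage structure and function name are invented) consumers only ever call the published function, so the internal layout can change without breaking them:

```python
# Internal model: private, free to change at any time.
_employees = [
    {"empno": 7839, "ename": "KING",  "deptno": 10},
    {"empno": 7782, "ename": "CLARK", "deptno": 10},
    {"empno": 7900, "ename": "JAMES", "deptno": 30},
]

def employees_in_department(deptno):
    """Published API: a stable contract that hides the data model."""
    return sorted(e["ename"] for e in _employees if e["deptno"] == deptno)

print(employees_in_department(10))
```

Renaming a column or splitting the internal structure in two only touches the function body; every consumer keeps working, which is precisely the decoupling that an API layer over tables provides.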

45 OTech Magazine #3 May 2014


absolutely
typical
14/19 Patrick Barel

Somewhere between the View approach and the HTTP based service way of thinking is the option of publishing a PL/SQL API. In this case, we use PL/SQL packages that define the public interface in their Specification and contain the firmly encapsulated implementation in their Body. Note that the Web Services will typically be just another layer on top of such a PL/SQL API.

When the operations supported in the interface need to leverage complex, nested data structures, such as an Order with Order Lines or a Hotel Booking with all guests sharing a room, UDTs are the perfect vehicle to use. Using a single parameter, a complex data set can be transferred. Because UDTs support, if not enforce, a structured programming style inside the package body, the case for UDTs is even stronger. And the database adapter that is frequently used in Oracle SOA Suite and Service Bus to integrate with the Oracle Database knows very well how to interact with PL/SQL APIs based on UDTs. In fact, many organizations have adopted the use of UDT based PL/SQL APIs as their best practice for making SOA Suite & Service Bus interact with the Database.

Let's take a look at an example of such a PL/SQL API. The API exposes a search operation through which consumers can look up CDs. This search can be based on a number of search criteria, currently title, artist and year range. The result of the search is a collection of CDs with for each CD data such as title and year of release and a listing of all songs. Per song, the title and the duration are included.

Despite all the data involved, the API itself can be very simple:

PACKAGE music_api
  PROCEDURE search_for_cds
  ( p_cd_query      IN  cd_query_t
  , p_cd_collection OUT cd_collection_t
  );

The complexity is hidden away in the definition of the UDTs involved:

TYPE song_t AS OBJECT
( title    VARCHAR2(40)
, duration NUMBER(4,2)
)

TYPE song_list_t AS TABLE OF song_t

TYPE cd_t AS OBJECT
( title      VARCHAR2(40)
, year       NUMBER(4)
, artist     VARCHAR2(40)
, track_list song_list_t
);

TYPE cd_collection_t AS TABLE OF cd_t

TYPE cd_query_t AS OBJECT
( title     VARCHAR2(40)
, from_year NUMBER(4)
, to_year   NUMBER(4)
, artist    VARCHAR2(40)
)
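The shape of these UDTs, one nested query object in and one nested collection out, can be mirrored with plain records in any language. An illustrative Python equivalent (not part of the article's code):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Song:
    title: str
    duration: float                 # NUMBER(4,2) in the PL/SQL version

@dataclass
class Cd:
    title: str
    year: int
    artist: str
    track_list: List[Song] = field(default_factory=list)

@dataclass
class CdQuery:                      # mirrors cd_query_t
    title: Optional[str] = None
    from_year: Optional[int] = None
    to_year: Optional[int] = None
    artist: Optional[str] = None

# The whole search request travels as one structured parameter:
query = CdQuery(artist="Queen", from_year=1975, to_year=1980)
print(query.artist, query.from_year, query.to_year)
```

However many criteria the search grows to support, the signature of the search operation never changes; only the query type does.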


Implementing this API should be fairly straightforward for seasoned PL/SQL programmers. An example implementation is available on this link: http://bit.ly/1dCNDnV.

Interacting with such an API is also straightforward, from a number of environments at least. PL/SQL programs can of course invoke the Music API and process the results returned from it. The Database Adapter can also invoke the API and process results returned from it, taking care of the conversion from and to XML that is the lingua franca inside the SOA Suite and Service Bus.

Other technology settings may be able to interact with stored procedures, but may have a problem in dealing with UDTs. For example, Java programs can call stored procedures through most JDBC drivers. However, working with UDTs is somewhat cumbersome in most cases. They are usually pretty good at XML processing though. One approach then is to add a wrapper around the UDT based API. This wrapper interacts in terms of XML and converts to and from the UDT based API. Such a wrapper could be as simple as:

PROCEDURE search_for_cds
( p_cd_query      IN  XMLTYPE
, p_cd_collection OUT XMLTYPE
) IS
  l_cd_query      cd_query_t;
  l_cd_collection cd_collection_t;
BEGIN
  p_cd_query.toobject(l_cd_query);
  search_for_cds
  ( p_cd_query      => l_cd_query
  , p_cd_collection => l_cd_collection
  );
  p_cd_collection :=
    XMLTYPE(jukebox_t(l_cd_collection));
END search_for_cds;

Some technologies have a hard time dealing with XMLType structures and prefer to have their XML served in strings. That would call for another wrapper layer, one that converts XMLType to and from VARCHAR2. Again, a simple feat to accomplish.


PROCEDURE search_for_cds
( p_cd_query      IN  CLOB
, p_cd_collection OUT CLOB
) IS
  l_cd_query      XMLTYPE := XMLTYPE(p_cd_query);
  l_cd_collection XMLTYPE;
BEGIN
  search_for_cds
  ( p_cd_query      => l_cd_query
  , p_cd_collection => l_cd_collection
  );
  p_cd_collection :=
    l_cd_collection.getClobVal();
END search_for_cds;

Of course, through the use of PL/JSON it is quite easy to also expose a JSON based API. Converting from UDT through XMLType to JSON and vice versa is an out of the box operation with PL/JSON after all.

RESTful Services
There are many definitions in use for what RESTful services exactly are. We will not go into the intricacies of that theoretical debate. The essence is that a RESTful service exploits the core features of HTTP and can be accessed over HTTP using simple HTTP calls (plain old GET & POST, and GET & PUT for more advanced interaction). Messages exchanged with a RESTful service can be in any format, although XML and especially JSON are most common. RESTful services are stateless (they do not remember a conversation, only the current question).

RESTful services called for retrieving information are the simplest and by far the most common. When the provider of the service tries to maintain a semblance of true RESTful-ness, such services are typically defined around resources and suggest a simple drill-down navigation style. A sequence of RESTful calls in our world of Employees and Departments could look like this:


HTTP GET request, with its RESTful meaning:

http://HRM_SERVER/hrmapi/rest/departments
  List of all Department resources

http://HRM_SERVER/hrmapi/rest/departments/10
  Details for resource Department with identifier 10

http://HRM_SERVER/hrmapi/rest/departments/10/employees
  List of all detail resources of type employee under Department resource with identifier 10

http://HRM_SERVER/hrmapi/rest/departments/10/employees/4512
(could perhaps also be accessed as http://HRM_SERVER/hrmapi/rest/employees/4512)
  Details for employee resource with identifier 4512

The response to these calls will typically be a text message in either XML or JSON format. Such service calls can be made from virtually any technology environment, even from within a browser in JavaScript. All modern day programming languages have ample support for making HTTP calls. That is one of the key reasons for their popularity.

The question then becomes: how to make such services with this style of URL composition available from PL/SQL. We need an underlying package, HRM_API, that contains the following pseudo code:

gather the requested data using SQL
collect the data into an UDT
convert the UDT to XML and perhaps onward to JSON
write the converted result to the HTP buffer

HTTP GET requests in the format shown in the table above are received by the Embedded PL/SQL Gateway (EPG) and have to be interpreted in order to result in a call to the HRM_API package with the appropriate parameters. Typically, an HTTP request handled by the Embedded PL/SQL Gateway looks something like:

http://database_host:http_port/path/package.procedure?parameter1=value&parameter2=value

To make the EPG work with the REST-style URL requires the use of a little known feature in the dbms_epg package. This is the same package used for creating the DAD, the database access descriptor that links a URL path to a database schema.
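The first step of that pseudo code, interpreting the resource URL, boils down to splitting the path into resource/identifier pairs. A quick sketch of that dispatch logic (Python, illustrative only; the article's actual handler is PL/SQL):

```python
def parse_rest_path(p_path):
    """Split a REST-style path into (resource, identifier) pairs."""
    segments = [s for s in p_path.split("/") if s]
    pairs = []
    for i in range(0, len(segments), 2):
        resource = segments[i]
        identifier = segments[i + 1] if i + 1 < len(segments) else None
        pairs.append((resource, identifier))
    return pairs

print(parse_rest_path("/departments/10/employees"))
print(parse_rest_path("/departments/10/employees/4512"))
```

From the pairs, a handler can decide which query to run: a resource without an identifier means "list all", a trailing identifier means "show details".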


The statement

begin
  DBMS_EPG.create_dad
  ( dad_name => 'hrmrestapi'
  , path     => '/hrmapi/*'
  );
end;

creates a DAD called hrmrestapi that is associated with the path hrmapi. Subsequently, this DAD is authorized to a specific database schema, say SCOTT or HR. That then means that any HTTP request starting with http://database_host:http_port/hrmapi is routed to that database schema and expects to be handled by a package in that schema. The additional step we need to take is to configure a special handler package that interprets the REST-style URL for us. We do so with code like this:

BEGIN
  dbms_epg.set_dad_attribute
  ( dad_name   => 'hrmrestapi'
  , attr_name  => 'path-alias'
  , attr_value => 'rest');
  dbms_epg.set_dad_attribute
  ( dad_name   => 'hrmrestapi'
  , attr_name  => 'path-alias-procedure'
  , attr_value => 'hr.hrm_rest_api.handle_request');
END;

Here we instruct the EPG to send any request that arrives on the hrmrestapi DAD and starts with rest (meaning: all requests like http://database_host:http_port/hrmapi/rest/andsomethingelseintheurl) to the handle_request procedure on the hrm_rest_api package.

The signature of this procedure is very straightforward:

procedure handle_request (p_path in varchar2);

The parameter p_path will contain whatever comes in the URL after hrmapi/rest. Looking back to the table of RESTful URLs, the procedure may expect to have to deal with these values for p_path:

/departments
/departments/10
/departments/10/employees
/departments/10/employees/4512
/employees/4512

The article at http://bit.ly/1k3PjVx provides a complete example of the source code for dealing with these URLs and implementing the RESTful service.

Conclusion
Working with User Defined Types not only simplifies your code (instead of sending five parameters, you can now send a single parameter with all five values in it), it can also speed up the interaction between PL/SQL and SQL, using the bulk operations. By adding behavior to the UDT you define logic as close to the data as you possibly can. Communicating with the outside world can also be done using UDTs, that way hiding the data model and achieving a high level of decoupling.


Ref:
Basic Components of Oracle Objects -
http://docs.oracle.com/cd/B28359_01/appdev.111/b28371/adobjbas.htm
Collections in Oracle Part 1 -
http://allthingsoracle.com/collections-in-oracle-pt-1/
Collections in Oracle Part 2 -
http://allthingsoracle.com/collections-in-oracle-part-2/
Bulk Processing in Oracle Part 1 -
http://allthingsoracle.com/bulk-processing-in-oracle-part-1/
Bulk Processing in Oracle Part 2 -
http://allthingsoracle.com/bulk-processing-in-oracle-part-2/
Using Table Functions -
http://technology.amis.nl/2014/03/31/using-table-functions-2/
PL/JSON -
http://pljson.sourceforge.net/
Creating RESTful services on top of the Embedded PL/SQL Gateway -
http://technology.amis.nl/2011/01/30/no-jdbc-based-data-retrieval-in-java-applications-reststyle-json-formatted-http-based-interaction-from-java-to-database/
Implementing the Enterprise Service Bus Pattern to Expose Database Backed Services -
http://www.oracle.com/technetwork/articles/soa/jellema-esb-pattern-1385306.html



otech partner:

AMIS is internationally recognized for its deep technological insight in Oracle tech- Patrick Barel Lucas Jellema
nology. This knowledge is reflected in the presentations we deliver at international www.amis.nl www.amis.nl
conferences such as Oracle OpenWorld, Hotsos and many user conferences around
the world. Another source of information is the famous AMIS Technology Blog, the
most referred to Oracle technology knowledge base outside the oracle.com domain.
However you arrived here, we appreciate your interest in AMIS.
Amis
AMIS delivers expertise worldwide. Our experts are often asked to: Edisonbaan 15
- Advise on fundamental architectural decisions 3439 MN Nieuwegein
- Advise on license-upgrade paths +31 (0) 30 601 6000
- Share our knowledge with your Oracle team info@amis.nl
- Give you a headstart when you start deploying Oracle www.amis.nl
- Optimize Oracle infrastructures for performance @AMIS_Services
- Migrate mission-critical Oracle databases to cloud based infrastructures https://www.facebook.com/AMIS.Services?ref=hl
- Bring crashed Oracle production systems back on-line
- Deliver a masterclass



Step By Step Install Oracle Grid 11.2.0.3 on Solaris 11.1

Osama Mustafa
www.gurussolutions.com
twitter.com/OsamaOracle
www.facebook.com/osamaoracle
jo.linkedin.com/in/osamamustafa/




Introduction
Oracle Clusterware is portable cluster software that allows clustering of independent servers so that they cooperate as a single system. Oracle Clusterware was first released with Oracle Database 10g Release 1 as the required cluster technology for Oracle Real Application Clusters (RAC). Oracle Clusterware is an independent cluster infrastructure, which is fully integrated with Oracle RAC, capable of protecting any kind of application in a failover cluster.

Oracle Grid Infrastructure introduces a new server pool concept allowing the partitioning of the grid into groups of servers. Role-separated Management can be used by organizations in which cluster, storage, and database management functions are strictly separated. Cluster-aware commands and an Enterprise Manager based cluster and resource management simplify grid management regardless of size. Further enhancements in Oracle ASM, like the new ASM cluster file system or the new dynamic volume manager, complete Oracle's new Grid Infrastructure solution.

Let's Start:

Step #1:
You need to know how the /etc/hosts file will look after adding the IPs:

#########NODES#########
180.111.20.21 Test-db1
180.111.20.22 Test-db2
########################
#########NODE-One-IP###########
180.111.20.28 Test-db1-vip
10.0.0.1 Test-db1-priv
################################
#########NODE-Two-ip############
180.111.20.29 Test-db2-vip
10.0.0.2 Test-db2-priv
################################
######SCAN-IP##################
180.111.20.30 Test-db-scan
###############################

Now you know what the setup will look like, with all information about locations and the operating system. During the installation I faced a lot of bugs, since Solaris 11.1 is new, but it was an amazing experience and I learned something new.


Step #2:
Check the OS version using the below command:
/usr/platform/`uname -i`/sbin/prtdiag

Step #3 (Optional):
Because I was working remotely, not directly from the data center, I configured a VNC server to enable access to the server GUI and run the installer from there.
a. Check the required package using
pkg info SUNWxvnc
b. Add the below line to /etc/services, or use step d as the command line alternative:
vnc-server 5900/tcp # Xvnc
c. Configure /etc/X11/gdm/custom.conf:
[xdmcp]
Enable=true
[security]
DisallowTCP=false
AllowRoot=true
AllowRemoteRoot=true
d. Instead of step b you can do the below; I just want to mention both:
svccfg -s x11-server setprop options/tcp_listen=true
svccfg -s xvnc-inetd setprop inetd/wait=true
svccfg -s xvnc-inetd setprop inetd_start/exec=astring:'/usr/bin/Xvnc -geometry 1280x720 -inetd -query localhost -once securitytypes=none'
e. Finally enable the services:
svcadm disable gdm xvnc-inetd; svcadm enable gdm xvnc-inetd
svcadm restart gdm xvnc-inetd
f. Just to make sure my work was right, I restarted the server and checked the VNC server again.

Step #4 (Optional):
I was thinking: should I copy the Oracle software 4 times? It is almost 35 GB, so 4 x 35 GB, a huge waste of time. So what I did here is copy the files once onto one server and configure NFS to share them between all nodes; much easier, and it saves time.
a. First you have to enable NFS on all servers using the below commands:
svcadm enable nfs/server
svcadm restart nfs/server
b. If you copied the files on server one, then this command should be done on server one; if you copied the files on server two, this command should be done on server two. It depends where you copied the software.
share -F nfs -o rw /base
c. On all servers you can run the mount command to share all the files and start the setup. Much easier, and it saves time.
mount -F nfs Base-Server-IP:software-location-on-remote-server mount-point-on-all-servers

Step #5: Prerequisite
As on any Linux/Unix there are prerequisites for the operating system; follow the below.

1 Users And Groups
Oracle Solaris 11 provides you with a command called ZFS (an amazing command to manage file systems). I create the Oracle home using this command, and also create the mount point /u01:
zfs create -o mountpoint=/u01 rpool/u01

This is for the Oracle user:


Create groups:
groupadd -g 1001 oinstall
groupadd -g 1002 dba
groupadd -g 1003 oper

Create the Oracle user:
zfs create -o mountpoint=/export/home/oracle rpool/export/home/oracle
useradd -g oinstall -G dba oracle
passwd oracle
chown -R oracle:oinstall /export/home/oracle

Create the necessary directories for our installation:
mkdir -p /u01/app/oracle
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/
chown -R oracle:oinstall /u01

This is for the Grid user:

Create groups for the Grid user:
groupadd -g 1020 asmadmin
groupadd -g 1022 asmoper
groupadd -g 1021 asmdba

Create the Grid user:
zfs create -o mountpoint=/export/home/grid rpool/export/home/grid
useradd -g oinstall -G dba grid
usermod -g oinstall -G dba,asmdba grid
passwd grid
chown -R grid:oinstall /export/home/grid

Create the necessary directories for our installation:
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/grid
chown grid:oinstall /u01/app/11.2.0/grid
chown grid:oinstall /u01/app/grid
#!important!#
chown -R grid:oinstall /u01

Step #6:
Configure .profile, which is located in /export/home/oracle and /export/home/grid:

export ORACLE_BASE=/u01/app/oracle/
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export GRID_HOME=/u01/app/11.2.0/grid/
export ORACLE_SID=TEST1
export PATH=$PATH:/usr/sbin:/usr/X11/bin:/usr/dt/bin:/usr/openwin/bin:/usr/sfw/bin:/usr/sfw/sbin:/usr/ccs/bin:/usr/local/bin:/usr/local/sbin:$ORACLE_HOME/bin:$GRID_HOME/bin:.

Note: copy .profile to all database servers using the scp command:
cd /export/home/oracle
scp .profile oracle@Server-ip:/export/home/oracle/

Step #7:
Oracle Grid and networking notes:
1. The broadcast must work across any configured VLANs as used by the public or private interfaces.
2. Across the broadcast domain as defined for the private interconnect.
3. On the IP address subnet ranges 224.0.0.0/24 and 230.0.1.0/24.

According to Oracle you need to check the UDP parameters using the below commands:
ndd /dev/udp udp_xmit_hiwat
ndd /dev/udp udp_recv_hiwat

To avoid a reboot you can set the values both in memory and for reboot time, using the below.

On memory:
ndd -set /dev/udp udp_xmit_hiwat 65536
ndd -set /dev/udp udp_recv_hiwat 65536

On reboot:
ndd -set /dev/udp udp_xmit_hiwat 65536
ndd -set /dev/udp udp_recv_hiwat 65536

Step #8:
In this step you need to make sure of the disks on both nodes; in my case I am using EMC storage.

List the disks using:
/usr/sbin/format


fdisk the raw disks using the fdisk command.

Change the owner to grid using:
chown grid:asmadmin /dev/rdsk/Disk-name
chmod 660 /dev/rdsk/disk-name

You can list your available disks using:
ls -ltr /dev/rdsk/emc*

Change the owner and permissions for these disks:
chmod 660 emc*
chown grid:asmadmin emc*

Note: in Unix each disk has slices from 0-6, like the below:

crw-rw---- 1 grid asmdba 302, 0 Apr 23 19:02 emcp@0:a,raw
crw-rw---- 1 grid asmdba 302, 8 Apr 23 20:38 emcp@1:a,raw
crw-rw---- 1 grid asmdba 302, 16 Apr 22 14:05 emcp@2:a,raw
crw-rw---- 1 grid asmdba 302, 24 Apr 23 21:00 emcp@3:a,raw
crw-rw---- 1 grid asmdba 302, 32 Apr 23 21:00 emcp@4:a,raw
crw-rw---- 1 grid asmdba 302, 40 Apr 23 21:00 emcp@5:a,raw
crw-rw---- 1 grid asmdba 302, 48 Apr 22 14:05 emcp@6:a,raw

Step #9:
Disable NTP using:
svcadm disable ntp

Step #10:
By default Oracle Solaris SPARC prevents root access; for example, we created the oracle and grid users but we cannot get to root using the su command. To enable root access for oracle and grid do the below.

Edit the file /etc/user_attr and add the below lines:
oracle::::defaultpriv=basic,net_privaddr;roles=root
grid::::defaultpriv=basic,net_privaddr;roles=root

Step #11:
Now we need to configure the memory parameters for the Oracle and Grid users and make them permanent.

To check the current memory value for both users:
prctl -n project.max-shm-memory -i process $$

Modify the memory values using:
projmod -a -K "project.max-shm-memory=(privileged,32G,deny)" -U oracle default
projmod -a -K "project.max-shm-memory=(privileged,32G,deny)" -U grid default
projmod -s -K "project.max-shm-memory=(privileged,32G,deny)" default

To make sure the new memory value is applied to the oracle and grid users, open a new terminal and run the prctl command again.

Step #12:
During the installation Oracle will check the swap memory, so you need to increase the swap memory depending on your setup; for this I will use the ZFS command.

Check the swap value:
bash-3.00# swap -lh
swapfile dev swaplo blocks free
/dev/zvol/dsk/rpool/swap 256,1 16 4194288 4194288
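The blocks and free columns of swap -l are counted in 512-byte blocks, so the 4194288 blocks in the listing above amount to roughly 2 GB of swap. A quick conversion check (illustrative; run anywhere, not on the server):

```python
BLOCK_SIZE = 512                         # swap -l reports 512-byte blocks

def blocks_to_gb(blocks):
    return blocks * BLOCK_SIZE / 2**30

print(round(blocks_to_gb(4194288), 2))   # about 2.0 GB
```

That makes it easy to see why the next step grows the swap volume before the installer's check.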


Remove the swap using the root user:
bash-3.00# swap -d /dev/zvol/dsk/rpool/swap

Configure the new swap:
bash-3.00# zfs set volsize=20G rpool/swap
bash-3.00# swap -a /dev/zvol/dsk/rpool/swap

Check the new value:
bash-3.00# swap -lh

Step #13:
One more step: configure SSH between the nodes. You can do this step during the installation or using the below command. Oracle provides you with a new way: you can now configure SSH using sshUserSetup, which already exists within the media.

For the Oracle user:
# ./sshUserSetup.sh -hosts "node1 node2 node3 node4" -user oracle -advanced -noPromptPassphrase

For the Grid user:
# ./sshUserSetup.sh -hosts "node1 node2 node3 node4" -user grid -advanced -noPromptPassphrase

Congratulations, you can start your setup now!

Step #14:
Start installing Grid Infrastructure. In my case I chose to install the software only and then configure ASM; this way, if an error appears I will know where to start troubleshooting. Follow the screens (these steps should be done as the Grid user).


Step #15:
After the successful installation I now need to configure ASM (this step should be done as the Grid user), from one node only:

export ORACLE_HOME=/u01/app/11.2.0/grid/
cd $ORACLE_HOME/bin
Run ./asmca

The below screen should open.

After the installation is done you have to see MOUNTED on both nodes.

REDUNDANCY:
NORMAL REDUNDANCY - Two-way mirroring, requiring two failure groups.
HIGH REDUNDANCY - Three-way mirroring, requiring three failure groups.
EXTERNAL REDUNDANCY - No mirroring for disks that are already protected using hardware mirroring or RAID.

If you have hardware RAID it should be used in preference to ASM redundancy, so this will be the standard option for most installations.
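One practical consequence of the redundancy choice is usable capacity: normal redundancy stores every extent twice and high redundancy three times. A rough illustration (Python, not an Oracle tool; it ignores ASM metadata overhead):

```python
def usable_gb(raw_gb, redundancy):
    """Approximate usable space for a disk group per redundancy level."""
    copies = {"EXTERNAL": 1, "NORMAL": 2, "HIGH": 3}[redundancy]
    return raw_gb / copies

for level in ("EXTERNAL", "NORMAL", "HIGH"):
    print(level, usable_gb(900, level))
```

So a 900 GB disk group yields only about 450 GB usable under normal redundancy, which is worth keeping in mind when sizing the DATA and FRA disk groups below.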

Step #16:
Three ASM disk groups should be created:
1 DATA: datafiles and parameter files (should be big)
2 FRA: flash recovery area (should be big)
3 CRS: this disk is for OCR and voting (4-5 GB will be enough)

Finally, you are done with Grid Infrastructure and now you should configure the database on RAC (this step should be done as the Oracle user). Usually I install the software only and then call dbca to configure the instance.


Done!

You can reboot a node to test your setup.

Reference:
1- Oracle Documentation Here.
2- Oracle White Paper Here.



The Relevance of the User Experience
because a naive approach to the user experience is no longer acceptable

Lucas Jellema
www.amis.nl
twitter.com/lucasjellema
nl.linkedin.com/pub/lucas-jellema/0/536/39b



The Relevance of
the User Experience
1/25 Lucas Jellema

Ever since the SQL*Plus prompt was no longer acceptable as the only way for users to access the contents of the database, Oracle developers have been building user interfaces. That is where we got Oracle Forms from (pka SQL*Forms). And that explains why for so long the user interfaces have basically been windows on data, supporting CRUD operations and thereby enabling any task and process. Initially only character based, running on a terminal, and gradually morphing into the GUI on the desktop and onwards into the browser. But essentially still a window on data.
In recent years we have come - or should I say been made - to realize that this somewhat naive approach to the user experience is no longer acceptable. Providing our users with a modern experience is not a luxury, it is critical. That experience determines the productivity of the users, the quality of their work and in the end even their willingness and ability to use the application at all. This article provides an overview of the current state of affairs with regard to the user experience, focusing on the world of Oracle products and Oracle technology. It briefly discusses where we have come from and how and why the world has changed. It will provide some insights and inspiration for the evolution of the experience you provide to your users.

The article is strongly influenced by the Oracle Applications User Experience team and their Usable Apps initiative that sets the standard for the next generation user experience with Oracle Applications. Their vision, approach, examples, best practices and tools used inside Oracle to create the next releases of, for example, the Fusion Applications products equally apply to custom applications, especially those developed using Oracle Fusion Middleware technology.

Figure: The huge influence within Oracle from the User Experience is most clearly demonstrated in this Simplified UI in the recent HCM Cloud R8 release.

The definition of User Experience employed by the Oracle Applications UX team: "A complete contextual experience: an understanding of everything that makes up an experience for a user who works with an application: technologies, tools, business processes and flows, tasks, interactions with others, physical and cultural work environments." This clearly goes way beyond just the user interface!

Where we come from
In the early nineties, life was relatively easy for IT staff, even if we may not have realized it at the time. End users only used computer applications at work, so whatever their enterprise application looked like was the standard. At that time, it was character based, on simple terminals with only keyboard interaction. The terminal could be monochrome: gray, green or orange on a black background. The type of terminal that until fairly recently you could find at the check-in desks at airports.

The Windows revolution introduced the Graphical User Interface (GUI). The GUI ran locally on the client, which did all the user interaction things and leveraged the server for data processing. Users, designers and developers alike were extremely happy with all the colors, buttons and mouse movements, and essentially continued to create the same type of table and form style applications as before. However, the resolution increased, the number of pixels multiplied and the amount of data presented on the pages of the applications magnified substantially. Using tabs and popups, the pages were virtually expanded to hold even more data.

This was very much the era of "one size fits all". One application, running on one device (the desktop) on one location (the office), to be used by all users regardless of their specific role or task. All the data was available on those virtually enlarged pages, so all tasks were supported by the application.

This also was the age of the silo. Each application was a world unto itself. Using its own database, its own business logic and its own set of user interfaces, the application typically did not have interaction and certainly not integration with other IT systems. File based data exchange - perhaps using fancy solutions including database links - was about the extent of the integration. Each business application was represented by its own icon on the user's desktop, and that was about as far as UI integration went. The Windows desktop was our idea of a mash up. Each application was typically supported by its own team of designers and developers that took care of all aspects from UI to data model. In the world of Oracle, they would initially use Oracle Forms 4.5 on top of the brand new Oracle7 Database. Many are still using Forms, and quite a few still use that Oracle7 or perhaps the long ago de-supported Oracle 8i Database.

With the advent of the three tier architecture and the rise of the internet and browser as the preferred application platform, not a whole lot changed for enterprise applications. The distribution of the application and any patches and upgrades became much simpler. However, Java Applet technology, as employed by WebForms, made it possible to run the original client/server desktop applications inside the browser. While initially that kick-started the three tier architecture and the use of the browser as the platform, it did little to innovate the user interface and the whole user experience. Enter query/execute query remains pretty much the same in the browser. The focus in the UI on opening up the database, presenting a window on the data, was still ubiquitous. User interaction with the computer continued to be through mouse and keyboard, despite several promising but failed attempts at voice recognition.


In terms of the technology available to the developers, little changed too. Pixel perfect positioning of a fairly limited set of UI components with a distinct Windows look and feel to them was still the name of the game. Only a few ventured outside the applet or introduced webby newness into the applet.

However, from somewhere in the middle of the first decade of the century, many things were slowly but steadily evolving, and would soon change the picture. Dramatically.

State of the nation
Today, the world is quite different. When exactly the change came about is hard to say. It was of course an evolutionary process with some acceleration points. And the process of change was and is not the same for all regions in the world, for all industries and for all companies. Important change drivers in the position of applications and the requirements for user experience include internet, mobile devices, touch devices, globalization, 24/7, Cloud, Apple, Moore's law (or the continued increase in compute power), battery technology, movies (and their portrayal of future technologies and man-machine interaction), wearables, social media, the pace of the business.

At some point in the previous decade, the consumer experience with computers overtook the enterprise experience. No longer is the workplace in the lead when it comes to modern, fancy, advanced computer interaction. And end users are starting to become seriously disgruntled with the enterprise applications they are forced to deal with at work, when they know from their social media interaction, their online shopping experience and their gaming what should be possible in terms of using computers, also for doing one's work. When the enterprise user experience becomes so different from what users increasingly experience in their own environment, they - and their employers - do not benefit from what has become intuitive and natural while doing their work, they do not leverage the possibilities of the technology, and the clear distinction between work and not-work continues to exist, both in time and space. Productivity, quality of work and motivation are among the main factors that will suffer from such a gap.

At the same time, the number of people interacting with the enterprise information systems increases. Use of computers is pervasive: every role in the modern organization encompasses interaction with IT systems to check, register, monitor, report, approve, order. From blue collar employees to the boardroom. Managers become users - rather than instructing their secretaries to send emails for them, managers have started to directly engage with computers themselves, even if it took a status gadget like the iPad to get them there. The need for speed is another factor: of course they cannot afford to wait until Monday, when Janet is in the office, to print out the mails they have to digest and respond to.

Additionally, in order to make staff departments leaner (and less expensive), organizations rely heavily on self-service style applications. Get information about your remaining vacation days, call in sick and report being well again, submit expense reports, learn about retirement facilities, order office supplies, report a broken piece of equipment or suspicious event? Human intervention with all of these activities is distinctly lopsided: the reporter interacts with a self-service application (which is sometimes more about self than about service).

Also quite importantly: external parties become users. Customers as well as suppliers, regulators and others increasingly engage directly with an organization's enterprise IT systems.

With such diverse internal parties, a more intuitive user experience with as little learning curve as possible - and therefore low training requirements - is strongly desirable. With external parties it is virtually imperative. Ten years ago, what was called the mobile workforce was a small vanguard of sales representatives, on site inspectors and servicemen. Today, to be mobile means to be able to interact with enterprise IT at any time, from any location, using any from a wide range of devices. Collaboration, communication, quick decisions, constant monitoring: a far larger percentage of the employees of organizations engage in these activities, not just during office hours and on premises, but more or less around the clock. While at the airport or in a plane, from the car or the queue at the supermarket, from a bench in the park or during commercial breaks from the couch. And yes, sometimes also from a desktop PC in a real office environment.

In this environment, everything is digital. On the edges of the system, information may come in on paper or may be sent out in the form of a letter, but anything in between is strictly digital. Scanning and interpreting paper based data, managing digital content, finding ways to express contracts and signatures in a strictly digital way are common challenges that are part of the changing user experience. Complementing this digital content with new media - sound, pictures and video - is still relatively new in enterprise applications, but is likely to become more commonplace. VoIP and Skype-style video calls, integrated into enterprise applications, are not futuristic, but rather round the corner. Collecting visual inspection results is already daily practice for many organizations.

It is amazing how much data processing power modern computers - and application developers - have at their disposal, and even more how little of it is used to really facilitate the end user. Most applications [of the recent past] simply show data. They do very little in terms of data processing to turn that data into meaningful information. Instead, the interpretation of the data is left as an exercise to the end user. Unless of course the application is labeled BI (business intelligence). In the section Visualization, we will revisit the desire of users to be facilitated in the job they have to do, the responsibility they have to fulfill. They do not care about data as such, they need facilities to do their tasks more efficiently, with higher quality and, if at all possible, a little more conveniently. Typically they prefer information - or better yet, insight and calls to action - over just the raw data.


Figure: Oracle E-Business Suite R11 Timesheet - a semi-modern user experience

The user experience is not a luxury, meant to coddle a new generation of spoilt, bratty end users. The user experience is critical for efficiency and for constant, 24/7 engagement of employees. Just as Sheldon Cooper puts the fun in funeral, there is no reason why a decent user experience should not be used to improve the quality of the timesheet or expense report application. Accessible, attractive user interfaces not only speed up the actual process once an employee or customer has started to engage; they also lower the threshold to actually start the interaction, reduce the number of errors made during the transaction and decrease the risk of the user abandoning the process midway. This will save money on a serious scale.

For SaaS providers, there is even more at stake: they have to differentiate against the competition, and they really have one chance to make a first impression. The user experience is obviously the first thing that counts in that first impression. When Oracle is competing with its cloud applications, the big challenger is not so much the on premises giant of old, SAP, but rather classic cloud vendors like SalesForce or more recent vendors with modern applications designed for the cloud and today's user experience, such as Workday.com.

What Comes Next
We are not alone in facing the user experience challenges of today. All traditional vendors of enterprise applications have to deal with these same challenges, including Oracle. Clearly, Oracle has never had a great reputation for its user interfaces. Being firmly rooted in the database, on the server side of the enterprise IT systems, most Oracle applications do not have a reputation for a breezy, lightweight, attractive, modern, beautiful look and feel. And until not too long ago, that reputation - or lack of it - was well deserved. Even the early 2000s initiative around the Oracle Browser Look And Feel (or BLAF) guidelines, while relevant and well-intended as well as consistent, was behind the game from the very beginning.

A change has come at Oracle. Oracle wants to lead in User Experience. Plain and simple. To that end, it has established the Applications User Experience team (back in 2007), a relatively independent team within Oracle that explores all kinds of UX options, conceptual and technology-wise, and translates them into guidelines, templates and building blocks


that make it possible to apply the UX vision to actual software. The team does this first and foremost for Oracle's own Applications development teams, and makes most of their work available to outsiders to also make use of when developing their own custom applications.

In this section, we will look at a number of concepts and designs that have come out of this UX team and that have had, and continue to have, tremendous influence on the way the Fusion Applications look, such as HCM Cloud and Sales Cloud, that take the lead in the UX evolution. Slowly but steadily, the influence from the Applications User Experience team permeates into other Oracle products and, through the technology components, into custom built applications as well, perhaps yours included. The team has developed a number of key concepts and messages and turned them into actual software as well. A number of their ground rules are discussed next, sometimes interpreted somewhat liberally or expressed in my very own words.

One Size does NOT Fit All
Applications should not try to be a one size fits all solution, where a single user interface attempts to satisfy all users, internal and external, in all their respective roles and through all their individual tasks. User interfaces instead should be created as small, highly focused and specific interaction vehicles for specific tasks. If the UI can be built on top of rich reusable business services that expose data and operations, the development of the user interface itself can be relatively simple and cheap, especially since the UI supports but a single task for a specific user group. Such user interfaces can be created in large numbers, each being small and cheap and not meant to last. When the task definition changes or a new type of device is used for performing the task, either change or even replace the UI. Because the business process and the services are likely to not change so frequently, these can easily be reused for the next generation of user interfaces.

Figure: the upended pyramid represents an architecture that consists of consolidated enterprise resources, reusable services and numerous, small, tailored user interfaces


90:90:10
If you do not have the option to rigorously replace existing enterprise applications with task-specific user interfaces, you have other options. The 90:90:10 rule states that 10% of the functionality of typical enterprise applications is used by 90% of the users during 90% of their time. In other words: a fairly small portion of the application takes most of the heat. Most users hardly ever see more of the application than that 10%. Focusing your efforts primarily on that 10% of the functionality, in terms of providing the next generation user experience, pays off. It is the most visible part, it impacts most people and it therefore helps realize the biggest gain in productivity.

Simplified UI
The 90:90:10 rule is an important driver for what the Oracle UX team calls the simplified UI. The 90% group consists largely of infrequent, or at least not full-time-heads-down power, users. They perform self-service tasks that do not require the full power of the enterprise application but only about 10%, as the rule states. The simplified UI is a wrapper around the enterprise application platform. The tasks that are surfaced and highlighted in the simplified user interface represent the 10 percent of tasks that 90 percent of people are doing 90 percent of the time.

Figure: Simplified UI in Oracle HCM Cloud

Note: In the section Getting Started, a quick introduction is provided in how any organization can create this type of user interface using a number of core ADF components.

Figure: Screenshot from the Simplified UI in Oracle HCM Cloud R8


Simplicity
The theme of simplicity is about as far as we can go from the "use every last pixel to put as much data on the page as we possibly can" design style that is so characteristic of many Forms applications and even later day browser based user interfaces. Simplicity, in the view of the UX team, means just the right amount. Of data, of functionality, of anything on the pages of the user interface. The UI should contain what you most frequently need in that context, not the 90% stuff that is only occasionally required. That is clutter most of the time and should not be in the way all of the time. If the user relies on the fact that there is an easy access path from the current context to that second level data and functionality, then it is no problem to leave it out of the primary UI.

As the UX team states: "We give users less to learn and more opportunity to do their work. Our design is very approachable, touchable. Consumer apps can only be successful if they are highly intuitive. In the fast moving world of app stores, users won't accept any learning curve at all. And the same principles can be applied to the enterprise app experience: very intuitive, no training required to get going. The power of the enterprise application is still there, and now we are presenting the user experience in a way that anyone can use that power effortlessly."

Rules of Engagement - Glance, Scan, Commit
One way to provide a simple, intuitive experience is by recognizing the fact that users frequently work in three stages. Jeremy Ashley, leader of the UX team and formally known as Vice President of Oracle Applications User Experience, compares this with shopping for clothes. The first step is to glance over the rack, to quickly locate anything that might be interesting. The next level is to get the hanger from the rack, hold up the piece of garment before one's body, maybe show it to a companion or look in a nearby mirror. The third step in the process is considerably more involved: take the blouse or the pair of trousers to a dressing room to actually try it on, and in a considerable number of cases actually continue to buy it.

Obviously, glance is superficial, quick and meant to distill examples of potential interest. Scan takes these areas that deserve attention and subjects them to a closer inspection. That may result in either confirmation that they are indeed of interest and further action is required (commit), or the initial impression does not warrant more engagement right now. This same three stage engagement model works well in the user interface of an enterprise application. At the first level, information that helps the user quickly get an overview of the state of affairs in a certain context and find the areas that deserve more attention. Then an easy drill down step to the second level, where the selected area is presented with more detail and context and where some quick actions can be performed, such as a quick approval, sending a note, adding a tag or comment or making a simple update. When the engagement stretches beyond this scan level, another in context navigation commences: the real commit. This means the user is virtually rolling up his sleeves, sits down, takes a deep breath and gets ready for some real action.


In terms of the simplified UI, the first level (glance) needs to be at the user's fingertips. Hovering around at the scan level should be effortless, slick and quick. The drill down to the scan level should also be very smooth, easy to perform and rapidly executed, as should be the step back from scan to glance. These operations should be accessible on mobile devices, operable while standing in line at Starbucks or airport security. Once the user commits, the navigation still should be simple to activate. However, at this point, a serious mood change takes place on the part of the user. She embarks on an activity that will not be done in just a few seconds. That lightning fast application performance so desirable at the first two levels is not so overridingly important any longer. After glancing and scanning, the user is now buying into the way of doing things on the enterprise application: she has gone to where she needs to go to actually complete whatever task she is engaged in, which may be that one size fits all screen for the power user.

Figure: Screenshot from Oracle Sales Cloud R8 - Glance level to quickly identify problem areas; clicking on any of these cards allows drill down to the scan level, where more details and additional context are available as well as some quick actions

Task & Process oriented
The user interface the users are dealing with should focus on the specific task a user needs to perform, instead of offering a generic window-on-data, such as the CRUD-style Forms of the Client/Server era that allows all tasks and therefore supports none. In order to be truly intuitive, requiring no training and making efficient execution of tasks possible, user interfaces should be tailored to the role, process and task at hand.

This may require some thinking outside the box. For example: if a business user's responsibility in a business process is making a decision based on fairly simple, straightforward information, then the best way to ensure quick and painless contributions from that user may be by sending that person an email that holds all information, including a deadline for making the decision, as well as a hyperlink to activate for each of the possible decision outcomes.
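The decision-by-email idea can be made concrete with a small sketch. Everything below is hypothetical: the URL, the request id and the amounts are invented for illustration, and a real system would generate and send such a mail from its workflow engine.

```shell
# Generate a decision mail with one hyperlink per possible outcome,
# so the recipient decides with a single click (all values are made up).
REQUEST_ID=4711
cat > decision_mail.txt <<MAIL
Subject: Approval needed for expense report $REQUEST_ID (deadline: Friday 17:00)

Expense report $REQUEST_ID: EUR 312.50, submitted by J. Smith.

Approve: https://selfservice.example.com/decide?id=$REQUEST_ID&outcome=approve
Reject:  https://selfservice.example.com/decide?id=$REQUEST_ID&outcome=reject
MAIL
cat decision_mail.txt
```

Each link carries both the request id and the outcome, so clicking it is the entire interaction; no application needs to be started and no search needs to be performed.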


It is definitely not a good idea to have that user start up a client/server application, browse through a three level nested drop down menu, open a page, then enter search criteria and execute the query to get the relevant records in context, have the user locate five pieces of information on a page filled to the brim with fields, checkboxes, radio groups and tabs with even more data, make the decision by setting a different value in a dropdown list and finally press the Commit button to confirm the decision. That seems rather obvious, but last year I encountered exactly that situation at a Dutch financial institution.

A common way to support a specific task is through the use of a multi-step wizard that guides a user along a potentially complex path in clear steps with the right level of complexity.

Visualization
With all the data processing capabilities at our disposal, it is remarkable, in a disappointing way, for how long we have surfaced raw data to our end users, pretending that was the information they were looking for. Frequently, the information was hidden in the data, and we left it as an exercise to the user to extract the information and derive from it the decisions and actions to be taken.

Visualization can be regarded as the presentation of information in a way that enables the user to fulfill his or her responsibilities correctly and completely, in a timely, efficient, convenient manner. A key aspect is to step away from raw data and to present only information, and better yet to present the information in a way that offers insight and suggests relevant actions and decisions.

We have technology that is good at processing data in many ways, including filter, structure & sort, abstract [away irrelevant details], aggregate, associate/interpret and ultimately even predict. The next figure shows a very simple example of how merely structuring data can lead to much easier to access information:

Figure: Right and Left is the same data. Counting the number of circles is much easier when a little structuring of the same data has been done

To be able to present information that is relevant to a user, we of course need to understand:
What are the user's responsibilities?
What actions/decisions may have to be taken?
What information is required to perform an action?
Which information determines if an action should be taken?


How should the user be informed about an action that needs taking?
What shape does the call-to-action take?
How should the information required to start an action or make a decision be presented?
What data is the information derived from [and how]?

A little understanding of human biology will help to take the next step. If we make sensible use of the various ways in which our body and mind collect and interpret information, we can come up with visualizations that allow for much quicker and better interpretations and reactions. If we can unleash the associative brain and the unconscious background processing of the human mind, we accelerate the information processing capabilities of our end users. By leveraging our human ability to collect, interpret and interact along multiple dimensions, we create an experience that is much more efficient, effective and pleasant. Today's technology allows easy exploitation of such additional dimensions beyond plain text based table presentations of reams of data. Some examples are: color, size, shapes/font, story/atmosphere, icons, sound, animation, 3D presentation, interaction (drill down, roll up, pivot).

A simple emoticon can convey so much meaning with so little effort. It is a simple and powerful example of how a visualization can represent certain data and information in a way that is very telling and easy to interpret quickly.

Visualizations can be used to highlight and categorize specific information, to provide context for certain information (for example with time and geographic location) and to allow humans to exploit their talent for visual comparison and extrapolation.

Many types of charts have been developed over time to present information in ways that make it accessible and interpretable. From simple line charts (good for trending, interpolation and extrapolation) to bar charts and pie charts (for simple comparison) and multi-dimensional displays such as bubble charts and funnels. Timelines and maps are good ways to provide either time or space context. Tag clouds can be used to very rapidly assess the relative (occurrence based) importance. Tree maps and sun bursts allow for multi-level hierarchical comparison. Summarizing: plenty of ways are available to represent information. And associated with these representations are interaction paths: drill down, roll up, navigation, reorientation and other interactions further enhance the interpretation of the information.

Figure: Tree Map that visualizes relative population sizes across regions (Asia and Pacific account for more than half of the world's population) and countries (China and India host about two thirds of the population of Asia and Pacific). The largest population sizes outside that region are found in the USA, Russia and Nigeria. Note: the area size of a rectangle is proportional to the population size. Clicking on any rectangle triggers a drill down that will present the next level of detail: countries within a region and the main cities, population-wise, per country.
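The "merely structuring data" idea from the circle-counting figure has a humble command line analogue. In this sketch (the fruit names are invented), the raw stream and the grouped view hold exactly the same data, but the structured form can be read at a glance:

```shell
# Raw data: one observation per line, in arrival order.
printf '%s\n' pear apple pear plum apple pear > raw_data.txt
cat raw_data.txt
# Structured: the same data, grouped, counted and ranked.
sort raw_data.txt | uniq -c | sort -rn
```

Nothing was added or removed; sorting and aggregating alone turned a list you have to count by hand into an answer you can simply read off.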


For a long time, charts were used in BI applications, but not so much in normal OLTP applications. Technology restrictions existed that made creating and embedding those charts cumbersome and that caused problems with on the fly data aggregation. And coming from the window on data approach to user interface design, charts were not an obvious choice. Today's technology does not offer (m)any constraints; HTML 5 has all the facilities we need to produce quite spectacular visual displays of charts and other presentations and interactions. Processing data to feed the visualizations, even in real time, is typically not a serious challenge. Designing the relevant visualizations and analyzing which data to use for feeding them is probably a much harder challenge. And one well worth addressing.

Visualization can be used for making information easier to digest in very operational ways, such as finding information. The iPhone, for example, allows me to not only browse through my photographs in a long list with meaningless names along with file size and timestamp. It presents the Photo Roll (a list of thumbnails, not very useful with my 4000+ photographs) and a geographic presentation of where the photographs were taken (see next figure). Using this map based overview, I can quickly drill down to a specific location and isolate only the pictures taken at that location.

Figure: Visualization used to quickly locate and drill down to 1 out of 4387 pictures based on the location (Malta)

Figure: Screenshot from Oracle HCM Cloud R8 - Simplified UI with in context Visualization for quick aggregated data interpretation and interaction


Gamification
People like to engage in games. They create little contests everywhere: small bets, highest number of whatever, first to reach some milestone. From my days at Oracle I remember the frenzy around who would be the one to record bug number 1,000,000. It is part of how humans are wired.

This penchant for gaming can be leveraged in enterprise applications.


Gamification, therefore, is the application of game design principles to business software. Gamification motivates players to engage in desired behaviors by taking advantage of people's innate enjoyment of play.
Simple elements like scoring points for completing tasks [in time] and
introducing leader boards may already help stimulate users to improve
their performance. Just like the numbers of tweets and followers have an
effect on the average Twitter user, so will a well-chosen equivalent in the
enterprise application influence the actions of the enterprise app users.
Creating epic stories or a journey-like equivalent, with explicit challenges and milestones to represent business processes, has been shown to engage and motivate users.
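The points-and-leader-board mechanism described here needs very little machinery. A sketch in JavaScript; the scoring rules (10 points per completed task, a 5-point on-time bonus) are invented for illustration:

```javascript
// Award points for completed tasks, with a bonus when the task is
// finished before its deadline, and rank users on a leader board.
function score(tasks) {
  const points = new Map();
  for (const t of tasks) {
    const p = 10 + (t.completedOn <= t.dueOn ? 5 : 0); // on-time bonus
    points.set(t.user, (points.get(t.user) || 0) + p);
  }
  return [...points.entries()]
    .map(([user, total]) => ({ user, total }))
    .sort((a, b) => b.total - a.total);
}

const board = score([
  { user: 'anna', completedOn: 1, dueOn: 3 },
  { user: 'ben',  completedOn: 4, dueOn: 3 },
  { user: 'anna', completedOn: 2, dueOn: 2 },
]);
console.log(board); // anna leads with 30 points, ben has 10
```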
Gamification will frequently work with visualizations, to create appealing and easy to interpret representations of results and current status. The next figure shows an example of how Oracle intends to introduce game elements into a future release of Oracle HCM Cloud to engage employees in personal health and fitness as well as in skill training.

Figure: Simplified UI, personal pages and gamification aspects announced for Oracle HCM Cloud


Mobility
Mobility is another cornerstone of the "simplicity, mobility and extensibility" tag line for the Oracle UX team. It refers to the fact that users interact with enterprise applications at almost any time and from almost any place, using a device that is convenient to them given those circumstances. From a user experience perspective, we need to work from that situation and we can even leverage it.

The variety in devices that people use to connect to the internet and interact with a variety of apps, social media, web applications and also enterprise systems is rapidly growing. Smartphones, tablets, media players, desktops, displays in cars, wearables (shoes, glasses, garments-with-sensors, watches), kiosks and other contraptions are used to engage with automated systems. And through these mechanisms, various modes of interacting are used, including typing, mouse controlling, swiping and gesturing, voice control, arm and leg movements (for example the Kinect), head/eye coordination, simply walking by detectors and the giant button that can be operated by any body part.

Figure: alternative interaction channels & devices


Other capabilities of the plethora of devices can, and should, be leveraged as well. Collecting audio, image and video associated with location, for example. Or starting phone calls or other forms of conversations in the context of a business application. Navigation instructions to the customer location that is selected in the enterprise application. Taking physical measurements of temperature, distance, speed, noise levels etcetera and feeding them into the enterprise application without human intervention.

Because of the omnipresence of devices, and therefore of the enterprise [application], business processes can be conducted much more rapidly. Users are in almost constant touch, and communication, conferral and decision making can take place much more rapidly. Humans are still a slowing factor in the overall process (we simply cannot compete with a well programmed computer) but we can increase our performance quite dramatically.

End users make use of multiple devices to interact with the same business process. One part of the process is handled on the desktop, the next on a smartphone or tablet and yet another through a voice controlled telephone system or a fingerprint or iris scan based authentication. This means that the user experience presented by our applications has to function on, and be tailored to, a potentially wide range of devices and interaction styles. The data used through the application on the various devices is available on all devices and therefore on none; the cloud is the typical place for data such as in flight transactions, custom preferences and my personal notes and contacts.

Apps are not only used at any time, they are also active at any place. This may include places where on-line usage is not possible or connectivity is limited. Because of this, and because a local cache may be unavoidable to ensure decent performance, the enterprise apps will make use of local data storage and face synchronization challenges. On the brighter side: the fact that devices are used at any location, combined with the fact that the device itself is location aware, opens up opportunities for applications to further facilitate the end user. By providing location based information, such as showing the information about the object to inspect that I am currently close to, or by understanding that the meeting summary I am about to enter is in the context of the customer on whose premises I am located.

Mobility requires applications to be designed either to run on a wider range of devices (adaptive and/or responsive design) or to be designed specifically for each device. The latter obviously means being able to make more use of the specific device features, at the cost of creating less reusable apps. For specific tasks that are performed very frequently (volume) or whose performance makes all the difference (raw speed), it may be well worth it to select a specific device and create a dedicated app to run on it. In many other instances, creating a standards based (HTML 5) application that has the ability to adapt to the device form factor, screen size and interaction mechanisms (such as mouse vs. gesture) results in the right balance between development and maintenance effort and a tailored, device specific user experience.
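The adaptive branch of that trade-off boils down to deriving a layout profile from the device characteristics, so that a single code base can serve many form factors. A rough JavaScript sketch; the breakpoints and profile fields are invented for illustration:

```javascript
// Pick a layout profile from device form factor and input mechanism,
// so one HTML 5 application can adapt instead of shipping one app per
// device. Breakpoint values are illustrative only.
function layoutFor(device) {
  const profile = {
    columns: device.width >= 1200 ? 3 : device.width >= 768 ? 2 : 1,
    touch: Boolean(device.touch),
  };
  // Touch targets need to be larger than mouse targets.
  profile.minTargetPx = profile.touch ? 44 : 24;
  return profile;
}

console.log(layoutFor({ width: 1024, touch: true }));  // tablet: 2 columns, 44px targets
console.log(layoutFor({ width: 1600, touch: false })); // desktop: 3 columns, 24px targets
```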

The Oracle UX team adopts a tablet-first design approach, especially for the 10% of the functionality from the 90:90:10 rule. The UIs designed for a tablet will also render perfectly fine on a larger desktop display. They are simple (as well as inviting and appealing) on the big screen as well as on the small one.

The power UIs used by the power users (those professionals that work with the enterprise applications 90% of their time) will typically run on powerful desktops with large screens, with most of the interaction mouse and keyboard driven. These interfaces have less of a mobility requirement, at this moment.

Customization
One size does not fit all, at all. Having said that, creating individual appli-
cations for each niche of users, roles, departments, devices, screen sizes,
cultural backgrounds and geographic locations is simply not realistic.
What we need therefore is a way to create applications that know how to
adapt. Depending on the context in which they are used, they take on an
appropriate disguise.


Responsive and adaptive design are approaches specifically targeted at screen size and form factor. These are part of this chameleon-like behavior that we want to instill in our applications. To make our applications align with role, geographic location, language and culture, division/department and other context factors, we have to embed customization into the application. This means that during design and development of the application, starting from the base functionality and look and feel, we define along each of the customization dimensions (such as region/country/language, industry, role) what the specific adaptations should be in the application's behavior and look & feel. These context specific modifications on top of a core application are called customizations.

Having identified the customizations, we also need an infrastructure in our application [platform] that allows us to apply the relevant customizations at run time, depending on the actual context in a specific user session. Of course every concurrent user session may have its own distinct set of customizations applied, depending on its own distinct context.

In the world of Fusion Applications, any change on top of the core product is seen as tailoring. Various types of tailoring are identified:

Configuration - Setup steps made by customers to alter the applications in a way that has been pre-defined by the base product.
Customization - All changes to existing artifacts, either at design time or run time.
Extension - All creations of new artifacts.
Personalization - Changes made by self-service users at run time that only affect that user. Can be made to new or changed artifacts.
Localization - Changes to provide specific functionality for a given country or region, typically created at design time by product development or as third-party extensions.

It is important to realize that in a cloud environment, where a single application instance is used by multiple organizations, some of the tailoring steps are not available. Configuration, for example, can only be done to a fairly small extent per organization. Design time customizations cannot be created per organization. However, all run time tailoring steps are available in a SaaS environment as well as in the on premises situation.
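The layered tailoring model (a base definition plus context-dependent customization and personalization layers, applied per session) can be pictured as successive overrides of a base artifact. The layer conditions and page fields below are invented, and this is not how MDS actually stores customizations; it is only a sketch of the idea:

```javascript
// Apply customization layers on top of a base page definition. Each
// layer overrides only the attributes it cares about; which layers
// apply depends on the context of the specific user session.
function applyCustomizations(base, layers, context) {
  return layers
    .filter((layer) => layer.when(context))
    .reduce((page, layer) => ({ ...page, ...layer.changes }), { ...base });
}

const basePage = { title: 'Expenses', currency: 'USD', showTaxField: true };
const layers = [
  { when: (c) => c.country === 'NL', changes: { currency: 'EUR' } },          // localization
  { when: (c) => c.role === 'employee', changes: { showTaxField: false } },   // customization
  { when: (c) => c.user === 'lucas', changes: { title: 'My Expenses' } },     // personalization
];

console.log(applyCustomizations(basePage, layers,
  { country: 'NL', role: 'employee', user: 'lucas' }));
// { title: 'My Expenses', currency: 'EUR', showTaxField: false }
```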


Extensibility
In the tag line "Simplicity, Mobility and Extensibility", the latter is the catch-all term for any modification a business user may want to make to the core application. These changes can be for an entire organization unit, for a small team or for an individual. These changes are made to improve the implementation for the users involved. They allow the user to work with a more simplified experience, by leaving out elements that the user does not have a need for, or a more tailored experience, through a presentation that is more intuitive to the user. Common examples of such changes are to alter the terminology (text in prompts, titles, hints, etc.), to hide items, to reorganize elements and to fine-tune rules for highlighting and alerting. True extension takes place when users create new elements: from additional derived fields to user-defined flex fields, custom data filters and reports, and even entirely new and integrated business objects with associated pages.
Figure: Appearance customization in Release 8 of HCM Cloud and Sales Cloud

The simplest form of customization available to any organization using a standard application would be the ability to define the visual style of the application for the organization, using the logos, colors, fonts and other organization specific display style elements.

More elaborate customizations and personalization require a more sophisticated mechanism that has to be embedded into the application. Oracle Fusion Applications have customizations as an intrinsic part of both the applications as well as the application platform. Most of this platform's capabilities come from core ADF features along with the MDS (Meta Data Services), complemented with business composers. The customization framework in ADF supports both design time customizations (created by developers) as well as run time customizations and extensions (potentially created by business representatives).

At run time, the business composers in Fusion Applications are available to create customizations. These composers are also available outside of Fusion Applications, in various Fusion Middleware products such as WebCenter Portal (Page Composer and Data Composer), BI EE (BI Composer) and BPM Suite (Process Composer). These can be used in custom applications.

In the HCM and Sales Cloud products, on the Structure page, a business system analyst can reorganize how the pages will appear in the user interface by simply dragging and dropping them around the page. Renaming functional areas and pages is as easy as typing over existing names.

Figure: Customizing the appearance of pages in the simplified UI of HCM Cloud R8

Not only does the Applications User Experience team thus provide the ability to easily tailor applications with the simplified user interface, using composers for the business analyst, but the team also provides guidance for more complex extensions through its UX Direct program.

Personal cloud
The personal cloud is the back end tied to a specific user that allows the user to move across devices while participating in a business process or transaction; for example, to allow a shopping basket to be manipulated on different devices. The personal cloud is somewhat similar to session state in web applications. However, because it stretches across devices, and therefore clearly across physical sessions as well, it has to be handled in a special way. Part of the personal cloud is also the collection of user preferences and user specific extensions that govern the personalized look and feel a user experiences when accessing applications. When a user specifies on one device that she wants to hide or reposition a field in a page that supports a certain task, then that same configuration should be applied to that page on other devices, and even in a different app supporting the same task.

Of course any customization made to the enterprise application today should continue to exist across new releases of the core enterprise application. My customizations in general should be carried forward until such time where they do not make sense any longer, for example when my customization attempts to hide a field that is removed from the base


product altogether. Whatever the mechanism used to record and apply the customizations, it should be able to work across upgrades of the application.

The exact implementation of the personal cloud can vary. It could be held in a public cloud environment (provided it is secure) or be located in the private cloud of the enterprise data center, where it can be stored in various ways. Data in the personal cloud has to be rapidly available because it is at the very forefront of the user experience. The personal cloud is implemented through the MDS (Meta Data Services) in Fusion Middleware.

Rapid evolution
The way our users perceive applications is going through a major change, leap-frogging from the mid-nineties way of enterprise IT thinking to the consumer style approach of simple, mobile and extensible. What is more: the change is not finite. With the ongoing evolution of technology and expectations, we will not reach a new status quo with our applications and the user experience they offer. Users have come to expect continuous change: a new version every few months at least, perhaps far more often.

Last week, I got introduced to a Dutch bank that uses continuous improvement and delivery to rebuild its customer web site every 15 minutes (in the development environment) and that releases to production every two weeks. This of course takes a large degree of automated testing and a very well organized DTAP environment and process.

In general, we are going to think differently about applications. Instead of the large enterprise applications of today, we will see a proliferation of small enterprise apps. These apps support a relatively small task or process with a tailored user experience, running against a highly reusable back end with services and processes containing the business logic and consolidated data. The apps are welded together in the run time environment to collectively form the user experience for an individual user. Each user may work with a different collection of apps.

Enterprise apps should be small, focused chunks of functionality that are used in an enterprise environment yet offer a consumer experience, similar to well-known apps from iTunes, Google Play or other app stores. These apps should require no training for the end users and be as intuitive as an iPad app. The apps should efficiently use information to help the user derive insight and from there proceed to decision and action. Data is not relevant; it is merely the raw material that users typically are not interested in and should not be bothered by. The app guides the user to what [information] is relevant, for example because it requires an action (pending deadline) or a review (threshold crossed).

Just like consumer apps, these enterprise apps will typically have a rapid evolution. They should also be considered almost throw-away software: if an app does not fully satisfy the users' requirements, an organization should have no qualms about replacing it with a new incarnation of the same or even an entirely different app.
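Guiding the user to what requires attention (a pending deadline, a crossed threshold) is, in essence, a filter over the raw data. A JavaScript sketch; the item fields, the 24-hour window and the threshold rule are invented for illustration:

```javascript
// Surface only the items that need the user's attention: actions with
// an approaching deadline and metrics that crossed their threshold.
// Times are expressed in hours for simplicity.
function relevantItems(items, now) {
  return items.filter((i) =>
    (i.kind === 'action' && i.deadline - now <= 24) ||  // due within 24 hours
    (i.kind === 'metric' && i.value > i.threshold)      // threshold crossed
  );
}

const inbox = relevantItems([
  { kind: 'action', name: 'approve expense', deadline: 30 },
  { kind: 'action', name: 'submit timesheet', deadline: 70 },
  { kind: 'metric', name: 'open incidents', value: 12, threshold: 10 },
], 10);
console.log(inbox.map((i) => i.name)); // [ 'approve expense', 'open incidents' ]
```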


An important part of the user experience of consumer apps, and therefore of the next generation of enterprise apps, is the notion of real time. Push notifications, informing the user of events almost instantaneously, are at the heart of many consumer experiences. From email and WhatsApp to Wordfeud and Twitter, these notifications drive much of the user's actions. A similar experience is sought for in the enterprise apps. New tasks, questions and status changes should be handed to the enterprise user in near real time. The enterprise users may well require collaboration in the enterprise environment in a similar style as they are used to in their social media dealings (Facebook, Twitter, LinkedIn, and so on). Or they may even want the worlds to fuse together, merging notifications from their personal sphere with those that are work related.

Distribution of enterprise apps, especially those that are used natively on devices, is a special challenge, one that increases with the high app turnover rate that is envisioned. When talking about continuous build, delivery and improvement, we have to ensure that distribution of the app when it is released is very much part of that effort.

How to get going
User experience as described in this article is applied by Oracle itself, to its Cloud Applications to start with and increasingly to all its products. We can do much the same in the custom applications we create. The same principles apply, such as 90:90:10, simplified UI, one size does not fit all, simplicity, mobility and extensibility, visualization and glance|scan|commit.

If you are building new applications, you have a great opportunity to imbue the applications with an optimized user experience from day one. However, even if you have an existing application that you cannot completely overhaul, there is still much you can do, as Oracle has demonstrated with the Simplified UI: basically a wrapper around or on top of an existing enterprise application that, while not necessarily ugly, is not designed according to the latest insights either. Such a Simplified UI, based on 90:90:10 and offering intuitive paths into the existing application, is relatively easy to achieve. To a large extent, such a UI could be created using a different technology from the base application that it wraps, because the two are linked but not interwoven in the UI itself.

The Simplified UI in Oracle Cloud Applications is created using ADF (Application Development Framework). Relatively new components (SpringBoard, Cards, PanelDrawer and Vertical Tabs) along with the Data Visualization Tags (DVTs) are used heavily for creating this UI. These components are available to anyone as part of ADF and even ADF Essentials (the free edition of ADF). Oracle is working on a special developer's kit for Simplified UI: a soon to be released toolkit that helps developers quickly create their own simplified UI.

The Oracle Applications User Experience team shares many resources on its UX Direct website (http://www.oracle.com/us/uxdirect). On this site, the user centered design process is detailed. Design Patterns and Guidelines are introduced. Many tools, such as templates and checklists, are provided, to help ensure that no essential steps in the design process are missed.

On a practical level, for example, the extensive set of Oracle's ADF Rich Client User Interface Guidelines (http://www.oracle.com/webfolder/ux/middleware/richclient/index.html) will be valuable for many ADF UI developers.

Another interesting resource is the Oracle Usable Apps For Developers section (http://bit.ly/1eppv2K) that introduces and guides developers into the use of the UX concepts and the best practices Oracle provides.

Figure: The Oracle UX Direct Design Process poster that can be downloaded from http://www.oracle.com/
us/uxdirect


In her blog article "Six Things You Can Do Today to Jump-Start Your User Experience for Enterprise Applications" (https://blogs.oracle.com/VOX/entry/six_things_you_can_do), Misha Vaughan from the Oracle Applications User Experience team explains how good usability practices are completely possible even on the smallest budget, and with no UX staff. She introduces six steps that are available to any organization at the cost of only a little time:
1. Identify: who are the users of the application? Per role: what do they do, how/when/why/with what? Use the cheat sheet from UX Direct in this step.
2. Work smarter: jump-start with the design patterns already developed and proven by the UX team and available from UX Direct.
3. Sketch: create wireframes before starting to actually code.
4. Visual Design: think about general visual principles including color, order of content, etc. (check https://www.youtube.com/watch?v=kNcM8rwz5gQ&feature=youtu.be for an introduction).
5. Get feedback on the wireframes and the visual design from real end users, but do not let the designer who created those interview the end users herself, to prevent biased results.
6. Iterate: re-design and re-test, as resources permit. Do not wait until the entire design (and even development) phase is complete before reconnecting with the end users.

Fusion Middleware and other Oracle technologies for UI development
Oracle offers various tools and technologies for developing and running user interfaces.

Since 2004, first as HTML DB, there has been APEX: browser based development and run time, ideally suited for rapid development and delivery. APEX leverages HTML 5, increasingly more so in the APEX 5 release, due later this calendar year. Note that APEX can also be used to implement the REST services on top of the database that other UI apps may want to leverage. APEX plays an important role in the Oracle Database Cloud, both for the administration of that cloud environment and for the development of cloud based UI applications and REST services. The latter can be consumed by components running on the Oracle Cloud, some other public cloud, or on premises, or by UI apps running on a desktop or mobile device.

Oracle's premier application development framework is ADF, used for the development of the vast majority of Oracle's own user interface applications, such as Fusion Applications. ADF provides several options for developing user interfaces:
ADF DI (Desktop Integration) for creating Excel applications against an ADF back end
ADF Swing (deprecated as of release 11gR2)
ADF Faces for implementing rich Java EE web applications
ADF Mobile for developing semi-native, cross device mobile apps


ADF Faces is currently by far the most widely used of these options. ADF Faces 11g has been available since 2008. It is based on the Java EE standard of JavaServer Faces. That very name reveals a lot about the architecture of ADF Faces: even though the client has become richer (with increasingly more dynamic HTML manipulation going on in the client and more client/server interaction handled through background, AJAX-style interactions), the role of the server is still very large. ADF Faces applications are stateful, with session state being held in the server. The per-session footprint is quite substantial with ADF Faces user interfaces. This architecture is very useful for large transactions and complex business logic in data intensive operation by power users. It can be much less useful for light weight, read only, self-service style applications.

ADF Faces is further evolving, for example with explicit support for tablets (including touch based interactions and adaptive flow layout), use of HTML 5 for rendering of data visualizations and some streamlining for better use of ADF Faces for public sites (for example a smaller initial JavaScript footprint).

The extensibility of UI apps is supported in ADF Faces to a large degree. Both personalization at run time and customization (adapting the application at design time or run time for specific roles, user groups, locations or other conditions) are catered for. Oracle provides ADF Faces developers with many facilities to build dynamic customization into the applications, for example to enable application managers or end users to hide fields, reposition page elements and change prompts and other boilerplate text at run time. The personal cloud is implemented through the MDS (Meta Data Services) in Fusion Middleware.

Creating a simple, mobile and extensible UX is very possible with ADF Faces, as demonstrated for example with the FUSE style in Fusion Applications HCM R7. ADF Faces components springboard, paneldrawer and vertical tabs, for example, are used to create the icon rich, intuitive user interface that very naturally guides a user to a specific action.

Oracle launched ADF Mobile in 2012. Through ADF Mobile, developers can create a cross device mobile app that renders HTML 5 and also has access to on device services such as email, contacts, camera and GPS. ADF Mobile apps are developed in JDeveloper in a way that is very similar to the ADF Faces development experience. ADF Mobile apps run in an on device Java Virtual Machine. They access backend services, frequently the same RESTful services accessed by rich HTML 5 apps.

There seems to be a move within Oracle, not yet formally announced, to rebrand this mobile solution to Oracle Mobile Development Framework, to position it more broadly as the strategic solution from Oracle for developing mobile apps and not focus too much on the existing ADF developers community. The Oracle Mobile Cloud platform is closely associated with this initiative.
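ADF Mobile apps and rich HTML 5 clients frequently consume the same RESTful services, and the client side of that interaction is plain JavaScript. A sketch of turning a (canned) JSON response into view data; the payload shape, field names and sample values are invented, and a real client would obtain the string over HTTP before applying the same parsing:

```javascript
// Parse a REST response payload into the shape a view needs.
// The payload structure below is imaginary; in a real app the JSON
// string would come from an XMLHttpRequest/fetch call to the service.
function toContactViews(json) {
  const payload = JSON.parse(json);
  return payload.items.map((c) => ({
    display: `${c.firstName} ${c.lastName}`,
    phone: c.phone || 'unknown',
  }));
}

const response =
  '{"items":[{"firstName":"James","lastName":"Anthony","phone":"12345"},' +
  '{"firstName":"Lucas","lastName":"Jellema"}]}';
console.log(toContactViews(response));
// [ { display: 'James Anthony', phone: '12345' },
//   { display: 'Lucas Jellema', phone: 'unknown' } ]
```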


The Oracle Mobile Suite has been announced, and is available for download as well as on the pricelist. This suite contains ADF Mobile, the Oracle Service Bus as well as all Applications Adapters. At this point, it seems nothing more than a bundling of existing components that enable development of mobile solutions, albeit at a much higher price than the sum of the individual components. For now it seems primarily a marketing statement about the prominence of mobile development, front end (UI) and [especially] back end, in Oracle's product strategy.

Note that NetBeans, one of the IDEs offered by Oracle and part of the Sun Microsystems inheritance, has strong support for HTML 5 development, including JavaScript and CSS 3. One of its features is live web preview: two-way integration between the Chrome browser and the NetBeans IDE, meaning that every change is exposed instantaneously in the browser and that any DOM element in the browser can be traced back to a code line in the IDE. NetBeans also offers preview for many different page sizes, ratios and resolutions to inspect UI design for many different devices. A RESTful JavaScript Client wizard is also part of NetBeans, allowing generation of JavaScript code snippets for interacting with a RESTful web service. See https://netbeans.org/features/html5/ for details.

Summary
Whether you have read this article on paper, on your e-reader, desktop browser or tablet, on your smart phone while riding the subway, or having it read out aloud to you in the car: the fact is undeniable that there is an increasing number of channels through which users interact with IT systems. Any user may use a range of different devices, even for performing the same task. Each device requires a device specific style of interaction, from mouse to voice driven, from hand gestures to head shakes.


Oracle NoSQL
James Anthony
www.e-dba.com
PART 2
twitter.com/jamescanthony
www.linkedin.com/pub/
james-anthony/1/3a4/101



Oracle NoSQL
Article 2
1/12 James Anthony

In the first part of this series we discussed the distributed databases. Up until this point Id not
basic types of NoSQL database, what I dont think I really talked too much about distribution of data- Consistency: When a system is clustered/
made clear at the time is that even where a data- bases, so at this juncture its worth bringing this distributed the ability of all nodes within the
base might fall into the category (take for example up. NoSQL databases are typically (although not distributed system to see the same data at the
Cassandra and HBase in the column family data- always) used in a distributed manner, with data- same time
base) they aren't architected the same and may have different capabilities. This shouldn't come as a major surprise, as anyone in the RDBMS world knows Oracle and SQL Server are two vastly different beasts that have the paradigm of being Relational Databases in common.

In this article I'll discuss first CAP and ACID and how these were perceived as weaknesses of the RDBMS and led to the development of NoSQL databases, then we'll dive in further to explore the Key-Value type and in particular Oracle's implementation, the Oracle NoSQL Database.

I'm pretty sure that most people reading this article have at least some familiarity with both CAP and ACID, but let's just do a quick recap.

CAP theorem is based on Consistency, Availability and Partition tolerance, and deals specifically with database servers that are physically separated but coupled together to form a single logical entity. Giving an example from the relational world, you could think of Oracle master-master replication as just one example of a distributed database system, with multiple geographically separated databases acting as a single entity.

The need for distribution arose because many of the NoSQL databases in use today come from large internet-scale organisations, and therefore the ability to distribute databases across multiple data centres, in multiple countries, was a primary goal to ensure a) high availability, b) global data distribution for both locality of service and data protection and c) suitable load balancing of work to ensure no single location/DC/server represents a pinch point.

In CAP theorem we discuss a system having these three guarantees:

Consistency: All nodes see the same data at the same time
Availability: The ability of the system to service requests for data
Partition (Tolerance): The ability of the distributed system to deal with loss of some part of the overall solution, such as a message or node

What you'll notice about these is they are functions of the distributed database in general. A full debate on the proof of CAP theorem is beyond this article; in fact this is a debate that still rages for many people and looks set to continue to do so. Briefly put, the view was/is that only 2 of these 3 tenets can be maintained at any time. Let me give you an example in the traditional Oracle world to better illustrate.

Take an example of a 2-server configuration with plain old Oracle replication configured. In this environment we have our first decision to make,
90 OTech Magazine #3 May 2014


Oracle NoSQL
Article 2
2/12 James Anthony

namely that of availability over consistency. If we want both nodes to always see the same data then we need to configure synchronous replication (2 Phase Commit - 2PC) such that any transaction that gets committed on either node is immediately and synchronously replicated to the other node. If we don't, and choose asynchronous replication instead, we have relaxed one of our guarantees immediately and we risk an inconsistent view of the data depending on which node is queried.

So now we have 2PC configured, and we adhere to our consistency guarantee, but we've broken one of our other guarantees, that of Partition tolerance. Why? Because in order to preserve consistency across the nodes, 2PC requires that the data is committed on both sides. Therefore loss of one side has an impact on the ability of the other side to be able to process, stopping it from doing so! If we take the alternative model and choose asynchronous replication, we can continue to operate if one side is down (and therefore achieve partition tolerance) but we lose our consistency guarantee in order to uphold this. Therefore the general principle is that in CAP theorem we can provide a 2 of 3 model but must relax one of the guarantees.

1) Transaction A comes in from a user to the left hand side database. Because no 2PC is in place, the transaction is queued for later delivery to the second database
2) Shortly after the transaction commits on the left hand side, a user connects to the right hand side database and queries the information. They are given the before image
3) Asynchronous replication now occurs and the right hand side is updated, BUT our user has seen an inconsistent view of the data
Figure 1

Not being completely relevant at this point, but worth introducing, is a phrase you'll hear a lot of working in and around NoSQL solutions: "Eventual consistency". Many NoSQL databases relax the consistency guarantee, instead offering to make the system consistent over a period of time, such as handling a failure of a node (or separation of multiple nodes due to network outage) by synchronising the data upon restoration and therefore making the data eventually consistent. For those interested I thoroughly recommend reading the Amazon Dynamo paper, and hope that like me you'll be suitably impressed by the elegance of semantic and syntactic reconciliation.

http://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf

Whilst CAP refers to the system in general, ACID concerns itself with transactions within the system.



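The stale-read scenario of Figure 1 (a commit on one node, a query against the other before replication catches up, then convergence) can be sketched in a few lines of Java. This is a toy model whose class and method names are invented for illustration, not any vendor's replication code:

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Two replicas with asynchronous replication: writes land on the primary
// and are queued for later delivery to the secondary, so a read against
// the secondary can observe the before image until the queue is drained.
public class EventualConsistencyDemo {
    static Map<String, String> primary = new HashMap<>();
    static Map<String, String> secondary = new HashMap<>();
    static Queue<String[]> replicationQueue = new ArrayDeque<>();

    static void put(String key, String value) {
        primary.put(key, value);                         // commit locally
        replicationQueue.add(new String[] {key, value}); // ship it later
    }

    static void drainReplication() {
        while (!replicationQueue.isEmpty()) {
            String[] kv = replicationQueue.poll();
            secondary.put(kv[0], kv[1]);
        }
    }

    public static void main(String[] args) {
        put("balance", "100");
        // A reader hitting the secondary before replication sees a stale view.
        System.out.println("secondary before sync: " + secondary.get("balance"));
        drainReplication();
        System.out.println("secondary after sync:  " + secondary.get("balance"));
    }
}
```

Until `drainReplication()` runs, the secondary returns the before image (here, no value at all); afterwards both replicas agree, which is exactly the "eventually consistent" behaviour.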

Atomicity: Each transaction is all or nothing. If you are updating 50 rows, then all 50 rows update or the entire transaction is rolled back. Each transaction is atomic, that is to say indivisible.
Consistency: Unlike the C in CAP, consistency here refers to moving from one valid state to another, such that the transaction committed does not violate integrity constraints or other defined rules.
Isolation: This ensures that multiple concurrent executions of transactions will result in the same end state that would occur should those transactions be processed in a serial manner.
Durability: Once committed, a transaction will be permanent, across failures such as power or system crash. The Oracle redo log write-ahead logging model is an excellent implementation of this.

Now that we've done a quick recap of the CAP and ACID properties, we will discuss how these were perceived (and notice my use of that word as particularly relevant, in my opinion) as weaknesses in the relational database that led to the development of the NoSQL movement that is so strong today.

Going back to the example I gave regarding CAP and 2PC, it is easy to see how a 2PC model is hard to envisage as a production solution for a global database with high volumes of traffic. Not only would loss of one database (or isolation due to network or other factors) stop the entire system processing, but the impact of network latency on each and every transaction would inevitably become a pinch point. Much is made in many of the NoSQL papers about the issues with 2PC and the need to move to a different paradigm, but in fact I see this as less of an issue. Oracle has always had asynchronous replication with conflict resolution rules, and more recently Streams and GoldenGate provide even more elegant solutions, all of which can be considered to provide eventual consistency with the ability to provide partition tolerance. In many cases the perceived weaknesses were in comparison to the release of MySQL that was available at the time.

1) A put operation changes data (insert, update or delete)
2) Replication occurs from the node receiving the put to the upper of the replicas. For some reason the lower replica is unavailable (outage or network partition; in this case due to network partition, meaning the lower replica is still running)
3) Read (get) requests occur at both locations; notice how the lower replica will serve an older version of the data
4) At some later time the network partition is resolved and the NoSQL solution will resolve the consistency by replicating the change to the remaining replica. The system has become eventually consistent.
Figure 2

Nonetheless, and leaving aside the pseudo-religious arguments (which I would no doubt start by carrying on along the rebuttal line), let's keep discussing these drawbacks and the mechanisms deployed by NoSQL to resolve them.



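The atomicity property (all 50 rows or none) can be illustrated with a toy in-memory sketch. The class and the applyAtomically helper are hypothetical names for illustration, not a real database engine:

```java
import java.util.HashMap;
import java.util.Map;

// All-or-nothing semantics: apply a batch of updates against a copy of the
// current state, and only publish the copy if every update succeeds.
public class AtomicBatch {
    static Map<String, Integer> table = new HashMap<>();

    static boolean applyAtomically(Map<String, Integer> updates) {
        Map<String, Integer> working = new HashMap<>(table); // snapshot
        for (Map.Entry<String, Integer> e : updates.entrySet()) {
            if (e.getValue() < 0) {   // a constraint violation aborts the batch...
                return false;         // ...and the snapshot is simply discarded
            }
            working.put(e.getKey(), e.getValue());
        }
        table = working;              // publish: all rows update, or none do
        return true;
    }
}
```

If any row in the batch violates the (stand-in) constraint, the published table is left exactly as it was, which is the "rolled back" outcome the Atomicity definition describes.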

A second technical issue that NoSQL databases focus on is the need for a less rigid structure for data models than is enforced by relational models. This has seen the rise of document databases such as MongoDB and CouchBase. The standard table structure with a relatively rigid column format was seen as restrictive to rapid development models. Many of these databases adhere to a document data model, which stores all information in the form of documents (where all the information about an entity, such as a person, is held within a single document: an entirely de-normalised approach), but have to sacrifice ACID properties and do away with transactions across multiple documents entirely! Others needed to cope with a variable number of columns in each row, the classic example being the number of links within a web page: each column represents a different page, but it is unclear how many links a given page might contain, therefore a flexible number of columns is required. Interestingly, many people may now be aware that Oracle are extending the database in 12.1.0.2 to support JSON document models, but with all the benefits of the Oracle RDBMS in terms of ACID support, backup and recovery, management etc.

Other NoSQL databases (perhaps most notably the BigTable based databases) needed both highly distributed and fault tolerant solutions, as well as dealing with massive data volumes and a data model that needed to incorporate a multi-dimensional element (time).

Whatever your personal view on these restrictions, and whether technology, both software and hardware, has improved to get around some of these, it is unquestionable that the NoSQL database is here to stay. In the next part of this article we will discuss the Oracle NoSQL implementation.

Oracle NoSQL Database

Oracle NoSQL Database is a Key-Value (KV) store, much in the same vein as Amazon Dynamo and Voldemort, but with some significant advantages we will discuss later on. In the previous article in this series we discussed what KV storage looks like, so hopefully you can remember that far back! The salient points are: the value can be anything we like, a simple alphanumeric value (a name perhaps), a serialisation of a session state (a really good use case), or an object such as a JSON document (which I will use in many of my examples). Data is queried using a get command (passing in the key) and written to the database using a put statement (passing in the key and value, obviously); there is no query language in the vein of SQL.

The Oracle NoSQL database is designed from the ground up to be distributed; in fact, whilst there is a lite version that can run on a single node for testing, you really aren't going to deploy the Oracle NoSQL Database in a guise with less than 3 nodes (I will shortly discuss the architecture of the solution). This means that the concepts of replication and consistency immediately come into play.



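The get/put access pattern just described is essentially the whole API surface of a key-value store. A minimal sketch (a hypothetical class for illustration, not the oracle.kv API) might look like:

```java
import java.util.HashMap;
import java.util.Map;

// A key-value store exposes just two operations: put(key, value) and
// get(key). There is no query language; access is always by key, and the
// value is opaque bytes (a name, serialised session state, a JSON doc...).
public class MiniKvStore {
    private final Map<String, byte[]> data = new HashMap<>();

    public void put(String key, byte[] value) {
        data.put(key, value);
    }

    public byte[] get(String key) {
        return data.get(key); // null when the key is absent
    }
}
```

Everything else a production KV store adds (distribution, replication, durability and consistency knobs) sits behind this same two-call interface.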

Architecture
Firstly, a big thanks up front to the Oracle NoSQL product management team; I'm going to be using their illustrations throughout this to save me the need to recreate them. One of the things you'll notice when you look at the docs is a lot more diagrams than the RDBMS has in its documentation these days, and I think that's a great way to illustrate what is a new topic to most people.

Within the Oracle NoSQL database the database is referred to as the KVStore (Key-Value Store), with the KVStore consisting of multiple components, but at a high level the store is broken down into multiple storage nodes. A storage node is a server (or a VM) with local disks, so unlike RAC there is no need for a clustered file system, SAN or NAS: you just provision local disks, allowing deployment on truly commodity based hardware.

Within each storage node are replication nodes. We're going to discuss replication shortly, but all you need to know at this point is that the number of replication nodes within a storage node is determined by its capacity. This gives Oracle NoSQL the ability to run across a bunch of nodes that have different capacities (based on CPU and IO specs), meaning you can start small and grow the cluster out with newer hardware without worrying about the new kit being constrained by the metrics of the older servers.

Going down one more level, each replication node hosts one or more partitions (almost always more than one partition). So, let's get back to some NoSQL concepts and explain partitions and sharding in this context.

Partitioning
We've already discussed how NoSQL solutions are typically distributed, therefore the question becomes how the system decides how to distribute data. This is called partitioning.
When a value is stored, a hashing algorithm is applied to the key and a partition ID is derived based on this. A few relevant points are:

A single partition will contain multiple keys
A single replication node can contain multiple partitions

This last point is especially relevant, and you will want multiple partitions per node. Why? Well, let's say we start with a 4-node cluster and we define 4 partitions. As key/value pairs are inserted, the hashing algorithm will equally balance keys between the different nodes; all good so far. Then we decide we want to scale out and add more nodes, so we bring in two more servers, but how can we spread our 4 partitions across these now 6 nodes? The answer is we can't, as the number of partitions is fixed. Compare this to a situation where we defined 24 partitions: then in the initial 4-node cluster each node would have 6 partitions (4 nodes * 6 = 24), and as we expand the cluster with two additional nodes, the partitions just move around and we end up with 4 partitions on each node (6 nodes * 4 = 24). These different storage nodes are referred to as shards (as we have separated the data physically, with each node having access only to this shard of data).
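The partition arithmetic above (a fixed set of 24 partitions dealt out over a growing set of nodes) can be sketched as follows; the class and method names are illustrative only:

```java
import java.util.HashMap;
import java.util.Map;

// Fixed number of partitions, variable number of nodes: a key hashes to a
// partition once and for all, while partitions are dealt out across however
// many storage nodes currently exist. Scaling out moves whole partitions
// between nodes; it never re-hashes individual keys.
public class PartitionMap {
    static final int PARTITIONS = 24;

    // Which partition a key belongs to, independent of cluster size.
    static int partitionOf(String key) {
        return Math.floorMod(key.hashCode(), PARTITIONS);
    }

    // Round-robin assignment of partition IDs to nodes 0..nodes-1.
    static Map<Integer, Integer> assign(int nodes) {
        Map<Integer, Integer> partitionToNode = new HashMap<>();
        for (int p = 0; p < PARTITIONS; p++) {
            partitionToNode.put(p, p % nodes);
        }
        return partitionToNode;
    }
}
```

With 4 nodes each one serves 6 partitions; grow to 6 nodes and each serves 4, without any key ever changing partition.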

Replication
The observant amongst you are probably already thinking: if the data isn't shared, how is it protected? This is where replication in NoSQL solutions comes in (here we will discuss the Oracle NoSQL implementation, but it's similar for most of the NoSQL solutions out there). Within the Oracle NoSQL Database you configure a replication factor for each KVStore, which controls the number of other storage nodes onto which the key/value pair will be copied (this is why we have replication nodes inside storage nodes).

Take a look at Figure 1: from this you can see how data is copied to two other nodes from the master (more on this in a moment) based on a replication factor of 3.

Like some other NoSQL implementations, the Oracle solution has a concept of a single write master for data. For each shard a master node is elected to which all writes are performed; the master node then copies the data to the replica nodes (later on we will discuss how this can be controlled). Whilst write traffic is performed against this single node (and remember this is a single node per shard; having multiple shards means we balance write activity for different keys across multiple nodes), reads can be performed against any replica in the shard, which allows us to horizontally scale read workloads. Given this is Oracle you probably expect it anyway, but just to state it explicitly: a failed master node will automatically be detected and one of the replica nodes will then become the new master, all transparently happening in the background.

By balancing multiple shards and multiple partitions we can ensure we have sufficient capacity for write activity and future expansion. A key feature of the Oracle NoSQL Database is the ability to horizontally scale read workloads using this method, and scaling is indeed linear in this fashion.

Major/Minor Keys
One of the key features of the Oracle NoSQL database is support for multi-part keys, with the ability to specify both major and minor components to the key. Let's build an example to illustrate, based on one of our real world deployments.
Imagine we are processing an incoming feed of information relating to sporting events. We will use soccer (football to us Brits!) as the game in question. The feed sends us real time information on the events happening within the game, such as a goal, free kick, throw in etc., and we want to process some type of action within our application based on this.
Firstly the feed provides us with an ID for the tournament or competition in question (World Cup, Premier League etc.), allowing us to identify which tournament any incoming entry is for. Each incoming entry also has a unique identifier for the given match/fixture (we can have multiple matches happening at once and clearly will have a large number of matches over time). We can see that we've got two parts to our key already:

Part 1: The competition ID
Part 2: The match ID

Within the Oracle NoSQL database this is easy to process, and I'm going to use some pseudo Java code to build the example:

// We create an array of String (VARCHAR2 style values) for the key
ArrayList<String> majorComponents = new ArrayList<String>();
// Define the major path components for the key
majorComponents.add(competitionId);
majorComponents.add(matchId);
// Create the key
Key myKey = Key.createKey(majorComponents);
// Do some work here to define the value we store
// Now store the key value with a simple put request
NoSQLDataAccess.getDB().put(myKey, <VALUE>);

However, within each match I now have multiple events coming in, each again with its own unique event ID. This is where I can use the minor key component, adding this as the minor key element:

ArrayList<String> majorComponents = new ArrayList<String>();
// This time we also have a minor components element
ArrayList<String> minorComponents = new ArrayList<String>();
// Define the major and minor path components for the key
majorComponents.add(competitionId);
majorComponents.add(matchId);
minorComponents.add(eventId);
// Create the key
Key myKey = Key.createKey(majorComponents, minorComponents);
// Do some work here to define the value we store




// Now store the key value with a simple put request
NoSQLDataAccess.getDB().put(myKey, <VALUE>);

So what's the advantage of using the minor key like this? Well, let's take the situation where later down the line I want to get ALL the events for a given match. What I can do now is execute an operation to get the events using just the major key:

majorComponents.add(competitionId);
majorComponents.add(matchId);
// Create the retrieval key
Key myKey = Key.createKey(majorComponents);
// Now retrieve the records.
SortedMap myRecords = NoSQLDataAccess.getDB().multiGet(myKey);

Or perhaps I want to get all of the match IDs for a given tournament (for example as part of the process to show a historical fixture list); in this example I can do a get operation using only the first part of the major key:

majorComponents.add(competitionId);
// We no longer need the following line for the 2nd part of the key
// majorComponents.add(matchId);
// Create the retrieval key
Key myKey = Key.createKey(majorComponents);
// Now retrieve the records.
SortedMap myRecords = NoSQLDataAccess.getDB().multiGet(myKey);

I can also use the full major and minor keys, again using an example; in this case to check if we've already received this event for the given match in the given competition (in order to perform duplicate checking):

ArrayList<String> majorComponents = new ArrayList<String>();
ArrayList<String> minorComponents = new ArrayList<String>();
// Define the major and minor path components for the key
majorComponents.add("Game");
majorComponents.add(competitionId);
majorComponents.add(matchId);
// Add the Event ID as the minor Key
minorComponents.add(eventId);
// Create the key
Key myKey = Key.createKey(majorComponents, minorComponents);
// Now retrieve the record. We use a single get as opposed to a
// multi-get here as we only expect one value
ValueVersion vv = NoSQLDataAccess.getDB().get(myKey);

A quick but very important note: the V3 release of the Oracle NoSQL Database provides a table mapping feature and much of this key design goes away! We'll discuss just how powerful this is in a future article.



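One way to picture why a multiGet on the major key is cheap: if keys are kept sorted by their path, all entries sharing a major-key prefix are adjacent and can be fetched as one range scan. A hypothetical sketch of that idea (not the Oracle NoSQL storage format):

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Keys sorted by their path make "fetch everything under a major key" a
// cheap range scan: all events of a match share the prefix
// /competitionId/matchId/ and are therefore adjacent in the map.
public class MajorMinorLookup {
    static final TreeMap<String, String> store = new TreeMap<>();

    static void put(String competitionId, String matchId, String eventId, String value) {
        store.put("/" + competitionId + "/" + matchId + "/" + eventId, value);
    }

    // Analogue of a multiGet on the major key (competitionId, matchId):
    // every key in [prefix, prefix + MAX_CHAR) starts with the prefix.
    static SortedMap<String, String> eventsOfMatch(String competitionId, String matchId) {
        String prefix = "/" + competitionId + "/" + matchId + "/";
        return store.subMap(prefix, prefix + Character.MAX_VALUE);
    }
}
```

The returned view contains exactly the events of the requested match, mirroring how the major key scopes a multiGet while minor keys distinguish the individual events.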

Consistency and Durability guarantees
Going back to the discussion on replication, one of the features of many NoSQL databases is that, unlike a traditional relational database, it is possible to choose the level of durability (write persistence) and consistency (read consistency) of the data at system level, and then override this at the per-operation level.

Let's explore how that works. Firstly, remember back to when we discussed replication and replication factors; in our example we had a replication factor of 3. This means once the data had been written to the master node, it will then be written to two additional replica nodes. Clearly doing these extra write operations has an overhead, especially if the nodes are separated by any distance, due to network latency. In certain cases we may not want our transaction to wait for these additional write operations to complete, so we can tune our durability policy and change this to a different value. If we choose a durability value of 1, then once the data is written to the master node, the operation will return control to the calling program, with the replication happening in the background.

In the Oracle NoSQL database we have 3 acknowledgement-based durability models:

Master Node only
All Nodes (in the replica set), basically enforcing synchronous replication
A majority of nodes (in the replica set)

Additionally, the durability policy in Oracle NoSQL also allows you to control the level of write persistence for the write operations to the master and replica nodes. You can do this by choosing whether the data is written to a) the local in-memory buffer, b) the OS buffer cache, or c) whether we wait for it to be written all the way to disk. We will show some of these examples in a later article, but suffice for now to say they offer a great deal of flexibility in trading off performance and durability. The ability to control these at the transaction level allows for certain operations to be performed in a fail-safe mode, whilst others can sacrifice durability in favour of performance.

Versions & Consistency
Another concept of NoSQL databases that is slightly different to that of the traditional relational database is that of versions. When we insert data into the KV store it is implicitly given a version in the system. Let's illustrate with an example, where we insert a KV pair with a key of A and a value




of B, and it is then given a version of 1; we can represent this as:

Now a process comes along and updates the KV pair (which really means an update to the value). We aren't concerned with what the data has changed to, just that its version has changed.

So far so good, but how does this relate to real world usage? Let's take an example where we have relaxed our durability guarantees and we're using asynchronous replication. It's conceivable that we have different versions of the value for the key on different nodes.

Now when we read back the value for key A, depending on which node we get the data from, we get different values (remember at the start of the article we discussed eventual consistency; well, here is the downside!). So how do we manage this?

Well, this is where versions come in! When we retrieve the data we can choose to check the version number, since get operations return both the data and the record version. We can then compare this version number to what we held for the insert operation and only proceed if we are working from the current version. Again, without repeating myself, I seriously recommend you go and read that Amazon Dynamo paper, and see how semantic and syntactic reconciliation work.

Oracle NoSQL also allows for other consistency guarantees, based on:

Absolute: Read from the master node. Unlike other NoSQL solutions with no pre-defined master, this is an option. We know writes for a key will go through a master for the shard, so servicing requests from this will always return the latest, most consistent value.




None: Read from any node, without concern as to the most recent state of the data.
Time Based: This is based on the lag (as defined by time) of any replica from the master (for example, if the replica is no more than 1 second out).

Oracle NoSQL Differentiators
The Oracle NoSQL database has quite a few differentiators in my eyes; these include:

Enterprise Grade Support: Always a tricky subject, whether to pay for something! However, one of the problems for many organisations is supportability; after all, Google is a search engine, not a support tool. Being able to fall back on Oracle support means organisations looking to deploy in the brave new NoSQL world might have to rely on Oracle, but they can rely on Oracle.
Proven storage engine: One of the attractions to us when first deploying NoSQL was that we were already POCing based on the Berkeley DB, as we know the track record of that. Oracle NoSQL uses Berkeley DB as its persistence engine.
ACID support: For me this is the big one. Whilst I get why people wanted to work around ACID, often stating they don't need transactions (remember most NoSQL databases sacrifice transaction support), in my experience you might not need them now, but you will at some point.
Integration: Not unique amongst NoSQL databases is Hadoop integration, but additionally the Oracle NoSQL database can integrate directly with the Oracle RDBMS (using external tables) and Coherence. Being able to cross the chasm from the NoSQL to the SQL world is just a great feature, allowing me to query across my data sources and stopping me from just getting another silo.
Transparent Load Balancing: Again perhaps not unique, but certainly not prevalent in the NoSQL world, is the fact that the NoSQL driver provided by Oracle performs all my load balancing for me (something we'll discuss in a future article).
Free: Yep! You read that right. Oracle offer the NoSQL database in two flavours: the paid-for version (which is actually not that bad by Oracle terms), but also the community edition (CE), which is totally free!

What's new in version 3?
Oracle recently announced the availability of Oracle NoSQL Database V3. Whilst we've had some time to look at this we've not deployed this into




any of our existing implementations, but we plan to soon! The reason? Some of the new features provide significant benefit, including:

Increased Security: OS-independent, cluster-wide password-based user authentication and Oracle Wallet integration enable greater protection from unauthorized access to sensitive data. Additionally, session-level Secure Sockets Layer (SSL) encryption and network port restrictions deliver greater protection from network intrusion.
Usability and Ease of Development: Support for tabular data models simplifies application design and enables seamless integration with familiar SQL-based applications. Secondary indexing delivers dramatically improved performance for queries.
Data Center Performance Enhancements: Automatic failover to metro-area secondary data centers enables greater business continuity for applications. Secondary server zones can also be used to offload read-only workloads, like analytics, report generation, and data exchange, for improved workload management.



asm_metrics.pl utility use cases
Bertrand Drouvot
twitter.com/BertrandDrouvot
fr.linkedin.com/in/bdrouvot




In the winter 2014 edition of OTech Magazine I introduced the asm_metrics.pl utility and explained how it works (see http://www.otechmag.com/magazine/2014/winter/OTech%20Magazine%20-%20Winter%202014.pdf#page=114 for more details).

I created this utility because when I need to deal with the ASM I/O statistics, the tools provided by Oracle (asmcmd iostat and asmiostat.sh from MOS [ID 437996.1]) do not suit my needs. The metrics provided are not enough, the way we can extract and display them is not customizable enough, and we don't see the I/O repartition across all the ASM or database instances in a RAC environment.

To summarize: the script connects to an ASM instance and takes a snapshot each second (the default interval) from the cumulative gv$asm_disk_iostat or gv$asm_disk_stat view and computes the delta with the previous snapshot. In this way we get the following real-time metrics based on the cumulative metrics:

Reads/s: Number of reads per second.
KbyRead/s: Kbytes read per second.
Avg ms/Read: ms per read on average.
AvgBy/Read: Average bytes per read.
Writes/s: Number of writes per second.
KbyWrite/s: Kbytes written per second.
Avg ms/Write: ms per write on average.

In this spring 2014 issue I will cover some use cases of the utility. Before we look at the use cases, let's have a look at the help page of the ASM metrics utility:

In the help instructions you can see that there are a few parameters that you can use with the ASM metrics utility.

1. You can choose the number of snapshots to display and the time to wait between the snapshots. The purpose is to see a limited number of snapshots with a specified amount of wait time between snapshots.

2. You can choose on which ASM instance to collect the metrics thanks to the -INST= parameter. Useful in a RAC configuration to see the repartition of the ASM metrics per ASM instance.

3. You can choose for which DB instance to collect the metrics thanks to the -DBINST= parameter (wildcard % allowed). In case you need to focus on a particular database or a subset of them.



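The delta computation the script performs (snapshot cumulative counters, subtract the previous snapshot, divide by the elapsed interval) can be sketched as follows; the names are illustrative, not taken from the script itself:

```java
// Turning cumulative I/O counters into per-interval rates: take two
// snapshots of a counter, subtract, and divide by the elapsed seconds.
public class DeltaRates {
    // e.g. cumulative reads at t0 and t1 -> reads per second over [t0, t1]
    static double perSecond(long previous, long current, double intervalSeconds) {
        return (current - previous) / intervalSeconds;
    }

    // e.g. delta read time (microseconds) over delta reads -> avg ms per read
    static double avgMsPerOp(long deltaOps, long deltaMicros) {
        return deltaOps == 0 ? 0.0 : (deltaMicros / 1000.0) / deltaOps;
    }
}
```

This is all "Reads/s" or "Avg ms/Read" amount to: the gv$ views expose running totals, and the utility differences them once per interval.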

4. You can choose on which diskgroup to collect the metrics thanks to the -DG= parameter (wildcard % allowed). In case you need to focus on a particular diskgroup or a subset of them.

5. You can choose on which failgroup to collect the metrics thanks to the -FG= parameter (wildcard % allowed). In case you need to focus on a particular failgroup or a subset of them.

6. You can choose on which Exadata cells to collect the metrics thanks to the -IP= parameter (wildcard % allowed). In case you need to focus on a particular cell or a subset of them.

7. You can aggregate the results at the ASM instance, DB instance, diskgroup, failgroup (or Exadata cell IP) level thanks to the -SHOW= parameter. Useful to get an overview of what is going on per ASM instance, per diskgroup or whatever you want, as this is fully customizable.

8. You can display the metrics per snapshot, the average metric values since the collection began (that is to say, since the script has been launched), or both, thanks to the -DISPLAY= parameter.

9. You can sort based on the number of reads, number of writes or number of IOPS (reads+writes) thanks to the -SORT_FIELD= parameter. That way you can find out which ASM instance, database instance, diskgroup, failgroup or whatever you want is generating most of the I/O reads, most of the I/O writes or most of the IOPS (reads+writes); for example, which database is the top contributor to the I/O.

Now we are ready to see some use cases. Keep in mind that the utility is not limited to those examples, as you can aggregate the results following your needs in a customizable way: aggregate per ASM instance, database instance, diskgroup, failgroup or a combination of all of them.

Use case 1:
Find out the top physical I/O consumers through ASM in real time. This is useful as you don't need to connect to any database instance to get this info, as it is centralized in the ASM instances.

Let's sort first based on the number of reads per second, that way:
./asm_metrics.pl -show=dbinst -sort_field=reads



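The -SHOW= aggregation is essentially a group-by over per-disk samples: each row carries its dimension values, and the utility sums the metrics under whichever dimension you pick. A hypothetical sketch of that idea (not the script's actual Perl code):

```java
import java.util.HashMap;
import java.util.Map;

// The -SHOW= option as a group-by: per-disk samples are summed under
// whichever dimension (instance, diskgroup, failgroup) you choose.
public class ShowAggregator {
    // One sample row: dimension values plus a reads/s figure.
    record Sample(String inst, String dg, String fg, double readsPerSec) {}

    static Map<String, Double> groupBy(Iterable<Sample> samples, String dimension) {
        Map<String, Double> totals = new HashMap<>();
        for (Sample s : samples) {
            String key = switch (dimension) {
                case "inst" -> s.inst();
                case "dg" -> s.dg();
                default -> s.fg();
            };
            totals.merge(key, s.readsPerSec(), Double::sum);
        }
        return totals;
    }
}
```

Grouping the same samples by "inst", "dg" or "fg" yields the different views the use cases below walk through.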

As you can see, the USB3CMMO_2 instance is the one that recorded the largest number of reads during the last second. You can also sort based on the number of writes or IOPS (meaning reads+writes).

Use case 2:
I want to see the ASM preferred read in action for a particular diskgroup (BDT_PREF for example) and see the I/O metrics for the associated failgroups. I want to see that no reads are done outside the preferred failgroup.

Let's configure the ASM preferred read parameters:

SQL> alter system set asm_preferred_read_failure_groups='BDT_PREF.WIN' sid='+ASM1';
System altered.
SQL> alter system set asm_preferred_read_failure_groups='BDT_PREF.JMO' sid='+ASM2';
System altered.

And check its behaviour thanks to the utility:
./asm_metrics.pl -show=dg,inst,fg -dg=BDT_PREF

As you can see, data have been read from their preferred read failure groups. We can also see their respective performance metrics.

Use case 3:
I want to see the I/O distribution on Exadata across the cells (storage nodes). For example, I want to check that the I/O load is well balanced across all the cells. This is feasible thanks to the -show=ip option.
./asm_metrics.pl -show=dbinst,dg,ip -dg=BDT

As you can see, the I/O load is well balanced across all the cells.




Use case 4:
I want to see the I/O distribution recorded in the ASM instances.
./asm_metrics.pl -show=inst

As you can see, most of the IOPS are recorded in the ASM2 instance (which means its clients are doing more IOPS than the ASM1 clients). It also means that the host on which the ASM2 instance is located is the one that generated most of the IOPS, so it could be useful to know which host generates most of the IOPS in a RAC configuration. (This is not necessarily true with the 12c Flex ASM feature, as the database instance could be remote to the ASM instance.) Now drill down a step further with the following use case.

Use case 5:
I want to see the I/O distribution recorded in the ASM instances for each database instance (which are the clients we talked about in use case 4).
./asm_metrics.pl -show=inst,dbinst

You can see, for example, how the 343 Reads/s that are recorded in the ASM2 instance are distributed across the database instances.

Use case 6:
I want to see the I/O distribution recorded in the ASM instances for the database instances linked to the BDT database.
./asm_metrics.pl -show=inst,dbinst -dbinst=%BDT%

That way I see the %BDT% instances only.
Use case 7
I want to see the IO distribution over the failgroups.

./asm_metrics.pl -show=fg

Only the failgroup metrics are reported. Now drill down a step further with the following use case.

Use case 8
I want to see the IO distribution and the associated metrics across the ASM instances and the failgroups.

./asm_metrics.pl -show=fg,inst

That way you can see the IO distribution between the ASM instances and the failgroups. Based on the metrics you can also decide whether it is necessary (for performance reasons) and feasible (enough bandwidth) to put the ASM preferred read feature in place.

Warning: regarding the preferred read, in case of Flex ASM watch out for unpreferred reads (see http://bdrouvot.wordpress.com/2013/07/02/flex-asm-12c-12-1-and-extended-rac-be-careful-to-unpreferred-read/).

Now drill down a step further with the following use case.

Use case 9
I want to see the IO distribution across the ASM instances, diskgroups and failgroups.

./asm_metrics.pl -show=fg,inst,dg

That way I can see that all the reads are done from the DATA diskgroup.
Use case 10
I want to see the metrics for the disks that belong to the FRA diskgroup.

./asm_metrics.pl -show=dsk -dg=FRA

Remarks:
1. In the previous use cases you may have seen rows with blank values in some fields. It means that the values have been aggregated for those particular fields. The aggregation depends on what you want to see (the -show option).
2. The use cases focused only on snapshots taken during the last second, but you could also:
Take snapshots over a longer period of time thanks to the -interval parameter:
./asm_metrics.pl -interval=10 (for snaps of 10 seconds)
View the average since the collection began (not only the snap deltas) thanks to the -display parameter:
./asm_metrics.pl -show=dbinst -sort_field=iops -display=avg
The output reports the collection begin time.

Conclusion:
Thanks to these use cases, I hope you can see how customizable the utility is and how you could benefit from it in day-to-day work with ASM. The main entry for the tool is located at this blog page: http://bdrouvot.wordpress.com/asm_metrics_script/ from which you'll be able to download the script or copy the source code.

Feel free to download it and to provide any feedback.

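The -show, -interval, -sort_field and -display switches combine freely. As a small illustration (a sketch only: the helper name build_asm_cmd and its default values are mine, not part of asm_metrics.pl), a shell function can assemble the command line:

```shell
# Hypothetical helper (not part of the utility): assemble an asm_metrics.pl
# command line from a drill-down level, an interval and a display mode.
# Defaults (1-second snaps, avg display) are assumptions for this sketch.
build_asm_cmd() {
  show="$1"; interval="${2:-1}"; display="${3:-avg}"
  echo "./asm_metrics.pl -show=${show} -interval=${interval} -display=${display}"
}
build_asm_cmd dbinst 10 avg
```

Running the printed command would report 10-second snapshots together with the average since the collection began.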
BUILD A RAC DATABASE FOR FREE WITH VIRTUALBOX
A STEP BY STEP GUIDE

Christopher Ostrowski
www.avout.com
twitter.com/chrisostrowski
www.facebook.com/chris.ostrowski.140
www.linkedin.com/in/ostrowskichris
Introduction
Oracle Corporation has made it incredibly easy to download and use virtually all of its software offerings via the Oracle Technology Network website. The availability of both software and documentation makes it easy for individuals and organizations to test drive Oracle software before implementing it. For DBAs and developers anxious to learn and use new languages, development environments and software features, the fully-functional software is a godsend for those who want to keep their skills up to date.

Perhaps the only real limitation to this bounty provided by Oracle is hardware. Many of the pieces of software are complex and require significant hardware investments even for just a sandbox environment (i.e. an environment that doesn't require sizing to accommodate many users logging in simultaneously). As an example, a sandbox environment with Oracle SOA Suite running on top of Oracle WebLogic Server, driven by an Oracle database, requires a significant amount of RAM just to run. While the sizing of said components can be scaled down, it still requires a machine with pretty significant resources.

While RAM and disk space costs have dropped significantly in the last couple of years, there is still one area where it is very difficult for DBAs to create their own sandbox environment: Oracle Real Application Clusters (RAC). Traditionally, the basic requirements for a RAC system involve two servers with a disk storage array connecting the two. While Network Attached Storage (NAS) systems have dropped in price in the last couple of years, the cost and installation are still beyond most DBAs who wish to set up a sandbox environment (as is the cost of investing in hardware with a singular use).

Two years ago, I set a goal for myself to learn about RAC, and I went looking for a solution that, in the best scenario, wouldn't cost me anything. There were various resources on the internet with different pieces of information on how to do this; this paper is an attempt to show how I was able to do it for $0, along with the things I have learned since then that make the process of building your own RAC system much easier.

The Pieces You'll Need
Please remember that the software you download from Oracle is for evaluation purposes only. Do not use anything you build using these instructions in a production environment!

First, let's talk hardware. At a minimum, you'll need 8GB of RAM on the server you're planning to build this on. Why 8GB? You'll need 2 virtual machines, and the minimum you'll want to create those machines with is 2GB of RAM each. The virtual machine grabs the 2GB of RAM whether you're actively using it or not (for a DBA analogy, think of the SGA when an Oracle instance starts up: the instance grabs the physical memory outlined in your init.ora file and keeps it allocated as long as the instance
is running). OK, you're thinking: 2GB+2GB is 4GB, so why do I need 8GB of RAM? It's never a good idea to use more than 50% of your physical RAM for virtual machines. You certainly CAN do it; it's very possible, however, that weird things will start to happen if your VMs use more than 50% (especially if you're using Windows as your host operating system).
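The sizing rule above reduces to simple arithmetic (a sketch; the 50% figure is the author's rule of thumb):

```shell
# Two guests at 2GB each, kept at no more than 50% of host RAM,
# means the host needs at least 2 * 2 * 2 = 8GB.
GUESTS=2
GB_PER_GUEST=2
MIN_HOST_GB=$(( GUESTS * GB_PER_GUEST * 2 ))
echo "minimum host RAM: ${MIN_HOST_GB}GB"   # prints: minimum host RAM: 8GB
```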

Next, disk space. At a minimum, I would allocate 20GB for each virtual machine (40GB total) and at least 30GB for your shared disks, so you'll need at least 70GB of disk space. As we will see, the virtualization software we'll use is very efficient at using disk space (the actual disk space used at the host operating system level doesn't get allocated to the virtual machine until it is needed), but making sure you have at least 70GB of usable disk space is the minimum to get started.

Next, the software:

1. Oracle Database 11gR2 (available for download at http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html). As of March 2014, the latest 11.x version available is 11.2.0.1.0. Download the two files that make up the Linux-x86-64 link.

2. Oracle Grid Software. The Oracle Grid software is what communicates between your servers and what allows the servers to act as a single entity. The grid software can be downloaded from http://www.oracle.com/technetwork/products/clusterware/downloads/index.html. As of March 2014, the latest version of the Grid software is 11.2.0.1.0. Download the Linux x86-64 version. Make sure to also grab the cluvfy utility; this will be used to verify the cluster right before installing.

3. CentOS Release 5.x 64-bit. CentOS is a free operating system that is equivalent (with some very minor exceptions) to Red Hat Enterprise
Linux. You can find a public mirror to download CentOS from http://wiki.centos.org/Download. From there, click on the x86_64 link next to CentOS-5 (as of March 2014, the latest 5.x release is 5.10). Pick a location close to you, then click the file named CentOS-5.10-x86_64-bin-DVD-1of2.iso. Don't worry if you don't have a DVD burner; we're not going to actually burn the DVD.

4. Oracle VirtualBox (available from http://www.oracle.com/technetwork/server-storage/virtualbox/downloads/index.html). Oracle VirtualBox is a free virtualization program from Oracle. It differs from Oracle's other virtualization product (Oracle VM) in the important distinction that it requires an underlying operating system to run on top of. As such, it is not suitable for most virtualized production environments, as all system calls (disk reads and writes, memory reads and writes, etc.) have to be translated to the native host operating system. This usually causes enough of a performance hit that using VirtualBox in production is not acceptable. For our purposes, however, VirtualBox will do the job.

Believe it or not, that's all the pieces you'll need to build your own sandbox RAC environment.

The Steps

Oracle VirtualBox
First, install Oracle VirtualBox on the machine you wish to use. As mentioned before, make sure you have at least 8GB of RAM and 70GB of disk space on this server. The installation is very straightforward and will not be covered in detail here.

CentOS
The process we're going to use to create our virtual machines is as follows: we'll create the first virtual machine, create shared disks, and then clone the first virtual machine. After VirtualBox is installed, run it and create a new virtual machine by clicking on the New icon in the top-left of the screen. Give your new virtual machine a meaningful name (I called mine RAC1), select Linux as the type and Red Hat (64-bit) as the version. For memory size, select 2048MB. Note that this is the minimum; if there is more memory you can use on this server, bump up the memory allocation accordingly.

Next, select Create a virtual hard drive now, then VDI (VirtualBox Disk Image), then Dynamically Allocated. Specify a location and make sure the disk is at least 30GB (again, you can allocate more if you have the space). I mentioned earlier that the virtualization software we're going to use is very efficient when it comes to disk space. After creating the virtual machine, we can look at the corresponding file on our base operating system and we'll see that it's much less than 30GB in size; VirtualBox will dynamically allocate space as it's needed, up to 30GB (or more if we specify more in the wizard).
After that last page in the wizard, you'll see the main VirtualBox page listing the virtual machines that have been created. Before we can start up our VM, we need to make a few tweaks to the network options for the VM. Click on the Network link on the right side of the page, then click on the Adapter 1 tab. Make sure Enable Network Adapter is checked and Attached to: is set to Bridged Adapter, then click Adapter 2. Make sure Enable Network Adapter is checked and Attached to: is set to Internal Network.

Why do we do this? Oracle RAC needs two network cards attached to each server: one to handle communications with the outside world and one to handle communications between the two servers. This second connection is referred to as interprocess communication and needs to be a direct connection between the two servers; this is why the second network adapter for the virtual machine has a connection type of Internal Network.

Click on OK to close the wizard, then click Start in the top-left of the VirtualBox Manager window. Since this is the first time we're starting up the virtual machine, VirtualBox is smart enough to ask where the operating system disk is. Click the folder icon to the left and find where you saved the CentOS ISO file (CentOS-5.10-x86_64-bin-DVD-1of2.iso). Continue through the Linux installation as you would for a basic server. It should be a server installation with:
- A minimum of 4GB of swap space
- Firewall disabled
- SELinux set to disabled
- Package groups:
  o Desktop Environments > GNOME Desktop Environment
  o Applications > Editors and Graphical Internet
  o Development > Development Libraries and Development Tools
  o Servers > Server Configuration Tools

On the networking screen, do NOT choose DHCP; the IP addresses need to remain consistent for your server, so pick an IP address for both eth0 (the public interface) and eth1 (the private interface, i.e. the interconnect). Make sure the two addresses are on different subnets. As an example, I used the following on my system:

IP Address eth0: 192.168.0.101 (public address)
Default Gateway eth0: 192.168.0.1 (public address)
IP Address eth1: 192.168.1.101 (private address)
Default Gateway eth1: none

Upon completion, shut down your server.
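For reference, the static eth0 settings chosen above end up in /etc/sysconfig/network-scripts/ifcfg-eth0. A sketch of what that file would contain (the /24 netmask is an assumption; adjust it to your network), written to a scratch file here so the block is safe to run anywhere:

```shell
# Hypothetical ifcfg-eth0 contents for the example public address
# (192.168.0.101). Written to a temp file, not to /etc/sysconfig.
SCRATCH=$(mktemp)
cat > "$SCRATCH" <<'EOF'
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.0.101
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
ONBOOT=yes
EOF
cat "$SCRATCH"
```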
Create Shared Disks
Here's where we get to use the really cool features of VirtualBox. In VirtualBox, we can create shared disks just by issuing two commands:

VBoxManage createhd --filename c:\VMs\shared\asm1.vdi --size 10240 --format VDI --variant Fixed

VBoxManage storageattach RAC1 --storagectl "SATA" --port 1 --device 0 --type hdd --medium c:\VMs\shared\asm1.vdi --mtype shareable

The first command creates a 10GB disk and makes it available to VirtualBox. The second command attaches the disk to a specific virtual machine. Since we specified --mtype shareable at the end, the disk can be attached to more than one virtual machine. After we clone RAC1, we'll attach the disks to the second virtual machine.

Issue the following commands to create four more attached disks:

VBoxManage createhd --filename c:\VMs\shared\asm2.vdi --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename c:\VMs\shared\asm3.vdi --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename c:\VMs\shared\asm4.vdi --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename c:\VMs\shared\asm5.vdi --size 10240 --format VDI --variant Fixed

And then attach them to the RAC1 virtual machine:

VBoxManage storageattach RAC1 --storagectl "SATA" --port 2 --device 0 --type hdd --medium c:\VMs\shared\asm2.vdi --mtype shareable
VBoxManage storageattach RAC1 --storagectl "SATA" --port 3 --device 0 --type hdd --medium c:\VMs\shared\asm3.vdi --mtype shareable
VBoxManage storageattach RAC1 --storagectl "SATA" --port 4 --device 0 --type hdd --medium c:\VMs\shared\asm4.vdi --mtype shareable
VBoxManage storageattach RAC1 --storagectl "SATA" --port 5 --device 0 --type hdd --medium c:\VMs\shared\asm5.vdi --mtype shareable

Even though we've defined the disks as shareable, we still need to issue the following commands:

VBoxManage modifyhd c:\VMs\shared\asm1.vdi --type shareable
VBoxManage modifyhd c:\VMs\shared\asm2.vdi --type shareable
VBoxManage modifyhd c:\VMs\shared\asm3.vdi --type shareable
VBoxManage modifyhd c:\VMs\shared\asm4.vdi --type shareable
VBoxManage modifyhd c:\VMs\shared\asm5.vdi --type shareable
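Since the create/attach/share pattern is identical for all five disks, the fifteen commands can also be generated in a loop. This sketch only prints the commands (a dry run) so you can review them before pasting them into a shell:

```shell
# Dry run: emit the createhd/storageattach/modifyhd trio for asm1..asm5.
# Paths and the "SATA" controller name match the article's examples.
gen_shared_disk_cmds() {
  for i in 1 2 3 4 5; do
    disk='c:\VMs\shared\asm'"${i}"'.vdi'
    printf '%s\n' "VBoxManage createhd --filename ${disk} --size 10240 --format VDI --variant Fixed"
    printf '%s\n' "VBoxManage storageattach RAC1 --storagectl \"SATA\" --port ${i} --device 0 --type hdd --medium ${disk} --mtype shareable"
    printf '%s\n' "VBoxManage modifyhd ${disk} --type shareable"
  done
}
gen_shared_disk_cmds
```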
At the virtual machine operating system level, the new disks will be named:

/dev/sdb
/dev/sdc
/dev/sdd
/dev/sde
/dev/sdf

Start the RAC1 virtual machine and partition the new disks:

# fdisk /dev/sdb
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1305, default 1305):
Using default value 1305
Command (m for help): p

Disk /dev/sdb: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot   Start   End    Blocks    Id  System
/dev/sdb1            1  1305  10482381    83  Linux

Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

Repeat the process for disks /dev/sdc through /dev/sdf.

Configure the first Virtual Machine

Step 1: Create groups
As root:

/usr/sbin/groupadd -g 500 dba
/usr/sbin/groupadd -g 600 oinstall
/usr/sbin/groupadd -g 700 oper
/usr/sbin/groupadd -g 800 asm
cat /etc/group

Step 2: Check that user nobody exists
As root:

grep nobody /etc/passwd

Step 3: Add oracle user
As root:

/usr/sbin/useradd -b /home/local/oracle -d /home/local/oracle -g 500 -m -p oracle -u 500 -s /bin/bash oracle
grep oracle /etc/passwd
/usr/sbin/usermod -g oinstall oracle
/usr/sbin/usermod -a -G dba oracle
/usr/sbin/usermod -a -G oper oracle
/usr/sbin/usermod -a -G asm oracle
id oracle
uid=500(oracle) gid=600(oinstall) groups=600(oinstall),500(dba),700(oper),800(asm)
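The "repeat for /dev/sdc through /dev/sdf" partitioning step lends itself to scripting. One hedged approach, shown as a dry run that only prints the pipelines (the keystroke string answers n, p, 1, Enter, Enter, w, exactly as in the interactive session above):

```shell
# Dry run: print a non-interactive fdisk pipeline for each remaining disk.
# To actually run them, execute the printed lines as root on the VM.
gen_fdisk_cmds() {
  for d in sdc sdd sde sdf; do
    printf '%s\n' "printf 'n\np\n1\n\n\nw\n' | fdisk /dev/${d}"
  done
}
gen_fdisk_cmds
```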
Step 4: Setup directories
As root, create directories for the Oracle grid software (must be outside of Oracle's home directory), then change ownership and permission levels:

cd /
mkdir oracledb
mkdir oraclegrid
mkdir oraclegridbase
mkdir oraInventory
chown oracle:oinstall oracledb
chown oracle:oinstall oraclegrid
chown oracle:oinstall oraclegridbase
chown oracle:oinstall oraInventory
chmod 777 oracledb
chmod 777 oraclegrid
chmod 777 oraclegridbase
chmod 777 oraInventory

Step 5: Unzip oracle software
As oracle:

[oracle@RAC1 software]$ pwd
/home/local/oracle/software
unzip linux.x64_11gR2_grid.zip
unzip linux.x64_11gR2_database_1of2.zip
unzip linux.x64_11gR2_database_2of2.zip
mkdir cvu
mv cvupack_Linux_x86_64.zip cvu
cd cvu
unzip cvupack_Linux_x86_64.zip

Step 6: Verify that the following packages exist
64-bit only:

yum install binutils.x86_64 -y
yum install elfutils-libelf.x86_64 -y
yum install elfutils-libelf-devel.x86_64 -y
yum install gcc.x86_64 -y
yum install gcc-c++.x86_64 -y
yum install glibc-common.x86_64 -y
yum install libstdc++-devel.x86_64 -y
yum install make.x86_64 -y
yum install sysstat.x86_64 -y

Both 32 and 64 bit:

yum install compat-libstdc++-33.i386 -y
yum install compat-libstdc++-33.x86_64 -y
yum install glibc.i686 -y
yum install glibc.x86_64 -y
yum install glibc-devel.i386 -y
yum install glibc-devel.x86_64 -y
yum install libaio.i386 -y
yum install libaio.x86_64 -y
yum install libgcc.i386 -y
yum install libgcc.x86_64 -y
yum install libstdc++.i386 -y
yum install libstdc++.x86_64 -y
yum install libaio-devel.x86_64 -y
yum install libaio-devel.i386 -y
yum install unixODBC.x86_64 -y
yum install unixODBC.i386 -y
yum install unixODBC-devel.i386 -y
yum install unixODBC-devel.x86_64 -y
yum install pdksh.i386 -y
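The 64-bit-only installs all follow one pattern, so they can be generated rather than typed out. A sketch, shown as a dry run that just prints the commands (package list copied from Step 6):

```shell
# Dry run: emit the nine 64-bit-only yum commands from Step 6.
gen_yum64_cmds() {
  for p in binutils elfutils-libelf elfutils-libelf-devel gcc gcc-c++ \
           glibc-common libstdc++-devel make sysstat; do
    printf '%s\n' "yum install ${p}.x86_64 -y"
  done
}
gen_yum64_cmds
```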
Step 7: Change security level
Disable SELinux. As root, check the current state:

selinuxenabled && echo enabled || echo disabled

To disable:

echo 0 > /selinux/enforce

Step 8: Check NTP

vi /etc/sysconfig/ntpd

Add -x to the end of the OPTIONS line (inside the quote marks).

/sbin/service ntpd stop
/sbin/service ntpd start
/usr/sbin/ntpq
ntpq> peers

Make sure at least one entry shows up. If not:
1) copy /etc/ntp.conf from RAC1
2) /sbin/service ntpd stop
3) /sbin/service ntpd start
4) /usr/sbin/ntpq
5) ntpq> peers

For ntpd reference, see: http://www.eecis.udel.edu/~mills/ntp/html/ntpd.html

Step 9: Set kernel parameters

vi /etc/sysctl.conf

kernel.sem=250 32000 100 142
fs.file-max=327679
net.ipv4.ip_local_port_range=1024 65000
net.core.rmem_default=4194304
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=262144
net.ipv4.tcp_rmem=4194304 4194304 4194304
net.ipv4.tcp_wmem=262144 262144 262144

vi /etc/security/limits.conf

oracle soft nofile 131072
oracle hard nofile 131072
oracle soft nproc 131072
oracle hard nproc 131072

vi /etc/pam.d/login

session required pam_limits.so

Have the system changes take effect:

sysctl -p

Step 10: Configure hangcheck timer

/sbin/insmod /lib/modules/2.6.18-308.11.1.el5/kernel/drivers/char/hangcheck-timer.ko hangcheck_tick=1 hangcheck_margin=10 hangcheck_reboot=1
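Rather than hand-editing /etc/sysctl.conf in Step 9, the same settings can be appended with a heredoc. This sketch writes to a scratch file so it is safe to run as-is; point it at /etc/sysctl.conf (as root) to apply for real, then run sysctl -p:

```shell
# Write the Step 9 kernel parameters to a scratch file via heredoc.
SYSCTL_SCRATCH=$(mktemp)
cat > "$SYSCTL_SCRATCH" <<'EOF'
kernel.sem=250 32000 100 142
fs.file-max=327679
net.ipv4.ip_local_port_range=1024 65000
net.core.rmem_default=4194304
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=262144
net.ipv4.tcp_rmem=4194304 4194304 4194304
net.ipv4.tcp_wmem=262144 262144 262144
EOF
wc -l < "$SYSCTL_SCRATCH"
```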
Check that at least 1 row is returned:

[root@RAC1 bin]# lsmod | grep -i hang
hangcheck_timer 2526 0

Add the command to /etc/rc.d/rc.local:

vi /etc/rc.d/rc.local
/sbin/insmod /lib/modules/2.6.18-308.11.1.el5/kernel/drivers/char/hangcheck-timer.ko hangcheck_tick=1 hangcheck_margin=10 hangcheck_reboot=1

Step 11: Configure network
Right before we configured our disks in the Create Shared Disks section above, we created the server with the following IP addresses:

Node 1 Public: 192.168.0.101 (eth0)
Node 1 Private: 192.168.1.101 (eth1)

[root@RAC1 ~]# cat /etc/hosts
127.0.0.1 RAC1 RAC1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

# Add these lines:
# Public
192.168.0.101 rac1.localdomain rac1
192.168.0.102 rac2.localdomain rac2

# Private
192.168.1.101 rac1-priv.localdomain rac1-priv
192.168.1.102 rac2-priv.localdomain rac2-priv

# Virtual
192.168.0.171 rac1-vip.localdomain rac1-vip
192.168.0.181 rac2-vip.localdomain rac2-vip

# SCAN
192.168.0.190 rac-scan.localdomain rac-scan
192.168.0.191 rac-scan.localdomain rac-scan
192.168.0.192 rac-scan.localdomain rac-scan

Step 12: Configure ASM support

Step 12.1: Download 3 files based on kernel version
http://www.oracle.com/technetwork/server-storage/linux/downloads/rhel5-084877.html

oracleasm-2.6.18-308.11.1.el5-2.0.5-1.el5.x86_64.rpm
oracleasmlib-2.0.4-1.el5.x86_64.rpm
oracleasm-support-2.1.7-1.el5.x86_64.rpm

Step 12.2: Install ASM RPMs as root

rpm -ivf oracleasm-support-2.1.7-1.el5.x86_64.rpm
rpm -ivf oracleasm-2.6.18-308.11.1.el5-2.0.5-1.el5.x86_64.rpm
rpm -ivf oracleasmlib-2.0.4-1.el5.x86_64.rpm

Step 12.3: Check that all were installed successfully

[root@RAC1 software]# rpm -qav | grep oracleasm
oracleasm-2.6.18-308.11.1.el5-2.0.5-1.el5
oracleasm-support-2.1.7-1.el5
oracleasmlib-2.0.4-1.el5

Step 12.4: Configure ASM
[root@RAC1 software]# /etc/init.d/oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library driver. The following questions will determine whether the driver is loaded on boot and what permissions it will have. The current values will be shown in brackets ([]). Hitting <ENTER> without typing an answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: asm
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done

Step 12.5: Initialize ASM

[root@RAC1 /]# /etc/init.d/oracleasm stop
Dropping Oracle ASMLib disks: [ OK ]
Shutting down the Oracle ASMLib driver: [ OK ]

[root@RAC1 /]# /etc/init.d/oracleasm start
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]

[root@RAC1 /]# /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes

Step 13: Verify Cluster

Step 13.1: Run cluvfy

[oracle@RAC1 bin]$ pwd
/home/local/oracle/software/cvu/bin
[oracle@RAC1 bin]$ ./cluvfy comp sys -n RAC1 -p crs -r 11gR2 -osdba dba

Verifying system requirement
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for RAC1:/tmp
Check for multiple users with UID value 500 passed
User existence check passed for oracle
Group existence check passed for oinstall
Group existence check passed for dba
Membership check for user oracle in group oinstall [as Primary] passed
Membership check for user oracle in group dba passed
Run level check passed
Hard limits check passed for maximum open file descriptors
Soft limits check passed for maximum open file descriptors
Hard limits check passed for maximum user processes
Soft limits check passed for maximum user processes
System architecture check passed
Kernel version check passed
Kernel parameter check passed for semmsl
Kernel parameter check passed for semmns
Kernel parameter check passed for semopm
Kernel parameter check passed for semmni
Kernel parameter check passed for shmmax
Kernel parameter check failed for shmmni
Check failed on nodes:
        RAC1
Kernel parameter check passed for shmall
Kernel parameter check failed for file-max
Check failed on nodes:
        RAC1
Kernel parameter check passed for ip_local_port_range
Kernel parameter check passed for rmem_default
Kernel parameter check passed for rmem_max
Kernel parameter check passed for wmem_default
Kernel parameter check failed for wmem_max
Check failed on nodes:
        RAC1
Kernel parameter check failed for aio-max-nr
Check failed on nodes:
        RAC1
Package existence check passed for make
Package existence check passed for binutils
Package existence check passed for gcc(x86_64)
Package existence check passed for libaio(x86_64)
Package existence check passed for glibc(x86_64)
Package existence check passed for compat-libstdc++-33(x86_64)
Package existence check passed for elfutils-libelf(x86_64)
Package existence check passed for elfutils-libelf-devel
Package existence check passed for glibc-common
Package existence check passed for glibc-devel(x86_64)
Package existence check passed for glibc-headers
Package existence check passed for gcc-c++(x86_64)
Package existence check passed for libaio-devel(x86_64)
Package existence check passed for libgcc(x86_64)
Package existence check passed for libstdc++(x86_64)
Package existence check passed for libstdc++-devel(x86_64)
Package existence check passed for sysstat
Package existence check passed for ksh
Check for multiple users with UID value 0 passed
Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed
Time zone consistency check passed

Verification of system requirement was unsuccessful on all the specified nodes.

Step 13.2: Run cluvfy with the fixup switch

./cluvfy comp sys -n RAC1 -p crs -r 11gR2 -osdba dba -fixup -fixupdir /home/local/oracle/software/cvu/bin/fixit

Log in as root:

cd /tmp/CVU_11.2.0.3.0_oracle
./runfixup.sh

Log back in as oracle:

su - oracle

Step 13.3: Verify Cluster again

[oracle@RAC1 bin]$ ./cluvfy comp sys -n RAC1 -p crs -r 11gR2 -osdba dba

Verifying system requirement
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for RAC1:/tmp
Check for multiple users with UID value 500 passed
User existence check passed for oracle
Group existence check passed for oinstall
Group existence check passed for dba
Membership check for user oracle in group oinstall [as Primary] passed
Membership check for user oracle in group dba passed
Run level check passed
Hard limits check passed for maximum open file descriptors
Soft limits check passed for maximum open file descriptors
Hard limits check passed for maximum user processes
Soft limits check passed for maximum user processes
System architecture check passed
Kernel version check passed
Kernel parameter check passed for semmsl
Kernel parameter check passed for semmns
Kernel parameter check passed for semopm
Kernel parameter check passed for semmni
Kernel parameter check passed for shmmax
Kernel parameter check passed for shmmni
Kernel parameter check passed for shmall
Kernel parameter check passed for file-max
Kernel parameter check passed for ip_local_port_range
Kernel parameter check passed for rmem_default
Kernel parameter check passed for rmem_max
Kernel parameter check passed for wmem_default
Kernel parameter check passed for wmem_max
Kernel parameter check passed for aio-max-nr
Package existence check passed for make
Package existence check passed for binutils
Package existence check passed for gcc(x86_64)
Package existence check passed for libaio(x86_64)
Package existence check passed for glibc(x86_64)
Package existence check passed for compat-libstdc++-33(x86_64)
Package existence check passed for elfutils-libelf(x86_64)
Package existence check passed for elfutils-libelf-devel
Package existence check passed for glibc-common
Package existence check passed for glibc-devel(x86_64)
Package existence check passed for glibc-headers
Package existence check passed for gcc-c++(x86_64)
Package existence check passed for libaio-devel(x86_64)
Package existence check passed for libgcc(x86_64)
Package existence check passed for libstdc++(x86_64)
Package existence check passed for libstdc++-devel(x86_64)
Package existence check passed for sysstat
Package existence check passed for ksh
Check for multiple users with UID value 0 passed
Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed
Time zone consistency check passed

Verification of system requirement was successful.

Step 14: Create ASM disks
As root, reset the headers on the disks:

dd if=/dev/zero of=/dev/sdb bs=1024 count=1000
dd if=/dev/zero of=/dev/sdc bs=1024 count=1000
dd if=/dev/zero of=/dev/sdd bs=1024 count=1000
dd if=/dev/zero of=/dev/sde bs=1024 count=1000
dd if=/dev/zero of=/dev/sdf bs=1024 count=1000

Make sure ownership and permissions are correct on both nodes:

[root@RAC1 etc]# ls -ltr /dev/sd*
brw-rw---- 1 oracle oinstall 253, 3 Aug 9 08:03 /dev/sdb
brw-rw---- 1 oracle oinstall 253, 4 Aug 9 08:03 /dev/sdc
brw-rw---- 1 oracle oinstall 253, 5 Aug 9 08:03 /dev/sdd
brw-rw---- 1 oracle oinstall 253, 6 Aug 9 08:03 /dev/sde
brw-rw---- 1 oracle oinstall 253, 6 Aug 9 08:03 /dev/sdf
brw-rw---- 1 oracle oinstall 253, 3 Aug 9 08:03 /dev/sdb1
brw-rw---- 1 oracle oinstall 253, 4 Aug 9 08:03 /dev/sdc1
brw-rw---- 1 oracle oinstall 253, 5 Aug 9 08:03 /dev/sdd1
brw-rw---- 1 oracle oinstall 253, 6 Aug 9 08:03 /dev/sde1
brw-rw---- 1 oracle oinstall 253, 6 Aug 9 08:03 /dev/sdf1

As root:

[root@RAC1 ~]# /etc/init.d/oracleasm createdisk data1 /dev/sdb1
Writing disk header: done
Instantiating disk: done
[root@RAC1 ~]# /etc/init.d/oracleasm createdisk data2 /dev/sdc1
Writing disk header: done
Instantiating disk: done

[root@RAC1 ~]# /etc/init.d/oracleasm createdisk data3 /dev/sdd1
Writing disk header: done
Instantiating disk: done

[root@RAC1 ~]# /etc/init.d/oracleasm createdisk data4 /dev/sde1
Writing disk header: done
Instantiating disk: done

[root@RAC1 ~]# /etc/init.d/oracleasm createdisk data5 /dev/sdf1
Writing disk header: done
Instantiating disk: done

[root@RAC1 ~]# /etc/init.d/oracleasm listdisks
DATA1
DATA2
DATA3
DATA4
DATA5

Step 15: Clone the VM
Shut down RAC1:

# shutdown -h now

Clone the RAC.vdi disk:

VBoxManage clonehd c:\VMs\RAC1\RAC.vdi c:\VMs\RAC2\RAC.vdi

Create the RAC2 virtual machine in VirtualBox in the same way as you did for RAC1, with the exception of using the c:\VMs\RAC2\RAC.vdi virtual hard drive.

Add the second network adaptor as you did on RAC1. After the VM is created, attach the shared disks to RAC2:

VBoxManage storageattach RAC2 --storagectl SATA --port 1 --device 0 --type hdd --medium c:\VMs\shared\asm1.vdi --mtype shareable
VBoxManage storageattach RAC2 --storagectl SATA --port 2 --device 0 --type hdd --medium c:\VMs\shared\asm2.vdi --mtype shareable
VBoxManage storageattach RAC2 --storagectl SATA --port 3 --device 0 --type hdd --medium c:\VMs\shared\asm3.vdi --mtype shareable
VBoxManage storageattach RAC2 --storagectl SATA --port 4 --device 0 --type hdd --medium c:\VMs\shared\asm4.vdi --mtype shareable
VBoxManage storageattach RAC2 --storagectl SATA --port 5 --device 0 --type hdd --medium c:\VMs\shared\asm5.vdi --mtype shareable

Start RAC2 by clicking the Start button on the toolbar. Ignore any network errors during the startup.
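The five storageattach commands shown above differ only in the port number and disk image name, so they can be generated with a small loop. A sketch that prints the commands for review (a dry run) rather than executing them — the paths and disk names match this article's setup:

```shell
# Print the VBoxManage storageattach commands for the five shared ASM
# disks (dry run); pipe the output to a shell, or drop the echo, to run.
gen_attach_cmds() {
  node=$1
  for port in 1 2 3 4 5; do
    echo "VBoxManage storageattach $node --storagectl SATA --port $port --device 0 --type hdd --medium c:\\VMs\\shared\\asm$port.vdi --mtype shareable"
  done
}

gen_attach_cmds RAC2
```

Reviewing the generated commands before running them avoids attaching a disk to the wrong port on the shared controller.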


Log in to the RAC2 as root and reconfigure the network settings:

hostname: RAC2
IP Address eth0: 192.168.0.102 (public address)
Default Gateway eth0: 192.168.0.1 (public address)
IP Address eth1: 192.168.1.102 (private address)
Default Gateway eth1: none

Amend the hostname in the /etc/sysconfig/network file.

NETWORKING=yes
HOSTNAME=RAC2

Remove the current ifcfg-eth0 and ifcfg-eth1 scripts and rename the original scripts from the backup names:

# cd /etc/sysconfig/network-scripts/
# rm ifcfg-eth0 ifcfg-eth1
# mv ifcfg-eth0.bak ifcfg-eth0
# mv ifcfg-eth1.bak ifcfg-eth1

Edit the /home/oracle/.bash_profile file and correct the ORACLE_SID and ORACLE_HOSTNAME values.

ORACLE_SID=RAC2; export ORACLE_SID
ORACLE_HOSTNAME=RAC2; export ORACLE_HOSTNAME

Restart RAC2 and start RAC1. When both nodes have started, check they can both ping all the public and private IP addresses using the following commands:

ping -c 3 RAC1
ping -c 3 RAC1-priv
ping -c 3 RAC2
ping -c 3 RAC2-priv

On Node 2 as root:

[root@RAC2 CVU_11.2.0.3.0_oracle]# /etc/init.d/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk DATA1
Instantiating disk DATA2
Instantiating disk DATA3
Instantiating disk DATA4
Instantiating disk DATA5
[root@RAC2 CVU_11.2.0.3.0_oracle]# /etc/init.d/oracleasm listdisks
DATA1
DATA2
DATA3
DATA4
DATA5

Install the Oracle Grid software
As the oracle user on node 1 (RAC1):

cd /home/local/oracle/software/grid
./runInstaller
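The connectivity checks can be wrapped in one small loop and run on each node. A minimal sketch, using the hostnames from this article:

```shell
# Verify that every public and private cluster address responds; run this
# on both nodes after the network reconfiguration. Prints one status line
# per host so a failure is easy to spot.
check_cluster_ping() {
  for host in RAC1 RAC1-priv RAC2 RAC2-priv; do
    if ping -c 3 "$host" >/dev/null 2>&1; then
      echo "$host: OK"
    else
      echo "$host: UNREACHABLE"
    fi
  done
}

check_cluster_ping
```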


After installation completes, a configuration file called root.sh must be run on all nodes.

If root.sh fails on any node other than the first one, perform the following steps:

On all nodes:

1. Modify /etc/sysconfig/oracleasm with:

ORACLEASM_SCANORDER="dm"
ORACLEASM_SCANEXCLUDE="sd"

2. Restart the asmlib (on all nodes except the 1st node):

# /etc/init.d/oracleasm restart

3. De-configure the root.sh settings on all nodes (except the 1st node):

$GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force

4. Run root.sh again on all nodes except the first.

Output of root.sh on node 1:

[root@RAC1 grid]# ./root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /oracleasm/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file dbhome already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying dbhome to /usr/local/bin ...
The file oraenv already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying oraenv to /usr/local/bin ...
The file coraenv already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2012-07-06 00:14:20: Parsing the host name
2012-07-06 00:14:20: Checking for super user privileges
2012-07-06 00:14:20: User has super user privileges
Using configuration parameter file: /oracleasm/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user root, privgrp root..
Operation successful.
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request


pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-2672: Attempting to start ora.gipcd on RAC1
CRS-2672: Attempting to start ora.mdnsd on RAC1
CRS-2676: Start of ora.gipcd on RAC1 succeeded
CRS-2676: Start of ora.mdnsd on RAC1 succeeded
CRS-2672: Attempting to start ora.gpnpd on RAC1
CRS-2676: Start of ora.gpnpd on RAC1 succeeded
CRS-2672: Attempting to start ora.cssdmonitor on RAC1
CRS-2676: Start of ora.cssdmonitor on RAC1 succeeded
CRS-2672: Attempting to start ora.cssd on RAC1
CRS-2672: Attempting to start ora.diskmon on RAC1
CRS-2676: Start of ora.diskmon on RAC1 succeeded
CRS-2676: Start of ora.cssd on RAC1 succeeded
CRS-2672: Attempting to start ora.ctssd on RAC1
CRS-2676: Start of ora.ctssd on RAC1 succeeded

ASM created and started successfully.

DiskGroup DATA created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user root, privgrp root..
Operation successful.
CRS-2672: Attempting to start ora.crsd on RAC1
CRS-2676: Start of ora.crsd on RAC1 succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 4baed8b3ca254f86bf91e6a19ef6aeeb.
Successful addition of voting disk 0e8a2bac79f84fdcbf1a5dcd73fa208e.
Successful addition of voting disk 401dae362bbb4f76bf3bddb8d047a429.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 4baed8b3ca254f86bf91e6a19ef6aeeb (ORCL:DATA1) [DATA]
2. ONLINE 0e8a2bac79f84fdcbf1a5dcd73fa208e (ORCL:DATA2) [DATA]
3. ONLINE 401dae362bbb4f76bf3bddb8d047a429 (ORCL:DATA3) [DATA]
Located 3 voting disk(s).
CRS-2673: Attempting to stop ora.crsd on RAC1
CRS-2677: Stop of ora.crsd on RAC1 succeeded
CRS-2673: Attempting to stop ora.asm on RAC1
CRS-2677: Stop of ora.asm on RAC1 succeeded
CRS-2673: Attempting to stop ora.ctssd on RAC1
CRS-2677: Stop of ora.ctssd on RAC1 succeeded
CRS-2673: Attempting to stop ora.cssdmonitor on RAC1
CRS-2677: Stop of ora.cssdmonitor on RAC1 succeeded
CRS-2673: Attempting to stop ora.cssd on RAC1
CRS-2677: Stop of ora.cssd on RAC1 succeeded
CRS-2673: Attempting to stop ora.gpnpd on RAC1
CRS-2677: Stop of ora.gpnpd on RAC1 succeeded
CRS-2673: Attempting to stop ora.gipcd on RAC1
CRS-2677: Stop of ora.gipcd on RAC1 succeeded
CRS-2673: Attempting to stop ora.mdnsd on RAC1
CRS-2677: Stop of ora.mdnsd on RAC1 succeeded
CRS-2672: Attempting to start ora.mdnsd on RAC1
CRS-2676: Start of ora.mdnsd on RAC1 succeeded
CRS-2672: Attempting to start ora.gipcd on RAC1
CRS-2676: Start of ora.gipcd on RAC1 succeeded
CRS-2672: Attempting to start ora.gpnpd on RAC1


CRS-2676: Start of ora.gpnpd on RAC1 succeeded


CRS-2672: Attempting to start ora.cssdmonitor on RAC1
CRS-2676: Start of ora.cssdmonitor on RAC1 succeeded
CRS-2672: Attempting to start ora.cssd on RAC1
CRS-2672: Attempting to start ora.diskmon on RAC1
CRS-2676: Start of ora.diskmon on RAC1 succeeded
CRS-2676: Start of ora.cssd on RAC1 succeeded
CRS-2672: Attempting to start ora.ctssd on RAC1
CRS-2676: Start of ora.ctssd on RAC1 succeeded
CRS-2672: Attempting to start ora.asm on RAC1
CRS-2676: Start of ora.asm on RAC1 succeeded
CRS-2672: Attempting to start ora.crsd on RAC1
CRS-2676: Start of ora.crsd on RAC1 succeeded
CRS-2672: Attempting to start ora.evmd on RAC1
CRS-2676: Start of ora.evmd on RAC1 succeeded
CRS-2672: Attempting to start ora.asm on RAC1
CRS-2676: Start of ora.asm on RAC1 succeeded
CRS-2672: Attempting to start ora.DATA.dg on RAC1
CRS-2676: Start of ora.DATA.dg on RAC1 succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 131071 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /oraInventory
UpdateNodeList was successful.

If, after running the root.sh script on all nodes, you encounter an error in the Grid installation program, and the error message tells you to search the /oraInventory/logs/installActions<date>.log file and you find an error similar to:

INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name CLUSTER2
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for CLUSTER2 (IP address: 10.230.100.82) failed
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name CLUSTER2

see:
http://www.oracle-base.com/articles/11g/oracle-db-11gr2-rac-installation-
on-ol5-using-vmware-server-2.php
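PRVF-4664 and PRVF-4657 generally mean the SCAN name resolves inconsistently (or not at all) across the nodes. A quick hedged check you can run on each node to compare what it resolves — the SCAN name CLUSTER2 is taken from the error above, and getent is used so /etc/hosts is consulted as well as DNS:

```shell
# Show how this node resolves the SCAN name; run on every node and
# compare the output. Differing or missing results across nodes is
# exactly what PRVF-4664 complains about.
check_scan() {
  scan=$1
  if getent hosts "$scan" >/dev/null 2>&1; then
    getent hosts "$scan" | awk -v s="$scan" '{ print s " -> " $1 }'
  else
    echo "$scan: does not resolve on $(hostname)"
  fi
}

check_scan CLUSTER2
```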


Install the Oracle Database
The installation of the Oracle database is the same as for a non-RAC instance, with a few exceptions. The screens that are unique to RAC are listed below:

On this screen, you are prompted to enter the nodes in the cluster that the database software will be made aware of:

In the database configuration assistant, if you select a RAC installation, the DBCA will automatically select both nodes. On this screen of the wizard, ASM is chosen automatically, since RAC was specified. Note that the ASM file group must exist before running the database configuration wizard:

This screen prompts for the Flash Recovery Area. By default the ASM group is selected:
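The same choices made on these wizard screens can also be driven from the command line with DBCA's silent mode. A hedged sketch, printed as a dry run — the flag names are believed to match the 11.2 dbca silent-mode options (verify with dbca -help), and the database name and passwords are placeholders, not values from this article:

```shell
# Print (dry run) a DBCA silent-mode command mirroring the wizard choices
# above: both nodes, ASM storage, the DATA disk group for data and recovery.
# All names and passwords below are placeholders.
gen_dbca_cmd() {
  echo "dbca -silent -createDatabase \
-templateName General_Purpose.dbc \
-gdbName RACDB -sysPassword change_me -systemPassword change_me \
-nodelist RAC1,RAC2 \
-storageType ASM -diskGroupName DATA -recoveryGroupName DATA"
}

gen_dbca_cmd
```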


Conclusion
For no money (except for memory and disk requirements) you can build
a fully-functional RAC system using Oracles state-of-the-art software.
The sandbox environment is fully functional and can be used to test
Oracle software and learn the ins and outs of Oracles premier database
product.



Dinosaurs in Space - Mobilizing Oracle Forms Applications

Mia Urman
www.auraplayer.com
twitter.com/miaurman
www.facebook.com/auraplayer
il.linkedin.com/in/miaurman

With more than 93% of the current population in possession of a mobile device, we must ask: why can't we access our Oracle Forms based systems from our mobile devices? Companies operating Oracle Forms based systems face a seemingly insurmountable challenge. As their legacy technologies mature, they seek ways to export the business logic trapped within their Forms systems and leverage it in modern technologies such as Webservices, mobile or cloud based environments. Oracle Forms systems are mainly applications in maintenance mode; most were developed over a decade ago and as such, many lack documentation and the original developers are usually unavailable. Reverse engineering the system, even if it could be accomplished, would take years and millions of dollars. Additionally, most Forms based systems are mission critical enterprise systems that allow for little or no downtime, making QA of these processes even more daunting.

Translating Desktop to Mobile - The Mobility Challenge
The challenges of going mobile are not exclusive to Oracle Forms systems. The translation of a system from a desktop environment to a mobile device is a difficult challenge to overcome. The nature of tasks performed on a mobile device is different from tasks done on a desktop, so it's always important to remember to focus on the process you want to run on the mobile device and not simply copy the existing desktop form. Since the user interface of a mobile device is restricted in size and the keyboard overlaps half the screen, managing real estate is a real challenge. As such, you need to look with an editing eye to remove Form fields that do not directly relate to the task at hand. We also must take into consideration that the use of a mouse and keyboard is all but eliminated from a mobile device, as almost all interactions are done with the touch of a finger. This forces us to reexamine how the user navigates, and we must figure out how to most effectively move the user through the process on the mobile based system. As typing on a mobile device can be cumbersome, we also need to find ways to reduce the number of plain text input fields and replace them with clickable lists. These and other differences between mobile and desktop applications must all be considered in order to successfully migrate an existing system to mobile.

The Business Need
Matrix, an Oracle Forms development house, decided to face their Oracle Forms to mobile challenge head-on. They sought to mobilize the surgical scheduling module of its Tafnit ERP application for medical centers as the first module in their overall mobility strategy. Tafnit is used to schedule over 500 different types of procedures for over 2,000,000 patients each year. Due to the existing constraints of Oracle Forms, accessing this system was impossible from outside the walls of the hospital. Surgeons were especially affected by this lack of mobility since, to retrieve schedules or make adjustments to surgeries, doctors would have to call into a telemarketing system or receive a faxed copy of their schedule at home. Matrix recognized this inefficiency but needed a solution that didn't require them to reinvent the wheel or break the bank. They found a solution in a 2-phase modernization project: First, the Oracle Forms system

was upgraded to the web-enabled Oracle Fusion Middleware 11g. Then, the system was modernized to run on all mobile devices using a combination of the Oracle ADF Mobile development framework and AuraPlayer, a unique Oracle Forms Webservice generation technology.

Designing For Mobility
The first step of the mobility redesign process was defining the specific business processes needed to run on the mobile device. Identifying the user actions, along with the input data needed to run the business process and the results that should be returned, were all integral parts of this first design phase. They had to consider not only what would be performed in the system, but also what the possible expected results were. Understanding the system's success and failure messages, and being able to react accordingly in the new system, was critical to enable the mobile system to behave consistently. Once the business process had been selected, they used AuraPlayer to wrap the Oracle Forms business process as a Webservice.

The Webservice Creation Process

Figure 1: The AuraPlayer Recording Toolbar

To begin, the designated business scenario was recorded in the Oracle Forms system using the AuraPlayer Recording Toolbar (a similar process to recording a macro). After the recording was completed, it was automatically deployed as a Webservice using the AuraPlayer Service Manager. The process resulted in generated REST/JSON and SOAP Webservices along with the details of what input and output parameters are part of the service. Once the Webservice generation was complete, Matrix was ready to begin developing the user interface. Although they could have

chosen any web technology that can visualize a Webservice to develop the user interface, they chose Oracle ADF Mobile due to the flexibility of coding in Java and the ability to deploy to both Android and iOS with one code base.

Figure 2: Automatically Generating Webservice in the AuraPlayer Service Manager

Defining Your Data Structures and Offline Capabilities
Using wizard based development in Jdeveloper, the ADF Mobile DataControls were created based on the existing AuraPlayer Webservices. This gave Matrix the basis for binding the ADF Mobile page items to the Oracle Forms values returned from the Webservice. In addition, the Jdeveloper wizard developed a localized database that would reside on the mobile device. This would allow the application to work offline on cached data if needed and allow synchronization of changes back to the original Forms systems once the connection had been restored.

Figure 3: Jdeveloper Webservice DataControl development Wizard

Designing Your Mobile User Experience
The UI generation wizard was then used to create the ADF Mobile AMX pages and the navigation flows between the pages. The wizard built default pages and the navigation task flow, allowing Matrix to concentrate only on extending this application to include native device features like Calendar, location based services and phone capabilities.

Figure 4: Jdeveloper UI generation Wizard

The beauty of ADF Mobile is that Matrix was able to develop an app using a simple declarative wizard, simply by copying the URL of the Webservices received from the AuraPlayer into Jdeveloper. Using the combination of AuraPlayer and ADF Mobile allowed Matrix to extend their existing system to new environments. In the modernized system, Matrix only maintains one source of business logic in the Oracle Forms system for the 2 UIs: a Java applet-based UI for the back-end users and a lightweight Oracle ADF Mobile app based on a Forms Webservice for on-the-go users. In coupling AuraPlayer with ADF Mobile, Matrix was able to implement its mobile app in a matter of days without any costly redevelopment or migration of the underlying system. Now that the Oracle Forms system modernization is complete, Matrix has a single management system with different user interfaces, both web-based and mobile, that access the same core system. Within less than a week of development, a new mobile offering was available, enabling doctors to access surgery scheduling data and patient information from anywhere at any time.

Figure 5: New Mobile Surgery scheduling app

Figure 6: The Application Task Flow diagram



Provisioning Fusion Middleware using Chef and Puppet | Part I

Ronald van Luttikhuizen
www.vennster.nl
twitter.com/rluttikhuizen
nl.linkedin.com/in/soaarchitecture

Simon Haslam
www.veriton.com
twitter.com/simon_haslam
www.facebook.com/thozturk
uk.linkedin.com/in/simonhaslam


There are several ways to speed up installation and configuration of Oracle Fusion Middleware products so you can spend more of your time on creating valuable solutions for the business using these products. Common approaches involve a public or private cloud: you can move your infrastructure and middleware to a cloud provider or use off-the-shelf and preconfigured appliances such as Oracle Exalogic, Oracle Exadata or the O-box SOA Appliance to create your on-site private cloud. A third approach, in case you insist on installing and configuring middleware yourself, is to automate the provisioning process instead of performing it manually. The infrastructure on which you provision can be on premise as well as in the cloud. Currently, Chef and Puppet are the most popular general-purpose configuration management tools out there. These tools are very well suited for automated provisioning of middleware. This two-part article explains the position of middleware provisioning in the entire process of software delivery, explains the advantages of automated provisioning compared to other approaches, introduces Chef and Puppet, and indicates how you can get started with Oracle Middleware provisioning using these tools.

Note that Oracle also provides its own specialized tooling for server and middleware provisioning, such as Oracle Virtual Assembly Builder (OVAB). We won't be covering these tools in this article. You can find more information on OVAB on the Oracle Technology Network (OTN) website.

Software Delivery and Middleware Provisioning
The entire process of delivering functional software on infrastructure consists of a number of steps that are shown in the following picture:

As a first step you will need machines on which you can install middleware, packaged apps, and/or custom software: either physical machines or virtual machines, either on premise or in the cloud. Cloud providers and server virtualization products such as Oracle VM, VMware, and Oracle VirtualBox provide features to automate the provisioning of OS-ready images. Since you don't want to administer and frequently update dozens of different server images with new patches, versions, and so on, the number of different images is usually small and the images themselves are very basic. In other words, they only contain a small set of necessary packages.

As a next step we want to roll out non-functional software such as packages and middleware on these basic images and keep these configurations up to date with the latest security patches, new versions, and so on. Configuration management tools such as Chef and Puppet are designed to apply such changes to machine configuration frequently in a controlled fashion.

These first two steps are usually in the domain of IT Operations and together result in fully configured servers on which functional software can then be deployed.

The next step is to build, test, and package your (custom) software such as applications, services, and business processes as part of the software development process. When this process is automated and performed at a high frequency we use the term Continuous Integration. To facilitate this we use yet another set of tools: build servers such as Hudson or Jenkins, build and packaging tools such as Maven or Ant, version control systems such as Git or Subversion, test tools such as JUnit and FitNesse, and repositories to store our packaged software such as Nexus or Artifactory.

Finally we need to deploy our applications, services, and business processes from our artifact repository to our middleware-provisioned servers, which is called continuous deployment when performed in an automated and repetitive fashion. While you can use the same tools for deployment as described for middleware provisioning and continuous integration, this can become complex if there are several dependencies and ordering constraints between the deployable artifacts (database scripts, Java components, SOA composites, BPMN processes, etc.). You can use specialized deployment tools to manage dependencies and define rollback scenarios.

In general, as software delivery becomes more and more automated, the tools that come along with it are better integrated too. An example is the integration between Vagrant and Chef or Puppet, where you can initiate the creation of a virtual machine based on a configuration file that will automatically trigger server provisioning in one go.

Automating the software delivery process improves time-to-market and quality of new software and increases predictability. It is relatively easy to create a business case to demonstrate the added value of this, compared to a software delivery process with lots of manual steps in it. Depending on the state and level of automation of the various parts of the software delivery process, you can choose which aspect to deal with first when you want to improve the delivery process. You don't need to eat the entire software delivery elephant in one go.

History of Middleware Provisioning
Let's take a trip down memory lane. In the early days of middleware provisioning we would install and configure all middleware ourselves, manually. This way of working is called artisan server crafting and is manageable as long as the number of servers is low. It is comparable to when you get your new laptop and spend a couple of days to get it the way you


want. This is doable for one laptop, but becomes pretty boring when you need to manage dozens of laptops.

No matter how good we are in our profession, from time to time we will make some errors, especially when the middleware installation and configuration is complicated and involves a great many steps. Also, you will notice that after some period of time the state of the servers isn't the same across all servers: manual changes were applied to some of the servers but were forgotten on others. In the end, managing a lot of (identical) servers manually is boring, stressful, error-prone and time consuming, and thereby non-scalable and very expensive.

To cope with the growing number of servers and the complexity of the installations, we created our own proprietary scripts and frameworks to automate certain aspects of the installation for certain products on certain operating systems. While this improved the quality and timeliness of the installation, the scripts were often proprietary. New employees needed time to learn these scripts, the scripts couldn't easily be used by other organizations and individuals, and the scripts were only applicable to a certain type of middleware product and version for a specific operating system.

To this end, standard configuration management tools emerged that support most operating systems, have a uniform way to describe a wide variety of server configurations, are able to provision servers, and continuously keep them in check. Automated configuration management tools often use a Domain-Specific Language (DSL) to describe a server configuration independent of the underlying machine, provider and operating system. You describe the desired state once and can apply it as often as you want.

Today, Chef and Puppet are among the most popular of these general-purpose configuration management tools and have a big community base. When you hire new people for your DevOps team, chances are good they know Chef and Puppet.

Advantages of using configuration management tools
So you know about the evolution from artisan server crafting to the use of standard configuration management tools. More precisely, these tools offer the following concrete benefits:
- Automate the repetitive installation and configuration tasks, so you can focus on improvements and real problems;
- Predictable results through automation;
- (Near) real-time provisioning: automated provisioning is fast;
- Keep servers in sync: a configuration management tool checks the current state of a server periodically and applies the changes needed to bring it (back) to the desired state;


- The configuration is human readable, thereby serves as documentation, and is always up-to-date;
- Configuration is version controlled like any other software artefact;
- Configurations can be changed, tested and reapplied in DTAP environments just like other software.

The term "infrastructure as code" is often used to describe this new way of working and the benefits associated with automated server provisioning. In this respect the worlds of development and IT operations, which used to be clearly separated from each other, become more and more integrated and work together.

Introduction to Puppet
Puppet is an open-source configuration management tool from Puppet Labs written in Ruby. It is shipped as a freely available open-source variant or as a commercially supported Enterprise Edition that comes with additional features such as a management console, support, Role-Based Access Control, and so on. Puppet Labs was founded in 2005.

Puppet describes the desired state of a server in so-called manifests. Manifest files declare resources such as files, packages, and services using predefined attributes in a Domain Specific Language. Manifests are applied by Puppet to nodes; the term Puppet uses to denote machines that need to be provisioned. Puppet manifests are stored in files with a .pp extension.

The following snippet shows part of a manifest that will install and configure Apache HTTP Server when applied to a node:

package { 'httpd':
  name   => 'httpd.x86_64',
  ensure => present,
}
file { 'http.conf':
  path    => '/etc/httpd/conf/httpd.conf',
  owner   => 'root',
  group   => 'root',
  mode    => '0644',
  source  => 'puppet:///modules/apache/httpd.conf',
  require => Package['httpd'],
}
service { 'httpd':
  ensure    => running,
  enable    => true,
  subscribe => File['http.conf'],
}

The manifest contains three resource declarations: package, file, and service. Every resource declaration has a name to uniquely identify it and a set of name/value pairs to instruct Puppet how the resource should be configured. In the above example:
- the package httpd.x86_64 needs to be installed;
- the configuration file http.conf must be present, should have certain rights and should be identical to an http.conf file that is centrally stored by Puppet;
- the service httpd should be running.

Notice that you can use attributes such as require and subscribe to


indicate a certain order in which the resources are applied, or triggered, by Puppet. If not specified, Puppet will determine the order in which resources are applied itself. Also, resources with the same name are applied once per node and not as often as you declare them. Resources are only applied by Puppet when the target state of a node does not match the resource declarations for that node.

You can learn more about built-in resource types in Puppet such as file and user in the Puppet Type Reference at http://docs.puppetlabs.com/references/latest/type.html. Besides these out-of-the-box resource types you can also define your own resource types to (re)use in your manifests.

Parts of manifests can be dependent on the specific state of a node, such as the operating system, amount of memory, and so on. An example is the package name that can differ between various Linux distributions; for example ntp versus ntpd. Puppet uses so-called facts that you can use to retrieve information on the node you are operating on. You can use both out-of-the-box facts and define your own custom facts.

if $operatingsystem == 'CentOS'

Several resource declarations can be bundled into coarser-grained units called classes. Manifests, together with accompanying files, classes, templates, etc. are packaged into modules. Modules are autonomous building blocks that provide a certain functionality. An example would be an Oracle XE module which has all the necessary code and configuration (except the actual binaries) to install Oracle XE on one or more nodes. You can create your own modules or reuse existing ones that are published by Puppet Labs or the community on the Puppet Forge: https://forge.puppetlabs.com. For example, have a look at the Oracle modules on Puppet Forge by Edwin Biemond at https://forge.puppetlabs.com/biemond.

Not all your nodes will have the same configuration. You can use node declarations in Puppet to indicate what classes and resources should be applied on what nodes.

node 'www1.example.com' {
  include common
  include apache
}
node 'db1.example.com' {
  include common
  include oraclexe
}

There are mainly two ways of letting Puppet do its magic at runtime. The first approach (left-hand side of the following figure) is by triggering Puppet runs on nodes manually or periodically using some scheduler such as Cron. As part of a Puppet run, Puppet will compile the manifests that it needs to apply on that node into a node-specific catalog. It will then inspect the state of the node and determine what the deviation is with the desired state as described in the manifests. If there is a deviation, Puppet will apply the catalog so the result is a node that is in the desired state.


The right-hand side depicts the Puppet client/server model in which the
central Puppet Master plays an important role in the coordination of
the provisioning process. Every node that needs to be provisioned has a
Puppet Agent installed. The Agent will periodically ask the Master for
the catalog to be applied. The Master authenticates the Agent and puts
together a compiled catalog for that Agent. After the catalog has been
applied by the Agent, the Agent will report back to the Master with a
status update of the Puppet run. Status updates, run information, node
declarations and so on can be inspected in the Puppet Enterprise
Console. The Puppet Enterprise Edition is free up to ten managed nodes.
Above that number of nodes you pay an amount per node, in which the
node price is lowered stepwise when the total number of nodes
increases. For information on pricing, reference customers, and
specific differences between the open-source and Enterprise Edition of
Puppet see http://puppetlabs.com/puppet/puppet-enterprise.

This concludes the first part of this article. In the second part of
this article we will introduce another popular configuration management
tool called Chef, discuss how tools like Puppet and Chef can help you
in the provisioning of Oracle Fusion Middleware, and provide
conclusions.



Mobility for WebCenter Content
Troy Allen
www.tekstream.com
twitter.com/troy_allen_TS
www.linkedin.com/pub/troy-allen/3/b17/46b



Mobility for
WebCenter Content
1/5 Troy Allen

With the release of Oracle's WebCenter Content (WCC) 11.1.1.8 (SP7),
Oracle also released a mobile app for WCC for Android and iOS. As with
most initial software releases, the mobile application was a framework
of some functionality and held great promise for what will be coming in
future releases. After a few updates to the mobile app, Oracle has
filled out some of the key functions that make it a viable tool for
content on the run, but there is more work to be done. Overall, it is a
sound tool and I use it as a quick way to keep in touch with my
documents, but I want more out of it. This article will take a look at
what is working well, what needs some help, and what needs to be added
to turn the WCC mobile app into a great application I can't do without.

The mobile application (latest released version at the time of this
article is 11.1.1.8.1.1) is geared towards standard document management
functions including: Search, Libraries, Viewing, Downloads, Check-in,
Workflow, and Sharing.

Search
For those who are familiar with the extensive search capabilities of
WCC server, the mobile version may seem like a letdown. That was my
first reaction, but as I started using the tool, I realized that I
didn't need more bells and whistles to find my documents and get work
done.

Searching with the WCC Mobile app is very much like using the Quick
Search from the primary web-based user interface. The search field
performs a query against the Title, Comments, Full-Text, and a few
other metadata fields. Once a result is found, the user is presented
with a result set.


The mobile version allows users to filter their search result by
Content Type or Security Group. Power users of the WCC primary
web-based interface, used to profiled searches and the search Query
Builder form, may be a bit frustrated at first with the mobile
application, but this tool is intended for general users who need to
find their content quickly, view it, and make decisions. For that very
reason, Oracle maintains the newly introduced Libraries and FrameWork
folders to help people navigate to what is important (Libraries and
FrameWork Folders were introduced in Oracle WCC 11.1.1.8).

Libraries
WCC's Libraries are designed to provide a logical categorization of
content within the repository. Within the Libraries, folder structures
continue the categorization and are often used to automatically assign
metadata and security information to the folder and its content. This
functionality allows for dragging and dropping content into the system
without having to worry about how it will be tagged, streamlining the
check-in processes.

The mobile app also takes advantage of Libraries and Folders by
providing users with an intuitive navigation path to finding content
that they need. Users can tap on the Library, tap on the Info icon to
see metadata about the Library or folder, and can sort the Libraries to
their desired view (by ascending or descending order). Once a user has
navigated into a Library, they also have the ability to add new folders
by tapping the plus sign in the lower portion of the screen. In testing
this functionality, I expected to be able to also create a new library,
but this functionality is restricted to the ADF (Application
Development Framework) version of the WCC user interface. Despite the
inability to create libraries, enabling users to create their own
folders makes the sharing of information across multiple platforms more
intuitive and relatively easy.


Viewing
Viewing anything from Word documents to images is easy with the WCC
mobile application. From either a search result or from a folder,
simply tap the title of the document and the application will render
the content for viewing. It is important to note that the file that is
getting rendered is the Native Version.

In WCC, original files that are uploaded are stored in their original
file format as the "Native Format". In most cases, WCC is also
automatically creating a "Web Viewable" version, typically in PDF
format. Many organizations using WCC are adding additional
functionality to their PDF versions such as watermarking them, or
applying dynamic information that appears on the PDF such as who
downloaded it, who the author was, the date it was downloaded, some
kind of watermark, or other pertinent metadata that should be viewed as
part of the PDF. In the mobile app, this functionality is not
available. While being able to see the Native Version rendered on my
mobile device is nice, there may be important information in the Web
Viewable version that I need as well. I'm hoping that Oracle will
embrace its native ability to perform document conversions and make
them part of the mobile application for viewing in future releases.

Downloads
As with viewing, the ability to download a document through the mobile
application is limited to the Native Version of the file. I use a
number of applications on my mobile devices to create slide shows, edit
images or videos, take notes, create documents, or to share content
through cloud-based services. Oracle's mobile


app uses a proprietary storage location for the files downloaded
through the application, which makes it difficult to find them and use
them with other tools. I must, in all fairness, also note that this is
not an unusual practice for applications working with content on both
iOS and Android devices. However, the user does have the option of
opening the document or file in other tools by tapping the up-arrow box
at the bottom of the screen. The Native Version will be transferred to
the application you choose and you can work with it as a one-off
document (any changes made to the document in this fashion are not
uploaded to the WCC Repository).
Even with the ability to open the file in other applications, I still
have a general frustration with inter-application communication as not
all applications lend themselves to "Open In" registration. This,
again, is not unique to Oracle's mobile solution, but a common issue
with many productivity-based mobile applications.

Check-In
The WCC mobile application allows users to check in content to folders.
However, this is limited to taking a picture or video at the time of
check-in, looking for a picture or video from your device's camera
directory, or uploading a file that you had previously downloaded
through the mobile application. Using tools like Notability and
selecting to share or "open in" does not provide the Oracle application
as an option. While a lot of apps do not register themselves to the
mobile operating system as a valid candidate for sharing or for being a
target to open documents to, the WCC mobile solution would be greatly
improved with this feature and enable true intra-application sharing of
content. Like many of you, I have different apps that I use based on
the type of content I'm working on; having the ability to save files
from any application into WCC from my mobile device would be a huge
win.


Workflow
While the mobile app does provide users with the ability to see content
that is in their workflow queue, I was surprised that the solution did
not allow for the approval or acceptance of the workflowed item. I am
constantly on the road traveling for work and I receive a fair number
of items that need to be approved within the WCC repository. Being able
to view a document or image and approve it from my phone or tablet is a
huge timesaver for me. Like me, many business travelers are constantly
moving from location to location and meeting to meeting, and opening up
a laptop or navigating to a website takes time just to approve a
document that could quickly be done through a mobile app. This is
definitely one feature that would make the Oracle mobile application a
huge success and I'm looking forward to future versions where this
might be supported.

Sharing
The sharing features of the application are what I'd expect in most
mobile solutions. Users can choose to attach the file or image from WCC
to an email or they can send a link. Unfortunately, the file that gets
attached to the email or the URL for the link is to the Native File.
Oracle WCC server is designed to provide multiple versions of managed
documents and comes with a full set of Digital Asset Management
capabilities that just aren't being utilized in the mobile application.
Another win for the product would be to allow users to choose what
version and/or rendition of the file or image they want to share with
people (much like the Desktop Integration Suite tool for Oracle WCC
does in Microsoft Outlook).

Final Thoughts
If I had to rate the Oracle WCC mobile application on a scale from 1 to
10 (with 10 being the gold standard for mobile applications), it would
receive a solid 7. The latest release has provided some much needed
functionality over the product's debut release, but there is still work
to be done. I like the interface and its intuitive design and am happy
with the functions that have been included in the current release of
the app. However, not granting access to renditions and the Web
Viewable versions of content, any actual workflow actions, and lack of
inter-application support has cost the solution a few points. I think
this is a tool that could easily be rated a solid 9 if just a few items
were addressed. I realize that Oracle will continue to make
improvements to the application, and I can't wait to see what they are.



Analytic warehouse picking
Kim Berg Hansen
www.thansen.dk
twitter.com/kibeha
www.linkedin.com/in/kibeha
Blog: http://dspsd.blogspot.com



analytic
warehouse picking
1/12 Kim Berg Hansen

Analytic functions rock, analytic functions roll, as Tom Kyte usually
says.
I couldn't agree more; I use them all the time and cannot imagine
living without analytic functions. They make it possible to do work in
SQL that otherwise would have required slow procedural handling,
because they allow the developer to access data across rows, not just
within rows.
A great example is picking goods in a warehouse.

Let us suppose we trade beer. Whenever we buy a batch of beer the
pallet is placed in a location in an aisle in one of our two
warehouses:

When we receive an order for beer, we need to create a picking list
telling the operator at which locations he must go and pick beer and
how much at each place.
That could be done procedurally by looping over the order lines, for
each line finding locations for that item, deciding which locations to
pick from if there are multiple places, and output the results in a
suitable order for the operator to work from.
But as we shall see, it can also be done in a single SQL statement…

The data
Inventory table contains how many of each item is at each location in
the warehouse and what date that batch was purchased.

 1 create table inventory (
 2   item varchar2(10) -- identification of the item
 3 , loc varchar2(10)  -- identification of the location
 4 , qty number        -- quantity present at that location
 5 , purch date        -- date that quantity was purchased
 6 );
 7
 8 insert into inventory values('Ale' , '1-A-20', 18, DATE '2014-02-01');
 9 insert into inventory values('Ale' , '1-A-31', 12, DATE '2014-02-05');
10 insert into inventory values('Ale' , '1-C-05', 18, DATE '2014-02-03');
11 insert into inventory values('Ale' , '2-A-02', 24, DATE '2014-02-02');
12 insert into inventory values('Ale' , '2-D-07',  9, DATE '2014-02-04');
13 insert into inventory values('Bock', '1-A-02', 18, DATE '2014-02-06');
14 insert into inventory values('Bock', '1-B-11',  4, DATE '2014-02-05');
15 insert into inventory values('Bock', '1-C-04', 12, DATE '2014-02-03');
16 insert into inventory values('Bock', '1-B-15',  2, DATE '2014-02-02');
17 insert into inventory values('Bock', '2-D-23',  1, DATE '2014-02-04');
18 commit;

Orderline table contains how many of each item is to be picked for each
order.

 1 create table orderline (
 2   ordno number       -- id-number of the order
 3 , item varchar2(10)  -- identification of the item


 4 , qty number         -- quantity ordered
 5 );
 6
 7 insert into orderline values (42, 'Ale' , 24);
 8 insert into orderline values (42, 'Bock', 18);
 9 commit;

FIFO picking
We wish our operator to pick the beers for order number 42. To avoid at
some point having a lot of old beer in the warehouse, the beers should
be picked in order of purchase date: the First-In-First-Out or FIFO
principle.
We join orderlines with inventory and order each item by purchase date:

 1 select o.item
 2 , o.qty ord_qty
 3 , i.loc
 4 , i.purch
 5 , i.qty loc_qty
 6 from orderline o
 7 join inventory i
 8 on i.item = o.item
 9 where o.ordno = 42
10 order by o.item, i.purch, i.loc;

ITEM ORD_QTY LOC PURCH LOC_QTY
----- ------- ------- ---------- -------
Ale 24 1-A-20 2014-02-01 18
Ale 24 2-A-02 2014-02-02 24
Ale 24 1-C-05 2014-02-03 18
Ale 24 2-D-07 2014-02-04 9
Ale 24 1-A-31 2014-02-05 12
Bock 18 1-B-15 2014-02-02 2
Bock 18 1-C-04 2014-02-03 12
Bock 18 2-D-23 2014-02-04 1
Bock 18 1-B-11 2014-02-05 4
Bock 18 1-A-02 2014-02-06 18

Visually we see what we should pick: first 18 of the oldest Ale, then 6
from the next oldest, and similar for the Bock. Now how to do this in
SQL?
We add analytic rolling sum to the select list:

 6 , sum(i.qty) over (
 7     partition by i.item
 8     order by i.purch, i.loc
 9     rows between unbounded preceding and current row
10   ) sum_qty

ITEM ORD_QTY LOC PURCH LOC_QTY SUM_QTY
----- ------- ------- ---------- ------- -------
Ale 24 1-A-20 2014-02-01 18 18
Ale 24 2-A-02 2014-02-02 24 42
Ale 24 1-C-05 2014-02-03 18 60
Ale 24 2-D-07 2014-02-04 9 69
Ale 24 1-A-31 2014-02-05 12 81
Bock 18 1-B-15 2014-02-02 2 2
Bock 18 1-C-04 2014-02-03 12 14
Bock 18 2-D-23 2014-02-04 1 15
Bock 18 1-B-11 2014-02-05 4 19
Bock 18 1-A-02 2014-02-06 18 37

This ROWS BETWEEN clause makes SUM_QTY a cumulative sum. We see that
the first 18 is less than 24 so it is not enough, but 42 is sufficient
so we need no more. The problem is to make a where clause that includes


both 18 and 42.
We can solve it by a small change to the ROWS BETWEEN clause:

 6 , sum(i.qty) over (
 7     partition by i.item
 8     order by i.purch, i.loc
 9     rows between unbounded preceding and 1 preceding
10   ) sum_prv_qty

ITEM ORD_QTY LOC PURCH LOC_QTY SUM_PRV_QTY
----- ------- ------- ---------- ------- -----------
Ale 24 1-A-20 2014-02-01 18
Ale 24 2-A-02 2014-02-02 24 18
Ale 24 1-C-05 2014-02-03 18 42
Ale 24 2-D-07 2014-02-04 9 60
Ale 24 1-A-31 2014-02-05 12 69
Bock 18 1-B-15 2014-02-02 2
Bock 18 1-C-04 2014-02-03 12 2
Bock 18 2-D-23 2014-02-04 1 14
Bock 18 1-B-11 2014-02-05 4 15
Bock 18 1-A-02 2014-02-06 18 19

Now SUM_PRV_QTY is the cumulative sum of all previous rows. When all
previous rows have picked at least the ordered quantity, we can stop.
So we put NVL(…,0) around our analytic sum to avoid NULL problems and
then we can filter on all rows where the previous rows have not picked
everything yet:

 1 select s.*
 2 , least(s.loc_qty, s.ord_qty - s.sum_prv_qty) pick_qty
 3 from (
…
19 ) s
20 where s.sum_prv_qty < s.ord_qty
21 order by s.item, s.purch, s.loc;

ITEM ORD_QTY LOC PURCH LOC_QTY SUM_PRV_QTY PICK_QTY
----- ------- ------- ---------- ------- ----------- --------
Ale 24 1-A-20 2014-02-01 18 0 18
Ale 24 2-A-02 2014-02-02 24 18 6
Bock 18 1-B-15 2014-02-02 2 0 2
Bock 18 1-C-04 2014-02-03 12 2 12
Bock 18 2-D-23 2014-02-04 1 14 1
Bock 18 1-B-11 2014-02-05 4 15 3

The least of the location quantity and what's left to pick is what we
need to pick at that location.
So we can now simplify and get a picking list in location order for our
picking operator:

 1 select s.loc
 2 , s.item
 3 , least(s.loc_qty, s.ord_qty - s.sum_prv_qty) pick_qty
 4 from (
…
19 ) s
20 where s.sum_prv_qty < s.ord_qty
21 order by s.loc;

LOC ITEM PICK_QTY
------- ----- --------
1-A-20 Ale 18
1-B-11 Bock 3
1-B-15 Bock 2
1-C-04 Bock 12
2-A-02 Ale 6
2-D-23 Bock 1
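As a sanity check, the running-total logic just built in SQL can be
modelled procedurally. A minimal Python sketch (inventory and orderline
data hard-coded from the listings above; the function name is mine and
purely illustrative):

```python
# Procedural model of the FIFO pick: sort each item's stock by purchase
# date, keep a running total of quantity already allocated, and stop
# once the ordered quantity is covered -- the same effect as the
# analytic sum with ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING.
inventory = [
    # (item, loc, qty, purch)
    ("Ale",  "1-A-20", 18, "2014-02-01"),
    ("Ale",  "1-A-31", 12, "2014-02-05"),
    ("Ale",  "1-C-05", 18, "2014-02-03"),
    ("Ale",  "2-A-02", 24, "2014-02-02"),
    ("Ale",  "2-D-07",  9, "2014-02-04"),
    ("Bock", "1-A-02", 18, "2014-02-06"),
    ("Bock", "1-B-11",  4, "2014-02-05"),
    ("Bock", "1-C-04", 12, "2014-02-03"),
    ("Bock", "1-B-15",  2, "2014-02-02"),
    ("Bock", "2-D-23",  1, "2014-02-04"),
]
order = {"Ale": 24, "Bock": 18}          # order 42

def fifo_picks(order, inventory):
    picks = []
    for item, ord_qty in order.items():
        rows = sorted((r for r in inventory if r[0] == item),
                      key=lambda r: (r[3], r[1]))   # order by purch, loc
        sum_prv_qty = 0                             # quantity already covered
        for _, loc, loc_qty, _ in rows:
            if sum_prv_qty >= ord_qty:              # where sum_prv_qty < ord_qty
                break
            picks.append((loc, item, min(loc_qty, ord_qty - sum_prv_qty)))
            sum_prv_qty += loc_qty
    return sorted(picks)                            # order by loc

for loc, item, qty in fifo_picks(order, inventory):
    print(loc, item, qty)
```

This reproduces the six rows of the picking list above, but it needs an
explicit loop and mutable state where the SQL needs only one analytic
sum and a filter.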


The picking list shows these locations the operator needs to visit and
how much to pick at each location:

Switch picking strategies
The SQL now has two ORDER BY clauses: one within the analytic function
defines picking strategy (FIFO), the other at the end of the SQL
defines picking route.
We can then change picking strategy simply by changing the analytic
order by. For example if we were trading items with no "Best before"
date, we could exchange the First-In-First-Out principle with a
principle of least number of picks for fast picking:

12 order by i.qty desc, i.loc

LOC ITEM PICK_QTY
------- ----- --------
1-A-02 Bock 18
2-A-02 Ale 24

Or a principle of cleaning out small quantities first for efficient
space management:

12 order by i.qty, i.loc

LOC ITEM PICK_QTY
------- ----- --------
1-A-20 Ale 3
1-A-31 Ale 12
1-B-11 Bock 4
1-B-15 Bock 2
1-C-04 Bock 11
2-D-07 Ale 9
2-D-23 Bock 1

The last output (principle of clean out small quantity first) is a nice
long picking list; let us examine this in more detail:
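Seen procedurally, switching strategy changes nothing but the sort key
in front of the same running-total cut-off. A rough Python sketch of
that idea (data hard-coded from the listings above; the function and
key names are mine, for illustration only):

```python
# Each picking strategy is only a different ordering of an item's
# inventory rows before the same cumulative-quantity cut-off applies.
inventory = {
    "Ale":  [("1-A-20", 18, "2014-02-01"), ("1-A-31", 12, "2014-02-05"),
             ("1-C-05", 18, "2014-02-03"), ("2-A-02", 24, "2014-02-02"),
             ("2-D-07",  9, "2014-02-04")],
    "Bock": [("1-A-02", 18, "2014-02-06"), ("1-B-11",  4, "2014-02-05"),
             ("1-C-04", 12, "2014-02-03"), ("1-B-15",  2, "2014-02-02"),
             ("2-D-23",  1, "2014-02-04")],
}
order = {"Ale": 24, "Bock": 18}

def picking_list(strategy_key):
    out = []
    for item, ord_qty in order.items():
        prev = 0                                   # running SUM_PRV_QTY
        for loc, qty, _purch in sorted(inventory[item], key=strategy_key):
            if prev >= ord_qty:                    # sum_prv_qty < ord_qty
                break
            out.append((loc, item, min(qty, ord_qty - prev)))
            prev += qty
    return sorted(out)                             # picking route: by loc

fifo      = lambda r: (r[2], r[0])   # order by purch, loc
fast_pick = lambda r: (-r[1], r[0])  # order by qty desc, loc
clean_out = lambda r: (r[1], r[0])   # order by qty, loc
```

picking_list(fast_pick) yields the two-row list above,
picking_list(clean_out) the seven-row one, and picking_list(fifo) the
original FIFO list; only the key differs.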


We can see our picking operator needs to double back on himself a few
times if he picks in the order we output the data; not very efficient.
Let's improve that.

Picking route
We would like the picking list to be ordered so that the operator goes
"up" in the first aisle he visits and "down" in the next aisle and then
"up" again and so on.
As the inner analytic ORDER BY handles picking strategy, we can improve
the picking route by changing the outer ORDER BY. First we need to
split the location into warehouse, aisle and position.
(In real life that might be columns by themselves in a location table
or we might need more complex regexp_substr, but here we can use simple
substr.)

 1 select to_number(substr(s.loc,1,1)) warehouse
 2 , substr(s.loc,3,1) aisle
 3 , to_number(substr(s.loc,5,2)) position
 4 , s.loc
 5 , s.item
 6 , least(s.loc_qty, s.ord_qty - s.sum_prv_qty) pick_qty
 7 from (
 8 select o.item
 9 , o.qty ord_qty
10 , i.loc
11 , i.purch
12 , i.qty loc_qty
13 , nvl(sum(i.qty) over (
14     partition by i.item
15     order by i.qty, i.loc -- small qty first principle
16     rows between unbounded preceding and 1 preceding
17   ),0) sum_prv_qty
18 from orderline o
19 join inventory i
20 on i.item = o.item
21 where o.ordno = 42
22 ) s
23 where s.sum_prv_qty < s.ord_qty
24 order by s.loc;

WAREHOUSE AISLE POSITION LOC ITEM PICK_QTY
--------- ----- -------- ------- ----- --------
1 A 20 1-A-20 Ale 3
1 A 31 1-A-31 Ale 12
1 B 11 1-B-11 Bock 4
1 B 15 1-B-15 Bock 2
1 C 4 1-C-04 Bock 11
2 D 7 2-D-07 Ale 9
2 D 23 2-D-23 Bock 1

Using DENSE_RANK we can number each visited aisle consecutively:

 1 select to_number(substr(s.loc,1,1)) warehouse
 2 , substr(s.loc,3,1) aisle
 3 , dense_rank() over (
 4     order by to_number(substr(s.loc,1,1)) -- warehouse
 5            , substr(s.loc,3,1)            -- aisle
 6   ) aisle_no
 7 , to_number(substr(s.loc,5,2)) position
 8 , s.loc
 9 , s.item
10 , least(s.loc_qty, s.ord_qty - s.sum_prv_qty) pick_qty
11 from (
…

WAREHOUSE AISLE AISLE_NO POSITION LOC ITEM PICK_QTY
--------- ----- -------- -------- ------- ----- --------


1 A 1 20 1-A-20 Ale 3
1 A 1 31 1-A-31 Ale 12
1 B 2 11 1-B-11 Bock 4
1 B 2 15 1-B-15 Bock 2
1 C 3 4 1-C-04 Bock 11
2 D 4 7 2-D-07 Ale 9
2 D 4 23 2-D-23 Bock 1

We wrap the entire SQL in an inline view and order the result by
warehouse and aisle, and then odd aisles ascending and even aisles
descending:

 1 select s2.warehouse, s2.aisle, s2.aisle_no, s2.position
 2 , s2.loc, s2.item, s2.pick_qty
 3 from (
…
26 ) s2
27 order by s2.warehouse
28 , s2.aisle_no
29 , case
30     when mod(s2.aisle_no,2) = 1 then s2.position
31     else -s2.position
32   end;

WAREHOUSE AISLE AISLE_NO POSITION LOC ITEM PICK_QTY
--------- ----- -------- -------- ------- ----- --------
1 A 1 20 1-A-20 Ale 3
1 A 1 31 1-A-31 Ale 12
1 B 2 15 1-B-15 Bock 2
1 B 2 11 1-B-11 Bock 4
1 C 3 4 1-C-04 Bock 11
2 D 4 23 2-D-23 Bock 1
2 D 4 7 2-D-07 Ale 9

This is a much better picking route for our operator: alternately "up"
and "down".

But what if the two warehouses only had one connecting door? We can
solve that easily by changing the DENSE_RANK to PARTITION by warehouse:

 6 , dense_rank() over (
 7     partition by to_number(substr(s.loc,1,1)) -- warehouse
 8     order by substr(s.loc,3,1)                -- aisle
 9   ) aisle_no

WAREHOUSE AISLE AISLE_NO POSITION LOC ITEM PICK_QTY
--------- ----- -------- -------- ------- ----- --------
1 A 1 20 1-A-20 Ale 3
1 A 1 31 1-A-31 Ale 12
1 B 2 15 1-B-15 Bock 2
1 B 2 11 1-B-11 Bock 4
1 C 3 4 1-C-04 Bock 11
2 D 1 7 2-D-07 Ale 9
2 D 1 23 2-D-23 Bock 1
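The odd/even aisle trick is compact enough to model as a sort key too.
A Python sketch using the global (non-partitioned) aisle numbering from
the earlier output (helper names are mine; the picking-list rows are
hard-coded from the small-quantities-first listing):

```python
# Serpentine route: number the visited aisles consecutively (the
# DENSE_RANK), then sort odd aisles by ascending position and even
# aisles by descending position.
picks = [
    ("1-A-20", "Ale",   3), ("1-A-31", "Ale", 12),
    ("1-B-11", "Bock",  4), ("1-B-15", "Bock", 2),
    ("1-C-04", "Bock", 11), ("2-D-07", "Ale",  9),
    ("2-D-23", "Bock",  1),
]

def serpentine(picks):
    # loc format "W-A-PP": warehouse digit, aisle letter, position
    aisles = sorted({(loc[0], loc[2]) for loc, _, _ in picks})
    aisle_no = {a: n + 1 for n, a in enumerate(aisles)}  # dense_rank()
    def key(row):
        loc = row[0]
        n = aisle_no[(loc[0], loc[2])]
        pos = int(loc[4:6])
        return (n, pos if n % 2 == 1 else -pos)
    return sorted(picks, key=key)

for loc, item, qty in serpentine(picks):
    print(loc, item, qty)
```

The result is the same up/down order the CASE expression produces:
1-A-20, 1-A-31, 1-B-15, 1-B-11, 1-C-04, 2-D-23, 2-D-07.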


In this output we restart the AISLE_NO sequence for each warehouse, so
aisle D in warehouse 2 becomes an odd aisle and thus ordered ascending:

Batch pick multiple orders
So far we've picked just one order; now let's try multiple orders.
We get rid of our order 42 from before and replace it with three other
orders:

 1 delete orderline;
 2 insert into orderline values (51, 'Ale' , 24);
 3 insert into orderline values (51, 'Bock', 18);
 4 insert into orderline values (62, 'Ale' ,  8);
 5 insert into orderline values (73, 'Ale' , 16);
 6 insert into orderline values (73, 'Bock',  6);
 7 commit;

We can do FIFO batch picking on the total quantities by simply grouping
by item. The easy way is to use a WITH subquery for the grouped data
and then simply replace the orderline table with the orderbatch
subquery in the FIFO query:

 1 with orderbatch as (
 2 select o.item
 3 , sum(o.qty) qty
 4 from orderline o
 5 where o.ordno in (51, 62, 73)
 6 group by o.item
 7 )
 8 select s.loc
 9 , s.item
10 , least(s.loc_qty, s.ord_qty - s.sum_prv_qty) pick_qty
11 from (
12 select o.item
13 , o.qty ord_qty
14 , i.loc
15 , i.purch
16 , i.qty loc_qty
17 , nvl(sum(i.qty) over (
18     partition by i.item
19     order by i.purch, i.loc -- FIFO
20     rows between unbounded preceding and 1 preceding
21   ),0) sum_prv_qty
22 from orderbatch o
23 join inventory i
24 on i.item = o.item
25 ) s
26 where s.sum_prv_qty < s.ord_qty
27 order by s.loc;

LOC ITEM PICK_QTY
------- ----- --------
1-A-02 Bock 5


1-A-20 Ale 18
1-B-11 Bock 4
1-B-15 Bock 2
1-C-04 Bock 12
1-C-05 Ale 6
2-A-02 Ale 24
2-D-23 Bock 1

The result is OK, but we cannot tell how much of each pick goes to each
order. So let us add SUM_QTY besides SUM_PRV_QTY and calculate from/to
qty:

 1 with orderbatch as (
…
 6 )
 7 select s.loc, s.item
 8 , least(s.loc_qty, s.ord_qty - s.sum_prv_qty) pick_qty
 9 , sum_prv_qty + 1 from_qty
10 , least(sum_qty, ord_qty) to_qty
11 from (
12 select o.item, o.qty ord_qty
…
19 , nvl(sum(i.qty) over (
20     partition by i.item
21     order by i.purch, i.loc
22     rows between unbounded preceding and current row
23   ),0) sum_qty
24 from orderbatch o
25 join inventory i
26 on i.item = o.item
27 ) s
28 where s.sum_prv_qty < s.ord_qty
29 order by s.item, s.purch, s.loc;

LOC ITEM PICK_QTY FROM_QTY TO_QTY
------- ----- -------- -------- ------
1-A-20 Ale 18 1 18
2-A-02 Ale 24 19 42
1-C-05 Ale 6 43 48
1-B-15 Bock 2 1 2
1-C-04 Bock 12 3 14
2-D-23 Bock 1 15 15
1-B-11 Bock 4 16 19
1-A-02 Bock 5 20 24

The output shows quantity intervals: the 24 Ale we pick at 2-A-02 are
number 19-42 out of the total 48 Ale we are picking. Similarly for
orderlines we create quantity intervals:

 1 select o.ordno, o.item, o.qty
 2 , nvl(sum(o.qty) over (
 3     partition by o.item
 4     order by o.ordno
 5     rows between unbounded preceding and 1 preceding
 6   ),0) + 1 from_qty
 7 , nvl(sum(o.qty) over (
 8     partition by o.item
 9     order by o.ordno
10     rows between unbounded preceding and current row
11   ),0) to_qty
12 from orderline o
13 where ordno in (51, 62, 73)
14 order by o.item, o.ordno;

ORDNO ITEM QTY FROM_QTY TO_QTY
----- ----- ---- -------- ------
51 Ale 24 1 24
62 Ale 8 25 32
73 Ale 16 33 48


51 Bock 18 1 18
73 Bock 6 19 24

The 8 Ale from order 62 are number 25-32 out of the total 48 Ale.
Now we can join on overlapping quantity intervals:
The ORDERLINES subquery creates the quantity intervals for the
orderlines. ORDERBATCH then sums quantities by item to be batch picked
in the FIFO subquery. The FIFO subquery is joined to ORDERLINES on
overlapping intervals.

 1 with orderlines as (
 2 select o.ordno, o.item, o.qty
 3 , nvl(sum(o.qty) over (
 4     partition by o.item
 5     order by o.ordno
 6     rows between unbounded preceding and 1 preceding
 7   ),0) + 1 from_qty
 8 , nvl(sum(o.qty) over (
 9     partition by o.item
10     order by o.ordno
11     rows between unbounded preceding and current row
12   ),0) to_qty
13 from orderline o
14 where ordno in (51, 62, 73)
15 ), orderbatch as (
16 select o.item, sum(o.qty) qty
17 from orderlines o
18 group by o.item
19 ), fifo as (
20 select s.loc, s.item, s.purch
21 , least(s.loc_qty, s.ord_qty - s.sum_prv_qty) pick_qty
22 , sum_prv_qty + 1 from_qty
23 , least(sum_qty, ord_qty) to_qty
24 from (
25 select o.item, o.qty ord_qty
26 , i.loc, i.purch, i.qty loc_qty
27 , nvl(sum(i.qty) over (
28     partition by i.item
29     order by i.purch, i.loc
30     rows between unbounded preceding and 1 preceding
31   ),0) sum_prv_qty
32 , nvl(sum(i.qty) over (
33     partition by i.item
34     order by i.purch, i.loc
35     rows between unbounded preceding and current row
36   ),0) sum_qty
37 from orderbatch o
38 join inventory i
39 on i.item = o.item
40 ) s
41 where s.sum_prv_qty < s.ord_qty
42 )
43 select f.loc, f.item, f.purch, f.pick_qty, f.from_qty, f.to_qty
44 , o.ordno, o.qty, o.from_qty, o.to_qty
45 from fifo f
46 join orderlines o
47 on o.item = f.item
48 and o.to_qty >= f.from_qty
49 and o.from_qty <= f.to_qty
50 order by f.item, f.purch, o.ordno;

LOC ITEM PURCH PICK_QTY FROM_QTY TO_QTY ORDNO QTY FROM_QTY TO_QTY
------- ----- ---------- -------- -------- ------ ----- ---- -------- ------
1-A-20 Ale 2014-02-01 18 1 18 51 24 1 24
2-A-02 Ale 2014-02-02 24 19 42 51 24 1 24
2-A-02 Ale 2014-02-02 24 19 42 62 8 25 32
2-A-02 Ale 2014-02-02 24 19 42 73 16 33 48
1-C-05 Ale 2014-02-03 6 43 48 73 16 33 48
1-B-15 Bock 2014-02-02 2 1 2 51 18 1 18
1-C-04 Bock 2014-02-03 12 3 14 51 18 1 18
2-D-23 Bock 2014-02-04 1 15 15 51 18 1 18


1-B-11 Bock 2014-02-05 4 16 19 51 18 1 18
1-B-11 Bock 2014-02-05 4 16 19 73 6 19 24
1-A-02 Bock 2014-02-06 5 20 24 73 6 19 24

Notice the pick of 24 Ale at 2-A-02 is joined to all three orders.
Those 24 are number 19 to 42 of the total, which overlaps with all
three intervals for the orders.
By using LEAST and GREATEST we calculate how much to pick from each
location for each order. We need to pick the smallest of either the
quantity on the location or how much the two intervals overlap:

 1 with orderlines as (
…
15 ), orderbatch as (
…
19 ), fifo as (
20 select s.loc, s.item, s.purch, s.loc_qty
…
24 from (
25 select o.item, o.qty ord_qty
26 , i.loc, i.purch, i.qty loc_qty
…
40 ) s
41 where s.sum_prv_qty < s.ord_qty
42 )
43 select f.loc, f.item, f.purch, f.pick_qty, f.from_qty, f.to_qty
44 , o.ordno, o.qty, o.from_qty, o.to_qty
45 , least(
46     f.loc_qty
47   , least(o.to_qty, f.to_qty)
48     - greatest(o.from_qty, f.from_qty) + 1
49   ) pick_ord_qty
50 from fifo f
51 join orderlines o
52 on o.item = f.item
53 and o.to_qty >= f.from_qty
54 and o.from_qty <= f.to_qty
55 order by f.item, f.purch, o.ordno;

LOC ITEM PURCH PICK_QTY FROM_QTY TO_QTY ORDNO QTY FROM_QTY TO_QTY PICK_ORD_QTY
------- ----- ---------- -------- -------- ------ ----- ---- -------- ------ ------------
1-A-20 Ale 2014-02-01 18 1 18 51 24 1 24 18
2-A-02 Ale 2014-02-02 24 19 42 51 24 1 24 6
2-A-02 Ale 2014-02-02 24 19 42 62 8 25 32 8
2-A-02 Ale 2014-02-02 24 19 42 73 16 33 48 10
1-C-05 Ale 2014-02-03 6 43 48 73 16 33 48 6
1-B-15 Bock 2014-02-02 2 1 2 51 18 1 18 2
1-C-04 Bock 2014-02-03 12 3 14 51 18 1 18 12
2-D-23 Bock 2014-02-04 1 15 15 51 18 1 18 1
1-B-11 Bock 2014-02-05 4 16 19 51 18 1 18 3
1-B-11 Bock 2014-02-05 4 16 19 73 6 19 24 1
1-A-02 Bock 2014-02-06 5 20 24 73 6 19 24 5

The 24 Ale we noticed before are picked from location 2-A-02 and split
with 6 to order 51, 8 to order 62 and 10 to order 73.
So we clean up the code, leave the columns the picking operator needs
and order by location:

 1 with orderlines as (
…
15 ), orderbatch as (
…
19 ), fifo as (
…
41 )
42 select f.loc, f.item, f.pick_qty pick_at_loc, o.ordno
43 , least(
44     f.loc_qty


45   , least(o.to_qty, f.to_qty)
46     - greatest(o.from_qty, f.from_qty) + 1
47   ) qty_for_ord
48 from fifo f
49 join orderlines o
50 on o.item = f.item
51 and o.to_qty >= f.from_qty
52 and o.from_qty <= f.to_qty
53 order by f.loc, o.ordno;

LOC ITEM PICK_AT_LOC ORDNO QTY_FOR_ORD
------- ----- ----------- ----- -----------
1-A-02 Bock 5 73 5
1-A-20 Ale 18 51 18
1-B-11 Bock 4 51 3
1-B-11 Bock 4 73 1
1-B-15 Bock 2 51 2
1-C-04 Bock 12 51 12
1-C-05 Ale 6 73 6
2-A-02 Ale 24 51 6
2-A-02 Ale 24 62 8
2-A-02 Ale 24 73 10
2-D-23 Bock 1 51 1

So we have a FIFO picking list for multiple orders; all we now need is
to give the operator the better picking route.

Multiple orders with picking route
Finally we can combine this batch multi-order FIFO picking with the
efficient route calculation going ascending/descending in the aisles:

 1 with orderlines as (
…
15 ), orderbatch as (
…
19 ), fifo as (
…
41 ), pick as (
42 select to_number(substr(f.loc,1,1)) warehouse
43 , substr(f.loc,3,1) aisle
44 , dense_rank() over (
45     order by
46       to_number(substr(f.loc,1,1)), -- warehouse
47       substr(f.loc,3,1)             -- aisle
48   ) aisle_no
49 , to_number(substr(f.loc,5,2)) position
50 , f.loc, f.item, f.pick_qty pick_at_loc, o.ordno
51 , least(
52     f.loc_qty
53   , least(o.to_qty, f.to_qty)
54     - greatest(o.from_qty, f.from_qty) + 1
55   ) qty_for_ord
56 from fifo f
57 join orderlines o
58 on o.item = f.item
59 and o.to_qty >= f.from_qty
60 and o.from_qty <= f.to_qty
61 )
62 select p.loc, p.item, p.pick_at_loc, p.ordno, p.qty_for_ord
63 from pick p
64 order by p.warehouse
65 , p.aisle_no
66 , case
67     when mod(p.aisle_no,2) = 1 then p.position
68     else -p.position
69   end;

LOC ITEM PICK_AT_LOC ORDNO QTY_FOR_ORD
------- ----- ----------- ----- -----------
1-A-02 Bock 5 73 5
1-A-20 Ale 18 51 18


1-B-15  Bock            2    51           2
1-B-11  Bock            4    51           3
1-B-11  Bock            4    73           1
1-C-04  Bock           12    51          12
1-C-05  Ale             6    73           6
2-A-02  Ale            24    51           6
2-A-02  Ale            24    73          10
2-A-02  Ale            24    62           8
2-D-23  Bock            1    51           1

So using analytic functions we ended up with a single SQL statement
that efficiently batch-picks multiple orders by the First-In-First-Out
principle in an optimal picking route.

Just do it

I've walked through this step by step to demonstrate how I develop SQL
step-wise with analytic functions. Once you start using this more and
more often, you will get the hang of thinking about it whenever your
task requires comparing or summing data across rows. You'll discover
that many of your tasks can profitably use analytics to avoid procedural
row-by-row code (either PL/SQL or client side) and become much more
efficient. Your boss will love you for utilizing the power in the Oracle
database he has paid dearly for. He will save money when your code does
not need bigger application servers. And your users will love you for
being able to work faster without having to wait for the system. And you
will love yourself every time you make an awesome piece of analytic SQL.

The complete script used for this article can be found here:
http://goo.gl/XvgEBd
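The interval-overlap arithmetic at the heart of the FIFO split can be cross-checked outside the database. This is a minimal Python sketch of the same LEAST/GREATEST logic, not the article's SQL, using the 2-A-02 Ale numbers from the output above:

```python
# Each pick location covers an interval of the cumulative FIFO quantity of
# an item, and each order line covers an interval of the cumulative ordered
# quantity. A location supplies an order wherever the two intervals overlap.

def fifo_split(locations, orders):
    """locations: (loc, from_qty, to_qty); orders: (ordno, from_qty, to_qty)."""
    picks = []
    for loc, l_from, l_to in locations:
        for ordno, o_from, o_to in orders:
            if o_to >= l_from and o_from <= l_to:      # same join condition
                qty = min(l_to - l_from + 1,           # LEAST(loc_qty, overlap)
                          min(o_to, l_to) - max(o_from, l_from) + 1)
                picks.append((loc, ordno, qty))
    return picks

# The 24 Ale on 2-A-02 covers FIFO interval 19-42; the Ale order lines for
# orders 51, 62 and 73 cover 1-24, 25-32 and 33-48 of the ordered quantity.
print(fifo_split([("2-A-02", 19, 42)],
                 [(51, 1, 24), (62, 25, 32), (73, 33, 48)]))
# -> [('2-A-02', 51, 6), ('2-A-02', 62, 8), ('2-A-02', 73, 10)]
```

The result reproduces the 6/8/10 split shown in the query output.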



What does adaptive in Oracle ACM mean?
Lonneke Dikmans

www.vennster.nl
twitter.com/lonnekedikmans
nl.linkedin.com/in/serviceorientedarchitecture



What does adaptive
in Oracle ACM mean?
1/10 Lonneke Dikmans

In the previous article in OTech Magazine you learned how the case
component fits in the BPM Suite and Oracle Fusion Middleware, and when
to use BPMN 2.0 versus case management. In this article we look into the
adaptive part of ACM. The definition and different aspects of adaptation
are discussed, and the Oracle BPM Suite and Oracle SOA Suite are
evaluated against these aspects. The example we use is the example that
is shipped with the pre-built virtual machine 11.1.1.7 [1]: the EURent
example.

What do we mean by Adaptive?
There have been heated debates over the definition of Adaptive Case
Management in the last couple of years. On top of that, people have
suggested other terms as well: Production Case Management, Dynamic
Case Management and Advanced Case Management (see references).

According to Merriam-Webster [2], adaptation is:

1: the act or process of adapting : the state of being adapted <his
ingenious adaptation of the electric cautery knife to surgery, George
Blumer>
2: adjustment to environmental conditions: as
a: adjustment of a sense organ to the intensity or quality of stimulation
b: modification of an organism or its parts that makes it more fit for
existence under the conditions of its environment (compare ADJUSTMENT
1b)

Basically, adaptation is about changing something so it fits your needs
better, or to protect yourself from change in your environment.

When it comes to adaptive case management, a number of aspects are
important:
1. Who makes the change [3]? The business analyst? The developer? An
administrator? The knowledge/case worker? The system [5]?
2. How long does it take to make a change? Going through an
analysis-design-develop-test-deploy cycle is less adaptive than changing
something while executing a case [3], [4].
3. What type of change do I need? A new activity? A new rule? A new
plan? A new milestone?
4. Goal of the change. What is the goal? To improve the quality of the
process? To improve the efficiency of the process? To minimize risk of
failure? Or to minimize the impact of failure? To react to a change in the
environment?

Elements we want to change
A case consists of a number of elements [6]:
- Case File. The case file represents case information. It consists of
items that can be any type of data structure.
- Role. Caseworkers or a team of caseworkers that are authorized to
execute tasks or raise events are represented by a role.
- Input and output parameters.
- Case Plan Model. The case plan model contains both the initial plan and
all the elements that support further evolution of the plan through
run-time planning by case workers. The plan consists of stages,
milestones, tasks, and event listeners. There are rules and entry and
exit criteria associated with these elements.


The possibilities the Oracle BPM Suite offers to change these elements
differ per element. The end user can make some changes on a
case-by-case basis at runtime; the administrator can make changes that
apply to all cases; the developer sometimes needs to be involved to make
other changes.

Case File
The case file consists of both structured data and unstructured data. In
the Oracle case component the structured data is stored in the BPM
database, and unstructured case data is stored in a Document Management
System (DMS) or Enterprise Content Management (ECM) system that you
integrate with the case component. When designing the case in JDeveloper
you point to the folder in your content management system.

Illustration 1. JDeveloper screen to add structured case data and point
to ECM for unstructured data

Unstructured data
Because the case component points to a folder in your document
management system (DMS) or enterprise content management (ECM) system
and the data is unstructured, these can be easily changed to whatever
your DMS or ECM supports. In the screenshot below you see an example of
an upload screen to upload documents to the case. This data is stored in
UCM.

Illustration 3. User interface that exposes the API to add documents or
other file types to the case file [1]

Structured data
The structured data are used as input for activities, as facts in the
rules, and as output of the case. Input for activities is either case
data or user input that can be saved as case data. You can create your
own screens for data entry, based on predefined structured data
elements.

Illustration 2. Structured data is shown to the user, from the EURent
example [1]

Changing the Case File
The case file can be easily adapted as far as the unstructured data is
concerned. This can be done on the fly, as long as the DMS or ECM
supports the data structure (for example movies, audio, etc.). However,
adding new structured


data elements to the case is something that needs to be done by a
developer. The table below summarizes who can do what type of change and
what the scope of the change entails.

Add new (structured) data type
  Scope: All cases
  Who: Developer
  Timing: Design time
  Goal: Management information, auditing purposes; business rules are
  based on the content of the data
  Tool: JDeveloper

Edit structured data
  Scope: Instance
  Who: Case worker
  Timing: Runtime
  Goal: Edit data that is relevant to the case
  Tool: User interface

Add new unstructured data type
  Scope: Case instance
  Who: Case worker
  Timing: Runtime
  Goal: Add new document types to a case
  Tool: User interface (note that the ECM should support the document
  type)

Edit unstructured data
  Scope: Case instance
  Who: Case worker
  Timing: Runtime
  Goal: Build the content of the case. This can be for auditing purposes,
  for communication purposes or other reasons.
  Tool: Case GUI (out of the box or custom made)

Illustration 4. Claim check applied: using keys to point to objects that
are needed in the GUI but not in the case

Because users often want to change the data they see on the screen as
part of the execution of a task, it is a good idea to limit the
structured case data to data you need for your business rules. Other
data that you need for the execution of an activity can be fetched in
the user interface, based on a key that is part of the structured case
data. This offers separation of concerns: when you want to change
something in your case, you only need to change the definition in the
case; when you want to change something in the GUI, you only need to
change something in the GUI.
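The claim check approach can be sketched in a few lines. This is an illustrative Python model, in which plain dictionaries stand in for the case store and a CRM system; it is not the Oracle API:

```python
# Illustrative claim check: the structured case data holds only a key; the
# GUI resolves that key against the business application that owns the data.

crm = {"C42": {"name": "EU-Rent BV", "city": "Utrecht"}}  # system of record

case_data = {"customer_id": "C42"}  # the case keeps just the claim check (key)

def render_task_screen(case):
    # The screen fetches the full record by key, so changing what the screen
    # shows never requires a change to the case definition (and vice versa).
    customer = crm[case["customer_id"]]
    return f"Customer: {customer['name']} ({customer['city']})"

print(render_task_screen(case_data))
# -> Customer: EU-Rent BV (Utrecht)
```

The customer names and keys here are made up for the example; the point is only that the case carries the key while the business application carries the data.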


Guideline: Limit the use of structured data in the case

Statement: Structured data is used for business rules about milestones,
stages and starting or ending activities. All other data, for example
data that is needed to execute a task, should be defined in the business
applications.

Rationale: Structured data needs to be changed by the developer. The
data that the user wants to see in the screens changes the most; often
this has nothing to do with the case progression. Data can be
manipulated in the context of different case types; think for example
about customer data that can be manipulated in a customer case and in a
permit case. The data in the case becomes out of date if it is stored in
different cases. Often organizations have COTS applications that keep
track of the structured data (CRM for example).

Implications: Use the claim check pattern in case activities. The data
services need to take auditing requirements into account.

Changing Roles
According to the standard, a case has one or more case roles. These are
called stakeholders in the case component of the Oracle BPM Suite. You
can assign application roles, process roles, users or groups to a
stakeholder in a case.

Who plays what role can change for two reasons:
1. When a process is changed, it changes the way people work. You might
be in production and decide you want people with different skill sets to
execute a task.
2. Different departments might assign different people to activities,
based on the population, size etc.


For these two reasons it is a good idea to use application or process
roles, rather than users and groups, in the task definitions. This will
make sure your process can adapt to changes you make in your
organization structure, or to changes you want to make in the process in
terms of who does what.

Guideline: Use application roles or process roles in your case

Statement: Assign application roles or process roles to stakeholders in
your case, not users and groups.

Rationale: In the LDAP store, groups are often organized according to
the hierarchical structure of the organization. Different departments
might divide the work differently. If a reorganization takes place, the
only thing that has to be changed is the groups or users that are
assigned to the role. This can be done at runtime.

Implications: Always use process roles or application roles in the case.
Assign the users and groups in the EM.

Illustration 5. JDeveloper screen to add and edit stakeholders (case
roles) in a case

Design time
JDeveloper allows you to add and edit stakeholders. Once this is
deployed, these stakeholders can be used in all subsequent new cases
that are started.

Runtime
The case component of the BPM Suite offers an API to the case. One of
the methods of the ICaseInstanceService is addStakeholder. This means
that if you expose this to the user, they can add stakeholders at
runtime as well, to a specific case. These stakeholders won't be part of
the regular case execution, only of the running instance you added the
stakeholder to.

Illustration 6. Example of how to expose adding stakeholders to the case
worker
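The scope difference is the key point here: a stakeholder added through the instance API exists only in that one running case, while a role-to-member mapping applies to every case. A toy Python model of that distinction follows; the names are illustrative and do not reflect the actual ICaseInstanceService signature:

```python
# Toy model: role mappings are shared across all cases (changed by an
# administrator at runtime); stakeholders added via the instance API exist
# only in that one running case.

role_mappings = {"Approver": ["grp_sales"]}    # applies to every case

class CaseInstance:
    def __init__(self, case_id):
        self.case_id = case_id
        self.extra_stakeholders = []           # instance-only additions

    def add_stakeholder(self, name, members):  # stands in for addStakeholder
        self.extra_stakeholders.append((name, members))

    def stakeholders(self):
        return list(role_mappings.items()) + self.extra_stakeholders

case_a, case_b = CaseInstance("rental-1"), CaseInstance("rental-2")
case_a.add_stakeholder("ExternalAuditor", ["jdoe"])

print(len(case_a.stakeholders()))  # 2: shared role plus the ad-hoc stakeholder
print(len(case_b.stakeholders()))  # 1: only the shared role mapping
```

Changing `role_mappings` would affect both instances at once, which mirrors the administrator-level changes described next.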


Apart from adding stakeholders to running case instances, administrators
can assign groups and users to application roles or process roles using
the tools from SOA Suite (Enterprise Manager) and the Administration
panel in the BPM Workspace. Note that this only works if the developer
assigned application roles or process roles to the stakeholders in the
case!

This can be changed at runtime. It does not require a redeploy of the
application. It will then be applied to all cases (both running and new)
to which it applies.

Illustration 7. Administrative interface from BPM Workspace to assign
members to roles

Add new stakeholder to a case (all cases)
  Who: Developer
  Timing: Design time
  Goal: New participants because the scope of the case is expanded
  (include customers e.g.)
  Tool: JDeveloper
  Prerequisite: None

Add new stakeholder to a case (one instance)
  Who: Case worker, process owner, etc.
  Timing: Runtime
  Goal: Add a new stakeholder to one particular instance
  Tool: User interface
  Prerequisite: Case API is exposed in GUI

Assign people to an activity (all cases)
  Who: Developer, Administrator, Process owner, team manager
  Timing: Design- and runtime
  Goal: Reorganization, holiday, process improvement
  Tool: JDeveloper, BAM, SOA Composer, EM, BPM Workspace
  Prerequisite: Use application roles or process roles as members of the
  stakeholder

Assign people to an activity (one instance)
  Who: Case worker
  Timing: Runtime
  Goal: Transfer a task to someone else
  Tool: BPM Workspace
  Prerequisite: Expose reassign API in GUI


Changing input and output parameters
The input parameters are mainly used for the case file and as facts to
define rules, as we saw before. The output parameters are mainly used
for management information.

The input and output parameters can be defined in JDeveloper at design
time. Input and output parameter structures can't be changed at runtime.
Obviously, the content of the input and output parameters can be changed
at runtime. The input parameters are set when the case is started, and
the output parameters are either determined by the user or by the system
based on the business rules that you have defined.

Illustration 8. JDeveloper screen to define the outcome

Input parameters
  Scope: All cases
  Who: Developer
  Timing: Design time
  Goal: Add new data to create rules
  Tool: JDeveloper

Output parameters
  Scope: All cases
  Who: Developer
  Timing: Design time
  Goal: Measure more fine-grained outcomes
  Tool: JDeveloper

Changing the case plan model
A case consists of two distinct phases: a design time phase and a run
time phase. Activities that are executed can be so-called plan items or
discretionary items. The caseworker decides at run time whether and when
the discretionary items will be executed [6].
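As a rough illustration of that difference: plan items are part of the normal flow, while discretionary items wait for a caseworker's runtime decision. A toy Python model, not CMMN or the Oracle API:

```python
# Toy model: plan items run as part of the normal case flow; discretionary
# items sit in the plan but only run once a caseworker activates them.

class PlanItem:
    def __init__(self, name, discretionary=False):
        self.name, self.discretionary = name, discretionary
        self.activated = not discretionary     # plan items start enabled

    def activate(self):                        # the caseworker's runtime choice
        self.activated = True

plan = [PlanItem("Qualify request"),
        PlanItem("Extra credit check", discretionary=True)]

print([i.name for i in plan if i.activated])   # only the plan item

plan[1].activate()   # the caseworker decides the extra check is needed
print([i.name for i in plan if i.activated])   # now includes the discretionary item
```

The activity names are invented for the sketch; the point is only the activation decision happening at run time rather than design time.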


Defining new elements
The Oracle case component does not support stages. These can be emulated
using milestones. New milestones, events and activities for a case plan
model can be defined at design time in JDeveloper.

Illustration 9. Design time and run time items. From: CMMN Beta 1

There are a number of things that can be changed in the case plan model:
- The rules that determine when an activity is executed can change
- The activities themselves can change (new activities can occur for
example)
- The type of the activity can change (an activity that was discretionary
can become a plan item or vice versa)
- A new event can be raised
- A new milestone can be defined
- A milestone can be attained
- A milestone can be revoked
- A new stage can be defined
- A stage can be attained or completed
- A stage can be reopened

Illustration 10. Adding or editing milestones in JDeveloper

Adding activities is a little less straightforward: first you define the
activity and then you promote it to a case activity.


Changing conditions and rules
Rules can be changed using the rule editor in the Oracle Business
Process Composer at runtime. The new rules will apply to all the cases.

The case component offers an API to add ad-hoc tasks and raise events
(using the ICaseEventService interface). You can define ad-hoc tasks
(that have not been predefined) or events on a case-by-case basis. The
figure below shows an example of this from the EURent sample.

Illustration 11. Example of an ad-hoc task that is added by a caseworker

Add element (all cases)
  Who: Developer
  Timing: Design time
  Goal: Add new activities, user events or milestones to the case to
  make the case more efficient or to get another result
  Tool: JDeveloper

Add element (one instance)
  Who: Case worker
  Timing: Runtime
  Goal: Create an ad-hoc activity or raise a user event for a specific
  case because something exceptional happens
  Tool: User interface

Change conditions of an element (all cases)
  Who: Developer, Administrator, Process owner, team manager
  Timing: Design- and runtime
  Goal: Process improvement, changing the rules when a milestone is
  reached, when an activity is activated, etc.
  Tool: JDeveloper, BAM, SOA Composer, EM, BPM Workspace

Change conditions of an element (one instance)
  Who: Case worker
  Timing: Runtime
  Goal: Handle an exceptional situation by activating an activity or
  attaining a milestone
  Tool: User interface


Summary
You have seen how you can adapt a single case at runtime, change rules
for all cases at runtime, and how to change the plan items in the case
plan using JDeveloper.

The caseworker can edit the case file, reassign activities to other
users, choose when to execute discretionary activities, and attain and
revoke milestones in case instances. Administrative users can change
rules at runtime, assign new groups, application roles and users to
existing application roles and process roles, and edit the task
definitions. These changes apply to all cases.

Adding new structured data elements, activities, milestones or events to
the initial case plan can only be done at design time in JDeveloper.

BPM Suite offers a workspace that supports cooperation between case
workers, and the BAM tooling that is part of the SOA Suite and BPM Suite
helps team managers and process owners to monitor KPIs and allows them
to make the case execution more efficient.

At the moment there is no intelligence in the case component. The case
component is not a self-learning system, nor does it offer capabilities
to business users (the case worker) to promote one-off actions to case
activities that are part of the initial case plan, available to all case
workers in the organization. This is still a development effort.

References
1. Pre-built Virtual Machine for SOA Suite and BPM Suite 11g.
http://www.oracle.com/technetwork/middleware/soasuite/learnmore/vmsoa-172279.html
2. Adaptation. Merriam-Webster.
http://www.merriam-webster.com/medical/adaptation?show=0&t=1397299440
3. Case Management: Contrasting Production vs. Adaptive. September 2012,
Collaborative Planning and Social Business.
http://social-biz.org/2012/09/12/case-management-contrasting-production-vs-adaptive/
4. What is Adaptive Case Management? Adaptive Case Management.
http://acmisis.wordpress.com/what-is-adaptive-case-management-acm/
5. Advancing BPM by adding Smart or Intelligent. Welcome to the Real
(IT) World.
http://isismjpucher.wordpress.com/category/machine-learning/
6. Case Management Model and Notation (CMMN). January 2013. OMG.
http://www.omg.org/spec/CMMN/1.0/Beta1/PDF/



Oracle Access Manager: Clusters, connection resilience and Coherence
Robert Honeyman

www.honeymanit.co.uk
twitter.com/Honeyman_IT
www.facebook.com/HoneymanItConsulting
uk.linkedin.com/in/roberthoneyman/
Oracle Access Manager: Clusters,
connection resilience and Coherence
1/7 Robert Honeyman

In my previous article for OTech I provided a set of high-level
considerations for a High Availability Oracle Access Manager and Oracle
Internet Directory solution for Single Sign-On. I outlined the solution
topology and some of the key design and implementation considerations.

In this article I want to elaborate on some of these OAM and OID related
topics which were not covered in depth in the previous article. I will
discuss the configuration and options required to build an OAM cluster,
provide detail on configuring High Availability for database
connections, and outline the use of Coherence and its provision of OAM
High Availability features.

OAM cluster configuration
To start I provide some detail on building a basic two-node OAM cluster
configuration. I start from the assumption that basic Fusion Middleware
host and operating system pre-requisites have been satisfied and the OAM
repository has been created with RCU prior to embarking on the
installation.

The following preparatory step must be performed independently of the
deployment hosts:
- Prepare shared storage for your domain AdminServer directory (NFS /
Clustered File System)

The main preparatory steps to be performed on both deployment hosts
(oamhost1, oamhost2) are:
- Install a JDK and Weblogic 10.3.6
- Install the OAM 11g Release 2 software (Install only option)
- Mount the shared storage directory

As shown in the steps above I prefer performing an "Install only"
deployment initially. The deployment can then be used as a template for
other configurations, or easily rolled back or restored without starting
from scratch. If using a virtual machine or versioned storage then you
can snapshot your installation; if not, you always have tar or zip to
take backups for subsequent restores.

Once you have completed the preparatory steps you can launch the OAM
domain configuration wizard using config.sh. This must be the version
from the OAM ORACLE_HOME/common/bin directory to ensure you have the OAM
configuration options available to you. Select the Oracle Access
Management and Oracle Enterprise Management options; OPSS and JRF will
also be automatically selected as dependencies.

You will then be asked for the domain and application location; here you
can specify the shared storage you prepared earlier for your master
domain directory. The Weblogic domain startup mode should be Production
Mode, as there should be no need for classloader tracing or other
developer features with an off-the-shelf Fusion Middleware application.
We also want the safety provided by Weblogic lock and edit change
management provided in Production Mode for the OAM domain.


When presented with the Optional Configuration screen be sure to select
the Administration Server and also the Managed Servers, Clusters and
Machines option. This will allow configuration of the OAM Weblogic
cluster topology.

To configure for high availability the Admin Server listening address
should be a host-independent floating IP address, to enable the Admin
Server to run on any host. This is standard Fusion Middleware practice
as only one Admin Server can exist within a Weblogic domain.

You must configure two OAM managed servers, each using the Fully
Qualified Domain Name (FQDN) of one of the two OAM hosts. The example
shown in the Configure Managed Servers screenshot uses managed server
names wls_oam1 and wls_oam2, but you can select an alternative name if
you prefer, such as oamserver1 and oamserver2.

Figure 1 OAM cluster configure managed servers

The OAM configuration tool will ask for cluster configuration on the
Configure Clusters dialogue; you need to name the cluster, and as shown
I used oamcluster in the example below. You do not need to specify the
cluster address, which will be automatically derived. Unicast is the
preferred protocol for WebLogic inter-cluster communication these days,
so retain the default setting.

Figure 2 Configure OAM cluster

Having prepared your managed servers and named your cluster you will
need to associate your managed servers wls_oam1 and wls_oam2 with your
cluster oamcluster. The Assign Servers to Clusters screenshot below
shows this association between the managed servers and the cluster.


Figure 3 OAM managed server cluster assignment

The configuration assistant will also request database connection
information, but I provide more detail on this in the next part of the
article. Through the remaining steps of the configuration you will need
to configure node managers for your OAM hosts, one per node, and ensure
the OAM application deployments are targeted at the cluster as opposed
to individual managed servers. The final screenshot below shows the OAM
application oam_server and oamsso_logout deployment targets for
oamcluster, and the administrative em and oam_admin (oamconsole)
applications deployed to the Admin Server.

Figure 4 OAM cluster target deployments

If all goes well your cluster configuration will complete and the
configuration assistant takes care of the leg work behind the scenes. As
discussed in my previous article there are a few additional steps to
ensure your OAM cluster is fully operational; these are:
- Configure load balancing for HTTP, e.g. the OAM credential collector
(login page) and logout
- Set the OAM RequestCacheType setting to COOKIE
- Configure the front-end host for the OAM cluster
- Deploy Webgate and policies for your secured applications

Connection resilience for OAM database connections
OAM uses JDBC to connect to the OAM repository database, and the
preferred approach to connection resilience for Oracle RAC databases is
to use WebLogic GridLink Data Sources. GridLink Data Sources have
advantages over Multi Data Sources including:
- Fast Connection Failover for enhanced connection failure detection and
management
- Fast Application Notifications (FAN) for intelligent load balancing
and session affinity
- The ability to use Oracle RAC SCAN addresses

Note that if you are not using an Oracle RAC database for your OAM
repository, but an alternative DBMS, you will not be able to use
GridLink Data Sources and will have to use Multi Data Sources.

As outlined above when I discussed building a cluster, you will be asked
by the OAM configuration tool to specify database connection details, as
the screenshot below illustrates. Upon


selecting the Oracle Driver (Thin) for GridLink Connections driver
option you will be presented with the configuration components shown in
the screenshot.

Figure 5 OAM configure GridLink RAC connections

I discuss the meaning of these options below.
1. The Database Service Name should be configured with a value of your
service_names parameter for your OAM database.
2. The SCAN address or host alias and the SCAN listener port are
specified in the Service Listener section. SCAN will manage the routing
and load balancing to available RAC VIPs.
3. Specify the ONS (Oracle Notification Service) client configuration to
enable the communication transport for notifications from the RAC
cluster. Again you can use the SCAN address, and the example shown uses
the default ONS port 6200.
4. Enable FAN by selecting the corresponding check box. This will ensure
the OAM application servers listen for health and status notifications
from the RAC cluster. These notifications will be provided through the
ONS client transport outlined in step 3.

Connection resilience for OID database connections
The latest Oracle Identity Management 11.1.2.2 Enterprise Deployment
Guide specifies the use of Oracle Unified Directory as the directory
service. However, as outlined in my previous article, OUD is still not
certified with core legacy technologies such as Oracle Forms and
Reports; only OID is. Clearly OUD is the way forward for Oracle Identity
Management in the longer term, as OID is not receiving active
development. In the meantime OID is still the only option when
configuring SSO for the complete Fusion Middleware stack.

I don't cover building an OID Identity Management cluster here, but
while on the topic of database connection resilience I thought I would
cover how to set up OID, to highlight the differences in the connection
method when compared to OAM.

As outlined in my previous article OID connects to its back-end database
using OCI, so Transparent Application Failover (TAF) must be used for
connection resilience.

Prior to OID configuration using Oracle Universal Installer (OUI) you
must configure a TAF enabled service in the OID RAC database cluster,
using srvctl on a cluster member database server. The form of the srvctl
add command is shown below:

srvctl add service \
  -d oid_database_name \
  -s oid_service_name \
  -r oid_preferred_rac_inst_1,oid_preferred_rac_inst_2 ... \
  -q aq_ha_notifications_flag \   (TRUE|FALSE)
  -m failover_method \            (SELECT)
  -e failover_type \              (BASIC)
  -w failover_delay_in_secs \     (5 recommended)
  -z failover_retries             (5 or less)


For those of you not familiar with RAC configurations I will briefly discuss a few of these parameters in relation to OID.

- aq_ha_notifications - when set to TRUE this enables logging of database HA status events to the OID logs; this is the recommended setting. If you really do not wish to record these events you can set it to FALSE.
- failover_method - the SELECT option fails over the OID database session and resumes any read operations from the point the database connection failed. This is the preferred approach for OID.
- failover_type - the BASIC option enables TAF; without this set, TAF is disabled. Note that PRECONNECT is not supported for server-side TAF, at least for the Oracle 11g Database.

After configuring the service for OID you must start the service using srvctl start <service-name>, and Oracle recommends validating the service and checking the parameter configuration using srvctl status and srvctl config.

Assuming you have the TAF-enabled database service configured and have proceeded to OID configuration, you will be asked for the OID database connection details by the configuration tool. The connection details are specified in the form shown below to the OID configuration utility when using a RAC cluster.

dbhost1-vip:1521:idmdb1^dbhost2-vip:1521:idmdb2@oiddb

The RAC instance targets are separated by a caret. This information is used to create the initial connection to the OID database for configuration and to generate connection information. The information is not used in this form for connections once OID is operational; instead OID uses tnsnames.ora. Note that the RAC SCAN address is not specified; while using SCAN may well work, its use is not specified in the OID documentation.

After configuration of OID is complete, a barebones TNS database service configuration will be present in a tnsnames.ora file in the ORACLE_INSTANCE/config directory. This TNS service configuration on the OID hosts will not contain TAF parameters, but will be used to connect to the RAC service created using srvctl prior to installation. As a result of the server-side TAF policy being configured on the RAC database cluster, when OID connects it will adopt the TAF policy assigned through srvctl and failover will be enabled.

Coherence and Oracle Access Manager

The final topic I will discuss in this article is how Coherence is used within an OAM cluster. The Coherence distributed cache system has two primary functions in an Oracle Access Manager configuration. These both aid the availability of the Oracle Access Manager infrastructure:

- Distribute configuration from the oamconsole application amongst clustered servers
- Maintain server-side user session information in a resilient and distributed cache

When a change is made to the OAM configuration in the oamconsole application the change is written to the OAM data stores but is also distributed by Coherence to the OAM managed servers. This use of Coherence allows real-time updates of configuration, policies and session
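To make the caret-separated descriptor format concrete, here is a small sketch that splits such a string into its per-instance connection details. The function name and the parsing approach are my own illustration, not part of the OID configuration tooling:

```python
def parse_oid_rac_descriptor(descriptor):
    """Split a caret-separated RAC descriptor of the form
    host1:port1:sid1^host2:port2:sid2@service into its parts.
    Illustrative only - not an Oracle-provided utility."""
    instances_part, _, service = descriptor.rpartition("@")
    instances = []
    for target in instances_part.split("^"):
        host, port, sid = target.split(":")
        instances.append({"host": host, "port": int(port), "sid": sid})
    return {"service": service, "instances": instances}

parsed = parse_oid_rac_descriptor(
    "dbhost1-vip:1521:idmdb1^dbhost2-vip:1521:idmdb2@oiddb")
print(parsed["service"])               # oiddb
print(len(parsed["instances"]))        # 2
print(parsed["instances"][0]["host"])  # dbhost1-vip
```

Each caret-delimited target maps to one RAC instance VIP; the trailing @-suffix names the database service the configuration tool will connect to.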

179 OTech Magazine #3 May 2014


Oracle Access Manager: Clusters,
connection resilience and Coherence
6/7 Robert Honeyman

management changes to the OAM services without the need for application or server restarts. Clearly restarting services would be highly impractical with such a critical infrastructure system, so propagating these changes to the managed servers is a key requirement.

Coherence clusters also store and communicate OAM user session information. The OAM user session information includes details such as session lifetime, idle timeout, state and persistence options. Coherence servers communicate with other cluster members via UDP using encrypted communications secured with mutual SSL. Only servers which have a recognized SSL trust relationship are able to participate in these communications. This provides wire security, but session information is not encrypted by Coherence in memory or in the OAM database if one is used for persisted sessions from the distributed cache. In the event that a database is not used, sessions evicted from cache are stored unencrypted on the file system. Additional measures to encrypt database or file system storage, such as Oracle Advanced Security, can be employed.

A copy of a user's OAM session is maintained in the local cache of the OAM cluster member server where the session is active. A secondary copy is maintained on other Coherence cluster servers in the distributed cache. In the event that the server holding the local copy is lost, Coherence uses a secondary copy in the distributed cache to replicate to another node. A new primary copy of the user session is reinstated to the local cache of a member server when a request is made to the OAM cluster. This allows the user session to continue uninterrupted. The diagram below illustrates the state of a single user session cached in an OAM server cluster configuration before and after a failure.

Figure 6 OAM Coherence and user session cache - Before failure
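The primary/secondary session-copy behaviour can be modelled in a few lines. This is a simplified sketch of the general pattern only - my own illustration, not Coherence's actual implementation or API:

```python
class SessionCache:
    """Toy model of a distributed cache that keeps a primary copy of each
    session on one node and a backup copy on another (illustrative only)."""
    def __init__(self, nodes):
        self.nodes = {n: {} for n in nodes}

    def put(self, session_id, data, primary, backup):
        self.nodes[primary][session_id] = ("primary", data)
        self.nodes[backup][session_id] = ("backup", data)

    def fail_node(self, failed):
        # Lose the failed node; promote backup copies held elsewhere.
        lost = self.nodes.pop(failed)
        for session_id, (role, data) in lost.items():
            if role == "primary":
                for store in self.nodes.values():
                    if session_id in store:
                        store[session_id] = ("primary", data)

    def get(self, session_id):
        for store in self.nodes.values():
            entry = store.get(session_id)
            if entry and entry[0] == "primary":
                return entry[1]
        return None

cache = SessionCache(["oam1", "oam2"])
cache.put("sess-42", {"user": "alice"}, primary="oam1", backup="oam2")
cache.fail_node("oam1")
print(cache.get("sess-42"))  # {'user': 'alice'} - session survives the failure
```

The essential point mirrors the article: because a secondary copy already exists on a surviving node, losing the node that held the primary copy does not lose the session.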

With Oracle Access Manager 11g Release 2 it is now possible to use client-side session management as an alternative to the default server-side Coherence option. In the client-side configuration the user's session information is stored only in a client-side cookie and the OAM server becomes stateless. This offers some advantages for large installations, such as reduced memory and client traffic, but also has more limited functionality for enterprise configurations. Client-side session management is an option you can explore if you require high throughput for large user bases.

Figure 7 OAM Coherence and user session cache - After failure

Qumu & WebCenter - Bringing Video to the Enterprise

Jon Chartrand
www.teaminformatics.com
www.linkedin.com/in/jonchartrand
They say a picture is worth a thousand words. If this is true, then 30 frames per second of HD quality video must be worth volumes. We all know the rise in consumer video has been meteoric. With more than a billion unique users a month spending more than 4 billion hours watching videos, Google's YouTube is the undisputed champion of online video and consumers continue to clamor for more. With video rated as 6 times more effective than print, businesses have been keen on leaping into the advertising and marketing side of the video trend, and with no small amount of success. What's been a much slower sell, however, is use of this medium for non-marketing activities in the enterprise, and the reasons for that can be summed up in three barriers: ownership, features, and integration.

When you compare the functionality of online video providers catering to consumers (cough... YouTube... cough) against the needs of a world-class enterprise, the immediate result is a glaring gap. Combined with the realization that online video users are expected to double to 1.5 billion by 2016 and yet, as of now, only 24% of national brands are using online video, this indicates there's a need to be filled through providing tailored, enterprise-class video services integrated into existing platforms. More specifically, a service that addresses those barriers to enterprise use... This is where I happily introduce you to Qumu (koo-moo) and their product Video Control Center. Shake hands, you're going to be friends.

While Qumu has already started making ingress to the enterprise market through successful partnerships with the likes of Vodaphone, Dow Chemical, and Safeway, my purpose here is to expand your insight when it comes to integrating enterprise video with Oracle's WebCenter platform. As an architect of WebCenter projects and installations, I've seen first-hand how some or all of the WebCenter applications can be used to great effect across all realms of the business. However, while the Digital Asset Management services of WebCenter are powerful, there is a functionality gap when it comes to true enterprise video services. This is my focus for today: introducing you to the functionality of a true enterprise video service and discussing how we can bring that power right into a WebCenter installation - no matter which pieces you're using.

Barrier 1: Owning Your Content

Let's start at the beginning with the first barrier: ownership. Several clients that began looking into using video internally have run up against the legal barrier of YouTube's Terms of Service. While there have been many analyses of YouTube's TOS language, the generally accepted short version is that by uploading to the service the uploader is granting YouTube rights to the content identical to those of the owner without actually conferring ownership.

This means, if it chose to, YouTube could appropriate all or part of any of your videos for use in their own promotional materials. While the same analyses have concluded that such appropriation is unlikely and the right extends only until the video is deleted, this has simply been a legal bridge too far. When it comes to advertising or marketing related materials, the quandary is minimal. For internal materials such as orientations or training, the potential ramifications are usually too much to leave to chance.

Qumu's TOS for Video Control Center has your business not only retaining full ownership but, unlike YouTube, it doesn't transfer use-rights to Qumu. This means your sensitive or internal materials remain that way. Beyond this, Qumu's Video Control Center API - how we'll integrate features and functionality with the enterprise - includes capabilities for user authentication, access controls for permissions, and singular private codes to ensure confidentiality.

Barrier 2: Enterprise-Class Features

The first, and in my opinion most important, feature of any enterprise-class service is a robust SDK/API structure. Qumu provides access to APIs which allow management of video content (browse, search, organize, publish, and view), authentication, adjustable security, and customizable video players. With support for HTML5, Flash, and Silverlight players, social engagement via ratings, comments, and sharing, and enablement of live webcasts, the API is incredibly strong and allows you to fully integrate not only the interfaces but the functionality of Video Control Center into your infrastructure.

When dealing with video content, one thing we always want to be wary of is network saturation. Our corporate networks may not necessarily be delicate, but a wise administrator always keeps an eye on utilization. When it comes to delivering streaming content, Video Control Center has your back with multi-format transcoding and the Qumu Pathfinder. Basically, when you upload your raw video, VCC transcodes that into several different formats and bitrates that you configure. When requested to deliver the stream, Pathfinder automatically detects who the user is, what type of device is being used, and what network the user is connected to, and then serves the appropriate version of the video.

Taken a step further, Qumu not only offers their own Content Delivery Network, VideoNet Edge, but their product also works with pre-existing 3rd party CDNs such as Cisco, BlueCoat, Akamai, and AT&T. You even get access to real-time data so you can see bytes transferred, active and failed requests and other performance metrics. Essentially, Qumu has done everything short of handing your viewers a BluRay disc to make sure they experience the best quality and your network remains solid.

Before a user can view a video, they need to find the video. Qumu's Speech Search indexes the audio of each video and, using proprietary algorithms, parses out the phonemes (the individual sounds which make up a word) instead of the words. This results in higher accuracy than typical speech-to-text methods. Each video receives a phonetic index file which is searched against to return weighted results. This means returns are generated quickly and accurately - even for dialects within languages.

Finally, Qumu also offers features to help you create content as well as distribute it. Their Quick Capture tool allows for browser-based production of content spanning screen recording, webcam video, and audio. This means high-quality content can be produced quickly and easily. Even better, Qumu offers live broadcasting so events like quarterly meetings or executive town halls can be offered out to those unable to attend in person.
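Qumu's actual speech-search algorithm is proprietary, but the general idea of keying on sounds rather than spellings can be sketched with a crude Soundex-style function. Everything below is my own toy illustration of the concept, not Qumu's implementation:

```python
def phonetic_key(word):
    """Very crude phoneme-style key (Soundex-like): keep the first letter,
    map similar-sounding consonants to one digit, drop vowels.
    Illustrative only - Qumu's real algorithm is proprietary."""
    groups = {"bfpv": "1", "cgjkqsxz": "2", "dt": "3",
              "l": "4", "mn": "5", "r": "6"}
    word = word.lower()
    key = word[0]
    for ch in word[1:]:
        for letters, digit in groups.items():
            if ch in letters and not key.endswith(digit):
                key += digit
    return key

def index_transcript(words):
    """Map each phonetic key to the words that produced it."""
    index = {}
    for w in words:
        index.setdefault(phonetic_key(w), []).append(w)
    return index

idx = index_transcript(["meeting", "meting", "budget"])
# 'meeting' and 'meting' collapse to the same phonetic key,
# so a search for either spelling finds both:
print(idx[phonetic_key("meeting")])  # ['meeting', 'meting']
```

The point of a sound-based index is exactly this collapsing: variant spellings or transcription errors of the same spoken word land on the same key, which is why phonetic search can beat plain speech-to-text matching.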

What we've covered here are, in my opinion, the level of features required of an enterprise-class video service. However, we don't want this set of features to live in a bubble, so the next question is: how do we get them to fit into the tapestry that is an existing WebCenter installation? The answer, thankfully, is easily.

Barrier 3: Integration into Existing Infrastructure

What distinguishes an enterprise from a small business is infrastructure; pieces put in place to ensure continuity of operation, consistency of processes, or scalability of action. For my clients, one of the primary pieces of infrastructure tends to be one or more components of Oracle's WebCenter platform. This means they're focused on owning and managing their content, building communities of structured collaboration, publishing dynamic and scalable web properties, or any combination of these. Each are inevitably tall orders for any business to achieve (and achieve well), but the WebCenter platform has proven itself again and again.

The goal of any addition to an enterprise architecture is not to simply slap in a new application or website but to integrate functionality into existing structures. We want to marry the old and new to improve what's there and make advances in functionality, scalability, durability, and/or supportability. This is how we've approached the concept of bringing enterprise video capabilities in WebCenter from theory to practice. The idea is to approach the integrations from either a surface perspective or from a depth perspective. Let's look at examples of both.

From the surface perspective, we attack the concept by integrating Video Control Center functionality into surfaced applications and editorial interfaces. Bringing the Video Control Center interfaces into your existing applications is a simple matter. You can choose to rebuild the functionality via the API or just surface the interface via iframe. Below, you can see a custom portal that has been developed and includes the native VCC My Programs interface.

From here, you have access to your entire video library, the ability to upload additional files as you create them, and even access to the Quick Capture tool to use as you need. From the editorial side, merging VCC videos into your content is just a matter of customizing the right editor with additional capabilities which access the API's retrieval and embedding functions. Below you can see how the simple addition of a video button allows an editor to choose and place a video within a blog post.

(Go to http://goo.gl/MB2Kj0 to see it as-captured by the Quick Capture tool.)

The same integration can be added to the CK Editor used by both Sites and Content for their respective contribution actions. This makes the front-end integration virtually seamless for editors and contributors to enterprise web properties across any WebCenter application.

Now while this level of integration works for the editorial side of the equation, what it still doesn't accomplish is fulfilling the mission of true content management. Those MP4 files you're uploading to Video Control Center still exist in your ecosystem - possibly as unstructured content living on a shared drive somewhere and that, my friends, is no way to live. You're losing the benefits that come with structured/managed content, which is why you (hopefully) have WebCenter Content in the first place. Let's picture how this integration would happen, utilizing the VCC API and WebCenter Content's powerful service-oriented architecture.

Instead of living unstructured, you check your raw MP4 into the repository like all the rest of your content. It's organized by metadata, secured by roles, revisioned as-needed, and stored in a secure location. Now we make use of the Qumu APIs and automatically upload the raw file to your Video Control Center library. Metadata tags from Content provide Video Control Center with basic information such as title and also what transcoding options were chosen. The API acknowledges the file and responds with its own vital information: transcoding data, pathfinder data, search index data, all of which is added to the metadata of the original content entry.
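One way to picture this metadata round-trip is as a small "pointer" document that the integration service would maintain alongside the checked-in MP4. The field names below are hypothetical placeholders (neither the WebCenter Content schema nor the VCC API is being quoted); the sketch only shows the merge of the two metadata sets:

```python
import json

def build_pointer(content_entry, vcc_response):
    """Merge a (hypothetical) WebCenter Content metadata record with the
    (hypothetical) fields returned by the video platform into one stub."""
    pointer = dict(content_entry)   # content ID, title, etc.
    pointer.update(vcc_response)    # stream URL, transcode info, etc.
    return json.dumps(pointer, indent=2)

content_entry = {"dDocName": "VIDEO_0001", "dDocTitle": "Q2 Town Hall"}
vcc_response = {"vccProgramId": "abc123",
                "streamUrl": "https://vcc.example.com/stream/abc123",
                "renditions": ["480p", "720p"]}
stub = build_pointer(content_entry, vcc_response)
print(json.loads(stub)["streamUrl"])  # https://vcc.example.com/stream/abc123
```

Any application that can read this stub from the repository can resolve the stream without ever touching the raw MP4, which is the behaviour the next section relies on.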

For those of you unfamiliar with primary and secondary files in WebCenter Content, think of the traditional roles for these to be played by a software installer application (primary) and the readme file which accompanies it (secondary). Both would be checked into Content under the same ID but be available separately. Used non-traditionally, our service would swap the MP4 from primary to secondary and then create a stub or pointer file containing all the vital Video Control Center information about the video in its place. With the information in this pointer file, virtually any application in your enterprise could be quickly and easily directed to the stream of that video.

This means any other application in your ecosystem could use WebCenter Content's services to access and utilize that pointer. As you update versions of your video, the pointer file is updated as well and, as follows, all applications accessing it dynamically receive the updated information. WebCenter Content retains all the revision information including the original MP4 files so you can go back at any time. Additionally, WebCenter Portal and WebCenter Sites (via connector) each capably share from the WebCenter Content repository. This allows you to unify not only your standard structured content but your streaming video as well.

The deeper integration represents a true fulfillment of owning your content when it comes to enterprise video. When combined with the editorial/surface integrations, your enterprise will be truly meshed with the Video Control Center functionality and capability. This is the true end-project goal for an enterprise integration.

Hopefully through the course of this treatise you've seen an example of what an enterprise-class video service can and should offer, but also just how that service can be integrated into an enterprise through the Oracle WebCenter platform. I'm excited to be able to bring this pillar of service to our clients around the world and enable them to communicate better both internally and externally without missing a beat when it comes to integrating with their existing ecosystem.

Oracle Data Guard 12c: New features

Mahir M Quluzade
www.mahir-quluzade.com
twitter.com/marzade
linkedin.com/in/mahirquluzade
This is my first article for OTech Magazine. In this article I'll touch upon some new features of Oracle Data Guard 12c.

Data Guard Overview

Oracle Data Guard ensures high availability, data protection, and disaster recovery for enterprise data. Data Guard provides a comprehensive set of services that create, maintain, manage, and monitor one or more standby databases to enable production Oracle databases to survive disasters and data corruptions. A standby database is a copy of the primary (production) database; if the primary database becomes unavailable, Data Guard can switch any standby database to the production role.

A Data Guard configuration consists of one database in the primary role, one or more (at most 30) databases in the (physical, logical or snapshot) standby role, and the Data Guard services. The Redo Transport service transmits redo data from the primary database to the standby databases in the configuration. The redo data transmitted from the primary database is written to the standby redo log on the standby database. Apply services automatically apply the redo data on the standby database to maintain consistency with the primary database: Redo Apply runs on a physical standby and SQL Apply runs on a logical standby database. The Role Transition service initiates a role transition between the primary database and one standby database in the Data Guard configuration.

Figure 1: Data Guard Configuration

Data Guard configurations have three protection modes: Maximum Availability, Maximum Performance and Maximum Protection. All three protection modes require that specific redo transport options be used to send redo data to at least one standby database. Data Guard offers two choices of transport services: synchronous (SYNC) and asynchronous (ASYNC).

You can use SQL*Plus to manage primary and standby databases and their various interactions. Data Guard also offers a distributed management framework called the Data Guard broker, which automates and centralizes the creation, maintenance, and monitoring of a Data Guard configuration. You can also use the Oracle Enterprise Manager GUI to manage Data Guard configurations.

Administration privilege: SYSDG

Oracle Database 12c provides an Oracle Data Guard-specific administration privilege, SYSDG, to handle standard administration duties for Oracle Data Guard. The SYSDBA privilege also continues to work as in previous releases. The SYSDG privilege enables operations such as startup, shutdown and alter database [1]. In addition, the SYSDG privilege enables you to connect to the database even when it is not open, as the SYSDBA privilege does.

USING CURRENT LOGFILE clause is deprecated

When preparing the primary database for standby database creation, best practice is to create Standby Redo Logs (SRLs) on the primary database. In this case your primary database will be ready to quickly transition to the standby role and begin receiving redo data. SRLs are important for the Redo Apply process when using Real-Time Apply, and also for the Maximum Protection and Maximum Availability protection modes. With Oracle Database 12c, creation of SRLs remains an important step of preparing the primary database for standby database creation [2], but by default the Redo Apply process now uses Real-Time Apply, so the USING CURRENT LOGFILE clause is no longer required to start it. In other words, the ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION command starts the apply process as Real-Time Apply.

Create Standby Database on Multitenant Container Database (CDB)

Oracle Database 12c introduces a new multitenant architecture [3] that makes it easy to deploy and manage database clouds.

Figure 2: Oracle Database 12c Multitenant Architecture

We can create a standby database (either physical or logical) only of a Multitenant Container Database (CDB), not of Pluggable Databases (PDBs), because in the multitenant architecture the CDB and all its PDBs use the same control file and a single common instance. So PDBs share a common control file and common background processes (for example LGWR). This means you cannot create a standby database for an individual PDB.

Online move data files

Prior to Oracle Database 12c, you could only move the location of an online data file if the database was down or not open, or by first taking the

file offline. But with Oracle Database 12c you can move data files online. An online move of a data file is performed with the ALTER DATABASE MOVE DATAFILE statement. It increases the availability of the database because it does not require the database to be shut down in order to move the location of an online data file. You can perform an online move data file operation independently on the primary and on the standby (either physical or logical). The standby is not affected when a data file is moved on the primary, and vice versa. But there are some restrictions [4] on moving data files online in databases where Data Guard configurations are set up.

Synchronous Redo Transport: FAST SYNC

Maximum Protection and Maximum Availability protection modes require synchronous (SYNC) transport in a Data Guard configuration. The SYNC redo transport process transmits redo changes from primary to standby database synchronously with respect to transaction commitment. A transaction cannot commit on the primary database until all redo generated by that transaction has been successfully sent to every standby database that uses the synchronous redo transport mode.

Note: There is no limit on the distance between a primary database and a SYNC redo transport destination, but transaction commit latency increases as network latency increases between a primary database and a SYNC redo transport destination.

Required redo transport attributes for data protection modes in Oracle Database 12c:

Maximum Availability:  SYNC,  AFFIRM or NOAFFIRM, DB_UNIQUE_NAME
Maximum Protection:    SYNC,  AFFIRM,             DB_UNIQUE_NAME
Maximum Performance:   ASYNC, NOAFFIRM,           DB_UNIQUE_NAME

Prior to Oracle Database 12c, SYNC AFFIRM was required in the LOG_ARCHIVE_DEST_n parameter for Maximum Availability mode, but with Oracle Database 12c we can use SYNC NOAFFIRM with the Maximum Availability protection mode. This feature is named FAST SYNC. [5] Fast Sync provides an easy way of improving performance in synchronous zero data loss configurations: it allows a standby to acknowledge the primary database as soon as it receives redo in memory, without waiting for disk I/O to a standby redo log file.

How to configure FAST SYNC?

It is very easy: you need only change the LOG_ARCHIVE_DEST_2 parameter attributes. If your Data Guard configuration is managed without the broker, then you must change LOG_ARCHIVE_DEST_2 as below. Connect to SQL*Plus on the primary side:

SQL> select cdb, name, database_role, protection_mode from v$database;

CDB  NAME   DATABASE_ROLE PROTECTION_MODE
---- ------ ------------- ---------------
YES  PRMCDB PRIMARY       MAXIMUM AVAILABILITY

SQL> select value from v$parameter where name = 'log_archive_dest_2';

VALUE
--------------------------------------------------------------------------------
service=stbcdb, SYNC AFFIRM db_unique_name=stbcdb, valid_for=(online_logfile,all_roles)

SQL> alter system set log_archive_dest_2 =
  2  'service=stbcdb SYNC NOAFFIRM db_unique_name=stbcdb
  3  valid_for=(online_logfile,all_roles)';

SQL> select value from v$parameter where name = 'log_archive_dest_2';

VALUE
--------------------------------------------------------------------------------
service=stbcdb, SYNC NOAFFIRM db_unique_name=stbcdb, valid_for=(online_logfile,all_roles)

As you can see, in my case both the primary database (prmcdb) and the standby database (stbcdb) are Multitenant Container Databases (CDBs). At this time I have a broker-managed Data Guard configuration with the same database names.

If your Data Guard configuration is broker-managed, then you must use the Data Guard Manager command line (DGMGRL) and change the LogXptMode property of the databases in the configuration to the FASTSYNC value. Connect to DGMGRL as SYSDG:

DGMGRL> show configuration

Configuration - dg

  Protection Mode: MaxPerformance
  Databases:
    prmcdb - Primary database
    stbcdb - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> EDIT DATABASE prmcdb SET PROPERTY LogXptMode='FASTSYNC';
Property "logxptmode" updated

DGMGRL> EDIT DATABASE stbcdb SET PROPERTY LogXptMode='FASTSYNC';
Property "logxptmode" updated

DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MaxAvailability;
Succeeded.

DGMGRL> show configuration

Configuration - dg

  Protection Mode: MaxAvailability
  Databases:
    prmcdb - Primary database
    stbcdb - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> show database prmcdb LogXptMode
  LogXptMode = 'fastsync'

Note: You can also configure FAST SYNC for the Maximum Availability protection mode with Oracle Enterprise Manager Cloud Control 12c.
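The AFFIRM/NOAFFIRM difference can be pictured with a toy latency model. The numbers and the function are invented purely for illustration; real commit times depend on your network and storage:

```python
def commit_latency_ms(network_ms, standby_disk_ms, affirm):
    """Toy model: with AFFIRM the primary waits for the standby's disk
    write; with NOAFFIRM (Fast Sync) it waits only for in-memory receipt.
    Illustrative numbers only - not a real measurement."""
    latency = network_ms          # redo must reach the standby either way
    if affirm:
        latency += standby_disk_ms
    return latency

print(commit_latency_ms(5, 8, affirm=True))   # 13
print(commit_latency_ms(5, 8, affirm=False))  # 5
```

The model captures why Fast Sync helps: the standby's redo log write time drops out of the commit path, while the redo still reaches the standby synchronously, so zero data loss protection is retained.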

New syntax of Role Transition

Data Guard can change database roles in a configuration. The role transition services perform a role transition between the primary database and one standby database in the Data Guard configuration. Oracle Data Guard supports the following role management services: switchover and failover.

A switchover is always a zero data loss operation regardless of the transport method or protection mode used. A failover brings a standby online as the new primary during an unplanned outage of the original primary database.

Figure 3: Role Transition - Switchover

A failover does not require the standby database to be restarted in order to assume the primary role.

Figure 4: Role Transition - Failover

Manual failover is initiated by the DBA using the Oracle Enterprise Manager GUI, the Data Guard broker's command line interface, or SQL*Plus. Optionally, Data Guard can perform automatic failover using Fast-Start Failover, but only in a broker-managed Data Guard configuration. Oracle Database 12c introduces new SQL syntax for performing switchover and failover operations to a physical standby database. [6]

Pre-12c role transition syntax for physical standby databases:

To switch over to a physical standby database, on the primary database:
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY;
Then on the physical standby database:
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

To failover to a physical standby database:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

12c role transition syntax for physical standby databases:

To switch over to a physical standby database:
SQL> ALTER DATABASE SWITCHOVER TO target_db_name [FORCE] [VERIFY];

To failover to a physical standby database, the following statement replaces the two statements previously required:
SQL> ALTER DATABASE FAILOVER TO target_db_name;
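As a quick illustration of the new statement's shape, this small helper assembles the 12c switchover command from its optional FORCE and VERIFY flags. The helper itself is my own illustration, not an Oracle-provided API:

```python
def switchover_statement(target_db, force=False, verify=False):
    """Build the 12c-style switchover statement shown above.
    Illustrative helper only - not an Oracle API."""
    parts = ["ALTER DATABASE SWITCHOVER TO", target_db]
    if force:
        parts.append("FORCE")
    if verify:
        parts.append("VERIFY")
    return " ".join(parts) + ";"

print(switchover_statement("stbcdb", verify=True))
# ALTER DATABASE SWITCHOVER TO stbcdb VERIFY;
```

Compare this single parameterised statement with the pre-12c flow, which needed separate COMMIT TO SWITCHOVER statements on the primary and the standby.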

As you can see, the new switchover statement has a VERIFY option. This statement is very useful for validating the target standby database: VERIFY performs checks for many conditions required for switchover. The verify operation writes alerts to the alert log file when it finds any problem for switchover. If there are no problems, then you get "Database altered" after calling the switchover statement with the VERIFY option:

SQL> alter database switchover to stbcdb verify;
ERROR at line 1:
ORA-16470: Redo Apply is not running on switchover target

This means the standby database (stbcdb) is not ready to switch roles to primary, because the Redo Apply process is stopped on the target standby database and the last archived logs may not have been applied.

In broker-managed Data Guard configurations you can do this task with the VALIDATE DATABASE command as below: [7]

DGMGRL> validate database stbcdb

  Database Role:    Physical standby database
  Primary Database: prmcdb

  Ready for Switchover: No
  Ready for Failover:   Yes (Primary Running)

  Flashback Database Status:
    prmcdb: Off
    stbcdb: Off

  Standby Apply-Related Information:
    Apply State: Not Running
    Apply Lag:   59 seconds
    Apply Delay: 0 minutes

  Current Log File Groups Configuration:
    Thread #  Online Redo Log Groups (prmcdb)  Standby Redo Log Groups (stbcdb)
    1         3                                2

  Future Log File Groups Configuration:
    Thread #  Online Redo Log Groups (stbcdb)  Standby Redo Log Groups (prmcdb)
    1         3                                2

The "Ready for Switchover" status shows whether the target standby database is ready to switch roles.

Far Sync - Zero Data Loss Protection at any Distance

In many Data Guard configurations, the primary database sends redo changes to the standby database(s) using asynchronous (ASYNC) transport. In Maximum Performance protection mode, when the primary fails we may experience some data loss. Prior to Oracle Database 12c, we used synchronous transport to achieve zero data loss. Sometimes that is not a viable option because of the impact on commit response times at the primary due to network latency between the two databases.

Oracle Database 12c introduces the new FAR SYNC instance [8]. A far sync instance is an archive destination that accepts redo from the primary database and then sends that redo on to the other standby database(s) of the Oracle Data Guard configuration.
A far sync instance manages a control file, receives redo into standby redo logs (SRLs), and archives those SRLs to local archived redo logs. It does not have user data files, cannot be opened for access, cannot run redo apply, and can never function in the primary role or be converted to any type of standby database.

Creating a far sync instance has the benefit of minimizing the impact on commit response times (due to the smaller network latency between the primary and the far sync instance) while providing higher protection guarantees. So, if the far sync instance was synchronized at the time of the failure of the primary database, the far sync instance would coordinate a final redo send from itself to the standby, then perform a zero-data-loss failover.

Figure 5: FAR SYNC Instance in Data Guard Configuration

How to configure a FAR SYNC instance?

I use my broker-managed Data Guard configuration and I will create the far sync instance on the same server as the primary database.

DGMGRL> show configuration

Configuration - dg

  Protection Mode: MaxPerformance
  Databases:
    prmcdb - Primary database
    stbcdb - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

1. Create the necessary folders for the far sync instance.

mkdir -p /u01/app/oracle/oradata/prmfs
mkdir -p /u01/app/oracle/admin/prmfs/adump

2. Create an initialization parameter file and a control file for the far sync instance.

[oracle@oel62-prmdb-12c ~]$ export ORACLE_SID=prmcdb
[oracle@oel62-prmdb-12c ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Wed Feb 12 16:42:31 2014
Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> create pfile='/u01/prmfs_pfile.ora' from spfile;

File created.

SQL> alter database create far sync instance controlfile as '/u01/app/oracle/oradata/prmfs/control01.ctl';

Database altered.

Edit the initialization parameter file for the far sync instance. The important parameters are control_files, db_unique_name and log_file_name_convert. The db_unique_name must be different from the db_unique_name of the primary or standby database(s) in the Data Guard configuration. We are not setting the db_file_name_convert parameter, because a far sync instance does not use data files.

prmfs.__data_transfer_cache_size=0
prmfs.__db_cache_size=318767104
prmfs.__java_pool_size=4194304
prmfs.__large_pool_size=8388608
prmfs.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
prmfs.__pga_aggregate_target=281018368
prmfs.__sga_target=524288000
prmfs.__shared_io_pool_size=16777216
prmfs.__shared_pool_size=167772160
prmfs.__streams_pool_size=0
*.archive_lag_target=0
*.audit_file_dest='/u01/app/oracle/admin/prmfs/adump'
*.audit_trail='db'
*.compatible='12.1.0.0.0'
*.control_files='/u01/app/oracle/oradata/prmfs/control01.ctl'
*.log_file_name_convert='prmcdb','prmfs'
*.db_block_size=8192
*.db_domain=''
*.db_name='prmcdb'
*.db_unique_name='prmfs'
*.db_recovery_file_dest='/u01/app/oracle/fast_recovery_area'
*.db_recovery_file_dest_size=4800m
*.dg_broker_start=TRUE
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=prmfsXDB)'
*.enable_pluggable_database=true
*.log_archive_dest_1='location=USE_DB_RECOVERY_FILE_DEST','valid_for=(ALL_LOGFILES,ALL_ROLES)'
*.log_archive_format='%t_%s_%r.dbf'
*.log_archive_max_processes=4
*.log_archive_min_succeed_dest=1
*.log_archive_trace=0
*.memory_target=768m
*.open_cursors=300
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.standby_file_management='MANUAL'
*.undo_tablespace='UNDOTBS1'


Note: If standby redo logs (SRLs) were preconfigured on the primary database before creation of the standby database, then the far sync instance will create SRLs automatically; otherwise you must add SRLs to the far sync instance manually.

3. Start the far sync instance. A far sync instance is always opened in MOUNT mode.

[oracle@oel62-prmdb-12c ~]$ export ORACLE_SID=prmfs
[oracle@oel62-prmdb-12c ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Wed Feb 12 17:10:24 2014
Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to an idle instance.

SQL> create spfile from pfile='/u01/prmfs_pfile.ora';

File created.

SQL> startup mount;
ORACLE instance started.

Total System Global Area  801701888 bytes
Fixed Size                  2293496 bytes
Variable Size             545259784 bytes
Database Buffers          251658240 bytes
Redo Buffers                2490368 bytes
Database mounted.

SQL> select name, db_unique_name, database_role from v$database;

NAME    DB_UNIQUE_NAME  DATABASE_ROLE
------- --------------- ------------
PRMCDB  prmfs           FAR SYNC

4. Copy the password file for the far sync instance from the primary database password file. The password file must be the same for every database of the Data Guard configuration, including far sync instances.

[oracle@oel62-prmdb-12c ~]$ cd $ORACLE_HOME/dbs
[oracle@oel62-prmdb-12c dbs]$ cp orapwprmcdb orapwprmfs

5. Add the far sync instance network service name to tnsnames.ora on both sides (primary and standby) as below:

PRMFS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oel62-prmdb-12c.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = prmfs)
    )
  )

Now we can add the far sync instance (prmfs) to our Data Guard configuration with DGMGRL. Connect to DGMGRL as SYSDG.

DGMGRL> ADD FAR_SYNC prmfs AS CONNECT IDENTIFIER IS prmfs;
far sync instance "prmfs" added

DGMGRL> ENABLE FAR_SYNC prmfs;
Enabled.

DGMGRL> show configuration;

Configuration - dg

  Protection Mode: MaxPerformance
  Databases:
    prmcdb - Primary database
    stbcdb - Physical standby database
    prmfs  - Far Sync (inactive)

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

To configure the far sync instance in the Data Guard configuration, we must set the new RedoRoutes property on the primary database and on the far sync instance. This property changes the LOG_ARCHIVE_DEST_n initialization parameters for the configuration of the far sync instance. The RedoRoutes property is also used for configuring cascaded redo transport destinations. [9]

Note: If the RedoRoutes property has been configured with a redo transport mode, then the mode specified by that RedoRoutes property value overrides the value of the LogXptMode property. The optional redo transport attribute specifies the redo transport mode used to send redo to the associated destination. It can have one of three values: [ASYNC | SYNC | FASTSYNC]. If the redo transport attribute is not specified, then the redo transport mode used will be the one specified by the LogXptMode property for the destination.

DGMGRL> EDIT DATABASE prmcdb SET PROPERTY RedoRoutes='(LOCAL : prmfs SYNC)';
Property "RedoRoutes" updated

DGMGRL> EDIT FAR_SYNC prmfs SET PROPERTY RedoRoutes='(prmcdb : stbcdb ASYNC)';
Property "RedoRoutes" updated

DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MaxAvailability;
Succeeded.

DGMGRL> show configuration

Configuration - dg

  Protection Mode: MaxAvailability
  Databases:
    prmcdb - Primary database
    prmfs  - Far Sync
    stbcdb - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

We can see the changes to the Data Guard related initialization parameters on the primary, the standby and the far sync instance after adding the far sync instance to the Data Guard configuration. The changes are as below:

On the primary database:

SQL> select name, value from v$parameter where name in
  2  ('fal_server','log_archive_config','log_archive_dest_2');

NAME                VALUE
------------------  ---------------------------------------
log_archive_dest_2  service=prmfs, SYNC AFFIRM db_unique_name=prmfs
                    valid_for=(online_logfile,all_roles)
fal_server
log_archive_config  dg_config=(prmfs,prmcdb,stbcdb)
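After switching to MaxAvailability, it is worth confirming on the primary that the far sync destination really is synchronized. A minimal sketch, assuming log_archive_dest_2 is the destination pointing at the far sync instance (the view and column names are standard in 12c; the values returned depend on your system):

```sql
SQL> select dest_id, status, synchronization_status, synchronized
  2  from v$archive_dest_status
  3  where dest_id = 2;
```

If SYNCHRONIZED shows YES, a zero-data-loss failover through the far sync instance is possible at that moment.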


On the far sync instance:

SQL> select name, value from v$parameter where name in
  2  ('fal_server','log_archive_config','log_archive_dest_2');

NAME                VALUE
------------------  ---------------------------------------
log_archive_dest_2  service=stbcdb, ASYNC NOAFFIRM db_unique_name=stbcdb
                    valid_for=(standby_logfile,all_roles)
fal_server          prmcdb, stbcdb
log_archive_config  dg_config=(prmfs,prmcdb,stbcdb)

On the standby database:

SQL> select name, value from v$parameter where name in
  2  ('fal_server','log_archive_config','log_archive_dest_2');

NAME                VALUE
------------------  ---------------------------------------
log_archive_dest_2
fal_server          prmfs, prmcdb
log_archive_config  dg_config=(stbcdb,prmcdb,prmfs)

Note: If you are not using a broker-managed Data Guard configuration, then you must change the appropriate parameters yourself with ALTER SYSTEM SET statements.

Real-Time Cascade
With Oracle Database 12c, a cascading standby database can either cascade redo in real-time (as it is being written to the standby redo log file) or non-real-time (as complete standby redo log files are being archived on the cascading standby).

Cascading standby databases have restrictions: only physical standby databases can cascade redo, and non-real-time cascading is supported on destinations 1 through 10 only, while real-time cascading is supported on all destinations. Real-time cascading requires a license for the Oracle Active Data Guard option.

Figure 6: Real-Time Cascade

With Oracle Database 12c, the Data Guard broker is able to manage a cascade destination standby database. To configure real-time cascading with DGMGRL, we must change the RedoRoutes property. The ASYNC redo transport attribute must be explicitly specified for a cascaded destination to enable real-time cascading to that destination. [10]

DMLs on Oracle Active Data Guard
Oracle Active Data Guard was introduced in Oracle Database 11g. With Oracle Active Data Guard we can open the standby database in READ ONLY mode while redo apply is running. The main purpose of Oracle Active Data Guard standbys is to offload read-mostly reporting applications. But sometimes reporting applications need global temporary tables for storing temporary data. Prior to Oracle Database 12c, global temporary tables could not be used on Oracle Active Data Guard standbys, which are read-only.

With Oracle Database 12c, global temporary tables can be used on Active Data Guard standbys. As of Oracle Database 12c, the temporary undo feature allows the undo for changes to a global temporary table to be stored in the temporary tablespace, as opposed to the undo tablespace. Undo stored in the temporary tablespace does not generate redo, thus enabling redo-less changes to global temporary tables. This is what allows DML operations on global temporary tables on Oracle Active Data Guard standbys. When temporary undo is enabled on the primary database, undo for changes to a global temporary table is not logged in the redo, so the primary database generates less redo. Therefore, the amount of redo that Oracle Data Guard must ship to the standby is also reduced, thereby reducing network bandwidth consumption and storage consumption.

To enable temporary undo on the primary database, use the TEMP_UNDO_ENABLED initialization parameter. On an Oracle Active Data Guard standby, temporary undo is always enabled by default, so the TEMP_UNDO_ENABLED parameter has no effect there.

Note: The temporary undo feature requires that the database initialization parameter COMPATIBLE be set to 12.0.0 or higher. The temporary undo feature on Oracle Active Data Guard instances does not support temporary BLOBs or temporary CLOBs.

Sequences in Active Data Guard
With Oracle Database 12c in an Oracle Active Data Guard environment, if a sequence is created on the primary database with the default CACHE and NOORDER options, then standby databases can use this sequence; a sequence created with the NOCACHE or ORDER option cannot be used on an Oracle Active Data Guard standby. For usable sequences, the primary database ensures that each range request from a standby database gets a range of sequence numbers that does not overlap with the ones previously allocated for both the primary and standby databases. This generates a unique stream of sequence numbers across the entire Oracle Data Guard configuration.

If you want standby databases to be able to use the whole range of a sequence, then you must create the sequence with the SESSION option on the primary database. A session sequence is a special type of sequence that is specifically
designed to be used with global temporary tables that have session visibility. Unlike the existing regular sequences (referred to as "global sequences" for the sake of comparison), a session sequence returns a unique range of sequence numbers only within a session, but not across sessions. Another difference is that session sequences are not persistent. If a session goes away, so does the state of the session sequences that were accessed during the session. [11]

Database Rolling Upgrade using Active Data Guard
More and more companies are placing increasing priority on reducing planned downtime and risk when introducing change to a mission critical production environment. With Oracle Database 12c, database rolling upgrades provide two advantages: [12]

Minimizing downtime: Database upgrades and alterations of the physical structure of a database (other than changing the actual structure of a user table) can be implemented at the standby while production continues to run at the primary database. When all changes have been validated, a switchover moves the production applications to the standby database. The original primary is then upgraded while users run on the new version. Total planned downtime is limited to the brief time required to switch production to the standby.

Minimizing risk: All changes are implemented and thoroughly tested at the standby database with zero risk for users running on the production version. Also, Oracle Real Application Testing enables real application workload to be captured on the production system and replayed on the standby for the most accurate possible test result.

The Rolling Upgrade Using Oracle Active Data Guard feature, new as of Oracle Database 12c, provides a streamlined method of performing rolling upgrades. It is implemented using the new DBMS_ROLLING PL/SQL package, which allows you to upgrade the database software in an Oracle Data Guard configuration in a rolling fashion.

Figure 7: Rolling Upgrade steps

Database rolling upgrades using Active Data Guard can be used for version upgrades starting with the first patchset of Oracle Database 12c. This means that the manual procedure included with Data Guard and
described earlier in this paper must still be used for rolling upgrades from Oracle Database 11g to Oracle Database 12c, or when upgrading from the initial Oracle Database 12c release to the first patchset of Oracle Database 12c.
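The rolling upgrade flow can be sketched with the DBMS_ROLLING package. The procedure names below are the package's documented entry points; the database name follows this article's configuration, and the surrounding work (running the installer on each site, and so on) is compressed into comments:

```sql
-- On the primary: designate the standby that will become the new primary.
SQL> exec DBMS_ROLLING.INIT_PLAN(future_primary => 'stbcdb');

-- Validate the configuration and build the upgrade plan.
SQL> exec DBMS_ROLLING.BUILD_PLAN;

-- Convert the standby to a transient logical standby and start the plan.
SQL> exec DBMS_ROLLING.START_PLAN;

-- (Upgrade the Oracle software on the standby site, then:)
SQL> exec DBMS_ROLLING.SWITCHOVER;

-- On the new primary, after the old primary has been upgraded:
SQL> exec DBMS_ROLLING.FINISH_PLAN;
```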

Conclusion
Oracle Data Guard is the disaster recovery solution for Oracle databases. With these new features, Oracle has expanded the capabilities of Oracle Data Guard: they increase the protection of your production database and broaden the use of Oracle Active Data Guard.

References:
Oracle Data Guard Broker 12c Release 1 (12.1)
Oracle Data Guard Concepts and Administration 12c Release 1 (12.1)



Introduction to Oracle Technology License Auditing
Peter Lorenzen
www.cgi.dk
twitter.com/theheatDK
http://www.linkedin.com/in/peterlorenzendk

It is important to keep track of which Oracle software you have installed to ensure license compliance. Licenses can be expensive, and when you buy software from Oracle, you automatically accept that Oracle can drop by for an audit.

There are many reasons for a lack of compliance, but ultimately it does not matter. If somebody forgets to delete an installation or adds an extra CPU to a server without buying extra licenses, you have a problem. Therefore, it is a good idea to do your own audit once in a while.

First, you need to understand Oracle's license vocabulary and components. Oracle License Management Services (LMS) is a good place to start (http://goo.gl/W4ICVM). These are the people that will visit you if Oracle decides to do an audit. They have created a Software Investment Guide (http://goo.gl/dBysjh) that will introduce you to Oracle licensing.

License agreements
When you buy an Oracle license, you sign a contract with Oracle. A part of this agreement is a document describing all the nitty-gritty license rules. These are the rules you have to be compliant with. Although Oracle continuously changes the rules, you only have to worry about the rules in your contract.

The rules are called the License Definitions and Rules (LDR). The LDR will often be included in an Oracle Master Agreement (OMA), or a Transactional Oracle Master Agreement (TOMA). An OMA, previously known as an OLSA, can be running for several years and in this case, the LDR will be in a separate document.

If you are an Oracle Partner, you can find the current LDR here (EMEA - http://goo.gl/lbcC0A).

The LDR does not contain all licensing details. This page (http://goo.gl/86f3je) lists some documents that are referenced from the OMA/LDR. An example is the Oracle Processor Core Factor Table (http://goo.gl/l7D8hn).

License documentation
If you are an Oracle Partner, you have access to other documents that can help you with regard to licensing. The Oracle Technology Global Price List Supplement (http://goo.gl/LD8eSN) will tell you exactly which products/components are included with a specific license, and which products must be licensed separately as a prerequisite.

The Oracle Technology Global Price List (http://goo.gl/LD8eSN) contains footnotes called Oracle Technology Notes. They contain the same rules as the LDR, but are organized differently and are easier to read.

Oracle has some much debated rules regarding server virtualization/partitioning and licensing. Partitioning is not mentioned in the LDR, so you
need licenses for all the hardware where the Oracle software is installed and/or running. This means that contractually you are bound to get licenses for the full hypervisor. Oracle has then created a policy document that describes some situations where you do not need to license the hypervisor, but only the guest machine/server. You can read about partitioning here: http://goo.gl/SG45Mv. This document is available to the general public.

Track your software assets
You need to document your licenses somewhere. Oracle has a spreadsheet you can use (http://goo.gl/XXpa3D). It is a bit rudimentary, but it is a start if you do not already have a way to document this.

Oracle offers both term and perpetual licenses. If you use a term license, remember to add the start/end date to the documentation.

You can find much more comprehensive documentation examples via Google. Here is one: http://goo.gl/giSgUR.

License Metrics
Oracle uses many different license metrics. For technology licenses, the most used metrics are Processor and Named User Plus (NUP). NUPs are users authorized to use the software, both human and non-human.

Hardware assets must be consolidated with software assets. If an extra CPU has been added to a server, an extra processor license or NUP license is probably needed. NUP licenses often have a minimum number of NUPs per processor. For example, the Oracle Database Enterprise Edition has a minimum of 25 users per processor. If you add an extra CPU with 4 cores and the CPU has a core factor of 0.5, you need a minimum of 50 extra NUPs.

As a side note, please note that Oracle ignores hyper-threading and only counts physical cores.

Auditing the installed software
There is no easy way to audit the installed software. Oracle does not supply any tools for this, so you have to do it manually, which can be time consuming.

You can buy third-party tools that gather the needed data from the servers, but I have never used any of them. Please note that LMS has only verified some of the vendors to produce the right data.

Now let's say you want to start by looking at your database servers. Most of Oracle's products use the Oracle Universal Installer (OUI), and this will by default create a Central Inventory of all the software that has been installed with the OUI. You can see how to locate the inventory here: http://goo.gl/XMgOYg.
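The pointer file and the inventory itself can also be read by hand. The sketch below uses a hypothetical sample inventory.xml; on a real Linux server, /etc/oraInst.loc (or /var/opt/oracle/oraInst.loc on Solaris) holds the inventory_loc, and the file to read is ContentsXML/inventory.xml underneath it:

```shell
# Hypothetical sample of a central inventory file. On a real server, read
# the inventory.xml found via the inventory_loc in /etc/oraInst.loc.
cat > /tmp/inventory_sample.xml <<'EOF'
<INVENTORY>
  <HOME_LIST>
    <HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/product/12.1.0/dbhome_1" TYPE="O" IDX="1"/>
    <HOME NAME="OraFR11Home1" LOC="/u01/app/oracle/product/forms112/fr_binaries" TYPE="O" IDX="2"/>
  </HOME_LIST>
</INVENTORY>
EOF

# Extract the LOC attribute of every registered Oracle Home.
homes=$(sed -n 's/.*<HOME [^>]*LOC="\([^"]*\)".*/\1/p' /tmp/inventory_sample.xml)
echo "$homes"
```

Each path printed is an Oracle Home you should then inspect with opatch, as described below.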


The inventory will lead you to the Oracle Homes on the server. Once they are located, you can use the opatch tool to list the products installed in an Oracle Home.

export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
cd $ORACLE_HOME
OPatch/opatch lsinventory

Oracle Database 12c 12.1.0.1.0
There are 1 products installed in this Oracle Home.

Next we need to find out which edition this 12c database installation is. If you log in to a database via SQL*Plus you will see a banner. Here are two examples:

Enterprise Edition:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

Standard Edition or Standard Edition One:
Oracle Database 11g Release 11.2.0.3.0 - 64bit Production

Unfortunately, Standard Edition and Standard Edition One are just license options and use the same software, making it difficult to know which is installed. You can see it in the OUI install log, if it has not been deleted. For more information check the MOS note titled "How to find if the Database Installed is Standard One Edition?" (Doc ID 1341744.1).

You can buy options and management packs for the database (http://goo.gl/ee6CMn). To figure out which options and management packs you are using, Oracle has created two scripts, option_usage.sql and used_options_details.sql. They can be downloaded from MOS - "Database Options/Management Packs Usage Reporting for Oracle Database 11g Release 2" (Doc ID 1317265.1).

The scripts are created for 11g but they work just as well for 12c. The 12c documentation also mentions them (http://goo.gl/JUHWXR).
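If you just want a quick first look, the data these scripts report on comes from the DBA_FEATURE_USAGE_STATISTICS view, which you can query directly. A simple sketch (the official scripts do much more work mapping individual features to licensable options and packs):

```sql
SQL> select name, version, detected_usages, currently_used
  2  from dba_feature_usage_statistics
  3  where detected_usages > 0
  4  order by name;
```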


The Oracle database has a built-in view that can help you if you use NUP licensing. Have a look at v$license (http://goo.gl/JmgTyv).

Moving on to Fusion Middleware (FMW) software, here it is also a good idea to start by looking in the OUI inventory, and then find the products using opatch.

export ORACLE_HOME=/u01/app/oracle/product/forms112/fr_binaries
cd $ORACLE_HOME
OPatch/opatch lsinventory

Oracle Forms and Reports 11g 11.1.2.2.0
There are 1 products installed in this Oracle Home.

Not all products use the OUI. An example is the WebLogic Server before version 12.1.2. The older releases of the WebLogic Server maintained a beahomelist file that lists all the directories where WebLogic is installed.

beahomelist location:
(UNIX) /home/oracle/bea/beahomelist
(Windows) C:\bea\beahomelist

The WebLogic Server can be licensed as the below three products:

Product            | Comments
-------------------|-----------------------------------------------------------
Standard Edition   | Does not include Clustering.
                   | Includes: TopLink, ADF, Web Tier, Java SE.
Enterprise Edition | Includes: TopLink, ADF, Web Tier, Java SE Advanced,
                   | Virtual Assembly Builder, WebLogic Software Kit for ODA.
Suite              | Prerequisite for options like OSB, SOA Suite etc.
                   | Includes: WebLogic Server Enterprise Edition, Java SE Suite,
                   | IAS Enterprise Edition, Coherence Enterprise Edition.
                   | Restricted-use: Management Pack for Oracle Coherence.

Coherence is distributed with the WebLogic installer, but you need a WebLogic Suite license or a specific Coherence license to be allowed to use it. To check if it is installed:

cd /u01/app/oracle/product/adf12c/oui/bin
./viewInventory.sh -jreLoc /u01/app/oracle/product/java_current/jre \
  -oracle_home /u01/app/oracle/product/adf12c -output_format report \
  | grep oracle.coherence

Component: oracle.coherence 12.1.2.0.0

WebLogic Server Standard Edition does not include clustering. To check if clustering is used, you can use the Admin Console or have a look in DOMAIN_HOME/config/config.xml.
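A quick scripted check along those lines, as a sketch (the sample config.xml below is hypothetical; in a real domain the file is DOMAIN_HOME/config/config.xml and configured clusters appear as <cluster> elements):

```shell
# Hypothetical stand-in for DOMAIN_HOME/config/config.xml; a real file
# is much larger.
cat > /tmp/config_sample.xml <<'EOF'
<domain>
  <name>base_domain</name>
  <cluster>
    <name>cluster1</name>
  </cluster>
</domain>
EOF

# A <cluster> element means clustering is configured, which is not
# covered by a WebLogic Server Standard Edition license.
if grep -q '<cluster>' /tmp/config_sample.xml; then
  echo "Clustering is configured"
else
  echo "No clusters found"
fi
```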


WebLogic Suite is the most complete WebLogic license. You need this if you want to buy options like OSB, SOA Suite etc.

Besides the three standard WebLogic Server licenses, there is also a restricted-use license called WebLogic Server Basic (http://goo.gl/pikP1k). If, for example, you buy a Forms and Reports license, it includes a restricted-use license for WebLogic. There are a lot of things you are not allowed to do when you use WebLogic Server Basic; the restrictions take up around 7 pages in the documentation. Oracle has created a WLST script that can verify most of these rules. You can get it from the MOS note "WebLogic Server Basic License Feature Usage Measurement Script" (Doc ID 885587.1). It is very simple to use.

The tip of the iceberg
This article has just scratched the surface of licensing and auditing. It is a complicated but important subject. You can avoid a lot of trouble by being proactive, knowing how Oracle licensing works, and knowing exactly what software is installed on which hardware.



OTech Magazine
OTech Magazine is an independent magazine for Oracle professionals. OTech Magazine's goal is to offer a clear perspective on Oracle technologies and the way they are put into action. OTech Magazine publishes news stories, credible rumors and how-tos covering a variety of topics. As a trusted technology magazine, OTech Magazine provides opinion and analysis on the news in addition to the facts.

OTech Magazine is a trusted source for news, information and analysis about Oracle and its products. Our readership is made up of professionals who work with Oracle and Oracle related technologies on a daily basis; in addition we cover topics relevant to niches like software architects, developers, designers and others.

OTech Magazine's writers are considered the top of the Oracle professionals in the world. Only selected and high-quality articles will make the magazine. Our editors are trusted worldwide for their knowledge in the Oracle field.

OTech Magazine will be published four times a year, every season once. In the fast, internet driven world it's hard to keep track of what's important and what's not. OTech Magazine will help the Oracle professional keep focus.

OTech Magazine will always be available free of charge. Therefore the digital edition of the magazine will be published on the web. OTech Magazine is an initiative of Douwe Pieter van den Bos. Please note our terms and our privacy policy at www.otechmag.com.

Independence
OTech Magazine is an independent magazine. We are not affiliated, associated, authorized, endorsed by, or in any way officially connected with The Oracle Corporation or any of its subsidiaries or its affiliates. The official Oracle web site is available at www.oracle.com. All Oracle software, logos etc. are registered trademarks of the Oracle Corporation. All other company and product names are trademarks or registered trademarks of their respective companies.

In other words: we are not Oracle, Oracle is Oracle. We are OTech Magazine.

Authors
Why would you like to be published in OTech Magazine?
- Credibility. OTech Magazine only publishes stories of the best-of-the-best of the Oracle technology professionals. Therefore, if you publish with us, you are the best-of-the-best.
- Quality. Only selected articles make it to OTech Magazine. Therefore, your article must be of high quality.
- Reach. Our readers are highly interested in the opinion of the best Oracle professionals in the world. And all around the world are our readers. They will appreciate your views.

OTech Magazine is always looking for the best of the best of the Oracle technology professionals to write articles. Because we only want to offer high-quality information, background stories, best-practices or how-tos to our readers we also need the best of the best. Do you want to be part of the select few who write for OTech Magazine? Review our writers guidelines and submit a proposal today at www.otechmag.com.

Advertisement
In this first issue of OTech Magazine there are no advertisements placed. For now, this was solely a hobby-project. In the future, to make sure the digital edition of OTech Magazine will still be available free of charge, we will add advertisements. Are you willing to participate with us? Contact us on www.otechmag.com or +31614914343.

Intellectual Property
OTech Magazine and otechmag.com are trademarks that you may not use without written permission of OTech Magazine. The contents of otechmag.com and each issue of OTech Magazine, including all text and photography, are the intellectual property of OTech Magazine.

You may retrieve and display content from this website on a computer screen, print individual pages on paper (but not photocopy them), and store such pages in electronic form on disk (but not on any server or other storage device connected to a network) for your own personal, non-commercial use. You may not make commercial or other unauthorized use, by publication, distribution, or performance without the permission of OTech Magazine. To do so without permission is a violation of copyright law.

All content is the sole responsibility of the authors. This includes all text and images. Although OTech Magazine does its best to prevent copyright violations, we cannot be held responsible for infringement of any rights whatsoever. The opinions stated by authors are their own and cannot be related in any way to OTech Magazine.

Programs and Code Samples
OTech Magazine and otechmag.com could contain technical inaccuracies or typographical errors. Also, illustrations contained herein may show prototype equipment. Your system configuration may differ slightly. The website and magazine contain small programs and code samples that are furnished as simple examples to provide an illustration. These examples have not been thoroughly tested under all conditions. otechmag.com, therefore, cannot guarantee or imply reliability, serviceability or function of these programs and code samples. All programs and code samples contained herein are provided to you "AS IS". IMPLIED WARRANTIES OF MERCHANTABILITY, NON-INFRINGEMENT AND FITNESS FOR A PARTICULAR PURPOSE ARE EXPRESSLY DISCLAIMED.



OTECH MAGAZINE

See you in
the summer...
