
Analyzing Oracle Workflow data for increased system performance

Rusty Schmidt

Apollo Group, Inc.

August 30, 2013


Table of Contents
Executive Summary
About the author
   E-mail
   Blog
   LinkedIn
About the company
About the Paper
Introduction
Initial Research
Understanding the data
Who we work with
Creating the plan
   Next steps
   How to purge
   Quick tips
Executing the plan
How our purge went
Our problem
A new problem
Possible gains
Actual gains
Our end results
Operational changes
The future
In Summary
Acknowledgements
Appendix
   Direct References
      External sources
      HWM Reference
      Workflow Reference
   SQL References
      SQL Reference 1, Script 1
      SQL Reference 1, Script 2
      SQL Reference 1, Script 3
      SQL Reference 2, Script 1
      SQL Reference 3, Script 1: Defined
      SQL Reference 3, Script 2: Actual Data
      SQL Reference 4, Script 1: How to find WF Items by month
      SQL Reference 4, Script 2: How to find WF Items by day
      SQL Reference 5, Script 1
      SQL Reference 6, Script 1
   Workflow alert references
      Workflow alert reference 1: Workflows by type and year
      Workflow alert reference 2: Workflows errored by week
      Workflow alert reference 3: Workflows which will not be purged
      Workflow alert reference 4: Workflows errored in last day
      Workflow alert reference 5: Workflows with no end dates
      Workflow alert reference 6: Workflow items not being purged
      Workflow alert reference 7: Workflow item report
Version Information


Executive Summary

The purpose of this whitepaper is to help you identify trending problems in your system regarding Oracle Workflow by learning what tools are readily available at your fingertips to get a handle on this large black box of data, where data lives on forever without your intervention. Included are many tricks and tips of the trade, such as methodologies for understanding your data, knowing which parties need to be involved in your project, and how to create a master document for your plan once you have all the information at your disposal on what needs purging.

I will present an example of how to use these tools, with a walkthrough of what more than 10 years of this data in a system looks like, including the steps we had to take in order to start purging the information. Additionally, I'll share a problem we encountered while purging, in the hopes that you can avoid the same fate, as I show how much purging is possible both in terms of data points in the tables and from an absolute disk space perspective.

The lessons within this whitepaper are simple enough to be picked up by individuals of many different skill levels and departments, so you can spot negative trends, understand the space requirements and the space available to be recovered, and change current operational habits so the Workflow product is no longer that large black box of data that just sits there collecting dust.


About the author

Erwin (Rusty) Schmidt is a support-focused individual with 7 years of EBS support experience relating to the Financials suite of products, such as Payables, Receivables, Fixed Assets, and Purchasing. His interest is in better ways to give the end user a great experience, and many times that lies within the internals of the database or application. Our group within Apollo is responsible for the Financials applications, with products including the Hyperion and Oracle EBS suites.

The author can be reached via these outlets:

E-mail: rusty.schmidt@apollo.edu

Blog: http://theoracleemt.blogspot.com

LinkedIn: www.linkedin.com/pub/rusty-schmidt/60/2a6/310/

About the company

The Apollo Group is the parent company of the University of Phoenix. As of Feb/Mar 2012, UoPX had 356,000 current students and an alumni population of 710,000, for over a million students through the history of the University.
Visit www.apollo.edu for more information.


About the Paper

This whitepaper was created as a result of being selected by the OAUG Committee
for the inaugural AppsTech Connection Point conference. If you are not a member
of the OAUG, please visit them at http://oaug.org/ and consider signing up as they
offer a wealth of knowledge regarding Oracle applications.


Introduction

Our end users were seeing the submission or approval of Internet Expense items run slowly until the operation would finally time out after an excessive period if it did not process successfully. At the same time, our home-grown access provisioning system was hanging as well: when the power users worked the items in their queues, the platform would time out on them, so they had to try multiple times before succeeding. Adding another vector to this perfect storm were warning signs from concurrent reports such as the Workflow Background Process, a report that used to take 3-5 minutes but had started taking 13 minutes to run. That does not sound like a horrible problem until you realize the report is scheduled to run every 15 minutes, which meant it was running in our system for almost 21 hours every day, adding undue stress and CPU cycles that needed to be reduced.

While we had identified this as a problem, we were at the same time in the middle of upgrading to a new database version and hardware platform in the form of a quarter-rack Exadata Database Machine X2-2. The new system is obviously faster, yet there is still an obvious advantage to cleaning up our data: the upgrade project collects a dividend because there is less data to export and import, saving hours of manpower, and we get storage back in our other Exadata instances.

We have identified a problem, but we do not yet know what to do about it.
The first thing I do when I do not have all the facts is to investigate and learn more
about the issue. So where to start?


Initial Research

First, we need to do some research to understand what our data footprint looks like for the Oracle Workflow product, and, almost as important, why it is the way it is. Having learned that we control our own purging destiny, we begin by looking at our concurrent requests. This is where a historical table of concurrent requests comes in handy: you can look at what ran previously in your system and identify gaps where the requests may have stopped running for some reason. Of course you can see what is currently scheduled to run in your system, but without that historical perspective, something is lost.

The next source of review is any currently running purge requests, as they may not be running correctly and could need some fine-tuning. Then we need to know why we are purging what we are purging: who set the purge process schedule, whether there was a reason they selected what they did, whether the business was consulted or informed, and finally whether there are audit concerns with purging information from the system. Many of these questions can be answered by consulting your documentation, such as a local repository housing information related to the purge processing for Oracle Workflow data, or some of the more advanced Oracle guides available online.

At a very high level, you need to know what the data looks like by identifying which tables are involved with the Oracle Workflow product and looking at your data distribution. Obviously an effort like this does not take place in a vacuum, so you need to consult external resources such as My Oracle Support, which is also home to the Oracle Workflow Analyzer script, as well as a document from Solution Beacon that really got me started, and the specific MOS notes used in this project, documented at the end of this whitepaper.


Understanding the data

Getting some knowledge under our belts about our system and the Oracle Workflow product in general allows us to better understand the data and find out how large our data footprint is. (SQL 1) We do this by calculating the space that the Oracle Workflow tables and indexes are taking up, after identifying which tables are in use; My Oracle Support Note 277124.1 has the details I started with. This allowed me to find the oldest items in our tables, which was important for many reasons, primarily because we needed to know our data distribution in order to know where to purge. (SQL 2) Doing so identified that the oldest item in our tables dated back to 2001, and that there was a sudden spike of over 6 million items in 2009 which could be purged, telling me where our efforts needed to be focused initially.
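The exact scripts we ran are reproduced in the appendix; as a rough sketch of the same idea, sizing and distribution queries along these lines can serve as a starting point (the APPLSYS owner, the WF_% name patterns, and DBA-view access are assumptions on my part, so verify the table list against MOS Note 277124.1 for your release):

```sql
-- Approximate footprint of the core Workflow tables and their indexes
-- (owner and name patterns are assumptions; check against Note 277124.1)
SELECT segment_name,
       segment_type,
       ROUND(SUM(bytes) / 1024 / 1024) AS size_mb
FROM   dba_segments
WHERE  owner = 'APPLSYS'
AND    (segment_name LIKE 'WF_ITEM%' OR segment_name LIKE 'WF_NOTIFICATION%')
GROUP  BY segment_name, segment_type
ORDER  BY SUM(bytes) DESC;

-- Data distribution by year, which surfaces both the oldest items
-- and any sudden spikes such as our 2009 backlog
SELECT TO_CHAR(begin_date, 'YYYY') AS yr,
       COUNT(*)                    AS items
FROM   wf_items
GROUP  BY TO_CHAR(begin_date, 'YYYY')
ORDER  BY yr;
```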

The identification of which Workflows are being used is an important data point that needs to be researched, as there is every possibility that you have left debris in your system in the form of items that are no longer being used. (SQL 3) (Originally ITEM_TYPE was used in this paper instead of Workflow, likely because I had been running my scripts and the table results had the ITEM_TYPE column in the heading, but Karen Brownfield helped me realize that terms like ITEM_TYPE and Workflow are synonymous.) Understanding the Workflows lets us find out what is running in the system that needs to be purged, categorize it in order to communicate with the business, and track our work as we go through the project. For instance, our system had self-service password change e-mails going out to users, powered by the Workflow product, but after switching to an integrated single sign-on platform we no longer needed to provide self-service password changes or the associated e-mails. Because only a small number of these workflows ever went out, they were never purged, and they now linger as data that is no longer relevant, taking up storage space that costs us money and dragging on performance through the bloat.

Finding out what our Workflow distribution looked like helped us find the biggest bang for our buck in the system, which is important if you are planning to purge: you may want to focus on the Workflow with several million open lines stretching back 5+ years rather than the Workflow numbering in the hundreds from the current year because it was just released into the environment. (SQL 4) This extends into identifying what the impact would be on our indexes, since purging affects your indexes just as it does your tables, and my research has led me to the conclusion that indexes are the "secret sauce" of the Oracle Workflow product: more than twice as much data existed in the indexes as in the tables.
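A distribution query in this spirit (a sketch only; the appendix scripts are the versions we actually ran) shows at a glance which Workflow types carry the open backlog:

```sql
-- Open (un-ended) items by Workflow type and year: the combinations
-- with millions of old rows are the biggest bang for the buck
SELECT item_type,
       TO_CHAR(begin_date, 'YYYY') AS yr,
       COUNT(*)                    AS open_items
FROM   wf_items
WHERE  end_date IS NULL
GROUP  BY item_type, TO_CHAR(begin_date, 'YYYY')
ORDER  BY COUNT(*) DESC;
```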


Who we work with

So now that I have identified an enormous backlog that needs to be purged, we need to detail exactly who (or which groups) we need to work with on the project. We followed the RACI model, an ITIL methodology that breaks down which parties are Responsible, Accountable, Consulted, and Informed, as charted out here:

This can obviously differ depending on who initiates the project, or how involved groups such as the DBAs, development, and the business are, but I believe that DBAs and development are absolutely Responsible: the DBAs need to be aware of the project and watching disk space, while the development team needs to make sure new workflow items will be addressed appropriately.

Initially we identified our business as a critical component of the project; they were Accountable and needed to be Consulted, so we worked with the business owners to make sure they approved the purge schedule we jointly developed. Below you can see an example of the communication and information sent to a business partner to bring them up to speed on the project, which included the names they identified the Workflows by (and the ITEM_TYPEs, so we could track them internally without re-running scripts later), the past purge schedule, and what was suggested on a go-forward basis. A key here is the word metadata; originally that just said data, but it is vitally important to make clear right up front that this is not the "real" data of expense reports or POs that will be purged.


This helps the business get introduced to the idea of metadata, and you may have to help them further to understand what metadata really is, as one of our business partners thought we wanted to purge the actual items that this Workflow data represents. Our discussions revolved around the fact that this data was merely a representation of what had already gone through our system, and it helped them understand the situation when I put forward the idea that this was metadata, or in other words, data about our data. This is also part of why we implemented a year barrier: we did not want to purge anything that could still be in discussions with the business, and with too short a purge duration there was a potential that we might wipe out something "in flight".

Depending on your company's situation, you may need to make sure the audit and compliance groups understand what you are doing and approve, which is especially important if your company or group is under a data retention policy. We stressed that this was (a) metadata, (b) naturally supposed to be purged, and (c) previously purged, so our audit and compliance group signed off. The business was also concerned about audit reports they were running against expense reports, but once we showed that those reports were built to look at the processed expense report tables and not this metadata, they were able to sign off as well. Of the parties Consulted, the business was the largest and most important because, in our scenario, they were the drivers of our purge schedule and co-signers on the Acceptance of the project.


Creating the plan

We know there is a problem, we have identified what we think the cause is, and we have worked out who needs to be involved; now we need to come up with a plan based on what we found in our research, which showed:

- a large backlog of data
- we were purging and did not even know it
- we had stopped purging some workflows
- we had not even started purging newer workflows

All of these warning signs tell us that we need to take some kind of action to get the system back on course.

Seeing that the year 2009 was a watershed moment, given the inflated number of items in the system from that year, it became our eventual target once I had learned enough about the Oracle Workflow system from our initial purge of the oldest Workflow items. Going a step further, we targeted a specific Workflow type that had stopped being purged at the end of 2008, which led to the huge backlog of 2009 data. Since we were starting a project that was new to us, we decided to gather as much data as possible in order to build a game plan for attacking the backlog, and one of the most important initial keys is to do this in QA first. Running the project in QA first allowed us to avoid performance issues with the purge reports running in PROD, since we could identify which reports took too long and tune them appropriately for the next run.

Another important key is to run the purge routines for a specific Workflow type, with a specific time span in mind, as I found this gives you more control over the purging. Additionally, we need to know what effect this will have on the tables, tablespaces, and indexes in our system, and monitoring the reduction of data in our tables tells us how effective our purging is. Helping to keep the project on course is the definition of a year barrier, a general cut-off year for how far back you will purge; for instance, we had data going back to 2001 and set a barrier of 2011 that we did not want to cross, so we had 10 years of
data to direct our attention toward, with all of the workflows between Jan 1, 2001 and Dec 31, 2010 eligible to be purged.
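With the barrier chosen, a quick count of the candidates inside the window keeps the tracking spreadsheet honest. This is a minimal sketch assuming the 2011 barrier described above:

```sql
-- Items created before the year barrier (everything from 2001 through
-- Dec 31, 2010 in our case) that are candidates for the purge project
SELECT item_type,
       COUNT(*) AS candidates
FROM   wf_items
WHERE  begin_date < DATE '2011-01-01'
GROUP  BY item_type
ORDER  BY COUNT(*) DESC;
```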

In the end, why are we doing this? We are trying to make the system run better by avoiding timeouts and other negative performance, while also trying to save money in the process, because time spent trying to use a system that only times out is money wasted for the company.

Next steps

The results from our investigation showed that we had 30 GB of data in the Workflow tables, spread across more than 200 million items, of which more than 90% were older than 2 years. We also found Workflows that were not being purged at all, whether because purge reports were never scheduled in the first place, scheduled reports had become unscheduled due to system downtimes, or the wrong persistence type had been selected. This led us to initiate the purging project, but what has to happen first?

First, we need to finalize what the purge schedule should look like, as part of finding out what is currently being purged, or what may have been purged previously, to get a baseline for the purging policy that our group and the business need to agree on. From there, you can suggest an appropriate policy to the business based on the amount and type of data and how heavily it is used. For instance, expense reports are typically submitted and worked quite quickly so employees can be reimbursed in a timely fashion, but the business may want to leave the metadata in the system for one year; our access audit cycles run up to 10 quarters, so we chose to leave that metadata for 830 days.

Next, we want to create a master document of the scripts you have run and the associated purge reports in QA, which you will need to run again in PROD, as well as a spreadsheet that shows your progress across all of the tables, indexes, data, and percentages, updated with each purge routine you run. My tack was to start with the oldest year and move forward in time,
focusing on each Workflow in only that year until I understood all of the data, then using that to prototype a template that will hopefully fit the rest of the data going forward.

As the final stage of this pre-work, we have to identify the workflows that are dependent on other workflows being closed. In our case, we had workflows which depended on other workflow types by their very nature; typically you will see WFERROR as a required workflow that needs to be completed or purged first, just as POERROR has to be run before REQAPPRV. I realized this when I started trying to purge items that had been in the system for quite a while, yet they remained; the only change that worked was to purge out the other Workflow first and then retry, after which the original item set would purge.
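The parent/child relationship is visible in WF_ITEMS itself through the PARENT_ITEM_TYPE and PARENT_ITEM_KEY columns, so a sketch like the following can flag blocking children before you schedule a purge:

```sql
-- Open child workflows (typically WFERROR) still attached to a parent;
-- the parent will not purge until these are completed or purged first
SELECT item_type          AS child_type,
       parent_item_type,
       COUNT(*)           AS open_children
FROM   wf_items
WHERE  parent_item_type IS NOT NULL
AND    end_date IS NULL
GROUP  BY item_type, parent_item_type
ORDER  BY COUNT(*) DESC;
```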

How to purge

I have talked about purging for quite a while, but I have not yet said how to accomplish it. The report Purge Obsolete Workflow Runtime Data does the heavy lifting for your Oracle Workflow purging activities, and here is a sample set of parameters we ran it with in our system:

It is absolutely important to understand these parameters. The first parameter is the display name of the Workflow, not what you see as the ITEM_TYPE in the tables. The 279 days corresponded at the time to 01-SEP-2011, so everything created prior to that date would be purged if eligible. Of critical importance is the correct identification of the persistence type, which in this case was Temporary, and 500 is the number of items processed before a commit point. Of the parameters above, the persistence type is likely the trickiest and least understood, as the setting is made without any true guidelines and the development team can pick either option in the workflow definitions. This means that if you run the purge process for a workflow which is set as Temporary, but you choose Permanent, nothing will be purged even if there are eligible items.
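Under the covers, the concurrent program drives the WF_PURGE PL/SQL package, and the same work can be sketched as a direct call. Note that, unlike the report, the API takes the internal item type code rather than the display name; WFDEMO below is just the seeded demo type standing in for your own:

```sql
-- Hedged sketch of calling the purge API directly; WFDEMO is a
-- placeholder item type, and the end date mirrors the report's
-- "279 days" parameter resolving to 01-SEP-2011
BEGIN
  wf_purge.total(itemtype => 'WFDEMO',
                 itemkey  => NULL,   -- all item keys
                 enddate  => TO_DATE('01-SEP-2011', 'DD-MON-YYYY'),
                 docommit => TRUE);  -- commit as it goes
END;
/
```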

Quick tips


There are some easy tricks for keeping track of your purging, because just as with a real-life patient there are a lot of "vital signs" you need to monitor to judge the health of your purge efforts. Purging only a specific Workflow for a specific year allows the purge program to run more efficiently, and limiting activity by Workflow type lets you track purging progress more closely. Monitoring our initial results helped make sure items were being purged appropriately and allowed me to change the approach when they were not: either I had the wrong persistence type, or I had to investigate our data to understand why it did not match the criteria for purge eligibility.
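One vital sign worth checking before every run is the persistence type itself, since a mismatch silently purges nothing. A lookup along these lines pairs each internal code with its setting (WF_ITEM_TYPES_VL and its columns are what I believe the definition view to be; verify in your instance):

```sql
-- Persistence settings per Workflow definition; 'TEMP' vs 'PERM' here
-- must match the persistence type chosen on the purge report
SELECT name,
       display_name,
       persistence_type,
       persistence_days
FROM   wf_item_types_vl
ORDER  BY name;
```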


Executing the plan

Below is a cross section of workflow items that are eligible for purging:

This is a representation of one of our oldest Workflows and the time slice associated with it, which I wanted to target. For some Workflows like the one above I ran the purging report but nothing purged, so I had to investigate the persistence type further and gained a better understanding of that factor. Even after getting the persistence type right, items still were not purging, which makes sense because they had failed to purge years ago; this is when I had to look closer at the data in the tables and then correct it in order to close out the items.

Why did we have to correct the data? We had to "close out the items" that had been left incomplete by users or administrators, as even the System:Error (WFERROR) workflows need to be completed before the underlying and associated workflow items can be purged from the system. How did we correct the data? I started by populating the END_DATE column in the WF_ITEMS (SQL 5) and WF_ITEM_ACTIVITY_STATUSES (SQL 6) tables, and in every case of what I was able to purge from our system, this was the only data that needed to be corrected or addressed. Then I re-ran the same purge routine with the same correct persistence type parameters, and only after the data was cleansed did we see it get purged. After that, the metadata in these tables was purged and forever out of our system, giving us valuable disk space back.
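The END_DATE back-fill can be sketched as below; the Workflow type and cut-off date are illustrative stand-ins, and in practice you should scope the WHERE clauses to the exact stuck rows you have verified. This is a direct data correction, so test it in QA and take a backup first:

```sql
-- Illustrative close-out of stuck items; item type and date are
-- placeholders, not the exact values from our system
UPDATE wf_items
SET    end_date = SYSDATE
WHERE  item_type  = 'WFDEMO'
AND    begin_date < DATE '2009-01-01'
AND    end_date IS NULL;

UPDATE wf_item_activity_statuses
SET    end_date = SYSDATE
WHERE  item_type  = 'WFDEMO'
AND    begin_date < DATE '2009-01-01'
AND    end_date IS NULL;

COMMIT;
```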


How our purge went

Having identified and understood our baseline, I created a master document to capture what the Oracle Workflow data looked like in the system before it was ever touched. In our QA environment the baseline by Workflow looked like:

The eagle-eyed observer will add up all the numbers here and see that some objects are missing, and that is true: I cut several of the low-hanging fruit from the top so the graphic would fit on the presentation page. The key in this graphic is the baseline of over 21 million objects in the WF_ITEM_ACTIVITY_STATUSES table.


Here is the baseline by table in our QA environment; key in on the fact that we have over 215 million items in the tables listed:


Next is a side-by-side comparison of our QA baseline and the end result we
achieved after two weeks of working on the project. You can see we purged
almost 80% of the Oracle Workflow objects from the
WF_ITEM_ACTIVITY_STATUSES table!


If we look at our purge progress in QA from a table perspective, almost 75%
of the Oracle Workflow data in the tables had been purged out at the end of two
weeks!

One note of interest is that I found documentation which indicated the below
tables would be purged:
WF_LOCAL_USER_ROLES
WF_USER_ROLE_ASSIGNMENTS
WF_LOCAL_ROLES
Yet purge after purge, the needle never moved for these tables, not even by a
single row. To be honest, this could have been due to operator error or some
other condition I am not yet aware of; so far it is a mystery I cannot quite
explain with the documentation as it stood, though one possibility is that
there are simply no purgeable workflows in these tables. Karen Brownfield did
tell me that these are the directory services tables, which act as the base
tables underlying the Define User form, so they contain all responsibilities
for all employees: in short, any role to whom a notification can be sent. When
the purge documentation states that these tables will be purged, it applies
only to rows where partition_id = 0, indicating ad-hoc users that no
longer have notifications addressed to them. This means that very little data
would ever be purged from these tables, and now I need to understand why
nothing was purged in our scenario.
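A quick way to sanity-check that theory is to count the ad-hoc rows that are actually eligible. This query is mine, not from the purge documentation, and it assumes the PARTITION_ID column behaves as described above:

```sql
-- Hypothetical eligibility check: only rows with partition_id = 0
-- (ad-hoc users) are candidates for purging from these tables.
select 'WF_LOCAL_ROLES' as table_name, count(*) as adhoc_rows
  from wf_local_roles where partition_id = 0
union all
select 'WF_LOCAL_USER_ROLES', count(*)
  from wf_local_user_roles where partition_id = 0
union all
select 'WF_USER_ROLE_ASSIGNMENTS', count(*)
  from wf_user_role_assignments where partition_id = 0;
```

If these counts come back zero, a purge that never moves the needle is exactly what you would expect.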


Our problem

Of course no presentation (or paper) is complete without sharing something
that went wrong, so I will show you a set of snapshots from my master document
with some commentary in between, and I am asking you to spot what our problem
is as well as what may have caused it.

Here we are at the start of the purging effort, where you can see some of the
additional details I am tracking: the daily difference in the number of rows,
how much of the previous day's volume was purged out today, and how much has
been purged out overall.


On the first day of purging, you can see we made a substantial dent in the
system by purging out 62 million rows for 29% of the volume. Great progress so
far!


Obviously, being on a production support team and helping out with the
Exadata QA project as well, there were periods when I could only do a small
amount of work on the purging project. This is one of those days, as only ten
thousand items were purged, less than one tenth of a percent.


The next day was another story, though, as 15% of the previous day's volume
was purged out, bringing us to a 40% purge level for the tables overall.


Again, we are making steady progress here, with 13% of the previous day's
volume purged out, getting us closer to 50% of the tables being purged
overall.


Another 10% purged against the previous day's volume gets us over 50% for
the first time, but still no issues.


A large chunk is purged out on this day, with 45% of the previous day's
volume gone, bringing the total to almost 75% of our data in the tables purged.

While almost 80 thousand items are purged out of the tables, we are starting
to hit some real diminishing returns here, as that accounts for less than 1%
of the previous day's volume and really did not move the needle much overall.


So by now you must be asking yourself, "Where is the problem?" and it is
right here:

Unknown to me at the time, as we went forward through the project, day by day
we were approaching a critical mass in the volume of data left in our tables.
It was at this point, on June 14th, that the trouble showed up. I was
re-running my scripts to check the project status regularly throughout the
day, and early in the morning the union script, which normally finished in a
minute or two, started taking longer. First it ran for 20 minutes, then later
in the day after more purging the same script took an hour, and finally the
next day it would not finish running at all.


A new problem

We started with a clear problem we were trying to fix, and have
potentially created another one, so we need to investigate why the system now
behaves the way it does. Where did we go wrong, and what caused our query to
slow down? When this happened I had no idea, but I was really glad it was
in QA and we had no end users in the system. I did some cursory unit testing of
the touch points for Oracle Workflow that I detailed initially, and nothing
seemed to be out of place or performing badly; yet I knew there was a problem
in the system and I could not leave it in this state.

Okay, I am not an Oracle genius yet, but during the same year I started to get
more comfortable with some of the internals of the system, so when the script
stopped performing on the 18th I was able to see in the GV$SESSION table that
the wait event it was on had HWM in it, which led to an investigation of what
HWM meant. What is the HWM? It is the boundary between used and unused
space in a segment, called the High Water Mark. That is Oracle's definition,
but it really does not tell me a whole lot. Other sources on My Oracle Support
led me to conclude that the HWM region represents roughly the last 25-35% of
your table, which fits with the data shown, as on the 14th this line in the
sand was crossed for several tables:
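For reference, the kind of session check I ran looked roughly like this (a sketch from memory; the exact columns you inspect may vary by database version):

```sql
-- Hypothetical diagnostic: find sessions currently waiting on an
-- HWM-related event across all RAC instances.
select inst_id, sid, sql_id, event, state
from gv$session
where event like '%HWM%';
```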

Included in the Appendix are several references I used for understanding and
diagnosing the HWM issue, as well as some suggestions on what you can do to
remedy it without having to do an export and import like we were planning on
doing with Exadata anyway. I've also been told that the newer versions of the
Workflow Analyzer now show this type of information as well!
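One remedy covered in those references is an online segment shrink, which compacts the segment and lowers the HWM in place. This is a sketch, assuming the table lives in an ASSM tablespace under the standard APPLSYS schema and that a maintenance window is available, since moving the HWM takes a brief lock:

```sql
-- Hypothetical remedy (see SEGMENT SHRINK and Details, ID 242090.1):
-- compact the segment and lower the HWM without an export/import.
alter table applsys.wf_item_activity_statuses enable row movement;
alter table applsys.wf_item_activity_statuses shrink space cascade;
alter table applsys.wf_item_activity_statuses disable row movement;
```

The CASCADE clause shrinks the dependent indexes along with the table.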


Not wanting to leave our production system in disarray, even for a day or
two before the Exadata implementation, we changed our approach for PROD by
dialing our purge target back from the 75-80% achieved in QA and setting our
sights on purging around 50% of our data, which should leave the system in a
safe state.


Possible gains

I showed this before, but it bears repeating: in QA we purged about 75% of
the data in our tables and almost 80% of the objects from the
WF_ITEM_ACTIVITY_STATUSES table; below is the detailed information:

Next up are the potential PROD space savings, which were achieved in QA:

Here we see the breakdown of the 30 Gigs dedicated to our Workflow product in
this instance, with 9 Gigs residing in our tables and 21 Gigs in our
indexes. This data distribution between tables and indexes doesn't seem exactly
right, but at the end of the results table you can see that the space I
calculated was saved by purging is remarkably similar to the actual item counts
purged out of the tables themselves, so we have proven out the total reduction
of space, including the indexes. Key in on the amount of data purged: over 160
million rows for about 23 Gigs of data.
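The table-versus-index breakdown above comes from the data dictionary; this is a hedged sketch of how such a breakdown can be measured (the WF_% name filter is a simplification of what I actually tracked, since it also catches a few non-runtime objects):

```sql
-- Hypothetical space breakdown for Workflow segments, in gigabytes,
-- split by segment type (TABLE vs INDEX).
select segment_type,
       round(sum(bytes) / 1024 / 1024 / 1024, 1) as gb
from dba_segments
where segment_name like 'WF\_%' escape '\'
group by segment_type
order by gb desc;
```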


Actual gains

In the example below we see the same results table as before, but over the
course of just two days, by utilizing our master documents, we were able to
purge out about 33% of the data in our tables and objects from the
WF_ITEM_ACTIVITY_STATUSES table in PROD:

Here are the actual space savings which we achieved in PROD with our
purge:

We start with the exact same breakdown of the 30 Gigs in this instance, with
9 Gigs still residing in our tables and 21 Gigs still in our indexes. You can
see that while we targeted 50% as our safe barrier, we did not make it to that
level: we only hit 33%, while purging almost 100 million FEWER rows of data.
This might be seen as a failure, but note we did hit 50% space reduction in the
WF_ITEMS table, so we felt this was a good place to stop.


Our end results

So we had a problem, identified the root cause, made a plan for resolution, put
it into effect, and now we need to understand our results. Good news!
Concurrent manager reports, including the Workflow Background Process,
started to run in a more timely fashion, and the approval of Internet Expenses
became quicker for the end users, as problems with timeouts while submitting or
approving stopped. Power users no longer received timeout messages in our
access provisioning platform, and because they stopped having problems they
started saving time, which as I said before is money saved for the company.

We removed 10 Gigs of space from our platform, which reduced the export and
import time for the Exadata project in mock deployment runs in QA and then once
again when we went live in PROD. Obviously the space savings are directly
related to the cost of storage for any system, and clearing out 10 Gigs of
tables and indexes has a direct positive result on our bottom line for the
core infrastructure of the system. Additionally, when you have multiple
instances for PROD, QA, DEV, projects, sandboxes, etc., you will get space
savings in all of these instances as well, so you can reduce more from your
bottom line with a single purging effort.

We went into this project in QA to "just purge data" and we realized that
purging too much can have significant negative impacts, so we decided where
our threshold for PROD should be and we purged just the data that we wanted to
remove from the system. This meant we did not introduce negative user
experiences the week before our massive system upgrade to the Exadata platform.


Operational changes

Having come to a better understanding of the Workflow product, we were able to
make our group more proactive by creating alerts which tell our team about
unpurged workflows that are just sitting in the system. My initial research
had shown that there were orphans in the system as well, which were not being
purged because some of their data was missing from other tables, so not all of
the purge conditions were being matched; we now have alerting based on this
scenario. Additionally, we get alerts on which workflows have errored out in
the past day or week, so we can see how many errors have come through the
system and fix potential user issues before they become tickets routed to
our team.

My boss had wanted some type of reporting on how much work our team members do
with the Oracle Workflow product, and this investigation led to an
understanding of the data, so this could be delivered at the end of the week
with an alert. We also added new purge routines for items that were not being
purged, restored other purge routines which had been lost over time, and
gained a better understanding of how the routines are used and what other
items need to be addressed.

This may seem odd in a section about operational changes, but we need to
help the development team understand their choices for workflow persistence
types, and work with them more closely so that when a new workflow goes in, we
are aware of it and can set up the appropriate purge reports. Our initial work
with the business did not mark the end of our operational responsibility with
regard to workflows and keeping the business informed, as we have given them
the power to make their own choices for their sections of the application.

Since we now know the operational tasks which need to be completed, we can
initiate the discussions with the business to make sure new purge routines are
scheduled, so our data is not kept forever using up our resources. A key here
is that for all of these items, you need to periodically review the approach
and also repeat the steps as needed.
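Scheduling purge routines ultimately comes down to the standard Workflow purge APIs; here is a hedged sketch of a per-item-type purge call (the item type and 90-day retention window are illustrative assumptions, not our actual schedule):

```sql
-- Hypothetical scheduled purge using the WF_PURGE API: remove
-- completed WFERROR items older than 90 days. In practice this is
-- usually run via the "Purge Obsolete Workflow Runtime Data"
-- concurrent program rather than called directly.
begin
  wf_purge.total(itemtype => 'WFERROR',
                 itemkey  => null,
                 enddate  => sysdate - 90,
                 docommit => true);
end;
/
```

Wrapping this per item type lets you give each workflow the retention window the business agreed to.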


The future

While I am not a fortune teller, there are some things I see on the horizon
for the next phase of this project. In reliving this project and going back to
sources like My Oracle Support, I have found new things added or clarified over
the past year, so I feel like I need to start over with my research. We need
to continue working with our business partners, because I want to renew their
approvals each year, and this means revisiting our data to answer some of
these questions:

How has our data been during the past year?
Did we miss purging any workflows?
Did a new workflow go in that we have not purged?
Will we be able to purge more workflow items?
Do we know all of our gaps?

The possibility exists that we have missed something which needs resolution,
and an alert could be created to make sure the condition does not happen again.
We have a lot of documentation about this, but it needs to be formalized, along
with processes, so others can pick up where I left off, especially since we are
planning on going to R12; depending on the mechanism behind how that migration
is done, we could have another export/import available to us in which we can
purge another massive amount of data.


In Summary

This is pretty straightforward, but important to mention again: we delivered
savings to the Exadata implementation team in terms of space, hours and money.
Why is this SO important? This kind of purging effort can save you space, hours
and money across all of your environments and future refresh efforts.

In addition, by doing this we reset the HWM for our tables, thanks to the
export/import, which will make avoiding HWM trouble easier in the future
because we are now starting from compact, fully-used tables instead of tables
in or near that HWM danger zone.

Not only is the business a participant in the purging process now, but the
system is performing without issues, as they expect, so the business is a
happy bunch.

Also, in the support game, being proactive gets you ahead of issues and lets
you build a more complete picture of your environment than if you were just
being reactive, so our new alerts fit that bill perfectly.

In the end, we carried forward less data and had a much smaller Oracle
Workflow footprint, while learning how to let the system maintain Workflow
itself and tell us when adjustment is needed.


Acknowledgements

Greg Tripp - Without his encouragement and guidance, from the initial
suspicions of a problem lurking in our system to making sure that I had
everything possible available in order to pull this project off, there would
be no presentation or whitepaper for OAUG.

Guna Bellampalli and Suresh Lakshmanan - After drawing up the master
documentation for what our data looked like and showing it to Guna, he was
extremely supportive of this project running in parallel in the pre- and
post-Exadata upgrade environments in QA, while Suresh was my point person on
the DBA team and worked with me to make sure I was not stepping on any toes
during their steps.

Ed Paquette and Al Rodriguez del Villar - Both of these gentlemen have
allowed me to present for OAUG, so without their approvals you would not have
been able to read this whitepaper.

Morgan Mills, Pradeep Pai, Sunitha Uppala, Viswa Vadlamani, Duane Cluff,
Shashiprakash Ganji, and Varaprasad Balmoor - the FSG and HR Support
teams - During the implementation phases of this in QA and PROD, my focus was
almost entirely on this project, which left my team to pick up some of the
slack for me, and it is greatly appreciated. Also appreciated is their taking
the time to help review a very early draft of my presentation, as well as the
HR Support team joining us in this review to give me an outside perspective on
the project.

Malynda Crockett, Guna Bellampalli, Sue Gentrup, Alpesh Patel, Tina
Blalock, Lisa Vasquez, Chuck Harrison, Kris Crockett, and Dan James -
Again, my appreciation for reviewing an almost finished product and giving me
some great advice and ideas for my presentation.

Jennifer McDonald, Greg Tripp, Mary Lou Hodgins, Kim Rodriguez, Steve
O'Day, Karen Blum, and Mark A. MacFarlane - Without the approvals of the
business and audit partners, we would not have been able to go forward with our
project at all, so their time spent working through understanding the issue I
was bringing to them is greatly appreciated.


Bill Burbage, Lisa Scott, and the Oracle EBS Proactive CAB - Bill deserves a
lot of credit for this presentation and whitepaper getting off the ground, as
he has created just an awesome set of scripts which Oracle has called the
Oracle Workflow Analyzer. His presentation to the CAB really sparked my
imagination, and once I started digging into the issues the Analyzer brought
up, it was an almost never-ending rabbit hole I gladly went down.

Karen Brownfield - Karen has been a fount of wisdom for many years, and in
just the short period I've had the pleasure to interact with her, she has
given me many tips and tricks which I did not know, or did not even know to
start looking for. She is an inspiration and a mentor in the dark arts of the
Workflow product.


Appendix:

Direct References
External sources

Oracle Workflow Analyzer - It is listed below in the Workflow Reference section
under My Oracle Support ID 1369938.1, but this tool from Oracle was the genesis
of my presentation, so it bears explaining a bit here. When I was initially
introduced to this tool, I ran it and received a graphic like this, which
freaked me out some and made me dig into what the problem was. After I purged
out all the data I just showed you? I ran it again. I still got the same
graphic. Don't panic. The tool is meant to help you understand if there is a
potential problem, but that does depend upon your own situation and the data
which your company has chosen to retain. It is possible that you will have an
Excessive or Critical gauge reported while there is not really an issue, but
you need to do the legwork initially to make sure this is really the case.

My Oracle Support

Solution Beacon http://www.solutionbeacon.com/WorkflowPerformanceTuningWP.pdf


HWM Reference

Table/Index (partition) Growth Is Far More Than Expected [ID 729149.1]

How to View High Water Mark - Step-by-Step Instructions [ID 262353.1]

SEGMENT SHRINK and Details. [ID 242090.1]

How to determine the actual size of the LOB segments and how to free the
deleted/unused space above/below the HWM [ID 386341.1]

LOB HWM CONTENTION: Using AWR Reports to Identify the Problem; Confirm and
Verify the Fix [ID 837883.1]

How to find Objects Fragmented below High water mark [ID 337651.1]

ORA-1499. Table/Index row count mismatch [ID 563070.1]

How to Determine Real Space used by a Table (Below the High Water Mark) [ID
77635.1]

Reclaiming unused space in an E-Business Suite Instance tablespace [ID 303709.1]


Workflow Reference

Performance Issues Caused by Purge Obsolete Workflow Runtime Data Not
Purging Everything [ID 148678.1]

A Detailed Approach To Purging Oracle Workflow Runtime Data [ID 144806.1]

Speeding Up And Purging Workflow [ID 132254.1]

bde_wf_data.sql - Query Workflow Runtime Data That Is Eligible For Purging [ID
165316.1]

Workflow Analyzer script for E-Business Suite Workflow Monitoring and
Maintenance [Video] [ID 1369938.1]

FAQ on Purging Oracle Workflow Data [ID 277124.1]

Quick Reference: How To Purge Obsolete Workflow Runtime Data For
Applications [ID 264191.1]

A Closer Examination Of The Concurrent Program Purge Obsolete Workflow
Runtime Data [ID 337923.1]

Workflow Scripts [ID 183643.1]

Troubleshooting Workflow Data Growth Issues [ID 298550.1]


SQL References
SQL Reference 1 Script 1:

select count(*), 'WF_ITEMS' as "TABLE NAME" from WF_ITEMS
union
select count(*), 'WF_ITEM_ACTIVITY_STATUSES' from WF_ITEM_ACTIVITY_STATUSES
union
select count(*), 'WF_ITEM_ACTIVITY_STATUSES_H' from WF_ITEM_ACTIVITY_STATUSES_H
union
select count(*), 'WF_ITEM_ATTRIBUTE_VALUES' from WF_ITEM_ATTRIBUTE_VALUES
union
select count(*), 'WF_ACTIVITY_ATTR_VALUES' from WF_ACTIVITY_ATTR_VALUES
union
select count(*), 'WF_NOTIFICATIONS' from WF_NOTIFICATIONS
union
select count(*), 'WF_NOTIFICATION_ATTRIBUTES' from WF_NOTIFICATION_ATTRIBUTES
union
select count(*), 'WF_COMMENTS' from WF_COMMENTS
union
select count(*), 'WF_ACTIVITY_TRANSITIONS' from WF_ACTIVITY_TRANSITIONS
union
select count(*), 'WF_PROCESS_ACTIVITIES' from WF_PROCESS_ACTIVITIES
union
select count(*), 'WF_ACTIVITY_ATTRIBUTES_TL' from WF_ACTIVITY_ATTRIBUTES_TL
union
select count(*), 'WF_ACTIVITY_ATTRIBUTES' from WF_ACTIVITY_ATTRIBUTES
union
select count(*), 'WF_ACTIVITIES' from WF_ACTIVITIES
union
select count(*), 'WF_ACTIVITIES_TL' from WF_ACTIVITIES_TL
union
select count(*), 'WF_LOCAL_USER_ROLES' from WF_LOCAL_USER_ROLES
union
select count(*), 'WF_LOCAL_ROLES' from WF_LOCAL_ROLES
union
select count(*), 'WF_USER_ROLE_ASSIGNMENTS' from WF_USER_ROLE_ASSIGNMENTS
SQL Reference 1 Script 2:

select num_rows, table_name from dba_tables
where table_name in ('WF_ACTIVITY_ATTRIBUTES',
                     'WF_ACTIVITIES',
                     'WF_ACTIVITY_ATTRIBUTES_TL',
                     'WF_ACTIVITIES_TL',
                     'WF_ACTIVITY_ATTR_VALUES',
                     'WF_PROCESS_ACTIVITIES',
                     'WF_ACTIVITY_TRANSITIONS',
                     'WF_ITEM_ACTIVITY_STATUSES_H',
                     'WF_ITEMS',
                     'WF_NOTIFICATIONS',
                     'WF_COMMENTS',
                     'WF_LOCAL_USER_ROLES',
                     'WF_USER_ROLE_ASSIGNMENTS',
                     'WF_LOCAL_ROLES',
                     'WF_ITEM_ACTIVITY_STATUSES',
                     'WF_NOTIFICATION_ATTRIBUTES',
                     'WF_ITEM_ATTRIBUTE_VALUES')
order by num_rows
SQL Reference 1 Script 3:

select num_rows, table_name from dba_tables
where table_name in ('WF_ACTIVITY_ATTRIBUTES',
                     'WF_ACTIVITIES',
                     'WF_ACTIVITY_ATTRIBUTES_TL',
                     'WF_ACTIVITIES_TL',
                     'WF_ACTIVITY_ATTR_VALUES',
                     'WF_PROCESS_ACTIVITIES',
                     'WF_ACTIVITY_TRANSITIONS',
                     'WF_ITEM_ACTIVITY_STATUSES_H',
                     'WF_ITEMS',
                     'WF_NOTIFICATIONS',
                     'WF_COMMENTS',
                     'WF_LOCAL_USER_ROLES',
                     'WF_USER_ROLE_ASSIGNMENTS',
                     'WF_LOCAL_ROLES',
                     'WF_ITEM_ACTIVITY_STATUSES',
                     'WF_NOTIFICATION_ATTRIBUTES',
                     'WF_ITEM_ATTRIBUTE_VALUES')
order by num_rows


SQL Reference 2 Script 1:

select min(begin_date) from WF_ITEM_ACTIVITY_STATUSES;

select min(begin_date) from WF_ITEMS;


SQL Reference 3 Script 1 Defined:

select name, display_name, persistence_type, persistence_days
from wf_item_types_vl

SQL Reference 3 Script 2 Actual Data:

select name, display_name, persistence_type, persistence_days
from wf_item_types_vl
where name in (select unique item_type
               from wf_item_activity_statuses)
order by name


SQL Reference 4 Script 1 How to find WF Items by month:

select to_char(begin_date,'yyyy-Mon') "Date", count(*) "Number of Items",
       ITEM_TYPE
from WF_ITEM_ACTIVITY_STATUSES
--where ITEM_TYPE = 'APEXP'  -- uncomment to target a specific ITEM_TYPE
--                              for more in-depth troubleshooting
group by to_char(begin_date,'yyyy-Mon'), ITEM_TYPE
order by to_char(begin_date,'yyyy-Mon'), ITEM_TYPE

SQL Reference 4 Script 2 How to find WF Items by day:

select to_char(begin_date,'yyyy-Mon-dd') "Date", count(*) "Number of Items"
from WF_ITEM_ACTIVITY_STATUSES
where to_char(begin_date,'yyyy-Mon') in
      ('2009-Dec', '2009-Oct')  -- a time period found with the first script
group by to_char(begin_date,'yyyy-Mon-dd')
order by to_char(begin_date,'yyyy-Mon-dd')


SQL Reference 5 Script 1:

update WF_ITEMS
set end_date = to_date('31-DEC-2002', 'DD-MON-YYYY')
where begin_date between to_date('01-JAN-2002', 'DD-MON-YYYY')
                     and to_date('31-DEC-2002', 'DD-MON-YYYY')
and item_type = 'POERROR'
and end_date is null


SQL Reference 6 Script 1:

update WF_ITEM_ACTIVITY_STATUSES
set end_date = to_date('31-DEC-2002', 'DD-MON-YYYY')
where begin_date between to_date('01-JAN-2002', 'DD-MON-YYYY')
                     and to_date('31-DEC-2002', 'DD-MON-YYYY')
and item_type = 'POERROR'
and end_date is null


Workflow alert references


Workflow alert reference 1 Script 1 Alert Name: Workflows by type and year:

select to_char(begin_date,'yyyy'), count(*), ITEM_TYPE
INTO &YEAR, &COUNT, &TYPE
from WF_ITEM_ACTIVITY_STATUSES
group by to_char(begin_date,'yyyy'), ITEM_TYPE
order by to_char(begin_date,'yyyy'), ITEM_TYPE


Workflow alert reference 2 Script 1 Alert Name: Workflows errored by week:

select item_type, item_key,
       to_char(begin_date, 'DD-MON-YYYY HH24:MI:SS'),
       to_char(end_date, 'DD-MON-YYYY HH24:MI:SS'),
       error_name,
       substr(error_message, 0, 60) AS ERROR_MESSAGE,
       substr(error_stack, 0, 60) AS ERROR_STACK
from WF_ITEM_ACTIVITY_STATUSES
where (end_date > sysdate - 7 or begin_date > sysdate - 7)
and error_message is not null
union all
select item_type, item_key,
       to_char(begin_date, 'DD-MON-YYYY HH24:MI:SS'),
       to_char(end_date, 'DD-MON-YYYY HH24:MI:SS'),
       error_name,
       substr(error_message, 0, 60) AS ERROR_MESSAGE,
       substr(error_stack, 0, 60) AS ERROR_STACK
INTO &ITEMT, &ITEMK, &BEGIN, &END, &ERRNM, &ERRMSG, &ERRSTK
from WF_ITEM_ACTIVITY_STATUSES_H
where (end_date > sysdate - 7 or begin_date > sysdate - 7)
and error_message is not null
order by 1,2


Workflow alert reference 3 Script 1 Alert Name: Workflows which will not be purged:

select 'WF_ITEM', item_type, count(*)
from WF_ITEM_ACTIVITY_STATUSES
where item_type not in (
      'AMEUPDUN', 'APCCARD', 'APEXP', 'APO_FREQ', 'APOL_SUP',
      'APOLCCTU', 'APOLEMRV', 'APOLPROM', 'APPEWF', 'APWRECPT',
      'CREATEPO', 'POAPPRV', 'POERROR', 'PORCOTOL', 'POREQCHA',
      'PORPOCHA', 'RAPIDAXS', 'REQAPPRV', 'RPDXDET', 'RPDXIPD',
      'WFERROR', 'WFTESTS')
and begin_date > sysdate - 180
group by item_type
union
select 'WF_NOTIF', message_type, count(*)
INTO &TABLE, &TYPE, &COUNT
from wf_notifications
where message_type not in (
      'AMEUPDUN', 'APCCARD', 'APEXP', 'APO_FREQ', 'APOL_SUP',
      'APOLCCTU', 'APOLEMRV', 'APOLPROM', 'APPEWF', 'APWRECPT',
      'CREATEPO', 'POAPPRV', 'POERROR', 'PORCOTOL', 'POREQCHA',
      'PORPOCHA', 'RAPIDAXS', 'REQAPPRV', 'RPDXDET', 'RPDXIPD',
      'WFERROR', 'WFTESTS')
and begin_date > sysdate - 180
group by message_type


Workflow alert reference 4 Script 1 Alert Name: Workflows errored in last day:

select item_type, item_key,
       to_char(begin_date, 'DD-MON-YYYY HH24:MI:SS'),
       to_char(end_date, 'DD-MON-YYYY HH24:MI:SS'),
       substr(error_message, 0, 60) AS ERROR_MESSAGE,
       substr(error_stack, 0, 60) AS ERROR_STACK
from WF_ITEM_ACTIVITY_STATUSES
where begin_date > sysdate - 1
and activity_status = 'ERROR'
union all
select item_type, item_key,
       to_char(begin_date, 'DD-MON-YYYY HH24:MI:SS'),
       to_char(end_date, 'DD-MON-YYYY HH24:MI:SS'),
       substr(error_message, 0, 60) AS ERROR_MESSAGE,
       substr(error_stack, 0, 60) AS ERROR_STACK
INTO &ITEMT, &ITEMK, &BEGIN, &END, &ERRMSG, &ERRSTK
from WF_ITEM_ACTIVITY_STATUSES_H
where begin_date > sysdate - 1
and activity_status = 'ERROR'
order by 1,2


Workflow alert reference 5 Script 1 Alert Name: Workflows with no end dates:

select count(*), item_type
INTO &COUNT, &TYPE
from WF_ITEM_ACTIVITY_STATUSES
where begin_date between (sysdate - 180) and (sysdate - 14)
and end_date is null
group by item_type
order by count(*) desc


Workflow alert reference 6, Script 1. Alert Name: Workflow items not being purged:

select notification_id, "TABLE", "DATE"
INTO
&ID,
&TABLE,
&DATE
from (select distinct notification_id, 'WF_COMMENTS' as "TABLE", comment_date as "DATE"
from wf_comments
where notification_id not in (select notification_id from wf_notifications)
union all
select distinct notification_id, 'WF_NOTIF_ATT' as "TABLE", date_value as "DATE"
from WF_NOTIFICATION_ATTRIBUTES
where notification_id not in (select notification_id from wf_notifications))
order by notification_id


Workflow alert reference 7, Script 1. Alert Name: Workflow item report:

select item_type, to_date(end_date), action, performed_by, count(*)
from WF_ITEM_ACTIVITY_STATUSES
where (end_date > sysdate - 7 or begin_date > sysdate - 7)
and performed_by in (select user_name from fnd_user
where user_id in (select fu.user_id
from apps.fnd_user fu,
apps.per_people_f ppf,
apps.FND_USER_RESP_GROUPS_DIRECT rgd
where fu.USER_ID = rgd.USER_ID
and fu.EMPLOYEE_ID = ppf.PERSON_ID
and rgd.RESPONSIBILITY_ID in ('57694')
and (rgd.end_date = '31-DEC-4712' OR rgd.end_date is null OR rgd.end_date > sysdate)
and (fu.END_DATE = '31-DEC-4712' OR fu.END_DATE is null OR fu.END_DATE > sysdate)
and (ppf.EFFECTIVE_END_DATE = '31-DEC-4712' OR ppf.EFFECTIVE_END_DATE is null OR ppf.EFFECTIVE_END_DATE > sysdate)))
group by to_date(end_date), item_type, performed_by, action
union all
select item_type, to_date(end_date), action, performed_by, count(*)
INTO &TYPE, &DATE, &ACTION, &USER, &COUNT
from WF_ITEM_ACTIVITY_STATUSES_H
where (end_date > sysdate - 7 or begin_date > sysdate - 7)
and performed_by in (select user_name from fnd_user
where user_id in (select fu.user_id
from apps.fnd_user fu,
apps.per_people_f ppf,
apps.FND_USER_RESP_GROUPS_DIRECT rgd
where fu.USER_ID = rgd.USER_ID
and fu.EMPLOYEE_ID = ppf.PERSON_ID
and rgd.RESPONSIBILITY_ID in ('57694')
and (rgd.end_date = '31-DEC-4712' OR rgd.end_date is null OR rgd.end_date > sysdate)
and (fu.END_DATE = '31-DEC-4712' OR fu.END_DATE is null OR fu.END_DATE > sysdate)
and (ppf.EFFECTIVE_END_DATE = '31-DEC-4712' OR ppf.EFFECTIVE_END_DATE is null OR ppf.EFFECTIVE_END_DATE > sysdate)))
group by to_date(end_date), item_type, performed_by, action
order by 2, 1, 4, 3, 5


Version Information

Version Author Date Action

0.1 Rusty Schmidt 10/12/12 Presentation Abstract

0.3 Rusty Schmidt 05/25/13 Initial Presentation Notes

0.5 Rusty Schmidt 06/05/13 Presentation Notes v.1

0.7 Rusty Schmidt 06/25/13 Presentation Notes v.2

0.9 Rusty Schmidt 07/09/13 Presentation Notes v.3

1.0 Rusty Schmidt 07/11/13 Initial Whitepaper

1.04 Rusty Schmidt 07/18/13 Completed Whitepaper

1.06 Rusty Schmidt 08/11/13 Minor content changes

1.08 Rusty Schmidt 08/19/13 Major editing changes

1.1 Rusty Schmidt 08/30/13 This Version
