Rusty Schmidt
Table of Contents
Executive Summary
About the author
    E-mail
    Blog
    LinkedIn
About the company
About the Paper
Introduction
Initial Research
Understanding the data
Who we work with
Creating the plan
    Next steps
    How to purge
    Quick tips
Executing the plan
How our purge went
Our problem
A new problem
Possible gains
Actual gains
Our end results
Operational changes
The future
In Summary
Acknowledgements
Appendix
    Direct References
        External sources
        HWM Reference
        Workflow Reference
    SQL References
        SQL Reference 1 Script 1
        SQL Reference 1 Script 2
        SQL Reference 1 Script 3
        SQL Reference 2 Script 1
        SQL Reference 3 Script 1 Defined
        SQL Reference 3 Script 2 Actual Data
        SQL Reference 4 Script 1 How to find WF Items by month
        SQL Reference 4 Script 2 How to find WF Items by day
        SQL Reference 5 Script 1
        SQL Reference 6 Script 1
    Workflow alert references
        Workflow alert reference 1 Script 1 Alert Name: Workflows by type and year
        Workflow alert reference 2 Script 1 Alert Name: Workflows errored by week
        Workflow alert reference 3 Script 1 Alert Name: Workflows which will not be purged
        Workflow alert reference 4 Script 1 Alert Name: Workflows errored in last day
        Workflow alert reference 5 Script 1 Alert Name: Workflows with no end dates
        Workflow alert reference 6 Script 1 Alert Name: Workflow items not being purged
        Workflow alert reference 7 Script 1 Alert Name: Workflow item report
Version Information
Executive Summary
This whitepaper contains simple lessons that individuals of many different
skill levels and departments can pick up: how to spot negative trends, how to
gauge the space Oracle Workflow requires and the space available to be
recovered, and how to change current operational habits so the Workflow
product is no longer that large black box of data that just sits there
collecting dust.
About the author
Erwin (Rusty) Schmidt is a support-focused individual with 7 years of EBS
support experience in the Financials suite of products, including Payables,
Receivables, Fixed Assets, and Purchasing. He is interested in better ways to
give the end user a great experience, and the answer often lies within the
internals of the database or application. Our group within Apollo is
responsible for the Financials applications, with products including the
Hyperion and Oracle EBS suites.
E-mail: rusty.schmidt@apollo.edu
Blog: http://theoracleemt.blogspot.com
LinkedIn: www.linkedin.com/pub/rusty-schmidt/60/2a6/310/
About the company
The Apollo Group is the parent company of the University of Phoenix. As of
February/March 2012, UoPX had 356,000 current students and an alumni
population of 710,000, making up over a million students through the history
of the University. Visit www.apollo.edu for more information.
About the Paper
This whitepaper was created as a result of being selected by the OAUG
Committee for the inaugural AppsTech Connection Point conference. If you are
not a member of the OAUG, please visit them at http://oaug.org/ and consider
signing up, as they offer a wealth of knowledge regarding Oracle applications.
Introduction
Our end users were seeing the submission or approval of Internet Expenses
items slow down until, if the item did not process successfully, the request
finally timed out after an excessive period. At the same time, our home-grown
access provisioning system was hanging as well: when power users worked the
items in their queues, the platform would time out on them, so they had to try
multiple times before succeeding. Adding another vector to this perfect storm
were warning signs such as the Workflow Background Process, a concurrent
report that used to take 3-5 minutes, starting to take 13 minutes to run. That
does not sound like a horrible problem, but the report is scheduled to run
every 15 minutes, so it was effectively running in our system for almost 21
hours every day, adding undue stress and CPU cycles that needed to be reduced.
While we had identified this as a problem, we were at the same time in the
middle of upgrading to a new database version and hardware platform, a
quarter-rack Exadata Database Machine X2-2. The new machine is obviously
faster, yet there is an obvious advantage in cleaning up our data first: with
less data to export and import, we could pay that project a dividend of hours
of manpower saved and give back storage in our other Exadata instances.
We had identified a problem, but we did not yet know what to do about it. The
first thing I do when I do not have all the facts is to investigate and learn
more about the issue. So where to start?
Initial Research
At a very high level, you need to know what the data looks like by
identifying which tables the Oracle Workflow product uses and looking at your
data distribution. Obviously an effort like this does not take place in a
vacuum, so you need to consult external resources such as My Oracle Support,
which is also the home of the Oracle Workflow Analyzer script, as well as a
document from Solution Beacon that really got me going; the specific MOS notes
used in this project are documented at the end of this whitepaper.
Understanding the data
Getting some knowledge under our belts about our system and the Oracle
Workflow product in general allows us to better understand the data and find
out how large our data footprint is (SQL 1). We do this by calculating the
space that the Oracle Workflow tables and indexes are taking up, after
identifying which tables are in use; My Oracle Support Note 277124.1 has the
details I started with. This allowed me to see the oldest items in our tables,
which was important for many reasons, but primarily because we needed to know
our data distribution so we could know where to purge (SQL 2). Doing this
identified that the oldest item in our tables dated back to 2001, and that
there was a sudden spike of over 6 million items in 2009 which could be
purged, telling me where our efforts needed to be focused initially.
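For illustration, queries in the spirit of SQL References 1 and 2 look like the sketch below. The appendix versions are what we actually ran; the owner and the name filter here are assumptions you would adjust for your environment.

```sql
-- Space used by Workflow segments (tables and indexes), following
-- the approach in MOS Note 277124.1. APPLSYS owns the WF_ tables
-- in a standard EBS install.
SELECT segment_name,
       segment_type,
       ROUND(SUM(bytes) / 1024 / 1024) AS mb
FROM   dba_segments
WHERE  owner = 'APPLSYS'
AND    segment_name LIKE 'WF\_%' ESCAPE '\'
GROUP  BY segment_name, segment_type
ORDER  BY mb DESC;

-- Data distribution by year, to find where purging will pay off.
-- A spike like our 2009 backlog stands out immediately here.
SELECT TO_CHAR(begin_date, 'YYYY') AS yr,
       COUNT(*)                    AS items
FROM   wf_items
GROUP  BY TO_CHAR(begin_date, 'YYYY')
ORDER  BY yr;
```

Running the distribution query periodically also doubles as a progress tracker once purging starts.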
Finding out what our Workflow distribution was like helped us find the
biggest bang for our buck in the system, which is important if you are
planning on purging, as you may want to focus on the Workflow with several
million open items.
Apollo Group, Inc. ORACLE - August 30th 2013
Rusty Schmidt
Analyzing Oracle Workflow data for increased system performance
Who we work with
Who is involved can obviously differ depending on who initiates this project,
and on how involved groups such as the DBAs, development, and the business
are, but I believe that DBAs and development are absolutely Responsible: DBAs
need to be aware of the project and watch disk space, while the development
team needs to be Responsible for making sure new workflow items will be
addressed appropriately.
This helps introduce the business to the idea of metadata, and you may have
to help them further to understand what metadata really is; one of our
business partners thought we wanted to purge the actual items that this
Workflow data represents. Our discussions revolved around the fact that this
data is merely a representation of what had already gone through our system,
and it helped them understand the situation when I put forward the idea that
this was metadata: data about our data. This is also part of why we
implemented a one-year barrier, as we did not want to purge anything that
could still be under discussion with the business, and with too short a purge
duration there was a potential that we might wipe out something "in flight".
Depending on your company's situation, you may need to make sure the audit
and compliance groups understand and approve what you are doing, which is
especially important if your company or group is under a data retention
policy. We stressed that this data was (a) metadata, (b) naturally supposed to
be purged, and (c) previously purged, so our audit and compliance group signed
off. The business was also concerned about audit reports they were running
against expense reports, but once we showed that those reports were built to
look at the processed expense report tables and not this metadata, they were
able to sign off. Of the parties Consulted, the business was the largest and
most important because, in our scenario, they drove our purge schedule and
were co-signers on the Acceptance of the project.
Creating the plan
We know there is a problem, we have identified what we think the cause is,
and we have worked out who needs to be involved; now we need to come up with a
plan based on what we found in our research. All of these warning signs tell
us that we need to take some kind of action to get the system back on course.
Seeing that the year 2009 was a watershed moment, due to the inflated number
of items in the system from that year, it became our eventual target once I
had learned enough about the Oracle Workflow system from our initial purge of
the oldest Workflow items. Going a step further, we targeted a specific
Workflow type that had stopped being purged at the end of 2008, which led to
the huge backlog of 2009 data. Since this project was new to us, we decided to
gather as much data as possible to build a gameplan for attacking the
backlogged data, and one of the most important initial keys is to do this in
QA first. Running the project in QA first allowed us to avoid performance
issues with the purge reports in PROD, since we could identify which reports
took too long to run and tune them appropriately before the next run. This
gave us data to direct our attention towards: all of the workflows between
Jan 1, 2001 and Dec 31, 2010 were eligible to be purged.
In the end, why are we doing this? We are trying to make the system run
better by avoiding timeouts and other negative performance, while also trying
to save money in the process, because time spent trying to use a system that
only times out is money wasted for the company.
Next steps
The results of our investigation showed that we had 30 Gigs of data in the
Workflow tables, spread across over 200 million items, of which more than 90%
were older than 2 years. We also found Workflows that were not being purged,
whether because purge reports were never scheduled, because scheduled reports
had become unscheduled during system downtimes, or because the wrong
persistence types had been selected, resulting in those Workflows never being
purged at all. This led us to initiate the purging project, but what has to
happen first?
First, we need to finalize what the purge schedule should look like. That
starts with finding out what is currently being purged, or what may have been
purged previously, to get a baseline for the purging policy our group and the
business need to agree to. From there, you can suggest an appropriate policy
to the business, applied according to the amount or type of data and how much
it is used. For instance, expense reports are typically submitted and worked
quickly so employees can be reimbursed in a timely fashion, but the business
may want to leave the metadata in the system for one year; our access audit
cycles run up to 10 quarters, so we chose to leave that metadata for 830 days.
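A quick sanity check on a retention policy like this is to count what would become eligible under the proposed cutoff. This is an illustrative sketch, not one of the appendix scripts; the 830-day value matches our access-audit window.

```sql
-- How many end-dated items fall outside the proposed retention
-- window, grouped by item type. Open items (END_DATE IS NULL)
-- are never purge-eligible, so they are excluded here.
SELECT item_type,
       COUNT(*) AS purgeable_items
FROM   wf_items
WHERE  end_date < SYSDATE - 830
GROUP  BY item_type
ORDER  BY purgeable_items DESC;
```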
Next, we want to create a master document of the scripts and associated
purge reports run in QA, which will need to be run again in PROD, as well as a
spreadsheet showing our progress across all of the tables, indexes, data, and
percentages, updated with each purge routine that runs. My tack was to start
with the oldest year and move forward in time, focusing on each Workflow in
only that year until I understood all of the data, and to use that to
prototype a template that would hopefully fit the rest of the data going
forward.
As the final stage in this pre-work, we have to identify the workflows that
are dependent upon other workflows being closed. In our case, we had workflows
that were dependent on other workflow types by their very nature: typically
you will see WFERROR as a required workflow that must be completed or purged
first, just as POERROR has to be purged before REQAPPRV. I realized this when
I started trying to purge items that had been in the system for quite a while,
yet they remained until I purged out another Workflow and then retried the
first one, at which point the original item set finally purged.
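These dependencies can be surfaced ahead of time with a query along these lines. This is a sketch using the standard WF_ITEMS parent pointers, not one of the appendix scripts; adjust the filters for the item types you are chasing.

```sql
-- Closed parent items that still cannot purge because a child
-- workflow (typically WFERROR) is open. The child's
-- PARENT_ITEM_TYPE/PARENT_ITEM_KEY point back at the parent.
SELECT p.item_type           AS parent_type,
       c.item_type           AS open_child_type,
       COUNT(*)              AS blocked_parents
FROM   wf_items p
JOIN   wf_items c
  ON   c.parent_item_type = p.item_type
 AND   c.parent_item_key  = p.item_key
WHERE  p.end_date IS NOT NULL   -- parent looks purge-ready
AND    c.end_date IS NULL       -- but an open child blocks it
GROUP  BY p.item_type, c.item_type
ORDER  BY blocked_parents DESC;
```

A result dominated by WFERROR children is the signal to purge or close the error workflows first, then retry the original item type.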
How to purge
I have talked about purging for quite a while, but I have not yet said how I
will accomplish it. The report Purge Obsolete Workflow Runtime Data is the
main report that does the heavy lifting for your Oracle Workflow purging
activities, and here is a sample parameter set we ran it with in our system:
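As an illustrative sketch, the same purge can also be driven through the WF_PURGE PL/SQL API that the concurrent program wraps. The item type and date below are example values, not our exact production parameters; APEXP is the Internet Expenses item type.

```sql
-- Purge one item type at a time, up to a chosen end date, which
-- mirrors the Item Type / Age parameters of the Purge Obsolete
-- Workflow Runtime Data concurrent program.
BEGIN
  wf_purge.total(
    itemtype => 'APEXP',                              -- example type
    itemkey  => NULL,                                 -- all keys
    enddate  => TO_DATE('31-DEC-2009', 'DD-MON-YYYY'),
    docommit => TRUE);
END;
/
```

In practice we scheduled the concurrent report rather than calling the API directly, since the report respects persistence settings and can be tracked like any other request.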
Quick tips
There are some easy tricks for keeping track of your purging, because just
as with a real-life patient there are a lot of "vital signs" you need to
monitor to judge the health of your purge efforts. Purging only a specific
Workflow for a specific year makes the purge program more efficient, and
limiting your activity by Workflow type also lets you track purging progress
more closely.
Monitoring our initial results helped make sure items were being purged
appropriately and allowed me to change the approach when they were not: I
might have the wrong persistence type, or I might have to investigate the data
to understand why it did not meet the criteria for purge eligibility.
Below is a cross section of workflow items that are eligible for purging:
This is a representation of one of our oldest Workflows, and the time slice
associated with it, which I wanted to target. For some Workflows like the one
above, I ran the purging report but nothing was purged, so I had to
investigate the persistence type further and gained a better understanding of
that factor. Even with the persistence type right, items still were not
purging, which makes sense because they had failed to be purged years ago;
this is when I had to look closer at the data in the tables and then correct
the data in order to close out the items.
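Persistence settings are quick to verify before blaming the data. A sketch (the item types listed are examples): Temporary types purge once they are older than PERSISTENCE_DAYS, while Permanent types are never purged by the standard routine.

```sql
-- Check the persistence configuration for the item types you are
-- trying to purge; a PERM type or an oversized PERSISTENCE_DAYS
-- explains a purge run that removes nothing.
SELECT name,
       persistence_type,   -- TEMP or PERM
       persistence_days
FROM   wf_item_types
WHERE  name IN ('APEXP', 'WFERROR');
```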
Why did we have to correct the data? We had to "close out the items" that
had been left incomplete by users or administrators, because even the
System: Error (WFERROR) workflows need to be completed in order for the
underlying and associated workflow items to be purged out of the system. How
did we correct the data? I started by populating the END_DATE column in the
WF_ITEMS (SQL 5) and WF_ITEM_ACTIVITY_STATUSES (SQL 6) tables; in every case
of what I was able to purge from our system, this was the only data that
needed to be corrected. I then re-ran the same purge routine with the same
correct persistence-type parameters, and only after the data was cleansed did
we see it get purged. After this, the metadata in these tables was purged and
forever out of our system, giving us valuable disk space back.
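The corrections were of this general shape. This is an illustrative sketch, not the exact SQL References 5 and 6 from the appendix; the item type and cutoff are example values, and any such update should be verified against known-dead items in QA before touching PROD.

```sql
-- Close out stuck items so the purge routine will pick them up.
-- Only END_DATE needed correcting in every case we purged.
UPDATE wf_items
SET    end_date  = begin_date
WHERE  item_type = 'WFERROR'                           -- example type
AND    begin_date < TO_DATE('01-JAN-2009', 'DD-MON-YYYY')
AND    end_date IS NULL;

UPDATE wf_item_activity_statuses
SET    end_date  = begin_date
WHERE  item_type = 'WFERROR'
AND    begin_date < TO_DATE('01-JAN-2009', 'DD-MON-YYYY')
AND    end_date IS NULL;

COMMIT;
```

After the update, re-running the same purge parameters is what finally removes the rows.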
The eagle-eyed observer will add up all the numbers here and see that some
objects are missing. That is true: I cut several of the low-hanging fruit from
the top so the graphic would fit on the presentation page, but the key in this
graphic is the baseline of over 21 million objects in the
WF_ITEM_ACTIVITY_STATUSES table.
Here is the baseline by table in our QA environment; key in on the fact that
we have over 215 million items in the tables listed:
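A baseline of this shape can be collected with a union of counts across the core runtime tables. This sketch is a reconstruction of the kind of "union script" I used for status checks, not the exact production script; add or drop tables to match your footprint.

```sql
-- Row-count baseline across the main Workflow runtime tables.
-- Re-run after each purge cycle to track progress; note that on
-- very large tables this full count is itself expensive.
SELECT 'WF_ITEMS' AS tbl, COUNT(*) AS cnt FROM wf_items
UNION ALL
SELECT 'WF_ITEM_ACTIVITY_STATUSES', COUNT(*)
FROM   wf_item_activity_statuses
UNION ALL
SELECT 'WF_ITEM_ACTIVITY_STATUSES_H', COUNT(*)
FROM   wf_item_activity_statuses_h
UNION ALL
SELECT 'WF_ITEM_ATTRIBUTE_VALUES', COUNT(*)
FROM   wf_item_attribute_values
UNION ALL
SELECT 'WF_NOTIFICATIONS', COUNT(*)
FROM   wf_notifications;
```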
Next is a side-by-side comparison of our QA baseline and the end result we
achieved after two weeks of work on the project. You can see we purged almost
80% of the Oracle Workflow objects from the WF_ITEM_ACTIVITY_STATUSES table!
One note of interest: I found documentation indicating that the tables below
would be purged:
WF_LOCAL_USER_ROLES
WF_USER_ROLE_ASSIGNMENTS
WF_LOCAL_ROLES
Yet purge after purge, the needle never moved for these tables, not even by a
single data point. I will be honest: this could have been operator error or
some condition I am not yet aware of, and so far it is a mystery I cannot
quite explain from the documentation as written; one possibility is simply
that there were no eligible Workflows in these tables, which would explain why
they did not move at all. Karen Brownfield did tell me that these are the
directory services tables, the base tables underlying the Define User form, so
they contain all responsibilities for all employees: in short, any role to
whom a notification can be sent. When the purge documentation states that
these tables will be purged, it applies only to rows where partition_id = 0,
indicating ad-hoc users that no longer have notifications addressed to them.
This means very little data would ever be purged from these tables, and I
still need to understand why nothing was purged in our scenario.
Our problem
Here we are at the start of the purging effort, where you can see some of
the additional details I am tracking: the daily difference in the number of
rows, how much of the previous day's volume was purged today, and how much has
been purged overall.
On the first day of purging, you can see we made a substantial dent in the
system by purging 62 million rows, 29% of the volume. Great progress so far!
Being on a production support team, and helping with the Exadata QA project
as well, there were periods when I could only do a small amount of work on the
purging project. This is one of those days: only ten thousand items were
purged, less than one tenth of a percent.
The next day was another story, though: 15% of the previous day's volume was
purged, taking us to a 40% purge level for the tables overall.
Again, we made steady progress, purging 13% of the previous day's volume and
getting closer to 50% of the tables being purged overall.
Another 10% purged against the previous day's volume got us over 50% for the
first time, still with no issues.
A large chunk was purged on this day, with 45% of the previous day's volume
gone, bringing the total to almost 75% of the data in the tables.
While almost 80 thousand items were purged out of the tables, we are
starting to hit real diminishing returns here: that amounts to less than 1% of
the previous day's volume and did not move the needle much overall.
Unknown to me at the time, as we went forward through the project day by day
we were approaching a critical mass in the volume of data left in our tables.
At this point, on June 14th, I was re-running my status scripts regularly
throughout the day, and early in the morning the union script started taking
more than the minute or two it normally would. First it ran for 20 minutes;
later in the day, after more purging, the same script took an hour; and
finally, the next day, it would not finish running at all.
A new problem
We started with a clear problem we were trying to fix, and we have
potentially created another one, so we need to investigate why the system is
behaving this way now. Where did we go wrong, and what caused our query to
slow down? When this happened I had no idea, but I was really glad it was in
QA with no end users in the system. I did some cursory unit testing of the
Oracle Workflow touch points I detailed initially, and nothing seemed out of
place or performing badly; yet I knew there was a problem in the system, and I
could not leave it in this state.
Okay, I am not an Oracle genius yet, but during that year I had grown more
comfortable with some of the internals of the system, so when the script
stopped performing on the 18th I was able to see in the GV$SESSION table that
its wait event had HWM in it, which led to an investigation of what HWM meant.
What is the HWM? It is the boundary between used and unused space in a
segment, called the High Water Mark. That is Oracle's definition, but it does
not tell me a whole lot by itself. Other sources on My Oracle Support led me
to conclude that the HWM region can represent the last 25-35% of your table,
which fits the data shown, as on the 14th this line in the sand was crossed
for several tables:
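A session-level check along these lines will surface HWM-related waits; this is a sketch, and on our system the event showed up as high-water-mark enqueue contention.

```sql
-- Look for sessions waiting on the HW enqueue, which appears as
-- 'enq: HW - contention' in GV$SESSION on RAC-aware systems.
SELECT inst_id,
       sid,
       event,
       seconds_in_wait
FROM   gv$session
WHERE  event LIKE '%HW%'
ORDER  BY seconds_in_wait DESC;
```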
Included in the Appendix are several references I used for understanding
and diagnosing the HWM issue, as well as some suggestions on how to remedy it
without doing an export and import like the one we were already planning for
Exadata. I have also been told that newer versions of the Workflow Analyzer
show this type of information as well!
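One remedy pattern for resetting the HWM in place is a segment shrink. This is a sketch, not what we ran: SHRINK SPACE requires an ASSM tablespace and row movement, and an ALTER TABLE ... MOVE plus index rebuilds is the alternative; either should be rehearsed in QA first.

```sql
-- Reclaim space below the high water mark without an
-- export/import. COMPACT first defers the HWM adjustment to a
-- second, shorter locking step.
ALTER TABLE applsys.wf_item_activity_statuses ENABLE ROW MOVEMENT;
ALTER TABLE applsys.wf_item_activity_statuses SHRINK SPACE COMPACT;
ALTER TABLE applsys.wf_item_activity_statuses SHRINK SPACE;
```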
Not wanting to leave our production system in disarray, even for a day or
two before the Exadata implementation, we changed our approach for PROD: we
dialed our purge target back from the 75-80% achieved in QA and set our sights
on purging around 50% of our data in PROD, which should leave the system in a
safe state.
Possible gains
Next up are the potential PROD space savings, as achieved in QA:
Here we see the breakdown of the 30 Gigs dedicated to our Workflow product in
this instance, with 9 Gigs residing in our tables and 21 Gigs in our indexes.
This distribution between tables and indexes does not seem exactly right, but
at the end of the results table you can see that the space I calculated as
saved by purging tracks remarkably closely with the actual item counts purged
from the tables themselves, so we have proven out the total space reduction,
indexes included. Key in on the amount of data purged: over 160 million rows,
for about 23 Gigs of data.
Actual gains
In the example below we see the same results table as before, but over the
course of just two days, by using our master documents, we were able to purge
about 33% of the data and objects from the WF_ITEM_ACTIVITY_STATUSES table in
PROD:
Here are the actual space savings which we achieved in PROD with our
purge:
We start with the exact same breakdown of 30 Gigs in this instance, with 9
Gigs still residing in our tables and 21 Gigs still in our indexes. You can
see that while we targeted 50% as our safe barrier, we did not reach that
level: we hit only 33%, purging almost 100 million fewer rows of data. This
might be seen as a failure, but note that we did hit a 50% space reduction in
the WF_ITEMS table, so we felt this was a good place to stop.
Our end results
So we had a problem, identified the root cause, made a plan for resolution,
and put it into effect; now we need to understand our results. Good news!
Concurrent manager reports, including the Workflow Background Process, started
running in a more timely fashion, and the approval of Internet Expenses was
quicker for end users, as the timeouts while submitting or approving stopped.
Power users no longer received timeout messages in our access provisioning
platform, and because they stopped having problems they started saving time,
which, as I said before, is money saved for the company.
We went into this project in QA to "just purge data," and we realized that
purging too much can have significant negative impacts, so we decided where
our threshold for PROD should be and purged just the data we wanted removed
from the system. This meant we did not introduce negative user experiences the
week before our massive system upgrade to the Exadata platform.
Operational changes
My boss had wanted some reporting on how much work our team members do with
the Oracle Workflow product, and this investigation led to an understanding of
the data that let us deliver that reporting at the end of each week with an
alert. We also added new purge routines for items that were not being purged,
restored other purge routines which had been lost over time, and gained a
better understanding of how the routines are used and what other items need to
be addressed.
This may seem odd in a section about operational changes, but we need to
help the development team understand their choices for workflow persistence
types, and work with them more closely, so that when a new workflow goes in we
are aware of it and can set up the appropriate purge reports. Our initial work
with the business did not end our operational responsibility for workflows and
for keeping the business informed, as we have given them the power to make
their own choices for their sections of the application.
The future
While I am not a fortune teller, there are some things I see on the horizon
for the next phase of this project. In reliving this project and going back to
sources like My Oracle Support, I see that new things have been added or
clarified over the past year, so I feel I need to start over with my research.
We need to continue working with our business partners, because I want to
renew their approvals each year, and that means revising our data to answer
some of these questions:
The possibility exists that we have missed something that needs resolution,
and an alert could be created to make sure the condition does not recur. We
have a lot of documentation about this, but it needs to be formalized along
with processes so others can pick up where I left off, especially since we are
planning to go to R12; depending on the mechanism behind that upgrade, we
could have another export/import available to us in which we can purge another
massive amount of data.
In Summary
By doing this we also reset the HWM for our tables, thanks to the
export/import, which will make the HWM easier to avoid hitting in the future:
we are now starting from 100%-full tables instead of tables at or near that
HWM danger zone.
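One hedged way to keep an eye on that danger zone is to compare each table's blocks below the HWM against a rough estimate of the space its rows actually need (a sketch only: it requires current optimizer statistics, assumes the APPLSYS schema and 8K blocks, and ignores PCTFREE and row overhead, so treat it as a trend indicator rather than an exact measure):

```sql
-- Hedged sketch: approximate slack below the HWM from optimizer statistics.
-- BLOCKS is the block count below the HWM; the estimate column is a crude
-- lower bound on the blocks the current rows would need.
select table_name,
       blocks                               hwm_blocks,
       ceil(num_rows * avg_row_len / 8192)  est_used_blocks
from dba_tables
where owner = 'APPLSYS'
  and table_name like 'WF\_%' escape '\'
order by blocks desc;
```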
Not only is the business now a participant in the purge process, but the system
is performing without issues, as they expect, so the business is a happy bunch.
Also, in the support game, being proactive gets you ahead of issues and gives
you a more complete picture of your environment than being purely reactive, and
our new alerts fit that bill perfectly.
In the end, we carried forward less data and had a much smaller Oracle Workflow
footprint, while learning how to let the system maintain Workflow itself and
tell us when adjustment is needed.
Acknowledgements
Greg Tripp: Without his encouragement and guidance, from the initial suspicions
of a problem lurking in our system to making sure that I had everything
possible available to pull this project off, there would be no presentation or
whitepaper for OAUG.
Morgan Mills, Pradeep Pai, Sunitha Uppala, Viswa Vadlamani, Duane Cluff,
Shashiprakash Ganji, and Varaprasad Balmoor (the FSG and HR Support teams):
During the implementation phases in QA and PROD, my focus was almost entirely
on this project, which left my team to pick up some of the slack for me, and
that is greatly appreciated. I also appreciate their taking the time to review
a very early draft of my presentation, as well as the HR Support team joining
us in that review to give me an outside perspective on the project.
Jennifer McDonald, Greg Tripp, Mary Lou Hodgins, Kim Rodriguez, Steve O'Day,
Karen Blum, and Mark A. MacFarlane: Without the approvals of our business and
audit partners we could not have gone forward with the project at all, so their
time spent working through the issue I brought to them is greatly appreciated.
Bill Burbage, Lisa Scott, and the Oracle EBS Proactive CAB: Bill deserves a lot
of credit for this presentation and whitepaper getting off the ground, as he
created a truly awesome set of scripts which Oracle has called the Oracle
Workflow Analyzer. His presentation to the CAB sparked my imagination, and once
I started digging into the issues the Analyzer brought up, it was a nearly
endless rabbit hole I gladly went down.
Karen Brownfield: Karen has been a fount of wisdom for many years, and in just
the short time I've had the pleasure of interacting with her she has given me
many tips and tricks I did not know, or did not even know to go looking for.
She is an inspiration and a mentor in the dark arts of the Workflow product.
Appendix:
Direct References
External sources
My Oracle Support
HWM Reference
How to determine the actual size of the LOB segments and how to free the
deleted/unused space above/below the HWM [ID 386341.1]
How to find Objects Fragmented below High water mark [ID 337651.1]
How to Determine Real Space used by a Table (Below the High Water Mark) [ID
77635.1]
Workflow Reference
bde_wf_data.sql - Query Workflow Runtime Data That Is Eligible For Purging [ID
165316.1]
SQL References
SQL Reference 1 Script 1:
-- (Only the table list and ORDER BY survived in this copy; the select/from
-- header below is a reconstruction assuming a row-count query over the
-- core Workflow tables.)
select table_name, num_rows
from dba_tables
where table_name in (
'WF_ACTIVITIES',
'WF_ACTIVITY_ATTRIBUTES_TL',
'WF_ACTIVITIES_TL',
'WF_ACTIVITY_ATTR_VALUES',
'WF_PROCESS_ACTIVITIES',
'WF_ACTIVITY_TRANSITIONS',
'WF_ITEM_ACTIVITY_STATUSES_H',
'WF_ITEMS',
'WF_NOTIFICATIONS',
'WF_COMMENTS',
'WF_LOCAL_USER_ROLES',
'WF_USER_ROLE_ASSIGNMENTS',
'WF_LOCAL_ROLES',
'WF_ITEM_ACTIVITY_STATUSES',
'WF_NOTIFICATION_ATTRIBUTES',
'WF_ITEM_ATTRIBUTE_VALUES')
order by num_rows
SQL Reference 1 Script 3:
'WF_ACTIVITIES',
'WF_ACTIVITY_ATTRIBUTES_TL',
'WF_ACTIVITIES_TL',
'WF_ACTIVITY_ATTR_VALUES',
'WF_PROCESS_ACTIVITIES',
'WF_ACTIVITY_TRANSITIONS',
'WF_ITEM_ACTIVITY_STATUSES_H',
'WF_ITEMS',
'WF_NOTIFICATIONS',
'WF_COMMENTS',
'WF_LOCAL_USER_ROLES',
'WF_USER_ROLE_ASSIGNMENTS',
'WF_LOCAL_ROLES',
'WF_ITEM_ACTIVITY_STATUSES',
'WF_NOTIFICATION_ATTRIBUTES',
'WF_ITEM_ATTRIBUTE_VALUES')
order by num_rows
from wf_item_types_vl
from wf_item_activity_statuses)
order by name
from WF_ITEM_ACTIVITY_STATUSES
where to_char(begin_date,'yyyy-Mon') in
('2009-Dec', '2009-Oct')
group by to_char(begin_date,'yyyy-Mon-dd')
order by to_char(begin_date,'yyyy-Mon-dd')
update WF_ITEMS
update WF_ITEM_ACTIVITY_STATUSES
INTO
&YEAR,
&COUNT,
&TYPE
from WF_ITEM_ACTIVITY_STATUSES
from WF_ITEM_ACTIVITY_STATUSES
union all
INTO
&ITEMT,
&ITEMK,
&BEGIN,
&END,
&ERRNM,
&ERRMSG,
&ERRSTK,
from WF_ITEM_ACTIVITY_STATUSES_H
order by 1,2
Workflow alert reference 3 Script 1 Alert Name: Workflows which will not be purged:
'AMEUPDUN',
'APCCARD',
'APEXP',
'APO_FREQ',
'APOL_SUP',
'APOLCCTU',
'APOLEMRV',
'APOLPROM',
'APPEWF',
'APWRECPT',
'CREATEPO',
'POAPPRV',
'POERROR',
'PORCOTOL',
'POREQCHA',
'PORPOCHA',
'RAPIDAXS',
'REQAPPRV',
'RPDXDET',
'RPDXIPD',
'WFERROR',
'WFTESTS')
group by item_type
union
INTO
&TABLE,
&TYPE,
&COUNT
from wf_notifications
'AMEUPDUN',
'APCCARD',
'APEXP',
'APO_FREQ',
'APOL_SUP',
'APOLCCTU',
'APOLEMRV',
'APOLPROM',
'APPEWF',
'APWRECPT',
'CREATEPO',
'POAPPRV',
'POERROR',
'PORCOTOL',
'POREQCHA',
'PORPOCHA',
'RAPIDAXS',
'REQAPPRV',
'RPDXDET',
'RPDXIPD',
'WFERROR',
'WFTESTS')
group by message_type
Workflow alert reference 4 Script 1 Alert Name: Workflows errored in last day:
from WF_ITEM_ACTIVITY_STATUSES
union all
INTO
&ITEMT,
&ITEMK,
&BEGIN,
&END,
&ERRMSG,
&ERRSTK,
from WF_ITEM_ACTIVITY_STATUSES_H
order by 1,2
Workflow alert reference 5 Script 1 Alert Name: Workflows with no end dates:
INTO
&COUNT,
&TYPE
from WF_ITEM_ACTIVITY_STATUSES
group by item_type
Workflow alert reference 6 Script 1 Alert Name: Workflow items not being purged:
INTO
&ID,
&TABLE,
&DATE
union all
order by notification_id
apps.per_people_f ppf,
apps.FND_USER_RESP_GROUPS_DIRECT rgd
union all
select
item_type,
to_date(end_date),
action,
performed_by,
count(*)
from WF_ITEM_ACTIVITY_STATUSES_H
apps.per_people_f ppf,
apps.FND_USER_RESP_GROUPS_DIRECT rgd
Version Information