
1. Performance issues in summary
2. Query performance analysis
3. Cache monitor
4. ST03n
5. ST13
6. ST14
7. Statistics
8. ST02
9. BW Administration Cockpit
10. Optimizing performance of InfoProviders
11. ILM (Information Lifecycle Management)
12. BWA
13. Query analyzing example
14. General Hints

1. The common reasons for performance issues in summary


Causes for high DB runtimes of queries

- no aggregates/BWA
- DB statistics are missing
- indexes not updated
- read mode of the query is not optimal
- undersized PSAPTEMP
- DB parameters not optimal (memory and buffers)
- hardware: buffers, I/O, CPU, memory are not sufficient
- is the OLAP cache being used?

Causes for high OLAP runtimes

- high number of transmitted cells, because the read mode is not optimal
- user exits in query execution
- usage of big hierarchies

Causes for high frontend runtimes

- high number of transmitted cells and formatting sent to the frontend
- high latencies in the WAN/LAN connection
- insufficient client hardware

2. Query performance analysis

I think this is a really important point (including the OLAP cache) and should be explained a little more deeply.

Transaction RSRT: To get exact runtimes for a before/after analysis, use this transaction with or without cache/BWA etc.:

- Choose the query, then Execute and Debug -> Do not use cache -> Display statistics data
- Button 'Properties': activate the cache mode (this can also be activated for the whole InfoProvider)

You should use grouping if you use a MultiProvider where the data of only one cube changes independently of the others. This way you can avoid invalidating the cache. The following grouping procedures are available:

1) No grouping
2) Grouping depending on InfoProvider types
3) Grouping depending on InfoProvider types, InfoCubes separately
4) Every provider separate

1) All results of an InfoProvider are stored together. If the data of one of the InfoProviders changes, the whole cache must be recreated. This setting should be used when all the InfoProviders used by the MultiProvider have the same load cycle.
2) All results are stored grouped by the type of InfoProvider. This option should be used when basic InfoCubes are combined with a real-time InfoCube.
3) Same as 2), with the additional feature that the result of every InfoCube is stored separately. It should be used when you change/fill the cubes independently of each other.
4) The results of every provider are stored separately (independent of the type). This option should be used when not only InfoCubes but also other provider types are updated separately.

2.1 RSRT Query Properties


You can turn off parallel processing for a single query. For queries with very fast response times, the overhead of parallel processing can be greater than the potential time gain; in that case it may make sense to turn parallel processing off. Just play a little with RSRT and the different options to find the optimal settings for your queries! There are also several special read modes for a query. In most cases the best choice is 'H' (query data is read when you navigate or expand hierarchies).

http://raj-bi.blogspot.in/2010/10/summary-of-bi-70-performance.html
http://help.sap.com/saphelp_nw70/helpdata/en/43/e39fd25ff502d2e10000000a1553f7/content.htm
http://www.insiderlearningnetwork.com/bridgetkotelly/blog/2012/04/25/tips_from_joerg_boeke_on_tuning_your_sap_bw_systems:_recent_qa_on_improving_query_performance_(transcript)__
http://scn.sap.com/people/jens.gleichmann/blog/2010/10/12/summary-of-bibw-70performance-improvements
http://www.skeneintelligence.com/2011/02/cache/


Our DBSEL = 30 and DBTRANS = 1, so the ratio is 30/1, i.e. 30. This ratio should not be more than 10; if it is, you need aggregates. DBSEL = number of records selected; DBTRANS = number of records ultimately transferred.
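The DBSEL/DBTRANS check above can be sketched as a small helper. This is a minimal illustration, not an SAP API; the function name and the threshold of 10 are taken from the rule of thumb stated in the text.

```python
# Sketch of the rule of thumb above: flag queries whose DBSEL/DBTRANS
# ratio suggests that aggregates (or a BWA index) would pay off.
def needs_aggregates(dbsel: int, dbtrans: int, threshold: float = 10.0) -> bool:
    """DBSEL = records selected from the DB, DBTRANS = records ultimately
    transferred to the OLAP processor. A ratio above ~10 hints at
    missing aggregates (per the guideline in the text)."""
    if dbtrans == 0:
        # everything selected was aggregated away; a nonzero selection
        # with zero transfer is the extreme case of the same symptom
        return dbsel > 0
    return dbsel / dbtrans > threshold

# The example from the text: DBSEL = 30, DBTRANS = 1 -> ratio 30
print(needs_aggregates(30, 1))    # True: aggregates recommended
print(needs_aggregates(100, 50))  # False: ratio 2 is fine
```

In practice you would feed this with the DBSEL/DBTRANS values shown in the RSRT statistics output for the query in question.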

OLAP time would be high if:
1. You have a lot of calculations such as before aggregation / exception aggregation etc.
2. You have conditions.

OLAP time is high when the result set from BW is large and the formulas/calculations are done on the way to the frontend; basically the data comes into the OLAP processor and it has to do additional calculations. For example, your database returns 1 million rows and you have conditions that reduce this to 100 rows. You can try reducing the OLAP time by using precalculations/filters wherever possible.

As for frontend time, it could also be because you might be using a custom Excel stylesheet. What you could do: if you are on BW 7.0 or above, in BEx there is an option (a checkbox) to enable statistics. Check it, log out of and back into BEx, and run your query. Once you have run the query, go back to the same place where you checked statistics; when you check there, another Excel sheet should open up, giving you better visibility into the times.

Our BW systems may have many reports. After some time, some reports lose their importance but remain in the system unnecessarily. To find those queries there is a workaround: create a report on the number of times a BEx report was executed by a user in a period of time. In this blog I show how to bring such information into BW reports. All the query execution information resides in the table RSDDSTAT_OLAP. Below is a snapshot of the table contents.

The important fields in this table are HANDLETP, UNAME, CALDAY, UTIME, INFOPROV, OBJNAME, and STARTTIME. A generic DataSource can be created on top of this table. HANDLETP equals OLAP for all BW query executions, so we can restrict the InfoPackage selection to HANDLETP = OLAP. STARTTIME is the timestamp of the query start time, which can be used as the generic delta pointer. The other fields can be interpreted as follows: UNAME = user name, CALDAY = day on which the query was executed, UTIME = time at which the query was executed, INFOPROV = InfoProvider on which the report was created, OBJNAME = technical name of the query/report. This DataSource can be replicated to the BW system, and the necessary data flow can be created to store the data in an InfoProvider. Tip: add a field ZCOUNTER to the InfoProvider and, while creating the transformation, set this field to the constant 1. This counter can be used to calculate the number of times a report was executed by a user in a specified time frame (summation of the counter). By adding other characteristics such as CALDAY and UTIME as free characteristics, various drill-downs become possible.
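The ZCOUNTER idea above boils down to summing a constant 1 per execution record, grouped by user and query. The sketch below illustrates that aggregation outside BW; the records are invented sample data shaped like the RSDDSTAT_OLAP fields named in the text, not real table contents.

```python
# Sketch of the ZCOUNTER aggregation described above, on made-up records
# shaped like RSDDSTAT_OLAP rows (UNAME, OBJNAME, CALDAY). In BW this
# summation happens in the InfoProvider; here we mimic it in Python.
from collections import Counter

records = [  # (UNAME, OBJNAME, CALDAY) -- invented sample data
    ("ALICE", "ZQ_SALES_01", "20120401"),
    ("ALICE", "ZQ_SALES_01", "20120402"),
    ("BOB",   "ZQ_STOCK_07", "20120401"),
]

executions = Counter()
for uname, objname, _calday in records:
    executions[(uname, objname)] += 1  # ZCOUNTER = constant 1 per record

print(executions[("ALICE", "ZQ_SALES_01")])  # 2 executions by ALICE
```

Restricting `records` to a date range on CALDAY before counting gives the "executions per user in a specified time frame" figure the text describes.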

You can start by identifying the major components of your query time. This is similar to the query statistics that you see in RSRT, but I decided to analyze the same information using the statistics available as part of the standard content. I installed the BI content as prescribed in the BI technical content installation, and I am on BI 7.0; I have not attempted the same in 3.x for want of a system.

The OLAP statistics detail cube has all the necessary information (also a lot of it; execute your queries on it with care, as it might lead to too much data to analyze!).

This exercise can be done in Excel, or you can build a query for it. I did this in Excel since I was not sure whether any business content query existed for the purpose. The fields of interest here are:

- Used InfoProvider
- Query Runtime Object
- Calendar Day (do not query for all days; too much data might lead to overflow on the key figures!)

How you proceed depends on how you want to do the analysis: you can either go by query or by InfoProvider (a MultiProvider also constitutes an InfoProvider; this is the InfoProvider on which the query is built). Enter the query technical name in the Query Runtime Object field and set the necessary Calendar Day filters as indicated below. I have taken only 3 days of data, but I found no issues even when taking 3 to 6 months of data, only that the result set generation is a little slow on my system.

Make sure you check 'Output Number of Hits'; I will come to why this is required later in this blog. Under the selection fields, select whichever is applicable (InfoProvider / query) and also check the following.

The statistics event is key for this analysis.

I found overflows to occur in OLAP Counter and Step when selecting a larger slice of data in terms of days, and hence deselected them; I used the number of hits as a counter instead. The output will look something like this (I have hidden the actual results). Moving on to what we can do with the results:

We now have a long list of statistics events, but what do we do with them? Go to table RSDDSTATEVENTS; it contains the descriptions of the statistics events displayed above.

Now go back to the base data you have and do the following :

Statistics Event | Rowcount | OLAP Time | Average Time

Here the average time is OLAP Time / Rowcount. This is debatable, so compress the cube fully to get accurate rowcounts, or use the OLAP counter if you do not get any overflow message. Next we need to find the percentage contribution to the total. This again is a simple Excel formula: take each event's percentage contribution to the total, and you have the events that contribute the major portion of the query time; you can then look at resolving those accordingly. We used this to determine queries/cubes that were candidates for the BI Accelerator, but the same analysis can be used for other purposes as well. The cube contains a lot more detail and much more analysis can be done; this blog is meant to initiate this kind of query analysis so you can take it further. The same can be analyzed using RSRT as well, but historical analysis requires the exercise described above.
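The Excel calculation above (average time per event and percentage contribution to the total) can be sketched as follows. The event names and timings here are invented for illustration; the real ones come from the statistics cube and RSDDSTATEVENTS.

```python
# Sketch of the analysis described above: per statistics event, compute
# average time (OLAP Time / Rowcount) and percentage of total query time,
# then list events from largest contributor to smallest.
events = {  # event -> (rowcount, total time in seconds); invented sample data
    "DATA_READ": (10, 40.0),
    "OLAP_CALC": (10, 8.0),
    "FRONTEND":  (10, 2.0),
}

total = sum(t for _, t in events.values())  # 50.0s overall

for name, (rows, t) in sorted(events.items(), key=lambda kv: -kv[1][1]):
    avg = t / rows              # average time per event occurrence
    share = 100 * t / total     # percentage contribution to total
    print(f"{name}: avg {avg:.2f}s, {share:.1f}% of total")
```

With real data, the events at the top of this list are the ones worth attacking first (for us, the candidates for the BI Accelerator).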

http://scn.sap.com/people/arun.varadarajan/content
