SAP (SAP America, Inc. and SAP AG) assumes no responsibility for errors or omissions in
these materials.
These materials are provided “as is” without a warranty of any kind, either express or
implied, including but not limited to, the implied warranties of merchantability, fitness for a
particular purpose, or non-infringement.
SAP shall not be liable for damages of any kind including without limitation direct, special,
indirect, or consequential damages that may result from the use of these materials.
SAP does not warrant the accuracy or completeness of the information, text, graphics, links
or other items contained within these materials. SAP has no control over the information that
you may access through the use of hot links contained in these materials and does not endorse
your use of third party web pages nor provide any warranty whatsoever relating to third party
web pages.
mySAP BI “How-To” papers are intended to simplify the product implementation. While
specific product features and procedures typically are explained in a practical business
context, it is not implied that those features and procedures are the only approach in solving
a specific business problem using mySAP BI. Should you wish to receive additional
information, clarification or support, please refer to SAP Professional Services
(Consulting/Remote Consulting).
HOW TO … SET UP A SEM-BCS DATA MART IN BW
2.1 Introduction
In the standard solution, SEM-BCS uses only a virtual InfoCube for reporting on its
transactional data. The virtual InfoCube has a service assigned, namely the function module
RSSEM_CONSOLIDATION_INFOPROV, which applies the necessary SEM-BCS logic, such as
hierarchy and edge information, to the flat data coming from the transactional cube.
In this standard scenario you always see the real-time consolidation data in reporting, as the
virtual InfoCube directly reads the transactional data from the transactional BasisCube it is based on.
The disadvantage of this scenario is that you have few options to optimize reporting
performance, since it is possible neither to define aggregates nor to use the query cache on a
virtual InfoCube. So if you want the full range of performance-tuning options, you should
work with additional BasisCubes (in addition to the transactional BasisCube on which the
virtual InfoCube is defined). This can make your reporting much faster.
Depending on how up to date you want the report data to be, the following three scenarios can
be defined:
3.1 Introduction
In this paper we want to show how a Data Mart scenario can be set up for SEM-BCS
reporting. It is valid for all scenarios mentioned at the beginning of this document. To keep
things as simple as possible, this guide describes only the principle, so it works with just one
BasisCube (according to scenario 1).
You can apply the same procedure to the MultiCube scenarios (scenarios 2 and 3); the
difference between the scenarios only affects the queries, where you have to define the data
sources to be used.
o Generate Metadata (“Export DataSource”) in Source BW ( = Target BW) using the SEM-BCS
virtual InfoCube
o Copy the Metadata from the virtual InfoCube (“Source BW”) to the BasisCube (“Target BW”)
o Create an InfoPackage
o Load Data
The next picture illustrates the main steps. In our case the source and the target
system are the same (“MySelf-System”).
Before a BW can request data from another BW, it must have information about the structure of
the data to be requested. To do this, you have to upload the Metadata from the source BW into
the target BW.
You can generate an export DataSource for the respective data target in the source BW. This
export DataSource includes an extract structure, which contains all the characteristics and key
figures (Metadata) of the data target.
The export DataSource is needed to transfer data from a source BW into a target BW. As we use
the “Data Mart in the Myself System”, source and target are the same BW/SEM system. The
Data Mart interface in the Myself System is used to connect the BW system to itself. This
means you can update data from data targets into other data targets within the same system.
Action
Select the InfoProvider tree in the Administrator Workbench in the source BW. Select your
virtual InfoCube, open the context menu (right mouse button), and perform the function
Generate Export DataSource. See next picture.
Result
In the background, all metadata about the structure of the data that will be needed in the
additional BasisCube is created. This can take a while; you can follow the current step in the
status bar. At the end, if no errors occurred, you get a success message. The technical name of
the generated export DataSource is derived by prefixing the name of the data target with “8”.
In our case we get the export DataSource with the technical name “80BCS_VC11”,
as you can see in the next picture (contained in the popup with the success message). Note that
the relevant export DataSource is not displayed in the InfoSource tree of the source BW.
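The naming rule just described can be stated as a one-line function. This is only a sketch of the convention as described in this paper, not an SAP API:

```python
def export_datasource_name(data_target: str) -> str:
    # The technical name of a generated export DataSource is the
    # name of the data target prefixed with "8".
    return "8" + data_target

# e.g. the data target 0BCS_VC11 yields the export DataSource 80BCS_VC11
```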
This step is only needed if you do not yet have an existing BasisCube that could be used for the
scenario. Otherwise continue with the next step, Copy the Metadata from the virtual InfoCube
(“Source BW”) to the BasisCube (“Target BW”).
Description
If you do not yet have an existing BasisCube that could be used for the scenario, you have to
create a new one.
The additional BasisCube is needed because only with it can the full range of performance
optimizations be exploited, e.g. cube aggregates and compression.
Action
In the Administrator Workbench within the Modeling area select your InfoArea and create a
new InfoCube via the context menu. Perform the function Create InfoCube…. See next screen.
Result
You get the dialogue for editing an InfoCube. See next screen.
Action
Enter the technical name as well as the description of the new InfoCube and use your virtual
InfoCube as the template for the new BasisCube: enter the technical name of your virtual
InfoCube into the ‘Copy from’ field. This way the new InfoCube already has all the
InfoObjects you need. Ensure that the InfoCube type is ‘BasisCube’ and that the flag
‘Transactional’ is not set; this is already predefined. Push the ‘Create’ button.
Result
The new BasisCube is created, but still in version ‘New’ and object status ‘Inactive, not
executable’. You see this in the appearing detail screen for editing the InfoCube (no screen
available).
Result
A new BasisCube was created with the same structure as the existing virtual InfoCube.
3.3.3 Copy the Metadata from the virtual InfoCube (“Source BW”) to the
BasisCube (“Target BW”)
Action
Create update rules for the new BasisCube. In the context menu of the BasisCube choose Create
Update Rules, see next screen:
Result
The detail screen for creating Update Rules appears (no screen available).
Action
For ‘Data Source’ choose ‘InfoCube’ and select your virtual InfoCube. Select the ‘Next screen’
button in the button bar. See next screen.
Result
You get the next screen with the proposal for the update rules with the version ‘New’:
Action
Result
The InfoCube update program is generated and the update rules are created. You get the
following result screen:
If you want to see the update rules, go back (via the green ‘Back’ button) to the overview
screen. There you can see that the created update rules are assigned to your BasisCube and
that they are generated (they begin with the prefix ‘8’; in our example ‘80BCS_VC11’),
see next screen:
Result
The DataSource overview is displayed and the cursor is positioned on your DataSource, which
is not yet mapped to an InfoSource, see next picture:
Action
Assign your generated DataSource (in the example this is ‘80BCS_VC11’) via context menu
(right mouse button) to an InfoSource by choosing the function ‘Assign InfoSource’. See next
screen.
Result
A popup screen for the InfoSource assignment appears with an ‘Application Proposal’ for an
InfoSource. This proposed InfoSource does not exist yet.
Action
Just confirm this popup, and also confirm the subsequent dialogue asking you to save the
changes.
Result
The proposed InfoSource is generated. After being generated the detail screen of the InfoSource
with its transfer structure/transfer rules and communication structure appears.
Action
Create a transfer rule for your InfoSource by just activating the proposed transfer rule appearing
within the detail screen (see next picture).
Result
A transfer rule for your InfoSource has been created and activated.
Create an InfoPackage for your InfoSource. To do this, change to the InfoSource overview
(see next picture). If you do not see your newly generated InfoSource, choose Settings ->
Display Generated Objects:
Important information:
In the following screen it is very important to make the correct entries in the data selection
(first tab). The virtual InfoCube has a special logic that uses the characteristic values of this
selection to identify the consolidation area.
You must ensure that the characteristic values you enter here identify exactly one
consolidation area and are not valid for two or more consolidation areas.
To do so, open the consolidation workbench, choose the customizing of your consolidation
area, and take those characteristics into your selection that have values in the definition area of
the consolidation area you are working with (see next picture).
Additionally, you can restrict the selections / data packages further, e.g. by fiscal year /
period. This of course depends on your own scenario and requirements.
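The uniqueness requirement can be illustrated with a small sketch. The consolidation-area definitions and characteristic names below are invented for illustration; the real values come from the customizing of your consolidation areas:

```python
# Invented example: each consolidation area is identified by fixed values of
# certain characteristics. 0CS_VERSION / 0FISCVARNT are placeholder names.
cons_areas = {
    "AREA1": {"0CS_VERSION": "100", "0FISCVARNT": "K4"},
    "AREA2": {"0CS_VERSION": "200", "0FISCVARNT": "K4"},
}

def matching_areas(selection):
    # An area matches if every value of its definition appears in the
    # InfoPackage data selection.
    return [name for name, definition in cons_areas.items()
            if all(selection.get(ch) == val for ch, val in definition.items())]

selection = {"0CS_VERSION": "100", "0FISCVARNT": "K4"}
assert len(matching_areas(selection)) == 1  # exactly one consolidation area
```

A selection whose values match no area, or more than one area, would feed the virtual InfoCube with an ambiguous consolidation-area key and must be corrected before scheduling.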
Result
After saving, an InfoPackage is created and can be scheduled for data transfer.
Action
Switch on the last setting ‘Compress after Roll Up (Or After Load, W/Out Active Aggregates)’
and press the Save button. See next screen.
Result
The InfoCube content is automatically compressed after the rollup. If aggregates exist, only
requests that have already been rolled up are compressed. If no aggregates exist, all requests that
have yet to be compressed are compressed.
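The compression rule just described can be sketched as a small decision function. This is an illustrative model of the documented behavior, not SAP code:

```python
def requests_to_compress(requests, has_aggregates):
    # With aggregates: only requests that have already been rolled up
    # are compressed. Without aggregates: every request that has not
    # yet been compressed is compressed.
    if has_aggregates:
        return [r for r in requests if r["rolled_up"] and not r["compressed"]]
    return [r for r in requests if not r["compressed"]]

requests = [
    {"id": 1, "rolled_up": True,  "compressed": False},
    {"id": 2, "rolled_up": False, "compressed": False},
]
```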
The tab ‘Schedule’ appears with the default setting ‘Start Data Load Immediately’ (see next
picture):
You can either start a single data transfer (immediately or later) or define a job that
performs the data transfer on a regular basis. We will do the latter in our example:
Action
Result
Action
In the area ‘Date/Time’ the appropriate fields are shown and ready for input. Furthermore, a
checkbox for the flag ‘Periodic job’ is shown at the bottom. See next screen:
Action
Action
Another popup appears for selecting finer time periods. See next screen.
Action
In the field for months enter ‘1’ and press the save button.
Result
The popup disappears and you are back in the popup for period values.
Action
Press the save button in the remaining popup for period values.
Result
The popup disappears and you are back in the schedule tab.
Action
You are finished now. The InfoPackage is now scheduled and the data transfer from your virtual
InfoCube into your BasisCube will take place every month.
4.1 Add-Move-Scenario
Why “Add-Move”?
How much does this gain on average?
The data mart has one disadvantage: it works with full update. That means that before you load
the same period from the virtual provider again, you first have to delete the whole period slice
in the basic cube.
a) Delta-Logic
A solution for this could be: load the data first into a delta calculating ODS before updating the
basic cube.
The virtual provider for SEM-BCS can deliver data package-wise. However, it cannot be
guaranteed that the packages are disjoint. There might be a record with the same key in, e.g.,
both package 7 and package 8. The correct value for this key combination is the sum of the
key figures of both records.
In order to make sure that the BCS data is free of duplicates, another ODS (an aggregating
one) has to be introduced and put before the delta-calculating ODS.
Attention:
The content of this aggregating ODS has to be deleted after every data load. Alternatively, an
artificial characteristic has to be introduced that contains a unique identifier for every data
load. It can be filled, e.g., with the request ID. (Open question: what is the difference between
request ID and package ID?)
Then it is sufficient to delete the content of the aggregating ODS, e.g. when a certain amount
of data is reached.
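The job of the aggregating ODS can be sketched in a few lines: records with the same key may arrive in several data packages, and the correct value is the sum of their key figures. The package contents below are invented:

```python
from collections import defaultdict

def aggregate_packages(packages):
    # Sum the key figures of all records that share the same key,
    # across all data packages of one load.
    totals = defaultdict(float)
    for package in packages:
        for key, amount in package:
            totals[key] += amount
    return dict(totals)

# the same key appears in package 7 and package 8; the correct value is the sum
package7 = [(("CG1", "2023001"), 100.0)]
package8 = [(("CG1", "2023001"), 50.0), (("CG2", "2023001"), 30.0)]
```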
[Figure: Data Mart architecture. On the SEM-BCS side, the virtual cube sits on top of the
transactional cube; the Data Mart transfers the data to the reporting cube on the BW side.]
1. Parallelization
N.B.: the activation of the data in an ODS can only be started after the last loading request has
finished. This can be modelled with a process chain in BW.
How to parallelize?
If you have several hierarchies for the consolidation group characteristic, you can parallelize
per hierarchy:
- precondition: the hierarchies you want to load are disjoint
- a hierarchy that is not relevant for reporting but overlaps with another hierarchy is no
problem
- you have to select the top node of the hierarchy and all groups below it
E.g.:
H1
  TOP1
    CG1
    CG2
      CG21
      CG22
    CG3
H2
  TOP2
    CG4
    CG5
    CG6
H3
  TOP3
    CG3
    CG5
    CG7
You can load H1 and H2 in parallel because these two are disjoint. You can choose not to load
hierarchy H3 if it is not relevant for reporting.
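The precondition can be checked quickly, using the group sets from the example above:

```python
# Consolidation groups contained in each hierarchy (from the example).
H1 = {"CG1", "CG2", "CG21", "CG22", "CG3"}
H2 = {"CG4", "CG5", "CG6"}
H3 = {"CG3", "CG5", "CG7"}

def disjoint(a, b):
    # Two hierarchies may be loaded in parallel only if they share no group.
    return not (a & b)
```

Here H1 and H2 are disjoint and can be loaded in parallel, while H3 overlaps both (CG3, CG5) and would violate the precondition if loaded alongside them.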
Specifying the group selection in the InfoPackage is cumbersome because no hierarchy
selection is available there.
Therefore the simple recommendation is just not to restrict the group at all, thereby selecting
all groups.
But if there are hierarchies that are not relevant for reporting, and these hierarchies contain
different groups than the reporting-relevant hierarchies, you can save data volume and runtime
by extracting “hierarchy-wise”.
It is possible to start the InfoPackage via BAPI, so you can program the group selection.
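As a sketch, the group selection could be built programmatically like this. The row layout (FIELDNAME / SIGN / OPTION / LOW) mirrors a typical BW selection table, but the characteristic name 0CS_GROUP and any BAPI details (connection, function module parameters) are assumptions for illustration, not taken from this paper:

```python
def group_selection(groups, fieldname="0CS_GROUP"):
    # Build one inclusive "equals" selection row per consolidation group.
    # 0CS_GROUP is a placeholder characteristic name.
    return [{"FIELDNAME": fieldname, "SIGN": "I", "OPTION": "EQ", "LOW": g}
            for g in groups]

# select the top node of hierarchy H1 and all groups below it
rows = group_selection(["TOP1", "CG1", "CG2", "CG21", "CG22", "CG3"])
# these rows would then be passed to the InfoPackage via BAPI before starting it
```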