Professional Documents
Culture Documents
Henry Cook,
January 2016
Special Note:
This slide deck is provided for those wishing to obtain a copy of the slides from the Why HANA presentation published on YouTube and as a blog.
The best way to consume this presentation is to first watch it being presented, then to use these slides as reminders, or as supporting material for your own meetings.
The video can be reached through the following two links. The first is a blog which provides context, an introduction, and a link to the video. The second link goes directly to the video.
https://blogs.saphana.com/2016/03/11/hana-the-why/
https://www.youtube.com/watch?v=VCEr9Y8ZrVQ&feature=youtu.be
Once downloaded, the first part of this SlideShare (Slides 1-46) can be viewed or used just as it appears in the video itself.
The second part of the SlideShare (Slides 48-92) provides speaker notes for all the slides. These can be used to revise or clarify particular topics within the presentation.
We hope that you find this useful in progressing along your own HANA journey!
© 2016 SAP SE or an SAP affiliate company. All rights reserved.
Public
Timeline: R/2 (1979, Mainframe) · R/3 (1992, Client Server) · ERP (2004, Web SOA) · 2015
Impeded by complexity: a large, complex suite of applications
15-month release cycles
Surrounded by increasingly nimble competitors
Dependent upon others for data management
Incurring large development costs
Key question: how do we get ahead, and stay ahead, of the market?
Strategic response: massive simplification of what we do
This simplification is what we now know as HANA
Our Mission: help organizations become best-run businesses
Our Vision: help the world run better and improve people's lives
Our Passions: teamwork, integrity, accountability, professionalism and trust
[Figure: a typical enterprise landscape, a sprawl of dozens upon dozens of separate databases]
[Figure: IT cost breakdown (hardware, software, effort / services / admin) and how simplification shifts spend toward innovation]
Productivity: users and developers
Agility: faster response, time to market; easier change
TCO: radical simplification of the IT landscape
[Figure: questions (Qn.) posed in SQL against the data, with predictive and text processing, returning answers (Ans.)]
Instant BI (on transactional data)
Live conversation (instead of briefing books)
No aggregates or materialized cubes (dynamic views)
Views on views (up to 16 levels)
MULTI-CORE / PARALLELIZATION
NO DISK OPERATION
DYNAMIC MULTI-THREADING
INSERT ONLY
VIRTUAL AGGREGATES
LIGHTWEIGHT COMPRESSION
REDUCTION IN LAYERS
ANY ATTRIBUTE AS AN INDEX
ANALYTICS ON HISTORICAL DATA + TRANSACTIONAL
COLUMN STORE
SQL INTERFACE ON COLUMNS AND ROWS
OBJECT TO RELATIONAL MAPPING
GROUP KEYS
BEYOND SQL
PARTITIONING
MAP REDUCE
ON THE FLY EXTENSIBILITY
MINIMAL PROJECTIONS
3D SPATIAL
TEXT ANALYTICS
LIBRARIES FOR STATS & BIZ
REAL-TIME REPLICATION
Global development, 2007: Seoul, Shanghai, Ho Chi Minh, Bangalore, Tel Aviv, Berlin, Walldorf, Paris, Toronto, Vancouver, Dublin CA, Palo Alto
https://www.youtube.com/watch?v=jB8rnZ-0dKw
[Figure: SAP HANA platform architecture. SAP BusinessObjects tools connect via SQL and BICS to SAP HANA (Modeling Studio; Calculation and Planning Engine; Row Store, Column Store, Calc Engine, Graph Engine, R Interface, Text Engine, Planning Engine, Spatial). Data arrives via Data Services ETL / ELT from SAP Business Suite, SAP NetWeaver BW and 3rd-party systems, and via real-time replication and federation.]
Then vs. Now vs. Near Future:
Memory: 1 GB then; 6 TB now (x 6,000); 48 TB near future
CPU: 4 x 50 MHz then; 120 cores x 3 GHz now (x 1,800); 480 cores (8 x 4 x 15) near future
Transistors (per CPU): ~1 million then; 2.6 billion now
Panic
[Figure: data access latency and bandwidth across a server blade. Source: Intel]
Latency: L1 cache 1.5+ ns; L2 cache 4 ns; L3 cache 15 ns; DRAM memory 60+ ns (100+ ns across the blade); SSD 200,000 ns; mechanical disk 10,000,000 ns
Bandwidth: core ~50 Gb; DRAM ~12.8 Gb; SSD ~0.5 Gb; mechanical disk ~0.07 Gb
A Useful Analogy
CPU
L1 CACHE: TABLE (1m)
L2 CACHE: KITCHEN FRIDGE (3m)
L3 CACHE: THE GARAGE (9m)
A Useful Analogy
CPU
L1 CACHE: TABLE (1m)
L2 CACHE: KITCHEN FRIDGE (3m)
L3 CACHE: THE GARAGE (9m)
DISK: BREWERY, MILWAUKEE USA (6,000,000 metres)
When you store tabular data you get to store it in one of two ways
[Figure: a table of information laid out linearly in memory, either by column (the values of each of its many columns stored contiguously: 10, 35, 40, 12, ...) or by row (A, 10, 35, 40, 12, ... at successive memory addresses)]
For rows, data is moved to the processor in chunks, but only a tiny proportion of those chunks is useful
By Row
[Figure: fetching the row A, 10, 35, 40, 12 to sum its numbers drags along lots of padding; the processor ticks 3 bn times / sec while it waits]
[Figure: with a column store, the values 10, 35, 12, 53, 101, 2, 40, 44 stream through the L3, L2 and L1 caches of the CPU chip to the processor (3 bn ticks / sec)]
Columnar data stores offer huge benefits, if you can use them in a general-purpose way
[Figure: a table stored by column (10, 35, 40, 12) serving transaction processing, text, spatial, predictive, etc.]
[Figure: CPU chip (3.4 GHz) with registers, L1/L2/L3 caches, DRAM, and logs / persistence]
More processor cores per chip
Computations in registers seldom wait
Fast memory to keep cores fed
[Figure: ALL operations (RDBMS, transaction, update, text, predictive, spatial) run through the core's registers and L1/L2/L3 caches, backed by DRAM and logs / persistence]
Enables: Simplicity, Productivity
[Figure: the same hierarchy (core, register, L1/L2/L3 cache, DRAM, logs / persistence) used the traditional way]
Traditional use:
Doesn't get you to 100,000x
Other uses limited (e.g. no OLTP)
Main Master Record
Application-built secondary Index tables
ABAP
VDM / SQL
Validity Vector
No aggregates: we have the ability to dynamically produce aggregations on the fly
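The "no aggregates" idea can be sketched: instead of maintaining a stored totals table, each query aggregates the base data as it runs. A minimal illustration (the fact data and function names are hypothetical, not HANA APIs):

```python
from collections import defaultdict

# Hypothetical line-item facts: (region, amount). With on-the-fly
# aggregation there is no materialized totals table to keep in sync;
# every query scans the base data directly.
sales = [
    ("EMEA", 100), ("APJ", 40), ("EMEA", 60), ("AMER", 75), ("APJ", 25),
]

def totals_by(key_index, value_index, facts):
    """Aggregate on the fly: a scan over the base data, nothing pre-stored."""
    totals = defaultdict(int)
    for fact in facts:
        totals[fact[key_index]] += fact[value_index]
    return dict(totals)

print(totals_by(0, 1, sales))  # {'EMEA': 160, 'APJ': 65, 'AMER': 75}
```

Because nothing is materialized, inserting a new fact never invalidates a stored total; the next query simply sees it.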
[Figure: today's 27 material / stock tables (MARD, MSTQ, MSPR, MSTE, MSLB, MCHB, MSEG, MARC, MSTB, MSKU, MSSQ, MSSA, MSKA, MKOL and their history tables MARDH, MSTQH, MSPRH, MSTEH, MSLBH, MCHBH, MARCH, MSTBH, MSKUH, MSSQH, MSSAH, MSKAH, MKOLH) collapse in the new design to MSEG plus SAP sLog: 1 table]
From: separate FI, CO and Logistics documents, each with its own processing (Financial Accounting, Management Accounting, Logistics, Analytics)
To: one logical document
Customer benefits:
Harmonized internal and external reporting
Significantly reduced reconciliation effort
Significantly reduced pre-defined aggregates and memory consumption
Higher flexibility in reporting and analysis
Stability and flexibility in heterogeneous system landscapes
SAP HANA in-memory platform (2010/11), SAP Business Warehouse powered by SAP HANA (2012), SAP Business Suite powered by SAP HANA (2013), SAP Simple Finance powered by SAP HANA (2014), 2015
Real-time analysis, real-time business, real-time reporting, no aggregates
10x
7x higher throughput
1800x faster analytics & reporting
4x fewer process steps
Simultaneous real-time update (X m tx/s) and complex analysis on a single copy of the data
Developing using a much simpler data model: the logical data model
Using sophisticated math for forecasting, prediction and simulation, with fast response
Being able to make changes on the fly rather than in N-week mini-projects
Faster changes to simpler data models: metadata changes rather than physical data changes
[Figure: in a traditional data warehouse, a query runs against results prepared via aggregates and indexes over a copy of the data loaded by ETL; with SAP HANA the query runs directly on the data, with only an optional copy]
[Figure: spreadsheet views, business transactions, analytical applications and any software all work through a view layer (views on views) over a persistence layer in main memory, plus a log]
Source: In-Memory Data Management, Hasso Plattner/Alexander Zeier
Current Mode of Operation (CMO): traditional RDBMS example, ~7-month project
Mth 1: 4 weeks Develop (2 days data), 3 weeks Test
Mth 2: 3-4 weeks Rework
Mth 3: 2 weeks Tune, 2 weeks Backload
Mth 4-5: 4 weeks Volume Test
Mth 6: 2-3 weeks Report
Mth 7: 2 weeks Implement

Future Mode of Operation (FMO): SAP HANA (column-store in-memory DB), ~3-month project
Jun: 4 weeks Define
Jul-Aug: 4 weeks Develop/Test/Rework (unlimited data!), 1-3 days Backload, 1 day Tune!
Sep: 2 weeks Report Development & Volume Test, 1-2 weeks Implement

Less Development
Activate replication rather than ETL
No index (re-)build
Virtual model: no physical layers, no need to change the physical data model (e.g. aggregations)
Replication or ETL can go 50x faster (e.g. BW PoC)

Less Testing
Replication easier to test
No embedded calculation
Fewer transformations
Model-driven development
Higher self-service/analysis means fewer reports to build
No need to renew the semantic layer
Virtual model: easily transported; faster reload (no intermediate physical layers, in-memory)

"We took an analytic that took us 6 months to develop and we redeployed it on HANA in two weeks. Results come back so quickly now, we don't have time to get coffee." Justin Replogle, Director IT, Honeywell
Simpler Administration
S/4HANA: the primary reason for HANA and the culmination of a five-year release process
SAP HANA in-memory platform (2011), SAP Business Warehouse powered by SAP HANA (2012), SAP Business Suite powered by SAP HANA (2013), SAP Simple Finance powered by SAP HANA (2014), 2015
Real-time analysis, real-time business, real-time reporting, no aggregates
https://www.youtube.com/watch?v=q7gAGBfaybQ
Agility: faster response, time to market; easier change
TCO: radical simplification of the IT landscape
NEXT STEPS
https://blogs.saphana.com/2016/03/11/hana-the-why/
https://www.youtube.com/watch?v=VCEr9Y8ZrVQ&feature=youtu.be
HANA The Why Video Jan 2016.pptx
Mark Mitchell
Thank You
Q&A
Henry Cook
www.saphana.com
www.sap.com/hana
www.youtube.com/user/saphanaacademy
In order to make this presentation self-contained, the speaker notes for the slides are included below.
They are also useful if you want to refresh your memory regarding a particular topic.
Pepsi Ad
Think about our slide "HANA Techniques: How to Build a New Kind of Enterprise System", which shows all the techniques used by HANA, both those we adopted and those we invented.
There is an analogy with an award-winning series of adverts that Pepsi ran in the 1970s, which summarised all the things you associate with Pepsi in one high-energy, snappy sentence.
The theme of the ads, and the strapline that went with them, was "Lip Smackin', Thirst Quenching, Ace Tasting, Motivating, Good Buzzing, Cool Talking, High Walkin', Fast Living, Ever Giving, Cool Fizzin' Pepsi!"
Pepsi fizzes, but they didn't just call it "Fizzing Pepsi"; that would be selling it short.
Glancing back at the previous slide, we see that we have a "Massively Parallel, Hyperthreaded, Column Based, Dictionary Compressed, CPU Cache Aware, Vector Processing, General Purpose Processing, ACID Compliant, Persistent, Data Temperature Sensitive, Transactional, Analytic, Relational, Predictive, Spatial, Graph, Planning, Text Processing, In-Memory Database: HANA!"
Every one of those things contributes to the unique thing we've done. We've shortened that for convenience to "The In-Memory Database HANA", but we should never forget that there is a whole lot more to it than just in-memory.
https://www.youtube.com/watch?v=jB8rnZ-0dKw
A Useful Analogy
We are not good at thinking in terms of billionths of a second, since that is so far away from our day-to-day experience. So here is a good analogy thought up by one of my German colleagues. Imagine we are sitting in a house in London, enjoying a drink. In this analogy we substitute beer for data.
The beer you are consuming is the data in the CPU; it is immediately available and being processed.
The beer in Level 1 cache is the beer on the table, within easy reach, 1 metre away.
If we need more, we can go to the kitchen refrigerator, 4 metres away; this is like Level 2 cache.
Then there's the refrigerator in our garage, not more than 15 metres away: Level 3 cache.
Up to this point we are still just using beer in our house (that is, data from memory on the CPU chip); we have not even yet gone to DRAM, that is, left our premises.
If we need more than this, we can go down the street, not more than 40 metres away; fortunately we are next door to a liquor store. That's our RAM memory.
But what happens if we run out of beer (data) and have to go further, to the bulk store, to the brewery warehouse, the equivalent of the hard drive?
A Useful Analogy
What happens if we run out of beer (data) and have to go further, to the bulk store, to the brewery warehouse, the equivalent of the hard drive? In that case we have to go to Milwaukee, USA: 6 million metres, or 6,000 km, away!
(Of course, if we wanted to save some time we could use SSD and just go to the south coast of the UK; that would reduce the distance to just 133 kilometres to get our next beer.)
What this shows is the huge difference between the ability of silicon to process data and the ability of mechanical devices to feed it, all of which has happened in the last 7-8 years. Software written before then cannot exploit these features, because it doesn't know they're there. Where these techniques are starting to be used by others, they are typically bolt-ons to existing complex systems and have various restrictions imposed on them.
These are rough approximations, but they give a good sense of the relative distances. (Check for current numbers; they are improving all the time.)
Data access latencies and the analogy distances (1.5 ns = 1 m):
Level   Latency (ns)   Distance (m)    Distance (km)
CPU     0              0.00            0.00
L1      1.5            1.00            0.00
L2      4              2.67            0.00
L3      15             10.00           0.01
RAM     60             40.00           0.04
SSD     200,000        133,333.33      133.33
HDD     10,000,000     6,666,666.67    6,666.67
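The scaling behind the table is a simple linear conversion from latency to distance, anchored on L1 cache (1.5 ns) being the table 1 metre away. A minimal sketch (the latency figures are the slide's rough illustrative numbers):

```python
# Convert access latencies (ns) into "beer run" distances, anchored on the
# analogy's L1 figure: 1.5 ns corresponds to the table 1 metre away.
LATENCY_NS = {
    "CPU": 0, "L1": 1.5, "L2": 4, "L3": 15,
    "RAM": 60, "SSD": 200_000, "HDD": 10_000_000,
}
METRES_PER_NS = 1.0 / 1.5  # 1.5 ns -> 1 m

def analogy_distances(latencies=LATENCY_NS):
    """Return {level: distance in metres} under the beer-run analogy."""
    return {level: ns * METRES_PER_NS for level, ns in latencies.items()}

for level, metres in analogy_distances().items():
    print(f"{level:>3}: {metres:>13,.2f} m ({metres / 1000:,.2f} km)")
```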
When you store tabular data you get to store it in one of two ways
We've now established that to get access to the amazing speed of modern processors we have to use all those multiple cores, and feed them via the cache memories held within the chips.
Column-based data stores are one key technique that helps us do our work in-memory; they have become both proven and popular in recent years.
We tend to hold our data in tabular format, consisting of rows and columns. This is the format used by all relational databases, and it is the way HANA represents data too: a very familiar and standard format.
When you store any data in memory, or on disk, you need to do this in some kind of linear sequence, where data bytes are strung out one after another.
You can either store the data row by row (most databases do this), or you can store the data column by column. We see this illustrated above; we'll now explore the implications of each, and most importantly how this affects our ability to exploit these modern advances in computer chips. This may not be immediately obvious, but it soon will be.
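The two layouts described above can be made concrete; a minimal sketch (the table contents are made up for the example):

```python
# The same three-record table laid out linearly in the two possible orders.
rows = [
    ("A", 10, "x"),   # each tuple is one record
    ("B", 35, "y"),
    ("C", 40, "z"),
]

# Row by row: whole records stored one after another (what most databases do).
row_layout = [value for record in rows for value in record]

# Column by column: all values of each attribute stored contiguously.
col_layout = [record[i] for i in range(3) for record in rows]

print(row_layout)  # ['A', 10, 'x', 'B', 35, 'y', 'C', 40, 'z']
print(col_layout)  # ['A', 'B', 'C', 10, 35, 40, 'x', 'y', 'z']
```

Note that in the columnar layout the numbers 10, 35, 40 sit side by side, which is exactly what a scan that sums them wants.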
For rows, data is moved to the processor in chunks, but only a tiny proportion of those chunks is useful
Here we see the data laid out and physically stored in rows, in the more traditional manner we've used for decades.
Using this row-based format we have to skip over the intervening fields to get the values we want. E.g. if we want to sum the number fields highlighted with red boxes above, first we read the first one, then have to skip over some fields to find the second one, then skip over some more to find the third, and so on. These rows can be hundreds of attributes long; each row may be hundreds or thousands of bytes, and 1,000 bytes would not be unusual.
Processors typically fetch data in chunks, and bring them to the processor to have computations done on them.
In this diagram the alternating blue and green lines show the successive memory fetches which are retrieving data ready for computation to take place.
A processor typically fetches data from cache memory 64 bytes at a time. But a row may be 200, 300, 500 or more bytes long. Therefore it is going to take many fetches to get to the next useful value to be added, so most of the time we're skipping over padding between the useful values. All the while this is going on, the processor is spinning its wheels, ticking away, waiting for the next chunk of data that has a useful value contained within it to operate on.
So, to run at full speed and get the maximum out of these fast chips, it's not enough to have many fast processors; we also need to make sure that the next data the fast processor wants is sitting waiting for it in the cache memory, to be retrieved as soon as the processor is ready to process it.
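The arithmetic behind this point can be sketched; the 1,000-byte row and 64-byte cache line below are the illustrative figures from the notes above:

```python
# How much data crosses the memory bus per useful 4-byte value when we
# scan one numeric column? Illustrative figures: ~1,000-byte records,
# 64-byte cache lines.
CACHE_LINE = 64      # bytes per memory fetch
ROW_WIDTH = 1000     # bytes per record in a row layout
VALUE_SIZE = 4       # bytes per numeric value we actually want

def cache_lines_per_value(stride):
    """Cache lines transferred per useful value when scanning at `stride` bytes."""
    return max(stride, VALUE_SIZE) / CACHE_LINE

row_cost = cache_lines_per_value(ROW_WIDTH)   # one useful value per record
col_cost = cache_lines_per_value(VALUE_SIZE)  # values packed back to back

print(f"row store   : {row_cost:.3f} cache lines per value")
print(f"column store: {col_cost:.4f} cache lines per value")
print(f"-> roughly {row_cost / col_cost:.0f}x less data moved with columns")
```

With these figures the column scan moves roughly 250x less data, which is why the cores stop waiting.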
Columnar data stores offer huge benefits, if you can use them in a general-purpose way
Of course, using column-store data also provides us with the benefits of easier compression (because all the data in a column is of the same type) and the ability to add new columns non-disruptively, that is, without affecting those already there.
Column stores had already come into use for analytic applications because they were so efficient for this kind of working: the data compressed well, the data was tightly packed together, and you'd only retrieve from disk those columns mentioned by a query. So if our query only looks at three columns of a hundred-column table, we only have to scan three percent of the columns. This saves a huge amount of disk IO and data movement, hence the query speed-up. But it doesn't get us anywhere near the 100,000x speed-up we see through the CPU cache working.
It is the ability to use the local cache memory for our main computation, and thus make full use of the potential of modern microchips, that is the important concept here.
In order to fully take advantage of modern CPUs in this way we need to be able to use this technique across a full range of workloads: OLTP, OLAP, text, predictive. It turns out that this technique is suited to all of these too; of course, you need to design your system from the ground up to do this.
In the past, column-based storage performed poorly at updating, so SAP invented a way of doing high-speed updates against a column store. This is a unique capability, and it is this which makes the use of column stores general purpose; we can also use them for text processing, spatial, graph, predictive, etc., and any mix of them.
This means that whatever components of a modern workload we have, we can get the full benefit of modern CPUs across the full range of work we might wish to do. All data is held this way, and all processing makes use of it.
Other systems are starting to use these techniques, but often they are bolted on to the existing database, so all the traditional cost and complexity are still there. You have to nominate which data to hold in memory; usually it's used only for read-only queries; you have to do your updating somewhere else; and other styles of processing, like text, spatial and predictive, can't use these techniques. So you don't get the simplicity of design and development, or the runtime speed advantages.
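One reason column data compresses so easily is dictionary encoding, a common columnar technique: store each distinct value once and replace the column with small integer codes. A minimal sketch (not HANA's actual implementation, which is more sophisticated):

```python
def dictionary_encode(column):
    """Store each distinct value once; replace the column with integer codes."""
    dictionary = sorted(set(column))
    index = {value: code for code, value in enumerate(dictionary)}
    return dictionary, [index[value] for value in column]

def dictionary_decode(dictionary, codes):
    """Reverse the encoding by looking each code up in the dictionary."""
    return [dictionary[code] for code in codes]

# A column compresses well because it holds one type with many repeats.
cities = ["Berlin", "Paris", "Berlin", "Berlin", "Paris", "Walldorf"]
dictionary, codes = dictionary_encode(cities)
print(dictionary)  # ['Berlin', 'Paris', 'Walldorf']
print(codes)       # [0, 1, 0, 0, 1, 2]
```

The codes are also what make scans cache-friendly: comparing small integers is far cheaper than comparing the original strings.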
SAP HANA
More than just a database: a platform
Of course we need to build these capabilities into a complete working system, and we have done this with what we call the SAP HANA Platform.
At its core is the in-memory computing engine we've discussed, but this is complemented by large-scale disk-based column stores, stream processors, data replication, extract-transform-load capabilities, and easy integration with open-source systems such as Hadoop. This modern architecture contains a number of different complementary engines. The core engines run in memory to provide the simplicity and agility that in-memory gives, but it is extended by many other engines external to the in-memory core: engines for multi-tier storage, Hadoop as an engine for large volumes of unstructured data, engines that happen to sit in legacy systems, and specialist engines such as for streaming or for communications with mobile devices.
This architecture is simple, elegant and modern. It allows for any mix of processing, provides very cost-effective IT, and yet gives you the productivity and agility advantages of in-memory.
These are generic advantages, and they can be used for SAP applications, non-SAP applications, or mixtures of the two.
This allows us to use the platform to address a wide range of information requirements and match the right kind of tool to the right job. It allows us to very easily integrate the SAP HANA Platform into existing IT estates, complementing what is already there, and then to meet requirements in a simpler and more cost-effective manner.
We'll not dwell on this now, as SAP has many comprehensive presentations to take you through the SAP HANA Platform; the point here is that, having invented a fundamentally simpler and more effective way of building enterprise systems, we have built this out into a complete platform.
At the top we see the traditional way of doing things, and at the bottom we see how we do them with HANA.
This is again pretty simple: many of the tasks shrink, and many of them just go away. There are no physical cubes to design, place on disk, and tune; they don't exist any more.
When we are checking out the data we can do this with queries that give us instant results, so we can go through many cycles of data checking very quickly.
We don't need a separate volume test phase. Usually we'd functionally test on 10% or less of the data, then do a separate phase where we crank up the volumes, at which point we discover the data errors we'd missed and the scaling problems. With HANA we are more likely to use production-scale volumes from the beginning, so we don't need a separate phase.
As we implement our different queries, and as the requirements get modified, we would normally have to design the database, that is, define the physical data structures in a way that will provide adequate performance for our many users. With HANA we don't need to do this: we can run pretty much on just the logical model, with no aggregates, cubes or indexes required. It is just common sense that this takes much less effort.
S/4HANA: the primary reason for HANA and the culmination of a five-year release process
So, we can now see why we developed SAP HANA, and how we are able to bring about the significant simplification that it offers.
We've seen that by fully exploiting modern microchips through innovative techniques we can get speed-ups of 100,000x or more.
This then allows us to eliminate unnecessary data structures such as pre-aggregated data, indexes, etc., thus significantly simplifying applications and making them easier to develop, maintain and use. And there is enough speed increase left over, after making the simplifications, to still have the applications run hundreds or thousands of times faster, thus adding to our productivity by eliminating wait times.
We can thus see, in the picture above, that S/4HANA was the long-term goal in the development of HANA: the next generation of ERP, with significant capabilities that can only be provided using the simplicity and speed of HANA, capabilities that cannot be provided using previous technology.
We can also see that this is the culmination of a careful and methodical five-year release schedule: first releasing the core technology, then moving on to a non-mission-critical application (BW), then porting the Business Suite to HANA, then taking the core application that the others depend on, Finance, and showing how it can be simplified, then moving on to other parts of the Suite to do the same thing there.
At different points in HANA's history it has been type-cast into a number of different roles: an in-memory query accelerator, a replacement for BW Accelerator, a turbo-charger for the Suite. So it's easy to see how the perception of HANA can get stuck in any one of these roles. But now, with the successful introduction of Simple Finance and S/4HANA, its true, general-purpose and multi-purpose role becomes clear. It represents a fundamentally better, simpler and more effective way of using information that can be applied to any information task.
The radical simplifications that HANA brings allow the new generation of ERP, S/4HANA, to evolve from its predecessors, R/2, R/3 and ERP, and to provide significantly different business capabilities, such as fast close, predictive close, creation of real-time reports on the fly, easy restructuring, more flexible reporting on any stored attribute, and many more.
https://www.youtube.com/watch?v=q7gAGBfaybQ