Disclaimer
These slides represent the work and opinions of the presenter and do not constitute official positions of Oracle or any other organization. This material has not been peer reviewed and is presented here with the permission of the presenter. This material should not be reproduced without the written permission of interRel Consulting.
About interRel
Reigning Oracle Award winner: EPM & BI Solution of the Year
Three Oracle ACE Directors
Authors of 10+ of the best-selling books on Hyperion & Essbase
Oracle Platinum Partner
One of the 100 fastest-growing tech companies in the USA (CRN Magazine)
One of the fastest-growing companies in the USA (Inc. Magazine, 2007-present)
Services: Infrastructure, Press, Consulting, Strategy, Support, Training
Re-Benchmarking Everything
Hourglass-on-a-stick shape
Dense/sparse don't impact database size
Small block size (8 KB)
Bitmap compression
Hold the index in cache
Turn off hyperthreading
Dimension Ordering
Block Size
Compression Type
VM reporting: RLE, then Bitmap, then zLib (20% total reporting time difference)
Exalytics reporting: effectively no impact (less than 1% from best to worst)
VM calculation: RLE, then Bitmap, then zLib (20% total calc time difference)
Exalytics calculation: Bitmap, then RLE, then No Compression, then zLib (39% total calc time difference)
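Compression is set per database. A MaxL sketch of applying the findings above, under the assumptions that the database name Sample.Basic is a placeholder and that the exact keyword set may vary by Essbase release:

```
/* Assumed MaxL syntax: prefer RLE on VMs per the benchmarks above */
alter database Sample.Basic set compression rle;

/* On Exalytics, bitmap benchmarked best for calculations */
alter database Sample.Basic set compression bitmap;
```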
Index Cache
Default:
Buffered I/O: 1024 KB (1,048,576 bytes)
Direct I/O: 10240 KB (10,485,760 bytes)
Guideline: combined size of all essn.ind files, if possible; otherwise, as large as possible. Do not set this cache size higher than the total index size, as no performance improvement results.
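The index cache guideline can be applied per database in MaxL. A sketch, assuming the placeholder database Sample.Basic and a purely illustrative size:

```
/* Assumed MaxL syntax: size the index cache (in bytes) to roughly the
   combined size of the essn.ind files -- 50 MB here is illustrative only.
   Per the guideline, never set it larger than the total index size. */
alter database Sample.Basic set index_cache_size 52428800;
```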
Hyper-Threading
Data Cache
Reporting: inconsistent; from a 3 MB to a 300 MB data cache, the difference is less than 15% anyway. Definitely not as much as 1/8 of the page file size (setting it that large definitely hurts performance in all environments).
VM calculation: 300 MB is best, but only 14% better than the default or 600 MB
Exalytics calculation: the default (literally, 3 MB) is best, by as much as 68%
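The data cache can likewise be set per database in MaxL. A sketch, with the same Sample.Basic placeholder; per the results above, an explicit setting mainly matters on VMs, while Exalytics does best at the default:

```
/* Assumed MaxL syntax: ~300 MB data cache (in bytes), the best-performing
   VM calculation setting above. On Exalytics, leave the 3 MB default. */
alter database Sample.Basic set data_cache_size 314572800;
```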
Direct I/O
Dynamic Calculations
Incorrect anyway
FIXing on dense is fine if you're not doing something different to different members
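A calc-script sketch of that point, assuming the standard Sample.Basic outline (Year and Measures dense): a FIX on dense members is harmless when every member inside the FIX receives the same calculation:

```
/* Dense FIX on Year members from Sample.Basic, applying one uniform
   calculation to all of them -- fine per the finding above. */
FIX ("Jan", "Feb", "Mar")
   CALC DIM ("Measures");
ENDFIX
```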
Parallel Calculations
[Chart: total calculation time in seconds (50-170) vs. number of threads (0-100)]
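Parallel calculation is enabled server-wide in essbase.cfg or per calc script. A sketch, assuming the CALCPARALLEL setting and noting that thread counts beyond the low single digits require newer Essbase releases (e.g., on Exalytics):

```
/* essbase.cfg: server-wide default (illustrative thread count) */
CALCPARALLEL 8

/* or inside an individual calc script */
SET CALCPARALLEL 8;
CALC ALL;
```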
Optimizations: VM Retrievals
Sparse dimensions in ratio order
Rest of the dimensions don't matter
Small block size
RLE compression
Index cache has minimal impact

Optimizations: VM Calculations
Sparse dimensions in ratio order
Rest of the dimensions don't matter
Large block size (2 MB)
Bitmap compression
Default for index cache
Use hyperthreading
In conclusion