Bob Sneed
Sun Microsystems, Inc.
Performance and Availability Engineering
Performance Roundtable
February 8 & 9, 2005
Rev 1.0
Agenda
The Problem
Strategic Topics
Platform Technical Factors
UFS, VxFS, Oracle Technical Factors
Filesystems & Feature Selection
The Future
Conclusions
Apologies
The Problem
Technology trends
Customer motivations
Customer expectations
Filesystem performance escalations
Escalation root causes
ISV Factors (Oracle)
Technology Trends
Customer Motivations
Rational:
  Ease of administration
  Worry-free memory sharing
  Performance
Irrational:
  We've always done it that way (habit)
  It's our standard (lack of testing)
Unfortunate:
  Bad advice
  Taking the defaults (buffered I/O)
  Old advice (VxFS direct I/O)
Customer Expectations
Bigger is better
Applications will scale near-linearly
Their capacity planning techniques are OK
StarCat is just a bigger box
Performance of high-end gear should be qualitatively different and better
Environments
Databases, mostly Oracle
Near-realtime systems (e.g., stock trading)
Very high file counts (e.g., mail servers)
Complaints
Fail to scale (with more CPU, more RAM)
Statistics: High %sys, high xcal, high I/O latency
Performance regression on move to StarCat
Poor fsync() performance on S8
Poor predictability characteristics
Backup windows exceeded
Slow database start/stop times
Highest impact: unhappy StarCat customers
Sun Proprietary/Confidential: Internal Use Only
Concurrency controls
I/O size controls
Basic space/speed tradeoffs
Strategic Factors
High-Level Strategy
Precision Semantics
Best Practice
Tuning
Teamwork
Publications
Some References
Performance Forensics
StarCat Architecture
Excerpted from: Solaris Memory Placement Optimization and SunFire Servers, by Alan Charlesworth, March 2003
Situational Assessment
Overview
UFS Features
* UFS direct I/O performs quite similarly to RAW or VxFS 'Quick I/O'
* Except for its 'discovered direct I/O' feature for large operations
* qio mount option only allows/disallows QIO - it does not cause QIO to be used
* QIO will not be used unless a license is present; 'vxlicense' shows licensing; varies between releases; FDD is the QIO module
* QIO requires special symlinks which show as character-special targets with 'ls -lL'
Option                          Administrative Complexity   Performance Relative to Raw
RAW                             HIGH                        BASELINE
UFS                             VERY LOW
UFS direct I/O                  LOW                         NEARLY EQUAL
VxFS                            VERY LOW
VxFS direct I/O                 LOW
VxFS Quick I/O (QIO)            HIGH                        NEARLY EQUAL
VxFS Cached Quick I/O (CQIO)    HIGH
VxFS Oracle Disk Manager (ODM)  MODERATE                    NEARLY EQUAL

[1] Unless, of course, a 3rd-party volume manager is used, like VxVM
TEMP I/O
Backup Operations
The Future
Conclusions
Q&A