
DEPENDABILITY AND PERFORMANCE EVALUATION
DEPENDABILITY
• Infrastructure providers offer Service Level Agreements (SLAs) or Service Level Objectives (SLOs) to guarantee that their networking or power services will be dependable.
• Systems alternate between two states of service with respect to an SLA:
• 1. Service accomplishment, where the service is delivered as specified in the SLA
• 2. Service interruption, where the delivered service differs from the SLA
• Failure = transition from state 1 to state 2
• Restoration = transition from state 2 to state 1
• The two main measures of Dependability are:
• Module Reliability and Module Availability.
• Module reliability is a measure of continuous service
accomplishment (or time to failure) from a reference initial
instant.
• 1. Mean Time To Failure (MTTF) measures reliability
• 2. Failures In Time (FIT) measures the rate of failures
• Traditionally reported as failures per billion hours of operation (FIT = 10^9 / MTTF when MTTF is in hours)
• Mean Time To Repair (MTTR) measures service interruption
• Mean Time Between Failures (MTBF) = MTTF + MTTR
• Module availability measures service with respect to the alternation between the two states of accomplishment and interruption (a number between 0 and 1, e.g., 0.9)
• Module availability = MTTF / (MTTF + MTTR)
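A minimal Python sketch of these definitions (the MTTF/MTTR figures below are invented for illustration):

    def fit_rate(mttf_hours):
        # Failures In Time: failures per billion (10^9) hours of operation
        return 1e9 / mttf_hours

    def mtbf(mttf, mttr):
        # Mean Time Between Failures = time to failure + time to repair
        return mttf + mttr

    def availability(mttf, mttr):
        # Fraction of time the module is in the service-accomplishment state
        return mttf / (mttf + mttr)

    # Hypothetical disk module: fails on average every 1,000,000 hours,
    # takes 24 hours to repair
    print(fit_rate(1_000_000))          # 1000.0 FIT
    print(availability(1_000_000, 24))  # ~0.999976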
PERFORMANCE
• Execution time or Response time - The time between the start
and completion of an event.
• Throughput - The total amount of work done in a given time.
• The administrator of a data center may be interested in increasing throughput.
• Bandwidth - The amount of data that can be carried from one point to another in a given time period (usually expressed in bits per second, bps).
MEASURING AND REPORTING PERFORMANCE

• The computer user is interested in reducing response time, also referred to as execution time.
• The manager of a large data processing center may be interested in increasing throughput (the total amount of work done in a given time).
• Even execution time can be defined in different ways, depending on what is counted: disk accesses, memory accesses, input/output activities, operating system overhead, CPU execution time, etc.
PERFORMANCE EVALUATION
• To maximize the performance of a system, the response time or execution time for some task has to be minimized.
• Hence performance and execution time for a computer X are related as:

Performance_X = 1 / Execution Time_X

The two are reciprocals: increasing performance requires decreasing execution time.
Performance_X > Performance_Y
⇒ 1 / Execution Time_X > 1 / Execution Time_Y
⇒ Execution Time_Y > Execution Time_X


RELATIVE PERFORMANCE
• Relative Performance (n):
• The performance of two different computers X and Y is related quantitatively by relative performance as:

n = Performance_X / Performance_Y = Execution Time_Y / Execution Time_X
• Hence the performance of a computer can be improved by decreasing its execution time.
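A small Python sketch of these relations (the execution times below are invented for illustration):

    def performance(execution_time):
        # Performance is the reciprocal of execution time
        return 1.0 / execution_time

    def relative_performance(time_x, time_y):
        # n = Performance_X / Performance_Y = Execution Time_Y / Execution Time_X
        return time_y / time_x

    # Hypothetical machines: X runs a task in 10 s, Y in 15 s
    n = relative_performance(10.0, 15.0)
    print(f"X is {n:.2f} times faster than Y")  # X is 1.50 times faster than Y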
BENCHMARKS
• Real applications are the best choice of benchmarks to evaluate performance.
• Hence benchmark programs that resemble real applications are chosen.
• Three types of benchmarks:
• Kernels - small, key pieces of real applications.
• Toy programs - 100-line programs from beginning programming assignments, such as Quicksort.
• Synthetic benchmarks - fake programs invented to try to match the profile and behavior of real applications, such as Dhrystone.
SPEC Model
Collection of benchmarks that try to measure the performance of processors.
SPEC (Standard Performance Evaluation Corporation) - benchmarks for workstations.

Benchmark Name        Benchmark Description
Business Winstone 99  Runs a script consisting of Netscape Navigator and several
                      office suites (Microsoft, Corel, WordPerfect); simulates a
                      user switching among and running different applications.
High-End Winstone 99  Multiple applications running simultaneously, with a focus
                      on Adobe Photoshop.
CC Winstone 99        Multiple applications focused on content creation, such as
                      Photoshop, Premiere, Navigator, and various audio-editing
                      programs.
WinBench 99           Runs a variety of scripts that test CPU performance, the
                      video system, and disk performance using kernels focused
                      on each subsystem.
DESKTOP BENCHMARKS
• Two broad classes:
• Processor – intensive benchmarks
• Graphics – intensive benchmarks
• SPEC CPU - created to measure processor performance.
• SPEC CPU2006 - 12 integer benchmarks (CINT2006) and 17 floating-point benchmarks (CFP2006).
SERVER BENCHMARKS
• Servers have multiple functions, so there are multiple types of server benchmarks.
• SPEC CPU2000 constructs a simple throughput benchmark: the processing rate of a multiprocessor is measured by running multiple copies of the CPU benchmarks, usually one per processor.
• SPECrate - the resulting throughput measurement; most other server applications also involve significant I/O activity.
• SPECSFS (NFS file server) and SPECWeb (Web server) were added as server benchmarks.
• The Transaction Processing Council (TPC) measures server performance and cost-performance for databases:
• TPC-C - a complex query environment for online transaction processing
• TPC-H - models ad hoc decision support
• TPC-W - a transactional Web benchmark
• TPC-App - an application server and Web services benchmark
EMBEDDED BENCHMARKS
• The EEMBC benchmarks fall into five classes:
• Automotive / Industrial
• Consumer
• Networking
• Office Automation
• Telecommunications
Benchmark Type           # of Benchmarks  Example Benchmarks
Automotive / Industrial  16               6 microbenchmarks (arithmetic operations, pointer
                                          chasing, memory performance, matrix arithmetic,
                                          table lookup, bit manipulation), 5 automobile-control
                                          benchmarks, and 5 filter or FFT benchmarks
Consumer                 5                5 multimedia benchmarks (JPEG compress, JPEG
                                          decompress, filtering, and RGB conversions)
Networking               3                Shortest-path calculation, IP routing, and
                                          packet-flow operations
Office Automation        4                Graphics and text benchmarks (image rotation,
                                          text processing)
Telecommunications       6                Filtering and DSP benchmarks (autocorrelation,
                                          FFT, decoder, and encoder)
SPEC RATIO
• Normalize the execution time to a reference computer, yielding a ratio proportional to performance:

SPECRatio = Execution time on the reference computer / Execution time on the computer being rated

• If the SPECRatio of a program on computer A is 1.25 times the SPECRatio on computer B, then:

1.25 = SPECRatio_A / SPECRatio_B
     = (Execution Time_reference / Execution Time_A) / (Execution Time_reference / Execution Time_B)
     = Execution Time_B / Execution Time_A
     = Performance_A / Performance_B
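A minimal Python sketch of this normalization (the reference and measured times below are invented):

    def spec_ratio(reference_time, measured_time):
        # Normalize to the reference computer: a higher ratio means
        # better performance on the rated computer
        return reference_time / measured_time

    # Hypothetical benchmark: the reference machine takes 500 s
    ratio_a = spec_ratio(500.0, 40.0)   # computer A: 12.5
    ratio_b = spec_ratio(500.0, 50.0)   # computer B: 10.0
    # The reference time cancels, so the ratio of SPECRatios equals the
    # ratio of performances (the inverse ratio of execution times)
    print(ratio_a / ratio_b)            # 1.25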
QUANTITATIVE PRINCIPLES OF COMPUTER DESIGN
While designing a computer, the following principles can be exploited to enhance performance.
PARALLELISM:
 Achieved through pipelining.
 Exploited at the detailed design level.
 Set-associative caches use multiple banks of memory.
 Carry-lookahead adders use parallelism.
PRINCIPLE OF LOCALITY:
 Programs tend to reuse the data and instructions they have used recently (see the sketch after this list).
 The 90/10 rule: a program spends about 90% of its execution time in only 10% of its code.
 Behavior in the recent past is a good predictor of behavior in the near future.
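A tiny Python illustration of locality (the array and loop are invented for this example):

    # The same few loop statements execute over and over (instruction reuse),
    # and data[i] walks through adjacent elements (reuse of nearby locations),
    # which is exactly the behavior that caches are built to exploit.
    data = list(range(1000))
    total = 0
    for i in range(len(data)):
        total += data[i]   # most execution time is spent in this small loop
    print(total)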
FOCUS ON THE COMMON CASE:
 When making a design trade-off, favour the frequent case over the infrequent case (e.g., optimize an adder for the common case of no overflow rather than for the rare overflow).
AMDAHL’S LAW:
 Gives the performance gain that can be obtained by improving some portion or functional unit of a computer.
 The speedup that can be gained by using a particular feature:

Speedup = Performance using the enhancement / Performance without the enhancement
        = Execution Time without the enhancement / Execution Time using the enhancement

Alternate form:

Execution Time_new = Execution Time_old × ((1 − Fraction_enhanced) + Fraction_enhanced / Speedup_enhanced)

Speedup_overall = 1 / ((1 − Fraction_enhanced) + Fraction_enhanced / Speedup_enhanced)
AMDAHL’S LAW - FACTORS
1. Fraction_enhanced - the fraction of the computation time in the original computer that can be converted to take advantage of the enhancement (always ≤ 1).
2. Speedup_enhanced - the improvement gained by the enhanced mode, i.e., the time of the original mode over the time of the enhanced mode (always greater than 1).
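A minimal Python sketch of Amdahl's Law (the fraction and speedup values below are invented for illustration):

    def amdahl_speedup(fraction_enhanced, speedup_enhanced):
        # Overall speedup when only part of the execution time benefits
        return 1.0 / ((1.0 - fraction_enhanced)
                      + fraction_enhanced / speedup_enhanced)

    # Hypothetical enhancement: 40% of the time runs 10x faster
    print(amdahl_speedup(0.4, 10.0))   # ~1.56, far less than 10
    # Even an infinite speedup of that 40% caps the overall gain at 1/0.6
    print(1.0 / (1.0 - 0.4))           # ~1.67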
CPU PERFORMANCE & FACTORS
• CPU performance can be modeled realistically in terms of clock cycles:

CPU clock cycles = Instruction Count (IC) × average Clock cycles Per Instruction (CPI)

where IC = the number of instructions executed for the program, and
CPI = the average number of clock cycles each instruction takes to execute.
CPU Performance Equation:

CPU Time = CPU clock cycles for a program × clock cycle time
         = IC × CPI × clock cycle time
         = IC × CPI / clock rate


• The following equations determine more accurately the number of cycles incurred when a program executes:

CPU clock cycles = Σ (i = 1 to n) IC_i × CPI_i

Overall CPI:

CPI = (Σ (i = 1 to n) IC_i × CPI_i) / Instruction Count
    = Σ (i = 1 to n) (IC_i / Instruction Count) × CPI_i

where IC_i = the number of times instruction class i is executed and CPI_i = its average number of clock cycles.
• The frequency distribution of instructions IC_1, IC_2, ..., IC_n is obtained by a technique called execution profiling, supported by commercial software tools.
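A small Python sketch of this calculation (the instruction classes, counts, and CPIs below are hypothetical):

    def overall_cpi(instruction_mix):
        # instruction_mix: list of (instruction_count, cpi) per instruction class
        total_cycles = sum(ic * cpi for ic, cpi in instruction_mix)
        total_instructions = sum(ic for ic, _ in instruction_mix)
        return total_cycles / total_instructions

    def cpu_time(instruction_mix, clock_rate_hz):
        # CPU Time = total clock cycles / clock rate
        total_cycles = sum(ic * cpi for ic, cpi in instruction_mix)
        return total_cycles / clock_rate_hz

    # Hypothetical mix: 50M ALU ops at CPI 1, 20M loads at CPI 2, 10M branches at CPI 3
    mix = [(50e6, 1.0), (20e6, 2.0), (10e6, 3.0)]
    print(overall_cpi(mix))    # 1.5
    print(cpu_time(mix, 2e9))  # 0.06 seconds at a 2 GHz clock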
PROBLEM
• Suppose we have made the following measurements:
Frequency of FP operations = 25%
Average CPI of FP operations = 4.0
Average CPI of other instructions = 1.33
Frequency of FPSQR = 2%
CPI of FPSQR = 20
Assume that the two design alternatives are to decrease the CPI of FPSQR to 2 or to decrease the average CPI of all FP operations to 2.5. Compare these two design alternatives using the processor performance equation.
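A quick check of both alternatives in Python, using the given frequencies as the instruction-mix weights from the overall-CPI equation above:

    # Baseline CPI from the instruction frequencies
    cpi_original = 0.25 * 4.0 + 0.75 * 1.33         # ~2.0

    # Alternative 1: cut FPSQR's CPI from 20 to 2 (FPSQR is 2% of instructions)
    cpi_new_fpsqr = cpi_original - 0.02 * (20 - 2)  # ~1.64

    # Alternative 2: cut the average CPI of all FP operations to 2.5
    cpi_new_fp = 0.75 * 1.33 + 0.25 * 2.5           # ~1.62

    # With IC and clock rate unchanged, speedup is the ratio of CPIs
    print(cpi_original / cpi_new_fpsqr)  # ~1.22
    print(cpi_original / cpi_new_fp)     # ~1.23 -> slightly better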
