
PART B: Explain in detail about the hierarchical model of quality.

Hierarchical Model
To compare quality in different situations, both quantitatively and qualitatively, it is necessary to establish a model of quality with a hierarchical structure. Example: the assessment and reporting method used in schools. A student progress report is prepared under a series of headings, such as subject name, with both qualitative and quantitative assessment. These measures are derived from an examination or a formal test.

Hierarchical Models of Quality


Hierarchical Model (Cont)
Traditional Assessment Method

Subject          | Teacher's Comments | Term Grade (A-E) | Exam Mark (%)
English          |                    |                  |
Maths            |                    |                  |
Science          |                    |                  |
Tamil            |                    |                  |
Social Science   |                    |                  |
Total            |                    |                  |

Hierarchical Models of Quality


Hierarchical Model (Cont)
Presently, the assessment of a person has become more complicated because a subject may be broken down into different levels of skill, where each of the skills is measured and the results are combined to give a more detailed result, i.e.:

Hierarchical Models of Quality


Hierarchical Model (Cont)
The simple hierarchy

    Student → Maths, English, Science, Social

has evolved into a deeper hierarchy in which each subject is subdivided into skills, for example:

    Student → English → Oral Skills, Reading Skills, Writing, Creative

Hierarchical Models of Quality


Hierarchical Model (Cont)
Based on the different measures of software, its quality can be depicted as a hierarchy:

Quality factor
    Quality criterion (Ex: Reliability)      → Quality metrics
    Quality criterion (Ex: Maintainability)  → Quality metrics
    Quality criterion (Ex: Usability)        → Quality metrics

Examples of metrics: Accuracy, Consistency, Error Tolerance and Simplicity

Hierarchical Models of Quality
These models are used to identify the quality criteria needed to improve the resultant quality of the product.
Types of quality models:
1. McCall model
2. Boehm model
3. FURPS model
4. ISO 9126 model
5. Dromey model

Hierarchical Models of Quality


Hierarchical Model of McCall
Proposed by McCall in 1977. It is also referred to as the General Electric (GE) model. The model originated in the US military and was developed for the US Air Force. It can be used during the development process by the system developers.

In this model, McCall attempts to bridge the gap between users and developers by focusing on a number of quality factors that reflect both the users' views and the developers' priorities.

Hierarchical Models of Quality


Hierarchical Model of McCall

Product Revision: Maintainability, Flexibility, Testability
Product Transition: Portability, Reusability, Interoperability
Product Operations: Correctness, Reliability, Efficiency, Integrity, Usability

Hierarchical Models of Quality


Hierarchical Model of McCall
McCall's model identifies 3 areas of software work, or 3 types of quality characteristics:
1. Product operation
2. Product revision
3. Product transition

Product operation – concerns the working operation of the product, which should be easy to learn and efficient to use.
Product revision – concerns error correction and the changes the system may undergo.
Product transition – concerns moving the product to distributed applications and adapting it to new environments.

Hierarchical Models of Quality


Hierarchical Model of McCall
McCall defines the following 11 quality factors: Correctness, Reliability, Efficiency (covering both execution efficiency and storage efficiency), Integrity, Usability, Maintainability, Flexibility, Testability, Portability, Reusability and Interoperability.

Hierarchical Models of Quality


Hierarchical Model of McCall
McCall organises these 3 areas of software work into a hierarchy of factors, criteria and metrics:
11 factors (to specify) – the external view of the software, as seen by the users
23 criteria (to build) – the internal view of the software, as seen by the developers
Metrics (to control) – the methods of measurement

Hierarchical Models of Quality


Hierarchical Model of Boehm (1978)
This model was presented by Barry W. Boehm. It defines a set of well-defined, differentiated characteristics of software quality in which the quality criteria are successively subdivided: the first division is named General Utility, and the next, As-is Utility, is a subtype of the first.

Hierarchical Models of Quality


Hierarchical Model of Boehm (1978)
It is similar to McCall's model in that the quality of the software is structured around:
Higher-level characteristics
Intermediate-level characteristics
Primitive characteristics

Hierarchical Models of Quality


Hierarchical Model of Boehm (1978)
Higher-level characteristics – these address the answers to 3 questions:
As-is Utility: how well (easily, reliably, efficiently) can I use it as-is?
Maintainability: how easy is it to understand, modify and retest?
Portability: can I still use it if I change the environment?

Hierarchical Models of Quality


Hierarchical Model of Boehm (1978)
Intermediate-level characteristics – these identify 7 quality factors that the software is expected to exhibit: Portability, Reliability, Efficiency, Usability, Testability, Understandability and Flexibility.

Hierarchical Models of Quality


Hierarchical Model of Boehm (1978)
Primitive characteristics – these provide the foundation for defining quality metrics.

Boehm's quality tree:
General Utility → Portability, As-is Utility, Maintainability
As-is Utility → Reliability, Efficiency, Human Engineering
Maintainability → Testability, Understandability, Modifiability

Primitive characteristics include: Device independence, Self-containedness, Accuracy, Completeness, Robustness/Integrity, Consistency, Accountability, Device efficiency, Accessibility, Communicativeness, Self-descriptiveness, Structuredness, Conciseness, Legibility, Augmentability

Hierarchical Models of Quality


Summary of McCall & Boehm Models
Boehm's and McCall's models might appear very similar. The difference is that McCall's model primarily focuses on the precise measurement of the high-level characteristic As-is Utility, whereas Boehm's quality model is based on a wider range of characteristics, with an extended and detailed focus primarily on maintainability.

Hierarchical Models of Quality


Comparison of McCall & Boehm Models, quality factor by quality factor

Explain in detail about software quality measurement.


Measurement of quality is one of the key problems highlighted by IT practitioners. Quality measurement is expressed in terms of metrics: a metric is a measurable property that is an indicator of one or more of the quality criteria we are seeking to measure. The conditions a quality metric must satisfy are:
It must be linked to the quality criterion that it seeks to measure
It must be sensitive to the different degrees of the criterion
It must provide a determination of the criterion

Quality Measurement
Measurement techniques for software are similar to those of traditional science, but software measurement is more complex. Structuredness is the main property of software through which quality can be measured.

Well-structured code is easy to maintain and adapt, and structuredness can be estimated in terms of the average length of the code modules in the program:

Structuredness → modularity:

    Modularity = Lines of code / Number of modules
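As an illustration (not from the slides), a minimal Python sketch of this modularity calculation; the line and module counts are made-up figures:

```python
def modularity(lines_of_code: int, number_of_modules: int) -> float:
    """Average module length = lines of code / number of modules."""
    if number_of_modules == 0:
        raise ValueError("a program must contain at least one module")
    return lines_of_code / number_of_modules

# Example: a 4,000-line program split into 50 modules
print(modularity(4000, 50))  # 80.0 lines per module on average
```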

Quality Measurement
Software Metrics
Structuredness can be used to predict the maintainability, reliability and adaptability of the software later in the lifecycle. Metrics are classified into 2 types:
Predictive metrics
Descriptive metrics

Quality Measurement
Software Metrics (Cont)
Predictive metrics – used to make predictions about the software later in the lifecycle.
Descriptive metrics – describe the state of the software at the time of measurement.

For example, a reliability metric might be based upon the number of system crashes during a given period.

Quality Measurement
Software Metrics (Cont)
McCall and Boehm have defined a number of approaches to deriving metrics. McCall's approach is quantitative; for example, the structuredness of a piece of software can be measured by

    Structuredness = n01 / ntot

where n01 = number of modules having one or zero exit points, and ntot = total number of modules. This metric takes a value between 0 and 1.
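A minimal Python sketch of this metric, assuming the number of exit points of each module is already known (the figures below are hypothetical):

```python
def structuredness(exit_points_per_module: list[int]) -> float:
    """McCall-style structuredness: n01 / ntot, a value between 0 and 1."""
    ntot = len(exit_points_per_module)
    if ntot == 0:
        raise ValueError("no modules supplied")
    n01 = sum(1 for exits in exit_points_per_module if exits <= 1)
    return n01 / ntot

# Example: 5 modules, of which 4 have one (or zero) exit points
print(structuredness([1, 1, 2, 1, 1]))  # 0.8
```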

Quality Measurement
Software Metrics (Cont)
Using Boehm's concept, structuredness can be measured by answering the following questions (a simple way of recording the answers is sketched below):
Have the rules for transferring control between modules been followed? (Y/N)
Are modules limited in size? (Y/N)
Do all modules have only one exit point? (Y/N)
Do all modules have only one entry point? (Y/N)
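One possible way (an assumption, not prescribed by the slides) to record these Boehm-style answers and summarise them:

```python
# Y/N answers recorded as booleans; summarising them as "fraction of checks
# satisfied" is an illustrative aggregation choice, not part of the slides.
boehm_checklist = {
    "control-transfer rules followed": True,
    "modules limited in size": True,
    "all modules have one exit point": False,
    "all modules have one entry point": True,
}

yes_answers = sum(boehm_checklist.values())
print(f"{yes_answers}/{len(boehm_checklist)} structuredness checks satisfied")
```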

Quality Measurement
Software Metrics (Cont)
What makes a good metric? Following the work of McCall and Boehm, Watts published an analysis of metrics and suggested 7 quality criteria for a good software metric:

Objectivity, Reliability, Validity, Standardization, Comparability, Economy, Usefulness

Quality Measurement
Software Metrics (Cont)
Metric ranking: Watts cited 40 metrics from the software engineering literature. Of these 40 metrics, three quarters are concerned with just 2 criteria, namely reliability and maintainability; 4 criteria are not metricated at all, and 3 criteria have only one metric each.

Quality Measurement
Software Metrics (Cont)
Watts ranked the following criteria by the number of metrics cited:

Quality Criteria      Number of metrics cited
Maintainability       18
Reliability           12
Usability              4
Correctness            3
Integrity              1
Expandability          1
Portability            1
Efficiency             0
Adaptability           0
Interoperability       0
Reusability            0

Quality Measurement
Software Metrics Ranking
The set of metrics quoted is based on 7 distinct measurable properties:
1. Readability
2. Error prediction
3. Error detection
4. Complexity
5. MTTF (Mean Time To Failure)
6. Modularity
7. Testability

Quality Measurement
Software Metrics Ranking (Cont)
The relationship between the quality criteria and the measurable properties is given as a matrix relating the measurable properties (readability, error prediction, error detection, complexity, MTTF, modularity, testability, others) to the quality criteria (usability, integrity, correctness, reliability, maintainability, expandability, portability).

Quality Measurement
Software Metrics Ranking (Cont)
1. Readability as a measure of usability: this measure is applied to documentation, to assess how well the documentation assists the usability of a piece of software.

Two methods:
a) The Flesch-Kincaid readability (grade level) index
b) The Fog index

The first method works at the syllable level and is calculated using the standard Flesch-Kincaid formula:

    Grade Level = 0.39a + 11.8b − 15.59

where a = average number of words per sentence and b = average number of syllables per word.

Quality Measurement
Software Metrics Ranking (Cont)
1. Readability as a measure of usability (cont): the Fog index method:

    Fog Index = 0.4 (a + b)

where a = average number of words per sentence and b = percentage of words with more than two syllables.
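A rough Python sketch of both readability measures; the syllable counter is a crude vowel-group heuristic and the sample sentence is invented, so the numbers are illustrative only:

```python
import re

def count_syllables(word: str) -> int:
    # Very rough: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> tuple[float, float]:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    words_per_sentence = len(words) / len(sentences)
    syllables_per_word = sum(count_syllables(w) for w in words) / len(words)
    complex_pct = 100 * sum(1 for w in words if count_syllables(w) >= 3) / len(words)

    grade_level = 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59
    fog_index = 0.4 * (words_per_sentence + complex_pct)
    return grade_level, fog_index

grade, fog = readability("The parser validates configuration files. "
                         "Unrecognised options are reported immediately.")
print(f"grade level = {grade:.1f}, fog index = {fog:.1f}")
```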

Quality Measurement
Software Metrics Ranking (Cont)
2. Readability as a measure of maintainability: the readability of source code can be assessed in terms of:
Statement lines
Average length of variable names
Total number of program branches

Quality Measurement
Software Metrics Ranking (Cont)
3. Error prediction as a measure of correctness: following Halstead, basic parameters such as the number of operators and operands can be used to predict:
The number of errors found during validation
The total number of errors found during development

Quality Measurement
Software Metrics Ranking (Cont)
4. Error detection as a measure of correctness: the total number of errors that remain undetected can be predicted using the Remus-Zilles model (which works from the number of detected errors and the error-detection efficiency), the defect removal rate, and the relationship between program length and defects.

Quality Measurement
Software Metrics Ranking (Cont)
5. MTTF as a measure of reliability: MTTF is given by

    MTTF = tTOT / Rt

where tTOT = the total time period and Rt = the number of failures in tTOT.

Quality Measurement
Software Metrics Ranking (Cont)
5. MTTF as a measure of reliability (cont): MTTF can be assessed by measurement, estimation or prediction. Reliability can be calculated from the current MTTF (tF) and the length of the operation phase (tOP):

    Reliability, R1 = exp(−tOP / tF)
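A minimal sketch of the MTTF and reliability calculations above; the failure counts and time periods are invented:

```python
import math

def mttf(total_time: float, failures: int) -> float:
    """MTTF = tTOT / Rt: total observation time divided by number of failures."""
    return total_time / failures

def reliability(operation_time: float, current_mttf: float) -> float:
    """R1 = exp(-tOP / tF): chance of surviving the operation phase."""
    return math.exp(-operation_time / current_mttf)

t_f = mttf(total_time=1000.0, failures=4)                    # 250.0 hours
print(t_f)
print(reliability(operation_time=100.0, current_mttf=t_f))   # ~0.67
```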

Quality Measurement
Software Metrics Ranking (Cont)
6. Complexity as a measure of reliability: as the complexity of software increases, its reliability decreases. For example, the logical complexity of software can be measured using cyclomatic complexity.
7. Complexity as a measure of maintainability: as the complexity of software increases, its maintainability is also adversely affected.

Quality Measurement
Software Metrics Ranking (Cont)
8. Modularity as a measure of maintainability: as the modularity of software increases, its maintainability also increases.

9. Testability as a measure of maintainability: easy and effective testing has a positive impact on a product; the effectiveness of testing can be measured using the approach proposed by Woodward.

Quality Measurement
Problems on Software Metrics
The overall measure of quality can be calculated using the following methods:
1. Simple scoring
2. Weighted scoring
3. Phased weighting factor method
4. The Kepner-Tregoe method
5. The Cologne combination method

Quality Measurement
Problems on Software Metrics
1. Simple scoring
In this method, each quality criterion is allocated a score, and the overall quality is given by the mean of the individual scores:

    Simple score = Sum of metric scores / Total number of metric scores

Quality Measurement
Problems on Software Metrics
1. Example: Simple scoring

Quality Criteria    Metric Value
Usability           0.7
Security            0.6
Efficiency          0.4
Correctness         0.8
Reliability         0.6
Maintainability     0.6
Adaptability        0.7
Expandability       0.7

Simple Score = ?
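A small Python sketch that answers the question for the data above (the simple score is just the mean of the metric values):

```python
metric_values = {
    "Usability": 0.7, "Security": 0.6, "Efficiency": 0.4, "Correctness": 0.8,
    "Reliability": 0.6, "Maintainability": 0.6, "Adaptability": 0.7,
    "Expandability": 0.7,
}

simple_score = sum(metric_values.values()) / len(metric_values)
print(round(simple_score, 4))  # 5.1 / 8 = 0.6375
```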

Quality Measurement
Problems on Software Metrics
2. Weighted scoring
In this method, each quality criterion is weighted according to its importance, and each criterion is evaluated to produce a score between 0 and 1:

    Weighted score = Sum of (metric score × weight) / Sum of metric weights

Quality Measurement
Problems on Software Metrics
2. Problem 1: Weighted scoring

Quality Criteria    Metric Value   Weight
Usability           0.7            0.5
Security            0.6            0.2
Efficiency          0.4            0.3
Correctness         0.8            0.5
Reliability         0.6            0.4
Maintainability     0.6            0.4
Adaptability        0.7            0.1
Expandability       0.7            0.1

Weighted Score = ?
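A sketch of the weighted-score calculation for Problem 1, using the values and weights from the table above:

```python
scores = {                      # criterion: (metric value, weight)
    "Usability":       (0.7, 0.5),
    "Security":        (0.6, 0.2),
    "Efficiency":      (0.4, 0.3),
    "Correctness":     (0.8, 0.5),
    "Reliability":     (0.6, 0.4),
    "Maintainability": (0.6, 0.4),
    "Adaptability":    (0.7, 0.1),
    "Expandability":   (0.7, 0.1),
}

weighted_score = (sum(value * weight for value, weight in scores.values())
                  / sum(weight for _, weight in scores.values()))
print(round(weighted_score, 4))  # 1.61 / 2.5 = 0.644
```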

Quality Measurement
Problems on Software Metrics
3. Phased weighting factor method
This is an extension of weighted scoring. Here an additional weighting is assigned to each group of characteristics, the groups being based on the work areas defined by McCall. In this method, two work areas (product operation and product transition) are used to group the criteria, and each group is given an additional weighting factor.

Quality Measurement
Problems on Software Metrics
3. Phased weighting factor method (cont)
Three measures have to be calculated: the Product Operation Weighted Mean (POWM), the Product Transition Weighted Mean (PTWM), and the overall measure by the PWF method.

    POWM = Sum of (score × weight) for the product-operation criteria / Sum of the weights of the product-operation criteria

    PTWM = Sum of (score × weight) for the product-transition criteria / Sum of the weights of the product-transition criteria

The PWF is the additional weighting factor assigned to each group, for example:
PWF for product operation = 2/3
PWF for product transition = 1/3

    Overall measure by PWF = (2/3 × POWM) + (1/3 × PTWM)

Quality Measurement
Problems on Software Metrics
3. Problem 2: Phased weighted scoring

Group                Quality Criteria   Metric Value   Weight   PWF
Product Operation    Usability          0.7            0.5      2/3
                     Security           0.6            0.2
                     Efficiency         0.4            0.3
                     Correctness        0.8            0.5
                     Reliability        0.6            0.4
                     Maintainability    0.6            0.4
Product Transition   Adaptability       0.7            0.1      1/3
                     Expandability      0.7            0.1

POWM = ?; PTWM = ?; Overall Measure = ?
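A sketch of the phased weighting factor calculation for Problem 2, using the data and the PWFs (2/3 and 1/3) from the table above:

```python
def weighted_mean(criteria):
    """Sum of (value * weight) divided by sum of weights."""
    return (sum(value * weight for value, weight in criteria.values())
            / sum(weight for _, weight in criteria.values()))

product_operation = {            # criterion: (metric value, weight)
    "Usability": (0.7, 0.5), "Security": (0.6, 0.2), "Efficiency": (0.4, 0.3),
    "Correctness": (0.8, 0.5), "Reliability": (0.6, 0.4),
    "Maintainability": (0.6, 0.4),
}
product_transition = {"Adaptability": (0.7, 0.1), "Expandability": (0.7, 0.1)}

powm = weighted_mean(product_operation)       # ~0.639
ptwm = weighted_mean(product_transition)      # 0.700
overall = (2 / 3) * powm + (1 / 3) * ptwm     # ~0.659
print(round(powm, 3), round(ptwm, 3), round(overall, 3))
```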

Quality Measurement
Problems on Software Metrics
4. The Kepner-Tregoe method
In this method, the quality criteria are divided into two groups: essential and desirable. A minimum value is specified for each essential criterion.
5. The Cologne combination method
This method facilitates comparative evaluation: each product is ranked against the chosen criteria.

Quality Measurement
Problems on Software Metrics
Problem 3: Using the following data and the PWF method, calculate the POWM, PTWM and overall measure of the product.

Group                Quality Criteria   Metric Value   Weight   PWF
Product Operation    Usability          0.7            0.5      2/3
                     Security           0.5            0.5
                     Efficiency         0.6            0.2
                     Correctness        0.7            0.5
                     Reliability        0.4            0.4
                     Maintainability    0.8            0.4
Product Transition   Adaptability       0.7            0.1      1/3
                     Expandability      0.7            0.1

Quality Measurement
Polarity Profiling
In this scheme, the quality of a product is specified on a scale ranging from −3 to +3, and a comparison is made between the required quality and the actual quality achieved. Four quality criteria are the focus of this method, namely efficiency, reliability, maintainability and adaptability; these criteria are useful for analysing user satisfaction with the product.

Quality Measurement
Polarity Profiling (Cont)
When a user complains about quality, the software engineer has to improve the product in these areas. Polarity profiling is a graphical representation of the profiles of the quality criteria, used to assess the satisfaction and expectations of the user. Using this scheme, an engineer can assess the level of user happiness, so that the gap between user expectations and what the developers deliver can be minimised.

Quality Measurement
Example: Polarity Profiling (Cont)

[Polarity profile: each criterion is rated on a scale from −3 to +3, running from Unusable, Insecure, Inefficient, Incorrect, Unreliable, Unmaintainable, Not adaptable on the left to Usable, Secure, Efficient, Correct, Reliable, Maintainable, Adaptable on the right. One profile line marks the achieved quality and another the required quality.]

Polarity Profiling (Cont)
Consider a situation in which reliability and efficiency are not up to the required standard, whereas adaptability and maintainability appear to be beyond the required level: users will not be happy.

[Polarity profile as above, with the achieved-quality line below the required-quality line for reliability and efficiency, and above it for adaptability and maintainability.]

Polarity Profiling (Cont)
Now consider a situation in which reliability and efficiency reach the required standard, and adaptability and maintainability also appear to reach the required level: users will be happy.

[Polarity profile as above, with the achieved-quality line meeting the required-quality line for each criterion.]

Polarity Profiling (Cont)


Problem

Quality Criteria    Actual Quality   Required Quality
Usability           0.3              0.7
Security            0.5              0.7
Efficiency          0.3              0.6
Correctness         0.9              0.9
Reliability         0.9              0.9
Maintainability     0.4              0.8
Adaptability        0.4              0.8

Discuss the possibilities of user satisfaction for this product.
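One way to start the discussion is to compute the gap between achieved and required quality for each criterion; a small sketch using the data above:

```python
profile = {                      # criterion: (actual, required)
    "Usability": (0.3, 0.7), "Security": (0.5, 0.7), "Efficiency": (0.3, 0.6),
    "Correctness": (0.9, 0.9), "Reliability": (0.9, 0.9),
    "Maintainability": (0.4, 0.8), "Adaptability": (0.4, 0.8),
}

for criterion, (actual, required) in profile.items():
    gap = actual - required
    status = "meets requirement" if gap >= 0 else f"short by {-gap:.1f}"
    print(f"{criterion:15s} {status}")

# Usability, security, efficiency, maintainability and adaptability all fall
# short of the required level, so user satisfaction is likely to be low even
# though correctness and reliability meet their targets.
```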

Explain in detail about Gilb's approach.

Quality Measurements
Gilb's Approach
Five problem areas are highlighted:
The simple fact that the method is different
The need for training and re-training, and the associated costs
The need for effective management
The need to measure progress towards the ultimate goal
Picking up errors

Quality Measurements
Gilb's Approach
In Gilb's approach, product quality is measured in terms of a quality template. The template models quality in terms of quality attributes and resource attributes, because the quality of a product is constrained by the resources available.

Quality Measurements
Gilb's Approach
The quality template can be pictured as two groups of attributes:

Qualities: Workability, Availability, Adaptability, Usability, other qualities
Resources: People, Time, Money, Tools, other resources

Quality Measurements
Gilb's Approach: Quality Attributes
1) Workability
2) Availability
3) Adaptability
4) Usability

Quality Measurements
Gilb's Approach: Quality Attributes and their sub-attributes

Workability: Process capacity, Storage capacity, Responsiveness
Availability: Reliability, Maintainability, Integrity
Adaptability: Improvability, Extendability, Portability
Usability: Entry-level requirements, Learning-level requirements, Handling ability, Likability

Quality Measurements
Gilb's Approach: Quality Attributes
1) Workability
Workability is the ability of the system to do work (i.e., transaction processing). It is divided into the sub-attributes:
Process capacity – the ability of the system to process transactions within a given unit of time
Storage capacity – the ability of the system to store information
Responsiveness – a measure of the response to a single event

Quality Measurements
Gilb's Approach: Quality Attributes
2) Availability
Availability is the proportion of elapsed time for which the system can be used. It is classified into the sub-attributes:
a) Reliability
b) Maintainability
c) Integrity

Quality Measurements
Gilb's Approach: Quality Attributes
2) Availability
a) Reliability
Reliability is the ability of the system not to fail in its operating environment; it is the degree to which the system does what it is supposed to do.

Because the purpose of one system differs from that of another, and the purposes of the parts of a system also differ, the assessment of reliability will vary accordingly.

Quality Measurements
Gilb's Approach: Quality Attributes
2) Availability
a) Reliability (cont)
Based on the analysis of Dickson, Gilb suggested that reliability can be assessed in terms of fidelity, veracity and viability, for both logicware (code) and dataware (data files).

Quality Measurements
Gilb's Approach: Quality Attributes
Dickson's classification of reliability:

Logicware – Fidelity: concerned with the accuracy with which the algorithm is implemented
Logicware – Veracity: concerned with how well the real world is represented by the algorithm
Logicware – Viability: the extent to which the algorithm meets its specification in terms of performance and requirements
Dataware – Fidelity: how accurately an idea is represented by the data within the application
Dataware – Veracity: how well the data matches the real world
Dataware – Viability: how well the required data fits the design constraints

Quality Measurements
Gilb's Approach: Quality Attributes
2) Availability
b) Maintainability
Maintainability is the effort required to locate and fix a fault in the program within its operating environment; it is the process of fault handling.

The sub-attributes of maintainability are: problem recognition, administrative delay, tool collection, problem analysis, correction, inspection time, active correction, testing, test evaluation and recovery.
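As a hedged illustration (the phase durations are invented), these fault-handling sub-attributes can be thought of as phases whose durations add up to the total repair effort:

```python
fault_handling_hours = {
    "problem recognition": 0.5,
    "administrative delay": 2.0,
    "tool collection": 0.5,
    "problem analysis": 3.0,
    "correction": 1.5,
    "inspection time": 1.0,
    "active correction": 0.5,
    "testing": 2.0,
    "test evaluation": 0.5,
    "recovery": 0.5,
}

total_repair_effort = sum(fault_handling_hours.values())
print(f"total effort to locate and fix the fault: {total_repair_effort} hours")
```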

Quality Measurements
Gilb's Approach: Quality Attributes
2) Availability
c) Integrity
Integrity is the protection of the program from unauthorised access; it is a measure of the system's ability to remain intact under threat.

Integrity may affect availability: a system with poor integrity is likely to be unavailable for much of the time.

Quality Measurements
Gilb's Approach: Quality Attributes
3) Adaptability
Adaptability is classified into the sub-attributes:
Improvability – the time taken to make minor changes to the system
Extendability – the ease of adding new functionality to the system
Portability – the ease of moving a system from one environment to another

Quality Measurements
Gilb's Approach: Quality Attributes
4) Usability
Usability is the ability of the system to facilitate ease of use and effective use. It is classified into the sub-attributes:
Entry-level requirements – the human capabilities needed to use the system, such as intelligence level and language proficiency
Learning-level requirements – the resources, such as time, needed to reach a given level of performance with the system
Handling ability – a measure of how well productivity can continue after an error is detected
Likability – how well people like the system

Quality Measurements
Gilb's Approach: Resource Attributes

People, Time, Money, Tools, other resources

Quality Measurements
Gilb's Approach: Resource Attributes
Time resource – of two types: the calendar time to delivery, and the time taken by the system to carry out its tasks.
People resource – measured in terms of man-years, although the availability of suitable people for a particular development is critical (for example, Pascal programmers cannot simply be used for C programming).

Quality Measurements
Gilb's Approach: Resource Attributes
Money resource – concerned with both development and maintenance costs; in general, about 80% of the cost is spent on maintenance and quality improvement.
Tool resource – comprises all physical resources.

Quality Measurements
Gilb's Approach: Resource Attributes
For continuous improvement, these resources (people, money, time and tools) act as constraints on the product.

Quality Measurements
Gilb's Approach: Resource Attributes
Gilb defined measures to quantify these attributes; they can be measured in units per unit of time, for example: transactions per second, records per minute, bytes per line, bits per node per second.

Quality Measurements
Gilb's Approach: Resource Attributes

Attribute     Sub-attribute      General measure         Example
Workability   Process capacity   Units per unit time     Transactions per second
Workability   Storage capacity   Units stored            Bytes
Workability   Responsiveness     Actions per unit time   Response time
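A hedged sketch (the planned and measured figures are invented) showing how the measures in the table above could be recorded with their units and compared against planned levels:

```python
workability_measures = {
    # sub-attribute: (measured value, planned level, unit)
    "process capacity": (180, 200, "transactions per second"),
    "storage capacity": (8_000_000_000, 5_000_000_000, "bytes"),
    "responsiveness": (0.7, 0.5, "seconds per response"),
}

for sub_attribute, (measured, planned, unit) in workability_measures.items():
    print(f"{sub_attribute}: measured {measured} {unit}, planned {planned} {unit}")
```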
