
International Journal of Computer Engineering and Technology (IJCET), ISSN 0976-6367 (Print), ISSN 0976-6375 (Online), Volume 1, Number 1, May - June (2010), pp. 235-249
© IAEME, http://www.iaeme.com/ijcet.html

A STUDY ON QUALITY PARAMETERS OF SOFTWARE AND THE METRICS FOR EVALUATION


J. Emi Retna, Karunya University, Karunya Nagar, Coimbatore. E-mail: jemiretna@yahoo.co.in
Greeshma Varghese, Karunya University, Karunya Nagar, Coimbatore. E-mail: greeshma_2010@gmail.com
Merlin Soosaiya, Karunya University, Karunya Nagar, Coimbatore. E-mail: merlinsoosaiya@gmail.com
Sumy Joseph, Karunya University, Karunya Nagar, Coimbatore. E-mail: almerah.joseph@gmail.com

ABSTRACT
Software quality is one of the most elusive targets to achieve in the development of successful software projects. Software quality activities are conducted throughout the project life cycle to provide objective insight into the maturity and quality of the software processes and associated work products; they are performed during each traditional development phase. There are many parameters or attributes which help to ensure the quality of the software. This paper analyses each quality parameter and presents a detailed report on it, together with the metrics used to evaluate it.

INTRODUCTION
Gone are the days when software quality was thought of as a luxury. Software quality is now one of the most elusive targets to achieve in software development, and successful software projects achieve excellence in it. Today it is viewed as an essential attribute of any software delivered. Software quality has several definitions and is viewed from several perspectives. Software built to all the requirements specified by the client, or software with zero defects (practically impossible), cannot on its own be deemed to be of high quality: quality is not a single parameter but a collection of parameters, and it is a multidimensional concept. According to Crosby, quality is defined as conformance to requirements, which means the extent to which the product conforms to the intent of its design. Quality of design can be regarded as the determination of requirements and specifications, and quality of conformance is conformance to those requirements. Some of the parameters that add up to the quality of software are: capability (functionality), scalability, usability, performance, reliability, maintainability, durability, serviceability, availability, installability, structuredness and efficiency. There are two types of parameters, namely functional parameters and non-functional parameters. Functional parameters deal with the functionality or functional aspects of the application, while non-functional parameters deal with the non-functional (but desirable) attributes, such as usability and maintainability, that a developer usually doesn't think of at the time of development. Generally the non-functional parameters are considered only in the maintenance phase or after the software has been developed, which causes rework or additional effort. Hence it is a best practice to consider quality even in the initial phases of software development and to deliver the software with high quality and on time. Some of the quality parameters are interrelated, as shown in Figure 1 [1].

Figure 1: Interrelationships of Software Attributes


Parameter: Capability (Functionality)
Models and metrics: 1. Function Point
Features to improve: 1. Security of overall system 2. Feature set and capabilities of the program

Parameter: Usability
Models and metrics: 1. Questionnaires, testing 2. Error rate 3. ISO 9241-11
Features to improve: 1. GUI 2. Component reuse

Parameter: Performance
Models and metrics: 1. Execution time 2. Time reduction 3. Idle time reduction 4. Response time 5. Completion rate 6. Throughput 7. Service unit reduction 8. Meeting user expectations 9. Problem resolution 10. Timeliness
Features to improve: 1. Reduced hit ratio 2. Cache 3. Improved processing power 4. Software engineering practices 5. Design 6. Understanding user requirements 7. Architecture design

Parameter: Reliability
Models and metrics: 1. Exponential distribution models 2. Weibull distribution model 3. Thompson and Chelson's model 4. Jelinski-Moranda model 5. Littlewood model 6. MTTF (Mean Time To Failure) 7. MTTR (Mean Time To Repair) 8. POFOD (Probability of failure on demand) 9. Rate of fault occurrence 10. Reliability = MTBF / (1 + MTBF)
Features to improve: 1. Consistency 2. Data integrity 3. Accuracy

Parameter: Maintainability
Models and metrics: 1. Models like HPMAS 2. Polynomial assessment tool 3. Principal components analysis 4. Aggregate complexity measure 5. Factor analysis
Features to improve: 1. Stability

Parameter: Durability (also includes session durability)
Models and metrics: 1. Reliability 2. Availability 3. Markov chain model 4. Supplier / component subsystem quality audits 5. PPM defects 6. % right first time 7. Initial quality survey 8. Customer satisfaction 9. Warranty claim rates
Features to improve: 1. Data redundancy (replication, erasure coding) 2. Data repair (failure detection, repair)

Parameter: Serviceability
Models and metrics: 1. Mean time to repair = unscheduled downtime / number of failures 2. Mean node-hours to repair = unscheduled downtime node-hours / number of failures 3. Mean time to boot system = sum of wall-clock time spent booting the system / number of boot events
Features to improve: 1. Help desk notification of exceptional events 2. Network monitoring 3. Documentation 4. Event logging 5. Training 6. Maintenance

Parameter: Availability
Models and metrics: System uptime = Mean Time To Failure / (Mean Time To Failure + Mean Time To Repair) * 100%
Features to improve: Work product is operational and available for use


Parameter: Installability
Models and metrics: 1. Total installability time 2. Total count of installers 3. % of installers involved only in tuning 4. Time spent for user training 5. Count of database reports written 6. Custom features percentage 7. Customers' negotiation strength 8. GQM
Features to improve: 1. Effort needed to install the software

Parameter: Scalability
Models and metrics: 1. Load scalability 2. Service scalability 3. Data scalability 4. Page/screen response time
Features to improve: 1. Application-specific factors 2. Load generation tool related factors 3. Hardware configuration of the load client 4. Architecture design

Parameter: Productivity
Models and metrics: 1. Programmer productivity = LOC produced / person-months of effort 2. Productivity = amount of output / effort (input)
Features to improve: 1. Reduced defect levels 2. Defect prevention and removal technologies 3. Defect removal efficiency

Parameter: Complexity
Models and metrics: 1. Cyclomatic complexity (or conditional complexity), V(G) = E - N + 2, where E is the number of flow-graph edges and N is the number of nodes 2. McClure's complexity metric = C + V, where C is the number of comparisons in a module and V is the number of control variables referenced in the module 3. Cohesion STRENGTH, where X is the reciprocal of the number of assignment statements in a module and Y is the number of unique function outputs divided by the number of unique function inputs 4. Coupling, expressed in terms of module-level measures Zi
Features to improve: 1. Increase in RFC (Response For a Class) 2. Inheritance 3. Module strength 4. Degree of data sharing

Parameter: Efficiency (of algorithms)
Models and metrics: 1. Resource utilization 2. Level of performance 3. End-to-end error detection, e.g. performance defects that appear under heavy load
Features to improve: 1. Usage of CPU 2. I/O capacity 3. Usage of RAM

Parameter: Reusability
Models and metrics: 1. Reuse percent, the de facto standard measure of reuse level = (reused software / total software) * 100 2. Use of objective metrics on subjective data to obtain reusability readings
Features to improve: 1. Software system independence 2. Machine independence 3. Generality 4. Modularity

Parameter: Portability
Models and metrics: 1. Portability = 1 - (resources needed to move the system to the target environment / resources needed to create the system for the resident environment)
Features to improve: 1. Design documentation 2. Code

Parameter: Testability
Models and metrics: 1. Effort needed for validating the modified software 2. Effort required to test a program
Features to improve: 1. Related with code

Parameter: Effort/Cost
Models and metrics: 1. Boehm's COCOMO model 2. Putnam's SLIM model 3. Albrecht's Function Point model
Features to improve: 1. Preparation and execution 2. Ease of maintenance 3. Effort on duplicates and invalids

Table 1: Quality parameters with models/metrics and features to improve them


Generally, if there is a reliability problem, the usability of an application is rated poor. If the application becomes unavailable (the availability factor decreases), then the performance is again rated low. If the performance of the software improves, it implies that factors such as availability are high. Hence all the parameters are inter-related and proportional. Table 1 presents the quality parameters, the key models and their key metrics; the features that enhance each quality parameter are also listed.

I. CAPABILITY (FUNCTIONALITY)
Functionality captures the amount of function contained in a product; often the functionality is measured rather than the physical code size. The software being developed needs to satisfy all the functional requirements. Functional requirements include almost all the user requirements or business requirements, the 'what' for which the software is being developed. Once all the functionality is developed, the software becomes deliverable. The software developers, testers and quality practitioners are entrusted with verifying that all the functional requirements are covered in the software. If the functionality is developed properly, defects in the software will be minimal. There are some sub-characteristics that can be derived from the quality features of software [3].

Figure 2: Functionality

Suitability: Attributes of software that bear on the presence and appropriateness of a set of functions for specified tasks.
Accurateness: Attributes of software that bear on the provision of right or agreed results or effects.
Interoperability: Attributes of software that bear on its ability to interact with specified systems.


Compliance: Attributes of software that make the software adhere to application-related standards, conventions or regulations in laws and similar prescriptions.
Security: Attributes of software that bear on its ability to prevent unauthorized access, whether accidental or deliberate, to programs or data.

The most widely used metric is the Function Point metric. Function Points are measures of software size, functionality and complexity used as a basis for software cost estimation [3]. The procedure to calculate Function Points is explained below.

Step 1: Determine the unadjusted function point count (UFP)

1. Count the number of external inputs. External inputs are those items provided by the user (e.g. file names and menu selections).
2. Count the number of external outputs. External outputs are those items provided to the user (e.g. reports and messages).
3. Count the number of external inquiries. External inquiries are interactive inputs that need a response.
4. Count the number of internal logical files.
5. Count the number of external interface files.

Table 2: Determination of UFP

Item                       Simple   Average   Complex
External inputs               3        4         6
External outputs              4        5         7
External inquiries            3        4         6
External interface files      5        7        10
Internal logical files        7       10        15

UFC (the unadjusted function point count) = Σ (number of items of variety i) × (weight of variety i), summed over the 15 varieties (5 item types × 3 complexity levels)
VAF = 0.65 + (0.01 × TDI)
FP = UFC × VAF


Table 3: The 14 general system characteristics

1. Data communications: How many communication facilities are there to aid in the transfer or exchange of information with the application or system?
2. Distributed data processing: How are distributed data and processing functions handled?
3. Performance: Was response time or throughput required by the user?
4. Heavily used configuration: How heavily used is the current hardware platform where the application will be executed?
5. Transaction rate: How frequently are transactions executed: daily, weekly, monthly, etc.?
6. On-line data entry: What percentage of the information is entered on-line?
7. End-user efficiency: Was the application designed for end-user efficiency?
8. On-line update: How many ILFs are updated by on-line transactions?
9. Complex processing: Does the application have extensive logical or mathematical processing?
10. Reusability: Was the application developed to meet one or many users' needs?
11. Installation ease: How difficult is conversion and installation?
12. Operational ease: How effective and/or automated are start-up, back-up, and recovery procedures?
13. Multiple sites: Was the application specifically designed, developed, and supported to be installed at multiple sites for multiple organizations?
14. Facilitate change: Was the application specifically designed, developed, and supported to facilitate change?


Step 2: Determine the value adjustment factor (VAF)


VAF is based on the Total Degree of Influence (TDI) of the 14 general system characteristics. TDI = Sum of Degree of Influence of 14 General System Characteristics (GSC). These 14 GSC are listed in Table 3.
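As a worked illustration of Steps 1 and 2, the Python sketch below weights hypothetical item counts with the Table 2 weights to obtain the unadjusted count, sums the degrees of influence of the 14 GSCs (each conventionally rated from 0 to 5) to obtain the TDI, and then applies VAF = 0.65 + 0.01 × TDI. The item counts and GSC ratings are invented purely for illustration and are not taken from the paper.

# Function Point calculation sketch (illustrative counts, not from the paper).

# Table 2 weights, indexed by item type and complexity level.
WEIGHTS = {
    "external_inputs":          {"simple": 3, "average": 4,  "complex": 6},
    "external_outputs":         {"simple": 4, "average": 5,  "complex": 7},
    "external_inquiries":       {"simple": 3, "average": 4,  "complex": 6},
    "external_interface_files": {"simple": 5, "average": 7,  "complex": 10},
    "internal_logical_files":   {"simple": 7, "average": 10, "complex": 15},
}

def unadjusted_fp(counts):
    """Step 1: UFC = sum over all item varieties of (count * weight)."""
    return sum(counts.get(item, {}).get(level, 0) * weight
               for item, by_level in WEIGHTS.items()
               for level, weight in by_level.items())

def value_adjustment_factor(gsc_ratings):
    """Step 2: TDI is the sum of the 14 GSC ratings (0-5 each); VAF = 0.65 + 0.01 * TDI."""
    return 0.65 + 0.01 * sum(gsc_ratings)

def function_points(counts, gsc_ratings):
    """FP = UFC * VAF."""
    return unadjusted_fp(counts) * value_adjustment_factor(gsc_ratings)

if __name__ == "__main__":
    # Hypothetical application: item counts by type and complexity.
    counts = {
        "external_inputs":        {"simple": 10, "average": 5},
        "external_outputs":       {"average": 7},
        "external_inquiries":     {"simple": 4},
        "internal_logical_files": {"average": 3},
    }
    gsc_ratings = [3, 2, 4, 3, 3, 5, 4, 3, 2, 1, 2, 3, 0, 2]   # 14 ratings, each 0-5
    print(f"FP = {function_points(counts, gsc_ratings):.2f}")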

II. USABILITY
Boehm defined software usability as the extent to which the product is convenient and practical to use. Software usability is considered as a combination of understandability, learnability, operability and finally the attractiveness of the product to the final user or user group [Pressman, 1999]. How usable the software is determines its longevity. Usability evaluation yields better results only if there is a good usability design, and the evaluation can be user based or expert based. Usability depends on parameters such as ease of use, comfort level and simplicity; functional software without the usability factor will be least desired. If the user interface is designed with user friendliness in mind, it adds to usability. Figure 3 depicts the consolidated usability model by Abran. As shown in the figure, the usability factor increases only if an application is effective, efficient, satisfies user expectations, is easy to use and is highly secure.

Figure 3: Usability

Some of the methods adopted to measure usability are:
a. How satisfied the user is
b. Whether the user is able to perform the task intended (task success)
c. The problems encountered when using the software (failure)
d. The time taken to complete the task (task completion rates, task time, satisfaction and error counts)
e. Ease of use
f. Download delay
g. Navigability
h. Interactivity
i. Responsiveness
j. Overall quality

To improve usability it is important to reduce page clutter so that the page is informative and the information needs of the user are satisfied; website download delay also affects usability.

Table 4: Usability Design Factors
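Measures (a) to (d) above reduce naturally to simple summary statistics. The sketch below, using invented per-task session records, shows one plausible way to compute a task completion rate, mean time on task, an error rate and mean satisfaction; the record format is an assumption for illustration, not something prescribed by the paper.

# Summarising basic usability measurements (illustrative data and record format).
from statistics import mean

# Each record: (task_completed, seconds_taken, error_count, satisfaction_1_to_5)
sessions = [
    (True, 42.0, 0, 5),
    (True, 65.5, 2, 4),
    (False, 120.0, 5, 2),
    (True, 50.3, 1, 4),
]

completion_rate = sum(done for done, _, _, _ in sessions) / len(sessions)
mean_task_time = mean(seconds for _, seconds, _, _ in sessions)
errors_per_attempt = sum(errors for _, _, errors, _ in sessions) / len(sessions)
mean_satisfaction = mean(score for _, _, _, score in sessions)

print(f"Task completion rate: {completion_rate:.0%}")
print(f"Mean time on task:    {mean_task_time:.1f} s")
print(f"Errors per attempt:   {errors_per_attempt:.1f}")
print(f"Mean satisfaction:    {mean_satisfaction:.1f}/5")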

III. PERFORMANCE

There are many factors that affect the performance of software, such as communication failure, poor bandwidth, and hardware or component failure. The parameters for performance evaluation are:
1. Execution time
2. Service unit reduction
3. Idle time reduction
4. Number of tasks completed

A high-performance application can be designed using various techniques such as:
1. Caching
2. Multiple servers running on different machines
3. Thread management
4. Improved page design

The selection of tools, technology/platform, design, knowledge base etc. plays a vital role in the performance of an application, and appropriate models need to be chosen at the early stages of software development itself. As an example, consider website performance analysis, where the performance of a website is analysed from a customer's perspective. When the user is only interested in checking mail, he may log into any website that provides mailing services. The leading e-mail providers have their own design and functionality built in. Some e-mail service providers direct the user straight to the mailbox on login (category 1), while others present an informative or junk page rather than directing the user to the mailbox (category 2). Category 1 therefore provides better performance than category 2, since when the user's goal is to check mail, timeliness is met by category 1 and not by category 2. The responsiveness and performance of websites can be measured using available tools such as ECperf.
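Of the parameters listed above, execution/response time and throughput are the easiest to instrument directly. The Python sketch below times repeated calls to a placeholder operation and reports the mean response time and throughput; handle_request is a hypothetical stand-in for whatever unit of work is being measured.

# Measuring mean response time and throughput of a unit of work (illustrative).
import time
from statistics import mean

def handle_request():
    """Stand-in for the operation under test (e.g. serving a page)."""
    time.sleep(0.01)   # simulate 10 ms of work

def measure(n_requests=100):
    latencies = []
    start = time.perf_counter()
    for _ in range(n_requests):
        t0 = time.perf_counter()
        handle_request()
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    print(f"Mean response time: {mean(latencies) * 1000:.1f} ms")
    print(f"Throughput:         {n_requests / elapsed:.1f} requests/s")

if __name__ == "__main__":
    measure()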

Figure 4: Reliability

Maturity: Attributes of software that bear on the frequency of failure by faults in the software.
Fault Tolerance: Attributes of software that bear on its ability to maintain a specified level of performance in case of software faults or of infringement of its specified interface.


Recoverability: Attributes of software that bear on the capability to re-establish its level of performance and recover the data directly affected in case of a failure, and on the time and effort needed for it.

Probability of failure on demand (POFOD) is a measure of the likelihood that the system will fail when a service request is made; it is relevant for safety-critical or non-stop systems. Rate of fault occurrence considers the frequency of unexpected behaviour and is relevant for operating systems, transaction processing systems etc. Mean Time To Failure is the time between observed failures. Reliability is the capability of the software product to maintain a specified level of performance when used under specified conditions [9]. Reliability measures include three components:
1) Measuring the number of system failures for a given number of system inputs
2) Measuring the time or number of transactions between system failures
3) Measuring the time to restart after a failure

IV. MATHEMATICAL CONCEPTS OF RELIABILITY


Mathematically, reliability R(t) is the probability that a system will be successful in the interval from time 0 to time t [6]:

R(t) = P(T > t), t ≥ 0,

where T is a random variable denoting the time to failure. Unreliability F(t), a measure of failure, is defined as the probability that the system will fail by time t:

F(t) = P(T ≤ t), t ≥ 0.

In other words, F(t) is the failure distribution function. In general, reliability is related to the failure probability by R(t) = 1 - F(t).
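Table 1 lists exponential distribution models among the reliability models; under that common assumption the time to failure T is exponentially distributed with failure rate λ, so R(t) = e^(-λt), F(t) = 1 - e^(-λt) and MTTF = 1/λ. The sketch below evaluates these quantities for an assumed failure rate; the numbers are purely illustrative.

# Reliability under an assumed exponential time-to-failure model (illustrative).
import math

def reliability(t, failure_rate):
    """R(t) = P(T > t) = exp(-lambda * t) for the exponential model."""
    return math.exp(-failure_rate * t)

def unreliability(t, failure_rate):
    """F(t) = P(T <= t) = 1 - R(t)."""
    return 1.0 - reliability(t, failure_rate)

if __name__ == "__main__":
    lam = 1 / 500.0                      # assumed rate: one failure per 500 operating hours
    print(f"MTTF = {1 / lam:.0f} h")     # MTTF = 1/lambda for the exponential model
    for t in (100, 500, 1000):           # operating hours
        print(f"t = {t:4d} h   R(t) = {reliability(t, lam):.3f}   "
              f"F(t) = {unreliability(t, lam):.3f}")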

V. MAINTAINABILITY

Maintainability is "the capability of the software product to be modified. Modifications may include corrections, improvements or adaptation of the software to changes in the environment, and in the requirements and functional specifications" [7]. Proper documentation needs to be maintained for maintaining software; the documentation needs to be complete and kept up to date.


Software maintainability assessment can be conducted at various levels of granularity. At the component level, models can be used to monitor changes to the system as they occur and to predict faults occurring in software components. At the file level, models can be used to identify subsystems that are not well organized. Models like HPMAS [5], a hierarchical multidimensional assessment model, can be used. The important sub-attributes of maintainability are:
Analyzability: Attributes of software that bear on the effort needed for diagnosis of deficiencies or causes of failures, or for identification of parts to be modified.
Changeability: Attributes of software that bear on the effort needed for modification, fault removal or environmental change.
Stability: Attributes of software that bear on the risk of unexpected effects of modifications.
Testability: Attributes of software that bear on the effort needed for validating the modified software.

Figure 5: Maintainability

VI. DURABILITY
Software usability efforts improve software durability. Software durability can be data durability or session durability; session durability is generally of short duration. Technologies like data replication and data repair can be used to enhance durability. Reliability and availability can be considered as metrics for data durability.

VII. SERVICEABILITY
Serviceability is the ability of the software or application to offer the promised services. With software it deals with the support offered in terms of user manuals, technical help, problem resolution etc. As new versions of the software get released, the support for the older versions vanishes. Incorporating serviceability-facilitating features typically results in more efficient product maintenance, reduced operational cost etc. The maintainability feature can be built into the system; systems can be built with features that automatically send mails or log a service call on experiencing a fault.

VIII. AVAILABILITY
Availability is the measure of how likely the system is to be available for use. It takes the repair or restart time into account. Reliability and availability go hand in hand. Software needs to remain available even as the load increases. An availability of 0.997 means the software is available for 997 out of every 1000 time units.
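Table 1 expresses availability (system uptime) as MTTF / (MTTF + MTTR) × 100%. The minimal sketch below applies that formula to assumed uptime and repair figures chosen to reproduce the 0.997 example.

# Availability from MTTF and MTTR, as listed in Table 1 (assumed figures).
def availability(mttf_hours, mttr_hours):
    """Availability = MTTF / (MTTF + MTTR)."""
    return mttf_hours / (mttf_hours + mttr_hours)

if __name__ == "__main__":
    a = availability(mttf_hours=997.0, mttr_hours=3.0)   # reproduces the 0.997 example above
    print(f"Availability: {a:.3f} ({a:.1%})")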

IX. INSTALLABILITY
Software needs to be easily installable. The parameters that have to be set in order to install the software should be easy to configure, and the software itself needs to be configurable.

X. SCALABILITY
The software needs to be highly scalable in all environments. Scalability can be in terms of load, service, data etc. A system, in order to be highly scalable, needs the ability to handle large transaction volumes and load. As the load increases in production, a desirable system will be able to scale up with additional hardware resources, either vertically or horizontally. Vertical scaling is when, in order to support an increase in load, an individual server is given more memory or processing power. Horizontal scaling is when the increased load is handled by adding servers to a distributed system.

XI. COMPLEXITY
Low coupling is often a sign of a well-structured system and good design [5], and high levels of coupling are usually associated with lower productivity. Complexity can be of two types:
a) Apparent complexity is the degree to which a system or component has a design or implementation that is difficult to understand and verify.
b) Inherent complexity is the degree of complication of a system or system component, determined by such factors as the number and intricacy of interfaces [4], the number and intricacy of conditional branches, the degree of nesting, and the types of data structures.

Metrics used for measuring the qualities of software

Some software metrics include:
Total lines of code
Number of characters
Number of comments
Number of comment characters
Code characters
Halstead's estimate of program length metric
Jensen's estimate of program length metric
Cyclomatic complexity metric
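Several of these metrics can be approximated directly from the source text. The sketch below counts lines, comment lines and characters for a Python file and estimates cyclomatic complexity by counting decision points; the decision-keyword count is a rough heuristic, not the full flow-graph formula V(G) = E - N + 2 given in Table 1.

# Rough source-code metrics for a Python file (counts plus a complexity estimate).
import re
import sys

DECISION_TOKENS = re.compile(r"\b(if|elif|for|while|and|or|case|except)\b")

def code_metrics(path):
    with open(path, encoding="utf-8") as f:
        lines = f.readlines()
    comment_lines = [ln for ln in lines if ln.lstrip().startswith("#")]
    code_lines = [ln for ln in lines if ln.strip() and not ln.lstrip().startswith("#")]
    return {
        "total_lines": len(lines),
        "code_lines": len(code_lines),
        "comment_lines": len(comment_lines),
        "total_characters": sum(len(ln) for ln in lines),
        "comment_characters": sum(len(ln) for ln in comment_lines),
        # Heuristic cyclomatic complexity: 1 + number of decision points in the code.
        "approx_cyclomatic_complexity":
            1 + sum(len(DECISION_TOKENS.findall(ln)) for ln in code_lines),
    }

if __name__ == "__main__":
    for name, value in code_metrics(sys.argv[1]).items():
        print(f"{name}: {value}")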

CONCLUSION
Good engineering methods can largely improve software reliability. Before the deployment of software products, testing, verification and validation are necessary steps; software testing is heavily used to trigger, locate and remove software defects. Software testing is still in its infancy: testing is crafted to suit specific needs in various software development projects in an ad hoc manner. Various analysis tools such as trend analysis, fault-tree analysis, orthogonal defect classification and formal methods can also be used to minimize the possibility of defect occurrence after release and therefore improve software reliability. After deployment of the software product, field data can be gathered and analysed to study the behaviour of software defects. Fault-tolerance and fault/failure forecasting techniques are helpful techniques and guiding rules to minimize fault occurrence or the impact of a fault on the system.

REFERENCES:
[1] Stephen H. Kan, Metrics and Models in Software Quality Engineering, Second Edition.
[2] N. E. Fenton and S. L. Pfleeger, Software Metrics: A Rigorous and Practical Approach, Second Edition.
[3] http://davidfrico.com/fpfull.pdf


[4] Nina S. Godbole, Software Quality Assurance: Principles and Practice.
[5] John D. Musa, Software Reliability Engineering.
[6] Roger S. Pressman, Software Engineering: A Practitioner's Approach, McGraw-Hill, Inc., 1992.
[7] International Standard ISO/IEC 9126-1, Parts 1, 2, 3: Quality Model, 2001, http://www.iso.ch.
[8] International Standard ISO/IEC 9126-1 (2001), Parts 1, 2, 3: Quality Model, 2001, http://www.iso.ch.

