
Test metrics: definition and purpose

PRODUCT

Number of remarks
Definition: The total number of remarks found in a given time period/phase/test type. A remark is a claim made by a test engineer that the application shows undesired behavior; it may or may not result in a software modification or a change to documentation.
Purpose: One of the earliest indicators to measure once testing commences; provides initial indications about the stability of the software.

Number of defects
Definition: The total number of remarks found in a given time period/phase/test type that resulted in software or documentation modifications.
Purpose: A more meaningful way of assessing the stability and reliability of the software than the number of remarks: duplicate remarks have been eliminated and rejected remarks have been excluded.

Remark status
Definition: The status of a defect can vary depending on the defect-tracking tool that is used. Broadly, the following statuses are available. To be solved: logged by the test engineer and waiting to be taken over by the software engineer. To be retested: solved by the developer and waiting to be retested by the test engineer. Closed: the issue was retested by the test engineer and approved.
Purpose: Track the progress with respect to entering, solving and retesting the remarks. During a test phase, the information is useful for knowing the number of remarks logged, solved, waiting to be resolved and waiting to be retested.

Defect severity
Definition: The severity level of a defect indicates the potential business impact for the end user (business impact = effect on the end user x frequency of occurrence).
Purpose: Provides indications about the quality of the product under test. High-severity defects mean low product quality, and vice versa. At the end of a phase, this information is useful for making the release decision based on the number of defects and their severity levels.

Defect severity index
Definition: An index representing the average severity of the defects.
Purpose: Provides a direct measurement of the quality of the product, specifically its reliability, fault tolerance and stability.

Time to find a defect
Definition: The effort required to find a defect.
Purpose: Shows how fast the defects are being found. This metric indicates the correlation between the test effort and the number of defects found.

Time to solve a defect
Definition: The effort required to resolve a defect (diagnosis and correction).
Purpose: Provides an indication of the maintainability of the product and can be used to estimate projected maintenance costs.

Test coverage
Definition: The extent to which testing covers the product's complete functionality.
Purpose: An indication of the completeness of the testing. It does not indicate anything about the effectiveness of the testing. It can be used as a criterion to stop testing.

Test case effectiveness
Definition: The extent to which test cases are able to find defects.
Purpose: Provides an indication of the effectiveness of the test cases and the stability of the software.

Defects/KLOC
Definition: The number of defects per 1,000 lines of code.
Purpose: Indicates the quality of the product under test. It can be used as a basis for estimating the defects to be addressed in the next phase or the next version.

PROJECT

Workload capacity ratio
Definition: Ratio of the planned workload to the gross capacity for the total test project or phase.
Purpose: Helps in detecting issues related to estimation and planning. It serves as an input for estimating similar projects as well.

Test planning performance
Definition: The planned value related to the actual value.
Purpose: Shows how well estimation was done.

Test effort percentage
Definition: Test effort is the amount of work spent on testing, in hours, days or weeks. Overall project effort is divided among multiple phases of the project: requirements, design, coding, testing and so on.
Purpose: The effort spent on testing, in relation to the effort spent on development activities, gives an indication of the level of investment in testing. This information can also be used to estimate similar projects in the future.

Defect category
Definition: An attribute of the defect in relation to the quality attributes of the product. Quality attributes of a product include functionality, usability, documentation, performance, installation and internationalization.
Purpose: Can provide insight into the different quality attributes of the product.

PROCESS

Should be found in which phase
Definition: An attribute of the defect, indicating in which phase the remark should have been found.
Purpose: Are we able to find the right defects in the right phase, as described in the test strategy? Indicates the percentage of defects that migrate into subsequent test phases.

Residual defect density
Definition: An estimate of the number of defects that may remain unresolved in the product at the end of a phase.
Purpose: The goal is to achieve a defect level that is acceptable to the clients. We remove defects in each of the test phases so that few remain.

Defect remark ratio
Definition: Ratio of the number of remarks that resulted in software modification to the total number of remarks.
Purpose: Provides an indication of the level of understanding between the test engineers and the software engineers about the product, as well as an indirect indication of test effectiveness.

Valid remark ratio
Definition: Percentage of valid remarks during a certain period. Valid remarks = number of defects + duplicate remarks + number of remarks that will be resolved in the next phase or release.
Purpose: Indicates the efficiency of the test process.

Bad fix ratio
Definition: Percentage of resolved remarks that created new defects while existing ones were being resolved.
Purpose: Indicates the effectiveness of the defect-resolution process, plus indirect indications as to the maintainability of the software.

Defect removal efficiency
Definition: The number of defects that are removed per time unit (hours/days/weeks).
Purpose: Indicates the efficiency of defect removal methods, as well as providing an indirect measurement of the quality of the product.

Phase yield
Definition: The number of defects found during a phase of the development life cycle vs. the estimated number of defects at the start of the phase.
Purpose: Shows the effectiveness of the defect removal. Provides a direct measurement of product quality; can be used to determine the estimated number of defects for the next phase.

Backlog development
Definition: The number of remarks that are yet to be resolved by the development team.
Purpose: Indicates how well the software development engineers are coping with the testing efforts.

Backlog testing
Definition: The number of resolved remarks that are yet to be retested by the test team.
Purpose: Indicates how well the test engineers are coping with the development efforts.

Scope changes
Definition: The number of changes that were made to the test scope.
Purpose: Indicates requirements stability or volatility, as well as process stability.

How to calculate

Number of remarks: Total number of remarks found.

Number of defects: Only remarks that resulted in modifying the software or the documentation are counted.

Remark status: This information can normally be obtained directly from the defect-tracking system, based on the remark status.

Defect severity: Every defect has a severity level attached to it. Broadly, these are Critical, Serious, Medium and Low.

Defect severity index: Two measures are required to compute the defect severity index. A number is assigned to each severity level: 4 (Critical), 3 (Serious), 2 (Medium), 1 (Low). Multiply each remark by its severity-level number and add the totals; divide this by the total number of defects to determine the defect severity index.
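
For illustration, a minimal sketch in Python, assuming hypothetical defect counts per severity level and the 4/3/2/1 weights described above:

    # Hypothetical counts of defects at each severity level
    severity_counts = {"Critical": 2, "Serious": 5, "Medium": 10, "Low": 3}
    weights = {"Critical": 4, "Serious": 3, "Medium": 2, "Low": 1}

    total_defects = sum(severity_counts.values())
    weighted_sum = sum(weights[level] * count
                       for level, count in severity_counts.items())
    severity_index = weighted_sum / total_defects
    print(severity_index)  # (2*4 + 5*3 + 10*2 + 3*1) / 20 = 2.3

With these figures the index is 2.3, meaning the average defect falls between Medium and Serious.
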
Time to find a defect: Divide the cumulative hours spent on test execution and logging defects by the number of defects entered during the same period.
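
For example, assuming a hypothetical period with 80 hours spent on test execution and defect logging and 40 defects entered, the time to find a defect is 80 / 40 = 2 hours per defect.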

Time to solve a defect: Divide the number of hours spent on diagnosis and correction by the number of defects resolved during the same period.
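
For example, with a hypothetical 60 hours spent on diagnosis and correction and 30 defects resolved in the same period, the time to solve a defect is 60 / 30 = 2 hours per defect.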

Test coverage: Coverage could be with respect to requirements, a functional topic list, business flows, use cases, etc. It can be calculated as the number of items that were covered vs. the total number of items.
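
For example, if a hypothetical test set covers 170 of 200 documented requirements, test coverage is 170 / 200 = 85 percent.
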
Test case effectiveness: Ratio of the number of test cases that resulted in logging remarks vs. the total number of test cases.
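
For example, if 45 of a hypothetical 300 test cases resulted in logging remarks, test case effectiveness is 45 / 300 = 15 percent.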

Defects/KLOC: Ratio of the number of defects found vs. the total number of lines of code (in thousands).
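
For example, with a hypothetical 120 defects found in 24,000 lines of code, the metric is 120 / 24 = 5 defects/KLOC.
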
Workload capacity ratio: Computation of this metric often happens at the beginning of the phase or project. Workload is determined by multiplying the number of tasks by their norm times; gross capacity is the planned working time. The ratio is the workload divided by the gross capacity.
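
A minimal sketch in Python, assuming hypothetical planning figures; the norm time here stands for the estimated hours per task:

    # Hypothetical plan: 60 test tasks, each with a norm time of 8 hours
    workload = 60 * 8            # planned workload in hours
    gross_capacity = 400         # planned working time in hours
    ratio = workload / gross_capacity
    print(ratio)                 # 480 / 400 = 1.2, so the plan exceeds capacity

A ratio above 1 signals that the planned work does not fit within the available capacity.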

Test planning performance: The ratio of the actual effort spent to the planned effort.
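
For example, if the actual effort was a hypothetical 550 hours against 500 planned hours, test planning performance is 550 / 500 = 1.1, i.e. a 10 percent overrun.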

Test effort percentage: Computed by dividing the overall test effort by the total project effort.
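
For example, with a hypothetical 300 hours of test effort in a 1,500-hour project, the test effort percentage is 300 / 1,500 = 20 percent.
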
Defect category: Computed by dividing the number of defects that belong to a particular category by the total number of defects.
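
For example, if 12 of a hypothetical 60 defects are usability defects, the usability category accounts for 12 / 60 = 20 percent of all defects.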

Should be found in which phase: Computed by calculating the number of defects that should have been found in previous test phases.

Residual defect density: This is a tricky issue. Released products have a basis for estimation; for new versions, industry standards coupled with project specifics form the basis for estimation.

Defect remark ratio: The number of remarks that resulted in software modification vs. the total number of logged remarks. Valid for each test type, during and at the end of test phases.
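
For example, if 90 of a hypothetical 120 logged remarks resulted in a software modification, the defect remark ratio is 90 / 120 = 75 percent.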

Valid remark ratio: Ratio of the total number of remarks that are valid to the total number of remarks found.
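
For example, if 110 of a hypothetical 125 remarks are valid (defects, duplicates or remarks deferred to the next phase or release), the valid remark ratio is 110 / 125 = 88 percent.
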
Bad fix ratio: Ratio of the total number of bad fixes to the total number of resolved defects. This can be calculated per test type, test phase or time period.
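
For example, if 6 of a hypothetical 150 resolved remarks introduced new defects, the bad fix ratio is 6 / 150 = 4 percent.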

Defect removal efficiency: Computed by dividing the combined effort for defect detection, defect resolution and retesting by the number of remarks. This is calculated per test type, during and across test phases.
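
For example, assuming a hypothetical 200 hours spent in total on detection, resolution and retesting for 100 remarks, the figure is 200 / 100 = 2 hours per remark.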

Phase yield: Ratio of the number of defects found to the total number of estimated defects. This can be used during a phase and also at the end of the phase.
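
For example, if 40 defects are found in a phase against a hypothetical estimate of 50, the phase yield is 40 / 50 = 80 percent.
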
Backlog development: The number of remarks that remain to be resolved.

Backlog testing: The number of remarks that have been resolved but not yet retested.

Scope changes: Ratio of the number of changed items in the test scope to the total number of items.
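
For example, if 15 of a hypothetical 120 items in the test scope were changed, the scope change ratio is 15 / 120 = 12.5 percent.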
