
An Experience Factory to Improve Software

Development Effort Estimates

Ricardo Kosloski, Káthia Marçal de Oliveira

Universidade Católica de Brasília


SGAN 916 - Módulo B - Pós Graduação - Campus II
Brasília - DF – Brasil, CEP: 70790-060
rikoslos@brturbo.com, kathia@ucb.br

Abstract: It is well known that effort estimation is an important issue for software
project management. Software development effort can be obtained from the size of
the software and the productivity of its development process. Nowadays, Function
Point Analysis stands out as an approach largely used for software size estimation, while
productivity values are extracted from international historical databases. Some
databases show the median productivity value across many projects, while others present
all their data according to some specific characterization of the projects. We argue that
defining a framework of characteristics that impact software project productivity
can improve the comparison between finished projects and the new ones that need an
effort estimate. This article presents an approach to effort estimation with continuous
improvement of the estimates using such a characterization of the projects. Continuous
improvement is based on the use of an experience factory.

1 Introduction

Planning and controlling software development projects demand a lot of attention from the
managers. Appropriate effort estimate is an important part of this task. It is particularly
important to the organizations, because too high an estimate may result in losing a contract
to a competitor, whereas too low an estimate could result in a loss for the organization [2].
One way of defining effort estimates consists of relating the software size to the effort
required to produce it by means of a productivity value [18] (i.e., effort = productivity x
software size). The effort is expressed in man-hours and is the quantity of work required to
carry out the software development project [9]. The software size can be defined by its
functional content using methods such as the well-established Function Point Analysis [17,
18]. The productivity can be computed as the ratio between the effort required for the
software construction and the functional size of this software [18].
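To make the relationship concrete, here is a minimal sketch of this computation; the size and productivity figures below are purely illustrative and are not taken from any specific project.

```python
def estimate_effort(size_fp: float, productivity_h_per_fp: float) -> float:
    """Effort (man-hours) = productivity (hours per function point) x size (function points)."""
    return productivity_h_per_fp * size_fp

# Illustrative values only: a 500 function point system at 10 hours per function point.
print(estimate_effort(500, 10.0))  # 5000.0 man-hours
```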
Productivity values can be found, organized according to some characteristics, in historical
databases [13, 19]. However, each organization should set up its own historical productivity
database, so as to better reflect its own relationship with its customers in software
development [8]. The similarity between old projects and new ones could thus yield a better
productivity value, which in turn results in a better effort estimate for future projects. Another
important part of this work is the definition of appropriate characteristics to compare the
similarity of projects.
This article presents an approach to effort estimate based on appropriate characterization of
the projects. The characterization of the projects is continuously improved so as to reach a
better productivity definition for each new project based on past projects with similar
characterization. Continuous improvement approaches for software organizations have been
advocated by different authors [2,14,20]. In this context, Basili et al. [3] proposed the concept of
experience factory to institutionalize the collective learning of the organization that is at the
root of continual improvement and competitive advantage. Since the main idea of experience
factory is to use past experiences and since we believe that the effort estimate of old projects
can help in a more accurate definition of new estimates, we decided to base our approach on
the experience factory.
In the following sections we present a brief review of the use of productivity for effort
estimates (section 2) and experience factory (section 3). Next, we present the definition
(section 4) and use (section 5) of the experience factory for software development effort
estimate. Section 6 presents our conclusions and ongoing work.

2 Using productivities in software effort estimates

Productivity can be defined as the ratio between the quantity of work required to
develop a piece of software and the size of this software [9,21]. In this case, the effort is
expressed in man-hours, while the size can be measured by different kinds of metrics (e.g.,
Function Point Analysis–FPA [12], NESMA [22] and COSMIC [6]). Benchmarking studies
concluded that the productivity obtained by sizing the software in function points is more
consistent and allows better comparisons [17,18]. Moreover, FPA is a functional software size
metric whose elements can be extracted from the requirements of the system, available early
in the development process [7]. FPA is independent of the technology used for implementation
[12] and can be used to determine whether a tool, an environment, a language or a process is
more productive than another within an organization or among organizations [2,10].
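As an illustration of this kind of comparison, the sketch below derives productivity (hours per function point) from finished projects and averages it per programming language; the project records are hypothetical and only indicate the shape of such an analysis.

```python
from collections import defaultdict

# Hypothetical finished-project records: (programming language, effort in man-hours, size in FP).
projects = [
    ("Java", 8000, 700),
    ("Java", 4500, 420),
    ("Natural", 21000, 1500),
]

sums = defaultdict(lambda: [0.0, 0])
for language, effort_hours, size_fp in projects:
    productivity = effort_hours / size_fp        # hours per function point
    sums[language][0] += productivity
    sums[language][1] += 1

for language, (total, count) in sums.items():
    print(f"{language}: {total / count:.1f} h/FP on average")
```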
We found, in the relevant literature, different factors that can impact software
development productivity. Some authors [5,8,21] show that the experience of the software
development team in the technology used or in the system’s business area is an important
factor for the correct definition of productivity. Others argue that schedule restrictions
imposed on the software development [2,23] and software complexity [19,23] impact the
productivity. Software size is also a relevant factor in effort estimation and impacts the
productivity [2,5,17,21,23]. Maxwell and Forselius [19] say that the use of CASE tools in
software development projects can improve productivity. Finally, productivity increases with
reuse because consumers of reusable work products do less work in the software
construction [15].
Two main historical databases for software productivity are proposed by the Software
Productivity Research (SPR) group [29] and the International Software Benchmarking
Standards Group (ISBSG) [13]. SPR presents an average of productivity values
characterized by the programming language used in the projects and ISBSG has more than
two thousand projects registered according to a specific taxonomy for characterization.
However, both SPR and ISBSG miss some important characteristics found in the literature as
relevant for software development productivity (for instance, experience of the development
team and schedule restrictions are not considered). Another problem is that the ISBSG
historical database only shows the productivity realized by finished projects, without
supplementary information about the accuracy of the estimates made early in those projects.
These problems make it
difficult for a company to use the productivity proposed by these groups and apply it to its
own projects.
Also, registered productivity is not enough to improve the accuracy of the effort estimate
[19]. In that study, the authors report an analysis of software project characteristics to
understand their influence on productivity. For them, identifying the characteristics, or
combinations of characteristics, that help to understand the values registered in the historical
database of productivities can be useful to find the productivity of new software projects by
comparison with similar past projects.

3 Experience Factory

Since 1939, when Shewhart and later Deming proposed the PDCA (Plan-Do-Check-Act) cycle
for continuous process improvement, the idea of evolutionary improvement in an organization
has been applied in different contexts [14]. The PDCA has four steps. It begins with the
planning and the definition of improvement objectives (Plan). Next, the plan is executed (Do)
and then evaluated (Check) to see whether the initial objectives were achieved. Actions are
then taken (Act) based on what was learnt in the check step. If the changes are successful, they
will be incorporated as new practices in the process. If not, it is necessary to go through the
cycle again with modifications to the original plans.
In 1994, Basili et al. [3] defined the concept of experience factory, aimed at capitalization
and reuse of life cycle experience and products. The experience factory is a logical and
physical organization supported by two major concepts: a concept of evolution - the Quality
Improvement Paradigm (QIP), and a concept of measurement and control - the Goal-
Question-Metric approach.
QIP is based on the PDCA; it emphasizes continuous improvement by learning from
experience, both at the level of projects and at the level of organizations. It builds on
experimentation and the application of measurement [28]. The model consists of the following
six steps:
• Characterize: Understand the environment based on available models, data, etc.;
• Set Goals: On the basis of the initial characterization, set quantifiable goals for success and
improvement;
• Choose Process: On the basis of the characterization and of the goals, choose the
appropriate process to improve;
• Execute: Execute the process, constructing the products and providing project feedback
based upon the data on goal achievement that are being collected;
• Analyze: At the end of each specific project, analyze the data and the information gathered
to evaluate the current practices, identify problems, record findings, and make
recommendations for the improvement of future projects;
• Package: Consolidate the experience gained in the form of new, or updated and refined,
models and other forms of structured knowledge gained from this and prior projects, and
store it in an experience base so that it is available for future projects.
An appropriate and unambiguous characterization of the environment is a prerequisite to a
correct application of the paradigm [3]. This characterization requires that we classify the
current project with respect to a variety of characteristics. It makes it possible to isolate a class
of projects with characteristics similar to those of the project being developed. Characterization provides
a context for reuse of experience and products, process selection, evaluation and comparison,
and prediction.
The Goal-Question-Metric approach (GQM) [4] is the mechanism used by the QIP for
defining and evaluating a set of operational goals using measurement. The main idea of the
GQM is that measurement should be goal-oriented [4,28]. It is a top-down methodology
starting with the definition of an explicit measurement goal that is refined into several
questions that break down the goal into its major components. Then, each question is refined
into metrics that, when measured, will provide information to answer the questions. By
answering the questions we will be able to analyze whether the goal has been attained. As a result
of this process we have a model in three levels [4]: the conceptual level, where a goal is
defined for an object (product, process or resource) from various points of view; an
operational level, where questions are defined for the goal and characterize the object of
measurement with respect to a selected quality issue from the selected point of view; and, the
quantitative level, representing a set of data associated with each question in order to answer
it in a quantitative way.
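To illustrate these three levels, the sketch below encodes one goal, its questions and their metrics as a nested structure; the entries anticipate the goal and some of the metrics defined in section 4.2 and are only a simplified rendering, not a tool the authors describe.

```python
# Conceptual level: the goal; operational level: questions; quantitative level: metrics.
gqm = {
    "goal": "Analyze the effort estimates in order to understand "
            "their accuracy from the viewpoint of project management",
    "questions": {
        "What is the error in the size estimate?": [
            "M1.1 - size estimate absolute error",
            "M1.2 - size estimate relative error",
        ],
        "What is the effort estimate error?": [
            "M3.1 - absolute error between estimated and real work effort",
            "M3.2 - relative error between estimated and real work effort",
        ],
    },
}

for question, metrics in gqm["questions"].items():
    print(question, "->", "; ".join(metrics))
```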

4 Continuous improvement for software development effort estimates

With the need of providing better estimates for each new software development project, and
inspired by the idea of continuous improvement based on prior experience, we decided to define
and build an experience factory for software effort estimates. To define this experience factory we
had to address each one of the steps of the QIP. The following sections present the
characterization (section 4.1), the improvement goals definition (section 4.2) and the
estimate process defined (section 4.3). The three other steps (execute, analyze and package)
should be executed for each new project that needs an effort estimate.

4.1 Characterization

To address this step we had to define how we would characterize each project that needs an
effort estimate. Considering that effort is the product of the software size and the productivity of the
team, and that software size is already established by Function Point Analysis approach, we
decided to look for the factors that have influence on the productivity of personnel in the
software development.
In order to define these factors, we looked carefully in the literature and interviewed two
specialists (with more than five years of experience) in the effort estimate area. Table 1
shows those factors with the references that relate them to software productivity. It also
describes the possible values for each factor. The factors were grouped into different
categories: (C1) software project, with 4 general characteristics about the software to be
developed; (C2) software project development, with 15 factors about the software
development itself; (C3) software size, with 4 factors about the size and its measurement
method; (C4) work effort, with 4 factors related to the work needed for the software
construction; and (C5) team experience, with 6 factors related to human aspects.
We consider that the software size will be estimated with FPA [12] or NESMA [22]. Both
methods produce the size in function points, but the measurement obtained by NESMA
(called estimated count) is more generic, based on average complexity values, whereas the
measurement obtained with FPA (called detailed count) is more detailed.
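To illustrate the difference between the two counts, here is a hedged sketch based on our reading of the counting rules (which the paper does not detail): the NESMA estimated count assumes low complexity for data functions and average complexity for transactions, whereas the IFPUG detailed count uses the complexity actually rated for each function. The weights are the standard unadjusted IFPUG values and the function inventory is invented.

```python
# Unadjusted IFPUG weights per function type and complexity.
WEIGHTS = {
    "ILF": {"low": 7, "average": 10, "high": 15},
    "EIF": {"low": 5, "average": 7, "high": 10},
    "EI":  {"low": 3, "average": 4, "high": 6},
    "EO":  {"low": 4, "average": 5, "high": 7},
    "EQ":  {"low": 3, "average": 4, "high": 6},
}

def nesma_estimated_count(functions):
    """Estimated count: data functions (ILF, EIF) assumed low, transactions assumed average."""
    total = 0
    for ftype, _rated in functions:
        complexity = "low" if ftype in ("ILF", "EIF") else "average"
        total += WEIGHTS[ftype][complexity]
    return total

def fpa_detailed_count(functions):
    """Detailed count: use the complexity actually rated for each function."""
    return sum(WEIGHTS[ftype][complexity] for ftype, complexity in functions)

# Invented inventory of (function type, rated complexity) pairs.
inventory = [("ILF", "average"), ("EI", "low"), ("EO", "high"), ("EQ", "average")]
print(nesma_estimated_count(inventory), fpa_detailed_count(inventory))  # 20 24
```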
Thus, at the beginning of each software development project, we need to characterize the
software project according to these factors. From this characterization, we can select, in the
experience database, the projects with similar characteristics. The recorded productivity of
these projects will support the project manager in estimating the effort for his or her project.
The search for similar projects is done by a simple query in the project database using the
characteristics selected by the project manager.
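The sketch below illustrates such a query over an in-memory experience base: finished projects are stored as records keyed by the characteristic identifiers of Table 1, and only the characteristics selected by the project manager are compared. The records and the helper are hypothetical (they loosely echo systems S3 and S6 from Section 5); the company's actual tool queries a database and is not shown here.

```python
def find_similar(history, new_project, selected_characteristics):
    """Return past projects matching the new one on every selected characteristic."""
    return [
        past for past in history
        if all(past.get(c) == new_project.get(c) for c in selected_characteristics)
    ]

# Hypothetical experience base entries, keyed by the identifiers used in Table 1.
history = [
    {"id": "S3", "C2.3": "object oriented", "C2.4": "Java", "C1.4": "banking", "productivity_h_fp": 13.4},
    {"id": "S6", "C2.3": "object oriented", "C2.4": "Cold Fusion", "C1.4": "legal", "productivity_h_fp": 9.94},
]
new_project = {"C2.3": "object oriented", "C2.4": "Java", "C1.4": "banking"}

for match in find_similar(history, new_project, ["C2.3", "C2.4", "C1.4"]):
    print(match["id"], match["productivity_h_fp"])  # candidate productivity values for the estimate
```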
Table 1. Software development project characterization
Impact Factors [authors]: Possible values

C1.1-Precedentedness [5]: Very low, high or extra high, as defined by [5], analyzing the software project according to the organizational understanding of product objectives, the experience in working with related software systems, the concurrent development of associated new hardware and operational procedures, and the need for innovative data processing architectures and algorithms.
C1.2-Application type [15,16]: Transaction and production systems, management information systems, etc.
C1.3-Software complexity [5,19]: The functional complexity as defined by the FPA method [12] in simple, medium or complex functions.
C1.4-Business area type [13,19]: Accounting, banking, legal, health, etc.
C2.1-Development type [13]: New development, maintenance-enhancement or correction of bugs.
C2.2-Development platform [13]: PC, mainframe or mixed.
C2.3-Development paradigm [13]: Object oriented or structured.
C2.4-Primary programming language [13,29]: The main programming language used.
C2.5-Development techniques [13]: 1-Data modeling; 2-Prototyping; 3-Project management; 4-Metrics; 5-Tests; 6-JAD; 7-Requirement management.
C2.6-Use of CASE tools levels [5,9]: As defined by [9]: 0-Tools not used at all; 1-Tools used only as an aid to less than 20% of documentation; 2-Tools used for documenting at least 50% of the high-level design; 3-Tools used for documenting at least 50% of the high-level and detailed design; 4-Tools used for design and automatic code generation of at least 50% of the system; 5-Tools used for design and automatic code generation of at least 90% of the system.
C2.7-Reuse level [5,15]: 0-No part of the software is ready to be reused; 1-Less than 20% of the software is ready to be reused; 2-Between 20 and 50% of the software is ready to be reused; 3-Between 50 and 80% of the software is ready to be reused; 4-More than 80% of the software is ready to be reused.
C2.8-Development schedule restrictions [2,23]: The schedule restrictions imposed on the software development by the client.
C2.9-Effort estimate schedule restrictions [2]: The time spent to do the effort estimate.
C2.10-Software development process maturity level [5,25]: As defined by CMMI [11]: levels 1, 2, 3, 4 or 5; the company's level while developing the software project.
C2.11-Max team size [2,5,16]: The maximum size of the team during the development.
C2.12-Team size variation rate [13]: The relationship between the maximum and minimum team size.
C2.13-Registration data method [13]: As defined by [13]: Method A-Staff hours recorded in their daily activities; Method B-Derived from time records that indicate, for example, the assignment of people to the project; Method C-Productive time only, spent by each person on the project.
C2.14-Project scope [13]: The software development phases (planning, design, specify, build, test and/or implementation) that were considered in the effort estimate.
C2.15-Project elapsed time [13]: The time in months to develop the software project, estimated at the beginning and collected at the end of the software development.
C3.1-Count approach [13,17,18,22]: FPA-IFPUG or NESMA.
C3.2-Function size metric used [13]: IFPUG-3.0, IFPUG-4.0.
C3.3-Software size [2,7,10,12,13,16,17,18,21,23]: The value obtained by the application of the FPA method at the beginning and at the end of the software development.
C3.4-Function point adjustment factor [12,13]: From the FPA/NESMA count at the beginning and at the end of the software development.
C4.1-Total work effort [2,16,19,21]: Effort for the whole software development, estimated at the beginning and collected at the end of the software development.
C4.2-Work effort by activity in the development life cycle [13]: Effort for each activity (planning, design, specify, build, test, implementation), estimated at the beginning and collected at the end of the software development.
C4.3-Efficiency of the use of the project time [8,27]: Percentage of the project time lost with team health problems, vacations, training, meetings and general absences.
C4.4-Rework [11]: Total work effort recorded by each person for rework tasks.
C5.1-Size measurement technical experience [7,10,17], C5.2-Size measurement experience in the application business area [7,10,17], C5.3-Software development technical experience [2,21], C5.4-Software development experience in the application business area [2,21], C5.5-Requirements development technical experience [9], C5.6-Requirements development experience in the application business area [7]: Mean of the team evaluation, defined using [5]: 0-No previous experience at all; 1-Familiarity (through course or book) but no working experience; 2-Working experience on one previous project or up to 20 worked hours of use; 3-Working experience on several previous projects or between 21 and 100 worked hours of use; 4-Thoroughly experienced.

4.2 Goals definition

According to the template proposed by [28], we defined the following goal:


Analyze: The effort estimates
For the purpose of: Understanding
With respect to: The accuracy of the estimate
From the viewpoint of: The project management
As previously defined, effort = size x productivity. Therefore, the questions and metrics of
the GQM were defined to evaluate both size and productivity (Table 2). We focus on the
absolute and relative errors in the measurements and on understanding their respective
causing factors. The absolute error is the difference between the values of a variable
measured at the beginning and at the end of a software project development. The relative
error is the absolute error expressed as a percentage of the value estimated at the beginning.
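As a minimal sketch of these two error definitions, the code below applies them to the initial and final sizes of System 1 reported later in Table 4; the result reproduces metrics M 1.1 and M 1.2 of Table 6.

```python
def absolute_error(estimated: float, measured: float) -> float:
    """Difference between the value measured at the end and the value estimated at the beginning."""
    return measured - estimated

def relative_error(estimated: float, measured: float) -> float:
    """Absolute error expressed as a percentage of the initial (estimated) value."""
    return 100.0 * absolute_error(estimated, measured) / estimated

# System 1 software size (Table 4): 2324 FP estimated initially, 3023 FP measured at the end.
print(absolute_error(2324, 3023))             # 699 FP   (metric M 1.1)
print(round(relative_error(2324, 3023), 1))   # 30.1 %   (metric M 1.2)
```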

Table 2. Questions and metrics to analyze the effort estimate

1-What is the error in the size estimate?
Metrics: M 1.1-Size estimate absolute error; M 1.2-Size estimate relative error.
2-What were the schedule constraints imposed on the initial count of the software development project?
Metrics: M 2.1-Speed of the count (number of function points counted per day by each counter); M 2.2-Percentage of additional hours used in the count (by "additional hours" we mean hours beyond the normal working day).
3-What is the effort estimate error?
Metrics: M 3.1-Absolute error between estimated and real work effort; M 3.2-Relative error between estimated and real work effort; M 3.3-Initial productivity (estimated); M 3.4-Final productivity (measured); M 3.5-Productivity absolute error; M 3.6-Productivity relative error.
4-What is the elapsed time estimate error?
Metrics: M 4.1-Absolute error between estimated and real elapsed time; M 4.2-Relative error between estimated and real elapsed time.
5-What were the schedule constraints imposed on the software development project?
Metrics: M 5.1-Percentage of additional work hours with respect to the total work effort of the software development project ("additional hours" means hours beyond the normal working day).
4.3 Choose Process

Since the improvement expected is in the accuracy of the effort estimate, the process chosen
for the experience factory is the software effort estimate process. We found two proposals in
the literature: the Practical Software Measurement (PSM) process [24], and the process
described by Agarwal [2]. The PSM process consists of the following steps: select estimate
approaches, map analogies, compute and evaluate estimates. We notice that some activities
are too generic and would require more detail (for example, "compute estimates" depends on
the method used). Also, the activity applied after the estimate (evaluate estimates) is not part
of the effort estimate that interests us. Agarwal's proposal also deals with this process in a
generic way, but adds the importance of a historical database as a source of useful
information to calibrate the estimates.
We focus on the effort estimate obtained from the software size and the development
productivity values. We therefore defined a specific process based on those proposals. The
resulting process is presented in Table 3; a simplified sketch of how its activities fit together
follows the table.

Table 3. Effort Estimate Process

Activity Description
Choose an estimate approach Evaluate the level of detail of the system development to
choose the software size estimate method to be used at the
beginning of the project (detailed – IFPUG or estimated –
NESMA)
Plan the estimate Plan the steps of the estimate, that is, who will do it,
when and what effort will be required.
Approve the plan Obtain acceptance and commitment from the managers
for the plan defined.
Obtain the system documentation Get the system documents needed for the estimate
Execute the software size estimate / Use the FPA/NESMA to measure (estimate) the
measurements software size
Execute the effort estimate Select the characteristics of the framework to search for
similar projects, then choose a productivity value and
compute the effort estimate
Approval of the estimate Present the results and approve them with the
management
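To show how the activities of Table 3 fit together, the sketch below chains the size-dependent steps into one function. It is a simplified reading of the process under our own assumptions: the planning, approval and documentation activities are omitted, the productivity is taken as the mean of the similar projects, and all names and data are hypothetical rather than the company's implementation.

```python
def estimate_project_effort(size_fp, history, new_project, selected_characteristics):
    """Simplified walk through the estimate process of Table 3; size_fp is the FPA/NESMA count."""
    # Execute the effort estimate: search the experience base for similar past projects...
    similar = [
        past for past in history
        if all(past.get(c) == new_project.get(c) for c in selected_characteristics)
    ]
    if not similar:
        raise ValueError("no similar project found; an external benchmark would be needed")
    # ...choose a productivity value (here, the mean of the similar projects)...
    productivity = sum(p["productivity_h_fp"] for p in similar) / len(similar)
    # ...and compute the effort estimate: effort = productivity x size.
    return productivity * size_fp

# Hypothetical call: an 800 FP project matched against one similar project of 13.0 h/FP.
history = [{"C2.4": "Java", "C1.4": "banking", "productivity_h_fp": 13.0}]
print(estimate_project_effort(800, history, {"C2.4": "Java", "C1.4": "banking"}, ["C2.4", "C1.4"]))
# 10400.0 man-hours
```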

5 Applying the approach

We are applying this approach in a large company in Brazil. This company has been
working with effort estimates for more than five years and is continually concerned with
getting better estimates in its projects. Currently, it uses historical databases from SPR and
ISBSG to get productivity values. To apply this approach we followed some steps:
i) extend the company’s tool to register the 33 characteristics defined;
ii) populate the database with prior projects to have a starting point;
iii) simulate some effort estimates by similarity considering the prior projects; and,
iv) do effort estimates for new software projects.
To carry out the first step (i), we simply enhanced the tool already used in
the company. This tool already supported the function point count of projects. We extended
the data model and included three new facilities: one to characterize a software project using
the 33 characteristics defined, another to allow searching similar projects in the database, and
one to support the collection of the results of the metrics defined by the GQM approach.
In [26] the authors report that a difficulty with the experience factory is to have the first
experiences registered in the database with lessons learned to motivate its use. They suggest
populating it with information from the literature and from prior experience done before the
use of the experience factory. Therefore, to populate the database with prior projects (step
(ii)) we analyzed the documentation of 30 projects that had the effort estimated during the
last 5 years in the company. Unfortunately, since we have 33 characteristics, it was difficult
to find projects documented well enough to obtain all the characteristics. We got 27 of the 33
characteristics defined for 7 projects (it was not possible to find out information about C2.8,
C2.9, C4.2, C4.3 and C4.4). For the other projects we did not have the information, neither in
the documentation, nor by interviewing the managers (some of them were not working in the
company anymore).
Table 4 shows the characterization of the 7 projects. Table 5 shows the projects with
similar characteristics. Table 6 shows the metrics’ results. We can see that we had effort errors
of more than 100% for four systems (S1, S2, S5 and S7) and elapsed time errors of more than
50% for three systems (S1, S4 and S5).
With this information we could start the QIP cycle for each new project. The complete
evaluation of this approach will take some years since we need to estimate the effort at the
beginning of a software development project, wait for its completion and then analyze the
metrics. Thus, to get an idea of the potential improvement we could achieve, we decided to
do some simulations (step (iii)) with the 7 projects already characterized. The idea was to
look for two similar projects, define the effort estimate of one based on the productivity of
the other, and check whether it would be better than the effort actually estimated for that project.
Table 7 shows the estimate errors obtained with the SPR database (used in the company for
those projects) and with our approach. Projects 1 and 5 were used as references in the
evaluation of the estimates of the other projects. We can see that, even with this small
simulation, we obtain an improvement in the effort estimates (around 30% for two projects
and around 70% for two others).
Step (iv) is in progress.
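The simulation figures can be checked with a short computation; the sketch below redoes the System 2 row of Table 7, taking the initial size and final effort from Table 4 and the two productivity values (SPR, and System 1's measured productivity) from Tables 6 and 7. The improvement is computed here as the relative reduction of the estimate error, which is our reading of the table.

```python
def relative_error_pct(estimated_effort: float, actual_effort: float) -> float:
    return 100.0 * (actual_effort - estimated_effort) / estimated_effort

size_fp = 1967          # System 2 initial size in function points (Table 4)
actual_effort = 43091   # System 2 effort measured at the end, in hours (Table 4)

spr_effort = 9.48 * size_fp   # SPR productivity of 9.48 h/FP -> about 18648 hours
our_effort = 16.2 * size_fp   # System 1 productivity of 16.2 h/FP -> about 31865 hours

spr_error = relative_error_pct(spr_effort, actual_effort)    # about 131 %
our_error = relative_error_pct(our_effort, actual_effort)    # about 35 %
improvement = 100.0 * (spr_error - our_error) / spr_error    # about 73 %
print(round(spr_error), round(our_error), round(improvement))
```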

Table 4. System Characteristics


Char. S1 S2 S3 S4 S5 S6 S7
C 1.1 Very low Very low High Very low Very low Extra high Very low
C 1.2 MIS MIS MIS MIS MIS MIS MIS
simple=71 simple =67 simple =51 simple =61 simple =79 simple =62 simple =89
C 1.3 media=12 media =18 media =23 media =18 media =13 media =14 media =0
complex=17 complex =15 complex =26 complex =21 complex =8 complex =24 complex =11
C 1.4 banking banking banking
banking legal banking legal banking Legal health
pensions administration administration
C 2.1 develop. develop. develop. develop. develop. develop. develop.
PC,
C 2.2 PC, mid-range PC,mid-range PC,mid-range PC,mid-range PC,mid-range PC,mid-range
mainframe
object object object object
C 2.3 structured structured structured
oriented oriented oriented oriented
C 2.4 Natural Natural Java Java Java Cold Fusion Java
C 2.5 1,2 1,2,3,4,5 1,2,3,4,5,7 1,2,3,4,5,6 1,2,3,4,5 1,2,3,4,5,7 1,2,3,4,5
C 2.6 1 2 3 2 2 3 2
C2.7 0 1 1 1 1 2 0
C2.10 1 1 1 1 1 1 1
C 2.11 22 18 14 14 6 14 9
C 2.12 > 250% 80% 63% 55% 100% 50% 80%
C 2.13 C C B B B A B
C2.14 all phases all phases all phases all phases all phases all phases all phases
C2.15i 12 12 16 7 5 11 6
C2.15f 27 17 21 11 8 13 9
C 3.1/
NESMA IFPUG4.1 NESMA NESMA NESMA IFPUG4.1 NESMA
C3.2 i
C 3.1 /
IFPUG4.1 IFPUG4.1 IFPUG4.1 IFPUG4.1 IFPUG4.1 IFPUG4.1 IFPUG4.1
C3.2 f
C.3.3i
2324 1967 1710 698 277 2008 224
(FP)
C3.3f
3023 3446 2421 1124 625 2183 395
(FP)
C3.4i 1.12 1.12 1.05 1.01 0.94 1.22 0.97
C3.4f 1.12 1.12 1.05 1.01 0.94 1.22 0.97
4.1i
22032 18647 19152 7818 3103 19678 3164
(hour)
4.1f
49010 43091 32617 13903 8106 21705 7194
(hour)
C5.1 1.0 2.30 2.5 2.0 3.0 2.5 1.7
C5.2 0.50 1.0 1.5 1.0 1.0 3.8 0.3
C5.3 2.4 3.4 2.6 2.25 1.0 3.0 2.0
C5.4 1.6 2.3 1.8 1.5 0.5 3.0 1.0
C5.5 3.5 2.10 2.3 2.6 2.5 2.7 1.3
C5.6 0.75 1.30 2.7 1.2 1.0 3.7 0.0
Legend: i = value at the beginning of the software development; f = value at the end of the software development;
FP = function point

Table 5. Analogous systems (same magnitude when needed)

System Analogies
1 and 2 C1.1, C2.8, C1.2, C2.1, C2.3, C2.4, C 3.1, C3.2, C2.13, C2.11, C3.3 (difference in initial
size less than 20%), C3.4, C1.3
3 and 5 C1.4, C2.1, C2.3, C2.4, C3.1, C3.2, C2.13, C2.5 (only one different technique used)
3 and 1 C1.2, C2.1, C2.8, C3.1, C3.2, C3.3, M2.1, C1.3
4 and 5 C2.8, C3.1, C3.2, C3.3, C2.1, C2.3, C2.4, C2.13, C2.5 (only one different technique used)

Table 6: Metrics Results


Metrics S1 S2 S3 S4 S5 S6 S7
M 1.1 (FP) 699 1479 711 426 348 175 171
M 1.2 (%) 30.1 75 42 61 126 8.7 76
M 2.1 (FP/day) 186 65.2 57 -X- 92.3 29.8 75
M 3.1 (hours) 26978 24444 13465 6085 5003 2027 4030
M 3.2 (%) 123 131 70 78 161 10.3 127
M 3.3 (hour/FP) 9.48 (SPR) 9.48 (SPR) 11.20 (SPR) 11.20 (SPR) 11.20 (SPR) 9.8 (*) 12.9 (*)
M 3.4 (hour/FP) 16.2 12.50 13.40 12.40 13.00 9.94 18.2
M 3.5 (hour/FP) 6.70 3.02 2.20 1.20 1.20 0.14 5.3
M 3.6 (%) 71 32 20 11 16 1.5 40.5
M 4.1 (months) 15 5 5 4 3 2.3 3
M 4.2 (%) 136 42 31 57 60 21 50
Legend: FP = function point; * = value taken from ISBSG

Table 7: Precisions of Effort Estimate

System  Initial size (FP)  SPR prod. (h/FP)  SPR effort (h)  Hist. prod. (h/FP)     Our effort (h)  Actual effort (h)  SPR error (%)  Our error (%)  Improvement (%)
2       1967               9.48              18648           16.2 (from System 1)   31865           43091              131            35             73
3       1710               11.20             19152           13.0 (from System 5)   22230           32617              70             46             35
3       1710               11.20             19152           16.2 (from System 1)   27702           32617              70             18             74
4       698                11.20             7818            13.0 (from System 5)   9074            13903              78             53             32

6 Conclusion and ongoing work

Effort estimate is a fundamental input for software development project management. The
continuous improvement of effort estimate accuracy would lead an organization to improve
its capacity to carry out its commitments, delivering its software products on time, and would
therefore bring a competitive advantage.
This article presents an approach for continuous effort estimate improvement by using the
experience factory concept. We are applying this approach in a company in Brazil that has
used a measurement process for more than five years. We expect that, with each new project, we will
reach better accuracy of the effort estimates by using more realistic productivity values
obtained from past similar projects, registered in an historical database. We already
populated the experience database with 7 projects and did some simulation to evaluate our
approach. Our next step will be to use this initial database for estimating software effort in
new software projects. We are also studying new ways of computing the similarity of a new
project to old ones in the database. We plan to experiment using Case Based Reasoning [1].
Acknowledgments
This work is part of the “Knowledge Management in Software Engineering” project, which is
supported by the CNPq, an institution of the Brazilian government for scientific and
technological development.

References
1 AAMODT, Agnar; PLAZA, Enric; Case-Based Reasoning: Foundational Issues, Methodological
Variations, and System Approaches, AI Communications, IOS Press, Vol. 7:1, pp. 39-59, 1994.
2 AGARWAL, Manish Kumar; YOGESH, S. Mallick; BHARADWAJ, R. M., et al., Estimating
Software Projects, ACM SIGSOFT, p. 60, 2001
3 BASILI,V.;CALDIERA,Gianluigi;ROMBACH,H.Dieter “The Experience Factory”, Encyclopedia of
Software Engineering, John Wiley & Sons, 1994, V.1 pp. 476-496
4 BASILI,V.,Rombach, H.Goal Question Metric Paradigm; Encyclopedia of Software Engineering – 2,
1994
5 BOEHM, Barry et al,Software Cost Estimate With COCOMO II, Prentice Hall PTR, 2000
6 COSMIC, Measurement Manual, The COSMIC Implementation Guide for ISO/IEC 19761:2003,
Version 2.2, 2003
7 DEKKERS, Carol; AGUIAR, Mauricio; Using Function Point Analysis (FPA) to Check the
Completeness (Fullness) of Functional User Requirements, Quality Plus, 2000
8 FAIRLEY, Dick. Making Accurate Estimates, IEEE, 2002
9 FENTON, N., PFLEEGER, S. Software Metrics A Rigorous & Practical Approach, 2nd. Ed., PWS
Publishing Company, 1997.
10 FUREY, Sean, Why we should use Function Points, IEEE, 1997
11 JOHNSON, Donna I.; BRODMAN, Judith G.; “Realities and Rewards of Software Process
Improvement”, IEEE 1996
12 IFPUG, CPM – Counting Practices Manual, release 4.1.1; IFPUG – International Function Point Users
Group; 2000
13 ISBSG. International Software Benchmarking Standards Group. The Benchmarking, Release 8, ISBSG;
2003.
14 JOHNSON, Corine N., The Benefits of PDCA, Quality Progress, pp.120, 2002
15 LIM, Wayne C., Effects of Resue on quality, Productivity, and Economics, IEEE, 1994
16 LOKAN, Chris; WRIGHT, Terry; HILL, Peter R.; STRINGER, Michael; “Organizational
Benchmarking Using the ISBSG Data Repository”, IEEE, 2001
17 LOW, Graham C.; JEFFERY, D. Ross, “Function Points in the Estimation and Evaluation of the
Software Process”, IEEE, 1990
18 ARNOLD, Martin; PEDROSS, Peter; Software Size Measurement and Productivity Rating in a Large-
Scale Software Development Department, IEEE, 1998
19 MAXWELL, Katrina, FORSELIUS, Pekka, Benchmarking Software Development Productivity, IEEE,
2000
20 MCFEELEY, Bob, IDEAL: A User´s Guide for Software Process Improvement, SEI,1996
21 MORASCA, Sandro; RUSSO, Giuliano. An Empirical Study of Software Productivity,
IEEE, 2002
22 NESMA, Estimate Counting, Netherlands Software Metrics Users Association, 2003
23 POTOK, Tom; VOUK, Mladen. Development Productivity for Commercial Software Using OO
Methods, ACM, 1995
24 PSM, Practical Software Measurement, Addison-Wesley, 2002
25 RUBIN, Howard A ., Software Process Maturity: Measuring its Impact on Productivity and Quality,
IEEE, 1993
26 RUS,Ioana, LINDVALL, Mikael, SEAMAN, Carolyn, BASILI, Victor, Packaging and Disseminating
Lessons Learned from COTS-Based Software Development, IEEE, 2003
27 SHEPPERD, Martin; CARTWRIGHT, Michele; Predicting with Sparse Data; IEEE, 2000
28 SOLINGEN, R.; BERGHOUT, Egon, The Goal/Question/Metric Method: A Practical Guide for
Quality Improvement of Software Development. London: McGraw-Hill, 1999. 195 p.
29 SPR, “The Programming Language Table”, Software Productivity Research, 2001
