www.sjmmf.org
Abstract
Reusing software components without proper analysis is risky because a component may be used differently from the way it was developed. This paper describes a new testing approach for reusing software components. With this approach, it is possible to decide automatically whether a software component can be reused without retesting. In addition, when retesting is required, test cases are generated more efficiently by using the component's previous testing history.
Keywords
Software Testing; Software Reuse; Markov Chain; Statistical
Testing
Introduction
The reusability of software components is a major issue for software developers because reusing software can save development time and effort, and it can also reduce errors (Basili et al., 1996). Reusable software components are generally regarded as safe because they are tested during development. However, reusing a software component can be risky if it is not reanalyzed for reuse: even though a component has passed testing in its original development environment, problems can arise when it is reused in a new environment. Therefore, software components must be reanalyzed before they are reused, and this reanalysis often requires additional tests. Testing software for reuse calls for test case generation methods, as well as new measurements that decide whether a component needs more testing before reuse and when retesting should stop. This paper describes a new approach to these problems, based on studies of software testing using a Markov chain (Oshana, 1997; Whittaker, 1992; Whittaker and Thomason, 1994).
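As background for this family of approaches, a usage model can be represented as a Markov chain whose states are operational situations and whose arcs carry transition probabilities; a test case is then a random walk from invocation to termination. The sketch below illustrates the idea with a hypothetical model (the state names and probabilities are ours, not taken from this paper):

```python
import random

# Hypothetical usage model: each state maps to its outgoing arcs as
# (next_state, probability) pairs. Values are illustrative only.
usage_model = {
    "Invocation":  [("Ready", 1.0)],
    "Ready":       [("Working", 0.7), ("Termination", 0.3)],
    "Working":     [("Ready", 0.5), ("Error", 0.1), ("Termination", 0.4)],
    "Error":       [("Termination", 1.0)],
    "Termination": [],  # absorbing state: no outgoing arcs
}

def generate_test_case(model, start="Invocation", rng=random):
    """Random-walk the chain from start until the absorbing state is reached."""
    path = [start]
    while model[path[-1]]:
        states, probs = zip(*model[path[-1]])
        path.append(rng.choices(states, weights=probs, k=1)[0])
    return path

print(generate_test_case(usage_model))
```

Each generated path is one statistically representative test case; running many such walks exercises arcs in proportion to their usage probabilities.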
FIG. 1 OVERVIEW
Stopping Criterion
When a software component is reused, the stopping criterion used during the original development process is not very useful, because the component already satisfied that criterion before its release. Therefore, a new stopping criterion is needed for software reuse.
In this paper, the difference between the usage model and the test model is used as the stopping criterion for software reuse. When a software component is reused, the new usage model may have different arc probabilities from the previous usage model. Therefore, the difference is measured in order to determine whether the new usage model is similar enough to the test model. If the usage and test models are similar, the component has effectively been tested according to its actual usage in the new environment, and in this case no more tests are required to reuse it.
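To make this criterion concrete, the sketch below (our illustration, not necessarily the paper's exact formula) estimates a test model's arc probabilities from executed test sequences and measures its largest per-arc deviation from the usage model; retesting could stop once this difference falls below an acceptance threshold:

```python
from collections import Counter

def arc_probabilities(sequences):
    """Empirical transition probabilities estimated from executed test sequences."""
    arcs, outgoing = Counter(), Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            arcs[(a, b)] += 1
            outgoing[a] += 1
    return {arc: n / outgoing[arc[0]] for arc, n in arcs.items()}

def max_arc_difference(usage, test):
    """Largest absolute probability difference over arcs in either model."""
    return max(abs(usage.get(a, 0.0) - test.get(a, 0.0))
               for a in set(usage) | set(test))

# Toy usage model over states A, B, C (illustrative values).
usage = {("A", "B"): 0.5, ("A", "C"): 0.5, ("B", "C"): 1.0}
history = [["A", "B", "C"], ["A", "C"], ["A", "B", "C"], ["A", "C"]]
print(max_arc_difference(usage, arc_probabilities(history)))
```

Here the executed sequences happen to match the usage probabilities exactly, so the difference is zero and no further testing would be indicated.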
Model states:
2. (0)
3. (1, null_1)
4. (1, not_null)
5. (2, null_both)
6. (2, null_1)
7. (2, null_2)
8. (2, not_null)
9. (Error)
10. (Termination)

[TABLE: API Group, No. of APIs, From State, To State, Probability for Each Input]
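One plausible way to use the previous testing history when generating new test cases, sketched below as our own illustration rather than the paper's exact method, is to bias arc selection toward arcs whose observed test frequency lags their usage-model probability:

```python
import random

def biased_next_state(usage_probs, test_counts, state, rng=random):
    """Pick the next arc, weighting arcs the test history has under-covered.

    usage_probs: {state: [(next_state, p), ...]} target usage model.
    test_counts: {(state, next_state): executions so far}.
    The weighting scheme here is illustrative, not the paper's formula.
    """
    arcs = usage_probs[state]
    total = sum(test_counts.get((state, s), 0) for s, _ in arcs)
    weights = []
    for nxt, p in arcs:
        seen = test_counts.get((state, nxt), 0)
        observed = seen / total if total else 0.0
        # Boost arcs whose observed frequency lags their usage probability;
        # the small constant keeps every arc selectable.
        weights.append(max(p - observed, 0.0) + 1e-6)
    return rng.choices([s for s, _ in arcs], weights=weights, k=1)[0]

usage = {"A": [("B", 0.8), ("C", 0.2)]}
history = {("A", "B"): 1, ("A", "C"): 9}  # arc A->B badly under-covered so far
print(biased_next_state(usage, history, "A"))  # overwhelmingly likely "B"
```

A generator built this way closes the gap between the test model and the usage model faster than uniform random selection, which is the effect the case study measures.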
Source     | SS           | Df  | MS           | F
Regression | 242730806.17 | 1   | 242730806.17 | 37784.58
Error      | 1978613.581  | 308 | 6424.070     |
Total      | 244709419.76 | 309 |              |

Source     | SS           | Df  | MS           | F
Regression | 240563387.44 | 1   | 240563387.44 | 26575.61
Error      | 2788027.21   | 308 | 9052.04      |
Total      | 243351414.65 | 309 |              |
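The F statistics in the ANOVA tables above can be checked directly from the sums of squares and degrees of freedom, since MS = SS/df and F = MS_regression/MS_error (the regression df follows from Total df 309 minus Error df 308, and the error SS equals Total SS minus Regression SS):

```python
def anova_f(ss_reg, ss_err, df_reg, df_err):
    """F statistic from sums of squares: (SS_reg/df_reg) / (SS_err/df_err)."""
    return (ss_reg / df_reg) / (ss_err / df_err)

# Values from the first ANOVA table above.
print(round(anova_f(242730806.17, 1978613.581, 1, 308), 2))
```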
2) Reducing Differences
With the new approach, the difference between the usage model and the test model is expected to be reduced more efficiently. Therefore, in this case study, the results of the new test case generation method are compared with those of the random test case generation method to show how efficient the new approach is at reducing the difference. Only one usage model is used in this experiment, and even after the difference has reached the acceptance points, a fixed number of retests is run continuously to monitor how the differences are reduced. To compare the results, the intercept, the slope, and their confidence intervals are monitored every 2,000 test cases. These plots show how the usage model and the test model converge under both methods. In this study, a 90% confidence interval for the slope and the intercept is used.
Numerical results for the difference between the usage model and the test models are presented in Tables 4 and 5. Table 4 presents the slopes, the 90% confidence intervals, and the widths of the confidence intervals when the new method is used, whereas Table 5 presents the same quantities when the existing method is used. A graphical representation of Tables 4 and 5 is given in Figures 8, 9, 10, and 11.
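The slope and intercept values reported in Tables 4 and 5 are ordinary least-squares estimates with t-based confidence intervals. The sketch below shows how such intervals are computed; the data pairs and the t critical value are illustrative, since this excerpt does not spell out the paper's exact regression inputs:

```python
import math

def regression_with_ci(x, y, t_crit):
    """OLS slope/intercept with (estimate - t*SE, estimate + t*SE) intervals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    # Residual standard error uses n - 2 degrees of freedom.
    sse = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    s = math.sqrt(sse / (n - 2))
    se_slope = s / math.sqrt(sxx)
    se_int = s * math.sqrt(1 / n + mx ** 2 / sxx)
    return {
        "slope": (slope, slope - t_crit * se_slope, slope + t_crit * se_slope),
        "intercept": (intercept, intercept - t_crit * se_int, intercept + t_crit * se_int),
    }

# Illustrative data: y close to x means the two models nearly agree,
# i.e. slope near 1 and intercept near 0, as in Tables 4 and 5.
x = [float(i) for i in range(1, 21)]
y = [xi * 1.01 + 0.5 for xi in x]
# Two-sided 90% t critical value for 18 degrees of freedom (table lookup).
print(regression_with_ci(x, y, 1.734))
```

As retesting proceeds, a narrowing interval around slope 1 and intercept 0 is the signal, used in the tables, that the test model is approaching the usage model.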
TABLE 4 DIFFERENCE BETWEEN TEST CASES OF USAGE MODEL AND NEW TEST MODEL

No. of Test Cases | Slope   | 90% CI Low | 90% CI High | CI Width | Intercept | 90% CI Low | 90% CI High | CI Width
Initial           | 0.93314 | 0.88572    | 0.98057     | 0.09485  | 5.17460   | -4.50623   | 14.85540    | 19.36123
2000              | 0.98302 | 0.95765    | 1.00839     | 0.05074  | 1.68873   | -6.97656   | 10.35400    | 17.33056
4000              | 0.98878 | 0.97158    | 1.00598     | 0.03440  | 1.36145   | -6.93931   | 9.66221     | 16.60152
6000              | 0.99152 | 0.97868    | 1.00437     | 0.02569  | 1.21444   | -6.80879   | 9.23767     | 16.04646
8000              | 0.99311 | 0.98289    | 1.00333     | 0.02044  | 1.14281   | -6.69723   | 8.98285     | 15.68008
10000             | 0.99418 | 0.98576    | 1.00259     | 0.01683  | 1.09675   | -6.55749   | 8.75099     | 15.30848
12000             | 0.99495 | 0.98781    | 1.00208     | 0.01427  | 1.06508   | -6.44548   | 8.57563     | 15.02111
14000             | 0.99555 | 0.98938    | 1.00172     | 0.01234  | 1.03873   | -6.33643   | 8.41389     | 14.75032
16000             | 0.99613 | 0.99075    | 1.00150     | 0.01075  | 0.99310   | -6.20173   | 8.18792     | 14.38965
18000             | 0.99653 | 0.99172    | 1.00133     | 0.00961  | 0.97254   | -6.14714   | 8.09222     | 14.23936
20000             | 0.99687 | 0.99256    | 1.00119     | 0.00863  | 0.94848   | -6.06203   | 7.95899     | 14.02102
22000             | 0.99717 | 0.99325    | 1.00109     | 0.00784  | 0.92710   | -6.00271   | 7.85691     | 13.85962
24000             | 0.99740 | 0.99379    | 1.00100     | 0.00721  | 0.91551   | -5.97525   | 7.80627     | 13.78152
26000             | 0.99759 | 0.99427    | 1.00091     | 0.00664  | 0.90672   | -5.91130   | 7.72474     | 13.63604
28000             | 0.99781 | 0.99475    | 1.00086     | 0.00611  | 0.87905   | -5.83789   | 7.59598     | 13.43387
30000             | 0.99798 | 0.99513    | 1.00082     | 0.00569  | 0.86188   | -5.79628   | 7.52003     | 13.31631
TABLE 5 DIFFERENCES BETWEEN TEST CASES OF USAGE MODEL AND EXISTING TEST MODEL

No. of Test Cases | Slope   | 90% CI Low | 90% CI High | CI Width | Intercept | 90% CI Low | 90% CI High | CI Width
Initial           | 0.93314 | 0.88572    | 0.98057     | 0.09485  | 5.17460   | -4.50623   | 14.85540    | 19.36123
2000              | 0.96587 | 0.93837    | 0.99337     | 0.05500  | 3.51306   | -5.92465   | 12.95080    | 18.87545
4000              | 0.97713 | 0.95775    | 0.99652     | 0.03877  | 2.95072   | -6.45433   | 12.35580    | 18.81013
6000              | 0.98371 | 0.96895    | 0.99846     | 0.02951  | 2.52148   | -6.74121   | 11.78420    | 18.52541
8000              | 0.98732 | 0.97542    | 0.99922     | 0.02380  | 2.28319   | -6.88404   | 11.45040    | 18.33444
10000             | 0.98962 | 0.97963    | 0.99961     | 0.01998  | 2.13505   | -6.98829   | 11.25840    | 18.24669
12000             | 0.99165 | 0.98311    | 1.00018     | 0.01707  | 1.93239   | -7.08554   | 10.95030    | 18.03584
14000             | 0.99281 | 0.98530    | 1.00032     | 0.01502  | 1.84799   | -7.16741   | 10.86340    | 18.03081
16000             | 0.99361 | 0.98689    | 1.00033     | 0.01344  | 1.80486   | -7.22705   | 10.83680    | 18.06385
18000             | 0.99431 | 0.98826    | 1.00037     | 0.01211  | 1.74960   | -7.24838   | 10.74760    | 17.99598
20000             | 0.99492 | 0.98943    | 1.00041     | 0.01098  | 1.69096   | -7.26114   | 10.64310    | 17.90424
22000             | 0.99530 | 0.99023    | 1.00038     | 0.01015  | 1.68699   | -7.31698   | 10.69100    | 18.00798
24000             | 0.99575 | 0.99106    | 1.00044     | 0.00938  | 1.63406   | -7.35709   | 10.62520    | 17.98229
26000             | 0.99580 | 0.99141    | 1.00019     | 0.00878  | 1.72487   | -7.32134   | 10.77110    | 18.09244
28000             | 0.99581 | 0.99165    | 0.99996     | 0.00831  | 1.82978   | -7.33542   | 10.99500    | 18.33042
30000             | 0.99599 | 0.99206    | 0.99992     | 0.00786  | 1.85523   | -7.37874   | 11.08920    | 18.46794
FIG. 8 SLOPE
FIG. 12 INITIAL
Conclusions

ACKNOWLEDGMENT

References

Basili, V. R., Briand, L. C., and Melo, W. L. "How reuse influences productivity in object-oriented systems." Communications of the ACM 39 (1996): 104-116.

Pressman, R. Software Engineering: A Practitioner's Approach. McGraw-Hill.

Whittaker, J. A., and Thomason, M. G. "A Markov chain model for statistical software testing." IEEE Transactions on Software Engineering 20 (1994): 812-824.

Zhou, K., Wang, X., Hou, G., Wang, J., and Ai, S. "Software reliability based on Markov usage model." Journal of Software 7 (2012): 2061-2068.

http://sourceforge.net/projects/booch95/