
Software Testing: Building Quality

A GDI Infotech White Paper


ABSTRACT
A GLOBAL OUTSOURCING MODEL
BUILDING VALUE
OBJECTIVES OF TESTING
GETTING STARTED
TESTTRAX™ METHODOLOGY
    Phase 1: Develop a Testing Strategy
        The Testing Environment
        The Testing Team
        Requirements Review
        Testing Procedures and Standards
    Phase 2: Develop a Test Design
        Types of Tests
            Unit
            Functional
            User Acceptance
            System
            Integration
        Test Cases
        Test Data
        Test Scripts
    Phase 3: Build and Execute Unit, System, and Integration Tests
        Launching of Scripts
        Results
        Measurement Metrics
        Defect Log/Issue Tracking Database
        Database Reports
GDI’S TESTTRAX™ PROGRAM
SERVICE OFFERING
TESTING TOOLS
    Test Plan Documentation
CASE STUDIES
Abstract

This paper addresses the importance of software testing as part of the
development cycle and presents the TestTrax™ methodology, developed by GDI
Infotech, Inc., for achieving software quality.

Software applications play an increasingly important role in every organization.
The number of mission-critical applications – those with a high cost of failure
and a high cost to fix – has increased exponentially. Hence the need for
proactive quality assurance is greater than ever before. Finding defects earlier
in the development cycle saves an organization thousands of dollars.

GDI’s TestTrax methodology involves the following elements, each discussed in
detail in this white paper:
• Test Plans
• Test Specifications
• Test Cases
• Automated test scripts
• Testing based on manual and automated scripts
• Defect logs

For additional information, please contact GDI Infotech, Inc. at 800-608-7682,
extension 201, or at testing@gdii.com.



A Global Outsourcing Model

As monolithic enterprises that own all products and services become a thing of
the past, companies are evolving into a mesh of partnerships and outsourced
services. For many years, outsourcing was predominantly a means to manage and
optimize an enterprise’s ever-growing IT infrastructure and thus ensure
cost-effective operations. Today, outsourcing, as a business strategy and a
relationship model, has evolved into a dominant force in enterprise IT strategy.

Outsourcing plays a crucial role in achieving a broad variety of benefits in a
timely manner. Companies use outsourcing as a way to adopt new technologies and
to access processes and services viewed as non-core. Outsourcing is used as a
contracting model to acquire IT services from third parties to support business
goals.

Understanding the relative maturity of the different IT services available in
the market enables enterprises to establish reasonable expectations for the cost
and quality of the services to be delivered. As new services mature,
methodologies and standard procedures can be repeatedly tried, tested, and
refined.

Successful enterprises today are based on an extended, or ‘virtual’, enterprise
model in which a critical business competency is the ability to establish,
implement, and manage a comprehensive sourcing strategy.

Traditionally, the outsourcing market in the United States has been dominated by
large external service providers (ESPs) as well as a number of regional and
local companies. Recently, a strong offshore market has emerged, spurred by the
increased acceptance of offshore outsourcing. US vendors are implementing global
delivery models through geographically dispersed, offshore delivery centers,
with the goal of doing work where it makes economic sense and providing 24x7
service.



Building Value

Three major challenges in software development projects are the inability of
software systems to solve the intended business problem, the lack of software
reliability, and late or over-budget delivery. Most experienced managers,
developers, and test engineers have encountered these challenges. They face the
unenviable choice of releasing a product that has not been fully tested and may
be riddled with defects, or delaying the release.

Most problems with effectiveness, reliability, and cost are due to defects found
in the later phases of the development cycle. A defect may cost 10 to 100 times
as much to repair during the testing phase as it would if caught earlier, in the
design and coding phases. Studies have also demonstrated that the cost can grow
to 40 to 1,000 times the original if the defect is not found until after the
software has been released.¹

Figure: the cost of a defect rises steeply over the development cycle, from
design through construction to production.

There are two major industry trends adding to the pressure. The first is
accelerated release cycles: many changes take place in a short period and must
be incorporated and released quickly. Second, while releases are more frequent
and cycles shorter, the cost of failure has increased dramatically.

¹ B. Boehm, Software Engineering Economics, Prentice-Hall, 1981



Cutting costs and improving quality requires finding and correcting those defects
earlier in the development process. The way to facilitate early discovery of
defects is software quality assurance – a set of activities to monitor and control
software quality. Testing verifies that the user requirements are met, and it
maximizes software reliability and quality.



Objectives of Testing

Software development involves four types of work: design, coding, testing, and
maintenance. Most businesses focus on designing and coding the software, which
is all too often immediately followed by putting the new product into
production.

Figure: the four types of development work – design, coding, testing, and
maintenance.

Experience shows that testing on medium- and large-scale systems can consume
30-70% of a project’s development and implementation budget. However, most
businesses do not foresee the substantial amount of effort required. Instead,
they typically consider testing a follow-on activity and approach it in an ad
hoc manner.

The most effective way to reduce risk and costs is to start testing early in the
development cycle and to test successively, with every build. With this
approach, defects are removed as the features are implemented. When testing is
used to improve software quality earlier in development, the cost of testing and
correcting the software in the later phases of the cycle falls dramatically.
Testing early and testing with every iteration requires up-front planning
between developers and testers. The key advantages of early testing are:

• Risks are identified and reduced in the earliest stages of development.
• Repairs to problems are less costly.
• The release date can be predicted more accurately.
• The product development process is accelerated.

The software product under development should address the business problem and
satisfy the user requirements. The difference between the software as planned
and the state in which it has been verified is called the quality gap. A large
quality gap means that the application does not serve the business needs it was
built for. Testing is used as a means of closing the quality gap and ensuring
that the system can be used effectively in the target environment. The system is
formally tested for:

• Reliability – Does the application operate without crashing, hanging, or
other run-time errors?
• Functionality – Does the application meet the business requirements
established for it?
• Performance – Does the application respond in a timely manner?
• System performance – Does the application continue to perform correctly and
in a timely manner when subjected to production load?

Another objective of testing is to ensure operational reliability by uncovering
defects in the system. This objective is achieved by deliberately designing sets
of input data and rigorously testing the system against them. The defects
uncovered include:

• Logic errors,
• Coding errors,
• Technical language syntax errors, and
• Database integrity errors.

Testing should be viewed as a broad workflow encompassing a continuous series of
tests focused on identifying and eliminating defects, and assessing product
quality early and continuously throughout the development life cycle.



Getting Started

Testing effectiveness is measured by the difference between the required and the
actual test response. Testing requires examining software components with the
intent of finding errors, which explains why many people find it difficult; this
is particularly true when developers test their own work. Thus, to obtain a more
objective evaluation, independent test teams should test software whenever
possible.

Increased application complexity, shorter development cycles, and the
unacceptable cost of failure demand a well-thought-out methodology and a set of
automated tools for implementing effective software quality practices. The
testing process should include the following:

• Test plans
• Test specifications
• Test cases
• Automated test scripts
• Testing based on manual and automated scripts
• Defect logs

Test management should be integrated with the defect management system to ensure
that detected problems are tracked, fixed, and then re-tested. Additionally,
extensive and thorough status reports and graphs must be included so that
project leaders and managers can monitor the progress and effectiveness of
testing.



TestTrax™ Methodology

Phase 1: Develop a Testing Strategy


This phase includes determining what products to test and how and when to test
them. It also involves planning for and procuring test tools, developing a test
database, determining the team required to test, and developing test
specifications. Once the testing policy and team have been defined, the next
step is to define testing standards and procedures. Testing standards define how
to accomplish various activities in the development process. The professional
tester can use any technique that provides the desired effectiveness and
productivity. Standards are available for software testing (ISO 9000 and IEEE)
that form the baseline for measuring an organization’s compliance with a quality
management system. The test plan will also set the criteria for acceptable
success and failure rates and determine when to stop testing.

The Testing Environment


The testing environment determines the conditions under which a test is carried
out. These include all of the technologies: operating system software,
communications software, connectivity software, the database management system,
test tools, and other support software and hardware necessary for testing. A
major step in the planning phase is establishing the test environment. The test
environment should be separate from the development and production environments
and should be able to be restored easily without interrupting test cycles or
other activities occurring on the hardware or network. The test team will need
to be able to control the environment, including backups and restores. Depending
on whether tests can be run in parallel, multiple test environments may need to
be established. As much as possible, the test environments should parallel the
production environment in terms of type of computers, number of processors,
network bandwidth, and memory. For volume and stress tests, the environment will
need to reflect the production environment at projected capacity, with other
applications running simultaneously.



The Testing Team
The test team is a group of dedicated professionals with expertise in different
phases of the test and quality assurance process. Generally, an organization
should avoid testing its own software, because it will have difficulty testing
objectively. The test team should be as far removed from the development team as
possible in order to guarantee independent quality information and achieve
measurable improvements. The primary and secondary responsibilities of each
member of the testing team are described below:

Test Team Manager
The Test Team Manager is responsible for the overall testing methodology and the
delivery of the testing project. He or she has responsibility for general
project oversight and adherence to testing standards.

Application Team Leaders
These team members act as the liaison between the test team and the client
staff. They are required to have management experience in how QA is applied to
the software development life cycle.

Application Developers
The Application Developers are primarily responsible for the design of the test
scripts, the test cases, and test data. Usually, the Application Developer also
executes the unit tests.

Test Team
The test team has experience with specific tests (e.g., stress, load, and
performance) and with the automated tools necessary to execute them. The test
team also reports on the test results and manages the change control process.

Business Unit Representative
The requirements, acceptable failure rates, and acceptance test criteria for the
system are developed by the Business Unit Representative. This person also
reviews the test data and the test scripts to ensure that the tests accurately
reflect the specifications of the system.

Requirements Review
The first testing objective is to prove that the system addresses the business
problem and satisfies the user’s requirements. Too often, applications do not
serve these very important needs. An important aspect of a requirements review
is to determine what the system is supposed to do. During the planning process,
the test team will review the functional specifications, the design requirements,
and the documentation of the system. The goal of the testing is to verify that the
system can be used effectively in the target environment. The application's
support of business needs is addressed by formally testing the functionality,
operations, procedures, performance, usability, and maintainability of the
application.

Testing Procedures and Standards

ISO 9000, a widely adopted quality standard, focuses on three major areas of
quality management and quality assurance:
• Quality Management System (QMS) Framework
This describes requirements for a software developer to establish a quality
management system, including a plan for the system, organizational
requirements to support the plan, documentation, internal audits of the plan, and
so forth.
• Life Cycle Development
This addresses development planning, quality planning, design and
implementation, testing and validation, acceptance, delivery and installation, and
maintenance.
• Support Activities
This addresses such activities as configuration management, document control,
quality records, measurement, conventions, tools and techniques, purchasing,
and training.

Phase 2: Develop a Test Design


Types of Tests
Unit
Unit testing is the most basic level of code testing. It has two major objectives: to
verify that the application software component's code works according to its
specifications, and to validate the program's logic. Unit testing accomplishes
these objectives by pinpointing discrepancies between what an application
software component does and what it is supposed to do. The advantage of unit
testing is that it permits the testing and debugging of small units of code, thereby



providing a better way to manage the integration of the units into larger
components.

Functional
The objective of the Functional testing phase is to ensure that each element of
the application meets the functional requirements of the business as outlined in
the system specification documents and in any other functional documents
produced during the course of the project.

Once the Business Requirements have been successfully completed, design of the
Functional test plan may begin. Test plans will be used to validate that all
features within the system function as intended. Individual plans will be
tailored for each unique product to be tested.

The enforcement of rigorous, methodical, and thorough testing will ensure the
program operates according to the pre-defined set of specifications.
Examples of the types of Functional testing to be performed on the system
include:

• Menu availability
• Form accuracy
• Online Help
• Field input checking
• Hyperlink validation
• Security
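
As a minimal sketch of automating one item from this list, field input checking,
the Python code below validates form fields against a hypothetical functional
specification (the validate_policy_form routine and its rules are assumptions).

    import re
    import unittest

    def validate_policy_form(fields):
        """Hypothetical field-input check, per the functional specification."""
        errors = {}
        if not re.fullmatch(r"[A-Z]{2}-\d{6}", fields.get("policy_id", "")):
            errors["policy_id"] = "must match the XX-999999 format"
        if "@" not in fields.get("email", ""):
            errors["email"] = "must be a valid address"
        return errors

    class FieldInputCheckTest(unittest.TestCase):
        def test_valid_input_accepted(self):
            fields = {"policy_id": "MI-123456", "email": "user@example.com"}
            self.assertEqual(validate_policy_form(fields), {})

        def test_malformed_policy_id_rejected(self):
            fields = {"policy_id": "123", "email": "user@example.com"}
            self.assertIn("policy_id", validate_policy_form(fields))

    if __name__ == "__main__":
        unittest.main()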

User Acceptance
This testing, which is planned and often executed by the Business
Representative(s), ensures that the system operates in the manner expected and
that any supporting material, such as procedures and forms, is accurate and
suitable for the intended purpose. It is high-level testing, ensuring that there
are no gaps in functionality. The objective of an acceptance testing event is
the final acceptance of the system. Acceptance testing, like system testing,
should demonstrate that the application meets the original business objectives,
satisfies the user and IS environment requirements, and operates within the
constraints that were defined.

System
System Testing evaluates the functionality and performance of an application. It
encompasses usability testing, final requirements testing, volume and stress
testing, security and controls testing, recovery testing, documentation and
procedures testing, and multi-site testing.

Documentation and Procedural


The objective of documentation and procedures testing is to evaluate the
accuracy of the user and operations documentation and to determine whether
the manual procedures will work correctly as an integral part of the system.
Many defects are identified when a competent tester thoroughly checks the user
documentation and manuals. The writer looks at the system from a different
perspective than the programmer, so the manual may reveal different problems
than those that programmers and testers seek. The tester can test every
combination that the manual describes.

Volume
The objective of volume testing is to subject the system to heavy volumes of data
to show if the system can handle the volume of data specified in its objectives. Of
particular concern is the impact of the system on allocated storage and other
production applications.

Stress/Load
Stress testing determines how a system handles an unanticipated amount of
work. Criteria will be gathered to build a plan that will provide the necessary
information to allow the scaling of a system to meet its specific needs.

©2001 - 2004 GDI Infotech, Inc. All Rights Reserved. Page 13 of 27


The goal of the Stress testing is to provide an analysis of the breaking points
of a system. Using a set of tools specifically designed for this phase of
testing removes the need for end-user participation. Once the test data becomes
available, informed decisions can be made about the future of the software and
hardware of the system.

The testing strategy in the test plan will incorporate:

• Emulating tens, hundreds, or thousands of users interacting with the
application to generate load
• Comparing results from various runs
• Identifying the points at which the system will break or fail

Load testing will determine if your system can handle the expected amount of
work as defined by the specifications of your system. Having this information
available before release to a production environment can be critical to the
success of a project.

The plan developed will verify whether your system can handle that workload
through:

• Coordination of the operations of users
• Measurement of response times
• Repetition of tests in a consistent way
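
A minimal sketch of such a load run, using only Python's standard library, is
shown below; the URL, user count, and request count are assumptions for
illustration.

    import statistics
    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "http://localhost:8080/health"  # assumed endpoint under test
    VIRTUAL_USERS = 50
    REQUESTS_PER_USER = 20

    def virtual_user(_):
        """Emulate one user issuing repeated requests; return response times."""
        timings = []
        for _ in range(REQUESTS_PER_USER):
            start = time.perf_counter()
            try:
                urlopen(URL, timeout=10).read()
                timings.append(time.perf_counter() - start)
            except OSError:
                timings.append(float("inf"))  # a failure marks a breaking point
        return timings

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
            all_timings = [t for user in pool.map(virtual_user,
                                                  range(VIRTUAL_USERS))
                           for t in user]
        ok = [t for t in all_timings if t != float("inf")]
        print(f"completed {len(ok)}/{len(all_timings)} requests")
        if ok:
            print(f"median response {statistics.median(ok):.3f}s, "
                  f"worst {max(ok):.3f}s")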

Regression
The Regression phase is performed after the completion of Functional testing.
The testing is driven by the test plan and the scripts developed from it.

A Regression test will be performed after each new release to validate that:

• There is no negative impact on previously released software
• There is an increase in the functionality and stability of the software

The test plan developed for the Functional testing phase will also be used for
this phase. Each regression cycle performed will likely be unique. The testing
may consist of retesting the entire system or individual modules of the system.
The scope of the testing will be determined by the coding changes that have been
made to the system and by the collective decisions of the management team.
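
The sketch below shows one way a regression cycle might flag negative impact on
previously released software, assuming test outcomes are kept as a JSON baseline
from the prior release; the run_suite stub stands in for the real test plan.

    import json
    from pathlib import Path

    BASELINE = Path("baseline_results.json")  # results from the prior release

    def run_suite():
        """Stand-in: run the test plan and map case IDs to pass/fail."""
        return {"TC-001": "pass", "TC-002": "pass", "TC-003": "fail"}

    def regressions(current, baseline):
        """A regression is any case that passed before but fails now."""
        return sorted(case for case, result in current.items()
                      if result == "fail" and baseline.get(case) == "pass")

    if __name__ == "__main__":
        current = run_suite()
        if BASELINE.exists():
            baseline = json.loads(BASELINE.read_text())
            found = regressions(current, baseline)
            print("regressions:", found or "none")
        BASELINE.write_text(json.dumps(current, indent=2))  # next baseline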

Performance
The objective of application performance testing is to determine if your system
can handle the anticipated amount of work in the required time. The major
activities of performance testing are to: compare the system's actual performance
to the performance requirements, tune the system to improve the performance
measurements, and project the system's future load-handling capacity.
Some performance issues include:
• Logic errors
• Inefficient processing
• Poor design: too many interfaces, instructions, and I/Os
• Hardware bottlenecks: e.g., disk, CPU, I/O channels
• System throughput: number of transactions per second
• Response time: the time between pressing the Enter key and receiving a
system response
• Storage capacity
• Input/output data rate: how many I/Os per transaction
• Number of simultaneous transactions that can be handled
The reusable nature of GDI’s methodology allows the application of test plan and
script development concepts from the stress/load testing. A plan will be
developed that uses sets of control transactions to be measured against specific
load criteria on the system. Response times can then be measured under varying
loads on the system.

Strategies invoked in the planning include:

• Replacing manual testers with automated virtual users
• Automatically measuring transaction response time
• Easily repeating scenarios to validate design and performance changes
• Showing detailed performance results that can be easily understood and
analyzed to quickly pinpoint the root cause of problems
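
As a minimal sketch of automatic response-time measurement, the Python harness
below times a control transaction repeatedly and compares the result against a
performance requirement; the two-second requirement and the sample transaction
are assumptions.

    import statistics
    import time

    RESPONSE_TIME_REQUIREMENT = 2.0  # seconds, assumed from the spec

    def timed(transaction, runs=10):
        """Measure a control transaction and summarize its response times."""
        samples = []
        for _ in range(runs):
            start = time.perf_counter()
            transaction()
            samples.append(time.perf_counter() - start)
        return statistics.mean(samples), max(samples)

    def sample_transaction():
        time.sleep(0.05)  # stand-in for a real system transaction

    if __name__ == "__main__":
        mean, worst = timed(sample_transaction)
        verdict = "meets" if worst <= RESPONSE_TIME_REQUIREMENT else "misses"
        print(f"mean {mean:.3f}s, worst {worst:.3f}s: {verdict} requirement")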

Integration
Integration testing combines the individually tested units of software and tests
them as a complete system. It includes the integration of modules into programs,
programs into subsystems, and subsystems into the overall system. This testing
event uncovers errors that occur in the interactions and interfaces between
units, which cannot be found during unit testing.
The major objectives of integration testing are to verify that:
• Interfaces between application software components function properly
• Interfaces between the application and its users function properly

Test Cases
A test case is a specific set of test data, associated test procedures and
expected results, designed to test whether a particular objective is met correctly.
Just as software is designed, software tests can be designed. One of the most
important testing considerations is the design of effective test cases.
A good test case has the following characteristics:
• It is not too simple
• It is not too complex
• It has a reasonable chance of locating a defect
• It is not redundant (two tests should not test the same error)
• It makes defects obvious (it documents expected results)

Because of the infinite number of possible test cases and the time, cost, and
motivation required, complete testing is impossible. The key issue is finding
the subset of tests that has the highest probability of detecting the most
defects with the minimum amount of effort. Effective testing techniques act as
filters for selecting test cases intelligently. Once the outline of the test
plan for each phase of testing has been completed and approved, the individual
detail of each test case can be designed. All test cases will be created in a
straightforward manner. This approach generates cases that can be understood by
technical experts as well as by the users of your system.

Each test case will follow a standard:

• Concise objective
• Detailed steps
• Clear expected results
• Ability to be executed manually/automated

These test cases, and the test plan as a whole, will be readily accessible and
reviewable online. Once the test plan has received final approval, it will be
ready for either manual testing or for the QA Engineers to begin scripting
tasks.
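
As an illustration of this standard, a test case can be captured as a simple
data structure; the Python sketch below uses illustrative field names and a
hypothetical login case.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class TestCase:
        """One test case per the standard above; names are illustrative."""
        case_id: str
        objective: str           # concise objective
        steps: List[str]         # detailed steps
        expected: str            # clear expected results
        automated: bool = False  # executable manually or via automation

    login_case = TestCase(
        case_id="TC-LOGIN-001",
        objective="Verify that a registered user can log in",
        steps=["Open the login form",
               "Enter a valid user name and password",
               "Submit the form"],
        expected="The user's home page is displayed",
        automated=True,
    )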

Test Data
Data sets will be created during this phase of the methodology to simulate the
data inputs placed into the system by typical end users. This allows scripts to
be run multiple times with only the input data changing from one run to the
next.
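
A minimal sketch of this data-driven approach is shown below, re-using the
hypothetical apply_discount component from the unit-testing example; each row of
the assumed CSV data set supplies one run's inputs and expected output.

    import csv
    import io

    # Assumed test-data set; in practice this would live in an external file.
    TEST_DATA = io.StringIO(
        "subtotal,rate,expected\n"
        "100.00,0.10,90.00\n"
        "50.00,0.00,50.00\n"
        "80.00,0.25,60.00\n"
    )

    def apply_discount(subtotal, rate):
        """Hypothetical component under test (see the unit-testing sketch)."""
        return round(subtotal * (1 - rate), 2)

    for row in csv.DictReader(TEST_DATA):
        result = apply_discount(float(row["subtotal"]), float(row["rate"]))
        status = "PASS" if result == float(row["expected"]) else "FAIL"
        print(f"{row['subtotal']} at rate {row['rate']}: {status}")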



Test Scripts
The goal of the automated scripts is to save time and improve the reliability of
testing. One issue with staff during the manual testing of software is boredom,
where the repetition of steps eventually leads to mistakes. Manual testing can
also prove to be needlessly time consuming. Scripts developed from the test plan
detail can reduce the extent of these manual testing issues.

Change Management
Scripting tasks will be assigned to team members, who will apply change
management techniques to their development work. Using change management allows
the archiving of work that may be useful for testing past versions of the
system.

Code Reuse and Script Code Testing


The script development team will create code that has reusable components, is
well commented, and is tested thoroughly for correct operation. Testing the
script code is a critical part of this phase, as it eliminates questions during
the execution phase about whether an issue is due to script logic or system
logic. Each script will be run prior to the execution phase to validate its
correctness.

Portability
The portability of various components can be a critical factor in the testing of
a system. Scripts can be designed to test specific functionality across varying
hardware, operating systems, and web browsers.



Phase 3: Build and Execute Unit, System, and Integration Tests
Once the planning and design of the testing strategy has been completed,
execution may begin. Tools are used to automatically schedule script runs and to
record the results of the testing. Data generated within the testing application
suite detail the pass/fail rates of the individual test cases.

Launching of Scripts

The scripts will have the ability to run during normal business hours or in
unattended sessions. The error handling built into the scripting allows various
sets of scripts to be grouped and run together where that makes sense. Where the
environment can handle it, scripts can be run concurrently rather than end to
end. Running scripts in this fashion can yield great time savings in testing.
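
The sketch below illustrates grouped, unattended script launching in Python; the
script names and their grouping are assumptions, with scripts inside a group
presumed independent so that each group runs concurrently while the groups
themselves run end to end.

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    # Assumed grouping of independent scripts; dependent suites stay ordered
    # because the groups themselves run one after another.
    SCRIPT_GROUPS = [
        ["smoke_login.py", "smoke_search.py"],
        ["orders_suite.py", "billing_suite.py"],
    ]

    def run_script(path):
        """Launch one test script unattended and report its outcome."""
        result = subprocess.run(["python", path],
                                capture_output=True, text=True)
        return path, result.returncode == 0

    if __name__ == "__main__":
        for group in SCRIPT_GROUPS:
            with ThreadPoolExecutor(max_workers=len(group)) as pool:
                for path, passed in pool.map(run_script, group):
                    print(f"{path}: {'pass' if passed else 'fail'}")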

Results
The results will include whether the running of the test was successful, the
amount of time taken, and which team member launched the testing. All of the
results will be archived so they may be viewed at any future time.

Measurement Metrics
Once the scripts have been executed, software metric data shall be collected to
support the quantitative evaluation and analysis of trends across the entire
testing process. Metrics to be collected may include, but are not limited to:

• Number of testing requirements established/modified/deleted
• Defect rates
• Software defects found by week
• Software defects found by type
• Software defects found by release
• Software defects found vs. module size
• Testing coverage
• Problem reports opened/closed/remaining open/cumulative
• Staff effort data

The collection, reporting, and analysis of metrics are automated to the fullest
extent practicable and performed on a timely basis. Many of the metrics reports
can be generated with the incorporated tools. Line and scatter graphs will be
among the presentation formats.
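
As a minimal sketch, several of these metrics can be derived directly from
defect-log records; the Python example below uses placeholder entries for
illustration.

    from collections import Counter
    from datetime import date

    # Placeholder defect-log records: (date found, defect type, release).
    DEFECTS = [
        (date(2004, 3, 1), "logic", "1.2"),
        (date(2004, 3, 3), "coding", "1.2"),
        (date(2004, 3, 9), "database integrity", "1.2"),
        (date(2004, 3, 10), "logic", "1.3"),
    ]

    by_week = Counter(found.isocalendar()[1] for found, _, _ in DEFECTS)
    by_type = Counter(kind for _, kind, _ in DEFECTS)
    by_release = Counter(release for _, _, release in DEFECTS)

    print("defects by week:", dict(by_week))
    print("defects by type:", dict(by_type))
    print("defects by release:", dict(by_release))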

Defect Log/Issue Tracking Database
An application will be used to track the issues and defects encountered during
the testing of the system. The application will require:

• Security features to control field access for specific users and groups
• Application data made available via a LAN/WAN
• If required, secure web access
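
As a minimal sketch of such an application's core, the Python code below creates
a defect table in SQLite; the field names and sample values are assumptions, and
a real deployment would add the access-control and network layers listed above.

    import sqlite3

    conn = sqlite3.connect(":memory:")  # a real system would use a shared DB
    conn.execute("""
        CREATE TABLE defect (
            id          INTEGER PRIMARY KEY,
            project     TEXT NOT NULL,
            reported_on TEXT NOT NULL,        -- ISO date
            severity    TEXT CHECK (severity IN ('low', 'medium', 'high')),
            status      TEXT DEFAULT 'open',  -- open/fixed/re-tested/closed
            reported_by TEXT,
            assigned_to TEXT,
            summary     TEXT
        )""")
    conn.execute(
        "INSERT INTO defect (project, reported_on, severity, summary) "
        "VALUES (?, ?, ?, ?)",
        ("claims-portal", "2004-03-10", "high",
         "Form rejects valid policy IDs"),
    )
    conn.commit()
    open_count = conn.execute(
        "SELECT COUNT(*) FROM defect WHERE status = 'open'").fetchone()[0]
    print("open defects:", open_count)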

Database Reports
Standard reports will be presented after the execution of each phase of testing.
The reports will be presented so that the audience is able to make assessments
about the state of the project. Key fields will be used to group reports, and
the available reports can be represented by the following examples:

• Project/application
• Date
• Priority/severity
• Status
• Reporting individual
• Assigned to correct
• Issue introduction
• Issue discovery



The list of available reports will be customized to the specific needs of each
project.

GDI’s TestTrax™ Program

TestTrax™ is a rapid, reliable, and cost-effective alternative to performing
software testing solely in-house. GDI’s implementation of industry-standard
tools, attention to producing quality, detailed documentation, and a delivery
model matched to specific needs ensure that our customers release quality
software even under heavy time constraints. TestTrax™ provides capabilities that
allow software projects to be completed quickly, shortening time-to-market or
internal deployment, achieving high levels of quality, and in most cases
lowering the total cost of operations.

GDI has set up a ‘Software Test Factory’ that provides 24x7 support for software
testing. Experienced QA engineers are assigned to a customer’s project to
provide detailed documentation, including measurement metrics, test plans, and
test cases. Test scripts are also developed for automated testing. Testing is
performed thoroughly and continuously, with the goal of providing complete test
coverage. The defect log is professionally documented. On-line access to test
results is available to the customer during the course of testing.

Figure: the TestTrax™ workflow, linking the design specification document, the
test plan, detailed test cases, automated test scripts, execution, the defect
log, and the documentation delivered to the development team.



GDI’s flexible approach allows for the ramp-up or ramp-down of these resources
during the customer’s software development life cycle. GDI also makes use of its
offshore resources, making the entire testing process cost-effective.

Service Offering

• Preparing test plans
• Preparing detailed test cases
• Developing automated test scripts
• Execution and defect log generation for functional tests, application
performance tests, load/stress tests, user acceptance tests, and regression
tests
• Preparation of reports

Testing Tools

GDI makes expert use of industry-leading testing applications during a TestTrax™
project. The use of these tools provides consistency and significant time
savings in the testing of customer systems. The GDI team has proven experience
using offerings from Rational, Mercury, and other vendor-supported test tools.
The TestTrax™ methodology has been developed so that any combination of customer
tools and market tools can be successfully implemented.

TestTrax is positioned to provide the expert use of testing best practices by


applying the most efficient tools for the testing of your system. The tools used
have been proven over time, and have been created by the industry leaders of
the Quality Assurance discipline.



Test Plan Documentation
During each phase of test plan development, exacting standards will be followed
to deliver the best plan possible. Analysts will provide planning expertise
using proven quality methods:

• Review of requirements
• Creation of high-level outlines that provide the basis for specific test
cases to be built upon
• Development of plans to facilitate either automated or manual testing
• Review and signoff of test plan development



Case Studies

STATEWIDE INSURANCE ORGANIZATION – PRODUCTION RELEASE TESTING ASSESSMENT

Background
A large statewide, franchised insurance company with a large flow of production
applications was concerned about the low volume of interoperability testing
being performed on its developed applications. To ensure proper quality
assurance and to mitigate risk, the company needed a defined strategy to
increase its testing throughput in order to handle every program that was
released.

Problem
• The existing testing model covered only 10% of production applications
• Personnel headcount could not be significantly increased
• The overall effectiveness of the testing performed needed to be maintained

Solution
• A dedicated project team performed an assessment to determine the current
state of quality assurance practices. This was done by interviewing key staff,
reviewing process documents, and understanding the interaction between testing
lab administration and personnel.
• Initial findings were compared against industry-standard processes and
procedures to determine areas for improvement, along with a qualified gap
analysis.
• The team provided a final results document including all findings, areas of
strength, areas needing improvement, and a detailed, cost-prioritized roadmap
for implementation.

Results
• Implementation of the recommended strategies is expected to result in a
1000% increase in testing capacity with only a 142% increase in staff.
• Once fully implemented, the company will enjoy 100% testing coverage of all
production releases.
• Most of the gains came directly from improved process documentation and
increased procedural efficiency, resulting in a very low resource investment
coupled with an excellent return.



AUTOMOTIVE MANUFACTURER – JUST-IN-TIME INVENTORY CONTROL SYSTEM

Background
An automotive manufacturer was seeking outsourced quality assurance for
implementation of a just-in-time inventory control system. The system was to be
developed off-shore with testing performed domestically. Deployment was to be
performed at automotive assembly plants worldwide.

Problem
• Merging of multiple technologies: computer software and hardware along with
assembly plant hardware (PLC modules, robotics, etc.)
• Physical, cultural, and time-zone separation of the development and testing
teams

Solution
• Defined a “Component Matrix” identifying each component and its
functionality, then generated unit test scripts across like components,
beginning with a high-level view and drilling down to lower levels
• Implemented automated testing to ensure proper and efficient testing of
routine functionality, static screens, and data tables, using proven coding
practices, clear documentation, and code-function reusability

Results
• Time spent on component testing was integral in reducing defects from
previous development deliveries by 50%
• With fewer defects, the time gained by the testing team was invested in
improving the automated testing
• On-time, on-budget delivery with excellent user acceptance at seven assembly
plants around the world
• Deliverables, including unit test scripts and processes, were retained and
reused
• Higher profit from minimizing “sleeping inventory” in supply chains, as well
as from maximizing utilization of existing plant floor space for increased
assembly capacity

