Building Quality
As monolithic enterprises that own all of their products and services become a
thing of the past, companies are evolving into a mesh of partnerships and
outsourced services. For many years, outsourcing was predominantly a means to
manage and optimize an enterprise's ever-growing IT infrastructure and thus
ensure cost-effective operations. Today, outsourcing, as both a business strategy
and a relationship model, has become a dominant force in enterprise IT
strategy.
Traditionally, the outsourcing market in the United States has been dominated by
large external service providers (ESPs) as well as a number of regional and local
companies. Recently, a strong offshore market has emerged, spurred by the
increased acceptance of offshore outsourcing. US vendors are implementing
global delivery models through geographically dispersed, offshore delivery
centers, with the goal of doing work where it makes economic sense and providing
24x7 service.
Most problems with effectiveness, reliability, and cost are due to defects found in
later phases of the development cycle. Software defects may cost 10 to 100 times
as much to repair during the testing phase as they would if caught earlier, in the
design and coding phases. Studies have also demonstrated that the cost can grow
to 40 to 1,000 times the original if a defect is not found until after the software has
been released.1
[Figure: Cost of Defect rising across the Development Cycle, from Design through Construction to Production]
There are two major industry trends adding to this pressure. The first is
accelerated release cycles: many changes must be incorporated and released
within a short period. Secondly, while releases are more frequent and cycles
shorter, the cost of failure has increased dramatically.
1 B. Boehm, Software Engineering Economics, Prentice-Hall, 1981.
Software development involves four types of work: design, coding, testing, and
maintenance. Most businesses focus on designing and coding the software,
which is all too often immediately followed by putting the new product into
production.
Experience shows that testing on medium- and large-scale systems can consume
30-70% of a project's development and implementation budget. However, most
businesses do not foresee the substantial effort required. Instead, they typically
treat testing as a follow-on activity and approach it in an ad hoc manner.
The most effective way to reduce risk and cost is to start testing early in the
development cycle and to test successively, with every build. With this approach,
defects are removed as features are implemented, so the cost of testing and
correcting the software in later phases of the cycle falls dramatically. Testing
early and testing with every iteration requires up-front planning between
developers and testers.
The software product under development should address the business problem
and satisfy the user requirements. The difference between the software as
planned and the state in which it has been verified is called the quality gap.
Large quality gaps mean that the applications do not serve the business needs.
Testing at this stage targets defects such as:
• Logic errors
• Coding errors
• Technical language syntax errors
• Database integrity errors
Typical testing deliverables include:
• Test plans
• Test specifications
• Test cases
• Automated test scripts
• Testing based on manual and automated scripts
• Defect logs
Application Developers
The Application Developers are primarily responsible for the design of the test
scripts, the test cases, and test data. Usually, the Application Developer also
executes the unit tests.
Test Team
The test team has experience in specific tests (e.g., stress, load, performance)
and with the automated tools necessary to execute those tests. The test team
will also report on the test results and manage the change control process.
Requirements Review
The first testing objective is to prove that the system addresses the business
problem and satisfies the user’s requirements. Too often, applications do not
serve these very important needs. An important aspect of a requirements review
is to determine what the system is supposed to do. During the planning process,
the test team will review the functional specifications, the design requirements,
and the documentation of the system. The goal of the testing is to verify that the
system can be used effectively in the target environment. The application's
support of business needs is addressed by formally testing its functionality.
Functional
The objective of the Functional testing phase is to ensure that each element of
the application meets the functional requirements of the business as outlined in
the systems specification documents, and any other functional documents
produced during the course of the project.
Rigorous, methodical, and thorough testing ensures the program operates
according to the pre-defined set of specifications.
Examples of types of Functional testing to be performed on the system (one of
these checks is sketched after the list):
• Menu availability
• Form accuracy
• Online Help
• Field input checking
• Hyperlink validation
• Security
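As an illustration, the sketch below shows how one of these checks, hyperlink
validation, might be automated. It is a minimal sketch using only the Python
standard library; the URL list is a placeholder, not part of any customer system.

```python
# Minimal hyperlink-validation sketch using only the Python standard library.
# The URLs below are placeholders for links harvested from the application.
import urllib.request
import urllib.error

LINKS_TO_CHECK = [
    "https://example.com/",
    "https://example.com/help",
]

def check_link(url, timeout=10):
    """Return (url, status) where status is the HTTP code or an error string."""
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return url, response.status
    except urllib.error.URLError as exc:
        return url, str(exc)

if __name__ == "__main__":
    for url in LINKS_TO_CHECK:
        print(check_link(url))
```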
User Acceptance
This testing, which is planned and often executed by the Business
Representative(s), ensures that the system operates in the manner expected and
that any supporting material, such as procedures and forms, is accurate and
suitable for its intended purpose. It is high-level testing, ensuring that there are
no gaps in functionality. The objective of an acceptance testing event is the final
acceptance of the system by the business.
System
System Testing evaluates the functionality and performance of an application. It
encompasses usability testing, final requirements testing, volume and stress
testing, security and controls testing, recovery testing, documentation and
procedures testing, and multi-site testing.
Volume
The objective of volume testing is to subject the system to heavy volumes of data
to show whether it can handle the volume of data specified in its objectives. Of
particular concern is the impact of the system on allocated storage and on other
production applications.
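A minimal sketch of how such a volume might be generated, assuming SQLite as
a stand-in for the real data store; the table, row count, and payload size are
illustrative only:

```python
# Volume-test sketch: load a large number of synthetic rows and observe
# storage growth. SQLite stands in for the real data store in this example.
import os
import sqlite3

DB_PATH = "volume_test.db"
ROW_COUNT = 1_000_000  # adjust to the volume stated in the system objectives

conn = sqlite3.connect(DB_PATH)
conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany(
    "INSERT INTO orders (payload) VALUES (?)",
    (("x" * 200,) for _ in range(ROW_COUNT)),  # 200-byte synthetic payloads
)
conn.commit()
conn.close()

print(f"database size: {os.path.getsize(DB_PATH) / 1e6:.1f} MB")
```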
Stress/Load
Stress testing determines how a system handles an unanticipated amount of
work. Criteria will be gathered to build a plan that will provide the necessary
information to allow the scaling of a system to meet its specific needs.
Load testing will determine whether your system can handle the expected amount
of work as defined by its specifications. Having this information available before
release to a production environment can be critical to the success of a project.
The plan developed will verify that your system can handle that workload; a
minimal sketch of such a test follows.
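This is one possible shape for such a verification; `call_system`, the user count,
and the timings are placeholders for real workload parameters.

```python
# Load-test sketch: simulate the expected number of concurrent users and
# report the success rate. `call_system` is a hypothetical stand-in for a
# real transaction against the system under test.
import concurrent.futures
import random
import time

EXPECTED_CONCURRENT_USERS = 50  # taken from the system's workload specification

def call_system(user_id):
    """Placeholder transaction; replace with a real request to the system."""
    time.sleep(random.uniform(0.01, 0.05))  # simulated service time
    return True  # True means the transaction succeeded

with concurrent.futures.ThreadPoolExecutor(max_workers=EXPECTED_CONCURRENT_USERS) as pool:
    results = list(pool.map(call_system, range(EXPECTED_CONCURRENT_USERS)))

success_rate = sum(results) / len(results)
print(f"success rate under expected load: {success_rate:.0%}")
```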
Regression
The Regression phase is performed after the completion of Functional testing and
reuses the test plan and scripts developed for that phase. Each regression cycle
performed will likely be unique: the testing may consist of retesting the entire
system or individual modules of it. The scope of the testing will be determined by
the coding changes that have taken place against the system and by the
collective decisions of the management team; the sketch below illustrates one
way to scope a cycle.
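One hypothetical way to derive the regression subset from the set of changed
modules; the module-to-test mapping below is invented for illustration:

```python
# Regression-scoping sketch: pick which test modules to re-run based on the
# code that changed. The module-to-test mapping is hypothetical.
CHANGED_MODULES = {"billing", "reports"}  # e.g., derived from version control

TESTS_BY_MODULE = {
    "billing": ["test_invoices.py", "test_payments.py"],
    "reports": ["test_monthly_report.py"],
    "login":   ["test_authentication.py"],
}

def select_regression_tests(changed, mapping):
    """Return the regression subset covering the changed modules."""
    selected = []
    for module in sorted(changed):
        selected.extend(mapping.get(module, []))
    return selected

print(select_regression_tests(CHANGED_MODULES, TESTS_BY_MODULE))
```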
Performance
The objective of application performance testing is to determine if your system
can handle the anticipated amount of work in the required time. The major
activities of performance testing are to: compare the system's actual performance
to the performance requirements, tune the system to improve the performance
measurements, and project the system's future load-handling capacity.
Some performance issues include:
• Logic errors
• Inefficient processing
• Poor design: too many interfaces, instructions, and I/O's
• Hardware bottlenecks: e.g., disk, CPU, I/O channels
• System throughput: number of transactions per second
• Response time: the time between pressing the enter key and receiving a
system response
• Storage capacity
• Input/output data rate: how many I/Os per transaction
• Number of simultaneous transactions that can be handled
The reusable nature of GDI's methodology allows the test plan and script
development concepts from the stress/load testing to be applied here. A plan will
be developed that uses sets of control transactions to be measured against the
stated performance requirements; a minimal sketch follows.
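A minimal sketch of such a measurement, assuming a placeholder
`run_transaction` control transaction and an illustrative 0.5-second
response-time requirement:

```python
# Performance-test sketch: compare measured response times against a stated
# requirement. `run_transaction` is a placeholder for one control transaction.
import statistics
import time

RESPONSE_TIME_REQUIREMENT = 0.5  # seconds, from the performance requirements

def run_transaction():
    """Placeholder for one control transaction against the system."""
    time.sleep(0.02)  # simulated work

durations = []
for _ in range(100):
    start = time.perf_counter()
    run_transaction()
    durations.append(time.perf_counter() - start)

mean = statistics.mean(durations)
p95 = sorted(durations)[int(len(durations) * 0.95)]
print(f"mean {mean * 1000:.1f} ms, p95 {p95 * 1000:.1f} ms")
print("requirement met" if p95 <= RESPONSE_TIME_REQUIREMENT else "requirement missed")
```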
Integration
Integration testing exercises the combined, unit-tested pieces of software as a
complete system. It includes the integration of modules into programs, programs
into subsystems, and subsystems into the overall system. This testing event
uncovers errors that occur in the interactions and interfaces between units, which
cannot be found during unit testing; a sketch follows the objectives below.
The major objectives of integration testing are to verify that:
• Interfaces between application software components function properly
• Interfaces between the application and its users function properly.
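A minimal pytest-style sketch of the idea: two hypothetical, already unit-tested
components are exercised together so that the interface between them is what is
actually under test.

```python
# Integration-test sketch (pytest style): two unit-tested components are
# exercised together; both classes are hypothetical stand-ins for real
# application components.

class TaxCalculator:
    def tax_for(self, amount):
        return round(amount * 0.08, 2)

class InvoiceBuilder:
    def __init__(self, calculator):
        self.calculator = calculator  # the interface under test

    def total(self, amount):
        return amount + self.calculator.tax_for(amount)

def test_total_includes_tax_from_calculator():
    builder = InvoiceBuilder(TaxCalculator())
    assert builder.total(100.00) == 108.00
```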
Test Cases
A test case is a specific set of test data, associated test procedures and
expected results, designed to test whether a particular objective is met correctly.
Just as software is designed, software tests can be designed. One of the most
important testing considerations is the design of effective test cases.
A good test case has the following characteristics:
• It is not too simple
• It is not too complex
• It has a reasonable chance of locating a defect
• It is not redundant (two tests should not test for the same error)
• It has a concise objective
• It has detailed steps
• It has clear expected results
• It can be executed manually or automated
These test cases, and the test plan as a whole, will be readily accessible and
reviewable online. Once the test plan has received final approval, it will be ready
either for manual testing or for the QA Engineers to begin scripting tasks.
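A minimal pytest-style sketch of a test case with these characteristics;
`create_account` and `login` are hypothetical stand-ins for the system under test:

```python
# Test-case sketch (pytest style): a concise objective, detailed steps, and a
# clear expected result. The two functions below are hypothetical stand-ins.

def create_account(username, password):
    return {"username": username, "password": password}

def login(account, password):
    return account["password"] == password

def test_login_rejects_wrong_password():
    """Objective: a wrong password must not grant access."""
    # Step 1: create a known account
    account = create_account("jdoe", "correct-horse")
    # Step 2: attempt to log in with an incorrect password
    granted = login(account, "wrong-password")
    # Expected result: access is denied
    assert granted is False
```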
Test Data
Data sets will be created during this phase of the methodology to simulate the
inputs that typical end users place into the system. This allows scripts to be run
multiple times with only the input data changing from one run to the next, as in
the sketch below.
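A pytest-style sketch of this data-driven approach; the ZIP-code field rule is a
hypothetical example of field-input checking:

```python
# Data-driven sketch (pytest style): the same script runs repeatedly with only
# the input data changing. The field rule below is hypothetical.
import pytest

def is_valid_zip_code(value):
    """Stand-in for the field-input check in the system under test."""
    return value.isdigit() and len(value) == 5

@pytest.mark.parametrize(
    "value, expected",
    [
        ("12345", True),   # typical end-user input
        ("1234", False),   # too short
        ("12a45", False),  # non-numeric
    ],
)
def test_zip_code_field(value, expected):
    assert is_valid_zip_code(value) == expected
```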
Change Management
Scripting tasks will be assigned to team members, who will apply change
management techniques to their development work. Using change management
allows the archiving of work that may be useful for testing past versions of the
system.
Portability
Portability of components across environments can be a critical factor in the
testing of a system. Scripts can be designed to test specific functionality across
varying hardware, operating systems, and web browsers, as sketched below.
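A pytest-style sketch of such cross-environment parametrization; `open_browser`
is a hypothetical factory standing in for whatever automation tool the project
uses:

```python
# Portability sketch (pytest style): one script parametrized across target
# browsers. `open_browser` is a hypothetical stand-in for a real driver factory.
import pytest

SUPPORTED_BROWSERS = ["chrome", "firefox", "edge"]

def open_browser(name):
    """Hypothetical factory returning a driver for the named browser."""
    return {"name": name, "can_render": True}

@pytest.mark.parametrize("browser", SUPPORTED_BROWSERS)
def test_home_page_renders(browser):
    driver = open_browser(browser)
    assert driver["can_render"]
```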
Launching of Scripts
The scripts will have the ability to run during normal business hours or in
unattended sessions. The error handling built into the scripting allows sets of
scripts to be grouped and run together where that is sensible. Where the
environment can handle it, scripts can be run concurrently instead of end-to-end;
running scripts in this fashion can provide great time savings in testing.
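A minimal sketch of concurrent script launching using Python's standard library;
the script names are placeholders:

```python
# Script-launching sketch: where the environment allows it, independent script
# groups run concurrently rather than end-to-end. Names are placeholders.
import concurrent.futures
import subprocess

SCRIPT_GROUP = ["smoke_tests.py", "form_tests.py", "link_tests.py"]

def run_script(script):
    """Launch one test script as its own process and report success."""
    result = subprocess.run(["python", script], capture_output=True)
    return script, result.returncode == 0

with concurrent.futures.ThreadPoolExecutor() as pool:
    for script, passed in pool.map(run_script, SCRIPT_GROUP):
        print(f"{script}: {'passed' if passed else 'failed'}")
```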
Results
The results will include whether the run was successful, the amount of time
taken, and which team member launched the testing. All results will be archived
so they may be viewed at any future time.
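One possible shape for such an archive, a minimal JSON-lines sketch; the file
name and field names are assumptions:

```python
# Results-archiving sketch: each run records whether it succeeded, how long it
# took, and who launched it, appended to a JSON-lines archive for later review.
import json
import time
from datetime import datetime, timezone

ARCHIVE_PATH = "test_results.jsonl"  # hypothetical archive location

def archive_result(test_name, passed, duration_seconds, launched_by):
    record = {
        "test": test_name,
        "passed": passed,
        "duration_seconds": round(duration_seconds, 3),
        "launched_by": launched_by,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with open(ARCHIVE_PATH, "a") as archive:
        archive.write(json.dumps(record) + "\n")

start = time.perf_counter()
# ... run a test here ...
archive_result("link_tests", True, time.perf_counter() - start, "qa_engineer_1")
```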
Measurement Matrices
Once the scripts have been executed, software metric data shall be collected to
support quantitative evaluation and trend analysis for the entire testing process.
The collection, reporting, and analysis of metrics are automated to the fullest
extent practicable and performed on a timely basis. Many of the metrics reports
can be generated with the incorporated tools; line and scatter graphs will be
among the presentation formats.
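A minimal sketch of automated metric aggregation, reading the hypothetical
archive from the previous sketch; both the file and field names are assumptions:

```python
# Metrics sketch: aggregate archived results into simple trend figures
# (pass rate and mean duration). Reads the JSON-lines archive shown earlier.
import json
import statistics

with open("test_results.jsonl") as archive:
    records = [json.loads(line) for line in archive]

pass_rate = sum(r["passed"] for r in records) / len(records)
mean_duration = statistics.mean(r["duration_seconds"] for r in records)
print(f"pass rate: {pass_rate:.0%}, mean duration: {mean_duration:.2f}s")
```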
Database Reports
Standard reports will be presented after the execution of each phase of testing,
enabling the audience to assess the state of the project. Key fields will be used
to group the reports; representative fields include:
• Project/application
• Date
• Priority/severity
• Status
• Reporting individual
• Individual assigned to correct the issue
• Phase in which the issue was introduced
• Phase in which the issue was discovered
GDI has set up a 'Software Test Factory' that provides 24x7 support for software
testing. Experienced QA engineers are assigned to a customer's project to
provide detailed documentation, including measurement matrices, test plans, and
test cases. Test scripts are also developed for automated testing. Testing is
performed thoroughly and continuously, with the goal of providing complete test
coverage. The defect log is professionally documented, and online access to test
results is available to the customer during the course of testing.
[Figure: Test Factory workflow linking the design specification document, test plan, defect log, and documentation with the development team]
Service Offering
• Preparation of reports
Testing Tools
GDI applies expert use of industry-leading testing applications during the
TestTrax™ project. Tool use provides consistency and significant time savings in
the testing of customer systems. The GDI team has proven experience using
offerings from Rational, Mercury, and other vendor-supported test tools. The
TestTrax™ methodology has been developed so that any combination of
customer tools and market tools can be successfully implemented.
Test plan development typically includes:
• Review of requirements
• Creation of high-level outlines that provide the basis for specific test cases to
be built upon
• Development of plans to facilitate either automated or manual testing
• Review and signoff of test plan development
Background
A large, statewide franchised insurance company with a heavy flow of production
applications was concerned about the low volume of interoperability testing being
performed on its developed applications. To ensure proper quality assurance and
to mitigate risk, the company needed a defined strategy to increase its testing
throughput so that every program released could be covered.
Problem
• The existing testing model covered only 10% of production applications
• Personnel headcount could not be significantly increased
• The overall effectiveness of the testing performed needed to be maintained
Solution
• A dedicated project team performed an assessment to determine the current
state of quality assurance practices by interviewing key staff, reviewing process
documents, and understanding the interaction between testing lab administration
and personnel
• Initial findings were compared against industry-standard processes and
procedures to determine areas for improvement, along with a qualified gap
analysis
• The team provided a final results document including all findings, areas of
strength, areas needing improvement, and a detailed, cost-prioritized roadmap
for implementation
Results
• Implementation of the recommended strategies is expected to result in a
1000% increase in testing capacity with only a 142% increase in staff
• Once fully implemented, the company will enjoy 100% testing coverage of all
production releases
• Most gains came directly from improved process documentation and increased
procedure efficiency, resulting in very low resource investment coupled with
excellent return
Background
An automotive manufacturer was seeking outsourced quality assurance for the
implementation of a just-in-time inventory control system. The system was to be
developed offshore, with testing performed domestically, and deployment was to
take place at automotive assembly plants worldwide.
Problem
• Merging of multiple technologies: computer software and hardware along with
assembly-plant hardware (PLC modules, robotics, etc.)
• Physical, cultural, and time separation of the development and testing teams
Solution
• Defined a "Component Matrix" cataloging each component and its functionality,
then generated unit test scripts across like components, beginning with a
high-level view and drilling down to lower levels
• Implemented automated testing to ensure proper and efficient testing of routine
functionality, static screens, and data tables, using proven coding practices,
clear documentation, and code-function reusability
Results
• Time spent on component testing was integral in reducing defects from
previous development deliveries by 50%
• With fewer defects, the time gained by the testing team was invested in
improving the automated testing
• On-time, on-budget delivery with excellent user acceptance at seven assembly
plants around the world
• Deliverables, including unit test scripts and processes, were retained and
reused
• Higher profit through minimizing "sleeping inventory" in supply chains and
maximizing utilization of existing plant floor space for increased assembly
capacity