
Software Testing

Software testing is a process of technical investigation, performed on behalf of stakeholders, that is intended to reveal quality-related information about the product with respect to the context in which it is intended to operate.

Bug
An error or defect in software or hardware that causes a program to malfunction.

Error
It refers to an incorrect action or calculation performed by software.

Fault
An accidental condition that causes a functional unit to fail to perform its required function.


  


Testing Principles

A number of testing principles have been suggested over the past 40 years and offer general guidelines common to all testing.

Principle 1: Testing shows the presence of defects
Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, it is not a proof of correctness.

Principle 2: Exhaustive testing is impossible
Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus testing efforts.
Principle 3: Early testing
Testing activities should start as early as possible in the software or system development life cycle, and should be focused on defined objectives.

Principle 4: Defect clustering
A small number of modules contain most of the defects discovered during pre-release testing, or are responsible for most operational failures.

Principle 5: Pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new defects. To overcome this "pesticide paradox", the test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to potentially find more defects.

Principle 6: Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.

Principle 7: Absence-of-errors fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfill the users' needs and expectations.


Cost of Quality
Prevention Costs
Money required to prevent errors and to do the job right the first time.
Ex.: establishing methods and procedures, training workers, acquiring tools.

Appraisal Costs
Money spent to review completed products against requirements.
Ex.: cost of inspections, testing, reviews.



Failure Costs
All costs associated with defective products that have been delivered to the user or moved into production.
Ex.: repairing cost, cost of operating faulty products, damage incurred by using them.

Software Quality Factors



Correctness

Reliability

Efficiency

Integrity

Usability

Ôaintainability

Testability

Flexibility

Portability

Reusability

Interoperability

The Fundamental Test Process

The fundamental test process consists of the following main activities:
o planning and control;
o analysis and design;
o implementation and execution;
o evaluating exit criteria and reporting;
o test closure activities.

Software Development Life Cycle Models

The Waterfall Model

In the Waterfall approach, the whole process of software development is divided into separate process phases. The phases in the Waterfall model are: Requirement Specifications, Software Design, Implementation and Testing & Maintenance. All these phases are cascaded to each other, so that the second phase starts only when the defined set of goals for the first phase has been achieved and signed off; hence the name "Waterfall Model".

 
Requirement Specifications: All possible requirements of the system to be developed are captured in this phase. Requirements are the set of functionalities and constraints that the end user (who will be using the system) expects from the system. The requirements are gathered from the end user by consultation; these requirements are analyzed for their validity, and the possibility of incorporating them in the system to be developed is also studied. Finally, a Requirement Specification document is created, which serves as a guideline for the next phase of the model.

System & Software Design: Before starting actual coding, it is highly important to understand what we are going to create and what it should look like. The requirement specifications from the first phase are studied in this phase and the system design is prepared. System design helps in specifying hardware and system requirements and also helps in defining the overall system architecture. The system design specifications serve as input for the next phase of the model.

Implementation & Unit Testing: On receiving the system design documents, the work is divided into modules/units and actual coding starts. The system is first developed in small programs called units, which are integrated in the next phase. Each unit is developed and tested for its functionality; this is referred to as unit testing. Unit testing mainly verifies that the modules/units meet their specifications.
Integration & System Testing: As specified above, the system is first divided into units which are developed and tested for their functionalities. These units are integrated into a complete system during the integration phase and tested to check that all modules/units coordinate with each other and that the system as a whole behaves as per the specifications. After the software is successfully tested, it is delivered to the customer.

Operations & Maintenance: This phase of the Waterfall Model is a virtually never-ending (very long) phase. Generally, problems with the developed system (which were not found during the development life cycle) come up after its practical use starts, so the issues related to the system are solved after its deployment. Not all the problems come to light at once; they arise from time to time and need to be solved. Hence this process is referred to as maintenance.

The Iterative Model

An iterative lifecycle model does not attempt to start with a full specification of requirements. Instead, development begins by specifying and implementing just part of the software, which can then be reviewed in order to identify further requirements. This process is then repeated, producing a new version of the software for each cycle of the model. Consider an iterative lifecycle model which consists of repeating the following four phases in sequence:

A Requirements phase, in which the requirements for the software are gathered and analyzed. Iteration should eventually result in a requirements phase that produces a complete and final specification of requirements.


A Design phase, in which a software solution to meet the requirements is designed. This may be a new design, or an extension of an earlier design.
An Implementation and testing phase, in which the software is coded, integrated and tested.

A Review phase, in which the software is evaluated, the current requirements are reviewed, and changes and additions to the requirements are proposed.

For each cycle of the model, a decision has to be made as to whether the software produced
by the cycle will be discarded, or kept as a starting point for the next cycle (sometimes
referred to as incremental prototyping). Eventually a point will be reached where the
requirements are complete and the software can be delivered, or it becomes impossible to
enhance the software as required, and a fresh start has to be made.

The V-Model

The V-model is a software development model which can be regarded as an extension of the waterfall model. Instead of moving down in a linear way, the process steps are bent upwards after the coding phase to form the typical V shape. The V-model demonstrates the relationships between each phase of the development life cycle and its associated phase of testing.

The Prototype Model

The prototype model is intended for large, expensive, and complicated projects.

The steps in the prototype model can be generalized as follows:


1. The new system requirements are defined in as much detail as possible. This usually involves interviewing a number of users representing all the external or internal users and other aspects of the existing system.
2. A preliminary design is created for the new system.
3. A first prototype of the new system is constructed from the preliminary design. This is usually a scaled-down system, and represents an approximation of the characteristics of the final product.
4. A second prototype is evolved by a fourfold procedure: (1) evaluating the first prototype in terms of its strengths, weaknesses, and risks; (2) defining the requirements of the second prototype; (3) planning and designing the second prototype; (4) constructing and testing the second prototype.
5. At the customer's option, the entire project can be aborted if the risk is deemed too great. Risk factors might involve development cost overruns, operating-cost miscalculation, or any other factor that could, in the customer's judgment, result in a less-than-satisfactory final product.
6. The existing prototype is evaluated in the same manner as was the previous prototype, and, if necessary, another prototype is developed from it according to the fourfold procedure outlined above.
7. The preceding steps are iterated until the customer is satisfied that the refined prototype represents the final product desired.
8. The final system is constructed, based on the refined prototype.
9. The final system is thoroughly evaluated and tested. Routine maintenance is carried out on a continuing basis to prevent large-scale failures and to minimize downtime.

The Rapid Application Development (RAD) Model

The RAD model is a linear sequential software development process that emphasizes an extremely short development cycle. It is a "high speed" adaptation of the linear sequential model in which rapid development is achieved by using a component-based construction approach. It is used primarily for information systems applications. The RAD approach encompasses the following phases:

1. Business Modeling: The information flow among business functions is defined by answering questions such as: what information drives the business process, what information is generated, who generates it, where does the information go, and who processes it.

2. Data Modeling: The information collected from business modeling is refined into a set of data objects (entities) that are needed to support the business. The attributes (characteristics of each entity) are identified and the relations between these data objects (entities) are defined.

3. Process Modeling: The data objects defined in the data modeling phase are transformed to achieve the information flow necessary to implement a business function. Processing descriptions are created for adding, modifying, deleting or retrieving a data object.

4. Application Generation: Automated tools are used to facilitate construction of the software; they may even use 4GL techniques.

5. Testing and Turnover: Many of the programming components have already been tested, since RAD emphasizes reuse. This reduces overall testing time, but new components must be tested and all interfaces must be fully exercised.
Component Testing

Component testing searches for defects in, and verifies the functioning of, software (e.g. modules, programs, objects, classes, etc.) that is separately testable. It may be done in isolation from the rest of the system, depending on the context of the development life cycle and the system. Stubs, drivers and simulators may be used.

Component testing may include testing of functionality and specific non-functional characteristics, such as resource behaviour (e.g. memory leaks) or robustness testing, as well as structural testing (e.g. branch coverage). Test cases are derived from work products such as a specification of the component, the software design or the data model.

Typically, component testing occurs with access to the code being tested and with the
support of the development environment, such as a unit test framework or debugging tool,
and, in practice, usually involves the programmer who wrote the code. Defects are typically
fixed as soon as they are found, without formally recording incidents.

One approach to component testing is to prepare and automate test cases before coding. This is called a test-first approach or test-driven development. This approach is highly iterative and is based on cycles of developing test cases, then building and integrating small pieces of code, and executing the component tests until they pass.
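The test-first cycle described above can be sketched with Python's built-in unittest framework. The `discount` function and its 10% rule are hypothetical, invented only to show the tests being written before, and driving, the component:

```python
import unittest

# Hypothetical component under test; in test-first style, the test cases
# below are written first and this code is revised until they all pass.
def discount(price, is_member):
    """Apply a 10% member discount (an assumed business rule)."""
    if price < 0:
        raise ValueError("price must be non-negative")
    return price * 0.9 if is_member else price

class DiscountTest(unittest.TestCase):
    def test_member_gets_ten_percent_off(self):
        self.assertAlmostEqual(discount(100.0, True), 90.0)

    def test_non_member_pays_full_price(self):
        self.assertEqual(discount(100.0, False), 100.0)

    def test_negative_price_is_rejected(self):
        with self.assertRaises(ValueError):
            discount(-1.0, True)

# Execute the component tests; in the test-first cycle this is repeated
# until the suite passes.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all passing:", result.wasSuccessful())
```

Each cycle adds one failing test, then just enough code to make the whole suite pass again.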

Integration Testing
Integration testing tests interfaces between components, interactions with different parts of a system (such as the operating system, file system and hardware), and interfaces between systems. There may be more than one level of integration testing, and it may be carried out on test objects of varying size. For example:

1. Component integration testing tests the interactions between software components and is done after component testing.

2. System integration testing tests the interactions between different systems and may be done after system testing. In this case, the developing organization may control only one side of the interface, so changes may be destabilizing. Business processes implemented as workflows may involve a series of systems, and cross-platform issues may be significant.

Top-Down Integration Testing

Top-down integration testing is an incremental integration testing technique which begins by testing the top-level module and progressively adds lower-level modules one by one. Lower-level modules are normally simulated by stubs which mimic the functionality of the lower-level modules. As you add lower-level code, you replace the stubs with the actual components. Top-down integration can be performed and tested in a breadth-first or depth-first manner.
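The stub idea can be illustrated with a short Python sketch; `build_report` and the order data are hypothetical stand-ins for a top-level module and its not-yet-written data layer:

```python
from unittest import mock

# Hypothetical top-level module: it depends on a lower-level data layer
# that is injected as a callable.
def build_report(fetch_orders):
    orders = fetch_orders()
    return {"count": len(orders), "total": sum(o["amount"] for o in orders)}

# Top-down testing: the real data layer does not exist yet, so a stub
# mimics it by returning canned data.
stub_fetch = mock.Mock(return_value=[{"amount": 10}, {"amount": 32}])

report = build_report(stub_fetch)
print(report)  # {'count': 2, 'total': 42}
```

As the real lower-level component becomes available, the stub is replaced by the actual fetch function and the same test is re-run.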

Bottom-Up Integration Testing

In bottom-up integration testing, the modules at the lowest level are developed first, and the other modules which lead towards the 'main' program are integrated and tested one at a time. Bottom-up integration uses test drivers to drive and pass appropriate data to the lower-level modules. As and when the code for the other modules gets ready, these drivers are replaced with the actual modules. In this approach, the lower-level modules are tested extensively, thus making sure that the most heavily used modules are tested properly.
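A test driver can be sketched as follows; `normalize` is a hypothetical lowest-level module, and `driver` plays the role of the not-yet-written calling program:

```python
# Hypothetical lowest-level module, developed and tested first.
def normalize(text):
    """Collapse whitespace and lowercase the input."""
    return " ".join(text.split()).lower()

# Throwaway test driver: it stands in for the missing higher-level code,
# feeding representative data down and checking the results.
def driver():
    cases = {"  Hello   World ": "hello world", "A\tB": "a b"}
    for raw, expected in cases.items():
        assert normalize(raw) == expected, (raw, normalize(raw))
    return len(cases)

print("driver cases passed:", driver())  # driver cases passed: 2
```

Once the real 'main' program is written, it replaces the driver and calls `normalize` itself.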

 
Big Bang Integration Testing

In big bang integration testing, the individual modules of the program are not integrated until everything is ready. This approach is seen mostly with inexperienced programmers who rely on a 'run it and see' approach. In this approach, the program is integrated without any formal integration testing, and then run to ensure that all the components are working properly.

System Testing
System testing is concerned with the behaviour of a whole system/product as defined by
the scope of a development project or programme. Jn system testing, the test environment
should correspond to the final target or production environment as much as possible in
order to minimize the risk of environment-specific failures not being found in testing.

System testing may include tests based on risks and/or on requirements specifications, business processes, use cases, or other high-level descriptions of system behaviour, interactions with the operating system, and system resources.

System testing should investigate both functional and non-functional requirements of the system. Requirements may exist as text and/or models, and testers also need to deal with incomplete or undocumented requirements. System testing of functional requirements starts by using the most appropriate specification-based (black-box) techniques for the aspect of the system to be tested. For example, a decision table may be created for combinations of effects described in business rules. Structure-based (white-box) techniques may then be used to assess the thoroughness of the testing with respect to a structural element, such as menu structure or web page navigation.
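The decision-table technique mentioned above can be sketched in Python. The shipping rule here is hypothetical; the point is that every combination of conditions becomes one row of the table to test:

```python
from itertools import product

# Hypothetical business rule: shipping is free for members or for orders
# of 50 and above; express delivery always adds 10.
def shipping_fee(is_member, total, express):
    fee = 0 if (is_member or total >= 50) else 5
    return fee + (10 if express else 0)

# Enumerate every condition combination, as the decision table would.
for is_member, express, total in product([True, False], [True, False], [20, 80]):
    print(f"member={is_member} total={total} express={express} "
          f"-> fee {shipping_fee(is_member, total, express)}")
```

Each printed row corresponds to one decision-table column, and each expected fee becomes the expected result of a system test case.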

Types of System Testing

Usability Testing: Testing the application for its user-friendliness.

     
Compatibility Testing: This testing is also known as portability testing. During this test, the test engineer validates the correctness of the functionality of the build on the different platforms the customers are expected to use (OS, compiler, browser and other system software).

 
Recovery Testing: During this test, the test engineer validates whether the application build returns from an abnormal state to its normal state.



 

Configuration Testing: This test is also known as hardware compatibility testing. During this test, the test engineer validates that the application build works with different types of hardware devices.

Ex.: different technology printers, different technology LANs.

Performance Testing: Performance testing is the process of determining the speed or effectiveness of a computer, network, software program or device.

Volume Testing: Volume testing belongs to the group of non-functional tests, which are often misunderstood and/or used interchangeably. Volume testing refers to testing a software application with a certain data volume. This volume can, in generic terms, be the database size, or it could be the size of an interface file that is the subject of volume testing. For example, if you want to volume test your application with a specific database size, you will grow your database to that size and then test the application's performance on it.

Load Testing: Load testing is the process of putting demand on a system or device and measuring its response.

Stress Testing: When the load placed on the system is raised beyond normal usage patterns in order to test the system's response at unusually high or peak loads, it is known as stress testing.
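A minimal load-test sketch in Python: `handle_request` is a placeholder for the real operation under test, and a thread pool supplies the concurrent demand while response times are measured:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Placeholder for the operation under load; a real load test would call
# the system's actual interface here.
def handle_request():
    time.sleep(0.01)  # simulate 10 ms of work
    return "ok"

def timed_request(_):
    start = time.perf_counter()
    result = handle_request()
    return result, time.perf_counter() - start

# Put concurrent demand on the "system" and measure its response.
with ThreadPoolExecutor(max_workers=8) as pool:
    samples = list(pool.map(timed_request, range(40)))

latencies = sorted(t for _, t in samples)
print(f"{len(samples)} requests, max latency {latencies[-1] * 1000:.1f} ms")
```

Raising the worker count and request volume beyond what the system is designed for turns the same harness into a stress test.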

Security Testing: The process of determining that an IS (Information System) protects data and maintains functionality as intended.

The six basic security concepts that need to be covered by security testing are: confidentiality, integrity, authentication, authorization, availability and non-repudiation.

Scalability Testing: Part of the battery of non-functional tests, scalability testing measures a software application's capability to scale up or scale out in terms of any of its non-functional capabilities, be it the user load supported, the number of transactions, or the data volume.

Smoke Testing: Why and When?
When a build is received, a smoke test is run to ascertain whether the build is stable and can be considered for further testing. Smoke testing can be done to test the stability of any interim build, and can be executed for platform qualification tests.

Sanity Testing: Why and When?
Once a new build is obtained with minor revisions, instead of doing a thorough regression, a sanity test is performed to ascertain that the build has indeed rectified the issues and that no further issues have been introduced by the fixes. It is generally a subset of regression testing, and a group of test cases is executed that is related to the changes made to the application.


Smoke Testing vs. Sanity Testing

1. Smoke testing originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire and smoke. In the software industry, smoke testing is a shallow and wide approach whereby all areas of the application are tested without going into too much depth. | A sanity test is a narrow regression test that focuses on one or a few areas of functionality; sanity testing is usually narrow and deep.

2. A smoke test is scripted, using either a written set of tests or an automated test. | A sanity test is usually unscripted.

3. A smoke test is designed to touch every part of the application in a cursory way; it is shallow and wide. | A sanity test is used to determine that a small section of the application is still working after a minor change.

4. Smoke testing is conducted to ensure that the most crucial functions of a program work, without bothering with finer details (such as build verification). | Sanity testing is cursory testing; it is performed whenever cursory testing is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing.

5. Smoke testing is a normal health check-up of a build of an application before taking it into in-depth testing, checking all features breadth-first. | Sanity testing verifies whether the requirements are met or not.




Integration testing involves the testing of the different components of an application, e.g., software and hardware, in combination. This kind of combination testing is done to ensure that they are working correctly and conforming to the requirements based on which they were designed and developed.

The other kind of testing we are going to discuss here is interface testing. Interface testing is different from integration testing in that interface testing is done to check whether the different components of the application or system being developed are in sync with each other. In technical terms, interface testing helps determine that different functions, like data transfer between the different elements in the system, are happening according to the way they were designed to happen.

Trend Analysis

The term "trend analysis" refers to the concept of collecting information and attempting to spot a pattern, or trend, in the information.
Trend analysis is the collection of past and present data to help predict the future, or drawing conclusions based on the data/trend which may be used to increase profits, reduce defects, etc.

We can plot the trend of defects classified as functional, integration and system testing issues/defects. This will help in knowing the cause of delays/rework. Suppose integration has the highest defect count; then we can concentrate on the integration issues and ask the development team to look at the integration.





Concurrency Testing

Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. It identifies and measures the level of locking, deadlocking and the use of single-threaded code and locking semaphores.
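A toy concurrency check in Python: several threads act as simultaneous users updating one shared record, and the test asserts that locking prevents lost updates. The counter scenario is illustrative only:

```python
import threading

balance = 0                 # shared record accessed by all "users"
lock = threading.Lock()

def deposit(times):
    global balance
    for _ in range(times):
        with lock:          # remove the lock to observe lost updates
            balance += 1

users = [threading.Thread(target=deposit, args=(10_000,)) for _ in range(4)]
for t in users:
    t.start()
for t in users:
    t.join()

# With correct locking, no update is lost: 4 users x 10,000 deposits.
print(balance)  # 40000
```

A real concurrency test would apply the same pattern to the application's actual records, measuring lock waits and checking for deadlocks as well as lost updates.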


 

 

Configuration Controller

In software engineering, the configuration controller tracks and controls changes in the software. Their practices include revision control and the establishment of baselines. Depending on the size of the company, the role can include any number of the following tasks:

1) Identifying configuration items and baselines.

2) Implementing a controlled change process. This is typically achieved by setting up a change control board whose primary function is to approve or reject change requests that are set against any baseline.

3) Recording and reporting all the necessary information on the status of the development process.

4) Ensuring that configurations contain all their intended parts and are sound with respect to their specifying documentation, including requirements, architectural specifications and user manuals.

5) Managing the process and tools used for software builds.

6) Ensuring adherence to the organization's development process.

7) Managing the software and hardware that host systems.

Tasks 5), 6) and 7) are generally performed by other roles in larger companies.


Version vs. Upgrade

A version is a number assigned to a software package having a particular set of functionalities. The authors of the software keep removing previous bugs and adding new features, and the newer application is marked with a newer number called the version. If there is a small change, the version number is increased by a decimal point; if there is a larger change in the application, the version number is incremented by a whole number. Most applications have a feature to check whether a higher version of the application is available on the internet. If a higher version is available, we can uninstall the lower version and install the higher version. At times we need not uninstall the lower version; instead a patch or driver is available, and by installing it the application acquires the functionality of the higher version and shows the higher version number. So a complete version of an application is a complete software package which can be installed independently, whereas an upgrade is not a complete software package and cannot work on its own.
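The decimal-point vs. whole-number numbering described above can be sketched in a few lines of Python; the simple major.minor scheme is an assumption, since real products vary:

```python
def bump(version, change):
    """Return the next version string: a minor change bumps the decimal
    part, a major change bumps the whole number and resets the decimal."""
    major, minor = (int(part) for part in version.split("."))
    if change == "major":
        return f"{major + 1}.0"
    return f"{major}.{minor + 1}"

print(bump("2.3", "minor"))  # 2.4
print(bump("2.3", "major"))  # 3.0
```

A patch that turns an installed 2.3 into 2.4 is an upgrade; the 2.4 installer itself is a complete version.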
