
Exploratory Testing

As the name suggests, exploratory testing is about exploring the software and finding out how it behaves. In exploratory testing the tester focuses on how the software actually works: testers do minimum planning and maximum execution, which gives them an in-depth idea of the software's functionality. Once the tester starts gaining insight into the software, he can decide what to test next. As per Cem Kaner, exploratory testing is "a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the quality of his/her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project." Exploratory testing is mostly performed by skilled testers, and it is mostly used when the requirements are incomplete and the time to release the software is short.

Confirmation Testing or Re-testing

Confirmation testing is also known as re-testing. It is done to make sure that the test cases which failed in the last execution pass after the defects behind those failures are fixed. For example, suppose you were testing a software application and found defects in some component:
1. You log a defect in the bug tracking tool.
2. The developer fixes the defect and provides you with an official testable build.
3. You re-run the failed test cases to make sure that the previous failures are gone (see the sketch below).
This is known as confirmation testing or re-testing.
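In practice the team keeps the IDs of the failing tests and executes just that subset against the fixed build. A minimal sketch using Python's unittest, where the test class and test name are hypothetical placeholders for whatever failed in the last run:

import unittest

class TestYearlyReport(unittest.TestCase):
    # hypothetical test that failed in the previous execution
    def test_totals(self):
        self.assertEqual(sum([100, 200, 300]), 600)

# confirmation testing: re-run only the previously failing test,
# not the whole suite
failed = ["__main__.TestYearlyReport.test_totals"]
suite = unittest.TestLoader().loadTestsFromNames(failed)
unittest.TextTestRunner(verbosity=2).run(suite)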

Maintenance Testing

Maintenance testing is done on already deployed software. Deployed software needs to be enhanced, changed, or migrated to other hardware, and the testing done during this enhancement, change, and migration cycle is known as maintenance testing. Once the software is deployed in its operational environment it needs maintenance from time to time in order to avoid system breakdown; most banking software systems, for example, need to be operational 24*7*365, so maintenance testing of such applications is essential. In maintenance testing, the tester should consider two parts:
1. Any changes made in the software should be tested thoroughly.
2. The changes made should not affect the existing functionality of the software, so regression testing is also done.

Why is Maintenance Testing required?

Users may need new features in the existing software, which requires modifications to the existing software, and these modifications need to be tested. The end user might also want to migrate the software to newer hardware or change the environment (OS version, database version, etc.), which requires testing the whole application on the new platform and environment.

Functional Testing

Functional testing is also known as component testing. It tests the functioning of the system or software, i.e. what the software does. The functions of the software are described in the functional specification document or requirements specification document. Functional testing considers the specified behavior of the software.

Non-Functional Testing

Non-functional testing tests the characteristics of the software, like how fast the response is, or how long the software takes to perform an operation.

Some examples of non-functional testing are:
1. Performance Testing
2. Load Testing
3. Stress Testing
4. Usability Testing
5. Reliability Testing
Non-functional testing focuses on the software's performance, i.e. how well it works.

Performance Testing

Performance testing is done to determine software characteristics like response time, throughput, or MIPS (millions of instructions per second) at which the system/software operates. Performance testing is done by generating activity on the system/software using the available performance test tools. The tools are used to create different user profiles and inject different kinds of activity on the server, replicating end-user environments. The purpose of performance testing is to ensure that the software meets the specified performance criteria, and to figure out which part of the software is causing performance to go down. Performance testing tools should have the following characteristics:
1. Generate load on the system under test.
2. Measure the server response time.
3. Measure the throughput.

Performance Testing Tools

1. IBM Rational Performance Tester: a performance testing tool from IBM; it supports load testing for applications such as HTTP, SAP, Siebel, etc. It is supported on Windows and Linux.

2. LoadRunner: HP's (formerly Mercury's) load/stress testing tool for web and other applications. It supports a wide variety of application environments, platforms, and databases, and includes a large suite of network/app/server monitors to enable performance measurement of each tier/server/component and tracing of bottlenecks.
3. Apache JMeter: a Java desktop application from the Apache Software Foundation designed to load-test functional behavior and measure performance. It was originally designed for testing web applications but has expanded to other test functions; it may be used to test performance of both static and dynamic resources (files, servlets, Perl scripts, Java objects, databases and queries, FTP servers, and more). It can also be used to simulate a heavy load on a server, network, or object to test its strength or to analyze overall performance under different load types, and it can produce a graphical analysis of performance or test server/script/object behavior under heavy concurrent load.
4. DbUnit: an open-source JUnit extension (also usable with Ant) targeted at database-driven projects that, among other things, puts a database into a known state between test runs. It helps avoid problems that occur when one test case corrupts the database and causes subsequent tests to fail or exacerbate the damage. It can export and import database data to and from XML datasets, work with very large datasets when used in streaming mode, and help verify that database data matches expected sets of values.

Load Testing

Load testing tests the software or component with increasing load: the number of concurrent users or transactions is increased and the behavior of the system is examined to check what load the software can handle. The main objective of load testing is to determine the response time of the software for critical transactions and make sure they are within the specified limit. Load testing is a type of performance testing and is non-functional testing.
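As an illustration of what such tools do, here is a minimal load-generation sketch in Python, assuming a hypothetical local endpoint: it spawns concurrent virtual users, measures per-request response time, and reports throughput.

import threading, time, urllib.request

URL = "http://localhost:8080/"   # hypothetical system under test
USERS = 20                       # concurrent virtual users
REQUESTS_PER_USER = 10
times, lock = [], threading.Lock()

def virtual_user():
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        try:
            urllib.request.urlopen(URL, timeout=10).read()
        except OSError:
            pass                 # a real tool would count failures separately
        elapsed = time.perf_counter() - start
        with lock:
            times.append(elapsed)

start = time.perf_counter()
threads = [threading.Thread(target=virtual_user) for _ in range(USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
total = time.perf_counter() - start

print("requests sent     :", len(times))
print("avg response time : %.3f s" % (sum(times) / len(times)))
print("throughput        : %.1f req/s" % (len(times) / total))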

Stress Testing

Stress testing checks that the software does not crash when hardware resources (like memory, CPU, or disk space) are insufficient. Stress testing puts the hardware resources under extensive levels of stress in order to ensure that the software remains stable. In stress testing we load the software with a number of concurrent users/processes that the system's hardware resources cannot handle. Stress testing is a type of performance testing and is non-functional testing. Examples:
1. A CPU stress test can be done by running the software application at 100% CPU load for several days, to build confidence that the software runs properly under normal usage conditions (see the sketch below).
2. Suppose some software has a minimum memory requirement of 512 MB RAM; the application is then tested on a machine which has exactly 512 MB of memory, under extensive load, to find out how the system/software behaves.
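A minimal Python sketch of example 1, using one busy worker per core to drive CPU utilisation toward 100% (the duration is shortened here; the application under test would be exercised alongside this while watching for crashes or degraded behavior):

import multiprocessing, os, time

def burn(seconds):
    # busy-loop that keeps one core fully occupied
    end = time.time() + seconds
    while time.time() < end:
        pass

if __name__ == "__main__":
    workers = [multiprocessing.Process(target=burn, args=(60,))
               for _ in range(os.cpu_count())]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print("CPU stress run finished")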

Usability Testing

Usability means the software's capability to be learned and understood easily, and how attractive it looks to the end user. Usability testing is a black box testing technique. It tests the following aspects of the software:
1. How easy the software is to use.
2. How easy the software is to learn.
3. How convenient the software is for the end user.

Static Testing

Static testing is the form of software testing where you do not execute the code being examined; the technique could be called a non-execution technique. It is primarily syntax checking of the code, or manually reviewing the code, requirements documents, design documents, etc. to find errors. From the black box testing point of view, static testing involves reviewing requirements and specifications with an eye toward completeness and appropriateness for the task at hand. This is the verification portion of verification and validation. The fundamental objective of static testing is to improve the quality of software products by finding errors in the early stages of the software development life cycle. The main static testing techniques are:
1. Informal Reviews
2. Walkthroughs
3. Technical Reviews
4. Inspections
5. Static Code Analysis

Dynamic Testing

Dynamic testing tests the software by executing it. Dynamic testing is also known as dynamic analysis; the technique is used to test the dynamic behavior of the code. In dynamic testing the software must be compiled and executed, and variable quantities like memory usage, CPU usage, response time, and overall performance of the software are analyzed. Dynamic testing involves working with the software: input values are given and the output values are checked against the expected output. Dynamic testing is the validation part of verification and validation. The contrast is illustrated in the sketch below.
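A minimal Python sketch of the contrast, assuming a small code snippet as the artifact under test: the static step inspects the source without running it, while the dynamic step executes it and checks actual behavior.

import ast, time

source = "def divide(a, b):\n    return a / b\n"

# static testing: examine the code without executing it
tree = ast.parse(source)            # syntax is checked here; nothing runs
functions = [n.name for n in ast.walk(tree)
             if isinstance(n, ast.FunctionDef)]
print("functions found:", functions)

# dynamic testing: execute the code and check behavior against expectations
namespace = {}
exec(compile(source, "<demo>", "exec"), namespace)
start = time.perf_counter()
assert namespace["divide"](10, 2) == 5
print("response time: %.6f s" % (time.perf_counter() - start))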

Unit Testing

A unit is the smallest testable part of the software system. Unit testing is done to verify that the lowest independent entities in the software work correctly: the smallest testable part is isolated from the rest of the code and tested on its own to determine whether it works correctly.

Why is Unit Testing important?

Suppose you have two units and, to save time, you test them only as an integrated system rather than individually. Once the system is integrated and you find an error, it becomes difficult to tell which unit the error occurred in, so unit testing before integrating the units is essential. While a developer is coding the software, the dependent modules may not be complete enough for testing; in such cases developers use stubs and drivers to simulate the called (stub) and calling (driver) units. Let's explain STUBS and DRIVERS in detail.
STUBS: Assume you have three modules: Module A, Module B, and Module C. Module A is ready and needs to be tested, but it calls functions from Modules B and C, which are not ready. The developer writes a dummy module which simulates B and C and returns values to Module A; this dummy code is known as a stub.
DRIVERS: Now suppose Modules B and C are ready but Module A, which calls functions from Modules B and C, is not. The developer writes a dummy piece of code in place of Module A which calls B and C and passes them the values they need; this dummy code is known as a driver. A small sketch of both ideas follows.
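A minimal Python sketch of a stub and a driver, with hypothetical module names: Module A computes interest from a rate and a balance supplied by Modules B and C.

# module A (the unit under test); its dependencies are passed in so that
# the real modules B and C can be replaced by stubs
def module_a(get_rate, get_balance):
    return get_balance() * get_rate()

# STUBS simulating the unfinished modules B and C with canned return values
def stub_b():
    return 0.05      # pretend interest rate from module B

def stub_c():
    return 1000.0    # pretend account balance from module C

# DRIVER: dummy calling code that exercises the unit with the stubs in place
if __name__ == "__main__":
    assert module_a(stub_b, stub_c) == 50.0
    print("module A unit test passed")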

Integration Testing

In integration testing the individually tested units are grouped together and the interfaces between them are tested. Integration testing identifies the problems that occur when the individual units are combined, i.e. it detects problems in the interface between two units. Integration testing is done after unit testing. There are mainly three approaches to integration testing.

Top-down Approach
The top-down approach tests the integration from top to bottom, following the architectural structure. Example: integration can start with the GUI, and the missing components are substituted by stubs as integration goes on.

Bottom-up Approach
In the bottom-up approach, testing takes place from the bottom of the control flow upwards; the higher-level components are substituted with drivers.

Big Bang Approach
In the big bang approach, most or all of the developed modules are coupled together to form a complete system, which is then used for integration testing.

System Testing

Testing the behavior of the whole software/system as defined in the software requirements specification (SRS) is known as system testing; its main focus is to verify that the customer requirements are fulfilled. System testing is done after integration testing is complete, and it should cover both functional and non-functional requirements of the software. The test types followed in system testing differ from organization to organization, but the types described throughout this document cover the main ones which need to be considered in a system testing cycle.

Acceptance Testing

Acceptance testing is performed after system testing is done and all or most of the major defects have been fixed. The goal of acceptance testing is to establish confidence that the delivered software/system meets the end user/customer

requirements and is fit for use. Acceptance testing is done by the user/customer and some of the project stakeholders, in a production-like environment. For commercial off-the-shelf (COTS) software meant for the mass market, testing needs to be done by the potential users; there are two types of acceptance testing for COTS software.

Alpha Testing

Alpha testing is mostly applicable to software developed for the mass market (i.e. COTS), where feedback is needed from potential users. Alpha testing is conducted at the developer's site: potential users and members of the developer's organization are invited to use the system and report defects.

Beta Testing

Beta testing, also known as field testing, is done by potential or existing users/customers at an external site without the developer's involvement. This testing is done to determine that the software satisfies the end users'/customers' needs, and to acquire feedback from the market.

Black Box Testing

Black box testing tests functional and non-functional characteristics of the software without referring to the internal code of the software. It does not require knowledge of the internal code/structure of the system/software; it uses external descriptions of the software, like the SRS (Software Requirements Specification) and software design documents, to derive the test cases.

Black Box Test Design Techniques

Typical black box test design techniques include:
1. Equivalence Partitioning
2. Boundary Value Analysis

3. State Transition Testing
4. Use Case Testing
5. Decision Table Testing

Boundary Value Analysis

What is a Boundary Value?

A boundary value is any input or output value on the edge of an equivalence partition. Let us take an example: suppose you have software which accepts values between 1 and 1000, so the valid partition is 1-1000, and the equivalence partitions are:

Invalid Partition: 0 and below
Valid Partition: 1 - 1000
Invalid Partition: 1001 and above

The boundary values are 1 and 1000 from the valid partition, and 0 and 1001 from the invalid partitions. Boundary value analysis (BVA) is a black box test design technique where test cases are designed using boundary values; BVA is used in range checking.

Example 2: A store in a city offers different discounts depending on the purchases made by an individual. In order to test the software that calculates the discounts, we can identify the ranges of purchase values that earn the different discounts. For example, a purchase from $1 up to $50 has no discount, a purchase over $50 and up to $200 has a 5% discount, purchases of $201 up to $500 have a 10% discount, and purchases of $501 and above have a 15% discount. We can identify four valid equivalence partitions and one invalid partition, as shown below:

Invalid Partition: below $1 (e.g. $0.01)
Valid Partition (No Discount): $1 - $50
Valid Partition (5%): $51 - $200
Valid Partition (10%): $201 - $500
Valid Partition (15%): $501 and above

From this table we can identify the boundary values of each partition. We assume that two decimal digits are allowed.
Boundary values for the invalid partition: 0.00
Boundary values for the valid partition (No Discount): 1, 50
Boundary values for the valid partition (5% discount): 51, 200
Boundary values for the valid partition (10% discount): 201, 500
Boundary values for the valid partition (15% discount): 501, and the maximum number allowed in the software application
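A minimal Python sketch of how these boundary values translate into test cases, with a hypothetical implementation of the store's discount rules:

def discount(amount):
    # hypothetical implementation of the discount rules in the example
    if amount < 1:
        raise ValueError("purchase below $1 is invalid")
    if amount <= 50:
        return 0.00
    if amount <= 200:
        return 0.05
    if amount <= 500:
        return 0.10
    return 0.15

# one test case per boundary value identified above
assert discount(1) == 0.00 and discount(50) == 0.00
assert discount(51) == 0.05 and discount(200) == 0.05
assert discount(201) == 0.10 and discount(500) == 0.10
assert discount(501) == 0.15
try:
    discount(0.00)               # boundary value in the invalid partition
except ValueError:
    print("all boundary value checks passed")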

Use Case Testing

Before explaining use case testing, let us first understand what a use case is:
1. A use case is a description of a particular use of the system by the end user of the system.
2. Use cases are a sequence of steps that describe the interactions between the user and the software system.
3. Each use case describes the interactions the end user has with the software system in order to achieve a specific task.

What is Use Case Testing?

Use case testing is a technique that helps us identify test cases that exercise the whole system on a transaction-by-transaction basis from start to finish. Use cases are defined in terms of the end user, not the system: a use case describes what the user does and what the user sees, rather than what inputs the software system expects and what it outputs. Use cases use business language rather than technical terms. Each use case must specify any preconditions that need to be met for the use case to work. Use cases must also specify postconditions, which are observable results and a

description of the final state of the system after the use case has been executed successfully.

Decision Table Testing

What is a Decision Table?

A decision table shows different combinations of inputs with their associated outputs; it is also known as a cause-effect table. With equivalence partitioning and boundary value analysis we have seen techniques that can be applied only to specific conditions or inputs. However, when we have a business rule to test where different combinations of inputs result in different actions being taken, decision table testing is used to test such rules or logic. It is a black box test design technique. A small sketch follows.
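A minimal Python sketch of decision table testing, using a hypothetical login rule where the combination of "account valid" and "password valid" determines the action:

# decision table: each (account_ok, password_ok) combination is one rule
DECISION_TABLE = {
    (True,  True):  "grant access",
    (True,  False): "show error, allow retry",
    (False, True):  "show error, lock out",
    (False, False): "show error, lock out",
}

def login_action(account_ok, password_ok):
    # hypothetical system behavior driven by the decision table
    return DECISION_TABLE[(account_ok, password_ok)]

# decision table testing: derive one test case per rule in the table
for rule, expected in DECISION_TABLE.items():
    assert login_action(*rule) == expected
print("all", len(DECISION_TABLE), "rules covered")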

White Box Testing

White box testing tests the structure of the software or software component; it checks what is going on inside the software. It is also known as clear box testing, glass box testing, or structural testing. It requires knowledge of the internal code structure and good programming skills. It tests paths within a unit and also the flow between units during integration of units.

White Box Test Design Techniques

Typical white box test design techniques include:
1. Line Coverage or Statement Coverage
2. Decision Coverage
3. Condition Coverage
4. Multiple Condition Decision Coverage
5. Multiple Condition Coverage

Line Coverage or Statement Coverage

Statement coverage is also known as line coverage. The formula to calculate statement coverage is:

Statement Coverage = (Number of statements exercised / Total number of statements) * 100%

Studies in the software industry have shown that black-box testing may actually achieve only 60% to 75% statement coverage, which leaves around 25% to 40% of the statements untested. To illustrate the principle, let us take a piece of pseudo-code which is not specific to any programming language:

READ X
READ Y
IF X > Y
PRINT "X is greater than Y"
ENDIF

We can achieve 100% statement coverage with just one test set in which variable X is always greater than variable Y:

TEST SET 1: X=10, Y=5

A statement may be a single line or it may be spread over several lines, and a single line can contain more than one statement. Some code coverage tools group statements that are always executed together in a block and consider them as one statement.

Decision Coverage or Branch Coverage

Decision coverage is also known as branch coverage. Whenever there are two or more possible exits from a statement, like an IF statement, a DO-WHILE, or a CASE statement, it is known as a decision, because the statement has more than one outcome. With a loop control statement like DO-WHILE, or an IF statement, the outcome is either TRUE or FALSE, and decision coverage ensures that each outcome (i.e. TRUE and FALSE) of the control statement has been executed at least once. Alternatively, you can say that the IF control statement has been evaluated both to TRUE and to FALSE. The formula to calculate decision coverage is:

Decision Coverage = (Number of decision outcomes executed / Total number of decision outcomes) * 100%

Research in the industry has shown that even when thorough functional testing has been done, it achieves only 40% to 60% decision coverage. Decision coverage is stronger than statement coverage, and it requires more test cases to achieve 100% decision coverage. Let us take the same pseudo-code to explain decision coverage:

READ X
READ Y
IF X > Y
PRINT "X is greater than Y"
ENDIF

To get 100% statement coverage, only one test case is sufficient for this pseudo-code:

TEST CASE 1: X=10, Y=5

However, this test case won't give you 100% decision coverage, as the FALSE outcome of the IF statement is not exercised. In order to achieve 100% decision coverage we need to exercise the FALSE outcome of the IF statement, which is covered when X is less than Y. So the final test set for 100% decision coverage is:

TEST CASE 1: X=10, Y=5
TEST CASE 2: X=2, Y=10

Note: 100% decision coverage guarantees 100% statement coverage, but 100% statement coverage does not guarantee 100% decision coverage.
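The same distinction in a runnable Python sketch (a rough equivalent of the pseudo-code above, with the coverage reasoning in the comments):

def compare(x, y):
    message = "X is not greater than Y"
    if x > y:                        # the decision: outcomes TRUE and FALSE
        message = "X is greater than Y"
    return message

# TEST CASE 1: X=10, Y=5 executes every statement -> 100% statement coverage,
# but exercises only the TRUE outcome of the decision -> 50% decision coverage
assert compare(10, 5) == "X is greater than Y"

# TEST CASE 2: X=2, Y=10 exercises the FALSE outcome -> 100% decision coverage
assert compare(2, 10) == "X is not greater than Y"
print("both decision outcomes exercised")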

Condition Coverage or Predicate Coverage

Condition coverage is also known as predicate coverage. Condition coverage looks at Boolean expressions: it ensures that each atomic Boolean condition has been evaluated to both TRUE and FALSE. Let us take an example:

IF (X && Y)

In order to satisfy condition coverage for this pseudo-code, the following tests are sufficient:

TEST 1: X=TRUE, Y=FALSE
TEST 2: X=FALSE, Y=TRUE

Note: 100% condition coverage does not guarantee 100% decision coverage; in this example, both tests evaluate the overall decision to FALSE, so its TRUE outcome is never exercised. A sketch of this follows.
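The note is easy to see in a runnable Python sketch of the same expression:

def decision(x, y):
    return x and y       # Boolean expression with two conditions, X and Y

# each condition is evaluated to both TRUE and FALSE -> 100% condition coverage
for x, y in [(True, False), (False, True)]:
    print("X=%s, Y=%s -> decision=%s" % (x, y, bool(decision(x, y))))

# but both tests drive the overall decision to FALSE, so the TRUE outcome of
# the decision is never taken: condition coverage does not imply decision coverage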

Smoke Testing

Smoke testing is done in order to verify that the software is stable enough for further testing. It has a collection of written tests that are performed on the software prior to it being accepted for further testing. Smoke testing "touches" all areas of the application without getting too deep; the tester looks for answers to basic questions like "Does the application window open?" and "Can the tester launch the software?". The purpose is to determine whether the application is stable enough for more detailed testing to be performed. The test cases can be run manually or with an automated tool. A subset of the planned test cases is chosen which covers the main functionality of the software, without bothering with finer software component details. A daily build and smoke test is among industry best practices. Smoke testing is done by testers before accepting a build for further testing. In software engineering, a smoke test generally consists of a collection of tests that can be applied to a newly created or repaired computer program; sometimes the tests are performed by the automated system that builds the final software. In this sense, a smoke test is the process of validating code changes before they are checked into the official source code collection or the main branch of source code.

Sanity Testing

When there are some minor issues with the software and a new build is obtained after fixing them, a sanity test is performed on that build instead of complete regression testing; you can say that sanity testing is a subset of regression testing. Sanity testing is done after thorough regression testing is over, to make sure that any defect fixes or changes made after regression testing do not break the core functionality of the product; it is done towards the end of the product release phase. Sanity testing follows a narrow and deep approach, with detailed testing of some limited features; it is like specialized testing used to find problems in particular functionality, done with the intent of verifying that end-user requirements are met. Sanity tests are mostly non-scripted.

Security Testing

Security testing tests the ability of the system/software to prevent unauthorized access to resources and data. Security testing needs to cover six basic security concepts: confidentiality, integrity, authentication, authorization, availability, and non-repudiation.

Confidentiality
A security measure which protects against the disclosure of information to parties other than the intended recipient. It is by no means the only way of ensuring security.

Integrity
A measure intended to allow the receiver to determine that the information it receives is correct. Integrity schemes often use some of the same underlying technologies as confidentiality schemes, but they usually involve adding additional information to a communication to form the basis of an algorithmic check, rather than encoding all of the communication.

Authentication
The process of establishing the identity of the user. Authentication can take many forms, including but not limited to passwords, biometrics, and radio frequency identification.

Authorization
The process of determining that a requester is allowed to receive a service or perform an operation. Access control is an example of authorization.

Availability
Assuring that information and communications services will be ready for use when expected. Information must be kept available to authorized persons when they need it.

Non-repudiation
A measure intended to prevent the later denial that an action happened or that a communication took place.

In communication terms, this often involves the interchange of authentication information combined with some form of provable timestamp.

Interoperability Testing

Interoperability means the capability of the software to interact with other systems, software, or software components, and interoperability testing checks whether the software can inter-operate with them. As per the IEEE Glossary, interoperability is "the ability of two or more systems or components to exchange information and to use the information that has been exchanged."

What is Defect Life Cycle?

The Defect Life Cycle (Bug Life Cycle) is the journey of a defect from its identification to its closure. A defect report moves through a series of clearly identified states:

1. A defect is in the open state when the tester finds a variation in the test results during testing; a peer tester reviews the defect report and the defect is opened.
2. The project team decides whether to fix the defect in that release or to postpone it for a future release.
3. If the defect is to be fixed, a developer is assigned to it and the defect moves to the assigned state.
4. If the defect is to be fixed in a later release, it is moved to the deferred state.
5. Once the defect is assigned, the developer fixes it and it moves to the fixed state; the defect tracking tool then sends an e-mail to the tester who reported the defect, asking them to verify the fix.
6. The tester verifies the fix and closes the defect; the defect moves to the closed state.
7. If the defect fix does not solve the reported issue, the tester re-opens the defect and it moves to the re-opened state; it is then approved for re-repair and assigned to a developer again.
8. If the project team defers the defect, it moves to the deferred state; the team later decides when to fix it, re-opens it in another development cycle (re-opened state), and assigns it to a developer to fix.
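A minimal Python sketch of these states as a small state machine, rejecting transitions that the life cycle above does not allow:

# allowed transitions between defect states, as described above
TRANSITIONS = {
    "open":      {"assigned", "deferred"},
    "assigned":  {"fixed"},
    "fixed":     {"closed", "re-opened"},
    "re-opened": {"assigned"},
    "deferred":  {"re-opened"},
    "closed":    set(),
}

class Defect:
    def __init__(self, summary):
        self.summary = summary
        self.state = "open"          # every defect report starts as open

    def move(self, new_state):
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError("illegal transition: %s -> %s"
                             % (self.state, new_state))
        self.state = new_state

bug = Defect("yearly report calculates totals incorrectly")
for state in ("assigned", "fixed", "re-opened", "assigned", "fixed", "closed"):
    bug.move(state)
print("final state:", bug.state)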

What is Configuration Management?

Configuration management aims to establish consistency in an enterprise. This is attained by continuously updating the organization's processes, maintaining versioning, and handling the entire organization's network, hardware, and software components efficiently. In software, software configuration management deals with controlling and tracking changes made to the software; this is necessary to allow changes to be accommodated easily at any time.

When to stop testing?
Answer:
a) When all the requirements are adequately executed successfully through test cases
b) When the bug reporting rate reaches a particular limit
c) When the test environment no longer exists for conducting testing
d) When the scheduled time for testing is over
e) When the budget allocated for testing is exhausted
Defect severity determines the defect's effect on the application, whereas defect priority determines the urgency of its repair. Severity is given by testers and priority by developers.
1. High Severity & Low Priority: For example, an application generates banking-related reports weekly, monthly, quarterly, and yearly by doing some calculations. If there is a fault in calculating the yearly report, it is a high severity fault but low priority, because it can be fixed in the next release as a change request.

2. High Severity & High Priority: In the above example, if there is a fault in calculating the weekly report, it is a high severity and high priority fault, because it will block the functionality of the application within a week; it should be fixed urgently.
3. Low Severity & High Priority: If there is a spelling mistake or content issue on the homepage of a website that gets hundreds of thousands (lakhs) of hits daily, the fault does not affect the website's functionality, but considering the status and popularity of the website in the competitive market, it is a high priority fault.
4. Low Severity & Low Priority: If there is a spelling mistake on pages of a website that get very few hits throughout the month, the fault can be considered low severity and low priority.
