
What are Boundary Value Analysis and Equivalence Partitioning?

Boundary value analysis and equivalence partitioning, explained with a simple example:

Boundary value analysis and equivalence partitioning are both test case design strategies in black box testing.

Equivalence Partitioning: In this method the input domain data is divided into different equivalence classes. This method is typically used to reduce the total number of test cases to a finite set of testable cases while still covering maximum requirements. In short, it is the process of taking all possible test cases and placing them into classes; one test value is picked from each class while testing.

E.g.: If you are testing an input box accepting numbers from 1 to 1000, there is no use in writing a thousand test cases for all 1000 valid input numbers plus other test cases for invalid data. Using the equivalence partitioning method, the test cases can be divided into three sets of input data, called classes. Each test case is a representative of its class. So in the above example we can divide our test cases into three equivalence classes of valid and invalid inputs.

Test cases for an input box accepting numbers between 1 and 1000 using equivalence partitioning:
1) One input data class with all valid inputs: pick a single value from the range 1 to 1000 as a valid test case. Selecting any other value between 1 and 1000 should give the same result, so one test case for valid input data is sufficient.
2) An input data class with all values below the lower limit, i.e. any value below 1, as an invalid input test case.
3) An input data class with any value greater than 1000, representing the third, invalid input class.

So using equivalence partitioning you have categorized all possible test cases into three classes. Test cases with other values from any class should give you the same result.

We have selected one representative from every input class to design our test cases. Test case values are selected so that the largest number of attributes of each equivalence class can be exercised. Equivalence partitioning uses the fewest test cases to cover the maximum requirements.

Boundary value analysis: It is widely recognized that input values at the extreme ends of the input domain cause more errors in a system; more application errors occur at the boundaries of the input domain. The boundary value analysis technique is used to identify errors at the boundaries rather than those in the center of the input domain. Boundary value analysis is the next step after equivalence partitioning for designing test cases: test cases are selected at the edges of the equivalence classes.

Test cases for an input box accepting numbers between 1 and 1000 using boundary value analysis:
1) Test cases with test data exactly on the input boundaries of the input domain, i.e. values 1 and 1000 in our case.
2) Test data with values just below the extreme edges of the input domain, i.e. values 0 and 999.
3) Test data with values just above the extreme edges of the input domain, i.e. values 2 and 1001.

Boundary value analysis is often considered part of stress and negative testing.

Note: There is no hard-and-fast rule to test only one value from each equivalence class you created for the input domains. You can select multiple valid and invalid values from each equivalence class according to your needs and previous judgment. E.g. if you placed the input values 1 to 1000 in the valid data equivalence class, you can select test case values like 1, 11, 100, 950, etc. The same applies to the other test cases with invalid data classes.

This should be a very basic and simple example for understanding the boundary value analysis and equivalence partitioning concepts.
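The three equivalence classes and six boundary values above can be derived mechanically. A minimal Python sketch (the helper name and the offsets chosen for the invalid-class representatives are illustrative, not from the article):

```python
def ep_bva_test_values(low, high):
    """Derive representative test inputs for a numeric input field
    accepting values in [low, high]. Illustrative helper, not part
    of any testing library."""
    # Equivalence partitioning: one representative per class is enough,
    # since any other value from the same class should behave the same.
    partitions = {
        "valid": (low + high) // 2,    # class 1: values inside the range
        "below_range": low - 10,       # class 2: values under the lower limit
        "above_range": high + 10,      # class 3: values over the upper limit
    }
    # Boundary value analysis: the edges themselves plus the values
    # just below and just above each edge.
    boundaries = [low - 1, low, low + 1, high - 1, high, high + 1]
    return partitions, boundaries

partitions, boundaries = ep_bva_test_values(1, 1000)
# boundaries == [0, 1, 2, 999, 1000, 1001], matching the article's example
```

For the 1 to 1000 input box this yields exactly the boundary set discussed above (0, 1, 2, 999, 1000, 1001) plus one representative per equivalence class.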


What is Testing in Software Engineering?

Testing is running the program (or product) under various circumstances and conditions to find errors and bugs in it. This is important because releasing a faulty product will not only cause serious problems for the end user, it will also harm the company's reputation. There are various kinds of testing, and which one to use depends on the type of product.

Types of Testing in Software Engineering

By test target:
* Unit Testing
* Integration Testing
* System Testing

By test objective:
* User Acceptance Testing
* Installation Testing
* Functional Testing
* Alpha / Beta Testing
* Regression Testing
* Performance Testing
* Stress Testing
* Usability Testing
* Configuration Testing
* Smoke (Sanity) Testing

Let's see them one by one.

Unit Testing - Done by programmers, typically with tools such as debuggers and tracers. Unit testing verifies the functioning, in isolation, of software pieces that are separately testable.

Integration Testing - According to the IEEE, integration testing is "an orderly progression of testing in which software elements, hardware elements, or both are combined and tested, to evaluate their interactions, until the entire system has been integrated." It tests against the system design and focuses on communication between modules: start with one module, then add modules incrementally. The various types of integration testing are:
* Big bang approach - integrate everything at once.
* Top-down approach - test from the highest-level modules downward, using stubs (dummy modules) in place of the lower-level pieces not yet integrated.
* Bottom-up approach - test the small parts first, then keep integrating and testing ever-bigger modules of the system.
* Mixed approach - a combination of the top-down and bottom-up approaches.

System Testing - The IEEE defines it as "the process of testing an integrated hardware and software system to verify that the system meets its specified requirements." It is tested against the system specification and may cover manual procedures, restart and recovery, the user interface, stress, and performance. In system testing, real data is used, and sometimes users also participate.

Objective-Based Testing

Some of the testing types under this category, and what they mean, are as follows:
* User Acceptance Testing - Testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system. It is done against the requirements, by actual users.
* Installation Testing - System testing conducted once again according to hardware configuration requirements. Installation procedures may also be verified.
* Functional Testing - Checks that the functional specifications are correctly implemented. It can also check whether non-functional behavior is as expected.
* Stress Testing (aka endurance testing) - We impose abnormal input to stress the capabilities of the software. Input data volume, input data rate, processing time, utilization of memory, etc. are tested beyond the designed capacity.
* Regression Testing - According to the IEEE, regression testing is "selective retesting of a system or component to verify that modifications have not caused unintended effects." It is the repetition of tests intended to show that the software's behavior is unchanged, except insofar as required. It can be done at each test level.
* Performance Testing - Verifies that the software meets the specified performance requirements (response time, volume, and so on).
* Alpha / Beta Testing - Terms you are probably aware of, as we often hear that software is "in the alpha phase" or "in the beta phase." Here testing is done by a representative set of potential users for trial use. Note: in-house testing is alpha testing; external testing is beta testing.
* Usability Testing - Evaluates the human-computer interface. It verifies ease of use by end users and ease of learning the software, including the user documentation; checks how effectively the software supports user tasks; and checks the ability to recover from user errors.
* Configuration Testing - Used when software is meant for different types of users; it checks whether the software performs correctly for all of them.
* Smoke (Sanity) Testing - Verifies whether the build is ready for feature/requirement-based testing.
* Recovery Testing - Verifies the software's restart capabilities after a disaster.
* Security Testing - Verifies that proper controls have been designed.

How to Design a Test Case

This question is often asked at Microsoft or by any other company looking to hire you for testing work. A test case has five sections.
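The article breaks off after stating that a test case has five sections, without listing them. One common convention (an assumption here, not taken from the text) is: ID, description with preconditions, steps, test data, and expected result. Sketched as a Python structure:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    # Five commonly used sections of a written test case (an assumed
    # convention; the original article does not enumerate them).
    test_id: str          # unique identifier, e.g. "TC-001"
    description: str      # what is being verified, plus preconditions
    steps: list           # ordered actions the tester performs
    test_data: dict       # inputs used by the steps
    expected_result: str  # observable outcome that defines pass/fail

sample = TestCase(
    test_id="TC-001",
    description="Login with valid credentials",
    steps=["Open login page", "Enter credentials", "Click Sign in"],
    test_data={"username": "alice", "password": "secret"},
    expected_result="User lands on the dashboard",
)
```

Having every case carry the same five fields makes suites easy to review, trace to requirements, and feed into an execution report.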

Are you going to start on a new project for testing? Don't forget to check this testing checklist in each and every step of your project life cycle. The list is mostly equivalent to a test plan; it covers all quality assurance and testing standards.

Testing Checklist:

1 Create System and Acceptance Tests [ ]
2 Start Acceptance Test Creation [ ]
3 Identify test team [ ]
4 Create Workplan [ ]
5 Create test Approach [ ]
6 Link Acceptance Criteria and Requirements to form the basis of acceptance test [ ]
7 Use subset of system test cases to form requirements portion of acceptance test [ ]
8 Create scripts for use by the customer to demonstrate that the system meets requirements [ ]
9 Create test schedule. Include people and all other resources. [ ]
10 Conduct Acceptance Test [ ]
11 Start System Test Creation [ ]
12 Identify test team members [ ]
13 Create Workplan [ ]
14 Determine resource requirements [ ]
15 Identify productivity tools for testing [ ]
16 Determine data requirements [ ]
17 Reach agreement with data center [ ]
18 Create test Approach [ ]
19 Identify any facilities that are needed [ ]
20 Obtain and review existing test material [ ]
21 Create inventory of test items [ ]
22 Identify Design states, conditions, processes, and procedures [ ]
23 Determine the need for Code based (white box) testing. Identify conditions. [ ]
24 Identify all functional requirements [ ]
25 End inventory creation [ ]
26 Start test case creation [ ]
27 Create test cases based on inventory of test items [ ]
28 Identify logical groups of business function for new system [ ]
29 Divide test cases into functional groups traced to test item inventory [ ]
30 Design data sets to correspond to test cases [ ]
31 End test case creation [ ]
32 Review business functions, test cases, and data sets with users [ ]
33 Get signoff on test design from Project leader and QA [ ]
34 End Test Design [ ]
35 Begin test Preparation [ ]
36 Obtain test support resources [ ]
37 Outline expected results for each test case [ ]
38 Obtain test data. Validate and trace to test cases [ ]
39 Prepare detailed test scripts for each test case [ ]
40 Prepare & document environmental set up procedures. Include back up and recovery plans [ ]
41 End Test Preparation phase [ ]
42 Conduct System Test [ ]
43 Execute test scripts [ ]
44 Compare actual result to expected [ ]
45 Document discrepancies and create problem report [ ]
46 Prepare maintenance phase input [ ]
47 Re-execute test group after problem repairs [ ]
48 Create final test report, include known bugs list [ ]
49 Obtain formal signoff [ ]

How to test software requirements specification (SRS)?


Do you know that most of the bugs in software are due to incomplete or inaccurate functional requirements? The software code, no matter how well it is written, can't do the right thing if there are ambiguities in the requirements. It is better to catch requirement ambiguities and fix them early in the development life cycle: the cost of fixing a bug after development is complete, or after product release, is too high. So it is important to do requirement analysis and catch incorrect requirements before the design and implementation phases of the SDLC.

How do you measure a functional software requirement specification (SRS) document? We need to define standard tests to measure the requirements. Once each requirement has been passed through these tests, you can evaluate and freeze the functional requirements.

Let's take an example. You are working on a web-based application, and a requirement reads: "The web application should be able to serve user queries as early as possible." How will you freeze the requirement in this case? What will be your requirement satisfaction criteria? To get the answer, ask the stakeholders: "How much response time is acceptable to you?" If they say they will accept a response within 2 seconds, then this is your requirement measure. Freeze this requirement and carry out the same procedure for the next one. We just learned how to measure requirements and freeze them for the design, implementation, and testing phases.

Now let's take another example. I was working on a web-based project. The client (the stakeholders) specified the project requirements for the initial phase of development. My manager circulated all the requirements in the team for review. When we started discussing these requirements, we were shocked: everyone had his or her own conception of what they meant. We found a lot of ambiguities in the terms used in the requirement documents, which were later sent back to the client for review and clarification. The client had used many ambiguous terms with several different meanings, making it difficult to analyze the exact intent. The next version of the requirement document from the client was clear enough to freeze for the design phase. From this example we learned: requirements should be clear and consistent.

The next criterion for testing the requirements specification is to discover missing requirements. Many times project designers don't get a clear idea about specific modules and simply assume some requirements during the design phase. No requirement should be based on assumptions. Requirements should be complete, covering each and every aspect of the system under development, and the specification should state both types of requirements, i.e. what the system should do and what it should not. Generally I use my own method to uncover unspecified requirements: when I read the SRS document, I note down my own understanding of the requirements that are specified, plus the other requirements the SRS document is supposed to cover. This helps me ask questions about the unspecified requirements, making them clearer.

For checking requirements completeness, divide the requirements into three sections: "must implement" requirements, requirements that are not specified but are assumed, and imagined (nice-to-have) requirements. Check whether all three types are addressed before the software design phase.

Check whether the requirements are related to the project goal. Sometimes stakeholders have their own expertise, which they expect to appear in the system under development, without considering whether that requirement is relevant to the project in hand. Make sure to identify such requirements, and try to keep irrelevant requirements out of the first phase of the project development cycle. If that is not possible, ask the stakeholders: "Why do you want to implement this specific requirement?" This will describe the particular requirement in detail, making it easier to design the system with the future scope in mind. But how do you decide whether a requirement is relevant or not? Simple answer: set the project goal and ask, "If we do not implement this requirement, will it cause any problem in achieving our specified goal?" If not, the requirement is irrelevant; ask the stakeholders whether they really want to implement it.
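Coming back to the response-time example: once the requirement is frozen at 2 seconds, it becomes directly checkable. A minimal sketch, where the zero-argument callable is a stand-in for the real web request:

```python
import time

MAX_RESPONSE_SECONDS = 2.0  # the frozen, measurable requirement

def meets_response_requirement(serve_query, limit=MAX_RESPONSE_SECONDS):
    """Time one call to `serve_query` (a zero-argument callable standing
    in for the real web request) against the agreed limit."""
    start = time.monotonic()
    serve_query()
    return time.monotonic() - start <= limit

# A fast stand-in query meets the requirement; a deliberately slow one
# fails a much tighter limit.
assert meets_response_requirement(lambda: sum(range(1000)))
assert not meets_response_requirement(lambda: time.sleep(0.05), limit=0.01)
```

The point is not the timing code itself but that "as early as possible" offered nothing to assert on, while "within 2 seconds" does.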

In short, the requirements specification (SRS) document should address the following:
* Project functionality (what should be done and what should not)
* Software and hardware interfaces, and the user interface
* System correctness, security, and performance criteria
* Implementation issues (risks), if any

Conclusion: I have covered all aspects of requirement measurement. To be specific, I will summarize requirement testing in one sentence: requirements should be clear and specific with no uncertainty, measurable in terms of specific values, testable with evaluation criteria for each requirement, and complete, without any contradictions. Testing should start at the requirement phase to avoid further requirement-related bugs. Communicate extensively with your stakeholders to clarify all the requirements before starting project design and implementation.

What you need to know about BVT (Build Verification Testing)


What is BVT? A build verification test (BVT) is a set of tests run on every new build to verify that the build is testable before it is released to the test team for further testing. These test cases are core-functionality test cases that ensure the application is stable and can be tested thoroughly. Typically the BVT process is automated. If the BVT fails, the build is assigned back to a developer for a fix. BVT is also called smoke testing or build acceptance testing (BAT). A new build is checked mainly for two things:

* Build validation
* Build acceptance

Some BVT basics:


* It is a subset of tests that verify the main functionalities.
* BVTs are typically run on daily builds; if the BVT fails, the build is rejected and a new build is released after the fixes are done.
* The advantage of BVT is that it saves the test team the effort of setting up and testing a build whose major functionality is broken.
* Design BVTs carefully enough to cover basic functionality.
* Typically a BVT should not run for more than 30 minutes.
* BVT is a type of regression testing, done on each and every new build.

BVT primarily checks project integrity: whether all the modules are integrated properly or not. Module integration testing is very important when different teams develop the project modules. I have heard of many cases of application failure due to improper module integration; in the worst cases, the complete project gets scrapped because of it.

What is the main task in a build release? Obviously file check-in, i.e. including all the new and modified project files associated with the respective build. BVT was primarily introduced to check initial build health, i.e. to check whether all the new and modified files are included in the release, all file formats are correct, and every file's version, language, and flags are right. These basic checks are worthwhile before the build is released to the test team. You will save time and money by discovering build flaws at the very beginning using BVT.

Which test cases should be included in the BVT? This is a very tricky decision to make before automating the BVT task. Keep in mind that the success of a BVT depends on which test cases you include. Here are some simple tips for including test cases in your BVT automation suite:

* Include only critical test cases in the BVT.
* All test cases included in the BVT should be stable.
* All the test cases should have a known expected result.
* Make sure all the included critical-functionality test cases are sufficient for application test coverage.

Also, do not include modules in the BVT that are not yet stable. For features under development you can't predict the expected behavior, as these modules are unstable and you might already know of some failures before testing these incomplete modules. There is no point in using such modules or test cases in a BVT.

You can make this critical-functionality test case selection simple by communicating with everyone involved in the project development and testing life cycle. Such a process should negotiate the BVT test cases, which ultimately ensures BVT success. Set some BVT quality standards; these standards can be met only by analyzing the major project features and scenarios.

Example: test cases to be included in the BVT for a text editor application (some sample tests only):
1) Test case for creating a text file.
2) Test cases for writing something into the text editor.
3) Test case for the copy, cut, and paste functionality of the text editor.
4) Test case for opening, saving, and deleting a text file.
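The four sample text editor checks could be scripted roughly as follows. Plain file operations and an in-memory string buffer stand in for the real editor, since no actual editor API is given in the article:

```python
import os
import tempfile

def bvt_create_write_open_delete():
    """Smoke-check creating, writing, re-opening, and deleting a text
    file, using plain file operations as a stand-in for the editor."""
    path = os.path.join(tempfile.mkdtemp(), "note.txt")
    with open(path, "w") as f:          # create + write
        f.write("hello editor")
    with open(path) as f:               # open and read back
        assert f.read() == "hello editor"
    os.remove(path)                     # delete
    assert not os.path.exists(path)

def bvt_copy_cut_paste():
    """Smoke-check clipboard-style editing on an in-memory buffer
    standing in for the editor's document."""
    buffer = "hello editor"
    clipboard = buffer[:5]              # copy "hello"
    buffer = buffer[5:].lstrip()        # cut it (and the space) out
    buffer = buffer + " " + clipboard   # paste it after "editor"
    assert buffer == "editor hello"

CRITICAL_TESTS = [bvt_create_write_open_delete, bvt_copy_cut_paste]

def run_bvt():
    """Run every critical test; any assertion failure means the build
    is rejected rather than released to the test team."""
    for test in CRITICAL_TESTS:
        test()
    return "BVT passed"
```

The `CRITICAL_TESTS` list is the place where newly stabilized test cases would be promoted into the suite, per the tips above.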

These are some sample test cases which can be marked as critical; for every minor or major change in the application, these basic critical test cases should be executed. This task can easily be accomplished with a BVT. BVT automation suites need to be maintained and modified from time to time, e.g. include new test cases in the BVT when new stable project modules become available.

What happens when the BVT suite runs? Say the build verification automation test suite is executed after any new build:
1) The result of the BVT execution is sent to all the email IDs associated with that project.
2) The BVT owner (the person executing and maintaining the BVT suite) inspects the result.
3) If the BVT fails, the BVT owner diagnoses the cause of the failure.
4) If the failure cause is a defect in the build, all the relevant information, with failure logs, is sent to the respective developers.
5) The developer, from his initial diagnosis, replies to the team about the cause of the failure: is this really a bug, and if so, what will the bug-fixing scenario be?
6) Once the bug is fixed, the BVT test suite is executed again, and if the build passes the BVT, it is passed on to the test team for detailed functionality, performance, and other tests. This process is repeated for every new build.

Why does a BVT or build fail? BVTs break sometimes, and this doesn't mean there is always a bug in the build. There are other reasons a build can fail, such as a test case coding error, an automation suite error, an infrastructure error, or hardware failures. You need to troubleshoot the cause of the BVT break and take proper action after diagnosis.

Tips for BVT success:
1) Spend considerable time writing BVT test case scripts.
2) Log as much detailed information as possible to diagnose the BVT pass or fail result. This will help the developer team debug and quickly learn the failure cause.
3) Select stable test cases to include in the BVT. For new features, if a new critical test case passes consistently on different configurations, promote it into your BVT suite. This will reduce the probability of frequent build failures due to new, unstable modules and test cases.
4) Automate the BVT process as much as possible: right from the build release process to the BVT result, automate everything.
5) Have some penalties for breaking the build; some chocolates or a team coffee party from the developer who breaks the build will do.

Conclusion: A BVT is nothing but a set of regression test cases that is executed each time for a new build. This is also called a smoke test. The build is not assigned to the test team unless and until the BVT passes. The BVT can be run by a developer or a tester, and the BVT result is communicated throughout the team; immediate action is taken to fix the bug if the BVT fails. The BVT process is typically automated by writing scripts for the test cases. Only critical test cases are included in the BVT, and these test cases should ensure application test coverage. BVT is very effective for daily as well as long-term builds. It saves significant time, cost, and resources, and, after all, spares the test team the frustration of an incomplete build.
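The run-inspect-notify flow described in steps 1 through 6 can be sketched as a small runner. The `notify` callable here is a placeholder for the real email integration mentioned in step 1, not an actual API:

```python
def run_build_verification(build_id, suite, notify):
    """Run the BVT suite against a new build and route the result.

    `suite` maps test names to zero-argument callables; `notify` is a
    placeholder for the real email/chat integration (an assumption,
    not a real API).
    """
    failures = {}
    for name, test in suite.items():
        try:
            test()
        except Exception as exc:        # capture logs for later diagnosis
            failures[name] = repr(exc)

    if failures:
        notify(f"Build {build_id}: BVT FAILED, assigning back to dev", failures)
        return False                    # build is not released to the test team
    notify(f"Build {build_id}: BVT passed, releasing to test team", {})
    return True

# Demo with an in-memory "notify" standing in for the real email step.
messages = []
passed = run_build_verification(
    "build-42",
    {"smoke_ok": lambda: None, "smoke_broken": lambda: 1 / 0},
    lambda subject, logs: messages.append(subject),
)
# passed is False: "smoke_broken" raised, so the build goes back to a developer
```

Catching the exception per test, rather than stopping at the first failure, matches tip 2: the failure logs for every broken case go to the developers in one report.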

Top 20 practical software testing tips you should read before testing any application.
I wish all testers would read these software testing good practices. Read all the points carefully and try to implement them in your day-to-day testing activities; this is what I expect from this article. If you don't understand any testing practice, ask for clarification in the comments below. You will eventually learn all these practices by experience, but then why not learn them before making the mistakes? Here are some of the best testing practices I have learned by experience:

1) Learn to analyze your test results thoroughly. Do not ignore the test result. The final test result may be pass or fail but troubleshooting the root cause of fail will lead you to the solution of the problem. Testers will be respected if they not only log the bugs but also provide solutions. 2) Learn to maximize the test coverage every time you test any application. Though 100 percent test coverage might not be possible still you can always try to reach near it. 3) To ensure maximum test coverage break your application under test (AUT) into smaller functional modules. Write test cases on such individual unit modules. Also if possible break these modules into smaller parts. E.g: Lets assume you have divided your website application in modules and accepting user information is one of the modules. You can break this User information screen into smaller parts for writing test cases: Parts like UI testing, security testing, functional testing of the User information form etc. Apply all form field type and size tests, negative and validation tests on input fields and write all such test cases for maximum coverage. 4) While writing test cases, write test cases for intended functionality first i.e. for valid conditions according to requirements. Then write test cases for invalid conditions. This will cover expected as well unexpected behavior of application under test. 5) Think positive. Start testing the application by intend of finding bugs/errors. Dont think beforehand that there will not be any bugs in the application. If you test the application by intention of finding bugs you will definitely succeed to find those subtle bugs also. 6) Write your test cases in requirement analysis and design phase itself. This way you can ensure all the requirements are testable. 7) Make your test cases available to developers prior to coding. Dont keep your test cases with you waiting to get final application release for testing, thinking that you can log more bugs. 
Let developers analyze your test cases thoroughly so they can build a quality application in the first place. This will also save rework time.

8) If possible, identify and group your test cases for regression testing. This ensures quick and effective manual regression testing.

9) Applications requiring critical response times should be thoroughly tested for performance. Performance testing is a critical part of many applications, yet in manual testing it is often neglected because testers lack the large data volumes it requires. Find out ways to test your application for performance. If it is not possible to create the test data manually, write some basic scripts to create test data for the performance test, or ask the developers to write one for you.
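Tip 9 above suggests writing a basic script to create bulk test data when it cannot be prepared by hand. A minimal sketch of such a script is shown below; the record fields (`username`, `email`, `balance`) and the CSV output format are just illustrative assumptions, not something prescribed by any particular tool:

```python
import csv
import random
import string

def make_user_rows(count):
    """Generate `count` synthetic user records for a performance run.
    Field names here are hypothetical; adapt them to your own schema."""
    rows = []
    for i in range(count):
        name = "".join(random.choices(string.ascii_lowercase, k=8))
        rows.append({
            "id": i + 1,
            "username": name,
            "email": f"{name}@example.com",
            "balance": round(random.uniform(0, 10000), 2),
        })
    return rows

def write_csv(path, rows):
    """Dump the rows to a CSV file that a load tool or SQL bulk loader can import."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    # Scale the count up (e.g. to hundreds of thousands) for a real performance run.
    write_csv("perf_users.csv", make_user_rows(1000))
```

The same pattern works for SQL `INSERT` scripts or JSON fixtures; the point is simply that a few lines of scripting beat typing thousands of records by hand.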

10) Programmers should not test their own code. As discussed in our previous post, basic unit testing of the developed application should be enough before developers release it to testers. But you (the testers) should not force developers to release the product for testing; let them take their own time. Everyone from lead to manager knows when a module or update is released for testing, so they can estimate the testing time accordingly. This is a typical situation in an agile project environment.

11) Go beyond requirement testing. Test the application for what it is not supposed to do.

12) While doing regression testing, use your previous bug graph (bug graph: the number of bugs found against time, for different modules). This module-wise bug graph can be used to predict the most bug-prone parts of the application.

13) Note down the new terms and concepts you learn while testing. Keep a text file open while testing an application and note down the testing progress and your observations in it. Use these notes while preparing the final test release report. This good habit will help you provide a complete, unambiguous test report and release details.

14) Many times testers or developers make changes in the code base of the application under test. This is a required step in a development or testing environment, for example to avoid executing live transaction processing in banking projects. Note down all such code changes made for testing purposes, and at the time of final release make sure all of them have been removed from the final client-side deployment files.

15) Keep developers away from the test environment. This is a required step for detecting configuration changes missing from the release or deployment document. Sometimes developers make system or application configuration changes but forget to mention them in the deployment steps.
If developers don't have access to the testing environment, they will not make any such changes on it accidentally, so these missing steps get caught in the right place.

16) It is a good practice to involve testers right from the software requirement and design phases. This way testers gain knowledge of the application's dependencies, which results in detailed test coverage. If you are not being asked to be part of the development cycle, request that your lead or manager involve the testing team in all decision-making processes and meetings.

17) Testing teams should share best testing practices and experience with the other teams in their organization.

18) Increase your conversations with developers to learn more about the product. Whenever possible, communicate face to face to resolve disputes quickly and avoid misunderstandings. But once you understand the requirement or resolve a dispute, make sure to confirm it in writing, e.g. over email. Do not keep anything verbal.
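Tip 12 above recommends a module-wise bug graph for targeting regression testing. As a rough sketch of the idea (the `(module, period)` record shape is an assumption; your bug tracker's export will look different), you can aggregate historical bug counts and rank the riskiest modules:

```python
from collections import Counter, defaultdict

def bug_graph(bug_log):
    """Aggregate bug counts per module per reporting period.

    `bug_log` is a list of (module, period) tuples, e.g. ("login", "2024-W03").
    Returns {module: Counter({period: count})} so the most bug-prone
    modules can be read off and prioritized for regression testing.
    """
    graph = defaultdict(Counter)
    for module, period in bug_log:
        graph[module][period] += 1
    return graph

def riskiest_modules(graph, top=3):
    """Rank modules by total historical bug count, highest first."""
    totals = {m: sum(c.values()) for m, c in graph.items()}
    return sorted(totals, key=totals.get, reverse=True)[:top]
```

Feeding this the defect history of a few release cycles gives a simple, data-backed answer to "where should regression testing spend its time?".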

19) Don't run out of time for high-priority testing tasks. Prioritize your testing work from high to low priority and plan your work accordingly. Analyze all the associated risks when prioritizing.

20) Write clear, descriptive, unambiguous bug reports. Do not only describe the bug's symptoms; also describe its impact and any possible solutions.

Don't forget: testing is a creative and challenging task. In the end, how you handle this challenge depends on your skill and experience.

How to find a bug in an application? Tips and Tricks


Posted In | Testing Skill Improvement, Bug Defect tracking, Testing Tips and resources, How to be a good tester

A very good and important point, right? If you are a software tester or a QA engineer, then you must be thinking every minute about how to find a bug in an application. And you should be!

Is finding a blocker bug, like a system crash, the most rewarding find? I don't think so. You should try to find the bugs that are hardest to spot and that always mislead users. Finding such subtle bugs is the most challenging work, and it gives you satisfaction in your work. It should also be rewarded by seniors.

I will share my experience of one such subtle bug that was not only difficult to catch but also difficult to reproduce. I was testing a module in my search engine project. I do most of the activities on this project manually, as it is a bit complex to automate. The module consisted of traffic and revenue stats for different affiliates and advertisers, and testing such reports is always a difficult task. When I tested this report, it showed accurately processed data for some time, but when I tested it again later it showed misleading results. The results were strange and confusing.

There was a cron (an automated script that runs at a specified time or condition) that processed the log files and updated the database. Multiple such crons run over the log files and the DB to synchronize the total data. Two crons were running against one table at certain intervals, and a column in that table was being overwritten by the other cron, causing data inconsistency. It took us a long time to figure out the problem because of the many DB processes and the different crons involved.

My point is: try to find the hidden bugs in the system that occur only under special conditions and have a strong impact on it. You can find such bugs with some tips and tricks. So what are those tips?

1) Understand the whole application or module in depth before starting to test.

2) Prepare good test cases before you start testing.
I mean, put the emphasis on the functional test cases that cover the major risks of the application.

3) Create sufficient test data before the tests. The data set should cover the test case conditions, and also the database records if you are going to test a DB-related application.

4) Perform repeated tests in different test environments.

5) Try to find the result patterns, then compare your results against those patterns.

6) When you think you have covered most of the test conditions, and when you are somewhat tired, do some monkey testing.

7) Use your previous test data patterns to analyze the current set of tests.

8) Try some standard test cases for which you found bugs in a different application. E.g. if you are testing an input text box, try inserting some HTML tags as the input and see the output on the display page.

9) The last and best trick: try very hard to find the bug, as if you are testing only to break the application!
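The HTML-tag trick in the list above can be turned into a small reusable check. This is only a sketch: `render_display_page` below is a hypothetical stand-in for the application under test (a safe page would escape input with something equivalent to Python's `html.escape`), and the payload list is just a starting point:

```python
import html

# Payloads worth trying in any free-text input field; if any of these
# comes back unescaped on the display page, you likely have an
# HTML-injection / XSS bug.
PAYLOADS = [
    "<b>bold</b>",
    "<script>alert(1)</script>",
    '<img src=x onerror="alert(1)">',
]

def render_display_page(user_input):
    """Hypothetical stand-in for the application under test: a safe
    page escapes the input before echoing it back."""
    return f"<p>You entered: {html.escape(user_input)}</p>"

def check_payloads(render):
    """Return the payloads that survive unescaped, i.e. probable bugs."""
    return [p for p in PAYLOADS if p in render(p)]

if __name__ == "__main__":
    leaks = check_payloads(render_display_page)
    print("unescaped payloads:", leaks)  # an empty list means inputs are escaped
```

In a real test you would replace `render_display_page` with a call that submits the form and fetches the resulting page; the pass/fail logic stays the same.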

