Writing effective test cases is a skill, one acquired through experience and in-depth study of the application for which the test cases are being written.
Here I will share some tips on how to write test cases, describe test case procedures, and give some basic test case definitions.
Each test case falls into one of the following levels, which helps avoid duplicated effort.
Level 1: At this level you write basic test cases from the available specification and user documentation.
Level 2: This is the practical stage, in which test cases are written based on the actual functional and system flow of the application.
Level 3: This is the stage in which you group related test cases and write a test procedure. A test procedure is simply a group of small test cases, ten at most.
Level 4: Automation of the project. This minimizes human interaction with the system, so QA can focus on testing newly updated functionality rather than staying busy with regression testing.
So you can observe a systematic progression from having no testable items to a full automation suite.
The basic objective of writing test cases is to validate the testing coverage of the application. If you work at a CMM-level company, you will strictly follow test case standards. Writing test cases thus brings a degree of standardization and minimizes the ad-hoc approach to testing.
A simple test case statement follows this format:
Verify
Using [tool name, tag name, dialog, etc.]
With [conditions]
To [what is returned, shown, or demonstrated]
For any application you will basically cover all types of test cases, including functional, negative, and boundary-value test cases.
Keep in mind while writing test cases that all your test cases should be simple and easy to understand. Don't write essay-like explanations; be to the point.
Try writing simple test cases following the format above. I generally use Excel sheets to write basic test cases. Use a tool like Test Director when you are going to automate those test cases.
For example, a login test case might list these steps:
1. Visit LoginPage
2. Enter userID
3. Enter password
4. Click Login
5. See the Terms of Use page
6. Click the Agree radio button at the bottom of the page
7. Click the Submit button
8. See PersonalPage
9. Verify that the welcome message shows the correct username
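Once a test case like this is automated (Level 4 above), its steps map directly onto code. Below is a minimal Python sketch against a hypothetical in-memory stub of the application; StubApp and every name in it are assumptions for illustration, not the real system.

```python
# Minimal sketch: the nine manual steps above expressed as one automated
# check against a hypothetical in-memory application stub.

class StubApp:
    """Stands in for the real application; all names are illustrative."""

    def __init__(self, users):
        self.users = users          # userID -> password
        self.page = "LoginPage"

    def login(self, user_id, password):          # steps 1-4
        if self.users.get(user_id) == password:
            self.page = "TermsOfUsePage"         # step 5
        return self.page

    def accept_terms(self):                      # steps 6-7
        if self.page == "TermsOfUsePage":
            self.page = "PersonalPage"           # step 8
        return self.page

    def welcome_message(self, user_id):          # step 9
        return f"Welcome, {user_id}!"


def test_login_flow():
    app = StubApp({"alice": "secret"})
    assert app.login("alice", "secret") == "TermsOfUsePage"
    assert app.accept_terms() == "PersonalPage"
    assert "alice" in app.welcome_message("alice")


test_login_flow()
```

The point is that each numbered manual step becomes one call or one assertion, so the automated test stays traceable back to the written test case.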
This task identifies the key application areas that must be involved in testing. It should also identify the testing group's responsibilities for those areas. For example, testing might be responsible to development for integration and system testing, and to solution delivery for release testing.
3. Test Manager
4. Test Analysts
5. Project Manager
7. Analysts
8. Programmers
The first action is to list the testing tasks to be completed. This should be followed by a review of the tasks by all test team members. When consensus has been reached that the list is correct and complete, an individual team member must be assigned to each task. A final review based on each member's percentage of the workload should then be completed. MS Project makes this easy, as it has several reports that provide workload and other statistics.
Output: Team Work Plan - The work plan defines milestones and tentative completion dates for all assigned tasks. A project management tool such as Microsoft Project can make this task very easy, and the resulting document is a Gantt chart that illustrates who is responsible for what, and when.
Each testing activity in the plan below is listed with its coverage or frequency and a description.
Activity: Preconditions
Coverage or frequency: every public method; every public method in COMPONENT-NAME; all public methods that modify data
Description: We will use if-statements at the beginning of public methods to validate each argument value. This helps to document assumptions and catch invalid values before they can cause faults.
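For illustration, such if-statement precondition checks might look like the following; the transfer function and its rules are hypothetical, not part of the plan.

```python
def transfer(amount, balance):
    """Hypothetical example: if-statement preconditions at the top of a
    public method document assumptions and reject invalid arguments."""
    # Precondition checks: fail fast, before an invalid value causes a fault.
    if not isinstance(amount, (int, float)):
        raise TypeError("amount must be a number")
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("amount exceeds available balance")
    return balance - amount
```

Raising a descriptive exception at the boundary makes the method's assumptions explicit to both callers and reviewers.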
Activity: Assertions
Coverage or frequency: every private method; every private method in COMPONENT-NAME; all private methods that modify data
Description: Assertions will be used to validate all arguments to private methods. Since these methods are only called from our other methods, arguments passed to them should always be valid, unless our code is defective. Assertions will also be used to test class invariants and some postconditions.
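A sketch of this convention, using a hypothetical Account class: the public method validates with a real check, while the private method and the class invariant are guarded by assertions that only fire if our own code is defective.

```python
class Account:
    """Hypothetical class illustrating the policy above."""

    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        # Public method: real precondition check on external input.
        if amount <= 0:
            raise ValueError("amount must be positive")
        self._apply(amount)

    def _apply(self, amount):
        # Private method: only our code calls this, so a bad argument
        # here indicates a defect in our own code.
        assert amount > 0, "internal error: non-positive amount"
        self.balance += amount
        # Class invariant / postcondition.
        assert self.balance >= 0, "invariant violated: negative balance"
```

The split matters: exceptions protect against bad external input, while assertions document and check internal assumptions.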
Activity: Static analysis
Coverage or frequency: strict compiler warnings; automated style checking; XML validation; detect common errors
Description: We will use source code analysis tools to automatically detect errors. Style checkers will help make all of our code consistent with our coding standards. XML validation ensures that each XML document conforms to its DTD. Lint-like tools help detect common programming errors. E.g.: lint, lclint/splint, jlint, checkstyle, Jcsc, PyLint, PyChecker, Tidy.
Activity: Buddy review
Coverage or frequency: all changes to release branches; all changes to COMPONENT-NAME; all changes
Description: Whenever changes must be made to code on a release branch (e.g., to prepare a maintenance release), the change will be reviewed by another developer before it is committed. The goal is to make sure that fixes do not introduce new defects.
Activity: Review meetings
Coverage or frequency: weekly; once before release; every source file
Description: We will hold review meetings where developers perform formal inspections of selected code or documents. We choose to spend a small, predetermined amount of time and try to maximize the results by selecting review documents carefully. In the review process we will use and maintain a variety of checklists.
Activity: Unit testing
Coverage or frequency: 100% of public methods and 75% of statements; 100% of public methods; 75% of statements
Description: We will develop and maintain a unit test suite using the JUnit framework. We will consider the boundary conditions for each argument and test both sides of each boundary. Tests must be run and passed before each commit, and they will also be run by the testing team. Each public method will have at least one test, and the overall test suite will exercise at least 75% of all executable statements in the system.
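The plan names JUnit; the same boundary-condition style can be sketched in Python's unittest. The clamp function below is an illustrative stand-in for a real public method, and the tests exercise both sides of each boundary as the plan requires.

```python
import unittest


def clamp(value, low, high):
    """Illustrative function under test: restrict value to [low, high]."""
    return max(low, min(value, high))


class ClampBoundaryTest(unittest.TestCase):
    # Test both sides of each boundary, per the unit-testing policy.
    def test_lower_boundary(self):
        self.assertEqual(clamp(-1, 0, 10), 0)   # just below the boundary
        self.assertEqual(clamp(0, 0, 10), 0)    # exactly on the boundary

    def test_upper_boundary(self):
        self.assertEqual(clamp(10, 0, 10), 10)  # exactly on the boundary
        self.assertEqual(clamp(11, 0, 10), 10)  # just above the boundary


# Run the suite programmatically (equivalent to `python -m unittest`).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ClampBoundaryTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Running the suite before each commit, as the plan states, is what makes the 75% statement-coverage target enforceable in practice.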
Activity: Manual system testing
Coverage or frequency: 100% of UI screens and fields; 100% of specified requirements
Description: The QA team will author and maintain a detailed written suite of manual tests covering the entire system through the user interface. This plan will be detailed enough that a person could repeatably carry out the tests from the test suite document and other associated documents.
Activity: Automated system testing
Coverage or frequency: 100% of UI screens and fields; 100% of specified requirements
Description: The QA team will use a system test automation tool to author and maintain a suite of test scripts to test the entire system through the user interface.
Activity: Regression testing
Coverage or frequency: run all unit tests before each commit; run all unit tests nightly; add a new unit test when verifying fixes
Description: We will adopt a policy of frequently re-running all automated tests, including those that have previously been successful. This will help catch regressions (bugs that we thought were fixed, but that appear again).
Activity: Load, stress, and capacity testing
Coverage or frequency: simple load testing; detailed analysis of each scalability parameter
Description: We use a load testing tool and/or custom scripts to simulate heavy usage of the system. Load will be defined by scalability parameters such as the number of concurrent users, number of transactions per second, or number/size of data items stored/processed. We will verify that the system can handle loads within its capacity without crashing, producing incorrect results, mixing up results for distinct users, or corrupting the data. We will verify that when capacity limits are exceeded, the system safely rejects, ignores, or defers requests that it cannot handle.
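A custom load-generation script of the kind mentioned can be sketched as follows. The Store class stands in for the system under test, and the user and transaction counts are illustrative assumptions; the check at the end verifies that results for distinct users are neither mixed up nor lost.

```python
import threading

# Minimal load-test sketch: simulate concurrent users and verify the
# system neither loses nor mixes up per-user results under load.

class Store:
    """Stand-in for the system under test: a per-user transaction counter."""

    def __init__(self):
        self._lock = threading.Lock()
        self.counts = {}

    def record(self, user):
        with self._lock:                  # prevents corrupted counts
            self.counts[user] = self.counts.get(user, 0) + 1


def simulate(users=8, transactions_per_user=1000):
    store = Store()
    threads = [
        threading.Thread(
            target=lambda u=u: [store.record(u)
                                for _ in range(transactions_per_user)]
        )
        for u in range(users)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return store


store = simulate()
# Results for distinct users must not be mixed up, lost, or corrupted.
assert len(store.counts) == 8
assert all(n == 1000 for n in store.counts.values())
```

In a real load test the scalability parameters (concurrent users, transactions per second) would come from the capacity requirements rather than being hard-coded.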
Activity: Beta testing
Coverage or frequency: 4 current customers; 40 members of our developer network; 1000 members of the public
Description: We will involve outsiders in a beta test, or early access, program. We will give beta testers directions to focus on specific features of the system. We will actively follow up with beta testers to encourage them to report issues.
Activity: Instrumentation and monitoring
Coverage or frequency: monitor our ASP servers; remotely monitor customer servers
Description: As part of our SLA, we will monitor the behavior of servers to automatically detect service outages or performance degradation. We have policies and procedures in place for failure notification, escalation, and correction.
Activity: Field failure reports
Coverage or frequency: prompt users to report failures; automatically report failures
Description: We want to understand each post-deployment system failure and actively take steps to correct the defect. The system has built-in capabilities for gathering detailed information from each system failure (e.g., error message, stack traceback, operating system version). This information will be transmitted back to us so that we may analyze it and act on it.
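The failure-gathering capability described above can be sketched with only the Python standard library. The report fields follow the examples in the text (error message, stack traceback, operating system version); the function name and report shape are assumptions for illustration.

```python
import platform
import traceback

# Sketch of a built-in field-failure report: on an unexpected exception,
# gather the error message, stack traceback, and OS version into a
# structured report that could later be transmitted for analysis.

def build_failure_report(exc):
    return {
        "error_message": str(exc),
        "stack_traceback": "".join(
            traceback.format_exception(type(exc), exc, exc.__traceback__)
        ),
        "os_version": platform.platform(),
    }


try:
    {}["missing"]            # a deliberate failure, for illustration
except KeyError as exc:
    report = build_failure_report(exc)

# The report now holds the details needed for post-deployment analysis.
assert "missing" in report["error_message"]
assert "KeyError" in report["stack_traceback"]
```

A real implementation would add transmission, user consent, and scrubbing of sensitive data before anything leaves the user's machine.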