
Criteria for evaluating test cases

Another point of quality for test cases is how they're written. I generally require my teams to
write cases which contain the following (and I'm fine with letting them write 'titles only'
and returning to flesh them out later; as a matter of fact, on one-time projects I generally shy
away from requiring much more than that). A rough sketch of this checklist as code follows the list.

• Has a title which does NOT include 'path info' (e.g., "Setup:Windows:Installer
recognizes missing Windows Installer 3.x and auto-installs"). Keep the title short,
sweet, and to the point.
• Purpose: think of this as a mission statement. The first line of the description
field explains the goal of the test case, if it's different from the title or needs to be
expanded.

• Justification: this is also generally included in the title or purpose, but I want
each of my testers to explain why we would be spending $100, $500, or more to
run this test case. Why does it matter? If they can't justify it, should they
prioritize it?

• Clear, concise steps: "Click here, click there, enter this."

• One (or more - another topic for a blog someday) clear, recognizable validation
points, e.g., "VALIDATION: Windows begins installing the Windows Installer v3.1."
It pretty much has to be binary; leave it to management to decide what's a
gray area (e.g., if a site is supposed to handle 1,000 sessions per hour, it's binary -
the site handles that, or it doesn't. Management decides whether or not 750 sessions
per hour is acceptable).

• Prioritization: be serious... prioritize cases appropriately. If this case failed,
would this be a recall-class issue, would we add code to an update for it,
would we fix it in the next version, or would we never fix it? Yes, this is a bit of a
judgment call, but it's a valid way of looking at the case. Another approach is to
consider the priority of a bug in terms of data loss, lack of functionality,
inconvenience, or 'just a dumb bug'.
• Finally, I've flip-flopped worse than John Kerry on the idea of atomic cases.
Should we write a bazillion cases covering one instance of everything, or should
we write one super-case? I've come up with a description which I generally have
to coach my teams on during implementation. Basically, write a case which will
result in one bug. So, for instance, I would generally have a login success case, a
case for failed login due to an invalid password, a case for failed login due to a non-
existent user name, a case for an expired user name or password, etc. (see the
sketch after this list). It takes some understanding of the code, or at least an
assumption about the implementation. Again, use your judgment.
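
To make the checklist above concrete, here is a minimal sketch of a test case as a structured
record in Python. The TestCase dataclass, the Priority enum, and every field name are my own
illustration, not a format from the post or from any test-management tool; the example values
paraphrase the Windows Installer title used earlier.

from dataclasses import dataclass
from enum import Enum

class Priority(Enum):
    # Roughly: what would we do if this case failed?
    RECALL_CLASS = 1      # stop-ship / recall-class issue
    FIX_IN_UPDATE = 2     # add code to an update
    FIX_NEXT_VERSION = 3  # fix it in the next version
    NEVER_FIX = 4         # we would never fix it

@dataclass
class TestCase:
    title: str          # short, sweet, no 'path info'
    purpose: str        # mission statement, if the title needs expanding
    justification: str  # why spend the money to run this case?
    steps: list[str]    # clear, concise actions
    validation: str     # one binary, recognizable check
    priority: Priority

case = TestCase(
    title="Missing Windows Installer 3.x is auto-installed",
    purpose="Setup detects an absent Windows Installer 3.x and installs it itself.",
    justification="Setup is the first thing every customer runs; a failure here blocks everything else.",
    steps=[
        "Start from a clean OS image without Windows Installer 3.x",
        "Run setup.exe",
    ],
    validation="VALIDATION: Windows begins installing the Windows Installer v3.1",
    priority=Priority.RECALL_CLASS,
)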
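
And a sketch of the 'one case, one bug' idea from the last bullet, written as pytest-style
functions. The login() helper, its module path, and its return values are assumptions made up
for illustration, not anything from the post; the point is that each case targets one likely
bug and has exactly one binary validation point.

# Hypothetical system under test: login(username, password) returns
# "ok", "bad_password", "unknown_user", or "expired".
from app.auth import login  # assumed module path, not a real library

def test_login_success():
    assert login("alice", "correct-password") == "ok"

def test_login_fails_on_invalid_password():
    assert login("alice", "wrong-password") == "bad_password"

def test_login_fails_on_nonexistent_user():
    assert login("no-such-user", "whatever") == "unknown_user"

def test_login_fails_on_expired_credentials():
    assert login("expired-user", "old-password") == "expired"

Each function maps to at most one bug report, which keeps results easy to read and prioritize.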

I read the response that a good case is one that has a high probability of finding a bug.
Well... I see what the author is getting at, but I disagree with the statement if read at
face value. That implies a tester would 'filter' her case writing, probing more into a
developer's areas of weakness. That's not helpful. Hopefully your cases will cover the
project well enough that all the important bugs will be exposed, but there's no guarantee. I
think the middle ground is appropriate here - a good case 1) validates required
functionality (proves the app does what it should) and 2) probes areas where, if a bug is
found, the bug would be fixed (in a minimal QA environment) or product quality would be
advanced significantly (in a deeper quality environment).

BTW: one respondent to the question said a good test case is one which
brings you closer to your goal. Succinct!

http://thoughtsonqa.blogspot.com/2008/01/what-makes-good-test-case.html
