
LEADER

CONFESSIONS OF A CODER

An interesting website has recently been brought to my attention. On the site you will find all sorts of confessions from guilty developers about all manner of petty and not-so-petty coding crimes and misdemeanours. If it weren't for the fact that posts to the site (www.codingconfessional.com) are anonymous, I'm sure there wouldn't be much action there; either that, or there would be some serious consequences out there in developer world.
Before you head over there to either lambast your developer friends or confess to any testing crimes, though, perhaps it's worth taking a deep breath and remembering the lessons of last issue's Leader: those posts you make, those tweets you post, may all come back to haunt you (either in this life or once you have shuffled off this mortal coil).
While many areas are covered in the roll of shame, of special interest to us is the following comment from an amorous tester/developer: "While in college, I had a group project with four others. I was responsible for most of the heavy coding stuff. I had a curious bug which took me five hours into the night to find. I didn't fix it at the time, and the next day I asked a girl I liked to help me find it. Took six hours with us sitting close together, pretending I couldn't find it. The next day the group overran the deadline by four hours, none of them working on it." Deeply upsetting!

And this worrying and somewhat grammatically scrambled admission: "Sometimes, if some codes does fail occasionally I just put a loop that tries it again. Test automation at its finest!"

And then we have this worrying but all too familiar outburst: "I don't give a **** about UnitTests, not to mention TDD (test-driven development)! I haven't written a single UnitTest for a complex business application where I am the lead backend developer. Want to know why? My stuff works. It's because I know how to program. I don't need UnitTests to convince me that it works and it is reasonably well-designed, that only the greatest idiots can break it."

Finally, one sneaky poster comments: "I'm considering making a fake replica of this site so I can trick co-workers into anonymously confessing their sins to me." Beware, he may have already done so!

Until next time...

© 2013 31 Media Limited. All rights reserved.
TEST Magazine is edited, designed, and published by 31 Media Limited. No part of TEST Magazine may be reproduced, transmitted, stored electronically, distributed, or copied, in whole or part without the prior written consent of the publisher. A reprint service is available.
Opinions expressed in this journal do not necessarily reflect those of the editor or TEST Magazine or its publisher, 31 Media Limited.
ISSN 2040-01-60

JUNE 2013 | www.testmagazine.co.uk

Matt Bailey, Editor

TO ADVERTISE CONTACT:
Sarah Walsh
sarah.walsh@31media.co.uk
Tel: +44(0)203 668 6945

EDITORIAL & ADVERTISING ENQUIRIES


31 Media Ltd,
41-42 Daisy Business Park,
19-35 Sylvan Grove,
London, SE15 1PD
Tel: +44 (0) 870 863 6930
Fax: +44 (0) 870 085 8837
Email: info@31media.co.uk
Web: www.testmagazine.co.uk

PRODUCTION & DESIGN


Tina Harris
tina.harris@31media.co.uk

PRINTED BY
Pensord, Tram Road,
Pontllanfraith, Blackwood, NP12 2YA

EDITOR
Matthew Bailey
matthew.bailey@31media.co.uk
Tel: +44 (0)203 056 4599


CONTENTS

INSIDE THIS ISSUE


LEADER COLUMN

Confessions of a coder: what developers (and testers) get up to when we're not looking.

NEWS

THE EVOLUTION OF TEST AUTOMATION

6

Narayana Maruvada looks at how automation is


developing in an ever-changing testing landscape
and how automation frameworks have evolved in
response to these changes.

THE BENEFITS OF BDD

10

Roy de Kleijn assesses the benefits and challenges of


adopting a behaviour-driven approach to software
development.

MOBILE APP DEVELOPMENT

LEAVES TRADITIONAL PERFORMANCE


TESTING IN THE DUST

14

The entire model of traditional application testing is obsolete when it comes to mobile apps, according to Bruno Duval.

KEYWORD DRIVEN TESTING

18

California-based test automation expert Mark Lehky looks


at keyword-driven testing.

THE RESULTS ARE IN...

22

Analysis of the results of the online Software Testing Survey, provided by Borland's Brian Fangman, Julian Dobbins and Becky Wetherill.

TESTA NEWS EXTRA

26

More support for the TESTA software testing awards


as four new sponsors and five new judges come on
board.

THE TESTING WORLD

AGILE PEOPLE

28

Angelina Samaroo continues her analysis of Agile principles; this issue she focuses on the people and the motivation.

TEST PROFILE

32

INTERESTING DEVELOPMENTS
If the inspiration to create innovative testing products is Borland's lifeblood then, as development director, Mark Conway must keep the arteries of this software company busy. We found out what makes him tick.


SWEATING YOUR IT ASSETS

TESTING AND BIG DATA

36

In a challenging economic climate you need to make the most of what you've got. Thomas Coles says it's time to sweat those assets.

AVOIDING TEST SLIPPAGE

38

Poor test environment management is the common


denominator when there is test slippage in large
organisations according to Sean Hamawi.

WATCH WHAT YOU TWEET!

42

There used to be a clearer line between your professional


and personal life, but with the use of social networking
sites for business and pleasure growing, the line is
increasingly blurred. Gemma Murphy reports.

DESIGN FOR TEST -

COMPREHENSION MATTERS

43

This issue Mike Holcombe investigates automatic test set


generation in particular and test automation in general.

BRINGING IT ALL BACK HOME

44

You could be forgiven for thinking that a recession would


drive further off-shoring but it seems that CIOs are no longer
prepared to take the risk. Mark Bargh reports.

LAST WORD

SPECIAL OFFER!

48

Dave Whalen assesses the value of free automated


testing tools.


NEWS

GOOGLE GLASS
BRINGS INTERNET OF
EVERYTHING CLOSER
Google Glass, the wearable computer with a head-mounted display currently under development by the company, which is set to offer continuous augmented reality to the user, will bring the goal of the Internet of Everything closer, according to Mark Dunleavy, the UK managing director at data integration specialist Informatica.
"Google Glass is taking a giant leap in the journey towards the Internet of Everything," says Dunleavy. "The security implications of this type of technology are rightly being scrutinised as we accelerate towards the next generation of technology. Whilst Google Glass is still in its infancy, the possibilities with this new technology are endless. The ability to scan barcodes and instantly get information about a product, or showcase traffic blocks on a GPS route, will change our lives. Yet there is a stumbling block. How do we get the new innovations to talk to retailers, or a GPS system to convey traffic updates to a new device?
"The Internet of Everything is a race: to connect the vast range of things and processes in the physical and digital worlds. Innovation has jumped ahead of the integration of everything. We need everything to be able to talk to everything else, in the same language. And this is no mean feat. The fact is, data integration underpins absolutely everything, every device and every system that is connected in some way to another object. The trick is to get this right, and in real time.
"For the Internet of Everything to work," concludes Dunleavy, "the industry needs to get data integration and security right. Organisations often underestimate the need for data to be integrated in real time, and think about this after the fact. Once data is transferring smoothly from one organisation to another, more people will want to know who holds information on them and how and when it is used. And the devices will need to be enabled to participate in our ever-changing world."


KEEPING UP WITH THE MOBILE APPS GOLD RUSH


It currently takes an average of five months to deliver a new version of a mobile application for an existing mobile device update, according to an independent global research study undertaken by Vanson Bourne and commissioned by Borland, confirming that developers cannot keep up with device vendors releasing updates every couple of months.
Of the 590 CIOs and IT directors polled
from nine countries around the globe,
the majority (79 percent) confirmed the
teams delivering these mobile apps are a
mix of in-house and outsourced support.
However, a third labelled their mobile
development team as sluggish, middling
or outpaced, showing a distinct lack of
faith in their ability to develop and deliver
against business requirements. This poses
a particular challenge given respondents
predict a 50 percent increase in the
number of business apps that need to be
made accessible on mobile devices over
the next three years (from 31 percent in
2013 to 46 percent in 2016).
The ability to deliver timely mobile apps
presents an even greater problem to
mainframe organizations. 78 percent
of CIOs said that having a mainframe
makes developing or implementing
mobile applications that work with their
existing systems more difficult. 86 percent
confirmed mobile application vendors
and developers are more reticent to
work with mainframe organisations.

These findings confirm there is a real need to bridge the worlds of mainframe and mobile to ease the challenges for mainframe organisations.
CIOs made a clear choice to back
Android as their mobile operating
system, with 78 percent of organisations
developing their mobile apps for this
system today. Apple iOS came second,
with 65 percent developing for it, and
Windows Phone third at 52 percent.
Although Android is expected to maintain its pole position in two years' time with 77 percent, iOS and Windows Phone will close the gap with 71 percent and 65 percent respectively. CIOs are not predicting a comeback for BlackBerry OS. Lagging fourth at 36 percent, respondents estimate a miserly one percent growth to 37 percent in two years' time. Unsurprisingly, Symbian is the clear loser, with only seven percent choosing to develop for the operating system.
Archie Roboostoff, Borland Solutions Portfolio Director at Micro Focus, said: "Mobile apps play a critical role in every organization's business strategy today. However, the consumer in all of us is demanding more, and companies are under increasing pressure to release higher quality mobile apps faster and more often than ever before. A shift in thinking is needed when it comes to mobile quality, performance and development."

SOFTWARE TESTING MOVES INTO THE CLOUD


Software quality specialist SQS has revealed plans to offer cloud-based testing services. According to the company, its Quality Cloud offers four standardised services that can be accessed at any time. It says the flexible services can be configured to ensure client-specific data protection and security levels, and it can also combine the cloud services with training and consultancy.
Quality Cloud will offer four main service areas: test management; performance and stress testing (in partnership with HP tools); security testing; and analysis of code, which is carried out with the aid of the SQS Code Quality Testsuite. Services are billed according to usage time. For the packages that include consulting or training as well as cloud services, SQS quotes fixed prices based on the agreed scope of services to be provided.
"Right from when they first start software testing in the cloud, companies have to have a clear definition of their testing requirements and processes. This is the only way to make the most of the potential savings from the cloud," says Kai-Uwe Gawlik, global head of service management at SQS. "That is why we offer consultancy and training along with the SQS Quality Cloud itself. SQS consulting is based on over 30 years' experience in software testing."


DIGITAL SKILLS ACADEMY LAUNCHED
With recent research from e-skills UK showing that optimising ICT could raise the nation's GVA by £50bn over the next five to seven years, improving the skills of individuals and SMEs is an important step in generating growth and prosperity.
Skills Minister Matthew Hancock MP backs the aims of the Digital Skills Academy: "Nowadays, anyone looking for work needs to be technology-savvy. These exciting new developments will make it easier for people at all stages of their careers to gain the skills they need, and will give employers a simple but powerful way to help their staff."

The National Skills Academy for IT is marking the creation of its new Digital Skills Academy with two new products: the Go ON you can do IT App and the Enterprise IT Guide. Both products support the Digital Skills Academy's mission to help individuals and businesses make the most of technology.

Graham Walker, Go ON UK CEO, says: "It's hard to imagine life without the web. Yet a full 7.4 million people in the UK have never been online,
SOCIAL COLLABORATION ON THE RISE


Results from a global survey of 4,000 end users and 1,000 business and IT decision-makers in 22 countries on the adoption of social collaboration technologies have revealed that the majority of businesses are using social networking technologies in the enterprise.

However, the research also reveals common misconceptions when it comes to social collaboration. Those who have adopted social networking technologies in their company reported using consumer-oriented social technologies including Facebook (74 percent) for collaboration at twice the rate of Microsoft SharePoint (39 percent), four times more than IBM Open Connections (17 percent), and six times more than Salesforce Chatter (12 percent).
But data shows this trend may change in the next 12 months. Decision-makers planning to adopt social technologies report Microsoft SharePoint (23 percent) and Salesforce Chatter (23 percent) at the top of their list of collaboration deployments planned in the coming year. Though Facebook is currently ranked No. 1 among social collaboration technologies in use today (74 percent), when asked what social tools businesses wanted to adopt in the next year, Facebook fell to the very end of the list, with only eight percent of decision-maker respondents noting it as a priority.
"Businesses have a variety of needs and expectations driving adoption of collaboration tools. And the consumerisation of IT has raised expectations among employees about the social technologies that they can use to collaborate inside and outside of the organization," said André Huizing, global collaboration lead at the company that commissioned the research, Avanade. "To fully maximise the opportunity around these tools, successful collaboration strategies will align closely to the goals of the business, while prioritizing the needs of end users, and be supported by the right tools, training and policies to promote adoption throughout the organization. Furthermore, as reported in the 2013 Accenture Technology Vision, integrating collaboration into business processes will also transform the way work is done in the enterprise. We see huge opportunity for business results with the right social collaboration strategy in place."


and 16 million people do not have the Basic Online Skills to confidently take advantage. That's why we're delighted to support the launch of the Digital Skills Academy, and believe the Go ON you can do IT App will help many more people and organisations get the skills they need to benefit from the web."
"e-skills UK has always been proud to be a leader in the field of digital literacy," adds Karen Price, CEO of e-skills UK. "The Digital Skills Academy is an important step forward in achieving technical skills for all, something that's vital for economic growth, but also for people's personal engagement and satisfaction."

STATE-SPONSORED
ATTACKS CONTINUE
News that the US has been bombarded by attacks from Chinese hackers using different techniques to steal data from scores of American companies and government agencies indicates the importance that cyberspace now has in government circles, according to IT security specialist Lancope. The company's director of security research, Tom Cross, comments that the fact that state-sponsored attacks are on the rise means that IT professionals, and their managers, need to review their technology defences.
"We're hearing more and more about state-sponsored attacks, so you can be sure that this form of technology subversion and compromise is now firmly part of the modern security threat landscape. The reality is, however, that governments and their agencies have access to the very latest attack techniques and technologies, meaning that organisations need to significantly raise the bar on their security defences," he said.
"As we said in our report on APT attack vectors, few organisations currently view their incident responders as the front line in their defensive posture, yet it is obvious from the evolution of APTs and, of course, state-sponsored attacks that intelligence forms a key role when developing a security strategy to better defend your business's data and allied IT assets," he added.
The Lancope director of security research went on to say that this means the incident response team should become a central part of the defences that organisations employ to protect their network.


COVER STORY
NARAYANA MARUVADA
SENIOR QA ENGINEER
VALUELABS

THE EVOLUTION OF TEST


AUTOMATION
Narayana Maruvada, a senior QA engineer with
ValueLabs, looks at how automation is developing in an
ever-changing testing landscape and how automation
frameworks have evolved in response to these changes.

Assuring the quality of software has become very challenging, especially when you look at the size and complexity of applications, which have been increasing steadily, even exponentially, with the onset of demand for rapidly developed and deployed web clients, while customers and end-users are becoming more and more demanding.
Under these circumstances, verification and validation activities tend to lose their momentum and become compromised, since testing cycles and schedules themselves are squeezed ever tighter. In order to keep pace with ever-changing quality dynamics and end-user expectations, conventional manual testing may not be a suitable option, and test automation becomes an increasingly critical and strategic necessity.

APPRECIATING BENEFITS OF
TEST AUTOMATION
To get started, the following is a list of the benefits of test automation:


PAGE 6

Running more tests more often: Test


automation means faster test execution
which encourages more test iterations.
Further, it also makes creating new test
cases a quick and easy process.
Consistency and repeatability of tests:
If tests always run the same way, test
results can be consistently compared
to results from previous iterations.
Importantly, tests can be repeated in
different environments.
Reuse of tests: Generally, reusing tests
or test scripts from previous projects will
give a kick start to a new project.

JUNE 2013 | www.testmagazine.co.uk


Optimum use of resources: Automating a repeated or regular checklist of tasks releases test engineers to focus on more demanding work.
Quicker time-to-market: Just as with reusing tests and shortening execution times, automation will hasten feedback cycles to developers, ultimately shortening the time to market.
Increased confidence in application roll-out: This could be the most promising benefit of test automation, since successfully running an extensive set of tests often, consistently and on different environments increases confidence that the product is really ready to be released or rolled out.
Effortless regression testing: With test automation, you can run previously created tests against any new functionality or environment without any extra effort, which clearly makes testing more efficient.

COMMON PROBLEMS WITH
TEST AUTOMATION
Of course, there is another side to test automation apart from all of the benefits it promises. There are general problems which get introduced if the tester forgets the fact that any larger test automation project is typically a software project in its own right; and just as any software project can fail if it does not adhere to processes and is not managed properly, test automation projects are no different. The list below outlines the common problems observed with test automation:
Unrealistic expectations: If managers believe that test automation will solve all their testing problems and make software quality better, they may be disappointed. Test automation experts and their teams should help managers with managing their expectations.
Misconceptions about test automation (will it find lots of new defects?): It has to be clearly understood that after an automated test has been run successfully once, it is not very likely to find new bugs unless the tested functionality changes. Test automation will normally find more defects while tests are being developed than when tests are re-executed.
Poor testing practice: If the existing testing practices and processes are inadequate, then it is better to start improving them rather than bringing in test automation to solve the problem.
Test maintenance: Generally, when the SUT changes, the corresponding tests change. Conventional manual test engineers can handle even major changes without any problems, but the same cannot be said when testing is automated, since even the slightest change to the SUT may cause an automated test to fail. Further, if maintaining a test automation system takes more time than testing manually, taking all the changes into account, then automation should be abandoned. This can also be the case when new features are added to a system.
Technical challenges: Building an automated testing system, or augmenting an existing system with automation, is definitely a technical challenge which should be given thoughtful consideration, since it is very unlikely to proceed without any problems. A strong team with the requisite technical skill-set should be put together to build it and troubleshoot any issues.


TEST AUTOMATION FAILURE


An organisation that has set out to evaluate the various commercial automation tools available in the market should consult the technical sales staff from the various vendors, watch their videos, presentations and demonstrations, and perform thorough internal evaluations of each tool available.
After thorough deliberation, one organisation chose a particular vendor and placed an initial order worth millions of dollars for test automation products, maintenance contracts and onsite training. Finally, the tools and training were delivered and distributed throughout the organisation into its various test departments, where each was working on its own project.
Since there were a number of projects, none of which had anything in common, the applications being vastly different, and each with individual schedules and deadlines to meet, every one of the test departments began separately coding functionally identical common libraries: routines for setting up the appropriate test environment and for accessing the requisite programming interfaces, file-handling routines, string utilities, database access routines and so on. This eventually led to code and design duplication and increased complexity.
Likewise, for their respective test designs, they each captured application-specific interactive tests using the typical capture/replay tools. Some test groups went on to the next step and modularised key reusable sections, creating reusable libraries of application-specific test functions or scenarios. This was intended to reduce the amount of code duplication and maintenance that occurs in purely captured test scripts. For some of the projects this might have been very appropriate and helpful if done with sufficient planning and an appropriate automation framework, but this was seldom the case.
With all these modularised libraries, testers could manage to create functional automated tests in the automation tool's proprietary scripting language via a combination of interactive test capture, manual editing and manual scripting.
A major problem occurred because the separate test teams did not think past their own individual projects, and although they were each setting up a reusable framework, each was completely unique, even where the common library functions were the same. This meant duplicate development, duplicate debugging and duplicate maintenance. Understandably, each separate project still had looming deadlines, and each was forced to limit its automation efforts in order to get real testing done.
Above all, as changes to the various applications started breaking the existing automated tests, script maintenance and debugging became a significant challenge to the teams. Additionally, upgrades to the automation tools themselves caused significant and unexpected script failures. In some cases, reverting (downgrading) to older versions of the automation tools was indicated, and even made mandatory. Resource allocation for continued test development and test code maintenance became a difficult issue.



Eventually, most of these test automation projects were


put on hold and the teams reviewed their processes
to understand and deliberate on what went wrong
and how to fix it and still have a comprehensive test
automation solution.

EVOLUTION OF TEST
AUTOMATION FRAMEWORKS

Generally, a testing team is actually a pool of testers supporting many diverse applications completely unrelated to each other. If each project implements a unique test strategy, then testers moving among different projects can potentially be more of a hindrance than a help: the time needed for a tester to become productive in a new environment may not be available, and this can reduce the productivity of new testers. Hence, in order to handle these situations, one should look forward and strive to develop a single framework that will grow with each application and every diverse project that challenges the team and organisation. A collective thought process along these lines has led to the evolution of frameworks which address test automation requirements.
These test automation frameworks have evolved over time and the following are the three major generations:
1. First-generation frameworks are unstructured, have test data embedded in the scripts, and there is normally one script per test case. Scripts are mainly generated using capture-and-replay tools but may also be manually coded. This kind of script is virtually unmaintainable, and when the tested system changes the scripts need to be captured or written again.
2. Second-generation frameworks: scripts are well-designed, modular, robust, documented and thus maintainable. Scripts handle not only test execution but also other activities, for example set-up, clean-up, and error detection and recovery. Test data is still embedded in the scripts, though, and there is one driver script per test case. Code is mostly written manually, and both implementation and maintenance require programming skills which test engineers may not have.
3. Third-generation frameworks have the same good characteristics found in the second generation; however, they go further by taking test data out of the scripts, which has two significant benefits. The first is that one driver script may execute multiple similar test cases simply by altering the data, making new tests easy to add. The second is that test design and framework implementation become separate tasks: the former can be given to someone with the domain knowledge and the latter to someone with programming skills. This concept is called data-driven testing. Keyword-driven testing takes the concept even further by moving the keywords that drive test execution into the test data as well.
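The data-driven idea described above can be sketched in a few lines: a single driver script separates test logic from test data, so a new case is a data change rather than a new script. The function under test (a discount calculator) and the case table below are hypothetical illustrations, not drawn from the article.

```python
# Minimal data-driven test driver: one script, many cases.
# The function under test and the case table are illustrative only.

def apply_discount(price, percent):
    """System under test: return price reduced by percent."""
    return round(price * (1 - percent / 100), 2)

# Test data lives outside the test logic; in practice this table
# might be loaded from a CSV or spreadsheet maintained by someone
# with domain knowledge rather than programming skills.
CASES = [
    # (price, percent, expected)
    (100.0, 10, 90.0),
    (80.0, 25, 60.0),
    (19.99, 0, 19.99),
]

def run_cases(cases):
    """Single driver executed once per row of test data."""
    results = []
    for price, percent, expected in cases:
        actual = apply_discount(price, percent)
        results.append((price, percent, actual == expected))
    return results

for price, percent, passed in run_cases(CASES):
    print(price, percent, "PASS" if passed else "FAIL")
```

A keyword-driven framework pushes this one step further: the data rows would also name the action to perform (the keyword), so the driver dispatches on the data instead of hard-coding a single operation.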

A SENSITIVE TRIPOD
There could be various underlying reasons that govern the overall success or failure of test automation, but predominant among them are test strategy and design. You need to ensure that there is always a separation of concerns within the test design, strategy and test automation framework; i.e., one must develop a highly flexible, reusable and manageable test strategy that fits any test automation requirement seamlessly. To achieve this, and to make the most out of testing, one needs to adhere to the following essential guiding principles when developing the overall test strategy:
1. Test automation is a full-time effort
This refers to the fact that the test framework design and the coding of that design together require significant front-loaded time and effort. This is not something that someone can do when they have a little extra time here or there, or between projects. It demands consistent effort: the test framework must be well thought out, documented, reviewed and tested. Typically, it is expected to undergo the same phases as any full software development project.
2. Test design and the test framework are two
separate entities
The test design details how the particular functions and
features of our application will be tested. It will tell us
what to do, how and when to do it, what data to use
as input and what results we expect to find. Generally,
all of this is specific to the particular application or item
being tested and this requires knowledge of whether the
application will be tested automatically or manually. On
the other hand, the test framework, or specifically, the
test automation framework is an execution environment
for automated tests. It is the overall system in which
our tests will be automated. The development of this
framework requires completely different technical skills
than those needed for the test design.
3. The Test framework should always be
application-independent.
Although applications are different, the components
that comprise them in general, are not. Hence, one
should focus on the automation framework to deal with
the common components that make up applications.
By doing this, one can remove all application-specific
context from the framework and reuse virtually
everything we develop for every application that comes
through the automated test process.
From a design standpoint, nearly all applications come
with some form of menu system. They also have buttons
to push, boxes to check, lists to view, and so on. So, in a
typical automation tool script there is, generally, a very
small number of component functions for each type of
component. These functions work with the component
objects independently of the applications that contain them.
Conventionally, captured automation scripts are filled with
thousands of calls to these component functions. So the
tools already exist to achieve application independence.
The problem is that most of these scripts construct the
function calls using application-specific, hard-coded values.
This immediately reduces their effectiveness as
application-independent constructs.
Furthermore, from a technical standpoint, the functions
by themselves are prone to failure unless a very specific
application state or synchronization exists at the time
they are executed. There is little error correction or
prevention built into these functions. To deal with
this in conventional scripts, one must place additional
code before and/or after the command, or a set of
commands, to ensure the proper application state and
synchronization is maintained. Further, for maximum
robustness, one would have to code these state and
synchronization tests for every component function call in
the scripts. Realistically, one could never afford to do this,
since it would make the scripts huge, nearly unreadable,
and difficult to maintain. Yet forgoing this extra
effort increases the possibility of script failure.
JUNE 2013 | www.testmagazine.co.uk

The solution is to develop a truly application-independent
framework for these component functions. This allows us to
implement that extra effort just once, and execute it for
every call to any component function. The framework
should handle all the details of ensuring we have the
correct window, verifying that the element of interest is in the
proper state, doing something with that element, and
logging the success or failure of the entire activity.
One can achieve this by using variables, and by
providing application-specific data to our application-independent
framework. In essence, we provide our completed test
designs as executable input into our
automation framework.
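The idea can be sketched as a single framework-level wrapper that performs the window check, the action, and the logging for every component-function call. All names below are invented for illustration and not taken from any particular tool:

```java
import java.util.logging.Logger;

// Hypothetical application-independent wrapper: every component-function
// call goes through run(), so the state check and logging are written once.
public class ComponentRunner {
    private static final Logger LOG = Logger.getLogger("framework");

    // Stand-in for the tool-specific check that the right window is active.
    interface WindowCheck { boolean isActive(String window); }
    // Stand-in for a low-level component action (click, type, check...).
    interface Action { boolean perform(String component); }

    private final WindowCheck windows;

    public ComponentRunner(WindowCheck windows) { this.windows = windows; }

    /** Runs one component function with the shared pre-check and logging. */
    public boolean run(String window, String component, Action action) {
        if (!windows.isActive(window)) {        // shared synchronization check
            LOG.severe("window not ready: " + window);
            return false;
        }
        boolean ok = action.perform(component); // the actual component function
        LOG.info((ok ? "PASS " : "FAIL ") + window + "." + component);
        return ok;
    }
}
```

A script line then becomes data (window, component, action) passed through `run()`, so the state-check and logging code exists exactly once instead of around every call.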
4. The Test framework must be easy to expand, maintain,
and perpetuate
One of the primary goals should be to have a highly
modular and maintainable framework. Generally, each
module should be independent and separate from
all the other modules - what happens inside one is of
no concern to the others. With this modular black-box
approach, the functionality available within each
module can be readily expanded without affecting any
other part of the system. This makes code maintenance
much simpler. Additionally, the complexity of any one
module will likely be quite minimal. However, modularity
alone will not be enough to ensure a highly maintainable
framework. Just like any good software project, the
framework must be fully documented and published;
without adequate, published documentation it will
be very difficult for anyone to decipher what the
framework is intended or designed to do.
5. The Test strategy/design vocabulary should be test
framework independent
Generally, the overall test strategy will define the format
and low-level vocabulary that one uses to test
all applications, much like an automation tool
defines the format and syntax of the scripting
language it provides. The vocabulary, however, will
be independent of any particular test framework
employed. At times, the same vocabulary will migrate
with us from framework to framework, and application
to application. This means for example, the syntax used
to click a button will be the same regardless of the tool
we use to execute the instruction or the application
that contains the button. The test design for a
particular application, however, will define a high-level
vocabulary that is specific to that application. While this
high-level vocabulary will be application specific, it is still
independent of the test framework used to execute it.
This means that the high-level instruction to login to our
website with a particular user ID and password will be the
same regardless of the tool we use to execute it.
Further, when one provides all the instructions necessary
to test a particular application, one should be able
to use the exact same instructions on any number of
different framework implementations capable of testing
that application. This means that the overall test strategy
should not only facilitate test automation, it should
also support manual testing. Consequently, the format
and vocabulary used to test the applications
should be intuitive enough for any novice to
comprehend and execute.
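As a sketch of this layering (the vocabulary, field names and the %n placeholder convention are all invented here, not taken from any tool), the high-level login instruction could expand into the low-level, tool-independent vocabulary like this:

```java
import java.util.List;
import java.util.Map;

// Sketch: an application-specific high-level instruction ("Login ...") expands
// into the strategy's low-level vocabulary (Type, Click), which any framework
// - or a manual tester - can execute.
public class VocabularyExpander {
    // %1, %2... mark where the instruction's arguments are substituted.
    private static final Map<String, List<String>> HIGH_LEVEL = Map.of(
        "Login", List.of(
            "Type userIdField %1",
            "Type passwordField %2",
            "Click loginButton"));

    /** Expands one high-level instruction into low-level vocabulary steps. */
    public static List<String> expand(String instruction) {
        String[] parts = instruction.split("\\s+");
        List<String> steps = HIGH_LEVEL.get(parts[0]);
        if (steps == null) throw new IllegalArgumentException("unknown: " + parts[0]);
        return steps.stream().map(step -> {
            String s = step;
            for (int i = 1; i < parts.length; i++) s = s.replace("%" + i, parts[i]);
            return s;
        }).toList();
    }
}
```

The same `Login` instruction stays valid whichever tool (or person) eventually executes the `Type` and `Click` steps.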
6. The strategy/design should remove most testers from
the complexities of the test framework
In reality, one cannot expect all the test personnel to
become proficient in the use of the automation tools
that fit into any of the existing suitable test frameworks.
So, the test strategy and/or test design should act as an
abstraction layer with which automation testers can plan
and deploy the requisite tools and utilities necessary to
automate the tests without having any concern about
designing the tests.


BEHAVIOUR DRIVEN DEVELOPMENT

ROY DE KLEIJN
TECHNICAL TEST SPECIALIST
WWW.RDEKLEIJN.NL

THE BENEFITS OF BDD


Technical test specialist Roy de Kleijn assesses the benefits and challenges
of adopting a behaviour-driven approach to software development and
explains how you can use BDD to avoid problems with misinterpreted
requirements and wrongly designed and tested software.

A common challenge with traditional software
development is that all parties involved
speak different professional languages. This
can cause misinterpreted requirements and
wrongly designed, developed and tested
software products which don't meet expectations.
Another common phenomenon is that functionality gets
developed which doesn't add any (immediate) business
value. This is a waste of time and money. This article will
explain how you can use Behaviour Driven Development
to avoid such problems. There is a hands-on example, in
case reading this article has made you eager to try it yourself.

BEHAVIOUR DRIVEN DEVELOPMENT:
TEST DRIVEN DEVELOPMENT FROM A DIFFERENT ANGLE

Behaviour Driven Development (BDD) is an Agile
software development approach, where business
value is developed and delivered in short iterations.
This approach focuses on improving communication
between all relevant parties involved,
in order for them to gain a common understanding of
what has to be built and which requirements have to be
met. It is important that business representatives, business
analysts, developers, testers and other relevant
parties collaborate well with each other.
A good practice for achieving common understanding is
to organise an interactive session with all parties involved,
so they can define the acceptance criteria together in
an open conversation. It is important that everybody can
give his or her input, ensuring no surprises arise during
development and, more importantly, after delivery.

FIG 1: Behaviour Driven Development process.

The process of BDD is similar to the process of Test Driven
Development (TDD), with the main difference being
that BDD is acceptance-test driven and TDD is unit-test
driven. A characteristic of BDD is that it is an outside-in
approach: the product is designed,
developed and tested from the outside, in contrast
to TDD (an inside-out approach) whereby the product
is designed, developed and tested from the inside
with unit tests. Unit testing tells you that the product is
developed in the right way, but not whether the right product is
developed. The outside-in approach makes it suitable
for testing both new and legacy systems, because you
approach the application from the outside and you don't
need to know about the application code and structure.

At a high level we can describe the process of BDD as
follows. First the functional, (automatically) executable
acceptance tests get written. For each scenario, we have
to walk through a number of steps (Figure 1). The inner circle
reflects the implementation of the steps per scenario, which
have to go through the following states: failure of the step,
because there is no test object (red); then just enough
application code is written to let the step pass (green); finally
we can refactor the implemented code (blue). Once all
steps are implemented you will drop into the outer circle,
where the final states will be: all steps are implemented, so
the scenario will pass (green); to make it more maintainable
and robust we can start refactoring the implemented code
(blue) again. This process is repeated for all the scenarios
(acceptance criteria).
Using this approach we implement the minimum amount
of application code needed to satisfy the requirements. Even with
the BDD approach, it is recommended to implement and
test the application code using the TDD approach.

STORIES/ACCEPTANCE CRITERIA

When writing the acceptance criteria you will
immediately see that BDD is focused on raising awareness
of the added business value we have to realise in a
product.
A Behaviour Driven Development story is described with
the following syntax:
In order to <receive benefit> as a <role>, I
want <goal/desire>
Example:
In order to make proper calculations
as a mathematician,
I want to perform mathematical operations
This functional description is written by and/or on behalf
of representatives of the business. Strikingly, the
intended purpose is described first, since this is what it is
all about. Based on these stories the acceptance criteria
are written by the team in the format of scenarios. It is
important that the right people are involved, so that all
demands are put on the table and you don't rely on one
single person, but rather on the team.
Scenarios have a uniform syntax:
Scenario: <scenario title>
Given <pre-condition>
When <action>
And <optional follow-up action>
Then <post-condition>
This format has the advantage that it is easy to read and
understand for all people involved, so they discuss the
same things. In addition, this notation makes it possible to
parse the scenarios with a tool. Scenarios are surrounded
by examples where needed. Examples are tables with test
data. Each line in a table is a test and may look like this:
Examples:
|username|password|
|testerA|passwordA|
|testerB|passwordB|
Example:
Scenario: calculating with two numbers
Given I open the calculator
When I enter <input1> into the calculator
And I press <button>
And I enter <input2> into the calculator
And I press the result button
Then the result <result> is displayed
Examples:
|input1|button|input2|result|
|15|*|2|30|
|30|/|2|15|
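Each row of the examples table drives one run of the scenario. Stripped of any BDD framework, that data-driven loop looks roughly like this (plain Java; class and method names are invented for the sketch):

```java
// Framework-free sketch: run the calculator scenario once per examples row.
public class CalculatorScenario {
    // Stand-in for pressing an operator button on the calculator under test.
    static int apply(int a, String button, int b) {
        switch (button) {
            case "*": return a * b;
            case "/": return a / b;
            case "+": return a + b;
            case "-": return a - b;
            default: throw new IllegalArgumentException(button);
        }
    }

    /** Runs the scenario for each table row: |input1|button|input2|result| */
    public static boolean runExamples(String[] rows) {
        for (String row : rows) {
            // Splitting "|15|*|2|30|" yields "", "15", "*", "2", "30".
            String[] cells = row.split("\\|");
            int input1 = Integer.parseInt(cells[1]);
            int input2 = Integer.parseInt(cells[3]);
            int expected = Integer.parseInt(cells[4]);
            if (apply(input1, cells[2], input2) != expected) return false;
        }
        return true;
    }
}
```

A BDD framework does the same iteration for you, reporting each row as a separate pass or fail.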


EXECUTABLE SPECIFICATION
The main advantage of Behaviour Driven Development
is that the scenarios, written as Given / When / Then
steps, are mapped to code. In this way, we create
executable specifications. Most BDD frameworks
provide the functionality to execute the stories
immediately, so an empty skeleton is generated and
you only have to implement the actual test code.
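Under the hood, a BDD framework matches each step sentence against registered patterns and dispatches to test code; unmatched steps are what the generated skeleton asks you to implement. A minimal, framework-free sketch of that matching (the patterns and the log are invented for illustration, and a real framework such as JBehave uses annotations instead):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Consumer;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of step-to-code mapping: each registered pattern binds a step
// sentence to a piece of test code; capture groups carry the parameters.
public class StepBinder {
    private final Map<Pattern, Consumer<Matcher>> bindings = new LinkedHashMap<>();
    final StringBuilder log = new StringBuilder(); // stands in for real test code

    public StepBinder() {
        bind("Given I open the calculator", m -> log.append("open;"));
        bind("When I enter (\\d+) into the calculator",
             m -> log.append("enter ").append(m.group(1)).append(";"));
    }

    private void bind(String regex, Consumer<Matcher> code) {
        bindings.put(Pattern.compile(regex), code);
    }

    /** Runs one step sentence; false means no binding matched (skeleton needed). */
    public boolean run(String step) {
        for (Map.Entry<Pattern, Consumer<Matcher>> e : bindings.entrySet()) {
            Matcher m = e.getKey().matcher(step);
            if (m.matches()) { e.getValue().accept(m); return true; }
        }
        return false;
    }
}
```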


LIVING DOCUMENTATION
The specification is literally revealed at every
test execution. Each line of the specification turns
green or red depending on the result. In this way you
get living documentation, and you can see
the actual state of the application at any time. In
theory, any other functional description of the
application can be replaced by it.

You obtain the best results when
you hook BDD up to a Continuous
Integration (CI) system; this way you
have immediate feedback on the
actual state of the application after
each and every commit. You will instantly
know whether the implemented functionality has a
good or a bad effect on existing functionality. The
great thing is, with BDD the results are written in a
uniform and comprehensible manner and are available
to the entire organisation.

EXAMPLE PROJECT
I created an example project on GitHub, which you can use to start with Behaviour Driven Development. The project is
based on JBehave, which is a BDD framework in Java. The project can be used to test web applications, supports
parallel test execution, and takes a screenshot if something unexpected occurs.

SOFTWARE TO BE INSTALLED
1. Eclipse IDE for Java Developers (www.eclipse.org/downloads/).
This is the development environment in which we develop our tests.
2. Maven - in Eclipse: Help -> Eclipse Marketplace ->
search for: Maven Integration for Eclipse.
Maven makes it possible to handle project
dependencies in an efficient way.
3. JBehave plugin - follow the instructions at
https://github.com/Arnauld/jbehave-eclipse-plugin.
The JBehave plugin makes it easy to write stories.

IMPORT THE EXAMPLE PROJECT
1. Download the source code from the following
location: http://roydekleijn.github.io/Spring-JbehaveWebDriver-Example/
2. Extract the ZIP file to a folder of your choice.
3. (In Eclipse) right-click in the Package Explorer
panel and choose: Import -> Existing Maven Projects.
4. Choose the folder where you stored the files as
root directory.
5. Go through the wizard and click Finish.

FIG 2: Behaviour Driven Development process

STRUCTURE OF THE EXAMPLE PROJECT
The project is divided into three different parts:
org.google.pages - This package contains an
abstraction of the application under test.
org.google.steps - This package contains the mapping
between the textual sentences and code.
org/google/web - This folder contains the textual stories.

FIG 3: Project structure

EXECUTION OF THE SPECIFICATIONS
You can execute the stories as follows:
1. Navigate to Run -> Run Configurations;
2. Right-click on Maven Build and choose New;
3. Select the project by clicking Browse Workspace;
4. Enter the following command in the Goals field:
integration-test -Dgroup=google -Dbrowser=firefox


MOBILE PERFORMANCE TESTING


BRUNO DUVAL
VP PROFESSIONAL SERVICES
NEOTYS
WWW.NEOTYS.COM

MOBILE APP DEVELOPMENT LEAVES


TRADITIONAL PERFORMANCE
TESTING IN THE DUST
Performance is critical in order to guarantee a satisfying mobile user experience, and
the entire model of traditional application testing is obsolete when it comes to mobile
apps, according to Bruno Duval.

Despite all the noise around UX, the expectations
users have for mobile applications boil down to
two key areas: mobile apps must 1) be simple
and 2) perform well. And while these expectations
have a significant impact on how the applications
are developed, meeting these two requirements may seem
nearly impossible for developers when using traditional
performance testing methods and tools.
Performance is critical in order to guarantee a satisfying
mobile user experience. If the app performance does
not deliver, the app is likely to be rejected and uninstalled
the same day, throwing away all investments in R&D
and marketing. Poor performance may even affect the
brand equity. In a perfect world the entire process of
developing a mobile app would be aligned with today's
new expectations and include the appropriate budget
to accommodate these requirements.
Furthermore, the first expectation - apps must be simple
- often translates into: the mobile app is a light version
of the web application and thus it should be cheaper
to develop. As a result, the budgets to develop mobile
apps are limited, and in this new equation, performance
testing is likely to be an adjustable variable as opposed
to a given.
This is what makes mobile app developers crazy.
Performance testing requirements are increasing but the
budgets to support performance are shrinking. Traditional
performance testing processes and tools, designed for
more complex applications with higher budgets and
longer development cycles, are no longer appropriate.
They simply do not fit into the economic equation of
building mobile applications.
In addition, performance testing mobile applications is
significantly more complex than testing a traditional PC
browser-based application. This is because a variety of
network conditions (bandwidth, latency, packet loss etc.)
and a device's behaviour need to be simulated. This
is not an issue when performance testing PC browser-based
applications.
Mobile drastically changes three aspects of
application development:
Technology: new testing tool features are required to test
mobile apps realistically.

Processes: testing often needs to be faster to not be a
bottleneck in the application development project.
Cost: the entire process of testing including tools and
manpower needs to be optimized to fit within the
economic equation of building mobile apps.
The entire model of traditional application testing is
obsolete when it comes to mobile apps. In this article
we will deal with the three aspects listed above, the
challenges they bring and how they can be addressed
by developers and testers.

DEALING WITH A NEW TECHNOLOGY


There are specific requirements for testing the
performance of mobile applications that are not
addressed by traditional load testing techniques.
Although similar to traditional web apps in many ways,
mobile applications require special attention with regard
to recording mobile test scenarios, conducting realistic
tests that simulate real-world bandwidth network and
browser characteristics, and properly analysing the
results. Addressing challenges in these areas is essential
to ensure mobile web applications are sufficiently tested
prior to release so they can be relied upon to perform
well under load in production.
There are three key differences between traditional and
mobile load testing:
Simulating the network for wireless protocols: With 3G wireless
protocols, mobile devices typically connect to the
Internet using a slower, lower-quality connection than
desktops and laptops. This has an effect on response
times on the client side and on the server itself, which
developers will need to account for as they define tests
and analyse results. Additionally, latency and packet loss
become more of a factor with mobile applications and
need to be considered.
Recording on mobile devices: Obviously, mobile apps run
on mobile devices, and this can make it difficult to record
test scenarios, particularly for secured applications that
use HTTPS.
Supporting a wide range of devices: The many
different kinds of mobile devices on the market have
led web application designers to tailor content based
on the capabilities of the client's platform. This presents
challenges for recording and playing back test scenarios.
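The first of these differences can be made concrete with a back-of-the-envelope model: total response time approximated as round-trip latency plus transfer time, inflated by a crude packet-loss retransmission factor. The formula and the numbers are purely illustrative and not taken from any testing tool:

```java
// Illustrative model of why mobile network conditions matter to a load test.
public class ResponseTimeModel {
    /**
     * @param payloadKB     response size in kilobytes
     * @param bandwidthKBps downlink bandwidth in KB per second
     * @param latencyMs     round-trip latency in milliseconds
     * @param lossRate      packet loss rate (0.0 - 1.0)
     * @return estimated response time in milliseconds
     */
    public static double estimateMs(double payloadKB, double bandwidthKBps,
                                    double latencyMs, double lossRate) {
        double transferMs = payloadKB / bandwidthKBps * 1000.0;
        // Crude retransmission factor: each lost packet is sent again.
        return (latencyMs + transferMs) / (1.0 - lossRate);
    }
}
```

With these toy numbers, a 100 KB response over a 3G-like link (100 KB/s, 300 ms round trip, 2% loss) comes out around 1.3 seconds, versus roughly 70 ms over a fast, low-latency link - exactly the kind of difference a mobile load test has to simulate rather than ignore.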


Finding the appropriate values for key test settings - such
as the user-agent, bandwidth, and number of simultaneous
connections - can be a challenge. More advanced load
testing tools can help testers set these values. For example,
test scenario playback is greatly simplified by tools that can
automatically inform the tester about which user-agent string
and number of parallel connections to use based on the
browser name, version, and platform. The process is further
streamlined when the tools can suggest the most appropriate
upload and download bandwidth settings based on the
technology used (for example, Wi-Fi, 3G, 3G+, and so on) and
the quality of the signal (for example, poor, average, or good).

TESTING FASTER, MORE EFFICIENTLY


Mobile apps do not have the same development cycle
as traditional PC applications. Typically the apps are
simpler with fewer features so the development cycle is
shorter. In addition, because the technologies underlying
development are evolving quickly, a mobile app is likely
to have more frequent releases than a PC application. It
is also important to note that mobile users are constantly
expecting new and fresh apps so the pace at which
new apps are released is amazingly high - several tens of
thousands per month!
With shorter initial development cycles, more releases
and more apps, load and performance testing - which
is often tackled towards the end of the application
lifecycle - is performed under extremely tight time
constraints. The pressure is on to deliver actionable results
as quickly as possible, without becoming a bottleneck for
the project.
So developers and testers need tools that help them
adapt to these new time constraints. There are three key
dimensions where a testing tool will help a performance
tester do his job better while helping him meet the timing
constraints of the team's development plan:
Fast Learning Curve: Performance testing is a very
complex task which requires expertise in a number of
areas including performance testing tools (to record
scripts and execute tests); application architecture,
technology building blocks and understanding how all
these components interact. This latter skill can only be
acquired through experience, no tool will teach this. But
the path to become an expert in performance test tools
can be significantly accelerated with a new category of
software which does not require complex scripting.
Performance testing tools can accelerate the learning
curve without having to learn an API or a scripting
language to design tests. This is now possible with
solutions that use wizards and self-descriptive graphical
interfaces. Tools now enable entire engineering teams to
develop key expertise which had been only available
to specialists in the past. This means the team can
increase their skills rapidly and have more resources that
are able to conduct a test. This is a great benefit for the
application development lifecycle.

JUNE 2013 | www.testmagazine.co.uk

Automation: It is important to apply productivity
optimization to performance testing tasks. Testing the
performance of each application involves hundreds of
tasks, because a great number of workflows are possible
and because the test needs tuning, which requires
re-iteration of a number of tasks.
capability of performance testing tools must support
testers so that they can design and execute tests more
rapidly. The diagram below highlights what phases of the
load and performance test process can be accelerated
with the appropriate tool enabling automation.
Collaboration: With time constraints growing and
development cycles being compressed, new ways of
working have emerged around agile organisations where
a number of team members can take part in designing,
running and analysing test scenarios. The best practice
of performing continuous performance testing (every
day) during the development phase makes it even more
critical for a team to be able to work with the same test
projects and scripts, to track changes and roll back to an
earlier version of the information when necessary.
Performance testing tools that provide adequate
collaboration features enable faster tests and enable
distributed project teams to work with pre-defined
scenarios or virtual users. In a mobile environment where
development cycles are short and projects are multiple,
major productivity gains are possible.

SOLVING THE ECONOMIC EQUATION


There are a number of factors that make the economic
equation of load testing difficult to solve for mobile
applications. First, there are applications developed purely
for marketing and branding which are by nature not directly
profitable because they are usually available free of charge.
Mobile apps developed to create revenue (like
eCommerce or gaming) or to access web services
or business applications (mobile apps for banking or
CRM, etc.) are light versions of bigger applications with
more features. As a result, budgets granted to develop
mobile applications are much lower than what is usually
provided for traditional PC applications.
Yet, as stated earlier, load testing is more complex for
mobile apps because of different network conditions,
and cross platform testing.

PAGE 15

MOBILE PERFORMANCE TESTING

FIG 1: Automation of the performance testing phases

                      Traditional PC-based        Mobile application
                      application
Complexity in         Standard (no impact of      Complex: virtually unlimited
load testing          network conditions,         number of different network
                      limited number of           conditions; numerous
                      platforms)                  different platforms
Development cycle /   6 to 12 months /            1-3 months / often no time
time to load test     1-2 weeks                   left for performance tests
Overall budget /      Counted in 100s of          $1k to $30k / often no
performance testing   thousands of $ /            budget available for
budget                usually licensing           licensing

Traditional testing methods and tools do not fit into this
equation for two main reasons:
1. Traditional testing methods and tools are labour
intensive and their cost is not aligned with mobile app
development budgets

Traditional methods and tools take too long for
performance testing mobile apps. This impacts both
time to market (performance testing may become
a bottleneck) and budget, as the cost to design
and execute tests will be prohibitive for the budget
allocated to the development project. We have seen
above what should be considered to provide a faster
and more efficient performance testing process. This
has a direct impact on the costs associated with
performance testing. A tool that enables testers to
perform tasks more rapidly translates into
cost efficiency and the ability for an R&D
team to run realistic tests and make
sure the application will perform well
in production without spending
a tremendous amount of time
learning how to use the tool and
design test scripts.
2. CAPEX models are not efficient
While enhancing tester productivity is
one part of the equation, R&D teams
need solutions that enable new business
models in order to match the cost
requirements of testing mobile applications. As
stated before, budgets for developing mobile applications
are far below what is usually assigned to traditional
desktop applications. So a business model based on
CAPEX, where a project team has to invest upfront to
buy a license for a load testing tool, is not appropriate,
because the upfront cost may be higher than the total
budget for developing the application. Test managers
should look for solutions that provide flexible business
models based on pay-per-use/rental, which
enable them to adapt the cost of performance testing
to the requirements and the budgets.
As development cycles get shorter, the number of new
mobile applications gets larger. The performance testing
phase does not merely occur every six or 12 months (as is
typical when traditional desktop PC applications are
released) but several times per year as new releases are
developed. Performance testing tools should not be
attached to a particular project or application; rather, testers
should be able to share the license across different projects.

CRITICAL PERFORMANCE
Performance has never been more critical to the success
of an application than it is with mobile apps. This dynamic
presents an opportunity for performance testers to demonstrate
their value in the development chain and contribute to the
success of the service.
Mobile brings new challenges to performance testers,
both in the technology and organisational areas. It is
the role of technology to support those changes and
enable performance testers to contribute efficiently
to the application lifecycle with the appropriate
tools. Choosing the right tool for performance
testing is not only about checking the ability
to support the new mobile technologies,
but also choosing a tool that enables the
organisational change driving the industry,
where performance tests have to be
performed faster, across distributed teams
and with greater expectations of reliability
than ever before.


KEYWORD-DRIVEN TESTING
MARK LEHKY HAS BEEN
WORKING IN TEST
AUTOMATION SINCE 1999

KEYWORD DRIVEN TESTING


Following his exploration of the benefits of data-driven testing in
our last issue, this time California-based test automation expert
Mark Lehky looks at keyword-driven testing.

A keyword driven test (sometimes called a table
driven test) approach is the next evolution of
a data driven test (DDT) framework. Similar to
DDT, information about the test is gathered in a
table, such as a spreadsheet. What differentiates
keyword driven test from DDT is that it describes the
mechanics of a test as a series of steps that ideally are not
tied to any data. If the test is constructed well, any operator,
even one with little technical knowledge, should be able to
execute such a test.
Translating a table test into a fully automated test requires
a framework that interprets the (human readable) table
commands and converts them into computer commands.
Selenium IDE (http://seleniumhq.org/projects/ide/), a
popular automation tool, is an example of a keyword
driven test automation framework. It is used to automate
browser-based applications. Tests are constructed in a
three-column-wide table: the first column contains an action that
is performed in the browser, the second column contains a
locator specifying which object in the browser the action
is to be performed on, and the last column contains an
optional value that the action may require. A very simple
Selenium script may look like this:
open                  http://www.google.com/
type                  q
clickAndWait          btnG
assertTextPresent     Selenium Web Browser Automation
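Such an interpreting framework is, at its core, a dispatch table from human-readable commands to tool calls. A minimal sketch of that dispatch (the browser calls are stubbed with a log here; a real implementation would drive a tool such as Selenium):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a keyword-driven interpreter: each table row is
// [command, target, value]; the framework maps the human-readable
// command to a browser action. Actions are stubbed for illustration.
public class KeywordInterpreter {
    final List<String> browserLog = new ArrayList<>(); // stand-in browser

    /** Interprets one table row; returns false for unknown commands. */
    public boolean execute(String command, String target, String value) {
        switch (command) {
            case "open":
                browserLog.add("GET " + target); return true;
            case "type":
                browserLog.add("TYPE " + target + "=" + value); return true;
            case "clickAndWait":
                browserLog.add("CLICK " + target); return true;
            case "assertTextPresent":
                // A real implementation would check the page text here.
                browserLog.add("ASSERT " + target); return true;
            default:
                return false; // unknown keyword: the framework would report it
        }
    }
}
```

The table stays readable to a non-technical operator; only the interpreter needs to know how each keyword maps to a browser call.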

Selenium IDE

Although Selenium IDE has a record capability that works
only in Firefox, the resulting table tests (called Selenese) can
be run in any JavaScript-enabled browser - essentially any
modern browser. The details of Selenium IDE are beyond
the scope of this article. You can find quite extensive
documentation at the Selenium HQ website (http://
seleniumhq.org/docs/02_selenium_ide.jsp).

AMERICAN FOOTBALL
At this point I need to insert a slight diversion, specifically for
readers outside of North America.
The example that I am going to present below requires
some basic knowledge of American football. The game of
American football is played with an oval-shaped ball, similar
to a rugby ball in the rest of the world. The players throw the
ball, catch the ball, and run with the ball. Game play as
well as statistical purposes (an entire gambling industry
is based around the latter) require that both the ball's location
and direction (which side is in possession) are constantly tracked.
During game play a team has four chances to advance the
ball 10 yards (a little less than 10 meters), or lose possession of the
ball. Different play scenarios have different scoring values: a
touchdown (a player runs with the ball past the opposition's end
zone) is worth six points; following a touchdown the scoring
team has an opportunity at a PAT for two more points or a
kick for one point; a kicked field goal (without a touchdown) is
worth three points; and finally, if a player holding the ball is
tackled in their own end zone it is called a safety and is
worth two points.
Within the gambling industry there are several competing
products. All of them essentially involve an operator
watching a game on television and in real time entering
all the relevant statistical information (game time, ball's
position, direction, and score) into a web-based console,
which is then processed by back-end servers and
presented to clients on various betting consoles.
The game's history is tracked and individual games are
recorded in a human-readable play-by-play format,
called a boxscore. One such website is Pro-Football-Reference
(http://www.pro-football-reference.com/boxscores/).
This website has built-in functionality to
display any boxscore in a formatted web page or to
export the boxscore into a CSV (comma-separated
values) format.

In the example below, I will use this CSV-formatted
boxscore as my starting manual test and convert it
into a Selenium IDE script, which can be used to drive an
operator console, which in turn can be used for testing
betting client consoles.

AWK AS KEYWORD-DRIVEN AUTOMATION FRAMEWORK?
The awk utility (www.gnu.org/software/gawk/) was
created in the early days of computing for processing
text files. It is still widely used today, which means that
it has been ported to almost every operating system
known to man. The utility can be used to process any
text file, and although it is most often used as a single-line
quick script, it has been used to create entire
applications as complex as a mail server. I personally find
this utility indispensable in my daily work.
In the rest of this article I will present an introduction
to awk, using it as an interpreter to convert a human-readable
script into a Selenese automated test.

awk mechanics: An awk script contains a series of
pattern-command pairs, one per line. The awk interpreter
reads in your script and an input text file. The text file is
processed one line at a time, and each line is tested against
every pattern in your script. For any pattern that tests true,
the accompanying commands are executed.
Comments in awk are started with the hash (#) character
(see line 4 below).
The simplest awk script would look like this:

1.  awk "{ print }" my_input_file.txt

This script has an empty pattern, which will match
every line in your input file. The print statement will print
whatever you specify; in this case we did not specify
anything, so the default is the current line read. This script
simulates the same functionality as the type command
on most systems.
For larger scripts it is convenient to write the entire script
into a separate file. In this case we would use the -f
switch to pass the script to the interpreter, like so:

2.  awk -f my_script.awk my_input_file.txt

csv2selenium.awk: Let's start looking at how we are
going to convert the input CSV script from Pro-Football-Reference
into a Selenium IDE script.
If you actually record the Selenium script that I showed in
the introduction and look at the source, you will find that
every Selenium script begins and ends with the same
block of code. awk has a special pattern pair to handle this
situation: BEGIN and END. The BEGIN pattern is executed
once, before any of your input file is processed, and
conversely the END pattern is executed after your input
file has been completely processed. The start of our awk
script will therefore look like this:

3.  BEGIN {
4.      # CSV input
5.      FS = ","
6.
7.      print "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
8.      print "<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Strict//EN\" \"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd\">"
9.      print "<html xmlns=\"http://www.w3.org/1999/xhtml\" xml:lang=\"en\" lang=\"en\">"
10.     print "<head profile=\"http://selenium-ide.openqa.org/profiles/test-case\">"
11.     print "<meta http-equiv=\"Content-Type\" content=\"text/html; charset=UTF-8\" />"
12.     print "<link rel=\"selenium.base\" href=\"http://some.base_server.url/\" />"
13.     print "<title>converted " ARGV[1] "</title>"
14.     print "</head>"
15.     print "<body>"
16.     print "<table border=\"1\">"
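The BEGIN/body/END skeleton described above can be tried as a self-contained sketch (not from the article; the file name and HTML emitted here are illustrative only):

```shell
# Create a tiny two-line CSV, then run an awk script with the same shape
# as the article's: BEGIN runs once (set FS, print a header), the bare
# { ... } block runs for every input line, and END runs once at the end.
printf 'a,1\nb,2\n' > /tmp/demo.csv
awk 'BEGIN { FS = ","; print "<table>" }
     { print "<tr><td>" $1 "</td><td>" $2 "</td></tr>" }
     END { print "</table>" }' /tmp/demo.csv
```

Run with gawk or any POSIX awk; the output is an HTML fragment with one table row per input line, bracketed by the header and footer printed exactly once each.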


17. }

awk tokenises each line of your input, that is, breaks it
up into individual pieces that you can work with inside
your script. By default it uses space characters as
the token separator. Our input is going to be a comma-separated
values (CSV) file, so right at the start of the script
(line 5) the built-in variable FS (field separator) is set to the
comma character.
In line 13 the title contains the name of the first argument
passed to awk after the script: the filename of your input file.
The end of our awk script is a very modest:

18. END {
19.     print "</table>"
20.     print "</body>"
21.     print "</html>"
22. }

THE FIRST SELENIUM SCRIPT
If you run this script as awk -f csv2selenium.awk csv2selenium.awk
(at this point you need to specify some input file; I just used
the script itself) you will see that all the print statements go to
your terminal output. At this point I grabbed one of the CSV
files from the Pro-Football-Reference website I mentioned
above, saved the file in the same location as my awk script (I
used a name like 2007NYGatNWE.csv), and created a bat file
to generate the Selenium script:

23. awk -f csv2selenium.awk 2007NYGatNWE.csv > 2007NYGatNWE.html

If you double-click this bat script, you will get an HTML file
containing a blank Selenium test. Go ahead and try to
open it in Selenium IDE.
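The same generate-and-redirect plumbing can be reproduced on any system with awk. This sketch is not the article's csv2selenium.awk; the script body, file names, and markup are stand-ins:

```shell
# Write a minimal stand-in awk script to a file, run it with -f over a
# CSV input, and redirect stdout to an HTML file, just as the bat file
# above does with the real csv2selenium.awk.
cat > /tmp/csv2demo.awk <<'EOF'
BEGIN { FS = ","; print "<html><body><table>" }
      { print "<tr><td>" $1 "</td></tr>" }
END   { print "</table></body></html>" }
EOF
printf 'kickoff,1\n' > /tmp/play.csv
awk -f /tmp/csv2demo.awk /tmp/play.csv > /tmp/play.html
```

Opening /tmp/play.html in a browser then shows a one-row table; the real script emits Selenese rows instead, which is why the generated file opens in Selenium IDE.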
THE FIRST PLAY
Have a look at a boxplay from Pro-Football-Reference
in Table 2.
The first five columns are the starting position for a play,
column six is a description of the next play, and columns
seven and eight are the score at the end of that play. The
last two columns are some statistical probabilities, which
are not relevant to our discussion.
The logic that my awk parser has to follow is to place the
ball and set the clock based on the first five columns, and
then determine what the next play is going to be based
on some keywords in column six. Determining the next
play is going to require the most work. The determining
logic is to be applied to every line of my input, so it will
have a blank pattern. The logic is going to be built using
a series of if ... else if statements, since awk does not
have a case statement.
Every football game starts with a kick off:

24. {
25.     if ($6 ~ /kicks? off/) NextPlay = "KICKOFF"
26. }

The first point is that this pattern is placed between the
BEGIN and END patterns described previously. All the
patterns in your script will be processed in the order
that you specify them. As was already mentioned, the
BEGIN and END patterns are special, so the order of
those two does not matter, but it makes logical sense for
someone reading your script afterwards to place them at
the beginning and end of your script.
For the rest of the script, this logic pattern will be the
last one processed for each line of my input CSV script.
Let's have a look at the if condition: ($6 ~ /kicks? off/).
$-notation is how you refer to each of the tokens;
remember we broke up each line of input (tokenised it)
at the commas (FS = ","). So $6 refers to column number
six in our input file.
The tilde character (~) is equivalent to "contains".
Anything anywhere in an awk script that is enclosed by
forward slashes (/) is considered a regular expression.
A regular expression is a way to describe strings with
atoms, similar to wildcards, but much more powerful
than simple wildcards; an entire book can be (and has
been) written on the subject, so here I will just refer you to
the Wikipedia entry (en.wikipedia.org/wiki/Regex). In
this expression, I use the question mark quantifier, which
means zero or one of the previous character.
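The ~ operator and the ? quantifier can be tried in isolation; the sample play descriptions below are illustrative, not rows from the boxscore:

```shell
# /kicks? off/ matches both "kick off" and "kicks off" anywhere in the
# line; the third sample line contains neither and is filtered out.
printf '%s\n' 'Gostkowski kicks off 72 yards' \
              'short kick off to the 40' \
              'Jacobs left tackle for 3 yards' |
awk '/kicks? off/ { print "MATCH:", $0 }'
```

Here the whole pattern is just a regular expression, so, as noted later in this article, it is tested against the entire line read, equivalent to $0 ~ /kicks? off/.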

Quarter | Time  | Down | ToGo | Location | Detail                                                                                                 | NYG | NWE | EPB  | EPA
1st Quarter
1       | 15:00 |      |      | NWE 30   | Stephen Gostkowski kicks off 72 yards, returned by Domenik Hixon for 25 yards (tackle by Pierre Woods) | 0   | 0   | 0    | 0.48
1       | 14:55 | 1    | 10   | NYG 23   | Brandon Jacobs left tackle for 3 yards (tackle by Adalius Thomas)                                      | 0   | 0   | 0.48 | 0.34

Table 2

If the condition is true (the sixth column contains either
the words "kick off" or "kicks off") then I set my variable
NextPlay to "KICKOFF". I then recorded in Selenium
IDE the sequence of steps required to generate
a kick off in our application, and from that I created the
following awk pattern:
27. NextPlay == "KICKOFF" {
28.     processBallOn()
29.     print "<tr> <td>clickAndWait</td> <td>name=offensePossession</td> <td></td> </tr>"
30.     print "<tr> <td>type</td> <td>id=clock_field</td> <td>" $2 "</td> </tr>"
31.
32.     NextPlay = "done"
33. }

The pattern that I am checking for (line 27) is not part of
the input file; it is the variable that I have just set.
Placing the ball correctly on the field (line 28) is a separate
process that has to be done at the start of almost every
single play. I decided to extract that into a separate
function, which I will cover next.
Selenium has to click a button for the offence to take
possession of the ball (line 29), and the clock has to be
adjusted to show the amount shown in column two (line 30).
Note that concatenating strings together is implicit in awk.
I then reset my control variable (line 32) so that no more
plays (which follow the same pattern) will be processed.
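That implicit concatenation is easy to verify on its own; the sample field and markup below are illustrative:

```shell
# Adjacent strings and variables are concatenated with no operator:
# "<td>" $2 "</td>" joins the literal tags around the second CSV field.
echo '1,14:55' | awk 'BEGIN { FS = "," } { print "<td>" $2 "</td>" }'
```

This prints the second field wrapped in the tags, with no + or . concatenation operator anywhere in sight.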

WHERE IS THE BALL
At the start of almost every single play, you need to figure
out where to place the ball.
The real football field is 100 yards long, divided into two
halves. The yards are counted from each of the end
zones (0 yards) towards the centre (50 yards). In our
boxplay the position of the ball is specified by a three-letter
code for the team whose side the ball is on (and not
who is in possession) and the yard line. In the computer
console where we have to place the ball, the playing
field is 600 pixels long. We have to get Selenium to click
somewhere appropriate in this field.
We can use the awk function split($5, tokens, " ") to split
column five at the space character (" ") and place the result
into an array called tokens. So the first token (tokens[1])
will contain the three-letter code for the team whose side
the ball is on, and the second token (tokens[2]) will contain
the yard line on which the ball is to be placed.
34. function processBallOn() {
35.     split($5, tokens, " ")
36.     if (tokens[1] == SideLeft) {
37.         offSet = 301 / 50 * tokens[2] - 1
38.         print "<tr> <td>clickAt</td> <td>id=pointer_div</td> <td>" offSet ",10</td> </tr>"
39.         print "<tr> <td>waitForPageToLoad</td> <td></td> <td></td> </tr>"
40.         print "<tr> <td>assertValue</td> <td>id=processPlay_ballOnYardLineLeft</td> <td>" tokens[2] "</td> </tr>"
41.     } else if (tokens[1] == SideRight) {
42.         offSet = 600 - ((600 - 300) / 50 * tokens[2]) + 1
43.         print "<tr> <td>clickAt</td> <td>id=pointer_div</td> <td>" offSet ",10</td> </tr>"
44.         print "<tr> <td>waitForPageToLoad</td> <td></td> <td></td> </tr>"
45.         print "<tr> <td>assertValue</td> <td>id=processPlay_ballOnYardLineRight</td> <td>" tokens[2] "</td> </tr>"
46.     }
47. }

The conversion from yards to pixels is calculated in lines 37
and 42. As mentioned previously, in football the yards are
counted from the two end zones; pixels, however, are
counted simply from left to right.

YOUR OWN BUG HUNT
Once your script starts to grow it becomes more difficult
(and more important) to track down your own bugs. awk
does not have a convenient debugger; it was created in
the days before graphical IDEs. In the old days debugging
was accomplished with print statements scattered
generously throughout your code.
My script eventually ended up being 10kB of code, and
this was by far not the largest awk script I have written.
Throughout my script I have statements like:
print "<!-- DEBUG " FNR " read play: " $0 " -->".
First, anything in HTML enclosed in <!-- and --> is
considered a comment; Selenium follows the same
standard. Next, the keyword DEBUG makes it easy for
me to filter out or find all my debug statements if I need
to. Last is the FNR business. awk has a whole bunch of
very useful variables that are exposed to the user. NR is
one of those. It stands for Number of Records: the number
of lines of input read so far. FNR is the number of records
read in the current file; NR is the number of records read
in all files, in case you passed in multiple files using a wildcard.

IF YOU WANT TO BECOME MORE PROFICIENT IN YOUR
AUTOMATION, I HIGHLY ENCOURAGE YOU TO HAVE A FURTHER
LOOK AT THE AWK UTILITY.

SKIP SOME
The first two lines of the CSV input (see above) you
actually want to skip. The first one is accomplished with a
very simple:

48. $1 == "Quarter" {
49.     next # no more processing
50. }

The next command stops all processing for the current line of
input and continues with the next line.
The second line can be captured with a simple
length($1) == 0, so the entire skip pattern becomes:

51. $1 == "Quarter" || length($1) == 0 {
52.     next # no more processing
53. }

As you can see, the pattern can be anything that
evaluates to true or false.
The one special pattern is a regular expression. As was
already mentioned, anything enclosed in forward slashes
is considered a regular expression. If the entire pattern is just
a regular expression, then it is tested against the entire line
of input just read. Meaning: $0 ~ /abc/ (the entire line just read
contains the text abc) is equivalent to just /abc/.

KEEP ON KEEPING ON
The next play is developed in much the same way:
54. Find a keyword in column six of the input to base
the next play on, and add that as an else if condition.
55. Record the appropriate browser gestures to
make that play happen.
56. Test, repeat.

Publishing the remainder of my script is probably
uninteresting (and proprietary) to this discussion.
However, I have demonstrated many of the key features
of awk. There is additional functionality in awk to help
you parse and verify any text input you need.

HARDCORE AUTOMATION
If you want to become more proficient (and more
hardcore) in your automation, I highly encourage you
to have a further look at the awk utility. Besides this
article, the awk Wikipedia page (en.wikipedia.org/
wiki/Awk) has some introductory information, and the
further reading section of that page has pointers to
additional learning resources. The awk utility itself is
also very extensively documented.
I am often asked to extract various reports from server
or test run logs. I have yet to be asked for a report that
cannot be extracted from logs using awk, written out as
CSV, and then enhanced and formatted in LibreOffice
Calc (or MS-Excel if that is more your thing).
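The skip pattern and the FNR counter discussed above combine into a short, runnable sketch (the header line and sample play here are illustrative):

```shell
# Lines whose first field is "Quarter", and blank lines, are discarded
# by next; every surviving line is tagged with FNR, the per-file record
# counter used in the DEBUG statements.
printf 'Quarter,Time\n\n1,14:55\n' |
awk 'BEGIN { FS = "," }
     $1 == "Quarter" || length($1) == 0 { next }
     { print "line " FNR ": " $0 }'
```

Only the third input line survives, and it is reported with its original record number, which is exactly what makes FNR useful for tracing which play produced a given Selenese row.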


SOFTWARE TESTING SURVEY

BRYAN FANGMAN
SENIOR PROJECT MANAGER
BORLAND
WWW.BORLAND.COM

JULIAN DOBBINS
CHANGE & CONFIGURATION
MANAGEMENT SOLUTION EXPERT
BORLAND
WWW.BORLAND.COM

REBECCA WETHERILL
BORLAND
WWW.BORLAND.COM

THE RESULTS ARE IN...


Earlier this year, Test magazine and Borland set out to take the temperature
of our industry by publishing the online Software Testing Survey throughout
January and February. We were surprised at the volume of responses (more
than 850 testers) and the things they said. Requirements, Agile and testing for
mobile applications are the three major topics, but there was plenty more for
Borland's Bryan Fangman, Julian Dobbins and Becky Wetherill to comment on...
REQUIREMENTS
Bryan Fangman is senior product manager
at Borland. Being responsible for product
planning and execution, he's the right person
for studying the results for requirements.
Q1: What are the most common
problems encountered with the quality
of your requirements?
Fangman: "I wanted to understand how
people went about setting requirements,
by focusing on major areas of concern.
Clearly, the increased use of Agile practices
is causing a break in the links between the
business needs and the development user
stories and tasks. There is no tool presently
available to bring the two together and no
common tools to track the delivery status."
"Completeness was also a high-scoring
issue," adds Fangman. "We don't always
know when we have enough requirements.
Visualisation can help define scope
boundaries and the appropriate level of
detail required."
Completeness scored high and standards
scored third highest, followed by
testability. Fangman comments: "Agile is
about saying what you're doing as you're
doing it with all parties involved, instead
of a siloed approach."
Q2: In what phase of the Software
Development Life Cycle (SDLC) do most of
your defects occur?
Development was the clear winner here,
reinforcing the point that requirements
are not defined properly and conveyed
to development teams. Fangman
again: "This could be to do with poor
development too, though. We just need to
improve the link between business needs
and what is delivered."


Requirements was the second highest
scoring category for defects. Better
quality requirements could reduce
the number of defects in design and
development: "It is far easier and more
cost-effective to fix software defects earlier,"
says Fangman.

"ALL IN ALL, THE RESULTS FOR THE REQUIREMENTS
MANAGEMENT QUESTIONS ARE VERY INTERESTING.
IT'S CLEAR WE NEED TO SAY GOODBYE TO DOCUMENTS"

Q3: How do you currently capture and manage
your requirements?
"The results were surprising. The majority of organisations
are still using static Word documents and spreadsheets to
manage requirements, resulting in a minimal capacity to
conduct real-time impact analysis," reflects Fangman. "A
lack of understanding of dependencies results in errors and
omissions that may not be discovered until late in the
life cycle, causing significant rework, cost overruns and
schedule slips."
"The number of requirement tools used is low," says
Fangman, "ranging from 21.4 to 47 percent. It's
estimated that the majority of organisations with tools
have not standardised on common tools or have not
reached an appropriate level of adoption. Usability,
lack of training, lack of executive sponsorship or failure
to implement a common methodology could be
contributing factors."
Q4: How well are your requirements integrated with your
test cases?
Almost half of the respondents said they are manually
verifying the relationships of test cases to requirements.
"This is expected, as a high number of respondents use
documents and spreadsheets as well as, or in place
of, tools," says Fangman. "Approximately a third of
respondents have an integration between requirements
and test cases, implying the use of complementary
requirements and testing tools."
Q5: Which visual techniques do you use when
eliciting requirements?
"It is good news that storyboards was the number
one answer," says Fangman. "More than half of the
respondents are using storyboards, showing the transition
from the traditional written case templates towards
a visual approach. By using a visual approach, all the
different use cases are captured in one model. It is
much easier to maintain and reuse these models. Screen
mock-ups are also being used by a significant number of
respondents and these align well to Agile practices."
Bryan Fangman concludes: "All in all, the results for
the requirements management questions are very
interesting. It's clear we need to say goodbye to
documents. Everything related to requirements needs to
be traced and understood."

A QUESTION OF AGILITY
Julian Dobbins is a Borland change and
configuration management solution expert and the
go-to person for these questions.
Q6: Can you quantify the percentage split in your
development methodology between Agile and Waterfall?
"One of the most interesting things about this question
was the number who skipped it," says Dobbins. "Thirty-one
percent probably did so because they would say 'We're
a bit of both.' I think people dodge questions they see as
irrelevant. It may also show that the respondents see their
front end teams as Agile and the traditional back end
mainframe teams as Waterfall. So the question is, will the
mainframe end ever become Agile?"
Q7: For your Agile development, which is the best term
that describes how you work? And Q8: How large are
your Agile teams?
Studying the working processes of Agile teams reveals
how they are organised. Dobbins comments: "Over
41 percent of Agile development is done by a team
or teams that are in one place, and in Question 8 we
can see that 56 percent of those teams are fewer than 10
people."
"However, 52 percent of respondents say they have
distributed Agile teams. The gold standard for Agile is a
small team (fewer than 10 in one place), but for more
than half of respondents that's not the case. Only a third
of the distributed teams say they have a unified change
management discipline. Distributed teams without a solid
change management discipline indicate a lack of maturity
in the organisation to fully manage the rise of Agile."
"From this question, we can learn about the maturity
of organisations embracing Agile. It seems that we are
not getting the full visibility we might expect from more
waterfall-based projects, which are tried and tested
processes," says Dobbins.
Q9: How is Quality Assurance (QA) handled in
Agile teams?
"These results show that QA is certainly part of the Agile
team," says Dobbins. "But there is a split that reflects the
amount of Agile testing being done. Nearly a fifth say
they have an Agile development team, but much of it is
more waterfall-based." That said, a third of respondents
dodged the question.

THE BIGGEST AGILE CHALLENGE
Q10: What is the biggest challenge with Agile processes?
"This is again about the maturity of the organisation,"
says Dobbins. "The Agile Manifesto is about 12 years
old, so in terms of a formal acknowledgment of Agile
development, we're a fair way in. But in terms of larger
enterprises adopting it, it is still early days. The biggest
response (44.6 percent) was for 'Getting planned work
done within each sprint'. If people are new to Agile, or
if much of the work is ad hoc, you may never be sure
exactly how long it's going to take."
"A large number of respondents place collaboration
with business owners and their tools very high in the
order of challenges," adds Dobbins. "They haven't
necessarily put in place the robust tooling infrastructure
to support the Agile processes. Teams of 10 people can
collaborate very well, but can they collaborate with
the wider business? If more than 52 percent of teams
are distributed (see Q7), then collaboration will be
challenging."
Traceability was the third highest challenge. Most
distributed respondents said they don't have a unified
change management discipline: "We have a large
number of small distributed development teams
with varying degrees of QA, but collaboration and
traceability are significant issues," says Dobbins. "They
may not be getting the communication across the
teams. The absence of an infrastructure can really slow
processes down. It all points to fragmentation in tooling
and processes."

MOBILE TESTING - GOING MOBILE
The importance of the mobile sector grows every day.
The diversity of platforms, systems and functionality
creates many challenges for testing organisations.
Rebecca Wetherill is the product manager for functional
mobile automation tools at Borland, as well as being
product manager for test management.
Q11: What are the business drivers for the support of
mobile applications in your company?
Wetherill noted: "The fact that about 46 percent of
organisations said they had no plans to support mobile
applications stood out for me. I think it shows the
respondents' inexperience, as very few industries are
not considering the demand for mobile. There is some
contradiction in the responses - the answers to Question 12
suggest that nearly 65 percent realised their applications
had to become accessible to mobile devices."


"The mobile sector is still in its early stages, and people
are still assessing the best approach. They may well
decide to move from traditional to Agile because time
to market is crucial."
Q13: Which platforms do you currently target for
your mobile applications? And which platforms do
you expect to support with your longer-term mobile
application strategy within two years?
"The range of answers was interesting," says Wetherill,
"because there is still a high percentage who didn't
seem to know. When people consider mobile
development, are they just thinking web app or device-specific
app? It reflects the immaturity of the market. A
mobile strategy would enable a sensible decision."
Q14: Which factors are impacting your progress
with mobile application delivery?
"Security is the highest priority," says
Wetherill. "It's about working with the
right external partners to get the
suitable skills and expertise for testing
and developing. At the other end of
the scale, where respondents claim
there are no factors impacting their
progress, could that be down to a
lack of awareness?"

moving to a more Agile approach. Organisations need
guidance towards the right approach in terms of tooling,
practices, guidelines, and support.

PRACTICES & PROCEDURES
Q18: How important is it that mobile testing should
follow the same practices and procedures as traditional
testing?
"It's positive to see that 'extremely important' and
'very important' got the most responses," says Wetherill.
"Does this indicate a risk-based approach, or are
businesses assuming the customer can test it? That would
be the case with some apps, especially if it speeds up time
to market. As long as you deliver and meet levels of
usability and customer satisfaction, the company will
maintain a good reputation."
Q19: What types of testing do you believe are
important in mobile testing and how do you
rate their importance?
Traditional types of testing are used
in the mobile field, and this question
was worded to get an idea of which
were the most used. "This links back to
Question 9," says Wetherill. "Regardless
of the kind of application, all types of
testing are applicable. Performance
testing is crucial in making sure customer
experience is acceptable."

"THE KEY POINT IS THAT ORGANISATIONS MUST LEARN HOW
TO DEAL WITH MOBILE AND HAVE THE RIGHT TOOLS, PRACTICES
AND PROCESSES IN PLACE TO SUPPORT IT"

Outsourcing seems to come in and
out of vogue but, at the moment, most
activity is still in-house. The growth in Agile
may be causing the reduction in outsourcing.

Q15: How many months does it take your company to
deliver mobile applications on newly-launched mobile
device variations?
"In my view," says Wetherill, "the optimum is three to six
months, and the survey results reflected this, with over
40 percent agreeing. If it's longer than that, you're
probably over-complicating it by trying to deliver a full
application over a mobile device."
Q16: What approach are you taking to mobile
application development and testing?
Outsourcing is an approach that seems to come in and
out of vogue; it is perhaps no surprise to see that most is still
in-house, though. "If you are going for an Agile approach,
can you outsource your testing and have development
in-house?" asks Wetherill. "There is a view that the drive in
mobility is to see a reduction in outsourcing because of
the growth in Agile."

Q20: How important is it to perform mobile testing
on real physical devices vs. simulators?
This is a key consideration in the mobile market. Wetherill
says: "Forty percent say they use a combination of both,
because typically you need to test on the actual device.
But rather than testing every device on the market, the
top four or five should be the priority, and the emulators
can be useful for the lower-risk devices."
"Overall, the responses were all valuable. The surprising
results could have been due to the immaturity of the
market. The key point is that organisations must learn
how to deal with mobile and have the right tools,
practices and processes in place to support it. We should
see an increase in maturity in the next 12 months.
Organisations need to build the right skills in order
to progress towards a more Agile approach. The silo
mentality is disappearing, which will help ensure that
business needs are met in the right timeframe."
For more information on requirements, Agile and mobile
testing solutions visit Borland.com

Q17: Does your organisation have a mobile test strategy
and capability?
"The spread of answers we received shows that it's an
open market," says Wetherill. "Only 29 percent said they
had a mobile strategy in place. So while organisations
(over 40 percent) are looking to adopt a mobile strategy
or expanding to look at mobile testing, they are also
working closer with the development teams and


TESTA

THE EUROPEAN SOFTWARE TESTING AWARDS

GOES FROM
STRENGTH TO STRENGTH

CELEBRATING TECHNICAL EXCELLENCE


More support for The European Software Testing Awards as four new sponsors
and five new judges come on board.
The European Software Testing Awards has
announced that four new sponsors and five
new judges have come on board for the event,
which takes place at the Marriott Grosvenor
Square Hotel in central London on the 20th
November 2013.


New sponsors for the event include software
testing tool supplier eggPlant, which is to
sponsor the Best Mobile Project award;
collaborative issue management software
company TechExcel, which will be sponsoring
the Best Agile Project award; leading
software testing service provider Sogeti, which
is sponsoring the Testing Team of the Year
award; as well as software testing services
company Capita.

NEW JUDGES

Five new judges have been added to the existing panel of seven from a range of testing competencies:

ADRIAN ELLIS
Adrian Ellis has been with Endsleigh
Insurance Services for 27 years and during
this time has filled many roles, beginning as
a motor and property underwriter before
moving into the IT department in 1993 as a
Test Analyst. He was promoted to the role
of Test Lead in 2001 before being promoted again to
his current role as Quality and Assurance Manager.

BEN GALE
Ben Gale has been an IT professional for
over 13 years and founded Criterion Quality
Management Ltd (CriterionQM) with the
sole aim of improving test methods and
increasing awareness of test skills.

BRINDUSA AXON
Brindusa Axon is a passionate lean & agile
consultant, coach and trainer. She works with
ambitious companies to facilitate the kind of
organisational change that has great impact
on the individuals and the bottom line.

DR TIEREN ZHOU
Tieren Zhou PhD is founder, CEO and chief
software architect at TechExcel. Dr Zhou is
an expert in the growing field of knowledge-centric
business applications for distributed
development teams and in uniting service and
support with development.

TOM CLARK
Tom has worked in IT for 27 years, with the
last nine of those in a variety of quality roles
for a leading international bank. He will soon
be taking up the role of Chief Information
Officer for a major UK building society.


For the latest information on the TESTA


programme, or details of how to
become a judge or sponsor please
go to www.softwaretestingawards.com


www.softwaretestingawards.com

An independent awards programme


designed to celebrate and promote
excellence, best practice and innovation in
the software testing and QA community.

NOW RECEIVING NOMINATIONS AND IS


OPEN FOR ENTRIES IN ALL ITS CATEGORIES
If you would like to enter the awards please contact
the team on: +44 (0) 870 863 6930 or email
awards@softwaretestingawards.com

THE TESTING WORLD


ANGELINA SAMAROO
MANAGING DIRECTOR
PINTA EDUCATION
WWW.PINTAED.COM

AGILE PEOPLE
Angelina Samaroo continues her analysis of Agile principles; this issue she
focuses on people and motivation.

In this article we continue the analysis of the
Agile principles and what each might mean
when applied in the real world. The first two
articles in this series for 2013 explored the first
four principles.

These were:
1. Our highest priority is to satisfy the customer through the
early and continuous delivery of valuable working software;
2. Welcome changing requirements, even late in
development. Agile processes harness change for the
customer's competitive advantage;
3. Deliver working software frequently, from a couple of
weeks to a couple of months, with preference to the shorter
timescale;
4. Business people and developers must work together
daily throughout the project.

The fifth is: Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.

This principle is not new. However, it is a soft principle, and soft ideals of mind can lead to messy situations at hand. People management is a profession in its own right. A search on Google for motivation theory delivered 44.8 million pages, compared to a search on Agile methodology, which brought up a paltry 2.3 million. From my own databanks (memory - just as quick, and possibly more useful) I can recall just three theories: Maslow, Herzberg and Expectancy. To be even more concise, I'll consider just the first: Maslow.

A HIERARCHY OF NEEDS
Maslow created a hierarchy of needs, based on a study of high-achievers, shown below on the left (and clearly Maslow did not anticipate a future where size (of font) matters). Actually, font was probably not in his vocabulary at all in 1943, but I digress. To the right is how this might translate if you're working on an Agile project.


The idea of the pyramid visual is to suggest that the needs on the higher levels become apparent as motivational needs only when those at the bottom have been met. The first one is evident by the now ubiquitous Costa coffee cups. It means more than just our daily fix of course; it includes the rest of the physical work environment. Safety in the workplace in 2013 should not descend to people having to worry about their physical wellbeing (and for the avoidance of doubt this means emotional wellbeing too - bullying is not limited to the school playground), but should consider how secure they feel in their job.


In Agile projects, the more you extend their skill set, the closer they come to that sense of safety in a sustainable way. This means mapping out the steps of the software development lifecycle (and helpfully the V model shows these explicitly) and seeking to extend their skills into other areas. For a tester, this means understanding requirements gathering, programming, project management etc. In the last issue we ended on principle four - working together. To make principle five work better, we should add: work together, learn together. Don't hide your knowledge; by teaching others you reach the top of the pyramid through a much easier and more rewarding climb.
The next level is the need to seek out like minds. Agile encourages this through the daily scrum meeting. Social today is usually followed by networking. Online networking, when limited to 140-160 characters, is clearly not about networking but about extending your network - a different ball game. Ours is rugby. We need to huddle, plan, tussle with the competition, then sprint to the finish line. In other words, you're more likely to find a like mind over a cup of coffee than caught up in the web.

SHOW & TELL
After this they need to be recognised for what they have done, and perhaps could do given the opportunity, in order to raise their self-esteem. The daily scrum focuses on facts and blockers, but the Sprint Review allows us a show and tell, and the Retrospective provides for what went well and not so well. Praising others is itself the subject of debate. If you praise one, does it mean the others didn't deserve any? If you praise all, is it meaningless? And when does it become just patronising? Raising salaries is generally regarded as a short-term fix; it often drops you back down the pyramid if provided in isolation.

The ascent to the top is an interesting one. Reaching self-actualisation is about their self-worth. The theory though says that they reach this stage when they're happy in their boots. The issue is, what size boots? This may mean that they need to have several iterations of pyramids, each with only four levels up to self-esteem, with each jump to an increasing level of responsibility or reward, until they feel they have landed, as shown below.


Fig 2

Of course, at the personal level, reaching contentment may not be good for your pension, so it may be worth just dropping the self-actualisation thing - too idealistic for the real world of today. Better to find something you enjoy doing and work (and shop) till you drop. Buddha probably didn't know that reaching nirvana would require many spa days, and these cost.

PRINCIPLE 6
Let's now look at the next principle:
6. The most efficient and effective method of conveying information to and within a development team is through face-to-face conversation.
This one is simple enough - it's good to talk. I'm not entirely convinced that this method of communication is the most efficient or most effective. It certainly has its place. Conveying information on the need for a project; the reasons for the prioritisation of work; why a particular team has been chosen; an introduction of each team member; and a word from the sponsor are all, in my view, better conveyed, in line with the principle, face-to-face.
In a project with rapid releases of small pieces of functionality carried out by a small, co-located team, I can see face-to-face communication being not just ideal but routine. But how many projects today fit this model? If we're chasing the sun for resources, then time zones become a factor. Words can get lost, and that's before we try to translate from spoken languages to programming ones.
Conveying information on functional and non-functional requirements is, from a tester's perspective, best done through the written word. Screenshots work very well too. Testers need traceability. Without the history, regression testing is based on recall. The business may be reluctant to put it in writing, lest we hold them to it.
The shorter the trail of evidence, the longer the journey to the customer?


TEST EXECUTIVE PROFILE


MARK CONWAY
DEVELOPMENT DIRECTOR
BORLAND
WWW.BORLAND.COM

INTERESTING DEVELOPMENTS
Frank aside, Mark Conway is Mr Borland. If the inspiration to create market-leading, innovative products is Borland's lifeblood then, as development director, Mark Conway must keep the arteries of this software company busy. But what makes him tick? We decided to find out...


It's no surprise that Borland's development director Mark Conway keeps a clean office. No papers clutter the desk and few personal effects garnish the room. This man keeps it all in his head, or online. A degree in Mathematics from Oxford University also suggests an ordered approach to life and certainly doesn't hurt when assessing the feasibility of a new software offering or quantifying potential revenues from it.

Mark is a products-led man. He believes that we're only ever a game-changer away from a major shake-up, and this belief is based on previous experience. While no dot.com millionaire (he focussed on running his own consultancy rather than working in the bubble at the time), Conway recalls his good fortune in being in the right place at the right time as the world fretted about the impending digital apocalypse that would hit when the clocks ticked round to Y2K.

Mark Conway joined Micro Focus straight from university, having been enthused by a Micro Focus product he worked with while on placement. It seemed a logical point to start our interview. Conway's CV mentions that his involvement with COBOL while working for British Rail inspired him to approach Micro Focus. "I used Micro Focus Workbench to develop British Rail's track and rolling stock management application," explains Conway. "It had a great source-level debugger called Animator. It was different to anything I'd seen before. I cycled all the way from Oxford to Newbury to be interviewed by the guy who created the product. I just thought that this was the place for me."

One of his key achievements was developing a Micro Focus Year 2000 product, Smart Find and Smart Fix. It generated $40 million inside 12 months. Does he see the potential for a product having that level of impact again? "Yes," he says after a moment of reflection. "But that was an interesting and completely unique time. People had known about Y2K for a long time, but some pre-shocks in 1998, like pension letters being wrong, made it clear this was an issue that anyone with a COBOL app couldn't ignore, and the deadline wasn't negotiable. Still, that was a great opportunity for our company and I bought a Lotus Elise with my bonus that year. I still have it. It reminds me that with few exceptions, whatever you want to do can be done. Software opens up new business opportunities - you just have to be looking in the right places. While Y2K was clearly unique in terms of an occasion, something similar will happen again."

THE PACE OF DEVELOPMENT
So, what are the main challenges of working in the software industry today? "The sheer pace of development," responds Conway. "Everything has to be delivered so much faster and this is accelerating. Development cadence is continually increasing. People want to release code more frequently - monthly, weekly, even daily. There is an increasing use of components, both open source and commercial. Browsers like Chrome upgrade silently. Even if your code doesn't change, a lot is changing around you. We need to be totally aware of the context in which our products live."

"Quality and how you maintain it is another issue," adds Conway. "As everything gets faster, quality must be continuous, and you cannot rely on stabilisation phases. My advice? Consider anything that will help, and test automation including UI testing is certainly a key part of this. Borland has moved away from 1-2 year cycles and now release major functionality every six months. We want to get faster still and we rely heavily on automated testing. We're lucky in that we build rock-solid testing tools, so it's free for us to use them."

"There's a huge variety of devices available now," adds Conway, "and some estimate half of new applications written today target mobile devices. This is a huge number of platforms to cover. The big guys like Google and Facebook can afford to build natively for almost any device, but this isn't feasible for everyone, so the industry is converging on HTML5+Javascript for UI, with REST services on the back-end, perhaps hosted in the cloud. Technology will move on, but for now this looks like a reasonable platform bet for many people. Even if HTML5 works out, this must be tested on a variety of devices and browsers to be sure your application works. Write once, test everywhere. That's the name of the game and anything else will feel unwieldy."

Does this pace create problems beyond just technical issues? "The expanding breadth of technology and the rate of this expansion is behind an inevitable lag in skills," says Conway. "Many organisations are struggling to build to new architectures and devices and are having to outsource because there's no in-house skill. I'm not sure there is a short-term answer here, either. Going forward, initiatives including the Micro Focus Academic Program will help organisations and academic institutions work more effectively to help produce high-calibre students with the programming skills needed, but it's a problem today."

QUALITY PEOPLE
So if Conway was elected Prime Minister tomorrow, how would he deal with this situation? Declaring himself staunchly apolitical he states, "there are two forces you can't fight, economics and technology", and the two issues are very closely linked in this case. His view is that resisting immigration for the sake of a few populist, vote-winning statements that feed into the anti-immigration narrative is hopelessly misguided. He counts off on his fingers the highly-skilled, best-in-breed and exotically-monikered software engineers he's worked with over the years. And these are people we should be encouraging to stay away?

He then compares the economic value of these specialists to the big beasts of the UK financial sector. The hopeless disparity between the perceived worth of the salary-and-bonus-sponges of the City and the innovators in the tech sector who drive income for the UK bemuses Conway immensely. And probably most people reading this magazine.

What are the other barriers to success? "The other problem is with organisations themselves," says Conway. "Software companies are inherently more agile than the organisations we serve and can adapt more quickly to this accelerated pace of change. Our customers often have heavy investments in static or unwieldy applications, platforms and hardware and they are struggling to understand both the new technology itself and how they can transition to it. And that's all without some other technology emerging from leftfield. Which it always does. I've been in this industry for 30 years but no two days have ever been the same."

RECOGNISING THE GAPS IN TECHNOLOGY
For someone in Conway's position, does the job make him query the underlying technology behind everything he uses? "Oh, definitely," he says, "but I'm also concerned about why something doesn't exist. Part of my job is to recognise where there are gaps in the technology landscape that could be filled by Borland software. If the process of creating new software is a journey from idea to conclusion, then I have the knowledge and contacts to work out the bit in between. Brian Reynolds, the founder of Micro Focus, used to say 'Anything's possible with software' and that has never been more true. The beauty of software is that it's malleable. It can be shaped, and it can shape things. It's not a static monument, like a statue. It's the medium that makes everything possible."
So, everything's going mobile and moving to the cloud - true or false? "False. It's not. But a lot will," says Conway, "certainly the organisations that will benefit will naturally gravitate there. For businesses with their own IT infrastructure, moving to a PaaS offering like Amazon may be no cheaper than a well-managed virtualised in-house environment. And there are technical, legal and commercial risks in moving out of your own, working data centre. It can't be a headlong flight into the unknown just because everyone is perceived to be doing it. However, you're not always comparing like for like. Cloud can provide data redundancy, global presence and elasticity much cheaper than in-house IT. For example, Silk Performer Cloudburst uses Amazon's elasticity to run massive load tests - we simply couldn't do it with the physical hardware at a typical customer site - but Cloudburst was created with the Cloud in mind. There's a clue in the name. However, embracing the cloud properly likely means changing your application, and how you scale and store data may be different. You can very easily end up dependent on a specific cloud provider. Do you know how you will move if it doesn't work out? It's not easy to see where you're going in a cloud..."

THE DIGITAL FREE-FOR-ALL
Could this digital free-for-all present Borland with problems? "Well, one of the more disruptive effects of the Cloud is to dramatically lower the barrier of entry for new competitors," says Conway. "Technologies such as NoSQL databases enable very scalable, multi-tenant applications to be quickly built from scratch. Sure, these are often simpler, lower-end offerings but the price is going to be pretty competitive. SaaS offerings can be implemented so quickly and easily they get round organisational controls. A department no longer needs to negotiate with their IT department to provision a server. When software can be adopted at the department or team level then the genie really is out of the bottle."
Is this sea-change in user expectations now driving the market? "For sure. Apple and their like have set new standards for usability. Everyone's having to raise their game and think about how users will truly experience their products. Some big players are discovering their full-featured offerings look bloated, clunky and expensive beside the new wave of lightweight apps. Their challenge is bridging the gap between old and new. This is the space that Borland are working in."

EXPENSE CLAIMS


What's at the top of Mark Conway's in-tray at the moment - and what's at the bottom? "Well, I'm off to the Micro Focus office at Linz in Austria to firm up the latest Silk releases, and as a last resort I'll be dealing with my expenses," he admits. "I'm not big on admin, to be honest, but my creativity does include some pretty eyebrow-raising claims. My favourite? Purchasing ammo while in the States. That was entertainment. I also paid five dollars for a wilderness permit so I could pitch a tent in a national park - I successfully argued that these were accommodation expenses."

For a man focussed on fine detail, Conway's thoughts are of wide open spaces. Rapidly approaching empty-nest status, his travel bucket list includes Asia, and he has already visited the lower slopes of the world's biggest peaks, including Annapurna and Everest. His ambition to climb seven summits in three years burns bright. But for now, his work at Borland gives him space to explore new horizons in technology.
"I have the flexibility to move within this company and I'm excited about the future. We're getting more agile than ever - up to 20 of our guys can be working on a project in as many different locations. There's a lot to look forward to."

VISION ON
Clearly, there's no single trick to having the vision to spot what the market needs and create the product to fill that gap. It's a combination of intelligence, commercial courage and experience. Conway's own adage expresses his philosophy very succinctly: "Someone said software used to be about making things possible, now it's about making things easy."
Settled down with a long-term partner and grown-up children, Mark Conway enjoys outdoor pursuits. A keen triathlete and skier, his office affords a good view of the countryside and the roads that would distract most keen road cyclists - he is as keen to discuss a bike mech as he is high-tech - but tellingly his chair faces his computer screen, not the window. Whatever's coming next, he is already looking for it.
www.borland.com/silkportfolio


STRATEGIC IT
THOMAS COLES
MANAGING DIRECTOR
MSM SOFTWARE
WWW.MSMSOFTWARE.COM

SWEATING YOUR IT ASSETS


In a challenging economic climate you need to make the most of what you've got. Thomas Coles says it's time to sweat those assets.

There are various definitions of what "sweating the assets" means, but in the main we consider it in terms of cutting costs, maximising the capabilities of your current systems and fully utilising the skills of your personnel. With budgets tighter than ever before, businesses are under even greater pressure to make cutbacks to outlays.
As a result, businesses are faced with a key challenge: finding the balance between cutting back where necessary, but not to the detriment of future business growth or their ability to gain leverage over competitors. This balancing act is pivotal when it comes to a business's IT systems. After all, in today's challenging climate the quality and efficiency of IT is critical to business success. It can be the one element that sets an organisation apart from its competitors, helps to win business and optimises productivity. It is therefore essential that businesses ensure that incumbent IT systems work effectively and efficiently.
What makes this more complicated for businesses is illustrated in a research study we undertook with Dynamic Markets, which found 46 percent of IT managers think the required skill set for staff has changed. Therefore, some businesses find themselves in a position where they don't have the strategic skills in-house to help get the most out of their IT systems. Over 40 percent also told us they do not have sufficient resources to support their existing IT systems.
The feedback came from over 100 interviews undertaken with IT managers in medium to large organisations across the country. We discussed the changing nature of IT in their sector, and not having sufficient resources to support the IT function was a frequent concern.

SWEATING ASSETS
With restricted budgets meaning that businesses do not have the funds to invest in new systems, and limited resources resulting in current IT systems not being efficiently supported, it is no surprise many businesses fail to maintain software.
To address this, businesses must look at ways to keep their company competitive by carefully considering the most economical resource solution to ensure systems are fully supported and in turn can operate at an optimum level.

STRATEGIC SKILLS
With IT managers citing a lack of in-house strategic skills as a key issue, support in this area can be achieved through outsourcing the IT function, or "body shopping", where a business loans the technical expertise of an organisation's employee. It can be provided over a short-term or longer-term period depending on what priorities businesses have and what capabilities they have in-house.

Legacy systems can be a drain on resources, and the challenge of managing such technologies is a constant battle. By outsourcing system support, the strain on a business's internal team will be eased so they can focus on other core activities, whilst also ensuring IT systems are fully supported, to guaranteed SLAs. Ultimately this will improve performance and reduce the cost of running the business, providing peace of mind and more time to focus on what the business does best.
Without the resources to support systems, software becomes dated, which puts organisations at risk and greatly increases the potential for system failure. In a worst-case scenario, this can introduce turmoil into the company, with huge repercussions in terms of lost sales, revenue, custom and reputation. All of which are hard to win back. It is therefore essential businesses act now to ensure current systems have the support required, to ensure the long-term success of the business.

Analysing the efficiency of your organisation's legacy software can be achieved cost-effectively through technical audits or system health checks, which provide vital insight into how to improve performance. Once the system has been analysed, recommendations will ensure systems are fully supported and can operate at an optimum level.


eggPlant - the world's favourite functional test automation tool - and now Facilita, with load and performance solutions, is part of TestPlant.
London | Boulder | Hong Kong and Congleton!
SOFTWARE EXPORTER OF THE YEAR

TEST ENVIRONMENT MANAGEMENT


SEAN HAMAWI
PRODUCT DIRECTOR
PLUTORA INC
WWW.PLUTORA.COM

AVOIDING TEST SLIPPAGE


Poor test environment management is the common denominator when there is test slippage in large organisations, according to Sean Hamawi.

A post mortem on why testing on large-scale projects or programmes slipped from the test schedule often points to poor test environment management as the common denominator. This is normally one of the high-percentage root causes of slippage, amongst the other usual causes such as bad requirements, overly buggy code and resource constraints.
Rarely do projects operate without test environment issues. As organisations mature, their IT application suites grow, increasing integration points and data complexity. This always introduces end-to-end knowledge gaps around dependencies in both setting up and managing test environments.
Complex integration points need to be replicated in test environments in order to adequately perform representative progressive and regressive testing of production scenarios. As code drops and configuration settings get deployed, managing the code and config between each environment and their respective components becomes extremely difficult. It only takes one missed config item and the test team has down-time, which can put a significant dent in the test schedule. All too often on projects it happens over and over again.



TEST ENVIRONMENT STRATEGY
Introducing a test environment strategy is all too often neglected by organisations that have parallel projects operating. While Test Centres of Excellence (TCoE) are well and truly embedded in organisations, the environment management aspect of the TCoE is usually weak. The reason for this is that test environment management has many interfacing aspects with different IT teams, which makes it harder to achieve streamlined and repeatable processes.

Test environment managers in organisations tend to use the phrase TEMS, aka Test Environment Management Services. While the TEMS framework on paper always looks good, the outcomes when delivering this service to multiple projects are often the opposite. A test environment manager should act as a coordination and management function across multiple teams and resources sitting in different geographical locations. Where TEMS practices come undone is with the highly aggressive and heavy processes they introduce, such as daily meetings, test environment requirements forms etc; applying a framework which is low touch and highly repeatable is the key to success.

As projects and other IT initiatives kick off in organisations, a string of back-and-forth questions surface, such as: do we have test environments? Are there enough? Who is responsible for managing drops? Which vendor is responsible for what component of the end-to-end environment? Looking at some of the simplest causes of why test environments lead to projects slipping, you will often find that there is no proper framework in place to actively govern the way environments are managed. In a world where vendors and SaaS are a fact of life, the need to actively pursue a test environment strategy becomes self-evident.

A FIVE POINT PLAN
Here are five points which test environment managers can look at refining to minimise environment down-time and improve their working relationships with test teams.

Refine your Test Environment Management Services (TEMS) processes: Over the last half-decade many larger organisations have opted to outsource their TEMS practice to multi-national IT consulting firms. This has been a sensible approach because these IT firms bring with them experience, processes and sometimes custom tools. The major challenge, though, is that their processes are sometimes top-heavy. Reviewing the process to see what is working and what is not is very important. A simple way to approach this is to review a recently implemented project, looking at how many of your existing TEMS processes were followed. You will be surprised by how low the number will be. There is no harm in version-controlling your TEMS processes, constantly making improvements and sharing the improvements with your colleagues.

Implementation of access controls: In most enterprises the testing team are not the resources performing code deployments, server configs, data refreshes etc. However, test teams do often have access to perform technical procedures like those above, which introduces the risk of accidentally making unqualified changes. A quick fix is to implement access controls over the test environments so that the test team and vendors can only access the components relevant to performing their day-to-day job.

Managing test environment communication and contentions: Lots of projects means lots of test environment logistics to manage. Competing project requirements translate into tracking which environment is pointed to which integrations and what codesets and datasets reside at different application layers. Often spreadsheets are the tool of choice for managing all this data, which spells disaster when you have competing projects all vying to use the same test environments.

Implementing a Test Environment Management toolset: This can dramatically improve how contentions and competing requirements are managed. There is no use having an Excel spreadsheet kept on a file share somewhere if people can't access it. Find a TEMS tool which allows you to model and implement your processes; you will find that all your colleagues immediately have a greater appreciation of what you're trying to achieve.

Enforcing a low-touch change request (CR) process: Test environments are not managed with the same rigour as production. I totally agree that non-prod environments need more flexibility, but where problems arise is when changes are made to the test environments on an ad hoc basis without any form of documentation for auditing and governance purposes. Implementing a change ticketing system can easily be done. I would strongly advise against using your production ITSM tool for this, because production protocols can get pushed on you for using an ITSM toolset, which creates overheads. Manage configuration drift by properly documenting the changes made against each environment. Without an audit trail, untangling configuration drift will become time-consuming and expensive.

To understand where your organisation sits in relation to the maturity of your environment change, ask yourself one question: "Can I easily track what changes to environment x have been made, and what is the current configuration/application version of that environment?". If the answer is not a definite yes, then the writing is on the wall as to why you may have environment issues all the time.
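Answering that maturity question does not require a heavyweight tool. As a minimal, hypothetical sketch (environment and component names are invented for illustration; a real TEMS toolset would add persistence, access control and contention booking), an append-only change log per environment is enough to answer both "what changed?" and "what is the current version?":

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Change:
    """One auditable change applied to a test environment."""
    environment: str   # e.g. "SIT-2"
    component: str     # e.g. "payments-service"
    version: str       # code or config version deployed
    author: str
    note: str = ""
    when: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class EnvironmentLog:
    """Append-only audit trail: answers 'what changed in environment x,
    and what is its current configuration?'"""
    def __init__(self) -> None:
        self._changes: list[Change] = []

    def record(self, change: Change) -> None:
        self._changes.append(change)

    def history(self, environment: str) -> list[Change]:
        """Full audit trail for one environment, oldest first."""
        return [c for c in self._changes if c.environment == environment]

    def current_state(self, environment: str) -> dict[str, str]:
        """Latest recorded version per component wins."""
        state: dict[str, str] = {}
        for c in self.history(environment):
            state[c.component] = c.version
        return state

log = EnvironmentLog()
log.record(Change("SIT-2", "payments-service", "1.4.0", "jsmith", "sprint 12 drop"))
log.record(Change("SIT-2", "billing-db", "r2033", "adeva", "data refresh"))
log.record(Change("SIT-2", "payments-service", "1.4.1", "jsmith", "hotfix"))

print(log.current_state("SIT-2"))
# {'payments-service': '1.4.1', 'billing-db': 'r2033'}
```

Even this much, kept somewhere everyone can reach rather than on a private spreadsheet, turns untangling configuration drift from archaeology into a simple query.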

AN EXPENSIVE BUSINESS
No doubt about it, test environments are extremely expensive to run, especially when you have integrated environments which replicate production. You have the cost of licensing, infrastructure and resourcing. While virtualisation has reduced the costs associated with infrastructure and time to market, the costs of application licensing and resourcing are on the up.
Regardless of whether you work in the public or private sector, getting budget to spend on new environments is often not a reality. This means managers need to get smarter and more innovative around how to do more with current environments. Implementing a few of the concepts above will help you on the way to TEMS success.


THE EUROPEAN SOFTWARE TESTER
INNOVATION FOR SOFTWARE QUALITY

Subscribe to TEST free!

Published by 31 Media
Telephone: +44 (0) 870 863 6930
Facsimile: +44 (0) 870 085 8837
Email: info@31media.co.uk
Website: www.31media.co.uk

FOR EXCLUSIVE NEWS, FEATURES, OPINION, COMMENT, DIRECTORY, DIGITAL AND MUCH MORE VISIT: testmagazine.co.uk

SOCIAL MEDIA
GEMMA MURPHY
SOLICITOR
LESTER ALDRIDGE LLP


WATCH WHAT YOU TWEET!


There used to be a clearer line between an employee's professional and personal
life, but with the ever-growing use of social networking sites for business and
pleasure, the line is increasingly blurred. Gemma Murphy, solicitor at Lester Aldridge
LLP, reports.

While social media undoubtedly brings
substantial benefits to a business, particularly
in relation to LinkedIn which can help raise a
business profile and develop networks, the
abuse of social media by employees may
also pose risks. Some employees feel the need to publish
their every thought and grudge on Facebook or Twitter,
including derogatory comments about their employer,
colleagues or even clients.

Recent news headlines demonstrate that the use of
Twitter and Facebook not only affects an employee's
current career but also a candidate's prospects of a
job. Take for example the case of Paris Brown, the UK's
first Youth Police and Crime Commissioner, hired at just
17 to be the much-needed link between the police
and young people. Within days of her appointment,
her previous offensive tweets, which could have been
considered racist and homophobic, came back to haunt
her, and it led to her promptly resigning from the post.
Some had sympathy for Paris, saying that all teenagers
do stupid things and that it's just a part of growing
up, but the fact remains that her opinions were public!
As a prospective employer there is nothing to stop
you googling a candidate to assess their credentials,
or indeed to see if there is anything which might bring your
business into disrepute.

So the lesson is clear for employees: they should not
have any expectation of privacy once their comments
are in a public forum.

If you think it all comes down to age and maturity,
then think again! Police sergeant Jeremy Scott, aged
52, recently resigned following tweets about Margaret
Thatcher's death. Mr Scott not only described the world
as a better place but, in offensive language, said that he
hoped that her death was painful and degrading. His
actions showed disregard for the reputation of the police,
and he clearly forgot how closely his personal and
professional life were intertwined.

Then there's the recent story of Kelly Doherty, aged
26, who called in sick from work for two days then
proceeded to post photos of her antics on Facebook
whilst off sick.

WHAT DO YOU DO AS AN EMPLOYER?


The best way to protect your business is to introduce
effective policies and procedures that set out acceptable
(and unacceptable) social media practices and to
manage issues in a clear and consistent way. We would
recommend a social media policy which gives clear
examples of unacceptable practices, such as using
Facebook during work hours, reputational damage to
your business and your clients' businesses, disclosure of
confidential information and harassment of colleagues.
We also recommend a bullying and harassment policy
which specifically covers social media abuse both during
and outside work hours. Your disciplinary and grievance
procedures should also make specific reference to social
media issues, with clear guidelines about how these will
be dealt with in practice depending on the different
scales of social media abuse.


DESIGN FOR TEST

MIKE HOLCOMBE
FOUNDER AND DIRECTOR
EPIGENESYS LTD
WWW.EPIGENESYS.CO.UK

COMPREHENSION MATTERS
This issue Mike Holcombe investigates automatic
test set generation in particular and test
automation in general.

In the increasingly challenging world of testing,
the ability to automate different parts of the
testing process can prove attractive. The idea of
automatic test set generation is one area where
progress is being made. For example, search-based
test input generation can offer significant benefits.
The approach is based around the identification of input
sequences that will exercise all the paths through a
program graph. So we are looking at the structure of the
code and trying to cover all possible routes through the
software. We can use evolutionary techniques to breed
test input sequences and then test them to see if they
achieve what we want; in this case, do they exercise the
path we are targeting?
A general approach is to choose, in a random manner,
an initial test sequence and to see how fit it is by
checking how close it gets to the desired path. If it is not
right we then generate another sequence using a variety
of possible methods, genetic algorithms and so on. This
new sequence is then tested to see how well it fits and, if
it is better, we throw away the earlier one and try to
improve the new one by repeating the process.
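The generate-evaluate-replace loop just described can be sketched in a few lines. This is purely illustrative and not taken from any particular tool: the fitness function, the target path (here, inputs beginning with "test") and the mutation scheme are all stand-ins for the instrumented branch-distance measures a real search-based tool would use.

```python
import random
import string

def fitness(candidate: str) -> int:
    """Toy fitness: how close the input is to driving the target path.
    Here the 'path' is taken when the input starts with 'test'; the
    score is the number of matching leading characters (higher = fitter).
    A real tool would instrument the program and measure branch distance."""
    target = "test"
    score = 0
    for c, t in zip(candidate, target):
        if c != t:
            break
        score += 1
    return score

def mutate(candidate: str) -> str:
    """Replace one randomly chosen character with a random lowercase letter."""
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(string.ascii_lowercase) + candidate[i + 1:]

def search(length: int = 8, budget: int = 10_000) -> str:
    """Hill climb: keep a mutant only if it is at least as fit as the current input."""
    current = "".join(random.choices(string.ascii_lowercase, k=length))
    for _ in range(budget):
        if fitness(current) == 4:  # target path fully covered
            break
        mutant = mutate(current)
        if fitness(mutant) >= fitness(current):
            current = mutant
    return current

best = search()
print(best, fitness(best))
```

With a large enough budget the loop almost always converges on an input that exercises the target path; swapping the acceptance rule or the mutation operator for a genetic algorithm follows the same evaluate-and-replace pattern.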
Now this is all very well, but the type of sequences
generated may be difficult to understand when we
ask the question: what is the desired output we expect
to see from this test input if the code is correct? Thus
we need to interrogate an oracle for the software to
determine the correct outcome of the test.
To take an example from Sheeva Afshan's recent paper1,
the test sequence '#qpgbkJ;_ir9', generated by such
a search-based technique, may not be easily handled
compared with something like 'inererof_yo', which is
somewhat more readable. The class being tested in
this context is one that converts an identifier with
underscores to the equivalent string in camel case, i.e.
'under_score' is transformed to 'underScore'.
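For reference, the behaviour of such a class can be sketched as a single function. This is a plausible reconstruction of the kind of code under test, not the actual class from the paper:

```python
def to_camel_case(identifier: str) -> str:
    """Convert an underscored identifier to camel case,
    e.g. 'under_score' -> 'underScore'."""
    head, *rest = identifier.split("_")
    return head + "".join(part.capitalize() for part in rest)

print(to_camel_case("under_score"))  # underScore
print(to_camel_case("inererof_yo"))  # inererofYo
```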
So what we need to do is to throw away those strings
that are incomprehensible until we get to a point where
the test input string not only exercises the desired path
but is easier to understand, and we want to do this
automatically.
The way to do this is through the use of language
technology where techniques have been developed
that can determine to a level of confidence whether

a given string belongs to a language, such as English.
One such approach uses what are called bigrams and
a language model. Bigrams look at pairs of adjacent
characters in a string, and the technique works out the
probability that the pair is a valid pair in the target
language; a large corpus of this information is
provided. In the scheme used in this
work, the word 'testing' has a much larger probability of
meeting this requirement than the string 'Qu55-ua'. For
example, the string 'te' has a certain probability of being
in the language, 'es' also, and so on. We put these
probabilities together to find a probability for the whole
word 'testing'. In the case of 'Qu55-ua', the first pair
'Qu' has a specific probability (actually quite high for
these two characters) but, when combined with the rest of
the pairs in the string, the likelihood of this string belonging
to the language is very much smaller. This gives us a
technique for rejecting strings that are not very easy to
check in an oracle, and should lead to a much easier
task of determining whether the test has produced the
correct result.
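The bigram scoring just described can be sketched as follows. The tiny corpus and the smoothing constant are stand-ins — Afshan et al. train on a large English language model — but the principle of combining per-pair probabilities (summed here in log space) is the same:

```python
import math
from collections import Counter

# Tiny stand-in corpus; a real language model is trained on a large one.
CORPUS = "testing software is interesting and testers test the tests"

def train_bigrams(text: str) -> dict:
    """Estimate the probability of each adjacent character pair in the corpus."""
    pairs = Counter(text[i:i + 2] for i in range(len(text) - 1))
    total = sum(pairs.values())
    return {pair: count / total for pair, count in pairs.items()}

MODEL = train_bigrams(CORPUS)
FLOOR = 1e-6  # smoothing probability for pairs never seen in the corpus

def log_prob(s: str) -> float:
    """Sum of log-probabilities of each adjacent character pair in s."""
    return sum(math.log(MODEL.get(s[i:i + 2], FLOOR)) for i in range(len(s) - 1))

# 'testing' is made of common English pairs; 'Qu55-ua' is not.
print(log_prob("testing") > log_prob("Qu55-ua"))  # True
```

Strings whose score falls below some threshold are rejected, and the search continues until it finds an input that both exercises the target path and scores well against the language model.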
This approach was evaluated by a number of testers
and seems to produce significant benefits, especially
a significant reduction in test data cognition time,
with improved accuracy of the human oracle
and improved speed.

References:
1. Sheeva Afshan, Phil McMinn, Mark Stevenson, Evolving Readable String Test Inputs Using a Natural Language Model
to Reduce Human Oracle Cost, ICST 2013.
philmcminn.staff.shef.ac.uk/publications/pdfs/2013-icst.pdf



OUTSOURCED TESTING
MARK BARGH
FOUNDER AND DIRECTOR
ROQ IT
WWW.ROQIT.CO.UK

BRINGING IT ALL
BACK HOME
You could be forgiven for thinking that a
recession would drive further offshoring
due to the economies it promises, but
these promises appear to have been
broken and it's no longer a risk CIOs are
prepared to take. Mark Bargh reports.




When IT offshoring began accelerating
around nine years ago, it was quickly
adopted by large global corporations in the
marketplace and launched a trend which
showed little sign of abating. Until now, that is.
IT in the UK has always been seen as more expensive than
that delivered in other countries such as India and other
low-labour-cost countries, which no doubt caused the
initial attraction to move practices offshore. But it seems
companies are finally realising that price should not be
the only driving factor in choosing an IT service provider.
With an increasing number of organisations failing to
renew long-term contracts, and more triggering their
break clauses, we need to look at why this change is
happening and whether it is set to reshape the future of
the IT industry in the UK. You could be forgiven for thinking
that a recession would drive further offshoring due to
the economies it promises, but these promises appear
to have been broken and it's no longer a risk CIOs are
prepared to take.

THE OFFSHORE BARGAIN
Two thirds of UK IT projects are destined to fail before
they've even started by running over time and budget,
so it's no wonder other delivery options are considered
and adopted. The appeal of the heavily process-driven,
CMMi-focussed offshorers, with their perceived bargain
rates and reduced delivery timescales, cannot be ignored.
The original business case for offshoring was clear: lower
costs, faster speed of delivery, immediate scalability,
access to superior skills and technical expertise. However,
now that companies have first-hand experience of
offshoring, and as the global market continues to evolve,
there are an increasing number of indicators highlighting
the return to UK-based service providers.

In a time when the UK economy was exceptionally strong
and wages were high, those in developing countries were
comparatively low. But as their economies have improved,
their day rates (primarily driven by low salaries) have
increased exponentially, which is quickly eradicating that
financial competitive edge. India in particular has been
battling to cap wages as inflation sits higher than nine
percent and wages themselves increased by 11.9 percent
in 2012 alone. Pit this against the average 1.4 percent
increase in the UK in the same period and the gap was
significantly closed in the space of just 12 months,
discounting any previous growth.

Had the increase in day rates been supported by an
improvement in service, the value might not have been
in question. Instead, it has become a common occurrence
that the business case objectives are simply not satisfied.
Whether that be from a quality, cost or time perspective,
the fact remains that the service is frequently over-promised
and under-delivered, which not only has a huge impact
on budgets but also on wider programmes of work, and
can even have a direct impact on revenue streams.
CIOs want value, not compromise.

The quality of the work can be compromised by a number
of factors. One of these is the high rate of attrition
experienced in many offshore companies: as much as
50 percent in some cases, but more commonly 20-30
percent. Although offshorers can provide extremely
high-calibre resources, the exceptional ones move on
too quickly, taking with them valuable skills and domain
knowledge. In an isolated incident this would be difficult
enough to replenish, but as a sustained practice, deadlines
and suchlike become unachievable. The expected rates
of staff departure and the necessary time to resource
and train new team members are simply not built into
project timelines. As such, project expectations are highly
unrealistic and requirements are unlikely to be met. The
high level of optimism in planning serves only to impede
schedules in the long term, and again it ensures the
original business case for the contract is not satisfied.

This is somewhat different in the UK IT marketplace. The
current climate appears to be having some positive
stabilising effects; talented people in a suitable role are
not feeling the same desire to move on as quickly. With
no solid future guaranteed anywhere, the fear of the
unknown is greater than that of the known, and the same
riches are not promised to entice newcomers as they
once were. Some level of attrition can always be
expected, but certainly not to the same degree as in
other countries popular with outsourcing.

When an offshore company is engaged and there is a
need to bring staff over to work on site (perhaps due to a
particular cycle of testing), the recent changes in the law
mean visas are only granted for work in the UK after
employment for a specific time period. This only
accentuates the problems with high staff turnover: it
limits the number of team members who could visit the
UK if needed.

Furthermore, maybe due to the changes the IT market
had to undergo in the UK as companies shifted to
offshore, or perhaps due to our culture in itself,
expectations are more conservative. Frankly, there is less
fundamental complacency in the UK; no longer is
anything a given. This is as true of attrition as it is of more
general issues. Contracts are not automatically renewed,
nor are they easily won; everything has become more
competitive and everyone is fighting harder to succeed.
Along with many other factors, this has resulted in much
more favourable price points in the UK market (more
favourable for the buyers of services at least!). The
squeeze on salaries, a heightened focus on economies
and sophisticated general practices have enabled the
rates of UK service providers to almost match those
promised by offshorers whilst delivering a more
value-based proposition.

This is further accentuated by the differences often seen
in manpower structure. Carefully managing resources is
critical in ensuring project costs are kept as low as
possible. Yet in some cases, projects are managed
somewhat irresponsibly: seven test leads might be
managing ten testers where in fact it would be normal in
the UK for one lead to comfortably manage six testers.
This would of course cost significantly less and would
ensure a more streamlined workflow.

We could attribute this to cultural differences, of which
there are many; crossing thousands of miles
geographically brings not only a different climate but
different values too. Ideas on project execution can
differ internally within an organisation from individual
to individual, let alone from country to country. For
example, many companies in India are heavily process-driven
and place huge emphasis on frameworks such


as CMMi, which simply is not as widely
adopted in the UK. The success promised
by following its path is rarely delivered,
owing to the fact that software
development projects are seldom
conducted according to its strict
guidelines, and as such it is difficult to
align other processes, such as testing,
that utilise it. Its rigidity is also in stark
contrast to the flexibility of an onshore
engagement model.

TRUST & COMMUNICATION


And the differences are not limited to formal process;
there are far more fundamental problems too. There often
exists a reluctance to say no to any requests that might
arise, nor might there be any questioning of requirements,
regardless of how clear they are or how incorrect they
might be. The implications of this are highly damaging.
IT service providers must be able to have open and frank
conversations with their clients; trust and honest
communication should be of paramount importance.
Expectations must be set realistically in order to achieve
the desired outcome of a project, or it is doomed to fail
before it even begins. And this is before we take into
account the fact that the project may kick off in earnest
with a team of 50 working from business requirements
which they do not fully understand but do not dare
question. The Definition of Done can differ from person
to person, and it is difficult enough for the business to
determine requirements in the first place. Ensuring they
are met in this type of culture is nigh on impossible.

To add to this is the fact that the resources employed
are often allocated to a multitude of projects, stretching
their time, commitment and responsibility too thinly. A
solution to most problems in such cultures is to throw
headcount at it, meaning your project might be further
compromised if another elsewhere starts to fail more
seriously. Conversely, it does mean that you get more
people dedicated to yours when it all goes wrong, but
ideally no one wants to get to that stage in the first place!

GEOGRAPHY
The geographical separation of course also brings with it
different time zones. It goes without saying that operating
during the same working day is easier: you have longer
in which to schedule meetings or discussions, and should
any critical problems arise, they can be addressed
promptly. This is further compounded as more and more
projects shift to an Agile methodology; the daily stand-ups
are infinitely more difficult across language and time
zones and should ideally be executed at the start of the
working day. The intricate working practices are also
more challenged by the time zone split, as the whole


nature of Agile is to be more collaborative
and to work more closely together.
So if offshoring is fraught with such
problems, what are the alternatives
when you need to realise the
same benefits of cost savings and
outsourced delivery? Until now, very
few. UK providers have been seen
as expensive contractors, body
shoppers and suchlike. But lately
there has been serious development
in their offerings.

A spate of test labs have been opening
in the UK, most recently by ROQ IT, based
in the North West. They've moved to a new
facility to house a rapidly growing in-house team
who will be working remotely on client projects that
might previously have been offshored. The streamlined
nature of their working practices and the locality, in
addition to the inherent professionalism garnered from
years of working on client sites, afford the ability to
operate certain types of project at price points very
close to those of offshore providers. In most cases, it gives
clients the best of both worlds: excellent service at a
sensible cost, and all at a lower risk.

In a localised service, there is increased ease of
knowledge transfer. With test analysts able to move
between the lab and the client site as required, passing
testing processes and knowledge on becomes far
more straightforward; there are no visa issues, long-haul
flights or time zones to contend with. The abilities of
such an offering range from delivering large projects as
a managed test service, combining a flexible on-site/
off-site model dictated by delivery requirements
and not contractual obligations, to more tactical
long-term engagements such as regression testing, live
proving, test automation and device testing. Taking
all this into account, the original specifications laid
out in business cases developed for offshoring can be
translated to onshore services and are more likely to
yield the required outcomes.

SECURITY
Test labs in the UK don't need to compromise on
technological quality either. With state-of-the-art security
in use, secure servers, firewalls and failsafe backups, in
addition to remote access to client test environments
and secure accounts for production testing, UK test labs
can parallel any other.
So we're perhaps on the brink of a further shift in global
IT delivery, and one that will only benefit the industry in
the UK. The drivers to offshore in the first place cannot
be ignored, but ensuring they are met is proving to be
extremely challenging; a UK delivery model might
instead provide the necessary solution.



THE LAST WORD


DAVE WHALEN
PRESIDENT AND SENIOR SOFTWARE ENTOMOLOGIST
WHALEN TECHNOLOGIES
HTTP://SOFTWAREENTOMOLOGIST.WORDPRESS.COM

SPECIAL OFFER!
Dave Whalen assesses the value of free
automated testing tools.
There are two groups of people that know
absolutely nothing about software testing: first,
test tool vendors; second, software project
managers. Sadly, the first group is keenly
aware of the second group's lack of testing
knowledge and experience, and are like the
old travelling salesman when peddling their wares.

If you are an automated tool vendor - keep reading.
If you're currently involved in any automated testing
project - keep reading. If you think automation is always
the answer - ignore the vendors and keep reading. If your
underlying tests are rubbish, automation just gives you
high-speed rubbish. It's still rubbish.
Any test automation effort is going to be expensive - even
if you use a free tool. In fact, I'm willing to bet a
super-deluxe, cream-filled chocolate donut that free tools, in
the long run, cost more than the big high-dollar tools to
implement and use effectively. Not that those big tool
vendors are off the hook.
Please don't think that I'm against test automation
- nothing is further from the truth. Many of the tools
are great tools - when used properly and in the right
circumstances. Will automated tools reduce test times?
Mostly. Will they improve your testing? Maybe. Will they
result in a higher quality end product? Rarely. Much
depends on what you choose to automate, or how you
define automation. Engrave the following somewhere in
your brain where you can refer to it often: you cannot,
and in most cases should not, automate everything. It is
rarely economically feasible!
Let's take a quick look at functional test tools. If you only
use the test automation tool to run positive, happy-path
tests, can you really consider yourself automated? Not
in my opinion. The problem with most free tools is that all
they do is record user responses and play them back at
high speed. Error pages or messages can fly by - just like
the good pages or confirmation messages. If you are not
paying attention, you may miss them unless the system
crashes. These tools do not evaluate or validate anything.
That part is still up to the tester. Just because a test script
runs end to end, it may not mean the test passed or the
application works.
Higher-functioning tools allow you to modify the input
data, which is a step in the right direction. You can now
enter bad data and see what happens. Most tools
typically still require some type of manual validation of
the results. Unless it is a critical error causing the system to
crash, most scripts will sail past error messages. If you are
intentionally inducing errors (negative testing), you probably
want to see the results. I would.
The really good tools will not only let you input good and
bad test data, but will also evaluate the results. These are
usually not the free tools. Really good tools will validate
page loads or data fields, detect error messages, read
parts of the screen, validate databases, etc. All of the


things you would typically
do during a manual test.
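The difference between blind playback and a validating check can be illustrated with a small sketch. Everything here is hypothetical - `submit_form` stands in for whatever the tool drives - but the point is the explicit assertions, on both the positive and the negative path, that a bare record-and-playback tool omits:

```python
def submit_form(data: dict) -> dict:
    """Hypothetical system under test: rejects a submission with no name."""
    if not data.get("name"):
        return {"status": 400, "body": "Error: name is required"}
    return {"status": 200, "body": "Saved"}

def playback_only(data: dict) -> None:
    """What a bare record-and-playback script does: drive the
    step and move on. An error page flies by unnoticed."""
    submit_form(data)

def validating_test(data: dict, expect_error: bool) -> None:
    """What a real automated test does: validate the outcome,
    including the deliberately induced negative case."""
    response = submit_form(data)
    if expect_error:
        assert response["status"] == 400
        assert "Error" in response["body"]
    else:
        assert response["status"] == 200

playback_only({})                             # 'passes' even though the app errored
validating_test({}, expect_error=True)        # the induced error is actually checked
validating_test({"name": "Ada"}, expect_error=False)
print("all checks ran")
```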

Many of the automated test projects that I have evaluated
as a consultant fall way short of the level of testing that I
would expect. Most of the tools are purchased by someone
far outside of the testing arena who has no clue what they
have bought. They listen to all the vendor hype and purchase
the tool without talking to anyone from the test team. Then,
like Santa Claus, they bring you this nicely wrapped gift that
is completely useless.
So you load the new tool, designate one or two people as
automated testers, and record and run your first test. You
feed in all the right data in all the right places and click all
the right buttons. The test passes. On to feature number
two. You demonstrate your library of new high-speed tests
to a completely ignorant and clueless group of managers
who are wicked impressed. You continue to automate the
remaining suite of tests. You deliver the product ahead of
schedule - all as a direct result of your automated testing.
You now declare yourself automated.
But are you really? At this point all you really have is an
automated smoke test. It's a start, but there is a huge
amount of testing remaining. Don't forget the negative
side. If you use a free tool you may have no other option
than to do only positive tests. That is typically all these tools
will support. They are usually just record-and-playback
tools. They are of limited value. But hey, what do you want
for free? In fairness, a handful of the free tools are actually
pretty good.
Lastly, in spite of what the travelling salesmen tell you
- automated tools are not a plug-and-play option. Sorry,
tool vendors, I'm not buying. They require specialised
knowledge, development time and maintenance. All still
pretty expensive. Once the system is up and running and
all of the scripts are recorded and errors are fixed, you
may begin to see some cost savings. Maybe!


