5 Ways To Revolutionize Your Software Testing


By Dr. James Whittaker


INTRODUCTION

REVOLUTIONIZE YOUR QA

During serious ship mode time, people often lose sight of the big picture and concentrate solely on the day-to-day details of finding and resolving bugs. Always being one to buck such trends, I've come up for air and am offering five insights that will revolutionize the way you test and, I hope, make your team more effective.

INSIGHT 1: There are two types of code and they require different types of testing
INSIGHT 2: Take your testing down a level from features to capabilities
INSIGHT 3: Take your testing up a level from test cases to techniques
INSIGHT 4: Improving development is your top priority
INSIGHT 5: Testing without innovation is a great way to lose talent


INSIGHT 1: THERE ARE TWO TYPES OF CODE AND THEY REQUIRE DIFFERENT TYPES OF TESTING
Screws and nails: two types of fasteners that carpenters use to bond pieces of wood. A carpenter who uses a hammer on a screw wouldn't last long as a professional furniture maker. A hammer drives a nail and a screwdriver a screw. A good carpenter knows when to wield the right tool.

But what is simple for the carpenter is commonly a problem for the software tester. To demonstrate, here's a note I received from a friend (who works for another company):


"I was just in test review yesterday and they were seeing diminishing returns on functional automation testing. They were experiencing an inflection point where the bugs are now mostly about behavioral problems and integration with other systems. Their tests were becoming less useful and [it was] harder to investigate the failures. My guess was that the only way to solve this was heavy instrumentation (that could produce an intelligent bug report based on a hell of a lot of state/historical context)."

Inflection point indeed. I've seen this same inflection point on a number of products here. When automation stops finding bugs, the tempting conclusion is that quality is high. Clearly, my friend above didn't make that mistake. Too much faith in automation is one of the problems I believe we suffered from with Windows Vista. The automation stopped finding bugs and we took that as good news.


** This is often referred to as end-user behavior, functionality, features and so forth, but I think experience is the best encapsulating term. Feel free to think of it in these other terms if you find them more to your liking.

Even if it is good news, it is only good for part of your code. For the other part, it's useless. This brings me to the point of this section: there are two types of code in every product, and the testing concerns, methods and practices are different for each of them.

The first type of code is what I call experience code. It's the code that implements the functionality that attracted users to the application in the first place. It provides the user with the experience** they are seeking. For a word processor, it is the code that allows a user to type, format and print a document. For an email application, it's the code that allows reading, replying and managing of messages. For a mobile phone, it's the ability to make and receive calls. For my own product in Visual Studio, it's the code that allows testers to write, manipulate and execute test cases.


The second type of code is infrastructure code, that is, the code that makes software run on its intended platform (cloud, desktop, phone, etc.) and interoperate with various devices, peripherals, languages and fonts. It is all the input and output code, memory, file system, network, runtime and glue code that makes all the moving parts work together. For my Visual Studio product, it is the connection to the Team Foundation Server that stores test cases, project data and the various Data Collectors that extract information from the environment where the app under test is executed.

This is code that a human user generally does not see, as it executes invisibly and only announces its presence when it fails. Infrastructure code can't generally be executed directly by a user. Rather, it responds to changing environmental situations, stress, failure, load and so forth. Thus, its testing is tailor-made to be automated.
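To make the contrast concrete, here is a minimal sketch of what automated infrastructure testing can look like: injecting simulated environmental failures and asserting on the code's response. The `save_with_retry` helper and its error handling are hypothetical illustrations, not code from any real product.

```python
import unittest
from unittest import mock

# Hypothetical infrastructure-layer helper: a write routine that must
# survive transient storage failures. Illustrative only.
def save_with_retry(write_fn, payload, attempts=3):
    """Try write_fn(payload) up to `attempts` times before giving up."""
    last_error = None
    for _ in range(attempts):
        try:
            return write_fn(payload)
        except IOError as err:
            last_error = err
    raise last_error

class InfrastructureFailureTest(unittest.TestCase):
    def test_recovers_from_transient_failures(self):
        # Simulate an environment where the first two writes fail.
        flaky = mock.Mock(side_effect=[IOError("disk busy"),
                                       IOError("disk busy"),
                                       "ok"])
        self.assertEqual(save_with_retry(flaky, b"data"), "ok")
        self.assertEqual(flaky.call_count, 3)

    def test_gives_up_after_exhausting_attempts(self):
        always_down = mock.Mock(side_effect=IOError("network down"))
        with self.assertRaises(IOError):
            save_with_retry(always_down, b"data")

if __name__ == "__main__":
    unittest.main()
```

No human needs to sit and watch this run, which is exactly the point: stress, failure and load responses are mechanical to provoke and mechanical to check.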

If my friend is trying to find user experience bugs, then I'd suggest ignoring the automation altogether and putting some manual testers on the case. Business logic bugs require putting eyes on screens, fingers on keyboards, and what I like to call a brain-in-the-loop. In short, automation is simply overmatched by the human tester when it comes to analyzing subtle behaviors and judging user experience.

BOTTOM LINE

WE MUST UNDERSTAND THE DIFFERENCE BETWEEN EXPERIENCE CODE AND INFRASTRUCTURE CODE AND USE THE RIGHT TECHNIQUE FOR THE JOB. LIKE THE CARPENTER, WE MUST USE THE RIGHT TOOL FOR THE RIGHT JOB.


INSIGHT 2: TAKE YOUR TESTING DOWN A LEVEL FROM FEATURES TO CAPABILITIES
Once you pick the right way to test, my second insight will help you get more out of your actual testing. One of my pet peeves is so-called feature testing. Features are far too high-level to act as good testing goals, yet so often a testing task is partitioned by feature. Assign a feature to a tester, rinse and repeat. There's a lot of risk in testing features in isolation. We need to be more proactive about decomposing a feature into something more useful and understanding how outside influences affect that feature.

I push teams to take testing to a lower level and concentrate on what I call capabilities. At Microsoft we do an exercise called Component-Feature-Capability Analysis to accomplish this. The purpose is to understand more precisely the testable capabilities of a feature and to identify important interfaces where features interact with each other and external components. Once these are understood, then testing becomes the task of selecting environment and input variations that cover the primary cases. Here's a snapshot outline of what Component-Feature-Capability Analysis looks like for the product I am working on for Visual Studio:

MAIN SHELL
CONTEXT (TFS) (external dependency)
Add a new TFS Server
Connect to a TFS Server
Delete a TFS Server
OPEN ITEMS
Verify that only savable activities show up here
Verify that you can navigate to them

TESTING CENTER
CONTENTS
Add new static suites
Add new query-based suites
...
Refresh
PROPERTIES (current test plan)
Modify iterations
Modify/create new manual test setting
...
Modify start/end dates
PLAN MANAGER
Select a current test plan
Create a new test plan
...
Refresh test plan manager

TEST
RUN TESTS
Run all tests on a suite (automated and/or manual or mixed)
Run tests with a specific environment chosen


Obviously, there's a lot missing from this, as the use of ellipses shows. This is also only a subset of our feature set, but it gives you an idea of how the analysis is performed:
1. List the components
2. Decompose components into features
3. Decompose the features into capabilities
4. Keep decomposing until the capabilities are simple

Our product is very feature rich (it's a test case management system for manual testing) and this analysis took a total of about two hours during a three-person brainstorm. The entire analysis document serves as a guide to the manual tester or to the automation writer to determine how to test each of the capabilities. I have some suggestions about how to go about testing these capabilities, and that is the subject of the next insight.
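The outline above is just a tree, which makes it easy to keep in a machine-readable form that test planning can walk. The sketch below encodes a fragment of the snapshot from the text as a plain nested structure; the traversal helper and its output format are illustrative assumptions, not part of the exercise as described.

```python
# A fragment of the Component-Feature-Capability outline, encoded as
# component -> feature -> list of capabilities. Names come from the
# snapshot in the text; the structure itself is an illustration.
cfc_analysis = {
    "Main Shell": {
        "Context (TFS)": [
            "Add a new TFS Server",
            "Connect to a TFS Server",
            "Delete a TFS Server",
        ],
        "Open Items": [
            "Verify that only savable activities show up here",
            "Verify that you can navigate to them",
        ],
    },
    "Testing Center": {
        "Plan Manager": [
            "Select a current test plan",
            "Create a new test plan",
        ],
    },
}

def list_capabilities(analysis):
    """Flatten the tree into (component, feature, capability) triples,
    one per testable capability."""
    for component, features in analysis.items():
        for feature, capabilities in features.items():
            for capability in capabilities:
                yield component, feature, capability

# Each triple is a concrete, assignable testing goal, far narrower
# than "test the Main Shell feature".
for triple in list_capabilities(cfc_analysis):
    print(" > ".join(triple))
```

Keeping the analysis in a form like this means the capability list can double as a checklist for manual testers and as an enumeration source for automation.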


INSIGHT 3: TAKE YOUR TESTING UP A LEVEL FROM TEST CASES TO TECHNIQUES
Test cases are a common unit of measurement in the testing world. We count the number of them that we run, and when we find a bug, we proudly present the test case that we used to find it.

I hate it when we do that.

I think test cases and most discussions about them are generally meaningless. I propose we talk in more high-level terms about test techniques instead. For example, when one of my testers finds a bug, they often come to my office to demo it to me (I am a well-known connoisseur of bugs and these demos are often reused in lectures I give around campus). Most of them have learned not to just repro the bug by re-running the test case. They know I am not interested in the bug per se, but the context of its exposure.


THESE ARE THE TYPES OF QUESTIONS I LIKE TO ASK DURING BUG DEMOS:

What made you think of this particular test case?
What was your goal when you ran that test case?
Did you notice anything interesting while running the test case that changed what you were doing?
At what point did you know you had found a bug?

These questions represent a higher level of discussion and one we can learn something from. The test case itself lost its value when it found the bug, but when we merge the test case with its intent, context, strategy and the bug it found, we have something that is far more than a single test case that evokes a single point of failure. We've created the basis for understanding why the test case was successful, and we've implied any number of additional test cases that could be derived from this one successful instance.

At Microsoft, we have begun using the concept of test tours as a way to categorize test techniques. This is an even higher-level discussion, and we are working on documenting the tours and creating experience reports based on their use within the company. Stay tuned for more information on this effort.
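One lightweight way to preserve that merged view of a bug, its technique, intent and context, is to record the answers to the demo questions alongside the bug itself. The record below is a sketch of that idea; the field names and the `BugFinding` type are assumptions for illustration, not an established schema.

```python
from dataclasses import dataclass

# Illustrative only: one shape for recording a bug together with the
# technique and intent behind the test that exposed it, mirroring the
# bug-demo questions above.
@dataclass
class BugFinding:
    summary: str
    technique: str       # a named technique or tour, not a test-case ID
    intent: str          # what the tester was trying to learn
    context: str         # what was noticed that changed the approach
    detection_clue: str  # how the tester knew it was a bug

    def teachable_notes(self) -> str:
        """Summarize the finding the way a bug demo would:
        context first, the bug itself last."""
        return (f"Technique: {self.technique}\n"
                f"Intent: {self.intent}\n"
                f"Context: {self.context}\n"
                f"Detected by: {self.detection_clue}\n"
                f"Bug: {self.summary}")
```

A report in this shape answers "why was this test successful?" rather than just "which test found it?", which is the discussion the section argues for.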


INSIGHT 4: IMPROVING DEVELOPMENT IS YOUR TOP PRIORITY
I had a lunch conversation recently with a test trainer at Microsoft who encourages testers to get more involved in development. He cited cases where testers would recognize weakness in code and suggest new design patterns or code constructs. My other lunch partner (also a tester) and I were both skeptical. Testers are hired to test, not to develop, and most teams I've worked with, both inside and especially outside of Microsoft, would see such behavior as meddling. So, if testers aren't contributing directly to the code, then are we simply bug finders whose value is determined by what we take out of the software?

Well, that is certainly one purpose of testing: to find and remove bugs. But there is a broader purpose too. It is our task to use testing as an instrument of improvement. This is where the techniques and tours discussed in the prior section can help. By raising the level of abstraction in our primary test artifact, we can use it to create new avenues of communication with development. In my group we regularly review test strategy, techniques and tours with developers. It's our way of showing them what their code will face once we get a hold of it.


ON THE PERCEIVED ROLE OF TESTING

This is the true job of a tester: to make developers and development better. We don't ensure better software; we enable developers to build better software.

Of course, their natural reaction is to write their code so it won't be susceptible to these attacks. Through such discussions, they often help us think of new ways to test. The discussion becomes one that makes them better developers and us better testers. We have to work harder, in the sense that the tests we would have run won't find any bugs (because the developers have anticipated our tests), but this is work I am willing to invest in any day!

This is the true job of a tester: to make developers and development better. We don't ensure better software; we enable developers to build better software. It isn't about finding bugs, because the improvement caused is temporary.

The true measure of a great tester is that they find bugs, analyze them thoroughly, report them skillfully and end up creating a development team that understands the gaps in their own skill and knowledge.


The end result will be developer improvement, and that will reduce the number of bugs and increase their productivity in ways that far exceed simple bug removal.

This is a key point. It's software developers who build software, and if we're just finding bugs and assisting their removal, no real lasting value is created. If we take our job seriously enough, we'll ensure the way we go about it creates real and lasting improvement. Making developers better, helping them understand failures and the factors that cause them, will mean fewer bugs to find in the future. Testers are quality gurus, and that means teaching those responsible for anti-quality what they are doing wrong and where they could improve.

The immortal words of Tony Hoare ring very true:

"The real value of tests is not that they detect bugs in the code, but that they detect inadequacies in the methods, concentration and skill of those who design and produce the code."

He said this in 1996. Clearly, he was way ahead of his time.


INSIGHT 5: TESTING WITHOUT INNOVATION IS A GREAT WAY TO LOSE TALENT
Testing sucks. Now let me qualify that statement: running test cases over and over in the hope that bugs will manifest sucks. It's boring, uncreative work. And since half the world thinks that is all testing is about, it is no great wonder few people covet testing positions! Testing is either too tedious and repetitive or it's downright too hard. Either way, who would want to stay in such a position?

What is interesting about testing is strategy: deciding what to test and how to combine multiple features and environmental considerations in a single test. The tactical part of testing, actually running test cases and logging bugs, is the least interesting part.

Smart test managers and test directors need to recognize this and ensure that every tester splits their time between strategy and tactics. Take the tedious and repetitive parts of the testing process and automate them. Tool development is a major creative task at Microsoft and is well-rewarded by the corporate culture.


For the hard parts of the testing process, like deciding what to test
and determining test completeness, user scenarios and so forth,
we have another creative and interesting task. Testers who spend
time categorizing tests and developing strategy (the interesting
part) are more focused on testing and thus spend less time running
tests (the boring part).
Testing is an immature science. There are a lot of insights that a
thinking person can make without inordinate effort. By ensuring
that testers have the time to take a step back from their testing
effort and find insights that will improve their testing, teams will
benefit. Not only are such insights liable to improve the overall
quality of the test, but the creative time will improve the morale
of the testers involved.


ABOUT JAMES WHITTAKER


James A. Whittaker, now a Technical Evangelist at Microsoft, has spent his career in software testing. He was an early thought leader in model-based testing, where his PhD dissertation from the University of Tennessee became a standard reference on the subject. While a professor at Florida Tech, he founded the world's largest academic software testing research center and helped make testing a degree track for undergraduates.

Before he left Florida Tech, his research group had grown to over 60 students and faculty and had secured over $12 million in research awards and contracts. During his tenure at FIT he wrote How to Break Software and the follow-ups How to Break Software Security (with Hugh Thompson) and How to Break Web Software (with Mike Andrews). His research team also developed the highly acclaimed runtime fault injection tool Holodeck and marketed it through their startup, Security Innovation, Inc.

Dr. Whittaker currently works at Microsoft and dreams of a future in which software just works.


ABOUT APPLAUSE

Applause is leading the app quality revolution by enabling companies to deliver digital experiences that win from web to mobile to wearables and beyond. By combining in-the-wild testing services, software tools and analytics, Applause helps companies achieve the 360 app quality they need to thrive in the modern apps economy. Thousands of companies, including Google, Fox, Amazon, Box, Concur and Runkeeper, choose Applause to launch apps that delight their users.

Applause in-the-wild testing services span the app lifecycle, including functional, usability, localization, load and security. Applause app quality tools help companies stay connected to their users and the health of their apps with the Applause SDK, Applause Analytics and the 360 App Quality Dashboard.

The company is headquartered near Boston, with offices in Cambridge, San Mateo, Seattle, Germany, Israel and Poland, with resellers serving dozens of international markets. Since launching as uTest in 2008, Applause has raised more than $80 million in funding, generated triple-digit revenue growth annually, made consecutive Inc. 500 appearances and was named the 7th Most Promising Company in America by Forbes in 2014.

100 Pennsylvania Ave.


Framingham, MA 01701
1.844.500.5556
www.applause.com
