
Disruptive Testing: Part 1 - James Bach

Hello friends, welcome to the start of a very interesting series, in which we interview luminaries in the testing space to objectively evaluate the software testing industry: both where it stands now and where we are headed.

In this post we interview famed software tester, author, and consultant James Bach (@jamesmarcusbach). James Bach, the original maverick tester, needs little by way of introduction for those in the testing community. He is the defining voice in the field of exploratory and context-driven testing, an inspiring speaker, and an ingenious tester. He has co-authored Lessons Learned in Software Testing: A Context-Driven Approach (a Jolt award finalist), and has published numerous articles in IEEE publications, Software Testing and Quality Engineering magazine, stickyminds.com, Cutter Trend, and more.

Q James, how would you define Context-Driven Testing? How does your view differ from that of Brian Marick, Bret Pettichord, and Cem Kaner?

A Well, I can't speak for the other guys, in any detail. All I can say is that I'm a founding member of the Context-Driven School, and this is how I define it: Context-Driven testing (CDT) is an approach, a community, and a paradigm. As a paradigm, it is a school of thought that defines what is important about testing and what it means to study testing. As a community it is a group of people who identify themselves as being context-driven. As an approach it is a meta-practice that insists we must develop the skills to design, select, and guide our own ways of working in order to solve real problems within our context.

Context-Driven testing generally boils down to this: Pay attention to what's going on and whom you serve. Find test problems. Solve them.

There are seven principles that roughly define the Context-Driven paradigm and approach. The community is defined by its enthusiasts.

Q Does it make sense to apply the popular “Testing Pyramid” to all projects? How does Context-Driven Testing fit into it?

A The "testing pyramid" is a simple heuristic that has little to do with testing. It's called the testing pyramid because whoever created it probably confused testing with checking. That's a very common problem, and we as an industry should clean up our language. The pyramid simply suggests that if you are going to use automated fact checks, avoid going through the GUI to do them. I agree that it's a worthwhile principle.

Be that as it may, Context-Driven testing doesn't "fit" into specific heuristics, any more than it makes sense to say that a skilled carpenter "fits" into a hammer. Context-Driven testing transcends heuristics. As a Context-Driven guy I look at a heuristic, such as the testing pyramid, and ask myself how I might use it. And my answer is I don't see a use for it, as such, but I agree with the idea that tools work better if they interact with the product underneath the level of the GUI.
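To make the pyramid's one useful suggestion concrete, here is a minimal sketch of an automated fact check that exercises logic directly, below the GUI. The `cart_total` function and its figures are hypothetical, invented purely for illustration:

```python
# A sub-GUI automated check: call the domain logic directly and assert.
# (cart_total is a hypothetical example function, not from any real product.)

def cart_total(prices, discount=0.0):
    """Sum item prices and apply a fractional discount, rounded to cents."""
    if not 0.0 <= discount <= 1.0:
        raise ValueError("discount must be between 0 and 1")
    return round(sum(prices) * (1 - discount), 2)

# These checks run in microseconds with no browser, no rendering, and no
# brittle screen-element lookups. A GUI-driven check would reach the same
# answer far more slowly; the pyramid suggests keeping most checks here.
assert cart_total([10.00, 5.50]) == 15.50
assert cart_total([10.00, 5.50], discount=0.1) == 13.95
```

Note that, in Bach's terms, these assertions are *checking*, not testing: they verify specific facts cheaply, but deciding which facts are worth checking remains a human, exploratory activity.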

Q Can one agree with the Agile Manifesto and still be a Context-Driven tester?

A Absolutely. The Agile Manifesto makes good sense. Unfortunately, almost no one understands it or follows it. People over process, right? In that case, stop deifying specific practices like TDD or Scrum.

Q Agile testing and the context-driven approach share a lot of the same core values. What has your experience been with the context-driven approach in an Agile environment?

A Some values we share are the importance of humanism, the importance of practicing your craft, and the importance of the exploratory, cyclic, incremental approach to creating things.

There's no problem with that, except that too often Agilists are uninterested in testing. They do like the *word* testing, though. They often use that word. I don't think that's enough. It doesn't bother me that people aren't interested in testing. What bothers me is when they can't tell the difference, or don't care about the difference, between professional, skilled testing and amateur fact checking. I encounter that a lot in Agile circles.

Q Much has been said about how the recent healthcare.gov crisis could have been averted/lessened with a more robust testing strategy. What are your thoughts on it?

A I don't think it was the testing. Imagine that someone drives a bulldozer through the wall of a shopping mall, through 10 stores and out the other end, strewing the remnants of Jamba Juice and Victoria's Secret inventory behind it. We wouldn't say, "If only he had turned on his headlights"! Testing is like headlights. Yes, the headlights were not on, but the problem is not that they didn't know there were problems; it's that the project was being run by Peter Pan, and Peter felt that Tinkerbell would surely make the project fly with her fairy dust if only we all clapped hard enough. Professional testing is not welcome in Neverland, of course, but then neither is amateur testing.

Q Quis custodiet ipsos custodes? Who watches the watchmen? Does breaking down silos and having testers/QA working closely with developers cloud their ability to be objective and make them lesser “watchmen”?

A I don't believe in breaking silos. But I would like my silos to have good ventilation and lots of windows to let the light in. Every person is a silo. Every team is a silo. The very act of organizing a project requires that the people on the project deal more with each other than with people who are not on the project. If everyone is equally testing or equally developing, then any decision would have to be made by all people equally.

But that's not how it is. Each person and each team has its focus; its portfolio of influence and interest. When people say things like "break down the silos" what I suppose they mean is "let's help each other with our work." Well sure, help, but no matter what you do, some people will be more invested and more focused than others. Some people will have more mastery of the details than others, and that means there are natural divisions between people. We are not one big seamless Borg creature.

I'm for breaking down any wall that keeps me (and the project) from being reasonably productive, and I'm for building any wall that keeps me (and the project) productive. What I want is for those walls to be under the control of the people on the team, not mandated from outside the project.

Now, as to your specific question: When a tester has a weak idea of his role, or when his skills are weak, there is a significant danger that he will become a "junior developer" rather than a tester if he works mostly alongside developers. The purpose of good leadership and testing community is to prevent that loss of focus and identity. I've worked hard to be a great tester, and for me, that has required a strong sense of mission, lots of practice, and the development of specific, enumerable skills. Having said that, I am eager to work closely with everyone on a project, and no, I don't think it will do much harm to my objectivity.

Q Google is said to run 10,000 virtual Android simulations for its builds and to test only a small fraction of use cases on physical devices. In this milieu of "automate everything", is exploratory testing being confined to a hidden competitive edge rather than a required general understanding?

A Google doesn't know much about testing. They don't need to know. Generally speaking, their software doesn't have to work well. Look at all the software they have created, released, then abandoned.

Look at all the security holes in Android. What if Google said "absolutely, positively no more security holes?" Then they would have to invest hugely more in testing, among other things. Instead they don't, and they have perfected the art of saying "oops, sorry" and pretending that no one could possibly have anticipated or caught the problems any earlier. The very success of their bounty program is documentary evidence of the failure of their internal testing efforts.

There are people at Google who are quite good at testing, I'm sure. Certainly there are people who are good at making careful, wise decisions about technical risk. But as a company, the tidal flow is against those who would develop deep knowledge of such things. Fortunately for them, shallow knowledge is enough to get by-- when your company has billions of dollars to burn and lawmakers don't penalize you for your avoidable mistakes.

Now, regarding exploratory testing (ET), all testing is exploratory to some degree. Watch someone test. If he doesn't think at all or manage himself as he works, then perhaps we are dealing with a robot. The question is whether scripted and exploratory elements are well combined to make an effective testing strategy that has a reasonable chance of finding every important bug.

People get confused because they think ET means mysterious anything-goes testing down the rabbit hole with Alice. That's not what we are talking about. ET means self-managed testing, where the tester is in charge of the test process and it hasn't been set out in advance like a formal wedding between script and actor.

ET is everywhere. But skilled testing is not. Skilled testing should be our concern.

Q Where is the Software testing industry headed? What would you like to see happen in 3-5 years?

A The testing industry is a strange place. It is dominated by what I would have to describe as fake testing. The many testers who labor over silly test case specifications (that cost much more to write than they are worth), automate them (great, let's pay a huge price to simulate one very dimwitted tester with our tools), and count those test cases (delivering a metric that cannot be sensibly used by management) are essentially performing a testing *ceremony* rather than actually testing.

That part of the industry is rightly under some pressure from the advocates of Agile and Continuous Deployment. I hope the fake testers go away.

Meanwhile, the future I am helping to build is about systematically training up skilled testers, some (but not all) of whom have coding skills, so that a small number of testers can do-- or coordinate to have done-- all the testing that a large project might need. A good future for testing would be one with far fewer "testers", but with each of those testers being passionate about his craft.

Q There are aspiring software testers looking to you as an industry leader, what is the most important advice you can give them to be successful in this field?

A My advice is to be suspicious of industry leaders. Be suspicious of conventional wisdom. Most of it is wrong. Bad advice persists, over many years and miles, because of insufficient skepticism about what experts say. Behind every simplistic aphorism and "best practice" is a lot of subtlety and depth. But simplicity and elegance in engineering are made possible in the first place only by a keen appreciation, and even love, of complexity and paradox. In order to properly use common statistical formulae, for instance, you must know their scope and limitations. Otherwise you are just practicing a form of witchcraft.

One of my heroes is Richard Feynman, who once said that "science is the belief in the ignorance of experts." I love how Feynman was at once a true expert in his field and at the same time desired (at least publicly) to be judged solely on the merit of his ideas, a merit earned through the success of ongoing experimentation rather than public accolade.

I want to be the same kind of leader. I want to help you to be that kind of leader.

Thanks, James! We appreciate you taking the time out of your schedule to chat with us.

Stay connected for another interview with a testing guru...

Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.
