
Disruptive Testing: Part 5 - Justin Rohrman

In Part 5 of our interview series with testers who are upping the testing ante, we chat with Justin Rohrman (@JustinRohrman). Justin is a long-time tester and an instructor for the Black Box Software Testing classes run by the Association for Software Testing (AST). He is also a Miagi-do black belt, and was part of the review committee for the Testing & Quality Assurance Track at Agile 2013 and 2014. In particular, his interests lie at the intersection of risk, quality, defects, and delivery.

Q Hi Justin, and welcome to the Disruptive Testing series. Let's start with a topic of keen interest to you - heuristics. How do you use heuristics in testing software? Why are they important?

A Heuristic is a fancy word for guideline. For example, the dead insect heuristic: if you walk into your house and see a dead bug on the floor, there is an obvious problem to clean up, but there is probably a bigger problem somewhere else. We see this in testing, too, when we find a small bug in, say, an analytics ETL (Extract-Transform-Load) - there are probably a few more hiding in there! A list of heuristics creates a sort of mental shortcut to tester skills - ways to quickly gain insight into the product and project.

I like to think of them as a sort of framework to guide the questions we ask the software. It is important to use these because it is not feasible to recreate the universe every time you want to test a change. Take a date field being added to some web form as an example. A tester doesn't approach this problem by starting with the question of what a date is. They start with the questions that are immediately relevant to the user: Can I select an appropriate date? Can I do incorrect things? Does the software satisfy my current need?
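To make that concrete, here is a minimal sketch of how those heuristic questions might become checks, assuming a hypothetical validate_date helper behind the form. It illustrates the idea rather than any specific practice Justin describes:

    # A minimal sketch, not Justin's method: heuristic questions about a date
    # field expressed as checks against a hypothetical validate_date() helper.
    import pytest
    from datetime import date

    def validate_date(value: str) -> bool:
        """Hypothetical stand-in for the form's date validation."""
        try:
            date.fromisoformat(value)
            return True
        except ValueError:
            return False

    # "Can I select an appropriate date?" / "Can I do incorrect things?"
    @pytest.mark.parametrize("value, expected", [
        ("2024-02-29", True),   # leap day - a classic boundary
        ("2023-02-29", False),  # the same day in a non-leap year
        ("2024-13-01", False),  # impossible month
        ("2024-01-32", False),  # impossible day
        ("not-a-date", False),  # wrong format entirely
        ("", False),            # empty input
    ])
    def test_date_field_heuristics(value, expected):
        assert validate_date(value) == expected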

Many people think of and use heuristics without realizing it or giving it a name. Being conscious of what you are doing and being able to identify the heuristics you use can help you to further explore and understand how to use them. I think it was James Bach who said testers should aspire to be artists of heuristics.

Q Do test heuristics apply to Agile software development?

A I think they do. Aside from an emphasis on teamwork and tool usage, I struggle to see differences between how testing is performed in a traditional environment vs an agile environment vs some sort of hybrid. Perhaps there are differences in how teams are formed and the skill sets of the individuals on the teams, but we are still trying to discover information about the product that is meaningful to people who matter. I think that the tester's skill set applies to all of these scenarios and heuristics are still used.

One important thing to remember is that heuristics are fallible; they can occasionally steer you wrong if you are not careful. Learning to use them wisely within your context is an important part of avoiding this.

Q How does applying heuristics to testing help build a quality product?

A Heuristics help us build quality products by giving people a way to quickly come up with probing questions about the product throughout its development. These questions are often relevant to the customer and their needs. A useful heuristic can guide development and change the product into something that will make a customer happy and fulfill their expectations.

Q If there is a team of Testers on a project, do they all need to be aligned to the testing heuristics they come up with?

A I don't think so. Actually, I tend to think that the varied perspectives different team members bring are a great thing and should be fostered and encouraged rather than homogenized. Perspective means that people view the world in fundamentally different ways. These differences help people to excel at certain types of testing and at noticing certain types of problems.

Q What are metrics that matter from a Test Case Management perspective?

A The metrics that matter are the ones that support getting good software to the people who are paying for it. There may be some way to represent this through test case management, but I have never seen that done well. Part of the difficulty is that there is no unified definition of the phrase "test case"; people frequently use the term but mean different things by it. An alternative to measuring the traditional way via test case management might be to use a few different models of coverage, to show you what has not been tested and what you might want to look at before releasing. One way I have done this in the past was to combine models of programmatic coverage, such as methods touched, decision trees touched, and lines touched, with more hands-on models such as tours, session charters, and checklists. This didn't tell me a whole lot about the quality of the testing that had occurred, but it did tell a story about what was not tested.
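As a rough illustration of that combined-coverage idea (the module names, charters, and numbers below are hypothetical, not Justin's actual data), a small script could merge a programmatic coverage export with a list of exploratory session charters to flag areas that have had little or no attention:

    # A rough sketch of combining coverage models to tell a story about what
    # has NOT been tested. Module names, charters, and numbers are made up.
    code_coverage = {            # e.g. line coverage exported from a tool
        "billing": 0.85,
        "reporting_etl": 0.40,
        "auth": 0.92,
    }

    session_charters = {         # exploratory sessions run against each area
        "billing": ["invoice tour", "refund charter"],
        "auth": ["login/logout tour"],
        "reporting_etl": [],     # no hands-on sessions yet
    }

    def untested_areas(code_cov, charters, threshold=0.6):
        """Flag modules with thin programmatic coverage or no exploratory sessions."""
        return [module for module, cov in code_cov.items()
                if cov < threshold or not charters.get(module)]

    print(untested_areas(code_coverage, session_charters))  # ['reporting_etl']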

Q What are the most over-rated metrics from the perspective of Test Case Management and Execution?

A Test case execution as a measure of how close a product is to releasing is probably the most common measurement I have seen. Ironically, it also tells you the least about what you actually want to know. If 5 test cases executed out of 1000 means you have a lot of work to do, then 995 test cases complete out of 1000 should mean you are pretty close to done. But that may not be the case in reality. Questions about how long it takes to perform a test, how long things take if a problem is discovered, and what to do about all the other stuff that's not currently documented as a test case aren't wrapped up very neatly by this measurement.

Another popular measurement that has been getting a lot of press lately is defect removal efficiency (DRE). This one has big problems with reliability and validity. What is a defect, what measure of time matters, how does this reflect efficiency, why does it matter, and what about defects that are never logged? James Christie wrote a great blog post about the problems with this one.
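For readers who have not met the metric, DRE is usually computed as the share of defects found before release out of all defects eventually found. The sketch below uses invented counts to show how sharply the number moves once late or unlogged defects surface:

    # Defect Removal Efficiency as it is commonly defined; the counts are
    # invented purely to show how the figure shifts once escapes surface.
    def dre(found_before_release: int, found_after_release: int) -> float:
        total = found_before_release + found_after_release
        return 100.0 * found_before_release / total if total else 0.0

    print(round(dre(95, 5), 1))    # 95.0 - looks impressive...
    print(round(dre(95, 25), 1))   # 79.2 - ...after more escapes are reported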

Q You read and review a lot of books. Which books would you recommend to wannabe Testers? What topics do you feel are not covered well, or not at all covered yet?

A You're right, I read a lot on a variety of topics. For new testers, to avoid being overwhelmed by all the options, I recommend the classics: the Quality Software Management series, Testing Computer Software, and Lessons Learned in Software Testing. For more experienced testers, explore the world of philosophy, science, math, and social science. But more importantly, find a way to apply what you have learned to how you work. Learning for the sake of learning is a great thing. Learning, and spending enough time with the material for it to change the way you work and for you to be able to explain how your work has changed, is far greater.

One topic that I am spending more time with is how research and measurement are done in the social sciences. Tools like qualitative research, narrative and discourse study, and focus groups are fascinating to me, and I suspect they have direct application to what we do.

Q How important is domain knowledge when testing an application or enterprise product? Can you do meaningful testing with limited domain knowledge?

A In my opinion, domain expertise is way over-emphasized in hiring decisions. Speaking of books that have changed the way I work, Rethinking Expertise has this topic covered nicely. In short, for most jobs, the type of expertise you need in order to become productive in the domain can be acquired very quickly. When you join a project and lack domain expertise, there are still areas of the project unrelated to the domain where you can use your tester skill set to be productive and add value to the team.

There are probably exceptions to this: if you want to work on geographic surveying software used in oil drilling, or on software used on the space station, you will probably want to be a domain expert.

Q Given that you play music (Euphonium in the Brass Band of Nashville), does music (study / practice / performance / etc.) have an impact on the way you approach Software Testing? Please explain.

A Music has definitely shaped the way I approach skill development. The way I learned to play music was based on isolating very specific aspects of my playing, practicing that one thing, and then integrating that skill into a real piece of music. Scales are a great example of this. Scales are sets of notes played in ascending or descending order in a specific key. Practicing scales daily can make reading a new piece of music much easier because you are already accustomed to playing in different keys. Practicing something repeatedly and getting different kinds of feedback helps me to improve.

I tend to take a similar approach to developing a skill in software testing. When I discover something I want to develop, I generally do that thing repeatedly and try to get feedback so I can alter and hopefully improve what I do next time. With enough exposure to whatever I'm trying to learn, I find that testing is a lot like playing music. You are focusing on the performance and not on the specific technique you are using at a given moment. You move between techniques, or apply individual techniques, because it makes sense with the music you are playing or the software you are testing.

In another regard, this is significantly different from music. Music is very old and has distinctly isolated skills that one develops for each instrument. For example, brass players learn articulation, breathing, finger dexterity, slurs, double-tonguing, and on and on.  There are very limited resources for software testers to develop themselves in this manner. Dr. Cem Kaner has produced some wonderful workbooks focused on developing skill in test techniques, and AST held an event called WHOSE (http://www.associationforsoftwaretesting.org/programs/workshop-on-self-education-in-software-testing-whose/) which was focused on beginning the work of defining a tester skill set. 


Thanks, Justin, for your awesome insights!

Do let us know your thoughts, feedback, and suggestions... and stay tuned for another testing guru. You can also read through other interviews in this series with James Bach, Lorinda Brandon, Markus Gärtner, and Matt Heusser.

Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.
