
Disruptive Testing: Part 7 - Michael Hunter

My interview today is with Michael Hunter, a senior software developer who’s been with Microsoft for 14 years now. He helps teams evaluate the quality of their products. When he’s not at work, he can be seen city hiking around Seattle and spending time with his fiancée and her giant cat (the furry one in the photo!). Read more about his “You Are Not Done Yet” checklist and other good stuff at http://www.thebraidytester.com. You can contact him at michael@thebraidytester.com.


Q - Hi Michael! Thank you for chatting with us. As someone who’s been in the field of testing for over a decade, do you think the ‘wall’ between a Developer and a Quality Analyst is dissolving, or getting stronger?

A - I think it’s dissolving, and good riddance! In most teams I’ve worked with that separated Dev and Test into separate groups (I’ve never seen a software team with actual QA roles), Dev starts relying on Test to evaluate the code. I’ve never seen this speed up the process of shipping software. While I’m all for people specializing in testing, and even in specific types of testing, I shy away from separating them out in any way. My most effective partnerships with developers have been figuratively (and often literally) sitting in their offices, writing tests alongside and even integrated into their code.

I want to partner with the engineers I work with, not be a crutch.

Q - In these times, how important is it for a Quality Analyst to be technical, and maybe also contribute to Test Automation?

A - I see my ability to read and understand code as one tool in my toolbox. I find that reading code can help me understand how a system works, and it can suggest techniques to try and even specific tests to run. Debugging into an issue I found sometimes points out other issues, or helps me generalize the issue to a class or category of issues. Writing code can jump me through parts of a workflow I’m not interested in; it can also help me notice things I might not notice otherwise. My technical expertise also helps me help the engineers I work with reason about their designs and probe into their code.

However, technical expertise isn’t required to do any of these things! I have yet to see a system that can’t be reasoned about and probed into simply by asking questions of its functionality. While I recommend everyone on an engineering team learn at least the basics of reading code, I don’t think doing so is required.

Q - What innovative features would you highly recommend in a Test Automation framework that would help the team immensely?

A - Most important, I think, is that your framework helps test authors think and write tests in terms of their customers’ model of the product, rather than the testers’ concept of the product or what the UI happens to look like today. (I’ve detailed one way to do this at http://www.thebraidytester.com/stack.html.)

Architect your framework to provide single points of change for the aspects of the product you expect to change.

Keep your framework as simple as you possibly can while still adding the value you want it to add.

Remember that your framework is software! Treat it with the same care in engineering as you do the software it’s testing.
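
To make the first two points concrete, here is a minimal sketch of one way such a layered framework could look. It is loosely inspired by the stack approach linked above, not Michael’s actual framework; the mail-client domain, class names, and control IDs are all invented for illustration.

```python
# A minimal, hypothetical sketch: tests speak the customer's language,
# a logical layer translates intent into product operations, and a single
# physical layer owns today's UI details.

class PhysicalMailClient:
    """Physical layer: the ONLY place that knows the current UI.
    When a control is renamed or moved, this is the single point of change."""

    SEND_BUTTON_ID = "btnSendV2"  # invented UI identifier

    def click(self, control_id: str) -> None:
        print(f"clicking {control_id}")  # stand-in for real UI automation

    def type_into(self, control_id: str, text: str) -> None:
        print(f"typing {text!r} into {control_id}")


class LogicalMailClient:
    """Logical layer: product operations, free of UI specifics."""

    def __init__(self, physical: PhysicalMailClient) -> None:
        self.physical = physical

    def compose(self, recipient: str, body: str) -> None:
        self.physical.type_into("txtTo", recipient)
        self.physical.type_into("txtBody", body)

    def send(self) -> None:
        self.physical.click(self.physical.SEND_BUTTON_ID)


def test_customer_sends_a_note() -> None:
    """Test case: expressed in terms of what the customer is trying to do."""
    mail = LogicalMailClient(PhysicalMailClient())
    mail.compose("friend@example.com", "Lunch tomorrow?")
    mail.send()


test_customer_sends_a_note()
```

If the send button’s identifier changes tomorrow, only PhysicalMailClient changes; every test written against the customer’s model keeps working unmodified.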

Q - What tips and tricks can you share for building a ‘testable product’?

A - Start with understanding what you actually care about – how you define quality. Next, decide what information you’ll need to measure that. Now you can think about how to get that information out of your product. Once I’m at that point, I use my colleague Dave Catlett’s SOCK mnemonic to remind me of key testability concerns (sketched in code after the list):

  • Simplicity – Are the mechanisms we’re using to enable testability the right level of simple for what we desire?
  • Observability – Can I/my automation see the data we need to see to understand what the product is doing?
  • Control – Can I/my automation force the product into the code paths and usage scenarios we want to explore?
  • Knowledge of expected results – Do I have an oracle to reliably (enough) evaluate how well what actually happens matches with what we expect to happen?
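
Here is a minimal sketch of what Observability, Control, and Knowledge of expected results can look like in practice. The Uploader component, its failure-injection knob, and its attempts log are all invented for illustration; they are not from Michael’s products.

```python
# A hypothetical product component with testability hooks built in.

from dataclasses import dataclass, field


@dataclass
class Uploader:
    # Control: failure injection lets a test force the retry code path
    # instead of waiting for a real network to flake out on cue.
    fail_times: int = 0
    # Observability: the component records what it did, so a test (or a
    # human) can see inside rather than guess from the outside.
    attempts_log: list = field(default_factory=list)

    def upload(self, payload: str) -> bool:
        for attempt in range(3):
            self.attempts_log.append(attempt)
            if attempt >= self.fail_times:
                return True  # succeeds once the injected failures are spent
        return False


# Knowledge of expected results: a simple oracle for this scenario.
uploader = Uploader(fail_times=2)          # Control: force two failures
assert uploader.upload("report.csv")       # expect eventual success
assert uploader.attempts_log == [0, 1, 2]  # Observability: verify the path taken
print("retry path exercised and observed as expected")
```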

Q - What are the top five myths about Software Testing from your experiences? Why?

A - 

  • Testing ensures quality. While testing can provide information about the product it’s evaluating, it can’t ensure anything. Testing might provide evidence that our customers will generally enjoy using our product, or demonstrate that when our product is used in certain ways it tends to crash, or any number of other data points that might correlate with the quality of our product. I don’t, however, know any way that testing can ensure the quality of our product.
  • Testers break the product they test. When I test I often find sequences of actions that cause my product to crash, or corrupt data, or otherwise act in ways my customers (hopefully) will not expect and probably will not enjoy. I don’t cause any of these reactions, however – they’re already baked into the product by the time I’m looking at it.
  • Automated tests save time. While I’ve seen automated tests save immense amounts of time and add immense value, I’ve also seen them waste immense amounts of time and remove immense value. While sometimes we don’t know ahead of time which will happen, I am consistently happiest when I constantly re-evaluate which techniques and activities seem most likely to be most fruitful right now.
  • Testing is necessary. I don’t even think evaluating the quality of our software is necessary. While I know any number of activities I can do, before and after our customers use our software, which will increase the likelihood they will become raving fans and convert all their friends and give us lots of money, none of these activities are necessarily necessary.
  • Testing can reliably simulate our customers’ experiences. While I’ve had success at reliably simulating certain aspects of certain of the ways my customers have used my products, and of predicting the experiences they will have using my products, I’ve also failed miserably at doing these things. Further, my customers constantly surprise me with the ways they use my products. The only way I know to truly understand my customers’ experiences is to ask them and to observe them. I can do this in person, and more and more I can do this by analyzing the streams of data they let my software send back to me describing what it and they are doing. I think we have only the barest inklings of how this ability will change the ways we evaluate our products’ quality; I’m looking forward to being surprised and amazed at what we become able to do!

Q - What tips would you like to share with Quality Analysts who are relatively new in the software industry?

A - Look for the way *you* find bugs, and for the way *you* understand how a system works – and how *you* understand where it is likely to fall apart. You do these in a very different way than everyone else on your team. Find *your* way and you find the value *you* uniquely add.

Be willing to ask “the stupid question”. I get the information I want, and as a bonus, I find admitting I don’t understand something builds trust with my team. And I guarantee you other people have the exact same question.

Search for why, not just for what. Understand why the issue you just found manifests, why it surprised you, why it’s likely to surprise your customers, why your marketing people will care about it, why your CEO will care about it. Understand why your manager, their manager, your CEO, and your peers ask you to do what they do – and why they *don’t* ask you to do other things.

Look for what’s being left out, ignored, dismissed.

Time spent away from your job is as important as time spent doing it. Take a walk, read about unrelated topics, build things that seem unconnected; do this intermixed with your assigned work and see if new ideas and approaches don’t start coming to mind.

Thank you, Michael, for your time and this conversation!

Check out other great interviews in this series with James Bach, Lorinda Brandon, Markus Gärtner, Matt Heusser, Justin Rohrman and Anna Royzman.

Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.
