Hi! Welcome to Part 3 of our series where we interview testers whose insightful (and disruptive) thoughts have challenged and improved the testing practice. Today I interview Markus Gärtner (@mgaertne), agile tester, trainer and coach with it-agile GmbH and author of the seminal book ATDD by Example: A Practical Guide to Acceptance Test-Driven Development. He is a black-belt tester in the Miagi-do school of testing and a recipient of the "Most Influential Agile Testing Professional Person" award (2013) from Agile Testing Days. Apart from regularly presenting at Agile and testing conferences globally, he blogs about testing, foremost in an Agile context, and contributes to the Software Craftsmanship movement.
Q Can you define your typical workflow as a tester? When does a tester's job begin?
A That's a hard one. Personally I believe that a tester's job should start sooner than most people think - and end later than most testers think.
My typical workflow starts with negotiating the contract with the client. The challenge there is to negotiate enough freedom, and to convince the client that things will change according to the various topics that arise while you are at the client. I also like to have a certain degree of personal freedom, which is better negotiated before the contract than during the gig.
After that I start collecting enough information so that I can get started. That usually happens within a day - otherwise I become nervous and restless. I want to contribute to the project from day one, in one way or another. Just like developers try to commit on their first day, I strive to make an impact on day one. That's usually hard because - as a contractor - you need to get in touch with lots of people while figuring out how things get done at that particular client. That's the hard part of our work.
Q Do you feel it is necessary for testers to formally define test cases?
A That depends. There are circumstances where formal test cases are mandated. I have never worked in such contexts. I also don't see what value these test cases bring at all.
My personal testing style has become agnostic of formal test cases. I still automate test cases, if you would like to use that term there. Yet, I learned to leverage automation to help me learn about the system quickly, and look further than the test cases ever could.
Q What are your views on TDD (Test-Driven Development)? Do you feel it affects a tester's role or responsibilities?
A Of course it does. The question usually is to what extent.
When a programming team applies TDD well, there are fewer "forgotten" corner cases for the testers to hunt for. The testers can then focus on more meaningful tests, and ask those questions that are difficult to answer even for business stakeholders - which is a good thing.
On the other hand, if TDD does not relieve testers from more traditional corner-case tests, then you have an indicator that something is wrong on your team. That's unfortunate, and should trigger a whole-team conversation on how to improve from there. I remember hearing about TDD for the first time in 2000, and really getting it in 2008 when I started to actually do it. That was eight years with a shallow understanding of TDD. I never want to write meaningful code without TDD anymore. There's a huge difference once you have experienced the benefits, rather than just hearing or reading about them.
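(Editor's note: for readers who haven't tried the practice Markus describes, a minimal red-green-refactor cycle looks roughly like the sketch below. The `leap_year` example and all names are hypothetical, purely for illustration.)

```python
import unittest

# Step 1 (red): write the tests first. Running them before the function
# exists fails with a NameError - that failing run is the "red" step.
class LeapYearTest(unittest.TestCase):
    def test_divisible_by_4_is_leap(self):
        self.assertTrue(leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_every_400_years_is_leap(self):
        self.assertTrue(leap_year(2000))


# Step 2 (green): write just enough code to make the tests pass.
def leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Step 3 (refactor): clean up the code with the passing tests as a safety net,
# then pick the next small behavior and repeat the cycle.

if __name__ == "__main__":
    unittest.main()
```

The point of the cycle is the small step size: each test pins down one behavior before any production code is written, which is what leaves fewer "forgotten" corner cases behind.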
Q Who do you feel is ultimately responsible for the quality of the application being built?
A That depends. As Weinberg put it in Quality Software Management Volume 1 “Quality is value to some person.” So, which type of quality are we talking about here?
Most of the time people mean the technical, internal quality of the product. This quality comes from the team when you are using agile methodologies. They are all responsible for the code quality, and for keeping the code responsive to the latest business changes.
Then there is the external observable quality. On an agile team Product Owners play a major role in this - and they usually are better off consulting with the test specialists on the team to get insight into it.
And then there is the quality of the underlying process with the thought that good processes create good products. Coaches and Scrum Masters are responsible for the quality of the process, and creating the right environment where everyone can safely contribute to the products we create.
So, long story short: everyone is responsible for the quality of the application that is being built.
Q How would you recommend measuring a testing team’s effectiveness?
A I would challenge the need to measure it. Ultimately, if you can answer the question "Is this working for you?" with a clear "yes", it seems effective enough to me.
Q A lot of organizations often struggle to define the career path of a tester. Where should a novice tester begin and what can someone more experienced develop into?
A Novice testers don't fall from heaven. They usually have a background that makes them a tester. Let's call this their primary area of expert knowledge. This might be testing knowledge, this might be business knowledge, or this might be automation knowledge. I don't see a clear "should begin" here.
If they want to develop further, that has a lot to do with the surrounding environment and their background. If they started with business knowledge as their primary expertise, and the company around them values automation knowledge more than testing knowledge, then they probably need to advance in that direction. Likewise if they favor testing knowledge more.
The problem most of the time lies in finding out what the environment around us values. I am a fan of small experiments for large environments. For instance, try to build an automation prototype. If no one is interested in the results, you either need to convince them more if you want to advance in this direction, or you stick with the status quo, and work towards more testing skill. Find out what suits you, where your learning energy lies, and what brings you the most fun right now. From there it will be easy to advance either way.
Q In an environment with roles such as manual or domain testers, should testing and "test" automation be separated in the development process? As in, should an organization have designated testers who test new functionality being built, and another role for an automation engineer who focuses on automating system checks? One tester's responsibility is to find bugs in new features, whereas another one focuses on ensuring previous functionality works as expected.
A I think for each testing activity, you should become aware of why you are doing it. For the cases that you mentioned, manual and automated testing, there are two different goals attached. Manual testing is mostly about tackling important bugs first, and then quickly identifying risks we’re unaware of. Test automation is mostly about preventing problems in the future, and having "un-intended change detectors" as part of a fast-running regression test suite.
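(Editor's note: the "un-intended change detector" Markus mentions is essentially a fast regression check that pins down current observable behavior, so that any future change to it fails the suite and has to be deliberate. A minimal sketch, with a hypothetical `format_invoice_total` standing in for real production code:)

```python
def format_invoice_total(amount_cents: int) -> str:
    """Hypothetical production code under test: renders cents as dollars."""
    return f"${amount_cents // 100}.{amount_cents % 100:02d}"


def test_invoice_formatting_unchanged():
    # Pinned expected outputs. If a future edit changes any of these,
    # the suite fails immediately - detecting the unintended change.
    assert format_invoice_total(0) == "$0.00"
    assert format_invoice_total(5) == "$0.05"
    assert format_invoice_total(123456) == "$1234.56"
```

Checks like this run in milliseconds, which is what makes a large regression suite fast enough to execute on every commit.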
These two activities need different skills. There are some folks around who can do both to a reasonable level. If you have those folks on your team, you don't need to split up the roles. On the other extreme, there might be people who are good in only one of those skill sets. In that case, you will have a natural separation in order to cope with your backlog items, and it would then be natural to separate the two activities.
With regard to your question: should this separation be necessary, or should we work towards cross-functional team members all the time? I think we should start taking on a few new skills every now and then. We don't need to become full experts in everything - that way we wouldn't achieve anything. I think in the past decade we have worked heavily on creating extremely “separated” teams. We need to soften that picture a bit, while also avoiding the other extreme. That's not suitable, either.
Thank you Markus for sharing your time and insights!