The third part of the “Is TDD Dead?” hangout series with Martin Fowler, Kent Beck and DHH centered around “Feedback and QA” - the nuances of feedback and the role of the QA. After the 1st part, Fabio Pereira published his thoughts “Mockists Are Dead. Long Live Classicists”, followed by my analysis of Part 2, “Test-Induced Design Damage. Fallacy or Reality?” Here are a few takeaways from Part 3.
Kent started with a great rant about the tradeoffs inherent in testing.
In his view, testing involves several tradeoffs. In an ideal world, you would have instant feedback about every programming decision to determine whether it is ready to go live or not. That is IDEAL. That kind of feedback is impossible at the moment. So the question is: how far off the ideal do we need to back off?
- Frequency of Feedback - How frequently do we need the feedback? (100 milliseconds, seconds or minutes?) Is it doing what it is supposed to do?
- Fidelity - Any test is accurate only to a certain number of decimal places. In production, it is right some x% of the time and wrong some y%. This is a continuum - different personalities / contexts require different levels of fidelity.
- Overhead - Increasing the fidelity of the test suite drives its cost up at the same time. How much overhead are you willing to accept to get more frequent, higher-fidelity feedback?
- Lifespan - How long is the software going to live? This is measured not only in years, but also in probability (how likely it is to be used for x years).
Kent also shared an example: around half of Facebook’s hackathon projects have never shipped / gone live. So how much effort do you need to put into testing just to find out that an idea was not worth pursuing?
Martin opined that the goal of testing is to validate the multiple things you are looking for feedback on. The most important is whether the software is doing something useful for its users. Another category might be, say, whether the HTML is rendering properly.
Martin listed these feedback categories that are crucial for the team to know:
- What are the users’ needs and how can we satisfy them?
- Have I broken anything? (This is where the regression suite is a lifesaver).
- Is the codebase healthy? Can I move quickly with it for the length of time I intend to work with it? Is it organized well?
David felt that TDD has become so successful that many teams have removed QAs altogether. TDD made programmers over-confident about the quality of the software they produce. Are more tests better tests? Are faster tests better tests? Where does the value of automated tests come from? And what about the overhead, and the balance that needs to be maintained?
Some programmers still believe quality is their responsibility alone, and they equate quality with automated tests. This is a shallow understanding of what quality is. Tests may be green, yet fail to find actual problems; 100% test coverage does not mean there are no defects in the code.
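To make that last point concrete, here is a minimal Python sketch (the function, its bug, and the test are all hypothetical, not from the hangout). The single test executes every line of the function - 100% statement coverage - and passes, yet a real defect slips through because no assertion exercises the problematic input.

```python
# Hypothetical pricing helper. Intended behavior: the discount
# should be capped at 100%, but the cap is never applied.
def discounted_price(price, discount_pct):
    # Bug: a discount_pct over 100 yields a negative price.
    return price * (1 - discount_pct / 100)

# This test covers every line of discounted_price (100% coverage)
# and is green, yet the negative-price defect goes undetected
# because only the happy path is asserted.
def test_discounted_price():
    assert discounted_price(200, 25) == 150.0

test_discounted_price()
print("all tests green")
```

Running a coverage tool over this suite would report full coverage, while `discounted_price(100, 150)` still silently returns `-50.0`. Coverage measures which code ran, not which behaviors were checked.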
Martin was optimistic that times they are a-changing. Earlier, the role of the QA was confined to test scripts. Automation has changed that by embedding QA and testing in the deployment pipeline. QA at ThoughtWorks, for instance, is a collaborative role and function - not the dysfunctional, throwing-things-over-the-wall, finger-pointing one of the “dark ages”.
Kent advised that perhaps we should put a few red pixels in the green test bar to remind the team not to be too arrogant about what those “green tests” mean. As soon as you think you are not making mistakes anymore, that means you are making mistakes, and you stop growing as an engineer. Another way to “stay grounded” is to rotate being on-call for production issues. It provides a real-time feedback loop and points out all those gaps in your test suite.
Here are my thoughts from this session:
- The team needs to collaborate and together agree on the goal of testing, and on which different types of testing will give good coverage of the product-under-test.
- Experimentation and calculated risks help the team be innovative and creative in testing.
- "All tests passing" does not necessarily mean there are no defects / issues in the product.
- It is better to have ‘x’ tests that add value than ‘10x’ tests that add no significant value.
- Identifying and implementing a ‘good’ regression suite is very important.
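On that last point, a good regression suite is largely built from tests that pin down bugs after they are fixed. Here is a minimal Python sketch of the idea (the function and the "empty input" bug are invented for illustration): once a failure has been reproduced and fixed, a test that replays the original failing input stays in the suite forever, so the defect cannot silently return.

```python
# Hypothetical helper. An assumed earlier version crashed on
# empty or None input; the guard below is the fix.
def normalize_name(name):
    if not name:
        return ""
    return name.strip().title()

# Regression test: replays the input that originally failed,
# so any future change that reintroduces the crash turns red.
def test_normalize_name_handles_empty_input():
    assert normalize_name("") == ""
    assert normalize_name(None) == ""

# Ordinary behavior test, kept alongside the regression test.
def test_normalize_name_basic():
    assert normalize_name("  ada lovelace ") == "Ada Lovelace"

test_normalize_name_handles_empty_input()
test_normalize_name_basic()
print("regression suite green")
```

A suite curated this way stays small and high-value: every test in it corresponds either to a behavior users rely on or to a defect that actually happened, which is exactly the "x valuable tests rather than 10x" tradeoff above.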