In the traditional world of enterprise software delivery, there is often a strong focus on software delivery metrics: What's the test coverage? What's the code coverage? What's the bug count? Why is the bug count so high? How many requirements have we delivered in this release?
Without the context that surrounds them, these measures are meaningless. I could answer every question above and still convey nothing useful: the answers could be simply 81%, 54%, 134, schedule pressure, and 34.
Whilst this technically answers every question, it adds no value to a conversation about software delivery.
Changing the Conversation
When faced with questions like the above, I would approach them from a different angle. I would seek to understand why these questions are being asked, and whether they are indeed the right questions. Are they trying to track test execution progress? Are they trying to track 'quality'? Are they trying to gain a degree of comfort that the questioner will be able to answer any question asked of them?
The Importance of Understanding
Approaching this type of discussion with empathy is important. It is very easy to come across negatively in this situation: "That's a stupid metric", "Code coverage means nothing", "Why are you asking for something that adds no value and makes us do more work?". Instead, seek to understand the reasoning behind the questions. The real answer may be that stakeholders lack visibility into your work, or lack confidence that your current process will produce a quality result.
Testing is Never Complete
Personally, I find that most non-testers assume, subconsciously and incorrectly, that testing can be complete. To counter that, consider exhaustively testing a six-character user name field limited to ASCII input. Even this simple example yields 127 x 127 x 127 x 127 x 127 x 127 = 4,195,872,914,689 possible inputs. Even if each test case takes only one second to execute, it would still take roughly 133,000 years to finish testing the user name field, before we even move on to the password field.
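A back-of-the-envelope sketch of that arithmetic, assuming 127 usable ASCII values per character position and one test case executed per second:

```python
# Exhaustive test cases for a six-character field where each
# position can hold one of 127 ASCII values (an assumption for
# illustration), executed serially at one test case per second.

CHARS_PER_POSITION = 127
FIELD_LENGTH = 6
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

cases = CHARS_PER_POSITION ** FIELD_LENGTH
years = cases / SECONDS_PER_YEAR

print(f"{cases:,} test cases")   # 4,195,872,914,689 test cases
print(f"~{years:,.0f} years")    # ~133,050 years
```

Even generous parallelism only shaves orders of magnitude off a number this large, which is why exhaustive testing of even a single field is impractical.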
Something else I like to discuss is cyclomatic complexity as an indicator of how many test cases are needed to achieve a representative level of statement coverage. Once I have laid the foundation with those two points, I can then evolve the conversation to directly address the initial concerns. For example:
- Arriving at a count of test cases needed to meet a coverage goal, purely for the sake of chasing a number, is the wrong thing to do
- The way most people calculate test coverage numbers is flawed
- Any claim of a high level of test coverage without a proportionately high number of unit tests is probably incorrect
- Counting test cases, bugs and requirements is a lot like counting fruit: a grape and a watermelon are treated as being the same size in a metrics context
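To make the cyclomatic-complexity point above concrete, here is a small illustrative sketch (the function and its names are invented for this example): complexity is the number of independent decision points plus one, which sets a floor on the number of test cases needed for branch coverage.

```python
# Hypothetical example: cyclomatic complexity M = decision points + 1
# gives the number of linearly independent paths through the code,
# and hence a minimum number of test cases for branch coverage.

def classify_discount(age: int, is_member: bool) -> str:
    # Two decision points (the two `if`s) -> M = 3,
    # so at least three test cases are needed.
    if age >= 65:
        return "senior"
    if is_member:
        return "member"
    return "standard"

# One test per independent path:
assert classify_discount(70, False) == "senior"
assert classify_discount(30, True) == "member"
assert classify_discount(30, False) == "standard"
```

Note this is a lower bound, not a target: covering every branch says nothing about whether the assertions in those tests are meaningful.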
With improved understanding and hopefully trust, the conversation should be a more productive one.
Proactively Radiating Information
The other key thing I'd recommend is to proactively radiate information, rather than reactively defending its absence. This allows me to focus my attention and energy on shipping great software.
Hopefully, the conversation has now moved back to what actually needs to be measured, what your stakeholders' concerns are, who the target audience for the "numbers" is, and how you can help them tell their story better.