In our previous blog we looked at useful metrics for measuring the quality of a product. Here we look at some of the more questionable metrics often used by teams or organizations to measure quality. There may be some value in using these but, in our experience, they do not always indicate the quality of the product.

Number of defects
Organizations measure various defect-related metrics, such as the number of severe defects, regression defects and fixed defects, to name a few. These metrics only tell us how many defects we have managed to find so far in the areas of the application we have tested. Defects may still be hidden in parts of the system we have not examined, which changes the picture of the system's quality. Dedicating time to understanding and fixing the deficiencies in the existing software development process is what actually reduces the number of defects.
Image from: https://www.flickr.com/photos/pasukaru76/4651227194/in/album-72157623209706846/

Number of tests planned, prepared and executed
Tracking the number of tests in any particular state, be it executed, prepared or planned, doesn't tell you anything other than the number of tests in those states at that particular time. You can have a good quality product that has few tests, or a poor quality product that has a lot of tests. The quality of a product is not defined by the number of tests that have been written and run against it.
Image from: https://www.flickr.com/photos/pasukaru76/6173547428/in/album-72157623209706846/

Code coverage
Measuring code coverage can be a valuable development tool. It is useful for identifying high-risk areas in the codebase. Using this metric to measure product quality is dangerous, however, as the team might game it by writing tests that check the same behaviour repeatedly just to hit the target coverage. There can also be tests with false positives, or tests that do not assert anything about the behaviour of the system. None of this coverage contributes to a better quality product. This phenomenon was observed by the British economist Charles Goodhart and is popularly called “Goodhart’s law”, often summarized as: when a measure becomes a target, it ceases to be a good measure.
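To illustrate how coverage can be gamed, here is a minimal sketch (the function and test names are hypothetical, not from any real codebase): a test that merely executes a function earns the same coverage as a test that actually checks its behaviour.

```python
# Hypothetical discount function used to illustrate coverage gaming.
def apply_discount(price, rate):
    """Return price reduced by rate (e.g. 0.1 for a 10% discount)."""
    return round(price * (1 - rate), 2)

def test_apply_discount_no_assertion():
    # Executes every line, so a coverage tool counts the function as covered...
    apply_discount(100.0, 0.1)
    # ...but with no assertion this test can never fail, so it tells us
    # nothing about quality.

def test_apply_discount_real():
    # A genuine check: this test fails if the behaviour ever changes.
    assert apply_discount(100.0, 0.1) == 90.0
```

Both tests produce identical coverage numbers; only the second one protects the product.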
Image from: https://www.flickr.com/photos/14691188@N02/21526952108

Ratio of Quality Analysts/testers to developers in the team
A high ratio of Quality Analysts (QAs) to developers in a team does not by itself lead to a good quality product. A team that thinks about quality strategically will collaborate across roles to identify tests that can be automated, reducing the manual effort, and therefore the number of QAs, required in the team. Such an approach also frees up QAs’ time for effective exploratory testing and for gathering feedback from production systems, which in turn can refine existing test strategies to prevent defects.
Image from: https://www.flickr.com/photos/pasukaru76/4755621203/in/album-72157623209706846/

Requirements coverage
Requirements coverage metrics are useful if you have a fixed set of requirements. Requirements, however, change over time, so tracking and tracing them to a list of tests is a futile effort. Instead, write automated tests and treat them as living documentation: well-written tests describe the system's requirements and fail for the right reasons when those requirements change.
Image from: https://www.flickr.com/photos/38451115@N04/7235193614

Acceptance criteria met
Acceptance criteria should not be tracked as a separate measurement. The project you are working on should have a "definition of done" in place, which states that a story is only deemed complete when all the acceptance criteria have been met. If this is the case, tracking whether a story is finished or not will give you the state of the acceptance criteria by proxy.
Image from: https://www.flickr.com/photos/pasukaru76/6141973022/in/album-72157623209706846/
Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.