The seven guiding principles in testing


Testing is a broad, fast-growing space: decades old, rich and agile enough to constantly subsume new processes, tools and methodologies. Yet testing follows certain foundational principles that remain unchanged regardless of its age, recent technology trends or domain.

 

These are what I call the ‘guiding principles in testing.’ If you observe closely, recommended best practices and tools in testing are usually based on these principles too. 

 

In my experience, proactively applying these principles when working with new technology stacks and domains consistently yields high quality. Consequently, these testing principles should also accelerate high-quality outcomes in scenarios employing emerging technologies like AI, blockchain, IoT and more. Here is a consolidated list of the ‘guiding principles in testing.’

 

The end-user is your only friend!

 

When wearing the tester’s hat, your only friend should be the application’s end-user. Why? Because testing is all about playing the role of the end-user. We can easily be distracted by business needs and technical implementation details, but we should always operate with the end-user’s interests in focus.

 

That calls for us to go beyond verifying a story’s acceptance criteria and explore the application as a typical end-user would, which requires us to understand the targeted user personas before testing even begins.

 

Often, teams tend to trade off end-user needs against factors like development complexity or timelines. The tester’s role, however, is to systematically represent the end-user’s perspective when negotiating such trade-offs.

 

Micro- and macro-level testing

 

Testing should be embedded at both the micro and macro levels. Micro-level testing zooms in on a small piece of functionality and tests it in detail, covering every edge case. For instance, such a piece of functionality could be calculating an order’s total amount, with its various boundary conditions and technical and business validations.
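
To make this concrete, here is a minimal sketch of what micro-level tests for such a calculation might look like, using pytest. The function and its validation rules are illustrative assumptions, not from a real codebase:

```python
import pytest

def calculate_total(prices):
    """Hypothetical unit under test: the total of an order's item prices."""
    if not prices:
        raise ValueError("order must contain at least one item")
    if any(p < 0 for p in prices):
        raise ValueError("item prices must not be negative")
    return round(sum(prices), 2)

# Micro-level tests: exercise one small unit in detail, including
# business validations and boundary conditions.
def test_sums_item_prices():
    assert calculate_total([10.00, 2.50]) == 12.50

def test_rejects_empty_order():
    with pytest.raises(ValueError):
        calculate_total([])

def test_rejects_negative_price():
    with pytest.raises(ValueError):
        calculate_total([-5.00])

def test_normalizes_unexpected_decimals():
    # Boundary condition: prices carrying more than two decimal places.
    assert calculate_total([0.105, 0.105]) == 0.21
```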

 

Macro-level testing uses a broader lens to cover functional flows, data propagation between modules, integration between components and more. For example, the ‘total order amount’ calculation feature could feed into the broader ‘order creation’ flow, and we can then test that flow end to end: the database, third-party integrations, UI flows, failures in order creation and more.
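
A macro-level test for the same flow might instead drive the public API end to end. Here is a hedged sketch, assuming a hypothetical orders service with the endpoints and payloads shown:

```python
import requests

BASE_URL = "http://localhost:8080"  # assumed test environment

def test_order_creation_flow():
    # Create an order through the public API, the way a client would.
    response = requests.post(
        f"{BASE_URL}/orders",
        json={"items": [{"sku": "ABC-1", "quantity": 2, "price": 10.00}]},
    )
    assert response.status_code == 201
    order_id = response.json()["id"]

    # Verify the order propagated through the system: the read endpoint
    # reflects the persisted order and the calculated total.
    order = requests.get(f"{BASE_URL}/orders/{order_id}").json()
    assert order["total"] == 20.00
    assert order["status"] == "CREATED"
```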

 

What I have realized is that most teams focus only on macro-level testing, which usually results in multiple issues in production, because this kind of testing disregards the minor details. Let me elaborate using the same ‘order creation’ example. A team relying only on macro-level testing would have checked valid order creation and the business error flows. But when item prices in production turn out to be negative or carry an unexpected number of decimals, order creation breaks, because micro-level testing was neglected.

 

Micro-level tests can be added as unit and integration tests, while macro-level tests can be covered as part of functional automation tests, visual tests and so on. 

 

Our recommendation is to constantly zoom in and out between the micro and macro details while testing. Neglecting either level can dent the team’s confidence when unexpected issues surface in production.

 

Faster feedback

 

This principle is about detecting defects early, so that the defect-fixing cycle, and consequently the release cycle, can be faster. Defects become costlier the later they are discovered in the delivery cycle.

 

For instance, imagine a high-priority bug found two weeks after feature development. First, handling it becomes extremely time- and effort-intensive: creating bug cards, triaging them, tracking them across iterations and finding a suitable developer with time to fix them. Second, in the worst case, the defect might be impossible to fix without major refactoring, delaying the release. That, I would say, is the ultimate cost to pay for a defect!

 

There’s another notable correlation between the time taken to fix a defect and how late it is found. While a feature is in development, the developer has complete context and can easily trace a bug to its root cause, making the fix quick. But once that developer moves on to other features and the codebase keeps growing and being refactored, context is lost and root-cause debugging becomes a longer, costlier process.

 

You might ask: how early should we test a piece of code to ensure faster feedback cycles? Shift-left testing is centered on exactly this. Some shift-left practices that work well are dev-box testing, running automated tests both on the developer’s machine and in CI, and tracking coverage metrics. Implementing the test pyramid also produces faster feedback. Additionally, story sign-offs by product owners and a regular cadence of showcases to all stakeholders every sprint ensure faster feedback on missing business cases.
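
For example, one lightweight way to combine the test pyramid with dev-box testing is to tag tests by layer, so the fast unit layer runs on the developer’s machine before every push while the slower layers run in CI. A sketch using pytest markers (the marker names are assumptions and would need registering in pytest.ini):

```python
import pytest

def test_discount_calculation():
    # Unit layer: plentiful and fast; runs locally in milliseconds.
    assert round(100 * 0.9, 2) == 90.0

@pytest.mark.integration
def test_order_repository_roundtrip():
    ...  # integration layer: talks to a real database; slower, fewer

@pytest.mark.e2e
def test_checkout_journey():
    ...  # end-to-end layer: drives the full UI flow; slowest, fewest

# On the dev box:  pytest -m "not integration and not e2e"
# In CI:           pytest   (the full suite, on every commit)
```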

 

Continuous feedback

 

Fast feedback should always be backed by continuous feedback. It is not enough to test a feature once and leave it idle until release. We have to keep regression-testing the feature to check that its integrations are intact and that refactorings have not broken existing functionality.

 

These continuous feedback mechanisms help fix regression defects early and prevent disruptions to release timelines. One prominent way teams achieve this is by integrating all automated tests into CI, ideally running the whole suite for every commit. If the tests take too long to run, we adopt parallelization techniques. The test pyramid helps avoid splitting slow tests out of the commit pipeline, preserving continuous feedback for every commit.

 

Quantify quality

 

When trying to achieve high quality via testing, we should correctly measure it as well. Some of the recommended metrics are:

- defects caught by automated tests in all layers
- time taken from commit to deployment
- number of automated deployments to testing environments
- regression defects caught during story testing
- automation backlog based on the severity of test cases
- production defects and their severity
- usability scores with end-users
- failures due to infrastructure issues
- metrics around cross-functional aspects

 

Many of these metrics align with the ‘Four key metrics,’ which measure quality in terms of code stability and the team’s delivery tempo. The delivery tempo is derived from the time taken from commit to deployment and the number of deployments per day to testing environments. For instance, one of the Four key metrics is ‘deployment frequency,’ which needs to be ‘on demand’ for a high performer. Production defects inform the ‘change fail percentage’ – the percentage of changes made to production that fail – which should be 0-15% for a high performer. When tracked and discussed consistently, metrics like these empower the team to build high-quality software.
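
As a rough illustration of how such metrics can be derived, here is a sketch that computes lead time and change fail percentage from deployment records; the record format and the numbers are made up for the example:

```python
from datetime import datetime, timedelta

# Illustrative deployment records: commit time, deploy time, outcome.
deployments = [
    {"committed": datetime(2023, 1, 2, 9, 0),
     "deployed": datetime(2023, 1, 2, 11, 30), "failed": False},
    {"committed": datetime(2023, 1, 3, 10, 0),
     "deployed": datetime(2023, 1, 3, 10, 45), "failed": True},
    {"committed": datetime(2023, 1, 4, 14, 0),
     "deployed": datetime(2023, 1, 4, 15, 0), "failed": False},
]

# Lead time for changes: time from commit to running in production.
lead_times = [d["deployed"] - d["committed"] for d in deployments]
average_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Change fail percentage: share of deployments that caused a failure;
# 0-15% is the range cited for high performers.
fail_percentage = 100 * sum(d["failed"] for d in deployments) / len(deployments)

print(f"average lead time: {average_lead_time}")
print(f"change fail percentage: {fail_percentage:.0f}%")
```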

 

Communication and collaboration are key to quality

 

Testing is not a siloed activity; it requires adequate knowledge of the business requirements, the domain, the technical implementation, environment details and more. This calls for efficient collaboration and communication among all the roles in a project team.

 

This communication can take place via agile ‘ceremonies’ like stand-ups, story kick-offs, Iteration Planning Meetings (IPMs) and dev-box testing, and through proper documentation like story cards, Architecture Decision Records (ADRs), test strategies and test coverage reports. While communication might not be synchronous in distributed teams, we should ensure hand-overs happen via asynchronous mediums like video recordings, documentation, chats, emails and more.

 

Defect prevention over defect detection

 

While testing focuses on finding issues in the application, we should strive to prevent defects in the first place. An obvious reason to avoid defects is cost, right? I’d compare fixing a defect to painting over a rough patch on an otherwise seamlessly painted wall: sometimes the newly painted patch doesn’t blend with the rest of the wall, and we have to repaint the whole wall.

 

Similarly, software defects can lead to significant architectural changes, which is why we should adopt practices, tools and methods that enable defect prevention right from the start. A few practices in today’s software world that fulfill this principle are the Three Amigos process, IPMs, story kick-offs, ADRs, shift-left testing, test-driven development (TDD), pair programming, showcases, story sign-offs by POs and more.

 

In conclusion, the seven principles discussed above can be applied to any new domain, like data, and even to teams featuring extremely niche and unusual roles. We expect these principles to help teams drive their testing strategies and achieve high-quality results even as they venture into newer technology territories.

 

Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.
