When it comes to building high-quality products, as QAs we might usefully start with the question: "What is quality?" It’s often answered academically, philosophically, or with ISO definitions. But I’d like to start with a look at a product we all know: a muffin.
What qualities would make a muffin a high-quality muffin? Certainly chocolate sprinkles on top! Let's get straight to the first comparison to quality assurance: just as the chocolate is spread over a muffin after baking, many teams only start caring about the quality of software after programming. The problem in both cases: there is no chocolate (quality) in the muffin (software).
In this article, we want to look at methods and processes that show us how to be quality-conscious as early as possible in software development, or in other words, how to bake the chocolate directly into muffins.
As you might guess, this significantly increases the scope of work for regular “QAs”, so that we outgrow our role as "Quality Assurance" and become true "Product Quality Specialists".
How changes in the process improve quality
If we take a close look at the process steps of a typical agile team, we can see what is going on there in detail.
In the first step ("Analysis") we plan to create value. We write a story for this, which we then put in a "backlog" — so nothing is done with it immediately. In other words: we’re wasting time. In the best case, an analyzed story is still up to date when it is implemented. Often, stories are outdated by this time. In the next step ("Development"), we add the planned value to the product. Afterwards the story waits again in a process step ("waiting for QA"), so here too, we’re only wasting time. The team waits for a Quality Analyst (QA) to review the planned value and deliver it to users (“in QA”).
As QAs, we have little influence on the "backlog" column, but "waiting for QA" is "ours". Why should we have a process step in the team where the only thing happening is time being wasted? Removing the “waiting for QA” column has a very positive impact on the team's speed and quality of work if we pair it with another tool: work-in-progress limits. The idea is as follows: If there’s no “waiting for QA” column, a developer has no room to leave a story once the implementation is finished and thus cannot just start a new one. Therefore, the developer must ensure that the story is passed to an available QA. This forced handover facilitates direct communication: The QA receives valuable context from the developer; and the developer can make use of the conversation with the QA to check whether all the acceptance criteria of a particular story are actually fulfilled. This conversation is already a big win for quality.
We have used these limits in different projects and in different contexts. In one case we were able to shorten the period from "analysis" to "done" for stories from 13 days to just four without anyone having to work "harder" and without compromising on quality. With fewer stories in development, there are fewer context switches, so stories get completed faster, one after the other. Limiting work in progress improves the focus of teams and thus not only time-to-market but also the quality of a product.
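The mechanics of such a limit are simple enough to sketch in code. Below is a minimal, hypothetical model of a board whose columns refuse to accept another story once their work-in-progress limit is reached; all class and column names are invented for this illustration, not taken from any real tool:

```python
class WipLimitExceeded(Exception):
    """Raised when a column would exceed its work-in-progress limit."""

class Board:
    def __init__(self, limits):
        # limits maps column names to WIP limits, e.g. {"development": 2}.
        # Note: there is deliberately no "waiting for QA" column.
        self.limits = limits
        self.columns = {name: [] for name in limits}

    def move(self, story, column):
        """Move a story into a column, enforcing that column's WIP limit."""
        if len(self.columns[column]) >= self.limits[column]:
            raise WipLimitExceeded(
                f"'{column}' is full - finish or hand over a story first")
        for stories in self.columns.values():
            if story in stories:
                stories.remove(story)
        self.columns[column].append(story)

# A developer who finishes a story has nowhere to park it: it must go
# straight to an available QA, which forces the handover conversation.
board = Board({"development": 2, "in QA": 1})
board.move("story-1", "development")
board.move("story-2", "development")
```

The interesting property is the exception: when every column is full, the only way to make progress is to finish and hand over existing work, not to start new work.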
Synergy of analysts: How business and quality work together
To really work with business people (Business Analyst, Product Owner or Product Manager), it's important to understand all the involved perspectives: Our Business Analysts (BAs) are typically very motivated to publish software quickly, because a late release of a (software) product leads to higher costs and lower income.
The job of QA, on the other hand, is to be sure that a solid and mostly error-free product is being introduced to the market.
The challenge is to find a balance between speed and quality. To this end, NASA once analyzed where exactly the costs of errors arise.
Figure 1: The costs of errors
It's easy to spot the moment where the cost of errors explodes: in production. That isn’t only because the server is called “production”, it’s also due to the fact that bugs that reach a production system are long-lived and therefore more expensive. But no matter what the exact reason behind this huge increase in cost, a typical reaction is to re-examine everything thoroughly before going into production. These are the automated test cases in our CI/CD systems. But if you look at the diagram more closely, it seems most cost-effective to detect defects as early as possible and deliver high quality right from the start when drafting the requirements and analyzing the stories — and not only while testing.
In my experience, this works best when QAs and BAs actually sit side by side, work together on the requirements, and present them together to the team in short planning meetings. In this way, the knowledge about requirements (including "edge cases" and "sad paths") can be shared with the entire team before implementation. The QAs can be sure that the most important "sad paths" are already considered in the implementation and thus also automatically tested for regression. This significantly reduces the effort for manual testing. In contrast to code-freeze testing under time pressure, we introduce our products to the market with substantially better quality and much earlier.
Ideally, the developers work together with the QAs to structure the regression tests from the unit test level all the way up to required end-to-end tests, adhering to the principle of the testing pyramid. However, no matter how elaborate these tests are when designed, they’re always executed after implementation.
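The pyramid's proportions can be made concrete with a small sketch. The business rule and the tests below are invented for this example; the point is only the shape: many cheap checks at the unit level, with only a few expensive tests above them.

```python
def shipping_cost(weight_kg: float, express: bool = False) -> float:
    """Hypothetical business rule: base rate plus a per-kilogram
    charge, doubled for express delivery."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    cost = 3.90 + 0.50 * weight_kg
    return cost * 2 if express else cost

# Unit level (the broad base of the pyramid): many fast, cheap checks,
# including the "sad paths" identified together with the BAs.
def test_standard_rate():
    assert shipping_cost(2.0) == 4.90

def test_express_doubles_the_rate():
    assert shipping_cost(2.0, express=True) == 9.80

def test_rejects_non_positive_weight():
    try:
        shipping_cost(0)
        assert False, "expected a ValueError"
    except ValueError:
        pass

# Integration and end-to-end levels (the narrow top): only a handful of
# tests, e.g. one checkout flow through the real UI, would sit above this.
```

Keeping the sad-path coverage at the unit level is what makes the later end-to-end layer small: it only needs to confirm that the wired-together system works, not re-prove every rule.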
To increase quality from the beginning, we try to apply a simple rule to all teams: Every commit goes to production.
Setting this rule increases the quality: When QAs are no longer the last guardians of possible mistakes, developers spend more time making sure that their tests cover the most important things. And as QAs are the experts for testing, the push-and-pull relationship between developers and QAs is reversed by this new rule. Previously, tickets were pushed to the QAs for testing. Now, developers are asking QAs for advice on how best to structure the tests — especially before implementation.
A second great advantage is that you have more flexibility to act in emergencies. Normally, in classic release management, you have a complicated plan for how to perform a release. If something goes wrong, there’s a hotfix-cherry-pick-branch-release process which bypasses most security measures and quality controls in your team. That doesn’t sound particularly trustworthy.
We no longer need that in our team setups. With release cycles of 30 minutes or less, we can quickly respond to production issues without hotfixes or branches. We don’t even need a rollback strategy any more.
The really interesting question is then how to design your monitoring so that you can find problems in production even earlier. At a minimum, this means visualizing the number of errors in your application, as well as server and database response times. If you want to reach a really professional level, you can think about fully automated anomaly detection.
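A first step towards such anomaly detection doesn't have to be sophisticated. The sketch below, with an arbitrary window size and threshold chosen purely for illustration, flags the latest error count when it lies more than three standard deviations above the recent average:

```python
import statistics

def is_anomalous(history, latest, min_samples=10, threshold=3.0):
    """Flag `latest` if it lies more than `threshold` standard
    deviations above the mean of the recent `history` window."""
    if len(history) < min_samples:
        return False  # not enough data to judge yet
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest != mean  # flat history: any change stands out
    return (latest - mean) / stdev > threshold

# Errors per minute from a healthy service, then a sudden spike.
window = [2, 3, 1, 2, 4, 2, 3, 2, 1, 3]
assert not is_anomalous(window, 4)   # normal fluctuation
assert is_anomalous(window, 30)      # spike worth an alert
```

In practice you would run a check like this against live metrics and page someone when it fires; the value is that the team learns about the spike from the monitor, not from the users.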
As a result, we have significantly fewer problems with releases, while enabling new features in a very controlled manner using Feature Toggles. Not only do we quickly deliver new value to our users, but we also achieve higher quality at the same time.
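The toggle mechanism itself can be tiny. Here is a minimal sketch (class, toggle, and function names are invented for this example; real setups usually read toggle state from configuration or a remote service) showing how every commit can ship while the new feature stays dark until it is switched on:

```python
class FeatureToggles:
    """Minimal in-memory toggle registry for illustration only."""

    def __init__(self, defaults=None):
        self._flags = dict(defaults or {})

    def is_enabled(self, name: str) -> bool:
        return self._flags.get(name, False)  # unknown toggles default to off

    def enable(self, name: str) -> None:
        self._flags[name] = True

toggles = FeatureToggles({"new-checkout": False})

def checkout_page() -> str:
    # The new code path is merged and deployed, but invisible to users
    # until the toggle is flipped - no release branch required.
    if toggles.is_enabled("new-checkout"):
        return "new checkout flow"
    return "classic checkout flow"

assert checkout_page() == "classic checkout flow"
toggles.enable("new-checkout")
assert checkout_page() == "new checkout flow"
```

Because the switch is a runtime decision rather than a deployment, turning a problematic feature off is as fast as turning it on, which is what makes the controlled rollout possible.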
Creating a culture of accepting and learning from mistakes provides the ultimate quality boost
Every team makes mistakes. It's human nature, and it’s good, because we learn from mistakes. Because QAs are extremely concerned with the risk and impact of errors, they can help the team make mistakes quickly in a secure environment — and thus enable the team to learn new things very fast. This increases not only the safety and trust within the team and the quality of the software, but, almost as a side effect, also the innovation potential of the team.
One method that we use very intensively — and as QAs constantly demand from the teams — is pair programming. It helps us in particular to avoid small slip-ups, which are bound to happen sometimes. Pairing two people, mostly developers, is an effective form of damage control.
But slip-ups aren’t the only motivation for pairing. Do you know the "aha moment" you have when you’ve tried something new and understood how it works? We try to make it as easy as possible for the team to experience such moments by "trying things out" (= deliberately provoking mistakes) — and talking within the pair about it!
Lastly, make sure your team has fun.
"Fun", you may ask - "Really?"
Yes. Imagine two different teams. In the first team, the people are very excited, they come to work happily and greet each other in the morning, they communicate and discuss what they’re working on. Then there is the second team, where people prefer to get a to-do list in the morning and be left alone for the rest of the day until they have delivered on that list. Which team do you think will build a better product? Which team will make fewer mistakes? Which team is more likely to deliver on time?
Quality Specialists are usually the link between developers and business, we connect people and their perspectives, we help to build a culture of trust. Those are the small things that make up a great culture, and we Quality Specialists are usually in the middle of it.
Combining all of this — a positive team atmosphere, a strong bond between QAs and BAs, an excellently shaped test pyramid and a process that actually enables the team to deliver software to production quickly — will ultimately lead to a better, faster, stronger product.
Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.