
There is no such thing as a "tech task"

Technical tasks, such as testing, pipelines and refactoring, should serve your business objectives. The right work makes your product more reliable, scalable and maintainable. When you don't manage these tasks like other product tasks, your ability to deliver suffers.

How do you handle technical tasks?

We've all been there: your development team insists, "We need to rewrite tests. We need to refactor this module. We need to automate this pipeline." We know the concept of "technical debt". We may even accept it as a natural result of an evolving product.

Here are some approaches we've seen:

1. Random tech tasks: development teams add tasks to the backlog as needs come up. Update a library, clean up old code, extend integration tests. Teams pick up these tasks between "real" user stories.

2. Technical epics: the team groups tasks by product area, such as refactoring a single user flow. In some (worse) cases, they group them by technical component.

3. Time splitting: product managers give in to developers' demands by letting them spend some percentage of their time on "tech tasks". The developers decide what to spend that time on.

4. Separate backlogs: tech tasks are maintained on a board, list or document that lives separately from user-facing features.

One way or another, the team handles the plumbing and wiring to keep the house running. Is everyone happy? You know you're not, and you also know why:

If a task has no clear business value, you shouldn't do it

The main issue with all these approaches is that teams continually fail to prioritise "tech" tasks. There will always be new product requirements. User-facing features have a tangible impact on product metrics; updating a library may or may not. This means you end up with buckets of tasks languishing at the bottom of your backlog while the backbone of your product suffers.

Another anti-pattern that arises from these methods is working on something other than the most valuable tasks. This happens whenever you neither prioritise tech tasks nor measure their value the same way you do user stories. We've seen development teams finish refinement and then hold a separate discussion to fill their remaining capacity with tech tasks. The PM doesn't always understand (or even see) what the developers decide, and the developers aren't accountable for the business value of this work.

Am I arguing that you shouldn't rewrite tests at all? That you shouldn't update libraries? That you shouldn't revamp old code? That can't be right either.

Tech tasks are product tasks

We relate user stories to business goals. An accurate search leads to more sales. A new registration process leads to increased conversion rates. Faster loading leads to more satisfied users and more referrals.

Can we apply this logic to technical tasks?

As a product team, you measure success not only in business impact, but also in your ability to deliver that impact reliably and at speed. Shipping new features is great if it leads to more users or higher conversion, but what if it takes you six months? What if it causes constant instability?

You want to prioritise tech tasks like any user story: according to the key metrics of success. This means mapping them to concrete business goals and indicators of value (a short sketch after this list shows how that can look in practice):

• Extending automated tests means you can release frequently with confidence

• Healthy Continuous Delivery accelerates release cycles and improves feedback loops

• Updating old code and dependencies reduces unexpected failures and security risks

• Refactoring keeps overhead low as you extend your product

• Extracting a service combines many of these advantages and lets your team work autonomously on a well-defined scope
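To make the idea concrete, here is a minimal sketch of a single backlog in which tech tasks and user stories carry the same metadata and go through the same prioritisation pass. The items, goals and impact scores are invented for illustration, not a prescribed scoring model.

```python
# Illustrative only: one backlog where tech tasks and user stories are
# described, and therefore prioritised, in exactly the same terms.
from dataclasses import dataclass

@dataclass
class BacklogItem:
    title: str
    kind: str             # "user story" or "tech task": treated identically
    business_goal: str    # the outcome this item serves
    expected_impact: int  # rough 1-10 estimate agreed during refinement

backlog = [
    BacklogItem("Improve search ranking", "user story",
                "more sales from accurate search", 8),
    BacklogItem("Extend checkout integration tests", "tech task",
                "release checkout changes frequently with confidence", 7),
    BacklogItem("New registration flow", "user story",
                "higher conversion rate", 6),
    BacklogItem("Update payment library", "tech task",
                "fewer unexpected failures and security risks", 5),
]

# One prioritisation pass over the whole backlog: no separate tech list.
for item in sorted(backlog, key=lambda i: i.expected_impact, reverse=True):
    print(f"[{item.kind}] {item.title} -> {item.business_goal}")
```

The point is not the scoring mechanics but the shape: every item, technical or not, names the outcome it serves, so one conversation can rank all of it.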

But how do you know what the business value is? Let's put it all together.

Business, user and delivery goals

When you go into your next backlog discussion, you will have new feature requests, updates to existing features, a bug or two and some "tech" tasks. What's to be done? The best way to begin is to sort them together by looking at business objectives and your ability to deliver on them.

Our favourite way of measuring that ability is with the four key metrics: deployment frequency, lead time for changes, time to restore service and change failure rate. The types of tasks mentioned above can contribute to each of these metrics, providing a quantifiable impact on how we deliver. This makes it easier to prioritise this kind of work alongside user stories. In essence, these metrics help us ask: will this work help us deliver more features, more reliably?
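By way of illustration, here is a minimal sketch of how a team might compute these four metrics from its own delivery records. The Deployment model and the sample data are invented for the example; in practice the raw events would come from your deployment pipeline and incident tracker.

```python
# A hypothetical sketch: deriving the four key metrics from deployment records.
from dataclasses import dataclass
from datetime import datetime
from statistics import median
from typing import Optional

@dataclass
class Deployment:
    deployed_at: datetime
    committed_at: datetime                  # when the change was first committed
    caused_failure: bool                    # did this release degrade production?
    restored_at: Optional[datetime] = None  # when service recovered, if it failed

# Invented sample data covering a one-week window
deployments = [
    Deployment(datetime(2024, 5, 1, 14, 0), datetime(2024, 4, 30, 9, 0), False),
    Deployment(datetime(2024, 5, 2, 11, 0), datetime(2024, 5, 1, 16, 0), True,
               restored_at=datetime(2024, 5, 2, 12, 30)),
    Deployment(datetime(2024, 5, 3, 10, 0), datetime(2024, 5, 2, 15, 0), False),
]
window_days = 7

# 1. Deployment frequency: releases per day over the window
frequency = len(deployments) / window_days

# 2. Lead time for changes: median commit-to-production time
lead_time = median(d.deployed_at - d.committed_at for d in deployments)

# 3. Change failure rate: share of releases that degraded production
failure_rate = sum(d.caused_failure for d in deployments) / len(deployments)

# 4. Time to restore service: median time from failed release to recovery
failures = [d for d in deployments if d.caused_failure and d.restored_at]
time_to_restore = median(d.restored_at - d.deployed_at for d in failures)

print(f"{frequency:.2f} deploys/day | lead time {lead_time} | "
      f"{failure_rate:.0%} change failure rate | restored in {time_to_restore}")
```

A team that tracks these numbers over time can show, rather than assert, that a round of test automation or pipeline work moved them.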

As a bonus, the four key metrics are well aligned with business outcomes. A team with short release cycles and low failure rates learns faster, keeps users happy and drives business results.

We must stop treating technical tasks as if they are separate from the product backlog. All the work we do supports better business outcomes and our team's ability to deliver those outcomes. Keeping your whole backlog under one set of objectives reflects a team that is aligned on outcomes and on all the work that will get it there.

Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.
