
On DVCS, continuous integration, and feature branches

I like to say that feature branches are evil in order to get people’s attention. However, in reality I lack the determination and confidence to be a zealot. So here is the non-soundbite version.

First, let me say that Mercurial (and more recently Git) has been my workhorse since 2008, and I love distributed version control systems. There are many reasons why I think they represent a huge paradigm shift over existing tools, as discussed in Continuous Delivery (pp393-394). But like all powerful tools, there are many ways you can use them, and not all of them are good. None of my arguments should be construed as attacking DVCS: the practice of feature branching and the use of DVCS are completely orthogonal, and in my opinion, proponents of DVCS do themselves – and the tools – a disservice when they rely on feature branching to sell DVCS.

First a few definitions. Note that some people use these terms in different ways, so you’ll need to temporarily erase any other definitions from your brain or my discussion won’t make much sense.

Continuous Integration is a practice designed to ensure that your software is always working, and that you get comprehensive feedback in a few minutes as to whether any given change to your system has broken it.

Feature branching is a practice whereby people do not merge their code into mainline until the feature they are working on is “complete” (i.e. done, but not done done[1]).

Mainline is the line of development, in a conventionally designated version control repository, from which the builds of your system or project that feed into your deployment pipeline are created. Note that this definition applies perfectly well to DVCS and to open source projects, even on GitHub.

First, let’s dismiss the straw man argument. Every time you use version control you are effectively working on a branch: your working copy. On a DVCS, there’s a further level of indirection, because your local repository is effectively a branch until you push your changes to mainline. I have no problem with creating branches. What I do have a problem with is letting code that you ultimately want to release accumulate on branches.

Here are my observations. When you let large amounts of code accumulate off mainline – code that you ultimately want to release – several bad things happen:

  • The longer you leave it, the harder it becomes to merge, because as other people check in to mainline, mainline diverges from your branch. Really awesome merging tools help with this to some extent, but anyone who has done much programming has experienced situations where the code merged successfully but the application broke. The probability of this happening increases substantially – more than linearly – as the amount of stuff you need to merge, and the time between initial branch and final merge, increases.
  • The more work you do on your branch, the more likely it is you will break the system when you merge into mainline. Everyone has had the experience of getting in the zone and running with what seemed like a genius solution to your problem, only to find hours – or days – later that you need to scrap the whole thing and start again from scratch, or (more subtly and more commonly) that your check-in resulted in unintended consequences or regressions.
  • When you have more than a handful of developers working on a codebase and people work on feature branches, it becomes difficult to refactor. If I refactor and check in, and other people have significant amounts of stuff on branches, I make it much harder for them to merge. This is a strong force discouraging me from refactoring. Not enough refactoring equals crappy code.

These problems go away when people regularly merge their work into mainline. Conversely, they become exponentially more painful as the size of your team increases. Furthermore, there’s a vicious circle: the natural reaction to this pain is to merge less often. As I am fond of saying, when something hurts, the solution is to do it more often, and to bring the pain forward. In this case this is achieved by having everyone merge to mainline more frequently.

However, it’s hard to do this if you’re working on a feature that involves a lot of work, or if you’re working on a large-scale change to your system. Here are some solutions.

  1. Break down your stories into smaller chunks of work (sometimes referred to as tasks). I have never yet found a large piece of work that I couldn’t split into smaller chunks – usually less than an hour and almost always less than a day – that got me some way towards my goal but kept the system working and releasable. This involves careful analysis, discussion, thought, and discipline. When I can’t see a way to do something incremental in less than a couple of hours, I try spiking out some ideas[2]. Crucially though, it means I get essential feedback early on as to whether my proposed solution is going to work, or whether it will have unintended consequences for the rest of the system, interfere with what other people are working on, or introduce regressions (this is the motivation for continuous integration).
  2. Implement stories in such a way that the user-facing bits are done last. Start with the business logic and the bits further down the stack first. Grow your code using TDD. Check in and merge with mainline regularly. Hook up the UI last[3].
  3. Use branch-by-abstraction to make complex or larger-scale changes to your application incrementally while keeping the system working (see the sketch after this list).
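To make branch-by-abstraction concrete, here is a minimal sketch in Java. It assumes a hypothetical persistence component as the thing being replaced; the interface and class names are illustrative, not taken from any particular codebase.

```java
// A minimal sketch of branch by abstraction, using a hypothetical persistence
// component as the thing being replaced. All names here are illustrative.

// Step 1: introduce an abstraction over the part of the system you want to change.
interface DocumentStore {
    void save(String id, String contents);
    String load(String id);
}

// Step 2: move the existing implementation behind the abstraction, unchanged.
class LegacyFileStore implements DocumentStore {
    public void save(String id, String contents) {
        // existing file-based code lives here
    }
    public String load(String id) {
        return ""; // existing file-based code lives here
    }
}

// Step 3: build the replacement incrementally behind the same abstraction.
// It can sit on mainline, partially finished and unused, without breaking anything.
class CloudStore implements DocumentStore {
    public void save(String id, String contents) {
        // new implementation, grown a little with every check-in
    }
    public String load(String id) {
        return ""; // new implementation
    }
}

// Step 4: switch callers over (all at once or one by one), then delete the
// legacy implementation – and, if it no longer earns its keep, the abstraction.
class DocumentService {
    private final DocumentStore store;

    DocumentService(DocumentStore store) {
        this.store = store; // wired to LegacyFileStore today, CloudStore later
    }

    void archive(String id, String contents) {
        store.save(id, contents);
    }
}
```

At every step the system still builds and passes its tests, so the work can be checked in to mainline continuously rather than accumulating on a branch.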

How do you know when you’ve got too much unmerged stuff? Here’s a thought experiment. Imagine you’re the maintainer of an open source project, and someone you don’t know just submitted what you have locally on your branch as a patch. Would you merge it? Is the unified diff with mainline more stuff than you can easily keep in your mental stack when you read it? Is your intent sufficiently clear that someone else on your team could understand it in a minute or so without having to ask you? If you can’t answer “yes” to all these questions, then you need to stop working, stash your work, and split it into smaller chunks.

It should be clear that I’m not really attacking feature branches, provided your “features” are sufficiently small. However, people who use feature branches overwhelmingly fail the test in the previous paragraph, which is why it makes for a nice soundbite. Really experienced developers understand the trade-offs that using feature branches involves and have the discipline to use them effectively, but they can still be dangerous – GitHub is littered with forks created by good developers that are unmergeable because they diverged too far from mainline.

The larger point I’m trying to make is this. One of the most important practices that enables early and continuous delivery of valuable software is making sure that your system is always working. The best way for developers to contribute to this goal is by ensuring they minimize the risk that any given change they make to the system will break it. This is achieved by keeping changes small, continuously integrating them into mainline, and making sure there is a comprehensive suite of automated tests to verify that changes behave as expected and don’t introduce any regressions.

What about feature toggles?

See how I haven’t even mentioned feature toggles yet? Feature toggles don’t even come into play unless you have a complete, user-visible feature that you don’t want to appear in your next release. In this situation, the feature-branch alternative is to keep your feature branch unmerged until after your release. Unless you’re doing continuous deployment, or working on a small and experienced team, this is a painful and risky proposition.

However another (perhaps more important) use of feature toggles is to reduce the risk of release, and to increase the resilience of your production systems. The most important part of release planning is working out what to do when things go wrong (this is known as “remediation” in ITIL circles). Re-deploying an old version is usually what people opt for, but having the ability to turn off problem features without rolling back the whole release is a less risky approach. In terms of resilience, an important technique is the ability to gracefully degrade your service under load (see John Allspaw’s 40m talk at USI for a masterful discussion of creating resilient systems). Feature toggles provide an excellent mechanism for doing this.
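As an illustration, here is a minimal sketch of a runtime feature toggle in Java, assuming toggles are read from a simple properties file. The file name, toggle name, and classes are hypothetical, not a specific toggle library.

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

// A minimal sketch of runtime feature toggles read from a properties file,
// e.g. a features.properties file containing a line like "recommendations=true".
class FeatureToggles {
    private final Properties toggles = new Properties();

    FeatureToggles(String path) throws IOException {
        try (FileInputStream in = new FileInputStream(path)) {
            toggles.load(in);
        }
    }

    boolean isOn(String feature) {
        // Default to off, so a missing or unknown toggle degrades gracefully.
        return Boolean.parseBoolean(toggles.getProperty(feature, "false"));
    }
}

class HomePage {
    private final FeatureToggles toggles;

    HomePage(FeatureToggles toggles) {
        this.toggles = toggles;
    }

    String render() {
        StringBuilder page = new StringBuilder("core content");
        // The risky or expensive feature is guarded by a toggle, so it can be
        // switched off in production – for remediation, or to shed load – without
        // rolling back the whole release.
        if (toggles.isOn("recommendations")) {
            page.append(" + recommendations widget");
        }
        return page.toString();
    }
}
```

The same guard can be extended to per-user or percentage-based checks for gradual rollout; the point is that the unfinished or risky code lives on mainline behind a switch rather than on a branch.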

For people who are skeptical about feature toggles, or interested in finding out more, I highly recommend that you look at Facebook’s video on release management (look for the section on “Gatekeeper”). Sarah Taraporewalla also just wrote an experience report on using feature toggles.

What about cherry picking?

Some people recommend keeping features out of mainline until they’re ready to be released, perhaps keeping them on a development branch that developers check in to, and then cherry-picking them in. However assuming you’re following the guidelines I provide above and your stories are small, the need to take features out is very much the exceptional case, not the normal case.

Furthermore, you then face all the problems that I mention elsewhere of getting from done to done done[1] – the pain of integrating, regression testing, performance testing and so forth. With continuous delivery, you completely get rid of any integration or testing phases. In my experience, unless you have a small, experienced team working on a well-factored codebase with plenty of automated tests, these benefits massively outweigh the pain of occasionally having to take a feature out – and feature toggles provide a cheaper alternative if your analysis is done right.

Of course a key assumption here is that your stories are small and don’t spatter stuff all across the UI. I discuss this, and other considerations when analyzing stories in the context of continuous delivery, in another article.


[1] A feature that is dev complete is “done”. A feature that is released is “done done”. One of the axioms of continuous delivery is that much of the pain and risk in releasing software occurs after software is “done”, particularly if your work isn’t sitting on mainline and needs to be merged. Thus “saving” your work on a feature until it is “done” doesn’t really make sense. Some people recommend not merging until after a feature is tested and showcased, but this seriously exacerbates the problems described below without providing much additional benefit, since tested, showcased features are still not “done done” (consider the need to integrate your code and run regression tests, for example).
[2] Spiking is the practice of writing some code that you will throw away to test out an idea. The output of a spike is knowledge.
[3] I am not trying to imply you shouldn’t prototype the UI early on.

This post is from Continuous Delivery by Jez Humble.

