"Oh for goodness sake, you put it in upside down!"
"I'm sorry, Secret. I thought the pointy end went in first."
-Secret Squirrel and Morocco Mole, Secret Squirrel
I'm always excited to learn something new, and this week a colleague introduced me to the "Lead/Lag" concept of measuring the performance of a change program such as an agile transformation. He also introduced me to the "Secret Squirrel (and Morocco Mole)" Hanna-Barbera cartoon series from the 1960s, which briefly seemed to be a more interesting thing to discuss, but I'm pretty sure you guys should all pursue that on your own without further commentary from me. We agilists are a fun bunch.
Seriously, though, this Lead/Lag thing gets you exactly where you want to be as you design your agile transformation, and it keeps you from drowning in a pool of agile purism. "Is not Scrum/Is so Scrum" is not the discussion you want to be having for very long, especially when accompanied by ineffectual slapping. You want to be looking at "Is not valuable/Is so valuable." And yet if you solely focus on "what is valuable for the business," what makes you different as an agilist from anyone else?
This is a serious question and I will conjecture that some of us in the agile/lean community focus so much on "pure" agile behaviors because we are worried we will lose our identity if we throw ourselves whole-heartedly into a process of self-improvement which deviates from some well-known agile or lean script. Witness the large-scale prejudice of the agile community against the PMI and the BABOK. We've got excellent pioneers working to put PMI/BABOK/CMM insights to work in agile shops, but they need to be very brave and very impervious to sarcasm.
Anyway, enter the Lead/Lag indicators, perhaps in a Secret Squirrel flying saucer. The lead/lag concept, as most of you probably already knew, is one where you purposely track two sets of measures over time with the expectation that they will correlate. For an agile transformation, the "lead" measures would be things like:
- Do you have teams that develop iteratively?
- Is there a card wall?
- And even: would Jeff Sutherland certify this team as one following "pure Scrum"?
You can empirically measure, over time, how many teams do these things, and how well those teams follow your company's local form of agile "correctly." This is certainly something you want to do if you are heading up an agile transformation program in an enterprise environment.
The "lag" measures, however, are the ones which will motivate your stakeholders. They will be things like :
- Increased speed to market
- Higher code quality
- Better ROI on dollars invested in IT projects
As you design your program, what you want to do is motivate your stakeholders using the lag measures, and build a case as soon as possible for a correlation between what you measure around the penetration of your chosen agile practices into a firm, and what impact those measures seem to have on things of value to the business.
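To make that correlation-building concrete, here is a minimal sketch of the arithmetic involved. Everything in it is hypothetical: the monthly numbers are invented for illustration, and `pearson` is just a hand-rolled correlation coefficient, not any particular metrics tool.

```python
from statistics import mean

# Hypothetical monthly measurements from an agile transformation.
# lead: % of teams developing iteratively (the practice you roll out)
# lag:  median weeks from idea to production (speed to market)
lead = [10, 20, 35, 50, 65, 80]   # adoption climbs month over month
lag = [26, 25, 22, 18, 15, 12]    # cycle time falls over the same months

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / (varx * vary) ** 0.5

r = pearson(lead, lag)
print(f"lead/lag correlation: {r:.2f}")  # strongly negative: as adoption rises, cycle time falls
```

A strongly negative coefficient here is exactly the story you want to tell a sponsor: the lead measure (practice adoption) moves, and the lag measure (time to market) moves with it. Correlation is not causation, of course, which is why you keep taking the measurements over time rather than declaring victory after one report.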
Thought of this way, there are things you should think about measuring that you might not otherwise consider, because they are "obvious" to you as an agilist. One example is "transparency." As agilists, we take it for granted that since agile produces working software with every iteration, a program office running a set of agile practices will have the ability to measure EXACTLY what percentage of planned scope is complete at ANY time after the release plan is created. But for someone new to agile, this concept is not obvious.
So one thing to think about is creating side-by-side snapshots taken at two-week intervals of the "project health dashboard" your PMO keeps on its waterfall projects, and corresponding snapshots taken at the same interval of the health of your first agile projects. If you're honest, most likely the agile projects will track red at first, not least because the team is processing a lot of new stuff. Later the projects will go yellow and green. That's good.
But what is GREAT is that while your agile "health" dashboard is telling the truth every two weeks, the snapshots of the waterfall projects will show a pattern of "all green until red" or "all amber until red" or "all red, then suddenly green," or the like. The point is that in a waterfall project, you JUST DO NOT KNOW if you are actually okay or not until the day you go into user acceptance testing, and sometimes not even then, if UAT looks bad enough. By measuring "transparency of project health" through snapshots at the company where you are introducing agile, you make the change concrete and immediately apparent. This is the type of metric that will "motivate" even the most senior and agile-unfamiliar CIO "elephant," in the terminology of Dan Heath and Chip Heath.
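One way to put a number on that "all green until red" pattern is to record the two-week RAG snapshots and count how often the reported status actually changes. The histories and the `status_changes` helper below are hypothetical, offered only as a sketch of the idea:

```python
# Hypothetical two-week RAG ("red/amber/green") snapshot histories
# from a PMO dashboard, one list entry per snapshot.
waterfall = ["green", "green", "green", "green", "green", "red"]
agile = ["red", "red", "amber", "amber", "green", "green"]

def status_changes(history):
    """Count snapshots that differ from the previous one -- a crude
    proxy for how much real information the dashboard is conveying."""
    return sum(1 for prev, cur in zip(history, history[1:]) if cur != prev)

print(status_changes(waterfall))  # the classic "all green until red" cliff
print(status_changes(agile))      # status tracks reality as the team learns
```

The absolute counts matter less than the shape: a dashboard that never moves until the project collapses was never telling you anything, while one that goes red, then amber, then green is reporting the truth every two weeks.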
I am on the road, so I can't get my teenager to explain this graph to you in detail, but the general idea is that by measuring some simple and straightforward things, you will be in a position to do some dynamite and statistically interesting "lead/lag" metrics reports to your agile transformation sponsors weeks or months into your project. The faster you can show a correlation between "we did this agile practice, and here's the benefit to the business," the faster you're going to get enthusiastic buy-in from your stakeholders, instead of eye-rolling and nervous references to the "TQM" fad from the 1980s.
Moreover, measuring this way keeps you honest. What key business performance indicator can you impact most quickly in your context, and how? Maybe you should lead with automated testing, not the business case. Maybe it's the opposite. But even if you don't share with your stakeholders (in many cases the truth is something to be very careful with), YOU should know what you're doing and how it matters to the business, with as much quantitative evidence as you can muster.
So don't stop with "we're agile!" Drive directly to "we're agile, and you can tell, because the business is already better." Don't settle for putting the pointy end in first.
Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.