
Metric-Driven Management Versus Management-Driven Metrics

The demand for IT metrics is outstripping IT's ability to produce them. Part of this is due to the rise of quantitative management practices in business: the trend is to measure everything in sight in an effort to show change in performance over time, as well as competitiveness (to the extent possible) relative to commercial peers. But part of it is also self-inflicted: long delivery horizons punctuated by abstract milestones make IT opaque. This puts IT under increased pressure to produce performance and quality metrics. More often than not, the metrics IT comes up with contribute more noise than signal.

The root of the problem is how IT goes about its business. Traditional IT abstracts a business need into collections of narrowly defined technical tasks that are performed by technical specialists. The UI might be developed by one set of programmers, middleware by another, back office by yet another. In addition, some roles, notably QA and database administrators, are assigned to support multiple teams simultaneously. The net effect is that people in IT teams don't work together, they work individually on fragments of business requirements.

To get control over this widely distributed work, IT tries to measure down to the smallest unit of work it possibly can. This is done by creating project plans with highly detailed inventories of technical tasks, estimating the effort required to complete each, and assigning them to people in very specifically defined specialist roles. The assumption is that completion of all defined tasks will result in a complete business solution. "Progress" toward completion is measured as the percentage of time spent versus the time estimated.
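
As an illustration, here is a minimal sketch of how such an effort-based "progress" number is typically rolled up; the tasks, estimates and hours are hypothetical:

```python
# Hypothetical task inventory, for illustration only.
# "Progress" is reported as hours spent over hours estimated, rolled up
# across the task list: a measure of effort consumed, not of working software.

tasks = [
    # (task, hours_estimated, hours_spent)
    ("build UI screen", 40, 30),
    ("write middleware service", 60, 45),
    ("create database tables", 20, 20),
]

total_estimated = sum(estimated for _, estimated, _ in tasks)
total_spent = sum(spent for _, _, spent in tasks)

print(f"Reported progress: {100 * total_spent / total_estimated:.0f}%")
```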

Quality is similarly measured to a fine-grained level of precision. In traditional IT, QA is a discrete phase that follows development, during which very specific test scripts are executed against delivered code. The assumption here is that the software is of sufficient fitness once all test scripts pass. "Quality" is defined largely as the pass rate of test scripts.
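
In the same spirit, here is a minimal sketch of the pass-rate calculation; the test scripts and outcomes are hypothetical:

```python
# Hypothetical test-script outcomes, for illustration only.
# "Quality" is reported as the percentage of scripts passing, regardless of
# why a script did not pass or how well the scripts reflect real usage.

results = {
    "login flow": "pass",
    "order entry": "did not pass",
    "reporting": "pass",
    "billing": "pass",
}

passing = sum(1 for outcome in results.values() if outcome == "pass")
print(f"Reported quality: {100 * passing / len(results):.0f}% of test scripts passing")
```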

Three factors undermine this pursuit of precision. The first is the tragedy of the commons: because everyone is responsible for delivering a project, nobody takes responsibility to see that it's delivered. In this model, no one person is responsible for completing a business requirement from end to end; people are responsible for nothing more than working their particular task to completion and then passing it along. The second is a soft definition of what it means to get work done. "Completion" can be relative, interpretive, or simply a matter of working to a state where nobody can say that a technical task isn't "done." The third is an increased number of hand-offs. Hand-offs create the risk of misunderstanding, and misunderstanding leads to rework.

Thus we may be highly cost efficient at the unit-of-execution level, but horribly cost ineffective at delivering solutions, because the whole is always less than the sum of the parts.

Project management isn't as precise as it hopes to be. IT is a business of problem solving, not executing to script. In any project, every day we discover new things that weren't in the original task order. Through all the hand-offs, we also create work that isn't on the plan. This means we continuously create work that isn't tracked. It ends up "off-balance sheet" to our project: the shadow matter of our project universe.

Quality management must deal with two problems. First, staging QA to happen only after a long development cycle means there is long latency in feedback. Second, that feedback is of questionable value, as it comes as single points of potential failure, not comprehensive feedback on fitness or quality. That is, a test script could fail for any one of a number of reasons: environment, data, incorrect interpretation of results or an actual software defect, to name a few. At best, all we can say is that the test script did not pass; that is not the same as saying the software failed. Each failure report is of limited value and requires re-creation and investigation by somebody else. True product quality may be obscured. It may also be incompletely assessed: the universe of test scripts may not in fact test how the software will actually be used, so "percentage of test scripts passing" may be an inaccurate indicator of fitness.

Worst of all, we're measuring hours. The business isn't buying hours, it's buying software. These metrics confuse effort with results. They are not the same thing. Ask any CEO which they'd rather be armed with when they go to talk to Wall Street.

Ironically, not only does this approach not unwind after a project failure, it intensifies. When projects fail, the tendency is to double down: longer time horizons for phases of activity, more explicit role definitions, more precise task definition, more precise estimating, and above all more project data. As the saying goes, "there must be a pony in here somewhere!"

Caught in the middle of this is the project manager. While the increase in project data doesn't lead to an increase in project information, it does create an increase in effort needed to collect the data. This relegates the project manager to a role of spreadsheet-pusher. The one person on the hook for results spends all of his or her time chasing after the metrics to feed the data collection beast, not doing what a manager is supposed to do: get things done through people. The metrics drive management.

Agile offers a materially different alternative. Agile teams don't work to an abstract set of tasks; they work to deliver small, discrete chunks of functionality that are recognizable to the business. People in Agile teams don't work independently; they work as an integrated team, collaborating and pairing to achieve team goals. They also work to a state where delivered code is functionally and technically complete. By making deliveries as often as daily, and immediately sharing those deliveries with both business partners and QA, functional completeness is certified as near to the moment of creation as possible, and in the full context of the business need.

Asset capability, certified to satisfy the business need, is a far more meaningful measure of progress than time spent, obviously because it's a measure of results. It's also beneficial to the project manager, who no longer has to translate from one set of abstract measures into another, but can simply report out team status in a common language with the business. Management is no longer at the mercy of metrics, but is master of them.
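
A minimal sketch of what this results-based view of progress might look like, with hypothetical features standing in for business-recognizable functionality:

```python
# Hypothetical sketch: progress as functionality that the business has
# verified and accepted, rather than as hours consumed against an estimate.

stories = [
    # (feature, accepted_by_business)
    ("customer can search orders", True),
    ("customer can export invoices", True),
    ("customer can schedule reports", False),
]

accepted = [name for name, is_accepted in stories if is_accepted]
print(f"Working, accepted features: {len(accepted)} of {len(stories)}")
for name in accepted:
    print(f"  - {name}")
```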

It also means that "effort" is not a proxy for "results." This shifts the focus away from the cost of IT and toward an expression of speed. IT has an obsession with "cost per function point," when it should be obsessed with time-to-live: the length of time it takes for a priority feature to go from idea to production. The shorter the time to get something live, the more competitive the organization. In a time when the cost of capital is rising, it is far more expensive for the business to lack a capability than to have it rapidly, and competently, delivered.
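
Time-to-live is also simple to express; in this sketch, the features and dates are invented for illustration:

```python
# Hypothetical sketch of measuring time-to-live: the elapsed time from the
# moment a priority feature is requested to the moment it is live in production.

from datetime import date

features = [
    # (feature, requested, live_in_production); dates are invented
    ("one-click reorder", date(2009, 3, 2), date(2009, 3, 20)),
    ("saved searches", date(2009, 3, 9), date(2009, 4, 3)),
]

for name, requested, live in features:
    print(f"{name}: {(live - requested).days} days from idea to production")
```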

This is a critical differentiating point for IT. Providing the lowest cost per function point describes an effective utility, no different from water, electricity or garbage collection. Utilities don't create business impact. Providing the fastest time to market describes a strategic partner. Being fast to market is far more critical a factor if IT is to be a contributor to business competitiveness.
