Forced by circumstances (and an especially pragmatic client), I've recently been asking peers and "the blogosphere" the apparently naive question: "Is it important to do automated testing and to clear up what Mike Cohn calls 'manual test technical debt'?"
Theoretically, if you have no Big Upfront Design and you also have no automated test suite, you're pretty much just building a big unplanned mess and you'll never be able to change anything for fear of breaking something else. I understand that, and so does my pragmatic client. But can this issue be quantified? Aside from feeling somewhat inadequate when reading agile theory, I mean.
The reason this question matters is that setting up and maintaining automated testing is expensive. Unlike many agile practices, which can be kicked off with some Sharpies and a clean wall, automated testing requires a large-scale capital expenditure to bring one or more testing packages into an organization, followed by significant strategy development, staging, and staff training to put those new packages into use and keep them running.
You need a compelling financial case to show the likely return on an investment that may run to half a million dollars for software alone in a medium-sized corporation. So how much is it worth to clean up your manual test technical debt?
There are some related questions out there with quantified answers. Gartner and CAST have recently published estimates of how much technical debt is out there globally and on a per-application basis ($500B and $1M, respectively). Both studies define "technical debt" as "the cost of dealing with delayed and deferred maintenance of the application portfolio." As most people who worry about technical debt note, this "cost to fix" is not the same as what it costs the business to have a software emergency of some kind, or to be slow in rolling out enhancements to the software. And of course this "cost to fix" is not exactly the same as the cost an organization accrues through the lack of automated testing. But it's a handy statistic to be able to throw around.
Within your own organization, you may want to look at figures like the following in making your business case (a back-of-envelope sketch follows the list):

- the recurring cost of each manual regression cycle, in tester-days;
- the cost of production emergencies that better regression testing would have caught;
- the cost of being slow to roll out enhancements;
- the up-front cost of tooling, plus strategy development, staging, and training.
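To make that concrete, here is a minimal sketch in Python of how those figures might be compared. Every number in it is hypothetical, and the structure is just one way to frame the comparison; the point is that each input is something your organization can actually go and measure.

```python
# Back-of-envelope model: annual cost of manual-test technical debt
# versus the cost of an automated-testing investment.
# Every figure below is hypothetical -- substitute your own numbers.

def annual_manual_testing_cost(releases_per_year, tester_days_per_release,
                               loaded_cost_per_tester_day):
    """Recurring cost of running the manual regression suite."""
    return releases_per_year * tester_days_per_release * loaded_cost_per_tester_day

def annual_emergency_cost(emergencies_per_year, avg_cost_per_emergency):
    """Cost of production emergencies that regression testing should catch."""
    return emergencies_per_year * avg_cost_per_emergency

# Hypothetical mid-sized shop: 6 releases/year, 120 tester-days each.
manual = annual_manual_testing_cost(6, 120, 600)    # $432,000/year
emergencies = annual_emergency_cost(4, 50_000)      # $200,000/year

# Investment side, per the figures discussed above: tooling plus
# strategy development, staging, and training (one-time, hypothetical).
investment = 500_000 + 150_000

# Suppose automation eliminates 70% of manual regression effort and
# half of the escaped emergencies (assumptions, not benchmarks).
annual_savings = 0.7 * manual + 0.5 * emergencies   # ~$402,400/year

payback_years = investment / annual_savings
print(f"Payback period: {payback_years:.1f} years")  # ~1.6 years
```

With these made-up inputs the investment pays back in under two years, which is the kind of statement a CFO can actually evaluate; with your own inputs the answer may well be different.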
It's all very well and good for us zealots to say things like "without automated testing, you Just Aren't Agile." But if you're looking to get your CFO to release significant funds, you'll want a case relevant to the company's bottom line, not just an alignment with some invisible Agile Correctness measure.