By Scott Shaw
Learning from past mistakes
It might seem odd to give advice on cloud adoption at this point in the hype cycle. After all, much of the industry considers public cloud a settled area with established practices and success stories. Surely there’s more than enough advice to help newcomers plan and execute their cloud transition? Yet despite all the available information, I still see organisations struggling on their cloud journey. Many well-intentioned cloud migration programs sputter and stall in their attempt to turn the promise of on-demand elastic compute into concrete, measurable benefits for their business.
Sure, we have shelves of literature describing how to architect cloud systems “well” and how to securely engineer complex compute environments and network topologies. But there’s still very little information on how to structure your organisation, or how to employ operating models that embrace the potential of extreme virtualization. Meanwhile, I see businesses busily ignoring the lessons of the past and creating fleets of assets that will be, at best, costly to maintain and, at worst, unmaintainable. In their haste to move to the cloud, organisations are accruing technical debt faster than they can pay it down. They’re creating one-off, hard-to-replicate environments and failing to implement even the most basic quality checks on those assets. In the long run, these assets will consume far more in maintenance costs than will ever be saved by moving from an on-premise data center to a public cloud provider.
In this article I’ll try to point out some of the common pitfalls I’ve seen companies fall into in their move to the cloud and then humbly offer an alternative. I’d like to encourage a cloud adoption approach that takes more than infrastructure into account. I’m of the firm opinion that the shift to public cloud is so profound that we need to change the entire business to adapt. We can’t apply the same culture, organisational structures, roles, responsibilities, engineering practices and vendor choices that have worked (or not) for us in the past. All of these need to evolve beyond the days when software was hosted in a physical facility built and maintained by a separate group of infrastructure specialists. In many organisations that split still exists, but the power we now have to completely reconfigure our virtual data centers with a few lines of code demands a different approach.
Why is cloud success so elusive?
I’ve seen this pattern play out a number of times over the course of my career. A new technology comes along, promising to be the wave of the future. Industry analysts draw quadrants and select winners, and businesses assume that by going with a winner, they can avoid the hard work of learning a new discipline. They assume that by selecting the right IT outsourcing partner and managing the costs closely, the business benefits will naturally accrue. This has never been the case. Businesses that succeed with new technology do so by adapting their organisation, their operating model and their engineering capabilities to the new paradigm. In many cases they need to adapt their business model as well.
In the case of public cloud, one way this fallacy manifests is in the belief that cloud is simply an alternate form of infrastructure. Cloud becomes the responsibility of the infrastructure or operations division who previously managed data centers and physical networks. Typically, these teams manage capital assets as a service to the rest of the organisation. When infrastructure teams become the custodians of corporate cloud resources, those facilities become — in effect — an extension of the existing infrastructure. This approach is widespread and often results in “hybrid” cloud implementations where the public compute resources seamlessly (in theory) extend the on-premise assets.
While this approach might make intuitive sense and works — to a degree — it doesn’t acknowledge or accommodate the potential revolution in technology promised to businesses by cloud pundits and analysts. In organisations where cloud is managed by an infrastructure team, environments tend to be doled out like a scarce capital resource. Often, digital delivery teams have to request a cloud environment via a support ticket and wait for it to be manually or semi-automatically provisioned. This makes no sense in a cloud-native world, where resources are abundant and can be provisioned with a line of code. If the concern is security or compliance, there are ways to address those risks without giving up self-service access to cloud environments on demand.
One of the fundamental misunderstandings underlying this approach is the belief that cloud is just a virtual form of hardware infrastructure. But cloud isn’t hardware. Rather, it’s 100% software.
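If cloud is software, an entire environment can be expressed as code that lives in version control and is reviewed, tested and deployed like any other software asset. The sketch below is deliberately tool-agnostic: the environment shape, field names and rendering function are illustrative assumptions of mine, not any real provider’s API, but they show the principle behind infrastructure-as-code tools such as Terraform or CloudFormation — describe the environment declaratively, then let a pipeline turn that description into provisioning calls.

```python
import json

# Hypothetical, tool-agnostic sketch: the environment shape and field
# names below are illustrative assumptions, not a real provider's API.
ENVIRONMENT = {
    "name": "checkout-staging",
    "network": {"cidr": "10.1.0.0/16"},
    "compute": [{"role": "web", "instances": 2, "size": "small"}],
    "tags": {"team": "checkout", "cost-centre": "retail"},
}

def render_provisioning_request(env):
    """Turn the declarative description into the JSON payload a delivery
    pipeline might submit to a cloud provider's provisioning API."""
    resources = [{"type": "network", **env["network"]}]
    for spec in env["compute"]:
        resources.append({"type": "vm", **spec, "tags": env["tags"]})
    return json.dumps({"environment": env["name"], "resources": resources}, indent=2)

print(render_provisioning_request(ENVIRONMENT))
```

Because the environment is just data plus code, a delivery team can spin up an identical copy on demand, diff two environments, or destroy and recreate one in minutes. None of that is practical when environments are hand-built from support tickets.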
The same businesses that hand over cloud implementation to an infrastructure or operations team would never dream of handing that same team a major, multi-year software development project. But that’s what cloud implementation is: a major, multidisciplinary software delivery effort. Nobody would undertake even a medium-size software development project these days without things like a product owner, experience designers, experienced software tech leads, automated testing, and quality assurance. But I’ve seen countless cloud migration projects executed with none of this software professionalism. Sometimes a DevOps team (an oxymoron) is involved, but this is still seen as a cloud specialist team, not a general software delivery team.
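What does even basic software professionalism look like when applied to cloud assets? At minimum, automated checks that run in a pipeline before anything is provisioned. The sketch below is a hypothetical illustration (the resource shape and the two rules are my own assumptions, not any real policy tool’s schema), in the spirit of policy-as-code tools such as Open Policy Agent:

```python
def check_environment(resources):
    """Return a list of policy violations; an empty list means the
    environment definition passes the checks."""
    violations = []
    for res in resources:
        # Rule 1: every resource must declare an owner, for cost and support.
        if "owner" not in res.get("tags", {}):
            violations.append(f"{res['id']}: missing 'owner' tag")
        # Rule 2: never expose SSH (port 22) to the whole internet.
        for rule in res.get("ingress", []):
            if rule["cidr"] == "0.0.0.0/0" and rule["port"] == 22:
                violations.append(f"{res['id']}: SSH open to the internet")
    return violations

# An illustrative environment definition with one compliant resource
# and one non-compliant resource.
resources = [
    {"id": "web-1", "tags": {"owner": "checkout"},
     "ingress": [{"cidr": "0.0.0.0/0", "port": 443}]},
    {"id": "bastion", "tags": {},
     "ingress": [{"cidr": "0.0.0.0/0", "port": 22}]},
]
for violation in check_environment(resources):
    print(violation)  # prints the two 'bastion' violations
```

Failing the build on any violation gives cloud assets the same quality gate that a unit test suite gives application code.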
Another common pitfall that businesses encounter in their rush to move to the cloud — and which constrains their long-term success — is to become deeply and inextricably entangled with a single cloud vendor. Again, the industry has a long history of experience in managing IT outsourcing and most experts agree on the need to maintain control and flexibility over outsourced service arrangements.
I recall working with a large financial services company a number of years ago to help it move away from its chosen database vendor. The open-source database alternatives had matured to the point where they equaled or surpassed the vendor’s products, commercial support was available for the open-source platform, and the commercial database vendor was becoming less responsive to the customer’s needs over time. Sounds easy, right? After all, SQL is a standard. However, the program was long and expensive and, in the end, didn’t actually enable the customer to retire the database vendor’s products entirely.
The reasons are familiar to anyone who has worked in this field. Over the years, the vendor had introduced innumerable “time-saving” proprietary features into their base product. DBAs succumbed to the temptation to use these features and implemented bits and pieces of business logic directly in the database platform, without the usual software engineering safeguards. They also took advantage of proprietary SQL extensions that made it easier to store and retrieve things like time series and images. But the real portability killer was the hidden integration with other products in the vendor’s substantial suite: integration adapters with proprietary point-and-click configuration, heavy dependence on the vendor’s optimized storage appliances, and automatic integration with the vendor’s identity and access management platform. All of this entanglement was slow and tedious to unpick, and of course the vendor’s consultants, who had been so helpful in adopting the new features, had no advice to offer on moving away from the products. In the end the customer just threw up their hands and accepted that some systems would have to live out their lifetime until no more end users depended on them.
Although this is a familiar story, the lessons learned seem to be lost on the current generation of organisations rushing to pledge their allegiance to one cloud vendor or another. Partly this is due to the market domination of a single vendor who more or less invented the idea of cloud computing and whose offerings far outpace those of its competitors. But we are at a point today where businesses seeking public cloud services have a choice of vendors. The three major vendors have achieved relative parity across a fundamental set of services and are competing fiercely to capture a larger portion of the market. Their pricing, the services they offer and their business practices all encourage cloud customers to consume ever more services and to take greater advantage of the vendor’s unique, differentiating features. These proprietary features are inherently difficult to port from one vendor to another. Sometimes this is to the consumers’ advantage. Cloud vendors are outdoing one another in providing a seamless, low-friction developer experience. But single-vendor entanglement also carries risk — primarily by moving control over IT decision-making away from the business and into the hands of the vendor.