7. Should you start over?
In his essay “Things You Should Never Do”, Joel Spolsky argues that rewriting code from scratch to deliver the same functionality is the single worst strategic mistake an organization can make. You hand your competitors a gift of at least two or three years: while you rebuild, you are unable to make strategic changes or react to new features the market demands, and your market share is likely to plummet. There are many examples of software rewrites that consumed millions of dollars and went nowhere.
So what should an organization do if it is caught in the Vortex of Doom, where change is slow and painful, so the organization tries to fix it by adding more process, which slows things down further? Sometimes creative workarounds emerge, such as attempting to ‘innovate’ by adding a new user interface without fixing the foundations. But you can’t hide for long: when you stop meeting customers’ needs, the competition is soon snapping at your heels.
If you haven’t invested in your systems for a long time and incremental improvements won’t deliver at the pace needed, you may not have many options left; a rewrite is something you should consider. Before embarking on that journey, here are some considerations:
- Is the legacy codebase really messed up beyond repair? Is any of it salvageable?
- Are the legacy systems so intertwined that even simple changes require a cascade of changes to other parts of the code?
- Is the original technology choice constraining you from making improvements?
- Is the original technology unsupported?
- Are the skills to maintain it rapidly diminishing in your organization and the industry?
- Are competitors taking advantage of your lack of responsiveness?
- What is the general market direction for your product or service? Is your current approach still on point, or does it need a refresh or rethink?
- Avoid replacing your software with a new version of the same thing; instead, build something new alongside it without throwing away what you have
- Make sure you remove functionality that is not needed, and use data, not sentiment, to decide
- Don’t get stuck in a cycle of pilots and proofs of concept that go nowhere; have a plan to operationalize them, or kill them early once you have the relevant data
- Look to solve the problem in a completely different way, given the options that new technology affords
- Remember to carry forward the learnings from your legacy, especially ideas for a fundamentally new approach
8. Bad code is bad for business

The complexity of software systems can be hard and frustrating for an executive to understand. What seems like a simple request, “I just want a new button on a screen”, can be genuinely difficult and time-consuming for the team implementing it, especially when there is enormous complexity and legacy inside the system.
Complex systems become slower and slower to change. As a result, a “them and us” attitude slowly emerges in the organization, and the divide between business and IT widens into a chasm that feels impossible to bridge, usually driven by the business’s frustration at something seeming simple when it is not.
To understand why this happens, we need to distinguish inherent complexity from accidental complexity. Inherent complexity is unavoidable: anyone building a software system has to deal with it, and how much there is depends on your domain. Accidental complexity, on the other hand, is a by-product of the decisions made by everyone involved in creating the software, from the developer writing the code, to the product manager deciding the priority of a feature, to the exec deciding what investments should be made.
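To make the distinction concrete, here is a small, entirely hypothetical Python sketch: both functions satisfy the same requirement, but the first drags in accidental complexity that every future change must wade through.

```python
# Hypothetical illustration: both functions meet the same requirement
# (sum the prices of in-stock items), but the first has accumulated
# accidental complexity: manual looping, nested branching, and a dead
# branch kept "just in case", all of which a maintainer must untangle
# before safely changing anything.

def total_in_stock_legacy(items):
    result = 0
    i = 0
    while i < len(items):
        item = items[i]
        if item.get("stock") is not None:
            if item["stock"] > 0:
                if "price" in item:
                    result = result + item["price"]
                else:
                    pass  # dead branch nobody dares to delete
        i += 1
    return result

def total_in_stock(items):
    # The same behavior, carrying only the inherent complexity of the rule.
    return sum(i["price"] for i in items if i.get("stock", 0) > 0)

items = [{"price": 10, "stock": 2}, {"price": 5, "stock": 0}]
assert total_in_stock_legacy(items) == total_in_stock(items) == 10
```

Neither version is “wrong”; the point is that the first costs far more to read and change, and real systems accumulate thousands of such spots.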
The accumulated result of these decisions is what teams refer to as ‘technical debt’. Tech debt behaves much like debt in the real world: if it becomes unsustainable, it cripples the system, and the impacts are widespread.
A 2018 survey by payments firm Stripe found that, on average, developers spend around 40% of their time dealing with bad code and fixing tech debt. Averages aren’t always useful measures, but they do help illustrate trends; in our experience working with organizations, the number is significantly higher for most older systems, especially anything in production and under active development for more than a year. The cost of bad code isn’t just the hours spent understanding, refactoring, and cleaning it; it also hurts customers, and that economic impact is significantly larger. It is sometimes possible to hide bad code behind good UX for a short while, but the shortcomings soon surface and you stop delivering what customers need. In a highly competitive world, competitors will take advantage of any of your inefficiencies. It should therefore be simple to see that bad code is bad for business.
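As a back-of-the-envelope illustration of the direct cost alone (the team size and cost figures below are hypothetical; only the 40% share echoes the survey average quoted above):

```python
# Back-of-the-envelope cost of time lost to tech debt.
# All inputs are hypothetical placeholders; only the 40% time share
# reflects the Stripe survey average quoted in the text.
team_size = 20                  # developers on the product
loaded_cost_per_dev = 150_000   # annual fully loaded cost per developer
debt_time_share = 0.40          # share of time lost to bad code / tech debt

annual_debt_cost = team_size * loaded_cost_per_dev * debt_time_share
print(f"Annual cost of tech-debt time: {annual_debt_cost:,.0f}")
# With these inputs: 1,200,000 per year, before counting the usually
# larger cost of slower delivery and lost customers.
```

Plugging in your own headcount and cost figures makes the case far more concrete than the percentage on its own.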
Here are some of the key questions we ask when speaking to our clients to assess the level of accidental complexity:
- What is the lead time from idea to production for a new feature? Has this increased over time for similar complex features?
- What is the culture around automated testing, testing failures, and builds?
- Does your team regularly look at the complexity or other relevant analytics on your code bases? Do they track the trends in these changes over time and understand if they are the right trends?
- How does your team manage technical debt (i.e. intentionally accepted complexity)?
- How long does it take to recover from outages?
- How much of your development effort goes into supporting and maintaining existing systems versus building new features? Is the cost of maintenance increasing over time?
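Several of these questions can be answered with simple measurements rather than opinion. As one sketch, assuming you can export per-feature start and ship dates from your tracker and deployment logs (the field names here are hypothetical), lead time is straightforward to track:

```python
# Sketch: measuring lead time from idea to production, assuming you can
# export (started, shipped) dates per feature from your tracker and
# deployment logs. Field names and dates below are hypothetical.
from datetime import date
from statistics import median

features = [
    {"started": date(2023, 1, 3),  "shipped": date(2023, 1, 20)},
    {"started": date(2023, 2, 1),  "shipped": date(2023, 3, 15)},
    {"started": date(2023, 4, 10), "shipped": date(2023, 4, 24)},
]

lead_times = [(f["shipped"] - f["started"]).days for f in features]
print("median lead time (days):", median(lead_times))
print("worst lead time (days): ", max(lead_times))
# Track these numbers per quarter: a rising median for features of
# similar size is a strong signal of growing accidental complexity.
```

The absolute numbers matter less than the trend over time for work of comparable size.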
Focus on building a continually improving engineering culture, and support and invest in the understanding that code quality drives the ability to build newer, more delightful products.
9. You aren’t gonna need it (YAGNI)
The previous point about your assets looked at things to add as part of a transformation, and noted that anything additional incurs an unplanned cost. But does it always have to be a net additional cost? The answer is ‘no’.
“One antipattern we keep seeing is legacy migration feature parity, the desire to retain feature parity with the old. We see this as a huge missed opportunity. Often the old systems have bloated over time, with many features unused by users and business processes that have evolved over time. Replacing these features is a waste.” - ThoughtWorks Technology Radar — Legacy migration feature parity
There’s a misconception that feature parity is what matters when renewing a legacy estate. In fact, this is the opportunity to drive freshness into what your customers and users actually need from these platforms. As called out in the legacy modernization handbook, we see feature parity as an antipattern: it should be avoided, not made the starting point for a legacy overhaul.
Instead, you need to analyze what is actually needed and make the right decision for your investment. Concentrate on what provides the highest value to your business and customers. This is where evidence and data-driven decisions are key:
- What are the most commonly used features of your existing systems? This usage indicates how important the features are to your users. Do you have the data to back this up?
- Is this a Pareto optimum, i.e. the optimum set of features for end customers balanced by cost of the features, or do you have to tackle the long tail, i.e. features that exist to satisfy individual customers that are cost prohibitive to maintain?
- What costs the most to keep going? Do you truly understand a feature’s business value compared to how much it costs to support and maintain?
- What features do you know were important in the past but won't be in the future?
- What higher-value opportunities can you see in your market space that would justify giving up the lower-value features your existing systems are serving?
- If you have to choose, to give yourself wider options for future business agility and flexibility (for example, to incorporate data as mentioned above), what will you leave behind in the move from legacy to new, to free up the budget?
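A lightweight way to start answering the usage questions above, assuming you can export per-feature event counts from your analytics (the feature names and counts here are invented):

```python
# Sketch: a simple Pareto view of feature usage, built from hypothetical
# per-feature event counts exported from your analytics.
usage = {"search": 9200, "checkout": 7400, "reports": 1100,
         "bulk_export": 250, "fax_gateway": 12}

total = sum(usage.values())
cumulative = 0.0
for feature, count in sorted(usage.items(), key=lambda kv: -kv[1]):
    cumulative += count / total
    print(f"{feature:12s} {count:6d}  cumulative {cumulative:5.1%}")
# Features at the bottom of this list, used by almost no one, are
# candidates to drop rather than rebuild; validate with real usage
# data and stakeholder input before deciding.
```

In data like this, a handful of features typically accounts for the vast majority of usage, which is exactly the Pareto question raised above.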
This isn’t always simple. Beyond legacy technology, there may be history, sentiment, teams, and whole divisions affected by discontinuing something that is no longer economically viable. However, short-funding a transformation, or worse, sacrificing the organization’s future for the sake of its past, is nearly always even more damaging.
Our series Breaking out of legacy: nine lessons for business leaders highlights the importance of new thinking on leadership and culture, and on understanding the scope of the work to be undertaken.
These lessons rip up the management books of past decades: working out strategies that combine ‘thriving’ with ‘sustainability’ requires a continual change process. This is a new mindset, urgently expected by customers and citizens, and organizations need to respond with agile, adaptive approaches.
As a global technology consultancy, we at ThoughtWorks also have to adapt as our markets and customer needs change rapidly. We aim to remain flexible and to learn across sectors, service lines, and markets. This change is often not without pain, but we feel the prize of being both responsive and relevant is worth it. We trust this series has been helpful; if you are in a leadership role and any of the challenges in our articles resonate with your organization, please do not hesitate to get in touch.