There’s one particular history of software development that can be told as a story of increasing levels of abstraction. From assembly code to high-level languages, through to low code and no code, and, today, natural language prompting with AI, there has been an ongoing attempt to simplify the process of writing software.
While this trajectory has been driven by commercial imperatives and a need to make the field more accessible to a wider workforce of technology professionals, there’s also a cost. I’m referring, of course, to what Joel Spolsky described as the law of leaky abstractions: the fact that any attempt to encapsulate or hide complexity inevitably leads to a loss of control over some level of detail. That detail will, at some point or other, leak.
However, the law of leaky abstractions is missing something: it places the focus on technology and tools; it doesn’t speak to the human consequences of abstraction. By this, I mean what I’d like to call ‘cognitive leakage’. In short, when abstractions shield us from complexity, we don’t have to understand and grapple with it. That means we miss out on learning something that may well prove important when solving a problem later.
True, we rarely need to know everything to accomplish a task. But if we rely unthinkingly on abstractions, we give up what I’d like to call cognitive sovereignty: control over the technologies we use. That’s a risky place to be. It has the potential to hinder both our personal development and our collective ability to tackle tough and complicated problems in the future.
Undigested complexity
It’s important we appreciate there can be practical consequences to this that go beyond personal development. Technical debt, for instance, is often tied up with the challenges of different levels of abstraction. Rarely is it just a question of an older technology simply needing to be updated. (If it were that easy, we surely wouldn’t talk about it so much!) Often the real problem is abstractions that start to leak as a system or its context evolves.
What’s more, anyone who has worked on these kinds of projects will know that system complexity is very much a cognitive issue. It’s often not the process of change that’s challenging but instead understanding what’s actually happening in the first place.
When faced with these challenges there are, broadly speaking, two options:
A cognitive shift left: During the development phase, we face complexity head-on (writing code, writing tests, domain-driven design modeling). This takes considerable time and effort, but we arguably amortize the cost of understanding.
A cognitive shift right: We can move quickly over system details with the help of highly encapsulated tools. These allow us to deliver quickly and cut costs upfront, yet the complexity remains. (The sketch after this list illustrates the contrast.)
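To make the contrast concrete, here’s a toy sketch in Python. The names and the discount rule are invented for illustration; the point is the shape of the two approaches, not the specifics. The shift-left version spells out the domain rule and pins it in place with a test; the shift-right version delegates to an opaque helper that no one on the team has actually read.

```python
# A toy illustration of the two options; all names and rules are hypothetical.

# Cognitive shift left: the rule is written out explicitly, and a test
# records our understanding of it. Writing this costs time up front, but
# that time is also spent building a mental model of the domain.
def loyalty_discount(order_total: float, years_a_customer: int) -> float:
    """Customers of five or more years get 10% off; everyone else gets nothing."""
    if years_a_customer >= 5:
        return round(order_total * 0.10, 2)
    return 0.0

def test_loyalty_discount():
    assert loyalty_discount(100.0, 5) == 10.0
    assert loyalty_discount(100.0, 2) == 0.0

# Cognitive shift right: an opaque helper (imagine it produced by a low-code
# platform or an AI assistant) delivers an answer quickly, but the rule itself
# never passes through anyone's head. The complexity hasn't gone; it's hiding.
def discount_from_black_box(order: dict) -> float:
    ...  # generated logic we adopted without reading
```

The first version is longer, but the extra keystrokes aren’t the point: the time spent writing it is also time spent understanding, which is exactly the cost the second version defers rather than removes.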
I like to see this as a hidden variable in the 'iron triangle' of cost, speed and scope. We’re not dealing with a square exactly, but we do run the risk of the triangle going out of shape.
Think of it this way: when time and resources are fixed and we force an increase in delivery speed by introducing new abstractions in the form of black-box tools, what we save isn’t the volume of code, but really the cognitive volume — the burden of complexity we haven’t properly digested. In turn, undigested complexity accumulates in the system.
There are a number of ways this could manifest itself: perhaps as bugs during testing, or as bizarre glitches in the production system. Over time, it may become a fully-fledged black box that no one understands and no one wants to touch: a big architectural ball of mud with a significant cognitive repurchase cost no one really wants to pay.
Refactoring and cognitive repurchase
Here’s a classic engineering puzzle: why do teams facing a messy ball-of-mud architecture vacillate between rewriting and refactoring, yet fail at both? Either the refactoring plan fizzles out, or the rewritten system quickly degenerates into another ball of mud.
This can be attributed to system entropy or growing business complexity, both of which are accelerated by cognitive leakage. While it’s true that all systems tend towards rot and decay without thoughtful, proactive maintenance, cognitive leakage ensures the process happens faster than we might ordinarily expect.
At an organizational level this is clearly problematic. As a system scales, with further layers of abstraction added to the palimpsest of software, the leaks, and the attendant complexity and cognitive debt, demand ever more labor. Often managers will try to tackle the issue by adding people or implementing new KPIs.
This is, however, risky:
When you add more people, you’re not immediately tackling existing complexity; you’re adding to it. Newcomers (however experienced they may be) won’t have established the necessary cognitive meta-models. Building those models requires communication and training, and in the meantime it may dilute the team’s average cognitive density.
New KPIs may encourage certain activities or behaviors, but they can also lead teams to embrace abstraction even further as they seek to cover up problems, even when piercing those existing abstractions is what really needs to be done.
While it’s easy to see why short-term fixes are attractive, they stand in the way of organizations and developers taking a more deliberate, intentional approach to evolving their systems. They also create a kind of collective amnesia, in which knowledge of how and why the software was built is lost entirely.
The cost of convenience
At this point an organization will have to choose between rewriting and refactoring. The problem is that such work involves far more than just typing code: it requires recovering lost requirements, logic and context.
This is the cost of cognitive leakage we need to repay. We spend time repairing logic that broke as a result of our comfort with an abstracted (some might say superficial) level of understanding; we have to rebuild test safety nets that atrophied due to over-encapsulation. This challenging process is the necessary work of repaying that debt.
This isn’t to say abstractions have no place in our work. That would be ridiculous and wrong. Indeed, all software as we know it today operates at some level of abstraction. Take high-level programming languages, for example: C++ and Java abstract away the hardware but still require us to engage critically with the technical challenges we face. And in some scenarios, such as small-scale systems and prototypes, even low code and AI assistance can be valuable.
However, I would suggest there’s a qualitative difference between high-level languages on the one hand and no code and AI approaches on the other. No code and AI allow us to bypass critical engagement completely. They simulate work, tricking us into believing we’re solving problems when the problems remain untouched beneath the attractive sheen of whatever we’ve delivered.
In turn, this leads to phenomena I describe as software inflation and zombie systems. When complexity hits, developers either have to reclaim cognitive sovereignty or risk the system slipping into a kind of living death, full of problems they can’t decompose.
In short, convenience always has a cost. What’s more, that cost is particularly heavy when it’s incurred in the strategic core of a system: high business complexity, long lifecycles and cross-team collaboration make the leakiness of abstractions not only more problematic but also more likely.
Fortunately there are already things we can do to guard against this, starting with good governance. When using AI, for example, we can enforce mandatory code reviews to ensure cognitive synchronization; if using low-code, we can ensure the generated logic remains within the range of our cognitive control.
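As a modest, concrete example of what keeping generated logic within the range of our cognitive control might look like, here’s a sketch in Python. Everything in it is hypothetical: shipping_quote stands in for whatever function a low-code tool or AI assistant produced, with its body inlined only so the example runs. The handwritten tests are the part that matters: they pin down the behavior the team has personally verified, so that if the component is regenerated and its logic drifts, a failing test forces a human to re-engage.

```python
import unittest

# Stand-in for a generated/black-box function. In reality this body would
# live wherever the low-code tool or AI assistant put it; it's inlined here
# (with invented rules) purely so the sketch is self-contained and runnable.
def shipping_quote(weight_kg: float, region: str) -> float:
    if weight_kg < 0:
        raise ValueError("weight must be non-negative")
    return 4.99 if region == "domestic" else 12.50

class ShippingQuoteContract(unittest.TestCase):
    """Handwritten tests pinning the behavior we've verified and rely on."""

    def test_domestic_flat_rate(self):
        self.assertEqual(shipping_quote(weight_kg=1.0, region="domestic"), 4.99)

    def test_international_flat_rate(self):
        self.assertEqual(shipping_quote(weight_kg=1.0, region="international"), 12.50)

    def test_rejects_negative_weight(self):
        with self.assertRaises(ValueError):
            shipping_quote(weight_kg=-1.0, region="domestic")

if __name__ == "__main__":
    unittest.main()
```

None of this removes the abstraction; it simply ensures the team still owns a contract it understands.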
Cognitive sovereignty is a weapon: use it
Every time you copy and paste code and every time you choose to use a black box tool, ask yourself one question: what’s the cost of convenience? This isn’t to say you should be dogmatic — far from it; it might well be the case that the cost of convenience is perfectly reasonable.
But it won’t always be. And however much product marketers try to convince you that the one simple trick they’re selling will solve your problem with no downsides, debt or leakage, remember there is always some cost. That’s why maintaining cognitive sovereignty is essential: in a highly abstracted world of ever-increasing AI assistance, it’s a strength, not a weakness or a mark of inflexibility.
Ultimately, it’s a critical weapon that can help you and your colleagues fight system decay and sustain product vitality.