
Macro trends in the tech industry | Nov 2018

Twice a year we create the Thoughtworks Technology Radar, an opinionated look at what’s happening in the enterprise tech world. We cover tools, techniques, languages, and platforms and generally call out over one hundred individual ‘blips’. Along with this detail we write about a handful of overarching ‘themes’ that can help a reader see the forest for the trees, and in this piece, I try to capture not just Radar themes but wider trends across the tech industry today. These “macro trends” articles are only possible with the help of the large technology community at Thoughtworks, so I’d like to thank everyone who contributed ideas and commented on drafts.

Quantum Computing is both here and not here

We’re continuing to see traction in the quantum computing field. Academic institutions are partnering with commercial organizations, large investments are being made, and a community of startups and university spinouts is springing up. Microsoft’s Q# language allows developers to get started with quantum computing and run algorithms against simulated machines, as well as tap into real cloud-based quantum computers. IBM Q is IBM’s competing offering, again partnering with large commercial organizations, academia, and startups. At a local level, we’ve hosted quantum computing hack nights with extremely good community turnout.
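To make “getting started” concrete, here is a minimal sketch of the kind of program these platforms let developers run today. The article names Q# and IBM Q rather than any particular SDK, so treat the choice of IBM’s Qiskit Python library as our assumption: the snippet builds a two-qubit entangling circuit and runs it against a local simulated machine rather than real hardware.

```python
# A minimal sketch using IBM's Qiskit SDK (an illustrative choice, not one
# named in the article): entangle two qubits and sample the result on a
# local simulator. API names reflect the classic Qiskit interface.
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, Aer, execute

qreg = QuantumRegister(2)
creg = ClassicalRegister(2)
circuit = QuantumCircuit(qreg, creg)
circuit.h(qreg[0])               # put qubit 0 into superposition
circuit.cx(qreg[0], qreg[1])     # entangle qubit 1 with qubit 0
circuit.measure(qreg, creg)      # measure both qubits

backend = Aer.get_backend('qasm_simulator')   # simulated machine, not real hardware
counts = execute(circuit, backend, shots=1024).result().get_counts()
print(counts)                    # expect roughly half '00' and half '11'
```

Swapping the simulator backend for a cloud-hosted device is a one-line change, which is exactly why these SDKs make a good on-ramp while real machines remain scarce.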
 
The largest (non-classified) quantum computer available as of this writing is small: just 72 qubits. There are a lot of headlines indicating the forthcoming demise of conventional cryptography, but 2048-bit RSA keys likely require a quantum computer of at least 6,000 qubits in size, and more modern algorithms such as AES probably have better security against quantum attacks. A commercial quantum computer is expected to need at least 100 qubits, as well as improved stability and error correction over what is available today. Practical uses for quantum computing are still in the realm of research exercises, for example, modeling the properties of complex molecules in chemistry. For now, at least, mainstream enterprise use of quantum computing seems a long way off.

Hyperkinetic pace of change

We’ve frequently observed that the pace of change in technology is not just fast: it’s accelerating. When we started the Radar a decade ago, the default for entries was to remain for two Radar editions (approximately one year) with no movement before fading away automatically. However, as indicated by the formula in one of our Radar themes—pace = distance over time—change in the software development ecosystem continues to accelerate. Time has remained constant (we still create the Radar twice a year), but the distance traveled in terms of technology innovation has noticeably increased. We see an increased pace in all our Radar quadrants and also in our clients’ appetite to adopt new and diverse technology choices. Given that almost everything in the world today across business, politics, and society is driven by technology, the pace of change in all these other areas increases as well. An important corollary for businesses is that there will be much less time available to adopt new technologies and business models—it’s still “adapt or die,” but the pressure is higher now than ever before.

For companies to compete, continuous modernization is required

The need to upgrade and replace older technology isn’t new—for as long as computers have been around a new model was in planning or just around the corner—but it does feel like the “volume level” on the need to modernize has increased. Businesses need to move fast, and they can’t do so encumbered by their legacy tech estate. Modern businesses compete to offer the best customer experiences, brand loyalty is largely dead, and the fastest movers are often the winners. This issue hits all companies—even the darlings of Silicon Valley and the startup unicorns of the world—because almost as soon as something is in production, it can be considered legacy technology and an anchor rather than an asset. The success of these companies is in constantly upgrading and refining their technology and platforms.

My colleague George Earle and I have recently written a two-part series detailing the imperative to modernize as well as a plan for doing it.

Industry catches up to previous big shifts

It was obvious to us from the get-go that containers (especially Docker) and container platforms (especially Kubernetes) were important. A couple of Radars ago, we declared that Kubernetes had won the battle and was the modern platform of choice; the industry now seems to agree. There are a phenomenal number of Kubernetes-related blips on this edition of the Radar—Knative, gVisor, Rook, SPIFFE, kube-bench, Jaeger, Pulumi, Heptio Ark and acs-engine to name but a few. These strengthen the Kubernetes ecosystem with configuration scanning, security auditing, disaster recovery and so on, and all of them help us build and run clusters more easily and reliably.
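For a flavor of the programmable surface these tools build on, here is a hedged sketch using the official Kubernetes Python client (our illustration, not a Radar blip): it connects with local kubeconfig credentials and lists the pods in a namespace, the sort of primitive the ecosystem tools above automate at scale.

```python
# A minimal sketch, assuming the official 'kubernetes' Python client is
# installed and ~/.kube/config points at a reachable cluster.
from kubernetes import client, config

config.load_kube_config()        # read credentials from the local kubeconfig
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod(namespace="default").items:
    print(pod.metadata.name, pod.status.phase)
```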

Lingering Enterprise Antipatterns

In this edition of the Radar, many of our ‘Hold’ entries are simply new ways to be misguided in putting together enterprise systems. We have new tools and platforms, but we tend to keep making the same mistakes. Here are a few examples:
  • Recreating ESB antipatterns with Kafka—this is the “egregious spaghetti box” all over again, where a perfectly good technology (Kafka) is being abused for the sake of centralization or efficiency (a sketch of the preferred alternative appears below).
  • Overambitious API gateways—a perfectly good technology for access management and rate limiting of APIs that ends up having transformation and business logic added to it.
  • Data-hungry packages—we buy a software package to do one thing, but it ends up taking over our organization, feeding on more and more data and accidentally becoming the ‘master’ for all of it, while requiring a lot of integration work too.
The bottom line is that the more things change, the more they stay the same. We keep making these mistakes because these technologies often seem to offer a “silver bullet” for our problems. MDM, ESBs, and packaged software all hold great promise, but in the end, every organization needs to balance coupling and isolation, do good design, and do the right amount of upfront thinking. Organizations need to respect Conway’s Law and build teams in the right structure to solve problems and deliver features. New tools, platforms, and capabilities change how we solve these problems, but we still need to solve them. There are no silver bullets.
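As promised in the Kafka bullet above, here is what the preferred “dumb pipe, smart endpoints” style looks like in miniature: Kafka only carries raw events, and any transformation or business logic lives inside the consuming service rather than in a centralized ESB-like layer. This sketch is our own illustration using the kafka-python library; the topic and broker names are made up.

```python
# A hedged sketch of Kafka as a dumb pipe: the broker moves bytes, and the
# services at each end own the logic. Topic/broker names are illustrative.
import json
from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("orders", json.dumps({"order_id": 42, "total": 99.5}).encode("utf-8"))
producer.flush()

consumer = KafkaConsumer("orders", bootstrap_servers="localhost:9092",
                         auto_offset_reset="earliest")
for message in consumer:
    order = json.loads(message.value)   # transformation happens here, inside
    print("processing order", order["order_id"])  # the service that owns it
    break
```

The anti-pattern is recognizable by its opposite shape: routing, enrichment, and business rules configured into the messaging layer itself.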

JavaScript community goes quiet

We’ve previously written about the churn in the JavaScript ecosystem, but the community appears to be emerging from a period of rapid growth to one with less excitement. Our contacts within the publishing industry tell us that searches for JavaScript-related content have been replaced by an interest in a group of languages led by Go, Rust, Kotlin, and Python. Could it be that Atwood’s Law has come to pass—everything that can be written in JavaScript has been written in JavaScript—and developers have moved on to new languages? This could also be an effect of the rise of microservices, where a polyglot approach is much more feasible, allowing developers to experiment with using the best language for each component. Either way, there’s a lot less JavaScript on our Radar in this edition.

Cloud happened, and it’s still happening

One of our themes on this Radar is the surprising ‘stickiness’ of cloud providers, who are in a tight race to win hosting business and often add features and services to improve the attractiveness of their product. Using these vendor-specific features can lead to accidental lock-in but will, of course, accelerate delivery, making them a bit of a double-edged sword.
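One common way to keep that double-edged sword manageable, sketched below with illustrative names of our own rather than any prescribed pattern, is to confine the vendor-specific feature behind a small interface the application owns. Here the vendor-specific piece is AWS S3 via boto3.

```python
# A sketch of containing lock-in behind a seam: application code depends on
# DocumentStore, and only one class knows the cloud vendor exists. All names
# here are hypothetical illustrations.
import boto3

class DocumentStore:
    """The seam the rest of the application programs against."""
    def put(self, key: str, data: bytes) -> None:
        raise NotImplementedError

class S3DocumentStore(DocumentStore):
    def __init__(self, bucket: str):
        self._bucket = bucket
        self._s3 = boto3.client("s3")   # the only place boto3 appears

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

# Moving providers later means writing one new DocumentStore, not a rewrite.
store = S3DocumentStore("my-app-documents")
store.put("invoices/2018-11.pdf", b"%PDF-...")
```

The trade-off stands either way: the seam costs a little delivery speed up front in exchange for keeping the exit door visible.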
 
In the cloud space right now, we see more and more organizations successfully move to the public cloud, with more mature conversations and understanding around what this means. Bigger companies—even banks and other regulated industries—are moving larger and more sensitive workloads to the cloud, and bringing their regulators along on the journey. In some cases, this means they’re mandated to pursue a multi-cloud strategy for those material workloads. Many of the blips on today’s Radar—multi-cloud sensibly, financial sympathy, and so on—are indicators that cloud is finally mainstream for all organizations and that the “long tail” of migration is here.

Serverless gains traction, but it’s not a slam dunk (yet)

“Serverless” architectures are one of the biggest trends in today’s IT landscape, but also possibly the most misunderstood. In this edition of the Radar, we actually don’t highlight any blips for serverless tech—we’ve done so in the past, but this time around we felt nothing quite made the cut. That’s not to say things are quiet in the serverless space, however. Amazon recently released SLAs for Lambda, something that is relatively rare for AWS services, and almost everything on the AWS platform has some sort of Lambda tie-in. The other major cloud vendors offer competing (but similar) services and tend to respond whenever Amazon makes a move in this space.
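For readers who haven’t touched this space, the unit under discussion is tiny: a function the platform invokes on demand and bills per invocation and per duration. A minimal AWS Lambda handler in Python looks roughly like the sketch below; the event shape assumes an API Gateway trigger, and the function content is our own illustration.

```python
# A minimal sketch of an AWS Lambda handler. The platform calls handler()
# per request; the event shape shown assumes an API Gateway proxy trigger.
import json

def handler(event, context):
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "hello, " + name}),
    }
```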

Where things get tricky is if an organization simply assumes that their workload is appropriate for serverless techniques and carries on regardless, or doesn’t really do the math on whether paying for functions per use beats setting up and maintaining a dedicated server instance. We’d highlight two key areas where serverless needs to mature:
  • Patterns for use: we need a clearer picture of the architectural and workload models for which the approach is or isn’t the right one, and a better understanding of how to compose an application from serverless components alongside containers and virtual machines.
  • Pricing model: neither well understood nor easy to tune, leading to large bills and limited applicability. Ideally, we should compare total cost of ownership, including things like DevOps engineering time and server maintenance (a back-of-envelope sketch follows this list).
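Doing that math need not be elaborate. The sketch below is a deliberately rough comparison: the prices are illustrative approximations of 2018 AWS list prices (Lambda request and GB-second charges versus an always-on m5.large), and it excludes the free tier and the operations time a real total-cost-of-ownership comparison would have to include.

```python
# A hedged back-of-envelope cost comparison. Prices are illustrative,
# roughly AWS us-east-1 list prices circa 2018; check current pricing.
LAMBDA_PER_MILLION_REQUESTS = 0.20     # USD per 1M invocations
LAMBDA_PER_GB_SECOND = 0.0000166667    # USD per GB-second of compute
EC2_M5_LARGE_PER_HOUR = 0.096          # USD, on-demand

def lambda_monthly_cost(requests, avg_duration_s, memory_gb):
    compute = requests * avg_duration_s * memory_gb * LAMBDA_PER_GB_SECOND
    request_fees = requests / 1_000_000 * LAMBDA_PER_MILLION_REQUESTS
    return compute + request_fees

server = EC2_M5_LARGE_PER_HOUR * 24 * 30   # always-on; excludes ops time
for requests in (1_000_000, 10_000_000, 100_000_000):
    cost = lambda_monthly_cost(requests, avg_duration_s=0.2, memory_gb=0.5)
    print("%11d req/mo: lambda $%8.2f vs server $%.2f" % (requests, cost, server))
```

With these illustrative numbers, spiky or modest workloads favor functions by an order of magnitude, while a sustained hundred million requests a month tips back toward the dedicated instance; the crossover point is exactly what each organization needs to compute for itself.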
We think that serverless is about where we’d expect it to be on the adoption lifecycle: it’s promising new technology, we’ve had significant successes as well as some failures with it, and the techniques and tooling that go along with serverless are progressing in leaps and bounds. It’s definitely an approach every architect should have in their toolbox.

Engineering for failure

In the past we’ve highlighted Netflix’s Simian Army, testing tools that deliberately cause failures in a production system so you can be sure your architecture tolerates failure. This “chaos engineering” approach has become more widespread and has expanded into related areas. In this Radar, we highlight the 1% Canary and Security Chaos Engineering as specific instances of engineering for failure.
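Netflix’s actual tooling is out of scope here, but the core idea fits in a few lines: inject failures into a small, controlled fraction of real calls and prove the surrounding system shrugs them off. The sketch below is purely illustrative, with made-up names, and is not drawn from any of the tools mentioned above.

```python
# An illustrative failure-injection sketch in the spirit of chaos
# engineering: a wrapper that fails a configurable fraction of calls on
# purpose, so resilience is tested continuously rather than assumed.
import random

def chaotic(failure_rate=0.01):
    """Decorator that fails roughly 1% of calls by default."""
    def decorate(func):
        def wrapper(*args, **kwargs):
            if random.random() < failure_rate:
                raise RuntimeError("chaos: injected failure")
            return func(*args, **kwargs)
        return wrapper
    return decorate

@chaotic(failure_rate=0.01)
def fetch_recommendations(user_id):
    return ["item-1", "item-2"]   # stand-in for a real downstream call
```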

Enduring good practices

As a happy counterpoint to the problems with lingering antipatterns, in this Radar we highlight that good practices endure in the industry. Whenever a new technology comes along, we all experiment with it, trying to figure out which use cases fit best and where the limits lie of what it can and can’t do. A good example of this is the recent emphasis on data and machine learning. Once that new thing has been experimented with, and we’ve learnt what it’s good for, we need to apply good engineering practices to it. In the machine learning case, we’d recommend applying automated testing and continuous delivery practices—combined, we call this Continuous Intelligence. The point here is that all the practices we’ve developed over the years to build software well continue to apply to all the new things. Doing a good job with the ‘craft’ of software creation continues to be important, no matter how the underlying tech changes.
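As one concrete example of a good practice carrying over, here is a hedged sketch of a quality gate for a machine-learning model, written as an ordinary automated test that a continuous delivery pipeline could run before any deployment. The dataset, model, and threshold are illustrative choices of ours, using scikit-learn so the example stays self-contained.

```python
# A sketch of a CD quality gate for a model: an automated test asserting a
# minimum accuracy on held-out data. Dataset and threshold are illustrative.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def test_model_meets_accuracy_bar():
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    accuracy = model.score(X_test, y_test)
    assert accuracy >= 0.9, "model accuracy %.2f below release bar" % accuracy
```

The point is less the specific assertion than the habit: a model change that degrades quality fails the build, just as a code change that breaks a unit test would.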

That’s it for this edition of Macro Trends. If you’ve enjoyed this commentary, you might also like our recently relaunched podcast series, where I am a host along with several of my Thoughtworks colleagues. We release a podcast every two weeks covering topics such as agile data science, distributed systems, continuous intelligence and IoT. Check it out!

Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.
