
Macro trends in the tech industry | Nov 2019

The Technology Radar is a snapshot of things that we’ve recently encountered, the stuff that’s piqued our interest. But the act of creating the Radar also means we have a bunch of fascinating discussions that can’t always be captured as blips or themes. Here’s our latest look into what’s happening in the world of software.

Race for cloud supremacy resulting in too many feature false starts

As I’ve written about previously, cloud is the dominant infrastructure and architectural style in the industry today, and the major cloud vendors are in a constant fight to build market share and gain an edge over their competitors. This has led them to push features to market — in our opinion — before those features and services are really ready for prime time. We’ve seen this pattern many times before: enterprise software vendors would market their product as having more features than a competitor’s, whether or not those features were actually complete and available in the product. This isn’t a new problem, per se, but it is a fundamental challenge with today’s cloud landscape. It’s also not an accident — this is a deliberate strategy and a consequence of how the cloud companies have structured themselves to get software out the door fast.

The race by each cloud platform to deliver new products and services isn’t necessarily going to create good outcomes for the teams using them. The vendors over-promise, so it’s “buyer beware” for our teams. When there’s a new cloud database or other service, it’s critical that teams evaluate whether something is actually ready for their use. Can the team live with the inevitable rough edges and limitations?

Hybrid cloud tooling starts to take shape

Many large organizations are in a “hybrid cloud” situation, with more than one cloud provider in use. The choice to use a single provider or multiple providers is complex, involving not just technology but also commercial, political and even regulatory considerations. For example, organizations in highly regulated industries may need to prove to a regulator that they could easily move to a new cloud provider should their current provider suffer some kind of catastrophic technical or commercial problem that rendered it no longer a going concern. At the same time, some of our clients are undertaking significant consolidation work to transition to a single cloud platform: running on multiple clouds brings latency and VPN complexity, while consolidating can secure better pricing from the vendor or access to cloud-specific features such as Kubernetes support or particular machine learning algorithms.

Such transitions or consolidations could take years, especially when you consider how legacy on-premises assets may factor into the plan, so organizations need a better way to deal with multiple clouds. A number of “hybrid cloud control planes” are springing up that may help ease the pain. We think Google Anthos, AWS Outposts, Azure Arc and Azure Stack are worth looking at if you’re struggling with multiple clouds.

“Quantum-ready” could be next year’s strategic play

Google recently trumpeted its achievement of so-called “quantum supremacy” — it has built a quantum computer that can run an algorithm that would be essentially intractable on a classical computer. In this particular case, Google used a 53-qubit quantum computer to solve a problem in 200 seconds that would take a classical supercomputer 10,000 years (IBM has disputed the claim, saying its supercomputer could achieve the result in 2.5 days). The key point is to show that quantum computers are more than just an expensive toy in a lab, and that there are no hidden barriers to quantum computing solving important, larger-sized problems.

For now, the problems solvable with a small number of qubits are limited in number and usefulness, but quantum is clearly on the horizon. Canadian startup Xanadu is developing not just quantum chips — using a ‘photonic’ approach to capture quantum effects as opposed to Google’s use of superconductors — but also quantum simulation and training tools. They point out that even though most quantum algorithms today seem a bit theoretical, you can use quantum techniques to speed up problems such as Monte Carlo simulation, something that’s very useful today in fields such as FinTech. 

As with many technology shifts (big data, blockchain, machine learning), it’s important to have at least a passing familiarity with the technology and what it might do for your business. IBM, Microsoft and Google all provide tools to simulate quantum computers, and in some cases access to real quantum computing hardware. While your organization may not (yet) be able to take advantage of highly specific algorithmic speedups, “quantum-ready developer” could soon become as popular a job title as “data scientist” did before it.
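Becoming quantum-ready doesn’t require hardware access. As a minimal sketch, assuming IBM’s open-source Qiskit library with its bundled Aer simulator, here’s the “hello world” of quantum programming: preparing and measuring a two-qubit entangled Bell state, entirely on a classical machine.

```python
# Prepare and measure a Bell state on a purely classical simulator.
# Assumes: pip install qiskit (a release that bundles the Aer simulator).
from qiskit import QuantumCircuit, Aer, execute

circuit = QuantumCircuit(2, 2)   # two qubits, two classical result bits
circuit.h(0)                     # Hadamard puts qubit 0 into superposition
circuit.cx(0, 1)                 # CNOT entangles qubit 1 with qubit 0
circuit.measure([0, 1], [0, 1])

# Run 1,024 "shots" on the local simulator; no quantum hardware needed.
backend = Aer.get_backend("qasm_simulator")
counts = execute(circuit, backend, shots=1024).result().get_counts()

# Entanglement shows up as results clustered on '00' and '11' only,
# e.g. {'00': 506, '11': 518}: the two qubits always agree.
print(counts)
```

Toy examples like this won’t beat a supercomputer, but they let a team build the vocabulary and intuition now, ahead of the hardware.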

90% decommissioned is 0% saved

As an industry, IT constantly faces the pressure of legacy systems. If something is old, it might not be adaptable enough for today’s fast pace of change, too expensive to maintain, or just plain risky — creaky systems running on eBay’d hardware can be a big liability. As IT professionals we need to deal with, and eventually retire, legacy systems. One cool-sounding approach to legacy replacement is the Strangler Fig Application, where we build around and augment a legacy system, intending to eventually retire it completely. This pattern gets a lot of attention, not least because of the violent-sounding name — many people would like to do violence to some of these frustrating older systems, so a strategy that involves “strangling” one of them tends to attract a lot of support.

The problem comes when we claim to be strangling the legacy system but end up just building extra systems and APIs on top — we never actually retire the legacy. Our colleague Jonny LeRoy (famed for his ability to name things) suggested that we put “neck massage for legacy systems” on ‘Hold.’ We felt the blip was too complex for the Radar, but people liked the message: if we plan to retire a legacy system using the strangler pattern, we had better actually get around to that retirement, or the whole justification for our efforts falls apart.
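To make the pattern concrete, here’s a minimal sketch of the routing facade at the heart of a strangler fig migration. The endpoints and handler names are hypothetical, purely for illustration.

```python
# A strangler fig routing facade: requests for endpoints that have been
# rebuilt go to the new service; everything else still falls through to
# the legacy system. All routes and handlers here are hypothetical.

MIGRATED_ROUTES = {"/orders", "/customers"}  # endpoints already rebuilt

def call_new_service(path: str) -> str:
    return f"new service handled {path}"

def call_legacy_system(path: str) -> str:
    return f"legacy system handled {path}"

def handle_request(path: str) -> str:
    if path in MIGRATED_ROUTES:
        return call_new_service(path)   # strangler: new code takes over
    return call_legacy_system(path)     # not yet migrated: legacy handles it

if __name__ == "__main__":
    print(handle_request("/orders"))    # -> new service handled /orders
    print(handle_request("/invoices"))  # -> legacy system handled /invoices
```

The pattern only pays off when MIGRATED_ROUTES grows until the legacy branch is dead code and the old system can be switched off; if that last step never happens, you’ve added a layer rather than removed a system.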

Trunk-based development seems to be losing the fight

We’ve campaigned for years that trunk-based development, where every developer commits their code directly to a “main line” of source control (and does so daily or better), is the best way to create software. As someone who’s seen a lot of source code messes, I can tell you that branching is not free (or even cheap) and that even fancy merging with tools such as Git doesn’t save a team from the problems caused by a “continuous isolation” style of development. The usual reasons given for wanting code branches are actually signs of deeper problems with a team or a system architecture, and should be solved directly rather than papered over with branches. For example, if you don’t trust certain developers to commit code to your project and you use branches or pull requests as a code review mechanism, maybe you should fix the core trust issue instead. If you’re not sure you’re going to hit a project deadline and want to use branches to “cherry pick” changes for a release candidate, you’re in a world of hurt and should fix your estimation, prioritization and project management problems rather than reaching for branches as a band-aid.

Unfortunately, we seem to be losing the fight on this one. Branch-based techniques such as GitFlow continue to gain traction, as does the use of pull requests for governance activities such as code review. Our erstwhile colleague Paul Hammant, who created and maintains trunkbaseddevelopment.com, has (grudgingly, I hope!) included short-lived feature branches as a recommendation for how to do trunk-based development at scale. We’re a little glum that our favored technique seems to be losing ground, but we hope like-minded teams will continue to push for sane, trunk-based development where possible.

XR is waiting for Apple

At the recent Facebook Connect conference, Oculus confirmed they are working on AR glasses but didn’t have anything specific to announce. The most recent leaks and rumors suggest that Apple will launch an XR headset of some kind in 2020, with AR glasses planned for 2022. As with many other advances such as the smartphone and smartwatch, Apple will probably lead the way when it comes to creating really compelling experience design. Apple’s magic has always been to combine engineering advances with a great consumer experience, and it doesn’t enter a market until it can truly do that. For a long time (and maybe still today) Apple’s Human Interface Guidelines have been required reading for anyone building an app. I expect a similar leap forward when Apple (eventually) gets into the AR space. Until then, while we have some nifty demos and some limited training experiences, XR is going to remain a bit of a niche technology.

Machine learning continues to amaze and astonish, but do we understand it?

One of my favourite YouTube channels is Two Minute Papers, in which researcher Károly Zsolnai-Fehér provides mind-blowing reporting on advances in AI systems. Recently the channel has featured AI that can mimic a human voice given just five seconds of input, AI that can infer game physics 30,000 times faster than a traditional physics simulation, and AI that learns to play hide-and-seek and literally breaks the rules of the game world within which it’s playing. The channel does a great job of showing the amazing (and slightly scary) advancements in narrow-AI capability, usually for problems that can be visualized and make for good videos. But machine learning is also being applied to many other fields such as business decision making, medicine, and even advising judges on sentencing criminals, so it’s important that we understand how an AI or machine learning system works.

One big problem is that although we can describe what an underlying algorithm is doing (for example, how backpropagation trains a neural network), we can’t explain what the network actually does once trained. This Radar features tools such as the What-If Tool and techniques such as ethical bias testing. We think that explainability should be a first-class concern when choosing a machine learning model.
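Even without specialized tooling, teams can start probing their models today. Here’s a minimal sketch of permutation feature importance, a simple model-agnostic technique: shuffle one feature at a time and see how much the model’s accuracy drops. (This is my own illustration of the general idea, not how the What-If Tool works; the dataset and model are just stand-ins.)

```python
# Permutation feature importance: a model-agnostic peek inside a black box.
# Shuffling a feature destroys its information; the resulting accuracy drop
# tells you how much the trained model was actually relying on it.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
baseline = model.score(X_test, y_test)  # accuracy with all features intact

rng = np.random.default_rng(0)
for i, name in enumerate(data.feature_names):
    X_shuffled = X_test.copy()
    rng.shuffle(X_shuffled[:, i])  # break the link between feature i and y
    drop = baseline - model.score(X_shuffled, y_test)
    if drop > 0.01:  # report only the features the model leans on
        print(f"{name}: accuracy drops by {drop:.3f} when shuffled")
```

Techniques like this don’t fully explain a model, but they at least reveal which inputs drive its decisions, which is a reasonable first question to ask before trusting one with sentencing advice or medical triage.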

Mechanical sympathy comes around again

Back in 2012, the Radar featured a concept called “mechanical sympathy,” based on the work of the LMAX Disruptor team. At a time when many software applications were being written at an increasing level of abstraction, Disruptor got closer to the metal, tuned for extremely high performance on specific Intel CPUs. The LMAX problem was inherently single-threaded, and they needed high performance from single-CPU machines. It seems like mechanical sympathy is having something of a resurgence.

Last Radar we featured Humio, a log aggregation tool built to be extremely fast at both ingestion and querying. This Radar, we’re featuring GraalVM, a high-performance virtual machine. We think it’s ironic that much of the progress in the software industry involves moving away from the hardware (containers, Kubernetes, Functions-as-a-Service, databases-in-the-cloud, and so on) and yet others are hyper-focused on the hardware on which we’re running. I guess it depends on the use case. Do you need scale and elasticity? Then get away from the hardware and get to cloud. Do you have a very specific use case like high-frequency trading? Then get closer to the hardware with some of these techniques.
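You can feel the difference mechanical sympathy makes without leaving a high-level language. The sketch below (my own toy illustration, not LMAX’s code) sums the same numpy array along and against its memory layout; the contiguous, cache-friendly traversal is noticeably faster, for exactly the reasons the Disruptor team exploited.

```python
# Mechanical sympathy in miniature: the same data, two traversal orders.
# A C-ordered numpy array stores each row contiguously, so row-wise reads
# stream through the CPU cache while column-wise reads stride across it.
import time
import numpy as np

a = np.random.rand(5_000, 5_000)  # C-contiguous: rows are adjacent in memory

start = time.perf_counter()
for i in range(a.shape[0]):
    a[i, :].sum()                 # contiguous reads: prefetcher-friendly
row_seconds = time.perf_counter() - start

start = time.perf_counter()
for j in range(a.shape[1]):
    a[:, j].sum()                 # strided reads: far more cache misses
col_seconds = time.perf_counter() - start

print(f"row-wise: {row_seconds:.2f}s, column-wise: {col_seconds:.2f}s")
```

The data and the arithmetic are identical in both loops; only the access pattern changes. That gap is what the Disruptor team chased, and it’s invisible if you only ever think at the level of the abstraction.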

I hope you’ve enjoyed this lightning tour of current trends in the tech industry. There are some others that I didn’t have room for, but if you’re interested in software development as a team sport, or in protecting the software supply chain, you can read about those in the Radar.

Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.
