In the past decade of the Tech Radar, much has changed in the IT industry. Agile has risen to become the dominant software development methodology, cloud has prevailed as the de facto platform, and the continuous delivery and DevOps revolutions have allowed us to get systems into production faster than ever before. But 10 years is a long time, especially in IT, and we didn’t always get everything right. In this article we’ll tour some of our most prescient prognostications as well as our widest ‘misses’, which are both instructive and often have a story attached to them. Along the way, we’ll talk about some of the approaches we use to cull the list of possible blips so that the Radar doesn’t become overwhelming.
We sourced these hits and misses both from the folks on the Technology Advisory Board (TAB) who create the Radar (a group whose membership has changed over the years) and from the broader Thoughtworks technology community. While the Radar is a collective, collaborative effort, I take full responsibility for the misses.
Our greatest hits
Our greatest hits often fit in the category of being well in front of the industry. For example, in the first year of the Radar, 2010, we had DevOps, Continuous Deployment, and Evolutionary Architecture in Assess. We followed this by introducing Infrastructure as Code in 2011; it returned in 2020, nearly a decade later. All of these technologies have made a significant impact on how we deliver software quickly and effectively, with low risk. The first Radar also had Non-relational Databases in Trial, as we were already seeing their importance for more complex data architectures.
In 2014, we introduced both Docker and Cloud Lift and Shift to the Radar, albeit in very different rings. Docker came in at Assess quite early, quickly making it to Adopt by 2016. At one meeting, so many Docker items were proposed as blips that someone put up a green +1 voting card labeled “Docker, Docker, Docker”. Cloud Lift and Shift debuted in 2014 in Hold and stayed there until mid-2016. It then made a reappearance in 2020, as we were still seeing too many workloads moved to the cloud without proper consideration of how they should be re-engineered to take advantage of these platforms. While Lift and Shift can work for some workloads, the practice is far too common and is contributing to the slowing of cloud adoption.
Speaking of the Hold ring, we tend to get a lot of techniques in there. Two that stand out are SAFe and Big Data Envy. The TAB spent several meetings trying to convince me to put SAFe on Hold, but I resisted: we do try to turn any Hold proposals into something positive that can be blipped in its place. However, after seeing so many instances where SAFe was not implemented in a way that delivered value, I relented.
The final entry in the hit category I want to discuss is Big Data Envy. Too many people wanted to jump on the Big Data bandwagon without a genuine need for it, or without understanding what they could achieve with it. Big Data Envy is part of our “Envy” family, which also includes Web Scale Envy (few organizations really need to get to Google’s scale) and Microservices Envy. For the latter, we saw Microservices hyped to the point where organizations would try moving to them without the right level of maturity, as Martin described in his post.
Enough with the hits; let’s get on to the misses, an impressive, and hopefully amusing, collection.
Greatest misses (The Radar blooper reel)
As with the hits, the misses fall into themes. For some, we simply lost the argument with the industry. For others, we completely missed the mark. For still others, the recommendations rested on data points that were too isolated or came too early. Some were technologies that were, in some ways, too easy to misuse, and we weren’t always as cautionary as we could have been. While we source the inputs to the Radar from our global community, and the group that selects the blips is global, we do sometimes get it wrong. Sometimes, the reasons are pretty amusing.
In the first year of the Radar, we put Azure on Hold. At the time, the strategy for Azure was not nearly as well developed as that of AWS, and we didn’t see from Microsoft a commitment to building a platform for enterprise application development. Obviously, this prediction turned out to be quite wrong. We moved Azure to Assess in the next Radar, and it made it into Trial in 2018. Each of the major cloud vendors is now roughly comparable in terms of base features, and each has particular areas of specialization. Azure as a platform is appealing to many organizations, although AWS is still dominant and Google is still a significant contender. In hindsight, we were premature in our assessment of Azure.
Putting Feature Branching on Hold is an area where we have simply lost the overall argument with the industry. We continue to believe that trunk-based development is a more effective approach, and that short-lived branches certainly have their place. However, the industry continues to use Feature Branching more than we would like. I guess we can’t win them all.
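For readers unfamiliar with the distinction, the trunk-based workflow can be sketched with a few git commands. This is a minimal illustration, not a prescription: the repository, branch, and file names are hypothetical, and it assumes a reasonably recent git (2.28+ for `init -b`). The key idea is that any branch lives for hours or days, not weeks, and is merged back to the trunk and deleted quickly.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"

# A trunk (here called main) that everyone integrates into frequently.
git init -q -b main
git config user.email "dev@example.com"   # hypothetical identity for the sketch
git config user.name "Dev"
echo "v1" > app.txt
git add app.txt
git commit -qm "initial commit on trunk"

# A short-lived branch: one small, focused change, merged back promptly.
git switch -qc fix/typo
echo "v2" > app.txt
git commit -qam "small, focused change"

git switch -q main
git merge -q --no-ff -m "merge short-lived branch" fix/typo
git branch -qd fix/typo   # deleted immediately; the branch never lives long

git log --oneline   # three commits: initial, the change, and the merge
```

Contrast this with a long-lived feature branch that diverges from the trunk for weeks, accumulating merge conflicts and delaying integration feedback, which is the pattern the Hold blip cautioned against.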
There are two misses in the “Gee, we really blew that” category, demonstrating that even with a room full of people, things can get missed in a really big way.
The first exhibit in this category is the Windows Phone. We put it into Assess in 2012. We had several Thoughtworkers who were really excited about the Windows Phone; one even talked about accidentally throwing his iPhone into a fountain so he could replace it with a Windows Phone. Needless to say, this didn’t turn out to reflect what anyone else felt about the Windows Phone. We relied on too small a sample set for this blip. Yes, we can make ourselves feel better by saying “But it was only Assess”, but I have to call this one a miss, although not everyone agrees.
The second, more serious, miss also reflected blind spots in the group. We put Experience Design in Assess in 2012. Needless to say, we heard pretty quickly from our own design community that we’d really missed the mark. We had put an entire aspect of software delivery and product development in Assess, as if there were any question about the usefulness of that discipline in delivering products and services that serve the needs of the target customer. After that embarrassment, we began to source more widely from our Thoughtworks colleagues, rather than just talking to the developers.
The next miss belongs in the category of good intentions. In 2010, we put Iterative Data Warehouses in Assess and moved it to Trial in 2011. We knew that the traditional approaches to data warehouses were just not working, and we hoped that a more iterative approach would solve the problem. Perhaps it was a step on the path, but realistically, how we use data for business decision making and exploration has gone in a completely different direction. Some of this change is the result of the rise of the cloud; some of it is because of the enormous increase in the volume and variety of data. Still, while we had high hopes of improving the success rate of these projects, a more fundamental change was needed. We got there in the end, but it took us a while.
Another good intention that turned out wrong was our decision to put Java End of Life in Assess in 2010 and leave it there until 2011. Oracle had recently acquired Sun and the Java assets, and had decided to delay the release of the next version until it was ready, which was taking a long time. We put Java End of Life in Assess because we felt that organizations with a significant investment in the Java ecosystem needed to at least be thinking about what a migration path might look like. During this same period, C# had new features coming out and was, for a time, significantly ahead of Java from a language perspective. Obviously, we got this one wrong too. The Java releases began again, restoring the community’s faith in the viability of Java and the JVM. In a way it’s hard to call this a miss: given what we knew at the time, there was a real risk. However, in the end, we got it wrong, as Java and the JVM continue to be significant players.
So there you have it. In the first 10 years of the Radar, we got a lot of things right, but we also made a lot of mistakes. Along the way, we’ve learned a great deal about technology adoption and witnessed, along with all of you in the industry, a broadening of the scope of technologies we all have to be aware of. I can’t guarantee that we won’t make any more blunders, although I do hope they’ll at least be a different kind of blunder. I will say that, for the foreseeable future, we will continue to produce a Radar twice a year, despite things like global pandemics making it impossible for us to create the Radar in person. Here’s to another 10 years of the Thoughtworks Technology Radar.