Technology Radar
Volume 28

An opinionated guide to technology frontiers
 
The Radar plots blips across four quadrants (Techniques, Platforms, Tools, and Languages & Frameworks) and four rings: Adopt, Trial, Assess and Hold. Blips can be new to the volume or move between rings from a previous volume.

Download Technology Radar Volume 28


Themes for this volume

The meteoric rise of practical AI

 

No, this theme text wasn't written by ChatGPT. Artificial intelligence has been quietly bubbling away in specialized areas for decades, and tools like GitHub Copilot have been around (and gradually seeing adoption) for a few years. However, over the last few months, tools like ChatGPT have completely reoriented everyone to what's possible and made the tools widely available. Several blips in this edition of the Radar touch on practical uses of AI for projects that go beyond suggesting code that requires tweaking: AI-aided test-first development, using AI to help build analysis models, and many more. Just as spreadsheets freed accountants from recalculating complex worksheets by hand on adding machines, the next generation of AI will relieve technology workers, including developers, of tedious tasks that require knowledge (but not wisdom).

 

However, we caution against overuse and inappropriate use. Right now, AI models are capable of generating a good first draft, but the generated content always needs to be monitored by a human who can validate, moderate and use it responsibly. If these precautions are ignored, the results can pose reputational and security risks to organizations and users. Even some product demos caution users: "AI-generated content can contain mistakes. Make sure it's accurate and appropriate before using it."

Accessible accessibility

 

Accessibility has been an important consideration for organizations for many years. Recently, we've highlighted our teams' experiences with the ever-growing set of tools and techniques that bring improved accessibility to development, and our teams in several regions have raised awareness of these techniques through awareness campaigns. We've featured accessibility-related blips on continuous integration pipeline development, design playbooks, intelligent guided accessibility testing, linting and unit testing. Growing awareness around this important topic is welcome; techniques that give more people access to functionality in improved ways can only be a good thing.
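As one concrete way to fold accessibility checks into automated testing, the sketch below runs axe-core rules against a rendered page from a browser test. It is an illustration only, not a reference to any specific blip: it assumes the axe-selenium-python package and a locally available Firefox driver, and the URL is a placeholder.

```python
# Minimal sketch: automated accessibility check inside a browser test.
# Assumes the axe-selenium-python package and a local Firefox geckodriver;
# the URL below is a placeholder.
from selenium import webdriver
from axe_selenium_python import Axe


def test_home_page_has_no_accessibility_violations():
    driver = webdriver.Firefox()
    try:
        driver.get("https://example.com")  # placeholder URL
        axe = Axe(driver)
        axe.inject()                        # inject the axe-core script into the page
        results = axe.run()                 # run the accessibility rule checks against the DOM
        violations = results["violations"]
        assert len(violations) == 0, axe.report(violations)
    finally:
        driver.quit()
```

A check like this can run in a continuous integration pipeline alongside linting and unit tests, so regressions are caught before they reach users.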

Lambda quicksand

 

Serverless functions — AWS Lambdas — increasingly appear in the toolboxes of architects and developers, and are used for a wide variety of useful tasks that realize the benefits of cloud-based infrastructure. However, like many useful things, solutions sometimes start suitably simple but then, from relentless gradual success, keep evolving until they reach beyond the limitations inherent in the paradigm and sink into the sand under their own weight. While we see many successful applications of serverless-style solutions, we also hear many cautionary tales from our projects, such as the Lambda pinball antipattern. We also see more tools that appear to solve problems but are prone to wide misuse. For example, tools that facilitate sharing code between Lambdas or orchestrate complex interactions might solve a common simple problem but are then at risk of recreating some terrible architecture antipatterns with new building blocks. If you need a tool to manage code sharing and independent deployment across a collection of serverless functions, then perhaps it’s time to rethink the suitability of the approach. Like all technology solutions, serverless has suitable applications but many of its features include trade-offs that become more acute as the solution evolves.

Engineering rigor meets analytics and AI

 

We've long viewed "building in quality" as a vital aspect of developing reliable analytics and machine learning models. Test-driven transformations, data sanity tests and data model testing strengthen the data pipelines that power analytical systems. Model validation and quality assurance are crucial in tackling biases and ensuring ethical ML systems with equitable outcomes. By integrating these practices, businesses become better positioned to leverage AI and machine learning and forge responsible, data-driven solutions that cater to a diverse user base. The corresponding tooling ecosystem has continued to grow and mature. For example, Soda Core, a data quality tool, allows validation of data as it arrives in the system and automated monitoring checks for anomalies. Deepchecks brings continuous integration and model validation together, an important step in incorporating good engineering practices in analytics settings. Giskard enables quality assurance for AI models, helping designers detect bias and other negative facets of models, which aligns with our encouragement to tread ethical waters carefully when developing solutions with AI. We view these maturing tools as further evidence of the mainstreaming of analytics and machine learning and their integration with good engineering practices.
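To make the data quality checks concrete, here is a minimal sketch of a programmatic Soda Core scan run as a pipeline step. The data source name, configuration file and the checks themselves are illustrative placeholders, not recommendations for any particular dataset.

```python
# Minimal sketch: run Soda Core data quality checks programmatically.
# The data source name, configuration file and checks are placeholders.
from soda.scan import Scan

scan = Scan()
scan.set_data_source_name("warehouse")                 # placeholder data source
scan.add_configuration_yaml_file("configuration.yml")  # connection details
scan.add_sodacl_yaml_str("""
checks for orders:
  - row_count > 0                     # data actually arrived
  - missing_count(customer_id) = 0    # sanity check on a key column
  - duplicate_count(order_id) = 0     # no duplicated identifiers
""")

scan.execute()
scan.assert_no_checks_fail()  # fail the pipeline step if any check fails
```

Running such checks on every load is one way to treat data quality with the same rigor as application code in continuous integration.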

To declare or program?

 

A seemingly perpetual discussion that happens at every Radar gathering gained particular prominence this time — for a given task, should you write a declarative specification using JSON, YAML or something domain-specific like HCL, or should you write code in a general-purpose programming language? For example, we discussed the differences between Terraform Cloud Operator and Crossplane, whether or not to use the AWS CDK, and using Dagger to program a deployment pipeline, among other cases. Declarative specifications, while often easier to read and write, offer limited abstractions, which leads to repetitive code. Proper programming languages can use abstractions to avoid duplication, but these abstractions can make the code considerably harder to follow, especially when the abstractions are layered after years of changes. In our experience, there's no universal answer to the question posed above. Teams should consider both approaches, and when a solution proves difficult to implement cleanly in one language type, they should reevaluate the other type. It can even make sense to split concerns and implement them with different languages.
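As an illustration of the programming-language side of this trade-off, the sketch below uses the AWS CDK (v2, Python bindings) to create several similarly configured S3 buckets from a loop, an abstraction that a purely declarative template would express as repeated blocks. The stack and bucket names are illustrative placeholders.

```python
# Minimal sketch: infrastructure defined in a general-purpose language with the AWS CDK.
# A loop captures what a declarative template would repeat; names are illustrative.
import aws_cdk as cdk
from aws_cdk import aws_s3 as s3


class DataBucketsStack(cdk.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        # One abstraction instead of three near-identical declarative resource blocks
        for name in ["raw", "curated", "published"]:
            s3.Bucket(
                self,
                f"{name.capitalize()}Bucket",
                versioned=True,
                encryption=s3.BucketEncryption.S3_MANAGED,
                block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
            )


app = cdk.App()
DataBucketsStack(app, "DataBucketsStack")
app.synth()  # emits the equivalent declarative CloudFormation template
```

Synthesizing the app produces the equivalent declarative template, which makes it easy to compare the readability of the two forms for a given team and task.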

 

Contributors

 

The Technology Radar is prepared by the Thoughtworks Technology Advisory Board, comprising:

 

Rebecca Parsons (CTO) • Martin Fowler (Chief Scientist) • Bharani Subramaniam • Birgitta Böckeler • Brandon Byars • Camilla Falconi Crispim • Erik Doernenburg • Fausto de la Torre • Hao Xu • Ian Cartwright • James Lewis • Marisa Hoenig • Maya Ormaza • Mike Mason • Neal Ford • Pawan Shah • Scott Shaw • Selvakumar Natesan • Shangqi Liu • Sofia Tania • Vanya Seth


Visit our archive to read previous volumes