
Languages & Frameworks

Adopt

  • With Playwright you can write end-to-end tests that run in Chrome, Firefox and WebKit. By using the Chrome DevTools Protocol (CDP), Playwright can offer new features and eliminate many of the issues seen with WebDriver. Chromium-based browsers implement CDP directly; to support Firefox and WebKit, though, the Playwright team has to submit patches to these browsers, which may sometimes limit the framework.

    Playwright’s many features include built-in auto-waits, which result in tests that are more reliable and easier to understand; browser contexts, which let you test that persisting sessions across tabs works properly; and the ability to simulate notifications, geolocation and dark mode settings. Our teams are impressed with the stability Playwright brings to the test suite and like that they get feedback more quickly by running tests in parallel. Other features that set Playwright apart include better support for lazy loading and tracing. Although Playwright has some limitations — component support is currently experimental, for example — our teams consider it the go-to test framework and in some cases are migrating away from Cypress and Puppeteer.
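
    To give a sense of how auto-waiting and browser contexts look in practice, here is a minimal sketch using Playwright's Python bindings; the URL, labels and expected text are illustrative placeholders rather than a real test.

    ```python
    # Minimal sketch of Playwright's auto-waiting and browser contexts
    # (Python bindings); the URL, locators and assertions are placeholders.
    from playwright.sync_api import sync_playwright, expect

    with sync_playwright() as p:
        browser = p.chromium.launch()

        # Each browser context is an isolated, incognito-like session, so a test
        # can verify that a persisted session spans multiple tabs.
        context = browser.new_context()
        page = context.new_page()
        page.goto("https://example.com/login")

        # Locator actions auto-wait until the element is actionable --
        # no explicit sleeps or polling required.
        page.get_by_label("Email").fill("user@example.com")
        page.get_by_role("button", name="Sign in").click()

        # Web-first assertions retry until the condition holds or times out.
        expect(page.get_by_text("Welcome")).to_be_visible()

        # A second tab in the same context shares cookies and storage.
        second_tab = context.new_page()
        second_tab.goto("https://example.com/dashboard")
        expect(second_tab.get_by_text("Welcome")).to_be_visible()

        browser.close()
    ```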

Trial

  • ASP.NET Core MVC has proven to be a powerful and flexible approach for building web applications that host APIs. However, its flexibility brings a certain amount of complexity with it, including boilerplate and conventions that aren't always obvious. The routing provided by ASP.NET allows multiple services to be hosted in a single application, but in today's world of serverless functions and independently deployable microservices, that flexibility might be overkill. .NET Minimal APIs provide a simple approach to implementing a single-API web application in the .NET ecosystem. The Minimal API framework can implement an API endpoint with just a few lines of code. Minimal API joins the new generation of API frameworks — including Micronaut, Quarkus and Helidon — which are optimized for lightweight deployments and fast startup times. We're interested in the combination of Minimal APIs and .NET 7 Native AOT for implementing simple, lightweight microservices in serverless functions.

  • Ajv is a popular JavaScript library used to validate a data object against a structure defined using a JSON Schema. Ajv is fast and flexible for validating complex data types. It supports a wide range of schema features, including custom keywords and formats. It's used by many open-source JavaScript applications and libraries. Our teams have used Ajv for implementing consumer-driven contract testing in CI workflows, and, combined with other tools for generating mock data from a JSON Schema, it's very powerful. In the TypeScript world, Zod is a popular alternative that has a declarative API for defining the schema and validating the data.

  • Armeria is an open-source framework for building microservices. Our teams have used it to build asynchronous APIs, and we quite like its approach of addressing cross-cutting concerns, such as distributed tracing or circuit breakers, via service decorators. Among other clever design choices, the framework can serve both gRPC and REST traffic on the same port. With Armeria, we can incrementally add new features in gRPC on top of an existing codebase in REST or vice versa. Overall, we find Armeria to be a flexible microservices framework with several out-of-the-box integrations.

  • The AWS Serverless Application Model (SAM) is an open-source framework for building serverless applications on the AWS Cloud infrastructure. Previously we blipped Serverless Framework as a popular framework for deploying serverless services on various cloud providers, primarily AWS Lambda-based services. AWS SAM has gained popularity in recent times as the framework has come a long way from its early days. Our teams have found it very easy to set up, and they also use it for testing and debugging AWS Lambda-based services, including local executions of Lambdas for development.

  • Dart is a programming language developed by Google that supports building apps targeting multiple platforms, including web browsers, WebAssembly, desktop and mobile apps. Its adoption has been driven by the dominance of Flutter — a popular, multi-platform UI toolkit powered by Dart — in the cross-platform native mobile app framework space. In response to community feedback, Dart has evolved since its initial versions and has added built-in sound null safety in version three, in addition to a robust type system. Furthermore, Dart's ecosystem is growing rapidly, with a vibrant community and a wide range of available libraries and tools, making it attractive for developers.

  • fast-check is a property-based testing tool for JavaScript and TypeScript, capable of automatically generating test data so a wide range of inputs can be explored without creating separate tests. This makes it easier to uncover edge scenarios. Our teams are reporting good results using fast-check in back-end testing due to its good documentation, ease of use and seamless integration with existing testing frameworks, which enhances unit testing efficiency.

  • Five years ago we moved Kotlin into the Adopt ring, and today many of our teams report that Kotlin is not only their default choice on the JVM but that it has displaced Java almost completely in the software they write. At the same time, microservice envy appears to be fading — we’ve noticed people starting to explore architectures with larger deployable units, using frameworks like Spring Modulith among others. We're aware of many good Kotlin-native frameworks and have mentioned some of them previously; however, in some cases, the maturity and feature-richness of the Spring framework is a real asset, and we've been using Kotlin with Spring successfully, usually with no or only minor issues.

  • Mockery is a mature Golang library that generates mock implementations of interfaces, making it easy to simulate the behavior of external dependencies. With type-safe methods to generate call expectations and flexible ways to mock return values, it lets tests focus on the business logic rather than worrying about the correctness of external dependencies. Mockery uses Go generators and simplifies the generation and management of mocks in the test suite.

  • We mostly use GraphQL for server-side resource aggregation and have implemented the server side using a variety of technologies. For services written with Spring Boot, our teams have had good experiences with Netflix DGS. It's built on top of graphql-java and provides features and abstractions in the Spring Boot programming model that make it easy to implement GraphQL endpoints and integrate with Spring features such as Spring Security. Although it's written in Kotlin, DGS works equally well with Java.

  • We've been using OpenTelemetry for a while now and recommended trying it in previous editions. Its ability to seamlessly capture, instrument and manage telemetry data across various services and applications has improved our observability stack. OpenTelemetry's flexibility and compatibility with diverse environments have made it a valuable addition to our toolkit. We're now particularly curious about the recent release of the OpenTelemetry Protocol (OTLP) specification, which covers both gRPC and HTTP transports. This protocol standardizes the format and transmission of telemetry data, promoting interoperability and simplifying integrations with other monitoring and analysis tools. As we continue to explore the integration potential of the protocol, we're evaluating its long-term impact on our monitoring and observability strategy and on the general monitoring landscape.
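
    As a rough sketch of what exporting traces over OTLP can look like, the example below configures the OpenTelemetry Python SDK with the gRPC OTLP exporter; the service name, span and collector endpoint are placeholders.

    ```python
    # Minimal sketch: emit a span and export it over OTLP/gRPC using the
    # OpenTelemetry Python SDK (requires the opentelemetry-sdk and
    # opentelemetry-exporter-otlp packages). Endpoint and names are placeholders.
    from opentelemetry import trace
    from opentelemetry.sdk.resources import Resource
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor
    from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

    provider = TracerProvider(resource=Resource.create({"service.name": "checkout"}))
    provider.add_span_processor(
        BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317"))
    )
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("checkout")
    with tracer.start_as_current_span("place-order") as span:
        span.set_attribute("order.id", "42")  # example attribute on the span
    ```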

  • Polars is an in-memory data frame library implemented in Rust. Unlike other data frames (such as pandas), Polars is multithreaded, supports lazy execution and is safe for parallel operations. The in-memory data is organized in the Apache Arrow format for efficient analytic operations and to enable interoperability with other tools. If you're familiar with pandas, you can quickly get started with Polars' Python bindings. We believe Polars, with its Rust implementation and Python bindings, is a performant in-memory data frame library for your analytical needs. Our teams continue to have a good experience with Polars, which is why we're moving it to Trial.
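
    For a flavor of the API, here is a minimal sketch of a lazy query using Polars' Python bindings; the file, columns and aggregation are made up, and method names such as group_by can vary slightly between versions.

    ```python
    # Minimal sketch of Polars' lazy API (Python bindings); the CSV file and
    # column names are illustrative placeholders.
    import polars as pl

    result = (
        pl.scan_csv("orders.csv")               # lazy: nothing is read yet
        .filter(pl.col("status") == "shipped")  # predicates are pushed down by the optimizer
        .group_by("region")
        .agg(pl.col("amount").sum().alias("total_amount"))
        .collect()                              # the optimized plan executes here, in parallel
    )
    print(result)
    ```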

  • Pushpin is a reverse proxy that acts as an intermediary between clients and back-end servers handling long-lived connections, such as WebSockets and Server-Sent Events. It provides a way to terminate long-lived connections from clients, which means the rest of the system can be shielded from this complexity. It effectively manages a large number of persistent connections and automatically distributes them across multiple back-end servers, optimizing performance and reliability. Our experience using it for real-time communication with mobile devices over WebSockets has been good, and we've been able to scale it horizontally to millions of devices.

  • Snowpark is a library for querying and processing data at scale in Snowflake. Our teams use it for writing manageable code for interacting with data residing in Snowflake — it's akin to writing Spark code but for Snowflake. Ultimately, it's an engine that translates code into SQL that Snowflake understands. You can build applications that process data in Snowflake without moving data to the system where your application code runs. One drawback: unit testing support is suboptimal; our teams compensate for that by writing other types of tests.
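
    As a rough illustration of the programming model, the sketch below uses Snowpark for Python to build a DataFrame pipeline that Snowflake translates into SQL and executes in place; the connection parameters, table and column names are placeholders.

    ```python
    # Minimal sketch using Snowpark for Python; connection parameters, table and
    # column names are illustrative placeholders.
    from snowflake.snowpark import Session
    from snowflake.snowpark.functions import col, sum as sum_

    session = Session.builder.configs({
        "account": "<account>",
        "user": "<user>",
        "password": "<password>",
        "warehouse": "<warehouse>",
        "database": "<database>",
        "schema": "<schema>",
    }).create()

    # These DataFrame operations are translated into SQL and pushed down to
    # Snowflake -- no data leaves the platform until results are requested.
    shipped_totals = (
        session.table("ORDERS")
        .filter(col("STATUS") == "SHIPPED")
        .group_by("REGION")
        .agg(sum_(col("AMOUNT")).alias("TOTAL_AMOUNT"))
    )
    shipped_totals.show()
    ```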

Assess

  • Baseline Profiles — not to be confused with Android Baseline profiles — are Android Runtime profiles that guide ahead-of-time compilation. They're created once per release on a development machine and are shipped with the application, making them available faster than relying on Cloud Profiles, an older, related technology. The runtime uses the Baseline Profile in an app or library to optimize important code paths, which improves the experience for new and existing users when the app is downloaded or updated. Creating Baseline Profiles is relatively straightforward and, according to the documentation, can lead to significant (up to 30%) performance boosts.

  • GGML is a C library for machine learning that allows for CPU inferencing. It defines a binary format for distributing large language models (LLMs). To do that, it uses quantization, a technique that allows LLMs to run on consumer hardware with effective CPU inferencing. GGML supports a number of different quantization strategies (e.g., 4-bit, 5-bit and 8-bit quantization), each of which offers different trade-offs between efficiency and performance. A quick way to test, run and build apps with these quantized models is a Python binding called C Transformers. This is a Python wrapper on top of GGML that takes away the boilerplate code for inferencing by providing a high-level API. We've leveraged these libraries to build proofs of concept and experiments. If you're considering self-hosted LLMs, carefully assess these community-supported libraries for your organization.
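
    To show how little code CPU inferencing over a quantized model can take, here is a minimal sketch using the C Transformers binding; the model file and model_type are placeholders, and argument names may differ between versions.

    ```python
    # Minimal sketch of CPU inferencing over a GGML-quantized model via the
    # C Transformers Python binding; the model file path is a placeholder.
    from ctransformers import AutoModelForCausalLM

    llm = AutoModelForCausalLM.from_pretrained(
        "path/to/ggml-model-q4_0.bin",  # placeholder: a 4-bit quantized GGML model
        model_type="llama",             # tells the library which architecture to load
    )

    print(llm("Summarize the benefits of quantization in one sentence:"))
    ```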

  • GPTCache is a semantic cache library for large language models (LLMs). We see the need for a cache layer in front of LLMs for two main reasons — to improve overall performance by reducing external API calls and to reduce the cost of operation by caching similar responses. Unlike traditional caching approaches that look for exact matches, LLM-based caching solutions require similar or related matches for the input queries. GPTCache approaches this with the help of embedding algorithms to convert the input queries into embeddings and then uses a vector datastore for similarity search on these embeddings. One drawback of such a design is that you may encounter false positives during cache hits or false negatives during cache misses, which is why we recommend you carefully assess GPTCache for your LLM-based applications.
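
    To illustrate the idea rather than GPTCache's actual API, here is a conceptual sketch of a semantic cache: embed the query, search for a sufficiently similar cached query and only call the model on a miss. The embed and call_llm functions and the threshold are hypothetical.

    ```python
    # Conceptual sketch of semantic caching (the idea behind GPTCache), not
    # GPTCache's actual API. embed(), call_llm() and the threshold are
    # hypothetical placeholders supplied by the caller.
    import numpy as np

    cache: list[tuple[np.ndarray, str]] = []  # (query embedding, cached response)
    SIMILARITY_THRESHOLD = 0.9                # tuning this trades false positives for misses

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def cached_completion(query: str, embed, call_llm) -> str:
        query_embedding = embed(query)
        for embedding, response in cache:
            if cosine(embedding, query_embedding) >= SIMILARITY_THRESHOLD:
                return response               # semantic cache hit (possibly a false positive)
        response = call_llm(query)            # cache miss: call the model
        cache.append((query_embedding, response))
        return response
    ```

    In practice the linear scan would be replaced by a vector datastore's similarity search, and the threshold choice directly drives the false positive/negative trade-off described above.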

  • In many languages, gender is more visible than in English, and words change based on gender. Addressing users, for example, can require words to be inflected, but it is common practice to use the masculine form by default. There is evidence that this has a negative impact on performance and attitude — and, of course, it's impolite. Workarounds using gender-neutral language can often feel clumsy. Addressing users correctly, then, is the preferred option, and with the Grammatical Inflection API, introduced in Android 14, Android developers now have an easier path to do so.

  • htmx is a small, neat HTML UI library that recently became popular seemingly out of nowhere. During our Radar discussion, we found that its predecessor, intercooler.js, has been around for ten years. Unlike other increasingly complex pre-compiled JavaScript/TypeScript frameworks, htmx encourages the direct use of HTML attributes to access operations such as AJAX, CSS transitions, WebSockets and Server-Sent Events. There's nothing technically sophisticated about htmx, but its popularity recalls the simplicity of hypertext in the early days of the web. The project’s website also features some insightful (and amusing) essays on hypermedia and web development, which suggests the team behind htmx has thought carefully about its purpose and philosophy.

  • Kotlin Kover is a code coverage tool set designed specifically for Kotlin, supporting Kotlin JVM, Multiplatform and Android projects. The significance of code coverage lies in its ability to spotlight untested segments, which reinforces software reliability. As Kover evolves, it stands out because of its ability to produce comprehensive HTML and XML reports, coupled with unmatched precision tailored to Kotlin. For teams deeply rooted in Kotlin, we advise you to assess Kover to leverage its potential in enhancing code quality.

  • LangChain is a framework for building applications with large language models (LLMs). To build practical LLM products, you need to combine them with user- or domain-specific data that wasn’t part of their training. LangChain fills this niche with features like prompt management, chaining, agents and document loaders. The benefit of components like prompt templates and document loaders is that they can speed up your time to market. Although it's a popular choice for implementing Retrieval-Augmented Generation applications and the ReAct prompting pattern, LangChain has been criticized for being hard to use and overcomplicated. When choosing a tech stack for your LLM application, you may want to keep looking for similar frameworks — like Semantic Kernel — in this fast-evolving space.
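
    As a minimal sketch of prompt templates and chaining, the example below uses a 2023-era LangChain API; module paths and class names have shifted between releases, and the prompt and inputs are made up.

    ```python
    # Minimal sketch of a LangChain prompt template and chain (2023-era API);
    # assumes OPENAI_API_KEY is set in the environment.
    from langchain.chat_models import ChatOpenAI
    from langchain.prompts import PromptTemplate
    from langchain.chains import LLMChain

    prompt = PromptTemplate.from_template(
        "Answer the question using only the context below.\n"
        "Context: {context}\nQuestion: {question}"
    )
    chain = LLMChain(llm=ChatOpenAI(), prompt=prompt)

    answer = chain.run(
        context="Our returns window is 30 days.",
        question="How long do customers have to return an item?",
    )
    print(answer)
    ```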

  • LlamaIndex is a data framework designed to facilitate the integration of private or domain-specific data with large language models (LLMs). It offers tools for ingesting data from diverse sources — including APIs, databases and PDFs — and structures this data into a format that LLMs can easily consume. Through various types of "engines," LlamaIndex enables natural language interactions with this structured data, making it accessible for applications ranging from query-based retrieval to conversational interfaces. Similar to LangChain, LlamaIndex’s goal is to accelerate development with LLMs, but it takes more of a data framework approach.
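
    Here is a minimal sketch of query-based retrieval with LlamaIndex, assuming a 0.x-era API and a placeholder ./docs directory of private documents; names may differ in newer releases.

    ```python
    # Minimal sketch of ingestion and query-based retrieval with LlamaIndex
    # (0.x-era API); assumes an OpenAI API key in the environment for the
    # default LLM and embedding model, and a placeholder ./docs directory.
    from llama_index import SimpleDirectoryReader, VectorStoreIndex

    documents = SimpleDirectoryReader("./docs").load_data()  # ingest private data
    index = VectorStoreIndex.from_documents(documents)       # structure it for the LLM

    query_engine = index.as_query_engine()                   # one of several "engines"
    print(query_engine.query("What does our refund policy say?"))
    ```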

  • promptfoo enables test-driven prompt engineering. While integrating LLMs in applications, tuning of the prompts to produce optimal, consistent outputs can be time-consuming. You can use promptfoo both as a CLI and a library to systematically test prompts against predefined test cases. The test case, along with assertions, can be set up in a simple YAML config file. This config includes the prompts being tested, the model provider, the assertions and the variable values that will be substituted in the prompts. promptfoo supports many assertions, including checking for equality, JSON structure, similarity, custom functions or even using an LLM to grade the model outputs. If you're looking to automate feedback on prompt and model quality, do assess promptfoo.

  • Semantic Kernel is the open-source version of one of the core components in Microsoft's Copilot suite of products. It's a Python library that helps you build applications on top of large language models (LLMs), similar to LangChain. Semantic Kernel's core concept is its planner, which lets you build LLM-powered agents that create a plan for a user and then execute it step by step with the help of plugins.

  • Although we were early advocates for microservices and have seen the pattern used successfully on myriad systems, we've also seen microservices misapplied and abused, often as the result of microservice envy. Rather than start a new system with a collection of separately deployed processes, it's often advisable to start with a well-factored monolith and only break out separately deployable units when the application reaches a scale where the benefits of microservices outweigh the additional complexity inherent in distributed systems. Recently we've seen a resurgence of interest in this approach and a more detailed definition of what, exactly, constitutes a well-factored monolith. Spring Modulith is a framework that helps you structure your code in a way that makes it easier to break out microservices when the time is right. It provides a way to modularize your code so that the logical concepts of domains and bounded contexts are aligned with the physical concepts of files and package structure. This alignment makes it easier to refactor the monolith when necessary and to test domains in isolation. Spring Modulith provides an in-process eventing mechanism that helps further decouple modules within a single application. Best of all, it integrates with ArchUnit and jMolecules to automate verification of its domain-driven design rules.

Hold

 

Unable to find something you expected to see?

 

Each edition of the Radar features blips reflecting what we came across during the previous six months. We might have covered what you are looking for on a previous Radar already. We sometimes cull things just because there are too many to talk about. A blip might also be missing because the Radar reflects our experience; it is not based on a comprehensive market analysis.

