Volume 30 | April 2024

Platforms

Adopt

  • Events are a common mechanism in event-driven architectures and serverless applications. However, producers and cloud providers tend to describe them in different formats, which prevents interoperability across platforms and infrastructures. CloudEvents is a specification for describing event data in common formats to provide interoperability across services, platforms and systems. It provides SDKs in multiple languages so you can embed the spec into your application or toolchain. Our teams use it not only for cross-cloud platform purposes but also for domain event specification, among other scenarios. CloudEvents is hosted by the Cloud Native Computing Foundation (CNCF) and is now a graduated project. Our teams default to using CloudEvents for building event-driven architectures, and for that reason we're moving it to Adopt.
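
To give a flavour of the spec, here is a minimal sketch using the official Python SDK; the event type, source and payload are made-up examples rather than anything prescribed by CloudEvents.

```python
from cloudevents.http import CloudEvent, to_structured

# Required context attributes: "type" identifies the kind of event,
# "source" identifies the producer (both values here are illustrative)
attributes = {
    "type": "com.example.order.created",
    "source": "https://example.com/orders",
}
data = {"order_id": "1234", "amount": 42.0}

event = CloudEvent(attributes, data)

# Serialize into the structured JSON format; the resulting headers and body
# can be POSTed to any CloudEvents-aware consumer over HTTP
headers, body = to_structured(event)
print(headers, body)
```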

Trial

  • Arm compute instances in the cloud have become increasingly popular in recent years due to their cost and energy efficiency compared to traditional x86-based instances. Many cloud providers now offer Arm-based instances, including AWS, Azure and GCP. The cost advantages of running Arm in the cloud can be particularly attractive for businesses that run large workloads or need to scale. We're seeing many of our teams move workloads such as JVM services and even databases (including RDS) to Arm instances without any change in the code and with only minimal changes to the build scripts. New cloud-based applications and systems increasingly default to Arm in the cloud. Based on our experiences, we recommend Arm compute instances for all workloads unless there are architecture-specific dependencies. The tooling to support multiple architectures, such as multi-arch Docker images, also simplifies build and deploy workflows.

  • Azure Container Apps is a managed Kubernetes application platform that streamlines the deployment of containerized workloads. In comparison to Azure Kubernetes Service (AKS), the operational and administrative burden of running containerized applications is reduced, but this comes at the expense of some flexibility and control, which is a trade-off teams need to consider. Another product in this area, Azure Container Instances, is usually too limited for production use. Our teams started using Azure Container Apps last year, when it was still in public preview, with good results, even when running large containers. Now that it is generally available, we’re considering it for more use cases. Both Dapr and the KEDA Autoscaler are supported.

  • Azure OpenAI Service provides access to OpenAI's GPT-4, GPT-35-Turbo, Embeddings and DALL-E models, among others, through a REST API, a Python SDK and a web-based interface. The models can be adapted to tasks such as content generation, summarization, semantic search and translating natural language to code. Fine-tuning is also available via few-shot learning and the customization of hyperparameters. In comparison to OpenAI's own API, Azure OpenAI Service benefits from Azure's enterprise-grade security and compliance features, is available in more regions (although availability is limited within each of the larger geographic regions) and supports private networking, content filtering and manual model version control. For these reasons and our positive experience with it, we recommend that enterprises already using Azure consider using Azure OpenAI Service instead of the OpenAI API; a minimal sketch of calling it through the Python SDK appears after this list.

  • When you build data products using data product thinking, it's essential to consider data lineage, data discoverability and data governance. Our teams have found that DataHub provides particularly useful support here. Although earlier versions of DataHub required you to maintain a fork and keep it in sync with the main product if you needed to update the metadata model, improvements in recent releases allow our teams to implement custom metadata models through a plugin-based architecture. Another useful feature of DataHub is its robust end-to-end data lineage from source to processing to consumption. DataHub supports both push-based integration and pull-based lineage extraction that automatically crawls technical metadata across data sources, schedulers, orchestrators (scanning the Airflow DAG), processing pipeline tasks and dashboards, to name a few. As an open-source option for a holistic data catalog, DataHub is emerging as a default choice for our teams.

  • In-house infrastructure orchestration codebases frequently become a time sink to maintain and troubleshoot. Infrastructure orchestration platforms are appearing, promising to standardize and productize various aspects of infrastructure code delivery and deployment workflows. These include build tools like Terragrunt and Terraspace, services from IaC tool vendors such as Terraform Cloud and Pulumi Cloud as well as tool-agnostic platforms and services like env0 and Spacelift. There is a rich ecosystem of Terraform-specific orchestration tools and services, often called TACOS (Terraform Automation and Collaboration Software), including Atlantis, Digger, Scalr, Terramate and Terrateam. Each of these platforms enables different workflows, including GitOps, Continuous Delivery and compliance as code. We welcome the growth of solutions in this space. We recommend infrastructure and platform engineering teams explore how to use them to reduce the amount of non-differentiating custom code they need to develop and maintain their infrastructure. Standardization of how infrastructure code is structured, shared, delivered and deployed should also create opportunities for the emergence of an ecosystem of compatible tools for testing, measuring and monitoring infrastructure.

  • Tooling in the infrastructure-as-code space continues to evolve, and we're pleased to see that Pulumi is no exception to this trend. The platform recently added support for Java and YAML, for managing infrastructure at scale and for a multitude of cloud configurations and integrations, making it even more compelling. For our teams, it's still the main alternative to Terraform for developing code for multiple cloud platforms. A short sketch of a Pulumi program appears after this list.

  • Changes in licensing for Docker Desktop have left us scrambling for alternatives for running a fleet of containers on a developer's local laptop. Recently, we've had good success with Rancher Desktop. This free and open-source app is relatively easy to download and install on Apple, Windows or Linux machines and provides a handy local Kubernetes cluster with a GUI for configuration and monitoring. Although Colima has become our Docker Desktop alternative of choice, it's primarily a CLI tool. In contrast, Rancher Desktop will appeal to those who don't want to give up the graphical interface that Docker Desktop provides. Like Colima, Rancher Desktop allows you to choose between dockerd and containerd as the underlying container runtime. Choosing containerd directly frees you from the Docker CLI, but the dockerd option provides compatibility with other tools that depend on it to communicate with the runtime daemon.

  • Weights & Biases is a machine learning (ML) platform for building models faster through experiment tracking, dataset versioning, model performance visualization and model management. It can be integrated with existing ML code to stream live metrics, terminal logs and system statistics to the dashboard for further analysis; a minimal instrumentation sketch appears after this list. Recently, Weights & Biases has expanded into LLM observability with Traces. Traces visualizes the execution flow of prompt chains as well as intermediate inputs/outputs and provides metadata around chain execution (such as tokens used and start and end time). Our teams find it useful for debugging and for gaining a deeper understanding of the chain architecture.
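
As referenced in the Azure OpenAI Service blip above, here is a minimal sketch of calling a chat-completions deployment through the Python SDK. The endpoint, API key, API version and deployment name are placeholders you would replace with your own values.

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder endpoint
    api_key="<your-api-key>",                                   # or wire up Azure AD auth instead
    api_version="2024-02-01",                                   # pick the API version you target
)

response = client.chat.completions.create(
    model="my-gpt-4-deployment",  # the *deployment* name you created, not the raw model name
    messages=[{"role": "user", "content": "Summarise this incident report: ..."}],
)
print(response.choices[0].message.content)
```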
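
For the Pulumi blip above, the recent additions are Java and YAML support, but a minimal Python sketch gives the flavour of its general-purpose-language approach; the resource name and tags are illustrative, and the program is meant to be run via the Pulumi CLI against an AWS-configured stack.

```python
import pulumi
import pulumi_aws as aws

# "radar-artifacts" is a logical name; Pulumi derives the physical bucket name from it
bucket = aws.s3.Bucket("radar-artifacts", tags={"team": "platform"})

# Expose the generated bucket name as a stack output (visible after `pulumi up`)
pulumi.export("bucket_name", bucket.id)
```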
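
And for the Weights & Biases blip, a minimal sketch of instrumenting a training loop with the Python SDK; the project name, config values and metrics are made up, with random numbers standing in for a real training step.

```python
import random

import wandb

# Start a run; the config values are versioned alongside the logged metrics
run = wandb.init(project="churn-model", config={"lr": 1e-3, "epochs": 5})

for epoch in range(run.config.epochs):
    # Stand-in for a real training step
    train_loss, val_auc = random.random(), random.random()
    wandb.log({"epoch": epoch, "train_loss": train_loss, "val_auc": val_auc})

run.finish()
```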

Assess

  • Bun is a new JavaScript runtime, similar to Node.js or Deno. Unlike Node.js or Deno, however, Bun is built on WebKit's JavaScriptCore instead of Chrome's V8 engine. Designed as a drop-in replacement for Node.js, Bun is a single binary (written in Zig) that acts as a bundler, transpiler and package manager for JavaScript and TypeScript applications. Since our last volume, Bun has gone from beta to a stable 1.0 release. Bun has been built from the ground up with several optimizations — including fast startup, improved server-side rendering and a much faster alternative package manager — and we encourage you to assess it as your JavaScript runtime.

  • When managing distributed architectures, accounting for the cost of sorting, indexing and accessing data is as critical as observability. Chronosphere takes a unique approach to cost management, tracking the use of observability data so that organizations can consider the cost-value trade-offs of various metrics. With the help of the Metrics Usage Analyzer, part of the Chronosphere Control Plane, teams can identify and exclude metrics they rarely (or never) use, thus yielding significant cost savings by reducing the amount of data organizations have to comb through. Given these advantages, as well as the ability of Chronosphere to match the functionality of other observability tools for cloud-hosted solutions, we believe it to be a compelling option for organizations to look into.

  • With data mesh adoption on the rise, our teams have been on the lookout for data platforms that treat data products as a first-class entity. DataOS is one such product. It provides end-to-end lifecycle management to design, build, deploy and evolve data products. It offers standardized declarative specs written in YAML that abstract away the low-level complexity of infrastructure setup and allow developers to define data products easily via the CLI/API. It supports access control policies with ABAC as well as data policies for filtering and masking data. Another notable feature is its ability to federate data across a variety of data sources, which reduces data duplication and the movement of data to a central place. DataOS fits best in greenfield scenarios, where it does the heavy lifting by providing an out-of-the-box solution for data governance, data discoverability, infrastructure resource management and observability. In brownfield scenarios, the ability to orchestrate resources outside of DataOS (for example, data stacks like Databricks) is in its nascent stage and still evolving. If your ecosystem isn't strongly opinionated about data tooling, DataOS is a good way to expedite your journey toward building, deploying and consuming data products in an end-to-end fashion.

  • Dify is a UI-driven platform for developing large language model (LLM) applications that makes prototyping them even more accessible. It supports the development of chat and text generation apps with prompt templates. Additionally, Dify supports retrieval-augmented generation (RAG) with imported data sets and can work with multiple models. We’re excited about this category of applications. Based on our experience, however, Dify is not quite ready for prime time yet, because some features are buggy or don't seem fully fleshed out. At the moment, though, we’re not aware of a competitor that is better.

  • Although vector databases have been gaining popularity for retrieval-augmented generation (RAG) use cases, research and experience reports suggest that combining traditional full-text search with vector search (into a hybrid search) can yield superior results. Through the Elasticsearch Relevance Engine (ESRE), the well-established full-text search platform Elasticsearch supports built-in and custom embedding models, vector search and hybrid search with ranking mechanisms such as Reciprocal Rank Fusion. Even though this space is still maturing, in our experience, using these ESRE features along with the traditional filtering, sorting and ranking capabilities that come with Elasticsearch has yielded promising results, suggesting that established search platforms with semantic search support shouldn't be passed over. A rough sketch of a hybrid query appears after this list.

  • Cloud and SaaS billing data can be complex, inconsistent among providers and difficult to understand. The FinOps Open Cost and Usage Specification (FOCUS) aims to reduce this friction with a spec containing a set of terminologies (aligned with the FinOps framework), a schema and a minimum set of requirements for billing data. The spec is intended to support use cases common to a variety of FinOps practitioners. Although still in the early stages of development and adoption, it’s worth watching because, with growing industry adoption, FOCUS will make it easier for platforms and end users to get a holistic view of cloud spend across a long tail of cloud and SaaS providers.

  • Google's Gemini is a family of foundational LLMs designed to run on a wide range of hardware, from data centers to mobile phones. Gemini Nano has been specifically optimized and scaled down to run on mobile silicon accelerators. It enables capabilities such as high-quality text summarization, contextual smart replies and advanced grammar correction. For example, the language understanding of Gemini Nano enables the Pixel 8 Pro to summarize content in the Recorder app. Running on-device removes many of the latency and privacy concerns associated with cloud-based systems and allows the features to work without a network connection. Android AICore simplifies the integration of the model into Android apps, but only a few devices are supported at the time of writing.

  • HyperDX is an open-source observability platform that unifies all three pillars of observability: logs, metrics and traces. With it, you can correlate telemetry end to end and go from a browser session replay to logs and traces in just a few clicks. The platform leverages ClickHouse as a central data store for all telemetry data, and it scales to aggregate log patterns and condense billions of events into distinct clusters. Although you can choose from several observability platforms, we want to highlight HyperDX for its unified developer experience.

  • IcePanel facilitates collaborative architectural modeling and diagramming using the C4 model, which allows technical and business stakeholders to zoom in to the level of technical detail they need. It supports modeling architecture objects whose metadata and connections can be reused across diagrams, along with the visualization of flows between those objects. Versioning and tagging allow collaborators to model different architecture states (e.g., as-is versus to-be) and track user-defined classifications of various parts of the architecture. We're keeping an eye on IcePanel for its potential to improve architecture collaboration, particularly for organizations with complex architectures. For an alternative that better supports diagrams as code, check out Structurizr.

  • Langfuse is an engineering platform for observability, testing and monitoring of large language model (LLM) applications. It offers SDKs for Python, JavaScript and TypeScript and integrates with OpenAI, LangChain and LiteLLM, among other languages and frameworks. You can self-host the open-source version or use it as a paid cloud service. Our teams have had a positive experience, particularly in debugging complex LLM chains, analyzing completions and monitoring key metrics such as cost and latency across users, sessions, geographies, features and model versions. If you're looking to build data-driven LLM applications, Langfuse is a good option to consider; a minimal tracing sketch appears after this list.

  • Qdrant is an open-source vector database written in Rust. In the September 2023 edition of the Radar, we covered pgvector, a PostgreSQL extension for vector search. However, if you need to scale the vector database horizontally across nodes, we encourage you to assess Qdrant. It has built-in single instruction, multiple data (SIMD) acceleration for improved search performance, and it lets you associate JSON payloads with vectors; a small sketch of this appears after this list.

  • While the Arm architecture continues to expand its impact — we've updated our assessment of Arm in the cloud in this edition — interest in the newer and less established RISC-V architecture is also growing. RISC-V doesn't bring breakthroughs in performance or efficiency — in fact, its per-watt performance is similar to Arm's, and it can't quite compete on absolute performance — but it's open source, modular and not tied to a single company. This makes it an attractive proposition for embedded systems, where the cost of licensing proprietary architectures is a significant concern. This is also why the field of RISC-V for embedded systems is maturing, and several companies, including SiFive and Espressif, offer development boards and SoCs for a wide range of applications. Microcontrollers and microprocessors capable of running the Linux kernel are available today, along with the corresponding software stacks and toolchains. We're keeping an eye on this space and expect to see more adoption in the coming years.

  • TigerBeetle is an open-source distributed database for financial accounting. Unlike other databases, it's designed to be a domain-specific state machine for safety and performance. The state of one node in the cluster is replicated in a deterministic order to other nodes via the Viewstamped Replication consensus protocol. We quite like the design decisions behind TigerBeetle to implement double-entry bookkeeping with strict serializability guarantees. It's a relatively new and actively evolving database, but not quite ready for production.

  • WebTransport is a protocol that builds on top of HTTP/3 and offers bidirectional communication between servers and apps. WebTransport offers several benefits over its predecessor, WebSockets, including faster connections, lower latency and the ability to handle reliable, ordered data streams as well as unordered datagrams (similar to UDP). It can handle multiple streams in the same connection without head-of-line blocking, allowing for more efficient communication in complex applications. Overall, WebTransport is suitable for a wide range of use cases, including real-time web apps, streaming media and Internet of Things (IoT) data communications. Even though WebTransport is still in its early stages — support across browsers is gradually maturing, and popular libraries such as socket.io are adding support for it — our teams are currently assessing its potential for real-time IoT applications.

  • Zarf is a declarative package manager for offline and semi-connected Kubernetes environments. With Zarf, you can build and configure applications while connected to the internet; once created, packages can be shipped to a disconnected environment for deployment. As a standalone tool, Zarf packs several useful features, including automatic software bill of materials (SBOM) generation, a built-in Docker registry and Gitea, and K9s dashboards for managing clusters from the terminal. Air-gapped software delivery for cloud-native applications has its challenges; Zarf addresses most of them.

  • ZITADEL is an open-source identity and user management tool, and an alternative to Keycloak. It’s lightweight (written in Golang), has flexible deployment options and is easy to configure and manage. It’s also multi-tenant, offers comprehensive features for building secure and scalable authentication systems, particularly for B2B applications, and has built-in security features like multi-factor authentication and audit trails. By using ZITADEL, developers can reduce development time, enhance application security and achieve scalability for growing user bases. If you're looking for a user-friendly, secure and open-source tool for user management, ZITADEL is a strong contender.
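
Returning to the Elasticsearch blip above, here is a rough sketch of a hybrid query that combines a lexical match with a vector search and fuses the results with Reciprocal Rank Fusion via the Python client. The index, field names and embedding are assumptions, and the availability of the rank option depends on your Elasticsearch and client versions.

```python
from elasticsearch import Elasticsearch

client = Elasticsearch("https://localhost:9200", api_key="<api-key>")  # placeholder connection

query_embedding = [0.12, -0.03, 0.87]  # normally produced by your embedding model

response = client.search(
    index="articles",                                        # hypothetical index
    query={"match": {"body": "observability pipelines"}},    # full-text leg
    knn={                                                    # vector leg
        "field": "body_embedding",
        "query_vector": query_embedding,
        "k": 10,
        "num_candidates": 50,
    },
    rank={"rrf": {}},                                        # Reciprocal Rank Fusion
)

for hit in response["hits"]["hits"]:
    print(hit["_id"])
```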
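
For the Langfuse blip, here is a minimal sketch of manually recording a trace and an LLM generation with the Python SDK. Credentials are read from environment variables; the trace name, model and messages are illustrative, and the exact API surface varies by SDK version.

```python
from langfuse import Langfuse

# Reads LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY and LANGFUSE_HOST from the environment
langfuse = Langfuse()

trace = langfuse.trace(name="support-bot", user_id="user-123")

generation = trace.generation(
    name="draft-answer",
    model="gpt-4",
    input=[{"role": "user", "content": "How do I reset my password?"}],
)

# ... call the model of your choice here ...

generation.end(output="You can reset it from the account settings page.")
langfuse.flush()  # make sure buffered events are sent before the process exits
```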
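
And for Qdrant, a small sketch with the Python client showing how JSON payloads ride along with vectors; the collection name, vector size and payload fields are made up.

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

client = QdrantClient(url="http://localhost:6333")

# A tiny collection with 4-dimensional cosine-similarity vectors (illustrative only)
client.create_collection(
    collection_name="docs",
    vectors_config=VectorParams(size=4, distance=Distance.COSINE),
)

client.upsert(
    collection_name="docs",
    points=[
        PointStruct(
            id=1,
            vector=[0.1, 0.2, 0.3, 0.4],
            payload={"source": "handbook", "lang": "en"},  # arbitrary JSON payload
        ),
    ],
)

hits = client.search(collection_name="docs", query_vector=[0.1, 0.2, 0.3, 0.4], limit=3)
for hit in hits:
    print(hit.id, hit.score, hit.payload)
```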

Hold

 

Unable to find something you expected to see?

 

Each edition of the Radar features blips reflecting what we came across during the previous six months. We might have covered what you are looking for on a previous Radar already. We sometimes cull things just because there are too many to talk about. A blip might also be missing because the Radar reflects our experience; it is not based on a comprehensive market analysis.

