
Platforms

Adopt

    Trial

    • Many of our teams who are already on AWS have found the AWS Cloud Development Kit (AWS CDK) to be a sensible default for infrastructure provisioning. In particular, they like the use of first-class programming languages instead of configuration files, which allows them to use existing tools, testing approaches and skills. Like similar tools, care is still needed to ensure deployments remain easy to understand and maintain. The development kit currently supports TypeScript, JavaScript, Python, Java and C#/.NET, and new providers are being added to the CDK core. We've also successfully used both AWS CDK and HashiCorp's Cloud Development Kit for Terraform to generate Terraform configurations and provision with the Terraform platform.
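
      A minimal sketch of what this looks like in practice, assuming the CDK v1 Python packages (aws-cdk.core and aws-cdk.aws-s3); the stack and bucket names are illustrative:

      ```python
      # Infrastructure as ordinary Python: a stack with one versioned,
      # encrypted S3 bucket, deployed with `cdk deploy`.
      from aws_cdk import core
      from aws_cdk import aws_s3 as s3

      class StorageStack(core.Stack):
          def __init__(self, scope: core.Construct, construct_id: str, **kwargs) -> None:
              super().__init__(scope, construct_id, **kwargs)
              s3.Bucket(self, "ArtifactBucket",
                        versioned=True,
                        encryption=s3.BucketEncryption.S3_MANAGED)

      app = core.App()
      StorageStack(app, "storage")
      app.synth()
      ```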

    • We continue to see interest in and use of Backstage grow alongside the adoption of developer portals as organizations look to support and streamline their development environments. As the number of tools and technologies increases, some form of standardization is becoming increasingly important for consistency, so that developers can focus on innovation and product development instead of getting bogged down in reinventing the wheel. Backstage is an open-source developer portal platform created by Spotify. It's built around software templates, unified infrastructure tooling and consistent, centralized technical documentation. Its plugin architecture allows it to be extended and adapted to fit an organization's infrastructure ecosystem.

    • Delta Lake is an open-source storage layer, implemented by Databricks, that attempts to bring ACID transactions to big data processing. In our Databricks-enabled data lake or data mesh projects, our teams continue to prefer Delta Lake storage over the direct use of file storage types such as S3 or ADLS. Of course, this is limited to projects that use storage platforms supporting Delta Lake, when using Parquet file formats. Delta Lake facilitates concurrent read/write use cases where file-level transactionality is required. We find Delta Lake's seamless integration with the Apache Spark batch and micro-batch APIs greatly helpful, particularly features such as time travel (accessing data at a particular point in time or reverting a commit) and schema evolution support on write, although there are some limitations on these features.
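
      A brief sketch of the features called out above, assuming a Spark session with the Delta Lake package on the classpath and a hypothetical table path /data/events:

      ```python
      # Append with schema evolution, then read an earlier version (time travel).
      from pyspark.sql import SparkSession

      spark = (SparkSession.builder
               .config("spark.sql.extensions",
                       "io.delta.sql.DeltaSparkSessionExtension")
               .config("spark.sql.catalog.spark_catalog",
                       "org.apache.spark.sql.delta.catalog.DeltaCatalog")
               .getOrCreate())

      df = spark.createDataFrame([(1, "click")], ["id", "event"])
      (df.write.format("delta").mode("append")
         .option("mergeSchema", "true")  # opt in to schema evolution on write
         .save("/data/events"))

      # Time travel: read the table as of its first version.
      v0 = spark.read.format("delta").option("versionAsOf", 0).load("/data/events")
      ```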

    • Materialize is a streaming database that enables you to perform incremental computation without complicated data pipelines. Just describe your computations via standard SQL views and connect Materialize to the data stream. The underlying differential dataflow engine performs incremental computation to provide consistent and correct output with minimal latency. Unlike traditional databases, there are no restrictions on defining these materialized views, and the computations execute in real time. We've used Materialize together with Spring Cloud Stream and Kafka to query streams of events for insights in a distributed event-driven system, and we quite like the setup.
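
      Because Materialize speaks the PostgreSQL wire protocol, a standard Postgres driver is all you need. A sketch assuming a local instance and an illustrative Kafka topic, with source syntax as in the 0.x releases:

      ```python
      # Define a Kafka-backed source and an incrementally maintained view,
      # then query it like any Postgres table.
      import psycopg2

      conn = psycopg2.connect(host="localhost", port=6875,
                              user="materialize", dbname="materialize")
      conn.autocommit = True
      with conn.cursor() as cur:
          cur.execute("""
              CREATE SOURCE orders
              FROM KAFKA BROKER 'localhost:9092' TOPIC 'orders'
              FORMAT BYTES
          """)
          cur.execute("""
              CREATE MATERIALIZED VIEW order_count AS
              SELECT count(*) AS total FROM orders
          """)
          cur.execute("SELECT total FROM order_count")
          print(cur.fetchone())
      ```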

    • Since we last mentioned Snowflake in the Radar, we've gained more experience with it, as well as with data mesh as an alternative to data warehouses and lakes. Snowflake continues to impress with features like time travel, zero-copy cloning, data sharing and its marketplace. We haven't yet found anything we don't like about it, which has led to our consultants generally preferring it over the alternatives. Redshift is moving toward the separation of storage and compute, which has been a strong point of Snowflake, but even with Redshift Spectrum it isn't as convenient and flexible to use, partly because it's bound by its Postgres heritage (we do still like Postgres, by the way). Federated queries can be a reason to go with Redshift. When it comes to operations, Snowflake is much simpler to run. BigQuery, another alternative, is very easy to operate, but in a multicloud setup Snowflake is the better choice. We can also report that we've used Snowflake successfully with AWS, Azure and GCP.
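
      Two of the features mentioned above are one-liners in SQL. A sketch using the snowflake-connector-python package, with placeholder credentials and a hypothetical orders table:

      ```python
      # Time travel and zero-copy cloning from the Python connector.
      import snowflake.connector

      conn = snowflake.connector.connect(
          account="my_account", user="me", password="...",
          warehouse="wh", database="db", schema="public")
      cur = conn.cursor()
      # Time travel: query the table as it looked an hour ago.
      cur.execute("SELECT count(*) FROM orders AT(OFFSET => -3600)")
      print(cur.fetchone())
      # Zero-copy clone: a metadata-only copy, no data is duplicated.
      cur.execute("CREATE TABLE orders_clone CLONE orders")
      ```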

    • Variable fonts are a way of avoiding the need to find and include separate font files for different weights and styles. Everything is in one font file, and you can use properties to select the style and weight you need. While not new, we still see sites and projects that could benefit from this simple approach. If your pages include many variations of the same font, we suggest trying out variable fonts.


    Assess

    • Apache Pinot is a distributed OLAP data store built to deliver real-time analytics with low latency. It can ingest from batch data sources (such as Hadoop HDFS, Amazon S3, Azure ADLS or Google Cloud Storage) as well as stream data sources (such as Apache Kafka). When the need is user-facing analytics, SQL-on-Hadoop solutions don't offer the low latency required. Modern OLAP engines like Apache Pinot (or Apache Druid and ClickHouse, among others) can achieve much lower latency and are particularly suited to contexts where fast analytics, such as aggregations, are needed on immutable data, possibly with real-time data ingestion. Originally built by LinkedIn, Apache Pinot entered Apache incubation in late 2018 and has since added a plugin architecture and SQL support, among other key capabilities. Apache Pinot can be fairly complex to operate and has many moving parts, but if your data volumes are large enough and you need low-latency query capability, we recommend you assess Apache Pinot.
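
      Queries go to Pinot's broker, which the community pinotdb package exposes through the standard Python DB-API. A sketch assuming a local broker and a hypothetical pageviews table:

      ```python
      # A typical Pinot use case: a fast aggregation over an append-only table.
      from pinotdb import connect

      conn = connect(host="localhost", port=8099, path="/query/sql", scheme="http")
      cur = conn.cursor()
      cur.execute("""
          SELECT country, count(*) AS views
          FROM pageviews
          GROUP BY country
          ORDER BY views DESC
          LIMIT 10
      """)
      for row in cur:
          print(row)
      ```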

    • Bit.dev is a cloud-hosted collaborative platform for UI components extracted, modularized and reused with Bit. Web components have been around for a while, but building a modern front-end application by assembling small, independent components extracted from other projects has never been easy. Bit was designed to let you do exactly that: extract a component from an existing library or project. You can either build your own service on top of Bit for component collaboration or use Bit.dev.

    • Since we first mentioned data discoverability in the Radar, LinkedIn has evolved WhereHows into DataHub, a next-generation platform that addresses data discoverability via an extensible metadata system. Instead of crawling and pulling metadata, DataHub adopts a push-based model in which individual components of the data ecosystem publish metadata via an API or a stream to the central platform. This push-based integration shifts ownership from the central entity to the individual teams, making them accountable for their metadata. As more and more companies try to become data-driven, having a system that helps with data discovery and with understanding data quality and lineage is critical, and we recommend you assess DataHub in that capacity.

    • Feature Store is an ML-specific data platform that addresses some of the key challenges we face today in feature engineering, with three fundamental capabilities: (1) it uses managed data pipelines to remove the struggle with pipelines as new data arrives; (2) it catalogs and stores feature data to promote the discoverability and collaboration of features across models; and (3) it consistently serves feature data during training and inference.

      Since Uber revealed their Michelangelo platform, many organizations and startups have built their own versions of a feature store; examples include Hopsworks, Feast and Tecton. We see potential in Feature Store and recommend you carefully assess it.
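
      As a concrete example of the third capability, here's a sketch of an online feature lookup with Feast, one of the products named above (API as in its 0.12+ releases); the feature name, entity and local repository layout are illustrative assumptions:

      ```python
      # Serve the same feature values at inference time that the model saw in
      # training, via the feature store rather than ad hoc pipelines.
      from feast import FeatureStore

      store = FeatureStore(repo_path=".")  # a local Feast feature repository
      features = store.get_online_features(
          features=["driver_stats:avg_daily_trips"],
          entity_rows=[{"driver_id": 1001}],
      ).to_dict()
      print(features)
      ```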

    • JuiceFS is an open-source, distributed POSIX file system built on top of Redis and an object store service (for example, Amazon S3). If you're building new applications, our recommendation has always been to interact directly with the object store rather than going through another abstraction layer. However, JuiceFS can be an option if you're migrating legacy applications that depend on traditional POSIX file systems to the cloud.
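
      The point is that no application changes are needed: once a JuiceFS volume is mounted (for example, with juicefs mount redis://localhost /jfs), ordinary file I/O lands in the object store. The mount point and paths below are assumptions:

      ```python
      # Plain POSIX file operations against an assumed JuiceFS mount at /jfs;
      # legacy file-based code like this runs on object storage unchanged.
      from pathlib import Path

      report = Path("/jfs/reports/daily.csv")
      report.parent.mkdir(parents=True, exist_ok=True)
      report.write_text("date,total\n2021-04-01,42\n")
      print(report.read_text())
      ```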

    • As more businesses turn to events as a way to share data among microservices, collect analytics or feed data lakes, Apache Kafka has become a favorite platform for supporting an event-driven architectural style. Although Kafka was a revolutionary concept in scalable persistent messaging, a lot of moving parts are required to make it work, including ZooKeeper, brokers, partitions and mirrors. While these can be particularly tricky to implement and operate, they offer great flexibility and power when needed, especially at industrial enterprise scale. Because of the high barrier to entry presented by the full Kafka ecosystem, we welcome the recent explosion of platforms offering the Kafka API without Kafka. Recent entries such as Kafka on Pulsar and Redpanda offer alternative architectures, and Azure Event Hubs for Kafka provides some compatibility with the Kafka producer and consumer APIs. Some features of Kafka, like the Streams client library, aren't compatible with these alternative brokers, so there are still reasons to choose Kafka. It remains to be seen, however, whether developers actually adopt this strategy or whether it's merely an attempt by competitors to lure users away from the Kafka platform. Ultimately, perhaps Kafka's most enduring impact will be the convenient protocol and API provided to clients.
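
      The client-facing API is the portable part. A sketch with the kafka-python package, where the broker address and topic are assumptions; the same code runs unchanged against any Kafka-API-compatible broker such as Redpanda:

      ```python
      # Produce and consume through the Kafka protocol; only the bootstrap
      # address distinguishes a Kafka cluster from a compatible alternative.
      from kafka import KafkaProducer, KafkaConsumer

      producer = KafkaProducer(bootstrap_servers="localhost:9092")
      producer.send("orders", b'{"id": 1, "status": "created"}')
      producer.flush()

      consumer = KafkaConsumer("orders", bootstrap_servers="localhost:9092",
                               auto_offset_reset="earliest",
                               consumer_timeout_ms=5000)
      for record in consumer:
          print(record.value)
      ```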

    • NATS is a fast, secure message queueing system with an unusually wide range of features and potential deployment targets. At first glance, you would be forgiven for asking why the world needs another message queueing system. Message queues have been around in various forms for nearly as long as businesses have been using computers and have undergone years of refinement and optimization for various tasks. But NATS has several interesting characteristics and is unique in its ability to scale from embedded controllers to global, cloud-hosted superclusters. We're particularly intrigued by NATS's intent to support a continuous streaming flow of data from mobile and IoT devices through a network of interconnected systems. However, some tricky issues need to be addressed, not the least of which is ensuring consumers see only the messages and topics to which they're allowed access, especially when the network spans organizational boundaries. NATS 2.0 introduced a security and access control framework that supports multitenant clusters in which accounts restrict a user's access to queues and topics. Written in Go, NATS has primarily been embraced by the Go language community. Although clients exist for pretty much all widely used programming languages, the Go client is by far the most popular, and some of our developers have found that all the language client libraries tend to reflect the Go origins of the codebase. Increasing bandwidth and processing power on small, wireless devices means that the volume of data businesses must consume in real time will only increase. Assess NATS as a possible platform for streaming that data within and among businesses.
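
      Subject-based publish/subscribe is the core primitive. A sketch with the nats-py asyncio client (API as in its 2.x releases); the server URL and subject are assumptions:

      ```python
      # Publish and subscribe on a NATS subject using the asyncio client.
      import asyncio
      import nats

      async def main():
          nc = await nats.connect("nats://localhost:4222")

          async def handler(msg):
              print(f"{msg.subject}: {msg.data.decode()}")

          await nc.subscribe("sensors.temperature", cb=handler)
          await nc.publish("sensors.temperature", b"21.5")
          await nc.flush()
          await asyncio.sleep(0.1)  # give the handler a chance to run
          await nc.close()

      asyncio.run(main())
      ```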

    • Opstrace is an open-source observability platform intended to be deployed in the user's own network. For teams that can't use a commercial solution like Datadog (because of cost or data residency concerns, for example), the only option has been to build their own platform out of open-source tools, which can take a lot of effort. Opstrace is intended to fill this gap. It uses open-source APIs and interfaces such as Prometheus and Grafana and adds features on top, such as TLS and authentication. At the heart of Opstrace runs a Cortex cluster, to provide the scalable Prometheus API, and a Loki cluster for logs. It's fairly new and still lacks features when compared to solutions like Datadog or SignalFx, but it's a promising addition to this space and worth keeping an eye on.

    • We've seen interest in Pulumi slowly but steadily rising. Pulumi fills a gaping hole in the infrastructure coding world, where Terraform maintains a firm hold. While Terraform is a tried-and-true standby, its declarative nature suffers from inadequate abstraction facilities and limited testability. Terraform is adequate when the infrastructure is entirely static, but dynamic infrastructure definitions call for a real programming language. Pulumi distinguishes itself by allowing configurations to be written in TypeScript/JavaScript, Python and Go, with no markup language or templating required. Pulumi is tightly focused on cloud-native architectures, including containers, serverless functions and data services, and provides good support for Kubernetes. Recently, AWS CDK has mounted a challenge, but Pulumi remains the only cloud-neutral tool in this area. We're anticipating wider Pulumi adoption in the future and looking forward to a viable tool and knowledge ecosystem emerging to support it.
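
      What "a real programming language" buys you is ordinary abstraction. A sketch assuming the pulumi and pulumi-aws packages in a project run via pulumi up; the environment and bucket names are illustrative:

      ```python
      # A plain Python loop replaces copy-pasted declarative blocks.
      import pulumi
      import pulumi_aws as aws

      for env in ["staging", "prod"]:
          bucket = aws.s3.Bucket(f"logs-{env}", acl="private")
          pulumi.export(f"logs_bucket_{env}", bucket.id)
      ```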

    • Redpanda is a streaming platform that provides a Kafka-compatible API, allowing it to benefit from the Kafka ecosystem without the complexities of a Kafka installation. For example, Redpanda simplifies operations by shipping as a single binary and avoiding the need for an external dependency such as ZooKeeper. Instead, it implements the Raft protocol and performs comprehensive tests to validate that it's implemented correctly. One of Redpanda's capabilities (available to enterprise customers only) is inline WebAssembly (WASM) transformations, using an embedded WASM engine: developers can write event transformers in their language of choice and compile them to WASM. Redpanda also offers much reduced tail latencies and increased throughput thanks to a series of optimizations. Redpanda is an exciting alternative to Kafka and worth assessing.


    Hold

    • We've observed before that cloud providers push more and more services onto the market, and we've documented our concerns that these services are sometimes made available before they're ready for prime time. Unfortunately, in our experience, Azure Machine Learning falls into the latter category. One of several recent entrants in the field of bounded low-code platforms, Azure ML promises more convenience for data scientists; ultimately, however, it doesn't live up to that promise. In fact, it still feels easier for our data scientists to work in Python. Despite significant effort, we struggled to make it scale, and a lack of adequate documentation proved to be another issue, which is why we've moved it to Hold.

    • Products supported by companies or communities are in constant evolution, at least the ones that gain traction in the industry. Organizations sometimes build frameworks or abstractions on top of existing external products to cover very specific needs, believing the adaptation will provide more benefit than the product itself. We're seeing organizations attempt to create homemade infrastructure-as-code (IaC) products on top of existing ones. They underestimate the effort required to keep those solutions evolving with their needs, and after a short period of time they realize that the original version is in much better shape than their own; in some cases the abstraction on top of the external product even reduces the original capabilities. Although we've seen success stories of organizations building homemade solutions, we want to caution against this approach: the effort required isn't negligible, and a long-term product vision is needed to achieve the expected outcomes.

    Unable to find something you expected to see?

    Each edition of the Radar features blips reflecting what we came across during the previous six months. We might have covered what you're looking for on a previous Radar already, and we sometimes cull things simply because there are too many to talk about. A blip might also be missing because the Radar reflects our experience; it isn't based on a comprehensive market analysis.
