Platforms

Adopt

  • We previously had .NET Core in Adopt, indicating that it had become our default for .NET projects, but we feel it's worth calling attention to it again. With the release of .NET Core 3.x last year, the bulk of the features from .NET Framework have now been ported into .NET Core. With the announcement that .NET Framework is on its last release, Microsoft has reinforced the view that .NET Core is the future of .NET. Microsoft has done a lot of work to make .NET Core container friendly; most of our .NET Core–based projects target Linux and are often deployed as containers. The upcoming .NET 5 release looks promising, and we're looking forward to it.

  • If you're building and operating a scaled microservices architecture and have embraced Kubernetes, adopting a service mesh to manage all cross-cutting aspects of running the architecture is a default position. Among the various implementations of service mesh, Istio has gained majority adoption. It has a rich feature set, including service discovery, traffic management, service-to-service and origin-to-service security, observability (including telemetry and distributed tracing), rolling releases and resiliency. Its user experience has improved in recent releases, thanks to simpler installation and its control plane architecture. Istio has lowered the bar for implementing large-scale microservices with operational quality for many of our clients, although we admit that operating your own Istio and Kubernetes instances requires adequate knowledge and internal resources and is not for the fainthearted.

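    As a concrete taste of Istio's traffic management, here's a minimal sketch that uses the Kubernetes Python client to create a VirtualService splitting traffic between two subsets; the service and subset names are hypothetical, and it assumes Istio's CRDs are already installed in the cluster.

```python
# Minimal sketch: a canary traffic split via an Istio VirtualService.
# Assumes Istio is installed and a DestinationRule defines the
# hypothetical "v1" and "v2" subsets for the "reviews" service.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a pod

virtual_service = {
    "apiVersion": "networking.istio.io/v1beta1",
    "kind": "VirtualService",
    "metadata": {"name": "reviews", "namespace": "default"},
    "spec": {
        "hosts": ["reviews"],
        "http": [{
            "route": [
                {"destination": {"host": "reviews", "subset": "v1"}, "weight": 90},
                {"destination": {"host": "reviews", "subset": "v2"}, "weight": 10},
            ]
        }],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="networking.istio.io", version="v1beta1",
    namespace="default", plural="virtualservices", body=virtual_service,
)
```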

Trial

  • Anka is a set of tools to create, manage, distribute, build and test reproducible macOS virtual environments for iOS and macOS. It brings a Docker-like experience to macOS environments: instant start, a CLI to manage virtual machines and a registry to version and tag virtual machines for distribution. We've used Anka to build a macOS private cloud for a client. This tool is worth considering when virtualizing iOS and macOS environments.

  • Without passing judgment on the GitOps technique, we'd like to talk about Argo CD within the scope of deploying and monitoring applications in Kubernetes environments. Based on its ability to automate the deployment of the desired application state to specified target environments in Kubernetes, and our good experience with troubleshooting failed deployments, verifying logs and monitoring deployment status, we recommend you give Argo CD a try. You can even see graphically what is going on in the cluster, how a change is propagated and how pods are created and destroyed in real time.

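    To make the workflow concrete, the sketch below drives the argocd CLI from Python to register an application from a Git repository and sync it; the repository URL and application name are placeholders, and it assumes you've already logged in with argocd login.

```python
# Minimal sketch: registering and syncing an Argo CD application.
# Assumes `argocd login` has already been run; repo and app names are placeholders.
import subprocess

def argocd(*args: str) -> str:
    """Run an argocd CLI command and return its output."""
    return subprocess.run(
        ["argocd", *args], check=True, capture_output=True, text=True
    ).stdout

# Point Argo CD at the Git repo that holds the desired manifests.
argocd("app", "create", "my-app",
       "--repo", "https://example.com/org/my-app-config.git",
       "--path", "overlays/production",
       "--dest-server", "https://kubernetes.default.svc",
       "--dest-namespace", "production")

argocd("app", "sync", "my-app")        # reconcile the cluster with Git
print(argocd("app", "get", "my-app"))  # inspect health and sync status
```
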
  • Most projects with multilingual support start with development teams building features in one language and managing the rest through offline translation via emails and spreadsheets. Although this simple setup works, things can quickly get out of hand. You may have to keep answering the same questions for different language translators, sucking the energy out of the collaboration between translators, proofreaders and the development team. Crowdin is one of a handful of platforms that help streamline the localization workflow of your project. With Crowdin the development team can continue building features while the platform channels the text that needs translation into an online workflow. We like that Crowdin nudges teams to incorporate translations continuously and incrementally rather than managing them in large batches toward the end.

  • For several years now, the Linux kernel has included the extended Berkeley Packet Filter (eBPF) virtual machine and provided the ability to attach eBPF filters to particular sockets. But extended BPF goes far beyond packet filtering and allows custom scripts to be triggered at various points within the kernel with very little overhead. Although this technology isn't new, it's now coming into its own with the increasing use of microservices deployed as orchestrated containers. Service-to-service communications can be complex in these systems, making it difficult to correlate latency or performance issues back to an API call. We're now seeing tools released with prewritten eBPF scripts for collecting and visualizing packet traffic or reporting on CPU utilization. With the rise of Kubernetes, we’re seeing a new generation of security enforcement and instrumentation based on eBPF scripts that help tame the complexity of a large microservices deployment.

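    For a flavor of how little ceremony eBPF instrumentation requires, here's a minimal sketch using the BCC toolkit's Python front end to trace execve calls; it needs root privileges and a kernel with eBPF support.

```python
# Minimal sketch using the BCC front end for eBPF (requires bcc and root).
# Logs a line from the kernel whenever a process calls execve.
from bcc import BPF

program = r"""
int trace_exec(void *ctx) {
    bpf_trace_printk("exec called\n");
    return 0;
}
"""

b = BPF(text=program)
b.attach_kprobe(event=b.get_syscall_fnname("execve"), fn_name="trace_exec")
print("Tracing execve... Ctrl-C to stop")
b.trace_print()  # stream bpf_trace_printk output from the kernel
```
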
  • Google's Firebase has undergone significant evolution since we mentioned it as part of a serverless architecture in 2016. Firebase is a comprehensive platform for building mobile and web apps in a way that's supported by Google's underlying scalable infrastructure. We particularly like Firebase App Distribution, which makes it easy to publish test versions of an app via a CD pipeline, and Firebase Remote Config, which allows configuration changes to be dynamically pushed to apps without needing to republish them.

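    As an illustration of Remote Config, the sketch below reads the current template via Firebase's Remote Config REST API; the project ID and access token are placeholders (a real call needs an OAuth2 token obtained from a service account).

```python
# Minimal sketch: reading a Remote Config template over Firebase's REST API.
# PROJECT_ID and ACCESS_TOKEN are placeholders; a real call needs an OAuth2
# token scoped for https://www.googleapis.com/auth/firebase.remoteconfig.
import requests

PROJECT_ID = "my-project"            # placeholder
ACCESS_TOKEN = "placeholder-token"   # placeholder; obtain via a service account

resp = requests.get(
    f"https://firebaseremoteconfig.googleapis.com/v1/projects/{PROJECT_ID}/remoteConfig",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()
template = resp.json()
print(template.get("parameters", {}))  # the dynamically pushed parameters
```
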
  • The GraphQL ecosystem and community keep growing. Hot Chocolate is a GraphQL server for .NET (Core and Classic). It lets you build and host schemas and then serve queries against them using the same base components of GraphQL — data loader, resolver, schema, operations and types. The team behind Hot Chocolate has recently added schema stitching, which allows for a single entry point to query across multiple schemas aggregated from different locations. Despite the potential to misuse this approach, our teams are happy with Hot Chocolate — it’s well documented, and we're able to deliver value quickly to our clients.

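    Hot Chocolate itself is a .NET server, but because it speaks standard GraphQL over HTTP, any client can query it; the sketch below posts a query from Python against a hypothetical schema with a books field.

```python
# Minimal sketch: querying a Hot Chocolate server over standard GraphQL HTTP.
# The endpoint URL and the schema (a "books" field) are hypothetical.
import requests

query = """
{
  books {
    title
    author { name }
  }
}
"""

resp = requests.post("http://localhost:5000/graphql", json={"query": query})
resp.raise_for_status()
print(resp.json()["data"])
```
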
  • Not everyone needs a self-hosted OAuth2 solution, but if you do, have a look at Hydra — a fully compliant open source OAuth2 server and OpenID Connect provider. Hydra supports in-memory storage for development and a relational database (PostgreSQL) for production use cases. Hydra itself is stateless and easy to scale horizontally on platforms such as Kubernetes. Depending on your performance requirements, you may have to tune the number of database instances while scaling Hydra instances. And because Hydra doesn't provide any identity management solution out of the box, you can integrate whatever flavor of identity management you have with Hydra through a clean API. This clear separation of identity from the rest of the OAuth2 framework makes it easier to integrate Hydra with an existing authentication ecosystem.

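    To show how little a client needs to know about Hydra, here's a minimal sketch of the standard OAuth2 client-credentials flow against Hydra's public token endpoint; the client ID, secret and local URL are placeholders.

```python
# Minimal sketch: the standard OAuth2 client-credentials flow against Hydra's
# public token endpoint. The client ID/secret and local URL are placeholders.
import requests

resp = requests.post(
    "http://localhost:4444/oauth2/token",  # Hydra's public endpoint
    data={"grant_type": "client_credentials", "scope": "read"},
    auth=("my-client", "my-secret"),       # placeholder client credentials
)
resp.raise_for_status()
print(resp.json()["access_token"])
```
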
  • OpenTelemetry is an open source observability project that merges OpenTracing and OpenCensus. The OpenTelemetry project includes the specification, libraries, agents and other components needed to capture telemetry from services in order to better observe, manage and debug them. It covers the three pillars of observability — distributed tracing, metrics and logging (currently in beta) — and its specification connects these three pieces through correlations; thus you can use metrics to pinpoint a problem, locate the corresponding traces to discover where the problem occurred, and ultimately study the corresponding logs to find the exact root cause. OpenTelemetry components can be connected to back-end observability systems such as Prometheus and Jaeger, among others. The formation of OpenTelemetry is a positive step toward the convergence of standardization and the simplification of tooling.

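    Here's a minimal sketch using the OpenTelemetry Python SDK: it creates nested, correlated spans and exports them to the console; in production you'd swap the console exporter for a back end such as Jaeger, and the service and span names are hypothetical.

```python
# Minimal sketch using the OpenTelemetry Python SDK: create spans and export
# them to the console (swap the exporter for a Jaeger/OTLP back end in production).
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")  # hypothetical service name

with tracer.start_as_current_span("place-order"):
    with tracer.start_as_current_span("charge-card"):
        pass  # nested spans are correlated into one trace automatically
```
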
  • Snowflake has proven to be a robust SaaS big data storage, warehouse or lake solution for many of our clients. It has a superior architecture to scale storage, compute and services to load, unload and use data. It's also very flexible: it supports storage of structured, semi-structured and unstructured data; provides a growing list of connectors for different access patterns such as Spark for data science and SQL for analytics; and runs on multiple cloud providers. Our advice to many of our clients is to use managed services for utility technology such as big data storage; however, if risk and regulations prohibit the use of managed services, then Snowflake is a good candidate for companies with large volumes of data and heavy processing workloads. Although we've been successful using Snowflake in our medium-sized engagements, we've yet to experience Snowflake in large ecosystems where data needs to be owned across segments of the organization.

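    As a small illustration of querying semi-structured data, the sketch below uses the snowflake-connector-python package; the account, credentials and raw_events table are placeholders.

```python
# Minimal sketch using the snowflake-connector-python package; the account,
# credentials and raw_events table are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="change-me",
    warehouse="ANALYTICS_WH", database="SALES", schema="PUBLIC",
)
try:
    cur = conn.cursor()
    # Semi-structured JSON in a VARIANT column can be queried directly.
    cur.execute("""
        SELECT payload:customer.id::string AS customer_id, COUNT(*) AS orders
        FROM raw_events
        WHERE payload:type::string = 'order'
        GROUP BY 1
        ORDER BY orders DESC
        LIMIT 10
    """)
    for row in cur.fetchall():
        print(row)
finally:
    conn.close()
```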

Assess

  • We see a shift from accidental hybrid or whole-of-estate cloud migration plans to intentional and sophisticated hybrid, poly or portable cloud strategies, where organizations apply multidimensional principles to establish and execute their cloud strategy: where to host their various data and functional assets based on risk, ability to control and performance profiles; how to utilize their on-premises infrastructure investments while reducing the cost of operations; and how to take advantage of multiple cloud providers and their unique differentiated services without creating complexity and friction for users building and operating applications.

    Anthos is Google's answer to enabling hybrid and multicloud strategies, providing a high-level management and control plane on top of a set of open source technologies such as GKE, Service Mesh and Git-based configuration management. It enables running portable workloads and other assets on different hosting environments, including Google Cloud and on-premises hardware. Although other cloud providers have comparable offerings, Anthos intends to go beyond a hybrid cloud to be a portable cloud enabler using open source components, but that is yet to be seen. We're seeing rising interest in Anthos. While Google's approach to managed hybrid cloud environments seems promising, it's not a magic bullet and requires changes to both existing cloud and on-premises assets. Our advice for clients considering Anthos is to make measured trade-offs between selecting services from the Google Cloud ecosystem and other options, to maintain the right level of neutrality and control.

  • Apache Pulsar is an open source pub-sub messaging/streaming platform, competing in a similar space with Apache Kafka. It provides expected functionality — such as low-latency async and sync message delivery and scalable persistent storage of messages — as well as various client libraries. What has excited us to evaluate Pulsar is its ease of scalability, particularly in large organizations with multiple segments of users. Pulsar natively supports multitenancy, georeplication, role-based access control and segregation of billing. We're also looking to Pulsar to solve the problem of a never-ending log of messages for our large-scale data systems where events are expected to persist indefinitely and subscribers are able to start consuming messages retrospectively. This is supported through a tiered storage model. Although Pulsar is a promising platform for large organizations, there is room for improvement. Its current installation requires administering ZooKeeper and BookKeeper among other pieces of technology. We hope that with its growing adoption, users can soon count on wider community support.

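    The sketch below uses the pulsar-client Python package to publish to a multitenant topic and then replay messages from the earliest retained position; the tenant, namespace and topic names are placeholders.

```python
# Minimal sketch with the pulsar-client package; the multitenant topic name
# (tenant/namespace/topic) is a placeholder.
import pulsar

client = pulsar.Client("pulsar://localhost:6650")

producer = client.create_producer("persistent://my-tenant/my-namespace/orders")
producer.send(b"order-created")

# Subscriptions can replay retained messages from the earliest position.
consumer = client.subscribe(
    "persistent://my-tenant/my-namespace/orders",
    subscription_name="billing",
    initial_position=pulsar.InitialPosition.Earliest,
)
msg = consumer.receive()
print(msg.data())
consumer.acknowledge(msg)
client.close()
```
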
  • The performance of blockchain technology has greatly improved since we initially assessed this area in the Radar. However, there's still no single blockchain that can achieve "internet-level" throughput. As various blockchain platforms develop, we're seeing new data and value silos. That's why cross-chain technology has always been a key topic in the blockchain community: the future of blockchain may be a network of independent parallel blockchains. This is also the vision of Cosmos. Cosmos provides Tendermint and the Cosmos SDK to let developers build customized, independent blockchains. These parallel blockchains can exchange value through the Inter-Blockchain Communication (IBC) protocol and Peg-Zones. Our teams have had great experiences with the Cosmos SDK, and the IBC protocol is maturing. This architecture could solve blockchain interoperability and scalability issues.

  • Training machine learning models and predicting outcomes from them often requires code that takes the data to the model. Google BigQuery ML inverts this by bringing the model to the data. Google BigQuery is a data warehouse designed to serve large-scale queries using SQL for analytical use cases. Google BigQuery ML extends this function and its SQL interface to create, train and evaluate machine learning models using BigQuery data sets, and eventually run model predictions to create new BigQuery data sets. It supports a limited set of models out of the box, such as linear regression for forecasting and binary and multiclass logistic regression for classification. It also supports, with limited functionality, importing previously trained TensorFlow models. Although BigQuery ML and its SQL-based approach lower the bar for using machine learning to make predictions and recommendations, particularly for quick explorations, this comes with a difficult trade-off: compromising on other aspects of model training such as ethical bias testing, explainability and continuous delivery for machine learning.

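    To make the bring-the-model-to-the-data idea concrete, here's a minimal sketch that trains and queries a logistic regression model from Python via the google-cloud-bigquery client; the dataset, table and column names are placeholders.

```python
# Minimal sketch: training and querying a BigQuery ML model from Python.
# The dataset, table and column names are placeholders.
from google.cloud import bigquery

client = bigquery.Client()

# CREATE MODEL brings training to the data; no extraction pipeline needed.
client.query("""
    CREATE OR REPLACE MODEL `mydataset.churn_model`
    OPTIONS (model_type = 'logistic_reg') AS
    SELECT tenure, monthly_spend, churned AS label
    FROM `mydataset.customers`
""").result()

# Predictions come back as a regular result set / new data set.
rows = client.query("""
    SELECT *
    FROM ML.PREDICT(MODEL `mydataset.churn_model`,
                    (SELECT tenure, monthly_spend FROM `mydataset.customers`))
""").result()
for row in rows:
    print(dict(row))
```
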
  • JupyterLab is the next-generation web-based user interface for Project Jupyter. If you've been using Jupyter Notebooks, JupyterLab is worth a try; it gives you an interactive environment for Jupyter notebooks, code and data. We see it as an evolution of Jupyter Notebook: it provides a better experience by extending its original capabilities of allowing code, visualization and documentation to exist in one place.

  • Marquez is a relatively young open source project for collecting and serving metadata about a data ecosystem. It defines a simple data model to capture metadata such as lineage, upstream and downstream data-processing jobs and their status, and a flexible set of tags to capture the attributes of data sets. It provides a simple RESTful API to manage the metadata, which eases the integration of Marquez with other tool sets within the data ecosystem.

    We've used Marquez as a starting point and easily extended it to fit our needs, such as enforcing security policies and making changes to its domain language. If you're looking for a small and simple tool to bootstrap the storage and visualization of your data-processing jobs and data sets, Marquez is a good place to start.

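    Here's a minimal sketch against Marquez's RESTful API, registering a namespace and listing its datasets; the endpoint paths reflect the version we used and may differ in yours, and the namespace details are placeholders.

```python
# Minimal sketch against Marquez's RESTful API (paths may vary by version);
# assumes a local Marquez instance on its default port.
import requests

BASE = "http://localhost:5000/api/v1"

# Register a namespace for a team's pipelines (placeholder details).
requests.put(f"{BASE}/namespaces/analytics",
             json={"ownerName": "data-team", "description": "Analytics jobs"})

# Browse the datasets and their metadata (lineage, tags, job status).
datasets = requests.get(f"{BASE}/namespaces/analytics/datasets").json()
print(datasets)
```
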
  • Matomo (formerly Piwik) is an open source web analytics platform that provides you with full control over your data. You can self-host Matomo and secure your web analytics data from third parties. Matomo also makes it easy to integrate web analytics data with your in-house data platform and lets you build usage models that are tailored to your needs.

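    As an example of pulling analytics data into your own platform, the sketch below queries Matomo's HTTP Reporting API for a visits summary; the host, site ID and token are placeholders.

```python
# Minimal sketch: pulling visit summaries from a self-hosted Matomo instance
# via its HTTP Reporting API. The host, site ID and token are placeholders.
import requests

resp = requests.get("https://analytics.example.com/index.php", params={
    "module": "API",
    "method": "VisitsSummary.get",
    "idSite": 1,
    "period": "day",
    "date": "yesterday",
    "format": "JSON",
    "token_auth": "anonymous",  # placeholder; use a real API token
})
print(resp.json())
```
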
  • MeiliSearch is a fast, easy-to-use and easy-to-deploy text search engine. Over the years Elasticsearch has become the popular choice for scalable text searches. However, if you don't have the volume of data that warrants a distributed solution but still want to provide a fast typo-tolerant search engine, then we recommend assessing MeiliSearch.

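    Here's a minimal sketch using the official meilisearch Python client against a local instance; the index and documents are placeholders, and indexing is asynchronous, so a real script would wait for the indexing task to complete before searching.

```python
# Minimal sketch using the official meilisearch Python client against a local
# instance; the index and documents are placeholders.
import meilisearch

client = meilisearch.Client("http://127.0.0.1:7700")
index = client.index("movies")

# Indexing is asynchronous; a real script would wait for the task to finish.
index.add_documents([
    {"id": 1, "title": "Carol"},
    {"id": 2, "title": "Wonder Woman"},
])

# Typo-tolerant out of the box: "wnder" still finds "Wonder Woman".
print(index.search("wnder"))
```
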
  • Ultraleap (previously Leap Motion) has been a leader in the XR space for some time, creating remarkable hand-tracking hardware that allows a user's hands to make the leap into virtual reality. Stratos is Ultraleap's underlying haptics, sensors and software platform, and it can use targeted ultrasound to create haptic feedback in mid-air. A use case is responding to a driver's hand gesture to change the air conditioning in the car and providing haptic feedback as part of the interface. We're excited to see this technology and what creative technologists might do to incorporate it into their use cases.

  • Trillian is a cryptographically verifiable, centralized data store. For trustless, decentralized environments, you can use blockchain-based distributed ledgers. For enterprise environments, however, where the cost of CPU-heavy consensus protocols is unwarranted, we recommend you give Trillian a try.

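    Trillian itself is a Go service driven over gRPC, so rather than guess at its API, here's a simplified sketch of the Merkle-tree hashing (using RFC 6962-style leaf/node prefixes) that underpins verifiable data stores like Trillian; real implementations define the tree shape more carefully.

```python
# Simplified sketch of Merkle-tree hashing with RFC 6962-style domain
# separation, the construction underpinning verifiable logs like Trillian.
import hashlib

def leaf_hash(data: bytes) -> bytes:
    return hashlib.sha256(b"\x00" + data).digest()   # 0x00 prefix for leaves

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()  # 0x01 for nodes

def merkle_root(leaves: list[bytes]) -> bytes:
    """Root hash over the leaves; tampering with any entry changes the root."""
    level = [leaf_hash(leaf) for leaf in leaves]
    while len(level) > 1:
        nxt = [node_hash(a, b) for a, b in zip(level[0::2], level[1::2])]
        if len(level) % 2:          # an odd node is carried up unchanged
            nxt.append(level[-1])
        level = nxt
    return level[0]

print(merkle_root([b"entry-1", b"entry-2", b"entry-3"]).hex())
```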

Hold

  • Technologies, especially wildly popular ones, have a tendency to be overused. What we're seeing at the moment is Node overload, a tendency to use Node.js indiscriminately or for the wrong reasons. Among these, two stand out in our opinion. Firstly, we frequently hear that Node should be used so that all programming can be done in one programming language. Our view remains that polyglot programming is a better approach, and this still goes both ways. Secondly, we often hear teams cite performance as a reason to choose Node.js. Although there are countless more or less sensible benchmarks, this perception is rooted in history. When Node.js became popular, it was the first major framework to embrace a nonblocking programming model, which made it very efficient for IO-heavy tasks. (We mentioned this in our write-up of Node.js in 2012.) Due to its single-threaded nature, though, Node.js was never a good choice for compute-heavy workloads, and now that capable nonblocking frameworks also exist on other platforms — some with elegant, modern APIs — performance is no longer a reason to choose Node.js.

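    As an illustration that nonblocking IO is no longer unique to Node.js, here's a minimal sketch of single-threaded, concurrent IO using Python's asyncio; the simulated fetches stand in for real network calls.

```python
# Minimal sketch: nonblocking, single-threaded concurrency outside Node.js,
# using Python's asyncio. The "fetch" here just simulates IO latency.
import asyncio
import time

async def fetch(name: str, delay: float) -> str:
    await asyncio.sleep(delay)          # stands in for a network call
    return f"{name} done after {delay}s"

async def main() -> None:
    start = time.perf_counter()
    # Three IO-bound tasks run concurrently on one thread.
    results = await asyncio.gather(
        fetch("users", 1.0), fetch("orders", 1.0), fetch("stock", 1.0),
    )
    print(results, f"elapsed ~{time.perf_counter() - start:.1f}s")  # ~1s, not 3s

asyncio.run(main())
```
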
Unable to find something you expected to see?

Each edition of the radar features blips reflecting what we came across during the previous six months. We might have covered what you're looking for on a previous radar already. We sometimes cull things just because there are too many to talk about. A blip might also be missing because the radar reflects our experience; it is not based on a comprehensive market analysis.
