Volume 32 | April 2025

Platforms

Adopt

  • 23. GitLab CI/CD

    GitLab CI/CD has evolved into a fully integrated system within GitLab, covering everything from code integration and testing to deployment and monitoring. It supports complex workflows with features such as multi-stage pipelines, caching, parallel execution and auto-scaling runners and is well-suited to large-scale projects with complex pipeline needs. We want to highlight its built-in security and compliance tools (such as SAST and DAST analysis), which make it a good fit for use cases with strict compliance requirements. It also integrates seamlessly with Kubernetes, supporting cloud-native workflows, and offers real-time logging, test reports and traceability for better observability.

  • 24. Trino

    Trino is an open-source, distributed SQL query engine designed for interactive analytical queries over big data. It's optimized to run both on-premise and in the cloud and supports querying data where it resides, including relational databases and various proprietary data stores via connectors. Trino can also query data stored in file formats such as Parquet and open table formats such as Apache Iceberg. Its built-in query federation capabilities allow data from multiple sources to be queried as a single logical table, making it a great option for analytical workloads that require aggregating data across sources. Trino is a key part of popular stacks such as AWS Athena, Starburst and other proprietary data platforms. Our teams have used it successfully in several use cases, and when it comes to querying data sets across multiple sources for analytics, Trino has been a reliable choice.
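
    To illustrate the query federation described above, here's a minimal sketch using the trino Python client; the host, catalogs and table names are assumptions made up for the example.

        import trino  # pip install trino

        # Connect to a Trino coordinator (illustrative connection details).
        conn = trino.dbapi.connect(
            host="trino.example.com",
            port=8080,
            user="analyst",
            catalog="hive",
            schema="sales",
        )
        cur = conn.cursor()

        # One SQL statement joins a Hive table with a PostgreSQL table,
        # as if both lived in a single logical database.
        cur.execute("""
            SELECT o.order_id, c.name
            FROM hive.sales.orders o
            JOIN postgresql.crm.customers c ON o.customer_id = c.id
            LIMIT 10
        """)
        for row in cur.fetchall():
            print(row)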

Trial

  • 25. ABsmartly

    ABsmartly is an advanced A/B testing and experimentation platform designed for fast, trustworthy decision-making. Its key differentiator is its Group Sequential Testing (GST) engine, which speeds up test results by up to 80% compared to traditional A/B testing tools. The platform offers real-time reporting, deep data segmentation and seamless, API-first integration, enabling experiments across web applications, mobile, microservices and machine-learning models.

    ABsmartly tackles the main challenges of scalable, data-driven experimentation, enabling faster iteration and more agile product development. Its zero-latency execution, advanced segmentation capabilities and support for cross-platform experiments make it especially valuable for organizations looking to scale a culture of experimentation and prioritize data-driven innovation. By significantly shortening test cycles and automating results analysis, ABsmartly has helped us optimize features and user experiences more efficiently than traditional A/B testing platforms.

  • 26. Dapr

    Dapr has evolved considerably since we last featured it in the Radar. Its many new features include job scheduling, virtual actors and more sophisticated retry policies and observability components. Its list of building blocks keeps growing, with jobs, cryptography and more. Our teams also highlight its growing focus on secure defaults, with support for mTLS and distroless images (i.e., images without a full operating system, containing only the required binaries and dependencies). Overall, we're happy with Dapr and looking forward to future developments.
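
    To give a flavor of the building-block API, here's a minimal state management sketch with the Dapr Python SDK; it assumes a running sidecar with a state store component named statestore.

        from dapr.clients import DaprClient  # pip install dapr

        # Talk to the Dapr sidecar; the state building block hides the
        # concrete store (Redis, DynamoDB, ...) behind a component name.
        with DaprClient() as client:
            client.save_state(store_name="statestore", key="order-42", value="pending")
            state = client.get_state(store_name="statestore", key="order-42")
            print(state.data)  # b'pending'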

  • 27. Grafana Alloy

    Previously known as Grafana Agent, Grafana Alloy is an open-source OpenTelemetry Collector. Alloy is designed to be an all-in-one collector for all telemetry data, including logs, metrics and traces. It supports commonly used telemetry formats such as OpenTelemetry, Prometheus and Datadog. With the recent deprecation of Promtail, Alloy is emerging as the preferred choice for telemetry collection — especially for logs — if you're using the Grafana observability stack.

  • 28. Grafana Loki

    Grafana Loki is a horizontally scalable, highly available, multi-tenant log aggregation system inspired by Prometheus. Loki only indexes metadata about your logs as a set of labels for each log stream. The log data itself is stored in a block storage solution such as S3, GCS or Azure Blob Storage. The upshot is that Loki promises lower operational complexity and storage costs than its competitors. As you'd expect, it integrates seamlessly with Grafana and Grafana Alloy, although other collection mechanisms can be used as well.

    Loki 3.0 introduced native OpenTelemetry support, making ingestion and integration with OpenTelemetry-based systems as simple as configuring an endpoint. It also offers advanced multi-tenancy features, such as tenant isolation via shuffle sharding, which prevents misbehaving tenants (for example, ones running heavy queries or experiencing outages) from affecting others in a cluster. If you haven't been keeping an eye on the Grafana ecosystem, now is a great time to take a look, as it's evolving rapidly.
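
    As a sketch of the label-based model described above, here's a query against Loki's HTTP API from Python; the URL and labels are assumptions. The LogQL selector matches on indexed labels only, while the |= filter scans the stored log lines.

        import requests  # pip install requests

        resp = requests.get(
            "http://loki.example.com:3100/loki/api/v1/query_range",
            params={
                "query": '{app="checkout", env="prod"} |= "error"',  # LogQL
                "limit": 20,
            },
        )
        for stream in resp.json()["data"]["result"]:
            print(stream["stream"], len(stream["values"]), "lines")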

  • 29. Grafana Tempo

    Grafana Tempo is a high-scale distributed tracing backend that supports open standards such as OpenTelemetry. Designed to be cost-efficient, it relies on object storage for long-term trace retention and enables trace search, span-based metrics generation and correlation with logs and metrics. By default, Grafana Tempo uses a columnar block format based on Apache Parquet, which improves query performance and lets other downstream tools access the trace data. Queries run via TraceQL and the Tempo CLI. Grafana Alloy can also be configured to collect and forward traces to Grafana Tempo. Our teams ran Grafana Tempo on GKE, using MinIO for object storage, OpenTelemetry collectors and Grafana for trace visualization.
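
    To give a feel for TraceQL, here's a hedged sketch that searches traces through Tempo's HTTP search API from Python; the endpoint URL, service name and threshold are assumptions made up for the example.

        import requests  # pip install requests

        # Find slow checkout-service traces with a TraceQL expression.
        resp = requests.get(
            "http://tempo.example.com:3200/api/search",
            params={"q": '{ resource.service.name = "checkout" && duration > 500ms }'},
        )
        for trace in resp.json().get("traces", []):
            print(trace["traceID"], trace.get("durationMs"))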

  • 30. Railway

    Heroku used to be an excellent choice for many developers who wanted to launch and deploy their applications quickly. In recent years, we've also seen the rise of deployment platforms such as Vercel, which are more modern, lightweight and easy to use but designed for front-end applications. A full-stack alternative in this space is Railway, a PaaS (platform as a service) cloud platform that streamlines everything from GitHub/Docker deployment to production observability.

    Railway supports most mainstream programming frameworks and databases, as well as containerized deployment. As a long-term hosting platform for an application, it may warrant a careful cost comparison against other platforms. So far, our team has had a good experience with Railway's deployment and observability. Operations run smoothly, and it integrates well with the continuous deployment practices we advocate.

  • 31. Unblocked

    Unblocked is an off-the-shelf AI team assistant. Once integrated with codebase repositories, corporate documentation platforms, project management tools and communication tools, Unblocked helps answer questions about complex technical and business concepts, architectural design and implementation, as well as operational processes. This is particularly useful for navigating large or legacy systems. In using it, we observed that teams value quick access to contextual information more than the generation of code and user stories; for those scenarios, especially coding assistance, software engineering agents are a better fit.

  • 32. Weights & Biases

    Weights & Biases has continued to evolve, adding more LLM-focused features since it last appeared in the Radar. The company is expanding Traces and introducing Weave, a full-fledged platform that goes beyond tracing LLM-based agentic systems. Weave lets you create system evaluations, define custom metrics, use LLMs as judges for tasks such as summarization and save data sets that capture different behaviors for analysis. This helps optimize LLM components and track performance at both local and global levels. The platform also supports iterative development and effective debugging of agentic systems, where errors can be hard to spot. In addition, it enables the collection of valuable human feedback, which can later be used to fine-tune models.
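
    As a brief sketch of Weave's tracing model (the project name and model are illustrative assumptions), any function decorated with weave.op is recorded with its inputs, outputs and latency:

        import weave  # pip install weave
        from openai import OpenAI

        weave.init("summarization-demo")  # illustrative project name

        @weave.op()
        def summarize(text: str) -> str:
            # Traced call: inputs, outputs and latency land in the Weave UI,
            # where they can feed evaluations and LLM-as-judge scoring.
            client = OpenAI()
            resp = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{"role": "user", "content": f"Summarize: {text}"}],
            )
            return resp.choices[0].message.content

        print(summarize("Weave traces LLM calls and stores them for evaluation."))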

Assess

  • 33. Arize Phoenix

    With the popularity of LLM and agentic applications, LLM observability is becoming more and more important. Previously, we’ve recommended platforms such as Langfuse and Weights & Biases (W&B). Arize Phoenix is another emerging platform in this space, and our team has had a positive experience using it. It offers standard features like LLM tracing, evaluation and prompt management, with seamless integration into leading LLM providers and frameworks. This makes it easy to gather insights on LLM output, latency and token usage with minimal configuration. So far, our experience is limited to the open-source tool but the broader Arize platform offers more comprehensive capabilities. We look forward to exploring it in the future.
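
    The "minimal configuration" mentioned above looks roughly like this sketch, which assumes the open-source Phoenix package and its OpenAI auto-instrumentation:

        import phoenix as px  # pip install arize-phoenix
        from phoenix.otel import register
        from openinference.instrumentation.openai import OpenAIInstrumentor

        px.launch_app()  # local Phoenix UI; no external service required

        # Route OpenTelemetry traces to Phoenix and auto-instrument the
        # OpenAI client; subsequent calls are traced with latency and tokens.
        tracer_provider = register(project_name="my-llm-app")
        OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)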

  • 34. Chainloop

    Chainloop is an open-source supply chain security platform that helps security teams enforce compliance while allowing development teams to seamlessly integrate security compliance into CI/CD pipelines. It consists of a control plane, which acts as the single source of truth for security policies, and a CLI, which runs attestations within CI/CD workflows to ensure compliance. Security teams define workflow contracts specifying which artifacts — such as SBOMs and vulnerability reports — must be collected, where to store them and how to evaluate compliance. Chainloop uses Rego, OPA's policy language, to validate attestations — for example, ensuring a CycloneDX SBOM meets version requirements. During workflow execution, security artifacts like SBOMs are attached to an attestation and pushed to the control plane for enforcement and auditing. This approach ensures compliance can be enforced consistently and at scale while minimizing friction in development workflows. This results in an SLSA level-three–compliant single source of truth for metadata, artifacts and attestations.

  • 35. DeepSeek R1

    DeepSeek-R1 is DeepSeek's first generation of reasoning models. Through a progression of non-reasoning models, the engineers at DeepSeek designed and used methods to maximize hardware utilization. These include Multi-Head Latent Attention (MLA), Mixture of Experts (MoE) gating, 8-bit floating-point (FP8) training and low-level PTX programming. Their high-performance computing co-design approach enables DeepSeek-R1 to rival state-of-the-art models at significantly reduced cost for training and inference.

    DeepSeek-R1-Zero is notable for another innovation: the engineers were able to elicit reasoning capabilities from a non-reasoning model using simple reinforcement learning without any supervised fine-tuning. All DeepSeek models are open-weight, which means they are freely available, though training code and data remain proprietary. The repository includes six dense models distilled from DeepSeek-R1, based on Llama and Qwen, with DeepSeek-R1-Distill-Qwen-32B outperforming OpenAI-o1-mini on various benchmarks.

  • 36. Deno

    Created by Ryan Dahl, the inventor of Node.js, Deno was designed to address what he saw as mistakes in Node.js. It features a stricter sandboxing system, built-in dependency management and native TypeScript support — a key draw for its user base. Many of us prefer Deno for TypeScript projects, as it feels like a true TypeScript runtime and toolchain, rather than an add-on to Node.js.

    Since its inclusion in the Radar in 2019, Deno has made significant advancements. The Deno 2 release introduces backward compatibility with Node.js and npm libraries, long-term support (LTS) releases and other improvements. Previously, one of the biggest barriers to adoption was the need to rewrite Node.js applications. These updates reduce migration friction while expanding dependency options for supporting tools and systems. Given the massive Node.js and npm ecosystem, these changes should drive further adoption.

    Additionally, Deno’s Standard Library has stabilized, helping combat the proliferation of low-value npm packages across the ecosystem. Its tooling and Standard Library make TypeScript or JavaScript more appealing for server-side development. However, we caution against choosing a platform solely to avoid polyglot programming.

  • 37. Graphiti

    Graphiti builds dynamic, temporally aware knowledge graphs that capture evolving facts and relationships. Our teams use GraphRAG to uncover data relationships, which enhances retrieval and response accuracy. As data sets constantly evolve, Graphiti maintains temporal metadata on graph edges to record relationship lifecycles. It ingests both structured and unstructured data as discrete episodes and supports queries using a fusion of time-based, full-text, semantic and graph algorithms. For LLM-based applications — whether RAG or agentic — Graphiti enables long-term recall and state-based reasoning.
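
    A minimal sketch of the episode-and-search flow, following the shapes in the project's published examples; the Neo4j connection details and method parameters here are assumptions and should be checked against the docs.

        import asyncio
        from datetime import datetime, timezone
        from graphiti_core import Graphiti  # pip install graphiti-core

        async def main():
            # Graphiti persists the graph in Neo4j (illustrative credentials).
            graphiti = Graphiti("bolt://localhost:7687", "neo4j", "password")

            # Ingest an unstructured episode; entities and relationships are
            # extracted and stamped with temporal metadata on the edges.
            await graphiti.add_episode(
                name="crm-note-123",
                episode_body="Acme Corp upgraded from the Basic to the Pro plan.",
                source_description="CRM note",
                reference_time=datetime.now(timezone.utc),
            )

            # Hybrid retrieval: semantic, full-text and graph-based search.
            print(await graphiti.search("Which plan is Acme Corp on?"))

        asyncio.run(main())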

  • 38. Helicone

    Similar to Langfuse, Weights & Biases and Arize Phoenix, Helicone is a managed LLMOps platform designed to meet the growing enterprise demand for LLM cost management, ROI evaluation and risk mitigation. Open-source and developer-focused, Helicone supports production-ready AI applications, offering prompt experimentation, monitoring, debugging and optimization across the entire LLM lifecycle. It enables real-time analysis of costs, utilization, performance and agentic stack traces across various LLM providers. While it simplifies LLM operations management, the platform is still emerging and may require some expertise to fully leverage its advanced features. Our team's experience with it has been positive so far.
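
    Helicone is commonly used as a drop-in proxy, which the following sketch illustrates for the OpenAI Python client; the keys are placeholders and the model name is an assumption.

        from openai import OpenAI  # pip install openai

        # Point the client at Helicone's gateway; requests then show up with
        # cost, latency and token usage in the Helicone dashboard.
        client = OpenAI(
            api_key="<OPENAI_API_KEY>",
            base_url="https://oai.helicone.ai/v1",
            default_headers={"Helicone-Auth": "Bearer <HELICONE_API_KEY>"},
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": "Hello"}],
        )
        print(resp.choices[0].message.content)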

  • 39. Humanloop

    Humanloop is an emerging platform focused on making AI systems more reliable, adaptable and aligned with user needs by integrating human feedback at key decision points. It offers tools for human labeling, active learning and human-in-the-loop fine-tuning as well as LLM evaluation against business requirements. Additionally, it helps manage the cost-effective lifecycle of GenAI solutions with greater control and efficiency. Humanloop supports collaboration through a shared workspace, version-controlled prompt management and CI/CD integration to prevent regressions. It also provides observability features such as tracing, logging, alerting and guardrails to monitor and optimize AI performance. These capabilities make it particularly relevant for organizations deploying AI in regulated or high-risk domains where human oversight is critical. With its focus on responsible AI practices, Humanloop is worth evaluating for teams looking to build scalable and ethical AI systems.

  • 40. Model Context Protocol (MCP)

    One of the biggest challenges in prompting is ensuring the AI tool has access to all the context relevant to the task. Often, this context already exists within the systems we use all day: wikis, issue trackers, databases or observability systems. Seamless integration between AI tools and these information sources can significantly improve the quality of AI-generated outputs.

    The Model Context Protocol (MCP), an open standard released by Anthropic, provides a standardized framework for integrating LLM applications with external data sources and tools. It defines MCP servers and clients, where servers access the data sources and clients integrate and use this data to enhance prompts. Many coding assistants have already implemented MCP integration, allowing them to act as MCP clients. MCP servers can be run in two ways: locally, as a Python or Node process running on the user’s machine, or remotely, as a server that the MCP client connects to via SSE (though we haven't seen any usage of the remote server variant yet). Currently, MCP is primarily used in the first way, with developers cloning open-source MCP server implementations. While locally run servers offer a neat way to avoid third-party dependencies, they remain less accessible to nontechnical users and introduce challenges such as governance and update management. That said, it's easy to imagine how this standard could evolve into a more mature and user-friendly ecosystem in the future.
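
    Here's a minimal local MCP server sketch using the official Python SDK's FastMCP helper; the tool and its data are stubbed assumptions, but the structure matches how clients discover and call tools.

        from mcp.server.fastmcp import FastMCP  # pip install mcp

        mcp = FastMCP("ticket-lookup")  # server name shown to MCP clients

        @mcp.tool()
        def get_ticket(ticket_id: str) -> str:
            """Return the summary of an issue-tracker ticket (stubbed here)."""
            return f"Ticket {ticket_id}: login page returns 500 after deploy"

        if __name__ == "__main__":
            mcp.run()  # stdio transport by default, for locally run clients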

  • 41. Open WebUI

    Open WebUI is an open-source, self-hosted AI platform with a versatile feature set. It supports OpenAI-compatible APIs and integrates with providers like OpenRouter and GroqCloud, among others. It can run entirely offline by connecting to local or self-hosted models via Ollama. Open WebUI includes a built-in capability for RAG, allowing users to interact with local and web-based documents in a chat-driven experience. It offers granular RBAC controls, enabling different models and platform capabilities for different user groups. The platform is extensible through Functions — Python-based building blocks that customize and enhance its capabilities. Another key feature is model evaluation, which includes a model arena for side-by-side comparisons of LLMs on specific tasks. Open WebUI can be deployed at various scales — as a personal AI assistant, a team collaboration assistant or an enterprise-grade AI platform.
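
    To show what a Function looks like, here's a sketch of a minimal "pipe" Function following the shape in the project's documentation; the behavior is a stub and the details should be checked against the docs.

        # Saved via the Open WebUI admin panel, a pipe Function appears as a
        # selectable model in the chat UI.
        class Pipe:
            def __init__(self):
                self.name = "Echo Demo"

            def pipe(self, body: dict) -> str:
                # body carries the OpenAI-style chat payload; echo the last
                # user message back as the model response.
                messages = body.get("messages", [])
                return f"You said: {messages[-1]['content']}" if messages else ""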

  • 42. pg_mooncake

    pg_mooncake is a PostgreSQL extension that adds columnar storage and vectorized execution. Columnstore tables are stored as Iceberg or Delta Lake tables in the local file system or S3-compatible cloud storage. pg_mooncake supports loading data from file formats like Parquet, CSV and even Hugging Face datasets. It can be a good fit for heavy analytical workloads that typically require columnar storage, as it removes the need to add dedicated columnar store technologies to your stack.
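
    A small sketch of what this looks like in practice, driven from Python via psycopg; the columnstore syntax follows the extension's documentation, while the connection string and table are assumptions.

        import psycopg  # pip install psycopg

        with psycopg.connect("postgresql://localhost/analytics") as conn:
            with conn.cursor() as cur:
                # Columnar table managed by pg_mooncake.
                cur.execute("""
                    CREATE TABLE page_views (
                        url TEXT,
                        viewed_at TIMESTAMP,
                        duration_ms INT
                    ) USING columnstore
                """)
                # Analytical queries run against the columnar representation.
                cur.execute("""
                    SELECT url, count(*) AS views, avg(duration_ms)
                    FROM page_views GROUP BY url ORDER BY views DESC
                """)
                print(cur.fetchall())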

  • 43. Reasoning models

    One of the most significant AI advances since the last Radar is the breakthrough and proliferation of reasoning models. Also marketed as "thinking models," these models have achieved top human-level performance in benchmarks like frontier mathematics and coding.

    Reasoning models are usually trained through reinforcement learning or supervised fine-tuning, enhancing capabilities such as step-by-step thinking (chain of thought, or CoT), exploring alternatives (tree of thought, or ToT) and self-correction. Examples include OpenAI’s o1/o3, DeepSeek R1 and Gemini 2.0 Flash Thinking. However, these models should be seen as a distinct category of LLMs rather than simply more advanced versions.

    This increased capability comes at a cost. Reasoning models require longer response time and higher token consumption, leading us to jokingly call them "Slower AI" (as if current AI wasn’t slow enough). Not all tasks justify this trade-off. For simpler tasks like text summarization, content generation or fast-response chatbots, general-purpose LLMs remain the better choice. We advise using reasoning models in STEM fields, complex problem-solving and decision-making — for example, when using LLMs as judges or improving explainability through explicit CoT outputs. At the time of writing, Claude 3.7 Sonnet, a hybrid reasoning model, had just been released, hinting at a possible fusion between traditional LLMs and reasoning models.

  • 44. Restate

    Restate is a durable execution platform, similar to Temporal, developed by the original creators of Apache Flink. Feature-wise, it offers workflows as code, stateful event processing, the saga pattern and durable state machines. Written in Rust and deployed as a single binary, it uses a distributed log to record events, implemented using a virtual consensus algorithm based on Flexible Paxos; this ensures durability in the event of node failure. SDKs are available for the usual suspects: Java, Go, Rust and TypeScript. We still maintain that it's best to avoid distributed transactions in distributed systems, because of both the additional complexity and the inevitable additional operational overhead involved. However, this platform is worth assessing if you can’t avoid distributed transactions in your environment.

  • 45. Supabase

    Supabase is an open-source Firebase alternative for building scalable and secure backends. It offers a suite of integrated services, including a PostgreSQL database, authentication, instant APIs, Edge Functions, real-time subscriptions, storage and vector embeddings. Supabase aims to streamline back-end development, allowing developers to focus on building front-end experiences while leveraging the power and flexibility of open-source technologies. Unlike Firebase, Supabase is built on top of PostgreSQL. If you're working on prototyping or an MVP, Supabase is worth considering, as it will be easier to migrate to another SQL solution after the prototyping stage.
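
    A quick sketch of the "instant API" experience using the supabase Python client; the project URL, key and table are placeholders for the example.

        from supabase import create_client  # pip install supabase

        supabase = create_client("https://<project>.supabase.co", "<anon-key>")

        # Auto-generated API over a PostgreSQL table: no custom backend code.
        rows = supabase.table("todos").select("*").eq("done", False).execute()
        print(rows.data)

        # Authentication is part of the same client.
        supabase.auth.sign_up({"email": "dev@example.com", "password": "secret123"})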

  • 46. Synthesized

    A common challenge in software development is generating test data for development and test environments. Ideally, test data should be as production-like as possible, while ensuring no personally identifiable or sensitive information is exposed. Though this may seem straightforward, test data generation is far from simple. That's why we’re interested in Synthesized — a platform that can mask and subset existing production data or generate statistically relevant synthetic data. It integrates directly into build pipelines and offers privacy masking, providing per-attribute anonymization through irreversible data obfuscation techniques such as hashing, randomization and binning. Synthesized can also generate large volumes of synthetic data for performance testing. While it includes the obligatory GenAI features, its core functionality addresses a real and persistent challenge for development teams, making it worth exploring.

  • 47. Tonic.ai

    Tonic.ai is part of a growing trend in platforms designed to generate realistic, de-identified synthetic data for development, testing and QA environments. Similar to Synthesized, Tonic.ai is a platform with a comprehensive suite of tools addressing various data synthesis needs in contrast to the library-focused approach of Synthetic Data Vault. Tonic.ai generates both structured and unstructured data, maintaining the statistical properties of production data while ensuring privacy and compliance through differential privacy techniques. Key features include automatic detection, classification and redaction of sensitive information in unstructured data, along with on-demand database provisioning via Tonic Ephemeral. It also offers Tonic Textual, a secure data lakehouse that helps AI developers leverage unstructured data for retrieval-augmented generation (RAG) systems and LLM fine-tuning. Teams looking to accelerate engineering velocity while generating scalable, realistic data — all while adhering to stringent data privacy requirements — should consider evaluating Tonic.ai.

  • 48. turbopuffer

    turbopuffer is a serverless, multi-tenant search engine that seamlessly integrates vector and full-text search on object storage. We quite like its architecture and design choices, particularly its focus on durability, scalability and cost efficiency. By using object storage as a write-ahead log while keeping its query nodes stateless, it’s well-suited for high-scale search workloads.

    Designed for performance and accuracy, turbopuffer delivers high recall out of the box, even for complex filter-based queries. It caches cold query results on NVMe SSDs and keeps frequently accessed namespaces in memory, enabling low-latency search across billions of documents. This makes it ideal for large-scale document retrieval, vector search and retrieval-augmented generation (RAG) AI applications. However, its reliance on object storage introduces trade-offs in query latency, making it most effective for workloads that benefit from stateless, distributed compute. turbopuffer powers high-scale production systems like Cursor but is currently only available by referral or invitation.

  • 49. VectorChord

    VectorChord is a PostgreSQL extension for vector similarity search, developed by the creators of pgvecto.rs as its successor. It’s open source, compatible with pgvector data types and designed for disk-efficient, high-performance vector search. It employs inverted file indexing (IVF) along with RaBitQ quantization to enable fast, scalable and accurate vector search while significantly reducing computation demands. Like other PostgreSQL extensions in this space, it leverages the PostgreSQL ecosystem, allowing vector search alongside standard transactional operations. Though still in its early stages, VectorChord is worth assessing for vector search workloads.
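
    A hedged sketch from Python via psycopg: the extension and index access method names follow the project's documentation, while the connection details, table and index options are assumptions to be verified against the docs.

        import psycopg  # pip install psycopg

        with psycopg.connect("postgresql://localhost/vectors") as conn:
            with conn.cursor() as cur:
                cur.execute("CREATE EXTENSION IF NOT EXISTS vchord CASCADE")
                cur.execute("CREATE TABLE docs (id bigserial PRIMARY KEY, embedding vector(3))")
                cur.execute("INSERT INTO docs (embedding) VALUES ('[1,2,3]'), ('[4,5,6]')")
                # IVF + RaBitQ index (vchordrq); options follow the docs' shape.
                cur.execute("""
                    CREATE INDEX ON docs USING vchordrq (embedding vector_l2_ops)
                    WITH (options = $$
                    residual_quantization = true
                    [build.internal]
                    lists = [1]
                    $$)
                """)
                # pgvector-compatible distance operator for nearest neighbor.
                cur.execute("SELECT id FROM docs ORDER BY embedding <-> '[3,3,3]' LIMIT 1")
                print(cur.fetchone())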

Hold

  • 50. Tyk hybrid API management

    We've observed multiple teams encountering issues with the Tyk hybrid API management solution. While the concept of a managed control plane and self-managed data planes offers flexibility for complex infrastructure setups (such as multi-cloud and hybrid cloud), teams have experienced control plane incidents that were only discovered internally rather than by Tyk, highlighting potential observability gaps in Tyk's AWS-hosted environment. Furthermore, incident support appears slow; communicating via tickets and emails isn’t ideal in these situations. Teams have also reported issues with the maturity of Tyk's documentation, often finding it inadequate for complex scenarios and issues. Additionally, other products in the Tyk ecosystem seem immature as well; for example, the enterprise developer portal is reported to lack backward compatibility and to have limited customization capabilities. Especially for Tyk’s hybrid setup, we recommend proceeding with caution and will continue to monitor its maturity.

Didn't find something you expected to see?

 

Each edition of the Radar features blips that reflect our experiences over the previous six months. We may have already covered what you're looking for in a previous Radar. Sometimes we leave things out simply because there's too much to cover. A topic may also be missing because the Radar reflects our experience; it isn't based on a comprehensive market analysis.
