Tools
Adopt
-
57. ClickHouse
ClickHouse is an open-source, distributed columnar online analytical processing (OLAP) database for real-time analytics. It has matured into a highly performant and scalable engine capable of handling large-scale data analytics. Its incremental materialized views, efficient query engine and strong data compression make it ideal for interactive queries. Built-in support for approximate aggregate functions enables trade-offs between accuracy and performance, which is especially useful for high-cardinality analytics. The addition of S3 support to the MergeTree engine family allows separation of storage and compute, backing ClickHouse tables with S3-compatible object storage. We’ve also found ClickHouse to be an excellent backend for OpenTelemetry data and crash analytics tools like Sentry. For teams seeking a fast, open-source analytics engine, ClickHouse is a strong choice.
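To make the MergeTree and approximate-aggregation points concrete, here is a minimal sketch using the clickhouse-connect Python driver; the table, columns and localhost connection are illustrative assumptions rather than anything from this entry.

```python
# Minimal sketch (assumed schema and connection): a MergeTree table plus
# approximate aggregates for high-cardinality, interactive analytics.
import clickhouse_connect

client = clickhouse_connect.get_client(host="localhost", port=8123, username="default")

client.command("""
    CREATE TABLE IF NOT EXISTS page_views (
        user_id UInt64,
        url String,
        duration_ms Float64,
        ts DateTime
    ) ENGINE = MergeTree
    ORDER BY (url, ts)
""")

# uniq() and quantile() are approximate aggregate functions: they trade a
# small error margin for much lower memory use on high-cardinality columns.
result = client.query("""
    SELECT
        url,
        uniq(user_id)               AS approx_unique_users,
        quantile(0.95)(duration_ms) AS p95_duration_ms
    FROM page_views
    GROUP BY url
    ORDER BY approx_unique_users DESC
    LIMIT 10
""")
for row in result.result_rows:
    print(row)
```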
-
58. NeMo Guardrails
NeMo Guardrails is an open-source toolkit from NVIDIA that makes it easy to add programmable safety and control mechanisms to LLM-based conversational applications. It ensures outputs remain safe, on-topic and compliant by defining and enforcing behavioral rules. Developers use Colang, a purpose-built language, to create flexible dialogue flows and manage conversations, enforcing predefined paths and operational procedures. NeMo Guardrails also provides an asynchronous-first API for performance and supports safeguards for content safety, security and moderation of inputs and outputs. We’re seeing steady adoption across teams building applications that range from simple chatbots to complex agentic workflows. With its expanding feature set and maturing coverage of common LLM vulnerabilities, we’re moving NeMo Guardrails to Adopt.
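A minimal sketch of wiring the toolkit into an application through its Python API, assuming a ./config directory containing a valid rails configuration (YAML plus Colang flows); treat this as an illustration rather than a canonical setup.

```python
# Minimal sketch: load a rails configuration from disk and wrap an LLM call
# with the configured guardrails. Assumes ./config holds a valid NeMo
# Guardrails configuration, including the model settings and Colang flows.
from nemoguardrails import LLMRails, RailsConfig

config = RailsConfig.from_path("./config")
rails = LLMRails(config)

# Synchronous call shown for brevity; an async variant (generate_async) is
# also available, reflecting the toolkit's asynchronous-first design.
response = rails.generate(messages=[
    {"role": "user", "content": "How do I reset my account password?"}
])
print(response["content"])
```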
-
59. pnpm
Since the last Radar, we’ve continued to receive positive feedback about pnpm from teams. pnpm is a Node.js package manager that delivers significant performance improvements over alternatives, both in speed and disk space efficiency. It hard-links duplicate packages from multiple projects’ node_modules folders to a single location on disk and supports incremental, file-level optimizations that further boost performance. Because pnpm offers a much faster feedback loop with minimal compatibility issues, it has become our default choice for Node.js package management.
-
60. Pydantic
Pydantic is a Python library that uses standard type hints to define data models and enforce data schemas at run-time. Originally, type annotations were added to Python for static analysis, but their growing versatility has led to broader uses, including run-time validation. Built on a fast Rust core, it provides efficient data validation, parsing and serialization.
While it’s best known for web API development, Pydantic has also become essential in LLM applications. We typically use the structured output from LLMs technique to manage the unpredictable nature of model responses: by defining a strict data schema, Pydantic acts as a safety net that converts free-form output (typically JSON text) into deterministic, type-safe Python objects. This approach, often implemented through Pydantic AI or LangChain, turns potentially brittle LLM interactions into reliable, machine-readable data contracts. Our teams have successfully used Pydantic in production to extract structured representations from unstructured documents, ensuring the output conforms to a valid schema. Given its maturity, performance and reliability, Pydantic is now our default choice for production-level Python AI applications.
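A minimal sketch of that safety-net pattern with plain Pydantic; the Invoice schema and the raw LLM response are illustrative assumptions.

```python
# Minimal sketch: a strict schema acting as a safety net for LLM output.
from pydantic import BaseModel, ValidationError


class LineItem(BaseModel):
    description: str
    quantity: int
    unit_price: float


class Invoice(BaseModel):
    invoice_number: str
    total: float
    items: list[LineItem]


# Imagine `raw` is the text an LLM returned when asked for JSON output.
raw = '{"invoice_number": "INV-42", "total": 99.5, "items": [{"description": "Widget", "quantity": 2, "unit_price": 49.75}]}'

try:
    invoice = Invoice.model_validate_json(raw)  # parse and validate in one step
    print(invoice.items[0].description)
except ValidationError as err:
    # Malformed or schema-violating output is caught here instead of
    # propagating as a silent type error downstream.
    print(err)
```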
Trial
-
61. AI Design Reviewer
AI Design Reviewer is a Figma plugin for conducting design audits or heuristic evaluations and collecting actionable feedback on existing or new designs. Its audits cover UX critiques, UI inconsistencies, accessibility gaps, content quality and edge-case scenarios. Beyond identifying issues, it provides domain-aware recommendations that help teams build a shared design vocabulary and articulate the rationale behind design choices. Our teams used AI Design Reviewer to analyze legacy designs — identifying positive experiences to retain and negative ones to address — which informed UX goals for redesigns. It has also served as a peer-review substitute, offering early feedback on new designs before development handoff.
-
62. Barman
Barman (Backup and Recovery Manager) is an open-source tool for managing backups and disaster recovery of PostgreSQL servers. It supports the full disaster recovery process, simplifying the creation of physical backups through a variety of methods, organizing them into a comprehensive catalog and restoring backups to a live server with point-in-time recovery capabilities. We’ve found Barman to be powerful and easy to use, and have been impressed by the speed of point-in-time recovery operations during migration activities. We've also found it well suited to scheduled backups, with the ability to handle complex, mixed configurations of scheduling and retention.
-
63. Claude Code
Anthropic's Claude Code is an agentic AI coding tool that provides a natural language interface and agentic execution model for planning and implementing complex, multi-step workflows. Released less than a year ago, it has already been widely adopted by developers inside and outside Thoughtworks, leading us to place it in Trial. Console-based coding agents such as OpenAI's Codex CLI, Google's Gemini CLI and the open-source OpenCode have been released, while IDE-based assistants like Cursor, Windsurf and GitHub Copilot now include agent modes. Even so, Claude Code remains a favorite. We see teams using it not only to write and modify code but also as a general-purpose AI agent for managing specifications, stories, configuration, infrastructure and documentation.
Agentic coding shifts the developer's focus from writing code to specifying intent and delegating implementation. While this can accelerate development cycles, it can also lead to complacency with AI-generated code, which in turn may result in code that is harder to maintain and evolve — for both humans and AI agents. It’s therefore essential for teams to rigorously manage how Claude Code works, using techniques such as context engineering, curated shared instructions and potentially teams of coding agents.
-
64. Cleanlab
In the data-centric AI paradigm, improving data set quality often delivers greater performance gains than tuning the model itself. Cleanlab is an open-source Python library designed to address this challenge by automatically identifying common data issues — such as mislabeling, outliers and duplicates — across text, image, tabular and audio data sets. Built on the principle of confident learning, Cleanlab leverages model-predicted probabilities to estimate label noise and quantify data quality.
This model-agnostic approach enables developers to diagnose and correct data set errors, then retrain models for improved robustness and accuracy. Our teams have used Cleanlab successfully in production, confirming its effectiveness in real-world settings. We recommend it as a valuable tool for promoting data standardization and improving data set quality in AI engineering projects.
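A minimal sketch of the confident-learning workflow; the classifier and the synthetic data are illustrative stand-ins for a real model and data set.

```python
# Minimal sketch of confident learning with Cleanlab: given out-of-sample
# predicted probabilities from any classifier, flag likely label errors.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from cleanlab.filter import find_label_issues

X = np.random.rand(200, 5)             # stand-in features
labels = np.random.randint(0, 2, 200)   # possibly noisy labels

# Out-of-sample probabilities are required so the model doesn't simply
# "confirm" its own training labels.
pred_probs = cross_val_predict(
    LogisticRegression(), X, labels, cv=5, method="predict_proba"
)

issue_idx = find_label_issues(
    labels=labels,
    pred_probs=pred_probs,
    return_indices_ranked_by="self_confidence",
)
print(f"{len(issue_idx)} examples flagged for review")
```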
-
65. Context7
Context7 is an MCP server that addresses inaccuracies in AI-generated code. While LLMs rely on outdated training data, Context7 ensures they generate accurate, up-to-date and version-specific code for the libraries and frameworks used in a project. It does this by pulling the latest documentation and functional code examples directly from framework source repositories and injecting them into the LLM's context window at the moment of prompting. In our experience, Context7 has greatly reduced code hallucinations and reliance on stale training data. You can configure it with AI code editors such as Claude Code, Cursor or VS Code to generate, refactor or debug framework-dependent code.
-
66. Data Contract CLI
Data Contract CLI is an open-source command-line tool designed for working with the Data Contract specification. It helps you create and edit data contracts and, critically, lets you validate data against its contract, which is essential for ensuring the integrity and quality of your data products.
The CLI offers broad support for multiple schema definitions (Avro, SQL DDL, Open Data Contract Standard, etc.) and can compare different contract versions to immediately detect breaking changes. We've found it especially useful in the data mesh space to operationalize contract governance between data products via CI/CD integration. This approach reduces manual errors and ensures data quality, integrity and compatibility in data exchanges across services.
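As a rough sketch of operationalizing this in a CI/CD step, the tool can also be driven from Python alongside the datacontract CLI; the contract file name and the Python API usage below are assumptions based on the project's documentation.

```python
# Minimal sketch: fail a pipeline step when data no longer conforms to its
# contract. Assumes a datacontract.yaml describing the data product and its
# quality checks (connection credentials come from the environment).
import sys

from datacontract.data_contract import DataContract

data_contract = DataContract(data_contract_file="datacontract.yaml")

# test() validates the actual data against the contract's schema and checks.
run = data_contract.test()
if not run.has_passed():
    print("Data contract validation failed")
    sys.exit(1)
```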
-
67. Databricks Assistant
Databricks Assistant is an AI-powered conversational tool integrated directly into the Databricks platform, acting as a contextual pair programmer for data professionals. Unlike general-purpose coding assistants, it benefits from a native understanding of the Databricks environment and data context, including metadata from the Unity Catalog. The Assistant goes beyond generating code snippets; it can craft complex, multi-step SQL and Python queries, diagnose errors and provide detailed, workspace-specific explanations. For organizations already invested in the Databricks ecosystem, it can accelerate productivity and lower the barrier to entry for complex data tasks.
-
68. Hoppscotch
Hoppscotch is a lightweight open-source tool for API development, debugging, testing and sharing. It supports multiple protocols — including HTTP, GraphQL and WebSocket — and offers cross-platform clients for web, desktop and CLI environments.
While the API tooling space is crowded with alternatives like Postman, Insomnia and Bruno, Hoppscotch stands out for its lightweight footprint and privacy-friendly design. It omits analytics, uses local-first storage and supports self-hosting. It’s a strong choice for organizations seeking an intuitive way to share API scripts while maintaining strong data privacy.
-
69. NVIDIA DCGM Exporter
NVIDIA DCGM Exporter is an open-source tool that helps teams monitor distributed GPU training at scale. It converts proprietary telemetry from the NVIDIA Data Center GPU Manager (DCGM) into open formats compatible with standard monitoring systems. The Exporter exposes critical real-time metrics — including GPU utilization, temperature, power and ECC error counts — from both GPU and host servers. This visibility is essential for organizations fine-tuning custom LLMs or running long-duration, GPU-intensive training jobs. The straggler effect — where one slow worker bottlenecks the entire process — can reduce throughput by over 10% and waste up to 45% of allocated GPU hours. Designed for cloud-native, large-scale environments, the DCGM Exporter integrates seamlessly with Prometheus and Grafana, helping ensure every GPU operates within optimal performance bounds.
-
70. RelationalAI
When large volumes of diverse data are brought into Snowflake, the inherent relationships and implicit rules within that data can become obscured. Built as a Snowflake Native App, RelationalAI enables teams to build sophisticated models that capture meaningful concepts, define core business entities and embed complex logic directly against Snowflake tables. Its powerful Graph Reasoner then lets users create, analyze and visualize relational knowledge graphs based on these models. Built-in algorithms help explore graph structures and reveal hidden patterns. For organizations managing massive, fast-changing data sets, constructing a knowledge graph can be essential for proactive monitoring and generating richer and more actionable insights.
-
71. UX Pilot
UX Pilot is an AI tool that supports multiple stages of the UX design process — from wireframing to high-fidelity visual design and review. It accepts text or image inputs and can automatically generate screens, flows and layouts. Its Autoflow feature creates user flow transitions, while Deep Design produces richer, more detailed outputs. UX Pilot also includes a Figma plugin that exports generated designs for refinement within standard design tools. Our teams have used UX Pilot for ideation and inspiration, generating multiple options during Crazy 8’s exercises and translating project story lists into product vision boards and epic-level design concepts. Tools like UX Pilot also enable non-designers, such as product managers, to create quick prototypes and gather early stakeholder feedback — a growing trend in AI-assisted design workflows.
-
72. v0
v0 has evolved since we last featured it in the Radar. It now includes a design mode that further lowers the barrier for product managers to create and tweak self-serve UI prototypes. The latest release introduces an in-house model with large context windows and multimodal capabilities, enabling v0 to generate and improve UIs from both text and visual inputs. Another notable addition is its agentic mode, which allows the system to break down more complex work into subtasks and select the appropriate model for each. However, this feature is still new, and early feedback has been mixed.
Assess
-
73. Augment Code
Augment Code is an AI coding assistant that delivers deep, context-aware support across large codebases. It stands out through advanced context engineering that enables rapid code index updates and fast retrieval, even as code changes frequently. Augment supports models such as Claude Sonnet 4 and 4.5 and GPT-5, integrates with GitHub, Jira and Confluence and supports the Model Context Protocol (MCP) for external tool interoperability. It provides turn-by-turn guidance for complex codebase changes — from refactors and dependency upgrades to schema updates — along with personalized in-line completions that reflect project-specific dependencies. Augment also promotes collaboration by allowing teams to query and share code insights directly within Slack.
-
74. Azure AI Document Intelligence
Azure AI Document Intelligence (ADI, formerly Form Recognizer) extracts text, tables and key-value pairs from unstructured documents and transforms them into structured data. It uses pre-trained deep learning models to interpret layouts and semantics, and custom models can be trained through a no-code interface for specialized formats. In some cases, however, power users may require a custom fine-tuning interface instead.
One of our teams reported that ADI significantly reduced manual data entry, improved data accuracy and accelerated reporting, leading to faster data-driven decisions. Like Amazon Textract and Google Document AI, it provides enterprise-grade document processing with strong layout understanding. An emerging open-source alternative is IBM’s Docling, which offers a more flexible, code-centric approach to structured data extraction. Compared to traditional OCR tools, ADI captures not just text but also structure and relationships, making it easy to integrate into downstream data pipelines. That said, we’ve observed occasional latency when embedding it into synchronous user workflows, so we recommend using it primarily for asynchronous processing.
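A rough sketch of programmatic use, here via the azure-ai-formrecognizer SDK (published under the service's former name); the endpoint, key and document path are placeholders, and this is an illustration rather than a recommended integration pattern.

```python
# Minimal sketch: extract key-value pairs from a PDF with the prebuilt
# document model. Endpoint, key and file path are placeholder assumptions.
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

client = DocumentAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

with open("invoice.pdf", "rb") as f:
    poller = client.begin_analyze_document("prebuilt-document", document=f)
result = poller.result()

# Structure, not just text: each pair keeps its layout relationship.
for kv in result.key_value_pairs:
    if kv.key and kv.value:
        print(kv.key.content, "->", kv.value.content)
```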
-
75. Docling
Docling is an open-source Python and TypeScript library for advanced document processing of unstructured data. It addresses the often overlooked "last mile" problem of converting real-world documents — like PDFs and PowerPoints — into clean, machine-readable formats. Unlike traditional extractors, Docling uses a computer vision–based approach to interpret document layout and semantic structure, which makes its output particularly valuable for retrieval-augmented generation (RAG) pipelines. It converts complex documents into structured formats such as JSON or Markdown, supporting techniques like structured output from LLMs. This contrasts with ColPali, which feeds page images directly to a vision-language model for retrieval.
Docling's open-source nature and Python core, built on a custom Pydantic-based data model, provide a flexible, self-hosted alternative to proprietary cloud tools such as Azure Document Intelligence, Amazon Textract and Google Document AI. Backed by IBM Research, the project’s rapid development and plug-and-play architecture for integrating with other frameworks like LangGraph make it well worth assessing for teams building production-grade AI-ready data pipelines.
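A minimal sketch of the typical conversion flow; the input path is an illustrative assumption.

```python
# Minimal sketch: convert a PDF into structured Markdown for a RAG pipeline.
from docling.document_converter import DocumentConverter

converter = DocumentConverter()
result = converter.convert("reports/annual_report.pdf")

# Export the layout- and semantics-aware representation as Markdown,
# ready for chunking and embedding downstream.
markdown = result.document.export_to_markdown()
print(markdown[:500])
```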
-
76. E2B
E2B is an open-source tool for running AI-generated code in secure, isolated sandboxes in the cloud. Agents can use these sandboxes, built on top of Firecracker microVMs, to safely execute code, analyze data, conduct research or operate a virtual machine. This enables you to build and deploy enterprise-grade AI agents with full control and security over the execution environment.
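A minimal sketch using the e2b-code-interpreter Python SDK, assuming an E2B API key is configured in the environment; the snippet being executed is an arbitrary example.

```python
# Minimal sketch: run untrusted, AI-generated code inside an isolated,
# Firecracker-backed cloud sandbox instead of on the host machine.
from e2b_code_interpreter import Sandbox

sandbox = Sandbox()  # provisions a fresh sandbox in the cloud
execution = sandbox.run_code("total = sum(range(10))\nprint(total)")
print(execution.logs)  # stdout/stderr captured from inside the sandbox
sandbox.kill()         # tear the sandbox down when done
```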
-
77. Helix editor
There’s been something of a resurgence in simple text editors aiming to replace the command-line favorite Vim. Helix is one such contender in the crowded space alongside Neovim and, more recently, Kakoune. Describing itself — somewhat playfully — as a post-modern text editor, Helix features multiple cursors, Tree-sitter support and integrated Language Server Protocol (LSP) support, which is what first drew our attention. Helix is actively developed with a plugin system on the way. Overall, it’s a lightweight modal editor that feels familiar to Vim users but adds a few modern conveniences.
-
78. Kueue
Kueue is a Kubernetes-native controller for job queuing that manages quotas and resource consumption. It provides APIs for handling Kubernetes workloads with varying priorities and resource requirements, functioning as a job-level manager that determines when to admit or evict jobs. Designed for efficient resource management, job prioritization and advanced scheduling, Kueue helps optimize workload execution in Kubernetes environments — particularly for ML workloads using tools such as Kubeflow. It works alongside the cluster-autoscaler and kube-scheduler rather than replacing them, focusing on job admission based on order, quota, priority and topology awareness. As part of the Kubernetes Special Interest Group (SIG) ecosystem, Kueue adheres to its development standards.
-
79. MCPScan.ai
MCPScan.ai is a security scanner for Model Context Protocol (MCP) servers that operates in two modes: scan and proxy. In scan mode, it analyzes configurations and tool descriptions to detect known vulnerabilities such as prompt injections, tool poisoning and toxic flows. In proxy mode, MCPScan.ai acts as a bridge between the agent system and the MCP server, continuously monitoring runtime traffic. This mode also enforces custom security rules and guardrails, including tool call validation, PII detection and data flow constraints. The tool provides a proactive security layer for agents, ensuring that even if a malicious prompt is accepted, the agent cannot execute harmful actions. MCPScan.ai is a purpose-built security solution for the emerging field of agentic systems.
-
80. oRPC
oRPC (OpenAPI Remote Procedure Call) provides end-to-end typesafe APIs in TypeScript while fully adhering to the OpenAPI specification. It can automatically generate a complete OpenAPI spec, simplifying integration and documentation. We’ve found oRPC particularly strong for integrations. While alternatives such as tRPC and ElysiaJS often require adopting a new framework to gain type safety, oRPC integrates seamlessly with existing Node.js frameworks, including Express, Fastify, Hono and Next.js. This flexibility makes it an excellent choice for teams looking to add end-to-end type safety to existing APIs without a disruptive refactor.
-
81. Power user for dbt
Power user for dbt is an extension for Visual Studio Code that integrates directly with both dbt and dbt Cloud environments. Since dbt remains one of our favorite tools, anything that improves its usability is a welcome addition to the ecosystem. Previously, developers relied on multiple tools to validate SQL code or inspect model lineage outside the IDE. With this extension, those capabilities are now built into VS Code, offering code autocompletion, real-time query results and visual model and column lineage. This last feature makes it easy to navigate between models. Our teams report the plugin reduces pipeline errors and enhances the overall development experience. If you use dbt, we urge you to take a look at this tool.
-
82. Serena
Serena is a powerful coding toolkit that equips coding agents such as Claude Code with IDE-like capabilities for semantic code retrieval and editing. By operating at the symbol level and understanding the relational structure of code, Serena greatly improves token efficiency. Instead of reading entire files or relying on crude string replacements, coding agents can use precise Serena tools such as find_symbol, find_referencing_symbols and insert_after_symbol to locate and edit code. Although the impact is minimal on small projects, this efficiency is extremely valuable as the codebase grows.
-
83. SweetPad
The SweetPad extension enables developers to use VS Code or Cursor for the entire Swift application development lifecycle on Apple platforms. It eliminates the need to constantly switch to Xcode by integrating essential tools such as xcodebuild, xcode-build-server and swift-format. Developers can build, run and debug Swift applications for iOS, macOS and watchOS directly from their IDEs, while also managing simulators and deploying to devices without opening Xcode.
-
84. Tape/Z (Tools for Assembly Program Exploration for Z/OS)
Tape/Z (Tools for Assembly Program Exploration for Z/OS) is an evolving toolkit for analyzing mainframe HLASM (High-Level Assembler) code. Developed by a Thoughtworker, it provides capabilities such as parsing, control flow graph construction, dependency tracing and flowchart visualization. We’ve long noted the scarcity of open, community-driven tools in the mainframe space, where most options remain proprietary or tied to vendor ecosystems. Tape/Z helps close that gap by offering accessible, scriptable analysis capabilities. Alongside COBOL REKT — a companion toolkit for COBOL that we’ve also used multiple times with clients — it represents encouraging progress toward modern, developer-friendly tooling for mainframe systems.
Hold
Can't find what you're looking for? Each edition of the Technology Radar reflects our insights from the previous six months, so what you're searching for may have appeared in an earlier edition. Because there is always more we want to cover than we have room for, we sometimes have to drop items that haven't changed for a long time. The Radar is drawn from our subjective experience rather than a comprehensive market analysis, so you may not find the particular technology you care about most.