
Building your AI future
on responsible foundations

As the rapid proliferation of AI raises questions around the ethical use of technology and the stakes of governance lapses, responsible tech is shifting from aspiration to operational discipline. In 2026 and beyond, organizations will embed responsibility into technology strategy, architecture and delivery — covering safety, privacy, security, environmental impact, accessibility and social outcomes — not as an afterthought, but as the foundational layer that makes AI systems dependable at scale.

 

Responsible tech anticipates and manages technology’s risks and externalities while maximizing positive impact. In the AI era this spans several connected disciplines: 

 

  • Policy-aware design and engineering. Privacy-by-design, safety cases, threat modeling, red-teaming, evals and guardrails for generative and agentic AI (see the guardrail sketch after this list).
  • Computational governance. Policy-as-code, automated controls, audit trails and human-in-the-loop escalation. 
  • Accountability and transparency. Impact assessments, provenance, lineage, model cards, incident reporting and user rights. 
  • Sustainability and equity. Resource measurement, inclusive design and community-level harm analysis.
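
To make the 'evals and guardrails' discipline concrete, here is a minimal sketch of an output guardrail that screens a generative AI response before it is released. The checks, blocked terms and the GuardrailResult type are illustrative assumptions, not a reference to any particular product or framework.

```python
# Minimal, illustrative output guardrail for a generative AI response.
# The checks, names and thresholds are assumptions for this sketch only.
import re
from dataclasses import dataclass, field

@dataclass
class GuardrailResult:
    allowed: bool
    violations: list = field(default_factory=list)

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
BLOCKED_TERMS = {"patient identifier", "social security"}  # placeholder policy list

def check_output(text: str, max_chars: int = 4000) -> GuardrailResult:
    """Run simple pre-release checks on a model response."""
    violations = []
    if EMAIL_RE.search(text):
        violations.append("possible email address (PII) in output")
    lowered = text.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            violations.append(f"blocked term present: {term!r}")
    if len(text) > max_chars:
        violations.append("response exceeds length limit")
    return GuardrailResult(allowed=not violations, violations=violations)

if __name__ == "__main__":
    result = check_output("Contact jane.doe@example.com for the raw assay data.")
    print(result.allowed, result.violations)
```

In practice, checks like these sit alongside richer evals (groundedness, toxicity, fairness) and run both offline against test suites and online at the point of response.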

 

As global and local rules advance, the organizations that succeed will treat responsible tech as governance by design — a built-in capability that underpins every AI-enabled workflow. It becomes the base on which enterprises can adeptly rebuild core systems, rewire for agents and reimagine value through adaptive, intelligent products.

 

Key trends  

 

  • Computational governance platforms: Organizations will codify policy (privacy, safety and fairness) as machine-enforced controls integrated with CI/CD, data platforms and runtime. Expect automated DPIAs/AIAs, exception workflows and continuous controls monitoring.
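
As an illustration of what 'policy as machine-enforced controls' can look like, the sketch below shows a small policy gate that could run in a CI/CD pipeline before a model release. The manifest fields, thresholds and regions are assumptions invented for the example, not a standard schema.

```python
# Illustrative policy-as-code gate for a CI/CD pipeline.
# Manifest fields and thresholds are assumptions, not a standard schema.
import sys

POLICY = {
    "require_dpia": True,            # data protection impact assessment completed
    "min_eval_accuracy": 0.85,       # minimum offline eval score before release
    "allowed_regions": {"eu-west-1", "eu-central-1"},
}

def evaluate(manifest: dict) -> list:
    """Return a list of policy violations for a model release manifest."""
    violations = []
    if POLICY["require_dpia"] and not manifest.get("dpia_completed", False):
        violations.append("DPIA not completed")
    if manifest.get("eval_accuracy", 0.0) < POLICY["min_eval_accuracy"]:
        violations.append("eval accuracy below policy threshold")
    if manifest.get("deployment_region") not in POLICY["allowed_regions"]:
        violations.append("deployment region not permitted")
    return violations

if __name__ == "__main__":
    release = {"dpia_completed": True, "eval_accuracy": 0.91, "deployment_region": "us-east-1"}
    problems = evaluate(release)
    for p in problems:
        print("POLICY VIOLATION:", p)
    sys.exit(1 if problems else 0)   # non-zero exit blocks the pipeline
```

A real platform would source rules like these from a central policy repository and log every decision to support continuous controls monitoring.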

 

  • Assurance as a product: Evidence packs — traceable datasets, lineage, eval results, risk registers and audit trails — ship with systems. 'Assurance SLOs' become part of service contracts for buyers, regulators and insurers.
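
To show what an evidence pack might contain, here is a sketch that assembles one and hashes it for a tamper-evident audit trail. The field names and example values are illustrative; real packs would follow whatever schema buyers, regulators or insurers agree on.

```python
# Sketch of assembling an 'evidence pack' that ships alongside a model release.
# Field names and example values are illustrative only.
import hashlib
import json
from datetime import datetime, timezone

def build_evidence_pack(model_id: str, dataset_lineage: list,
                        eval_results: dict, risk_register_ref: str) -> dict:
    pack = {
        "model_id": model_id,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "dataset_lineage": dataset_lineage,      # upstream sources and transformations
        "eval_results": eval_results,            # e.g. safety and quality eval scores
        "risk_register_ref": risk_register_ref,  # link to the tracked risks for this system
    }
    # Content hash gives a tamper-evident identifier for the audit trail.
    pack["content_sha256"] = hashlib.sha256(
        json.dumps(pack, sort_keys=True).encode()).hexdigest()
    return pack

if __name__ == "__main__":
    print(json.dumps(build_evidence_pack(
        model_id="claims-triage-v3",
        dataset_lineage=["warehouse.claims_2024", "dedup + anonymize step"],
        eval_results={"toxicity": 0.01, "grounding": 0.94},
        risk_register_ref="RISK-1182",
    ), indent=2))
```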

 

  • Human oversight 2.0: Shift from generic 'human-in-the-loop' to role-specific oversight (safety operator, red-team lead, ethics steward), with well-established escalation trees and shutdown protocols for autonomous agents and decision systems.
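
A minimal sketch of a role-specific escalation tree with a shutdown protocol is shown below; the roles, severities and shutdown hook are placeholders for illustration.

```python
# Illustrative role-specific escalation and shutdown protocol for an autonomous agent.
# Roles, severities and the shutdown hook are placeholders.
from enum import Enum

class Severity(Enum):
    INFO = 1
    ELEVATED = 2
    CRITICAL = 3

ESCALATION_TREE = {
    Severity.INFO: "safety_operator",      # routine review queue
    Severity.ELEVATED: "red_team_lead",    # suspicious or novel behavior
    Severity.CRITICAL: "ethics_steward",   # potential harm; triggers shutdown
}

class Agent:
    def __init__(self):
        self.running = True

    def shutdown(self, reason: str):
        self.running = False
        print(f"agent halted: {reason}")

def escalate(agent: Agent, severity: Severity, event: str):
    """Route an event to the responsible human role; halt the agent on critical events."""
    owner = ESCALATION_TREE[severity]
    print(f"notify {owner}: {event}")
    if severity is Severity.CRITICAL:
        agent.shutdown(reason=event)

if __name__ == "__main__":
    agent = Agent()
    escalate(agent, Severity.ELEVATED, "agent attempted an out-of-scope API call")
    escalate(agent, Severity.CRITICAL, "agent tried to modify production data")
    print("agent still running:", agent.running)
```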

 

  • AI supply-chain transparency: Model provenance, training-data disclosure windows, third-party component attestations and AI software bills of materials (SBOMs) become table stakes to counter hidden dependencies and license/privacy risks.
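
The sketch below shows a minimal AI bill of materials expressed as plain JSON. The structure loosely mirrors software SBOM ideas; the fields and entries are assumptions for illustration rather than a formal standard.

```python
# Minimal, illustrative AI bill of materials (AI SBOM) for supply-chain transparency.
# Field names and entries are assumptions, not a formal standard.
import json

ai_sbom = {
    "system": "support-copilot",
    "version": "2.4.0",
    "models": [
        {
            "name": "base-llm",                      # third-party foundation model
            "provider": "example-vendor",
            "license": "proprietary",
            "provenance": "vendor attestation, 2025-11",
        },
        {
            "name": "intent-classifier",
            "provider": "in-house",
            "training_data": ["crm_tickets_2023_2024 (anonymized)"],
            "license": "internal",
        },
    ],
    "datasets": [
        {"name": "faq_corpus", "source": "public product docs", "contains_pii": False},
    ],
    "third_party_components": ["vector-store 1.8", "guardrails-lib 0.9"],
}

if __name__ == "__main__":
    print(json.dumps(ai_sbom, indent=2))
```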

 

  • Rights-respecting interfaces: UX patterns that operationalize user rights — notice, explanation, contestability, portability — become standardized across sectors and geographies.
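
One way to operationalize those rights in software is a single entry point that handles notice, explanation, contestability and portability requests. The sketch below is hypothetical: the request types, handlers and record shapes are invented for illustration.

```python
# Hypothetical sketch of a user-rights request handler covering notice,
# explanation, contestability and portability. Record shapes are illustrative only.
import json

def handle_rights_request(request_type: str, user_id: str, decision_id: str = None) -> dict:
    if request_type == "notice":
        return {"user": user_id, "automated_decisions_in_use": ["credit_limit", "fraud_score"]}
    if request_type == "explanation":
        # A real system would pull the logged inputs and model factors for this decision.
        return {"decision": decision_id, "top_factors": ["payment_history", "utilization"]}
    if request_type == "contest":
        return {"decision": decision_id, "status": "queued_for_human_review"}
    if request_type == "export":
        return {"user": user_id, "format": "json", "status": "export_started"}
    raise ValueError(f"unsupported rights request: {request_type}")

if __name__ == "__main__":
    print(json.dumps(handle_rights_request("explanation", "u-123", "d-987"), indent=2))
```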

 

  • Safety cases for agentic systems: Formal safety cases move beyond aviation/medical into AI: structured arguments with supporting evidence for goal-directed agents operating in open-ended environments (e.g., robotics, autonomous operations).
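
A safety case can also be made machine-readable so gaps are easy to spot. The sketch below models a structured argument of claims and supporting evidence; the claims and evidence references are invented for the example.

```python
# Illustrative machine-readable safety case: claims, sub-claims and evidence.
# The claims and evidence references are invented for this sketch.
from dataclasses import dataclass, field

@dataclass
class Claim:
    statement: str
    evidence: list = field(default_factory=list)     # links to tests, evals, reviews
    children: list = field(default_factory=list)     # sub-claims refining this claim

    def unsupported(self):
        """Return leaf claims that have no evidence attached."""
        if not self.children:
            return [] if self.evidence else [self.statement]
        return [c for child in self.children for c in child.unsupported()]

safety_case = Claim(
    statement="The warehouse agent operates safely within its defined envelope",
    children=[
        Claim("Agent cannot act outside the approved tool list",
              evidence=["tool-allowlist test suite"]),
        Claim("Critical actions require human confirmation",
              evidence=["oversight eval run 42"]),
        Claim("Agent degrades gracefully when sensors fail"),   # no evidence yet
    ],
)

if __name__ == "__main__":
    print("claims still needing evidence:", safety_case.unsupported())
```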

 

  • Harmonized global assurance via standards: Convergence of standards such as ISO/IEC 42001 and 23894 and the NIST AI Risk Management Framework enables cross-jurisdictional recognition of controls, reducing the audit burden and enabling portable assurance.

 

  • Socio-technical impact markets: Emergence of impact registries and marketplaces for verified positive outcomes (e.g., accessibility uplift, emissions avoided), creating incentives aligned to responsibility metrics.

 

Signals of this shift include:

 

  • The rise of comprehensive, cross-jurisdictional regulation such as the EU AI Act, which has entered into force with phased obligations. These include bans on unacceptable-risk systems and obligations for providers of general-purpose AI models (GPAI) in 2025, as well as stricter rules around the use of AI in high-risk areas like biometrics and law enforcement, originally slated for 2026 but likely to be delayed until 2027.
 
  • The first legally binding international treaty on AI governance, the Council of Europe's Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, continues to gain momentum, with Canada and Uruguay among the recent signatories.
 
  • AI rules emerging at the national and state/local level, though they remain in flux in major economies like the US, where the Biden administration's 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence has been rescinded and followed by new federal directives. Some states are taking matters into their own hands, with California recently passing a law mandating transparency in advanced AI systems, while implementation of Colorado's landmark AI Act has been pushed back to mid-2026.

 

  • Momentum behind universal standards. ISO/IEC 42001, which sets requirements for the management of AI systems based on ethical considerations, transparency and continuous improvement, and ISO/IEC 23894, which provides guidance on integrating risk management into AI-related systems and activities, are examples of the common scaffolding emerging for AI compliance and assurance.
 
  • Rising corporate appetite and readiness. In the business sphere, adoption of AI evals/guardrails, incident response playbooks and SBOMs for AI supply chains is rising rapidly, as are expectations for transparency and whistleblower protections. More companies are adopting AI codes of conduct and scaling up responsible AI investment plans.

 

The opportunities 

 

By getting ahead of the curve on this lens, organizations can: 

Reduce current and future regulatory exposure
With Gartner predicting that by 2026 around half of governments worldwide will mandate the use of responsible AI through a combination of rules, policies and data privacy requirements, organizations that embrace rigorous governance standards early will be better positioned than those that wait.

Reinforce customer and stakeholder trust
As the use of AI in areas like marketing and customer service grows, research points to an emerging split: roughly equal proportions of consumers are growing more and less confident that AI tools will handle their data responsibly. Customers are also clear that responsible data and AI practices are an important consideration in their purchasing decisions. This points to a critical juncture where businesses able to convince consumers that they stand on the right side of privacy and ethical lines will gain market share.

Boost resilience
AI introduces numerous new threats and vulnerabilities, from more sophisticated phishing attempts to 'back door' breaches via third-party vendors and weaknesses in automated code. Clearly defined policies and practices around data protection and the use of AI systems can reduce the likelihood of major incidents and limit the fallout when they do occur. In one recent poll, over half of business leaders reported that responsible AI had improved their cybersecurity and data protection capabilities.

Deliver innovation faster, and with greater confidence
Embedding responsibility as a design input and platform capability, rather than a project phase or box-ticking exercise, means products need fewer late-stage compliance checks and fixes and are more robust when they reach the market.


What we've done

Thoughtworks partnered with a leading pharmaceutical company on a generative AI chatbot designed for the high-stakes world of preclinical drug discovery. Beyond technical innovation, the project was engineered for responsible use in a scientific setting where trust is non-negotiable. The chatbot combines retrieval-augmented generation and text-to-SQL within an intelligent multi-agent system to unlock insights from large volumes of structured and unstructured preclinical data.
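
The actual system is not reproduced here, but the sketch below illustrates the routing idea behind such a multi-agent design: quantitative questions go to a text-to-SQL agent over structured study data, while open questions go to a retrieval (RAG) agent over study reports. Agent names, queries and heuristics are hypothetical.

```python
# Illustrative-only sketch of routing between a RAG agent and a text-to-SQL agent.
# It does not represent the actual system described above; names are hypothetical.
def route_question(question: str) -> str:
    """Rough heuristic router; a production system would use an LLM-based planner."""
    quantitative = any(k in question.lower() for k in ("how many", "average", "count", "trend"))
    return "text_to_sql_agent" if quantitative else "rag_agent"

def answer(question: str) -> dict:
    agent = route_question(question)
    if agent == "text_to_sql_agent":
        # Generate and run SQL against structured study results (placeholder output).
        return {"agent": agent, "sql": "SELECT COUNT(*) FROM studies WHERE compound = 'X'"}
    # Otherwise retrieve relevant passages from unstructured study reports (placeholder output).
    return {"agent": agent, "retrieved_docs": ["study-0042.pdf p.12", "study-0118.pdf p.3"]}

if __name__ == "__main__":
    print(answer("How many studies tested compound X above 10 mg/kg?"))
    print(answer("Summarize the toxicity findings for compound X."))
```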

 

Crucially, the solution prioritizes explainability and reliability through robust engineering practices such as granular citations, error handling, state persistence and LLM fallbacks. These safeguards help researchers understand how answers are formed, trace claims back to source material, and reduce the risk of hallucination-driven decision making.
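
Two of those safeguards, granular citations and LLM fallbacks, can be sketched in a few lines. The model-calling functions below are placeholders that stand in for whatever models the system uses; no vendor API or actual project code is implied.

```python
# Sketch of grounding an answer with per-passage citations and falling back
# to a secondary model on failure. Model functions are placeholders only.
def primary_model(prompt: str) -> str:
    raise TimeoutError("primary model unavailable")   # simulate a failure for the example

def fallback_model(prompt: str) -> str:
    return "Compound X showed no adverse findings at 10 mg/kg [1]."

def answer_with_citations(question: str, passages: list) -> dict:
    """Answer from retrieved passages only, keeping a citation per passage."""
    context = "\n".join(f"[{i + 1}] {p['text']}" for i, p in enumerate(passages))
    prompt = f"Answer using only the numbered context.\n{context}\nQ: {question}"
    try:
        text, model_used = primary_model(prompt), "primary"
    except Exception:
        text, model_used = fallback_model(prompt), "fallback"   # graceful degradation
    citations = [{"marker": f"[{i + 1}]", "source": p["source"]} for i, p in enumerate(passages)]
    return {"answer": text, "citations": citations, "model": model_used}

if __name__ == "__main__":
    docs = [{"text": "No adverse findings at 10 mg/kg.", "source": "study-0042.pdf, p.12"}]
    print(answer_with_citations("What did study 42 find at 10 mg/kg?", docs))
```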

 

By enabling fast search, synthesis and report generation across thousands of historical study pages, the chatbot improves speed and accuracy while reducing costly rework. 

Actionable advice 

 

Things to do (Adopt) 

 

  • Develop, articulate and model actionable AI policies. These should include guidelines, clear even to a non-technical audience, on when and how it's appropriate to use GenAI tools; mandatory steps to establish a baseline of data security; and guidance ensuring AI usage preserves data privacy and intellectual property rights. Policies should be accompanied by regular training so employees stay familiar with shifting AI risks.
 
  • Enhance oversight by strengthening leadership AI literacy. Senior executives and the board should be kept abreast of developments, threats and opportunities in the AI space and how these could impact their functions and responsibilities, as well as the organization's overall strategy. Ensuring leaders have access to a steady stream of insights and expertise will help them make the right decisions as AI integration progresses throughout the business.
 
  • Factor AI risks into third-party relationships, as well as the contracts that govern them. Current and potential vendors should be assessed on a regular basis to ensure their technology practices align with the enterprise’s ethical and governance standards.  

 

Things to consider (Analyze) 

 

  • Benchmarking against the best emerging standards. While some of the rules and standards emerging globally will not be strictly mandatory or may face delays in implementation, they can still be used as a reference point to refine the organization's approach to responsible technology and to claim competitive advantage by exceeding customer and regulatory expectations.

 

  • Making responsible tech a platform feature. Structures like data mesh are making it more feasible to deploy policies and governance across systems. Tools are emerging that will help organizations reflect responsible tech policies in machine-enforced controls that are seamlessly integrated into existing platforms, reducing reliance on manual compliance work.

 

Things to watch for (Anticipate)

 

  • Responsible tech operating as an ‘always-on’ assurance fabric. By 2030 responsible tech will be part of the way systems function, showing up in continuous risk sensing, automated controls, periodic third‑party attestations, and transparent user rights spanning data, models, and agents.  

 

  • More proactive, AI-enabled risk management. Business leaders will be able to use increasingly sophisticated policy simulators and behavioral sandboxes to test the socio-technical impacts of solutions before deployment. Tools like these will also support the emergence of impact registries and marketplaces for verified positive outcomes, much like those around carbon emissions today, creating incentives aligned to responsible tech metrics.

 

Read Looking Glass 2026 in full