
Architecture drift reduction with LLMs

Published: Apr 15, 2026
Ring: Assess

Increased use of AI coding agents can accelerate drift away from a codebase's intended architecture and design. Left unchecked, this drift compounds as agents and humans replicate existing patterns, including degraded ones, creating a feedback loop where poor code begets poorer code. Some of our teams are now addressing architecture drift reduction with LLMs.

This approach combines deterministic analysis tools (such as Spectral, ArchUnit or Spring Modulith) with LLM-powered evaluation to detect both structural and semantic violations. LLMs are then used to help fix these issues. Our teams have applied this to enforce API quality guidelines across services and to define architectural zones that guide agent-generated improvements.
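A minimal sketch of this two-part approach, pairing a deterministic structural rule (in the spirit of ArchUnit's layer checks) with a semantic pass where an LLM would sit. All names here (`check_layer_rule`, `semantic_review`, the layer names) are illustrative, not taken from any specific tool.

```python
# Intended layering: lower layers must not import higher ones.
LAYER_ORDER = ["domain", "application", "api"]  # "domain" is the lowest layer

def check_layer_rule(imports: dict[str, set[str]]) -> list[str]:
    """Deterministic check: flag modules that depend on a higher layer."""
    rank = {layer: i for i, layer in enumerate(LAYER_ORDER)}
    violations = []
    for module, deps in imports.items():
        for dep in deps:
            if rank[dep] > rank[module]:
                violations.append(f"{module} -> {dep}: lower layer depends on higher layer")
    return violations

def semantic_review(code_snippet: str) -> list[str]:
    """Placeholder for the LLM pass; a real implementation would prompt a
    model with the architecture guidelines as context. The string match
    below is a toy heuristic standing in for the model's judgment."""
    findings = []
    if "SELECT " in code_snippet:
        findings.append("raw SQL in a layer that should go through a repository")
    return findings

import_map = {
    "domain": {"api"},          # structural violation: domain reaches up to api
    "application": {"domain"},  # fine
    "api": {"application"},     # fine
}
print(check_layer_rule(import_map))
print(semantic_review('rows = db.execute("SELECT * FROM users")'))
```

The split matters: structural rules are cheap, deterministic and suited to CI gating, while the LLM pass covers semantic violations (naming, responsibility placement, guideline intent) that no static rule can express.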

A few lessons learned: as with traditional linting, initial scans can surface a large number of violations that require triage and prioritization, and LLMs can assist with that process. Keeping agent-produced fixes small and focused makes review easier, and an additional verification loop is essential to confirm changes improve the system rather than introduce regressions.
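The verification loop above can be sketched as an accept/reject gate: a fix is kept only if re-running the checks shows strictly fewer violations and no new ones. `run_checks` and the sample fixes below are illustrative stand-ins, not a real tool's API.

```python
def run_checks(codebase: dict[str, str]) -> set[str]:
    """Stand-in for the combined deterministic + LLM checks.
    Returns a set of violation identifiers for the given files."""
    violations = set()
    for path, code in codebase.items():
        if "import api" in code:
            violations.add(f"{path}: layering")
        if "TODO" in code:
            violations.add(f"{path}: unfinished")
    return violations

def accept_fix(before: dict[str, str], after: dict[str, str]) -> bool:
    """Accept an agent-produced fix only if it removes violations
    without introducing any that were not there before."""
    old, new = run_checks(before), run_checks(after)
    no_regressions = new <= old     # every remaining violation pre-existed
    improved = len(new) < len(old)  # and the total strictly decreased
    return no_regressions and improved

codebase = {"domain/model.py": "import api\nx = 1"}
good_fix = {"domain/model.py": "x = 1"}                  # removes the violation
bad_fix = {"domain/model.py": "# TODO rework\nx = 1"}    # swaps it for a new one

print(accept_fix(codebase, good_fix))
print(accept_fix(codebase, bad_fix))
```

Comparing violation sets rather than raw counts is the key design choice: a fix that trades one violation for a different one is rejected even though the total stays the same.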

This technique extends the idea of feedback sensors for coding agents into the later stages of the delivery lifecycle. As one team at OpenAI describes it, drift reduction acts as a form of "garbage collection," reflecting the reality that entropy and decay emerge even in systems with strong early feedback loops.
