Term FF-005
Decision quality preservation
AI accelerates decisions. Most teams stop checking whether the decisions are still right.
Decision quality preservation is the practice of tracking whether technical decisions made with AI assistance hold up over time. Measured by decision rework rate, incident pattern shift, and senior engineer review time after AI adoption.
What it is
The problem with faster decisions
AI coding assistants produce code faster than humans. That is their primary value proposition, and it is well documented. What is less documented is the effect of that speed on the quality of the underlying technical decisions the code represents.
When code is written slowly, architecture decisions and implementation choices are made deliberately. A developer who spends three days implementing a feature has three days to reconsider the approach. A developer who generates the same feature in three hours with an AI assistant may not pause to evaluate the implementation pattern with the same care.
The issue compounds over time. If a team adopts AI tools and its decision rework rate increases, the productivity gains from faster initial generation are partially or fully offset by the cost of reworking incorrect decisions. That offset does not appear in throughput metrics. It appears in rework, in incident root causes, and in the growing proportion of senior time spent reviewing and fixing rather than building.
Decision quality preservation is the metric that makes this offset visible. It does not argue against AI adoption. It argues for maintaining deliberate evaluation practices alongside AI-assisted generation.
How it is measured
Three signals that surface decision quality over time
Decision rework rate
The percentage of architecture and implementation decisions that are reversed or substantially revised within 90 days of being made. A rising rework rate after AI adoption suggests that decision velocity has outpaced decision quality. Measured by tracking Architecture Decision Records (ADRs) and change management logs.
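A minimal sketch of how this rate could be computed from an exported decision log, assuming each record carries the date the decision was made and, where applicable, the date it was reversed or substantially revised. The record fields and the 90-day window parameter are illustrative, not a standard ADR schema.

```python
# Hypothetical sketch: decision rework rate from an exported ADR / change log.
# Field names (decided_on, reworked_on) are illustrative, not a standard schema.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class DecisionRecord:
    decision_id: str
    decided_on: date
    reworked_on: Optional[date] = None  # date the decision was reversed or substantially revised

def decision_rework_rate(records: list[DecisionRecord], window_days: int = 90) -> float:
    """Percentage of decisions reworked within window_days of being made."""
    if not records:
        return 0.0
    reworked = sum(
        1
        for r in records
        if r.reworked_on is not None and (r.reworked_on - r.decided_on).days <= window_days
    )
    return 100.0 * reworked / len(records)

# Two of these four decisions were reworked within 90 days -> 50.0
records = [
    DecisionRecord("ADR-012", date(2026, 1, 10), reworked_on=date(2026, 2, 20)),
    DecisionRecord("ADR-013", date(2026, 1, 15)),
    DecisionRecord("ADR-014", date(2026, 2, 1), reworked_on=date(2026, 3, 5)),
    DecisionRecord("ADR-015", date(2026, 2, 10)),
]
print(decision_rework_rate(records))
```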
Incident pattern shift
The change in the distribution of incident root causes after AI adoption. If design and logic errors increase as a proportion of total incidents while infrastructure incidents remain stable, decision quality has likely declined. Measured by classifying incidents by root cause category and tracking the distribution over time.
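One way to track that distribution shift, sketched below under the assumption that incidents are already tagged with a root cause category. The category labels and the two comparison periods are illustrative, not a prescribed taxonomy.

```python
# Hypothetical sketch: shift in the root cause distribution between two periods.
# Category labels are illustrative; any consistent taxonomy works.
from collections import Counter

def root_cause_distribution(incidents: list[str]) -> dict[str, float]:
    """Each category's share of total incidents, as a fraction."""
    counts = Counter(incidents)
    total = sum(counts.values())
    return {cause: count / total for cause, count in counts.items()}

def pattern_shift(before: list[str], after: list[str]) -> dict[str, float]:
    """Change in each category's share between periods, in percentage points."""
    dist_before = root_cause_distribution(before)
    dist_after = root_cause_distribution(after)
    return {
        cause: 100.0 * (dist_after.get(cause, 0.0) - dist_before.get(cause, 0.0))
        for cause in set(dist_before) | set(dist_after)
    }

# Design and logic errors rise from 25% to 50% of incidents while the
# infrastructure incident count stays flat: the signal described above.
before = ["infrastructure", "infrastructure", "design", "dependency"]
after = ["infrastructure", "infrastructure", "design", "design", "design", "dependency"]
print(pattern_shift(before, after))
```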
Senior engineer review time
The change in the proportion of senior engineer time spent on code and architecture review versus original creation after AI adoption. If senior engineers shift toward spending more time cleaning up AI-generated decisions, decision quality preservation is failing. Measured through time-tracking correlations and Git contribution analysis.
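A rough sketch of one way to approximate that shift from Git and code review activity, assuming per-engineer review and commit counts can be pulled from the tooling. Using review count versus commit count as a proxy for time is an assumption; real time-tracking data would refine it.

```python
# Hypothetical sketch: senior review share from Git and code review activity.
# Review count vs. commit count is a proxy for time, not a measurement of it.
from dataclasses import dataclass

@dataclass
class SeniorActivity:
    reviews: int  # pull request reviews performed in the period
    commits: int  # commits authored in the period

def review_share(activities: list[SeniorActivity]) -> float:
    """Proportion of senior activity spent reviewing rather than creating."""
    reviews = sum(a.reviews for a in activities)
    commits = sum(a.commits for a in activities)
    total = reviews + commits
    return reviews / total if total else 0.0

# A review share climbing from 40% to 64% after AI adoption is the failure signal.
before = [SeniorActivity(reviews=20, commits=30), SeniorActivity(reviews=20, commits=30)]
after = [SeniorActivity(reviews=32, commits=18), SeniorActivity(reviews=32, commits=18)]
print(review_share(before), review_share(after))  # 0.4 0.64
```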
Why it matters
Productivity theater versus sustainable throughput
The risk of measuring AI productivity by velocity alone is significant. A team that ships twice as many features per week while doubling its rework rate is not twice as productive. It is generating debt at twice the speed.
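A back-of-the-envelope illustration of that claim, under the simplifying assumption that a feature whose underlying decision is later reworked contributes nothing to net throughput. The numbers are invented for illustration, not benchmarks.

```python
# Illustrative arithmetic only: assumes a feature whose underlying decision is
# later reworked contributes nothing to net throughput. Numbers are invented.
def effective_throughput(features_shipped: float, rework_rate: float) -> float:
    """Features per week whose decisions actually hold up."""
    return features_shipped * (1.0 - rework_rate)

before = effective_throughput(features_shipped=10, rework_rate=0.10)  # 9.0
after = effective_throughput(features_shipped=20, rework_rate=0.20)   # 16.0
print(after / before)  # ~1.78x real gain, not the 2x that raw velocity suggests
```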
The Larridin Developer Productivity Benchmarks 2026 found that AI helps low-performing teams four times more than high-performing teams on velocity metrics. That finding is consistent with the DORA AI mirror effect: low-performing teams have more low-hanging fruit for AI to address. But if those teams do not also instrument decision quality, their productivity metrics will overstate their improvement.
Decision quality preservation is the metric that prevents organizations from declaring AI adoption a success while accumulating the technical debt that will surface in the next incident spike. It is, in Foundations Framework terms, the application of Principle 03: measurement must outpace the tools.
How Clouditive uses it
The fourth AI signal in every Foundations engagement
Decision quality preservation is the fourth of four AI metrics Clouditive instruments on every Foundations engagement. It is also the metric most often missing when Clouditive arrives: teams instrument throughput, sometimes quality, rarely rework, and almost never the senior review time shift.
During the Foundations Assessment, the team interviews senior engineers specifically about whether their daily work distribution has shifted since AI adoption. The questions are direct: What percentage of your time is now spent reviewing AI-generated code versus writing original code? Has that changed in the last six months? Are you finding yourself reversing implementation decisions that a junior or mid-level engineer made with AI assistance?
The answers to those questions, combined with incident root cause data and ADR revision history, produce the decision quality preservation baseline that every subsequent engagement is measured against.
See all four AI signals
The AI metrics page covers all four signals: throughput quality coupling, cognitive offload, AI agent observability, and decision quality preservation.
Read the AI metrics framework
Instrument decision quality on your platform
The Foundations Assessment establishes the decision quality baseline before it becomes a liability.
Four to six weeks. Maturity radar. DORA baseline. AI readiness score. 90-day roadmap.