Platform Engineering · 12 min read · May 13, 2026

Golden paths that developers actually choose (without being forced to)

Most golden paths fail the moment a deadline hits. The path the team uses under pressure is the real path. Here is how to build one they choose instead of one they tolerate.


There is a signal that tells you a golden path has failed, and most platform teams never look for it.

Watch what happens when an engineering team is four days from a release and something goes wrong. Watch where they deploy from. Watch which CI template they use. Watch whether they open the internal developer portal or open a Slack thread asking someone who knows the workaround. That moment, when the deadline is real and the cost of doing things the wrong way feels lower than the cost of figuring out the right way, is the true measure of a golden path.

If the team leaves the path the moment pressure appears, the path was never adopted. It was tolerated.

This is compliance theater. The usage numbers look fine. The retrospectives surface no complaints. But the path the team trusted when it mattered was not the one the platform team built.

Why most golden paths fail adoption

There is a standard list of reasons. They are not wrong, but they are usually presented too abstractly to act on.

Built for the platform team, not the application teams. Golden paths tend to reflect the constraints that the platform team finds important: security scanning, cost tagging, audit logging, compliance gates. These are real requirements. They are also not what an application team thinks about at 11 pm before a launch. When a golden path is designed around platform team concerns rather than application team workflows, the result is a structure that is correct according to the people who built it and irritating according to the people who need to use it. The usual name for this is a golden cage: a path you cannot leave but would prefer not to be in.

Slower than the workaround when it counts. The 30th of the month, a live customer issue, a demo in three hours. These are the moments that reveal whether a golden path has been designed for normal conditions or real ones. If following the platform's preferred path requires three approvals, two documentation steps, and a pipeline that takes 18 minutes, while the workaround is a direct push that takes two, the workaround will win. Every time. Developers are not being reckless when they take the shortcut. They are responding rationally to the actual cost structure of the two options.

Does not reflect how the best teams work. Most golden paths are designed by asking platform teams what a developer should do. The more diagnostic question is: what do the teams with the highest deployment frequency and lowest change failure rate actually do? The answer is almost always different from the prescribed path. The teams that ship well have found the friction points and routed around them. A golden path that ignores what these teams discovered will not be adopted by the teams that are still learning, because the teams that know what they are doing will not validate it with their own use.

No adoption telemetry. The State of Platform Engineering Vol 4 (2026) found that 29.6% of engineering organizations measure nothing about their platform's actual use. Platform teams that cannot answer "how many deployments last month went through the canonical path versus a bypass" are flying without instruments. They are guessing whether adoption is real. They typically guess it is higher than it is.

The adoption threshold that matters

There is a concept I use called paved road compliance under pressure. It comes from the Cognitive Absorption pillar of the Foundations Framework, adapted from Agarwal and Karahanna's 2000 construct in MIS Quarterly. The idea is simple: a platform is absorbing complexity when developers route through it by default, not by mandate. The signal is what happens when conditions are adversarial.

Under normal conditions, a golden path can benefit from inertia. The developer opens the template because it is there. Under pressure, inertia reverses. The developer opens the tab that gets them to a working state fastest. The platform that wins in that moment is the one that was designed for it.

The gap between adoption under normal conditions and adoption under pressure is the real adoption gap. A golden path that shows 70% usage overall but 20% usage during high-pressure sprints is not a successful golden path. It is a path that developers use when they have time to follow documentation.
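As a rough illustration, the gap can be computed directly from deployment records. Here is a minimal sketch, assuming each deployment is tagged with whether it went through the canonical path and whether it shipped under pressure (an open incident, a freeze week); the record shape and field names are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    # Hypothetical record shape; real fields would come from the
    # deploy pipeline's audit log, under whatever names it uses.
    used_canonical_path: bool  # canonical pipeline vs. a bypass
    under_pressure: bool       # e.g. open incident, release-freeze week

def adoption_rate(deploys: list[Deployment]) -> float:
    """Share of deployments that went through the canonical path."""
    return sum(d.used_canonical_path for d in deploys) / len(deploys)

def pressure_adoption_gap(deploys: list[Deployment]) -> float:
    """Normal-conditions adoption minus under-pressure adoption.

    A large positive gap is the failure mode above: a path used
    only when there is time to follow documentation.
    """
    normal = [d for d in deploys if not d.under_pressure]
    pressured = [d for d in deploys if d.under_pressure]
    return adoption_rate(normal) - adoption_rate(pressured)

# 70% adoption under normal conditions, 20% under pressure:
deploys = (
    [Deployment(True, False)] * 7 + [Deployment(False, False)] * 3
    + [Deployment(True, True)] * 2 + [Deployment(False, True)] * 8
)
print(f"{pressure_adoption_gap(deploys):.0%}")  # 50%
```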

Designing for pressure, not for demo

The framing shift that changes how golden paths get built: design for the moment of maximum pressure, not for the moment of maximum time.

Concretely, this means the golden path must be the path of least resistance when an incident is open, not when a developer is onboarding on their first calm Monday morning. These are different design constraints. Onboarding optimization produces good documentation. Incident optimization produces fast defaults, clear error messages, and zero-click deployment paths.

The platform team should ask one question for every capability it ships: if a developer has 15 minutes and a live P1 incident, would they use this path or would they skip it? If the answer is skip it, the path is not ready to be called golden.

Skelton and Pais described paved roads in Team Topologies (2019) as the mechanism for reducing cognitive load on application teams. The reduction in cognitive load has to be real under operational stress, not only in low-stakes conditions. A path that adds cognitive load when load is already high is worse than no path at all, because the developer now has to decide whether to use it.

Measuring adoption that tells you something

Telemetry for golden path adoption is not complicated, but it requires deciding what to instrument before the path launches, not after.

Three measurements that give a useful picture (a sketch of the instrumentation follows the list):

Deployment source ratio. For every production deployment in a given period, classify whether it originated from the canonical pipeline or from a bypass. Include all bypasses, not just the ones that got flagged. A bypass that shipped without incident is still a bypass. Track this ratio weekly, not monthly. Monthly aggregates hide the pressure-driven spikes.

Template adoption in pull requests. Track how many pull requests use the repository template or scaffold the platform provides versus how many start from scratch or copy from another repository. A pull request that copies a non-standard structure is a signal that the template did not serve the use case. Aggregate by team and by quarter to see trend direction.

Time on path versus time off path. If the platform has observability into the deployment workflow, track how long canonical deployments take versus bypasses. If the canonical path is reliably faster, adoption follows without mandate. If bypasses are faster, the data tells you exactly where the design problem is.
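A minimal sketch of what this instrumentation can look like, assuming deployment events land in a queryable store. The event shape, field names, and helpers below are hypothetical; the two functions mirror the first and third measurements, while the second is a count best taken from the source forge's pull request metadata:

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import date

@dataclass
class DeployEvent:
    # Hypothetical event shape emitted by the deployment workflow.
    day: date
    source: str             # "canonical" or "bypass"
    duration_minutes: float

def weekly_source_ratio(events: list[DeployEvent]) -> dict[str, float]:
    """Deployment source ratio per ISO week.

    Weekly buckets, not monthly: monthly aggregates hide the
    pressure-driven spikes described above.
    """
    by_week: dict[str, list[DeployEvent]] = defaultdict(list)
    for e in events:
        year, week, _ = e.day.isocalendar()
        by_week[f"{year}-W{week:02d}"].append(e)
    return {
        week: sum(e.source == "canonical" for e in evs) / len(evs)
        for week, evs in sorted(by_week.items())
    }

def typical_duration(events: list[DeployEvent], source: str) -> float:
    """Crude (upper) median of deploy duration for one source.

    If bypasses come back faster than canonical deploys, this pair
    of numbers points straight at the design problem.
    """
    durations = sorted(e.duration_minutes for e in events if e.source == source)
    return durations[len(durations) // 2]

events = [
    DeployEvent(date(2026, 5, 4), "canonical", 18.0),
    DeployEvent(date(2026, 5, 5), "bypass", 2.0),
    DeployEvent(date(2026, 5, 6), "canonical", 17.5),
]
weekly_source_ratio(events)         # canonical share per week
typical_duration(events, "bypass")  # minutes spent off the path
```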

Stack Overflow's 2024-2025 developer survey found that 84% of developers use AI tools, with 51% using them daily. The platform teams that have built AI coding assistance into the golden path as a first-class capability, not an optional add-on, are seeing significantly higher adoption of the path overall. The path that integrates the tools developers already depend on daily is more useful than the path that treats AI assistance as external to platform concerns.

Rollout that does not rely on mandate

The instinct when a golden path has low adoption is to mandate it. Make it required. Add a gate. This works in the same way that any compliance program works: it produces compliance, not adoption. The developer follows the path and resents it, which means the platform team has no honest feedback signal and the developer has no genuine investment in making the path better.

A different approach has a higher long-term return.

Start with the team that complains most about lacking tools. This is counterintuitive. Most platform teams want to pilot with a team that is already well-organized and will make the path look good. The right choice is the opposite. The team that is loudest about tool gaps is the most motivated to test something new, and if the path solves their specific problems, they become the path's most credible advocates. An endorsement from the team that was previously the most frustrated carries more weight than an endorsement from the team that was already successful.

The first iteration of the path needs to pass one test: it must be faster than the workaround for the 90% case, not the 100% case. Platform teams that try to solve all edge cases in the first version ship late and ship something too complex to be fast. The 90% case is enough for the first iteration. The edge cases get absorbed in later iterations, informed by actual use.

Measure adoption week over week before rolling out to other teams. Not because the data will be perfect, but because it forces the platform team to be honest about whether the path is working before it becomes someone else's problem. The first rollout is a test. The second rollout is a deployment. The difference matters.

What changes when deployments come from AI agents

Golden paths in 2026 have a new constituency. The DORA 2025 report frames AI as an amplifier: AI tools make strong platforms stronger and weak platforms more visibly broken. The State of Platform Engineering Vol 4 (2026) data shows that mature platforms deliver 3.5 times the deployment frequency of immature ones. An AI coding agent operating in a mature platform context will produce usable artifacts at speed. The same agent operating against a platform without clear paths will produce deployment risk at speed.

The golden path built for human developers operating under pressure still applies to AI agents, but with a modified threat model. A human developer who is uncertain will pause and ask a question. An AI agent that is uncertain will often proceed with the closest available pattern, which may or may not be the canonical one. The consequence of an unclear or incomplete golden path is different when the actor does not have the same friction response a human does.

A golden path designed for AI agent use needs three properties that are not strictly required for human-only use. It needs schema-level documentation, not just prose documentation, because agents parse structure more reliably than narrative. It needs explicit error outputs when the path is violated, not just successful-path guidance, because agents need signals to self-correct. And it needs the quality gates (test coverage, security scanning, observability instrumentation) to be embedded in the path as defaults, not as optional steps, because an agent will not make the discretionary judgment call that a senior developer would make to add coverage before shipping.
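To make those properties concrete, here is a hedged sketch of a path whose gates are carried as data, embedded as defaults, and whose violations come back as structured, machine-parseable errors. None of this references a real platform API; every name is invented for illustration:

```python
import json
from dataclasses import dataclass
from typing import Callable

@dataclass
class Gate:
    # A quality gate embedded in the path as a default, not an optional step.
    name: str
    check: Callable[[dict], bool]  # inspects the deploy context, passes or fails
    remediation: str               # machine-readable hint so an agent can self-correct

# Hypothetical gates; a real path would wire these to actual tooling.
GATES = [
    Gate("test-coverage", lambda ctx: ctx.get("coverage", 0.0) >= 0.80,
         "raise line coverage to at least 0.80 and retry"),
    Gate("security-scan", lambda ctx: ctx.get("critical_vulns", 1) == 0,
         "resolve the critical findings in the scan report and retry"),
]

def run_path(ctx: dict) -> dict:
    """Run every gate and emit explicit, structured output either way."""
    failures = [
        {"gate": g.name, "remediation": g.remediation}
        for g in GATES
        if not g.check(ctx)
    ]
    if failures:
        return {"status": "blocked", "failures": failures}
    return {"status": "proceed", "failures": []}

print(json.dumps(run_path({"coverage": 0.62, "critical_vulns": 0}), indent=2))
```

The output is the point: an agent that receives a blocked status with a named gate and a remediation hint has a signal it can act on, where prose documentation gives it nothing to parse.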

This last point is load-bearing. The METR 2025 study found that senior developers using AI in unfamiliar codebases were 19% slower than baseline. Part of that slowdown is the cognitive cost of evaluating AI output. A golden path that embeds quality gates removes the evaluation cost on both sides: the human reviewer does not need to catch what the agent missed, and the agent does not need to decide whether the gate applies. The gate runs. The path continues or stops.

The platform team's job in 2026 is to build paths that work correctly when the actor is a human under deadline pressure and when the actor is an AI agent under no deadline pressure but with no judgment about what corners are safe to cut.

The Foundations Framework and where golden paths get built

Within the Foundations Framework, golden paths are not a phase zero activity. They are built in the Forge phase, after the Horizon phase has produced a clear picture of which paths teams actually need.

This sequencing matters because most golden paths that fail were built before anyone confirmed what problem they were solving. The Horizon phase exists to surface the real workflow patterns, not the stated ones. What teams say they do and what telemetry shows they do are often different. A golden path built against stated workflows will be designed for how teams think they work. A golden path built against observed workflows will be designed for how teams actually work under pressure.

The Forge phase is where the path gets built, measured from day one, and iterated based on adoption signals. The path is not done when it launches. It is done when the paved road compliance under pressure ratio shows that teams are routing through it when it matters.

The Horizon phase is where you find out which paths are worth building.


If you want to understand which paths your teams are actually using versus which ones they were supposed to use, the Foundations Assessment is the starting point. The first finding in most assessments is a gap between the platform the team built and the platform the teams use. That gap is measurable, and it is fixable.


Tags: golden-paths · idp · platform-engineering · developer-experience · foundations-framework


Matías Caniglia

Founder of Clouditive. 18+ years transforming engineering organizations across LATAM and globally through Developer Experience consulting.

