Platform Engineering · 10 min read · May 13, 2026

Cognitive absorption: the platform metric nobody measures

How much cognitive load is your platform absorbing for your developers, and how do you know? Most engineering metric frameworks cannot answer that question. This post explains why, and what to instrument instead.

Ask a VP of Engineering how much cognitive load their platform is absorbing for their teams, and they will usually pause for a long time. Then they will say something like, "the developer survey came back at a seven out of ten." That number tells you very little. A survey score is not a measurement of absorption. It is a measurement of how developers felt when they filled in the form.

The questions I ask when I start a Foundations Assessment are more specific. How many tools does a developer touch to complete a production deployment, and how do you know? What percentage of your teams used the golden path during your last major deadline, and where did the rest go? How long after a developer opens their laptop in the morning do they produce their first commit that actually ships something? Almost nobody has numbers for these. Almost nobody has even asked.

This is not a developer experience problem. It is a measurement architecture problem. The frameworks engineering organizations rely on for their metrics do not capture what platforms absorb on behalf of developers. They capture what developers produce. That gap is where most platform investments disappear without trace.

What cognitive absorption is, and where it comes from

Cognitive Absorption has an academic origin that platform engineering has largely ignored. In 2000, Ritu Agarwal and Elena Karahanna published a paper in MIS Quarterly defining it as a state of deep involvement with software. The paper described five dimensions: temporal dissociation (you lose track of time), focused immersion (external distractions stop registering), heightened enjoyment (the work itself is satisfying), control (you feel capable of navigating the system), and curiosity (you want to explore further). Source: Agarwal, R., and Karahanna, E. (2000). Time flies when you're having fun: Cognitive absorption and beliefs about information technology usage. MIS Quarterly, 24(4), 665-694.

The research was about software users, not platform engineers. It described the state that software could create or prevent. For twenty-six years the concept stayed inside information systems research.

The application to platform engineering is an extension I made when building the Foundations Framework. A developer in a well-designed platform reaches cognitive absorption more readily, because the platform is absorbing the concerns that would otherwise interrupt the work. A developer on a weak platform is constantly broken out of flow by friction the platform should have handled. The user state Agarwal and Karahanna described becomes a design discipline. Not "how does the user feel?" but "what is the platform absorbing that would otherwise fall on the user?"

That shift in question changes what you measure.

Why the standard frameworks do not reach this

The DORA four keys (deployment frequency, lead time for changes, change failure rate, mean time to restore) are outcome metrics. They tell you whether the delivery system is working. They do not tell you what the delivery system is costing the people who operate it. A team that deploys every fifteen minutes because one senior engineer has memorized the entire deployment runbook is scoring well on DORA while resting on a fragile single point of failure. DORA does not see the cognitive load that engineer carries.

The SPACE framework, developed by researchers at GitHub, Microsoft, and the University of Victoria, addresses five dimensions: Satisfaction and wellbeing, Performance, Activity, Communication and collaboration, and Efficiency and flow. Efficiency and flow is the closest SPACE gets to cognitive absorption. The researchers acknowledge flow state as a relevant dimension, but the framework does not specify how to measure it at the platform level, and most organizations end up instrumenting the Activity dimension most heavily because it is the easiest to instrument. Activity metrics are not absorption metrics.

DX Core 4 (Forsgren, Storey, and Zimmermann, 2024) comes closest. It explicitly includes developer experience alongside speed and quality, and its "ease of delivery" construct touches the friction that cognitive absorption addresses. The framework is the most useful of the three for platform teams that want to move beyond pure outcome measurement. It still does not instrument the specific signals that reveal what the platform is and is not absorbing under pressure.

The gap is consistent across all three frameworks: they measure the output of the developer or the output of the system, but not the exchange between the platform and the developer. That exchange is where cognitive absorption lives.

Three signals worth instrumenting

The Foundations Framework operationalizes cognitive absorption through three specific signals. Each is measurable without new tooling in most organizations. Each tells a different part of the story.

Context switch cost

How many tools does a developer touch to complete a single production deployment?

Count them. Git client. CI/CD dashboard. Secrets manager. Deployment log viewer. Monitoring tool. Incident tracker. Some organizations add a change management form, a Slack approval thread, and a ticket update before the deployment is considered complete. That is eight to ten context switches for one change.

A mature platform collapses this. One command or one pipeline handles the handoffs. The developer stays in one surface for the entire cycle. The number of tools touched is a direct proxy for what the platform is not absorbing.

To measure it, shadow a developer doing a standard deployment, or run a structured interview asking them to narrate the last deployment they completed. List every tool they opened. Add every Slack message or email they sent to get something they needed. The total is your baseline. Mature platforms I have worked with sit at three to four surfaces. Immature ones regularly reach ten to twelve.
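
If it helps to see the counting made concrete, here is a minimal sketch in Python. The surfaces and the narrated steps are invented for illustration, not taken from any real engagement; the point is only that the count itself is the metric.

```python
# Minimal sketch: count context switches from one narrated deployment.
# The surfaces and steps listed here are hypothetical, not a prescribed set.
shadowed_deployment = [
    ("git client", "push feature branch"),
    ("CI/CD dashboard", "watch the pipeline"),
    ("secrets manager", "rotate an expired credential"),
    ("Slack", "ask the platform team for an approval"),
    ("deployment log viewer", "confirm the rollout"),
    ("monitoring tool", "check error rates"),
    ("ticket system", "update the change record"),
]

surfaces = [surface for surface, _ in shadowed_deployment]
unique_surfaces = list(dict.fromkeys(surfaces))  # preserve order, drop repeats

print(f"Context switch cost: {len(unique_surfaces)} surfaces for one deployment")
print("Surfaces touched:", ", ".join(unique_surfaces))
```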

Paved road compliance under pressure

What percentage of deployments use the standard path when a deadline is tight?

This is the diagnostic that cognitive load surveys almost always miss. Under normal conditions, developers may follow the golden path because it is convenient. The real test is what happens when the release date is tomorrow and something is broken. At that point, the paved road either proves its value or gets routed around.

Route-arounds are not developer failures. They are data. If developers consistently abandon the standard deployment pipeline under pressure to use a manual alternative, the pipeline is not absorbing enough of the difficulty. The manual alternative is faster, which means it is simpler, which means the platform's added complexity is not paying back in absorbed friction.

To measure it, pull deployment metadata for the last quarter and classify each deployment as canonical (used the standard path) or variant (used a custom or manual alternative). Slice by sprint pressure: deployments in the week before a major release versus deployments in quiet periods. The compliance ratio should go up under pressure, not down. If it goes down, the platform is present but not absorbing.
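
As a sketch of that slicing, assuming you can export deployment records with a date and the path used, something like the following works; the records, field values, and pressure window below are all invented:

```python
from datetime import date

# Illustrative deployment records: (date, path). "canonical" means the
# standard pipeline; anything else counts as a variant or route-around.
deployments = [
    (date(2026, 3, 2), "canonical"),
    (date(2026, 3, 3), "canonical"),
    (date(2026, 3, 25), "manual script"),  # week before a hypothetical release
    (date(2026, 3, 27), "canonical"),
    (date(2026, 3, 30), "direct push"),
]

# High-pressure windows, e.g. the week before each major release.
pressure_windows = [(date(2026, 3, 24), date(2026, 3, 31))]

def in_pressure(day):
    return any(start <= day <= end for start, end in pressure_windows)

def compliance(records):
    return sum(path == "canonical" for _, path in records) / len(records)

pressure = [r for r in deployments if in_pressure(r[0])]
quiet = [r for r in deployments if not in_pressure(r[0])]

print(f"Compliance under pressure: {compliance(pressure):.0%}")
print(f"Compliance in quiet periods: {compliance(quiet):.0%}")
```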

State of Platform Engineering Vol 4 (2026) reports that 29.6 percent of engineering organizations measure nothing about their platform's impact. Among those that do measure, the metrics are almost always deployment frequency and lead time. Measuring compliance under pressure is rare to the point of being invisible.

Flow state retention

How long after a developer starts work does their first meaningful commit land?

This is the coarsest of the three signals, but also the most revealing at scale. In a platform that absorbs well, a developer can open their laptop, orient quickly, write code, and ship a commit in minutes. In a platform that does not absorb, the first twenty to forty minutes of a work session disappear into setup: pulling the right environment, checking what changed overnight, waiting for a local build, figuring out which credential expired.

Tool logs can recover this. CI logs show when the first build was triggered after a developer's previous idle period. IDE telemetry, where available, shows when editing began. Deployment metadata shows when the first commit landed. The gap between login and first productive output is measurable without developer surveys.
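
A minimal sketch of the gap computation, assuming you can extract two timestamps per developer per day (first observed activity and first meaningful commit); the values below are made up:

```python
from datetime import datetime

# Hypothetical per-developer timestamps for one day: first observed activity
# (login, first IDE event, or first CI trigger) and first meaningful commit.
sessions = {
    "dev-a": ("2026-05-11T09:02", "2026-05-11T09:14"),
    "dev-b": ("2026-05-11T08:55", "2026-05-11T09:51"),
    "dev-c": ("2026-05-11T09:10", "2026-05-11T09:23"),
}

for dev, (start, first_commit) in sessions.items():
    gap = datetime.fromisoformat(first_commit) - datetime.fromisoformat(start)
    print(f"{dev}: first-commit gap {gap.total_seconds() / 60:.0f} min")
```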

Mature platforms I have worked with show a first-commit gap under fifteen minutes for standard work. Immature ones are frequently over forty-five minutes. The forty-five-minute number does not show up in DORA metrics. The developer is not counted as unproductive during that period. But the platform is not absorbing anything during it either.

What these numbers look like in practice

The contrast between platforms at different maturity levels is consistent enough that it reads as a pattern rather than case-by-case variation.

On mature platforms (high deployment frequency, low change failure rate, genuine golden paths), context switch cost sits between three and five tools per deployment. Paved road compliance under pressure is above 85 percent. Flow state retention produces first commits within fifteen minutes of the start of focused work.

On immature platforms (deployment frequency of weekly or less, change failure rates above 15 percent, heavy documentation but thin automated support), context switch cost rises to eight to twelve tools. Paved road compliance drops to 40 to 60 percent under pressure, with teams building ad hoc alternatives during crunch time. Flow state retention stretches past forty-five minutes, with developers spending meaningful portions of their first hour on environment and orientation problems the platform should handle.

The mature numbers come from platforms where a team has deliberately instrumented absorption as a design goal, not just as an outcome. They built absorbers before they built features. The immature numbers come from platforms where absorption was never a stated accountability of the platform team.

The State of Platform Engineering Vol 4 (2026) finding that mature platforms operate at 3.5x the deployment frequency of immature ones is consistent with this pattern. The difference is not primarily in tooling choices. It is in how much the platform absorbs so developers do not have to.

Why measurement tends to stop before it gets here

There is an honest answer to why most platform teams do not measure these signals. The DORA four are already established as industry standard. Adding three more metrics requires justification to leadership. Deployment frequency and lead time feel like the important numbers because they are the ones in the research and in the benchmarks. Cognitive absorption signals feel like developer experience extras.

The counterargument is that the DORA four tell you whether the system is working, but not whether it is sustainable. A team that hits elite DORA metrics by loading every operational concern onto three senior engineers is not building a sustainable platform. When one of those engineers leaves (and they will, because they are the kind of person who has options), the DORA metrics collapse. The cognitive absorption signals would have predicted it.

Stack Overflow 2024-2025 research reports that 84 percent of developers now use AI tools, with 51 percent using them daily. AI adoption at that scale introduces a new layer of cognitive work: evaluating AI output, integrating it, verifying it against system context the model does not have. On a platform that absorbs well, this additional cognitive work lands on a capable absorber. On a platform that does not, it lands on the developer, on top of everything else. The DORA four will not capture that load increase. The absorption signals will.

How to start measuring it

The instrumentation does not require a new tool category. It requires a different question applied to data most platforms already generate.

For context switch cost, start with a structured interview or shadowing exercise with five to eight developers. Ask each to narrate their last production deployment, listing every tool and every Slack message in order. Aggregate the list. The average tool count is your baseline. The mode tells you where the platform's gaps cluster.
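
A minimal aggregation over those interviews might look like this; the developers, tools, and counts are invented for illustration:

```python
from collections import Counter
from statistics import mean

# Surfaces each interviewed developer named for their last deployment.
# All values are hypothetical.
interviews = {
    "dev-1": ["git", "ci dashboard", "secrets manager", "slack", "ticket system"],
    "dev-2": ["git", "ci dashboard", "slack", "monitoring", "ticket system", "wiki"],
    "dev-3": ["git", "ci dashboard", "secrets manager", "slack"],
}

counts = [len(tools) for tools in interviews.values()]
print(f"Baseline context switch cost: {mean(counts):.1f} surfaces per deployment")

# Which surfaces come up most often tells you where the gaps cluster.
surface_frequency = Counter(t for tools in interviews.values() for t in tools)
print("Most common surfaces:", surface_frequency.most_common(3))
```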

For paved road compliance under pressure, query your deployment records for the last two quarters. Tag each deployment by the path it used (standard CI pipeline, manual script, direct push, other). Cross-reference with the sprint or release calendar to identify high-pressure periods. Build the compliance ratio for each period. If you do not have the metadata to do this, the absence of metadata is itself the finding: the platform cannot see its own usage.

For flow state retention, pull CI job start times for the first job of each day per developer, or per team if individual data is not available. Pair with time-to-first-commit or time-to-first-build. The daily histogram of first-commit gaps shows you the cost of the morning orientation problem across the organization.

Running all three for one quarter gives you a baseline. Running it for two quarters gives you a trend. The trend is the data that changes platform investment decisions, because it shows whether platform work is actually absorbing or just adding surface area.
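
A sketch of what the two-quarter read might look like, with invented numbers:

```python
# Hypothetical quarterly baselines for the three absorption signals.
# Compliance should trend up; the other two should trend down.
baselines = {
    "context switch cost (tools/deploy)":   {"Q1": 9.2,  "Q2": 6.8},
    "paved road compliance under pressure": {"Q1": 0.52, "Q2": 0.71},
    "first-commit gap (minutes)":           {"Q1": 41.0, "Q2": 28.0},
}

for signal, q in baselines.items():
    delta = q["Q2"] - q["Q1"]
    print(f"{signal}: Q1={q['Q1']}, Q2={q['Q2']}, delta={delta:+.2f}")
```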

The Foundations Assessment baseline

The first phase of every Clouditive engagement, Horizon, includes a cognitive load baseline survey alongside the three operational signals above. The survey is not a replacement for the signals. It is a triangulation instrument. When survey responses and operational signals diverge, the divergence is the most interesting data point. Teams that report low cognitive load with low paved road compliance are telling you the platform is invisible to them. That is a different problem than high cognitive load with high compliance, but it is still a problem.

The baseline exists because platform teams that invest without measuring cannot know whether the investment is absorbing. They build absorbers and then measure DORA. If DORA improves, they attribute it to the platform work. If it does not, they look for other explanations. Neither path tells them whether the platform is actually carrying what it claims to carry.

The Foundations Assessment is a four-to-six-week structured engagement that produces this baseline across all five pillars of the Foundations Framework. Cognitive Absorption is the third pillar. The baseline survey and the three operational signals run in parallel during Horizon, and the combined read tells the platform team what it is absorbing, what it is not absorbing, and where the highest-return absorption investments are.

If you want to start with a faster read across all five pillars, the free Platform Score takes fifteen minutes and gives you a radar chart. The cognitive absorption dimension is one of the five axes.

The measurement question is not hard to answer once you decide to ask it. The challenge is deciding it matters enough to ask.

References

  • Agarwal, R., and Karahanna, E. (2000). Time flies when you're having fun: Cognitive absorption and beliefs about information technology usage. MIS Quarterly, 24(4), 665-694. https://www.jstor.org/stable/3250951
  • Skelton, M., and Pais, M. (2019). Team Topologies: Organizing Business and Technology Teams for Fast Flow. IT Revolution Press.
  • Forsgren, N., Storey, M., and Zimmermann, T. (2024). DX Core 4. DX Research.
  • DORA. (2025). State of AI-assisted Software Development. https://dora.dev/dora-report-2025/
  • Puppet. (2026). State of Platform Engineering, Vol. 4.
  • Stack Overflow. (2024-2025). Developer Survey.