Cognitive Load in platform engineering. What Skelton and Pais got right (and what is missing).
Cognitive Load is the metric platform engineering quotes most often. It is also the metric most teams measure wrong, then act on the wrong signal.
I want to do two things in this post. First, give Skelton and Pais their due. Their 2019 work in Team Topologies imported a precise concept from cognitive psychology into a discipline that needed it. Second, show where Cognitive Load stops being useful as a standalone metric, and what platform teams should be measuring next to it.
If you came here for a bridge to my work on Cognitive Absorption, it is at the end. The body of the piece is the diagnostic. The link is the prescription.
What Cognitive Load actually means
Cognitive Load Theory comes from John Sweller's work in educational psychology in 1988. The theory distinguishes three kinds of mental effort a learner spends on a task.
| Type | Definition | Platform engineering example |
|---|---|---|
| Intrinsic | Effort inherent to the task itself | Understanding business domain logic for a payment service |
| Extraneous | Effort caused by how the task is presented or scaffolded | Wrestling with poorly documented deployment YAML |
| Germane | Effort that builds durable mental models | Learning the architecture of the system over time |
Skelton and Pais imported the framework into team design. The argument is that when a team's total cognitive load exceeds capacity, performance degrades. Bugs go up. Velocity goes down. Burnout climbs.
Their contribution was to distinguish team types by load shape. A stream aligned team should carry intrinsic load (its product domain). A platform team should absorb extraneous load (boilerplate concerns that all stream aligned teams share). Enabling teams reduce germane load through capability transfer.
The model is elegant and useful. It gives a vocabulary to describe team interactions and a hypothesis to test. The CNCF Platform Engineering Maturity Model and most modern platform vocabulary owe Team Topologies a direct debt.
Why most cognitive load surveys are theater
The trouble starts when a platform team decides to measure cognitive load.
The standard approach is to send a quarterly survey to engineers asking them to score, on a Likert scale, how cognitively loaded they feel by various aspects of their work. Build process. Deployment pipeline. Documentation. On call burden. Local development environment.
This produces a number. The number is almost always meaningless. Three reasons.
Reason one. Surveys conflate the three kinds of load. Engineers report total felt load, not load by type. A team scoring high on cognitive load might be drowning in extraneous load (a fixable problem) or simply working on a complex business domain (the work). Without distinguishing, the platform team cannot decide what to absorb.
Reason two. Self report is anchored to recent pain. An engineer who had a bad sprint scores higher than one who had a smooth sprint. The instrument captures volatility, not trend. Quarterly cadence is too coarse to surface what changed.
Reason three. The survey instrument itself adds extraneous load. Engineers fill it out in fifteen minutes, at half attention, while waiting for a build. The signal degrades with each cycle.
I have walked into engineering organizations that scored a meaningful improvement on cognitive load over four quarters, while their deployment frequency dropped twenty percent and their senior engineer attrition climbed. The number went the right direction. The system went the wrong direction.
Cognitive Load surveys, as commonly run, are theater. They produce a deliverable that platform leadership can put in a quarterly review. They do not change behavior because they do not surface what to fix.
What Cognitive Load can answer
Run with discipline, the metric does answer one important question.
It answers: is the platform asking too much of its users right now?
That is a real question. It deserves a real measurement. The discipline required is:
- Differentiate the three load types in the survey instrument. Ask separate questions about each. Reject aggregate scores.
- Pair it with system health metrics. Cognitive load alone is noise. Cognitive load plus deployment frequency plus change failure rate plus senior engineer NPS gives signal.
- Run shorter, more frequent surveys. Two questions every two weeks beats fifteen questions every quarter.
- Triangulate with system telemetry. A team reporting low extraneous load should also show low context switch frequency in CI logs and Slack data. If the survey says easy and the telemetry says hard, the survey is wrong.
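That triangulation check is small enough to script. The sketch below is illustrative only: the team names, thresholds, and the shape of the telemetry are invented assumptions, and you would swap in whatever your CI and chat analytics actually emit. The point is the cross-check, not the numbers.

```python
# Hypothetical triangulation check: flag teams whose survey answers
# disagree with telemetry. All field names and thresholds are invented.
survey = {"payments": 2.1, "checkout": 4.5}          # self-reported extraneous load, 1-5
context_switches = {"payments": 38, "checkout": 9}   # switches per engineer per week, from CI + chat logs

LOAD_EASY, SWITCHES_HARD = 2.5, 25  # illustrative cut-offs

def mismatches(survey_scores, telemetry):
    """Teams that report low extraneous load while telemetry shows heavy switching."""
    return [team for team, score in survey_scores.items()
            if score <= LOAD_EASY and telemetry.get(team, 0) >= SWITCHES_HARD]

print(mismatches(survey, context_switches))  # payments says easy, telemetry says hard
```

When the list is non-empty, the rule from the bullet above applies: the survey is wrong, and the telemetry is where the investigation starts.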
This is the survey discipline that the Foundations Framework Signal Integrity pillar prescribes. Without that discipline, the metric becomes the dashboard performance Skelton and Pais themselves warn about in the second edition of Team Topologies.
What Cognitive Load cannot answer
Even run with full discipline, Cognitive Load is a diagnostic. It surfaces a problem state. It does not name what the platform team should do about it.
Three concrete failures of the metric, when used standalone, illustrate the gap.
Failure one. The metric does not specify which work the platform should absorb. A team scoring high extraneous load could be drowning in deployment friction, in compliance overhead, in flaky tests, in environment configuration drift, or in any combination. The metric says high. It does not say which.
Failure two. The metric does not measure absorption capacity of the platform itself. A team can have low cognitive load because the platform absorbs well, or because the team has stopped trying to use the platform and built workarounds. Same number. Opposite system state.
Failure three. The metric does not predict performance under pressure. Cognitive load measured in normal sprint cadence does not surface what happens during a release crunch, a security incident, or a deadline. The data the platform team needs is exactly the data the standard survey misses.
These three failures are why platform engineering as a discipline needs a concept beside Cognitive Load. Not a replacement. A complement.
What comes after Cognitive Load
The concept is Cognitive Absorption.
Cognitive Load asks: how much is the platform asking of its users? Cognitive Absorption asks: what is the platform absorbing on behalf of its users?
The first is a measurement. The second is a design discipline.
The platform team that takes Cognitive Absorption seriously stops investing primarily in surveys and starts investing in three operational signals.
- Flow state retention. Time to first context switch under standard work.
- Context switch cost. Time to resume productive work after interruption.
- Paved road compliance under pressure. Rate at which users default to the platform's preferred path when deadlines tighten.
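The first two signals fall out of ordinary event timestamps. A minimal sketch, with an invented event schema; real input would come from IDE, CI, or chat telemetry:

```python
# Illustrative computation of flow state retention and context switch cost
# from a per-engineer event log. The event schema is an assumption.
events = [
    ("09:00", "work_start"),
    ("09:52", "interrupt"),     # first context switch
    ("10:07", "work_resume"),   # back to productive work
]

def minutes(ts):
    """Convert an HH:MM timestamp to minutes since midnight."""
    h, m = map(int, ts.split(":"))
    return h * 60 + m

start = minutes(events[0][0])
first_switch = next(minutes(t) for t, kind in events if kind == "interrupt")
resume = next(minutes(t) for t, kind in events if kind == "work_resume")

flow_retention = first_switch - start        # signal 1: time to first context switch
context_switch_cost = resume - first_switch  # signal 2: time to resume after interruption

print(flow_retention, context_switch_cost)   # 52 15
```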
The third signal is the diagnostic that Cognitive Load alone misses. If your team routes around the platform when the stakes are high, the platform is present but not absorbing. The Cognitive Load survey will rarely surface this. The paved road metric does.
I extended Cognitive Absorption from a user state concept in information systems research (Agarwal and Karahanna, 2000) to a platform design discipline as part of the Foundations Framework. The full piece is at Cognitive Absorption is not Cognitive Load. The difference matters.
How to run Cognitive Load measurement that actually works
Three changes to your current practice. None require new tooling.
Change one. Split the survey by load type. Instead of one question scoring overall load, ask three. Intrinsic, extraneous, germane. Engineers struggle with the labels, so use plain language: "the work itself", "the friction around the work", "the learning the work requires". Aggregate by type, not in total. Track each separately over time.
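Aggregating by type instead of in total is a small change to the analysis script. A sketch, with invented response data:

```python
# Illustrative sketch: aggregate survey responses per load type rather than
# as one overall score. Response values are invented, on a 1-5 scale.
from collections import defaultdict
from statistics import mean

responses = [  # one dict per engineer
    {"intrinsic": 4, "extraneous": 2, "germane": 3},
    {"intrinsic": 3, "extraneous": 5, "germane": 2},
    {"intrinsic": 4, "extraneous": 4, "germane": 3},
]

by_type = defaultdict(list)
for r in responses:
    for load_type, score in r.items():
        by_type[load_type].append(score)

# Track each series separately over time; never collapse into one number.
for load_type, scores in sorted(by_type.items()):
    print(load_type, round(mean(scores), 2))
```

A rising extraneous series is a platform backlog item; a rising intrinsic series is just the domain getting harder. The aggregate would hide which.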
Change two. Pair with telemetry. Build a quarterly review that puts cognitive load survey results next to deployment frequency, change failure rate, mean time to restore, and senior engineer attrition. Cognitive load down with system metrics down is bad news, not good news. Survey numbers without system telemetry are misleading.
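The bad-news-disguised-as-good-news case can be flagged automatically. A hedged sketch with invented quarterly numbers; the field names are assumptions:

```python
# Flag the pattern where the survey "improves" while the system degrades.
# All numbers are invented for illustration.
quarters = [
    {"q": "Q1", "extraneous_load": 3.8, "deploys_per_week": 12, "change_failure_rate": 0.11},
    {"q": "Q2", "extraneous_load": 3.1, "deploys_per_week": 9,  "change_failure_rate": 0.16},
]

prev, curr = quarters[-2], quarters[-1]
load_improved = curr["extraneous_load"] < prev["extraneous_load"]
system_degraded = (curr["deploys_per_week"] < prev["deploys_per_week"]
                   or curr["change_failure_rate"] > prev["change_failure_rate"])

if load_improved and system_degraded:
    print("Survey improved while the system degraded: investigate, don't celebrate.")
```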
Change three. Add the paved road compliance metric. This is not in Team Topologies and is not in most cognitive load survey instruments. It is the metric that distinguishes a platform that absorbs from one that is merely present. Measure deployment metadata. For every production deployment in the last quarter, classify whether the path used was the canonical one or a custom variant. Build the ratio. Slice it by team and by sprint pressure level. Watch what happens during release crunches.
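Building the ratio is straightforward once deployments are classified. A sketch, assuming each deployment record carries a path classification and a sprint pressure label; both fields are assumptions to adapt to your deployment metadata:

```python
# Sketch of the paved road compliance metric. The deployment records and
# the "pressure" field are invented; adapt to your own metadata.
deployments = [
    {"team": "payments", "path": "canonical", "pressure": "normal"},
    {"team": "payments", "path": "custom",    "pressure": "crunch"},
    {"team": "payments", "path": "canonical", "pressure": "normal"},
    {"team": "checkout", "path": "canonical", "pressure": "crunch"},
]

def compliance(deploys, **filters):
    """Share of deployments using the canonical path, after optional filtering."""
    subset = [d for d in deploys if all(d[k] == v for k, v in filters.items())]
    if not subset:
        return None
    return sum(d["path"] == "canonical" for d in subset) / len(subset)

print(compliance(deployments))                     # overall ratio
print(compliance(deployments, pressure="crunch"))  # does it drop under pressure?
```

A compliance ratio that holds in normal sprints but falls during crunches is the signature of a platform that is present but not absorbing.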
Run all three for a quarter. The conversation changes. The platform team has actionable data, not theater.
Frequently asked questions
Is Cognitive Load Theory the same as cognitive load surveys in platform engineering?
No. Cognitive Load Theory is John Sweller's 1988 educational psychology framework distinguishing intrinsic, extraneous, and germane mental effort. Cognitive load surveys in platform engineering are practitioner instruments that adapt the theory, often with reduced fidelity. Most surveys collapse the three load types into a single score, which loses the diagnostic power of the original theory.
Should I stop running cognitive load surveys?
No. Run them with discipline. Split by load type. Pair with system telemetry. Triangulate with paved road compliance. The instrument is useful when used correctly. It is theater when used as a standalone aggregate score.
What is the difference between Cognitive Load and Cognitive Absorption?
Cognitive Load is a diagnostic measurement that asks how much the platform is asking of its users. Cognitive Absorption is a design discipline that asks what the platform is absorbing on behalf of its users. Run both. The survey surfaces the problem. The three Cognitive Absorption signals tell you whether your platform investment is fixing it.
Where does Cognitive Absorption come from?
The user state version comes from Agarwal and Karahanna 2000 in MIS Quarterly. The platform design discipline extension is original to the Foundations Framework, authored by Mat Caniglia at Clouditive. Both are credited explicitly when the framework is referenced.
How do I get started?
Start with the discipline changes in this post. Split your cognitive load survey by load type. Pair it with system telemetry. Add paved road compliance under pressure as a continuous metric. Run for a quarter. If you want a structured assessment of where your platform stands across all five Foundations Framework pillars, take the free Platform Score or book a Foundations Assessment for a four to six week deep diagnostic.
Read more
- Cognitive Absorption is not Cognitive Load. The difference matters. The conceptual extension and the three operational signals.
- The Foundations Framework. The proprietary platform engineering method that runs every Clouditive engagement.
- Free Platform Score. Fifteen minute self diagnostic across the five Foundations Framework pillars.
References
- Skelton, M., and Pais, M. (2019). Team Topologies: Organizing Business and Technology Teams for Fast Flow. IT Revolution Press. https://teamtopologies.com/book
- Sweller, J. (1988). Cognitive Load During Problem Solving: Effects on Learning. Cognitive Science, 12(2), 257 to 285.
- Agarwal, R., and Karahanna, E. (2000). Time flies when you're having fun: Cognitive absorption and beliefs about information technology usage. MIS Quarterly, 24(4), 665 to 694. https://www.jstor.org/stable/3250951
- DORA. (2025). State of AI Assisted Software Development. https://dora.dev/dora-report-2025/

Mat Caniglia
Founder of Clouditive. 18+ years transforming engineering organizations across LATAM and globally through Developer Experience consulting.