Term FF-002

Three-persona platform user

The platform user is no longer one thing.

The three-persona platform user taxonomy formalizes who uses your Internal Developer Platform in 2026: human developers, AI agents, and hybrid collaborators. Every Foundations Framework pillar designs for all three from day one.

What it is

A taxonomy, not a metaphor

The three-persona platform user is a formal taxonomy introduced in the Foundations Framework by Mat Caniglia. It names the three distinct entities that interact with an Internal Developer Platform today and defines what each needs from the platform, what constitutes a platform failure for each, and what observability looks like for each.

Most platform engineering frameworks assume one user: the human developer. That assumption was reasonable in 2020. It is not reasonable in 2026. AI coding assistants, autonomous deployment agents, and hybrid human-AI workflows are now standard configurations in engineering teams. A platform designed only for human developers is a platform that will create friction, observability gaps, and risk surfaces the team cannot see.

The taxonomy is not decorative. Each persona drives different design decisions at the pillar level. Security and Compliance by Default looks different when one of your platform users cannot read a policy doc. Delivery Reliability looks different when one of your deployers does not sleep.

The three personas

What each persona needs, and what breaks for each

01

Human developer

Who they are

The engineer who reads docs, follows golden paths, attends retrospectives, and files tickets. They bring context and judgment. They get tired, distracted, and context-switched.

Their platform contract

Reduced cognitive load. Fast feedback loops. A paved road that does not require expertise to use. Onboarding that does not take weeks.

Common failure modes

Documentation debt. Broken golden paths. High context-switch cost from poorly designed tooling. Long CI feedback cycles.

02

AI agent

Who they are

The autonomous system that opens pull requests at three in the morning, runs test suites, triggers deployments, and sometimes does all three in a single workflow that no human reviewed.

Their platform contract

Machine-readable contracts. Deterministic interfaces. Observability that can distinguish agent-originated changes from human changes. Review gates appropriate to the risk of autonomous action.

Common failure modes

No observability on agent-originated changes. Review gates designed for humans that agents can bypass. A bad agent deploy whose blast radius is indistinguishable from that of a bad human deploy.
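The agent's platform contract can be made concrete as a machine-readable structure. Here is a minimal sketch, with entirely hypothetical field names (`AgentPlatformContract`, `max_blast_radius`, and the rest are illustrations, not part of the Foundations Framework):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentPlatformContract:
    """Hypothetical machine-readable contract between an AI agent and the platform."""
    agent_id: str                   # stable identity, so provenance can be tracked
    allowed_actions: frozenset      # deterministic allowlist of platform actions
    requires_human_review: bool     # review gate sized to the risk of autonomy
    max_blast_radius: str           # e.g. "single-service"; bounds a bad deploy

    def permits(self, action: str) -> bool:
        # Deterministic interface: same input, same answer, no implicit context.
        return action in self.allowed_actions

contract = AgentPlatformContract(
    agent_id="ci-refactor-bot",
    allowed_actions=frozenset({"open_pr", "run_tests"}),
    requires_human_review=True,
    max_blast_radius="single-service",
)
print(contract.permits("trigger_deploy"))  # False: deploys sit outside this contract
```

The point of the frozen dataclass is the determinism the contract demands: an agent cannot negotiate or reinterpret its permissions the way a human might read around a policy doc.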

03

Hybrid collaborator

Who they are

A human working alongside an AI pair on the same task: reviewing AI-generated code, directing agent workflows, or co-authoring architecture documents. The most common configuration in engineering teams today.

Their platform contract

Context engineering compatible with agent-assisted workflows. Golden paths that work for both human intuition and agent pattern matching. Observability that surfaces the provenance of each change regardless of who or what originated it.

Common failure modes

Platforms designed for pure human use that create friction when agents are involved. Context that lives in human memory but not in machine-readable form. Paved roads that agents cannot follow because they require implicit knowledge.

Why it matters

AI agent platform failure modes are invisible to human-only monitoring

When an engineering team adopts AI coding assistants and autonomous deployment agents without updating their platform design, they introduce failure modes that their existing monitoring cannot see. An AI agent that merges a pull request at three in the morning is indistinguishable from a human merge in a system not instrumented to track provenance.
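One way to close that gap is to attach provenance to every change event so monitoring can tell the three personas apart. A minimal sketch, assuming hypothetical metadata fields (a real system would derive these from commit signatures, CI tokens, or bot-account identity):

```python
from dataclasses import dataclass

@dataclass
class ChangeEvent:
    """A merge or deploy event annotated with provenance metadata."""
    actor: str           # e.g. "alice" or "deploy-agent-7"
    via_agent: bool      # True if an automated agent originated the change
    human_reviewed: bool # True if a human approved it before it landed

def provenance(event: ChangeEvent) -> str:
    # Classify each change into one of the three personas.
    if event.via_agent and event.human_reviewed:
        return "hybrid"   # human directing or reviewing agent work
    if event.via_agent:
        return "agent"    # fully autonomous change
    return "human"

# A three-in-the-morning agent merge is no longer indistinguishable from a human merge:
night_merge = ChangeEvent(actor="deploy-agent-7", via_agent=True, human_reviewed=False)
print(provenance(night_merge))  # "agent"
```

Once every event carries a provenance label, dashboards and alerts can segment failure rates by persona instead of treating all merges as human.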

State of Platform Engineering Vol 4 (2026) reports that 29.6 percent of platform teams measure nothing. That figure was calculated before widespread AI agent adoption; the cost of those instrumentation gaps grows as the share of changes originating from agents grows.

The three-persona taxonomy creates a forcing function. When a team acknowledges that AI agents are platform users, it must answer: what is the agent's contract with the platform? What observability do we have on agent activity? What review gates apply to agent-originated changes? The taxonomy makes those questions unavoidable.

How Clouditive uses it

Designed into every pillar from day one

The Foundations Assessment includes an AI readiness score across all three personas. The diagnostic questions differ by persona. Human developer readiness covers cognitive load, onboarding velocity, and golden path coverage. AI agent readiness covers observability instrumentation, review gate design, and blast radius controls. Hybrid collaborator readiness covers context engineering maturity and paved road compatibility with agent-assisted workflows.
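A per-persona readiness score like the one described above could be computed as follows. This is a hypothetical illustration only: the diagnostic questions, weights, and scale of the actual Foundations Assessment are not reproduced here.

```python
# Hypothetical per-persona readiness scoring; scores are illustrative 0-1 answers.
PERSONAS = ("human_developer", "ai_agent", "hybrid_collaborator")

def readiness(scores: dict) -> dict:
    """Average each persona's diagnostic answers into a per-persona score and
    flag the weakest persona, since one unready persona is an unseen risk surface."""
    per_persona = {p: sum(scores[p]) / len(scores[p]) for p in PERSONAS}
    per_persona["weakest"] = min(PERSONAS, key=per_persona.get)
    return per_persona

example = {
    "human_developer": [0.8, 0.9, 0.7],  # cognitive load, onboarding, golden paths
    "ai_agent": [0.2, 0.5, 0.3],         # observability, review gates, blast radius
    "hybrid_collaborator": [0.6, 0.5],   # context engineering, paved-road compatibility
}
result = readiness(example)
print(result["weakest"])  # "ai_agent"
```

Surfacing the weakest persona, rather than a single blended number, mirrors the taxonomy's point: a platform that scores well for humans can still be unready for agents.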

During the Forge phase, every capability delivered is validated against all three personas. A CI pipeline that works well for human developers but creates a bypass surface for agents is not a complete delivery. A deployment system observable to human operators but opaque to automated incident detection is not complete observability.

The four AI metrics Clouditive instruments on every engagement (throughput quality coupling, cognitive offload, AI agent observability, decision quality preservation) each have per-persona breakdowns. Agent observability, in particular, is a signal unique to the second and third personas.

Related pillar page

The AI agent platform page covers the second persona in depth, including agent failure modes, observability requirements, and what the platform owes the agent.

Read the AI agent platform pillar

Assess your platform for all three personas

The Foundations Assessment includes AI readiness scoring across the three-persona taxonomy.

Four to six weeks. Maturity radar. DORA baseline. AI readiness score. 90-day roadmap.