Agentic AI · 3 min · Series: Agent Experience

Agent Experience is more important than User Experience

Okay, put down your pitchforks. The title is a bit of a joke.

Here’s the irony though: optimizing your platform for agents is the cheat code for making the human experience better. The changes that make agents effective are improvements that help everyone, including people who never use AI.

Humans can compensate for rough edges. We’ll read the tooltip, ask a coworker, or brute-force the workflow until it works.

Agents do, too. But they do it at a scale that can have massive downstream effects.

When agents don’t have a clear map, they wander: they scrape UIs, guess workflows, generate plausible payloads, and retry in a dozen different ways. That’s brittle automation that burns time, tokens, and trust.

So if you want good AX, you’re forced to build the things we always should’ve built:

  • Stable APIs: no UI scraping / fewer "made it work with what I had" workflows

  • Strict schemas + validation: fewer invented payloads / safer retries

  • Predictable paths: less wandering / fewer one-off hacks

  • Crystal-clear roles + access controls: fewer "R&D must unblock it" escalations

  • Audit logs that make sense: faster debugging + compliance + feedback loop

  • Safe controls, rate limits, kill switches: manage runaway or misaligned actors
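
To make "strict schemas + validation" concrete, here is a minimal sketch of payload validation that returns errors an agent can parse and self-correct from. The `create_invoice`-style fields are hypothetical examples, not a real API, and a production system would use a schema library rather than hand-rolled checks:

```python
# Sketch: strict payload validation with a machine-readable error
# contract. Field names here are illustrative, not a real API.

REQUIRED_FIELDS = {"customer_id": str, "amount_cents": int}

def validate_payload(payload: dict) -> dict:
    """Return {"ok": True} or a structured error list an agent can act on."""
    errors = []
    for field, expected in REQUIRED_FIELDS.items():
        if field not in payload:
            errors.append({"field": field, "error": "missing"})
        elif not isinstance(payload[field], expected):
            errors.append({
                "field": field,
                "error": "wrong_type",
                "expected": expected.__name__,
            })
    # Reject unknown fields instead of silently dropping them, so an
    # agent that invented a payload finds out immediately.
    for field in sorted(set(payload) - set(REQUIRED_FIELDS)):
        errors.append({"field": field, "error": "unknown_field"})
    return {"ok": False, "errors": errors} if errors else {"ok": True}
```

A rejected payload comes back with exactly which field failed and why, which is the difference between an agent retrying intelligently and an agent guessing a dozen more times.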

And guess what? That’s exactly what your developers and power users have been begging for. AI is just the forcing function.

If you want to survive the agentic future, product and engineering teams have to pivot to AX:

Stop building "the agent." Build the action surface.

Don’t obsess over building a chatbot with a branded personality. Focus on the boring stuff that makes automation reliable: API contracts, versioning, idempotency, schema validation, and clear action boundaries.

The future is customers bringing a team of humans and tools and agents.

Many enterprises won’t standardize on your single agent. They’ll standardize on their own hybrid teams, which could be a mix of humans, internal tools, or agents from lots of different sources, and they’ll want all of them to execute work safely on your platform. The value you provide isn’t "the agent." It’s access, governance, and reliability. Your UI becomes one of many convenience wrappers over a rock-solid foundation.

Accountability helps your vibe-code grow up.

Prototypes are cheap. Enterprise automation isn’t. Your moat is governance:

  • Identity: no shared "mystery" accounts. You likely won't know who's a human and who's a bot anyway.

  • On-behalf-of attribution: "Agent X executed action on behalf of User Y"

  • Enforced gates: high-stakes actions require approvals + reversibility. This might be a human in the loop, but maybe it's a tool or agent in the loop (think CI / tests)

  • Machine- and human-readable failures: agents need structured errors to self-correct, and humans do, too. Don't presume you know which you're working with.
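
The "on-behalf-of" pattern above can be sketched as a structured audit event. The field names are illustrative, not a standard, but the shape is the point: the executing actor and the authorizing principal are separate fields, so "Agent X executed action on behalf of User Y" is queryable rather than buried in free text:

```python
# Sketch: on-behalf-of attribution in an audit event. Field names
# are illustrative examples, not an established schema.

import datetime
import json

def audit_event(actor: str, on_behalf_of: str, action: str) -> str:
    return json.dumps({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": actor,                # who executed: human, agent, or tool
        "on_behalf_of": on_behalf_of,  # whose authority it ran under
        "action": action,
    })

# e.g. audit_event("agent:invoice-bot", "user:alice", "invoice.create")
```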

Build circuit breakers.

Agents operate at machine speed. You need quotas, rate limits, and anomaly detection that work for agents and humans alike. Again, you'll likely never fully know what's a bot, what's a human, or what's both working together, and in practice you can’t rely on "a human will notice and do the right thing" - so design governance that works regardless of whether the caller is human, agent, or both.
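
One caller-agnostic way to enforce quotas is a token bucket, sketched here with illustrative numbers. It doesn't care whether the caller is a person or a bot; it only sees request rate:

```python
# Sketch: a per-caller token bucket as a simple circuit breaker.
# Capacity and refill rate are illustrative, not recommendations.

import time

class TokenBucket:
    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over quota, whether human, agent, or both
```

A bucket with `capacity=3` and no refill allows exactly three calls and then denies the rest, which is the "kill switch" behavior in miniature: runaway retry loops hit the floor fast instead of hammering your platform.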

The takeaway

When you stop treating agents like UI users and start building the governable, API-first substrate they actually need… you accidentally build the most robust platform your human users have ever seen.

Good AX is good UX.

The next time an agent fails on your platform, don’t blame the model. Ask it:

"What API endpoint, permission scope, error contract, or audit log would have helped you here?"

The answer to that question becomes your roadmap.
