The conversation is changing. For the first time ever, the person or thing on the other side of an interaction isn’t always human. Every time I talk with other executives, the “agentic future” comes up. It’s a compelling idea: agents replacing old systems to actually solve problems for us without oversight. With more than a billion AI agents poised to handle everything from customer complaints to complex trades by 2029, the hurdle isn’t the tech itself. It’s whether we can actually trust it.
The reality is that most businesses are stuck in the pilot stage. Not for lack of imagination, but because we don’t have the right tools to move from a cool demo to a smart system that works safely at scale.
The old plumbing, or legacy infrastructure, wasn’t built for an agentic future. Workflows break easily. Data is trapped in silos. Trust is bolted on rather than built in. The result: As we deploy more agents, complexity will turn into chaos.
What’s missing is a trusted, neutral middle ground, a Switzerland for the modern tech stack.
As billions of these interactions happen, we need a layer that acts like a nervous system, connecting and coordinating every app and agent. Think of it as a conversational command center that fixes the trust gap by focusing on three things: identity, governance, and observability.
IDENTITY: VERIFY WHO IS DOING WHAT
Let’s say you task an agent with purchasing an expensive driver that’ll add 20 yards off the tee, or in my case, one with AI to help me find the fairway more often. The retailer needs to know in real time that it was actually you who authorized the purchase, not some bad actor or rogue agent trying to improve their own handicap.
And as agents get more autonomy, the stakes get higher. A several hundred-dollar golf club purchased without approval is a nuisance. An unsanctioned bank transfer or a leaked confidential email is a disaster.
This goes far beyond the traditional machine-to-machine logins and identity tools we’ve used for years. Unlike traditional machines that follow a fixed script, agents use “reasoning” that is fluid and responds to each situation differently. They are built to work around problems and develop new skills. Expecting old-school authentication, which was built for systems that react the same way every time, to police autonomous agents sets us up for disaster.
Forget one-time logins. Identity in the agentic era has to be alive, dynamic, and real-time, constantly checking user intent and behavior against specific rules. That’s how you make interactions secure, whether you’re talking about a person or a bot.
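As a rough illustration of the difference between a one-time login and dynamic identity, here is a minimal Python sketch in which every individual agent action is re-authorized against the human principal’s standing rules, rather than trusted because a session was opened earlier. All names, fields, and the policy table are hypothetical, purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    principal: str     # the human the agent is acting for
    agent_id: str      # which agent is making the request
    action: str        # e.g. "purchase"
    amount_usd: float

# Illustrative delegation rules, checked on EVERY action,
# not once at login.
POLICY = {
    "alice": {"allowed_actions": {"purchase"}, "max_usd": 500.0},
}

def authorize(req: AgentAction) -> bool:
    """Allow this specific action only if it falls inside the
    principal's standing delegation; deny everything else."""
    rules = POLICY.get(req.principal)
    if rules is None:
        return False
    if req.action not in rules["allowed_actions"]:
        return False
    return req.amount_usd <= rules["max_usd"]

ok = authorize(AgentAction("alice", "shopper-7", "purchase", 450.0))
too_big = authorize(AgentAction("alice", "shopper-7", "purchase", 2000.0))
print(ok, too_big)  # True False
```

A real system would also weigh behavioral signals (is this purchase consistent with past intent?), but even this toy version shows the shift: authorization becomes a per-action question, not a per-session one.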
GOVERNANCE: DEFINE WHAT IS HAPPENING
Agents are autonomous by design. They’re meant to go off and do things on their own. To do this accurately, they need clear, defined guardrails and policies that say what systems, applications, or data they have permission to access, and for how long.
Let’s revisit the agent buying your driver. Instead of sticking to your budget, it orders a custom TaylorMade for four times as much. Again, this sounds silly when it’s golf, but there’s absolutely no margin for error when agents are making calls on patient care or power grids.
Without strict access controls, scope creep can happen faster than you can say, “I didn’t authorize that.” And since agents from different companies have to work together across thousands of enterprise workflows, governance rules have to apply to everyone, regardless of the AI model they’re using.
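One common way to express guardrails like these is a scoped, time-limited grant: the agent gets access to specific systems for a bounded window and nothing more. The sketch below is a hypothetical, minimal version of that idea; the scope names and TTL are made up for illustration.

```python
import time

class Grant:
    """A narrow, expiring permission: specific scopes, limited time."""
    def __init__(self, scopes, ttl_seconds):
        self.scopes = set(scopes)
        self.expires_at = time.time() + ttl_seconds

    def permits(self, scope: str) -> bool:
        # Deny anything out of scope or past the expiry window.
        return scope in self.scopes and time.time() < self.expires_at

# The shopping agent may read the catalog and create one order,
# for the next five minutes. It may NOT touch payments admin.
grant = Grant(scopes={"catalog:read", "order:create"}, ttl_seconds=300)

print(grant.permits("order:create"))    # True: in scope, not expired
print(grant.permits("payments:admin"))  # False: scope creep blocked
```

Because the grant itself is a plain data structure, the same rules can be enforced identically no matter which vendor or model the agent runs on.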
OBSERVABILITY: UNDERSTAND WHAT HAPPENED AND WHY
As agents start making decisions at the enterprise level, they create more liability. They’ll be roaming through sensitive networks and talking to your customers. We can’t let that be a black box. We have to be able to explain exactly what happened, why it happened, and who gave the green light.
Did that agent skirt its guardrails to spend $1,000 on your driver, or was a precise spend ceiling never established in the first place? Did it share private data on its own, or did someone on your team tell it to? Without a clear audit trail, businesses will be stuck in a loop with no way to improve.
It’s simple: You can’t manage what you can’t see. Without real observability, we lose accountability over agent behavior, which leads to waste, frustrated customers, and very real legal headaches. Nobody needs more of that.
ORCHESTRATE EVERY INTERACTION
We’re not just hosting conversations anymore; we’re managing a world of humans and AI. The way we run our businesses has to reflect that, starting now. Without continuous identity, governance, and observability, we’re heading for smarter, faster dysfunction.
No one vendor will own the entire AI ecosystem. To close the trust gap, we need a neutral broker that doesn’t care what cloud, data warehouse, or model you use; a layer that acts as the agentic nervous system, regulating signals, making sure things are secure, and keeping us in control of every single interaction.
Khozema Shipchandler is the CEO of Twilio.