Your domain admin can't DNS tunnel out of AWS. Your AI agent can.
Your domain admin can't silently execute unsanitized commands on every system that trusts a shared protocol. Your AI agent does it by design.
The last six weeks delivered the same finding five different times: AWS Bedrock, GCP Vertex AI, Microsoft Copilot, Salesforce Agentforce, Anthropic's Model Context Protocol. Different vendors. Same architectural failure. AI agents shipped with permission models that skip every lesson identity teams learned the hard way over the last two decades.
This isn't about prompt injection tricks or supply chain edge cases. This is about agents provisioned with privileges so broad that sandbox escapes become trivial, and IAM policies designed so loosely that privilege escalation is the default path, not the exception.
We know better. We've fired people for doing this with service accounts. But we're doing it again with agents because they're "different."
They're not different. They're just overprivileged services we haven't named correctly yet.
THE PROBLEM
AI agents are being deployed with the same permission anti-patterns that caused every major breach in the last fifteen years.
AWS Bedrock's AgentCore sandbox allowed DNS tunneling and credential exfiltration because agents ran with permissions broad enough to reach services they had no business touching. Not a zero-day. Not a sophisticated attack. DNS tunneling — the same technique we've blocked at the network edge since 2010.
GCP Vertex AI agents could pivot across cloud environments because IAM policies assumed agents needed access to everything a human data scientist might need. The "double agent" flaw wasn't a bug. It was overprivileging dressed up as convenience.
Microsoft Copilot and Salesforce Agentforce both leaked sensitive data through prompt injection because the trust boundary between user input and agent action didn't exist. External input flowed directly into privileged operations with no sanitization layer.
Here's the pattern: vendors are treating agents like users who need flexibility, not services that need constraints.
Anthropic's Model Context Protocol made it explicit. Commands execute without sanitization because the protocol assumes everything in the pipeline is trusted. That assumption works until someone realizes the entire supply chain is untrusted — and then you have full system compromise through a Python package.
We know this doesn't work. We spent a decade removing local admin rights, segmenting service accounts, and enforcing least privilege because we learned that convenience always loses to containment.
But agents get shipped with godmode IAM policies, sandbox escapes, and protocol-level trust that would get a junior engineer's pull request rejected.
Why? Because no one is treating agent provisioning as an identity problem. It's getting built by ML teams who've never seen a Kerberos ticket, reviewed by product teams who think "friction" is the enemy, and deployed by platform teams who assume the sandbox will hold.
The sandbox never holds.
WHY THIS IS URGENT NOW
Agents are moving from pilot projects to production workloads faster than IAM controls are being built around them.
Six months ago this was theoretical. Today agents are provisioned in every major cloud platform, integrated into SaaS workflows, and trusted with access to data that humans need three approvals to touch.
The supply chain risk is compounding. The litellm package compromise in March showed that agent dependencies are being actively targeted. One malicious wheel file in a widely used library means every agent that imports it is compromised — and most organizations don't have artifact verification or dependency pinning in place yet.
Regulators are starting to notice. The gap between how agents are actually permissioned and how compliance frameworks expect privilege to be managed is wide enough that the first audit findings are landing. When your AI agent can exfiltrate data through DNS and your access review log shows nothing, you've got a problem that doesn't fit into any existing control narrative.
This isn't coming. It's here. You just haven't inventoried it yet.
WHAT GOOD LOOKS LIKE
Treat agents like the highest-risk service accounts in your environment. Because that's what they are.
1. Provision agents with task-specific IAM policies, not role-wide permissions. One policy per function. If the agent summarizes documents, it gets read access to the document store and write access to the summary table. Nothing else. No lateral movement. No privilege escalation path.
2. Enforce input sanitization at the protocol boundary, not inside the agent. External input — user prompts, API calls, file uploads — gets validated before it reaches the agent runtime. Treat every input like SQL injection until proven otherwise.
3. Log agent actions as privileged operations, not user activity. Your SIEM needs to know when an agent accessed a resource, what permission it used, and what data left the environment. If you can't answer those questions in under sixty seconds, your logging model is wrong.
4. Implement network-level containment for agent workloads. Agents should not be able to reach the internet, internal DNS resolvers, or cloud metadata services unless the specific task requires it. Default deny. Allowlist by exception.
5. Verify dependencies before they enter the agent runtime. Every package, library, and model weight gets hash verification and provenance checks. If you're pulling from a public repository without artifact signing, you're one supply chain compromise away from full environment takeover.
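Point 1 can be made concrete with a task-scoped policy document. This is a minimal sketch in AWS IAM JSON form; the bucket, table, and account identifiers are hypothetical placeholders, not names from any real deployment:

```python
import json

# Hypothetical resource names for a document-summarizer agent.
# Each statement grants exactly one action on exactly one resource --
# no wildcards on actions, no role-wide grants, no escalation path.
SUMMARIZER_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadSourceDocuments",
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::doc-store/incoming/*",
        },
        {
            "Sid": "WriteSummariesOnly",
            "Effect": "Allow",
            "Action": ["dynamodb:PutItem"],
            "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/summaries",
        },
    ],
}

print(json.dumps(SUMMARIZER_POLICY, indent=2))
```

If the agent gains a second function, it gets a second policy, not a broadened one. That keeps the blast radius of any single compromise equal to one task.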
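Point 2, sanitization at the protocol boundary, can be sketched as a fail-closed validator that runs before input ever reaches the agent runtime. The patterns and length budget below are illustrative assumptions; a production system would use an allowlist grammar per input type rather than a deny-list of regexes:

```python
import re

# Illustrative deny-list patterns -- an assumption for this sketch,
# not an exhaustive injection catalog.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"[;&|`$]"),   # shell metacharacters
    re.compile(r"\x1b\["),    # ANSI escape sequences
]

MAX_INPUT_LEN = 4096  # arbitrary budget for the example


def sanitize(raw: str) -> str:
    """Validate external input before it reaches the agent runtime.

    Raises ValueError instead of passing suspect input through:
    fail closed, the same way you treat untrusted SQL.
    """
    if len(raw) > MAX_INPUT_LEN:
        raise ValueError("input exceeds length budget")
    for pattern in INJECTION_PATTERNS:
        if pattern.search(raw):
            raise ValueError(f"input rejected: matched {pattern.pattern!r}")
    return raw
```

The important design choice is where this runs: outside the agent, at the boundary, so a compromised or confused model never gets to decide whether its own input was safe.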
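For point 3, the shape of a privileged-operation event might look like the sketch below. The field names are assumptions for illustration; map them to whatever schema your SIEM expects. The point is that agent activity lands in the privileged-access pipeline, not in generic user telemetry:

```python
import json
import time
import uuid


def log_agent_action(agent_id: str, action: str, resource: str,
                     permission: str, bytes_out: int) -> str:
    """Emit an agent action as a structured privileged-operation event."""
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "principal_type": "agent",      # distinguishes agents from human users
        "agent_id": agent_id,
        "action": action,
        "resource": resource,
        "permission_used": permission,  # answers "what permission did it use?"
        "bytes_egressed": bytes_out,    # answers "what data left the environment?"
    }
    return json.dumps(event)
```

With `principal_type`, `permission_used`, and `bytes_egressed` as first-class fields, the sixty-second questions become single queries instead of log archaeology.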
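Point 5 reduces to a simple gate: no artifact enters the runtime unless its digest matches a pinned hash. This is a minimal sketch using SHA-256; the manifest here is a hardcoded dict with a hypothetical package name, whereas in practice it would come from a signed lockfile (pip's `--require-hashes` mode implements the same idea for Python dependencies):

```python
import hashlib
from pathlib import Path

# Hypothetical pinned manifest: artifact filename -> expected SHA-256.
# (The digest shown is the SHA-256 of an empty file, used for the example.)
PINNED_HASHES = {
    "agent_tool-1.4.2-py3-none-any.whl":
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}


def verify_artifact(path: Path) -> bool:
    """Allow an artifact into the runtime only if its digest matches the pin."""
    expected = PINNED_HASHES.get(path.name)
    if expected is None:
        return False  # unpinned artifacts never enter the runtime
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest == expected
```

Default deny applies here too: an unknown wheel is rejected outright, which is exactly the control that turns a poisoned public package into a failed install instead of an environment takeover.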
This isn't a new framework. It's just IAM fundamentals applied to a workload type that's been getting a free pass.
WRAP UP
What's the most overprivileged agent you've found in your environment so far — and how long had it been running before you noticed?
Identity Decoded publishes every week at identity-decoded.com.