You spent two years hardening credential policies. MFA everywhere. Passwordless where you could. Risk-based step-up that actually works.

Then an employee's AI assistant got compromised and the attacker walked past all of it.

The Vercel breach in early 2025 happened because OAuth tokens don't care about your authentication posture. The attacker didn't need to phish credentials. They didn't need to bypass MFA. They compromised an employee's AI tool, extracted the OAuth tokens it was using, and those tokens became a master key. Customer preview deployment tokens. Source code access. Data exfiltration at scale.

Here's what makes this different: the tokens weren't stolen from your infrastructure. They were harvested from the applications your employees connected to do their jobs faster. AI coding assistants. Workflow automation. The productivity tools you approved because they used OAuth and that felt safe.

It wasn't safe. It was just a different perimeter, and you weren't watching it.

---

Your MFA Doesn't See This Coming

OAuth was supposed to fix the password problem. No more embedding credentials in code. No more long-lived API keys in spreadsheets. Delegate access, scope it tight, revoke it when the session ends.

That's the theory. In production, OAuth tokens too often behave like credentials with sweeping privilege and no effective expiration policy.

When your employee connects an AI code assistant to GitHub, that tool gets a token. The token carries the employee's full permission set. It doesn't expire when they close their laptop. It doesn't trigger your conditional access policies. It doesn't log to your SIEM in a way that correlates with user activity.
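You can see this for yourself with GitHub, which reports a classic OAuth token's granted scopes in the `X-OAuth-Scopes` header on authenticated API responses. A minimal sketch of auditing what a cached token can actually do (the header parsing is the point here; the HTTP call itself is elided, and the `needed` scope set is an illustrative assumption):

```python
def parse_oauth_scopes(headers: dict) -> set[str]:
    """Extract granted scopes from GitHub's X-OAuth-Scopes response header."""
    raw = headers.get("X-OAuth-Scopes", "")
    return {s.strip() for s in raw.split(",") if s.strip()}

def is_overscoped(granted: set[str], needed: set[str]) -> bool:
    """A token is over-scoped if it carries grants beyond what the task needs."""
    return bool(granted - needed)

# Example: an AI assistant that only needs read/write access to repos,
# holding a token that can also administer the org and delete repositories
headers = {"X-OAuth-Scopes": "repo, admin:org, delete_repo"}
granted = parse_oauth_scopes(headers)
print(is_overscoped(granted, needed={"repo"}))  # True
```

Run that check against the tokens your connected tools are holding and the "full permission set" problem stops being abstract.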

The tool caches it. The tool uses it. And if the tool gets compromised — through prompt injection, supply chain attack, or a simple data breach like Moltbook's exposed database of 35,000 AI agent credentials — the attacker inherits everything.

Microsoft and Salesforce both patched prompt injection flaws in their AI agents in early 2025 that would have allowed external attackers to exfiltrate sensitive data by manipulating agent behavior. The attack vector wasn't the identity system. It was the token the agent was holding.

We're also seeing self-propagating worms in npm packages that steal developer tokens and use them to push malicious code into other repositories. The worm doesn't break in — it uses legitimate tokens to walk through open doors.

Here's the part that should bother you: you have no idea how many of these tokens are live in your environment right now. Your identity governance tooling doesn't see them. Your privileged access management system doesn't track them. They're not in your directory. They're in SaaS application databases and local credential stores on devices you don't manage.

And when they get used, they look like normal user activity — because technically, they are.

---

The Threat Landscape Moved While You Were Watching Passwords

AI agents and coding assistants went from pilot projects to production tooling in 18 months. Developers are using them. Product teams are using them. Even security teams are using them.

Every one of those tools requires OAuth tokens to access the systems they automate. GitHub. Jira. Slack. Your internal APIs. The tokens are long-lived because short-lived tokens break the user experience. They're broadly scoped because narrow scopes break functionality.

And now we have researchers demonstrating prompt injection attacks against Claude Code, Gemini CLI, and GitHub Copilot Agents that let attackers exfiltrate data by hiding commands in code comments. The attack surface isn't hypothetical anymore.

The Harvester threat group is already deploying Linux backdoors that abuse Microsoft Graph API tokens for persistence and lateral movement. They're not exploiting your firewall — they're exploiting the token ecosystem you built to improve productivity.

---

Your Token Hardening Playbook

This isn't a technology problem. It's an architecture problem. You need to treat tokens the same way you treat credentials — maybe more strictly, because tokens carry more implicit trust.

1. Inventory OAuth grants like you inventory privileged accounts. If you don't know which applications have tokens scoped to your critical systems, you can't revoke them when something goes wrong. Build a recurring process that enumerates active OAuth grants per user and flags anomalies.
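The anomaly pass over that inventory can be simple. A sketch, assuming each grant record exported from your IdP carries the app name and grant date (field names, the allowlist, and the 90-day policy are all illustrative; real exports differ by provider):

```python
from datetime import datetime, timedelta

APPROVED_APPS = {"GitHub", "Jira", "Slack"}   # illustrative allowlist
MAX_GRANT_AGE = timedelta(days=90)            # illustrative policy

def flag_grants(grants: list[dict], now: datetime) -> list[tuple[str, str]]:
    """Return (app, reason) pairs for grants that deserve a closer look."""
    findings = []
    for g in grants:
        if g["app"] not in APPROVED_APPS:
            findings.append((g["app"], "unapproved application"))
        if now - g["granted_at"] > MAX_GRANT_AGE:
            findings.append((g["app"], "grant older than policy allows"))
    return findings

inventory = [
    {"app": "GitHub", "granted_at": datetime(2025, 5, 1)},
    {"app": "ai-notetaker", "granted_at": datetime(2024, 11, 3)},
]
for app, reason in flag_grants(inventory, now=datetime(2025, 6, 1)):
    print(f"{app}: {reason}")
```

Schedule it, diff the output week over week, and stale grants stop accumulating silently.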

2. Enforce token expiration and rotation policies at the directory level, not the application level. Many SaaS apps honor token lifetime policies when your IdP enforces them. Set maximum token lifetime to hours, not months. Yes, users will need to re-authenticate more often. That's the point.
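In Microsoft Entra ID, for example, access-token lifetime is configured through a tokenLifetimePolicy object posted to Microsoft Graph. A sketch of the payload shape as documented for the Graph v1.0 `tokenLifetimePolicies` endpoint (the four-hour value and display name are illustrative; verify against current Microsoft docs before deploying):

```python
import json

# Illustrative payload for:
# POST https://graph.microsoft.com/v1.0/policies/tokenLifetimePolicies
policy = {
    "definition": [json.dumps({
        "TokenLifetimePolicy": {
            "Version": 1,
            "AccessTokenLifetime": "04:00:00",  # hours, not months
        }
    })],
    "displayName": "ShortLivedAccessTokens",
    "isOrganizationDefault": True,
}
print(json.dumps(policy, indent=2))
```

Other IdPs expose the same knob under different names; the principle is identical: the directory, not each app, decides how long a token lives.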

3. Scope tokens to the minimum permission set required for the specific task, then audit which applications are requesting broad scopes. If an AI coding assistant is asking for admin-level GitHub access, that's a red flag. Treat scope creep the same way you treat privilege creep.
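A consent-time review can start as an allowlist check plus a creep diff against the last audit. A sketch, where the broad-scope set is illustrative and mixes GitHub and Microsoft Graph scope names as examples:

```python
# Scopes that should trigger human review when a third-party tool
# requests them (illustrative; tune to the providers you federate with)
BROAD_SCOPES = {
    "repo", "admin:org", "delete_repo",            # GitHub examples
    "Directory.ReadWrite.All", "Mail.ReadWrite",   # Microsoft Graph examples
}

def review_scope_request(requested: set[str]) -> list[str]:
    """Return the requested scopes that exceed least privilege."""
    return sorted(requested & BROAD_SCOPES)

def detect_scope_creep(previous: set[str], current: set[str]) -> list[str]:
    """Return scopes added since the last audit."""
    return sorted(current - previous)

print(review_scope_request({"read:user", "repo", "admin:org"}))
# ['admin:org', 'repo']
print(detect_scope_creep({"read:user"}, {"read:user", "repo"}))
# ['repo']
```

Anything in the first list blocks auto-approval; anything in the second goes to the same review queue you already run for privilege changes.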

4. Log token issuance and usage in your SIEM and correlate it with user behavior analytics. A token issued to a developer's AI tool at 3 AM from an IP in a different country should trigger the same response as a compromised password. Right now, it doesn't even generate an alert.
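That 3 AM scenario can be a one-function detection rule once issuance events land in the SIEM. A sketch, assuming a per-user baseline of working hours and usual countries (the event and baseline shapes are illustrative; map them from your actual log schema):

```python
from datetime import datetime

def is_anomalous_issuance(event: dict, baseline: dict) -> bool:
    """Flag token issuance outside the user's usual hours AND countries."""
    hour = event["issued_at"].hour
    off_hours = not (baseline["work_start"] <= hour < baseline["work_end"])
    foreign = event["country"] not in baseline["countries"]
    # Requiring both signals keeps the rule high-confidence and low-noise
    return off_hours and foreign

event = {"issued_at": datetime(2025, 3, 2, 3, 0), "country": "RO"}
baseline = {"work_start": 8, "work_end": 19, "countries": {"US"}}
print(is_anomalous_issuance(event, baseline))  # True
```

Route a hit to the same playbook as a compromised password, because that is effectively what it is.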

5. Build a revocation playbook that includes third-party OAuth tokens, not just internal credentials. When you offboard an employee or detect a compromise, revoking their directory password isn't enough. You need to revoke every active OAuth grant tied to their identity across every connected application.
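The sweep logic for that playbook is a loop over the grant inventory from step 1. A sketch with the revocation call injected as a callable, so the logic stays testable; the actual revoker would hit your IdP or each provider's endpoint (Microsoft Graph exposes `DELETE /oauth2PermissionGrants/{id}`, for instance -- verify the call for each provider you use):

```python
def revoke_all_grants(user_id: str, grants: list[dict], revoke) -> list[str]:
    """Revoke every third-party OAuth grant tied to one identity.

    `grants` is the inventory from step 1; `revoke` is a callable that
    hits the relevant revocation endpoint for a single grant ID.
    Returns the IDs revoked, for the incident record.
    """
    revoked = []
    for g in grants:
        if g["user_id"] == user_id:
            revoke(g["grant_id"])
            revoked.append(g["grant_id"])
    return revoked

# Offboarding example with a stub revoker
calls = []
inventory = [
    {"user_id": "u1", "grant_id": "g-github"},
    {"user_id": "u1", "grant_id": "g-ai-assistant"},
    {"user_id": "u2", "grant_id": "g-jira"},
]
print(revoke_all_grants("u1", inventory, calls.append))
# ['g-github', 'g-ai-assistant']
```

Wire the same function into both offboarding and incident response; the only difference is what triggers it.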

---

What's your current process for tracking OAuth tokens issued to AI tools and third-party applications — and how are you handling revocation when something smells wrong?
