
The Auth Problem Nobody Talks About

OAuth isn’t built for AI agents. We explore why trust, not just access, is the real authentication challenge for agentic systems.

Product
Ty Avnit
July 24, 2025

Everyone is building AI agents.

Nobody is solving the real authentication problem.

I discovered this gap during a recent conversation with a founder building AI agents that connect to enterprise tools. He was confident the problem was solved—OAuth exists, users grant permissions, case closed.

I wasn't convinced. At Civic, we're building MCP Hub specifically to tackle AI agent authentication because we see the complexity he's overlooking.

Turns out, we were both right. And we were both missing something critical.

The Surface Problem

Let's start with what the founder sees. When you use an app, you go through OAuth once. You click "Allow access to Gmail" and boom — the app can read your email. The permissions are clear. The access is limited.

This works fine for apps because apps are predictable. They do the same thing every time. Read your calendar. Send a tweet. Upload a photo.

But agents are different.

The Real Problem

Agents don't just execute one function. They think. They adapt. They make decisions you didn't explicitly program.

Say you tell an agent to "help me organize my week." It might:

  • Read your calendar
  • Check your email for commitments
  • Look at your task list
  • Maybe even book a meeting

Each step requires different permissions. And the agent figures out these steps as it goes.

With traditional OAuth, you'd have to approve each service individually. Gmail access? Approve. Calendar access? Approve. Task management? Approve.

That's annoying. But here's the bigger problem: OAuth scopes are too broad.
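To make that concrete, here's roughly what the consent step looks like today. This is a minimal sketch that builds a consent URL against Google's standard OAuth endpoint using its published scope strings; the client ID and redirect URI are placeholders, not real values.

```typescript
// Sketch of the consent URL an agent's host app would build today.
// Each scope is all-or-nothing: "gmail.readonly" means the entire
// mailbox, not "emails from my boss this week".
const scopes = [
  "https://www.googleapis.com/auth/gmail.readonly", // every email, ever
  "https://www.googleapis.com/auth/calendar",       // the whole calendar
  "https://www.googleapis.com/auth/tasks",          // the whole task list
];

const consentUrl =
  "https://accounts.google.com/o/oauth2/v2/auth?" +
  new URLSearchParams({
    client_id: "YOUR_CLIENT_ID",                  // placeholder
    redirect_uri: "https://example.com/callback", // placeholder
    response_type: "code",
    scope: scopes.join(" "),
  });

console.log(consentUrl); // one broad "Allow" screen per service
```

Notice there is nowhere in that request to say which emails, whose calendar events, or why. The scope string is the entire vocabulary.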

The Hidden Problem

When you give an agent Gmail access, you're not just giving it access to today's emails. You're giving it access to everything. Every email you've ever sent or received. Every attachment. Every conversation.

The agent might only need to read emails from your boss this week. But OAuth can't make that distinction.

This isn't hypothetical. I brought up GitHub's recent security vulnerability: researchers found that a malicious comment injected into a public repository could, routed through GitHub's MCP server, trick an agent into leaking code from private repositories.

The solution seems obvious: only read comments from trusted team members. But OAuth can't enforce that rule.
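As a sketch of where that rule could live if enforcement sat above OAuth: a hypothetical filter the agent host applies to comments before the model ever sees them. The `Comment` shape and `TRUSTED_AUTHORS` list are illustrative, not GitHub's actual API.

```typescript
// Hypothetical guard: drop comments from untrusted authors before they
// reach the model. OAuth has no way to express this rule; it has to
// live in a policy layer between the agent and the tool.
interface Comment {
  author: string; // illustrative shape, not GitHub's real comment type
  body: string;
}

const TRUSTED_AUTHORS = new Set(["alice", "bob"]); // your actual teammates

function filterComments(comments: Comment[]): Comment[] {
  return comments.filter((c) => TRUSTED_AUTHORS.has(c.author));
}
```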

The Timing Problem

There's another problem: the time dimension.

Traditional apps ask for permissions once. Agents need permissions that evolve.

Your agent might start by reading your calendar. Then it realizes it needs to send emails. Then it wants to create documents. Then it needs to access your CRM.

Do you want to approve each new permission request? That breaks the flow. Do you want to pre-approve everything? That's too risky.
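One middle path is just-in-time grants: the agent starts with nothing, and each new capability triggers a narrow, human-approved escalation, prompting only when the grant set actually grows. The sketch below is hypothetical; `askUser` and the permission names are stand-ins, not a real API.

```typescript
// Hypothetical just-in-time grants: start with nothing, escalate
// narrowly, and interrupt the user only for genuinely new permissions.
const granted = new Set<string>();

// Stand-in for a real approval UI; here it auto-approves for the demo.
async function askUser(question: string): Promise<boolean> {
  console.log(question);
  return true;
}

async function ensurePermission(perm: string): Promise<void> {
  if (granted.has(perm)) return; // already approved: no interruption
  const ok = await askUser(`Agent wants "${perm}". Allow?`);
  if (!ok) throw new Error(`Permission denied: ${perm}`);
  granted.add(perm); // the grant set grows only with explicit consent
}

// The agent discovers needs as it plans:
await ensurePermission("calendar.read");
await ensurePermission("email.send"); // prompts only the first time
```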

The Gap We're Not Addressing

Toward the end of our conversation, the founder made another good point. His users don't seem worried about giving agents access to their tools. They're careful about permissions. They use read-only access when possible. They create agent-specific databases.

But here's what I think he's missing: his users are early adopters. They understand the risks. They know how to set up proper safeguards.

Most people don't.

When AI agents go mainstream, we'll need better guardrails. Not just OAuth scopes, but real policy enforcement. Not just "read access to Gmail," but "read emails from these senders, during work hours, for these specific purposes."
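Here's one way such a policy might be written down, as a sketch. Nothing here is a real standard; the shape is just meant to show how much finer-grained this is than a scope string.

```typescript
// Sketch of a fine-grained policy no OAuth scope can express.
// The shape is illustrative, not an existing standard.
const policy = {
  resource: "gmail",
  action: "read",
  senders: ["boss@company.com", "pm@company.com"], // only these senders
  hours: { start: 9, end: 17 },                    // work hours only
  purpose: "weekly-planning",                      // logged with every access
};

function isAllowed(sender: string, hourOfDay: number): boolean {
  return (
    policy.senders.includes(sender) &&
    hourOfDay >= policy.hours.start &&
    hourOfDay < policy.hours.end
  );
}
```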

What This Means for Builders

Conversations like this are valuable: two people who actually build things, wrestling with a real problem, staying focused on what matters.

But here's a caution: be wary of anyone who claims to have this space figured out. There are no experts yet. We're all learning as we go.

Another interesting point surfaced during our conversation: authentication for AI agents could be compared to Unix file permissions, where a process can't touch a file unless someone explicitly granted access with a tool like chmod. That explicit-grant pattern isn't going away.

Something rings true about this observation. But I think it misses a nuance: how agents operate over time.

I think the builders who solve this early will have a huge advantage. It's better to solve this problem today than to be forced into it tomorrow.

And here's why the timing might be more urgent than we think: Texas just passed TRAIGA (the Texas Responsible Artificial Intelligence Governance Act), the most comprehensive AI governance law in the US.

Starting January 2026, companies must evaluate and monitor every third-party AI vendor for compliance, ethical standards, and risk management.

Think about what that means: you're now responsible for how your AI vendors handle data, make decisions, and comply with regulations. Every AI service agreement needs compliance language. You need ongoing monitoring for bias and accuracy. If your vendor's AI violates the law, you're potentially on the hook.

The compliance burden is massive. But there's another way: local AI deployment eliminates vendor dependencies entirely. When you own your AI infrastructure, compliance becomes internal governance instead of vendor management.

Sometimes the best vendor risk management strategy is having no vendors to manage.

Think about it: every company building AI agents will eventually hit this wall. The bigger the company, the more they'll need granular control. The more sensitive the data, the more they'll need audit trails.

We're not just building authentication. We're building trust infrastructure for the AI age.

The Real Question

The question isn't whether OAuth is enough. The question is: how do you build trust with a system where machines act on your behalf?

This is the crux of everything we're building. Trust isn't just about permissions. It's about transparency. It's about being able to review what your agent did and why. It's about being able to revoke access not just to services, but to specific actions within those services.
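Concretely, that suggests two primitives most agent stacks don't have yet: an action-level audit record, and revocation keyed to actions rather than whole services. A hypothetical shape, with names invented for illustration:

```typescript
// Hypothetical audit record: every agent action is logged with its
// justification, and revocation is checked per action, not per service.
interface AuditEntry {
  timestamp: string;
  service: string; // e.g. "gmail"
  action: string;  // e.g. "read_message", not just "gmail access"
  reason: string;  // the agent's stated justification, reviewable later
}

const revokedActions = new Set<string>(["gmail:send_message"]); // user's choice

function checkAndLog(entry: AuditEntry, log: AuditEntry[]): boolean {
  if (revokedActions.has(`${entry.service}:${entry.action}`)) {
    return false; // this specific action is revoked; the rest still work
  }
  log.push(entry); // the trail the user reviews: what happened, and why
  return true;
}
```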

But trust goes deeper than that. It's about predictability. When I give an agent access to my email, I need to know it won't suddenly decide to send messages I didn't approve. When it reads my calendar, I need to know it won't book meetings without asking.

The challenge is that trust and autonomy are in tension. The more autonomous an agent becomes, the harder it is to predict what it will do. But prediction is what builds trust.

We're still figuring this out. But I think the companies that get it right will build the foundation for everything else.

That's why I'm excited about this problem. It's not just technical. It's about reimagining how humans and machines work together.

The future of AI isn't just about making agents smarter. It's about making them trustworthy.

And trust, it turns out, is a lot more complicated than OAuth.

___


Intrigued? Check out what we’re working on at Civic.