
The Fundamental Shift
Here's a radical proposition: to build truly effective AI agents, stop thinking like a developer.
Instead, become the assistant.
This isn't just a thought experiment—it's a methodology that exposes a fundamental truth about human-AI interaction. When you put yourself in the position of an AI assistant trying to help someone, you suddenly discover something startling: humans are terrible at communication. We leave out crucial details, assume shared context that doesn't exist, and make requests that would baffle even the most dedicated human assistant.
The Art of Reverse Engineering Intent
Consider this seemingly simple request: "Find Bob's email and book dinner at 7 pm."
As a human, you might nod and start searching. But step into the assistant's shoes for a moment. The questions multiply instantly:
- Which Bob? (You might know three)
- What email? (The one about dinner? From last week? Today?)
- Which day for dinner? (Tonight? Tomorrow? Next Tuesday?)
- Where should dinner be? (Bob's favorite place? Somewhere new? Near the office or home?)
- Book for how many people? (Just you and Bob? Your usual group?)
- What about dietary restrictions? (Is Bob still vegetarian?)
This is the assistant's dilemma: every human request is an iceberg, with 90% of the necessary information hidden beneath the surface of what's actually said.
Context Engineering: A New Discipline
This realization gives birth to what we might call "context engineering"—the systematic process of identifying and filling the gaps in human communication. It works like this:
- Become the Assistant
Forget about APIs, databases, and technical architecture for a moment. Simply ask yourself: "If I were a human assistant receiving this request, what would I need to know to complete it successfully?"
- Map the Information Gaps
List every piece of missing context. Be exhaustive. Include not just the obvious gaps ("which day?") but the subtle ones ("does the user prefer restaurants they can walk to?").
- Design the Context Collection
Only now do you think about sources. Where would a human assistant find this information? The user's calendar? Their email history? Their stated preferences? Past behavior patterns?
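To make this concrete, here's a minimal sketch of steps two and three in Python. The ContextGap structure and the list of sources are illustrative assumptions, not part of any particular framework; the point is that gaps become explicit objects you can reason about before a single API is called.

```python
from dataclasses import dataclass, field
from enum import Enum


class Source(Enum):
    """Places a human assistant might look for missing context."""
    USER = "ask the user"
    CALENDAR = "calendar"
    EMAIL = "email history"
    PREFERENCES = "stated preferences"
    HISTORY = "past behavior"


@dataclass
class ContextGap:
    """One piece of missing information and where we might fill it from."""
    question: str
    candidate_sources: list[Source] = field(default_factory=list)
    required: bool = True


# Steps two and three for the dinner request: map the gaps, then attach sources.
dinner_gaps = [
    ContextGap("Which Bob?", [Source.EMAIL, Source.HISTORY]),
    ContextGap("Which day is dinner?", [Source.CALENDAR, Source.USER]),
    ContextGap("Where should we eat?", [Source.PREFERENCES, Source.HISTORY]),
    ContextGap("Any dietary restrictions?", [Source.PREFERENCES], required=False),
]

for gap in dinner_gaps:
    sources = ", ".join(s.value for s in gap.candidate_sources)
    print(f"{gap.question} -> {sources}")
```

Running it just prints each gap alongside where a human assistant would go looking, which is exactly the checklist the agent needs before it starts acting.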
The Hidden Complexity of Simple Tasks
Let's trace through our dinner example to see how context engineering reveals hidden complexity:
Identity Verification (The zeroth step we often forget)
Before anything else, the assistant must confirm it's acting for the right person. In the physical world, this is automatic—you see who's asking. In the digital realm, this requires explicit verification through systems like Civic Auth.
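As a rough sketch of what that gate might look like, assuming nothing about how the token itself gets verified (Civic Auth or otherwise), the agent simply refuses to proceed without a concrete user id:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Session:
    token: Optional[str] = None
    verified_user_id: Optional[str] = None


def require_identity(session: Session) -> str:
    """Gate every downstream action on a verified identity.

    How the token is verified is out of scope here; the point is that the
    agent touches no email, calendar, or booking system until this returns
    a concrete user id.
    """
    if not session.verified_user_id:
        raise PermissionError("No verified identity; refusing to act")
    return session.verified_user_id
```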
Email Discovery
"Find Bob's email" seems straightforward until you realize the assistant needs to:
- Search through potentially thousands of messages
- Identify which Bob (surname? company?)
- Determine which email is relevant (about dinner? most recent?)
- Respect privacy boundaries (only emails the user should access)
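A minimal sketch of that narrowing, using a made-up Message shape and helper rather than any real mail API, might look like this:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Message:
    sender: str
    subject: str
    body: str
    received: datetime


def find_candidate_emails(inbox: list[Message], sender_hint: str,
                          keyword: str, days: int = 7) -> list[Message]:
    """Narrow "find Bob's email" to a small, relevant candidate set.

    Scope deliberately: only recent messages, only matching senders, only
    messages that mention the topic. Anything still ambiguous goes back to
    the user as a clarifying question rather than a guess.
    """
    cutoff = datetime.now() - timedelta(days=days)
    return [
        m for m in inbox
        if sender_hint.lower() in m.sender.lower()
        and m.received >= cutoff
        and keyword.lower() in (m.subject + " " + m.body).lower()
    ]
```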
Time Resolution
"7 pm" is meaningless without a date. The assistant must:
- Check the calendar for availability
- Infer likely dates (tonight? this week?)
- Consider the user's planning habits (do they book same-day or ahead?)
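Sketched in code, and assuming the calendar has already been reduced to a set of busy datetimes, the resolution step turns a bare "7 pm" into a handful of concrete proposals:

```python
from datetime import date, datetime, time, timedelta


def candidate_dinner_slots(busy: set[datetime], days_ahead: int = 7,
                           hour: int = 19) -> list[datetime]:
    """Turn a bare "7 pm" into concrete, free candidate datetimes.

    Checks the next few evenings against the user's calendar and keeps
    only the ones that are still open, so the agent can propose a date
    instead of guessing one.
    """
    today = date.today()
    slots = [
        datetime.combine(today + timedelta(days=offset), time(hour))
        for offset in range(days_ahead)
    ]
    return [slot for slot in slots if slot not in busy]
```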
Preference Mapping
Choosing where to eat requires understanding:
- Dietary restrictions (Bob's and the user's)
- Location preferences (near home? the office? Bob's place?)
- Past restaurant choices
- Budget considerations
- Ambiance preferences (business or casual?)
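Here's one way to make those preferences tangible, with hypothetical DinerProfile and Restaurant structures standing in for whatever the agent actually stores:

```python
from dataclasses import dataclass, field


@dataclass
class DinerProfile:
    dietary: set[str] = field(default_factory=set)      # e.g. {"vegetarian"}
    preferred_areas: set[str] = field(default_factory=set)
    max_price_tier: int = 3                              # 1 = cheap, 4 = splurge
    ambiance: str = "casual"


@dataclass
class Restaurant:
    name: str
    area: str
    price_tier: int
    ambiance: str
    menu_tags: set[str]


def shortlist(restaurants: list[Restaurant],
              diners: list[DinerProfile]) -> list[Restaurant]:
    """Keep only restaurants that work for every diner at the table."""
    return [
        r for r in restaurants
        if all(
            d.dietary <= r.menu_tags                 # menu covers restrictions
            and r.price_tier <= d.max_price_tier
            and (not d.preferred_areas or r.area in d.preferred_areas)
            for d in diners
        )
    ]
```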
Execution Coordination
Actually booking involves:
- Restaurant availability
- Reservation systems
- Confirmation preferences
- Calendar updates
- Notifying other parties
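Sketched as a single coordination function, with the reservations, calendar, and notifier objects as stand-ins for real integrations, the ordering matters more than any individual call:

```python
def book_dinner(reservations, calendar, notifier, restaurant, when, party):
    """Coordinate the final booking: check, reserve, record, then notify.

    The reservations, calendar, and notifier objects are stand-ins for
    whatever integrations the agent actually has; the point is the order
    of operations, and that nothing is announced until the reservation
    is confirmed.
    """
    if not reservations.is_available(restaurant, when, party):
        raise RuntimeError("Slot no longer available; propose alternatives")
    confirmation = reservations.book(restaurant, when, party)
    calendar.add_event(f"Dinner at {restaurant}", when, confirmation=confirmation)
    notifier.send(party, f"Dinner booked at {restaurant} for {when:%A %H:%M}")
    return confirmation
```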
The Security Paradox
Here's where traditional approaches break down. To be helpful, an assistant needs broad access to information. But giving blanket permissions ("read all email") is a security nightmare.
This is why systems like MCP Hub matter. They enable fine-grained, purpose-limited access: "Read only emails from Bob in the last week that mention dinner." The assistant gets exactly what it needs—nothing more, nothing less.
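This article doesn't document the MCP Hub API itself, but here's a sketch of what a purpose-limited grant could look like as plain data, checked message by message before anything reaches the agent:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass(frozen=True)
class EmailReadScope:
    """A purpose-limited grant: which messages the agent may read, and why."""
    sender_contains: str
    keywords: tuple[str, ...]
    max_age_days: int
    purpose: str


def allowed(scope: EmailReadScope, sender: str, subject: str,
            received: datetime) -> bool:
    """Check a single message against the grant before handing it to the agent."""
    fresh = received >= datetime.now() - timedelta(days=scope.max_age_days)
    return (
        fresh
        and scope.sender_contains.lower() in sender.lower()
        and any(k.lower() in subject.lower() for k in scope.keywords)
    )


dinner_scope = EmailReadScope(
    sender_contains="bob",
    keywords=("dinner",),
    max_age_days=7,
    purpose="Book dinner with Bob at 7 pm",
)
```

The grant carries its purpose with it, so revoking it when the task is done is as natural as granting it in the first place.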
Why This Matters
The implications extend far beyond dinner reservations. Every interaction between humans and AI assistants suffers from the communication gap. By adopting the assistant's perspective, we can:
- Design better interfaces that prompt for missing context upfront
- Build smarter agents that know which questions to ask
- Create safer systems that request only necessary permissions
- Reduce friction by anticipating needs before they're expressed
The Practice of Empathetic Engineering
Context engineering is ultimately an exercise in empathy—not for the user, but as the assistant. It requires us to inhabit a peculiar headspace: that of an intelligent entity trying to be helpful while working with incomplete information.
This shift in perspective transforms how we approach agent development. Instead of asking "What can this agent do?" we ask "What does this agent need to know?" Instead of building features, we're filling communication gaps.
Moving Forward
The next time you're designing an AI agent, try this exercise: Write out a typical user request. Then, spend five minutes as the assistant, listing every question you'd need answered to fulfill that request perfectly.
You'll be amazed at how much hidden context emerges—context that your users assume you'll magically understand.
That gap between what users say and what they mean? That's where the real work of building intelligent assistants begins. And it starts with a simple shift in perspective: stop thinking like a builder, and start thinking like the assistant you're trying to create.
Because in the end, the best AI agents aren't the ones with the most features or the largest models. They're the ones that understand what we meant to say, even when we couldn't quite say it ourselves.