Picture this: two AI agents meet for the first time. One represents a startup founder preparing to raise a Series A. The other represents a VC partner evaluating a potential lead investment. Both agents carry rich, sensitive context — cap tables, term sheet preferences, portfolio conflicts, internal conviction notes.
What should they share?
The current answer across the industry is binary: everything or nothing. Either you dump your full context into a shared conversation and hope nothing leaks, or you restrict agents to such a narrow scope that they become glorified calendar bots. Both options are broken. The first is a liability. The second is useless.
At Pulse, we built a trust architecture that starts at zero and builds up through cryptographic verification, not assumptions. This is how agents learn to trust each other — and why getting it right is the prerequisite for every meaningful agent-to-agent interaction.
Why API Keys Are Not Enough
Traditional authentication was designed for a world where humans call APIs. You get an API key or an OAuth token, and that token grants access to a set of endpoints. Read this database. Write to that queue. Call this function.
This model assumes a clean separation between authentication (who are you?) and authorization (what can you do?). It works when the unit of access is an endpoint. It falls apart when the unit of access is context.
Agents don't just call endpoints. They reason over context, synthesize information, and generate responses that weave together everything they can see. An agent with read access to your CRM, calendar, and deal notes doesn't make discrete API calls that you can audit one by one. It produces a response that is a function of all that context simultaneously. The information boundary isn't at the endpoint level — it's at the context level.
OAuth tells you whether an agent can access a resource. It tells you nothing about what an agent will do with that resource when reasoning in the presence of other resources. This is the gap that traditional auth cannot close and the reason we needed a fundamentally different approach to agent trust.
Starting at Zero
Every agent interaction in Pulse begins with zero assumed trust. This isn't a policy — it's an architectural invariant.
Think of it like diplomatic protocol. When two countries establish relations for the first time, they don't start by sharing military secrets. They exchange credentials. They verify identities through trusted intermediaries. They negotiate what categories of information can flow, in which direction, under what conditions. Only then does substantive communication begin.
Agent-to-agent coordination follows the same logic, but at machine speed and with cryptographic guarantees instead of handshakes in wood-paneled rooms.
Zero trust means no agent gets implicit access to anything based on who sent it, which platform it runs on, or what it claims about itself. Every permission is explicit, scoped, and verifiable. The system defaults to silence, not openness.
The Trust-Building Protocol
When two Pulse agents initiate a coordination session, trust is built through a four-phase protocol. Each phase must complete before the next begins.
Phase 1: Identity Verification. Each agent presents a signed identity attestation — a cryptographic proof of who it represents, issued by the Pulse coordination layer. This is not a username and password. It's a verifiable credential that binds an agent to a specific human principal, with metadata about when the credential was issued and under what conditions. Both sides verify the other's attestation before proceeding. If verification fails, the session terminates. No fallback, no degraded mode.
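To make Phase 1 concrete, here is a minimal sketch of what attestation verification could look like, using Ed25519 signatures via Python's `cryptography` library. The `IdentityAttestation` shape and its field names are illustrative assumptions, not Pulse's actual wire format:

```python
# Sketch of Phase 1 verification. The attestation structure is illustrative,
# not Pulse's actual credential format.
import json
import time
from dataclasses import dataclass

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


@dataclass
class IdentityAttestation:
    principal: str        # the human this agent represents
    agent_id: str
    issued_at: float      # Unix timestamp set by the coordination layer
    expires_at: float
    signature: bytes      # Ed25519 signature over the payload below

    def payload(self) -> bytes:
        body = {"principal": self.principal, "agent_id": self.agent_id,
                "issued_at": self.issued_at, "expires_at": self.expires_at}
        return json.dumps(body, sort_keys=True).encode()


def verify_attestation(att: IdentityAttestation,
                       coordination_layer_key: Ed25519PublicKey) -> None:
    """Verify the counterparty's attestation; raise to terminate the session."""
    if time.time() >= att.expires_at:
        raise PermissionError("attestation expired: terminating session")
    try:
        # Raises InvalidSignature unless the credential was signed by the
        # coordination layer's private key.
        coordination_layer_key.verify(att.signature, att.payload())
    except InvalidSignature:
        raise PermissionError("invalid attestation: terminating session")
```

Note the failure mode: a bad credential raises, the session dies, and there is no degraded path to fall back to.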
Phase 2: Capability Discovery. Once identity is established, agents exchange capability manifests — structured declarations of what each agent can do and what categories of context it is authorized to discuss. A founder's agent might declare: "I can discuss product roadmap, team composition, and fundraising timeline. I can schedule meetings on behalf of my principal." A VC agent might declare: "I can discuss investment thesis, portfolio composition (public only), and due diligence requirements." These manifests are signed by the respective principals, not self-declared by the agents. An agent cannot unilaterally expand its own capabilities.
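A capability manifest could be as simple as a signed, structured declaration. The sketch below is illustrative; the category names and field layout are assumptions, not Pulse's schema:

```python
# An illustrative capability manifest. Signed by the principal, so the agent
# cannot unilaterally expand its own capabilities.
from dataclasses import dataclass


@dataclass
class CapabilityManifest:
    agent_id: str
    discussable: list[str]       # context categories the agent may discuss
    actions: list[str]           # actions it may take for its principal
    principal_signature: bytes   # produced with the principal's key, not the agent's


founder_manifest = CapabilityManifest(
    agent_id="founder-agent-01",
    discussable=["product_roadmap", "team_composition", "fundraising_timeline"],
    actions=["schedule_meeting"],
    principal_signature=b"...",  # elided; covers the fields above
)
```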
Phase 3: Permission Negotiation. With capabilities known, the agents negotiate the specific permissions for this session. This is where the interaction gets scoped. The founder's agent might grant: "You may ask about our product roadmap and team. You may not ask about our cap table or current burn rate." The VC agent might grant: "You may ask about our investment process and timeline. You may not ask about other deals in our pipeline." Permission negotiation produces a bilateral permission manifest — a signed document that both agents reference throughout the session. Any request that falls outside the manifest is rejected at the protocol level, before it ever reaches the language model.
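Here is one way the protocol-level rejection could work: a default-deny check against the bilateral manifest that runs before any request reaches the model. The topic taxonomy and request shape are assumptions for illustration:

```python
# Sketch of protocol-level enforcement against the bilateral permission
# manifest. Topic names are hypothetical.
from dataclasses import dataclass


@dataclass(frozen=True)
class BilateralPermissionManifest:
    session_id: str
    granted_topics: frozenset[str]
    denied_topics: frozenset[str]


def admit_request(topic: str, manifest: BilateralPermissionManifest) -> bool:
    """Reject out-of-scope requests before they ever reach the language model."""
    if topic in manifest.denied_topics:
        return False
    return topic in manifest.granted_topics  # default-deny: silence, not openness


manifest = BilateralPermissionManifest(
    session_id="sess-42",
    granted_topics=frozenset({"product_roadmap", "team"}),
    denied_topics=frozenset({"cap_table", "burn_rate"}),
)
assert admit_request("product_roadmap", manifest)
assert not admit_request("cap_table", manifest)    # explicitly denied
assert not admit_request("board_notes", manifest)  # never granted, so denied
```

The last assertion is the important one: anything the negotiation didn't grant is treated exactly like something it explicitly denied.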
Phase 4: Scoped Context Mounting. Finally, each agent mounts only the Mountable Context Cells that align with the negotiated permissions. The founder's agent mounts the pitch deck cell, the team overview cell, and the fundraising timeline cell. It does not mount the internal financials cell, the investor comparison cell, or the board meeting notes cell. Those cells are not hidden or restricted — they are physically absent from the agent's context for this interaction.
This is the critical insight: trust enforcement happens before context loading, not after. By the time the agent reasons and generates a response, the only context it can see is context that has been explicitly permitted through the trust protocol.
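In code, Phase 4 is a filter, not a redaction pass. The sketch below assumes a hypothetical `ContextCell` shape; the point is that unpermitted cells are never loaded at all:

```python
# Sketch of Phase 4: mount only the cells the negotiated manifest permits.
# ContextCell and the cell names are illustrative, not Pulse's actual API.
from dataclasses import dataclass


@dataclass
class ContextCell:
    name: str
    category: str
    content: str


ALL_CELLS = [
    ContextCell("pitch_deck", "product_roadmap", "..."),
    ContextCell("team_overview", "team", "..."),
    ContextCell("internal_financials", "burn_rate", "..."),
    ContextCell("board_notes", "governance", "..."),
]


def mount_for_session(cells: list[ContextCell],
                      granted: frozenset[str]) -> list[ContextCell]:
    """Cells outside the grant are physically absent from the session context."""
    return [cell for cell in cells if cell.category in granted]


session_context = mount_for_session(ALL_CELLS, frozenset({"product_roadmap", "team"}))
# The model only ever sees pitch_deck and team_overview. It cannot leak
# internal_financials or board_notes because they were never loaded.
```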
You Cannot Leak What You Cannot See
The four-phase protocol is not just a procedural safeguard. It has measurable security implications.
Our internal benchmark framework tests how well different architectures prevent information leakage across agent boundaries. The results tell a clear story:
| Architecture | Utility Score | Security Score |
|---|---|---|
| M0: Baseline (no memory, no isolation) | 61% | 53% |
| M1: +Long-term Memory | 45% | 52% |
| M2: +Mountable Context Cells | 52% | 51% |
| M3: +Information Exposure Protocol | 33% | 96% |
The M0 baseline — an agent with full context and no isolation — achieves 53% security. That means roughly half of sensitive information leaks across boundaries. This is the current state of most AI agent deployments. If you gave your agent all your context and pointed it at an external party today, you'd have slightly better odds than a coin flip that confidential information stays confidential.
M3, our full stack with the Information Exposure Protocol layered on top of MCCs, achieves 96% security. The utility trade-off to 33% is real, but it's the right trade-off for external-facing interactions. When your agent is representing you to a counterparty, you want it to be helpful within boundaries, not omniscient. Constrained utility is a feature, not a bug.
The 43-point security improvement from M0 to M3 is not incremental. It is the difference between "too risky to deploy" and "safe enough to trust with your most sensitive interactions." For a deeper analysis of this trade-off, see our post on the access-aware security architecture.
The Context-Security Paradox, Resolved
This data reveals the fundamental tension we call the context-security paradox: more context makes agents more useful, but more context also makes agents more dangerous.
Every other approach to this problem tries to resolve it by constraining one side. Either you limit context (and get a dumb agent) or you limit security (and get a risky agent). Prompt-based safety tries to have both by telling the agent "use all this context but don't share the sensitive parts" — and as our benchmarks show, that approach fails roughly half the time.
MCCs resolve the paradox by making it a non-issue at the architectural level. Your agent can have rich, deep context for internal operations — full memory, full knowledge base, full operational history. When it enters an external interaction, the trust protocol scopes it down to exactly the context that's been permitted. The agent doesn't need to exercise judgment about what to share because the architecture has already made that decision.
This is how you get both: an agent that knows everything about you internally, and an agent that reveals only what's appropriate externally. The context-security paradox disappears when the security enforcement happens at the context layer rather than the prompt layer.
What Cryptographic Handshakes Look Like
Let me get specific about the cryptographic mechanisms, because "cryptographic handshakes" shouldn't be a marketing term — it should mean something precise.
Mutual Attestation. Both agents prove their identity using signed credentials issued by the Pulse coordination layer. Each credential contains the agent's principal (the human it represents), its scope of authority, and an expiration timestamp. Credentials are verified against the coordination layer's public keys. A compromised or expired credential results in immediate session termination.
Signed Permission Manifests. The bilateral permission manifest produced during Phase 3 is cryptographically signed by both agents' principals. This creates a tamper-evident record of exactly what was agreed upon for this interaction. If either agent attempts to access context outside the manifest, the violation is detectable and logged.
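As a sketch, dual signing could look like this with Ed25519 keys from the `cryptography` library (the manifest serialization and key handling are illustrative):

```python
# Sketch: both principals sign the same canonical manifest bytes, producing
# a tamper-evident record either side (or an auditor) can verify.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

founder_key = Ed25519PrivateKey.generate()
vc_key = Ed25519PrivateKey.generate()

# In practice this would be a canonical serialization of the Phase 3 manifest.
manifest_bytes = b'{"session_id": "sess-42", "granted_topics": ["product_roadmap", "team"]}'

signatures = {
    "founder": founder_key.sign(manifest_bytes),
    "vc": vc_key.sign(manifest_bytes),
}

# Verification raises InvalidSignature if the manifest bytes were altered.
founder_key.public_key().verify(signatures["founder"], manifest_bytes)
vc_key.public_key().verify(signatures["vc"], manifest_bytes)
```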
Audit Trails. Every message exchanged during a coordinated session is logged with a cryptographic hash chain. Each message references the hash of the previous message, creating an immutable, ordered record. If either party later disputes what was shared, the audit trail provides verifiable evidence. This isn't just for compliance — it's the foundation of accountability in agent-to-agent interactions.
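The hash-chain mechanism itself is compact. Here is a minimal sketch using SHA-256, with illustrative record fields; tampering with any earlier message invalidates every later hash:

```python
# Minimal hash-chain audit log. Record fields are illustrative.
import hashlib
import json

GENESIS = "0" * 64


def append_entry(log: list[dict], sender: str, message: str) -> None:
    """Append a message whose hash commits to the entire prior history."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    body = {"prev": prev_hash, "sender": sender, "message": message}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})


def verify_chain(log: list[dict]) -> bool:
    """Recompute every link; any tampering breaks the chain."""
    prev = GENESIS
    for entry in log:
        body = {"prev": entry["prev"], "sender": entry["sender"],
                "message": entry["message"]}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != recomputed:
            return False
        prev = entry["hash"]
    return True


log: list[dict] = []
append_entry(log, "founder-agent", "Here is our product roadmap summary.")
append_entry(log, "vc-agent", "Acknowledged. What is your hiring timeline?")
assert verify_chain(log)

log[0]["message"] = "tampered"  # editing any message invalidates the chain
assert not verify_chain(log)
```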
These mechanisms are not hypothetical. They are the infrastructure that makes it possible for a coordination layer to function across organizational boundaries where neither party has reason to trust the other's infrastructure.
The Reputation Layer: Trust Over Time
The four-phase protocol handles trust for a single interaction. But agents don't interact once — they interact repeatedly, and trust should compound.
We're building a reputation layer that tracks agent behavior over time. An agent that consistently respects permission boundaries, that never probes for information outside its manifest, that completes delegated tasks reliably — that agent earns a higher trust score. Higher trust scores can unlock streamlined negotiation in future interactions: fewer manual permission approvals, broader default scopes, faster session establishment.
Conversely, an agent that repeatedly tests boundaries, that attempts prompt injection against counterparties, or that misrepresents its capabilities sees its trust score degrade. The coordination layer remembers, and future counterparties benefit from that memory.
This is still early — reputation systems are notoriously hard to design well, and we're being deliberate about avoiding gameable metrics. But the direction is clear: trust should be earned through behavior, not granted through configuration.
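Purely as an illustration of the shape such a system could take (not Pulse's actual scoring model, which is still being designed), here is the simplest possible behavior-weighted trust score:

```python
# Illustrative only: an exponentially weighted average of per-session
# outcomes. Not Pulse's scoring model.
def update_trust(score: float, session_ok: bool, alpha: float = 0.1) -> float:
    """Blend the latest session outcome into the running trust score.

    session_ok is False when the agent probed outside its manifest,
    attempted injection, or misrepresented its capabilities.
    """
    outcome = 1.0 if session_ok else 0.0
    return (1 - alpha) * score + alpha * outcome


score = 0.5  # a new agent starts at a neutral prior
for ok in [True, True, True, False, True]:
    score = update_trust(score, ok)
print(f"trust score: {score:.3f}")
```

Even this toy version shows why the design is hard: a single parameter like `alpha` decides how fast trust is earned versus how long a violation is remembered, and every such choice is a surface an adversarial agent will try to game.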
Trust Infrastructure Is the Prerequisite
Here's what I want to leave you with: every exciting vision of the agent economy — agents negotiating deals, agents coordinating projects across companies, agents managing supply chains — all of it depends on trust infrastructure.
Without cryptographic identity, agents can be impersonated. Without permission negotiation, agents leak information. Without audit trails, agents can't be held accountable. Without reputation, agents can't build relationships that compound over time.
The agent economy doesn't need more capable models. It needs trust infrastructure that makes capability safe to deploy across boundaries.
That's what we're building at Pulse. Not just smarter agents, but agents that can prove who they are, agree on what they'll share, enforce those agreements at the architectural level, and build trust over time through verified behavior.
The zero-trust starting point isn't pessimistic. It's realistic. And the path from zero trust to earned trust, mediated by cryptographic verification at every step, is how agents will learn to work together.
Build on Trust Infrastructure
The trust architecture described here is what powers every external interaction in Pulse. When someone talks to your agent through limited agent deployment, the four-phase trust protocol runs automatically — your context stays protected by design, not by hope.
Launch Pulse · Read the security architecture · View the technical docs
Trust is one layer. See how access-aware delegation enforces boundaries at the context level, how the coordination layer connects agents across boundaries, and how memory gives agents the continuity to build lasting relationships.