In early 2024, we had a working prototype of Pulse's agent coordination system. Agents could discover each other, exchange context, and delegate tasks across organizational boundaries. The problem was that every integration was bespoke. Connecting to a calendar agent meant building a custom adapter. Connecting to a research agent meant building another. Each new capability was weeks of integration work, and every adapter was a maintenance liability that only we could fix.
We were building the pre-HTTP internet. Every node spoke its own language, and we were writing translators by hand.
This is the state of AI agents in 2026 for most teams. Thousands of capable agents exist, but they cannot talk to each other without custom glue code. The agent landscape looks like computer networking in the 1970s: powerful machines, no common protocol, human operators manually routing everything.
Then Anthropic released the Model Context Protocol, and we rewrote our entire integration layer in a month. Here is why.
What MCP Actually Is
Model Context Protocol is a specification for how AI agents expose their capabilities to the outside world. It defines a standard JSON-RPC interface through which an agent (or any software system) can advertise three things: tools it can execute, resources it can provide, and prompts it can offer as interaction templates.
The analogy that holds up best: MCP is to AI agents what HTTP was to web servers. Before HTTP, every networked application had its own protocol for requesting and serving data. HTTP did not make servers smarter. It made them addressable in a uniform way. Any client that spoke HTTP could talk to any server that spoke HTTP.
MCP does the same for agents. An MCP server exposes a schema — "here are the tools I offer, here are the parameters they accept, here are the resources I can provide." An MCP client reads that schema and knows exactly how to interact with that server. No custom adapters. No proprietary SDKs. No bilateral integration agreements.
The protocol itself is deliberately minimal. A tool declaration looks like a JSON schema with a name, description, and input parameters. A resource is a URI with a MIME type. Communication happens over standard transports — stdio for local processes, HTTP with server-sent events for remote ones. There is nothing exotic here. That is the point.
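To make that concrete, here is a sketch of what a tool declaration and the JSON-RPC request that invokes it might look like. The `tools/call` method and `inputSchema` field follow the MCP specification; the tool name, parameters, and values are illustrative.

```python
import json

# Illustrative MCP-style tool declaration: a name, a description,
# and a JSON Schema describing the input parameters.
create_event_tool = {
    "name": "create_event",
    "description": "Create a calendar event.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "start": {"type": "string", "format": "date-time"},
            "durationMinutes": {"type": "integer"},
        },
        "required": ["title", "start"],
    },
}

# A client invokes the tool with an ordinary JSON-RPC 2.0 request.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_event",
        "arguments": {"title": "Pitch review", "start": "2026-03-01T10:00:00Z"},
    },
}

print(json.dumps(request, indent=2))
```

Nothing in the exchange is specific to AI: any JSON-RPC client can read the schema and construct a valid call.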
Why We Chose MCP Over Building Our Own
When we started Pulse, we considered building a proprietary protocol for agent coordination. We had specific requirements — access-aware context isolation, permission metadata on every message, audit trails for cross-boundary interactions — that no existing standard addressed.
Building proprietary would have given us full control over the protocol surface. We could design exactly the primitives we needed. Many infrastructure companies make this choice and defend it as "we need features the standard doesn't support."
We chose MCP anyway, for three reasons.
The network effect is the product. Pulse is a coordination layer. A coordination layer that only coordinates with itself is not a coordination layer — it is a walled garden. The value of Pulse scales with the number of agents that can plug into it. MCP already has adoption from major model providers and a growing ecosystem of servers. Building proprietary would mean every agent that wanted to work with Pulse would need to learn our protocol. Building on MCP means every agent that already speaks MCP works with Pulse on day one.
Standards age better than custom code. Proprietary protocols accumulate debt. Every edge case you handle, every versioning decision you make, every transport you support — it all falls on your team. A community-maintained standard distributes that burden. MCP has hundreds of contributors refining edge cases we would never have encountered on our own. Our integration layer went from the most fragile part of our codebase to the most stable.
Differentiation belongs above the protocol. HTTP did not prevent Amazon, Google, and Stripe from building wildly different products. The protocol is the common ground. The value is in what you build on top. For Pulse, that means the coordination logic, the access-aware security model, and the Mountable Context Cells that enforce isolation boundaries. None of that needs to live in the wire protocol. It lives in the layer above.
How MCP Enables the Coordination Layer
Here is the architecture in concrete terms.
At the bottom layer, MCP servers act as context providers. A calendar MCP server exposes your availability as a resource and offers tools to create events. A CRM server exposes contact records and offers search tools. A document server exposes files and offers summarization tools. Each server is independently operated and speaks the same protocol.
In the middle, Pulse acts as the coordination and routing layer. When an agent needs to accomplish a task — say, scheduling a meeting with someone whose availability requires checking two different calendar systems — Pulse discovers the relevant MCP servers, negotiates which tools to invoke, sequences the calls, and merges the results into a coherent response.
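The coordination loop reduces to a simple shape: find every mounted server that advertises the needed tool, invoke each, and merge the results. This is a deliberately simplified sketch with stub servers and canned responses; a real client would speak JSON-RPC to live MCP servers, and all names here are invented.

```python
class StubServer:
    """Stand-in for an MCP server: advertises tools, returns canned results."""

    def __init__(self, name, tools, responses):
        self.name = name
        self.tools = tools          # tool names this server advertises
        self.responses = responses  # canned results keyed by tool name

    def call(self, tool, args):
        return self.responses[tool]


def coordinate(servers, tool, args):
    """Discover servers offering `tool`, invoke each, merge the results."""
    merged = []
    for server in servers:
        if tool in server.tools:
            merged.extend(server.call(tool, args))
    return merged


work_cal = StubServer("work-calendar", {"read_availability"},
                      {"read_availability": ["Mon 10:00", "Tue 14:00"]})
personal_cal = StubServer("personal-calendar", {"read_availability"},
                          {"read_availability": ["Tue 14:00", "Wed 09:00"]})

slots = coordinate([work_cal, personal_cal], "read_availability", {})
print(slots)  # availability from both calendar systems, merged
```

The point of the sketch is that discovery, sequencing, and merging operate purely on the advertised schemas; the coordination layer never needs to know how a given server is implemented.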
At the boundary layer, Mountable Context Cells enforce isolation. This is where Pulse's architecture diverges from a generic MCP client. When your agent interacts with an external party, the MCC determines which MCP servers are mounted for that interaction. An investor-facing deployment might mount your pitch deck server and calendar server but not your internal Slack server or financial data server. The MCP servers that are not mounted do not exist in that context. There is nothing to leak because there is nothing to see.
The key insight: MCP handles the interoperability problem (how do agents talk to each other), while MCCs handle the security problem (what are agents allowed to talk about). These are orthogonal concerns, and separating them lets us use a standard protocol without compromising on access control.
┌─────────────────────────────────────────────┐
│ External Party │
│ (investor, client, partner) │
└──────────────────┬──────────────────────────┘
│
┌──────────────────▼──────────────────────────┐
│ Mountable Context Cell (MCC) │
│ ┌─────────────────────────────────────┐ │
│ │ Mounted: Pitch Deck, Calendar │ │
│ │ Not Mounted: Slack, Financials │ │
│ └─────────────────────────────────────┘ │
└──────────────────┬──────────────────────────┘
│
┌──────────────────▼──────────────────────────┐
│ Pulse Coordination Layer │
│ (discovery, routing, sequencing, merging) │
└──────┬───────────┬───────────┬──────────────┘
│ │ │
┌────▼───┐ ┌───▼────┐ ┌──▼──────┐
│Calendar│ │ CRM │ │ Docs │
│MCP Srv │ │MCP Srv │ │MCP Srv │
└────────┘ └────────┘ └─────────┘
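The mounting rule the diagram shows can be sketched in a few lines: an MCC is a whitelist of MCP servers, and anything not mounted is simply absent from the context. The server names match the diagram; the class and its interface are illustrative, not Pulse's actual implementation.

```python
ALL_SERVERS = {"pitch_deck", "calendar", "slack", "financials"}


class MountableContextCell:
    """Sketch of an MCC: only mounted servers exist inside the context."""

    def __init__(self, mounted):
        self.mounted = set(mounted) & ALL_SERVERS

    def visible_servers(self):
        # There is no "denied" list to probe: unmounted servers are
        # not hidden, they are simply not present in this context.
        return self.mounted


investor_mcc = MountableContextCell({"pitch_deck", "calendar"})
assert "slack" not in investor_mcc.visible_servers()
print(sorted(investor_mcc.visible_servers()))  # ['calendar', 'pitch_deck']
```

A whitelist rather than a blocklist is the design choice that makes "nothing to leak" literal: the coordination layer cannot route to a server it cannot see.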
The Interoperability Challenge Nobody Talks About
MCP solves the protocol-level interoperability problem. But there is a harder problem underneath: getting agents from different providers to coordinate safely when each provider has different trust assumptions, different capability models, and different ideas about what "safe" means.
Consider a concrete scenario. Your Pulse agent needs to coordinate with a partner's agent that runs on a different provider. Both agents speak MCP. But your agent operates under Pulse's access-aware security model with explicit permission boundaries. Their agent might operate under a completely different security model — or no security model at all.
When these agents exchange context, who enforces the boundaries? If your agent shares a resource with their agent, what prevents their agent from forwarding it somewhere you did not authorize?
This is the agent interoperability problem that goes beyond protocol compatibility. It requires:
Capability attestation. When an MCP server advertises tools, there is currently no standard way to verify that the server actually does what it claims. A tool called send_email could do anything. Agents coordinating across trust boundaries need cryptographic proof of capabilities, not just schema declarations.
Permission propagation. When Agent A shares a resource with Agent B under specific access conditions, those conditions need to travel with the resource. MCP resources carry MIME types but not access policies. The permission metadata is something Pulse adds at the coordination layer, but it would be stronger as a protocol-level primitive.
Audit interoperability. Both agents need to produce audit trails that the other party can verify. If an interaction goes wrong, both sides need a shared record of what happened. Today, each system logs independently. There is no standard for cross-agent audit reconciliation.
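Permission propagation in particular can be sketched concretely: wrap a shared resource in an envelope that carries its access policy, and refuse a forward the policy does not allow. MCP itself defines no such envelope today; every field name and address below is invented for illustration.

```python
class ForwardDenied(Exception):
    pass


def share(resource_uri, recipients):
    """Wrap a resource with a policy that travels alongside it."""
    return {
        "uri": resource_uri,
        "mimeType": "application/pdf",
        "policy": {"allowed_recipients": set(recipients), "forwardable": False},
    }


def forward(envelope, new_recipient):
    """A compliant agent checks the attached policy before forwarding."""
    policy = envelope["policy"]
    if not policy["forwardable"] and new_recipient not in policy["allowed_recipients"]:
        raise ForwardDenied(f"{new_recipient} is outside the original grant")
    return envelope


deck = share("docs://pitch-deck-v3", ["investor@fund.example"])
try:
    forward(deck, "press@media.example")
except ForwardDenied as exc:
    print("blocked:", exc)
```

The sketch also shows the limitation: enforcement depends on the receiving agent honoring the policy, which is exactly why this would be stronger as a protocol-level primitive than as coordination-layer convention.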
These are hard problems. They are also solvable problems, and MCP provides the right foundation to solve them — which brings me to our wishlist.
What MCP Still Needs
We are committed to MCP because we believe in the trajectory. But the protocol is young, and there are gaps we hit regularly in production.
Richer authentication and authorization. The current auth story in MCP is minimal. For local servers running over stdio, auth is implicit. For remote servers, OAuth flows exist but the permission model is coarse. We need scoped, granular permissions — "this client can invoke read_calendar but not create_event" — at the protocol level. Today we enforce this in Pulse's coordination layer, but it should be a first-class MCP concept so that any MCP client can respect the same boundaries.
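The scoped-permission check we run at the coordination layer amounts to something like the following. The client identifiers and grant table are illustrative; the point is only the shape of the check, a per-client whitelist of invocable tools, evaluated before any call reaches a server.

```python
# Hypothetical grant table: which tools each client may invoke.
GRANTS = {
    "investor-client": {"read_calendar"},                      # read-only
    "assistant-client": {"read_calendar", "create_event"},     # full access
}


def authorize(client_id, tool):
    """Reject the invocation unless the client's grant includes the tool."""
    return tool in GRANTS.get(client_id, set())


assert authorize("investor-client", "read_calendar")
assert not authorize("investor-client", "create_event")
print("investor-client may create events:",
      authorize("investor-client", "create_event"))
```

Making this a first-class MCP concept would mean any compliant client enforces the same table, instead of each coordination layer reimplementing it.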
Streaming coordination primitives. MCP supports tool invocation as request-response. But real agent coordination often requires long-running, multi-step interactions. An agent researching a topic might need to stream partial results to a coordinating agent that is simultaneously querying other sources. The protocol needs primitives for streaming partial results, coordinating concurrent tool invocations, and signaling progress on long-running operations.
Capability negotiation. When two MCP-speaking systems connect, there is no standard handshake for negotiating capabilities and constraints. Today it is "here is my schema, call whatever you want." We need a negotiation phase where systems can agree on interaction boundaries, rate limits, and acceptable use policies before any tools are invoked.
Resource lifecycle management. MCP resources are currently static URIs. In a coordination context, resources are dynamic — they expire, they get updated, they have versioning requirements. A shared document resource needs to communicate when it was last modified, whether the current version is still valid, and how to subscribe to changes.
We are contributing to some of these proposals in the MCP community. The protocol's governance model is open, and the maintainers are receptive to production-informed feedback. This is another advantage of building on an open standard — we can shape it rather than route around it.
Why MCP-Native Is a Structural Advantage
Here is the strategic argument for why building MCP-native matters for Pulse specifically.
The network of agents we described in earlier posts requires a common protocol to function. Without one, every agent-to-agent connection is a bilateral integration. The number of integrations grows quadratically with the number of agents. This does not scale.
With MCP as the common protocol, the number of integrations grows linearly — each new agent implements MCP once and becomes compatible with every other MCP-speaking system. Pulse, sitting at the coordination layer, benefits from every new MCP server that anyone builds. A team in Tokyo publishes an MCP server for Japanese regulatory compliance checking. Without us writing a single line of code, Pulse users can mount that server and use it in their agent workflows.
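The scaling argument above, made concrete: with n agents and no common protocol, every pair needs its own bilateral adapter; with a shared protocol, each agent implements it once.

```python
def bilateral_integrations(n):
    # One custom adapter per pair of agents: quadratic growth.
    return n * (n - 1) // 2


def protocol_integrations(n):
    # One protocol implementation per agent: linear growth.
    return n


for n in (10, 100, 1000):
    print(f"{n} agents: {bilateral_integrations(n)} bilateral adapters "
          f"vs {protocol_integrations(n)} protocol implementations")
```

At ten agents the difference is an annoyance; at a thousand, the bilateral approach requires half a million adapters and is simply not built.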
This is the flywheel. More MCP servers make Pulse more capable. More capable Pulse makes MCP more attractive for server developers. The ecosystem compounds.
Being MCP-native from the start — rather than bolting on MCP support later — means the protocol's primitives are deeply integrated into our architecture. Our tool discovery, permission enforcement, context routing, and audit logging all operate on MCP's abstractions. When the protocol evolves, we evolve with it naturally rather than maintaining a translation layer.
The Bet
We bet our stack on MCP because we believe the agent ecosystem will converge on open protocols the same way the web converged on HTTP, email converged on SMTP, and package management converged on shared registries. The specific protocol matters less than the convergence itself. But MCP has the right properties: it is minimal, transport-agnostic, backed by a major AI lab, and growing fast.
For Pulse, the bet means we are building the coordination layer on a foundation we do not fully control. That is uncomfortable for a CTO. It also means we are building on a foundation that the entire ecosystem is invested in maintaining and improving. That is a trade-off we accept every day.
The agents are coming. They will need to talk to each other. The question is whether they do it through a thousand proprietary adapters or through a common protocol that anyone can implement. We chose the protocol.
Pulse is building the coordination layer for the agent era at aicoo.io. To understand how access-aware security works alongside MCP, read how Pulse solves the security paradox. To see where the network of coordinating agents is heading, read from AI COO to virtual economy.