AI Agent Protocols Explained: MCP, A2A, and ANP

14 min read · Matthieu

A developer-focused comparison of the three protocols that matter for AI agent builders: MCP for tool access, A2A for agent collaboration, and ANP for cross-network discovery. Includes a side-by-side table, layered architecture breakdown, and a decision guide.

What are AI agent protocols and why do they matter?

AI agent protocols are open standards that define how agents connect to tools, talk to each other, and find peers across networks. Without them, every integration is a custom API wrapper. With them, you can swap the model or the tool and things still work.

Three protocols matter for developers building agent systems: MCP (Model Context Protocol), A2A (Agent-to-Agent Protocol), and ANP (Agent Network Protocol). They are not competitors. They are layers in a stack.

MCP connects agents to tools and data. A2A lets agents delegate tasks to other agents. ANP handles cross-network discovery. If you are self-hosting agents on a VPS, knowing where each protocol sits saves you from over-engineering simple setups or under-engineering complex ones.

This article is the mental model: the map you need before you start building.

What is MCP (Model Context Protocol)?

MCP is an open standard, originally created by Anthropic, that defines how AI agents connect to external tools and data sources. MCP servers expose capabilities to MCP clients embedded in AI applications. Think of it as a USB port for AI: one standard interface between any agent and any tool. Anthropic donated MCP to the Agentic AI Foundation (AAIF) under the Linux Foundation in December 2025.

Before MCP, connecting an AI model to a database meant writing a custom integration. Connecting it to a second database meant writing another one. MCP replaces that pattern with a universal protocol. Build one MCP server for your database, and every MCP-compatible agent can use it.

How does MCP work?

MCP defines three roles:

  • Host: The AI application (Claude Desktop, Cursor, your custom app). It manages one or more MCP clients.
  • Client: A connector inside the host that maintains a 1:1 connection with an MCP server.
  • Server: A lightweight service that exposes tools, resources, and prompts to clients.

Communication uses JSON-RPC 2.0 over one of two transport types:

  • stdio: Standard input/output. Best for local servers running on the same machine. The host spawns the server as a child process.
  • Streamable HTTP: HTTP-based transport that lets MCP servers run as remote services. The client sends JSON-RPC requests via HTTP POST. The server can stream responses back. This is what unlocks running MCP servers on a VPS and connecting to them remotely.

The older SSE (Server-Sent Events) transport still works but is being superseded by Streamable HTTP, which supports stateless operation and horizontal scaling without sticky sessions.
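As a concrete example of the wire format, here is a sketch of a tools/call request, the JSON-RPC message a client sends to invoke a tool. The envelope shape follows the MCP spec; the tool name and arguments are invented for illustration.

```python
import json

def make_tools_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build an MCP tools/call request as a JSON-RPC 2.0 envelope.

    The method name and params shape follow the MCP spec; the tool
    name and arguments passed in are illustrative only.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Over stdio the host writes this line to the server's stdin; over
# Streamable HTTP it goes in the body of an HTTP POST.
request = make_tools_call(1, "query_database", {"sql": "SELECT 1"})
print(request)
```

Either transport carries the same JSON; only the delivery mechanism changes.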

MCP servers expose three types of primitives:

  • Tools: functions the AI model can call (query a database, send an email, run a script). Model-driven: the LLM decides when to invoke.
  • Resources: data the agent can read (files, database records, API responses). Application-driven: the host decides what to expose.
  • Prompts: template messages and workflows. User-driven: the user selects from available prompts.

Connections are stateful. Client and server negotiate capabilities during initialization, then exchange messages over a persistent connection. Servers can also request sampling (asking the LLM to generate text) and elicitation (asking the user for input), which enables recursive agentic behavior.
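A sketch of what that capability negotiation looks like on the wire, modeled on the MCP initialize request. The client name and version are placeholders.

```python
import json

# Sketch of the capability negotiation that starts an MCP session.
# Field names follow the MCP "initialize" request; the clientInfo
# values are placeholders.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-11-25",
        "capabilities": {
            # Declaring "sampling" tells the server it may ask the
            # host's LLM to generate text; "elicitation" lets it ask
            # the user for input.
            "sampling": {},
            "elicitation": {},
        },
        "clientInfo": {"name": "example-host", "version": "0.1.0"},
    },
}

print(json.dumps(initialize_request, indent=2))
```

The server replies with its own capabilities, and both sides hold the negotiated state for the life of the connection.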

What is the MCP ecosystem like in 2026?

MCP is the default integration protocol for AI tooling in 2026:

  • 97M+ monthly SDK downloads across Python and TypeScript
  • 10,000+ public MCP servers in production
  • Adopted by every major AI platform: Claude, ChatGPT, Gemini, Copilot, Cursor, VS Code
  • Governed by AAIF under the Linux Foundation, co-founded by Anthropic, OpenAI, and Block
  • Current spec version: 2025-11-25

The 2026 roadmap focuses on making Streamable HTTP production-ready at scale: stateless request handling, simpler horizontal scaling, and a standard way for registries to discover server capabilities without connecting.


What is A2A (Agent-to-Agent Protocol)?

A2A (Agent-to-Agent Protocol) is an open protocol originally created by Google that standardizes how AI agents communicate and collaborate with each other. One agent sends a task to another using JSON-RPC over HTTP. The receiving agent publishes an Agent Card describing its capabilities, authentication requirements, and endpoint URL. A2A handles task lifecycle management, result streaming, and push notifications for long-running operations.

MCP connects an agent to tools. A2A connects an agent to other agents. MCP servers are transparent: you can see exactly what they do. A2A agents are opaque. The calling agent does not know or care what model, framework, or logic runs inside the remote agent. It sends a task and gets results back.

How does A2A work?

A2A defines three actors:

  • User: The human or automated service requesting work.
  • A2A Client: The application sending tasks on behalf of the user.
  • A2A Server: A remote agent exposing an HTTP endpoint. It operates as a black box.

Agent Cards are the discovery mechanism. Each A2A server publishes a JSON document (typically at /.well-known/agent-card.json) that describes:

  • Agent identity (name, description, provider)
  • Service endpoint URL
  • Supported capabilities (streaming, push notifications)
  • Available skills with descriptions
  • Authentication requirements
  • Optional digital signature for verification
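A minimal sketch of such a card, with field names modeled on the A2A spec's Agent Card structure; the agent, endpoint, and skill shown here are invented.

```python
# Sketch of an Agent Card like one served at
# /.well-known/agent-card.json. Field names follow the A2A card
# structure, but treat the exact schema as illustrative.
agent_card = {
    "name": "review-agent",
    "description": "Reviews code changes and reports issues.",
    "url": "https://agents.example.com/a2a",  # service endpoint
    "provider": {"organization": "Example Corp"},
    "capabilities": {"streaming": True, "pushNotifications": False},
    "skills": [
        {"id": "code-review", "name": "Code review",
         "description": "Static review of diffs and files."}
    ],
}

def supports_streaming(card: dict) -> bool:
    """Check a card before choosing streaming over polling."""
    return bool(card.get("capabilities", {}).get("streaming"))

print(supports_streaming(agent_card))  # True
```

A client reads the card first, then picks a communication pattern the agent actually supports.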

A Task is the unit of work. Each task has a unique ID and progresses through a lifecycle:

submitted → working → completed
                    → failed
                    → canceled
                    → rejected
                    → input-required (multi-turn: agent needs more info from the client)

Messages contain Parts: text, file references, or structured JSON data. Task outputs are called Artifacts, which also contain Parts. An agent can return a code file, a text summary, and a JSON report in a single response.
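The lifecycle above can be encoded as a small transition table. This is a simplified reading that treats input-required as a pause during working; the A2A spec is the authority on exact transitions.

```python
# The A2A task lifecycle from the article, as a transition table.
# Terminal states have no outgoing transitions.
TRANSITIONS = {
    "submitted": {"working"},
    "working": {"completed", "failed", "canceled", "rejected",
                "input-required"},
    "input-required": {"working"},  # client supplies info, work resumes
}

def can_transition(current: str, nxt: str) -> bool:
    """True if the lifecycle allows moving from current to nxt."""
    return nxt in TRANSITIONS.get(current, set())

print(can_transition("working", "input-required"))  # True: multi-turn pause
print(can_transition("completed", "working"))       # False: terminal state
```

A client tracking task state can use a table like this to reject impossible status updates from a misbehaving agent.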

A2A supports three communication patterns:

  1. Request/Response: Client sends a task, polls for status with GetTask.
  2. Streaming (SSE): Real-time incremental updates over persistent HTTP connections. The client calls SendStreamingMessage and receives events as the agent works.
  3. Push Notifications (Webhooks): For long-running tasks, the agent POSTs status updates to a client-registered webhook URL.
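Pattern 1 reduces to a polling loop. In this sketch, get_task stands in for an A2A client's GetTask call and is stubbed so the loop runs anywhere.

```python
import time

# Terminal states from the A2A task lifecycle.
TERMINAL = {"completed", "failed", "canceled", "rejected"}

def poll_until_done(get_task, task_id: str, interval: float = 0.01) -> str:
    """Poll task status until it reaches a terminal state.

    `get_task` is any callable mapping a task ID to its current state;
    a real client would wrap an A2A GetTask request here.
    """
    while True:
        state = get_task(task_id)
        if state in TERMINAL:
            return state
        time.sleep(interval)

# Stub: the "remote agent" finishes after two polls.
states = iter(["submitted", "working", "completed"])
print(poll_until_done(lambda task_id: next(states), "task-123"))  # completed
```

Streaming and webhooks exist precisely to avoid this busy-wait for long-running tasks.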

All communication uses JSON-RPC 2.0 over HTTPS. Version 0.3 added gRPC support and signed agent cards. The current version is A2A v1.0.0.

How did A2A and ACP merge?

IBM launched the Agent Communication Protocol (ACP) in March 2025 to power its BeeAI platform. Google announced A2A the following month. Both protocols solved the same problem: agent-to-agent communication.

In August 2025, the Linux Foundation announced that ACP would merge into A2A. IBM's ACP team, led by Kate Blair, joined the A2A Technical Steering Committee alongside Google, Microsoft, AWS, Cisco, Salesforce, ServiceNow, and SAP. The BeeAI platform switched from ACP to A2A, and ACP-specific development wound down.

If you were evaluating ACP separately, stop. The answer is A2A.

What is ANP (Agent Network Protocol)?

ANP (Agent Network Protocol) is a peer-to-peer protocol that lets AI agents discover and communicate with each other across open networks without a central authority. Unlike MCP's client-server model and A2A's client-server task delegation, ANP treats every agent as an equal peer. It uses W3C Decentralized Identifiers (DIDs) for identity, JSON-LD for data exchange, and includes a meta-protocol layer where agents negotiate how they will communicate.

ANP targets a different problem than MCP and A2A. Those protocols assume you know which server or agent you want to talk to. ANP solves the question: how does an agent find other agents it has never interacted with before, across organizational boundaries, without a central directory?

How does ANP differ from A2A?

ANP has a three-layer architecture:

Layer 1: Identity and Encrypted Communication. Every agent gets a W3C Decentralized Identifier using the did:wba (Web-Based Agent) method. Each DID maps to an HTTPS-hosted DID document, so identity resolution uses existing web infrastructure. Two agents can verify each other's identity and establish encrypted channels without a central authority.

Layer 2: Meta-Protocol. Agents negotiate communication protocols dynamically. Instead of both agents needing to support the same fixed protocol, they exchange proposed requirements in structured form, agree on a protocol, and then communicate using it. This makes ANP adaptable to scenarios that A2A's fixed JSON-RPC approach cannot handle.

Layer 3: Application Protocol. Agent descriptions use JSON-LD linked to schema.org ontologies. Discovery works two ways:

  • Active discovery: Query a domain's /.well-known/agent-descriptions endpoint.
  • Passive discovery: Agents register with indexing services that crawl and catalog descriptions.
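To make Layer 1 concrete, here is a sketch of how a did:wba identifier might map to the HTTPS URL of its DID document. The mapping shown (colon-separated path segments, a did.json file, a .well-known fallback for bare domains) follows our reading of the did:wba draft and should be treated as illustrative, not normative.

```python
from urllib.parse import unquote

def did_wba_to_url(did: str) -> str:
    """Map a did:wba identifier to its DID document URL.

    Illustrative mapping per our reading of the did:wba draft:
    colons become path segments, and a bare domain falls back to
    the .well-known location.
    """
    prefix = "did:wba:"
    if not did.startswith(prefix):
        raise ValueError("not a did:wba identifier")
    parts = did[len(prefix):].split(":")
    domain = unquote(parts[0])
    path = [unquote(p) for p in parts[1:]]
    if path:
        return f"https://{domain}/{'/'.join(path)}/did.json"
    return f"https://{domain}/.well-known/did.json"

print(did_wba_to_url("did:wba:agents.example.com:team:deploy"))
# https://agents.example.com/team/deploy/did.json
```

The point is that resolution needs nothing beyond ordinary HTTPS hosting: no blockchain, no central registry.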

How ANP and A2A compare architecturally:

  • Topology: A2A is client-server; ANP is peer-to-peer.
  • Identity: A2A uses Agent Cards (self-published JSON); ANP uses W3C DIDs (decentralized, verifiable).
  • Discovery: A2A uses a known URL (/.well-known/agent-card.json); ANP uses decentralized indexing plus well-known endpoints.
  • Protocol flexibility: A2A is fixed (JSON-RPC 2.0); ANP negotiates via its meta-protocol.
  • Data format: A2A uses JSON Parts (text, files, structured data); ANP uses JSON-LD with semantic linking.

Current status: ANP is still in the proposal and early development stage. It has a GitHub repository and a W3C Community Group white paper, but no production-grade SDKs or widespread adoption yet. The spec is not governed by AAIF.

How do MCP, A2A, and ANP compare?

The three protocols compared across the dimensions that matter when you are designing an agent system:

  • Problem solved: MCP handles agent-to-tool connection; A2A handles agent-to-agent task delegation; ANP handles cross-network agent discovery and communication.
  • Architecture: MCP is client-server (host → client → server); A2A is client-server (client → remote agent); ANP is peer-to-peer (agent ↔ agent).
  • Transport: MCP uses stdio or Streamable HTTP; A2A uses HTTPS (JSON-RPC 2.0), SSE, and gRPC; ANP uses HTTPS, negotiable via its meta-protocol.
  • Identity model: MCP server identity is implicit (configured by the host); A2A uses Agent Cards (self-published JSON); ANP uses W3C DIDs (did:wba).
  • Data format: MCP uses JSON-RPC 2.0; A2A uses JSON-RPC 2.0 with Parts (text, files, structured data); ANP uses JSON-LD (semantic, linked data).
  • Discovery: MCP relies on manual configuration or registry lookup; A2A publishes /.well-known/agent-card.json; ANP combines DID resolution with decentralized indexing.
  • Governance: MCP and A2A sit under AAIF / Linux Foundation; ANP is an independent W3C Community Group.
  • Spec version: MCP 2025-11-25; A2A v1.0.0; ANP is at the white paper stage.
  • Maturity: MCP is in production (97M+ monthly SDK downloads); A2A is in production (v1.0.0, major vendor SDKs); ANP is in early development (no production SDKs).
  • Use case: MCP gives your agent access to databases, APIs, and files; A2A lets your agent delegate work to specialized agents; ANP lets agents find each other across the open internet.

How do these protocols work together?

These protocols are not alternatives. They are layers. Take a concrete example.

Say you run a coding agent on your VPS. It needs to:

  1. Read files from your Git repository
  2. Query your project's database for schema information
  3. Ask a separate review agent to check its work
  4. Find a deployment agent run by your client's infrastructure team

The protocols stack like this:

┌─────────────────────────────────────────────────────────┐
│                        Your VPS                         │
│                                                         │
│  ┌──────────────┐    MCP    ┌─────────────────────────┐ │
│  │              │◄─────────►│ MCP Server: Git tools   │ │
│  │  Coding      │           └─────────────────────────┘ │
│  │  Agent       │    MCP    ┌─────────────────────────┐ │
│  │  (Host)      │◄─────────►│ MCP Server: DB schema   │ │
│  │              │           └─────────────────────────┘ │
│  │              │                                       │
│  │              │    A2A    ┌─────────────────────────┐ │
│  │              │──────────►│ Review Agent (A2A)      │ │
│  └──────┬───────┘           └─────────────────────────┘ │
│         │                                               │
└─────────┼───────────────────────────────────────────────┘
          │
          │  ANP (cross-network discovery)
          ▼
┌──────────────────────┐
│ Client's Deploy      │
│ Agent (discovered    │
│ via DID resolution)  │
└──────────────────────┘

Layer 1 (MCP): The coding agent uses MCP clients to connect to local MCP servers for Git operations and database queries. These are tool integrations. The agent calls functions and reads data.

Layer 2 (A2A): The coding agent delegates code review to a separate review agent running on the same server (or a different one). It sends a task via A2A, the review agent works asynchronously, and streams results back. The coding agent does not know what model or framework the review agent uses.

Layer 3 (ANP): The coding agent needs to find a deployment agent it has never interacted with before, run by a different organization. ANP's DID-based discovery locates the agent, verifies its identity, and negotiates a communication protocol.

For most self-hosted setups today, you only need MCP. Add A2A when you have multiple specialized agents that need to collaborate. ANP is not useful yet for production, but it will matter when agent ecosystems span organizational boundaries.

Which protocol should you use?

Start with the simplest protocol that solves your problem. Add layers only when you hit a limitation.

Decision guide:

  1. Do you need your agent to access tools, databases, or APIs? Yes → Implement MCP. Build or install MCP servers for your data sources. This covers 80% of agent integration needs.

  2. Do you have multiple agents that need to delegate tasks to each other? Yes → Add A2A. Publish Agent Cards for each agent. Use A2A for task delegation and result streaming. No → You do not need A2A. If you have one agent calling APIs through MCP, that is enough.

  3. Do your agents need to discover unknown agents across organizational boundaries? Yes → Evaluate ANP when production SDKs are available. Today, you would handle this with a manual registry or a shared A2A agent directory. No → Skip ANP for now.

Common patterns:

  • Single agent + tools (most projects): MCP only
  • Multiple specialized agents on one server: MCP + A2A
  • Multi-org agent collaboration: MCP + A2A + ANP (when mature)
  • Agent marketplace / open discovery: A2A + ANP

If you are just starting with AI agents on a VPS, begin with MCP. Get one agent connected to one tool. Make it work. Then scale the architecture as your needs grow.

What are the security risks of each protocol?

Each protocol opens a different attack surface. If you are self-hosting agents, these are the threats that matter.

  • MCP: Server-Side Request Forgery (SSRF). A malicious prompt tricks the agent into calling an MCP tool that makes requests to internal services (metadata endpoints, databases, admin panels). Mitigation: run MCP servers in isolated network namespaces, restrict outbound connections with firewall rules, and validate tool inputs on the server side.
  • MCP: Untrusted tool descriptions. MCP tool annotations (descriptions, parameter schemas) come from the server, so a compromised server can lie about what a tool does to manipulate the LLM. Mitigation: only connect to MCP servers you control or trust, and review tool descriptions. The MCP spec explicitly marks tool annotations as untrusted.
  • A2A: Agent impersonation. Without signed Agent Cards, an attacker can publish a fake Agent Card at a known URL and intercept tasks meant for a legitimate agent. Mitigation: use A2A's digital signature feature on Agent Cards (added in v0.3), verify signatures before sending tasks, and pin known agent endpoints.
  • A2A: Task data exfiltration. Tasks can contain sensitive data (code, credentials, business logic); if the remote agent is compromised, that data leaks. Mitigation: encrypt sensitive task payloads at the application layer, use mutual TLS between agents, and minimize data sent in tasks.
  • ANP: DID trust bootstrapping. The did:wba method relies on HTTPS-hosted DID documents, so if a domain is compromised, all agent identities on that domain are compromised. Mitigation: use separate domains for agent identity, monitor DID document changes, and implement DID document pinning for known agents.
  • ANP: Meta-protocol abuse. The negotiation layer could be exploited to trick an agent into using an insecure or malicious communication protocol. Mitigation: restrict meta-protocol negotiation to a whitelist of known protocols, and log and audit all protocol negotiations.
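As one concrete example of the SSRF mitigations above, an MCP server can validate a user-supplied URL before fetching it, rejecting anything that resolves to a private, loopback, or link-local address. A sketch of that single layer of defense (network isolation and firewall rules still apply):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Server-side check before an MCP tool fetches a URL.

    Rejects non-HTTP schemes and any hostname that resolves to a
    private, loopback, or link-local address (common SSRF targets).
    """
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False  # unresolvable: refuse rather than guess
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local:
            return False
    return True

print(is_safe_url("http://127.0.0.1/admin"))  # False: loopback
print(is_safe_url("file:///etc/passwd"))      # False: bad scheme
```

Note that a resolve-then-fetch gap (DNS rebinding) can still bypass checks like this; pinning the resolved address for the actual request closes it.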


Where is governance heading?

Both MCP and A2A sit under the Agentic AI Foundation (AAIF), part of the Linux Foundation. AAIF was established in December 2025 by three co-founders: Anthropic, OpenAI, and Block. Google, Microsoft, AWS, Bloomberg, and Cloudflare joined as platinum members. No single vendor controls the protocol direction.

AAIF also hosts goose (Block's open-source agent framework) and AGENTS.md (OpenAI's standard for giving AI agents project-specific guidance).

ANP is governed independently through a W3C Community Group. It is not part of AAIF. Whether ANP eventually joins AAIF or remains independent will affect its adoption trajectory.

The governance split matters for one practical reason: MCP and A2A will evolve together under coordinated stewardship. ANP will evolve on its own timeline. If you are making architectural bets, MCP and A2A carry lower governance risk.


Copyright 2026 Virtua.Cloud. All rights reserved. This content is original work by the Virtua.Cloud team. Reproduction, republication, or redistribution without written permission is prohibited.

Ready to try it yourself?

Deploy your own server in seconds. Linux, Windows, or FreeBSD.

See VPS Plans