Building Your First A2A Pipeline: A Practical Walkthrough
The Agent-to-Agent (A2A) protocol is live. Here's how to wire up two agents that actually talk to each other.
What A2A actually solves
MCP connects agents to tools. A2A connects agents to other agents. This distinction matters because agents are not tools — they have their own context, their own goals, and their own failure modes. Treating an agent like a tool call (fire and forget) doesn't work when the agent needs minutes to complete a task, might ask clarifying questions, or could reject the task entirely.
A2A gives you a protocol for this: discovery, capability negotiation, task assignment, progress tracking, and result delivery.
The architecture
An A2A system has three layers:
1. Agent Cards — JSON documents that describe what an agent can do, what inputs it accepts, and how to reach it. Think of these as the agent's resume.
2. Task lifecycle — A structured flow: submitted → working → input-required → completed (or failed). Both agents can track progress.
3. Messaging — Structured messages between agents, including text, files, and structured data.
Step 1: Define your agent card
Every A2A agent publishes an agent card at `/.well-known/agent.json`. This is how other agents discover it.
```json
{
  "name": "Research Agent",
  "description": "Searches the web and summarizes findings on any topic.",
  "url": "https://research-agent.example.com",
  "capabilities": {
    "streaming": true,
    "pushNotifications": false
  },
  "skills": [
    {
      "id": "web-research",
      "name": "Web Research",
      "description": "Given a topic, returns a structured summary with citations.",
      "inputModes": ["text/plain"],
      "outputModes": ["text/markdown", "application/json"]
    }
  ]
}
```

Key decisions:
- Be specific about input/output modes. "text/plain" and "application/json" are different contracts.
- Don't list capabilities you haven't tested. If your agent can't stream, don't claim it can.
- The description field is what other agents (and their LLMs) will read to decide whether to use you. Write it for machines, not humans.
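As a minimal sketch of publishing the card, here is a standard-library Python server that answers the well-known path. The card contents mirror the example above; everything else (port, handler name) is an illustrative assumption:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Agent card mirroring the example above; adjust the fields for your agent.
AGENT_CARD = {
    "name": "Research Agent",
    "description": "Searches the web and summarizes findings on any topic.",
    "url": "https://research-agent.example.com",
    "capabilities": {"streaming": True, "pushNotifications": False},
    "skills": [{
        "id": "web-research",
        "name": "Web Research",
        "description": "Given a topic, returns a structured summary with citations.",
        "inputModes": ["text/plain"],
        "outputModes": ["text/markdown", "application/json"],
    }],
}

class CardHandler(BaseHTTPRequestHandler):
    """Serves the agent card at the A2A discovery path."""

    def do_GET(self):
        if self.path == "/.well-known/agent.json":
            body = json.dumps(AGENT_CARD).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

# To run: HTTPServer(("0.0.0.0", 8000), CardHandler).serve_forever()
```

In production you'd serve this from whatever web framework your agent already uses; the only contract is that `GET /.well-known/agent.json` returns the card.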
Step 2: Implement the task endpoint
Your agent needs to accept tasks via HTTP POST. The minimum viable implementation:
```
POST /tasks/send

{
  "jsonrpc": "2.0",
  "method": "tasks/send",
  "params": {
    "id": "task-uuid-here",
    "message": {
      "role": "user",
      "parts": [{ "type": "text", "text": "Research the current state of MCP adoption in enterprise" }]
    }
  }
}
```

Your agent receives this, processes the request, and returns a task object with a status.
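A sketch of the receiving side, assuming a `run_skill` helper that stands in for your agent's real work (here it just echoes the prompt so the example is self-contained):

```python
import uuid

def run_skill(prompt: str) -> str:
    # Placeholder: a real agent would call its model and tools here.
    return f"Summary for: {prompt}"

def handle_tasks_send(request: dict) -> dict:
    """Handle a JSON-RPC tasks/send call and return a completed task object."""
    params = request["params"]
    task_id = params.get("id") or str(uuid.uuid4())

    # Collect the text parts of the incoming message into one prompt.
    parts = params["message"]["parts"]
    prompt = " ".join(p["text"] for p in parts if p.get("type") == "text")

    result_text = run_skill(prompt)

    return {
        "jsonrpc": "2.0",
        "result": {
            "id": task_id,
            "status": {"state": "completed"},
            "artifacts": [{"parts": [{"type": "text", "text": result_text}]}],
        },
    }
```

This version completes synchronously; a real agent that needs minutes would return `"working"` here and let the caller poll, which is what the lifecycle in the next step is for.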
Step 3: Handle the task lifecycle
This is where most implementations get interesting. A task goes through states:
- submitted — received, not yet started
- working — agent is processing
- input-required — agent needs more information from the caller
- completed — done, results attached
- failed — something went wrong
- canceled — caller or agent canceled
The `input-required` state is what separates A2A from a simple API call. Your research agent might come back with: "I found three conflicting sources. Which angle do you want me to prioritize?" The calling agent then responds with additional context, and the task resumes.
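The lifecycle above can be enforced with a small state machine. The transition table here is an assumption inferred from the states listed, not a normative part of the spec:

```python
from enum import Enum

class TaskState(str, Enum):
    SUBMITTED = "submitted"
    WORKING = "working"
    INPUT_REQUIRED = "input-required"
    COMPLETED = "completed"
    FAILED = "failed"
    CANCELED = "canceled"

# Allowed transitions (an assumption drawn from the states above;
# completed/failed/canceled are terminal).
TRANSITIONS = {
    TaskState.SUBMITTED: {TaskState.WORKING, TaskState.CANCELED},
    TaskState.WORKING: {TaskState.INPUT_REQUIRED, TaskState.COMPLETED,
                        TaskState.FAILED, TaskState.CANCELED},
    TaskState.INPUT_REQUIRED: {TaskState.WORKING, TaskState.CANCELED},
    TaskState.COMPLETED: set(),
    TaskState.FAILED: set(),
    TaskState.CANCELED: set(),
}

def advance(current: TaskState, target: TaskState) -> TaskState:
    """Move a task to a new state, rejecting illegal transitions."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.value} -> {target.value}")
    return target
```

Guarding transitions like this catches bugs early, e.g. a handler that tries to resume a task the caller already canceled.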
Step 4: Wire two agents together
The caller agent needs to:
1. Discover the target agent (fetch its agent card)
2. Check that the target has the skill it needs
3. Submit a task
4. Poll for status (or listen for streaming updates)
5. Handle input-required states by providing additional context
6. Process the completed result
Here's the conceptual flow:
```
Orchestrator Agent                       Research Agent
    |                                        |
    |--- GET /.well-known/agent.json ------->|
    |<-- agent card with skills -------------|
    |                                        |
    |--- POST /tasks/send ------------------>|
    |<-- { status: "working" } --------------|
    |                                        |
    |--- GET /tasks/{id} ------------------->|
    |<-- { status: "input-required" } -------|
    |                                        |
    |--- POST /tasks/send (follow-up) ------>|
    |<-- { status: "completed" } ------------|
```

Step 5: Error handling
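The caller side of this flow reduces to a poll loop. In this sketch, `send_task`, `get_task`, and `answer_input` are injected callables standing in for your HTTP layer and your follow-up logic; none of them are part of the A2A spec itself:

```python
import time

def run_task(send_task, get_task, answer_input, message,
             poll_interval=1.0, max_rounds=5):
    """Drive an A2A task to a terminal state from the caller side.

    send_task(message) -> task dict, get_task(task_id) -> task dict, and
    answer_input(task) -> follow-up message are injected so the HTTP
    layer stays out of the sketch.
    """
    task = send_task(message)
    rounds = 0
    while True:
        state = task["status"]["state"]
        if state in ("completed", "failed", "canceled"):
            return task
        if state == "input-required":
            rounds += 1
            if rounds > max_rounds:
                raise RuntimeError("too many input-required rounds")
            # Resume the task by sending a follow-up message.
            task = send_task(answer_input(task))
        else:  # submitted / working: wait and poll again
            time.sleep(poll_interval)
            task = get_task(task["id"])
```

If the target advertises `"streaming": true` in its card, you'd replace the polling branch with a streaming subscription, but the terminal-state and `input-required` handling stays the same.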
A2A tasks can fail for many reasons. Your calling agent needs to handle:
- Task rejected: The target agent can decline tasks outside its capabilities.
- Task timeout: Set a deadline. If the target doesn't complete in time, cancel and try an alternative.
- Input-required loop: The target keeps asking for more context. Set a max interaction count.
- Network failure: The target agent goes offline mid-task. Implement retry with the same task ID.
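Two of these failure modes can be sketched directly: retrying a network call with the same task ID, and enforcing a deadline with cancellation. The function names and parameters are illustrative assumptions, not spec-defined APIs:

```python
import time

def call_with_retry(fn, task_id, attempts=3, backoff=0.5):
    """Retry a network call, reusing the same task ID so the target
    can deduplicate; re-raises after the last attempt."""
    for attempt in range(attempts):
        try:
            return fn(task_id)
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            time.sleep(backoff * (2 ** attempt))  # exponential backoff

def with_deadline(get_task, cancel_task, task_id, deadline_s, poll=1.0):
    """Poll a task; cancel it if it misses the deadline."""
    start = time.monotonic()
    while time.monotonic() - start < deadline_s:
        task = get_task(task_id)
        if task["status"]["state"] in ("completed", "failed", "canceled"):
            return task
        time.sleep(poll)
    cancel_task(task_id)
    raise TimeoutError(f"task {task_id} exceeded {deadline_s}s deadline")
```

The input-required cap from the previous step plus these two guards cover the whole list above except task rejection, which is just a terminal `failed` (or declined) response you surface to the orchestrator.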
What A2A doesn't solve
- Trust. A2A doesn't include authentication or authorization. You need to implement that yourself.
- Payment. If one agent calls another agent's API, who pays? No protocol-level answer yet.
- Quality. An agent card says what an agent can do. It doesn't say how well it does it. That's what Agentifact scores are for.
Getting started
Don't try to build a generic A2A platform. Start with two specific agents that need to talk to each other. Get the agent cards right, implement the task lifecycle, and handle errors. Once that works, adding more agents is mechanical.
Check our A2A Protocol profile and the framework comparison to pick the right tools for your implementation.