# agent-link-mcp
MCP server for bidirectional AI agent collaboration. Spawn and communicate with any AI coding agent CLI — Claude Code, Codex, Gemini, Aider, and more.
## When to Use

- **Stuck on a bug?** — Your agent tried twice and failed. Let it ask another agent for a fresh perspective.
- **Need a second opinion?** — Get code review or architectural advice from a different AI model.
- **Cross-model strengths** — Use Claude for planning, Codex for execution, Gemini for research.
- **Parallel work** — Spawn multiple agents to tackle independent subtasks simultaneously.
- **Rubber duck debugging** — Have one agent explain the problem to another and get back a solution.
## Use Cases

### Get Help When Stuck
Is your primary agent failing repeatedly on the same issue? Ask another agent:
```
# Claude Code is stuck on a TypeScript error it can't resolve.
# It spawns Codex for a second opinion:
spawn_agent("codex", "This TypeScript error keeps appearing. How do I fix it?", {
  error: "Type 'string' is not assignable to type 'number'",
  files: ["src/utils.ts"]
})
```

### Cross-Agent Code Review
Have another model review your agent's code changes:
```
spawn_agent("claude", "Review these changes for bugs and edge cases", {
  files: ["src/api.ts", "src/handler.ts"],
  intent: "Code review before merge"
})
```

### Multi-Agent Pipeline
Build a pipeline where agents handle different stages:
```
# Agent 1: Research
spawn_agent("gemini", "Find the best approach for WebSocket reconnection")

# Agent 2: Implementation (using Agent 1's advice)
spawn_agent("codex", "Implement WebSocket reconnection with exponential backoff", {
  files: ["src/ws-client.ts"]
})

# Agent 3: Review
spawn_agent("claude", "Review this implementation for production readiness", {
  files: ["src/ws-client.ts"]
})
```

### Bidirectional Collaboration
Agents can ask questions back. The host answers, and work continues:
```
Host:  spawn_agent("codex", "Add caching to the API layer")
Codex: [QUESTION] Should I use Redis or in-memory cache?
Host:  reply("codex-a1b2c3", "Use Redis, we have it in our docker-compose")
Codex: [RESULT] Added Redis caching with 5-minute TTL...
```

## Why
AI coding agents get stuck sometimes. Instead of waiting for you, they can ask another agent for help. agent-link-mcp lets any MCP-compatible agent spawn other agent CLIs as collaborators, exchange questions, and get results back — all through standard MCP tools.
- **One-side install** — only the host agent needs this MCP server. Spawned agents are just CLI subprocesses.
- **Bidirectional** — the host can ask questions to the spawned agent, and the spawned agent can ask questions back.
- **Any agent** — works with any CLI that accepts a prompt and returns text. Built-in profiles for Claude, Codex, Gemini, and Aider.
- **Multi-agent** — spawn multiple agents simultaneously for parallel collaboration.
## Prerequisites
agent-link-mcp spawns other AI agents as CLI subprocesses. You need to install and authenticate the agent CLIs you want to collaborate with:
| Agent | Install | Auth |
|---|---|---|
| Claude Code | | |
| Codex | | |
| Gemini CLI | | |
| Aider | | Set |
You only need the ones you plan to use. agent-link-mcp auto-detects which CLIs are installed.
## Install
```
# Claude Code
claude mcp add agent-link npx agent-link-mcp

# Codex
codex mcp add agent-link npx agent-link-mcp

# Any MCP client
npx agent-link-mcp
```

**Note:** Only the agent you're working in needs this MCP server installed. The other agents are spawned as subprocesses — they don't need agent-link-mcp.
## Tools

### spawn_agent
Spawn an agent and send it a task.
```json
{
  "agent": "codex",
  "task": "Refactor this function for better performance",
  "context": {
    "files": ["src/utils.ts"],
    "error": "TypeError: Cannot read property 'x' of undefined",
    "intent": "Performance improvement"
  },
  "model": "o3",
  "timeoutMs": 7200000
}
```

| Parameter | Type | Default | Description |
|---|---|---|---|
| `agent` | string | required | Agent name (`claude`, `codex`, `gemini`, `aider`) |
| `task` | string | required | Task description |
| `context` | object | — | Optional |
| `cwd` | string | cwd | Working directory for the agent process |
| `model` | string | — | Model to use (e.g. `o3`) |
| `timeoutMs` | number | 3600000 | Timeout in ms. Default: 1 hour. |
Returns one of:
- `{ status: "done", agentId: "codex-a1b2c3", result: "..." }` — task completed
- `{ status: "waiting_for_reply", agentId: "codex-a1b2c3", question: "..." }` — agent needs clarification
- `{ error: "...", agentId: "codex-a1b2c3" }` — something went wrong
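As a rough TypeScript sketch, these three shapes can be modeled as a discriminated union the host branches on. The `SpawnResult` type and `nextAction` helper below are illustrative names, not part of the server's API:

```typescript
// Hypothetical union mirroring the three documented return shapes.
type SpawnResult =
  | { status: "done"; agentId: string; result: string }
  | { status: "waiting_for_reply"; agentId: string; question: string }
  | { error: string; agentId: string };

// Decide the host's next step from a spawn_agent response.
function nextAction(r: SpawnResult): string {
  if ("error" in r) return `abort: ${r.error}`;
  if (r.status === "waiting_for_reply") return `reply to ${r.agentId}`;
  return `use result from ${r.agentId}`;
}

console.log(nextAction({ status: "waiting_for_reply", agentId: "codex-a1b2c3", question: "Redis or in-memory?" }));
// → reply to codex-a1b2c3
```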
### reply
Answer a spawned agent's question and continue the conversation.
```json
{
  "agentId": "codex-a1b2c3",
  "message": "Yes, you can remove the side effects"
}
```

### kill_agent
Abort a running agent session.
```json
{
  "agentId": "codex-a1b2c3"
}
```

### list_agents
List available agent CLIs.
```json
{
  "agents": [
    { "name": "claude", "command": "claude", "source": "auto", "available": true },
    { "name": "codex", "command": "codex", "source": "auto", "available": true },
    { "name": "gemini", "command": "gemini", "source": "auto", "available": false }
  ]
}
```

### get_status
Get active agent sessions.
```json
{
  "sessions": [
    { "agentId": "codex-a1b2c3", "agent": "codex", "status": "waiting_for_reply", "startedAt": "..." }
  ]
}
```

## How It Works
```
You (using Claude Code)
  ↓
"Ask Codex to help with this refactoring"
  ↓
Claude Code → spawn_agent("codex", task, context)
  ↓
agent-link-mcp server → spawns `codex` CLI as subprocess
  ↓
Codex processes the task...
  ↓
Codex: "[QUESTION] Should I remove the side effects?"
  ↓
agent-link-mcp → parses response → returns to Claude Code
  ↓
Claude Code → reply("codex-a1b2c3", "Yes, remove them")
  ↓
agent-link-mcp → re-invokes Codex with accumulated context
  ↓
Codex: "[RESULT] Refactoring complete. Here's what I changed..."
  ↓
Claude Code receives the result and continues working
```

## Configuration
### Auto-detection
agent-link-mcp automatically detects installed agent CLIs:
| Agent | CLI Command |
|---|---|
| Claude Code | `claude` |
| Codex | `codex` |
| Gemini | `gemini` |
| Aider | `aider` |
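Detection of this kind can be sketched by probing `PATH` for each profile's command. The `isAvailable` helper below is a hypothetical illustration, not the server's actual detection code, and assumes a POSIX `which` (Windows would use `where`):

```typescript
import { spawnSync } from "node:child_process";

// A command counts as available if `which` can resolve it on PATH.
function isAvailable(command: string): boolean {
  return spawnSync("which", [command], { stdio: "ignore" }).status === 0;
}

// Built-in profiles map agent names to CLI commands.
const profiles = { claude: "claude", codex: "codex", gemini: "gemini", aider: "aider" };

for (const [name, cmd] of Object.entries(profiles)) {
  console.log(`${name}: ${isAvailable(cmd)}`);
}
```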
### Custom agents

Add custom agents via a config file at `~/.agent-link/config.json`:
```json
{
  "agents": {
    "codex": {
      "command": "/usr/local/bin/codex",
      "args": ["--full-auto"],
      "promptFlag": null,
      "outputFormat": "text"
    },
    "my-local-llm": {
      "command": "ollama",
      "args": ["run", "codellama"],
      "promptFlag": null,
      "outputFormat": "text"
    }
  }
}
```

Override the config path with the `AGENT_LINK_CONFIG` environment variable.
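A minimal sketch of how such a config might be loaded, honoring the `AGENT_LINK_CONFIG` override. The `AgentProfile` interface and helper names are assumptions mirroring the fields above, not the server's actual code:

```typescript
import { readFileSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

// Hypothetical shape of one entry in ~/.agent-link/config.json.
interface AgentProfile {
  command: string;
  args?: string[];
  promptFlag?: string | null;
  outputFormat?: "text" | "json";
}

// Resolve the config path, preferring the AGENT_LINK_CONFIG override.
function configPath(): string {
  return process.env.AGENT_LINK_CONFIG ?? join(homedir(), ".agent-link", "config.json");
}

// Load custom agent profiles; a missing or unreadable file means "no overrides".
function loadProfiles(): Record<string, AgentProfile> {
  try {
    return JSON.parse(readFileSync(configPath(), "utf8")).agents ?? {};
  } catch {
    return {};
  }
}
```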
### Model Selection

You can specify which model the spawned agent should use via the `model` parameter:
```
# Use a specific model for Codex
spawn_agent("codex", "Debug this issue", { model: "o3" })

# Use a specific model for Claude
spawn_agent("claude", "Review this code", { model: "claude-sonnet-4" })
```

The model name is passed to the agent CLI via its `--model` flag. If omitted, the agent uses its default model.
### Timeout

The default timeout is 1 hour (3,600,000 ms). You can override it per call:
```
# 2-hour timeout for complex tasks
spawn_agent("codex", "Refactor the entire auth system", { timeoutMs: 7200000 })
```

## Conversation Protocol
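Per-call timeouts of this sort are commonly enforced by racing the work against a timer. A generic TypeScript sketch (not the server's actual implementation):

```typescript
// Reject if `work` does not settle within timeoutMs; otherwise pass its result through.
function withTimeout<T>(work: Promise<T>, timeoutMs: number): Promise<T> {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => reject(new Error(`timed out after ${timeoutMs}ms`)), timeoutMs);
    work.then(
      (v) => { clearTimeout(timer); resolve(v); },
      (e) => { clearTimeout(timer); reject(e); },
    );
  });
}
```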
Spawned agents receive instructions to format their responses:
- `[QUESTION] ...` — needs clarification from the host agent
- `[RESULT] ...` — task completed
If the agent doesn't follow the format, the entire output is treated as a result.
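A minimal TypeScript sketch of a parser for this protocol, including the whole-output fallback (illustrative, not the server's implementation):

```typescript
// A parsed agent response: either a question for the host or a final result.
type Parsed =
  | { kind: "question"; text: string }
  | { kind: "result"; text: string };

function parseAgentOutput(output: string): Parsed {
  const trimmed = output.trim();
  if (trimmed.startsWith("[QUESTION]")) {
    return { kind: "question", text: trimmed.slice("[QUESTION]".length).trim() };
  }
  if (trimmed.startsWith("[RESULT]")) {
    return { kind: "result", text: trimmed.slice("[RESULT]".length).trim() };
  }
  // No marker: treat the entire output as a result.
  return { kind: "result", text: trimmed };
}
```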
## License
MIT