# Orchestration MCP

TypeScript MCP server for launching and tracking external coding-agent runs.
The MCP surface stays stable while the internal execution backend can target:

- `codex` (local)
- `claude_code` (local)
- `remote_a2a` (remote)

This lets a top-level agent call one MCP toolset while the orchestration layer decides whether subagents are local SDK processes or remote A2A-compatible agents.
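The split between a stable tool surface and swappable backends can be sketched as a single adapter interface selected per call. The interface, `makeAdapter` helper, and run-id format below are illustrative assumptions, not the server's actual internals:

```typescript
// Illustrative sketch: one adapter surface, three execution targets.
// `BackendAdapter` and `makeAdapter` are hypothetical names for this sketch.
interface BackendAdapter {
  kind: "codex" | "claude_code" | "remote_a2a";
  spawn(prompt: string, cwd: string): string; // returns a run id
}

function makeAdapter(kind: BackendAdapter["kind"]): BackendAdapter {
  // All three backends share one surface; only the execution target differs,
  // so callers of the MCP toolset never change.
  return {
    kind,
    // Toy run-id scheme purely for demonstration.
    spawn: (prompt, cwd) => `${kind}-run-${prompt.length}-${cwd.length}`,
  };
}

const adapter = makeAdapter("remote_a2a");
console.log(adapter.spawn("inspect repo", "/abs/path"));
```

The top-level agent only ever sees the `spawn`-style surface; whether the run lands on a local SDK process or a remote A2A agent is the orchestration layer's decision.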
## Install And Build

```sh
cd orchestration-mcp
npm install
npm run build
```

## Run The MCP Server

```sh
cd orchestration-mcp
npm start
```

This starts the MCP server from `dist/index.js`.
## Codex MCP Config Example

If you want Codex to load this MCP server, add an entry like this to `~/.codex/config.toml`:

```toml
[mcp_servers.orchestration-mcp]
command = "node"
args = ["/abs/path/to/orchestration-mcp/dist/index.js"]
enabled = true
```

Example using this repository path:

```toml
[mcp_servers.orchestration-mcp]
command = "node"
args = ["/Users/fonsh/PycharmProjects/Treer/nanobot/orchestration-mcp/dist/index.js"]
enabled = true
```

After updating the config, restart Codex so it reloads MCP servers.
## What The MCP Exposes

The server registers these tools:

- `spawn_run`
- `get_run`
- `poll_events`
- `cancel_run`
- `continue_run`
- `list_runs`
- `get_event_artifact`
## Typical MCP Flow

1. Call `spawn_run` to create a subagent run.
2. Call `poll_events` until you see a terminal event or a waiting state.
3. If the run enters `input_required` or `auth_required`, call `continue_run`.
4. Call `get_run` for the latest run summary.
5. If an event contains `artifact_refs`, call `get_event_artifact` to fetch the full payload.
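The flow above can be sketched as a driver loop. The `callTool` helper and the mock responses below are assumptions for illustration (a real client would forward to the MCP server); only the tool names and states come from the flow described:

```typescript
// Generic tool-call signature; a real implementation would forward to the
// MCP client. Mocked here so the control flow runs standalone.
type ToolCall = (name: string, args: Record<string, unknown>) => any;

// Mock backend: the run emits one progress event, then completes.
const mockCallTool: ToolCall = (() => {
  let polls = 0;
  return (name, args) => {
    switch (name) {
      case "spawn_run":
        return { run_id: "run-1", state: "running" };
      case "poll_events":
        polls += 1;
        return polls < 2
          ? { events: [{ seq: 1, type: "progress" }], state: "running" }
          : { events: [{ seq: 2, type: "completed" }], state: "completed" };
      case "get_run":
        return { run_id: args.run_id, state: "completed" };
      default:
        throw new Error(`unknown tool ${name}`);
    }
  };
})();

function runSubagent(callTool: ToolCall): string {
  // 1. Create the subagent run.
  const { run_id } = callTool("spawn_run", {
    backend: "codex",
    role: "worker",
    prompt: "Summarize the architecture.",
    cwd: "/abs/path/to/project",
    session_mode: "new",
  });
  // 2. Poll until a terminal or waiting state.
  let state = "running";
  while (state === "running") {
    state = callTool("poll_events", { run_id }).state;
    // 3. A real loop would call continue_run on input_required/auth_required.
  }
  // 4. Fetch the latest run summary.
  return callTool("get_run", { run_id }).state;
}

console.log(runSubagent(mockCallTool)); // prints "completed"
```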
### spawn_run notes

- `backend`: `"codex"`, `"claude_code"`, or `"remote_a2a"`
- `role`: orchestration role label such as `planner`, `worker`, or `reviewer`
- `prompt`: plain-text instruction for simple runs
- `input_message`: optional structured message for multipart/A2A-style inputs
- `cwd`: absolute working directory
- `session_mode`: `new` or `resume`
- `session_id`: required when resuming a prior session
- `profile`: optional path to a persona/job-description file for future profile-driven behavior. Unless you are explicitly instructed to use a profile, leave `profile` empty.
- `output_schema`: optional JSON Schema for structured final output
- `metadata`: optional orchestration metadata stored for correlation and auditing
- `backend_config`: optional backend-specific settings. For `remote_a2a`, set `agent_url` and any auth headers/tokens here.
For all backends, `cwd` is the orchestration-side working directory used for run/session storage.

For `remote_a2a`, `spawn_run.cwd` is also forwarded to the remote subagent and becomes that A2A task context's execution directory.

At least one of `prompt` or `input_message` is required.
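These constraints can be checked client-side before calling the tool. The validator below is a minimal sketch, not the server's own validation; the field names follow the notes above, while the Unix-style absolute-path check is a simplifying assumption:

```typescript
// Hypothetical argument shape mirroring the spawn_run notes above.
interface SpawnRunArgs {
  backend: "codex" | "claude_code" | "remote_a2a";
  role: string;
  cwd: string;
  session_mode: "new" | "resume";
  session_id?: string;
  prompt?: string;
  input_message?: unknown;
}

function validateSpawnRun(args: SpawnRunArgs): string[] {
  const errors: string[] = [];
  if (!args.prompt && !args.input_message) {
    errors.push("one of prompt or input_message is required");
  }
  if (args.session_mode === "resume" && !args.session_id) {
    errors.push("session_id is required when session_mode is resume");
  }
  // Simplification: treats only Unix-style paths as absolute.
  if (!args.cwd.startsWith("/")) {
    errors.push("cwd must be an absolute path");
  }
  return errors;
}

// Reports missing prompt/input_message, missing session_id, relative cwd.
console.log(validateSpawnRun({
  backend: "codex",
  role: "worker",
  cwd: "relative/path",
  session_mode: "resume",
}));
```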
Simple example:

```json
{
  "backend": "codex",
  "role": "worker",
  "prompt": "Inspect the repository and summarize the architecture.",
  "cwd": "/abs/path/to/project",
  "session_mode": "new"
}
```

Remote A2A example:

```json
{
  "backend": "remote_a2a",
  "role": "worker",
  "prompt": "Inspect the repository and summarize the architecture.",
  "cwd": "/abs/path/to/project",
  "session_mode": "new",
  "backend_config": {
    "agent_url": "http://127.0.0.1:53552"
  }
}
```

### continue_run notes
Use `continue_run` when a run enters `input_required` or `auth_required` and the backend supports interactive continuation.

Inputs:

- `run_id`
- `input_message`
### get_event_artifact notes

Use `get_event_artifact` when a sanitized event returned by `poll_events` contains `event.data.artifact_refs` and you need the full original payload.

Inputs:

- `run_id`
- `seq`
- `field_path`: JSON Pointer relative to `event.data`, for example `/stdout`, `/raw_tool_use_result`, or `/input/content`
- `offset`: optional byte offset, default `0`
- `limit`: optional byte limit, default `65536`
Typical flow:

1. Call `poll_events`.
2. Inspect `event.data.artifact_refs` on any sanitized event.
3. Call `get_event_artifact` with the same `run_id`, the event `seq`, and one of the exposed `field_path` values.
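Because `limit` caps each response, large artifacts are fetched by paging with `offset`. The sketch below mocks `get_event_artifact` against an in-memory payload; the `eof` flag and string (rather than binary) payload are assumptions of this sketch, while the 65536 chunk size matches the documented default `limit`:

```typescript
// Stand-in for a large stdout artifact stored on disk.
const payload = "x".repeat(150_000);

// Mock of get_event_artifact: serves byte slices of the payload.
// The `eof` field is a hypothetical convenience for this sketch.
function getEventArtifact(args: {
  run_id: string;
  seq: number;
  field_path: string;
  offset: number;
  limit: number;
}): { data: string; eof: boolean } {
  const data = payload.slice(args.offset, args.offset + args.limit);
  return { data, eof: args.offset + data.length >= payload.length };
}

// Page through the artifact in default-sized chunks and reassemble it.
function fetchWholeArtifact(run_id: string, seq: number, field_path: string): string {
  const chunks: string[] = [];
  let offset = 0;
  while (true) {
    const { data, eof } = getEventArtifact({ run_id, seq, field_path, offset, limit: 65536 });
    chunks.push(data);
    offset += data.length;
    if (eof || data.length === 0) break;
  }
  return chunks.join("");
}

console.log(fetchWholeArtifact("run-1", 8, "/stdout").length); // prints 150000
```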
## Backend defaults

- `codex`: uses the current `@openai/codex-sdk` defaults plus non-interactive execution settings already wired in the adapter
- `claude_code`: uses `@anthropic-ai/claude-agent-sdk` with `permissionMode: "bypassPermissions"` so the MCP call stays non-blocking, and reuses persisted backend session ids for `resume`
- `remote_a2a`: connects to a remote A2A-compatible agent using `@a2a-js/sdk`, streams task updates into normalized orchestration events, and supports `continue_run` for `input_required`

For `claude_code`, make sure the local environment already has a working Claude Code authentication setup before testing.
## Test A2A agents

The repo includes helper modules for local A2A-wrapped test agents:

- `dist/test-agents/codex-a2a-agent.js`
- `dist/test-agents/claude-a2a-agent.js`
- `dist/test-agents/start-a2a-agent.js`

These export startup helpers that wrap the local Codex and Claude SDKs behind an A2A server so the orchestration MCP can test its internal `remote_a2a` backend against realistic subagents.

To start an interactive wrapper launcher:

```sh
npm run start:a2a-agent
```

The script will ask whether to wrap `codex` or `claude_code`. After startup, it prints the `agent_url` and a ready-to-use `spawn_run` payload for the MCP layer. The wrapper no longer locks a working directory at startup. Each `remote_a2a` call uses the `cwd` provided to `spawn_run`, and the wrapper keeps that `cwd` fixed for the lifetime of the same A2A `contextId`.
## Storage

Run data is stored under:

```
<cwd>/.nanobot-orchestrator/
  runs/
    <run_id>/
      run.json
      events.jsonl
      result.json
      artifacts/
        000008-command_finished/
          manifest.json
          stdout.0001.txt
          stdout.0002.txt
  sessions/
    <session_id>.json
```

Notes:

- `events.jsonl` stores sanitized events intended for `poll_events` consumption. Oversized raw payloads are moved into per-event artifact files and referenced from `event.data.artifact_refs`.
- `run.json` and `result.json` keep the current run snapshot and final result behavior.
- The storage directory name is currently `.nanobot-orchestrator/` for backward compatibility with the existing implementation.
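Since `events.jsonl` holds one sanitized event per line, the stored artifact references can be recovered with a simple line-by-line scan. This sketch assumes only the field names described above (`seq`, `data.artifact_refs`, `field_path`) and demonstrates against a throwaway file:

```typescript
import { mkdtempSync, writeFileSync, readFileSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Minimal assumed shape of a sanitized event line.
interface SanitizedEvent {
  seq: number;
  data?: { artifact_refs?: { field_path: string }[] };
}

// Scan an events.jsonl file and collect every artifact reference.
function collectArtifactRefs(eventsPath: string): { seq: number; field_path: string }[] {
  const refs: { seq: number; field_path: string }[] = [];
  for (const line of readFileSync(eventsPath, "utf8").split("\n")) {
    if (!line.trim()) continue; // skip blank trailing lines
    const event: SanitizedEvent = JSON.parse(line);
    for (const ref of event.data?.artifact_refs ?? []) {
      refs.push({ seq: event.seq, field_path: ref.field_path });
    }
  }
  return refs;
}

// Demo with a throwaway events.jsonl.
const dir = mkdtempSync(join(tmpdir(), "orch-"));
const file = join(dir, "events.jsonl");
writeFileSync(file, [
  JSON.stringify({ seq: 7, data: {} }),
  JSON.stringify({ seq: 8, data: { artifact_refs: [{ field_path: "/stdout" }] } }),
].join("\n"));
console.log(collectArtifactRefs(file)); // one ref, from event seq 8
```

Each collected `(seq, field_path)` pair is exactly what `get_event_artifact` needs to fetch the full payload.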