Orchestration MCP
A TypeScript MCP server for launching, tracking, and managing external coding-agent runs across local and remote backends like Codex and Claude Code. It allows top-level agents to orchestrate subagents through tools for spawning tasks, polling events, and handling interactive sessions.
The MCP surface stays stable while the internal execution backend can target:
- `codex` (local)
- `claude_code` (local)
- `remote_a2a` (remote)
This lets a top-level agent call one MCP toolset while the orchestration layer decides whether subagents are local SDK processes or remote A2A-compatible agents.
Install And Build
```shell
cd orchestration-mcp
npm install
npm run build
```
Run The MCP Server
```shell
cd orchestration-mcp
npm start
```
This starts the MCP server from `dist/index.js`.
Codex MCP Config Example
If you want Codex to load this MCP server, add an entry like this to `~/.codex/config.toml`:
```toml
[mcp_servers.orchestration-mcp]
command = "node"
args = ["/abs/path/to/orchestration-mcp/dist/index.js"]
enabled = true
```
Example using this repository path:
```toml
[mcp_servers.orchestration-mcp]
command = "node"
args = ["/Users/fonsh/PycharmProjects/Treer/nanobot/orchestration-mcp/dist/index.js"]
enabled = true
```
After updating the config, restart Codex so it reloads MCP servers.
What The MCP Exposes
The server registers these tools:
- `spawn_run`
- `get_run`
- `poll_events`
- `cancel_run`
- `continue_run`
- `list_runs`
- `get_event_artifact`
Typical MCP Flow
- Call `spawn_run` to create a subagent run.
- Call `poll_events` until you see a terminal event or a waiting state.
- If the run enters `input_required` or `auth_required`, call `continue_run`.
- Call `get_run` for the latest run summary.
- If an event contains `artifact_refs`, call `get_event_artifact` to fetch the full payload.
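The flow above can be sketched as a driver loop on the orchestrator side. This is an illustrative sketch, not the server's implementation: the `after_seq` argument, the `{ events, state }` result shape, and the state strings beyond `input_required`/`auth_required` are assumptions, and the scripted stub stands in for a real MCP client.

```typescript
// Shapes below are assumptions for illustration; real tool results may differ.
type RunEvent = { seq: number; type: string; data?: Record<string, unknown> };
type ToolCaller = (name: string, args: Record<string, unknown>) => Promise<any>;

// Drive one run: spawn it, poll until a terminal state, answer waits, then summarize.
async function runSubagent(callTool: ToolCaller, spawnArgs: Record<string, unknown>) {
  const { run_id } = await callTool("spawn_run", spawnArgs);
  let lastSeq = 0;
  for (;;) {
    // `after_seq` and the `{ events, state }` result shape are assumed names.
    const { events, state } = await callTool("poll_events", { run_id, after_seq: lastSeq });
    for (const ev of events as RunEvent[]) lastSeq = Math.max(lastSeq, ev.seq);
    if (state === "input_required" || state === "auth_required") {
      await callTool("continue_run", { run_id, input_message: { text: "proceed" } });
    } else if (state !== "running") {
      break; // terminal: e.g. completed, failed, or canceled
    }
  }
  return callTool("get_run", { run_id });
}

// Scripted stub standing in for a real MCP client, so the flow runs offline.
const script: Record<string, any[]> = {
  spawn_run: [{ run_id: "r1" }],
  poll_events: [
    { events: [{ seq: 1, type: "started" }], state: "running" },
    { events: [{ seq: 2, type: "turn_completed" }], state: "completed" },
  ],
  get_run: [{ run_id: "r1", state: "completed" }],
};
const stub: ToolCaller = async (name) => script[name].shift();

const summary = await runSubagent(stub, {
  backend: "codex",
  role: "worker",
  prompt: "Summarize the architecture.",
  cwd: "/abs/path/to/project",
  session_mode: "new",
});
```

The loop only advances its cursor past events it has seen, so a crashed orchestrator can resume polling without losing events.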
spawn_run notes
- `backend`: `"codex"`, `"claude_code"`, or `"remote_a2a"`
- `role`: orchestration role label such as `planner`, `worker`, or `reviewer`
- `prompt`: plain-text instruction for simple runs
- `input_message`: optional structured message for multipart/A2A-style inputs
- `cwd`: absolute working directory
- `session_mode`: `new` or `resume`
- `session_id`: required when resuming a prior session
- `profile`: optional path to a persona/job-description file for future profile-driven behavior. Unless you are explicitly instructed to use a profile, leave `profile` empty.
- `output_schema`: optional JSON Schema for structured final output
- `metadata`: optional orchestration metadata stored for correlation and auditing
- `backend_config`: optional backend-specific settings. For `remote_a2a`, set `agent_url` and any auth headers/tokens here.
For all backends, `cwd` is the orchestration-side working directory used for run/session storage.
For `remote_a2a`, `spawn_run.cwd` is also forwarded to the remote subagent and becomes that A2A task context's execution directory.
At least one of `prompt` or `input_message` is required.
Simple example:
```json
{
  "backend": "codex",
  "role": "worker",
  "prompt": "Inspect the repository and summarize the architecture.",
  "cwd": "/abs/path/to/project",
  "session_mode": "new"
}
```
Remote A2A example:
```json
{
  "backend": "remote_a2a",
  "role": "worker",
  "prompt": "Inspect the repository and summarize the architecture.",
  "cwd": "/abs/path/to/project",
  "session_mode": "new",
  "backend_config": {
    "agent_url": "http://127.0.0.1:53552"
  }
}
```
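The parameter rules above can be mirrored in a client-side pre-check. This is a hypothetical sketch: the server performs its own validation, and the `SpawnRunParams` shape and error messages here are illustrative, not the server's actual types.

```typescript
// Hypothetical client-side mirror of the documented spawn_run constraints.
interface SpawnRunParams {
  backend: "codex" | "claude_code" | "remote_a2a";
  role: string;
  cwd: string;
  session_mode: "new" | "resume";
  prompt?: string;
  input_message?: unknown;
  session_id?: string;
  backend_config?: { agent_url?: string; [key: string]: unknown };
}

function validateSpawnRun(p: SpawnRunParams): string[] {
  const errors: string[] = [];
  if (!p.prompt && !p.input_message) errors.push("one of prompt or input_message is required");
  if (!p.cwd.startsWith("/")) errors.push("cwd must be an absolute path"); // POSIX-style check
  if (p.session_mode === "resume" && !p.session_id) errors.push("session_id is required when session_mode is resume");
  if (p.backend === "remote_a2a" && !p.backend_config?.agent_url) errors.push("remote_a2a requires backend_config.agent_url");
  return errors;
}

const ok = validateSpawnRun({
  backend: "codex", role: "worker", cwd: "/abs/path/to/project",
  session_mode: "new", prompt: "Summarize the repo.",
});
const bad = validateSpawnRun({
  backend: "remote_a2a", role: "worker", cwd: "relative/path", session_mode: "resume",
});
```

Catching these locally avoids a round trip that would otherwise fail inside the spawned run.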
continue_run notes
Use `continue_run` when a run enters `input_required` or `auth_required` and the backend supports interactive continuation.
Inputs:
- `run_id`
- `input_message`
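As a hypothetical example (the exact `input_message` shape depends on the backend and the question the run is waiting on):

```json
{
  "run_id": "run_abc123",
  "input_message": { "text": "Yes, apply the migration." }
}
```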
get_event_artifact notes
Use `get_event_artifact` when a sanitized event returned by `poll_events` contains `event.data.artifact_refs` and you need the full original payload.
Inputs:
- `run_id`
- `seq`
- `field_path`: JSON Pointer relative to `event.data`, for example `/stdout`, `/raw_tool_use_result`, or `/input/content`
- `offset`: optional byte offset, default `0`
- `limit`: optional byte limit, default `65536`
Typical flow:
- Call `poll_events`.
- Inspect `event.data.artifact_refs` on any sanitized event.
- Call `get_event_artifact` with the same `run_id`, the event `seq`, and one of the exposed `field_path` values.
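Because `limit` caps each response, large artifacts have to be fetched in pages. A minimal sketch of that pagination, assuming the tool result carries the chunk in a `data` field and that offsets can be treated as string indices (the real tool counts bytes and may also expose a total size):

```typescript
// Assumed argument and result shapes for the get_event_artifact tool.
type ArtifactArgs = { run_id: string; seq: number; field_path: string; offset: number; limit: number };
type ArtifactFetcher = (args: ArtifactArgs) => Promise<{ data: string }>;

// Page through get_event_artifact until the payload is exhausted.
async function fetchFullArtifact(
  fetchChunk: ArtifactFetcher,
  run_id: string,
  seq: number,
  field_path: string,
  limit = 65536, // matches the documented default chunk size
): Promise<string> {
  let out = "";
  for (let offset = 0; ; offset += limit) {
    const { data } = await fetchChunk({ run_id, seq, field_path, offset, limit });
    out += data;
    if (data.length < limit) break; // short read means we reached the end
  }
  return out;
}

// Offline demo: a stub fetcher slicing an in-memory payload, small chunk size.
const payload = "line 1 of captured stdout\nline 2 of captured stdout\n";
const stubFetch: ArtifactFetcher = async ({ offset, limit }) => ({
  data: payload.slice(offset, offset + limit),
});

const full = await fetchFullArtifact(stubFetch, "r1", 8, "/stdout", 16);
```

A short read (a chunk smaller than `limit`) is taken as end-of-payload, so the loop never needs to know the total size up front.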
Backend defaults
- `codex`: uses the current `@openai/codex-sdk` defaults plus non-interactive execution settings already wired in the adapter
- `claude_code`: uses `@anthropic-ai/claude-agent-sdk` with `permissionMode: "bypassPermissions"` so the MCP call stays non-blocking, and reuses persisted backend session ids for `resume`
- `remote_a2a`: connects to a remote A2A-compatible agent using `@a2a-js/sdk`, streams task updates into normalized orchestration events, and supports `continue_run` for `input_required`
For `claude_code`, make sure the local environment already has a working Claude Code authentication setup before testing.
Test A2A agents
The repo includes helper modules for local A2A-wrapped test agents:
- `dist/test-agents/codex-a2a-agent.js`
- `dist/test-agents/claude-a2a-agent.js`
- `dist/test-agents/start-a2a-agent.js`
These export startup helpers that wrap the local Codex and Claude SDKs behind an A2A server so the orchestration MCP can test its internal remote_a2a backend against realistic subagents.
To start an interactive wrapper launcher:
```shell
npm run start:a2a-agent
```
The script will ask whether to wrap `codex` or `claude_code`.
After startup, it prints the `agent_url` and a ready-to-use `spawn_run` payload for the MCP layer. The wrapper no longer locks a working directory at startup. Each `remote_a2a` call uses the `cwd` provided to `spawn_run`, and the wrapper keeps that `cwd` fixed for the lifetime of the same A2A `contextId`.
Storage
Run data is stored under:
```
<cwd>/.nanobot-orchestrator/
  runs/
    <run_id>/
      run.json
      events.jsonl
      result.json
      artifacts/
        000008-command_finished/
          manifest.json
          stdout.0001.txt
          stdout.0002.txt
  sessions/
    <session_id>.json
```
Notes:
- `events.jsonl` stores sanitized events intended for `poll_events` consumption.
- Oversized raw payloads are moved into per-event artifact files and referenced from `event.data.artifact_refs`.
- `run.json` and `result.json` keep the current run snapshot and final result behavior.
- The storage directory name is currently `.nanobot-orchestrator/` for backward compatibility with the existing implementation.
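Since `events.jsonl` is one JSON object per line, reading it back is a simple split-and-parse. A minimal sketch, assuming events carry `seq`, `type`, and optional `data` fields (an assumed shape for illustration):

```typescript
// Assumed event shape; real events may carry additional fields.
type LoggedEvent = { seq: number; type: string; data?: any };

// Parse a JSONL event log: one JSON object per non-empty line.
function parseEventLog(jsonl: string): LoggedEvent[] {
  return jsonl
    .split("\n")
    .filter((line) => line.trim().length > 0) // tolerate a trailing newline
    .map((line) => JSON.parse(line) as LoggedEvent);
}

// In a real setup this string would be read from
// <cwd>/.nanobot-orchestrator/runs/<run_id>/events.jsonl.
const sample =
  '{"seq":7,"type":"command_started"}\n' +
  '{"seq":8,"type":"command_finished","data":{"artifact_refs":["/stdout"]}}\n';
const events = parseEventLog(sample);
```

The JSONL layout means new events can be appended without rewriting the file, and a partial read during an active run still yields valid events.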