Orchestration MCP

A TypeScript MCP server for launching, tracking, and managing external coding-agent runs.

The MCP surface stays stable while the internal execution backend can target:

  • codex (local)
  • claude_code (local)
  • remote_a2a (remote)

This lets a top-level agent call one MCP toolset while the orchestration layer decides whether subagents are local SDK processes or remote A2A-compatible agents.

Install And Build

cd orchestration-mcp
npm install
npm run build

Run The MCP Server

cd orchestration-mcp
npm start

This starts the MCP server from dist/index.js.

Codex MCP Config Example

If you want Codex to load this MCP server, add an entry like this to ~/.codex/config.toml:

[mcp_servers.orchestration-mcp]
command = "node"
args = ["/abs/path/to/orchestration-mcp/dist/index.js"]
enabled = true

Example using this repository path:

[mcp_servers.orchestration-mcp]
command = "node"
args = ["/Users/fonsh/PycharmProjects/Treer/nanobot/orchestration-mcp/dist/index.js"]
enabled = true

After updating the config, restart Codex so it reloads MCP servers.

What The MCP Exposes

The server registers these tools:

  • spawn_run
  • get_run
  • poll_events
  • cancel_run
  • continue_run
  • list_runs
  • get_event_artifact

Typical MCP Flow

  1. Call spawn_run to create a subagent run.
  2. Call poll_events until you see a terminal event or a waiting state.
  3. If the run enters input_required or auth_required, call continue_run.
  4. Call get_run for the latest run summary.
  5. If an event contains artifact_refs, call get_event_artifact to fetch the full payload.
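The flow above can be sketched as a small driver loop, with callTool standing in for your MCP client's tool-call method. The after_seq cursor parameter, the event type names, and the response shapes here are assumptions for illustration, not the server's exact schema:

```typescript
// Illustrative event shape; the real server's schema may differ.
type Event = { seq: number; type: string; data?: Record<string, unknown> };

// Assumed event-type names for terminal and waiting states.
const TERMINAL = new Set(["completed", "failed", "cancelled"]);
const WAITING = new Set(["input_required", "auth_required"]);

// Drives spawn_run -> poll_events -> (continue_run) -> get_run using an
// injected callTool, so the control flow can be exercised without a live server.
async function runSubagent(
  callTool: (name: string, args: Record<string, unknown>) => Promise<any>,
  spawnArgs: Record<string, unknown>,
  onInputRequired: () => Record<string, unknown>,
): Promise<any> {
  const { run_id } = await callTool("spawn_run", spawnArgs);
  let cursor = 0;
  while (true) {
    // after_seq is a hypothetical cursor parameter; check the tool schema.
    const { events } = await callTool("poll_events", { run_id, after_seq: cursor });
    for (const ev of events as Event[]) {
      cursor = ev.seq;
      if (WAITING.has(ev.type)) {
        await callTool("continue_run", { run_id, input_message: onInputRequired() });
      } else if (TERMINAL.has(ev.type)) {
        return callTool("get_run", { run_id });
      }
    }
  }
}
```

The injected callTool keeps the loop testable; in practice it would wrap an MCP client connected to this server over stdio.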

spawn_run notes

  • backend: "codex", "claude_code", or "remote_a2a"
  • role: orchestration role label such as planner, worker, or reviewer
  • prompt: plain-text instruction for simple runs
  • input_message: optional structured message for multipart/A2A-style inputs
  • cwd: absolute working directory
  • session_mode: new or resume
  • session_id: required when resuming a prior session
  • profile: optional path to a persona/job-description file for future profile-driven behavior; unless you are explicitly instructed to use a profile, leave it empty
  • output_schema: optional JSON Schema for structured final output
  • metadata: optional orchestration metadata stored for correlation and auditing
  • backend_config: optional backend-specific settings. For remote_a2a, set agent_url and any auth headers/tokens here.

For all backends, cwd is the orchestration-side working directory used for run/session storage.

For remote_a2a, spawn_run.cwd is also forwarded to the remote subagent and becomes that A2A task context's execution directory.

At least one of prompt or input_message is required.

Simple example:

{
  "backend": "codex",
  "role": "worker",
  "prompt": "Inspect the repository and summarize the architecture.",
  "cwd": "/abs/path/to/project",
  "session_mode": "new"
}

Remote A2A example:

{
  "backend": "remote_a2a",
  "role": "worker",
  "prompt": "Inspect the repository and summarize the architecture.",
  "cwd": "/abs/path/to/project",
  "session_mode": "new",
  "backend_config": {
    "agent_url": "http://127.0.0.1:53552"
  }
}

continue_run notes

Use continue_run when a run enters input_required or auth_required and the backend supports interactive continuation.

Inputs:

  • run_id
  • input_message
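For example, resuming a run that is waiting on input might look like the payload below. The run_id is illustrative, and the parts-based input_message shape follows the A2A-style structured input mentioned under spawn_run; check the tool schema for the exact format:

```json
{
  "run_id": "run-123",
  "input_message": {
    "parts": [
      { "kind": "text", "text": "Yes, proceed with the refactor." }
    ]
  }
}
```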

get_event_artifact notes

Use get_event_artifact when a sanitized event returned by poll_events contains event.data.artifact_refs and you need the full original payload.

Inputs:

  • run_id
  • seq
  • field_path: JSON Pointer relative to event.data, for example /stdout, /raw_tool_use_result, or /input/content
  • offset: optional byte offset, default 0
  • limit: optional byte limit, default 65536

Typical flow:

  1. Call poll_events.
  2. Inspect event.data.artifact_refs on any sanitized event.
  3. Call get_event_artifact with the same run_id, the event seq, and one of the exposed field_path values.
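Since artifacts can exceed the default 65536-byte limit, a caller typically pages through them with offset/limit until a short read. A minimal sketch, where fetchChunk stands in for a get_event_artifact call with fixed run_id, seq, and field_path (its response shape is an assumption):

```typescript
// Pages through one artifact field with offset/limit until the returned
// chunk is shorter than the limit, then returns the reassembled content.
async function readArtifact(
  fetchChunk: (offset: number, limit: number) => Promise<string>,
  limit = 65536,
): Promise<string> {
  let out = "";
  let offset = 0;
  while (true) {
    const chunk = await fetchChunk(offset, limit);
    out += chunk;
    if (chunk.length < limit) return out; // short read: nothing left to fetch
    offset += chunk.length;
  }
}
```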

Backend defaults

  • codex: uses the current @openai/codex-sdk defaults plus non-interactive execution settings already wired in the adapter
  • claude_code: uses @anthropic-ai/claude-agent-sdk with permissionMode: "bypassPermissions" so the MCP call stays non-blocking, and reuses persisted backend session ids for resume
  • remote_a2a: connects to a remote A2A-compatible agent using @a2a-js/sdk, streams task updates into normalized orchestration events, and supports continue_run for input_required

For claude_code, make sure the local environment already has a working Claude Code authentication setup before testing.

Test A2A agents

The repo includes helper modules for local A2A-wrapped test agents:

  • dist/test-agents/codex-a2a-agent.js
  • dist/test-agents/claude-a2a-agent.js
  • dist/test-agents/start-a2a-agent.js

These export startup helpers that wrap the local Codex and Claude SDKs behind an A2A server so the orchestration MCP can test its internal remote_a2a backend against realistic subagents.

To start an interactive wrapper launcher:

npm run start:a2a-agent

The script will ask whether to wrap codex or claude_code.

After startup, it prints the agent_url and a ready-to-use spawn_run payload for the MCP layer. The wrapper no longer locks a working directory at startup. Each remote_a2a call uses the cwd provided to spawn_run, and the wrapper keeps that cwd fixed for the lifetime of the same A2A contextId.

Storage

Run data is stored under:

<cwd>/.nanobot-orchestrator/
  runs/
    <run_id>/
      run.json
      events.jsonl
      result.json
      artifacts/
        000008-command_finished/
          manifest.json
          stdout.0001.txt
          stdout.0002.txt
  sessions/
    <session_id>.json

Notes:

  • events.jsonl stores sanitized events intended for poll_events consumption.
  • Oversized raw payloads are moved into per-event artifact files and referenced from event.data.artifact_refs.
  • run.json and result.json keep the current run snapshot and final result behavior.
  • The storage directory name is currently .nanobot-orchestrator/ for backward compatibility with the existing implementation.
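Because events.jsonl is one sanitized event per line, tooling can read it back with a plain JSONL parse. A sketch, assuming the seq/type/data event shape described above (the exact field names are an assumption based on these storage notes):

```typescript
// Parses events.jsonl text into event objects, skipping blank lines.
function parseEventsJsonl(text: string): Array<{ seq: number; type: string; data: any }> {
  return text
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line));
}

// Collects events whose oversized payloads were spilled into artifact files
// and are referenced from event.data.artifact_refs.
function eventsWithArtifacts(events: Array<{ data: any }>) {
  return events.filter(
    (ev) => Array.isArray(ev.data?.artifact_refs) && ev.data.artifact_refs.length > 0,
  );
}
```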
