
Orchestrator MCP
An intelligent MCP server that orchestrates multiple MCP servers with AI-enhanced workflow automation and production-ready context engine capabilities for codebase analysis.
README
Orchestrator MCP
An intelligent MCP (Model Context Protocol) server that orchestrates multiple MCP servers and provides AI-enhanced workflow automation with production-ready context engine capabilities.
🌟 Features
Core Orchestration
- Multi-Server Orchestration: Connect to multiple MCP servers simultaneously
- Universal Compatibility: Works with npm, uvx, Python, and other MCP server types
- Server Management: Dynamic server discovery and health monitoring
- Scalable Architecture: Easy to add new servers and capabilities
🧠 AI Enhancement Layer
- Intelligent Tool Routing: AI analyzes requests and selects optimal tools
- Workflow Automation: Multi-step processes orchestrated automatically
- Intent Understanding: Natural language request analysis and planning
- Context Synthesis: Combines results from multiple tools into coherent responses
- Result Enhancement: AI improves and formats outputs for better user experience
🎯 Context Engine (PRODUCTION READY!)
- Large Context Analysis: Process 50K+ characters using Gemini's 1M+ token context
- Intelligent Code Understanding: AI-powered codebase analysis with 95% confidence
- Real-time File Discovery: Dynamic file loading and relationship mapping
- Quality Assessment: Identify placeholder vs real implementations
- Performance Optimized: 30s execution time for complex analysis
Built-in Capabilities
- Web Fetching: HTTP content retrieval capabilities
- Fallback Mode: Graceful degradation when AI is not available
Current Status
🎉 PRODUCTION READY - Context Engine Complete!
✅ Context Engine: 85.7% quality score, 95% analysis confidence ✅ AI Enhancement Layer: Complete with intelligent routing and workflow automation ✅ Multi-Server Orchestration: 10/10 MCP servers connected and functional
🚀 Quick Start
-
Install dependencies:
npm install
-
Build the project:
npm run build
-
Configure in your MCP client (e.g., Claude Desktop, VS Code):
See the example configuration files in the
examples/
directory:examples/claude-desktop-config.json
- For Claude Desktopexamples/vscode-mcp.json
- For VS Code
-
Start using the orchestrator through your MCP client!
MCP Integration
For Stdio MCP Server:
- Name:
Orchestrator MCP
- Command:
node
- Arguments:
/path/to/orchestrator-mcp/dist/index.js
For Development:
- Command:
npx
- Arguments:
orchestrator-mcp
(after publishing to npm)
🛠️ Available Tools
Core AI Enhancement Tools
The orchestrator exposes a minimal set of tools focused on unique capabilities that enhance AI assistants:
ai_process
- Primary Interface - Process requests using AI orchestration with intelligent tool selectionget_info
- System introspection - Get information about connected servers and available capabilitiesai_status
- Health monitoring - Get the status of AI orchestration capabilities
Connected Server Tools
All tools from connected MCP servers are automatically available through AI orchestration:
- Filesystem operations (read, write, search files)
- Git operations (repository management, status, history)
- Memory system (knowledge graph storage)
- Web fetching (fetch and process web content)
- Web search (DuckDuckGo search for current information)
- Browser automation (Playwright for web testing)
- Security analysis (Semgrep for vulnerability scanning)
- And more...
🔗 Connected Servers
Currently configured servers:
- filesystem (npm) - File operations with secure access controls
- sequential-thinking (npm) - Dynamic problem-solving through thought sequences
- git (uvx) - Git repository tools and operations
- memory (npm) - Knowledge graph-based persistent memory
- fetch (uvx) - Enhanced web content fetching
- github (npm) - Official GitHub API integration (requires GITHUB_TOKEN)
- playwright (npm) - Browser automation and web testing
- puppeteer (npm) - Alternative browser automation
🤖 AI Configuration
To enable AI features, you need an OpenRouter API key. Additional API keys can be configured for enhanced integrations:
-
Required for AI features: Get an API key from OpenRouter
-
Optional integrations:
- GitHub Personal Access Token for GitHub server integration
- Semgrep App Token for enhanced security scanning
-
Configure the API keys in your MCP client settings:
For Claude Desktop (
~/.claude_desktop_config.json
):{ "mcpServers": { "Orchestrator MCP": { "command": "node", "args": ["/path/to/project/dist/index.js"], "env": { "OPENROUTER_API_KEY": "your_api_key_here", "OPENROUTER_DEFAULT_MODEL": "anthropic/claude-3.5-sonnet", "OPENROUTER_MAX_TOKENS": "2000", "OPENROUTER_TEMPERATURE": "0.7" } } } }
For VS Code (
.vscode/mcp.json
):{ "inputs": [ { "type": "promptString", "id": "openrouter-key", "description": "OpenRouter API Key", "password": true } ], "servers": { "Orchestrator MCP": { "type": "stdio", "command": "node", "args": ["/path/to/project/dist/index.js"], "env": { "OPENROUTER_API_KEY": "${input:openrouter-key}", "OPENROUTER_DEFAULT_MODEL": "anthropic/claude-3.5-sonnet", "OPENROUTER_MAX_TOKENS": "2000", "OPENROUTER_TEMPERATURE": "0.7" } } } }
AI Models Supported
The orchestrator works with any model available on OpenRouter, including:
- Anthropic Claude (recommended)
- OpenAI GPT models
- Meta Llama models
- Google Gemini models
- And many more!
📖 Usage Examples
🎯 Context Engine (Production Ready!)
# Intelligent Codebase Analysis
{"tool": "ai_process", "arguments": {"request": "Analyze the current intelligence layer implementation. Show me what's actually implemented vs placeholder code"}}
{"tool": "ai_process", "arguments": {"request": "Find all quality assessment code and identify which parts are real vs mock implementations"}}
{"tool": "ai_process", "arguments": {"request": "Analyze the context management capabilities and identify gaps in the current implementation"}}
# Large Context Code Understanding
{"tool": "ai_process", "arguments": {"request": "Load the entire src/intelligence directory and provide a comprehensive analysis of the architecture"}}
{"tool": "ai_process", "arguments": {"request": "Analyze relationships between context engine, AI workflows, and orchestrator components"}}
{"tool": "ai_process", "arguments": {"request": "Identify all placeholder implementations across the codebase and prioritize which to implement first"}}
Primary AI Interface
# Code Analysis & Development
{"tool": "ai_process", "arguments": {"request": "Find all TypeScript files with TODO comments and create a summary report"}}
{"tool": "ai_process", "arguments": {"request": "Analyze the codebase architecture and identify potential improvements"}}
{"tool": "ai_process", "arguments": {"request": "Check git status, review recent commits, and summarize changes since last week"}}
# Research & Information Gathering
{"tool": "ai_process", "arguments": {"request": "Search for Next.js 15 new features and create a comparison with version 14"}}
{"tool": "ai_process", "arguments": {"request": "Fetch the latest TypeScript 5.3 release notes and extract breaking changes"}}
{"tool": "ai_process", "arguments": {"request": "Research React Server Components best practices and save key insights to memory"}}
# Security & Quality Analysis
{"tool": "ai_process", "arguments": {"request": "Run security analysis on all JavaScript files and prioritize vulnerabilities by severity"}}
{"tool": "ai_process", "arguments": {"request": "Analyze code quality across the project and generate improvement recommendations"}}
# Complex Multi-Step Workflows
{"tool": "ai_process", "arguments": {"request": "Search for React testing best practices, analyze our current test files, and suggest specific improvements"}}
{"tool": "ai_process", "arguments": {"request": "Fetch competitor documentation, compare with our API design, and identify feature gaps"}}
System Introspection
# Get server information and capabilities
{"tool": "get_info", "arguments": {}}
# Check AI orchestration health
{"tool": "ai_status", "arguments": {}}
AI-Enhanced Workflows
The ai_process
tool can handle complex requests like:
- "Analyze my project structure and suggest improvements"
- "Find recent commits and create a summary"
- "Search for TODO comments and organize them by priority"
- "Take a screenshot of the homepage and analyze its performance"
🏗️ Architecture
Multi-Runtime Support
The orchestrator uses a registry-based architecture supporting:
- npm servers: TypeScript/JavaScript servers via npx
- uvx servers: Python servers via uvx
- Built-in tools: Native orchestrator capabilities
AI Enhancement Layer
User Request → Intent Analysis → Tool Selection → Workflow Planning → Execution → Result Synthesis
⚙️ Configuration
Server Configuration
Server configurations are managed in src/orchestrator/server-configs.ts
. Each server includes:
- Runtime environment (npm, uvx, python, etc.)
- Command and arguments
- Environment requirements
- Enable/disable status
- Development phase assignment
Environment Variables
All environment variables are configured through your MCP client settings. The following variables are supported:
AI Configuration (OpenRouter):
OPENROUTER_API_KEY
(required for AI features) - Your OpenRouter API keyOPENROUTER_DEFAULT_MODEL
(optional) - Default model to use (default: "anthropic/claude-3.5-sonnet")OPENROUTER_MAX_TOKENS
(optional) - Maximum tokens per request (default: "2000")OPENROUTER_TEMPERATURE
(optional) - Temperature for AI responses (default: "0.7")
MCP Server Integrations:
GITHUB_TOKEN
(optional) - GitHub Personal Access Token for GitHub server integrationSEMGREP_APP_TOKEN
(optional) - Semgrep App Token for enhanced security scanningSLACK_BOT_TOKEN
(optional) - Slack Bot Token for Slack integration (if enabled)SLACK_APP_TOKEN
(optional) - Slack App Token for Slack integration (if enabled)
🔧 Development
Scripts
npm run build
- Build the projectnpm run dev
- Watch mode for development (TypeScript compilation)npm run start
- Start the server (for MCP client use)npm run start:dev
- Start with .env file support (for local development/testing)npm test
- Run tests (when available)
Local Development
For local development and testing, you can use the development script that loads environment variables from a .env
file:
-
Copy the example environment file:
cp .env.example .env
-
Edit
.env
with your actual API keys -
Run the development server:
npm run start:dev
Note: The regular npm start
command is intended for MCP client use and expects environment variables to be provided by the MCP client configuration.
📝 License
MIT
Recommended Servers
playwright-mcp
A Model Context Protocol server that enables LLMs to interact with web pages through structured accessibility snapshots without requiring vision models or screenshots.
Magic Component Platform (MCP)
An AI-powered tool that generates modern UI components from natural language descriptions, integrating with popular IDEs to streamline UI development workflow.
Audiense Insights MCP Server
Enables interaction with Audiense Insights accounts via the Model Context Protocol, facilitating the extraction and analysis of marketing insights and audience data including demographics, behavior, and influencer engagement.

VeyraX MCP
Single MCP tool to connect all your favorite tools: Gmail, Calendar and 40 more.
graphlit-mcp-server
The Model Context Protocol (MCP) Server enables integration between MCP clients and the Graphlit service. Ingest anything from Slack to Gmail to podcast feeds, in addition to web crawling, into a Graphlit project - and then retrieve relevant contents from the MCP client.
Kagi MCP Server
An MCP server that integrates Kagi search capabilities with Claude AI, enabling Claude to perform real-time web searches when answering questions that require up-to-date information.

E2B
Using MCP to run code via e2b.
Neon Database
MCP server for interacting with Neon Management API and databases
Exa Search
A Model Context Protocol (MCP) server lets AI assistants like Claude use the Exa AI Search API for web searches. This setup allows AI models to get real-time web information in a safe and controlled way.
Qdrant Server
This repository is an example of how to create a MCP server for Qdrant, a vector search engine.