
MeshAI MCP Server
A standalone Model Context Protocol (MCP) server that enables Claude Code and other MCP-compatible tools to leverage MeshAI's multi-agent orchestration capabilities.
🚀 Features
- 🤖 Multi-Agent Workflows: 6 pre-configured workflows for code review, refactoring, debugging, documentation, and more
- 🧠 Intelligent Agent Selection: Automatically selects appropriate AI agents based on task content
- 🔧 Framework Agnostic: Works with agents built on LangChain, CrewAI, AutoGen, and other frameworks
- 🐋 Docker Ready: Full Docker support with development and production configurations
- 📦 Easy Installation: Available as PyPI package or Docker container
- 🔄 Fallback Protocol: Falls back to a built-in MCP implementation when the official MCP package is not installed
📋 Quick Start
Option 1: Docker with stdio (Claude Code)
# Run with Docker for Claude Code integration
docker run -it \
-e MESHAI_API_URL=http://localhost:8080 \
-e MESHAI_API_KEY=your-api-key \
ghcr.io/meshailabs/meshai-mcp-server:latest
# Or with docker-compose
git clone https://github.com/meshailabs/meshai-mcp.git
cd meshai-mcp
cp .env.template .env # Edit with your settings
docker-compose up
Option 2: HTTP Server Mode
# Run as HTTP API server
docker run -p 8080:8080 \
-e MESHAI_API_URL=http://localhost:8080 \
ghcr.io/meshailabs/meshai-mcp-server:latest \
meshai-mcp-server serve --transport http
# Test the HTTP API
curl -H "Authorization: Bearer dev_your-api-key" \
http://localhost:8080/v1/tools
Option 3: PyPI Installation
# Install from PyPI
pip install meshai-mcp-server
# Run in stdio mode (for Claude Code)
export MESHAI_API_URL=http://localhost:8080
export MESHAI_API_KEY=your-api-key
meshai-mcp-server
# Or run as HTTP server
meshai-mcp-server serve --transport http --port 8080
Option 4: Development Setup
# Clone and install
git clone https://github.com/meshailabs/meshai-mcp.git
cd meshai-mcp
pip install -e ".[dev]"
# Run in development mode
python -m meshai_mcp.cli serve --dev --transport http
🔧 Configuration
Environment Variables
Variable | Description | Default | Required
---|---|---|---
MESHAI_API_URL | MeshAI API endpoint | http://localhost:8080 | Yes
MESHAI_API_KEY | API key for authentication | None | For stdio mode
MESHAI_LOG_LEVEL | Logging level | INFO | No
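For local runs, the same variables can go in the .env file used by docker-compose (copied from .env.template in the quick start above). A minimal sketch with placeholder values:
# .env (placeholder values; adjust for your environment)
MESHAI_API_URL=http://localhost:8080
MESHAI_API_KEY=dev_test123    # dev_ prefix keys are for testing
MESHAI_LOG_LEVEL=INFO         # optional, defaults to INFO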
🔐 Authentication
For HTTP Mode:
- API Key Required: Pass the key via the Authorization: Bearer YOUR_API_KEY header
- Development Keys: Use the dev_ prefix for testing (e.g., dev_test123)
- Rate Limiting: 100 requests/hour for development, configurable for production
For stdio Mode:
- Environment Variable: Set MESHAI_API_KEY for backend communication
- No HTTP Auth: Authentication is handled by Claude Code
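The two modes differ only in where the key goes. A quick sketch, using the placeholder development key dev_test123:
# HTTP mode: every request carries the key in the Authorization header
curl -H "Authorization: Bearer dev_test123" \
  http://localhost:8080/v1/workflows
# stdio mode: the key is read from the environment; Claude Code handles the rest
export MESHAI_API_KEY=your-api-key
meshai-mcp-server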
Claude Code Integration
stdio Transport (Recommended):
{
"servers": {
"meshai": {
"command": "docker",
"args": [
"run", "--rm", "-i",
"-e", "MESHAI_API_URL=${MESHAI_API_URL}",
"-e", "MESHAI_API_KEY=${MESHAI_API_KEY}",
"ghcr.io/meshailabs/meshai-mcp-server:latest"
],
"transport": "stdio"
}
}
}
HTTP Transport (For hosted deployments):
{
"servers": {
"meshai": {
"command": "curl",
"args": [
"-X", "POST",
"-H", "Authorization: Bearer ${MESHAI_MCP_API_KEY}",
"-H", "Content-Type: application/json",
"-d", "@-",
"https://your-mcp-server.com/v1/mcp"
],
"transport": "http"
}
}
}
Local pip Installation:
{
"servers": {
"meshai": {
"command": "meshai-mcp-server",
"transport": "stdio",
"env": {
"MESHAI_API_URL": "${MESHAI_API_URL}",
"MESHAI_API_KEY": "${MESHAI_API_KEY}"
}
}
}
}
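Depending on your Claude Code version, the server can also be registered from the command line instead of editing the config file by hand. A sketch using the pip-installed binary (flag names vary between Claude Code releases, so check claude mcp add --help before copying this):
# Hypothetical registration via the Claude Code CLI; verify flags with --help
claude mcp add meshai \
  -e MESHAI_API_URL=http://localhost:8080 \
  -e MESHAI_API_KEY=your-api-key \
  -- meshai-mcp-server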
🛠️ Available Workflows
1. Code Review (mesh_code_review)
Comprehensive code review with security and best practices analysis.
- Agents: code-reviewer, security-analyzer, best-practices-advisor
2. Refactor & Optimize (mesh_refactor_optimize)
Refactor code with performance optimization and test generation.
- Agents: code-optimizer, performance-analyzer, test-generator
3. Debug & Fix (mesh_debug_fix)
Debug issues and generate tests for fixes.
- Agents: debugger-expert, log-analyzer, test-generator
4. Document & Explain (mesh_document_explain)
Generate documentation and explanations with examples.
- Agents: doc-writer, code-explainer, example-generator
5. Architecture Review (mesh_architecture_review)
Comprehensive architecture analysis and recommendations.
- Agents: system-architect, performance-analyst, security-auditor
6. Feature Development (mesh_feature_development)
End-to-end feature development from design to testing.
- Agents: product-designer, senior-developer, test-engineer, doc-writer
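Each workflow name doubles as the MCP method name, so all six are invoked the same way. A sketch against the HTTP endpoint, assuming the debug workflow accepts the same files-style params as the mesh_code_review example in the HTTP API section below (the real parameter schema may differ per workflow):
# Illustrative only: params mirror the mesh_code_review example and may not
# match the actual schema for mesh_debug_fix
curl -X POST \
  -H "Authorization: Bearer dev_test123" \
  -H "Content-Type: application/json" \
  -d '{"method":"mesh_debug_fix","id":"debug-1","params":{"files":"app.py"}}' \
  http://localhost:8080/v1/mcp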
🌐 HTTP API Usage
Starting HTTP Server
# Using Docker
docker run -p 8080:8080 \
-e MESHAI_API_URL=http://localhost:8080 \
ghcr.io/meshailabs/meshai-mcp-server:latest \
meshai-mcp-server serve --transport http
# Using pip
meshai-mcp-server serve --transport http --port 8080
API Endpoints
Endpoint | Method | Description | Auth Required
---|---|---|---
/health | GET | Health check | No
/v1/tools | GET | List available tools | Yes
/v1/workflows | GET | List workflows | Yes
/v1/resources | GET | List resources | Yes
/v1/mcp | POST | Execute MCP request | Yes
/v1/stats | GET | Usage statistics | Yes
/docs | GET | API documentation | No
Usage Examples
# Health check (no auth required)
curl http://localhost:8080/health
# List available tools
curl -H "Authorization: Bearer dev_test123" \
http://localhost:8080/v1/tools
# Execute a workflow
curl -X POST \
-H "Authorization: Bearer dev_test123" \
-H "Content-Type: application/json" \
-d '{"method":"mesh_code_review","id":"test","params":{"files":"app.py"}}' \
http://localhost:8080/v1/mcp
# Get usage stats
curl -H "Authorization: Bearer dev_test123" \
http://localhost:8080/v1/stats
🐋 Docker Deployment
Development Setup
# Development with hot reload
docker-compose -f docker-compose.dev.yml up
# Run tests
docker-compose -f docker-compose.dev.yml run --rm mcp-tests
# With mock API
docker-compose -f docker-compose.dev.yml --profile mock up
Production Considerations
For production deployment:
- Use proper API key management
- Set up rate limiting and monitoring
- Configure HTTPS/TLS termination
- Implement proper logging and metrics
- Consider using a reverse proxy (nginx, Traefik)
- Set resource limits and scaling policies
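Most of that checklist lives outside the container (key management, monitoring, the reverse proxy), but resource limits can be set on the container itself. A minimal sketch, assuming a reverse proxy in front of the server handles TLS and rate limiting; the limits and URLs are placeholders to tune, not recommendations:
# Illustrative limits; TLS termination and rate limiting are assumed to be
# handled by a reverse proxy (nginx, Traefik) in front of this container
docker run -d --name meshai-mcp \
  --restart unless-stopped \
  --memory 512m --cpus 1.0 \
  -p 127.0.0.1:8080:8080 \
  -e MESHAI_API_URL=https://your-meshai-api.example.com \
  -e MESHAI_LOG_LEVEL=INFO \
  ghcr.io/meshailabs/meshai-mcp-server:latest \
  meshai-mcp-server serve --transport http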
🧪 Development
Setup Development Environment
# Clone repository
git clone https://github.com/meshailabs/meshai-mcp.git
cd meshai-mcp
# Install in development mode
pip install -e ".[dev]"
# Install pre-commit hooks
pre-commit install
# Run tests
pytest tests/ -v
# Run with coverage
pytest tests/ -v --cov=src/meshai_mcp --cov-report=html
Code Quality
# Format code
black src tests
isort src tests
# Type checking
mypy src/meshai_mcp
# Linting
flake8 src tests
Building Docker Images
# Build production image
docker build -t meshai-mcp-server .
# Build development image
docker build -f Dockerfile.dev --target development -t meshai-mcp-server:dev .
# Multi-architecture build
docker buildx build --platform linux/amd64,linux/arm64 -t meshai-mcp-server:multi .
📚 Documentation
🤝 Contributing
We welcome contributions! Please see our Contributing Guide for details.
Development Workflow
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests for new functionality
- Run the test suite
- Submit a pull request
📝 License
This project is licensed under the MIT License - see the LICENSE file for details.
🆘 Support
- GitHub Issues: Report bugs or request features
- Documentation: docs.meshai.dev
- Discord: Join our community
🗺️ Roadmap
- [ ] HTTP transport support for MCP
- [ ] WebSocket transport for real-time communication
- [ ] Custom workflow configuration via YAML
- [ ] Plugin system for custom agents
- [ ] Prometheus metrics integration
- [ ] Official MCP package integration when available
Built with ❤️ by the MeshAI Labs team.