MCPeasy
The easiest way to set up and self-host your own multi-MCP server, with streamable HTTP transport and multi-client key management
A production-grade multi-tenant MCP server that provides different tools and configurations to different clients using API key-based routing.
Architecture
- FastMCP 2.6: Core MCP implementation following https://gofastmcp.com/llms-full.txt
- FastAPI: Web framework with API key-based URL routing (see the routing sketch after this list)
- PostgreSQL: Multi-tenant data storage with SQLAlchemy
- Streamable HTTP: All subservers provide streamable transport
- Multi-tenancy: Clients can have multiple API keys with tool-specific configurations
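To make the routing concrete, here is a minimal sketch of the API key-based dispatch idea. It is illustrative, not MCPeasy's actual source: the in-memory API_KEYS dict stands in for the PostgreSQL lookup, and a real handler would dispatch the JSON-RPC payload to FastMCP.

# Minimal sketch of API key-based routing (illustrative, not the actual source).
from fastapi import FastAPI, HTTPException, Request

app = FastAPI()

# Hypothetical stand-in for the database-backed key lookup
API_KEYS = {"demo-key": {"client": "ACME Corp", "tools": ["echo", "get_weather"]}}

@app.post("/mcp/{api_key}")
async def mcp_endpoint(api_key: str, request: Request):
    client = API_KEYS.get(api_key)
    if client is None:
        raise HTTPException(status_code=401, detail="Invalid or inactive API key")
    payload = await request.json()
    # A real handler would dispatch this JSON-RPC payload against the
    # client's configured tool set; here we only acknowledge the request.
    return {"jsonrpc": "2.0", "id": payload.get("id"), "result": {"client": client["client"]}}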
Key Features
- Multi-tenant design: Clients manage multiple rotatable API keys
- Per-tool configuration: Each client can configure tools differently (e.g., custom email addresses)
- Dynamic tool sets: Different clients get different tool combinations
- Tool auto-discovery: Modular tool system with automatic registration
- Custom tools support: Add organization-specific tools via git submodules with clean upstream separation
- Per-resource configuration: Each client can access different resources with custom settings
- Dynamic resource sets: Different clients get different resource combinations
- Resource auto-discovery: Modular resource system with automatic registration
- Enhanced tool responses: Multiple content types (text, JSON, markdown, file) for optimal LLM integration
- Deployment filtering: YAML-based tool/resource whitelisting for environment-specific deployments
- Shared infrastructure: Database, logging, and configuration shared across servers
- Admin interface: Web-based client and API key management with CORE/CUSTOM tool source badges
- Production ready: Built for Fly deployment with Neon database
- High performance: Background task processing, request timeouts, configuration caching, and optimized database connections
Quick Start
1. Setup environment:
   cp .env.example .env  # Edit .env with your database URL, admin password, and session secret
2. Start all services with Docker Compose (recommended):
   docker-compose up
3. Access the services:
   http://localhost:8000  # Main API endpoint
   http://localhost:3000  # Admin interface (on prod: https://yourdomain.com/admin)
   http://localhost:8080  # Database inspector (Adminer)
   # Login to admin with your SUPERADMIN_PASSWORD
That's it! Docker Compose handles all dependencies, database setup, and migrations automatically.
Alternative: Local Development Without Docker
If you prefer to run without Docker:
1. Install dependencies:
   uv sync
2. Run development server:
   python dev.py
3. Access admin interface:
   http://localhost:3000
API Endpoints
GET /health - Health check
GET /admin - Admin login page
GET /admin/clients - Client management dashboard
POST /admin/clients - Create new client
GET /admin/clients/{id}/keys - Manage API keys for client
POST /admin/clients/{id}/keys - Generate new API key
GET /admin/clients/{id}/tools - Configure tools for client
GET|POST /mcp/{api_key} - MCP endpoint (streamable; example request below)
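For example, you can list the tools configured for a key with a plain JSON-RPC request. This is a sketch using the standard MCP tools/list method; replace your_api_key, and note that depending on negotiation the server may answer with JSON or an SSE stream.

import requests

# List the tools visible to one API key over the streamable HTTP endpoint.
resp = requests.post(
    "http://localhost:8000/mcp/your_api_key",
    json={"jsonrpc": "2.0", "id": 1, "method": "tools/list"},
    headers={
        "Content-Type": "application/json",
        "Accept": "application/json, text/event-stream",
    },
)
print(resp.text)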
Client & API Key Management
Creating Clients
1. Visit /admin and login with superadmin password
2. Create client with name and description
3. Generate API keys for the client
4. Configure tools and resources with their settings per client
Managing API Keys
- Multiple keys per client: Production, staging, development keys
- Key rotation: Generate new keys without losing configuration
- Expiry management: Set expiration dates for keys
- Secure deletion: Deactivate compromised keys immediately
Tool Configuration
Each client must explicitly configure tools to access them:
- Simple tools: echo, get_weather - click "Add" to enable (no configuration needed)
- Configurable tools: send_email - click "Configure" to set from address, SMTP settings (example config below)
- Per-client settings: Same tool, different configuration per client
- Strict access control: Only configured tools are visible and callable
Available Tools
Core Tools:
- echo - Simple echo tool for testing (no configuration needed)
- get_weather - Weather information (no configuration needed)
- send_email - Send emails (requires: from_email, optional: smtp_server)
- youtube_lookup - YouTube video information lookup (no configuration needed)
Custom Tools:
- Custom tools can be added via git submodules in organization-specific repositories
- Each deployment can whitelist different custom tools via YAML configuration (see the example below)
- Custom tools are shown with purple "CUSTOM" badges in the admin UI, vs blue "CORE" badges for built-in tools
- (Tool ecosystem grows with auto-discovery and organization contributions)
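A deployment whitelist might look roughly like the following; the exact schema is defined by config/deployment.yaml in the repository, so treat this as a sketch:

# Illustrative only - check config/deployment.yaml for the real schema
tools:
  - echo
  - get_weather
  - send_email
  - yourorg/internal_search   # hypothetical custom tool from a git submodule
resources:
  - knowledge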
Tool Call Tracking
All tool executions are automatically tracked in the database for monitoring and auditing:
- Complete tracking: Input arguments, output data, execution time, and errors
- Per-client logging: Track usage patterns by client and API key
- Performance monitoring: Execution time tracking in milliseconds
- Error logging: Failed tool calls with detailed error messages
- Automatic: No configuration needed - all tool calls are logged transparently
Resource Configuration
Each client must explicitly configure resources to access them:
- Simple resources: knowledge - click "Add" to enable with default settings
- Configurable resources: knowledge - click "Configure" to set category filters, article limits, search permissions
- Per-client settings: Same resource, different configuration per client (e.g., different category access)
- Strict access control: Only configured resources are visible and accessible
Available Resources
- knowledge - Knowledge base articles and categories (configurable: allowed_categories, max_articles, allow_search, excluded_tags)
- (Resource ecosystem grows with auto-discovery)
Configuration
Environment Variables
DATABASE_URL=postgresql://user:pass@host:port/db
PORT=8000
SESSION_SECRET=your_secure_session_key_here
SUPERADMIN_PASSWORD=your_secure_password
See .env.example for more.
Multi-Tenant Architecture
The system uses four main entities:
- Clients: Organizations or users (e.g., "ACME Corp") with UUID identifiers
- API Keys: Multiple rotatable keys per client
- Tool Configurations: Per-client tool settings stored as JSON with strict access control
- Resource Configurations: Per-client resource settings stored as JSON with strict access control
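A rough sketch of how these entities map to SQLAlchemy models (illustrative; the real definitions live in src/models/, described under Model Organization below — resource configurations follow the same pattern as tool configurations):

import uuid
from sqlalchemy import JSON, ForeignKey, String
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

class Base(DeclarativeBase):
    pass

class Client(Base):
    __tablename__ = "clients"
    id: Mapped[uuid.UUID] = mapped_column(primary_key=True, default=uuid.uuid4)
    name: Mapped[str] = mapped_column(String(255))

class APIKey(Base):
    __tablename__ = "api_keys"
    id: Mapped[uuid.UUID] = mapped_column(primary_key=True, default=uuid.uuid4)
    client_id: Mapped[uuid.UUID] = mapped_column(ForeignKey("clients.id"))
    key: Mapped[str] = mapped_column(String(64), unique=True)
    is_active: Mapped[bool] = mapped_column(default=True)

class ToolConfiguration(Base):
    __tablename__ = "tool_configurations"
    id: Mapped[uuid.UUID] = mapped_column(primary_key=True, default=uuid.uuid4)
    client_id: Mapped[uuid.UUID] = mapped_column(ForeignKey("clients.id"))
    tool_name: Mapped[str] = mapped_column(String(100))
    settings: Mapped[dict] = mapped_column(JSON, default=dict)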
Custom Tools Development
MCPeasy supports adding organization-specific tools while maintaining clean separation from core functionality:
Quick Custom Tool Setup
1. Fork mcpeasy repository
2. Create custom tool repository with your organization's tools
3. Add as git submodule:
   git submodule add https://github.com/yourorg/mcp-tools.git src/custom_tools/yourorg
4. Configure deployment: Add tools to config/deployment.yaml
5. Enable for clients: Use admin UI to configure tools per client
Enhanced Tool Response Types
Custom tools support multiple content types for optimal LLM integration:
# Structured data (recommended for LLM processing)
return ToolResult.json({"result": 42, "status": "success"})
# Human-readable text
return ToolResult.text("Operation completed successfully!")
# Markdown formatting
return ToolResult.markdown("# Success\n\n**Result**: 42")
# File references
return ToolResult.file("s3://bucket/report.pdf", mime_type="application/pdf")
# Error handling
return ToolResult.error("Invalid operation: division by zero")
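Putting these together, the body of a custom tool is mostly argument handling plus one of the response types above. Only the ToolResult usage comes from this README; the function signature and registration mechanism are hypothetical stand-ins for what the templates in templates/ define:

# Hypothetical tool body - signature and registration come from templates/
async def divide(arguments: dict, config: dict) -> ToolResult:
    a, b = arguments.get("a"), arguments.get("b")
    if not b:
        return ToolResult.error("Invalid operation: division by zero")
    return ToolResult.json({"result": a / b, "status": "success"})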
Templates and Documentation
- Templates: Complete tool/resource templates in the templates/ directory
- Best practices: Examples show proper dependency management and configuration
- Git submodule workflow: Clean separation between core and custom code
- YAML deployment filtering: Environment-specific tool availability
Development
Docker Development (Recommended)
# Start all services with live reload
docker-compose up
# Access services:
# - App: http://localhost:8000
# - Admin: http://localhost:3000
# - Database Inspector: http://localhost:8080
Both the frontend and backend reload live on code changes.
Database Inspector (Adminer)
When running with Docker Compose, Adminer provides a lightweight web interface to inspect your PostgreSQL database:
- URL: http://localhost:8080
- Login credentials:
  - Server: db
  - Username: postgres
  - Password: postgres
  - Database: mcp
Features:
- Browse all tables (clients, api_keys, tool_configurations, resource_configurations, tool_calls)
- View table data and relationships
- Run SQL queries
- Export data
- Monitor database schema changes
- Analyze tool usage patterns and performance metrics (example query below)
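As an example of the last point, an illustrative query against the tool_calls table. The column names here are guesses; check the actual schema in Adminer first:

-- Illustrative: calls, average latency, and error count per tool
SELECT tool_name,
       COUNT(*) AS calls,
       AVG(execution_time_ms) AS avg_ms,
       COUNT(*) FILTER (WHERE error IS NOT NULL) AS errors
FROM tool_calls
GROUP BY tool_name
ORDER BY calls DESC;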
Local Development
- Dependencies: Managed with uv
- Code structure: Modular design with SQLAlchemy models, session auth, admin UI
- Database: PostgreSQL with async SQLAlchemy and Alembic migrations
- Authentication: Session-based admin authentication with secure cookies
- Migrations: Automatic database migrations with Alembic
- Testing: Run development server with auto-reload
Testing MCP Endpoints
Using MCP Inspector (Recommended)
1. Get token URL: From admin dashboard, copy the MCP URL for your token
2. Install inspector: npx @modelcontextprotocol/inspector
3. Open inspector: Visit http://localhost:6274 in browser (include proxy auth if needed, following instructions at inspector launch)
4. Add server: Enter your MCP URL: http://localhost:8000/mcp/{token}
5. Configure tools and resources: In admin interface, add/configure tools and resources for your client
6. Test functionality: Click on configured tools and resources to test them (unconfigured items won't appear)
✅ Verified Working: The MCP Inspector successfully connects and displays only configured tools and resources!
Manual Testing
# Test capability discovery
curl http://localhost:8000/mcp/{your_api_key}
# Test echo tool (no configuration needed)
curl -X POST http://localhost:8000/mcp/{your_api_key} \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"id": 1,
"method": "tools/call",
"params": {
"name": "echo",
"arguments": {"message": "Hello MCP!"}
}
}'
# Test send_email tool (uses client-specific configuration)
curl -X POST http://localhost:8000/mcp/{your_api_key} \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"id": 1,
"method": "tools/call",
"params": {
"name": "send_email",
"arguments": {
"to": "user@example.com",
"subject": "Test",
"body": "This uses my configured from address!"
}
}
}'
# Test knowledge resource (uses client-specific configuration)
curl -X POST http://localhost:8000/mcp/{your_api_key} \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"id": 1,
"method": "resources/read",
"params": {
"uri": "knowledge://search?q=api"
}
}'
Database Migrations
The system uses Alembic for database migrations with automatic execution on Docker startup for the best developer experience.
Migration Workflow (Simplified)
# 1. Create a new migration after making model changes
./migrate.sh create "add user preferences table"
# 2. Restart the app (migrations apply automatically)
docker-compose restart app
# That's it! No manual migration commands needed.
Available Migration Commands
The ./migrate.sh script provides all migration functionality:
# Create new migration (auto-starts database if needed)
./migrate.sh create "migration message"
# Apply pending migrations manually (optional)
./migrate.sh upgrade
# Check current migration status
./migrate.sh status
# View migration history
./migrate.sh history
How It Works
- Development: Use ./migrate.sh create "message" to generate migration files (see the sample migration below)
- Automatic Application: Migrations run automatically when Docker containers start
- No Manual Steps: The Docker containers handle alembic upgrade head on startup
- Database Dependency: Docker waits for database health check before running migrations
- Volume Mounting: Migration files are immediately available in containers via volume mounts
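For orientation, a generated migration file in src/migrations/versions/ looks roughly like this. The revision IDs and the table itself are placeholders based on the "user preferences" example above:

"""add user preferences table (illustrative example)"""
import sqlalchemy as sa
from alembic import op

# Alembic generates these identifiers; the values here are placeholders.
revision = "abc123def456"
down_revision = "0123456789ab"

def upgrade() -> None:
    op.create_table(
        "user_preferences",
        sa.Column("id", sa.Integer, primary_key=True),
        sa.Column("client_id", sa.Uuid, sa.ForeignKey("clients.id")),
        sa.Column("settings", sa.JSON, nullable=False, server_default="{}"),
    )

def downgrade() -> None:
    op.drop_table("user_preferences")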
Model Organization
Models are organized in separate files by domain:
- src/models/base.py - SQLAlchemy Base class
- src/models/client.py - Client and APIKey models
- src/models/configuration.py - Tool and Resource configurations
- src/models/knowledge.py - Knowledge base models
- src/models/tool_call.py - Tool call tracking and auditing
Migration Workflow
1. Make model changes in the appropriate model files
2. Generate migration: The system auto-detects changes and creates migration files
3. Review migration: Check the generated SQL in src/migrations/versions/
4. Deploy: Migrations run automatically on startup in production
Production Migration Behavior
- ✅ Automatic execution: Migrations run on app startup
- ✅ Safe rollouts: Failed migrations prevent app startup
- ✅ Version tracking: Database tracks current migration state
- ✅ Idempotent: Safe to run multiple times
Performance & Scalability
The system is optimized for production workloads with several performance enhancements:
- Background processing: Tool call logging moved to background tasks for faster response times
- Request timeouts: 30-second timeouts prevent runaway tool executions
- Configuration caching: 5-minute TTL cache reduces database queries for configuration lookups (sketch after this list)
- Connection pooling: Optimized PostgreSQL connection management with pre-ping validation
- Multi-worker setup: 2 workers optimized for Fly.io deployment with automatic recycling
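The configuration cache is conceptually just a TTL wrapper around the database lookup. A minimal sketch of the idea follows; it is not the production code, and load_config_from_db is a hypothetical stand-in:

import time

# Minimal sketch of a 5-minute TTL cache for configuration lookups.
_CACHE: dict[str, tuple[float, dict]] = {}
TTL_SECONDS = 300

async def load_config_from_db(api_key: str, tool_name: str) -> dict:
    """Hypothetical stand-in for the real async SQLAlchemy query."""
    return {}

async def get_tool_config(api_key: str, tool_name: str) -> dict:
    cache_key = f"{api_key}:{tool_name}"
    hit = _CACHE.get(cache_key)
    if hit and time.monotonic() - hit[0] < TTL_SECONDS:
        return hit[1]  # fresh cache entry - skip the database
    config = await load_config_from_db(api_key, tool_name)
    _CACHE[cache_key] = (time.monotonic(), config)
    return config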
Deployment
- Platform: Recommended deployment is Fly.io. NB! In some situations (e.g. if the MCP client connecting to this server runs inside Cloudflare Workers) you should set force_https = false in your fly.toml (see the snippet below), because otherwise you may get endless redirect issues on the MCP client side
- Database: Any Postgres will do; tested on Neon PostgreSQL with automatic migrations
- Environment: Production-ready with proper error handling and migration safety
- Workers: 2 Uvicorn workers with 1000-request recycling for optimal memory management
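The force_https toggle mentioned above lives in the http_service section of fly.toml:

# fly.toml - disable the HTTPS redirect when the connecting MCP client
# (e.g. one running in Cloudflare Workers) cannot follow it
[http_service]
  force_https = false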