<!-- mcp-name: io.github.Jiansen/mcp-doctor -->
<p align="center"> <img src="assets/avatar-512.png" width="120" alt="mcp-doctor logo"> </p>
<h1 align="center">MCP Doctor</h1>
<p align="center"> <a href="https://pypi.org/project/mcp-doctor/"><img src="https://img.shields.io/pypi/v/mcp-doctor" alt="PyPI"></a> <a href="https://github.com/Jiansen/mcp-doctor/blob/main/LICENSE"><img src="https://img.shields.io/badge/license-MIT-blue" alt="License"></a> <a href="https://pypi.org/project/mcp-doctor/"><img src="https://img.shields.io/pypi/pyversions/mcp-doctor" alt="Python"></a> <a href="https://github.com/Jiansen/mcp-doctor/stargazers"><img src="https://img.shields.io/github/stars/Jiansen/mcp-doctor?style=social" alt="GitHub Stars"></a> </p>
<p align="center">Check and improve the contract quality of any MCP server — for humans, agents, and platforms.</p>
## Problem
Most MCP servers are built with only one audience in mind (usually human developers reading a README). But a successful MCP server needs to satisfy three audiences simultaneously:
- Distribution platforms (Registry, Smithery, PulseMCP, Glama) need structured metadata
- Human users need clear purpose, trust signals, and low install friction
- AI agents need unambiguous tool descriptions, declared side effects, and token-efficient responses
MCP Doctor checks all six dimensions of "contract quality" and gives you actionable recommendations.
## Quick Start

```bash
pip install mcp-doctor
mcp-doctor check /path/to/your-mcp-server
```
## What It Checks
| Dimension | Question |
|---|---|
| Task Clarity | Is the server's purpose immediately clear? |
| Trust & Safety | Are side effects, permissions, and safety boundaries declared? |
| Interface Quality | Are tools well-named, well-described, and well-typed? |
| Token Efficiency | Will tool responses fit an agent's context budget? |
| Install Friction | How fast can someone go from discovery to first use? |
| Cross-platform Readiness | Is metadata complete for Registry, Smithery, PulseMCP, Glama? |
## Usage

### CLI

```bash
# Terminal output (default, rule-based)
mcp-doctor check /path/to/server

# JSON output (for CI/CD)
mcp-doctor check /path/to/server --format json

# Markdown output (for README badges or reports)
mcp-doctor check /path/to/server --format markdown

# AI-enhanced evaluation (requires OPENAI_API_KEY)
mcp-doctor check /path/to/server --mode ai

# AI mode with a specific model
mcp-doctor check /path/to/server --mode ai --model gpt-4o
```
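For CI/CD, the JSON output can be parsed to fail a build when any dimension drops below a threshold. This is an illustrative sketch: the field names (`dimensions`, `score`) are assumptions about the report shape, so inspect `--format json` output from your own server before wiring this into a pipeline.

```python
import json
import subprocess
import sys


def failing_dimensions(report: dict, threshold: int = 75) -> list[str]:
    """Return names of dimensions scoring below the threshold.

    Assumes a report shaped like {"dimensions": {name: {"score": int}}}.
    """
    return [
        name
        for name, result in report.get("dimensions", {}).items()
        if result.get("score", 0) < threshold
    ]


if __name__ == "__main__" and len(sys.argv) > 1:
    # Run mcp-doctor and capture its JSON report.
    raw = subprocess.run(
        ["mcp-doctor", "check", sys.argv[1], "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    failed = failing_dimensions(json.loads(raw))
    if failed:
        print("Below threshold:", ", ".join(failed))
        sys.exit(1)
```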
### As an MCP Server

MCP Doctor is also an MCP server, so AI agents can use it to check other servers:

```json
{
  "mcpServers": {
    "mcp-doctor": {
      "command": "python",
      "args": ["-m", "mcp_doctor.mcp_server"]
    }
  }
}
```
Tools:

- `check_server(path, format, mode, model)` — Run all 6 checks. Use `mode="ai"` for LLM-enhanced review.
- `list_dimensions()` — List what MCP Doctor evaluates.
## For AI Agents: Quick Install

One-liner (paste this to your AI agent):

> Install mcp-doctor (`pip install mcp-doctor`) and add it to your MCP config with command `python -m mcp_doctor.mcp_server`. Then use the `check_server` tool to check any MCP server's quality.

Step by step:

1. Install: `pip install mcp-doctor`
2. Add to your MCP configuration:
   ```json
   { "mcpServers": { "mcp-doctor": { "command": "python", "args": ["-m", "mcp_doctor.mcp_server"] } } }
   ```
3. Use the `check_server` tool with a path to any MCP server repo.
4. If MCP is not available, fall back to the CLI: `mcp-doctor check /path/to/server --format json`
Agent workflow: check server -> review scores -> identify lowest dimensions -> follow recommendations
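The "identify lowest dimensions" step of the workflow above can be sketched in a few lines. The report shape (a `dimensions` map from name to a `score`) is an assumption for illustration, not the documented schema:

```python
def triage(report: dict, worst_n: int = 2) -> list[tuple[str, int]]:
    """Return the worst_n (dimension, score) pairs, lowest score first.

    Assumes a report shaped like {"dimensions": {name: {"score": int}}}.
    """
    scored = [
        (name, result["score"])
        for name, result in report.get("dimensions", {}).items()
    ]
    return sorted(scored, key=lambda pair: pair[1])[:worst_n]


# Hypothetical check_server output:
report = {
    "dimensions": {
        "Task Clarity": {"score": 92},
        "Trust & Safety": {"score": 48},
        "Token Efficiency": {"score": 70},
    }
}
print(triage(report))  # lowest-scoring dimensions first
```

An agent would then look up the recommendations attached to those dimensions and apply them in order.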
## Scoring

Each dimension is scored 0-100 and graded A/B/C/D:
- A (90+): Exemplary
- B (75-89): Good, minor gaps
- C (55-74): Adequate, notable gaps
- D (<55): Needs significant improvement
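The grade bands above reduce to a simple threshold function. This is an illustrative sketch of the mapping, not the tool's internal code:

```python
def grade(score: int) -> str:
    """Map a 0-100 dimension score to a letter grade per the bands above."""
    if score >= 90:
        return "A"
    if score >= 75:
        return "B"
    if score >= 55:
        return "C"
    return "D"


assert [grade(s) for s in (95, 80, 60, 40)] == ["A", "B", "C", "D"]
```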
## Theoretical Foundation
MCP Doctor is based on the Contract Quality Framework — the principle that a successful MCP product's promises (what it does, what it costs, what risks it carries) should be equally parseable by platforms, humans, and agents.
The framework draws from:
- Anthropic's "Writing effective tools for agents" (tool description engineering, namespacing, token efficiency)
- Official MCP Registry requirements (server.json schema, namespace verification)
- Cross-platform analysis of Smithery, PulseMCP, Glama ranking signals
## Evaluation Modes

| Mode | Flag | Deterministic | Network | API Key |
|---|---|---|---|---|
| Rule-based (default) | `--mode rule` | Yes | No | No |
| AI-enhanced | `--mode ai` | No | Yes (LLM API) | `OPENAI_API_KEY` |
Rule-based mode is fully offline and deterministic. No network calls, no LLM, no API keys needed.
AI mode sends a metadata summary (server name, tool definitions, README preview, rule-based scores) to an OpenAI-compatible API for qualitative review. No source code is sent. Reports include model name and version for reproducibility.
AI mode supports any OpenAI-compatible provider:

- `OPENAI_API_KEY` — API key (required for AI mode)
- `OPENAI_BASE_URL` — Custom endpoint (for Ollama, vLLM, LM Studio, Azure, etc.)
- `MCP_DOCTOR_MODEL` — Model name (default: `gpt-4o-mini`)

Install AI support: `pip install mcp-doctor[ai]`
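For example, AI mode could be pointed at a local Ollama instance via its OpenAI-compatible endpoint; the model name and port below are illustrative assumptions, not values documented by this project:

```shell
# Point AI mode at a local OpenAI-compatible endpoint (Ollama shown here).
export OPENAI_API_KEY="ollama"                 # any non-empty value for local servers
export OPENAI_BASE_URL="http://localhost:11434/v1"
export MCP_DOCTOR_MODEL="llama3.1"             # illustrative model name

mcp-doctor check /path/to/server --mode ai
```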
## Trust & Permissions

MCP Doctor is read-only. It never modifies, deletes, or creates files.

| What it accesses | Why |
|---|---|
| `server.json` | Parse MCP metadata |
| `README.md` | Check documentation quality |
| `LICENSE` | Verify license presence |
| `*.py`, `*.json` in `src/` | Detect tool definitions |

All MCP tools have `readOnlyHint: true` and `destructiveHint: false` annotations.
## Feedback

Found a bug? Have a suggestion? We'd love to hear from you:

- GitHub Issues: https://github.com/Jiansen/mcp-doctor/issues
- Quick report (if you have the `gh` CLI): `gh issue create --repo Jiansen/mcp-doctor --title "Bug: ..." --body "..."`

If you're an agent, include the JSON output of `check_server` in your report for context.
## Contributing

```bash
git clone https://github.com/Jiansen/mcp-doctor.git
cd mcp-doctor
pip install -e ".[dev,ai]"
ruff check src/ tests/
pytest tests/ -v
```
If MCP Doctor helped you improve your server, consider giving it a star on GitHub — it helps others discover the tool.
## License
MIT