MCP Code Refiner
A powerful second-layer LLM MCP server that refines and reviews code using AI. Perfect for improving AI-generated code or enhancing human-written code through natural language feedback.
What is This?
This is an MCP (Model Context Protocol) server that adds code refinement and review capabilities to any MCP client like Claude Desktop. It acts as a "second layer" AI that specializes in code improvement, working alongside your primary AI assistant.
Use it to:
- Refine code generated by ChatGPT, Claude, or any AI with natural language feedback
- Get comprehensive code reviews with security and performance analysis
- Iteratively improve code until it meets your standards
- Learn from AI-suggested improvements
Features
- Code Refinement - Improve code with natural language feedback ("make it more logical", "add error handling")
- Code Review - AI-powered analysis for bugs, security, performance, and best practices
- Multi-Model Support - Choose between Gemini, Claude, or OpenAI models
- Plug & Play - Works with Claude Desktop and any MCP client
- Smart Prompts - Optimized prompts for high-quality, actionable results
- Diff View - See exactly what changes before applying them
Quick Start
Prerequisites
- Python 3.10 or higher
- At least one AI provider API key (Gemini recommended for free tier)
1. Clone and Install
```bash
git clone https://github.com/yourusername/mcp_code_review.git
cd mcp_code_review
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
pip install -r requirements.txt
```
2. Configure API Keys
Create a `.env` file from the example:
```bash
cp .env.example .env
```
Edit `.env` and add at least ONE API key:
```
# Recommended: Google Gemini (free tier available)
GOOGLE_API_KEY=your-gemini-api-key-here

# Alternative: Anthropic Claude
ANTHROPIC_API_KEY=your-anthropic-api-key-here

# Alternative: OpenAI
OPENAI_API_KEY=your-openai-api-key-here
```
Get API keys from:
- Gemini: https://ai.google.dev/
- Claude: https://console.anthropic.com/
- OpenAI: https://platform.openai.com/api-keys
3. Connect to Claude Desktop
Edit your Claude Desktop config file:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
Add the server configuration:
```json
{
  "mcpServers": {
    "code-refiner": {
      "command": "python",
      "args": ["/absolute/path/to/mcp_code_review/mcp_server.py"],
      "env": {
        "GOOGLE_API_KEY": "your-gemini-api-key"
      }
    }
  }
}
```
Important: Replace /absolute/path/to/mcp_code_review/ with the actual path on your system.
Restart Claude Desktop to load the server.
Usage
Once configured, just talk to Claude naturally in Claude Desktop. The tools are automatically available!
Code Refinement
Improve existing code with natural language instructions:
You: "Refine ./my_script.py to make it more logical and add error handling"
Claude will:
1. Call `refine_code_tool` with your request
2. Show you a diff of the proposed changes
3. Explain what was changed and why
4. Ask for your approval
5. Apply the changes with `apply_refinement_tool` if you confirm
Code Review
Get comprehensive code analysis:
You: "Review ./server.py for security issues and performance problems"
Claude will:
1. Call `review_code_tool` on the file
2. Show the issues found with severity levels (high/medium/low)
3. Highlight code strengths
4. Provide an overall quality score
5. Suggest specific improvements
Real-World Examples
Refinement:
- "Make ./app.py more performant by optimizing loops"
- "Simplify the logic in ./utils/helper.py"
- "Add comprehensive error handling to ./api/routes.py"
- "Refactor ./legacy_code.py to follow modern Python best practices"
- "Add type hints and docstrings to ./calculator.py"
Review:
- "Review ./authentication.py for security vulnerabilities"
- "Check ./database.py for SQL injection risks"
- "Analyze ./api_client.py for error handling issues"
- "Review ./main.py and suggest improvements"
Available Models
Configure the model via the `ai_provider` parameter; otherwise Claude uses the default (`gemini`).
Gemini (Google)
- `gemini` - Gemini 2.0 Flash (fast, free tier)
- `gemini-pro` - Gemini 1.5 Pro (more capable)
Claude (Anthropic)
- `claude` or `claude-sonnet` - Claude 3.5 Sonnet (high quality)
- `claude-opus` - Claude 3 Opus (most capable)
- `claude-haiku` - Claude 3.5 Haiku (fastest)
OpenAI
- `openai` or `gpt-4o` - GPT-4o (balanced)
- `gpt-4` - GPT-4 Turbo
- `gpt-3.5` - GPT-3.5 Turbo (fastest)
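Internally, aliases like these typically resolve to full LiteLLM model identifiers. The sketch below shows one plausible mapping; the exact identifier strings are assumptions on our part, so check `utils/llm_client.py` for the real table:

```python
# Hypothetical alias table mapping the short names above to LiteLLM
# model identifiers. The identifier strings are assumptions; consult
# utils/llm_client.py for the mapping the server actually uses.
MODEL_ALIASES = {
    "gemini": "gemini/gemini-2.0-flash",
    "gemini-pro": "gemini/gemini-1.5-pro",
    "claude": "anthropic/claude-3-5-sonnet-20241022",
    "claude-sonnet": "anthropic/claude-3-5-sonnet-20241022",
    "claude-opus": "anthropic/claude-3-opus-20240229",
    "claude-haiku": "anthropic/claude-3-5-haiku-20241022",
    "openai": "gpt-4o",
    "gpt-4o": "gpt-4o",
    "gpt-4": "gpt-4-turbo",
    "gpt-3.5": "gpt-3.5-turbo",
}

def resolve_model(ai_provider: str = "gemini") -> str:
    """Translate an ai_provider alias into a LiteLLM model string."""
    try:
        return MODEL_ALIASES[ai_provider]
    except KeyError:
        raise ValueError(f"Unknown ai_provider: {ai_provider!r}")
```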
MCP Tools Reference
This server provides three MCP tools that Claude Desktop can call automatically:
1. refine_code_tool
Purpose: Improves existing code based on natural language feedback using a second-layer LLM.
Parameters:
- `user_request` (string, required) - What you want to improve (e.g., "make it more logical", "add error handling")
- `file_path` (string, required) - Path to the code file to refine
- `ai_provider` (string, optional) - AI model to use (default: `"gemini"`)
Returns:
```json
{
  "status": "success",
  "explanation": "Added error handling and simplified logic...",
  "diff": "--- original\n+++ refined\n...",
  "refined_code": "def improved_function():\n    ...",
  "file_path": "./app.py"
}
```
2. review_code_tool
Purpose: Analyzes code for bugs, security vulnerabilities, performance issues, and quality.
Parameters:
- `file_path` (string, required) - Path to the code file to review
- `ai_provider` (string, optional) - AI model to use (default: `"gemini"`)
Returns:
```json
{
  "status": "success",
  "issues": [
    {
      "severity": "high",
      "category": "security",
      "issue": "SQL injection vulnerability",
      "line": 42,
      "suggestion": "Use parameterized queries..."
    }
  ],
  "strengths": ["Good error handling", "Clear naming"],
  "overall_assessment": "Code is functional but has security concerns...",
  "score": 7
}
```
3. apply_refinement_tool
Purpose: Applies refined code to the file after user approval.
Parameters:
- `file_path` (string, required) - Path to the file to update
- `refined_code` (string, required) - The improved code from `refine_code_tool`
Returns:
```json
{
  "status": "success",
  "message": "Code successfully applied to ./app.py"
}
```
Important: Only use this after the user has reviewed and approved the changes!
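A conservative way to implement this kind of apply step is sketched below. The `.bak` backup is a safety measure this example adds on its own; it is not necessarily what `apply_refinement_tool` does internally:

```python
import shutil
from pathlib import Path

def apply_refinement(file_path: str, refined_code: str) -> dict:
    """Write refined code to file_path, keeping a .bak copy first.

    Illustrative sketch only; the real tool's behavior may differ.
    """
    path = Path(file_path)
    if not path.is_file():
        return {"status": "error", "message": f"No such file: {file_path}"}
    # Back up the original before overwriting it.
    shutil.copy2(path, path.with_suffix(path.suffix + ".bak"))
    path.write_text(refined_code, encoding="utf-8")
    return {"status": "success",
            "message": f"Code successfully applied to {file_path}"}
```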
Testing
Test the server without Claude Desktop:
```bash
python client.py
```
This runs a simple test client to verify the server works.
Project Structure
```
mcp_code_review/
├── mcp_server.py            # Main MCP server entry point
├── client.py                # Test client for local testing
├── requirements.txt         # Python dependencies
├── .env.example             # Environment variables template
├── .env                     # Your API keys (git-ignored)
│
├── tools/                   # MCP tool implementations
│   ├── __init__.py
│   ├── file_ops.py          # File read/write utilities
│   ├── code_refinement.py   # Code refinement logic
│   └── code_review.py       # Code review logic
│
├── prompts/                 # AI prompt templates
│   ├── code_refinement.txt  # Refinement prompt template
│   └── code_review.txt      # Review prompt template
│
└── utils/                   # Helper utilities
    ├── __init__.py
    ├── llm_client.py        # LiteLLM wrapper for multi-provider support
    └── diff_generator.py    # Unified diff generation
```
How It Works
This server implements a "second-layer LLM" architecture:
- You interact with Claude Desktop (first-layer AI) using natural language
- Claude understands your intent and calls the appropriate MCP tool
- MCP Server receives the request and invokes a second-layer LLM specialized for code tasks
- Second-layer LLM analyzes or refines the code using optimized prompts
- Results are returned to Claude with diffs, explanations, and suggestions
- Claude presents the results to you for review
- You approve or reject the changes
- Changes are applied only after your confirmation
This two-layer approach combines Claude's conversational abilities with specialized code analysis/refinement models.
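The core of steps 3-5 can be sketched in a few lines of Python. Here `call_llm` stands in for the second-layer model call (the real server routes this through LiteLLM), and the prompt wording is illustrative, not the server's actual template from `prompts/`:

```python
import difflib

def refine_code(user_request: str, original_code: str, call_llm) -> dict:
    """Second-layer refinement flow: prompt a code LLM, diff the result.

    call_llm is any callable taking a prompt string and returning the
    refined code; in the real server this would be an LLM completion.
    """
    prompt = (
        "You are a code refinement assistant.\n"
        f"Instruction: {user_request}\n"
        f"Code:\n{original_code}\n"
        "Return only the improved code."
    )
    refined = call_llm(prompt)
    # Build the diff that Claude shows the user for approval.
    diff = "".join(difflib.unified_diff(
        original_code.splitlines(keepends=True),
        refined.splitlines(keepends=True),
        fromfile="original", tofile="refined",
    ))
    return {"status": "success", "refined_code": refined, "diff": diff}
```

Because the LLM call is injected, the same flow can be exercised in tests with a stub in place of a live model.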
Use Cases
1. Refining AI-Generated Code
First LLM generates code → Use this to improve it
2. Code Review Assistant
Get AI-powered feedback on your code
3. Iterative Improvement
Keep refining until perfect
4. Learning Tool
See how AI would improve your code and learn from it
Requirements
- Python 3.10 or higher
- At least one AI provider API key (Gemini recommended for free tier)
- Dependencies listed in `requirements.txt`:
  - `fastmcp` - FastMCP framework
  - `mcp` - Model Context Protocol SDK
  - `litellm` - Multi-provider LLM wrapper
  - `rich` - Terminal formatting
  - `python-dotenv` - Environment variable management
Troubleshooting
Server Not Appearing in Claude Desktop
- Check that the path in `claude_desktop_config.json` is absolute, not relative
- Verify the Python path is correct (use `which python` in your activated venv)
- Check the Claude Desktop logs for errors:
  - macOS: `~/Library/Logs/Claude/`
  - Windows: `%APPDATA%\Claude\logs\`
- Restart Claude Desktop after config changes
API Key Errors
- Verify your API key is correct in the `.env` file
- Make sure the key is also in the `claude_desktop_config.json` `env` section
- Check that you have API credits/quota remaining
- Try using a different AI provider as a fallback
File Path Issues
- Always use absolute paths or paths relative to where you run the command
- On Windows, use forward slashes `/` or escaped backslashes `\\`
- Verify the file exists: `ls /path/to/file.py`
Module Import Errors
- Ensure the virtual environment is activated
- Reinstall dependencies: `pip install -r requirements.txt --upgrade`
- Check the Python version: `python --version` (must be 3.10+)
Testing the Server
Run the test client to verify the server works:
```bash
python client.py
```
This bypasses Claude Desktop and tests the MCP server directly.
Contributing
Contributions are welcome! Here's how you can help:
- Report bugs - Open an issue with details about the problem
- Suggest features - Share ideas for new capabilities
- Improve prompts - The prompt templates in `prompts/` can always be refined
- Add AI providers - Extend support for additional LLM providers
- Submit PRs - Fix bugs, add features, improve documentation
License
MIT License - see LICENSE file for details
Acknowledgments
Built with:
- FastMCP - FastMCP framework for building MCP servers
- LiteLLM - Unified interface for multiple LLM providers
- MCP Protocol - Model Context Protocol specification
Resources
Questions or issues? Open an issue on GitHub or check the troubleshooting section above.