# AI Peer Review MCP Server

> Enhance your local LLM responses with real-time peer review from Google Gemini

A Model Context Protocol (MCP) server that enables local language models to request peer review feedback from Google Gemini, dramatically improving response quality through AI collaboration.
## Features

- **Real-time peer review** from Google Gemini for any local LLM response
- **Manual trigger system** - you control when to request peer review
- **Detailed feedback analysis** - accuracy, completeness, clarity, and improvement suggestions
- **Comprehensive logging** - see exactly what feedback Gemini provides
- **Privacy-conscious** - only shares content when explicitly requested
- **Free to use** - leverages Google Gemini's free tier
- **Easy integration** - works with any MCP-compatible local LLM setup
## Use Cases

- **Fact-checking** complex or technical responses
- **Quality improvement** for educational content
- **Writing enhancement** for creative tasks
- **Technical validation** for coding explanations
- **Research assistance** with multiple AI perspectives
## Prerequisites

- **Python 3.8+** installed on your system
- **LMStudio** (or another MCP-compatible LLM client)
- **Google AI Studio account** (free) for Gemini API access
- A local LLM with tool-calling support (e.g., Llama 3.1, Mistral, Qwen)
## Quick Start
### 1. Get a Google Gemini API Key

- Visit Google AI Studio
- Sign in with your Google account
- Click "Get API key" → "Create API key in new project"
- Copy your API key (it starts with `AIza...`)
### 2. Install the MCP Server

```bash
# Clone or create the project directory
git clone https://github.com/your-repo/ai-peer-review-mcp  # replace with the actual repo URL
cd ai-peer-review-mcp

# Create a virtual environment (recommended)
python3 -m venv venv
source venv/bin/activate  # on Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Create the environment file
cp .env.example .env
# Now edit .env and add your API key:
# GEMINI_API_KEY=your_actual_api_key_here
```
### 3. Review Server Files

**requirements.txt:**

```text
requests
python-dotenv
```

**server.py:** see the full implementation in the repository.
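As a rough sketch of the server's core (the helper names, model name, and endpoint below are assumptions for illustration, not the repository's actual code), the essential step is wrapping the original question and draft answer in a review prompt and posting it to the Gemini `generateContent` endpoint:

```python
import os

# Model name and endpoint are assumptions; check Google's current API docs
# and the value used in server.py for the real ones.
GEMINI_API_URL = (
    "https://generativelanguage.googleapis.com/v1beta/"
    "models/gemini-1.5-flash:generateContent"
)

def build_review_request(question: str, answer: str) -> dict:
    """Build the JSON body the Gemini generateContent endpoint expects."""
    prompt = (
        "PEER REVIEW REQUEST:\n"
        f"Original question: {question}\n"
        f"Draft answer: {answer}\n"
        "Please assess accuracy, completeness, and clarity, "
        "and suggest concrete improvements."
    )
    return {"contents": [{"parts": [{"text": prompt}]}]}

def request_peer_review(question: str, answer: str) -> str:
    """Send the review request to Gemini and return its feedback text."""
    import requests  # listed in requirements.txt

    resp = requests.post(
        GEMINI_API_URL,
        params={"key": os.environ["GEMINI_API_KEY"]},
        json=build_review_request(question, answer),
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["candidates"][0]["content"]["parts"][0]["text"]
```

The MCP layer on top of this simply exposes `request_peer_review` as a tool that the client can call.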
### 4. Configure LMStudio (or Another Supported MCP Host, e.g. Claude Desktop)

Add this configuration to your LMStudio MCP settings:

```json
{
  "mcpServers": {
    "ai-peer-review": {
      "command": "python",
      "args": ["server.py"],
      "cwd": "/path/to/your/ai-peer-review-mcp",
      "env": {
        "GEMINI_API_KEY": "your_actual_api_key_here"
      }
    }
  }
}
```
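If you installed the dependencies into a virtual environment, point `command` at the venv's interpreter so the server can find its packages (the paths below are placeholders for your own setup):

```json
{
  "mcpServers": {
    "ai-peer-review": {
      "command": "/path/to/your/ai-peer-review-mcp/venv/bin/python",
      "args": ["server.py"],
      "cwd": "/path/to/your/ai-peer-review-mcp",
      "env": {
        "GEMINI_API_KEY": "your_actual_api_key_here"
      }
    }
  }
}
```

On Windows, the interpreter lives at `venv\Scripts\python.exe` instead.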
**Finding MCP settings in LMStudio:**

- Look for: Settings → MCP Servers
- Or: Tools & Integrations → MCP Configuration
- Or: Program button → Edit MCP JSON
### 5. Test the Setup

1. Restart LMStudio after adding the MCP configuration
2. Start a new chat in LMStudio
3. Ask any question, e.g. "What is quantum computing?"
4. Request peer review: "Use the ai_peer_review tool to check and improve your answer"
## Usage Examples

### Basic Usage

```text
User: What causes climate change?
LLM:  [Provides initial response about greenhouse gases...]
User: Use AI Peer Review to verify and improve that answer
LLM:  [Calls ai_peer_review tool, receives feedback, provides enhanced response]
```

### Technical Questions

```text
User: Explain how neural networks work
LLM:  [Initial technical explanation...]
User: Can you use ai_peer_review to make sure the explanation is accurate?
LLM:  [Enhanced response with better technical details and examples]
```

### Creative Tasks

```text
User: Write a short story about AI
LLM:  [Initial creative writing...]
User: Use peer review to improve the story structure and clarity
LLM:  [Improved story with better narrative flow and character development]
```
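Under the hood, a request like "use the ai_peer_review tool" becomes an MCP `tools/call` JSON-RPC message from the client to the server. A representative message looks like this (the exact argument names are an assumption about this server's tool schema; `tools/call` itself is the standard MCP method):

```json
{
  "jsonrpc": "2.0",
  "id": 42,
  "method": "tools/call",
  "params": {
    "name": "ai_peer_review",
    "arguments": {
      "question": "What causes climate change?",
      "response": "Climate change is driven primarily by greenhouse gases..."
    }
  }
}
```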
## Configuration Options

### Environment Variables

- `GEMINI_API_KEY` - your Google Gemini API key (required)

### Customization

You can modify the peer review prompt in `server.py` to focus on specific aspects:
```python
review_prompt = f"""PEER REVIEW REQUEST:
# Customize this section for your specific needs.
# Examples:
# - Focus on technical accuracy for coding questions
# - Emphasize creativity for writing tasks
# - Prioritize safety for medical/legal topics
...
"""
```
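One way to make this customization concrete is to select a review focus by topic before formatting the prompt. This is a sketch: the `REVIEW_FOCUS` table and `build_review_prompt` helper are illustrative, not part of the shipped server.

```python
# Topic-specific focus lines for the peer-review prompt (illustrative).
REVIEW_FOCUS = {
    "coding": "Prioritize technical accuracy: check code, APIs, and edge cases.",
    "writing": "Prioritize creativity: check narrative flow, tone, and structure.",
    "medical": "Prioritize safety: flag unverified claims and add caveats.",
}

def build_review_prompt(question: str, answer: str, topic: str = "coding") -> str:
    """Format a peer-review prompt with a topic-specific focus line."""
    focus = REVIEW_FOCUS.get(topic, "Check accuracy, completeness, and clarity.")
    return (
        "PEER REVIEW REQUEST:\n"
        f"{focus}\n"
        f"Original question: {question}\n"
        f"Draft answer: {answer}"
    )
```

Unknown topics fall back to a general-purpose focus line, so the tool still works when no domain is specified.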
## Monitoring and Logs

The server writes detailed logs to `mcp-server.log`:

```bash
# Watch logs in real time
tail -f mcp-server.log

# View recent activity
tail -n 50 mcp-server.log
```
**Logged information includes:**
- Tool calls from LMStudio
- Requests sent to Gemini
- Raw Gemini responses
- Parsed feedback
- Error details
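If you want a quick summary instead of scanning the raw log, a short script can count entries by keyword. This is a sketch: it assumes one log entry per line (typical for Python's `logging`) and that the keywords below appear in this server's messages, which may not match exactly.

```python
from collections import Counter

def summarize_log(lines):
    """Count log lines mentioning each event of interest (case-insensitive)."""
    keywords = ("Tool call", "Gemini", "ERROR")
    counts = Counter()
    for line in lines:
        for kw in keywords:
            if kw.lower() in line.lower():
                counts[kw] += 1
    return dict(counts)

# Usage:
# with open("mcp-server.log") as f:
#     print(summarize_log(f))
```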
## Troubleshooting

### Common Issues

**"Tool not available"**

- Verify the MCP server configuration in LMStudio
- Ensure your local model supports tool calling
- Restart LMStudio after configuration changes

**"GEMINI_API_KEY not found"**

- Check that your `.env` file exists and contains the correct key
- Verify the API key is valid in Google AI Studio
- Ensure the environment variable is set in your LMStudio config

**"Rate limit exceeded"**

- Google Gemini's free tier has generous limits, but they can still be hit
- Wait a moment and try again
- Check your quota usage in Google AI Studio
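If rate limits come up often, a retry with exponential backoff around the Gemini call usually resolves them. A minimal sketch, assuming `call_gemini` stands in for whatever function performs the actual HTTP request and that it raises an exception on a 429 response:

```python
import time

def with_backoff(call_gemini, max_retries=4, base_delay=1.0):
    """Retry a flaky call, doubling the delay after each failure."""
    for attempt in range(max_retries):
        try:
            return call_gemini()
        except RuntimeError:  # substitute the rate-limit error your HTTP client raises
            if attempt == max_retries - 1:
                raise  # out of retries; let the caller handle it
            time.sleep(base_delay * (2 ** attempt))
```

With `requests`, for example, you would catch the exception raised by `resp.raise_for_status()` instead of `RuntimeError`.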
**"Model not found"**

- API model names change over time
- Update `GEMINI_API_URL` in `server.py` if needed
- Check Google's latest API documentation
### Debug Mode

Run the server manually to see detailed output (make sure your virtual environment is active):

```bash
export GEMINI_API_KEY=your_api_key_here
python server.py
```
## Privacy and Security

- **Data sharing only on request** - content is sent to Gemini only when you explicitly trigger a review
- **No persistent storage** - conversation content is not retained beyond the local `mcp-server.log` file
- **API key security** - keep your Gemini API key private and secure
- **Local processing** - the MCP server runs entirely on your machine; only review requests leave it
## Limitations

- **Requires tool-calling models** - basic instruction-following models won't work
- **Internet connection required** - the server needs access to the Google Gemini API
- **Rate limits** - subject to Google Gemini API quotas (the free tier is generous)
- **Language support** - optimized for English; other languages may work but aren't tested
## Roadmap

- [ ] **Multi-provider support** - add Groq, DeepSeek, and other AI APIs
- [ ] **Smart routing** - automatic provider selection based on question type
- [ ] **Confidence thresholds** - auto-trigger peer review for uncertain responses
- [ ] **Custom review templates** - domain-specific review criteria
- [ ] **Usage analytics** - track improvement metrics and API usage
- [ ] **Batch processing** - review multiple responses at once
## Contributing

We welcome contributions! Here's how to help:

### Development Setup

```bash
git clone https://github.com/your-repo/ai-peer-review-mcp  # replace with your repo URL
cd ai-peer-review-mcp

# Create and activate a virtual environment
python3 -m venv venv
source venv/bin/activate  # on Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Set up your environment
cp .env.example .env
# --> Add your GEMINI_API_KEY to the .env file

echo "Development environment ready. Run with 'python server.py'"
```
### Ways to Contribute

- **Bug reports** - open issues for any problems you encounter
- **Feature requests** - suggest new capabilities or improvements
- **Documentation** - improve setup guides, add examples
- **Code contributions** - submit pull requests for fixes or features
- **Testing** - try it with different models and report compatibility
- **Localization** - help support more languages
### Contribution Guidelines

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Make your changes with clear, descriptive commits
4. Add tests if applicable
5. Update documentation for any new features
6. Submit a pull request with a clear description
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments

- **Anthropic** - for creating the Model Context Protocol standard
- **Google** - for providing the Gemini API
- **LMStudio** - for excellent MCP integration
- **Community contributors** - everyone who helps improve this project
## Support

- **Issues:** GitHub Issues
- **Discussions:** GitHub Discussions
## Star History

If this project helps you, please consider giving it a star on GitHub! ⭐

Made with ❤️ for the AI community