
# Reviewer MCP

An MCP (Model Context Protocol) service that provides AI-powered development workflow tools. It supports multiple AI providers (OpenAI and Ollama) and offers standardized tools for specification generation, code review, and project management.
## Features

- Specification Generation: Create detailed technical specifications from prompts
- Specification Review: Review specifications for completeness and provide critical feedback
- Code Review: Analyze code changes with focus on security, performance, style, or logic
- Test Runner: Execute tests with LLM-friendly formatted output
- Linter: Run linters with structured output formatting
- Pluggable AI Providers: Support for both OpenAI and Ollama (local models)
## Installation

```bash
npm install
npm run build
```
## Configuration

### Environment Variables

Create a `.env` file based on `.env.example`:
```bash
# AI Provider Configuration
AI_PROVIDER=openai  # Options: openai, ollama

# OpenAI Configuration
OPENAI_API_KEY=your_api_key_here
OPENAI_MODEL=o1-preview

# Ollama Configuration (for local models)
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=llama2
```
### Project Configuration

Create a `.reviewer.json` file in your project root to customize commands:
```json
{
  "testCommand": "npm test",
  "lintCommand": "npm run lint",
  "buildCommand": "npm run build",
  "aiProvider": "ollama",
  "ollamaModel": "codellama"
}
```
## Using with Claude Desktop

Add the following to your Claude Desktop configuration:
```json
{
  "mcpServers": {
    "reviewer": {
      "command": "node",
      "args": ["/path/to/reviewer-mcp/dist/index.js"],
      "env": {
        "OPENAI_API_KEY": "your-api-key-here"
      }
    }
  }
}
```
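To run against a local Ollama model instead of OpenAI, the same server entry can carry the Ollama-related environment variables described under Configuration. A sketch (paths and model names are placeholders):

```json
{
  "mcpServers": {
    "reviewer": {
      "command": "node",
      "args": ["/path/to/reviewer-mcp/dist/index.js"],
      "env": {
        "AI_PROVIDER": "ollama",
        "OLLAMA_BASE_URL": "http://localhost:11434",
        "OLLAMA_MODEL": "codellama"
      }
    }
  }
}
```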
## Using with Ollama

- Install Ollama: https://ollama.ai
- Pull a model: `ollama pull llama2` or `ollama pull codellama`
- Set `AI_PROVIDER=ollama` in your `.env` file
- The service will use your local Ollama instance
## Available Tools

### generate_spec

Generate a technical specification document.

Parameters:

- `prompt` (required): Description of what specification to generate
- `context` (optional): Additional context or requirements
- `format` (optional): Output format - "markdown" or "structured"
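For illustration, here is how a `generate_spec` invocation might look inside a standard MCP `tools/call` JSON-RPC request. The envelope follows the MCP specification; the argument values are hypothetical:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "generate_spec",
    "arguments": {
      "prompt": "A REST API for managing a todo list",
      "context": "TypeScript, Express, PostgreSQL",
      "format": "markdown"
    }
  }
}
```

MCP clients such as Claude Desktop construct this request for you; the JSON above only shows how the parameters documented here map onto the wire format.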
### review_spec

Review a specification for completeness and provide critical feedback.

Parameters:

- `spec` (required): The specification document to review
- `focusAreas` (optional): Array of specific areas to focus the review on
### review_code

Review code changes and provide feedback.

Parameters:

- `diff` (required): Git diff or code changes to review
- `context` (optional): Context about the changes
- `reviewType` (optional): Type of review - "security", "performance", "style", "logic", or "all"
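As a sketch, the `arguments` object for a security-focused review might look like the following (the diff content and context are hypothetical):

```json
{
  "diff": "diff --git a/src/auth.ts b/src/auth.ts\n--- a/src/auth.ts\n+++ b/src/auth.ts\n@@ -1,3 +1,4 @@\n+const token = req.query.token;",
  "context": "Adds token handling to the auth middleware",
  "reviewType": "security"
}
```

A convenient way to produce the `diff` value is the output of `git diff` for the changes you want reviewed.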
### run_tests

Run standardized tests for the project.

Parameters:

- `testCommand` (optional): Test command to run (defaults to configured command)
- `pattern` (optional): Test file pattern to match
- `watch` (optional): Run tests in watch mode
### run_linter

Run standardized linter for the project.

Parameters:

- `lintCommand` (optional): Lint command to run (defaults to configured command)
- `fix` (optional): Attempt to fix issues automatically
- `files` (optional): Array of specific files to lint
## Development

```bash
# Run in development mode
npm run dev

# Run tests
npm test

# Run unit tests only
npm run test:unit

# Run integration tests (requires Ollama)
npm run test:integration

# Type checking
npm run typecheck

# Linting
npm run lint
```
## End-to-End Testing

The project includes a comprehensive e2e test that validates the full workflow using a real Ollama instance:

- Install and start Ollama: https://ollama.ai
- Pull a model: `ollama pull llama2`
- Run the test: `npm run test:e2e`
The e2e test demonstrates:
- Specification generation
- Specification review
- Code creation
- Code review
- Linting
- Test execution
All steps use real AI responses from your local Ollama instance.
## License

MIT