Code Analysis MCP Server
A modular MCP server that provides tools for file operations, regex-based code searching, and structural analysis of functions and classes across multiple programming languages. It also includes AI-powered features for intelligently updating files according to architectural changes.
Features
File Operations
- read_file: Read contents of any code file
- list_files: List files in directories with pattern matching
- file_info: Get detailed file information (size, type, line count)
Code Search
- search_code: Search for patterns in code using regex
- find_definition: Find symbol definitions (functions, classes, variables)
Code Analysis
- analyze_structure: Analyze code structure (imports, classes, functions)
Installation
# Clone the repository
git clone https://github.com/yourusername/code-mcp.git
cd code-mcp
# Create virtual environment
python -m venv venv
# Activate environment
source venv/bin/activate # On Unix/macOS
venv\Scripts\activate # On Windows
# Install dependencies
pip install -r requirements.txt
Usage
1. With Claude Desktop
Add to your Claude Desktop configuration file:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "code-analyzer": {
      "command": "python",
      "args": ["/absolute/path/to/code-mcp/server.py"]
    }
  }
}
Then restart Claude Desktop.
2. With Continue.dev (VS Code)
Add to your Continue configuration:
{
  "models": [...],
  "mcpServers": {
    "code-analyzer": {
      "command": "python",
      "args": ["/absolute/path/to/code-mcp/server.py"]
    }
  }
}
3. With Other MCP Clients
Any MCP-compatible client can use this server by pointing to the server.py file.
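For example, with the official MCP Python SDK you can launch the server over stdio and discover its tools. This is a minimal sketch, not part of the project itself; adjust the placeholder server path to your checkout:

# Minimal sketch: connect to the server over stdio and list its tools.
# Assumes the official `mcp` Python SDK is installed.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def list_server_tools() -> None:
    params = StdioServerParameters(
        command="python",
        args=["/absolute/path/to/code-mcp/server.py"],  # placeholder path
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

asyncio.run(list_server_tools())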
Available Tools
read_file
Read the contents of a file.
{
  "tool": "read_file",
  "arguments": {
    "path": "src/main.py",
    "encoding": "utf-8"   // optional, default: utf-8
  }
}
list_files
List files in a directory with optional pattern matching.
{
  "tool": "list_files",
  "arguments": {
    "directory": "./src",   // optional, default: current dir
    "pattern": "*.py",      // optional, default: *
    "recursive": true       // optional, default: false
  }
}
file_info
Get detailed information about a file.
{
  "tool": "file_info",
  "arguments": {
    "path": "src/main.py"
  }
}
search_code
Search for patterns in code files using regex.
{
  "tool": "search_code",
  "arguments": {
    "pattern": "def.*test",    // regex pattern
    "directory": "./src",      // optional
    "file_pattern": "*.py",    // optional
    "case_sensitive": false    // optional, default: true
  }
}
find_definition
Find where a symbol is defined.
{
  "tool": "find_definition",
  "arguments": {
    "symbol": "MyClass",
    "directory": "./src",   // optional
    "language": "python"    // optional: python, javascript
  }
}
analyze_structure
Analyze the structure of a code file.
{
  "tool": "analyze_structure",
  "arguments": {
    "path": "src/main.py",
    "include_docstrings": true   // optional, default: false
  }
}
update_with_architecture
Compare old and new architecture versions and intelligently update the new file.
{
  "tool": "update_with_architecture",
  "arguments": {
    "old_file": "src/legacy/module.py",   // Reference file (old architecture)
    "new_file": "src/modern/module.py",   // Target file (will be updated)
    "backup": true                        // optional, default: true
  }
}
AI Configuration
To use the AI-powered tools, configure your API keys (a provider-selection sketch follows these steps):
1. Copy .env.example to .env:
   cp .env.example .env
2. Edit .env and add your API keys:
   AI_PROVIDER=openai
   OPENAI_API_KEY=your-openai-api-key
   # or
   AI_PROVIDER=anthropic
   ANTHROPIC_API_KEY=your-anthropic-api-key
3. Install the AI dependencies:
   pip install openai anthropic
4. Test LLM connectivity:
   ./test_llm.sh
   # or
   python tests/test_llm.py
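As a rough illustration of how these settings are typically consumed, the provider can be selected from the environment at startup. This is a hypothetical sketch, not the server's actual code, and it assumes the python-dotenv, openai, and anthropic packages are installed:

# Hypothetical sketch of provider selection from .env; not the server's actual code.
import os

from dotenv import load_dotenv

def make_ai_client():
    load_dotenv()  # reads AI_PROVIDER and the matching API key from .env
    provider = os.getenv("AI_PROVIDER", "openai")
    if provider == "openai":
        from openai import OpenAI
        return OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    if provider == "anthropic":
        from anthropic import Anthropic
        return Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
    raise ValueError(f"Unsupported AI_PROVIDER: {provider}")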
Thinking Models Support
The tool automatically handles "thinking" models (such as o1 and o1-preview) that include reasoning in their responses:
- Thinking sections are automatically removed
- Only the actual code is extracted
- Supports various thinking formats: <think>, [thinking], etc. (a minimal stripping sketch follows this list)
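A minimal sketch of the stripping step, using a hypothetical helper (not the server's actual implementation):

# Hypothetical sketch: strip "thinking" sections from a model response,
# keeping only the code that follows. Not the server's actual code.
import re

THINKING_BLOCKS = [
    r"<think>.*?</think>",            # <think> ... </think>
    r"\[thinking\].*?\[/thinking\]",  # [thinking] ... [/thinking]
]

def strip_thinking(response: str) -> str:
    cleaned = response
    for pattern in THINKING_BLOCKS:
        cleaned = re.sub(pattern, "", cleaned, flags=re.DOTALL | re.IGNORECASE)
    return cleaned.strip()

print(strip_thinking("<think>plan the edit</think>\nprint('hello')"))  # -> print('hello')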
Examples
In Claude Desktop
After configuring, you can ask Claude:
- "Read the file src/main.py"
- "Search for all functions that contain 'test' in the src directory"
- "Find where the class 'UserModel' is defined"
- "Analyze the structure of app.py"
- "List all Python files in the project"
Programmatic Usage
# Example of calling the tools programmatically via the official MCP Python SDK
# (stdio client; adjust the server path to your checkout)
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    params = StdioServerParameters(
        command="python",
        args=["/absolute/path/to/code-mcp/server.py"],
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Read a file
            result = await session.call_tool("read_file", {
                "path": "src/main.py"
            })

            # Search for patterns
            result = await session.call_tool("search_code", {
                "pattern": "TODO|FIXME",
                "directory": "./"
            })

            # Analyze structure
            result = await session.call_tool("analyze_structure", {
                "path": "src/main.py",
                "include_docstrings": True
            })

asyncio.run(main())
Architecture
The server follows a modular architecture:
├── server.py              # Main MCP server
├── tools/                 # Tool definitions
│   ├── file_tools.py      # File operations
│   └── code_tools.py      # Code analysis tools
├── handlers/              # Request handlers
│   ├── file_handler.py
│   ├── search_handler.py
│   └── analyze_handler.py
└── core/                  # Core services
    ├── file_system.py     # File system operations
    └── code_parser.py     # Code parsing logic
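As a rough sketch of how these pieces fit together (module contents and function names below are illustrative, not the project's actual code), a tool schema defined in tools/ is served and dispatched to its handler in handlers/ by server.py, using the low-level API of the official mcp SDK:

# Illustrative sketch only: shows the tool-definition/handler split, not the real modules.
import mcp.types as types
from mcp.server import Server

server = Server("code-analyzer")

# tools/file_tools.py would define the tool schema...
READ_FILE_TOOL = types.Tool(
    name="read_file",
    description="Read the contents of a file",
    inputSchema={
        "type": "object",
        "properties": {"path": {"type": "string"}},
        "required": ["path"],
    },
)

# ...handlers/file_handler.py would implement it...
async def handle_read_file(arguments: dict) -> list[types.TextContent]:
    with open(arguments["path"], encoding=arguments.get("encoding", "utf-8")) as f:
        return [types.TextContent(type="text", text=f.read())]

# ...and server.py wires the two together.
@server.list_tools()
async def list_tools() -> list[types.Tool]:
    return [READ_FILE_TOOL]

@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[types.TextContent]:
    if name == "read_file":
        return await handle_read_file(arguments)
    raise ValueError(f"Unknown tool: {name}")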
Supported Languages
- Python (.py)
- JavaScript/TypeScript (.js, .ts, .jsx, .tsx)
- Java (.java)
- C/C++ (.c, .cpp, .h)
- Go (.go)
- Rust (.rs)
- Ruby (.rb)
- And more...
Security
- File access is restricted to prevent directory traversal (a minimal guard sketch follows this list)
- Large files are handled efficiently with streaming
- Search results are limited to prevent memory issues
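The traversal guard mentioned above can be sketched as follows, with a hypothetical helper and an assumed project root (not the server's actual code):

# Hypothetical sketch: reject paths that escape a configured project root.
from pathlib import Path

PROJECT_ROOT = Path("/absolute/path/to/project").resolve()  # assumed configuration

def safe_path(user_path: str) -> Path:
    candidate = (PROJECT_ROOT / user_path).resolve()
    if not candidate.is_relative_to(PROJECT_ROOT):  # Python 3.9+
        raise PermissionError(f"Path escapes project root: {user_path}")
    return candidate

print(safe_path("src/main.py"))   # OK: resolves inside the root
# safe_path("../../etc/passwd")   # raises PermissionError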
Contributing
Feel free to submit issues and enhancement requests!
License
MIT