better-code-review-graph

mcp-name: io.github.n24q02m/better-code-review-graph

Knowledge graph for token-efficient code reviews -- fixed search, configurable embeddings, qualified call resolution.

Fork of code-review-graph with critical bug fixes, configurable embeddings, and production CI/CD. Parses your codebase with Tree-sitter, builds a structural graph of functions/classes/imports, and gives Claude (or any MCP client) precise context so it reads only what matters.


Why Better

| Feature | code-review-graph | better-code-review-graph |
| --- | --- | --- |
| Multi-word search | Broken (literal substring) | AND-logic word splitting |
| callers_of/callees_of | Empty results (bare name targets) | Qualified name resolution + bare fallback |
| Embedding | sentence-transformers + torch (1.1 GB) | qwen3-embed ONNX + LiteLLM (200 MB), dual-mode |
| Output size | Unbounded (500K+ chars) | Paginated (max_results, truncated flag) |
| Tool design | 9 individual tools | 3-tier: graph (mega) + config + help |
| Plugin hooks | Invalid PostEdit/PostGit | Valid PostToolUse |

All fixes have been submitted upstream as standalone PRs (see Upstream PRs). If all of them are merged, this repo will be archived.


Installation

Claude Code

claude mcp add better-code-review-graph -- uvx --python 3.13 better-code-review-graph serve

Claude Code Plugin

claude plugin install n24q02m/better-code-review-graph@better-code-review-graph

Cursor (~/.cursor/mcp.json)

{
  "mcpServers": {
    "better-code-review-graph": {
      "command": "uvx",
      "args": ["--python", "3.13", "better-code-review-graph", "serve"]
    }
  }
}

Codex (~/.codex/config.toml)

[mcp_servers.better-code-review-graph]
command = "uvx"
args = ["--python", "3.13", "better-code-review-graph", "serve"]

Gemini CLI (~/.gemini/settings.json)

{
  "mcpServers": {
    "better-code-review-graph": {
      "command": "uvx",
      "args": ["--python", "3.13", "better-code-review-graph", "serve"]
    }
  }
}

OpenCode (~/.opencode.json)

{
  "mcpServers": {
    "better-code-review-graph": {
      "command": "uvx",
      "args": ["--python", "3.13", "better-code-review-graph", "serve"]
    }
  }
}

Windsurf (~/.codeium/windsurf/mcp_config.json)

{
  "mcpServers": {
    "better-code-review-graph": {
      "command": "uvx",
      "args": ["--python", "3.13", "better-code-review-graph", "serve"]
    }
  }
}

Cline (cline_mcp_settings.json)

{
  "mcpServers": {
    "better-code-review-graph": {
      "command": "uvx",
      "args": ["--python", "3.13", "better-code-review-graph", "serve"]
    }
  }
}

Amp (~/.config/amp/settings.json)

{
  "mcpServers": {
    "better-code-review-graph": {
      "command": "uvx",
      "args": ["--python", "3.13", "better-code-review-graph", "serve"]
    }
  }
}

Docker

docker run -i --rm n24q02m/better-code-review-graph

pip

pip install better-code-review-graph
better-code-review-graph serve

Tools

graph -- Knowledge graph operations

Actions: build | update | query | search | impact | review | embed | stats | large_functions

| Action | Description |
| --- | --- |
| build | Full or incremental graph build. Set full_rebuild=true to re-parse all files. |
| update | Alias for build with full_rebuild=false (incremental). |
| query | Run predefined queries: callers_of, callees_of, imports_of, importers_of, children_of, tests_for, inheritors_of, file_summary. |
| search | Search code entities by name/keyword or semantic similarity. |
| impact | Blast radius of changed files. Auto-detects from git diff. Paginated with max_results. |
| review | Token-optimized review context with structural summary, source snippets, and review guidance. |
| embed | Compute vector embeddings for semantic search. Dual-mode: local ONNX or cloud LiteLLM. |
| stats | Graph size, languages, node/edge breakdown, embedding count. |
| large_functions | Find functions/classes exceeding a line-count threshold. |
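
As a rough sketch of how an agent drives this, an MCP client invokes the tool through a standard tools/call request. Only the arguments documented above (action, max_results) are used here; ask the help tool for the full argument schema.

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "graph",
    "arguments": {
      "action": "impact",
      "max_results": 25
    }
  }
}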

config -- Server configuration

Actions: status | set | cache_clear

| Action | Description |
| --- | --- |
| status | Server info: version, graph path, node/edge counts, embedding backend. |
| set | Update runtime settings (e.g., log_level). |
| cache_clear | Remove all computed embeddings. |
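
The same tools/call pattern applies. A status call needs only the action argument; for set, the log_level key below is an assumption based on the example above, so check the help tool for the actual setting names.

{ "name": "config", "arguments": { "action": "status" } }
{ "name": "config", "arguments": { "action": "set", "log_level": "DEBUG" } }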

help -- Full documentation

Topics: graph | config

Returns complete documentation for each tool. Use when the compressed descriptions above are insufficient.
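
For example (the topic argument name is an assumption matching the listed topics):

{ "name": "help", "arguments": { "topic": "graph" } }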


Embedding Backends

| Backend | Config | Size | Description |
| --- | --- | --- | --- |
| local (default) | Nothing needed | ~570 MB (first use) | qwen3-embed ONNX. Zero-config. |
| litellm | API_KEYS or LITELLM_PROXY_URL | 0 MB | Cloud providers via LiteLLM. |
  • Auto-detection: API_KEYS or LITELLM_PROXY_URL set -> LiteLLM. Otherwise -> local ONNX.
  • Override: EMBEDDING_BACKEND=local or EMBEDDING_BACKEND=litellm.
  • Fixed 768-dim storage: Switching backends does NOT invalidate existing vectors.
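
For example, to pin the backend regardless of which API keys happen to be set, add an env block to any of the install snippets above (shown here for the generic mcpServers format):

{
  "mcpServers": {
    "better-code-review-graph": {
      "command": "uvx",
      "args": ["--python", "3.13", "better-code-review-graph", "serve"],
      "env": { "EMBEDDING_BACKEND": "local" }
    }
  }
}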

Configuration

| Variable | Default | Description |
| --- | --- | --- |
| EMBEDDING_BACKEND | (auto-detect) | local or litellm |
| EMBEDDING_MODEL | gemini/gemini-embedding-001 | LiteLLM model (when backend=litellm) |
| API_KEYS | - | LLM API keys (format: ENV_VAR:key,...). Enables LiteLLM. |
| LITELLM_PROXY_URL | - | LiteLLM Proxy URL. Enables LiteLLM via proxy. |
| LITELLM_PROXY_KEY | - | LiteLLM Proxy virtual key. |
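
Putting it together, a cloud-embedding setup might look like the sketch below. The GEMINI_API_KEY name inside API_KEYS is only an illustration of the ENV_VAR:key format; use whichever provider key your LiteLLM model needs.

{
  "mcpServers": {
    "better-code-review-graph": {
      "command": "uvx",
      "args": ["--python", "3.13", "better-code-review-graph", "serve"],
      "env": {
        "EMBEDDING_BACKEND": "litellm",
        "EMBEDDING_MODEL": "gemini/gemini-embedding-001",
        "API_KEYS": "GEMINI_API_KEY:your-key-here"
      }
    }
  }
}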

Ignore files

Create .code-review-graphignore in your project root:

generated/**
*.generated.ts
vendor/**
node_modules/**

Supported Languages

Python, TypeScript, JavaScript, Go, Rust, Java, C#, Ruby, Kotlin, Swift, PHP, C/C++


Upstream PRs

All fixes have been submitted upstream to code-review-graph:

  • #37 -- Multi-word search AND logic
  • #38 -- Parser call target resolution (fixes #20)
  • #39 -- Impact radius output pagination

If all upstream PRs are merged, this repository will be archived.


Build from Source

git clone https://github.com/n24q02m/better-code-review-graph
cd better-code-review-graph
uv sync --group dev
uv run pytest
uv run better-code-review-graph serve

Requirements: Python 3.13, uv


Compatible With

Claude Desktop Claude Code Cursor VS Code Copilot Antigravity Gemini CLI OpenAI Codex OpenCode

Also by n24q02m

| Server | Description | Install |
| --- | --- | --- |
| wet-mcp | Web search, content extraction, library docs | uvx --python 3.13 wet-mcp@latest |
| mnemo-mcp | Persistent AI memory with hybrid search | uvx mnemo-mcp@latest |
| better-notion-mcp | Notion API for AI agents | npx -y @n24q02m/better-notion-mcp@latest |
| better-email-mcp | Email (IMAP/SMTP) for AI agents | npx -y @n24q02m/better-email-mcp@latest |
| better-godot-mcp | Godot Engine for AI agents | npx -y @n24q02m/better-godot-mcp@latest |
| better-telegram-mcp | Telegram Bot API + MTProto for AI agents | uvx --python 3.13 better-telegram-mcp@latest |

License

MIT - See LICENSE
