# mcp-markdown-ragdocs
A Model Context Protocol server that provides semantic search over local Markdown documentation using hybrid retrieval.
## What it is
This is an MCP server that indexes local Markdown files and exposes a `query_documents` tool for retrieval-augmented generation. The server combines semantic search, keyword matching, and graph traversal to retrieve relevant document chunks.
## Why it exists
Technical documentation, personal notes, and project wikis are typically stored as Markdown files. Searching these collections manually or with grep is inefficient. This server provides a conversational interface to query documentation using natural language while automatically keeping the index synchronized with file changes.
Existing RAG solutions require manual database setup, explicit indexing steps, and ongoing maintenance. This server eliminates that friction with automatic file watching, zero-configuration defaults, and built-in index versioning.
## Features
- Hybrid search combining semantic embeddings (FAISS), keyword search (Whoosh), and graph traversal (NetworkX)
- Cross-encoder re-ranking for improved precision (optional, ~50ms latency)
- Query expansion via concept vocabulary for better recall
- Multi-project support: Manage isolated indices for multiple projects on one machine with automatic project detection
- Server-Sent Events (SSE) streaming for real-time response delivery
- CLI query command with rich formatted output
- Automatic file watching with debounced incremental indexing
- Zero-configuration operation with sensible defaults
- Index versioning with automatic rebuild on configuration changes
- Pluggable parser architecture (Markdown with tree-sitter)
- Rich Markdown parsing: frontmatter, wikilinks, tags, transclusions
- Reciprocal Rank Fusion for multi-strategy result merging
- Recency bias for recently modified documents
- Local-first architecture with no external services required
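The Reciprocal Rank Fusion step mentioned above follows the standard RRF formula, `score(d) = Σ 1/(k + rank(d))` summed over each retrieval strategy's ranked list. The sketch below is illustrative only (function and document names are invented here, not taken from the server's code):

```python
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Merge several ranked result lists into one, best first,
    using Reciprocal Rank Fusion: score(d) = sum of 1/(k + rank)."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical ranked lists from the three strategies:
semantic = ["auth.md", "security.md", "setup.md"]
keyword = ["security.md", "auth.md"]
graph = ["auth.md", "wiki.md"]

print(rrf_fuse([semantic, keyword, graph]))  # "auth.md" ranks first
```

Documents that appear near the top of several lists accumulate the highest fused score, which is why `k` (the `rrf_k_constant` config option) damps the influence of any single ranker.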
## Installation

Requires Python 3.13+.

```sh
git clone https://github.com/yourusername/mcp-markdown-ragdocs.git
cd mcp-markdown-ragdocs
uv sync
```
## Quick Start
### For VS Code / MCP Clients (Recommended)
Start the stdio-based MCP server for use with VS Code or other MCP clients:

```sh
uv run mcp-markdown-ragdocs mcp
```

The server will:

- Scan for `*.md` files in the current directory
- Build vector, keyword, and graph indices
- Start file watching for automatic updates
- Expose the `query_documents` tool via stdio transport
See MCP Integration below for VS Code configuration.
### For HTTP API / Development
Start the HTTP server on the default port 8000:

```sh
uv run mcp-markdown-ragdocs run
```

The server will:

- Index documents (same as the `mcp` command)
- Expose an HTTP API at `http://127.0.0.1:8000`
- Provide REST endpoints for queries

See API Endpoints below for HTTP usage.
## Basic Usage

### Configuration

Create `.mcp-markdown-ragdocs/config.toml` in your project directory or at `~/.config/mcp-markdown-ragdocs/config.toml`:
```toml
[server]
host = "127.0.0.1"
port = 8000

[indexing]
documents_path = "~/Documents/Notes"  # Path to your Markdown files
index_path = ".index_data/"           # Where to store indices

[search]
semantic_weight = 1.0   # Weight for semantic search results
keyword_weight = 1.0    # Weight for keyword search results
recency_bias = 0.5      # Boost for recently modified documents
rrf_k_constant = 60     # Reciprocal Rank Fusion constant
min_confidence = 0.0    # Score threshold (0.0 = disabled)
max_chunks_per_doc = 0  # Per-document limit (0 = disabled)
dedup_enabled = false   # Semantic deduplication
```
The server searches for configuration files in this order:

1. `.mcp-markdown-ragdocs/config.toml` in the current directory
2. `.mcp-markdown-ragdocs/config.toml` in parent directories (walking up to the root)
3. `~/.config/mcp-markdown-ragdocs/config.toml` (global fallback)

This supports monorepo workflows where you can place a shared configuration in the repository root.
If no configuration file exists, the server uses these defaults:

- Documents path: `.` (current directory)
- Server: `127.0.0.1:8000`
- Index storage: `.index_data/`
## CLI Commands
### Start MCP Server (stdio)

```sh
uv run mcp-markdown-ragdocs mcp
```

Starts the stdio-based MCP server for VS Code and compatible MCP clients. Runs persistently until stopped.
### Start HTTP Server

```sh
uv run mcp-markdown-ragdocs run
```

Starts the HTTP API server on port 8000 (default). Override with:

```sh
uv run mcp-markdown-ragdocs run --host 0.0.0.0 --port 8080
```
### Query Documents (CLI)

Query documents directly from the command line:

```sh
uv run mcp-markdown-ragdocs query "How do I configure authentication?"
```

With options:

```sh
# JSON output for scripting
uv run mcp-markdown-ragdocs query "authentication" --json

# Limit number of results
uv run mcp-markdown-ragdocs query "authentication" --top-n 3

# Specify project context
uv run mcp-markdown-ragdocs query "authentication" --project my-project
```
### Configuration Management

Check your configuration:

```sh
uv run mcp-markdown-ragdocs check-config
```

Force a full index rebuild:

```sh
uv run mcp-markdown-ragdocs rebuild-index
```
| Command | Purpose | Use When |
|---|---|---|
| `mcp` | Stdio MCP server | Integrating with VS Code or MCP clients |
| `run` | HTTP API server | Development, testing, or HTTP-based integrations |
| `query` | CLI query | Scripting or quick document searches |
| `check-config` | Validate config | Debugging configuration issues |
| `rebuild-index` | Force reindex | Config changes or corrupted indices |
## MCP Integration
### VS Code Configuration

Configure the MCP server in VS Code user settings or workspace settings.

File: `.vscode/settings.json` or `~/.config/Code/User/mcp.json`
```json
{
  "mcpServers": {
    "markdown-docs": {
      "command": "uv",
      "args": [
        "--directory",
        "/absolute/path/to/mcp-markdown-ragdocs",
        "run",
        "mcp-markdown-ragdocs",
        "mcp"
      ],
      "type": "stdio"
    }
  }
}
```
With project override:
```json
{
  "mcpServers": {
    "markdown-docs": {
      "command": "uv",
      "args": [
        "--directory",
        "/absolute/path/to/mcp-markdown-ragdocs",
        "run",
        "mcp-markdown-ragdocs",
        "mcp",
        "--project",
        "my-project"
      ],
      "type": "stdio"
    }
  }
}
```
### Claude Desktop Configuration

File: `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS)
```json
{
  "mcpServers": {
    "markdown-docs": {
      "command": "uv",
      "args": [
        "--directory",
        "/absolute/path/to/mcp-markdown-ragdocs",
        "run",
        "mcp-markdown-ragdocs",
        "mcp"
      ]
    }
  }
}
```
### Available Tools

The server exposes one MCP tool:

`query_documents(query: string, top_n?: int)`: Search indexed documents using hybrid search and return a synthesized answer with source documents.

Parameters:

- `query` (required): Natural language query or question
- `top_n` (optional): Maximum results to return (1-100, default: 5)
Example query from an MCP client:

```json
{
  "query": "How do I configure authentication in the API?",
  "top_n": 5
}
```
The server returns a synthesized answer with source document citations.
## API Endpoints
Health check:

```sh
curl http://127.0.0.1:8000/health
```

Server status (document count, queue size, failed files):

```sh
curl http://127.0.0.1:8000/status
```

Query endpoint (standard):

```sh
curl -X POST http://127.0.0.1:8000/query_documents \
  -H "Content-Type: application/json" \
  -d '{"query": "authentication configuration"}'
```

Query endpoint (streaming SSE):

```sh
curl -X POST http://127.0.0.1:8000/query_documents_stream \
  -H "Content-Type: application/json" \
  -d '{"query": "authentication configuration", "top_n": 3}' \
  -N
```
The streaming endpoint returns Server-Sent Events:

```
event: search_complete
data: {"count": 3}

event: token
data: {"token": "Authentication"}

event: token
data: {"token": " is"}

event: done
data: {"results": [{"content": "...", "file_path": "auth.md", "header_path": ["Configuration"], "score": 1.0}]}
```
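A client consumes this stream by splitting on `event:` / `data:` lines. A minimal parser for frames shaped like the ones above (the event names `search_complete`, `token`, and `done` come from that example) might look like:

```python
import json


def parse_sse(stream_text: str) -> list[tuple[str, dict]]:
    """Split an SSE body into (event_name, decoded_data) pairs."""
    events = []
    name = None
    for line in stream_text.splitlines():
        if line.startswith("event:"):
            name = line[len("event:"):].strip()
        elif line.startswith("data:") and name is not None:
            events.append((name, json.loads(line[len("data:"):].strip())))
            name = None  # each frame carries one event/data pair
    return events


body = 'event: token\ndata: {"token": "Authentication"}\n\n'
print(parse_sse(body))  # [('token', {'token': 'Authentication'})]
```

In practice you would concatenate the `token` payloads to render the answer incrementally and read the sources from the final `done` event.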
Example response (standard endpoint):

```json
{
  "answer": "Authentication is configured via the auth.toml file...",
  "results": [
    {
      "content": "Authentication is configured in the auth section...",
      "file_path": "docs/authentication.md",
      "header_path": ["Configuration", "Authentication"],
      "score": 1.0
    },
    {
      "content": "Security settings include authentication tokens...",
      "file_path": "docs/security.md",
      "header_path": ["Security", "API Keys"],
      "score": 0.85
    }
  ]
}
```
Each result contains:

- `content`: The text from the matching document chunk
- `file_path`: Source file path relative to the documents directory
- `header_path`: Document structure showing nested headers (semantic "breadcrumbs")
- `score`: Normalized similarity score in [0, 1], where 1.0 is the best match
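The standard endpoint can be called from Python as well as curl. This is a hedged sketch using only the request shape documented above; it assumes a server running on the default host and port:

```python
import json
import urllib.request


def build_payload(query: str, top_n: int = 5) -> bytes:
    """Encode the JSON body expected by /query_documents."""
    return json.dumps({"query": query, "top_n": top_n}).encode()


def query_documents(query: str, top_n: int = 5,
                    base_url: str = "http://127.0.0.1:8000") -> dict:
    """POST a query and return the decoded JSON response."""
    req = urllib.request.Request(
        f"{base_url}/query_documents",
        data=build_payload(query, top_n),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage (with the server running):
#   answer = query_documents("authentication configuration", top_n=3)
#   for r in answer["results"]:
#       print(f"{r['score']:.2f}  {r['file_path']}")
```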
## Configuration Details

See `docs/configuration.md` for an exhaustive configuration reference, including all TOML options, defaults, and environment variable support.
## Documentation
- Architecture - System design, component overview, data flow
- Configuration - Complete configuration reference
- Hybrid Search - Search strategies and RRF fusion algorithm
- Integration - VS Code MCP setup and client integration
- Development - Development setup, testing, contributing
## License
MIT