LightRAG MCP tool — README
Summary
- Tool name: query_knowledge_base (registered as query_knowledge_base in lightrag_mcp.py)
  - If you prefer the tool be named lightrag_query, change the decorator to @mcp.tool("lightrag_query") in lightrag_mcp.py.
- Description: Query the internal LightRAG knowledge base for technical documentation. Returns raw text or HTML from the LightRAG server.
- Input schema: QueryInput { query: str, mode: str = 'hybrid' }
- Output schema: QueryOutput { result: str }
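The schemas above can be sketched as Pydantic models. This is a minimal sketch based only on the fields listed in this README; the exact class definitions in lightrag_mcp.py may differ:

```python
from pydantic import BaseModel

class QueryInput(BaseModel):
    """Payload for query_knowledge_base."""
    query: str
    mode: str = "hybrid"  # search mode; valid values depend on the LightRAG server

class QueryOutput(BaseModel):
    """Raw text or HTML returned by the LightRAG server."""
    result: str
```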
Quick start (local)
- Create and activate a virtualenv, then install dependencies:
python -m pip install -r requirements.txt
(Or install packages individually: pip install mcp httpx pydantic python-dotenv)
- Configure environment variables:
- Copy the example env file and edit as needed:
cp .env.example .env
- Set a system prompt file (optional):
export LIGHTRAG_SYSTEM_PROMPT_FILE="$(pwd)/prompts/light_rag_system.txt"
- Run the MCP server:
python lightrag_mcp.py
Notes:
- Default FastMCP port is 8000. To change the port, set it in code before
mcp.run():
mcp.settings.port = 9680
mcp.run()
- FastMCP exposes streamable HTTP at /mcp (default) and may expose SSE endpoints depending on transport.
Example: call the tool from Python (local test)
from importlib import util
import asyncio

# Load lightrag_mcp.py as a module without installing it
spec = util.spec_from_file_location('mod', './lightrag_mcp.py')
mod = util.module_from_spec(spec)
spec.loader.exec_module(mod)

async def demo():
    # Wrap the payload under 'input' when calling via mcp.call_tool
    resp = await mod.mcp.call_tool('query_knowledge_base', {'input': {'query': 'How does the TMS module work?', 'mode': 'hybrid'}})
    print(resp)

asyncio.run(demo())
The call returns a QueryOutput-like structure; depending on transport the response may be a JSON string or an object containing response/result fields.
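Because the response shape varies by transport, a small normalizer can help. This is a best-effort sketch; the helper name extract_result and the handled field names are assumptions, not part of the MCP SDK:

```python
import json
from typing import Any

def extract_result(resp: Any) -> str:
    """Normalize a tool response that may arrive as a JSON string
    or as a dict carrying a 'result' or 'response' field."""
    if isinstance(resp, str):
        try:
            resp = json.loads(resp)
        except json.JSONDecodeError:
            return resp  # already plain text
    if isinstance(resp, dict):
        for key in ("result", "response"):
            if key in resp:
                return str(resp[key])
    return str(resp)
```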
Test connectivity to LightRAG server (direct)
If you need to verify the upstream LightRAG service the MCP calls, you can test it directly (example):
curl -X POST "http://localhost:9621/query" \
-H "Content-Type: application/json" \
-d '{"query":"Who owns the TMS module?","mode":"hybrid"}'
This helps verify the configured LIGHTRAG_URL is reachable and responding.
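The same check can be scripted with only the standard library. This is a sketch; the function name check_lightrag is hypothetical, and it assumes the /query endpoint accepts the JSON body shown in the curl example:

```python
import json
import urllib.request
import urllib.error

def check_lightrag(url: str = "http://localhost:9621") -> bool:
    """POST a minimal query to the LightRAG /query endpoint and report reachability."""
    payload = json.dumps({"query": "Who owns the TMS module?", "mode": "hybrid"}).encode()
    req = urllib.request.Request(
        f"{url}/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False  # server unreachable or refused the connection
```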
Configuration and environment variables
The server looks for these values in order:
- LIGHTRAG_SYSTEM_PROMPT — inline prompt (env var)
- LIGHTRAG_SYSTEM_PROMPT_FILE — path to a file containing the prompt
- prompts/light_rag_system.txt — project default
Recommended .env entries (see .env.example):
LIGHTRAG_SYSTEM_PROMPT_FILE=./prompts/light_rag_system.txt
LIGHTRAG_URL=http://localhost:9621
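The lookup order above can be sketched as follows. This is a sketch of the precedence only; the helper name load_system_prompt is hypothetical and the actual resolution logic in lightrag_mcp.py may differ:

```python
import os
from pathlib import Path
from typing import Optional

def load_system_prompt(default_path: str = "prompts/light_rag_system.txt") -> Optional[str]:
    """Resolve the system prompt: inline env var, then file env var, then project default."""
    inline = os.environ.get("LIGHTRAG_SYSTEM_PROMPT")
    if inline:
        return inline
    file_path = os.environ.get("LIGHTRAG_SYSTEM_PROMPT_FILE")
    if file_path and Path(file_path).is_file():
        return Path(file_path).read_text(encoding="utf-8")
    if Path(default_path).is_file():
        return Path(default_path).read_text(encoding="utf-8")
    return None
```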
Security note: Avoid committing sensitive prompts to the repository. Use a secrets manager for production.
Transport security and allowed hosts example
If the client (e.g., remote Claude) will connect to your MCP server, restrict allowed hosts in FastMCP settings. Example (add in lightrag_mcp.py before mcp.run()):
# Restrict which remote hosts may call the MCP server (example)
mcp.settings.transport_security.allowed_hosts = ["agents.example.com", "claude.ai"]
Adjust based on your deployment and transport configuration.
Registering the tool with clients
Claude Code (VSCode extension / local development)
- Ensure the MCP server is running locally (e.g., http://localhost:8000).
- In the Claude Code extension (or your local client), add or point to the server origin and streamable path (usually /mcp).
- The extension should detect registered tools; verify query_knowledge_base appears in mcp.list_tools().
Remote Claude clients (claude.ai)
- Expose your local server using a secure tunnel (for testing) or deploy it to a publicly reachable HTTPS endpoint.
- Example (temporary): ngrok http 8000 → use the provided HTTPS URL
- Configure transport settings and allowed hosts on the MCP server (see transport security example).
- Provide the server URL to claude.ai or the remote client in its external tools configuration and confirm the transport type (streamable HTTP vs SSE).
Troubleshooting
- Tool not visible in client:
  - Confirm the MCP server is running and reachable.
  - Verify query_knowledge_base appears in mcp.list_tools().
  - Ensure client and server use the same transport (streamable HTTP or SSE).
- Validation errors:
  - The client must send a payload matching the Pydantic input schema. When calling via the SDK or mcp.call_tool, wrap the payload under 'input'.
- Port already in use:
  - Set mcp.settings.port to a free port before calling mcp.run().
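When the default port is taken, a free one can be found programmatically before starting the server. This is a sketch; the helper name find_free_port is hypothetical:

```python
import socket

def find_free_port(start: int = 8000, limit: int = 50) -> int:
    """Scan upward from `start` for a TCP port that can be bound locally."""
    for port in range(start, start + limit):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(("127.0.0.1", port))
                return port  # bind succeeded, so the port is currently free
            except OSError:
                continue
    raise RuntimeError(f"no free port in range {start}-{start + limit - 1}")
```

The returned value could then be assigned to mcp.settings.port before calling mcp.run().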
Development notes
- lightrag_mcp.py registers the tool as query_knowledge_base. To rename the tool to lightrag_query, either:
  - Change the decorator to @mcp.tool("lightrag_query"), or
  - Update the README to reference query_knowledge_base (current state).
- A requirements.txt file is included with the minimal runtime dependencies. Pin versions if you need reproducible installs.
Contributing
PRs are welcome. Please avoid committing secrets or sensitive prompts. For changes that alter tool names or schemas, update README and tests accordingly.
License
MIT (or replace with your preferred license)