OmniDocs MCP

Python 3.10+ · Local RAG · FastMCP

OmniDocs MCP is an intelligent Model Context Protocol (MCP) server that empowers AI agents to instantly search, index, summarize, and inject live framework documentation directly into their context window.

Stop hallucinating code for outdated framework versions. Let OmniDocs fetch the exact documentation your AI needs, the moment it needs it.

```text
[AI Agent] Calling get_library_docs("react", "useActionState usage")
[OmniDocs] 🔍 Library 'react' not indexed. Crawling react.dev...
[OmniDocs] 🧠 Chunking 150 pages and computing embeddings (ONNX)...
[OmniDocs] ⚡ Returning top 5 semantic chunks (Dense + BM25)
[AI Agent] Receives 1,500 highly-relevant tokens. Writes perfect code.
```

🤔 Why OmniDocs? (The Comparison)

| Approach | The Problem | The OmniDocs Solution |
| --- | --- | --- |
| Context Stuffing (Full URLs) | Destroys token limits (50k+ tokens/page); high latency; high API costs. | Semantically retrieves only the relevant 512-token chunks to save context. |
| Web Search Tools (Tavily/Exa) | Returns SEO fluff, outdated blog posts, and Stack Overflow threads. | Exclusively targets official, canonical framework documentation. |
| Cloud RAG / Vector APIs | Requires expensive API subscriptions and sends queries to third parties. | 100% local embedding (ONNX + ChromaDB). Zero API keys, completely free. |
| LLM Internal Knowledge | Hallucinates deprecated APIs (e.g., React 17 vs. 19, or the Next.js App Router). | Guarantees up-to-date syntax taken directly from the live documentation. |

✨ Core Features

  • Deep HTML Crawling: Employs an Indexer & Sniper architecture to map deep documentation sites via XML sitemaps or pure HTML crawling, returning dense tables of contents for agents to navigate.
  • Local RAG & Semantic Search: Embeds documentation locally using ONNX (via fastembed) and chunks it semantically. Exposes a natural language query interface so agents receive precise, high-density excerpts instead of full massive pages.
  • Local Manifest Auto-Discovery: Point OmniDocs at any package.json or requirements.txt. It will seamlessly communicate with the NPM/PyPI registries to auto-discover library documentation URLs and register them in its tracking file.
  • Persistent Disk Caching: Prevents redundant scraping and wasted LLM tokens by storing fetched Markdown via diskcache, with user-configurable, granular TTLs (time-to-live); a minimal caching sketch follows this list.
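
The caching behavior can be pictured in a few lines of diskcache. This is a minimal sketch, not the project's actual code; the cache directory name and the requests-based fetch are assumptions:

```python
import requests
from diskcache import Cache

cache = Cache(".omnidocs_cache")  # assumed directory; the real location is configurable

def cached_fetch(url: str, ttl_hours: int = 24) -> str:
    """Return a page body, hitting the network at most once per TTL window."""
    body = cache.get(url)
    if body is None:
        body = requests.get(url, timeout=30).text
        cache.set(url, body, expire=ttl_hours * 3600)  # diskcache expiry is in seconds
    return body
```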

🏗 Architecture & How it Works

OmniDocs operates as a middleware server between an AI Agent and official documentation websites. Instead of the AI browsing the web blindly, it uses OmniDocs to precisely retrieve, parse, chunk, embed, and cache documentation locally.
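
As a rough illustration of the retrieve-and-parse stage, here is a minimal sketch built on BeautifulSoup and markdownify (the libraries named in the module list below); the tag list and main-content heuristic are assumptions, not the project's actual selectors:

```python
import requests
from bs4 import BeautifulSoup
from markdownify import markdownify as md

def page_to_markdown(url: str) -> str:
    """Fetch a docs page, strip navigation chrome, and convert the rest to Markdown."""
    soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
    for tag in soup.find_all(["nav", "header", "footer", "aside", "script", "style"]):
        tag.decompose()  # remove boilerplate elements in place
    main = soup.find("main") or soup.body or soup  # prefer the main content region
    return md(str(main))
```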

Core Modules

  1. Server CLI (server.py): The main entry point. Exposes get_library_docs which agents use to ask natural language questions.
  2. Fetcher (fetcher.py): Handles outbound HTTP requests, crawling sitemaps and raw HTML. Uses BeautifulSoup to strip away navbars and footers, and markdownify to convert the remaining content to clean Markdown.
  3. Chunker (chunker.py): Splits massive Markdown pages into smaller, semantically coherent 512-token chunks, keeping Markdown headers intact so context isn't lost (a minimal chunking sketch follows this list).
  4. Vector Store (vector_store.py): Embeds chunks locally using the fastembed ONNX model and stores them persistently in ChromaDB. Uses a hybrid retrieval method (Dense Vector Search + BM25 keyword re-ranking) for maximum accuracy on exact API names.
  5. Cache Layer (cache.py): Uses diskcache to store the raw downloaded Markdown on the local hard drive to prevent redundant network requests.
  6. Auto-Discovery (discovery.py): Parses local package.json or requirements.txt files to auto-register libraries.
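
To make the chunker (module 3) concrete, here is a minimal header-aware splitter. It is a sketch under simplifying assumptions: roughly 4 characters per token instead of a real tokenizer, and paragraph-level packing:

```python
import re

def chunk_markdown(text: str, max_tokens: int = 512) -> list[str]:
    """Split Markdown into ~512-token chunks, re-attaching each section's header."""
    max_chars = max_tokens * 4  # crude heuristic: ~4 characters per token
    # Split immediately before every Markdown header, keeping the header line.
    sections = [s for s in re.split(r"(?m)^(?=#{1,6} )", text) if s.strip()]
    chunks: list[str] = []
    for section in sections:
        header, _, body = section.partition("\n")
        current = header
        for para in body.split("\n\n"):
            if len(current) + len(para) > max_chars and current != header:
                chunks.append(current)
                current = header  # repeat the header so the context isn't lost
            current += "\n\n" + para
        chunks.append(current)
    return chunks
```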

🔄 Retrieval Workflow

When an AI encounters a library it doesn't know, it just issues a natural language query, and the following flow occurs:

```mermaid
sequenceDiagram
    participant AI as AI Agent
    participant MCP as OmniDocs
    participant VectorDB as ChromaDB
    participant Web as fetcher.py

    AI->>MCP: Call `get_library_docs("react", "useActionState usage")`
    MCP->>VectorDB: Check if 'react' is indexed

    alt Not Indexed
        MCP->>Web: Crawl entire doc site & convert to Markdown
        Web-->>MCP: Return Markdown pages
        MCP->>MCP: Chunk pages & compute local embeddings
        MCP->>VectorDB: Store chunks & vectors
    end

    MCP->>VectorDB: Perform hybrid search (Dense + BM25) for query
    VectorDB-->>MCP: Top 5 semantic chunks
    MCP-->>AI: Return pure, precise Markdown context
```
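
The hybrid-search step in the diagram can be approximated in a few lines. This sketch pairs the libraries the README names (fastembed, ChromaDB) with rank_bm25 for the keyword pass; the exact combination strategy (over-fetch dense candidates, then re-score with BM25) is an assumption:

```python
import chromadb
from fastembed import TextEmbedding
from rank_bm25 import BM25Okapi

embedder = TextEmbedding()  # default ONNX model; runs on CPU
client = chromadb.PersistentClient(path=".omnidocs_chroma")
collection = client.get_or_create_collection("react")  # assumed to be indexed already

def hybrid_search(query: str, k: int = 5) -> list[str]:
    """Dense vector recall via ChromaDB, re-ranked with BM25 keyword scores."""
    query_vec = next(iter(embedder.embed([query]))).tolist()
    # Stage 1: over-fetch dense candidates from the vector store.
    docs = collection.query(query_embeddings=[query_vec], n_results=k * 4)["documents"][0]
    # Stage 2: BM25 re-ranking rewards exact matches on API names.
    bm25 = BM25Okapi([doc.split() for doc in docs])
    scores = bm25.get_scores(query.split())
    ranked = sorted(zip(scores, docs), key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in ranked[:k]]
```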

🚀 Quick Start

Prerequisites

  • Python 3.10+ (Required for FastMCP and ChromaDB)
  • OS: Windows, macOS, or Linux
  • Hardware: Runs entirely on CPU. No GPU required (FastEmbed uses lightweight ONNX models).

  1. Clone & Install

    ```bash
    git clone https://github.com/your-username/omnidocs-mcp.git
    cd omnidocs-mcp
    pip install -r requirements.txt
    ```
    
  2. Seed your Libraries. OmniDocs stores your tracked libraries in libraries.yaml. You can auto-fill this file by running the server tool auto_import_from_manifest against your project (a registry-lookup sketch follows the example below), or manually add entries to customize tracking:

    ```yaml
    libraries:
      react:
        docs_url: https://react.dev/learn
        ttl_hours: 24
      fastapi:
        docs_url: https://fastapi.tiangolo.com
        ttl_hours: 48
      tailwindcss:
        docs_url: https://tailwindcss.com/docs
        ttl_hours: 72
    ```
    
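The auto-discovery flow mentioned in step 2 boils down to two public registry lookups. A minimal sketch, assuming the standard npm and PyPI JSON endpoints and the metadata fields they commonly expose (homepage, project_urls):

```python
import json
import requests

def npm_docs_url(package: str) -> str | None:
    """Look up a package's homepage on the public npm registry."""
    meta = requests.get(f"https://registry.npmjs.org/{package}", timeout=15).json()
    return meta.get("homepage")

def pypi_docs_url(package: str) -> str | None:
    """Look up a project's documentation or homepage URL on PyPI."""
    info = requests.get(f"https://pypi.org/pypi/{package}/json", timeout=15).json()["info"]
    return (info.get("project_urls") or {}).get("Documentation") or info.get("home_page")

def discover_from_package_json(path: str) -> dict[str, str]:
    """Map each dependency in package.json to a candidate docs URL."""
    with open(path, encoding="utf-8") as f:
        deps = json.load(f).get("dependencies", {})
    return {name: url for name in deps if (url := npm_docs_url(name)) is not None}
```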

🔌 Connecting to AI Agents (Antigravity, Roo/Cline, Claude Desktop)

To connect OmniDocs to your MCP-compatible client, add this configuration block to your client's MCP settings file (e.g., %APPDATA%\Code\User\globalStorage\rooveterinaryinc.roo-cline\settings\cline_mcp_settings.json):

```json
{
  "mcpServers": {
    "omnidocs-mcp": {
      "command": "C:/absolute/path/to/omnidocs-mcp/venv/Scripts/python.exe",
      "args": [
        "C:/absolute/path/to/omnidocs-mcp/server.py"
      ],
      "env": {}
    }
  }
}
```

Note: Ensure the command field points to the Python executable inside the virtual environment where you installed the requirements.txt dependencies.

🛠 Available Tools

Once connected, your AI gains the following native tools:

  • get_library_docs(library, query): The primary tool. Performs hybrid semantic retrieval (dense vectors plus BM25 ranking) over the library's documentation to answer specific questions, automatically crawling the docs first if the library is not yet indexed (a minimal registration sketch follows this list).
  • get_changelog(library): Fetches recent release notes so the AI knows about breaking changes.
  • auto_import_from_manifest(manifest_path): Analyzes your package.json to self-populate the OmniDocs library registry.
  • list_tracked_libraries(): Shows what the server is currently tracking.
  • refresh_all_docs(): Invalidates the entire cache and re-fetches live documentation from the web.
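
For a sense of how little glue FastMCP requires, here is a minimal, hypothetical sketch of registering get_library_docs; the import path varies by SDK version, and hybrid_search here is a stub standing in for the retrieval pipeline sketched earlier:

```python
from fastmcp import FastMCP  # exact import path depends on the installed SDK

mcp = FastMCP("omnidocs-mcp")

def hybrid_search(library: str, query: str, k: int = 5) -> list[str]:
    """Stub standing in for the real index lookup sketched earlier."""
    return [f"(chunk {i} of {library!r} docs matching {query!r})" for i in range(k)]

@mcp.tool()
def get_library_docs(library: str, query: str) -> str:
    """Semantically search a library's official docs and return the best excerpts."""
    return "\n\n---\n\n".join(hybrid_search(library, query))

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, matching the client config above
```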
