
🚀 Document Q&A MCP Server

A Python-based Model Context Protocol (MCP) server that provides document-based question answering using OpenAI's API. Upload documents, ask questions, and get answers grounded strictly in document content, with safeguards against hallucinated responses.

Python 3.8+ | License: MIT | OpenAI

🌟 Live Demo

Web Interface: Start the server and visit http://localhost:8000

(Screenshot: Document Q&A Demo)

⚡ Quick Start

# 1. Install dependencies
pip install -r requirements.txt

# 2. Set your OpenAI API key
export OPENAI_API_KEY="your-api-key-here"

# 3. Start the web server
python web_server.py

# 4. Open http://localhost:8000 in your browser
# 5. Upload a document and start asking questions!

🎯 Features

  • 📤 Web File Upload: Drag & drop PDF, TXT, Markdown files
  • 🤖 Smart Q&A: GPT-4 powered answers based strictly on your documents
  • 🔍 Semantic Search: OpenAI embeddings with cosine similarity
  • 🚫 Zero Hallucinations: Only answers from document content
  • 📊 Real-time Dashboard: Live status, confidence scores, source attribution
  • 🏗️ MCP Compliant: Standard protocol for AI integration
  • ⚡ Production Ready: Error handling, logging, async support

๐Ÿ›๏ธ Architecture

  • Multi-format Support: PDF, TXT, and Markdown files
  • Intelligent Chunking: Semantic document splitting with overlap
  • Vector Search: OpenAI embeddings with cosine similarity
  • Hallucination Prevention: Strict adherence to document content
  • MCP Compliant: Standard protocol endpoints
  • Production Ready: Clean architecture with error handling


┌─────────────────┐    HTTP/Upload    ┌─────────────────┐   MCP Protocol    ┌─────────────────┐
│   Web Browser   │ ◄───────────────► │   Web Server    │ ◄───────────────► │  Document Q&A   │
│                 │                   │                 │                   │   MCP Server    │
│  • File Upload  │                   │  • File Handling│                   │                 │
│  • Q&A Interface│                   │ • HTTP Endpoints│                   │ ┌─────────────┐ │
│  • Results      │                   │  • JSON API     │                   │ │  Document   │ │
└─────────────────┘                   └─────────────────┘                   │ │   Loader    │ │
                                                                            │ └─────────────┘ │
                                                                            │ ┌─────────────┐ │
                                                                            │ │   Chunker   │ │
                                                                            │ └─────────────┘ │
                                                                            │ ┌─────────────┐ │
                                                                            │ │  Embedding  │ │
                                                                            │ │    Store    │ │
                                                                            │ └─────────────┘ │
                                                                            │ ┌─────────────┐ │
                                                                            │ │    Query    │ │
                                                                            │ │   Handler   │ │
                                                                            │ └─────────────┘ │
                                                                            └─────────────────┘

The server consists of five main components:

  1. DocumentLoader: Handles PDF, TXT, and Markdown file parsing
  2. DocumentChunker: Intelligently splits documents into semantic chunks
  3. EmbeddingStore: Manages vector embeddings for similarity search
  4. QueryHandler: Processes questions and generates context-aware answers
  5. MCPServer: Exposes MCP-compliant endpoints
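
As an illustration, the flow through these components can be sketched with toy stand-ins. Everything below is a placeholder: the real EmbeddingStore uses OpenAI embeddings and the real QueryHandler sends retrieved chunks to GPT-4, whereas this sketch ranks chunks by simple word overlap.

```python
# Toy sketch of the component flow above -- illustrative only.
# The real classes live in document_qa_server.py; word overlap
# stands in for embeddings and for the GPT-4 answering step.

class DocumentChunker:
    def split(self, text, size=50):
        # Fixed-size character chunks; the real chunker is semantic.
        return [text[i:i + size] for i in range(0, len(text), size)]

class EmbeddingStore:
    def __init__(self):
        self.chunks = []

    def add(self, chunks):
        self.chunks.extend(chunks)

    def search_similar(self, question, top_k=1):
        # Rank stored chunks by how many words they share with the question.
        q = set(question.lower().split())
        ranked = sorted(self.chunks,
                        key=lambda c: len(q & set(c.lower().split())),
                        reverse=True)
        return ranked[:top_k]

class QueryHandler:
    def __init__(self, store):
        self.store = store

    def answer_question(self, question):
        hits = self.store.search_similar(question)
        return hits[0] if hits else "The document does not contain this information"
```

A DocumentLoader would feed raw file text into DocumentChunker.split(), and the MCPServer layer would wrap answer_question() in the endpoints described below.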

🚀 Usage Options

Option 1: Web Interface (Recommended)

python web_server.py
# Visit http://localhost:8000

Option 2: Interactive CLI

python interactive_client.py

Option 3: Simple Version (No MCP)

python simple_document_qa.py  
# Visit http://localhost:8001

Option 4: Run Tests

python test_server.py

📱 Web Interface Features

  • 📤 File Upload: Click "Choose File" or drag & drop documents
  • ❓ Question Input: Type questions in the text area
  • 📊 Live Dashboard: Real-time status and document info
  • 🎯 Confidence Scores: See how confident the AI is in each answer
  • 📚 Source Attribution: Know exactly which document parts were used
  • ⚡ Real-time Processing: Instant feedback and results

📡 MCP Endpoints

1. Load Document

Load a document into the system for question answering.

Request:

{
  "method": "load_document",
  "params": {
    "file_path": "/path/to/document.pdf"
  }
}

Response:

{
  "status": "success",
  "message": "Successfully loaded document: /path/to/document.pdf",
  "metadata": {
    "file_path": "/path/to/document.pdf",
    "content_length": 15420,
    "num_chunks": 12,
    "total_chunks_in_store": 12
  }
}

2. Ask Question

Ask a question about loaded documents.

Request:

{
  "method": "ask_question",
  "params": {
    "question": "What are the main features?"
  }
}

Response:

{
  "status": "success",
  "question": "What are the main features?",
  "answer": "Based on the document, the main features include...",
  "sources": [
    {
      "file": "/path/to/document.pdf",
      "chunk_id": "document_0",
      "similarity_score": 0.892
    }
  ],
  "confidence": 0.892
}

3. Get Status

Check server status and loaded documents.

Request:

{
  "method": "get_status",
  "params": {}
}

Response:

{
  "status": "active",
  "loaded_documents": ["/path/to/document.pdf"],
  "total_chunks": 12,
  "supported_formats": [".pdf", ".txt", ".md", ".markdown"]
}
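
These endpoints can be driven from a small Python client. A minimal sketch, assuming the web server accepts the JSON envelopes above via HTTP POST to the root URL (an assumption; check web_server.py for the actual route and transport):

```python
import json
from urllib import request

SERVER_URL = "http://localhost:8000"  # assumed endpoint; verify against web_server.py

def make_request(method, **params):
    """Serialize the {"method": ..., "params": ...} envelope shown above."""
    return json.dumps({"method": method, "params": params}).encode("utf-8")

def call(method, **params):
    """POST an envelope to the server and decode the JSON reply."""
    req = request.Request(SERVER_URL, data=make_request(method, **params),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

# With the server running:
# call("load_document", file_path="/path/to/document.pdf")
# call("ask_question", question="What are the main features?")
# call("get_status")
```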

๐Ÿ“ Project Structure

document-qa-mcp-server/
โ”œโ”€โ”€ ๐Ÿ“„ document_qa_server.py      # Main MCP server implementation
โ”œโ”€โ”€ ๐ŸŒ web_server.py              # Web interface with file upload
โ”œโ”€โ”€ ๐Ÿ–ฅ๏ธ  simple_document_qa.py     # Simplified version (no MCP)
โ”œโ”€โ”€ ๐Ÿ’ฌ interactive_client.py      # CLI interface
โ”œโ”€โ”€ ๐Ÿงช test_server.py             # Test suite
โ”œโ”€โ”€ ๐Ÿ“– example_usage.py           # Usage examples
โ”œโ”€โ”€ ๐Ÿ“‹ requirements.txt           # Dependencies
โ”œโ”€โ”€ ๐Ÿ“š MCP_SERVER_DOCUMENTATION.md # Complete MCP guide
โ”œโ”€โ”€ ๐ŸŽจ web_interface.py           # Static HTML generator
โ””โ”€โ”€ ๐Ÿ“„ README.md                  # This file

🔧 Configuration

Chunking Parameters

Modify chunking behavior in DocumentChunker:

chunker = DocumentChunker(
    chunk_size=1000,  # Target chunk size in characters
    overlap=200       # Overlap between chunks
)
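
To see what these two knobs do, here is a plain character-based splitter (a sketch only; the real DocumentChunker splits on semantic boundaries rather than fixed offsets):

```python
def chunk_text(text, chunk_size=1000, overlap=200):
    """Split text into ~chunk_size-character chunks, with each chunk
    re-reading the last `overlap` characters of the previous one."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for i in range(0, len(text), step):
        chunks.append(text[i:i + chunk_size])
        if i + chunk_size >= len(text):
            break  # last chunk already reached the end of the text
    return chunks
```

With chunk_size=1000 and overlap=200, each chunk starts 800 characters after the previous one, so a sentence cut at a boundary still appears whole in the next chunk.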

Retrieval Parameters

Adjust retrieval in QueryHandler.answer_question():

similar_chunks = await self.embedding_store.search_similar(
    question, 
    top_k=3  # Number of chunks to retrieve
)
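
The similarity score itself is plain cosine similarity over embedding vectors. A dependency-free sketch of the ranking the store performs (function names here are illustrative, not the server's API):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def rank_top_k(question_vec, chunk_vecs, top_k=3):
    """Indices of the top_k chunk vectors most similar to the question vector."""
    scores = [(cosine_similarity(question_vec, v), i)
              for i, v in enumerate(chunk_vecs)]
    return [i for _, i in sorted(scores, reverse=True)[:top_k]]
```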

OpenAI Model Configuration

Change models in the respective methods:

# Embeddings model
model="text-embedding-3-small"

# Chat completion model  
model="gpt-4"

🚨 Error Handling

The server handles common errors gracefully:

  • File not found: Clear error with file path
  • Unsupported format: Lists supported formats
  • API errors: Returns OpenAI error messages
  • No documents loaded: Prompts to load documents first
  • Missing information: Returns "The document does not contain this information"
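
For reference, an error reply reuses the envelope of the success responses above; the field values here are illustrative, so check the server output for exact wording:

```
{
  "status": "error",
  "message": "Unsupported file format: .docx"
}
```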

🔮 Extending for Multiple Documents

The current architecture supports multiple documents. To extend:

  1. Document Management: Add document metadata tracking
  2. Source Filtering: Filter by specific documents
  3. Cross-Document Search: Search across all loaded documents
  4. Document Removal: Add endpoint to remove specific documents

Example extension:

async def remove_document(self, file_path: str) -> Dict[str, Any]:
    """Remove a specific document from the store."""
    self.embedding_store.chunks = [
        chunk for chunk in self.embedding_store.chunks
        if chunk.source_file != file_path
    ]
    # Rebuild embeddings matrix from the remaining chunks here, then:
    return {"status": "success", "message": f"Removed document: {file_path}"}

⚡ Performance Considerations

  • Chunk Size: Larger chunks carry more context per result but make retrieval coarser
  • Overlap: More overlap = better context continuity but more storage
  • Top-K: More retrieved chunks = better context but higher API costs
  • Embedding Model: text-embedding-3-small balances cost and quality

🔒 Security Notes

  • Store API keys securely (environment variables, secrets management)
  • Input validation is implemented for file paths and parameters
  • Consider rate limiting for production deployments
  • Sanitize file paths to prevent directory traversal
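
The last bullet can be implemented with a resolved-path check. A sketch, where UPLOAD_DIR is an assumed location rather than a path taken from this repo:

```python
import os

UPLOAD_DIR = os.path.realpath("uploads")  # assumption: adjust to the real upload dir

def safe_path(filename):
    """Resolve a client-supplied filename inside UPLOAD_DIR and reject
    anything that escapes it (e.g. '../../etc/passwd')."""
    candidate = os.path.realpath(os.path.join(UPLOAD_DIR, filename))
    if os.path.commonpath([candidate, UPLOAD_DIR]) != UPLOAD_DIR:
        raise ValueError("path escapes upload directory: " + filename)
    return candidate
```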

๐Ÿค Contributing

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

📞 Support

For issues and questions:

  1. Check the error messages and logs
  2. Verify OpenAI API key and quota
  3. Ensure document formats are supported
  4. Review the example usage patterns
  5. Open an issue on GitHub

๐Ÿ™ Acknowledgments

Recommended Servers

playwright-mcp

playwright-mcp

A Model Context Protocol server that enables LLMs to interact with web pages through structured accessibility snapshots without requiring vision models or screenshots.

Official
Featured
TypeScript
Magic Component Platform (MCP)

Magic Component Platform (MCP)

An AI-powered tool that generates modern UI components from natural language descriptions, integrating with popular IDEs to streamline UI development workflow.

Official
Featured
Local
TypeScript
Audiense Insights MCP Server

Audiense Insights MCP Server

Enables interaction with Audiense Insights accounts via the Model Context Protocol, facilitating the extraction and analysis of marketing insights and audience data including demographics, behavior, and influencer engagement.

Official
Featured
Local
TypeScript
VeyraX MCP

VeyraX MCP

Single MCP tool to connect all your favorite tools: Gmail, Calendar and 40 more.

Official
Featured
Local
graphlit-mcp-server

graphlit-mcp-server

The Model Context Protocol (MCP) Server enables integration between MCP clients and the Graphlit service. Ingest anything from Slack to Gmail to podcast feeds, in addition to web crawling, into a Graphlit project - and then retrieve relevant contents from the MCP client.

Official
Featured
TypeScript
Kagi MCP Server

Kagi MCP Server

An MCP server that integrates Kagi search capabilities with Claude AI, enabling Claude to perform real-time web searches when answering questions that require up-to-date information.

Official
Featured
Python
Qdrant Server

Qdrant Server

This repository is an example of how to create a MCP server for Qdrant, a vector search engine.

Official
Featured
Neon Database

Neon Database

MCP server for interacting with Neon Management API and databases

Official
Featured
Exa Search

Exa Search

A Model Context Protocol (MCP) server lets AI assistants like Claude use the Exa AI Search API for web searches. This setup allows AI models to get real-time web information in a safe and controlled way.

Official
Featured
E2B

E2B

Using MCP to run code via e2b.

Official
Featured