# 📚 Personal Research Assistant MCP

A production-ready MCP (Model Context Protocol) server that enables semantic search across your personal research library. Built for AI engineers who need fast, accurate document retrieval integrated with Claude Desktop and other AI tools.
## 🎯 Problem Statement
Researchers and professionals accumulate dozens of papers and documents but struggle to:

- Find relevant information across multiple documents
- Remember which paper contained a specific insight
- Connect related concepts across different sources

As a result, many spend two or more hours a day searching for information. Traditional keyword search misses semantic connections, and reading everything is impractical.
## 💡 Solution
An MCP server that:
- Indexes documents into a vector database using semantic embeddings
- Enables Claude (or any MCP client) to query your research library conversationally
- Targets sub-500 ms query latency and 85%+ retrieval accuracy (see METRICS.md for measured numbers)
- Includes a Streamlit dashboard for management and metrics
## 🏗️ Architecture

```text
Documents (PDF/DOCX/HTML/MD)
        ↓
Document Processor → Text Chunker → Embeddings
        ↓
ChromaDB Vector Store
        ↓
        ├── MCP Server (FastMCP) → Claude Desktop
        └── Streamlit UI → Monitoring/Testing
```
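The pipeline above can be sketched end-to-end in miniature. Everything here is a stand-in chosen for illustration: a hashed bag-of-words vector replaces the sentence-transformer embeddings, and a plain Python list replaces the ChromaDB collection. It shows the chunk → vector → top-k query flow, not the project's actual implementation:

```python
import hashlib
import math

def embed(text: str, dim: int = 256) -> list[float]:
    # Stand-in embedding: hashed bag-of-words. The real pipeline uses
    # sentence-transformers or OpenAI embeddings instead.
    vec = [0.0] * dim
    for token in text.lower().split():
        token = token.strip(".,?!")
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are unit-normalised, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

class InMemoryStore:
    # Stand-in for the ChromaDB collection.
    def __init__(self):
        self.items: list[tuple[str, list[float]]] = []

    def add(self, chunk: str) -> None:
        self.items.append((chunk, embed(chunk)))

    def query(self, text: str, top_k: int = 3) -> list[tuple[float, str]]:
        q = embed(text)
        scored = [(cosine(q, vec), chunk) for chunk, vec in self.items]
        return sorted(scored, reverse=True)[:top_k]

store = InMemoryStore()
for chunk in [
    "RAG systems combine retrieval with text generation.",
    "Transformers rely on self-attention over token sequences.",
    "RLHF aligns language models with human preferences.",
]:
    store.add(chunk)

hits = store.query("How does retrieval augmented generation work?")
```

In the real server the embedding model is configurable and the store is a persistent ChromaDB collection, but the data flow has the same shape.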
## ✨ Features
- Semantic Search: Natural language queries across your entire library
- Multi-Format Support: PDF, DOCX, HTML, Markdown, TXT
- Fast Retrieval: <500ms query latency on 1000+ chunks
- MCP Integration: Works with Claude Desktop, VS Code, and any MCP client
- Metadata Extraction: Automatically extracts titles, authors, keywords
- Query Logging: Track usage and performance metrics
- Streamlit Dashboard: Upload, search, and visualize metrics
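The metadata-extraction step can be illustrated with a small heuristic sketch. The heuristics here (first non-empty line as the title, frequency-based keywords) are assumptions for illustration only; the project's actual logic lives in `rag_pipeline/metadata_extractor.py`:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "in", "to", "for", "on", "with", "is", "are", "we"}

def extract_metadata(text: str, num_keywords: int = 5) -> dict:
    # Heuristic sketch: first non-empty line as the title, most frequent
    # non-stopword words as keywords, leading characters as a preview.
    lines = [line.strip() for line in text.splitlines() if line.strip()]
    title = lines[0] if lines else ""
    words = re.findall(r"[a-z]{3,}", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    keywords = [word for word, _ in counts.most_common(num_keywords)]
    return {"title": title, "keywords": keywords, "preview": text[:200]}

meta = extract_metadata(
    "Attention Is All You Need\n\n"
    "The transformer architecture relies entirely on attention. "
    "Attention lets the model weigh all tokens at once."
)
```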
## 📊 Performance Metrics
| Metric | Target | Actual |
|---|---|---|
| Retrieval Accuracy | 85% | See METRICS.md |
| Query Latency | <500ms | See METRICS.md |
| Scale | 10k+ chunks | 1782+ chunks |
## 📦 Installation

### Prerequisites

- Python 3.11+
- 2GB RAM minimum
- Git

### Setup
```bash
# Clone repository
git clone https://github.com/yourusername/research-assistant-mcp.git
cd research-assistant-mcp

# Create virtual environment
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Install local embeddings
pip install sentence-transformers

# Configure environment
cp .env.example .env
# Edit .env - add OPENAI_API_KEY if using OpenAI embeddings
```
### Download Sample Data

```bash
# Download 25 AI/ML papers from arXiv
python scripts/download_sample_papers.py --count 25
```
### Index Documents

```bash
# Index sample papers
python scripts/index_docs.py --folder ./sample_papers

# Or index your own documents
python scripts/index_docs.py --folder /path/to/your/papers --recursive
```
## 🚀 Usage

### Start the MCP Server

```bash
python mcp_server/server.py
```
### Configure Claude Desktop

Add the server to `claude_desktop_config.json`:

- Mac: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`

```json
{
  "mcpServers": {
    "research-assistant": {
      "command": "python",
      "args": ["/full/path/to/research-assistant-mcp/mcp_server/server.py"],
      "env": {}
    }
  }
}
```
Restart Claude Desktop.
### Launch the Streamlit UI

```bash
streamlit run ui/app.py
```

The dashboard opens at http://localhost:8501.
## 🛠️ MCP Tools

### `search_documents`

Semantic search across your library.

- Query: "What are the challenges in RAG systems?"
- Returns: top-k results with sources, scores, and metadata

### `get_document_summary`

Get a quick overview of a document.

- Input: document path or title
- Returns: title, author, keywords, preview

### `find_related_papers`

Find documents similar to a topic.

- Query: "prompt engineering techniques"
- Returns: related papers with relevance scores
## 📁 Project Structure

```text
research-assistant-mcp/
├── mcp_server/            # MCP server implementation
│   └── server.py
├── rag_pipeline/          # RAG components
│   ├── config.py
│   ├── document_processor.py
│   ├── chunker.py
│   ├── vector_store.py
│   ├── retriever.py
│   └── metadata_extractor.py
├── ui/                    # Streamlit dashboard
│   ├── app.py
│   └── pages/
├── scripts/               # CLI utilities
│   ├── index_docs.py
│   └── download_sample_papers.py
├── tests/                 # Testing & benchmarks
│   ├── sample_queries.json
│   └── benchmark_performance.py
├── data/                  # Data storage
│   ├── chroma_db/
│   └── query_logs/
└── docs/                  # Documentation
    └── METRICS.md
```
## 🧪 Testing

```bash
# Run performance benchmarks
python tests/benchmark_performance.py
# Output: accuracy, latency, and scale metrics
```
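A minimal version of such a benchmark can be sketched as below. The query-file shape (`query` / `expected_source` keys) and the stub retriever are assumptions for illustration, not necessarily the format used by the repository's `tests/sample_queries.json`:

```python
import statistics
import time

def run_benchmark(search_fn, queries: list[dict]) -> dict:
    # search_fn(query) -> list of {"source": ..., "score": ...} results.
    latencies, hits = [], 0
    for q in queries:
        start = time.perf_counter()
        results = search_fn(q["query"])
        latencies.append((time.perf_counter() - start) * 1000)
        # Count a hit if the expected source appears anywhere in the results.
        if any(r["source"] == q["expected_source"] for r in results):
            hits += 1
    return {
        "accuracy": hits / len(queries),
        "p50_latency_ms": statistics.median(latencies),
        "max_latency_ms": max(latencies),
    }

# A stub retriever standing in for the real pipeline:
def stub_search(query: str) -> list[dict]:
    return [{"source": "rag_survey.pdf", "score": 0.91}]

report = run_benchmark(stub_search, [
    {"query": "What are the challenges in RAG systems?",
     "expected_source": "rag_survey.pdf"},
])
```

Swapping the stub for the real retriever would yield the accuracy and latency figures reported in METRICS.md.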
## 🐳 Docker Deployment

```bash
# Build and run
docker-compose up -d
# Access the UI at http://localhost:8501
# The MCP server runs on localhost:8000
```
## 💬 Example Queries

- Cross-document synthesis: "Compare different fine-tuning approaches for LLMs"
- Concept exploration: "How does RLHF improve model alignment?"
- Technical details: "Explain transformer attention mechanisms"
- Literature review: "What are recent developments in RAG systems?"
## 🔧 Customization

### Change the Embedding Model

Edit `.env`:

```bash
# OpenAI (paid, best quality)
EMBEDDING_MODEL=text-embedding-3-small
# Local sentence-transformers embeddings (free) are used by default
```

### Adjust Chunk Size

Edit `.env`:

```bash
CHUNK_SIZE=1000      # Characters per chunk
CHUNK_OVERLAP=200    # Overlap between chunks
```
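The effect of these two settings can be seen in a sliding-window sketch (a simplified stand-in; the project's actual splitting logic lives in `rag_pipeline/chunker.py`):

```python
def chunk_text(text: str, chunk_size: int = 1000, chunk_overlap: int = 200) -> list[str]:
    # Each chunk starts (chunk_size - chunk_overlap) characters after the
    # previous one, so consecutive chunks share chunk_overlap characters.
    if chunk_overlap >= chunk_size:
        raise ValueError("CHUNK_OVERLAP must be smaller than CHUNK_SIZE")
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - chunk_overlap, 1), step)]

chunks = chunk_text("a" * 1800)
```

A larger overlap preserves more context across chunk boundaries at the cost of more chunks to embed and store.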
### Add Document Types

Edit `rag_pipeline/document_processor.py` to add new file-type handlers.
## 🐛 Troubleshooting

- **ChromaDB errors**: delete `data/chroma_db` and re-index
- **Import errors**: verify `pip install -r requirements.txt` completed
- **Blank UI**: check the browser console; try Chrome or Firefox
- **Slow queries**: reduce `TOP_K_RESULTS` in `.env`
## 🚧 Future Enhancements
- [ ] Auto-watch folder for new documents
- [ ] Cross-encoder reranking for better accuracy
- [ ] Multi-modal support (images, diagrams)
- [ ] Citation network graph
- [ ] Export to Notion/Obsidian
- [ ] Web interface (FastAPI + React)
## 🎥 Demo Video
[Link to 2-minute demo video - Coming soon]
## 🤝 Contributing
Contributions welcome! Please open issues or PRs.
## 📄 License
MIT License - see LICENSE
## 🙏 Acknowledgments