
# MCP RAG Server

A Model Context Protocol (MCP) server for Retrieval-Augmented Generation with document parsing and vector database storage using ChromaDB.
## Features

- **Document Processing**: Supports PDF, Markdown, and text files with enhanced parsing
- **Vector Search**: Uses the ChromaDB v2 API for semantic similarity search
- **File Watching**: Automatically processes new and changed files
- **MCP Integration**: Full MCP protocol support with resources and tools
- **Large Document Support**: Intelligent chunking for documents of any size
- **Robust Error Handling**: Comprehensive error recovery and retry mechanisms
- **Silent Operation**: No console output for clean MCP server behavior
- **Enhanced PDF Processing**: Node.js-optimized PDF.js with memory management
## Recent Improvements

### Version 1.x Features

- ✅ **ChromaDB v2 Compatibility**: Updated to use the ChromaDB 2.4.6 API
- ✅ **PDF.js Node.js Fix**: Resolved DOMMatrix errors using the legacy build
- ✅ **Large Document Handling**: Automatic chunking for files >4KB with reconstruction
- ✅ **Enhanced Error Recovery**: Retry logic with exponential backoff
- ✅ **Silent MCP Operation**: Removed all console logging for proper MCP server behavior
- ✅ **Memory Optimization**: Batch processing and garbage collection for large PDFs
- ✅ **Content Sanitization**: Robust handling of special characters and encoding issues
## Prerequisites

- Node.js 18+
- ChromaDB server running and accessible (v0.4.0+)
- Network access to your ChromaDB instance
## Installation

- Clone and install dependencies:

```bash
npm install
```

- Build the project:

```bash
npm run build
```
## Configuration

### Environment Variables

Create a `.env` file in the project root:

```bash
# Chroma Database URL (required)
CHROMA_URL=http://localhost:8000

# Directory to watch for documents (optional)
WATCH_DIRECTORY=./documents

# Document chunking settings (optional)
CHUNK_SIZE=1000
CHUNK_OVERLAP=200
```
### Command Line Options

You can also configure via command line:

```bash
node build/index.js \
  --chroma-url http://localhost:8000 \
  --watch-directory ./documents \
  --chunk-size 1000 \
  --chunk-overlap 200
```
Set the Chroma server URL via the `CHROMA_URL` environment variable. For example, in your MCP config:

```json
{
  "env": {
    "CHROMA_URL": "http://localhost:8000"
  }
}
```

If not set, the default is `http://localhost:8000`.
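For reference, a complete client-side registration might look like the following. This is a sketch assuming a Claude Desktop-style `mcpServers` configuration; the server name `rag` and the paths are illustrative, not prescribed by this project:

```json
{
  "mcpServers": {
    "rag": {
      "command": "node",
      "args": ["/path/to/mcp-rag-server/build/index.js"],
      "env": {
        "CHROMA_URL": "http://localhost:8000",
        "WATCH_DIRECTORY": "/path/to/documents"
      }
    }
  }
}
```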
## ChromaDB Setup

This server requires the ChromaDB v2 API. Install and run ChromaDB:

```bash
# Using pip
pip install chromadb

# Run ChromaDB server
chroma run --host 0.0.0.0 --port 8000
```

For Docker:

```bash
docker run -p 8000:8000 chromadb/chroma:latest
```
### Testing ChromaDB Connection

Run the included test suite to verify your setup:

```bash
# Run all tests including connection verification
npm test

# Run specific connection tests
npm run test -- --grep "connection"
```

This will verify:

- Network connectivity to the ChromaDB v2 API
- Document insertion and retrieval
- Large document chunking and reconstruction
- Search functionality
## Running Tests

Run the comprehensive test suite:

```bash
# Run all tests
npm test

# Run with verbose output
npm run test:verbose

# Watch mode for development
npm run test:watch
```

Note: You may see a deprecation warning about the `punycode` module during tests. This comes from a dependency and does not affect functionality.
## Usage

### As MCP Server

The server runs in stdio mode for MCP protocol communication:

```bash
npm run dev
```

The server operates silently without console output, making it suitable for MCP integration.

### MCP Resources

- `rag://documents` - List all documents
- `rag://document/{id}` - Get a specific document
### MCP Tools

- `search` - Query documents with semantic similarity
- `add_document` - Manually add document content
- `remove_document` - Delete a document by ID
- `status` - Get server statistics
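For example, an MCP client invokes the `search` tool with a standard `tools/call` request. This is a sketch of the wire format; in practice your MCP client library builds this envelope for you, and the `id` value is arbitrary:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search",
    "arguments": { "query": "vector database setup", "limit": 5 }
  }
}
```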
## Document Support

| File Type | Extension | Parser | Features |
|---|---|---|---|
| PDF | `.pdf` | PDF.js (Legacy) | ✅ Large file support, ✅ Memory optimization, ✅ Batch processing |
| Markdown | `.md`, `.markdown` | Marked | ✅ HTML tag removal, ✅ UTF-8 support |
| Text | `.txt`, `.text` | UTF-8 | ✅ Encoding detection, ✅ Special character handling |
### Large Document Handling

The server automatically handles large documents:

- **Files >4KB**: Split into 4KB chunks at smart sentence boundaries
- **Content Reconstruction**: Seamlessly rebuilds full documents when requested
- **Memory Efficiency**: Batch processing prevents memory overflow
- **Progress Tracking**: Metadata tracks chunk relationships for proper reconstruction
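The chunking strategy above can be sketched roughly as follows. This is an illustrative simplification, not the server's actual code; the function names are hypothetical, and the real implementation lives in `src/database/manager.ts`:

```typescript
// Illustrative sketch of sentence-boundary chunking with reconstruction.
const MAX_CHUNK_BYTES = 4096; // the ">4KB" threshold described above

function splitIntoChunks(text: string, maxBytes = MAX_CHUNK_BYTES): string[] {
  if (Buffer.byteLength(text, "utf8") <= maxBytes) return [text];

  // Split into sentences, keeping the terminator and trailing spaces
  // with each sentence so that re-joining restores the exact text.
  const sentences = text.match(/[^.!?]*[.!?]+\s*|[^.!?]+$/g) ?? [text];
  const chunks: string[] = [];
  let current = "";

  for (const sentence of sentences) {
    // Start a new chunk when adding this sentence would exceed the limit.
    // (A single oversized sentence still becomes its own chunk.)
    if (current && Buffer.byteLength(current + sentence, "utf8") > maxBytes) {
      chunks.push(current);
      current = "";
    }
    current += sentence;
  }
  if (current) chunks.push(current);
  return chunks;
}

// Reconstruction: chunks are stored with an ordering index in their
// metadata, then re-joined in order when the full document is requested.
function reconstruct(chunks: string[]): string {
  return chunks.join("");
}
```

Because each chunk keeps its sentence delimiters intact, `reconstruct(splitIntoChunks(text))` returns the original text byte-for-byte.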
## Troubleshooting

### Connection Issues

**Error: `ECONNREFUSED` or `fetch failed`**

- Verify ChromaDB v2 is running:

```bash
curl http://localhost:8000/api/v1/heartbeat
```

- Check for v1/v2 API compatibility:

```bash
# Should return 200 OK for v2
curl http://localhost:8000/api/v1/version
```

- Verify the ChromaDB server configuration allows external connections

**Error: `410 Gone` responses**

This indicates a v1/v2 API mismatch. Ensure you're running ChromaDB v0.4.0+ and the client is using v2.4.6+.

**Error: `chromadb-default-embed` not found**

The `chromadb-default-embed` package should be installed automatically. If you see this error:

```bash
npm install chromadb-default-embed
```
### Document Processing Issues

**PDF.js DOMMatrix errors:**

✅ Fixed in v1.x - the server now uses the Node.js-compatible legacy build.

**Large PDF processing:**

- ✅ Enhanced in v1.x - automatic batch processing with a 100MB file size limit
- Memory management with garbage collection between batches
- 30-second timeout protection per PDF

**Files not being processed:**

- Check that the watch directory exists and is readable
- Verify file extensions are supported
- Check file permissions
- Run tests to verify parsing functionality
### Memory Issues

For large document collections:

- The server includes automatic memory management
- Large documents are automatically chunked
- If still needed, increase the Node.js memory limit:

```bash
node --max-old-space-size=4096 build/index.js
```

### Silent Operation

The server runs silently by design for proper MCP integration. If you need debugging:

- Run tests with verbose output: `npm run test:verbose`
- Use the debug test files in `src/tests/`
- Check error responses from MCP tools
### Known Issues

- **Punycode Deprecation Warning**: You may see deprecation warnings about the `punycode` module from Node.js dependencies. This does not affect functionality and will be resolved when dependencies are updated.
## Development

### Project Structure

```
src/
├── database/
│   └── manager.ts            # ChromaDB v2 interface with chunking
├── parsers/
│   └── document-parser.ts    # Enhanced document parsing with PDF.js legacy
├── vector/
│   └── vector-store.ts       # Vector operations wrapper
├── watchers/
│   └── directory-watcher.ts  # File system monitoring
├── tests/
│   ├── database.test.ts      # Database integration tests
│   ├── pdf.test.ts           # PDF parsing tests
│   ├── chroma-debug.test.ts  # ChromaDB debugging
│   └── verification.test.ts  # Document verification
└── index.ts                  # Main MCP server
```

### Building

```bash
# Clean build
npm run clean && npm run build

# Development with watch mode
npm run watch
```
### Adding New Document Types

- Add parser logic to `src/parsers/document-parser.ts`
- Update file extension handling
- Add tests in `src/tests/`
- Consider a chunking strategy for large files
## Performance Optimization

### ChromaDB Settings

For optimal performance, configure your ChromaDB server with:

- ChromaDB v0.4.0+ for v2 API compatibility
- Sufficient memory allocation (4GB+ recommended)
- SSD storage for better I/O
- Network optimization for remote connections

### Document Chunking

The server uses intelligent chunking:

- **Automatic chunking**: Files >4KB split at sentence boundaries
- **Content reconstruction**: Full documents rebuilt on retrieval
- **Metadata tracking**: Chunk relationships preserved
- **Memory efficiency**: Batch processing prevents overflow

Adjust chunk settings based on your use case:

- **Small chunks (500-800 chars)**: Better precision, more storage
- **Large chunks (1200-2000 chars)**: Better context, less storage
- **Overlap**: 10-20% of chunk size for continuity
## Security Considerations

- The ChromaDB server should be behind a firewall
- Use authentication if your ChromaDB deployment supports it
- Validate document sources before processing
- Monitor resource usage to prevent DoS
- Content sanitization prevents injection attacks
## API Reference

### MCP Tools

#### search

Query documents using semantic similarity.

Parameters:

- `query` (string): Search query
- `limit` (number, optional): Max results (default: 5)

Returns: Array of matching document chunks with scores

Enhanced Features:

- ✅ Searches across chunked content
- ✅ Includes parent document information
- ✅ Cosine similarity scoring
#### add_document

Add document content manually.

Parameters:

- `content` (string): Document text content
- `filename` (string): Document filename
- `metadata` (object, optional): Additional metadata

Returns: Success message with chunk count

Enhanced Features:

- ✅ Automatic chunking for large content
- ✅ Content sanitization
- ✅ Retry logic with exponential backoff
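The retry behavior mentioned above can be sketched like this. It is an illustrative pattern rather than the server's exact code; the function name, attempt count, and delays are hypothetical:

```typescript
// Generic retry helper with exponential backoff (illustrative sketch;
// the server's actual retry parameters may differ).
async function retryWithBackoff<T>(
  operation: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 100
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (err) {
      lastError = err;
      // Wait 100ms, 200ms, 400ms, ... before the next attempt.
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

Wrapping a ChromaDB write in a helper like this lets transient connection errors be absorbed instead of failing the `add_document` call outright.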
#### remove_document

Delete a document by ID.

Parameters:

- `id` (string): Document ID

Returns: Success confirmation

Enhanced Features:

- ✅ Removes all associated chunks
- ✅ Graceful error handling

#### status

Get server statistics and configuration.

Returns: Object with document count, chunk count, total size, and configuration

Enhanced Features:

- ✅ Real-time statistics
- ✅ Memory usage information
- ✅ ChromaDB connection status
## Migration Guide

### Future Versions

When upgrading to future versions:

- **Update Dependencies**: Run `npm update` to get the latest compatible versions
- **Update Environment Variables**: Check for any new configuration options
- **Rebuild Project**: Run `npm run clean && npm run build`
- **Test Connection**: Run `npm test` to verify compatibility

### Breaking Changes in v1.x

- Environment variable `CHROMA_HOST` renamed to `CHROMA_URL`
- ChromaDB v1 API no longer supported
- Console logging removed (use tests for debugging)
## License

MIT

## Contributing

- Fork the repository
- Create a feature branch
- Add tests for new functionality
- Ensure all tests pass (`npm test`)
- Verify no console output in main code
- Submit a pull request