# LangChain Documentation MCP Server
A comprehensive dual-mode server that provides real-time access to LangChain documentation, API references, and code examples. Supports both FastAPI web service and native Model Context Protocol (MCP) server modes, fetching live data from official LangChain sources.
## Features

- **Dual Server Modes** - Run as a FastAPI web service or a native MCP server
- **Live Documentation Search** - Search the official LangChain documentation in real time
- **API Reference Lookup** - Get detailed API references from GitHub source code
- **GitHub Code Examples** - Fetch real code examples from the LangChain repository
- **Tutorial Discovery** - Find and access LangChain tutorials and guides
- **Version Tracking** - Get the latest version information from PyPI
- **Direct API Search** - Search specifically within the API reference documentation
- **MCP Protocol Support** - Native Model Context Protocol implementation
## Data Sources
This server fetches live data from:
- python.langchain.com - Official LangChain documentation
- GitHub LangChain Repository - Source code and examples
- PyPI - Latest version and release information
## API Endpoints

### Core Endpoints

- `GET /` - API documentation (Swagger UI)
- `GET /health` - Health check and service status

### LangChain Documentation

- `GET /search` - Search general documentation
- `GET /search/api` - Search the API reference specifically
- `GET /api-reference/{class_name}` - Get a detailed API reference for a class
- `GET /examples/github` - Get real code examples from GitHub
- `GET /tutorials` - Get tutorials and guides
- `GET /latest-version` - Get the latest LangChain version info
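Any HTTP client can consume these endpoints programmatically. A minimal sketch using `httpx` (the query parameters mirror the curl examples under Usage Examples below; the response shape depends on the server's Pydantic schemas):

```python
import httpx

# Query the running FastAPI server (assumes the Docker default of port 8080).
resp = httpx.get(
    "http://localhost:8080/search",
    params={"query": "ChatOpenAI", "limit": 5},
    timeout=30.0,
)
resp.raise_for_status()
print(resp.json())  # exact fields are defined by the server's response models
```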
## Quick Start

### Option 1: Docker Compose (Recommended)

1. Clone the repository:

   ```bash
   git clone https://github.com/LiteObject/langchain-mcp-server.git
   cd langchain-mcp-server
   ```

2. Start the FastAPI server:

   ```bash
   docker-compose up --build
   ```

3. Access the API:

   - API Documentation: http://localhost:8080/docs
   - Health Check: http://localhost:8080/health
### Option 2: Local Development

#### FastAPI Mode

1. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

2. Run the FastAPI server:

   ```bash
   # Using the main entry point
   python run.py

   # Or using the dedicated script
   python scripts/run_fastapi.py

   # Or directly with uvicorn
   uvicorn src.api.fastapi_app:app --host 0.0.0.0 --port 8000
   ```

#### MCP Server Mode

1. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

2. Run the MCP server:

   ```bash
   # Using the main entry point
   python run.py mcp

   # Or using the dedicated script
   python scripts/run_mcp.py
   ```
## Usage Examples

### Search Documentation

```bash
# Search for "ChatOpenAI" in documentation
curl "http://localhost:8080/search?query=ChatOpenAI&limit=5"

# Search API reference specifically
curl "http://localhost:8080/search/api?query=embeddings"
```

### Get API Reference

```bash
# Get detailed API reference for ChatOpenAI
curl "http://localhost:8080/api-reference/ChatOpenAI"

# Get API reference for LLMChain
curl "http://localhost:8080/api-reference/LLMChain"
```

### Fetch Code Examples

```bash
# Get real examples from GitHub
curl "http://localhost:8080/examples/github?query=chatbot&limit=3"

# Get general examples
curl "http://localhost:8080/examples/github"
```

### Get Tutorials

```bash
# Fetch all available tutorials
curl "http://localhost:8080/tutorials"
```

### Version Information

```bash
# Get latest version from PyPI
curl "http://localhost:8080/latest-version"
```
## MCP Server Usage

When running in MCP mode, the server provides the following tools:

### Available MCP Tools

- `search_langchain_docs` - Search LangChain documentation
- `search_api_reference` - Search the API reference specifically
- `get_api_reference` - Get a detailed API reference for a class
- `get_github_examples` - Get code examples from GitHub
- `get_tutorials` - Get available tutorials
- `get_latest_version` - Get the latest LangChain version
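As a rough sketch of how an MCP client could invoke these tools over stdio using the official MCP Python SDK (the tool argument name `query` is an assumption; verify it against the schemas returned by `list_tools`):

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Launch the server in MCP mode as a stdio subprocess.
    params = StdioServerParameters(command="python", args=["run.py", "mcp"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])
            # "query" is an assumed argument name; check the tool schema.
            result = await session.call_tool(
                "search_langchain_docs", {"query": "ChatOpenAI"}
            )
            print(result)


asyncio.run(main())
```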
### MCP Client Integration

```json
{
  "mcpServers": {
    "langchain-docs": {
      "command": "python",
      "args": ["path/to/langchain-mcp-server/run.py", "mcp"],
      "env": {
        "PYTHONPATH": "path/to/langchain-mcp-server"
      }
    }
  }
}
```
## Configuration

### Environment Variables

| Variable | Description | Default |
|---|---|---|
| `HOST` | Server host address | `0.0.0.0` |
| `PORT` | Server port | `8000` |
| `DEBUG` | Enable debug mode | `False` |
| `LOG_LEVEL` | Logging level | `INFO` |
| `REQUEST_TIMEOUT` | Timeout for external API calls | `30` seconds |
| `GITHUB_TOKEN` | GitHub API token (optional) | None |
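Settings like these are typically modeled as an env-driven Pydantic settings class; a minimal sketch of that pattern (field names mirror the table above, but the project's actual `src/config/settings.py` may differ):

```python
from pydantic_settings import BaseSettings


class Settings(BaseSettings):
    """Illustrative settings class; each field is overridable via its env var."""

    host: str = "0.0.0.0"
    port: int = 8000
    debug: bool = False
    log_level: str = "INFO"
    request_timeout: int = 30        # seconds, for external API calls
    github_token: str | None = None  # optional, raises GitHub API rate limits


settings = Settings()  # e.g. PORT=9000 python run.py picks up port=9000
```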
### Docker Configuration

The service is exposed on host port 8080 by default to avoid conflicts. You can modify this in `docker-compose.yml`:

```yaml
ports:
  - "8080:8000"  # Host:Container
```
## Development

### Project Structure

```
├── src/                          # Main source code package
│   ├── main.py                   # Main entry point with dual mode support
│   ├── api/                      # API layer
│   │   ├── fastapi_app.py        # FastAPI application
│   │   └── mcp_server.py         # Native MCP server implementation
│   ├── config/                   # Configuration management
│   │   ├── settings.py           # Application settings
│   │   └── logging.py            # Logging configuration
│   ├── models/                   # Data models and schemas
│   │   └── schemas.py            # Pydantic models
│   ├── services/                 # Business logic
│   │   └── langchain_service.py  # LangChain documentation service
│   └── utils/                    # Utility modules
│       ├── exceptions.py         # Custom exceptions
│       └── helpers.py            # Helper functions
├── scripts/                      # Convenience scripts
│   ├── run_fastapi.py            # Run FastAPI mode
│   ├── run_mcp.py                # Run MCP mode
│   └── health_check.py           # Health check utility
├── tests/                        # Test suite
│   ├── test_api.py               # API tests
│   ├── test_services.py          # Service tests
│   └── test_integration.py       # Integration tests
├── docs/                         # Documentation
│   └── API.md                    # API documentation
├── logs/                         # Log files
├── run.py                        # Simple entry point
├── requirements.txt              # Python dependencies
├── pyproject.toml                # Project configuration
├── Dockerfile                    # Docker configuration
├── docker-compose.yml            # Docker Compose setup
├── DOCKER.md                     # Docker documentation
└── README.md                     # This file
```
### Key Dependencies

- **FastAPI** - Web framework for REST API mode
- **MCP** - Native Model Context Protocol support
- **FastAPI-MCP** - MCP integration for FastAPI
- **httpx** - Async HTTP client for external API calls
- **BeautifulSoup4** - HTML parsing for documentation scraping
- **Pydantic** - Data validation and settings management
- **uvicorn** - ASGI server for FastAPI
### Adding New Endpoints

1. Define Pydantic models for the request/response
2. Add the endpoint function with proper type hints
3. Include comprehensive docstrings
4. Add error handling with specific exceptions
5. Update the health check endpoint count
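As an illustration of those steps, a hypothetical endpoint sketch (the route, model, and placeholder logic are invented for this example; real endpoints live in `src/api/fastapi_app.py`):

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()


# Step 1: Pydantic response model
class ReleaseInfo(BaseModel):
    version: str
    summary: str


# Steps 2-4: typed endpoint with a docstring and specific error handling
@app.get("/release/{version}", response_model=ReleaseInfo)
async def get_release(version: str) -> ReleaseInfo:
    """Return release metadata for a given LangChain version (hypothetical)."""
    if not version.strip():
        raise HTTPException(status_code=400, detail="version must not be empty")
    # A real implementation would delegate to src/services/langchain_service.py.
    return ReleaseInfo(version=version, summary="placeholder")
```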
## Error Handling

The server includes robust error handling for:

- **Network failures** - Graceful degradation when external APIs are unavailable
- **Rate limiting** - Handles GitHub API rate limits
- **Invalid requests** - Proper HTTP status codes and error messages
- **Timeouts** - Configurable request timeouts
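A minimal sketch of this graceful-degradation pattern with `httpx` (not the project's actual implementation; the helper name and fallback behavior are illustrative):

```python
import httpx


async def fetch_page(url: str, timeout: float = 30.0) -> str | None:
    """Fetch a documentation page, degrading gracefully on failure."""
    try:
        async with httpx.AsyncClient(timeout=timeout) as client:
            resp = await client.get(url)
            resp.raise_for_status()
            return resp.text
    except httpx.HTTPStatusError as exc:
        # GitHub signals rate limiting with 403/429; report unavailability.
        if exc.response.status_code in (403, 429):
            return None
        raise
    except (httpx.TimeoutException, httpx.TransportError):
        return None  # network failure or timeout: caller serves a fallback
```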
## Health Monitoring

The `/health` endpoint provides:

- Service status
- Available endpoint count
- Data source URLs
- Current timestamp
- Updated documentation sections
## Security Considerations

- **Rate Limiting** - Consider implementing rate limiting for production
- **CORS** - Configure CORS headers if needed for web access
- **API Keys** - Add a GitHub token for higher API rate limits
- **Input Validation** - All inputs are validated using Pydantic
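For the CORS point, FastAPI's stock middleware covers it; a sketch (the allowed origin is a placeholder, to be replaced with your actual frontend):

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()
app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://your-frontend.example"],  # placeholder origin
    allow_methods=["GET"],  # this API is read-only
    allow_headers=["*"],
)
```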
## Production Deployment

For production use, consider:

- **Caching** - Add Redis/Memcached for response caching
- **Rate Limiting** - Implement request rate limiting
- **Monitoring** - Add application monitoring and logging
- **Load Balancing** - Run multiple instances behind a load balancer
- **Database** - Store frequently accessed data
- **CI/CD** - Set up an automated deployment pipeline
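On the caching point: before reaching for Redis, even an in-process TTL cache can cut repeated upstream calls; a minimal sketch (illustrative only, not part of the project):

```python
import time
from functools import wraps


def ttl_cache(seconds: float = 300.0):
    """Cache coroutine results in memory for a fixed time-to-live."""
    def decorator(fn):
        store: dict = {}

        @wraps(fn)
        async def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and now - hit[0] < seconds:
                return hit[1]           # fresh cached value
            value = await fn(*args)
            store[args] = (now, value)  # refresh the entry
            return value

        return wrapper
    return decorator

# Usage: @ttl_cache(600) above an async fetch caches its results for 10 minutes.
```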
## Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests if applicable
5. Submit a pull request
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Related Links

- [LangChain Documentation](https://python.langchain.com/)
- [LangChain GitHub](https://github.com/langchain-ai/langchain)
- [FastAPI Documentation](https://fastapi.tiangolo.com/)
- [Model Context Protocol](https://modelcontextprotocol.io/)
- [MCP Python SDK](https://github.com/modelcontextprotocol/python-sdk)
## Support

If you encounter any issues:

- Check the health endpoint for service status (FastAPI mode)
- Review Docker logs: `docker-compose logs`
- Check application logs in the `logs/` directory
- Ensure network connectivity to external APIs
- Verify all dependencies are installed correctly
- For MCP mode issues, check the MCP client configuration

**Note**: This server requires internet connectivity to fetch live data from LangChain's official sources. Rate limits may apply to GitHub API calls.