# MCP LLM API Server
ianrichard
An API server for LLMs built on the Model Context Protocol (MCP) and Pydantic AI.
## Features
- Terminal interface for CLI interaction
- API server with WebSocket streaming
- Web client demonstration
- Tool call support through MCP
## Prerequisites
- Python 3.9+
- A virtual environment (venv)
## Quick Start (Local Development)
1. Copy `.env.example` to `.env` and add your API keys.
2. Create and activate a virtual environment:

   ```bash
   python3 -m venv .venv
   source .venv/bin/activate  # On Linux/macOS
   # or
   .venv\Scripts\activate     # On Windows
   ```
3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Run the CLI:

   ```bash
   python src/main.py --mode cli
   ```

5. Run the API server:

   ```bash
   python src/main.py --mode api
   ```
## Docker

Build the Docker image:

```bash
docker build -t mcp-llm-api-server .
```

Run in API mode (the default):

```bash
docker run -p 8000:8000 --env-file .env mcp-llm-api-server
```

Run in CLI mode (interactive):

```bash
docker run -it --env-file .env mcp-llm-api-server --mode cli
```

### Docker Development with Live Reloading

```bash
# Run with a volume mount for live code reloading during development
docker run -p 8000:8000 --env-file .env -v $(pwd):/app mcp-llm-api-server
```

### Using Docker Compose

```bash
# Start the API service
docker-compose up
```
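For reference, a minimal `docker-compose.yml` compatible with the commands above could look like the sketch below. The service name and the volume mount are assumptions for illustration, not taken from this project's actual compose file.

```yaml
# Minimal sketch -- service name and volume mount are assumptions.
services:
  api:
    build: .
    ports:
      - "8000:8000"
    env_file: .env
    volumes:
      - .:/app  # live code reloading during development
```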
## Model Configuration

This project uses Pydantic AI for AI model integration. You can configure which model to use by setting the `BASE_MODEL` environment variable.

The format follows the Pydantic AI convention `provider:model_name`. Examples:

- `openai:gpt-4o`
- `anthropic:claude-3-opus-20240229`
- `groq:llama-3.3-70b-versatile`

See the complete list of supported models at: https://ai.pydantic.dev/models/
## API Keys

For each provider, you'll need to set the corresponding API key in your `.env` file:

```bash
# Example .env configuration
BASE_MODEL=groq:llama-3.3-70b-versatile
GROQ_API_KEY=your-groq-api-key
OPENAI_API_KEY=your-openai-api-key
ANTHROPIC_API_KEY=your-anthropic-api-key
```

The API key environment variable follows the pattern `{PROVIDER_NAME}_API_KEY`.
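The naming convention can be expressed as a small helper. The function name below is ours, for illustration only; the project may resolve keys differently.

```python
# Illustrates the {PROVIDER_NAME}_API_KEY naming convention.
def api_key_var(base_model: str) -> str:
    """Map a "provider:model_name" string to its expected API-key variable."""
    provider, _, _ = base_model.partition(":")
    return f"{provider.upper()}_API_KEY"

print(api_key_var("groq:llama-3.3-70b-versatile"))  # GROQ_API_KEY
```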
## API Documentation

Once the API server is running, access the auto-generated API documentation at:

- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
## Making API Calls

The primary endpoint is `/chat`, which accepts POST requests with a JSON body containing the user's message.

Example using curl:

```bash
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello, agent!"}' \
  http://localhost:8000/chat
```
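The same call can be made from Python with the third-party `requests` package. This is a sketch: the response is assumed to be JSON, so check the Swagger UI at `/docs` for the exact schema.

```python
# Hedged Python equivalent of the curl example above.
import requests

BASE_URL = "http://localhost:8000"

def build_chat_payload(message: str) -> dict:
    """The JSON body the /chat endpoint expects."""
    return {"message": message}

def send_chat(message: str) -> dict:
    resp = requests.post(
        f"{BASE_URL}/chat",
        json=build_chat_payload(message),
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(send_chat("Hello, agent!"))
```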
For streaming responses, use the WebSocket endpoint: `ws://localhost:8000/ws`
## Web Client

A demo web client is included in the `/static` directory. Access it at: http://localhost:8000/
## Important Notes

- Ensure the virtual environment is activated before running either the client or the server.
- The API server runs on port 8000 by default.
- Both the CLI interface and the API server use the same underlying agent functionality.