FastMCP - Model Context Protocol Server

FastMCP is a Model Context Protocol (MCP) server that provides LLM services through the MCP standard. It acts as a bridge between MCP clients and your local LLM service, enabling seamless integration with MCP-compatible applications.

Features

  • 🚀 MCP Protocol Compliance: Full implementation of Model Context Protocol
  • 🔧 Tools: Chat completion, model listing, health checks
  • 📝 Prompts: Pre-built prompts for common tasks (assistant, code review, summarization)
  • 📊 Resources: Server configuration and LLM service status
  • 🔄 Streaming Support: Both streaming and non-streaming responses
  • 🔒 Configurable: Environment-based configuration
  • 🛡️ Robust: Built-in error handling and health monitoring
  • 🔌 Integration Ready: Works with any OpenAI-compatible LLM service

Getting Started

Prerequisites

  • Python 3.9+
  • pip
  • Local LLM service running on port 5001 (OpenAI-compatible API)
  • MCP client (e.g., Claude Desktop, MCP Inspector)

Installation

  1. Clone the repository:

    git clone https://github.com/yourusername/fastmcp.git
    cd fastmcp
    
  2. Create a virtual environment and activate it:

    python -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate
    
  3. Install dependencies:

    pip install -r requirements.txt
    
  4. Create a .env file (copy from .env.mcp) and configure (a sketch of how these values might be loaded appears just below):

    # Server Settings
    MCP_SERVER_NAME=fastmcp-llm-router
    MCP_SERVER_VERSION=0.1.0
    
    # LLM Service Configuration
    LOCAL_LLM_SERVICE_URL=http://localhost:5001
    
    # Optional: API Key for LLM service
    # LLM_SERVICE_API_KEY=your_api_key_here
    
    # Timeouts (in seconds)
    LLM_REQUEST_TIMEOUT=60
    HEALTH_CHECK_TIMEOUT=10
    
    # Logging
    LOG_LEVEL=INFO
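
The repository's own configuration module isn't reproduced here, but here is a minimal sketch of how these values might be loaded, assuming python-dotenv and plain os.getenv (the defaults shown are assumptions, not the project's actual defaults):

# config_sketch.py - hypothetical loader for the .env values above
import os

from dotenv import load_dotenv  # python-dotenv

load_dotenv()  # read .env from the working directory

SERVER_NAME = os.getenv("MCP_SERVER_NAME", "fastmcp-llm-router")
SERVER_VERSION = os.getenv("MCP_SERVER_VERSION", "0.1.0")
LLM_SERVICE_URL = os.getenv("LOCAL_LLM_SERVICE_URL", "http://localhost:5001")
LLM_SERVICE_API_KEY = os.getenv("LLM_SERVICE_API_KEY")  # optional, may be None
LLM_REQUEST_TIMEOUT = float(os.getenv("LLM_REQUEST_TIMEOUT", "60"))
HEALTH_CHECK_TIMEOUT = float(os.getenv("HEALTH_CHECK_TIMEOUT", "10"))
LOG_LEVEL = os.getenv("LOG_LEVEL", "INFO")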
    

Running the MCP Server

Option 1: Using the CLI script

python run_server.py

Option 2: Direct execution

python mcp_server.py

Option 3: With custom configuration

python run_server.py --llm-url http://localhost:5001 --log-level DEBUG

The MCP server communicates over stdio and can be connected to by any MCP client.
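
How run_server.py maps these flags onto the configuration is project-specific; the sketch below shows one plausible shape, assuming the flags simply override the corresponding .env values before the server starts (the serve coroutine imported from mcp_server is a hypothetical name, not confirmed by this README):

# run_server_sketch.py - hypothetical CLI wrapper around the stdio server
import argparse
import asyncio
import os


def main():
    parser = argparse.ArgumentParser(description="Start the FastMCP stdio server")
    parser.add_argument("--llm-url", help="Override LOCAL_LLM_SERVICE_URL")
    parser.add_argument("--log-level", help="Override LOG_LEVEL (e.g. DEBUG)")
    args = parser.parse_args()

    # CLI flags take precedence over the .env values when provided.
    if args.llm_url:
        os.environ["LOCAL_LLM_SERVICE_URL"] = args.llm_url
    if args.log_level:
        os.environ["LOG_LEVEL"] = args.log_level

    # Import after the overrides so configuration picks them up.
    from mcp_server import serve  # assumed async entry point in mcp_server.py
    asyncio.run(serve())


if __name__ == "__main__":
    main()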

MCP Client Integration

Claude Desktop Integration

Add to your Claude Desktop configuration:

{
  "mcpServers": {
    "fastmcp-llm-router": {
      "command": "python",
      "args": ["/path/to/fastmcp/mcp_server.py"],
      "env": {
        "LOCAL_LLM_SERVICE_URL": "http://localhost:5001"
      }
    }
  }
}

MCP Inspector

Test your server with MCP Inspector:

npx @modelcontextprotocol/inspector python mcp_server.py

Available Tools

1. Chat Completion

Send messages to your LLM service:

{
  "name": "chat_completion",
  "arguments": {
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Hello!"}
    ],
    "model": "default",
    "temperature": 0.7
  }
}

2. List Models

Get available models from your LLM service:

{
  "name": "list_models",
  "arguments": {}
}

3. Health Check

Check if your LLM service is running:

{
  "name": "health_check",
  "arguments": {}
}
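
These JSON payloads are what an MCP client sends on your behalf. As an illustration only (not part of this repository), a small Python client built on the official MCP SDK (the mcp package) could launch the server over stdio and exercise the tools roughly like this:

# client_sketch.py - minimal MCP client calling the tools above
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main():
    server = StdioServerParameters(
        command="python",
        args=["mcp_server.py"],
        env={"LOCAL_LLM_SERVICE_URL": "http://localhost:5001"},
    )

    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Confirm the three tools are advertised.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Invoke chat_completion with the same arguments as the example above.
            result = await session.call_tool(
                "chat_completion",
                arguments={
                    "messages": [
                        {"role": "system", "content": "You are a helpful assistant."},
                        {"role": "user", "content": "Hello!"},
                    ],
                    "model": "default",
                    "temperature": 0.7,
                },
            )
            print(result.content)


asyncio.run(main())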

Available Prompts

  • chat_assistant: General AI assistant prompt
  • code_review: Code review and analysis
  • summarize: Text summarization

Available Resources

  • config://server: Server configuration
  • status://llm-service: LLM service status
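
Prompts and resources are fetched through the same kind of client session, via get_prompt() and read_resource(). Here is a short sketch under the same assumptions as the tool example above (the code_review prompt's argument name, "code", is a guess for illustration):

# prompts_resources_sketch.py - reading prompts and resources over stdio
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main():
    server = StdioServerParameters(command="python", args=["mcp_server.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Render one of the pre-built prompts (argument name assumed).
            review = await session.get_prompt(
                "code_review", arguments={"code": "def add(a, b): return a + b"}
            )
            print(review.messages)

            # Read the two resources listed above.
            config = await session.read_resource("config://server")
            status = await session.read_resource("status://llm-service")
            print(config.contents, status.contents)


asyncio.run(main())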

Project Structure

fastmcp/
├── app/
│   ├── api/
│   │   └── v1/
│   │       └── api.py          # API routes
│   ├── core/
│   │   └── config.py          # Application configuration
│   ├── models/                # Database models
│   ├── services/              # Business logic
│   └── utils/                 # Utility functions
├── tests/                     # Test files
├── .env.example               # Example environment variables
├── requirements.txt           # Project dependencies
└── README.md                  # This file

Contributing

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

License

This project is licensed under the MIT License - see the LICENSE file for details.
