MCP Server with Local LLM

A Model Context Protocol (MCP) server that integrates with your local Large Language Model, providing tools and resources that can be used by MCP-compatible clients like Claude Desktop, IDEs, and other AI assistants.

Features

  • Local LLM Integration: Uses your local language model (default: Qwen3-8B) for all operations
  • MCP-Compatible: Implements the Model Context Protocol specification
  • Multiple Tools: Provides various tools for different use cases
  • Resource Access: Exposes model information and conversation history
  • Memory Efficient: Uses 4-bit quantization so larger models fit in limited VRAM
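
The 4-bit quantization mentioned above is typically achieved with the transformers BitsAndBytesConfig API. A minimal loading sketch, assuming the standard transformers + bitsandbytes stack (the exact arguments used in mcp_server.py may differ):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    # NF4 4-bit quantization roughly quarters the memory footprint of the weights
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,
    )

    model = AutoModelForCausalLM.from_pretrained(
        "Qwen/Qwen3-8B",
        quantization_config=bnb_config,
        device_map="auto",  # place layers on GPU/CPU automatically
    )
    tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")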

Available Tools

1. chat_with_llm

Basic chat interface with the local LLM.

Parameters:

  • message (required): The message to send to the LLM
  • system_prompt (optional): System prompt to guide behavior
  • temperature (optional): Temperature for generation (0-1, default: 0.7)
  • max_tokens (optional): Maximum tokens to generate (default: 512)
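
For illustration, an MCP client invokes this tool with a standard MCP tools/call request over the stdio transport; the argument values below are made up:

    {
      "jsonrpc": "2.0",
      "id": 1,
      "method": "tools/call",
      "params": {
        "name": "chat_with_llm",
        "arguments": {
          "message": "Explain the Model Context Protocol in one paragraph.",
          "temperature": 0.7,
          "max_tokens": 256
        }
      }
    }

The remaining tools are called the same way, substituting their names and parameters.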

2. analyze_code

Analyze and review code using the local LLM.

Parameters:

  • code (required): The code to analyze
  • language (optional): Programming language
  • analysis_type (optional): Type of analysis (review, explain, optimize, debug)

3. generate_text

Generate text content in various styles and lengths.

Parameters:

  • prompt (required): The prompt for text generation
  • style (optional): Writing style (formal, casual, technical, creative)
  • length (optional): Desired length (short, medium, long)

4. translate_text

Translate text between languages.

Parameters:

  • text (required): Text to translate
  • target_language (required): Target language
  • source_language (optional): Source language (auto-detect if not provided)

5. summarize_content

Summarize content in different formats.

Parameters:

  • content (required): Content to summarize
  • summary_type (optional): Type of summary (brief, detailed, bullet_points)
  • max_length (optional): Maximum length in words (default: 150)

Available Resources

1. llm://model/info

Information about the loaded local model.

2. llm://conversations/history

History of conversations with the LLM.

3. llm://system/status

Current system status and resource usage.
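
Resources are fetched with a resources/read request that passes one of the URIs above, for example:

    {
      "jsonrpc": "2.0",
      "id": 2,
      "method": "resources/read",
      "params": {
        "uri": "llm://model/info"
      }
    }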

Installation

  1. Clone or download this repository to your local machine.

  2. Install dependencies:

    pip install -r requirements.txt
    
  3. Set up environment variables (create a .env file):

    MODEL_NAME=Qwen/Qwen3-8B
    MAX_NEW_TOKENS=512
    TEMPERATURE=0.7
    
  4. Ensure you have sufficient hardware:

    • GPU with at least 8GB VRAM (recommended)
    • 16GB+ system RAM
    • CUDA-compatible GPU (for optimal performance)

Usage

Running the Server

  1. Start the MCP server:

    python mcp_server.py
    
  2. The server will:

    • Load the specified model (Qwen3-8B by default)
    • Initialize the MCP server
    • Wait for JSON-RPC requests via stdin/stdout
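
The stdio transport in step 2 is newline-delimited JSON-RPC: one request per line on stdin, one response per line on stdout. A stripped-down sketch of that loop (the actual dispatch in mcp_server.py is more involved):

    import json
    import sys

    for line in sys.stdin:
        request = json.loads(line)
        # ... dispatch on request["method"] (tools/call, resources/read, ...) ...
        response = {"jsonrpc": "2.0", "id": request.get("id"), "result": {}}
        sys.stdout.write(json.dumps(response) + "\n")
        sys.stdout.flush()  # flush so the client sees the response immediately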

Integrating with Claude Desktop

To use this server with Claude Desktop, add it to your MCP configuration:

  1. Create or edit your Claude Desktop MCP configuration file:

    • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
    • Windows: %APPDATA%\Claude\claude_desktop_config.json
  2. Add the server configuration:

    {
      "mcpServers": {
        "local-llm": {
          "command": "python",
          "args": ["/Users/sazzad/Documents/mcp-server/mcp_server.py"],
          "env": {
            "MODEL_NAME": "Qwen/Qwen3-8B"
          }
        }
      }
    }
    
  3. Restart Claude Desktop to load the new server.

Example Usage in Claude Desktop

Once configured, you can use the tools in Claude Desktop. For example, ask:

"Please use the local LLM to analyze this Python code:"

Claude will then use the analyze_code tool to get an analysis from your local LLM.

Configuration

Environment Variables

  • MODEL_NAME: HuggingFace model name (default: "Qwen/Qwen3-8B")
  • MAX_NEW_TOKENS: Maximum tokens to generate (default: 512)
  • TEMPERATURE: Default temperature for generation (default: 0.7)
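
Assuming the server reads these via python-dotenv (an assumption; check requirements.txt), the defaults above translate to:

    import os
    from dotenv import load_dotenv

    load_dotenv()  # reads the .env file from the working directory, if present

    MODEL_NAME = os.getenv("MODEL_NAME", "Qwen/Qwen3-8B")
    MAX_NEW_TOKENS = int(os.getenv("MAX_NEW_TOKENS", "512"))
    TEMPERATURE = float(os.getenv("TEMPERATURE", "0.7"))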

Supported Models

This server works with most HuggingFace transformers models that support:

  • Chat templates
  • 4-bit quantization via BitsAndBytesConfig
  • Causal language modeling
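
A quick way to check the chat-template requirement for a candidate model is to render a dummy conversation with its tokenizer; depending on your transformers version, models without a template either raise an error or fall back to a generic default:

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": "Hello!"}],
        tokenize=False,
        add_generation_prompt=True,  # append the assistant-turn marker
    )
    print(prompt)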

Popular choices:

  • Qwen/Qwen3-8B (default)
  • microsoft/DialoGPT-medium
  • meta-llama/Llama-2-7b-chat-hf
  • mistralai/Mistral-7B-Instruct-v0.1

Hardware Requirements

Minimum Requirements

  • 8GB RAM
  • 4GB GPU VRAM (with 4-bit quantization)
  • 10GB disk space (for model storage)

Recommended Requirements

  • 16GB+ RAM
  • 8GB+ GPU VRAM
  • CUDA-compatible GPU
  • SSD storage for faster model loading

Troubleshooting

Common Issues

  1. Out of Memory Error:

    • Reduce MAX_NEW_TOKENS
    • Use a smaller model
    • Ensure 4-bit quantization is working
  2. Model Loading Fails:

    • Check internet connection for initial download
    • Verify model name is correct
    • Ensure sufficient disk space
  3. CUDA Errors:

    • Update CUDA drivers
    • Check GPU compatibility
    • Fall back to CPU if needed
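
For the CPU fallback in item 3, the usual pattern is to probe CUDA availability at startup instead of hard-coding a device. A sketch (note that bitsandbytes 4-bit quantization itself requires a GPU, so the CPU path runs unquantized and slowly):

    import torch
    from transformers import AutoModelForCausalLM

    device = "cuda" if torch.cuda.is_available() else "cpu"

    model = AutoModelForCausalLM.from_pretrained(
        "Qwen/Qwen3-8B",
        torch_dtype=torch.float16 if device == "cuda" else torch.float32,
        device_map="auto" if device == "cuda" else None,
    )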

Performance Optimization

  1. Use GPU acceleration:

    pip install torch --index-url https://download.pytorch.org/whl/cu118
    
  2. Enable Flash Attention (if compatible; see the loading sketch after this list):

    pip install flash-attn
    
  3. Adjust batch size and sequence length based on your hardware.
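
Once flash-attn is installed, transformers can be told to use it at load time via the attn_implementation argument (supported for recent architectures; loading fails with an explanatory error if the model or GPU is incompatible):

    import torch
    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained(
        "Qwen/Qwen3-8B",
        torch_dtype=torch.bfloat16,  # flash attention requires fp16/bf16
        attn_implementation="flash_attention_2",
        device_map="auto",
    )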

Development

Project Structure

mcp-server/
├── mcp_server.py          # Main MCP server implementation
├── mcp-server.py          # Legacy FastAPI server (kept for reference)
├── requirements.txt       # Python dependencies
├── mcp/                   # MCP protocol implementation
│   ├── __init__.py
│   ├── server.py          # MCP server base
│   └── types.py           # MCP protocol types
├── .env                   # Environment variables (create this)
└── README.md              # This file
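
requirements.txt is not reproduced here. Since the MCP protocol layer lives in the local mcp/ directory, it presumably covers just the model stack; a plausible, unverified set:

    torch
    transformers
    accelerate        # needed for device_map="auto"
    bitsandbytes      # 4-bit quantization backend
    python-dotenv     # .env loading (assumed above)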

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Test thoroughly
  5. Submit a pull request

License

MIT License - see LICENSE file for details.

Support

For issues and questions:

  1. Check the troubleshooting section
  2. Review the configuration
  3. Open an issue on GitHub
