MCP Prompt Cleaner

This project was created as an enhancement of Prompt Cleaner, which was originally written in TypeScript. It is also my 'Rosetta Stone' project: a codebase I can follow closely to build a deeper understanding of Python. And, of course, it was coded with the help of Cursor.

A Model Context Protocol (MCP) server that uses AI to enhance and clean raw prompts, making them more clear, actionable, and effective.

Features

  • AI-Powered Enhancement: Uses large language models to improve prompt clarity and specificity
  • Concise System Prompt: Uses a structured, efficient prompt format for consistent results
  • Context-Aware Processing: Accepts additional context to guide the enhancement process
  • Mode-Specific Optimization: Supports both "general" and "code" modes for different use cases
  • Quality Assessment: Provides quality scores and detailed feedback on enhanced prompts
  • Two-Level Retry Strategy: HTTP-level retries for network issues, content-level retries for AI output quality
  • Exponential Backoff: Robust error handling with jitter to prevent a thundering herd of simultaneous retries (see the sketch after this list)
  • MCP Integration: Full MCP protocol compliance with stdio transport
  • Production Ready: Comprehensive test coverage, clean code, and robust error handling
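
For illustration, the content-level retry with exponential backoff and jitter could look like the following minimal sketch. This is not the project's actual client code; call_with_retries and its defaults are illustrative names.

import asyncio
import random

async def call_with_retries(send_request, max_retries=2, base_delay=0.5):
    """Retry an async LLM call with exponential backoff plus jitter."""
    for attempt in range(max_retries + 1):
        try:
            return await send_request()
        except Exception:
            if attempt == max_retries:
                raise  # out of retries; surface the error
            # exponential backoff (0.5s, 1s, 2s, ...) plus random jitter,
            # so many clients retrying at once don't synchronize
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            await asyncio.sleep(delay)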

Installation

Using uv (recommended)

uv sync

Using pip

pip install -e .

Note: This project uses pyproject.toml for dependency management.

Configuration

Local LLM (LMStudio) - Default Setup

The server is configured by default to work with local LLMs like LMStudio. No API key is required:

# Default configuration (no .env file needed)
# LLM_API_ENDPOINT=http://localhost:1234/v1/chat/completions
# LLM_API_KEY=None (not required for local LLMs)
# LLM_MODEL=local-model

Cloud LLM (OpenAI, Anthropic, etc.)

For cloud-based LLMs, create a .env file in the project root:

# LLM API Configuration
LLM_API_ENDPOINT=https://api.openai.com/v1/chat/completions
LLM_API_KEY=your-api-key-here
LLM_MODEL=gpt-4
LLM_TIMEOUT=60
LLM_MAX_TOKENS=600

# Retry Configuration
CONTENT_MAX_RETRIES=2

Note: .env file support is provided by pydantic-settings; no additional dependencies are required.
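
For reference, a minimal pydantic-settings class covering these variables might look like the sketch below; the project's actual config.py may be organized differently.

from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    """LLM configuration read from environment variables or a .env file."""
    model_config = SettingsConfigDict(env_file=".env")

    llm_api_endpoint: str = "http://localhost:1234/v1/chat/completions"
    llm_api_key: str | None = None  # not required for local LLMs
    llm_model: str = "local-model"
    llm_timeout: int = 60
    llm_max_tokens: int = 600
    content_max_retries: int = 2

settings = Settings()  # e.g. LLM_MODEL=gpt-4 in the environment overrides llm_model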

LMStudio Setup

  1. Download and install LMStudio
  2. Start LMStudio and load a model
  3. Start the local server (usually on http://localhost:1234)
  4. The MCP server will automatically connect to your local LLM
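
To confirm the local server is reachable before starting the MCP server, a quick check with httpx (already a project dependency) works; LMStudio exposes an OpenAI-compatible API, so /v1/models lists the loaded models:

import httpx

resp = httpx.get("http://localhost:1234/v1/models", timeout=5)
print(resp.status_code, resp.json())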

Running the Server

To run the MCP server:

python main.py
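
If you installed with uv, you can also run it inside uv's managed environment:

uv run python main.py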

Tool Usage

The server provides a clean_prompt tool that accepts:

  • raw_prompt (required): The user's raw, unpolished prompt
  • context (optional): Additional context about the task
  • mode (optional): Processing mode - "general" or "code" (default: "general")
  • temperature (optional): AI sampling temperature 0.0-1.0 (default: 0.2)
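
As a rough sketch of how a tool with this signature is typically registered using the MCP Python SDK's FastMCP (the actual main.py may be structured differently):

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("mcp-prompt-cleaner")

@mcp.tool()
async def clean_prompt(
    raw_prompt: str,
    context: str = "",
    mode: str = "general",
    temperature: float = 0.2,
) -> dict:
    """Enhance a raw prompt; returns the cleaned prompt plus quality info."""
    ...  # delegate to the implementation in tools/cleaner.py

if __name__ == "__main__":
    mcp.run()  # stdio transport by default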

Example Tool Call

The tool is called directly with parameters:

# Direct function call
result = await clean_prompt_tool(
    raw_prompt="help me write code",
    context="web development with Python",
    mode="code",
    temperature=0.1
)

Or via MCP protocol:

{
  "method": "tools/call",
  "params": {
    "name": "clean_prompt",
    "arguments": {
      "raw_prompt": "help me write code",
      "context": "web development with Python",
      "mode": "code",
      "temperature": 0.1
    }
  }
}

Example Response

{
  "cleaned": "Help me write Python code for web development. I need assistance with [specific task] using [framework/library]. The code should [requirements] and handle [error cases].",
  "notes": [
    "Added placeholders for specific task and framework",
    "Specified requirements and error handling"
  ],
  "open_questions": [
    "What specific web development task?",
    "Which Python framework?",
    "What are the exact requirements?"
  ],
  "risks": ["Without specific details, the code may not meet requirements"],
  "unchanged": false,
  "quality": {
    "score": 4,
    "reasons": ["Clear structure", "Identifies missing information", "Actionable guidance"]
  }
}
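
Since schemas.py holds Pydantic models for validation, the response shape above plausibly maps to models along these lines (field names are taken from the example; the model names are assumptions):

from pydantic import BaseModel

class Quality(BaseModel):
    score: int
    reasons: list[str]

class CleanedPrompt(BaseModel):
    cleaned: str
    notes: list[str]
    open_questions: list[str]
    risks: list[str]
    unchanged: bool
    quality: Quality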

MCP Client Configuration

Claude Desktop

For Local LLM (LMStudio) - No API Key Required

{
  "mcpServers": {
    "mcp-prompt-cleaner": {
      "command": "python",
      "args": ["main.py"]
    }
  }
}

For Cloud LLM (OpenAI, etc.) - API Key Required

{
  "mcpServers": {
    "mcp-prompt-cleaner": {
      "command": "python",
      "args": ["main.py"],
      "env": {
        "LLM_API_KEY": "your-api-key-here",
        "LLM_API_ENDPOINT": "https://api.openai.com/v1/chat/completions",
        "LLM_MODEL": "gpt-4"
      }
    }
  }
}

Other MCP Clients

The server uses stdio transport and can be configured with any MCP-compatible client by pointing to the main.py file.

Development

Running Tests

uv run pytest

Test Coverage

The project includes comprehensive tests for:

  • JSON extraction from mixed content (see the sketch after this list)
  • LLM client with retry logic
  • Prompt cleaning functionality
  • MCP protocol integration
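
To make "JSON extraction from mixed content" concrete: LLMs often wrap their JSON in prose or code fences, so the extractor must find the first balanced JSON object inside arbitrary text. The sketch below shows the general technique; it is not the project's utils/json_extractor.py, and it deliberately ignores edge cases such as braces inside string values.

import json

def extract_json(text: str) -> dict:
    """Return the first balanced top-level JSON object found in text."""
    start = text.find("{")
    while start != -1:
        depth = 0
        for i, ch in enumerate(text[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    try:
                        return json.loads(text[start:i + 1])
                    except json.JSONDecodeError:
                        break  # balanced but not valid JSON; try the next candidate
        start = text.find("{", start + 1)
    raise ValueError("no JSON object found in text")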

Project Structure

├── main.py                 # MCP server with tool registration
├── config.py              # Configuration management
├── schemas.py             # Pydantic models for validation
├── tools/
│   └── cleaner.py         # Main clean_prompt implementation
├── llm/
│   └── client.py          # AI API client with retry logic
├── utils/
│   └── json_extractor.py  # JSON extraction utilities
├── prompts/
│   └── cleaner.md         # AI system prompt
└── tests/                 # Comprehensive test suite

Requirements

  • Python 3.11+
  • MCP Python SDK
  • httpx for HTTP client
  • pydantic for data validation
  • pytest for testing

License

MIT
