MCPMake

An MCP (Model Context Protocol) server for managing and running Python scripts with LLM-extracted schemas - like make, but smarter.

Features

  • Automatic Schema Extraction: Uses LLMs (Claude Sonnet 4 or GPT-4.1) to analyze Python scripts and extract argument schemas
  • Script Registry: Store and manage multiple scripts with metadata
  • Input Validation: Validates arguments against JSON Schema before execution
  • Execution History: Tracks all script runs with full output logs
  • Environment Variables: Pass custom env vars per execution
  • Flexible Execution: Custom Python interpreters, timeouts, and output truncation
  • Update & Re-analyze: Refresh script schemas when code changes
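The input-validation feature can be sketched in miniature. MCPMake itself validates against the LLM-extracted schema with the jsonschema library; the stand-in below mimics the idea (required keys plus basic type checks) using only the standard library, and the schema shown is a hypothetical example of what extraction might produce for a data-processing script, not actual MCPMake output:

```python
# Simplified sketch of the validation step. The real server uses the
# jsonschema library; this minimal checker only covers required keys
# and basic types. The schema below is illustrative.
TYPE_MAP = {"string": str, "boolean": bool, "integer": int, "number": float}

schema = {
    "type": "object",
    "properties": {
        "input_file": {"type": "string"},
        "output_dir": {"type": "string"},
        "verbose": {"type": "boolean"},
    },
    "required": ["input_file", "output_dir"],
}

def validate_args(args: dict, schema: dict) -> list[str]:
    """Return a list of validation errors (empty if args are valid)."""
    errors = []
    for key in schema.get("required", []):
        if key not in args:
            errors.append(f"missing required argument: {key}")
    for key, value in args.items():
        prop = schema["properties"].get(key)
        if prop is None:
            errors.append(f"unknown argument: {key}")
        elif not isinstance(value, TYPE_MAP[prop["type"]]):
            errors.append(f"{key}: expected {prop['type']}")
    return errors

print(validate_args({"input_file": "data.csv", "output_dir": "/tmp"}, schema))  # []
print(validate_args({"verbose": "yes"}, schema))  # three errors
```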

Installation

# Clone or navigate to the project directory
cd mcpmake

# Install in development mode
pip install -e .

Configuration

Set up API keys

You'll need an API key for either Anthropic or OpenAI (or both):

export ANTHROPIC_API_KEY="your-key-here"
# or
export OPENAI_API_KEY="your-key-here"

Add to MCP settings

Add the server to your MCP client configuration (e.g., Claude Desktop):

{
  "mcpServers": {
    "mcpmake": {
      "command": "python",
      "args": ["-m", "mcpmake.server"],
      "env": {
        "ANTHROPIC_API_KEY": "your-key-here"
      }
    }
  }
}

Usage

1. Register a Script

# Register a Python script with automatic schema extraction
register_script(
    name="data_processor",
    path="/path/to/script.py",
    description="Processes data files",  # optional, auto-generated if omitted
    python_path="/usr/bin/python3",      # optional
    timeout_seconds=240,                  # optional, default 240
    min_lines=1,                          # optional, default 1
    llm_provider="anthropic"              # optional, "anthropic" or "openai"
)

2. List Scripts

list_scripts()
# Shows all registered scripts with descriptions

3. Get Script Info

get_script_info(name="data_processor")
# Shows detailed schema, path, recent runs, etc.

4. Run a Script

run_script(
    name="data_processor",
    args={
        "input_file": "data.csv",
        "output_dir": "/tmp/output",
        "verbose": True
    },
    env_vars={                    # optional
        "API_KEY": "secret123"
    },
    python_path="/usr/bin/python3",  # optional, overrides default
    timeout=300,                      # optional, overrides default
    output_lines=100                  # optional, default 100
)

5. View Run History

get_run_history(
    name="data_processor",  # optional, shows all scripts if omitted
    limit=10                # optional, default 10
)

6. Update Script Schema

# Re-analyze script after code changes
update_script(
    name="data_processor",
    llm_provider="anthropic"  # optional
)

7. Delete Script

delete_script(name="data_processor")

Data Storage

MCPMake stores data in ~/.mcpmake/:

~/.mcpmake/
├── scripts.json          # Script registry and metadata
├── history.jsonl         # Execution history log
└── outputs/              # Full script outputs
    ├── script1_timestamp.log
    └── script2_timestamp.log
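Because history.jsonl is one JSON object per line, it is easy to query with a few lines of Python. The field names below are an assumption inferred from the run details listed under "How It Works"; check your own history.jsonl for the actual keys:

```python
# Hypothetical sketch: filtering the execution history log for failed runs.
# The field names ("exit_code", etc.) are assumed, not documented.
import json

def failed_runs(lines):
    """Yield parsed history entries whose exit code is nonzero."""
    for line in lines:
        entry = json.loads(line)
        if entry.get("exit_code", 0) != 0:
            yield entry

sample_lines = [
    '{"script": "data_processor", "exit_code": 0, "duration_s": 1.2}',
    '{"script": "data_processor", "exit_code": 1, "duration_s": 0.4}',
]
print([e["exit_code"] for e in failed_runs(sample_lines)])  # [1]
```

In practice you would iterate over `open(Path.home() / ".mcpmake" / "history.jsonl")` instead of the sample lines.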

How It Works

  1. Registration: When you register a script, MCPMake:

    • Reads the script file
    • Sends it to an LLM (Claude Sonnet 4 or GPT-4.1)
    • Extracts a JSON Schema describing the script's arguments
    • Extracts a description from docstrings/comments
    • Stores everything in scripts.json
  2. Execution: When you run a script:

    • Validates your arguments against the stored JSON Schema
    • Checks if the script file still exists
    • Builds command-line arguments from your input
    • Runs the script with specified Python interpreter and env vars
    • Captures stdout/stderr with timeout protection
    • Saves full output to a log file
    • Returns truncated output (first N lines)
    • Logs execution details to history
  3. History: All runs are logged with:

    • Timestamp, arguments, exit code
    • Execution time
    • Full output file path
    • Environment variables used
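The "builds command-line arguments" step can be sketched as follows, assuming argparse-style scripts. The snake-case-to-kebab-case flag convention and bare boolean flags are assumptions for illustration, not MCPMake's documented behavior:

```python
# Sketch of turning a validated args dict into a command line.
# Assumes argparse-style targets: keys become --kebab-case flags,
# True booleans become bare flags (store_true), False ones are dropped.
def build_argv(python_path: str, script_path: str, args: dict) -> list[str]:
    argv = [python_path, script_path]
    for key, value in args.items():
        flag = "--" + key.replace("_", "-")
        if isinstance(value, bool):
            if value:
                argv.append(flag)
        else:
            argv.extend([flag, str(value)])
    return argv

argv = build_argv("/usr/bin/python3", "/path/to/script.py",
                  {"input_file": "data.csv", "verbose": True})
print(argv)
# ['/usr/bin/python3', '/path/to/script.py', '--input-file', 'data.csv', '--verbose']
```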

Example Python Scripts

MCPMake works best with scripts that use:

argparse

import argparse

parser = argparse.ArgumentParser(description="Process data files")
parser.add_argument("--input-file", required=True, help="Input CSV file")
parser.add_argument("--output-dir", required=True, help="Output directory")
parser.add_argument("--verbose", action="store_true", help="Verbose output")
args = parser.parse_args()

click

import click

@click.command()
@click.option("--input-file", required=True, help="Input CSV file")
@click.option("--output-dir", required=True, help="Output directory")
@click.option("--verbose", is_flag=True, help="Verbose output")
def main(input_file, output_dir, verbose):
    pass

Simple functions

def main(input_file: str, output_dir: str, verbose: bool = False):
    """
    Process data files.

    Args:
        input_file: Path to input CSV file
        output_dir: Output directory path
        verbose: Enable verbose logging
    """
    pass

Requirements

  • Python 3.10+
  • MCP SDK
  • Anthropic SDK (for Claude)
  • OpenAI SDK (for GPT)
  • jsonschema

License

MIT
