
Prompt Auto-Optimizer MCP
An MCP server that automatically optimizes AI prompts using evolutionary algorithms, helping improve prompt performance, creativity, and reliability through iterative testing and refinement.
Tools
gepa_start_evolution
Initialize evolution process with configuration and seed prompt
gepa_record_trajectory
Record execution trajectory for prompt evaluation
gepa_evaluate_prompt
Evaluate prompt candidate performance across multiple tasks
gepa_reflect
Analyze failures and generate prompt improvements
gepa_get_pareto_frontier
Retrieve optimal candidates from Pareto frontier
gepa_select_optimal
Select best prompt candidate for given context
gepa_create_backup
Create system backup including evolution state and trajectories
gepa_restore_backup
Restore system from a specific backup
gepa_list_backups
List available system backups
gepa_recovery_status
Get comprehensive disaster recovery status and health information
gepa_integrity_check
Perform comprehensive data integrity check
gepa_recover_component
Recover a specific GEPA component
README
Prompt Auto-Optimizer MCP
AI-Powered Prompt Evolution - An MCP server that automatically optimizes your AI prompts using evolutionary algorithms.
🎯 Purpose
Automatically evolve and optimize AI prompts to improve performance, creativity, and reliability. Uses genetic algorithms to iteratively improve prompts based on real performance data.
🛠️ Installation
# Clone and install
git clone https://github.com/your-org/prompt-auto-optimizer-mcp.git
cd prompt-auto-optimizer-mcp
npm install
npm run build
# Start the MCP server
npm run mcp:start
⚙️ Configuration
Add to your Claude Code settings (.claude/settings.json):
{
  "mcp": {
    "servers": {
      "prompt-optimizer": {
        "command": "npx",
        "args": ["prompt-auto-optimizer-mcp"],
        "cwd": "./path/to/prompt-auto-optimizer-mcp"
      }
    }
  }
}
🔧 Available Tools
Core Optimization Tools
gepa_start_evolution
Start optimizing a prompt using evolutionary algorithms.
{
  taskDescription: string;       // What you want to optimize for
  seedPrompt?: string;           // Starting prompt (optional)
  config?: {
    populationSize?: number;     // How many variants to test (default: 20)
    generations?: number;        // How many iterations (default: 10)
    mutationRate?: number;       // How much to change prompts (default: 0.15)
  };
}
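An illustrative call might look like the following; the task description and seed prompt are made-up values, and the config fields simply restate the documented defaults:
  {
    taskDescription: "Summarize customer support tickets in two sentences",  // example task, not a preset
    seedPrompt: "Summarize the following ticket:",                           // illustrative starting prompt
    config: {
      populationSize: 20,   // documented default
      generations: 10,      // documented default
      mutationRate: 0.15    // documented default
    }
  }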
gepa_evaluate_prompt
Test how well a prompt performs on specific tasks.
{
  promptId: string;       // Which prompt to test
  taskIds: string[];      // What tasks to test it on
  rolloutCount?: number;  // How many times to test (default: 5)
}
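For example, to test one candidate against two tasks (the IDs below are placeholders; their exact format depends on what the server returned earlier):
  {
    promptId: "prompt-123",         // placeholder prompt ID returned by the server
    taskIds: ["task-a", "task-b"],  // placeholder task IDs
    rolloutCount: 5                 // documented default
  }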
gepa_reflect
Analyze why prompts fail and get improvement suggestions.
{
  trajectoryIds: string[];             // Which test runs to analyze
  targetPromptId: string;              // Which prompt needs improvement
  analysisDepth?: 'shallow' | 'deep';  // How detailed (default: 'deep')
}
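A call analyzing two failed runs could look like this (IDs are placeholders):
  {
    trajectoryIds: ["traj-001", "traj-002"],  // placeholder IDs from gepa_record_trajectory
    targetPromptId: "prompt-123",             // placeholder prompt ID
    analysisDepth: "deep"                     // documented default
  }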
gepa_get_pareto_frontier
Get the best prompt candidates that balance multiple goals.
{
  minPerformance?: number;  // Minimum quality threshold
  limit?: number;           // Max results to return (default: 10)
}
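For instance, to fetch at most five candidates above a quality threshold (the threshold value is chosen purely for illustration):
  {
    minPerformance: 0.8,  // illustrative threshold
    limit: 5              // return fewer than the default of 10
  }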
gepa_select_optimal
Choose the best prompt for your specific use case.
{
  taskContext?: string;        // Describe your use case
  performanceWeight?: number;  // How much to prioritize accuracy (default: 0.7)
  diversityWeight?: number;    // How much to prioritize creativity (default: 0.3)
}
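For example, favoring accuracy over creativity for a policy-sensitive use case (the context string and weights are illustrative):
  {
    taskContext: "Customer support chatbot that must follow policy closely",  // example use case
    performanceWeight: 0.8,  // weight accuracy more heavily than the 0.7 default
    diversityWeight: 0.2     // weight creativity less than the 0.3 default
  }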
gepa_record_trajectory
Log the results of prompt executions for analysis.
{
  promptId: string;                 // Which prompt was used
  taskId: string;                   // What task was performed
  executionSteps: ExecutionStep[];  // What happened during execution
  result: {
    success: boolean;               // Did it work?
    score: number;                  // How well did it work?
  };
}
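An illustrative record might look like the following; the exact shape of ExecutionStep is defined by the server's own types, so the empty array here is only a placeholder:
  {
    promptId: "prompt-123",  // placeholder ID of the prompt that ran
    taskId: "task-a",        // placeholder task ID
    executionSteps: [],      // ExecutionStep objects go here; their shape is defined by the server
    result: {
      success: true,         // the run completed as intended
      score: 0.87            // illustrative quality score
    }
  }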
Backup & Recovery Tools
gepa_create_backup - Save current optimization state
gepa_restore_backup - Restore from a previous backup
gepa_list_backups - Show available backups
gepa_recovery_status - Check system health
gepa_integrity_check - Verify data integrity
📝 Basic Usage
- Start Evolution: Use gepa_start_evolution with your task description
- Record Results: Use gepa_record_trajectory to log how prompts perform
- Analyze Failures: Use gepa_reflect to understand what went wrong
- Get Best Prompts: Use gepa_select_optimal to find the best candidates
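Putting the four steps together, a rough TypeScript sketch of the loop might look like this. It assumes a generic callTool(name, args) helper provided by your MCP client (not something this package exports); the field names mirror the schemas above, while IDs and scores are placeholders.
  // Sketch only: callTool(name, args) stands in for however your MCP client invokes tools.
  async function optimizePrompt(callTool: (name: string, args: object) => Promise<unknown>) {
    // 1. Start Evolution
    await callTool("gepa_start_evolution", {
      taskDescription: "Summarize customer support tickets in two sentences",
    });

    // 2. Record Results (normally done after each real prompt execution)
    await callTool("gepa_record_trajectory", {
      promptId: "prompt-123",      // placeholder ID
      taskId: "task-a",            // placeholder ID
      executionSteps: [],          // shape defined by the server's ExecutionStep type
      result: { success: false, score: 0.4 },
    });

    // 3. Analyze Failures
    await callTool("gepa_reflect", {
      trajectoryIds: ["traj-001"], // placeholder ID
      targetPromptId: "prompt-123",
    });

    // 4. Get Best Prompts
    return callTool("gepa_select_optimal", {
      taskContext: "customer support summarization",
    });
  }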
🔧 Environment Variables
# Optional performance tuning
GEPA_MAX_CONCURRENT_PROCESSES=3 # Parallel execution limit
GEPA_DEFAULT_POPULATION_SIZE=20 # Default prompt variants
GEPA_DEFAULT_GENERATIONS=10 # Default iterations