MCP ComfyUI Flux - Optimized Docker Solution
A fully containerized MCP (Model Context Protocol) server for generating images with FLUX models via ComfyUI. Features optimized Docker builds, PyTorch 2.5.1, automatic GPU acceleration, and Claude Desktop integration.
Features
- Optimized Performance: PyTorch 2.5.1 with native RMSNorm support
- Efficient Images: 25% smaller Docker images (10.9GB vs 14.6GB)
- Fast Rebuilds: BuildKit cache mounts for rapid iterations
- FLUX Models: Supports schnell (4-step) and dev models with fp8 quantization
- MCP Integration: Works seamlessly with Claude Desktop
- GPU Acceleration: Automatic NVIDIA GPU detection and CUDA 12.1
- Background Removal: Built-in RMBG-2.0 for transparent backgrounds
- Image Upscaling: 4x upscaling with UltraSharp/AnimeSharp models
- Production Ready: Health checks, auto-recovery, extensive logging
Table of Contents
- Quick Start
- System Requirements
- Installation
- MCP Tools
- Docker Management
- Advanced Features
- Troubleshooting
- Architecture
Quick Start
# Clone the repository
git clone <repository-url> mcp-comfyui-flux
cd mcp-comfyui-flux
# Run the automated installer
./install.sh
# Or build manually with the optimized build script
./build.sh --start
# That's it! The installer will:
# - Check prerequisites
# - Configure environment
# - Download FLUX models
# - Build optimized Docker containers
# - Start all services
System Requirements
Minimum Requirements
- OS: Linux, macOS, Windows 10+ (WSL2)
- CPU: 4 cores
- RAM: 16GB (20GB for WSL2)
- Storage: 50GB free space
- Docker: 20.10+
- Docker Compose: 2.0+ or 1.29+ (legacy)
Recommended Requirements
- CPU: 8+ cores
- RAM: 32GB
- GPU: NVIDIA RTX 3090/4090 (12GB+ VRAM)
- Storage: 100GB free space
- CUDA: 12.1+ with NVIDIA Container Toolkit
WSL2 Specific (Windows)
# .wslconfig in Windows user directory
[wsl2]
memory=20GB
processors=8
localhostForwarding=true
Installation
Prerequisites
1. Install Docker:

# Ubuntu/Debian
curl -fsSL https://get.docker.com | bash

# macOS
brew install docker docker-compose

# Windows: install Docker Desktop

2. Install the NVIDIA Container Toolkit (for GPU support):

# Ubuntu/Debian
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
Automated Installation
# Standard installation
./install.sh
# Non-interactive installation
./install.sh --yes
# CPU-only mode
./install.sh --cpu-only
# With specific models
./install.sh --models minimal # or all/none/auto
# Debug mode
./install.sh --debug
Build Script Options
# Build only
./build.sh
# Build and start
./build.sh --start
# Build with cleanup
./build.sh --start --cleanup
# Rebuild without cache
./build.sh --no-cache
MCP Tools
Available Tools in Claude Desktop
1. generate_image
Generate images using the FLUX schnell fp8 model (optimized defaults).
// Parameters
{
"prompt": "a majestic mountain landscape, golden hour", // Required
"negative_prompt": "blurry, low quality", // Optional
"width": 1024, // Default: 1024
"height": 1024, // Default: 1024
"steps": 4, // Default: 4 (schnell optimized)
"cfg_scale": 1.0, // Default: 1.0 (schnell optimized)
"seed": -1, // Default: -1 (random)
"batch_size": 1 // Default: 1 (max: 8)
}
// Example usage
generate_image({
prompt: "cyberpunk city at night, neon lights, detailed",
steps: 4,
seed: 42
})
2. upscale_image
Upscale images to 4x resolution using AI models.
// Parameters
{
"image_path": "flux_output_00001_.png", // Required
"model": "ultrasharp", // Options: "ultrasharp", "animesharp"
"scale_factor": 1.0, // Additional scaling (0.5-2.0)
"content_type": "general" // Auto-select model based on content
}
// Example usage
upscale_image({
image_path: "output/my_image.png",
model: "ultrasharp"
})
3. remove_background
Remove the background using the RMBG-2.0 AI model.
// Parameters
{
"image_path": "output/image.png", // Required
"alpha_matting": true, // Better edge quality (default: true)
"output_format": "png" // Options: "png", "webp"
}
// Example usage
remove_background({
image_path: "flux_output_00001_.png"
})
4. check_models
Verify available models in ComfyUI.
// No parameters required
check_models()
5. connect_comfyui / disconnect_comfyui
Manage ComfyUI connection (usually auto-connects).
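Explicit calls are rarely needed, but can help re-establish the link after restarting the ComfyUI container:
// Example usage
connect_comfyui()
// ... run generation tools ...
disconnect_comfyui()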
MCP Configuration
Add to Claude Desktop config (%APPDATA%\Claude\claude_desktop_config.json on Windows):
{
"mcpServers": {
"comfyui-flux": {
"command": "wsl.exe",
"args": [
"bash", "-c",
"cd /path/to/mcp-comfyui-flux && docker exec -i mcp-comfyui-flux-mcp-server-1 node /app/src/index.js"
]
}
}
}
For macOS/Linux:
{
"mcpServers": {
"comfyui-flux": {
"command": "docker",
"args": [
"exec", "-i", "mcp-comfyui-flux-mcp-server-1",
"node", "/app/src/index.js"
]
}
}
}
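To verify the MCP wiring without Claude Desktop, you can drive the same stdio transport by hand: MCP messages are newline-delimited JSON-RPC. A minimal sketch (assuming Node.js on the host and the default container name; the protocolVersion string is an example and may need to match what your server expects):
// verify-mcp.mjs - send an MCP initialize request over docker exec stdio
import { spawn } from "node:child_process";

const server = spawn("docker", [
  "exec", "-i", "mcp-comfyui-flux-mcp-server-1",
  "node", "/app/src/index.js",
]);

// Echo whatever the server sends back
server.stdout.on("data", (chunk) => process.stdout.write(`<- ${chunk}`));
server.stderr.on("data", (chunk) => process.stderr.write(chunk));

// One JSON-RPC message per line
server.stdin.write(JSON.stringify({
  jsonrpc: "2.0",
  id: 1,
  method: "initialize",
  params: {
    protocolVersion: "2024-11-05", // example version string
    capabilities: {},
    clientInfo: { name: "handshake-test", version: "0.0.1" },
  },
}) + "\n");

// Give the server a moment to answer, then exit
setTimeout(() => server.kill(), 3000);
A JSON response containing serverInfo and capabilities confirms the container and MCP server are reachable.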
Docker Management
Service Commands
# Start services
docker-compose -p mcp-comfyui-flux up -d
# Stop services
docker-compose -p mcp-comfyui-flux down
# View logs
docker-compose -p mcp-comfyui-flux logs -f
docker-compose -p mcp-comfyui-flux logs -f comfyui
# Check status
docker-compose -p mcp-comfyui-flux ps
# Restart services
docker-compose -p mcp-comfyui-flux restart
Container Access
# Access ComfyUI container
docker exec -it mcp-comfyui-flux-comfyui-1 bash
# Access MCP server
docker exec -it mcp-comfyui-flux-mcp-server-1 sh
# Check GPU status
docker exec mcp-comfyui-flux-comfyui-1 nvidia-smi
# Test PyTorch
docker exec mcp-comfyui-flux-comfyui-1 python3.11 -c "import torch; print(f'PyTorch {torch.__version__}')"
Health Monitoring
# Full health check
./scripts/health-check.sh
# Check ComfyUI API
curl http://localhost:8188/system_stats
# Container health status
docker inspect mcp-comfyui-flux-comfyui-1 --format='{{.State.Health.Status}}'
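The same check is easy to script. A minimal sketch (assuming Node.js 18+ for built-in fetch; save it as check-comfyui.mjs so top-level await works; the devices/vram field names follow ComfyUI's /system_stats response and may differ between versions):
// check-comfyui.mjs - poll ComfyUI's system_stats endpoint and report VRAM
const res = await fetch("http://localhost:8188/system_stats");
if (!res.ok) throw new Error(`ComfyUI unhealthy: HTTP ${res.status}`);

const stats = await res.json();
for (const device of stats.devices ?? []) {
  const freeGB = (device.vram_free / 1024 ** 3).toFixed(1);   // bytes -> GB
  const totalGB = (device.vram_total / 1024 ** 3).toFixed(1);
  console.log(`${device.name}: ${freeGB}GB free of ${totalGB}GB VRAM`);
}
Run it with node check-comfyui.mjs; a thrown error (non-zero exit) signals an unreachable or unhealthy service.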
Advanced Features
Performance Optimizations
The optimized build includes:
- PyTorch 2.5.1: Latest stable with native RMSNorm support
- BuildKit Cache Mounts: Reduces I/O operations in WSL2
- FP8 Quantization: FLUX schnell fp8 uses ~10GB VRAM (vs 24GB fp16)
- Multi-stage Builds: Separates build and runtime dependencies
- Compiled Python: Pre-compiled bytecode for faster startup
FLUX Model Configurations
Schnell (Default - Fast)
- Steps: 4 (optimized for schnell)
- CFG Scale: 1.0 (works best with low guidance)
- Scheduler: simple
- Generation Time: ~2-4 seconds per image
- VRAM Usage: ~10GB base + 1GB per batch
Dev (High Quality)
- Steps: 20-50
- CFG Scale: 7.0
- Scheduler: normal/karras
- Requires: Hugging Face authentication
- VRAM Usage: ~12-16GB
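If the dev checkpoint is installed, the same generate_image tool accepts matching overrides. A hedged example (whether these settings actually route to the dev model depends on how your server's workflow is configured):
generate_image({
  prompt: "portrait of an astronaut, studio lighting, 85mm",
  steps: 28,        // dev guidance: 20-50 steps
  cfg_scale: 7.0    // higher guidance than schnell's 1.0
})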
Batch Generation
Generate multiple images efficiently:
generate_image({
prompt: "fantasy landscape",
batch_size: 4 // Generates 4 variations in parallel
})
Custom Nodes
Included custom nodes:
- ComfyUI-Manager: Node management and updates
- ComfyUI-KJNodes: Advanced processing nodes
- ComfyUI-RMBG: Background removal (31 nodes)
Troubleshooting
Common Issues
GPU Not Detected
# Verify NVIDIA driver
nvidia-smi
# Check Docker GPU support
docker run --rm --gpus all nvidia/cuda:12.1.0-base-ubuntu22.04 nvidia-smi
# Ensure NVIDIA Container Toolkit is installed
sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
Out of Memory
# Reduce batch size
batch_size: 1
# Use CPU mode (in .env)
CUDA_VISIBLE_DEVICES=-1
# Adjust PyTorch memory
PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:256
WSL2 Specific Issues
# If Docker/WSL2 crashes with I/O errors
# Avoid recursive chown on large directories
# Use the optimized Dockerfile which handles this
# Increase WSL2 memory in .wslconfig
memory=20GB
# Reset WSL2 if needed
wsl --shutdown
Port Conflicts
# Check what's using port 8188
lsof -i :8188 # macOS/Linux
netstat -ano | findstr :8188 # Windows
# Use different port
PORT=8189 docker-compose -p mcp-comfyui-flux up -d
Log Locations
- Installation: install.log
- Docker builds: docker-compose logs
- ComfyUI: inside the container at /app/ComfyUI/user/comfyui.log
- MCP Server: docker logs mcp-comfyui-flux-mcp-server-1
Architecture
System Overview
┌─────────────────────────────────────────┐
│       Claude Desktop (MCP Client)       │
└────────────┬────────────────────────────┘
             │ docker exec stdio
┌────────────┴────────────────────────────┐
│          MCP Server Container           │
│  • Node.js 20 Alpine (581MB)            │
│  • MCP Protocol Implementation          │
│  • Auto-connects to ComfyUI             │
└────────────┬────────────────────────────┘
             │ WebSocket (port 8188)
┌────────────┴────────────────────────────┐
│            ComfyUI Container            │
│  • Ubuntu 22.04 + CUDA 12.1             │
│  • Python 3.11 + PyTorch 2.5.1          │
│  • FLUX schnell fp8 (4.5GB)             │
│  • Custom nodes (KJNodes, RMBG)         │
│  • Optimized image size: 10.9GB         │
└─────────────────────────────────────────┘
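Communication with ComfyUI uses ComfyUI's standard API: a workflow is submitted via HTTP POST to /prompt, and progress events stream back over the WebSocket. A minimal sketch of listening to that event stream (illustrative only, not the project's actual comfyui-client.js; assumes npm install ws):
// comfyui-probe.mjs - watch ComfyUI execution events over WebSocket
import WebSocket from "ws";
import { randomUUID } from "node:crypto";

const clientId = randomUUID();
const ws = new WebSocket(`ws://localhost:8188/ws?clientId=${clientId}`);

ws.on("open", () => console.log("connected to ComfyUI"));
ws.on("message", (data, isBinary) => {
  if (isBinary) return; // preview frames arrive as binary; skip them
  const msg = JSON.parse(data.toString());
  if (msg.type === "executing") console.log("running node:", msg.data.node);
  if (msg.type === "status") {
    console.log("queue remaining:", msg.data.status.exec_info.queue_remaining);
  }
});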
Key Improvements
Docker Optimization
- Multi-stage builds reduce image size by 25%
- BuildKit cache mounts speed up rebuilds
- No Python venv (Docker IS the isolation)

Model Configuration
- FLUX schnell fp8: 4.5GB (vs 11GB fp16)
- T5-XXL fp8: 4.9GB text encoder
- CLIP-L: 235MB text encoder
- VAE: 320MB decoder

Performance
- 4-step generation in 2-4 seconds
- Batch processing up to 8 images
- Native RMSNorm in PyTorch 2.5.1
- High VRAM mode for 24GB+ GPUs
Directory Structure
mcp-comfyui-flux/
├── src/                   # MCP server source
│   ├── index.js           # MCP protocol handler
│   ├── comfyui-client.js  # WebSocket client
│   └── workflows/         # ComfyUI workflows
├── models/                # Model storage
│   ├── unet/              # FLUX models (fp8)
│   ├── clip/              # Text encoders
│   ├── vae/               # VAE models
│   └── upscale_models/    # Upscaling models
├── output/                # Generated images
├── scripts/               # Utility scripts
├── docker-compose.yml     # Service orchestration
├── Dockerfile.comfyui     # Optimized ComfyUI
├── Dockerfile.mcp         # MCP server
├── requirements.txt       # Python dependencies
├── build.sh               # Build script
└── install.sh             # Automated installer
Security
- Local Execution: All processing happens locally
- No External APIs: Except model downloads from Hugging Face
- Container Isolation: Services run in isolated containers
- Non-root Execution: Containers run as non-root user
- Token Security: Stored in .env (gitignored)
Additional Documentation
- CLAUDE.md - Claude Code development guide
- ARCHITECTURE.md - Technical architecture details
- API.md - Complete MCP API reference
- TROUBLESHOOTING.md - Detailed troubleshooting
Contributing
Contributions welcome! Please:
- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request
License
MIT License - see LICENSE file for details.
Acknowledgments
- ComfyUI - The workflow engine
- Black Forest Labs - FLUX model creators
- Anthropic - MCP protocol and Claude
- NVIDIA - CUDA and GPU support
Made with ❤️ for efficient AI image generation