MCP ComfyUI Flux - Optimized Docker Solution

License: MIT · Docker · PyTorch · CUDA

A fully containerized MCP (Model Context Protocol) server for generating images with FLUX models via ComfyUI. Features optimized Docker builds, PyTorch 2.5.1, automatic GPU acceleration, and Claude Desktop integration.

🌟 Features

  • 🚀 Optimized Performance: PyTorch 2.5.1 with native RMSNorm support
  • 📦 Efficient Images: 25% smaller Docker images (10.9GB vs 14.6GB)
  • ⚡ Fast Rebuilds: BuildKit cache mounts for rapid iterations
  • 🎨 FLUX Models: Supports schnell (4-step) and dev models with fp8 quantization
  • 🤖 MCP Integration: Works seamlessly with Claude Desktop
  • 💪 GPU Acceleration: Automatic NVIDIA GPU detection and CUDA 12.1
  • 🔄 Background Removal: Built-in RMBG-2.0 for transparent backgrounds
  • 📈 Image Upscaling: 4x upscaling with UltraSharp/AnimeSharp models
  • 🛡️ Production Ready: Health checks, auto-recovery, extensive logging

📋 Table of Contents

  • Quick Start
  • System Requirements
  • Installation
  • MCP Tools
  • Docker Management
  • Advanced Features
  • Troubleshooting
  • Architecture
  • Security
  • Contributing
  • License

🚀 Quick Start

# Clone the repository
git clone <repository-url> mcp-comfyui-flux
cd mcp-comfyui-flux

# Run the automated installer
./install.sh

# Or build manually with the optimized build script
./build.sh --start

# That's it! The installer will:
# - Check prerequisites
# - Configure environment
# - Download FLUX models
# - Build optimized Docker containers
# - Start all services
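
Once the stack is up, you can confirm ComfyUI is reachable before wiring it into Claude Desktop. This is the same endpoint the health checks use later in this README; adjust the port if you changed it:

# Verify the ComfyUI API is responding (default port 8188)
curl http://localhost:8188/system_stats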

💻 System Requirements

Minimum Requirements

  • OS: Linux, macOS, Windows 10+ (WSL2)
  • CPU: 4 cores
  • RAM: 16GB (20GB for WSL2)
  • Storage: 50GB free space
  • Docker: 20.10+
  • Docker Compose: 2.0+ or 1.29+ (legacy)

Recommended Requirements

  • CPU: 8+ cores
  • RAM: 32GB
  • GPU: NVIDIA RTX 3090/4090 (12GB+ VRAM)
  • Storage: 100GB free space
  • CUDA: 12.1+ with NVIDIA Container Toolkit

WSL2 Specific (Windows)

# .wslconfig in Windows user directory
[wsl2]
memory=20GB
processors=8
localhostForwarding=true
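
After editing .wslconfig, restart WSL so the new limits take effect (the same command appears under Troubleshooting):

# Run from Windows PowerShell or CMD, then reopen your WSL terminal
wsl --shutdown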

📦 Installation

Prerequisites

  1. Install Docker:

    # Ubuntu/Debian
    curl -fsSL https://get.docker.com | bash
    
    # macOS
    brew install docker docker-compose
    
    # Windows - Install Docker Desktop
    
  2. Install NVIDIA Container Toolkit (for GPU):

    # Ubuntu/Debian
    distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
    curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
    curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
      sudo tee /etc/apt/sources.list.d/nvidia-docker.list
    sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
    sudo systemctl restart docker
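
    # Optional: confirm Docker can reach the GPU (same image used in Troubleshooting)
    docker run --rm --gpus all nvidia/cuda:12.1.0-base-ubuntu22.04 nvidia-smi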
    

Automated Installation

# Standard installation
./install.sh

# Non-interactive installation
./install.sh --yes

# CPU-only mode
./install.sh --cpu-only

# With specific models
./install.sh --models minimal  # or all/none/auto

# Debug mode
./install.sh --debug

Build Script Options

# Build only
./build.sh

# Build and start
./build.sh --start

# Build with cleanup
./build.sh --start --cleanup

# Rebuild without cache
./build.sh --no-cache

🎨 MCP Tools

Available Tools in Claude Desktop

1. generate_image

Generate images using the FLUX schnell fp8 model (optimized defaults).

// Parameters
{
  "prompt": "a majestic mountain landscape, golden hour",  // Required
  "negative_prompt": "blurry, low quality",               // Optional
  "width": 1024,                                          // Default: 1024
  "height": 1024,                                         // Default: 1024
  "steps": 4,                                            // Default: 4 (schnell optimized)
  "cfg_scale": 1.0,                                      // Default: 1.0 (schnell optimized)
  "seed": -1,                                            // Default: -1 (random)
  "batch_size": 1                                        // Default: 1 (max: 8)
}

// Example usage
generate_image({
  prompt: "cyberpunk city at night, neon lights, detailed",
  steps: 4,
  seed: 42
})

2. upscale_image

Upscale images to 4x resolution using AI models.

// Parameters
{
  "image_path": "flux_output_00001_.png",  // Required
  "model": "ultrasharp",                   // Options: "ultrasharp", "animesharp"
  "scale_factor": 1.0,                     // Additional scaling (0.5-2.0)
  "content_type": "general"                // Auto-select model based on content
}

// Example usage
upscale_image({
  image_path: "output/my_image.png",
  model: "ultrasharp"
})

3. remove_background

Remove the background using the RMBG-2.0 AI model.

// Parameters
{
  "image_path": "output/image.png",  // Required
  "alpha_matting": true,              // Better edge quality (default: true)
  "output_format": "png"              // Options: "png", "webp"
}

// Example usage
remove_background({
  image_path: "flux_output_00001_.png"
})

4. check_models

Verify available models in ComfyUI.

// No parameters required
check_models()

5. connect_comfyui / disconnect_comfyui

Manage the ComfyUI connection (it usually auto-connects).

MCP Configuration

Add to Claude Desktop config (%APPDATA%\Claude\claude_desktop_config.json on Windows):

{
  "mcpServers": {
    "comfyui-flux": {
      "command": "wsl.exe",
      "args": [
        "bash", "-c",
        "cd /path/to/mcp-comfyui-flux && docker exec -i mcp-comfyui-flux-mcp-server-1 node /app/src/index.js"
      ]
    }
  }
}

For macOS/Linux:

{
  "mcpServers": {
    "comfyui-flux": {
      "command": "docker",
      "args": [
        "exec", "-i", "mcp-comfyui-flux-mcp-server-1",
        "node", "/app/src/index.js"
      ]
    }
  }
}
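
To sanity-check the exact command Claude Desktop will run, invoke it by hand. An MCP server speaking stdio should start and then wait for input on stdin (exit with Ctrl+C):

# Manual smoke test of the MCP server entry point
docker exec -i mcp-comfyui-flux-mcp-server-1 node /app/src/index.js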

🐳 Docker Management

Service Commands

# Start services
docker-compose -p mcp-comfyui-flux up -d

# Stop services
docker-compose -p mcp-comfyui-flux down

# View logs
docker-compose -p mcp-comfyui-flux logs -f
docker-compose -p mcp-comfyui-flux logs -f comfyui

# Check status
docker-compose -p mcp-comfyui-flux ps

# Restart services
docker-compose -p mcp-comfyui-flux restart

Container Access

# Access ComfyUI container
docker exec -it mcp-comfyui-flux-comfyui-1 bash

# Access MCP server
docker exec -it mcp-comfyui-flux-mcp-server-1 sh

# Check GPU status
docker exec mcp-comfyui-flux-comfyui-1 nvidia-smi

# Test PyTorch
docker exec mcp-comfyui-flux-comfyui-1 python3.11 -c "import torch; print(f'PyTorch {torch.__version__}')"

Health Monitoring

# Full health check
./scripts/health-check.sh

# Check ComfyUI API
curl http://localhost:8188/system_stats

# Container health status
docker inspect mcp-comfyui-flux-comfyui-1 --format='{{.State.Health.Status}}'

🚀 Advanced Features

Performance Optimizations

The optimized build includes:

  • PyTorch 2.5.1: Latest stable with native RMSNorm support
  • BuildKit Cache Mounts: Reduces I/O operations in WSL2
  • FP8 Quantization: FLUX schnell fp8 uses ~10GB VRAM (vs 24GB fp16)
  • Multi-stage Builds: Separates build and runtime dependencies
  • Compiled Python: Pre-compiled bytecode for faster startup
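
The cache mounts mentioned above only help when BuildKit is active. Recent Docker versions enable it by default, but it can be forced explicitly; a minimal sketch, assuming build.sh invokes docker build/compose under the hood:

# Force BuildKit for the build; a warm rebuild should then reuse cached layers
DOCKER_BUILDKIT=1 ./build.sh

# Time a second build to confirm the cache is being hit
time ./build.sh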

FLUX Model Configurations

Schnell (Default - Fast)

  • Steps: 4 (optimized for schnell)
  • CFG Scale: 1.0 (works best with low guidance)
  • Scheduler: simple
  • Generation Time: ~2-4 seconds per image
  • VRAM Usage: ~10GB base + 1GB per batch

Dev (High Quality)

  • Steps: 20-50
  • CFG Scale: 7.0
  • Scheduler: normal/karras
  • Requires: Hugging Face authentication
  • VRAM Usage: ~12-16GB

Batch Generation

Generate multiple images efficiently:

generate_image({
  prompt: "fantasy landscape",
  batch_size: 4  // Generates 4 variations in parallel
})

Custom Nodes

Included custom nodes:

  • ComfyUI-Manager: Node management and updates
  • ComfyUI-KJNodes: Advanced processing nodes
  • ComfyUI-RMBG: Background removal (31 nodes)
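
To see which custom node packs actually ended up in the image (assuming ComfyUI's standard custom_nodes layout under /app/ComfyUI):

# List installed custom node packages inside the ComfyUI container
docker exec mcp-comfyui-flux-comfyui-1 ls /app/ComfyUI/custom_nodes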

🔧 Troubleshooting

Common Issues

GPU Not Detected

# Verify NVIDIA driver
nvidia-smi

# Check Docker GPU support
docker run --rm --gpus all nvidia/cuda:12.1.0-base-ubuntu22.04 nvidia-smi

# Ensure NVIDIA Container Toolkit is installed
sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker

Out of Memory

# Reduce batch size
batch_size: 1

# Use CPU mode (in .env)
CUDA_VISIBLE_DEVICES=-1

# Adjust PyTorch memory
PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:256

WSL2 Specific Issues

# If Docker/WSL2 crashes with I/O errors
# Avoid recursive chown on large directories
# Use the optimized Dockerfile which handles this

# Increase WSL2 memory in .wslconfig
memory=20GB

# Reset WSL2 if needed
wsl --shutdown

Port Conflicts

# Check what's using port 8188
lsof -i :8188  # macOS/Linux
netstat -ano | findstr :8188  # Windows

# Use different port
PORT=8189 docker-compose -p mcp-comfyui-flux up -d

Log Locations

  • Installation: install.log
  • Docker builds: docker-compose logs
  • ComfyUI: Inside container at /app/ComfyUI/user/comfyui.log
  • MCP Server: docker logs mcp-comfyui-flux-mcp-server-1
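
For live debugging, both container logs can be followed from the host:

# Follow the ComfyUI log (path as listed above)
docker exec mcp-comfyui-flux-comfyui-1 tail -f /app/ComfyUI/user/comfyui.log

# Follow the MCP server log
docker logs -f mcp-comfyui-flux-mcp-server-1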

šŸ—ļø Architecture

System Overview

┌─────────────────────────────────────────┐
│      Claude Desktop (MCP Client)        │
└────────────┬────────────────────────────┘
             │ docker exec stdio
┌────────────▼────────────────────────────┐
│      MCP Server Container               │
│   • Node.js 20 Alpine (581MB)           │
│   • MCP Protocol Implementation         │
│   • Auto-connects to ComfyUI            │
└────────────┬────────────────────────────┘
             │ WebSocket (port 8188)
┌────────────▼────────────────────────────┐
│      ComfyUI Container                  │
│   • Ubuntu 22.04 + CUDA 12.1            │
│   • Python 3.11 + PyTorch 2.5.1         │
│   • FLUX schnell fp8 (4.5GB)            │
│   • Custom nodes (KJNodes, RMBG)        │
│   • Optimized image size: 10.9GB        │
└─────────────────────────────────────────┘

Key Improvements

  1. Docker Optimization

    • Multi-stage builds reduce image size by 25%
    • BuildKit cache mounts speed up rebuilds
    • No Python venv (Docker IS the isolation)
  2. Model Configuration

    • FLUX schnell fp8: 4.5GB (vs 11GB fp16)
    • T5-XXL fp8: 4.9GB text encoder
    • CLIP-L: 235MB text encoder
    • VAE: 320MB decoder
  3. Performance

    • 4-step generation in 2-4 seconds
    • Batch processing up to 8 images
    • Native RMSNorm in PyTorch 2.5.1
    • High VRAM mode for 24GB+ GPUs
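
Native RMSNorm support can be checked directly in the container (torch.nn.RMSNorm has shipped with PyTorch since 2.4):

# Verify the bundled PyTorch build exposes native RMSNorm
docker exec mcp-comfyui-flux-comfyui-1 python3.11 -c "import torch; print(torch.__version__, hasattr(torch.nn, 'RMSNorm'))"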

Directory Structure

mcp-comfyui-flux/
├── src/                    # MCP server source
│   ├── index.js           # MCP protocol handler
│   ├── comfyui-client.js  # WebSocket client
│   └── workflows/         # ComfyUI workflows
├── models/                # Model storage
│   ├── unet/             # FLUX models (fp8)
│   ├── clip/             # Text encoders
│   ├── vae/              # VAE models
│   └── upscale_models/   # Upscaling models
├── output/               # Generated images
├── scripts/              # Utility scripts
├── docker-compose.yml    # Service orchestration
├── Dockerfile.comfyui    # Optimized ComfyUI
├── Dockerfile.mcp        # MCP server
├── requirements.txt      # Python dependencies
├── build.sh             # Build script
└── install.sh           # Automated installer

🔒 Security

  • Local Execution: All processing happens locally
  • No External APIs: Except model downloads from Hugging Face
  • Container Isolation: Services run in isolated containers
  • Non-root Execution: Containers run as non-root user
  • Token Security: Stored in .env (gitignored)
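
As an example, the Hugging Face token required for the dev model download belongs in .env rather than in shell history. HF_TOKEN below is an illustrative name; check install.sh for the variable the scripts actually read:

# .env (gitignored) -- hypothetical token entry
HF_TOKEN=hf_xxxxxxxxxxxxxxxxxxxx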

📄 Additional Documentation

šŸ¤ Contributing

Contributions welcome! Please:

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Submit a pull request

šŸ“ License

MIT License - see LICENSE file for details.

šŸ™ Acknowledgments


Made with ❤️ for efficient AI image generation
