Runware MCP Server
Enables lightning-fast AI image and video generation, upscaling, background removal, captioning, and masking through the Runware API with automatic model selection and comprehensive validation.
Runware MCP Server
A powerful Model Context Protocol (MCP) server that provides lightning-fast image and video generation tools using the Runware API. This server supports both SSE (Server-Sent Events) transport for custom Claude connectors and direct Claude Desktop installation.
Features
Image Generation Tools
- imageInference: Full-featured image generation with advanced parameters
- photoMaker: Subject personalization with PhotoMaker technology
- imageUpscale: High-quality image resolution enhancement
- imageBackgroundRemoval: Background removal with multiple AI models
- imageCaption: AI-powered image description generation
- imageMasking: Automatic mask generation for faces, hands, and people
Video Generation Tools
- videoInference: Text-to-video and image-to-video generation
- listVideoModels: Discover available video models
- getVideoModelInfo: Get detailed model specifications
Utility Tools
- imageUpload: Upload local images to get Runware UUIDs
- modelSearch: Search and discover AI models on the platform
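Each of these tools maps onto a Runware API task. As a rough, illustrative sketch only (assuming the official runware Python SDK; the actual internals of runware_mcp_server.py may differ), an imageInference-style call looks like this:

# Illustrative sketch, assuming the official runware Python SDK
import asyncio
import os

from runware import Runware, IImageInference

async def generate() -> None:
    runware = Runware(api_key=os.environ["RUNWARE_API_KEY"])
    await runware.connect()
    request = IImageInference(
        positivePrompt="a lighthouse at dusk, cinematic lighting",
        model="civitai:943001@1055701",  # default image model (see Model Recommendations)
        width=1024,
        height=1024,
        numberResults=1,
    )
    images = await runware.imageInference(requestImage=request)
    for image in images:
        print(image.imageURL)

if __name__ == "__main__":
    asyncio.run(generate())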
Smart Features
- Automatic Model Selection: I2V uses klingai:5@2, T2V uses google:3@1 (see the sketch below)
- Input Validation: Prevents Claude upload URL pasting and validates dimensions
- Comprehensive Error Handling: Clear error messages and guidance
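A minimal sketch of the automatic video model selection (a hypothetical helper; the real server's logic may differ):

# Hypothetical helper illustrating the automatic video model defaults
def pick_video_model(seed_image: str | None) -> str:
    """Image-to-video defaults to klingai:5@2, text-to-video to google:3@1."""
    if seed_image:
        return "klingai:5@2"   # an input image was supplied -> I2V
    return "google:3@1"        # prompt only -> T2V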
Demo
Watch the demo video to see the Runware MCP server in action:
https://github.com/user-attachments/assets/9732096b-8513-455c-9759-cc88363c42f9
Architecture
[ MCP Client / AI Assistant ]
|
(connects via SSE over HTTP)
|
[ Uvicorn Server ]
|
[ Starlette App ]
|
[ FastMCP Server ]
|
[ Runware API ]
- Transport: SSE (Server-Sent Events) for real-time communication
- Framework: FastMCP with Starlette web framework
- Server: Uvicorn ASGI server
- API: Direct integration with Runware's AI services
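A minimal sketch of this stack (illustrative only; it assumes the official mcp Python SDK and is not the actual contents of runware_mcp_server.py):

# Minimal SSE stack: FastMCP tool server mounted in Starlette, served by Uvicorn
import uvicorn
from mcp.server.fastmcp import FastMCP
from starlette.applications import Starlette
from starlette.routing import Mount

mcp = FastMCP("runware")  # FastMCP server that would hold the Runware tools

@mcp.tool()
def ping() -> str:
    """Placeholder tool showing how tools are registered."""
    return "pong"

# FastMCP provides a ready-made SSE Starlette app; mount it and serve it with Uvicorn.
app = Starlette(routes=[Mount("/", app=mcp.sse_app())])

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8081)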
Prerequisites
- Python: 3.10 or higher
- Runware API Key: Get your API key from Runware Dashboard
- Dependencies: See requirements.txt or pyproject.toml
Installation
1. Clone the Repository
git clone https://github.com/Runware/MCP-Runware.git
cd MCP-Runware
2. Install Dependencies
# Using uv (recommended)
uv venv
source .venv/bin/activate
uv pip install .
# Or using pip
pip install -r requirements.txt
3. Environment Setup
Create a .env file in the project root:
RUNWARE_API_KEY=your_api_key_here
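The server reads the key from the environment at startup; a minimal sketch of that lookup (assuming python-dotenv, as is typical for .env files) looks like:

# Minimal sketch: load .env and fail fast if the key is missing (assumes python-dotenv)
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the project root
api_key = os.getenv("RUNWARE_API_KEY")
if not api_key:
    raise RuntimeError("RUNWARE_API_KEY is not set")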
Deployment Methods
Method 1: SSE Server (Recommended for Production)
Docker Deployment
# Build the Docker image
docker build -t runware_mcp_sse .
# Run the container
docker run --rm -p 8081:8081 runware_mcp_sse
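Once the container is running, any SSE-capable MCP client can connect on port 8081. A quick sanity check with the mcp Python SDK (the SSE endpoint path is assumed to be /sse, the FastMCP default) might look like:

# Connects to the SSE server and lists the available tools (assumes the mcp Python SDK)
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

async def main() -> None:
    async with sse_client("http://localhost:8081/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())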
Method 2: MCP Install (Direct Integration)
Install in Claude Desktop
# From the project directory
mcp install --with-editable . runware_mcp_server.py
Model Recommendations
Image Generation
- Default: civitai:943001@1055701 (SDXL-based)
- PhotoMaker: civitai:139562@344487 (RealVisXL V4.0)
- Background Removal: runware:109@1 (RemBG 1.4)
Video Generation
- Image-to-Video (I2V): klingai:5@2 (1920x1080)
- Text-to-Video (T2V): google:3@1 (1280x720)
You can browse all available models here: Runware Models
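Model identifiers like these are passed straight through as the model argument of a tool call. A hedged sketch (tool argument names are assumptions and may not match the server's actual schema; session is an initialized MCP ClientSession, as in the SSE client example above):

# Illustrative tool call; argument names are assumptions, not the server's actual schema
async def generate_with_default_model(session) -> None:
    result = await session.call_tool(
        "imageInference",
        {
            "positivePrompt": "a red fox in the snow",
            "model": "civitai:943001@1055701",  # default image model listed above
            "width": 1024,
            "height": 1024,
        },
    )
    print(result.content)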
Configuration
Environment Variables
- RUNWARE_API_KEY: Your Runware API key (required)
Input Validation
- Rejects Claude upload URLs (https://files.*): Claude tends to include base64 strings in its reasoning/thinking process, which rapidly fills the context window with garbage data. Learn more about this issue
- Supports local file paths, publicly accessible URLs (make sure the URL has a proper file extension such as JPG, PNG, or WEBP), and Runware UUIDs
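The checks above amount to roughly the following logic (a hypothetical sketch; the helper name and exact rules are illustrative, not the server's actual code):

# Hypothetical sketch of the input validation rules described above
import os
import re

ALLOWED_EXTENSIONS = (".jpg", ".jpeg", ".png", ".webp")
UUID_RE = re.compile(
    r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$", re.I
)

def validate_image_ref(value: str) -> str:
    """Accept a local path, a publicly accessible URL, or a Runware image UUID."""
    if value.startswith("https://files."):
        raise ValueError(
            "Claude upload URLs are rejected; upload the file with imageUpload "
            "or provide a publicly accessible URL instead."
        )
    if UUID_RE.match(value):
        return value  # Runware UUID
    if value.startswith(("http://", "https://")):
        if not value.lower().endswith(ALLOWED_EXTENSIONS):
            raise ValueError("Public URLs need a proper file extension (JPG, PNG, WEBP, ...)")
        return value
    if os.path.exists(value):
        return value  # local file path
    raise ValueError(f"Unrecognized image reference: {value}")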
Support
- Documentation: Runware API Docs
- Models: Browse All Models
- Dashboard: Runware Dashboard
- Issues: Create an issue in this repository
- Email: support@runware.ai