
OpenAI Image Generation MCP Server
This project implements an MCP (Model Context Protocol) server that provides tools for generating and editing images using OpenAI's gpt-image-1 model via the official Python SDK.
Features
This MCP server provides the following tools:
- generate_image: Generates an image using OpenAI's gpt-image-1 model based on a text prompt and saves it (see the SDK sketch after this list).
  - Input Schema:
    {
      "type": "object",
      "properties": {
        "prompt": { "type": "string", "description": "The text description of the desired image(s)." },
        "model": { "type": "string", "default": "gpt-image-1", "description": "The model to use (currently 'gpt-image-1')." },
        "n": { "type": ["integer", "null"], "default": 1, "description": "The number of images to generate (Default: 1)." },
        "size": { "type": ["string", "null"], "enum": ["1024x1024", "1536x1024", "1024x1536", "auto"], "default": "auto", "description": "Image dimensions ('1024x1024', '1536x1024', '1024x1536', 'auto'). Default: 'auto'." },
        "quality": { "type": ["string", "null"], "enum": ["low", "medium", "high", "auto"], "default": "auto", "description": "Rendering quality ('low', 'medium', 'high', 'auto'). Default: 'auto'." },
        "user": { "type": ["string", "null"], "default": null, "description": "An optional unique identifier representing your end-user." },
        "save_filename": { "type": ["string", "null"], "default": null, "description": "Optional filename (without extension). If None, a default name based on the prompt and timestamp is used." }
      },
      "required": ["prompt"]
    }
  - Output: {"status": "success", "saved_path": "path/to/image.png"} or an error dictionary.
- edit_image: Edits an image or creates variations using OpenAI's gpt-image-1 model and saves it. Can use multiple input images as reference or perform inpainting with a mask.
  - Input Schema:
    {
      "type": "object",
      "properties": {
        "prompt": { "type": "string", "description": "The text description of the desired final image or edit." },
        "image_paths": { "type": "array", "items": { "type": "string" }, "description": "A list of file paths to the input image(s). Must be PNG. < 25MB." },
        "mask_path": { "type": ["string", "null"], "default": null, "description": "Optional file path to the mask image (PNG with alpha channel) for inpainting. Must be same size as input image(s). < 25MB." },
        "model": { "type": "string", "default": "gpt-image-1", "description": "The model to use (currently 'gpt-image-1')." },
        "n": { "type": ["integer", "null"], "default": 1, "description": "The number of images to generate (Default: 1)." },
        "size": { "type": ["string", "null"], "enum": ["1024x1024", "1536x1024", "1024x1536", "auto"], "default": "auto", "description": "Image dimensions ('1024x1024', '1536x1024', '1024x1536', 'auto'). Default: 'auto'." },
        "quality": { "type": ["string", "null"], "enum": ["low", "medium", "high", "auto"], "default": "auto", "description": "Rendering quality ('low', 'medium', 'high', 'auto'). Default: 'auto'." },
        "user": { "type": ["string", "null"], "default": null, "description": "An optional unique identifier representing your end-user." },
        "save_filename": { "type": ["string", "null"], "default": null, "description": "Optional filename (without extension). If None, a default name based on the prompt and timestamp is used." }
      },
      "required": ["prompt", "image_paths"]
    }
  - Output: {"status": "success", "saved_path": "path/to/image.png"} or an error dictionary.
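Under the hood, both tools correspond to the Images API in the official OpenAI Python SDK. The following is a minimal sketch of those calls, assuming the server wraps client.images.generate and client.images.edit; the prompts, filenames, and directory handling here are illustrative and not copied from openai_image_mcp.py:

import base64
import os
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

os.makedirs("ai-images", exist_ok=True)

# Roughly what generate_image does: text-to-image, then save the base64 payload.
generated = client.images.generate(
    model="gpt-image-1",
    prompt="A cozy reading nook inside a treehouse",  # illustrative prompt
    n=1,
    size="1024x1024",
    quality="high",
)
with open(os.path.join("ai-images", "example.png"), "wb") as f:
    f.write(base64.b64decode(generated.data[0].b64_json))

# Roughly what edit_image does: one or more reference PNGs, plus an optional
# mask (PNG with alpha channel) for inpainting.
edited = client.images.edit(
    model="gpt-image-1",
    image=[open(p, "rb") for p in ["ai-images/example.png"]],
    mask=open("mask.png", "rb"),  # optional; omit when not inpainting
    prompt="Replace the masked area with a round stained-glass window",
)
with open(os.path.join("ai-images", "example_edited.png"), "wb") as f:
    f.write(base64.b64decode(edited.data[0].b64_json))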
Prerequisites
- Python (3.8 or later recommended)
- pip (Python package installer)
- An OpenAI API Key (set directly in the script or via the OPENAI_API_KEY environment variable - using an environment variable is strongly recommended for security).
- An MCP client environment (like the one used by Cline) capable of managing and launching MCP servers.
Installation
- Clone the repository:
  git clone https://github.com/IncomeStreamSurfer/chatgpt-native-image-gen-mcp.git
  cd chatgpt-native-image-gen-mcp
- Set up a virtual environment (recommended):
  python -m venv venv
  source venv/bin/activate  # On Windows use `venv\Scripts\activate`
- Install dependencies:
  pip install -r requirements.txt
- (Optional but recommended) Set the OPENAI_API_KEY environment variable with your OpenAI key instead of hardcoding it in the script. How you set this depends on your operating system (see the sketch after this list for how the server can pick it up).
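Once the variable is exported in the environment that launches the server, the standard OpenAI Python client reads it automatically. A minimal sketch of relying on it in the script rather than a hardcoded key (the explicit api_key argument is optional, since the client falls back to OPENAI_API_KEY on its own):

import os
from openai import OpenAI

# Falls back to the OPENAI_API_KEY environment variable; no key lives in the source.
client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))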
Configuration (for Cline MCP Client)
To make this server available to your AI assistant (like Cline), add its configuration to your MCP settings file (e.g., cline_mcp_settings.json).
Find the mcpServers object in your settings file and add the following entry:
{
"mcpServers": {
// ... other server configurations ...
"openai-image-gen-mcp": {
"autoApprove": [
"generate_image",
"edit_image"
],
"disabled": false,
"timeout": 180, // Increased timeout for potentially long image generation
"command": "python", // Or path to python executable if not in PATH
"args": [
// IMPORTANT: Replace this path with the actual absolute path
// to the openai_image_mcp.py file on your system
"C:/path/to/your/cloned/repo/chatgpt-native-image-gen-mcp/openai_image_mcp.py"
],
"env": {
// If using environment variables for the API key:
// "OPENAI_API_KEY": "YOUR_API_KEY_HERE"
},
"transportType": "stdio"
}
// ... other server configurations ...
}
}
Important: Replace C:/path/to/your/cloned/repo/ with the correct absolute path to where you cloned this repository on your machine, and make sure the path separator is correct for your operating system (e.g., use backslashes \ on Windows). If you set the API key via an environment variable, you can remove it from the script and potentially add it to the env section here if your MCP client supports it.
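If you are unsure of the absolute path to put in args, one quick way to print it is the small snippet below, run from the cloned repository directory (the filename assumes the script name used in this README):

import os

# Prints the absolute path to paste into the "args" entry of the MCP settings file.
print(os.path.abspath("openai_image_mcp.py"))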
Running the Server
You don't typically need to run the server manually. The MCP client (like Cline) will automatically start the server using the command and args specified in the configuration file when one of its tools is called for the first time.
If you want to test it manually (ensure dependencies are installed and API key is available):
python openai_image_mcp.py
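Because the configuration uses the stdio transport, a manual run will simply sit and wait for MCP messages on standard input rather than print anything useful. As a rough, hypothetical outline of that shape using the official Python MCP SDK (the real openai_image_mcp.py may be structured differently):

from mcp.server.fastmcp import FastMCP

# Hypothetical skeleton of a stdio MCP server exposing an image tool;
# the actual script's tool signatures follow the schemas shown above.
mcp = FastMCP("openai-image-gen-mcp")

@mcp.tool()
def generate_image(prompt: str, size: str = "auto", quality: str = "auto") -> dict:
    """Generate an image with gpt-image-1 and save it under ai-images/."""
    ...  # call the OpenAI Images API and write the PNG to disk
    return {"status": "success", "saved_path": "ai-images/example.png"}

if __name__ == "__main__":
    mcp.run(transport="stdio")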
Usage
The AI assistant interacts with the server using the generate_image and edit_image tools. Images are saved within an ai-images subdirectory created where the openai_image_mcp.py script is located. The tools return the absolute path to the saved image upon success.
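For example, a request for a single portrait-orientation image might pass arguments like the following (values are purely illustrative; the saved path depends on where the script lives on your machine):

# Illustrative arguments matching the generate_image input schema above.
arguments = {
    "prompt": "A watercolor painting of a lighthouse at dawn",
    "size": "1024x1536",
    "quality": "high",
    "save_filename": "lighthouse_dawn",
}

# Expected shape of a successful result:
# {"status": "success", "saved_path": "<repo>/ai-images/lighthouse_dawn.png"}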