Imagen MCP Server

A Model Context Protocol (MCP) server for image generation using Google's Imagen model and other models supported by the Nexos.ai platform.

Features

  • Simple Image Generation: Generate a single image from a text prompt
  • Batch Image Generation: Generate multiple images with background processing
    • First image is returned immediately
    • Remaining images are generated in the background
    • Query for additional images as they become available
  • Model Catalog: Access comprehensive information about all available models

Supported Models

Model          | Provider | Description
imagen-4       | Google   | Flagship model with excellent prompt following and photorealistic output
imagen-4-fast  | Google   | Faster variant optimized for speed
imagen-4-ultra | Google   | Highest quality for premium image generation
dall-e-3       | OpenAI   | High-quality model with excellent artistic capabilities
gpt-image-1    | OpenAI   | Strong prompt understanding and versatile output

Installation

Option 1: Install with pipx (Recommended for CLI usage)

# Install directly from the repository
pipx install git+https://github.com/your-username/Imagen-MCP.git

# Or install from local directory
cd Imagen-MCP
pipx install .

# Run the server
imagen-mcp

Option 2: Install with Poetry (Recommended for development)

# Clone the repository
git clone <repository-url>
cd Imagen-MCP

# Install dependencies with Poetry
poetry install

# Run the server
poetry run imagen-mcp
# Or
poetry run python -m Imagen_MCP.server

Option 3: Install with pip

# Install from the repository
pip install git+https://github.com/your-username/Imagen-MCP.git

# Or install from local directory
pip install .

# Run the server
imagen-mcp

Environment Variables

Set up your Nexos.ai API key:

export NEXOS_API_KEY=your-api-key-here

Or create a .env file:

NEXOS_API_KEY=your-api-key-here
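
As a quick preflight, you can confirm the key is visible to the process that will run the server. This helper is illustrative only and not part of the server itself:

```python
import os

def check_api_key(env) -> str:
    """Report whether NEXOS_API_KEY is present in the given environment mapping."""
    return "set" if env.get("NEXOS_API_KEY", "").strip() else "missing"

# Check the real environment before launching imagen-mcp.
print(f"NEXOS_API_KEY is {check_api_key(os.environ)}")
```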

Usage

Running the Server

# If installed with pipx or pip
imagen-mcp

# If using Poetry (development)
poetry run imagen-mcp

# Alternative: run as Python module
poetry run python -m Imagen_MCP.server

# With FastMCP CLI (more options)
poetry run fastmcp run Imagen_MCP/server.py --transport http --port 8000

CLI Options

When using the fastmcp run command, you have additional options:

Option          | Description
--transport, -t | Transport protocol: stdio (default), http, sse, streamable-http
--host          | Host to bind to (default: 127.0.0.1)
--port, -p      | Port for HTTP/SSE transport (default: 8000)
--log-level, -l | Log level: DEBUG, INFO, WARNING, ERROR, CRITICAL
--no-banner     | Don't show the server banner

MCP Client Configuration

To use this MCP server with an AI agent, add the following configuration to your MCP client.

Claude Desktop (pipx installation)

If you installed with pipx, add to your Claude Desktop configuration file (~/.config/claude/claude_desktop_config.json on Linux, ~/Library/Application Support/Claude/claude_desktop_config.json on macOS):

{
  "mcpServers": {
    "imagen": {
      "command": "imagen-mcp",
      "env": {
        "NEXOS_API_KEY": "your-nexos-api-key-here"
      }
    }
  }
}

Claude Desktop (Poetry installation)

If you're using Poetry for development:

{
  "mcpServers": {
    "imagen": {
      "command": "poetry",
      "args": ["run", "imagen-mcp"],
      "cwd": "/path/to/Imagen-MCP",
      "env": {
        "NEXOS_API_KEY": "your-nexos-api-key-here"
      }
    }
  }
}

Cline / Roo Code

Add to your VS Code settings or Cline MCP configuration:

{
  "mcpServers": {
    "imagen": {
      "command": "imagen-mcp",
      "env": {
        "NEXOS_API_KEY": "your-nexos-api-key-here"
      }
    }
  }
}

Generic MCP Client (Copy-Paste Ready)

For pipx/pip installation:

{
  "imagen": {
    "command": "imagen-mcp",
    "env": {
      "NEXOS_API_KEY": "your-nexos-api-key-here"
    }
  }
}

For Poetry installation:

{
  "imagen": {
    "command": "poetry",
    "args": ["run", "imagen-mcp"],
    "cwd": "/path/to/Imagen-MCP",
    "env": {
      "NEXOS_API_KEY": "your-nexos-api-key-here"
    }
  }
}

Configuration Options:

Field   | Description
command | The executable to run (imagen-mcp for pipx/pip installs, poetry for Poetry-managed projects)
args    | Command arguments to start the MCP server
cwd     | Working directory; set to your Imagen-MCP installation path
env     | Environment variables, including the required NEXOS_API_KEY

Important: Replace /path/to/Imagen-MCP with the actual path to your Imagen-MCP installation and your-nexos-api-key-here with your Nexos.ai API key.

Alternative: Using pip-installed package

If you install the package globally or in a virtual environment:

{
  "imagen": {
    "command": "python",
    "args": ["-m", "Imagen_MCP.server"],
    "env": {
      "NEXOS_API_KEY": "your-nexos-api-key-here"
    }
  }
}

Tools

list_models

List all available image generation models with their descriptions, capabilities, and use cases.

Parameters: None

Returns:

  • models: List of all available models with details
  • total_count: Number of available models
  • default_model: The default model ID
  • usage_hint: How to use the model parameter

Example Response:

{
  "models": [
    {
      "id": "imagen-4",
      "name": "Imagen 4",
      "provider": "Google",
      "description": "Google's flagship image generation model...",
      "use_cases": ["Photorealistic image generation", ...],
      "strengths": ["Excellent prompt adherence", ...],
      "weaknesses": ["Slower generation time", ...],
      "supported_sizes": ["256x256", "512x512", "1024x1024", ...],
      "max_images_per_request": 4,
      "supports_hd_quality": true,
      "rate_limit": "100 messages per 3 hours"
    },
    ...
  ],
  "total_count": 5,
  "default_model": "imagen-4"
}
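
A client can filter the catalog locally once it has the response. The `catalog` dict below is a hand-written stand-in for the tool's return value, with example field values rather than authoritative model data:

```python
# Illustrative: filter a list_models-style response client-side.
catalog = {
    "models": [
        {"id": "imagen-4", "provider": "Google", "supports_hd_quality": True},
        {"id": "imagen-4-fast", "provider": "Google", "supports_hd_quality": False},
        {"id": "dall-e-3", "provider": "OpenAI", "supports_hd_quality": True},
    ],
    "default_model": "imagen-4",
}

# Collect the IDs of models that advertise HD support.
hd_models = [m["id"] for m in catalog["models"] if m["supports_hd_quality"]]
print(hd_models)
```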

get_model_details

Get detailed information about a specific image generation model.

Parameters:

  • model_id (required): The model identifier (e.g., "imagen-4", "imagen-4-fast", "dall-e-3")

Returns:

  • Complete model details including capabilities, rate limits, use cases, strengths, and weaknesses
  • Error message if model not found

Example:

result = get_model_details(model_id="imagen-4-fast")

generate_image

Generate a single image from a text prompt. The image is saved to a file (temporary file if no path specified).

Parameters:

  • prompt (required): Text description of the image to generate
  • model (optional): Model to use (default: "imagen-4")
  • size (optional): Image size (default: "1024x1024")
  • quality (optional): Image quality - "standard" or "hd" (default: "standard")
  • style (optional): Image style - "vivid" or "natural" (default: "vivid")

Returns:

  • success: Whether the image was generated successfully
  • file_path: Absolute path to the saved image file
  • file_size_bytes: Size of the saved image file in bytes
  • model_used: The model that was used for generation
  • revised_prompt: The revised prompt (if the model modified it)
  • error: Error message if generation failed

Example:

result = await generate_image(
    prompt="A serene mountain landscape at sunset",
    model="imagen-4",
    size="1024x1024",
    quality="hd",
    style="natural"
)
if result.success:
    print(f"Image saved to: {result.file_path}")
    print(f"File size: {result.file_size_bytes} bytes")
else:
    print(f"Generation failed: {result.error}")

start_image_batch

Start generating multiple images and return the first one immediately. Images are saved to files (in a temporary directory if no path specified).

Parameters:

  • prompt (required): Text description of the image to generate
  • count (optional): Number of images to generate, 2-10 (default: 4)
  • model (optional): Model to use (default: "imagen-4")
  • size (optional): Image size (default: "1024x1024")
  • quality (optional): Image quality (default: "standard")
  • style (optional): Image style (default: "vivid")

Returns:

  • success: Whether the batch was started successfully
  • session_id: ID for retrieving more images
  • first_image_path: Path to the first generated image file
  • first_image_size_bytes: Size of the first image file in bytes
  • pending_count: Number of images still being generated
  • error: Error message if batch failed to start

Example:

result = await start_image_batch(
    prompt="A futuristic cityscape",
    count=5,
    model="imagen-4"
)
if result.success:
    print(f"Session ID: {result.session_id}")
    print(f"First image: {result.first_image_path}")

get_next_image

Get the next available image from a batch generation session. The image is saved to a file (temporary file if no path specified).

Parameters:

  • session_id (required): Session ID from start_image_batch
  • timeout (optional): Maximum wait time in seconds (default: 60)

Returns:

  • success: Whether an image was retrieved
  • file_path: Path to the saved image file (or null if no image available)
  • file_size_bytes: Size of the saved image file in bytes
  • has_more: Whether more images are available or pending
  • pending_count: Number of images still being generated
  • error: Error message if retrieval failed

Example:

while True:
    result = await get_next_image(session_id=session_id)
    if not result.success:
        print(f"Retrieval failed: {result.error}")
        break
    if result.file_path:
        print(f"Image saved to: {result.file_path}")
    if not result.has_more:
        break

get_batch_status

Get the current status of a batch generation session.

Parameters:

  • session_id (required): Session ID from start_image_batch

Returns:

  • status: Session status (created, generating, partial, completed, failed)
  • completed_count: Number of completed images
  • pending_count: Number of pending images
  • total_count: Total number of requested images
  • errors: List of any errors encountered
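
A typical use is polling until the session reaches a terminal state. In this sketch, the `get_batch_status` stub stands in for the real MCP tool call and immediately reports a finished session; in practice you would invoke the actual tool instead:

```python
import asyncio

# Stub standing in for the real get_batch_status tool call (illustrative only).
async def get_batch_status(session_id):
    return {"status": "completed", "completed_count": 4,
            "pending_count": 0, "total_count": 4, "errors": []}

async def wait_for_batch(session_id, poll_seconds=5):
    """Poll until the session reaches a terminal state, then return its status."""
    while True:
        status = await get_batch_status(session_id)
        if status["status"] in ("completed", "failed"):
            return status
        await asyncio.sleep(poll_seconds)

final = asyncio.run(wait_for_batch("example-session"))
print(final["status"], final["completed_count"])
```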

Resources

models://image-generation

Get the complete catalog of available image generation models with their capabilities, rate limits, use cases, strengths, and weaknesses.

models://image-generation/{model_id}

Get detailed information about a specific model.

Development

Running Tests

# Run all tests
poetry run pytest

# Run with verbose output
poetry run pytest -v

# Run specific test file
poetry run pytest tests/unit/test_generate_image.py

Project Structure

Imagen_MCP/
├── __init__.py              # Package exports
├── server.py                # FastMCP server definition
├── config.py                # Configuration management
├── constants.py             # Constants and type definitions
├── exceptions.py            # Custom exceptions
├── tools/
│   ├── generate_image.py    # Simple image generation tool
│   └── batch_generate.py    # Batch generation tools
├── resources/
│   └── models.py            # Model catalog resource
├── services/
│   ├── nexos_client.py      # Nexos.ai API client
│   ├── session_manager.py   # Background generation session manager
│   └── model_registry.py    # Model information registry
└── models/
    ├── image.py             # Image data models
    ├── generation.py        # Generation request/response models
    └── session.py           # Session state models

Rate Limits

All models are in Category 3 on Nexos.ai:

  • 100 messages per 3 hours

License

MIT License
