mcp-alphabanana

Local MCP server for generating image assets with Google Gemini (Nano Banana 2 / Pro). Supports transparent PNG/WebP output, exact resizing/cropping, up to 14 reference images, and Google Search grounding.

A Model Context Protocol (MCP) server for generating image assets using Google Gemini AI (Gemini 3.1 Flash/Nano Banana 2 supported).

Built with FastMCP 3 for a simplified codebase and flexible output options.

Why alphabanana?

  • Zero Watermarks: API-native clean images.
  • Thinking/Grounding Support: Higher prompt adherence and search-backed accuracy.
  • Production Ready: Supports transparent WebP and exact aspect ratios for web/game assets.

Features

  • Ultra-fast image generation (Gemini 3.1 Flash, 0.5K/1K/2K/4K)
  • Advanced multi-image reasoning (up to 14 reference images)
  • Thinking/Grounding support (Flash3.1 only)
  • Transparent PNG/WebP output (color-key post-processing, despill)
  • Multiple output formats: file, base64, or both
  • Flexible resize modes: crop, stretch, letterbox, contain
  • Multiple model tiers: Flash3.1, Flash2.5, Pro3, legacy aliases
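The resize modes above differ in how they map the generated image onto the requested output size. As a rough sketch (illustrative only, not the server's actual implementation), the geometry for each mode could be computed like this:

```typescript
// Illustrative helper (hypothetical, not the server's code): given a source
// image and a requested output size, compute where and how large to draw.
type Geometry = { drawWidth: number; drawHeight: number; offsetX: number; offsetY: number };

function resizeGeometry(
  srcW: number, srcH: number,
  outW: number, outH: number,
  mode: "crop" | "stretch" | "letterbox" | "contain"
): Geometry {
  if (mode === "stretch") {
    // Ignore aspect ratio and fill the output exactly.
    return { drawWidth: outW, drawHeight: outH, offsetX: 0, offsetY: 0 };
  }
  const scale =
    mode === "crop"
      ? Math.max(outW / srcW, outH / srcH) // cover: overflow is cropped away
      : Math.min(outW / srcW, outH / srcH); // fit: letterbox/contain pad or shrink
  const drawWidth = Math.round(srcW * scale);
  const drawHeight = Math.round(srcH * scale);
  return {
    drawWidth,
    drawHeight,
    // Center the drawn image; negative offsets mean the edges are cropped.
    offsetX: Math.round((outW - drawWidth) / 2),
    offsetY: Math.round((outH - drawHeight) / 2),
  };
}
```

For example, fitting a 1024×1024 generation into a 632×424 output: crop scales to 632×632 and trims the top and bottom, while letterbox scales to 424×424 and pads the sides.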

Installation

Add @tasopen/mcp-alphabanana to your MCP Servers configuration.

Configuration

Configure the GEMINI_API_KEY in your MCP configuration (e.g. mcp.json).

Examples:

  • Reference an OS environment variable from mcp.json:
{
  "env": {
    "GEMINI_API_KEY": "${env:GEMINI_API_KEY}"
  }
}
  • Provide the key directly in mcp.json:
{
  "env": {
    "GEMINI_API_KEY": "your_api_key_here"
  }
}

VS Code Integration

Add the following to your VS Code settings (.vscode/settings.json or user settings):

{
  "mcp": {
    "servers": {
      "mcp-alphabanana": {
        "command": "npx",
        "args": ["-y", "@tasopen/mcp-alphabanana"],
        "env": {
          "GEMINI_API_KEY": "${env:GEMINI_API_KEY}"  // or "your_api_key_here"
        }
      }
    }
  }
}

Optional: Set a custom fallback directory for write failures by adding MCP_FALLBACK_OUTPUT to the env object.
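For example, assuming a writable fallback path of your choice (the path below is illustrative):

```json
{
  "env": {
    "GEMINI_API_KEY": "${env:GEMINI_API_KEY}",
    "MCP_FALLBACK_OUTPUT": "/tmp/mcp-alphabanana"
  }
}
```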

Model Selection & Parameters

  Input Model ID  Internal Model ID               Description
  Flash3.1        gemini-3.1-flash-image-preview  Ultra-fast, supports Thinking/Grounding.
  Flash2.5        gemini-2.5-flash-image          Legacy Flash. High stability. Low cost.
  Pro3            gemini-3.0-pro-image-preview    High-fidelity Pro model.
  flash           gemini-3.1-flash-image-preview  Alias for backward compatibility.
  pro             gemini-3.0-pro-image-preview    Alias for backward compatibility.

Parameters (v2.0)

  Parameter          Type     Default   Description
  prompt             string   required  Description of the image to generate
  model              enum     Flash3.1  Model: Flash3.1, Flash2.5, Pro3, flash, pro
  output_resolution  enum     1K        0.5K, 1K, 2K, 4K (0.5K/2K/4K: Flash3.1 only)
  output_format      enum     png       png, jpg, webp (WebP: alpha supported)
  transparent        boolean  false     Transparent background (PNG/WebP only)
  grounding_type     enum     none      none, text, image, both (Flash3.1 only)
  thinking_mode      enum     minimal   minimal, high (Flash3.1 only)
  include_thoughts   boolean  false     Return the model's "thoughts" (Flash3.1 only)
  include_metadata   boolean  false     Include grounding/reasoning metadata in JSON output
  reference_images   array    []        Up to 14 (Flash3.1/Pro3), 3 (Flash2.5)
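Several of these constraints depend on the chosen model. A hypothetical client-side pre-flight check, mirroring only what the table above states (not the server's actual validation code), might look like:

```typescript
// Hypothetical pre-flight validation of the model-dependent constraints
// listed in the parameter table. Not part of mcp-alphabanana's API.
type Model = "Flash3.1" | "Flash2.5" | "Pro3" | "flash" | "pro";

function validateParams(p: {
  model?: Model;
  output_resolution?: "0.5K" | "1K" | "2K" | "4K";
  output_format?: "png" | "jpg" | "webp";
  transparent?: boolean;
  grounding_type?: "none" | "text" | "image" | "both";
  thinking_mode?: "minimal" | "high";
  reference_images?: unknown[];
}): string[] {
  const errors: string[] = [];
  const model = p.model ?? "Flash3.1";
  // "flash" is a backward-compatibility alias for Flash3.1.
  const isFlash31 = model === "Flash3.1" || model === "flash";
  if ((p.output_resolution ?? "1K") !== "1K" && !isFlash31)
    errors.push("0.5K/2K/4K output requires Flash3.1");
  if (p.transparent && (p.output_format ?? "png") === "jpg")
    errors.push("transparency requires png or webp");
  if ((p.grounding_type ?? "none") !== "none" && !isFlash31)
    errors.push("grounding requires Flash3.1");
  if ((p.thinking_mode ?? "minimal") === "high" && !isFlash31)
    errors.push("thinking_mode: high requires Flash3.1");
  const maxRefs = model === "Flash2.5" ? 3 : 14;
  if ((p.reference_images ?? []).length > maxRefs)
    errors.push(`too many reference images (max ${maxRefs} for ${model})`);
  return errors;
}
```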

Usage Examples

Basic Generation

{
  "prompt": "A pixel art treasure chest, golden trim, wooden texture",
  "model": "Flash3.1",
  "outputFileName": "chest",
  "outputWidth": 64,
  "outputHeight": 64,
  "transparent": true
}

Advanced (WebP, thinking, grounding)

{
  "prompt": "A photorealistic girl with wings flying over a European countryside",
  "model": "Flash3.1",
  "outputFileName": "girl_wings",
  "outputWidth": 632,
  "outputHeight": 424,
  "output_format": "webp",
  "thinking_mode": "high",
  "grounding_type": "both",
  "include_thoughts": true
}

Grounding Sample (Search-backed)

{
  "prompt": "A modern travel poster featuring today's weather and skyline highlights in Kuala Lumpur",
  "model": "Flash3.1",
  "outputFileName": "kl_travel_poster",
  "outputWidth": 1024,
  "outputHeight": 1024,
  "grounding_type": "text",
  "thinking_mode": "high",
  "include_metadata": true,
  "include_thoughts": true
}

This sample enables Google Search grounding and returns grounding/reasoning metadata in JSON.

With Reference Images

{
  "prompt": "A matching treasure chest, open state, same pixel art style as references",
  "model": "Pro3",
  "outputFileName": "chest_open",
  "outputWidth": 64,
  "outputHeight": 64,
  "transparent": true,
  "reference_images": [
    {
      "description": "Closed chest for style reference",
      "data": "...base64..."
    }
  ]
}
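Each reference_images entry carries its image as raw base64. A client could prepare an entry from image bytes like this (illustrative helper, not part of the server's API):

```typescript
// Build a reference_images entry from raw image bytes (e.g. a PNG read from
// disk). Hypothetical client-side helper for illustration only.
function toReference(bytes: Uint8Array, description: string) {
  return { description, data: Buffer.from(bytes).toString("base64") };
}
```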

Transparency & Output Formats

  • PNG: Full alpha, color-key + despill
  • WebP: Full alpha, better compression (Flash3.1+)
  • JPEG: No transparency (falls back to solid background)
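Conceptually, color-key post-processing turns background-colored pixels fully transparent, and despill suppresses key color that bleeds into edge pixels. A simplified per-pixel sketch, assuming a green key and an arbitrary threshold (the server's actual pipeline may differ):

```typescript
// Simplified color-key + despill for a single pixel. Illustrative only;
// the key color and threshold here are assumptions, not the server's values.
type RGBA = [number, number, number, number];

function colorKey(
  pixel: RGBA,
  key: [number, number, number] = [0, 255, 0], // assumed green key background
  threshold = 60
): RGBA {
  const [r, g, b, a] = pixel;
  const dist = Math.hypot(r - key[0], g - key[1], b - key[2]);
  if (dist < threshold) return [r, g, b, 0]; // near the key color -> transparent
  // Despill: clamp green so fringe pixels don't glow with residual key color.
  const despilledG = Math.min(g, Math.max(r, b));
  return [r, despilledG, b, a];
}
```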

Development

# Development mode with MCP CLI
npm run dev

# MCP Inspector (Web UI)
npm run inspect

# Build for production
npm run build

License

MIT
