KOF Nano Banana MCP Server
MCP server for Gemini native image generation (Nano Banana). Part of the KeepOnFirst Agentic Workflow.
English | 繁體中文
⚠️ Important: Billing Required
Note: The Gemini API Free Tier does NOT support image generation models. To use this MCP server, you must provide an API key from a Google Cloud project with billing enabled.
Features
- Generate images using Gemini 2.5 Flash Image or Gemini 3 Pro Image
- Process queue of prompt files with validation and dry-run support
- YAML frontmatter support for prompt configuration
- Cross-IDE compatible - works with any MCP-enabled client
Installation
Option 1: Use via npx (Recommended)
If the package is published to npm (or linked locally), add it to your MCP client configuration:
{
"mcpServers": {
"nanobanana": {
"command": "npx",
"args": ["-y", "@keeponfirst/kof-nanobanana-mcp"],
"env": {
"GEMINI_API_KEY": "your-api-key-here"
}
}
}
}
Option 2: Local Development
cd kof-nanobanana-mcp
npm install
npm run build
Then point your MCP config at the absolute path of dist/index.js (see the Configuration section).
Configuration
1. Get a Gemini API Key
- Go to Google AI Studio
- Create an API key
- IMPORTANT: Ensure your project has billing enabled. These models are paid-only.
2. Add to MCP Config
Add to your MCP configuration file (e.g., ~/.gemini/antigravity/mcp_config.json):
{
"servers": {
"nanobanana": {
"command": "node",
"args": ["/path/to/kof-nanobanana-mcp/dist/index.js"],
"env": {
"GEMINI_API_KEY": "your-api-key-here"
}
}
}
}
Tools
nanobanana_generate_image
Generate a single image.
Parameters:
| Name | Type | Required | Description |
|---|---|---|---|
| prompt | string | Yes | Image description (10-5000 chars) |
| output_path | string | Yes | Path to save the image |
| model | string | No | gemini-2.5-flash-image (default) or gemini-3-pro-image-preview |
| aspect_ratio | string | No | 1:1, 16:9, 9:16, 4:3, 3:4 |
| overwrite | boolean | No | Overwrite existing file (default: false) |
Example:
{
"prompt": "A modern flat illustration of three AI robots working together on code",
"output_path": "assets/generated/workflow-hero.png",
"model": "gemini-2.5-flash-image",
"aspect_ratio": "16:9"
}
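The tool is normally invoked by an MCP-enabled IDE, but it can also be called from a script. Below is a minimal sketch using the TypeScript MCP client SDK (@modelcontextprotocol/sdk is an assumption here, not a dependency of this server), spawning the locally built server from the Local Development option; client name and paths are placeholders.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the locally built server over stdio (path and API key are placeholders).
const transport = new StdioClientTransport({
  command: "node",
  args: ["/path/to/kof-nanobanana-mcp/dist/index.js"],
  env: { GEMINI_API_KEY: process.env.GEMINI_API_KEY ?? "" },
});

const client = new Client({ name: "nanobanana-example", version: "0.0.1" });
await client.connect(transport);

// Call the tool with the same arguments as the JSON example above.
const result = await client.callTool({
  name: "nanobanana_generate_image",
  arguments: {
    prompt: "A modern flat illustration of three AI robots working together on code",
    output_path: "assets/generated/workflow-hero.png",
    model: "gemini-2.5-flash-image",
    aspect_ratio: "16:9",
  },
});

console.log(result);
await client.close();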
nanobanana_list_queue
List and validate prompt files in the queue.
Parameters:
| Name | Type | Required | Description |
|---|---|---|---|
| queue_dir | string | No | Queue directory (default: nanobanana/queue) |
| validate | boolean | No | Validate files (default: true) |
| check_conflicts | boolean | No | Check if outputs exist (default: true) |
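Example (all values shown are the documented defaults), validating the default queue directory and flagging prompts whose output files already exist:
{
  "queue_dir": "nanobanana/queue",
  "validate": true,
  "check_conflicts": true
}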
nanobanana_process_queue
Batch process all prompt files.
Parameters:
| Name | Type | Required | Description |
|---|---|---|---|
| queue_dir | string | No | Queue directory (default: nanobanana/queue) |
| validate_only | boolean | No | Only validate, no generation |
| dry_run | boolean | No | Preview without API calls |
| overwrite | string | No | skip, overwrite, or rename |
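Example: a dry run that previews the queue without making API calls, skipping prompts whose output files already exist:
{
  "queue_dir": "nanobanana/queue",
  "dry_run": true,
  "overwrite": "skip"
}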
Prompt File Format
Create .md files in nanobanana/queue/ with YAML frontmatter:
---
output_path: assets/generated/workflow-hero.png
model: gemini-2.5-flash-image
aspect_ratio: 16:9
overwrite: false
---
# Workflow Hero Image
Create a modern, sleek hero illustration showing three AI agents
working together in a software development workflow.
## Style Guidelines
- Style: Flat illustration with gradients
- Color palette: Deep purple (#6B46C1) to blue (#3B82F6)
- Background: Subtle gradient
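Save the file under nanobanana/queue/ (for example workflow-hero.md), then use nanobanana_list_queue to validate it and nanobanana_process_queue to generate the image; the YAML frontmatter supplies the generation settings, and the Markdown body below it is used as the prompt text.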
Pricing Reference
| Model | Price per Image |
|---|---|
| Gemini 2.5 Flash Image | ~$0.039 (1024x1024) |
| Gemini 3 Pro Image (2K) | ~$0.134 |
| Gemini 3 Pro Image (4K) | ~$0.24 |
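As a rough estimate, a queue of 10 prompts generated with Gemini 2.5 Flash Image costs about 10 × $0.039 ≈ $0.39, while the same queue at Gemini 3 Pro Image 4K would be about 10 × $0.24 ≈ $2.40.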
License
MIT