⚠️ Preview Release: This MCP server is under active development. Expect verbose console logging and potential changes. Not recommended for production use yet.

Important Notice:
- In this branch, generated images are not stored locally. They are only available in your browser cache, and Together AI deletes generated images from their servers quickly. It is strongly recommended to save any images you want to keep as soon as they are generated.
- For users who require permanent local storage of generated images, there is a separate branch: v0.2.0 (LibreChat exclusive). This version supports only LibreChat and stores all generated images permanently on your local machine.
Together AI Image Generation MCP Server
A Model Context Protocol (MCP) server that provides image generation functionality using the Together AI API.
Prerequisites
- Node.js >= 18.0.0
- A valid Together AI API key (see the Setup Guide below for how to obtain and configure one; a quick validity check is sketched after this list)
- IMPORTANT: Access to the Flux 1.1 Pro model requires a Together AI account with Tier 2 enabled. A valid API key alone is not sufficient.
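To verify the key works before any MCP wiring, you can query the Together AI API directly. The snippet below is a minimal sketch, assuming Node 18+ (for the global fetch) and Together AI's OpenAI-compatible /v1/models endpoint; it is not part of this repo.

```typescript
// check-key.mts -- hypothetical helper, not part of this repo.
// Run with your key in the environment: TOGETHER_API_KEY=... npx tsx check-key.mts
const res = await fetch("https://api.together.xyz/v1/models", {
  headers: { Authorization: `Bearer ${process.env.TOGETHER_API_KEY}` },
});
console.log(res.ok ? "API key accepted" : `Request failed with status ${res.status}`);
```

A 401 here usually means the key is wrong or missing; a 200 confirms the key is valid, though Flux 1.1 Pro access still depends on your account tier.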
Installation (Standalone / General)
1. Clone the repository:

   ```bash
   # Consider renaming the repository if it's a fork or new project
   git clone <your-repository-url>
   cd <your-repository-directory>
   ```

2. Configure API Key: Before building, you need to provide your Together AI API key. See the Setup Guide section below for the different methods (e.g., using a .env file or environment variables).

3. Install dependencies and build:

   ```bash
   npm install
   npm run build
   ```

   Note: If using the project's .env file method for the API key, ensure the file exists before this step.
Usage (Standalone / General)
Ensure your API key is configured (see Setup Guide).
Start the server:
```bash
npm start
```
The server will then be ready to accept connections from MCP clients.
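To confirm the server actually speaks MCP, a throwaway client script can spawn it over stdio and list its tools. This is a sketch using the official @modelcontextprotocol/sdk; the import paths and method names reflect the SDK at the time of writing and may change.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the built server over stdio, exactly as an MCP client would.
const transport = new StdioClientTransport({
  command: "node",
  args: ["build/index.js"], // assumes you run this from the repo root
});

const client = new Client({ name: "smoke-test", version: "0.0.1" });
await client.connect(transport);

// The returned tool list should include image_generation.
console.log(await client.listTools());
await client.close();
```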
Setup Guide
1. Get Together AI API Key
- Obtain an API key from Together AI. (You might need to sign up or check their documentation for specific instructions).
- Locate your API key in your Together AI account settings or dashboard.
2. Configure API Key
You need to make your API key available to the server. Choose one of the following methods:
Method A: Project .env File (Recommended for Standalone or LibreChat)
This method is required if integrating with the current version of LibreChat (see the dedicated section below).
- In the project root directory, create a file named .env:

  ```bash
  # In the project directory
  echo "TOGETHER_API_KEY=your_together_api_key_here" > .env
  ```

- Replace your_together_api_key_here with your actual key.
- Make sure this file exists before running npm install. The build step might be triggered automatically by npm install via the prepare script.
Method B: Environment Variable (Standalone Only)
Set the TOGETHER_API_KEY environment variable before starting the server.
```bash
export TOGETHER_API_KEY="your_together_api_key_here"

# Start the server (adjust the command if needed, e.g., node build/index.js)
node build/index.js
```
Method C: MCP Client Configuration (Advanced)
Some MCP clients allow specifying environment variables directly in their configuration. This is useful for clients like Cursor, VS Code extensions, etc.
```jsonc
{
  "mcpServers": {
    "together-image-gen": { // Use a descriptive name
      "command": "node", // Assuming direct execution after build
      "args": [
        "/path/to/your/together-image-gen-mcp/build/index.js" // Adjust path as needed
      ],
      "env": {
        "TOGETHER_API_KEY": "YOUR_TOGETHER_API_KEY"
      }
    }
  }
}
```
Note on Integration: The specific integration steps (e.g., for LibreChat) might need adjustments based on the client application and how it manages MCP servers. The .env file method is often reliable if the server process inherits the environment from where it's launched.
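As a mental model for why both methods work: a Node server typically resolves the key with the common dotenv pattern sketched below (illustrative only, not a copy of this repo's source). dotenv fills in values from .env without overriding variables that are already set, so a client-supplied env block and the project .env file can coexist.

```typescript
import "dotenv/config"; // loads .env if present; already-set env vars win

const apiKey = process.env.TOGETHER_API_KEY;
if (!apiKey) {
  throw new Error("TOGETHER_API_KEY is not set (see the Setup Guide)");
}
```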
Example Integration (Conceptual)
This section provides a conceptual guide. Adapt paths and commands based on your specific client (e.g., LibreChat, Cline, Cursor) and setup (Docker, local).
Overview:
- Ensure the server code is accessible to your client application.
- Configure the required TOGETHER_API_KEY, either with the Project .env File method in this server's directory or via environment variables passed by the client.
- Build this server (npm install should handle this via the prepare script).
- Configure your MCP client to run this server, providing the correct command, arguments, and environment variables (like TOGETHER_API_KEY).
Step-by-Step:
1. Clone the Repository: Navigate to the directory on your host machine where you manage external services for LibreChat (this is often alongside your docker-compose.yml). A common location is a dedicated mcp-servers directory.

   ```bash
   # Example: Navigate to where you store MCP servers
   cd /path/to/your/mcp-servers
   git clone <your-repository-url>
   ```
2. Navigate into the Server Directory:

   ```bash
   cd <your-repository-directory>
   ```
3. Configure API Key (Project .env File Method):

   ```bash
   # Create the .env file
   echo "TOGETHER_API_KEY=your_together_api_key_here" > .env
   # IMPORTANT: Replace 'your_together_api_key_here' with your actual Together AI key
   ```
4. Install Dependencies and Build: This step compiles the server code into the build directory.

   ```bash
   npm install
   npm run build
   ```
5. Configure MCP Client (Example: librechat.yaml): Edit your client's configuration file and add an entry for this server:

   ```yaml
   # Example for librechat.yaml
   mcp_servers:
     together-image-gen: # Use a descriptive name
       # Optional: Display name for the server in the UI
       # name: Together AI Image Gen
       # Command tells the client how to run the server
       command: node
       # Args specify the script for 'node' to run *inside the container/environment*
       args:
         # Adjust this path based on your volume mapping / setup
         - /app/mcp-servers/<your-repository-directory>/build/index.js
   ```

   - The args path (/app/...) is the location inside the LibreChat API container where the built server will be accessed (thanks to the volume bind in the next step).
6. Configure Docker Volume Bind (If using Docker): If your client runs in Docker, map the server directory from your host into the container. Edit your docker-compose.yml or docker-compose.override.yml:

   ```yaml
   # Example for docker-compose.yml
   services:
     your_client_service: # e.g., api for LibreChat
       # ... other service config ...
       volumes:
         # ... other volumes ...
         # Add this volume bind (adjust paths):
         - ./mcp-servers/<your-repository-directory>:/app/mcp-servers/<your-repository-directory>
   ```

   - Host Path (./mcp-servers/...): Path on your host, relative to docker-compose.yml.
   - Container Path (/app/mcp-servers/...): Path inside the container. Must align with the path used in the client config (librechat.yaml args).
7. Restart Client Application: Apply the configuration changes by restarting your client application (e.g., restart the Docker containers).
Now, the Together AI Image Generation server should be available as a tool provider within your client.
Features
- Generates images based on text prompts using the Together AI API.
- Configurable parameters like model, dimensions, steps, etc.
- Integrates with MCP clients (e.g., Cline, LibreChat, Cursor).
Tool: image_generation
- Description: Generates images based on a text prompt using the Together AI API.
- Parameters:
  - model (required, string): The model ID to use (e.g., "stabilityai/stable-diffusion-xl-1024-v1.0").
  - prompt (required, string): The text prompt to guide image generation.
  - width (optional, integer, default: 1024): Image width in pixels.
  - height (optional, integer, default: 1024): Image height in pixels.
  - steps (optional, integer, default: 20): Number of diffusion steps.
  - n (optional, integer, default: 1): Number of images to generate.
  - seed (optional, integer): Seed for reproducibility.
  - response_format (optional, string, default: 'b64_json'): Format for returned images ('url' or 'b64_json').
  - stop (optional, array of strings): Sequences to stop generation at.
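For illustration, here is what invoking the tool might look like from the SDK client sketched in the Usage section (argument names match the schema above; note that as of v0.1.1 the server pins the model internally, so the model value is effectively advisory):

```typescript
// Hypothetical invocation via an already-connected MCP client (see Usage above).
const result = await client.callTool({
  name: "image_generation",
  arguments: {
    model: "black-forest-labs/FLUX.1.1-pro",
    prompt: "A lighthouse on a cliff at sunset, oil painting style",
    width: 1024,
    height: 1024,
    steps: 20,
    response_format: "b64_json",
  },
});
console.log(result.content); // image data (or a URL, per response_format)
```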
Version History
- v0.1.1 (Bugfix): Fixed a bug in src/tools/image_generation.ts where the image generation request would fail unless the model name matched the precise string shown on the model card. The tool now always uses the correct default model ("black-forest-labs/FLUX.1.1-pro") internally, so users do not need to specify the exact model name. This version is working and recommended for all users.
- v0.1.0 (Refactored): Initial version focused on Together AI image generation. Adapted from search1api-mcp v0.2.0.
License
This project is licensed under the MIT License - see the LICENSE file for details.