fal-ai/hidream-i1-full MCP Server
A Model Context Protocol (MCP) server that provides access to the fal-ai/hidream-i1-full image generation model. This server allows you to generate high-quality images using advanced AI technology through the fal.ai platform.
Features
- High-Quality Image Generation: Generate stunning images using the fal-ai/hidream-i1-full model
- Multiple Generation Methods: Support for synchronous, streaming, and queue-based generation
- Flexible Image Sizing: Support for predefined sizes and custom dimensions
- Advanced Parameters: Control over inference steps, guidance scale, safety checker, and more
- LoRA Support: Apply custom LoRA weights for specialized image styles
- Local Image Download: Automatically downloads generated images to local storage
- Queue Management: Submit long-running requests and check their status
- Webhook Support: Optional webhook notifications for completed requests
Installation
- Clone this repository:
git clone https://github.com/PierrunoYT/fal-hidream-i1-full-mcp-server.git
cd fal-hidream-i1-full-mcp-server
- Install dependencies:
npm install
- Build the project:
npm run build
Configuration
Environment Variables
Set your fal.ai API key as an environment variable:
export FAL_KEY="your_fal_api_key_here"
You can get your API key from fal.ai.
MCP Client Configuration
Add this server to your MCP client configuration. For example, in Claude Desktop's config file:
{
  "mcpServers": {
    "fal-hidream-i1-full": {
      "command": "node",
      "args": ["/path/to/fal-hidream-i1-full-mcp-server/build/index.js"],
      "env": {
        "FAL_KEY": "your_fal_api_key_here"
      }
    }
  }
}
Available Tools
1. hidream_i1_full_generate
Generate images using the standard synchronous method.
Parameters:
- prompt (required): Text description of the image to generate
- negative_prompt (optional): What you don't want in the image
- image_size (optional): Predefined size or custom {width, height} object
- num_inference_steps (optional): Number of inference steps (1-100, default: 50)
- seed (optional): Random seed for reproducible results
- guidance_scale (optional): CFG scale (1-20, default: 5)
- sync_mode (optional): Wait for completion (default: true)
- num_images (optional): Number of images to generate (1-4, default: 1)
- enable_safety_checker (optional): Enable safety filtering (default: true)
- output_format (optional): "jpeg" or "png" (default: "jpeg")
- loras (optional): Array of LoRA weights to apply
Example:
{
  "prompt": "a cat holding a skateboard which has 'fal' written on it in red spray paint",
  "image_size": {"width": 1024, "height": 1024},
  "num_inference_steps": 50,
  "guidance_scale": 7.5
}
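The documented ranges above can be checked client-side before a request is submitted. The helper below is a hypothetical sketch (not part of the server, which does its own validation) that enforces the limits listed for the numeric parameters:

```javascript
// Hypothetical client-side validation of the documented parameter ranges.
// The limits come from the parameter list above.
function validateGenerateParams(params) {
  const errors = [];
  if (!params.prompt || typeof params.prompt !== "string") {
    errors.push("prompt is required and must be a string");
  }
  // Undefined values fall back to server defaults, so they pass.
  const inRange = (value, min, max) =>
    value === undefined || (value >= min && value <= max);
  if (!inRange(params.num_inference_steps, 1, 100)) {
    errors.push("num_inference_steps must be between 1 and 100");
  }
  if (!inRange(params.guidance_scale, 1, 20)) {
    errors.push("guidance_scale must be between 1 and 20");
  }
  if (!inRange(params.num_images, 1, 4)) {
    errors.push("num_images must be between 1 and 4");
  }
  if (params.output_format !== undefined &&
      !["jpeg", "png"].includes(params.output_format)) {
    errors.push('output_format must be "jpeg" or "png"');
  }
  return errors;
}
```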
2. hidream_i1_full_generate_stream
Generate images using streaming for real-time progress updates.
Parameters: Same as hidream_i1_full_generate
3. hidream_i1_full_generate_queue
Submit a long-running image generation request to the queue.
Parameters: Same as hidream_i1_full_generate plus:
- webhook_url (optional): URL for webhook notifications
Returns: A request ID for tracking the job
4. hidream_i1_full_queue_status
Check the status of a queued request.
Parameters:
- request_id (required): The request ID from queue submission
- logs (optional): Include logs in response (default: true)
5. hidream_i1_full_queue_result
Get the result of a completed queued request.
Parameters:
- request_id (required): The request ID from queue submission
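A typical queue workflow is submit, then poll the status, then fetch the result. The sketch below is illustrative only: `callTool` stands in for however your MCP client invokes tools, and the status names are assumptions based on fal.ai's queue API rather than values documented here.

```javascript
// Illustrative queue workflow. `callTool` is a placeholder for your MCP
// client's tool-invocation mechanism; the status values are assumptions
// based on fal.ai's queue API.
const TERMINAL_STATUSES = ["COMPLETED", "FAILED"];

function isTerminal(status) {
  return TERMINAL_STATUSES.includes(status);
}

async function generateViaQueue(callTool, params, { intervalMs = 2000 } = {}) {
  // Submit the job and keep the request ID for tracking.
  const { request_id } = await callTool("hidream_i1_full_generate_queue", params);
  for (;;) {
    const { status } = await callTool("hidream_i1_full_queue_status", {
      request_id,
      logs: true,
    });
    if (isTerminal(status)) break;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  // Fetch the finished result (images, dimensions, etc.).
  return callTool("hidream_i1_full_queue_result", { request_id });
}
```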
Image Sizes
Predefined Sizes
- square_hd: High-definition square
- square: Standard square
- portrait_4_3: Portrait 4:3 aspect ratio
- portrait_16_9: Portrait 16:9 aspect ratio
- landscape_4_3: Landscape 4:3 aspect ratio
- landscape_16_9: Landscape 16:9 aspect ratio
Custom Sizes
You can also specify custom dimensions:
{
  "image_size": {
    "width": 1280,
    "height": 720
  }
}
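Both forms can be checked before sending. The helper below is a hypothetical normalizer that accepts either a preset name from the list above or a custom object with integer dimensions:

```javascript
// Hypothetical normalizer for the image_size parameter: accepts either a
// documented preset name or a custom { width, height } object.
const PRESET_SIZES = [
  "square_hd", "square",
  "portrait_4_3", "portrait_16_9",
  "landscape_4_3", "landscape_16_9",
];

function normalizeImageSize(size) {
  if (typeof size === "string") {
    if (!PRESET_SIZES.includes(size)) {
      throw new Error(`Unknown preset: ${size}`);
    }
    return size; // presets are passed through by name
  }
  if (size && Number.isInteger(size.width) && Number.isInteger(size.height)) {
    return { width: size.width, height: size.height };
  }
  throw new Error("image_size must be a preset name or { width, height }");
}
```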
LoRA Support
Apply custom LoRA weights for specialized styles:
{
  "loras": [
    {
      "path": "https://example.com/lora-weights.safetensors",
      "scale": 1.0,
      "weight_name": "optional_weight_name"
    }
  ]
}
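A client can fill in defaults before submitting. The sketch below is a hypothetical helper; treating 1.0 as the default scale when one is omitted is an assumption, not something documented above.

```javascript
// Hypothetical helper that fills in LoRA defaults before submission.
// A default scale of 1.0 when omitted is an assumption, not documented here.
function withLoraDefaults(loras) {
  return loras.map((lora) => {
    if (!lora.path) throw new Error("each LoRA entry needs a path");
    // Entries keep any scale/weight_name they already specify.
    return { scale: 1.0, ...lora };
  });
}
```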
Output
Generated images are automatically downloaded to a local images/ directory with descriptive filenames. The response includes:
- Local file paths
- Original URLs
- Image dimensions
- Content types
- Generation parameters used
- Request IDs for tracking
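The descriptive filenames mentioned above can be derived from the prompt. The scheme below is purely illustrative, not necessarily the server's actual naming logic:

```javascript
// Illustrative filename scheme for downloaded images: a slug of the prompt
// plus an index and an extension for the output format. Not necessarily
// the server's exact logic.
function imageFilename(prompt, index, outputFormat = "jpeg") {
  const slug = prompt
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")   // collapse non-alphanumerics to dashes
    .replace(/^-+|-+$/g, "")       // trim leading/trailing dashes
    .slice(0, 50);                 // keep filenames a manageable length
  const ext = outputFormat === "png" ? "png" : "jpg";
  return `${slug}_${index}.${ext}`;
}
```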
Error Handling
The server provides detailed error messages for:
- Missing API keys
- Invalid parameters
- Network issues
- API rate limits
- Generation failures
Development
Running in Development Mode
npm run dev
Testing the Server
npm test
Getting the Installation Path
npm run get-path
API Reference
This server implements the fal-ai/hidream-i1-full API. For detailed API documentation, see the hidream-i1-full model page on fal.ai.
License
MIT License - see LICENSE file for details.
Contributing
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
Support
For issues and questions:
- Open an issue on GitHub
- Check the fal.ai documentation
Changelog
v2.0.0
- Complete rewrite to use fal-ai/hidream-i1-full API
- Added streaming support
- Added queue management
- Added LoRA support
- Improved error handling
- Updated to latest MCP SDK