# GemForge-MCP
Professional Gemini API integration for Claude and MCP-compatible hosts with intelligent model selection and advanced file handling capabilities.
## Overview
GemForge-mcp provides a Model Context Protocol (MCP) server that offers specialized tools for interacting with Google's Gemini AI models. It features intelligent model selection based on task type and content, advanced file handling, and optimized prompts for different use cases.
## Installation

```bash
# Clone the repository
git clone https://github.com/your-username/GemForge-mcp.git
cd GemForge-mcp

# Install dependencies
npm install

# Build the project
npm run build
```
## Configuration

Create a `.env` file in the root directory with the following variables:

```bash
GEMINI_API_KEY=your_gemini_api_key_here
DEFAULT_MODEL_ID=gemini-2.5-flash-preview-04-17  # Optional
GEMINI_PAID_TIER=false  # Set to 'true' if using paid tier
```
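These variables are read from the environment at startup. A minimal sketch of how such a loader might look (the `GemForgeConfig` shape and `loadConfig` helper are illustrative, not the server's actual code):

```typescript
// Illustrative environment loader for the variables listed above.
interface GemForgeConfig {
  apiKey: string;
  paidTier: boolean;
  defaultModelId: string;
}

function loadConfig(env: Record<string, string | undefined>): GemForgeConfig {
  const apiKey = env.GEMINI_API_KEY;
  if (!apiKey) {
    throw new Error("GEMINI_API_KEY is required");
  }
  return {
    apiKey,
    paidTier: env.GEMINI_PAID_TIER === "true",
    defaultModelId: env.DEFAULT_MODEL_ID ?? "gemini-2.5-flash-preview-04-17",
  };
}

// In the server this would typically be called as loadConfig(process.env).
const cfg = loadConfig({ GEMINI_API_KEY: "example-key" });
console.log(cfg.defaultModelId); // gemini-2.5-flash-preview-04-17
```

Note that only `GEMINI_API_KEY` is mandatory; the other two fall back to the defaults shown in this README.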
## Running the Server

```bash
# Run from source (development)
npm run start

# Run from compiled JavaScript (production)
npm run start:dist
```
## Smithery.ai Deployment
This repository includes the necessary configuration for deploying the GemForge MCP server on smithery.ai.
### Smithery Configuration

The `smithery.yaml` file contains the configuration needed for smithery deployment:
```yaml
# Smithery.ai configuration
startCommand:
  type: stdio
  configSchema:
    type: object
    properties:
      GEMINI_API_KEY:
        type: string
        description: "Google Gemini API key"
      GEMINI_PAID_TIER:
        type: boolean
        description: "Whether using paid tier (for rate limits)"
        default: false
      DEFAULT_MODEL_ID:
        type: string
        description: "Default Gemini model ID to use"
        default: "gemini-2.5-flash-preview-04-17"
    required:
      - GEMINI_API_KEY
  # Command function that generates the startup command
  commandFunction: |-
    (config) => ({
      "command": "node",
      "args": ["dist/index.js"],
      "env": {
        "GEMINI_API_KEY": config.GEMINI_API_KEY,
        "GEMINI_PAID_TIER": config.GEMINI_PAID_TIER ? "true" : "false",
        "DEFAULT_MODEL_ID": config.DEFAULT_MODEL_ID || "gemini-2.5-flash-preview-04-17"
      }
    })

# Docker configuration
docker:
  image: gemforge-mcp:latest
  env:
    # Environment variables configured through smithery UI
```
### Deployment Steps

1. **Prepare Your Repository**:
   - Ensure your code is committed and pushed to GitHub
   - Verify the `smithery.yaml` file is properly configured

2. **Sign Up for Smithery**:
   - Create an account at smithery.ai
   - Connect your GitHub account to smithery.ai

3. **Create a New Deployment**:
   - Select "New Tool" or equivalent option
   - Choose this repository from your GitHub repositories
   - Select the branch you want to deploy (usually `main` or `master`)

4. **Configure Environment Variables**:
   - Enter your `GEMINI_API_KEY` in the smithery.ai dashboard
   - Optionally configure `GEMINI_PAID_TIER` and `DEFAULT_MODEL_ID`

5. **Deploy**:
   - Initiate the deployment process
   - Smithery will build and deploy your MCP server

6. **Integration**:
   - Once deployed, smithery will provide integration instructions
   - Follow those instructions to connect the MCP server to your AI assistant
### Updates and Maintenance
- Push changes to your GitHub repository
- Smithery can be configured to automatically rebuild and deploy on changes
- Monitor your deployment through the smithery.ai dashboard
## Docker Deployment

### Prerequisites
- Docker installed on your system
- Docker Compose (optional, for easier management)
- A Google Gemini API key
### Building the Docker Image

```bash
# Using Docker directly
docker build -t gemforge-mcp .

# Using Docker Compose
docker-compose build
```
### Running the Container

```bash
# Using Docker directly
docker run -e GEMINI_API_KEY=your_api_key -e GEMINI_PAID_TIER=false -e DEFAULT_MODEL_ID=gemini-2.5-flash-preview-04-17 gemforge-mcp

# Using Docker Compose (after setting variables in .env file)
docker-compose up -d
```
### Docker Image Structure

The Dockerfile uses a multi-stage build process:

1. **Builder Stage**:
   - Uses Node.js Alpine as the base image
   - Installs all dependencies, including dev dependencies
   - Builds the TypeScript code to JavaScript

2. **Production Stage**:
   - Uses a clean Node.js Alpine image
   - Creates a non-root user for improved security
   - Copies only the production dependencies and built code
   - Includes a health check for container monitoring
### Environment Variables

The Docker container accepts the following environment variables:

- `GEMINI_API_KEY` (required): Your Google Gemini API key
- `GEMINI_PAID_TIER` (optional): Set to `true` if using paid tier (default: `false`)
- `DEFAULT_MODEL_ID` (optional): Default Gemini model ID (default: `gemini-2.5-flash-preview-04-17`)

These can be set in the `.env` file when using Docker Compose.
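When using Docker Compose, a compose file along these lines wires the `.env` values into the container (the service name and build settings here are assumptions based on this README, not taken from the repository):

```yaml
# Hypothetical docker-compose.yml sketch
services:
  gemforge-mcp:
    build: .
    image: gemforge-mcp:latest
    environment:
      # Values are interpolated from the .env file in the project root
      - GEMINI_API_KEY=${GEMINI_API_KEY}
      - GEMINI_PAID_TIER=${GEMINI_PAID_TIER:-false}
      - DEFAULT_MODEL_ID=${DEFAULT_MODEL_ID:-gemini-2.5-flash-preview-04-17}
```

The `${VAR:-default}` syntax lets Compose fall back to the documented defaults when a variable is absent from `.env`.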
## Available Tools

GemForge-mcp provides four specialized tools for different AI tasks:

### 1. gemini_search

Generates responses based on the latest information using Gemini models with Google Search integration.

**Input Parameters:**

- `query` (string, required): Your search query or question
- `file_path` (string, optional): File path to include with the query
- `model_id` (string, optional): Model ID override
- `enable_thinking` (boolean, optional): Enable thinking mode for step-by-step reasoning
**Example:**

```json
{
  "toolName": "gemini_search",
  "toolParams": {
    "query": "What are the latest developments in quantum computing?",
    "enable_thinking": true
  }
}
```
### 2. gemini_reason

Solves complex problems with step-by-step reasoning using advanced Gemini models.

**Input Parameters:**

- `problem` (string, required): The complex problem or question to solve
- `file_path` (string, optional): File path to include with the problem
- `show_steps` (boolean, optional, default: `false`): Whether to show detailed reasoning steps
- `model_id` (string, optional): Model ID override
**Example:**

```json
{
  "toolName": "gemini_reason",
  "toolParams": {
    "problem": "If a rectangle has a perimeter of 30 units and its length is twice its width, what are the dimensions of the rectangle?",
    "show_steps": true
  }
}
```
### 3. gemini_code

Analyzes codebases using Repomix and Gemini models to answer questions about code structure, logic, and potential improvements.

**Input Parameters:**

- `question` (string, required): Question about the codebase
- `directory_path` (string, optional): Path to the code directory
- `codebase_path` (string, optional): Path to a pre-packed Repomix file
- `repomix_options` (string, optional): Custom options for the Repomix command (for power users)
- `model_id` (string, optional): Model ID override
**Example:**

```json
{
  "toolName": "gemini_code",
  "toolParams": {
    "question": "What does this project do?",
    "codebase_path": "path/to/codebase.xml"
  }
}
```
**Example with custom Repomix options:**

```json
{
  "toolName": "gemini_code",
  "toolParams": {
    "question": "Analyze the log files in this directory",
    "directory_path": "path/to/logs",
    "repomix_options": "--include \"**/*.log\" --no-gitignore --no-default-patterns"
  }
}
```
### 4. gemini_fileops

Performs efficient operations on files (text, PDF, images, etc.) using appropriate Gemini models.

**Input Parameters:**

- `file_path` (string or array of strings, required): Path to the file(s)
- `instruction` (string, optional): Specific instruction for processing
- `operation` (string, optional): Specific operation type (`summarize`, `extract`, `analyze`)
- `use_large_context_model` (boolean, optional, default: `false`): Set to `true` for very large files
- `model_id` (string, optional): Model ID override
**Single File Example:**

```json
{
  "toolName": "gemini_fileops",
  "toolParams": {
    "file_path": "path/to/document.pdf",
    "operation": "summarize"
  }
}
```
**Multiple Files Example:**

```json
{
  "toolName": "gemini_fileops",
  "toolParams": {
    "file_path": ["path/to/image1.jpg", "path/to/image2.jpg"],
    "operation": "analyze",
    "instruction": "Compare these images and describe the differences"
  }
}
```
**Important Notes for Multi-File Operations:**

- **Path Format**: When passing multiple files as an array, use forward slashes (`/`) in the file paths, even on Windows systems: `"file_path": ["C:/Users/Username/Documents/file1.txt", "C:/Users/Username/Documents/file2.txt"]`
- **File Type Consistency**: For best results, use files of the same type in multi-file operations (e.g., all images, all text files).
- **Custom Instructions**: When analyzing multiple files, provide a specific `instruction` parameter to guide the comparison or analysis.
- **File Limit**: There is a practical limit to how many files can be processed at once, depending on their size and complexity. For large files, consider processing them individually or setting `use_large_context_model: true`.
- **Concatenation**: When multiple text files are provided, they are concatenated with clear separators before processing.
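The concatenation behavior can be pictured with a small sketch (the separator format shown here is an assumption; the server's actual delimiter may differ):

```typescript
// Illustrative multi-file concatenation with per-file separators.
function concatWithSeparators(
  files: { path: string; content: string }[],
): string {
  return files
    .map((f) => `--- FILE: ${f.path} ---\n${f.content}`)
    .join("\n\n");
}

const combined = concatWithSeparators([
  { path: "notes1.txt", content: "first file" },
  { path: "notes2.txt", content: "second file" },
]);
console.log(combined);
```

The separators let the model attribute each chunk of text to its source file when answering a comparison-style instruction.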
## Model Selection

GemForge-mcp implements intelligent model selection based on:

1. **Task Type**:
   - Search tasks: Prefers models with search capabilities
   - Reasoning tasks: Prefers models with strong reasoning abilities
   - Code analysis: Prefers models with code understanding
   - File operations: Selects based on file type and size

2. **Available Models**:
   - `FAST`: `gemini-2.0-flash-lite-001` - Fast, efficient model for simple tasks
   - `BALANCED`: `gemini-2.0-flash-001` - Balanced model for general-purpose use
   - `ADVANCED`: `gemini-2.5-pro-exp-03-25` - Advanced model for complex reasoning
   - `LARGE_CONTEXT`: `gemini-1.5-pro-002` - Model for very large context windows
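The tiers above could be combined into routing logic roughly like the following. The model IDs come from the table; the task-to-tier mapping and the size threshold are illustrative assumptions, not the server's actual rules:

```typescript
// Hypothetical sketch of tier-based model routing.
type Task = "search" | "reason" | "code" | "fileops";

const MODELS = {
  FAST: "gemini-2.0-flash-lite-001",
  BALANCED: "gemini-2.0-flash-001",
  ADVANCED: "gemini-2.5-pro-exp-03-25",
  LARGE_CONTEXT: "gemini-1.5-pro-002",
} as const;

function pickModel(task: Task, fileSizeBytes = 0): string {
  // Very large inputs always go to the large-context model (assumed threshold).
  if (fileSizeBytes > 10_000_000) return MODELS.LARGE_CONTEXT;
  switch (task) {
    case "reason":
    case "code":
      return MODELS.ADVANCED; // complex reasoning and code analysis
    case "search":
      return MODELS.BALANCED; // general-purpose with search
    default:
      return MODELS.FAST; // simple file operations
  }
}

console.log(pickModel("reason")); // gemini-2.5-pro-exp-03-25
```

A `model_id` parameter on any tool call would bypass this routing entirely, as described in the tool sections above.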
## Special Features
- System Instruction Hoisting: Properly handles system instructions for all Gemini models
- XML Content Processing: Efficiently processes XML content for code analysis
- File Type Detection: Automatically detects file types and selects appropriate models
- Rate Limiting Handling: Implements exponential backoff and model fallbacks
- Error Recovery: Provides meaningful error messages and recovery options
- Custom Repomix Options: Allows power users to customize the Repomix command for code analysis, enabling fine-grained control over which files are included or excluded
- Multi-File Processing: Supports analyzing multiple files in a single operation, enabling comparison and transformation analysis
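The rate-limiting feature above refers to the standard exponential-backoff retry pattern, sketched below (the retry count and delay values are assumed values, not the server's actual configuration):

```typescript
// Illustrative exponential-backoff retry wrapper for rate-limited calls.
async function withBackoff<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxRetries) throw err; // retries exhausted, give up
      const delayMs = baseDelayMs * 2 ** attempt; // 500, 1000, 2000, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Demo: a call that fails twice with a simulated 429, then succeeds.
(async () => {
  let attempts = 0;
  const result = await withBackoff(async () => {
    attempts++;
    if (attempts < 3) throw new Error("simulated 429");
    return "ok";
  }, 5, 10);
  console.log(`succeeded after ${attempts} attempts: ${result}`);
})();
```

Doubling the delay on each failed attempt keeps retries cheap for transient errors while backing off quickly under sustained rate limiting; the model-fallback behavior mentioned above would slot in where the error is caught.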
## Advanced Usage

### Multi-File Analysis with gemini_fileops

The `gemini_fileops` tool supports analyzing multiple files in a single operation, which is particularly useful for:
- Comparison Analysis: Compare multiple versions of a document or image
- Transformation Analysis: Analyze changes or progression across a series of files
- Batch Processing: Process multiple related files with a single instruction
**Example: Fitness Transformation Analysis**

```json
{
  "toolName": "gemini_fileops",
  "toolParams": {
    "file_path": [
      "C:/Users/Username/Images/fitness2020.jpg",
      "C:/Users/Username/Images/fitness2021.jpg",
      "C:/Users/Username/Images/fitness2022.jpg"
    ],
    "operation": "analyze",
    "instruction": "Analyze these fitness images and provide a detailed fitness transformation analysis. Compare the physique changes across the images, noting any improvements in muscle definition, body composition, and overall fitness level."
  }
}
```
**Example: Document Comparison**

```json
{
  "toolName": "gemini_fileops",
  "toolParams": {
    "file_path": [
      "C:/Users/Username/Documents/contract_v1.pdf",
      "C:/Users/Username/Documents/contract_v2.pdf"
    ],
    "operation": "extract",
    "instruction": "Compare these two contract versions and extract all significant changes between them. Highlight additions, deletions, and modifications."
  }
}
```
**Example: Code Evolution Analysis**

```json
{
  "toolName": "gemini_fileops",
  "toolParams": {
    "file_path": [
      "C:/Users/Username/Projects/v1/main.js",
      "C:/Users/Username/Projects/v2/main.js"
    ],
    "operation": "analyze",
    "instruction": "Analyze how this code has evolved between versions. Identify improvements, new features, bug fixes, and any potential issues introduced."
  }
}
```
## Development

### Project Structure

```
GemForge-mcp/
├── src/
│   ├── config/       # Configuration constants
│   ├── handlers/     # Tool handlers
│   ├── interfaces/   # TypeScript interfaces
│   ├── utils/        # Utility functions
│   └── index.ts      # Main entry point
├── test/
│   ├── fixtures/     # Test fixtures
│   └── test-*.ts     # Test files
├── dist/             # Compiled JavaScript files
├── .env              # Environment variables
├── package.json      # Project metadata
└── tsconfig.json     # TypeScript configuration
```
### Build Scripts

```bash
# Build the project
npm run build

# Run in development mode
npm run dev

# Run tests
npm run test
```
## Troubleshooting

### Common Issues

1. **Module Not Found Errors**:
   - Ensure you've built the project with `npm run build`
   - Check that the path to the module is correct

2. **API Key Errors**:
   - Verify your Gemini API key is correctly set in the `.env` file
   - Check that the API key has the necessary permissions

3. **Rate Limiting**:
   - The server implements exponential backoff for rate limiting
   - Consider setting `GEMINI_PAID_TIER=true` if you're on a paid tier

4. **File Processing Issues**:
   - Ensure file paths are correct and accessible
   - Check file permissions
   - For large files, use `use_large_context_model: true`
   - For multi-file operations, use forward slashes (`/`) in file paths, even on Windows
   - When passing an array of files, ensure the array syntax is correct: `["path/to/file1.txt", "path/to/file2.txt"]`
   - If files aren't being loaded properly, try using absolute paths instead of relative paths

5. **Repomix File Inclusion Issues**:
   - By default, Repomix excludes certain file types (logs, binaries, etc.)
   - Use the `repomix_options` parameter to customize file inclusion/exclusion
   - For log files, try `repomix_options: "--include \"**/*.log\" --no-gitignore --no-default-patterns"`
   - For binary files, try `repomix_options: "--include-binary"`
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments
- Google Gemini API for providing the underlying AI capabilities
- Model Context Protocol (MCP) for standardizing AI tool interfaces