Gemini Bridge
A lightweight MCP (Model Context Protocol) server that enables AI coding assistants to interact with Google's Gemini AI through the official CLI. Works with Claude Code, Cursor, VS Code, and other MCP-compatible clients. Designed for simplicity, reliability, and seamless integration.
✨ Features
- Direct Gemini CLI Integration: Zero API costs using official Gemini CLI
- Simple MCP Tools: Two core functions for basic queries and file analysis
- Stateless Operation: No sessions, caching, or complex state management
- Production Ready: Robust error handling with a configurable timeout (60 seconds by default)
- Minimal Dependencies: Only requires `mcp>=1.0.0` and the Gemini CLI
- Easy Deployment: Support for both uvx and traditional pip installation
- Universal MCP Compatibility: Works with any MCP-compatible AI coding assistant
🚀 Quick Start
Prerequisites
- Install Gemini CLI: `npm install -g @google/gemini-cli`
- Authenticate with Gemini: `gemini auth login`
- Verify installation: `gemini --version`
Installation
🎯 Recommended: PyPI Installation
# Install from PyPI
pip install gemini-bridge
# Add to Claude Code with uvx (recommended)
claude mcp add gemini-bridge -s user -- uvx gemini-bridge
Alternative: From Source
# Clone the repository
git clone https://github.com/shelakh/gemini-bridge.git
cd gemini-bridge
# Build and install locally
uvx --from build pyproject-build
pip install dist/*.whl
# Add to Claude Code
claude mcp add gemini-bridge -s user -- uvx gemini-bridge
Development Installation
# Clone and install in development mode
git clone https://github.com/shelakh/gemini-bridge.git
cd gemini-bridge
pip install -e .
# Add to Claude Code (development)
claude mcp add gemini-bridge-dev -s user -- python -m src
🌐 Multi-Client Support
Gemini Bridge works with any MCP-compatible AI coding assistant - the same server supports multiple clients through different configuration methods.
Supported MCP Clients
- Claude Code ✅ (Default)
- Cursor ✅
- VS Code ✅
- Windsurf ✅
- Cline ✅
- Void ✅
- Cherry Studio ✅
- Augment ✅
- Roo Code ✅
- Zencoder ✅
- Any MCP-compatible client ✅
Configuration Examples
<details> <summary><strong>Claude Code</strong> (Default)</summary>
# Recommended installation
claude mcp add gemini-bridge -s user -- uvx gemini-bridge
# Development installation
claude mcp add gemini-bridge-dev -s user -- python -m src
</details>
<details> <summary><strong>Cursor</strong></summary>
Global Configuration (~/.cursor/mcp.json):
{
"mcpServers": {
"gemini-bridge": {
"command": "uvx",
"args": ["gemini-bridge"],
"env": {}
}
}
}
Project-Specific (.cursor/mcp.json in your project):
{
"mcpServers": {
"gemini-bridge": {
"command": "uvx",
"args": ["gemini-bridge"],
"env": {}
}
}
}
Go to: Settings → Cursor Settings → MCP → Add new global MCP server
</details>
<details> <summary><strong>VS Code</strong></summary>
Configuration (.vscode/mcp.json in your workspace):
{
"servers": {
"gemini-bridge": {
"type": "stdio",
"command": "uvx",
"args": ["gemini-bridge"]
}
}
}
Alternative: Through Extensions
- Open Extensions view (Ctrl+Shift+X)
- Search for MCP extensions
- Add custom server with command: `uvx gemini-bridge`
</details>
<details> <summary><strong>Windsurf</strong></summary>
Add to your Windsurf MCP configuration:
{
"mcpServers": {
"gemini-bridge": {
"command": "uvx",
"args": ["gemini-bridge"],
"env": {}
}
}
}
</details>
<details> <summary><strong>Cline</strong> (VS Code Extension)</summary>
- Open Cline and click MCP Servers in the top navigation
- Select Installed tab → Advanced MCP Settings
- Add to `cline_mcp_settings.json`:
{
"mcpServers": {
"gemini-bridge": {
"command": "uvx",
"args": ["gemini-bridge"],
"env": {}
}
}
}
</details>
<details> <summary><strong>Void</strong></summary>
Go to: Settings → MCP → Add MCP Server
{
"mcpServers": {
"gemini-bridge": {
"command": "uvx",
"args": ["gemini-bridge"],
"env": {}
}
}
}
</details>
<details> <summary><strong>Cherry Studio</strong></summary>
- Navigate to Settings → MCP Servers → Add Server
- Fill in the server details:
  - Name: `gemini-bridge`
  - Type: `STDIO`
  - Command: `uvx`
  - Arguments: `["gemini-bridge"]`
- Save the configuration
</details>
<details> <summary><strong>Augment</strong></summary>
Using the UI:
- Click hamburger menu → Settings → Tools
- Click + Add MCP button
- Enter command:
uvx gemini-bridge - Name: Gemini Bridge
Manual Configuration:
"augment.advanced": {
"mcpServers": [
{
"name": "gemini-bridge",
"command": "uvx",
"args": ["gemini-bridge"],
"env": {}
}
]
}
</details>
<details> <summary><strong>Roo Code</strong></summary>
- Go to Settings → MCP Servers → Edit Global Config
- Add to `mcp_settings.json`:
{
"mcpServers": {
"gemini-bridge": {
"command": "uvx",
"args": ["gemini-bridge"],
"env": {}
}
}
}
</details>
<details> <summary><strong>Zencoder</strong></summary>
- Go to Zencoder menu (...) → Tools → Add Custom MCP
- Add configuration:
{
"command": "uvx",
"args": ["gemini-bridge"],
"env": {}
}
- Hit the Install button
</details>
<details> <summary><strong>Alternative Installation Methods</strong></summary>
For pip-based installations:
{
"command": "gemini-bridge",
"args": [],
"env": {}
}
For development/local testing:
{
"command": "python",
"args": ["-m", "src"],
"env": {},
"cwd": "/path/to/gemini-bridge"
}
For npm-style installation (if needed):
{
"command": "npx",
"args": ["gemini-bridge"],
"env": {}
}
</details>
Universal Usage
Once configured with any client, use the same two tools:
- Ask general questions: "What authentication patterns are used in this codebase?"
- Analyze specific files: "Review these auth files for security issues"
The server implementation is identical - only the client configuration differs!
⚙️ Configuration
Timeout Configuration
By default, Gemini Bridge uses a 60-second timeout for all CLI operations. For longer queries (large files, complex analysis), you can configure a custom timeout using the GEMINI_BRIDGE_TIMEOUT environment variable.
Example configurations:
<details> <summary><strong>Claude Code</strong></summary>
# Add with custom timeout (120 seconds)
claude mcp add gemini-bridge -s user --env GEMINI_BRIDGE_TIMEOUT=120 -- uvx gemini-bridge
</details>
<details> <summary><strong>Manual Configuration (mcp_settings.json)</strong></summary>
{
"mcpServers": {
"gemini-bridge": {
"command": "uvx",
"args": ["gemini-bridge"],
"env": {
"GEMINI_BRIDGE_TIMEOUT": "120"
}
}
}
}
</details>
Timeout Options:
- Default: 60 seconds (if not configured)
- Range: Any positive integer (seconds)
- Recommended: 120-300 seconds for large file analysis
- Invalid values: Fall back to 60 seconds with a warning (see the sketch below)
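Under the hood, the server only needs to read this variable once and fall back safely on bad input. The following is a minimal sketch of that behavior, not the actual parsing code in `mcp_server.py`:

```python
# Illustrative sketch of the fallback behavior described above; the real
# parsing logic in mcp_server.py may differ in its details.
import logging
import os

DEFAULT_TIMEOUT = 60  # seconds

def get_timeout() -> int:
    """Read GEMINI_BRIDGE_TIMEOUT, falling back to 60s on missing or invalid values."""
    raw = os.environ.get("GEMINI_BRIDGE_TIMEOUT")
    if raw is None:
        return DEFAULT_TIMEOUT
    try:
        value = int(raw)
        if value > 0:
            return value
    except ValueError:
        pass
    logging.warning(
        "Invalid GEMINI_BRIDGE_TIMEOUT=%r; falling back to %d seconds",
        raw, DEFAULT_TIMEOUT,
    )
    return DEFAULT_TIMEOUT
```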
🛠️ Available Tools
consult_gemini
Direct CLI bridge for simple queries.
Parameters:
- `query` (string): The question or prompt to send to Gemini
- `directory` (string): Working directory for the query (default: current directory)
- `model` (string, optional): Model to use - "flash" or "pro" (default: "flash")
Example:
consult_gemini(
query="Find authentication patterns in this codebase",
directory="/path/to/project",
model="flash"
)
consult_gemini_with_files
CLI bridge with file attachments for detailed analysis.
Parameters:
- `query` (string): The question or prompt to send to Gemini
- `directory` (string): Working directory for the query
- `files` (list): List of file paths relative to the directory
- `model` (string, optional): Model to use - "flash" or "pro" (default: "flash")
Example:
consult_gemini_with_files(
query="Analyze these auth files and suggest improvements",
directory="/path/to/project",
files=["src/auth.py", "src/models.py"],
model="pro"
)
📋 Usage Examples
Basic Code Analysis
# Simple research query
consult_gemini(
query="What authentication patterns are used in this project?",
directory="/Users/dev/my-project"
)
Detailed File Review
# Analyze specific files
consult_gemini_with_files(
query="Review these files and suggest security improvements",
directory="/Users/dev/my-project",
files=["src/auth.py", "src/middleware.py"],
model="pro"
)
Multi-file Analysis
# Compare multiple implementation files
consult_gemini_with_files(
query="Compare these database implementations and recommend the best approach",
directory="/Users/dev/my-project",
files=["src/db/postgres.py", "src/db/sqlite.py", "src/db/redis.py"]
)
🏗️ Architecture
Core Design
- CLI-First: Direct subprocess calls to the `gemini` command (sketched below)
- Stateless: Each tool call is independent with no session state
- Configurable Timeout: 60-second default maximum execution time
- Simple Error Handling: Clear error messages with a fail-fast approach
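The core design can be pictured as a single stateless tool that shells out to the CLI and returns its output. The sketch below is illustrative rather than the actual `mcp_server.py`: it assumes the FastMCP helper from the `mcp` Python SDK, and the exact `gemini` CLI flags and model handling are simplified assumptions.

```python
# Illustrative sketch of the CLI-first, stateless design; not the actual
# mcp_server.py. The `gemini` flags and model handling are assumptions.
import subprocess
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("gemini-bridge")

@mcp.tool()
def consult_gemini(query: str, directory: str = ".", model: str = "flash") -> str:
    """Send a single prompt to the Gemini CLI and return its output."""
    try:
        result = subprocess.run(
            ["gemini", "-m", model, "-p", query],  # flags shown for illustration only
            cwd=directory,       # run in the caller-supplied working directory
            capture_output=True,
            text=True,
            timeout=60,          # fail fast instead of hanging (configurable in practice)
        )
    except FileNotFoundError:
        return "CLI not available: install the Gemini CLI and ensure it is on your PATH"
    except subprocess.TimeoutExpired:
        return "Timeout after 60 seconds"
    return result.stdout if result.returncode == 0 else result.stderr

if __name__ == "__main__":
    mcp.run()  # serve over stdio so any MCP client can connect
```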
Project Structure
gemini-bridge/
├── src/
│ ├── __init__.py # Entry point
│ ├── __main__.py # Module execution entry point
│ └── mcp_server.py # Main MCP server implementation
├── .github/ # GitHub templates and workflows
├── pyproject.toml # Python package configuration
├── README.md # This file
├── CONTRIBUTING.md # Contribution guidelines
├── CODE_OF_CONDUCT.md # Community standards
├── SECURITY.md # Security policies
├── CHANGELOG.md # Version history
└── LICENSE # MIT license
🔧 Development
Local Testing
# Install in development mode
pip install -e .
# Run directly
python -m src
# Test CLI availability
gemini --version
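The `python -m src` command above works because `src/__main__.py` simply hands off to the server module. A minimal illustration of such an entry point (the `main` function name is an assumption, not confirmed by the source):

```python
# src/__main__.py -- illustrative sketch; the actual entry-point name
# exposed by mcp_server.py is an assumption.
from .mcp_server import main

if __name__ == "__main__":
    main()
```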
Integration with Claude Code
The server automatically integrates with Claude Code when properly configured through the MCP protocol.
🔍 Troubleshooting
CLI Not Available
# Install Gemini CLI
npm install -g @google/gemini-cli
# Authenticate
gemini auth login
# Test
gemini --version
Connection Issues
- Verify Gemini CLI is properly authenticated
- Check network connectivity
- Ensure Claude Code MCP configuration is correct
- Check that the `gemini` command is in your PATH
Common Error Messages
- "CLI not available": Gemini CLI is not installed or not in PATH
- "Authentication required": Run
gemini auth login - "Timeout after 60 seconds": Query took too long, try breaking it into smaller parts
🤝 Contributing
We welcome contributions from the community! Please read our Contributing Guidelines for details on how to get started.
Quick Contributing Guide
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
🔄 Version History
See CHANGELOG.md for detailed version history.
🆘 Support
- Issues: Report bugs or request features via GitHub Issues
- Discussions: Join the community discussion
- Documentation: Additional docs can be created in the `docs/` directory
Focus: A simple, reliable bridge between Claude Code and Gemini AI through the official CLI.