# gemini-cli-mcp
A secure MCP server that wraps Google's Gemini CLI. It lets Claude Code (or any MCP client) call Gemini models using your local OAuth session — no API key required.
## Highlights

- **Secure** — `spawn(shell: false)` on Unix; controlled `shell: true` plus argument escaping on Windows. No command injection.
- **Cross-platform** — macOS, Linux, Windows. Auto-resolves `.cmd` wrappers and forces UTF-8.
- **Activity-based timeout** — the idle timer resets on each output chunk. Long thinking won't be killed; stuck 429 retries will.
- **Low token overhead** — replaces Gemini's ~8,800-token default system prompt with a minimal one (~50 tokens).
- **Clean output** — internally uses `stream-json` and parses structured responses. No stdout noise.
- **2 tools only** — `gemini_query` + `gemini_info`. Minimal context-window footprint for the host AI.
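The activity-based timeout above can be sketched as follows. This is an illustrative assumption, not the server's actual source; `runWithIdleTimeout` is a hypothetical helper:

```typescript
import { spawn, type ChildProcess } from "node:child_process";

// Hypothetical helper illustrating an activity-based idle timeout:
// the kill timer restarts on every stdout chunk, so a long-thinking
// model keeps streaming while a silently stuck process gets killed.
function runWithIdleTimeout(cmd: string, args: string[], idleMs: number): ChildProcess {
  const child = spawn(cmd, args, { shell: false });
  let timer = setTimeout(() => child.kill(), idleMs);
  const reset = () => {
    clearTimeout(timer);
    timer = setTimeout(() => child.kill(), idleMs); // activity seen: restart the clock
  };
  child.stdout?.on("data", reset);
  child.on("exit", () => clearTimeout(timer));
  return child;
}
```

The key design point is that the deadline bounds *silence*, not total runtime, which is why a stuck 429 retry loop is killed while a slow but streaming response survives.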
## Prerequisites

- Node.js >= 18
- Google Gemini CLI — installed globally and logged in:

  ```bash
  npm install -g @google/gemini-cli
  gemini   # run once — complete the Google OAuth login in your browser
  ```

- Verify it works before using this MCP server:

  ```bash
  gemini -p "say hello" -o text
  # Should print a response. If you see auth errors, re-run `gemini` to log in.
  ```
## Install

### NPM (recommended)

```bash
npm install -g @xjoker/gemini-cli-mcp

# Register with Claude Code
claude mcp add gemini-cli -s user -- gemini-cli-mcp
```

### From source

```bash
git clone https://github.com/xjoker/gemini-cli-mcp.git
cd gemini-cli-mcp
npm install && npm run build
claude mcp add gemini-cli -s user -- node $(pwd)/dist/index.js
```
### Claude Desktop

Edit `~/Library/Application Support/Claude/claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "gemini-cli": {
      "command": "gemini-cli-mcp"
    }
  }
}
```
## Upgrade

```bash
npm update -g @xjoker/gemini-cli-mcp
```
## Tools

### gemini_query

Send a prompt to Gemini.

| Parameter | Type | Required | Description |
|---|---|---|---|
| `prompt` | string | Yes | Prompt text. Use `@file.ts` to include local files. |
| `model` | string | No | Model name or alias (default: `gemini-2.5-flash`) |
| `sandbox` | boolean | No | Run in a sandboxed environment |
| `yolo` | boolean | No | Auto-approve all tool actions |
| `approval_mode` | enum | No | `default` / `auto_edit` / `yolo` / `plan` |
| `include_stats` | boolean | No | Append token usage stats |
| `include_directories` | string[] | No | Extra workspace directories |
| `cwd` | string | No | Working directory for `@file` references |
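For reference, an MCP `tools/call` request for this tool might look like the following sketch. The argument values are placeholders; only the parameter names come from the table above:

```typescript
// Illustrative MCP JSON-RPC payload for gemini_query.
// Values are placeholders; parameter names match the table above.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "gemini_query",
    arguments: {
      prompt: "Summarize @README.md in three bullet points",
      model: "flash",            // alias for gemini-2.5-flash
      include_stats: true,
      cwd: "/path/to/project",   // resolves @file references
    },
  },
};

console.log(JSON.stringify(request, null, 2));
```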
### gemini_info

Diagnostics and metadata — most actions cost zero API calls.

| Action | Description | API call? |
|---|---|---|
| `ping` | Test CLI connectivity | No |
| `version` | Get CLI version | No |
| `list_models` | Show available models and aliases | No |
| `list_sessions` | List past Gemini sessions | No |
| `list_extensions` | List installed Gemini extensions | No |
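A quick connectivity check might be invoked like this sketch. It assumes the action is passed as an `action` argument, which is not confirmed by the table above:

```typescript
// Illustrative payload for a zero-cost diagnostic check.
// Assumption: the action name is passed as an "action" argument.
const infoRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: { name: "gemini_info", arguments: { action: "ping" } },
};

console.log(JSON.stringify(infoRequest));
```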
## Models

| Model | Tier | Description |
|---|---|---|
| `gemini-2.5-pro` | stable | High reasoning & creativity |
| `gemini-2.5-flash` | stable | Fast, balanced (default) |
| `gemini-2.5-flash-lite` | stable | Fastest, lightest |
| `gemini-3-pro-preview` | preview | Gemini 3 Pro |
| `gemini-3-flash-preview` | preview | Gemini 3 Flash |
| `gemini-3.1-pro-preview` | preview | Gemini 3.1 Pro (rolling out) |
| `gemini-3.1-flash-lite-preview` | preview | Gemini 3.1 Flash Lite |

Aliases: `auto`, `pro`, `flash`, `flash-lite`

Free tier quota: 60 RPM / 1,000 requests per day.
## Environment Variables

| Variable | Default | Description |
|---|---|---|
| `GEMINI_MODEL` | `gemini-2.5-flash` | Default model |
| `GEMINI_STARTUP_TIMEOUT` | `15000` | Phase 1 idle timeout (ms) — CLI startup and initial response |
| `GEMINI_TIMEOUT` | `120000` | Phase 2 idle timeout (ms) — thinking, resets on each output chunk |
| `GEMINI_MAX_RESPONSE` | `100000` | Max response chars before truncation |
| `GEMINI_BIN` | `gemini` | Path to Gemini CLI binary |
| `GEMINI_SYSTEM_MD` | (bundled minimal) | Path to a custom system prompt, or `default` for Gemini's built-in |
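A sketch of how these variables could resolve to a runtime config, using the documented defaults. This is illustrative only; the field names are assumptions, not the server's actual source:

```typescript
// Sketch: the documented environment variables with their defaults.
// Field names are hypothetical; shown only to make the defaults concrete.
const config = {
  model: process.env.GEMINI_MODEL ?? "gemini-2.5-flash",
  startupTimeoutMs: Number(process.env.GEMINI_STARTUP_TIMEOUT ?? "15000"), // phase 1 idle
  idleTimeoutMs: Number(process.env.GEMINI_TIMEOUT ?? "120000"),           // phase 2 idle
  maxResponseChars: Number(process.env.GEMINI_MAX_RESPONSE ?? "100000"),
  bin: process.env.GEMINI_BIN ?? "gemini",
  systemMd: process.env.GEMINI_SYSTEM_MD, // undefined -> bundled minimal prompt
};

console.log(config.model);
```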
## Security

| Platform | Strategy |
|---|---|
| Unix | `child_process.spawn()` with `shell: false` — user input never reaches a shell |
| Windows | `shell: true` (required for `.cmd`) with `%` -> `%%` and `!` -> `^^!` escaping |

- Zero usage of `exec()` / `execSync()` / template-string commands.
- Verify: `grep -rn "exec(" src/` returns nothing.
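The Windows escaping rule in the table can be sketched as follows. The function name is hypothetical and the real implementation may differ:

```typescript
// Hypothetical sketch of the Windows cmd.exe escaping described above:
// % doubles to %% (defuses variable expansion) and ! becomes ^^!
// (defuses delayed expansion).
function escapeCmdArg(arg: string): string {
  return arg.replace(/%/g, "%%").replace(/!/g, "^^!");
}

// escapeCmdArg("100% done!") -> "100%% done^^!"
```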
## License