# Gemini Researcher
<a href="https://glama.ai/mcp/servers/@capyBearista/gemini-researcher">
<img width="380" height="200" src="https://glama.ai/mcp/servers/@capyBearista/gemini-researcher/badge" />
</a>
A lightweight, stateless MCP (Model Context Protocol) server that lets developer agents (Claude Code, GitHub Copilot) delegate deep repository analysis to the Gemini CLI. The server is read-only, returns structured JSON (as text content), and is optimized to reduce the calling agent's context and model usage.
Status: v1 complete. Core features are stable, but still early days. Feedback welcome!
If this project extended the lifespan of your usage window, ⭐ please consider giving it a star! :)
Primary goals:
- Reduce agent context usage by letting Gemini CLI read large codebases locally and do its own research
- Reduce calling-agent model usage by offloading heavy analysis to Gemini
- Keep the server stateless and read-only for safety
## Why use this?
Instead of copying entire files into your agent's context (burning tokens and cluttering the conversation), this server lets Gemini CLI read files directly from your project. Your agent sends a research query, Gemini does the heavy lifting with its large context window, and returns structured results. You save tokens, your agent stays focused, and complex codebase analysis becomes practical.
Verified clients: Claude Code, Cursor, VS Code (GitHub Copilot)
> [!NOTE]
> It should work with other MCP clients as well, but I haven't personally tested them yet. Please open an issue if you try it elsewhere!
## Overview
Gemini Researcher accepts research-style queries over the MCP protocol and spawns the Gemini CLI in headless, read-only mode to perform large-context analysis on local files referenced with @path. Results are returned as pretty-printed JSON strings suitable for programmatic consumption by agent clients.
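Conceptually, each tool call becomes a one-shot, headless CLI invocation. A minimal sketch for orientation (the exact flags and model are chosen internally by the server, not by you):

```bash
# Illustrative only: roughly the kind of headless call the server makes.
# -p runs Gemini CLI non-interactively; -m selects the model tier.
# Gemini CLI expands @src/auth.ts itself, reading the file locally.
gemini -m gemini-2.5-flash -p "Explain the login flow in @src/auth.ts"
```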
## Prerequisites

- Node.js 18+ installed
- Gemini CLI installed: `npm install -g @google/gemini-cli`
- Gemini CLI authenticated (recommended: run `gemini` and use "Login with Google") or set `GEMINI_API_KEY`
Quick checks:

```bash
node --version
gemini --version
```
## Quickstart

### Step 1: Validate environment

Run the setup wizard to verify Gemini CLI is installed and authenticated:

```bash
npx gemini-researcher init
```

### Step 2: Configure your MCP client
This standard config works in most tools:
```json
{
  "mcpServers": {
    "gemini-researcher": {
      "command": "npx",
      "args": ["gemini-researcher"]
    }
  }
}
```
<details> <summary>VS Code</summary>

Add to your VS Code MCP settings (create `.vscode/mcp.json` if needed):

```json
{
  "servers": {
    "gemini-researcher": {
      "command": "npx",
      "args": ["gemini-researcher"]
    }
  }
}
```
</details>
<details> <summary>Claude Code</summary>

**Option 1: Command line (recommended)**

Local (user-wide) scope:

```bash
# Add the MCP server via CLI
claude mcp add --transport stdio gemini-researcher -- npx gemini-researcher

# Verify it was added
claude mcp list
```

Project scope (navigate to your project directory first):

```bash
# Add the MCP server via CLI
claude mcp add --scope project --transport stdio gemini-researcher -- npx gemini-researcher

# Verify it was added
claude mcp list
```

**Option 2: Manual configuration**

Add to `.mcp.json` in your project root (project scope):

```json
{
  "mcpServers": {
    "gemini-researcher": {
      "command": "npx",
      "args": ["gemini-researcher"]
    }
  }
}
```
Or add to `~/.claude/settings.json` for local scope.

After adding the server, restart Claude Code and use `/mcp` to verify the connection.
</details>
<details> <summary>Cursor</summary>

Go to Cursor Settings → Tools & MCP → Add a Custom MCP Server, then add the following configuration:

```json
{
  "mcpServers": {
    "gemini-researcher": {
      "type": "stdio",
      "command": "npx",
      "args": ["gemini-researcher"]
    }
  }
}
```
</details>
> [!NOTE]
> The server uses the directory where your IDE opened the workspace (or where your terminal is running) as the project root. To analyze a different directory, set `PROJECT_ROOT`:

Example:

```json
{
  "mcpServers": {
    "gemini-researcher": {
      "command": "npx",
      "args": ["gemini-researcher"],
      "env": {
        "PROJECT_ROOT": "/path/to/your/project"
      }
    }
  }
}
```
### Step 3: Restart your MCP client

### Step 4: Test it

Ask your agent: "Use gemini-researcher to analyze the project."

## Tools
All tools return structured JSON (as MCP text content). Large responses are automatically chunked (~10KB per chunk) and cached for 1 hour.
| Tool | Purpose | When to use |
|---|---|---|
| quick_query | Fast analysis with flash model | Quick questions about specific files or small code sections |
| deep_research | In-depth analysis with pro model | Complex multi-file analysis, architecture reviews, security audits |
| analyze_directory | Map directory structure | Understanding unfamiliar codebases, generating project overviews |
| validate_paths | Pre-check file paths | Verify files exist before running expensive queries |
| health_check | Diagnostics | Troubleshooting server/Gemini CLI issues |
| fetch_chunk | Get chunked responses | Retrieve remaining parts of large responses |
### Example workflows
**Understanding a security vulnerability:**

> Agent: Use deep_research to analyze authentication flow across @src/auth and @src/middleware, focusing on security

**Quick code explanation:**

> Agent: Use quick_query to explain the login flow in @src/auth.ts, be concise

**Mapping an unfamiliar codebase:**

> Agent: Use analyze_directory on src/ with depth 3 to understand the project structure
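One pattern worth noting: since `deep_research` spends a pro-model call, it can pay to chain `validate_paths` first so a mistyped path fails fast. Sketched loosely in MCP `tools/call` shape (the `name`/`arguments` envelope comes from MCP itself; the argument fields follow the schemas below):

```json
{ "name": "validate_paths", "arguments": { "paths": ["src/auth", "src/middleware"] } }
```

Then, once every path resolves:

```json
{ "name": "deep_research", "arguments": { "prompt": "Analyze authentication across @src/auth and @src/middleware", "focus": "security" } }
```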
<details> <summary>Full tool schemas (for reference)</summary>

**`quick_query`**

```json
{
  "prompt": "Explain @src/auth.ts login flow",
  "focus": "security",
  "responseStyle": "concise"
}
```

**`deep_research`**

```json
{
  "prompt": "Analyze authentication across @src/auth and @src/middleware",
  "focus": "architecture",
  "citationMode": "paths_only"
}
```

**`analyze_directory`**

```json
{
  "path": "src",
  "depth": 3,
  "maxFiles": 200
}
```

**`validate_paths`**

```json
{
  "paths": ["src/auth.ts", "README.md"]
}
```

**`health_check`**

```json
{
  "includeDiagnostics": true
}
```

**`fetch_chunk`**

```json
{
  "cacheKey": "cache_abc123",
  "chunkIndex": 2
}
```

</details>
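For orientation, the first chunk of an oversized result might look roughly like the sketch below. Only `cacheKey` and `chunkIndex` come from the schema above; `totalChunks` and `data` are illustrative guesses at the envelope, not documented fields:

```json
{
  "cacheKey": "cache_abc123",
  "chunkIndex": 0,
  "totalChunks": 3,
  "data": "first ~10KB of the JSON result..."
}
```

From there, call `fetch_chunk` with the same `cacheKey` and increasing `chunkIndex` values; chunks stay cached for 1 hour.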
## Docker

A pre-built multi-platform Docker image is available on Docker Hub:

```bash
# Pull the image (works on Intel/AMD and Apple Silicon)
docker pull capybearista/gemini-researcher:latest

# Run the server (mount your project and provide API key)
docker run -i --rm \
  -e GEMINI_API_KEY="your-api-key" \
  -v /path/to/your/project:/workspace \
  capybearista/gemini-researcher:latest
```
For MCP client configuration with Docker:
```json
{
  "mcpServers": {
    "gemini-researcher": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "-e", "GEMINI_API_KEY",
        "-v", "/path/to/your/project:/workspace",
        "capybearista/gemini-researcher:latest"
      ],
      "env": {
        "GEMINI_API_KEY": "your-api-key-here"
      }
    }
  }
}
```
> [!NOTE]
> - The `-i` flag is required for stdio transport
> - The container mounts your project to `/workspace` (the project root)
> - Replace `/path/to/your/project` with your actual project path
> - Replace `your-api-key` with your actual Gemini API key (this is required for Docker usage)
## Troubleshooting (common issues)

- `GEMINI_CLI_NOT_FOUND`: Install the Gemini CLI: `npm install -g @google/gemini-cli`
- `AUTH_MISSING`: Run `gemini` and authenticate, or set `GEMINI_API_KEY`
- `.gitignore` blocking files: Gemini respects `.gitignore` by default; toggle `fileFiltering.respectGitIgnore` in `gemini /settings` if you intentionally want ignored files included (note: this changes Gemini behavior globally)
- `PATH_NOT_ALLOWED`: All `@path` references must resolve inside the configured project root (`process.cwd()` by default). Use `validate_paths` to pre-check paths.
- `QUOTA_EXCEEDED`: The server retries with fallback models; if all tiers are exhausted, reduce scope (use `quick_query`) or wait for the quota to reset.
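If several of these errors appear at once, re-running the Quickstart checks usually pinpoints which layer is broken:

```bash
node --version               # Node.js 18+ is required
gemini --version             # confirms the Gemini CLI is on PATH
npx gemini-researcher init   # re-runs the setup wizard's environment checks
```

From inside your agent, `health_check` with `includeDiagnostics: true` surfaces the server's own view of the same environment.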
## Contributing
We welcome contributions! Please read the Contributing Guide to get started.
## License
<p align="center"> Made with ♡ for the AI-assisted dev community </p>