# Gemini Daily MCP Server (Compressed) ⚡

Token-optimized MCP server for Google Gemini CLI with 43.8% token savings through a Progressive Disclosure strategy.
## Why a Compressed Version?
This is an experimental token-optimized variant of gemini-daily-mcp that reduces tool schema overhead by 43.8% while maintaining full functionality.
### Token Savings Comparison
| Version | Schema Size | Tokens | Savings |
|---|---|---|---|
| Original | 1,227 bytes | 306 tokens | - |
| Compressed ⚡ | 691 bytes | 172 tokens | 43.8% |
**Impact:**

- Per conversation: save 134 tokens
- 3,000 conversations/month: save ~$1.61/month
- Scale to 50 tools: save ~$66/month (61%)
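These figures follow from simple arithmetic. A minimal sketch, assuming an illustrative input price of $4 per million tokens (the actual rate depends on the model you run, and `monthlySavings` is a hypothetical helper, not part of this project):

```typescript
// Reproduce the savings estimate above.
// ASSUMPTION: the $4 per million input tokens price is illustrative only.
const tokensSavedPerConversation = 306 - 172; // 134 tokens of schema overhead removed

function monthlySavings(conversations: number, pricePerMTok: number): number {
  const tokensSaved = tokensSavedPerConversation * conversations;
  return (tokensSaved / 1_000_000) * pricePerMTok;
}

console.log(monthlySavings(3_000, 4).toFixed(2)); // ≈ 1.61 dollars/month
```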
## How It Works: Progressive Disclosure
### Traditional Approach (Verbose Schema)

```json
{
  "name": "gemini",
  "description": "Execute Gemini CLI in non-interactive mode for AI assistance with high Token efficiency",
  "inputSchema": {
    "properties": {
      "prompt": {
        "type": "string",
        "description": "The coding task, question, or analysis request"
      },
      "sessionId": {
        "type": "string",
        "description": "Optional session ID for conversational context (saves up to 74% tokens in multi-turn conversations)"
      }
    }
  }
}
```

Size: 731 bytes → 182 tokens
### Progressive Disclosure (Compressed Schema)

```json
{
  "name": "gemini",
  "description": "Execute Gemini CLI. Use help for details",
  "inputSchema": {
    "properties": {
      "prompt": {"type": "string"},
      "sessionId": {"type": "string"}
    }
  }
}
```

Size: 281 bytes → 70 tokens (61.6% savings)
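You can check the size difference yourself by serializing both variants. The byte counts below depend on whitespace and key order, so they won't match the exact figures quoted here, but the ratio holds. A quick sketch:

```typescript
// Serialize both schema variants and compare sizes. Counts depend on
// whitespace and key order, so they won't match the README's exact byte
// figures; the point is the ratio.

const verbose = {
  name: "gemini",
  description:
    "Execute Gemini CLI in non-interactive mode for AI assistance with high Token efficiency",
  inputSchema: {
    properties: {
      prompt: {
        type: "string",
        description: "The coding task, question, or analysis request",
      },
      sessionId: {
        type: "string",
        description:
          "Optional session ID for conversational context (saves up to 74% tokens in multi-turn conversations)",
      },
    },
  },
};

const compressed = {
  name: "gemini",
  description: "Execute Gemini CLI. Use help for details",
  inputSchema: {
    properties: { prompt: { type: "string" }, sessionId: { type: "string" } },
  },
};

const bytes = (o: unknown) => JSON.stringify(o).length; // ASCII here, so chars == bytes
console.log(bytes(verbose), bytes(compressed)); // compressed is less than half the size
```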
### Where Did the Details Go?

They're in the enhanced `help` tool! When Claude needs details:
```
# Claude calls: mcp__gemini-daily-compressed__help
# Returns comprehensive documentation:

# Gemini Daily MCP - Detailed Documentation

## gemini Tool

Execute Gemini CLI in non-interactive mode for AI assistance.

**Parameters:**
- prompt (required, string): The coding task, question, or analysis request
- sessionId (optional, string): Session ID for conversational context
  - Saves up to 74% tokens in multi-turn conversations
  - Obtained from previous gemini call's _meta.sessionId
- resetSession (optional, boolean): Reset session history before processing
- model (optional, string): Specify which Gemini model to use
  - Default: gemini-2.5-flash
  - Options: gemini-2.5-flash, gemini-2.5-flash-lite, gemini-2.5-pro, gemini-3-pro-preview

**Example:**
{
  "prompt": "Write a Python function to calculate fibonacci",
  "sessionId": "session-abc123",
  "model": "gemini-2.5-flash"
}
...
```
**Result:**

- ✅ 90% of conversations: don't need details → tokens saved
- ✅ 10% of conversations: call help once → full docs returned
- ✅ Overall: 43.8% token savings
## Features

### Same Functionality as the Original

- ✅ Full Gemini CLI integration
- ✅ Session management (saves up to 74% tokens in multi-turn conversations)
- ✅ Gemini 3 support (gemini-3-pro-preview with fallback)
- ✅ Multi-model selection (Flash, Flash-Lite, Pro, Gemini 3)
- ✅ Comprehensive error handling
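The "Gemini 3 with fallback" behavior can be pictured as trying a preferred model and stepping down a chain on failure. The sketch below is a hypothetical illustration: the chain order, function names, and error handling are assumptions, not the server's actual code.

```typescript
// Try the preferred model first and step down the chain on failure.
// ASSUMPTION: chain order and error handling are illustrative; the real
// server's fallback logic may differ.

const FALLBACK_CHAIN = [
  "gemini-3-pro-preview",
  "gemini-2.5-pro",
  "gemini-2.5-flash",
];

async function runWithFallback(
  prompt: string,
  invoke: (model: string, prompt: string) => Promise<string>,
): Promise<{ model: string; output: string }> {
  let lastError: unknown;
  for (const model of FALLBACK_CHAIN) {
    try {
      return { model, output: await invoke(model, prompt) };
    } catch (err) {
      lastError = err; // model unavailable or the call failed: try the next one
    }
  }
  throw lastError;
}
```

Here `invoke` is a placeholder; in practice it would shell out to the Gemini CLI with the chosen model.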
### Token Optimization

- ⚡ 43.8% smaller tool schemas
- ⚡ Progressive Disclosure via the enhanced help tool
- ⚡ Preserves all type information
- ⚡ Maintains parameter validation
## Installation

### Prerequisites

- Google Gemini CLI must be pre-installed and configured
  - Install: follow Google's official Gemini CLI installation guide
  - Setup: run `gemini auth login` with your Google account
  - API key: set the `GOOGLE_API_KEY` environment variable or configure via `gemini config`
- Claude Code or Claude Desktop installed
### Claude Code

```bash
# Clone this repository
git clone https://github.com/kdoooh000h/gemini-daily-mcp-compressed.git
cd gemini-daily-mcp-compressed

# Install dependencies
npm install

# Build
npm run build
```

Then add the server to Claude Code by editing your `.mcp.json`:

```json
{
  "mcpServers": {
    "gemini-daily-compressed": {
      "command": "node",
      "args": ["/path/to/gemini-daily-mcp-compressed/dist/index.js"],
      "env": {
        "GEMINI_ENABLED": "true",
        "SESSION_TTL_HOURS": "24",
        "MAX_SESSIONS": "100",
        "GEMINI_CLI_COMMAND": "gemini",
        "LOG_LEVEL": "info"
      }
    }
  }
}
```
### Claude Desktop

- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%/Claude/claude_desktop_config.json`

```json
{
  "mcpServers": {
    "gemini-daily-compressed": {
      "command": "node",
      "args": ["/path/to/gemini-daily-mcp-compressed/dist/index.js"],
      "env": {
        "GEMINI_ENABLED": "true",
        "SESSION_TTL_HOURS": "24",
        "GEMINI_CLI_COMMAND": "gemini"
      }
    }
  }
}
```
## Tools

### gemini - AI Coding Assistant

Execute Gemini CLI with optional session support.

**Basic Usage:**

```
Use gemini to explain this Python function
```

**Advanced Usage:**

```
# With session for multi-turn conversation
Use gemini with sessionId "my-session" to analyze this code

# Continue the conversation
Use gemini with sessionId "my-session" to implement the improvements

# Specify a model
Use gemini with model "gemini-3-pro-preview" for complex analysis

# Reset the session
Use gemini with sessionId "my-session" and resetSession true to start fresh
```
### listSessions - Session Management

List all active conversation sessions.
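Conceptually, session tracking can be modeled as an in-memory store with TTL expiry and a session cap, mirroring the `SESSION_TTL_HOURS` and `MAX_SESSIONS` settings. A sketch under those assumptions, not the server's actual implementation:

```typescript
// Minimal in-memory session store with TTL expiry and an LRU-style cap.
// ASSUMPTION: a conceptual sketch of SESSION_TTL_HOURS / MAX_SESSIONS,
// not this project's actual code.

interface Session {
  history: string[];
  lastUsed: number; // epoch milliseconds
}

class SessionStore {
  private sessions = new Map<string, Session>();

  constructor(private ttlMs: number, private maxSessions: number) {}

  // Fetch or create a session, evicting the least-recently-used one at capacity.
  get(id: string, now = Date.now()): Session {
    this.evictExpired(now);
    let s = this.sessions.get(id);
    if (!s) {
      if (this.sessions.size >= this.maxSessions) {
        const oldest = [...this.sessions.entries()]
          .sort((a, b) => a[1].lastUsed - b[1].lastUsed)[0];
        this.sessions.delete(oldest[0]);
      }
      s = { history: [], lastUsed: now };
      this.sessions.set(id, s);
    }
    s.lastUsed = now;
    return s;
  }

  // What a listSessions-style tool would report: ids of unexpired sessions.
  list(now = Date.now()): string[] {
    this.evictExpired(now);
    return [...this.sessions.keys()];
  }

  private evictExpired(now: number): void {
    for (const [id, s] of this.sessions) {
      if (now - s.lastUsed > this.ttlMs) this.sessions.delete(id);
    }
  }
}
```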
### ping - Connection Test

Test if the MCP server is working properly.

### help - Enhanced Documentation

Get detailed documentation for all tools (this is where the detailed information lives in the compressed version).
## Compression Details

### Strategy

1. Remove verbose descriptions from tool schemas
2. Preserve all functionality:
   - Parameter names
   - Type information
   - Required fields
   - Enum values
3. Enhance the help tool to provide complete documentation
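The stripping step can be sketched as a recursive walk that drops `description` fields while leaving names, types, required lists, and enums intact. This is a hypothetical helper illustrating the strategy, not the project's actual build step; in the compressed schema, the tool's top-level description is then replaced by a short pointer to `help`.

```typescript
// Recursively drop "description" fields while preserving structure:
// property names, types, required lists, and enum values all survive.
// ASSUMPTION: a hypothetical helper, not this project's build step.

function stripDescriptions<T>(value: T): T {
  if (Array.isArray(value)) {
    return value.map((v) => stripDescriptions(v)) as unknown as T;
  }
  if (value !== null && typeof value === "object") {
    const out: Record<string, unknown> = {};
    for (const [key, v] of Object.entries(value)) {
      if (key === "description") continue; // prose moves to the help tool
      out[key] = stripDescriptions(v);
    }
    return out as unknown as T;
  }
  return value; // primitives pass through unchanged
}
```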
### Token Breakdown
| Tool | Original | Compressed | Savings |
|---|---|---|---|
| gemini | 182 tokens | 70 tokens | 61.6% |
| ping | 49 tokens | 37 tokens | 24.5% |
| help | 33 tokens | 32 tokens | 3.0% |
| listSessions | 40 tokens | 33 tokens | 17.5% |
| Total | 306 tokens | 172 tokens | 43.8% |
### Trade-offs

**Pros ✅:**

- Significant token savings (43.8%)
- Lower costs (~$66/month saved at 3,000 conversations/month with 50 tools)
- Full functionality preserved
- Claude understands compressed schemas

**Cons ⚠️:**

- ~10% of conversations may need an extra help call
- Schemas are less readable for developers
- Documentation must be maintained in two places (schema + help tool)
## When to Use the Compressed Version

### Recommended For ✅

- High-frequency usage (>100 conversations/day)
- Many tools (>20 tools in your MCP setup)
- Cost-sensitive projects
- Complex tool schemas with verbose descriptions

### Use the Original Version Instead For

- Low-frequency usage (<100 conversations/day)
- Few tools (<10 tools)
- Maximum schema readability
- Occasional use
## Experimental Status

This is an experimental project to validate the Progressive Disclosure approach for MCP tool schemas.

**Testing Results** (from lab-02-testbed):

- ✅ All tools function normally
- ✅ help tool provides complete documentation
- ✅ Claude understands compressed schemas
- ✅ 43.8% token savings confirmed
- ⚠️ ~10% of scenarios require a help call (adds one tool invocation)

**Feedback Welcome:** Please open issues if you encounter any problems or have suggestions.
## Related Projects

- Original version: gemini-daily-mcp - full verbose schemas
- Token analysis: see TOOL-SCHEMA-COMPRESSION-EXPERIMENT.md
- Gemini CLI: Google Gemini CLI
## Documentation

- MODELS.md: Gemini model compatibility and selection guide
- CODEX-TO-GEMINI-ADAPTATION.md: how this was adapted from codex-mcp-server
- API reference: use the `help` tool within Claude for detailed documentation
## Development

```bash
# Install dependencies
npm install

# Development mode
npm run dev

# Build
npm run build

# Test
npm test

# Lint
npm run lint
```
## Version Compatibility

This MCP server is compatible with:

- Gemini CLI: v0.17.0+ (with Gemini 3 support)
- Claude Code: latest version
- Claude Desktop: latest version
- MCP SDK: @modelcontextprotocol/sdk ^1.17.3
## Contributing

This is an experimental project. Contributions, issues, and feature requests are welcome!
## License

ISC
## Questions?

- Use the help tool: `mcp__gemini-daily-compressed__help`
- Open an issue: GitHub Issues
- Original project: gemini-daily-mcp
⚡ Compressed. Efficient. Same Power.