
Google Research MCP Server
A powerful Model Context Protocol (MCP) server that provides AI assistants with advanced web research capabilities, including Google search integration, intelligent content extraction, and multi-source synthesis.
🚀 Features
Core Research Capabilities
- Google Search Integration - Programmatic access to Google's search results with advanced filtering
- Intelligent Content Extraction - Clean, structured extraction from web pages with fallback strategies
- Multi-Source Synthesis - Combine information from multiple sources into coherent reports
- Contextual Navigation - Smart web browsing that follows relevant links automatically
- Research Workflow Automation - Complete research pipelines from query to final report
Production-Ready Features
- Smart Caching - Optimized performance with configurable cache strategies
- Rate Limiting - Built-in protection against API abuse
- Health Monitoring - Comprehensive system health checks and metrics
- Structured Logging - Production-grade logging with multiple output formats
- Container Support - Docker deployment with health checks and monitoring
Enhanced Content Processing
- Structure Preservation - Maintains tables, lists, and hierarchical content
- Multiple Output Formats - Markdown, HTML, and plain text support
- Metadata Extraction - Captures publication dates, authors, and citation information
- Content Summarization - Automatic generation of content summaries
- Image Context - Extracts and describes images within content
📦 Installation
Prerequisites
- Node.js 18+ and npm 8+
- Google Custom Search API key (from the Google Cloud Console)
- Google Custom Search Engine ID (from the Programmable Search Engine control panel)
Quick Start (Unified Server)
The Google Research MCP Server now provides both search and research capabilities in a single unified server - no need to run separate instances!
Option 1: Direct Installation (No Docker Required)
1. Clone and Install
   git clone https://github.com/your-org/google-research-mcp-server.git
   cd google-research-mcp-server
   npm install
2. Configure Environment
   cp .env.example .env
   # Edit the .env file with your Google API credentials
   nano .env   # or use your preferred editor
3. Validate Configuration
   npm run validate-config
4. Build and Start
   npm run build
   npm start
5. Verify the Server is Running
   npm run health-check
Option 2: Docker Installation (Recommended for Production)
# 1. Clone repository
git clone https://github.com/your-org/google-research-mcp-server.git
cd google-research-mcp-server
# 2. Configure environment
cp .env.example .env
# Edit .env with your API keys
# 3. Deploy with Docker
docker-compose up -d
# 4. Verify deployment
docker-compose logs -f google-research-mcp
npm run docker:health
Option 3: Development Mode
For development with auto-rebuild:
# Terminal 1: Watch for changes and rebuild
npm run dev
# Terminal 2: Start server (after initial build)
npm start
⚙️ Configuration
Required Environment Variables
# Google API Configuration (Required)
GOOGLE_API_KEY=your_google_api_key_here
GOOGLE_SEARCH_ENGINE_ID=your_search_engine_id_here
# Server Configuration (Optional)
NODE_ENV=production
LOG_LEVEL=info
Optional Configuration
# Performance Tuning
SEARCH_CACHE_TTL_MINUTES=5 # Search result cache duration
CONTENT_CACHE_TTL_MINUTES=30 # Content extraction cache duration
MAX_CACHE_ENTRIES=100 # Maximum cache entries
# Request Limits
REQUEST_TIMEOUT_MS=30000 # Request timeout
MAX_CONTENT_SIZE_MB=50 # Maximum content size
CONCURRENT_REQUEST_LIMIT=10 # Concurrent request limit
# Rate Limiting
RATE_LIMIT_WINDOW_MS=60000 # Rate limit window
RATE_LIMIT_MAX_REQUESTS=100 # Max requests per window
Validate Configuration
npm run validate-config
🔧 Usage
MCP Client Integration
The server provides unified search and research capabilities in a single process. Add it to your MCP client configuration (e.g., Claude Desktop):
{
"mcpServers": {
"google-research": {
"command": "node",
"args": ["path/to/google-research-mcp-server/dist/server.js"],
"env": {
"GOOGLE_API_KEY": "your_api_key",
"GOOGLE_SEARCH_ENGINE_ID": "your_search_engine_id"
}
}
}
}
Alternative Configuration (with environment file):
{
"mcpServers": {
"google-research": {
"command": "npm",
"args": ["start"],
"cwd": "path/to/google-research-mcp-server"
}
}
}
Note: This assumes you have a .env file configured in the project directory.
Available Tools
Search Tools
google_search - Search Google with advanced filtering options
  Query: "climate change effects"
  Options: site filter, date restrictions, language, result type
Content Extraction Tools
extract_webpage_content - Extract clean content from web pages
extract_multiple_webpages - Batch extract from multiple URLs
structured_content_extraction - Enhanced extraction with structure preservation
summarize_webpage - Generate webpage summaries
Research & Synthesis Tools
research_topic - Comprehensive topic research with multiple sources
synthesize_content - Combine multiple sources into coherent reports
enhanced_synthesis - Advanced synthesis with contradiction detection
Navigation Tools
contextual_navigation - Smart web browsing that follows relevant links
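Each tool is invoked through the standard MCP tools/call request. The sketch below shows the general shape of such a request for google_search; the argument names (query, num_results, site) are illustrative and should be checked against the tool schema the server reports:
{
  "method": "tools/call",
  "params": {
    "name": "google_search",
    "arguments": {
      "query": "climate change effects",
      "num_results": 5,
      "site": "nasa.gov"
    }
  }
}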
Example Usage Scenarios
Basic Research
1. Search: google_search("renewable energy trends 2024")
2. Extract: extract_webpage_content(top_result_url)
3. Analyze: Multiple sources for comprehensive view
Comprehensive Research Report
1. Research: research_topic("artificial intelligence in healthcare")
2. Synthesis: enhanced_synthesis(multiple_sources)
3. Export: Formatted report with citations
Competitive Analysis
1. Search: Multiple queries for competitor information
2. Navigate: contextual_navigation(competitor_websites)
3. Synthesize: Compare and contrast findings
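The Basic Research flow can also be driven programmatically from any MCP client. Below is a minimal TypeScript sketch using the @modelcontextprotocol/sdk client over stdio; the server path, credentials, tool argument names (query, url), and the extracted URL are placeholders to adapt to your setup:
// basic-research.ts - minimal sketch of the "Basic Research" scenario above
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function basicResearch() {
  // Launch the server over stdio, as an MCP client such as Claude Desktop would
  const transport = new StdioClientTransport({
    command: "node",
    args: ["path/to/google-research-mcp-server/dist/server.js"],
    env: {
      GOOGLE_API_KEY: "your_api_key",
      GOOGLE_SEARCH_ENGINE_ID: "your_search_engine_id",
    },
  });
  const client = new Client({ name: "research-demo", version: "1.0.0" });
  await client.connect(transport);

  // 1. Search
  const search = await client.callTool({
    name: "google_search",
    arguments: { query: "renewable energy trends 2024" },
  });
  console.log(search.content);

  // 2. Extract content from one of the returned URLs (placeholder URL here)
  const page = await client.callTool({
    name: "extract_webpage_content",
    arguments: { url: "https://example.com/top-result" },
  });
  console.log(page.content);

  await client.close();
}

basicResearch().catch(console.error);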
🛠️ Troubleshooting
Common Issues
🔴 API Authentication Errors
Error: Missing required environment variables: GOOGLE_API_KEY
Solution:
- Verify the API key is correctly set in the .env file
- Ensure the Google Custom Search API is enabled in the Google Cloud Console
- Check that the API key has proper permissions and quotas
- Validate the configuration: npm run validate-config
🔴 Rate Limiting Issues
Error: Rate limit exceeded for search requests
Solution:
- Check your Google API quota in Google Cloud Console
- Adjust rate limiting settings in environment variables
- Implement request queuing for high-volume usage
- Consider upgrading your Google API plan
🔴 Content Extraction Failures
Error: Failed to extract content from webpage
Solution:
- Verify the target URL is accessible
- Check if the website blocks automated requests
- Ensure proper User-Agent headers are configured
- Try different extraction methods (structured vs. standard)
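A quick manual check covers the first two points: if a bare request fails but one with a browser-like User-Agent succeeds, the site is likely blocking automated clients (the URL below is a placeholder):
# Check the URL responds at all
curl -I https://example.com/article
# Retry with a browser-like User-Agent to detect bot blocking
curl -I -A "Mozilla/5.0 (X11; Linux x86_64)" https://example.com/article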
🔴 Memory Issues
Warning: Memory usage high: 85%
Solution:
- Reduce cache sizes in configuration
- Lower concurrent request limits
- Monitor content extraction sizes
- Consider scaling horizontally
🔴 Docker Deployment Issues
Container health check failing
Solution:
- Check container logs: docker-compose logs -f google-research-mcp
- Verify environment variables are properly set
- Ensure API connectivity from the container
- Run a manual health check: npm run docker:health
🔴 Non-Docker Deployment Issues
Error: Cannot find module 'dist/server.js'
Solution:
- Ensure you've built the project: npm run build
- Check that the dist/ directory exists and contains compiled files
- Verify TypeScript compilation: npx tsc --noEmit
- Clear and rebuild: rm -rf dist/ && npm run build
Error: EACCES permission denied
Solution:
- Check file permissions: ls -la dist/server.js
- Make it executable if needed: chmod +x dist/server.js
- Run with explicit node: node dist/server.js
Debug Mode
# Enable detailed logging (Non-Docker)
export LOG_LEVEL=debug
npm start
# Enable detailed logging (Docker)
docker-compose exec google-research-mcp sh -c "LOG_LEVEL=debug npm start"
# Check system health
npm run health-check
# Monitor performance (Docker)
docker-compose exec google-research-mcp npm run health-check
Non-Docker Production Deployment
For production deployment without Docker:
Using PM2 (Recommended)
# Install PM2 globally
npm install -g pm2
# Start with PM2
pm2 start dist/server.js --name "google-research-mcp"
# Monitor
pm2 status
pm2 logs google-research-mcp
# Auto-restart on system reboot
pm2 startup
pm2 save
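PM2 can also read its settings from an ecosystem file, which keeps the process name, working directory, and environment under version control. A minimal example (the path and memory limit are illustrative):
// ecosystem.config.js
module.exports = {
  apps: [
    {
      name: "google-research-mcp",
      script: "dist/server.js",
      cwd: "/path/to/google-research-mcp-server",
      env: { NODE_ENV: "production", LOG_LEVEL: "info" },
      max_memory_restart: "512M", // restart the process if it grows past this
      autorestart: true,
    },
  ],
};
Start it with pm2 start ecosystem.config.js.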
Using systemd (Linux)
Create /etc/systemd/system/google-research-mcp.service:
[Unit]
Description=Google Research MCP Server
After=network.target
[Service]
Type=simple
User=your-user
WorkingDirectory=/path/to/google-research-mcp-server
ExecStart=/usr/bin/node dist/server.js
Restart=always
RestartSec=10
Environment=NODE_ENV=production
EnvironmentFile=/path/to/google-research-mcp-server/.env
[Install]
WantedBy=multi-user.target
Then:
sudo systemctl enable google-research-mcp
sudo systemctl start google-research-mcp
sudo systemctl status google-research-mcp
Direct Node.js (Development)
# Simple start
npm start
# With specific environment
NODE_ENV=production LOG_LEVEL=info npm start
# Background process
nohup npm start > server.log 2>&1 &
Performance Optimization
Cache Tuning
# For high-volume usage
SEARCH_CACHE_TTL_MINUTES=10
CONTENT_CACHE_TTL_MINUTES=60
MAX_CACHE_ENTRIES=200
# For memory-constrained environments
SEARCH_CACHE_TTL_MINUTES=2
CONTENT_CACHE_TTL_MINUTES=15
MAX_CACHE_ENTRIES=50
Request Optimization
# For faster responses
REQUEST_TIMEOUT_MS=15000
MAX_CONTENT_SIZE_MB=25
CONCURRENT_REQUEST_LIMIT=5
# For comprehensive extraction
REQUEST_TIMEOUT_MS=60000
MAX_CONTENT_SIZE_MB=100
CONCURRENT_REQUEST_LIMIT=15
📊 Monitoring & Health Checks
Built-in Health Monitoring
# Check overall system health
npm run health-check
# Monitor with Docker
docker-compose exec google-research-mcp npm run health-check
Health Check Response
{
"status": "healthy",
"timestamp": "2024-01-15T10:30:00Z",
"environment": "production",
"uptime": 3600000,
"checks": {
"googleSearch": { "status": "pass", "responseTime": 245 },
"contentExtraction": { "status": "pass", "responseTime": 567 },
"memory": { "status": "pass", "percentage": 45.2 }
}
}
Monitoring Integration
- Prometheus metrics available at /metrics (if enabled) - see the example scrape configuration below
- Structured logging compatible with the ELK stack
- Docker health checks for container orchestration
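If the metrics endpoint is enabled, a scrape job can be added to prometheus.yml. The target below assumes the server's HTTP endpoint is reachable on localhost:3000; adjust the host and port to match your deployment:
scrape_configs:
  - job_name: "google-research-mcp"
    metrics_path: /metrics
    static_configs:
      - targets: ["localhost:3000"]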
🔄 Maintenance
Regular Maintenance Tasks
# Update dependencies
npm audit
npm update
# Security audit
npm run audit:security
# Dependency analysis
npm run audit:dependencies
# Container updates
docker-compose pull
docker-compose up -d
Log Management
# View logs
docker-compose logs -f google-research-mcp
# Log rotation (configure in docker-compose.yml)
docker-compose exec google-research-mcp logrotate -f /etc/logrotate.conf
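As an alternative to running logrotate inside the container, Docker's json-file log driver can cap log size per container. A snippet for docker-compose.yml (the limits are illustrative):
services:
  google-research-mcp:
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"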
🚀 Advanced Usage
Scaling Considerations
- Horizontal Scaling: Deploy multiple instances behind load balancer
- Caching Strategy: Consider Redis for shared caching across instances
- Rate Limiting: Implement distributed rate limiting for multi-instance deployments
Custom Configurations
- Research Templates: Create custom research workflow templates
- Content Filters: Implement custom content filtering rules
- Export Formats: Add custom export format handlers
Integration Examples
- CI/CD Pipeline: Automated research report generation
- Slack Bot: Real-time research queries from team chat
- Web Dashboard: Research workflow management interface
📝 Development
Development Setup
# Install dependencies
npm install
# Start in development mode
npm run dev
# Build for production
npm run build
Project Structure
src/
├── config/ # Configuration management
├── handlers/ # Tool request handlers
├── services/ # Core service implementations
├── tools/ # Tool definitions and schemas
├── types/ # TypeScript type definitions
├── utils/ # Utility functions
└── server.ts # Main server entry point
🤝 Contributing
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
🆘 Support
Getting Help
- GitHub Issues: Report bugs and request features
- Documentation: Check PRODUCTION_DEPLOYMENT.md for the detailed deployment guide
- Health Checks: Use built-in diagnostics for troubleshooting
Common Support Scenarios
- API Setup: Verify Google API credentials and permissions
- Performance Issues: Check cache configuration and system resources
- Deployment Problems: Review Docker logs and health checks
- Integration Questions: Consult MCP client documentation
Built with ❤️ for AI-powered research workflows