WebSearch - Advanced Web Search and Content Extraction Tool
A powerful web search and content extraction tool built with Python, leveraging the Firecrawl API for advanced web scraping, searching, and content analysis capabilities.
🚀 Features
- Advanced Web Search: Perform intelligent web searches with customizable parameters
- Content Extraction: Extract specific information from web pages using natural language prompts
- Web Crawling: Crawl websites with configurable depth and limits
- Web Scraping: Scrape web pages with support for various output formats
- MCP Integration: Built as a Model Context Protocol (MCP) server for seamless integration
📋 Prerequisites
- Python 3.8 or higher
- uv package manager
- Firecrawl API key
- OpenAI API key (optional, for enhanced features)
- Tavily API key (optional, for additional search capabilities)
🛠️ Installation
- Install uv:

```bash
# On Windows (using pip)
pip install uv

# On Unix/macOS
curl -LsSf https://astral.sh/uv/install.sh | sh

# Add uv to PATH (Unix/macOS)
export PATH="$HOME/.local/bin:$PATH"

# Add uv to PATH (Windows): add %USERPROFILE%\.local\bin to Environment Variables
```
- Clone the repository:

```bash
git clone https://github.com/yourusername/websearch.git
cd websearch
```
- Create and activate a virtual environment with uv:

```bash
# Create the virtual environment
uv venv

# Activate on Windows
.\.venv\Scripts\activate.ps1

# Activate on Unix/macOS
source .venv/bin/activate
```
- Install dependencies with uv:

```bash
# Install dependencies from pyproject.toml and the lockfile
uv sync
```
- Set up environment variables:

```bash
# Create the .env file
touch .env
```

Then add your API keys to it:

```env
FIRECRAWL_API_KEY=your_firecrawl_api_key
OPENAI_API_KEY=your_openai_api_key
```
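The keys in `.env` still have to reach the process environment (for example via `python-dotenv` or your shell) before the server reads them. A minimal stdlib-only sketch of a fail-fast lookup; the `require_key` helper is illustrative, not part of this project:

```python
import os

def require_key(name: str) -> str:
    """Return an API key from the environment, failing fast with a clear message."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# Fail at startup, not mid-request, if a key is absent.
os.environ.setdefault("FIRECRAWL_API_KEY", "your_firecrawl_api_key")  # demo value only
firecrawl_key = require_key("FIRECRAWL_API_KEY")
```

Failing at startup gives a clearer error than a cryptic HTTP 401 from the Firecrawl API later on.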
🎯 Usage
Setting Up With Claude for Desktop
Instead of running the server directly, you can configure Claude for Desktop to access the WebSearch tools:
1. Locate or create your Claude for Desktop configuration file:

   - Windows: `%APPDATA%\Claude\claude_desktop_config.json`
   - macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
2. Add the WebSearch server configuration to the `mcpServers` section:

```json
{
  "mcpServers": {
    "websearch": {
      "command": "uv",
      "args": [
        "--directory",
        "D:\\ABSOLUTE\\PATH\\TO\\WebSearch",
        "run",
        "main.py"
      ]
    }
  }
}
```
3. Replace the directory path with the absolute path to your WebSearch project folder.

4. Save the configuration file and restart Claude for Desktop.

Once configured, the WebSearch tools will appear in the tools menu (hammer icon) in Claude for Desktop.
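A single stray comma makes the whole file invalid JSON, so it can be worth sanity-checking it before restarting Claude for Desktop. A small stdlib-only sketch; the inline sample mirrors the entry above:

```python
import json

# Inline sample mirroring the claude_desktop_config.json entry above.
sample = """
{
  "mcpServers": {
    "websearch": {
      "command": "uv",
      "args": ["--directory", "D:\\\\ABSOLUTE\\\\PATH\\\\TO\\\\WebSearch", "run", "main.py"]
    }
  }
}
"""

config = json.loads(sample)       # raises json.JSONDecodeError on a syntax error
server = config["mcpServers"]["websearch"]
assert server["command"] == "uv"  # basic shape checks
assert "main.py" in server["args"]
print("websearch entry parses and looks well-formed")
```

Pointing `json.loads` at the real file's contents (instead of the inline sample) catches syntax errors before Claude ever sees them.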
Available Tools
- Search
- Extract Information
- Crawl Websites
- Scrape Content
📚 API Reference
Search
- `query` (str): The search query
- Returns: Search results in JSON format

Extract
- `urls` (List[str]): List of URLs to extract information from
- `prompt` (str): Instructions for extraction
- `enableWebSearch` (bool): Enable supplementary web search
- `showSources` (bool): Include source references
- Returns: Extracted information in the specified format

Crawl
- `url` (str): Starting URL
- `maxDepth` (int): Maximum crawl depth
- `limit` (int): Maximum pages to crawl
- Returns: Crawled content in markdown/HTML format

Scrape
- `url` (str): Target URL
- Returns: Scraped content with optional screenshots
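The parameter names above map directly onto the JSON arguments an MCP client sends when invoking a tool. A hypothetical helper sketching the Extract tool's payload (the defaults shown are illustrative; the authoritative signatures live in `main.py`):

```python
from typing import List

def build_extract_args(urls: List[str], prompt: str,
                       enable_web_search: bool = False,
                       show_sources: bool = False) -> dict:
    """Assemble the argument payload for an Extract call (illustrative helper)."""
    if not urls:
        raise ValueError("at least one URL is required")
    return {
        "urls": urls,
        "prompt": prompt,
        "enableWebSearch": enable_web_search,
        "showSources": show_sources,
    }

payload = build_extract_args(
    ["https://example.com"],
    "List the page's main headings",
    show_sources=True,
)
print(payload)
```

Note the camelCase keys (`enableWebSearch`, `showSources`): they must match the tool's declared parameter names exactly, or the call is rejected.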
🔧 Configuration
Environment Variables
The tool requires certain API keys to function. We provide a `.env.example` file that you can use as a template:
- Copy the example file:

```bash
# On Unix/macOS
cp .env.example .env

# On Windows
copy .env.example .env
```
- Edit the `.env` file with your API keys:

```env
# OpenAI API key - Required for AI-powered features
OPENAI_API_KEY=your_openai_api_key_here

# Firecrawl API key - Required for web scraping and searching
FIRECRAWL_API_KEY=your_firecrawl_api_key_here
```
Getting the API Keys
1. OpenAI API Key:
   - Visit OpenAI's platform
   - Sign up or log in
   - Navigate to the API keys section
   - Create a new secret key

2. Firecrawl API Key:
   - Visit Firecrawl's website
   - Create an account
   - Navigate to your dashboard
   - Generate a new API key
If everything is configured correctly, you should receive a JSON response with search results.
Troubleshooting
If you encounter errors:
- Ensure all required API keys are set in your `.env` file
- Verify the API keys are valid and have not expired
- Check that the `.env` file is in the root directory of the project
- Make sure the environment variables are being loaded correctly
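The checklist above can be automated. A stdlib-only sketch that reports the most common problems; `diagnose` is a hypothetical helper, not part of the project, and it treats both keys from the `.env` example as required:

```python
import os
from pathlib import Path

REQUIRED_KEYS = ("FIRECRAWL_API_KEY", "OPENAI_API_KEY")

def diagnose(project_root: Path) -> list:
    """Return a list of likely configuration problems (empty list means OK)."""
    problems = []
    if not (project_root / ".env").is_file():
        problems.append(".env not found in the project root")
    for key in REQUIRED_KEYS:
        if not os.environ.get(key):
            problems.append(f"{key} is not set in the environment")
    return problems

for issue in diagnose(Path(".")):
    print("WARN:", issue)
```

Running this from the project root before starting the server surfaces missing-key errors in one place instead of as scattered API failures.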
🤝 Contributing
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
📝 License
This project is licensed under the MIT License - see the LICENSE file for details.
🙏 Acknowledgments
- Firecrawl for their powerful web scraping API
- OpenAI for AI capabilities
- The MCP community for the protocol specification
📬 Contact
José Martín Rodriguez Mortaloni - @m4s1t425 - jmrodriguezm13@gmail.com
Made with ❤️ using Python and Firecrawl