# Crawl4AI MCP Server

A locally hosted MCP server that provides AI assistants with advanced web crawling capabilities, including structured data extraction, deep site crawling, and page screenshots. It converts one or more URLs into clean Markdown for processing by LLMs, with no external API keys required for basic features.
## ✨ Features

- 🌐 Web crawling: crawl one or more URLs and get back clean Markdown
- 📝 Structured extraction: extract structured data with CSS/XPath selectors or an LLM
- 📸 Screenshots: capture page screenshots
- 🕸️ Deep crawling: crawl an entire site to a configurable depth
- 🚀 Local-first: runs entirely locally; no API key needed for basic features
- ⚡ High performance: async concurrency with smart caching
## 🚀 Quick Start

### 1. Install

```bash
# Clone the project
git clone <your-repo-url>
cd crawl4ai-mcp

# Use the start script (recommended)
chmod +x start.sh
./start.sh

# Or install manually
uv sync
uv run crawl4ai-setup
```
### 2. Configure

```bash
# Copy the example environment file
cp .env.example .env

# Edit .env and add your API keys (needed for LLM-based structured extraction)
```
### 3. Run

```bash
# stdio mode (for Claude Code)
uv run crawl4ai-mcp

# HTTP mode (for development and debugging)
uv run crawl4ai-mcp --transport http --port 8000
```
## 🔧 Configuring Claude Code

### stdio mode (recommended)

```bash
claude mcp add crawl4ai uv run --project /path/to/crawl4ai-mcp crawl4ai-mcp
```

### HTTP mode

```bash
# 1. Start the server
uv run crawl4ai-mcp --transport http --port 8000

# 2. Add it to Claude Code
claude mcp add --transport http crawl4ai http://localhost:8000/mcp
```
## 📚 Available Tools

### `crawl_url` - Crawl a single page

```python
crawl_url(
    url="https://example.com",
    word_count_threshold=10,
    bypass_cache=False,
    magic=False
)
```
### `crawl_multiple` - Crawl in batch

```python
crawl_multiple(
    urls=["https://example.com/page1", "https://example.com/page2"],
    max_concurrent=3,
    word_count_threshold=10
)
```
### `extract_structured` - Structured extraction

```python
extract_structured(
    url="https://example.com/products",
    instruction="Extract all product names and prices",
    provider="openai/gpt-4o-mini",
    api_token="your-api-key"
)
```
### `get_screenshot` - Page screenshot

```python
get_screenshot(
    url="https://example.com",
    full_page=True,
    viewport_width=1920,
    viewport_height=1080
)
```
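Screenshot tools in the MCP ecosystem typically return the image as a base64 string. Assuming this server does the same (an assumption, not confirmed by the source), saving the result to disk could look like this; the `data` value below is a placeholder, not real tool output:

```python
import base64
from pathlib import Path

# Placeholder: in practice this string would come from the get_screenshot result.
data = base64.b64encode(b"\x89PNG\r\n\x1a\n...").decode()

# Decode the base64 payload back to raw bytes and write it out.
png_bytes = base64.b64decode(data)
Path("screenshot.png").write_bytes(png_bytes)
```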
### `deep_crawl` - Deep crawl

```python
deep_crawl(
    url="https://example.com",
    max_depth=2,
    max_pages=10,
    strategy="bfs"  # or "dfs"
)
```
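The `strategy` parameter controls frontier ordering: `"bfs"` visits pages level by level, while `"dfs"` follows one branch as deep as possible before backtracking. A toy illustration over an in-memory link graph (the graph and traversal are illustrative, not the server's code):

```python
from collections import deque

# Toy link graph: page -> links found on that page.
GRAPH = {
    "/": ["/a", "/b"],
    "/a": ["/a1"],
    "/b": ["/b1"],
    "/a1": [], "/b1": [],
}

def crawl_order(start, strategy="bfs", max_pages=10):
    frontier = deque([start])
    seen, order = {start}, []
    while frontier and len(order) < max_pages:
        # BFS pops from the front (queue); DFS pops from the back (stack).
        page = frontier.popleft() if strategy == "bfs" else frontier.pop()
        order.append(page)
        for link in GRAPH[page]:
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return order

print(crawl_order("/", "bfs"))  # → ['/', '/a', '/b', '/a1', '/b1']
print(crawl_order("/", "dfs"))  # → ['/', '/b', '/b1', '/a', '/a1']
```

With `max_pages` the crawl stops early regardless of strategy, which matches the parameter's role above.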
## 📖 Documentation

- Detailed usage guide - complete tool reference and usage examples
- Crawl4AI documentation - docs for the underlying library
- MCP protocol - the Model Context Protocol specification
## 🛠️ Development

```bash
# Run tests
make test

# Format code
make fmt

# Lint
make lint

# Type-check
make typecheck

# Run all checks
make check
```
## 🔒 Environment Variables

```bash
# LLM API keys (for structured extraction)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...

# Or use the generic configuration
LLM_PROVIDER=openai/gpt-4o-mini
LLM_API_KEY=sk-...
```
## 📄 License

MIT License

## 🤝 Contributing

Issues and PRs are welcome!
## 💬 Support