
Cloudflare AutoRAG MCP Server
A Model Context Protocol (MCP) server that provides search capabilities for Cloudflare AutoRAG instances. This server enables AI assistants like Claude to directly search and query your AutoRAG knowledge base using three distinct search methods.
Features
- 🔍 Basic Search - Vector similarity search without query rewriting or answer generation
- ✏️ Rewrite Search - Vector search with AI query rewriting but no answer generation (returns document chunks only)
- 🤖 AI Search - Full AI-powered search with optional AI response and configurable query rewriting
- ⚙️ Configurable Parameters - Support for score_threshold (default: 0.5) and max_num_results (1-50, default: 10)
- 📄 Pagination Support - AI search supports cursor-based pagination for large result sets (v1.2.0+)
- 🏢 Multi-AutoRAG Support - Manage and search across multiple AutoRAG instances (v2.0.0+)
- 🌐 Remote Deployment - Runs on Cloudflare Workers for scalability
- 🔗 MCP Compatible - Works with Claude Desktop and other MCP clients
Tools
autorag_basic_search
Performs a basic vector similarity search in your Cloudflare AutoRAG index without AI query rewriting or answer generation. Returns raw document chunks only.
Parameters:
- query (string, required) - The search query text (max 10,000 characters)
- score_threshold (number, optional) - Minimum similarity score threshold (0.0-1.0, default: 0.5)
- max_num_results (number, optional) - Maximum number of results to return (1-50, default: 10)
- autorag_name (string, optional) - Name of the AutoRAG instance to use (defaults to the configured default)
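For orientation, here is a minimal TypeScript sketch of how a basic search maps onto Cloudflare's AutoRAG Workers binding. The instance name and query are placeholders, the env type is a structural stub rather than Cloudflare's real types, and the option names follow the documented search() signature (verify against the current Cloudflare docs).

```typescript
// Structural stub of the AI binding, just enough to type-check this sketch.
type Env = {
  AI: { autorag(name: string): { search(opts: object): Promise<{ data: unknown[] }> } };
};

// Basic search: plain vector similarity, no query rewriting, no AI answer.
async function basicSearch(env: Env, query: string) {
  const result = await env.AI.autorag("your-autorag-instance-name").search({
    query,                                     // search query text (max 10,000 chars)
    rewrite_query: false,                      // basic search: no AI query rewriting
    max_num_results: 10,                       // 1-50, default 10
    ranking_options: { score_threshold: 0.5 }, // default similarity threshold
  });
  return result.data; // raw document chunks with scores and metadata
}
```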
autorag_rewrite_search
Performs a vector search with AI query rewriting but no answer generation. Uses Cloudflare's search() method with a configurable rewrite_query option for better semantic matching and returns only document chunks.
Parameters:
- query (string, required) - The search query text (max 10,000 characters)
- score_threshold (number, optional) - Minimum similarity score threshold (0.0-1.0, default: 0.5)
- max_num_results (number, optional) - Maximum number of results to return (1-50, default: 10)
- rewrite_query (boolean, optional) - Whether to rewrite the query for better matching (default: true)
- autorag_name (string, optional) - Name of the AutoRAG instance to use (defaults to the configured default)
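The tool can also be exercised directly over the server's HTTP JSON-RPC transport. The sketch below shows only the tools/call payload for autorag_rewrite_search; a real MCP client (such as mcp-remote) runs the initialize handshake first, and the Worker URL is the placeholder used throughout this README.

```typescript
// Hedged sketch: invoke autorag_rewrite_search with a raw MCP "tools/call" request.
const response = await fetch("https://your-worker-url.workers.dev/", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    jsonrpc: "2.0",
    id: 1,
    method: "tools/call",
    params: {
      name: "autorag_rewrite_search",
      arguments: {
        query: "deployment strategies",
        rewrite_query: true,   // let AutoRAG rephrase the query before matching
        score_threshold: 0.5,
        max_num_results: 10,
      },
    },
  }),
});
const result = await response.json(); // document chunks only, no AI answer
console.log(result);
```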
autorag_ai_search
Performs AI-powered search using Cloudflare's aiSearch() method with an optional AI-generated response. Returns document chunks and, depending on the include_ai_response parameter, an AI answer. Supports pagination for large result sets.
Parameters:
- query (string, required) - The search query text (max 10,000 characters)
- score_threshold (number, optional) - Minimum similarity score threshold (0.0-1.0, default: 0.5)
- max_num_results (number, optional) - Maximum number of results to return (1-50, default: 10)
- rewrite_query (boolean, optional) - Whether to rewrite the query for better semantic matching (default: true)
- include_ai_response (boolean, optional) - Whether to include the AI-generated response in the output (default: false)
- cursor (string, optional) - Pagination cursor from a previous response to fetch the next page of results (v1.2.0+)
- autorag_name (string, optional) - Name of the AutoRAG instance to use (defaults to the configured default)
Response includes:
- data - Array of source document chunks with scores and metadata (always included)
- response - AI-generated answer based on retrieved documents (only when include_ai_response: true)
- has_more - Boolean indicating if more results are available
- next_page - Cursor token for fetching the next page (when has_more is true)
- nextCursor - MCP-compliant cursor field (mirrors the next_page value)
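A hedged sketch of client-side pagination over this tool follows. callTool is a hypothetical helper that wraps a JSON-RPC tools/call request (see the fetch example under autorag_rewrite_search); the field names mirror the response shape listed above.

```typescript
// Shape of a single autorag_ai_search page, per the response fields above.
interface AiSearchPage {
  data: unknown[];   // document chunks (always present)
  response?: string; // AI answer, only when include_ai_response: true
  has_more?: boolean;
  next_page?: string;
}

// Hypothetical helper: sends a JSON-RPC "tools/call" request to the Worker.
declare function callTool(name: string, args: Record<string, unknown>): Promise<AiSearchPage>;

// Follow the cursor until the server reports no more results.
async function fetchAllChunks(query: string): Promise<unknown[]> {
  const chunks: unknown[] = [];
  let cursor: string | undefined;
  do {
    const page = await callTool("autorag_ai_search", {
      query,
      include_ai_response: false, // chunks only; let the client LLM write the answer
      max_num_results: 10,
      ...(cursor ? { cursor } : {}),
    });
    chunks.push(...page.data);
    cursor = page.has_more ? page.next_page : undefined;
  } while (cursor);
  return chunks;
}
```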
list_autorags (v2.0.0+)
Lists all available AutoRAG instances configured in the server.
Parameters: None
Response includes:
- autorags - Array of AutoRAG instances with name, description, and is_default flag
- total - Total number of configured AutoRAG instances
- default - Name of the default AutoRAG instance
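The sketch below shows one plausible way this response could be derived from the AUTORAG_INSTANCES and AUTORAG_DESCRIPTIONS variables described under Configuration. It illustrates the configuration contract, not the server's actual implementation.

```typescript
interface AutoragEnv {
  AUTORAG_NAME?: string;         // single-instance configuration
  AUTORAG_INSTANCES?: string;    // comma-separated instance names
  AUTORAG_DESCRIPTIONS?: string; // comma-separated descriptions, same order
}

// Hypothetical derivation of the list_autorags response from the env vars.
function listAutorags(env: AutoragEnv) {
  const names = (env.AUTORAG_INSTANCES ?? env.AUTORAG_NAME ?? "")
    .split(",").map((s) => s.trim()).filter(Boolean);
  const descriptions = (env.AUTORAG_DESCRIPTIONS ?? "").split(",").map((s) => s.trim());
  const autorags = names.map((name, i) => ({
    name,
    description: descriptions[i] ?? "",
    is_default: i === 0, // assumption: the first configured instance is the default
  }));
  return { autorags, total: autorags.length, default: autorags[0]?.name ?? "" };
}
```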
get_current_autorag (v2.0.0+)
Gets information about the currently configured default AutoRAG instance.
Parameters: None
Response includes:
- current_autorag - Name of the current default AutoRAG instance
- description - Description of the instance
- is_default - Always true for this endpoint
Prerequisites
- Cloudflare Account with AutoRAG access
- AutoRAG Instance - Created and indexed in your Cloudflare account
- Wrangler CLI - For deployment (npm install --save-dev wrangler)
Deployment
- Clone the repository:
  git clone <repository-url>
  cd cf-autorag-mcp
- Install dependencies:
  npm install
- Configure your AutoRAG instance: Edit wrangler.toml and update the configuration. For a single AutoRAG instance:
  [vars]
  AUTORAG_NAME = "your-autorag-instance-name"
  For multiple AutoRAG instances:
  [vars]
  AUTORAG_INSTANCES = "instance1,instance2,instance3"
  AUTORAG_DESCRIPTIONS = "Description 1,Description 2,Description 3"
- Deploy to Cloudflare Workers:
  npx wrangler deploy
  This will output your Worker URL, which you'll need for the MCP client configuration.
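To sanity-check the deployment before wiring up a client, you can send a JSON-RPC initialize request directly to the Worker. This assumes the HTTP transport described under Technical Details; the URL is the placeholder used in this README.

```typescript
// Smoke test: ask the deployed Worker to initialize an MCP session.
const res = await fetch("https://your-worker-url.workers.dev/", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    jsonrpc: "2.0",
    id: 1,
    method: "initialize",
    params: {
      protocolVersion: "2024-11-05",       // MCP version this server targets
      capabilities: {},
      clientInfo: { name: "smoke-test", version: "0.0.1" },
    },
  }),
});
console.log(res.status, await res.json()); // expect a JSON-RPC result with server info
```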
Claude Desktop Configuration
To use this MCP server with Claude Desktop, add the following configuration to your Claude Desktop config file:
macOS
Edit ~/Library/Application Support/Claude/claude_desktop_config.json:
Windows
Edit %APPDATA%/Claude/claude_desktop_config.json:
Configuration
{
"mcpServers": {
"cf-autorag-mcp": {
"command": "npx",
"args": [
"mcp-remote",
"https://your-worker-url.workers.dev/"
]
}
}
}
Replace https://your-worker-url.workers.dev/ with your actual deployed Worker URL.
After updating the configuration:
- Restart Claude Desktop
- You should see the AutoRAG search tools available in your conversation
Configuration
Environment Variables
The server uses the following Cloudflare Worker bindings:
- AI - Cloudflare AI binding for AutoRAG access (handles all AutoRAG operations)
- AUTORAG_NAME - Your AutoRAG instance name (for single-instance configuration)
- AUTORAG_INSTANCES - Comma-separated list of AutoRAG instances (for multi-instance configuration)
- AUTORAG_DESCRIPTIONS - Comma-separated list of descriptions for each instance
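Taken together, these bindings imply a Worker environment roughly like the following sketch. The Ai type name assumes a recent @cloudflare/workers-types release; adjust it to whatever your toolchain provides.

```typescript
// Sketch of the Worker environment implied by the bindings and variables above.
interface Env {
  AI: Ai;                        // AI binding; AutoRAG calls go through env.AI.autorag(name)
  AUTORAG_NAME?: string;         // single-instance configuration
  AUTORAG_INSTANCES?: string;    // comma-separated instance names (multi-instance, v2.0.0+)
  AUTORAG_DESCRIPTIONS?: string; // comma-separated descriptions, same order as AUTORAG_INSTANCES
}
```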
Wrangler Configuration
The wrangler.toml file includes:
name = "cf-autorag-mcp"
main = "src/server.ts"
compatibility_date = "2024-09-23"
compatibility_flags = ["nodejs_compat"]
[vars]
# For single AutoRAG instance:
AUTORAG_NAME = "your-autorag-instance-name"
# For multiple AutoRAG instances (v2.0.0+):
# AUTORAG_INSTANCES = "default-autorag,secondary-autorag,specialized-autorag"
# AUTORAG_DESCRIPTIONS = "Main knowledge base,Secondary knowledge base,Specialized documents"
[ai]
binding = "AI"
Note: The VECTORIZE binding is not required. AutoRAG manages its own vector index access internally through the AI binding.
Usage Examples
Once configured with Claude Desktop, you can use the tools like this:
Basic Search (no query rewriting, no AI response):
Search for documents about "machine learning" in my AutoRAG with a minimum score threshold of 0.7
Rewrite Search (AI query rewriting, no AI response):
Use rewrite search to find information about "deployment strategies" with query rewriting enabled
AI Search with Document Chunks Only (default behavior):
Use AI search to find information about "deployment strategies" with max 5 results
AI Search with AI-Generated Response:
Use AI search to find information about "deployment strategies" and include the AI-generated response
Multi-AutoRAG Usage (v2.0.0+):
List all available AutoRAG instances
Search for "security policies" in the secondary-autorag instance
Use AI search in specialized-autorag to find "compliance requirements" with AI response
Important Notes:
- autorag_basic_search performs pure vector search without any AI enhancements
- autorag_rewrite_search uses AI query rewriting but returns document chunks only
- autorag_ai_search by default returns document chunks only (letting the client LLM generate responses), but can optionally include Cloudflare's AI-generated response
- All tools use a default score threshold of 0.5 if not specified
- All tools support the same parameter structure for consistent usage
- Metadata filtering is not supported in Workers bindings - use the REST API if you need filtered queries (see the sketch below)
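If you do need metadata filtering, a REST call along the following lines is one option. The endpoint path, token scope, and filter shape here are assumptions to be checked against Cloudflare's AutoRAG REST API documentation before use.

```typescript
// Hedged sketch of a filtered AutoRAG query via the REST API
// (filters are not available through the Workers binding).
async function filteredSearch(accountId: string, ragName: string, apiToken: string) {
  // Assumed endpoint shape; confirm against the current Cloudflare docs.
  const url = `https://api.cloudflare.com/client/v4/accounts/${accountId}/autorag/rags/${ragName}/ai-search`;
  const res = await fetch(url, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      query: "compliance requirements",
      // Example metadata filter; the exact filter schema may differ.
      filters: { type: "eq", key: "folder", value: "policies/" },
    }),
  });
  return res.json();
}
```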
Development
Local Development
# Start local development server
npm run dev
# Build for production
npm run build
Project Structure
cf-autorag-mcp/
├── src/
│ └── server.ts # Main MCP server implementation
├── wrangler.toml # Cloudflare Workers configuration
├── package.json # Dependencies and scripts
└── README.md # This file
Technical Details
- Protocol: JSON-RPC 2.0 over HTTP
- Runtime: Cloudflare Workers with Node.js compatibility
- MCP Version: 2024-11-05
- Transport: HTTP-based (no streaming)
- Default Score Threshold: 0.5 for all search tools
- Parameter Validation: Comprehensive validation with clear error messages
Troubleshooting
Common Issues
- "AutoRAG instance not found"
  - Verify your AUTORAG_NAME in wrangler.toml
  - Ensure your AutoRAG instance is properly created and indexed
- "MCP server disconnected"
  - Check that your Worker URL is correct in the Claude Desktop config
  - Verify the Worker is deployed and accessible
- "Tool not found" errors
  - Restart Claude Desktop after configuration changes
  - Check the Worker logs: npx wrangler tail
- Empty search results
  - Try lowering the score_threshold parameter (default is 0.5)
  - Verify your AutoRAG index has been populated with documents
  - Check that your query terms exist in the indexed content
Logs
View real-time logs from your deployed Worker:
npx wrangler tail
Version History
- v2.0.0 - Multi-AutoRAG support, enhanced schema documentation, removed VECTORIZE binding
- v1.2.0 - Added cursor-based pagination support for AI search tool
- v1.1.3 - Removed filters parameter (not supported in Workers bindings), added helpful error messages for filter attempts
- v1.1.2 - Attempted to fix filter format (discovered Workers bindings don't support filters)
- v1.1.1 - Added include_ai_response parameter to AI search tool, default score threshold of 0.5, comprehensive parameter validation
- v1.1.0 - Added three distinct search tools with boolean parameter support
- v1.0.0 - Initial release
License
This project is licensed under the MIT License.
Contributing
- Fork the repository
- Create a feature branch
- Make your changes
- Test thoroughly
- Submit a pull request
Support
For issues related to:
- Cloudflare AutoRAG: Cloudflare AutoRAG Documentation
- Model Context Protocol: MCP Documentation
- This Server: Open an issue in this repository