Discover Awesome MCP Servers

Extend your agent with 26,843 capabilities via MCP servers.

Minimal MCP Hello Server

A minimal MCP server template that provides a basic endpoint for testing ChatGPT integration. Designed to be deployed on Render's free plan as a starting point for building custom MCP servers with external API integrations.
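
As a taste of what such a template involves, here is a minimal hello-style server written with the official MCP Python SDK's FastMCP helper. The server name, tool, and greeting are illustrative, not the template's actual code; an HTTP deployment (e.g. on Render) would use an HTTP transport rather than the default stdio.

```python
# A minimal sketch of a hello-style MCP server using the official
# MCP Python SDK (pip install mcp). Names below are illustrative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("hello-server")

@mcp.tool()
def hello(name: str = "world") -> str:
    """Return a greeting, useful for verifying client integration."""
    return f"Hello, {name}!"

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport
```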

Himalaya MCP Server

Enables AI agents to interact with email accounts via the Himalaya CLI, supporting features like message reading, folder management, and controlled email sending. It prioritizes security with graduated permission modes and the exclusion of destructive operations like permanent deletion.

Postgres MCP Server

Enables interaction with PostgreSQL databases through natural language commands. Supports schema exploration, table inspection, DDL generation, data preview, and safe SQL execution with built-in query limits.
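
"Safe SQL execution with built-in query limits" can be pictured as a guard that only admits read-only statements and caps result sizes. The sketch below illustrates that idea in general terms and is not the server's actual implementation:

```python
# Illustrative read-only guard with a row cap, the kind of safety
# check such a server might apply before executing SQL.
def guard_query(sql: str, max_rows: int = 100) -> str:
    statement = sql.strip().rstrip(";")
    if not statement.lower().startswith("select"):
        raise ValueError("Only SELECT statements are allowed")
    if " limit " not in statement.lower():
        statement += f" LIMIT {max_rows}"  # cap unbounded result sets
    return statement

print(guard_query("SELECT * FROM users"))
# -> SELECT * FROM users LIMIT 100
```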

Discord MCP

Enables comprehensive interaction with Discord servers to manage guilds, channels, messages, and members through an agentic workflow. It supports RAG capabilities, rate-limited message operations, and includes a React-based dashboard for monitoring and configuration.
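
"Rate-limited message operations" usually means something like a token bucket sitting in front of the Discord API. A minimal sketch of that general mechanism (not this project's code):

```python
import time

class TokenBucket:
    """Allow `rate` operations per second with bursts up to `capacity`.
    Illustrates rate limiting in general, not this server's internals."""

    def __init__(self, rate: float, capacity: int):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = float(capacity), time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=5)  # ~5 messages per second
print(bucket.allow())  # True until the bucket drains
```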

CodebaseHQ MCP Server

An MCP server that enables users to manage CodebaseHQ tickets and track project activity through natural language. It supports searching tickets, viewing discussions, and performing updates such as creating new tickets or changing assignments and statuses.

Jira - GitHub MCP Server

Enables end-to-end automation of developer workflows from Jira issue tracking to GitHub pull requests through natural language, allowing developers to search issues, create branches, commit changes, and manage PRs directly from their IDE.
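
For flavor, the "manage PRs" step could bottom out in a plain GitHub REST call like the sketch below. The repository, branch names, and Jira key are hypothetical, and the real server's tooling may work quite differently; only the endpoint shape follows GitHub's public REST API.

```python
import os
import requests

# Hypothetical example: open a PR for a branch created from a Jira issue.
# Payload shape follows GitHub's REST API (POST /repos/{owner}/{repo}/pulls).
resp = requests.post(
    "https://api.github.com/repos/acme/backend/pulls",  # hypothetical repo
    headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
    json={
        "title": "PROJ-123: Fix login timeout",          # hypothetical Jira key
        "head": "feature/PROJ-123-login-timeout",
        "base": "main",
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["html_url"])
```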

Sond Core API MCP Server

Enables interaction with the Sond Core API through the Model Context Protocol. It is auto-generated from the OpenAPI specification at https://core-api-lb.sond.com/api-json and provides programmatic access to Sond's services.
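
The auto-generation pattern, in general terms, is to walk the OpenAPI document and emit one tool definition per operation. A schematic sketch under that assumption (the spec URL is the one the project cites; the tool format here is illustrative, not Sond's actual generator):

```python
import requests

# Derive tool definitions from an OpenAPI spec: one tool per operation.
spec = requests.get("https://core-api-lb.sond.com/api-json", timeout=30).json()

tools = []
for path, operations in spec.get("paths", {}).items():
    for method, op in operations.items():
        if method not in ("get", "post", "put", "patch", "delete"):
            continue  # skip non-operation keys such as "parameters"
        tools.append({
            "name": op.get("operationId", f"{method}_{path}"),
            "description": op.get("summary", ""),
            "method": method.upper(),
            "path": path,
        })

print(f"{len(tools)} tools generated")
```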

Tangled MCP Server

Enables interaction with Tangled, a git collaboration platform built on AT Protocol. Supports repository management, branch listing, and issue creation/management through natural language commands.

GLM OCR MCP Server

Enables text extraction from images and PDF files using the ZhipuAI GLM-OCR API. It allows users to process documents through local file paths or base64-encoded data for high-quality OCR results.
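
The "local path or base64" input handling might look roughly like the sketch below. The payload shape the GLM-OCR API actually expects is not specified here, so treat everything past the encoding step as an assumption:

```python
import base64
from pathlib import Path

def to_base64_payload(source: str) -> str:
    """Accept either a local file path or already-encoded base64 data and
    return base64 text ready to send to an OCR API. Illustrative of the
    input handling described above, not this server's actual code."""
    path = Path(source)
    if path.is_file():
        return base64.b64encode(path.read_bytes()).decode("ascii")
    return source  # assume the caller already passed base64 data

payload = to_base64_payload("scan.pdf")  # hypothetical file
```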

Azure MCP Server

Enables natural language interaction with Azure services through Claude Desktop, supporting resource management, subscription handling, and tenant selection, with secure authentication.

Stockfilm — Authentic Vintage Footage

Search and license 217,000+ authentic vintage 8mm home-movie clips from the 1930s to the 1980s. Six tools: text search, clip details, visual similarity, rough-cut timeline builder, rights verification, and instant licensing via x402 USDC payments on Solana and Base. All footage is real archival film restored in 4K; no AI-generated content.

ibex35-mcp

Analyzes relationships among the companies of the IBEX 35, the benchmark index of the Spanish stock exchange.

AFL (Australian Football League) MCP Server

A Model Context Protocol (MCP) server that provides Australian Football League (AFL) data from the Squiggle API.
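
The Squiggle API is a plain JSON-over-HTTPS service, so a fetch underneath this server might look like the sketch below. The semicolon-separated `q=` query style and the `"games"` response key follow Squiggle's public documentation as I understand it; the server's own tool names will differ.

```python
import requests

# Fetch 2024 AFL games from the public Squiggle API. Squiggle asks
# callers to identify themselves with a descriptive User-Agent.
resp = requests.get(
    "https://api.squiggle.com.au/?q=games;year=2024",
    headers={"User-Agent": "example-mcp-client"},
    timeout=30,
)
resp.raise_for_status()
games = resp.json()["games"]
print(len(games), "AFL games in 2024")
```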

token-rugcheck

MCP server for real-time Solana token risk analysis. Cross-references RugCheck.xyz, DexScreener, and GoPlus Security to generate three-layer reports: machine verdict → LLM analysis → raw on-chain evidence. Live on Solana mainnet with USDC micropayments ($0.02/audit). Give any AI agent the ability to check if a token is safe before trading.

Marvel MCP Server using Azure Functions

An MCP server built on Azure Functions that enables interaction with Marvel character and comic data through the official Marvel Developer API.

GitHub Repos Manager MCP Server

MCP Presidio Server

Enables anonymization and deanonymization of sensitive data in text using Microsoft Presidio. Supports session-based storage to reversibly replace sensitive information like passwords and secrets with placeholder tokens.
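
Underneath, the reversible-placeholder idea looks roughly like this: Presidio's analyzer finds sensitive spans, each span is swapped for a token, and a per-session map restores the originals later. This is a concept sketch using the real presidio-analyzer API, not the server's code:

```python
from presidio_analyzer import AnalyzerEngine  # pip install presidio-analyzer

analyzer = AnalyzerEngine()  # requires a spaCy model, e.g. en_core_web_lg
session: dict[str, str] = {}  # placeholder token -> original value

def anonymize(text: str) -> str:
    # Replace from the end of the string so earlier offsets stay valid.
    results = sorted(analyzer.analyze(text=text, language="en"),
                     key=lambda r: r.start, reverse=True)
    for i, r in enumerate(results):
        token = f"<{r.entity_type}_{i}>"
        session[token] = text[r.start:r.end]
        text = text[:r.start] + token + text[r.end:]
    return text

def deanonymize(text: str) -> str:
    for token, original in session.items():
        text = text.replace(token, original)
    return text
```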

munich-mensa-mcp

A remote MCP server for listing the official mensas (canteens) in Munich and retrieving their menus.

Clawslist MCP Server

Enables AI agents to interact with the Clawslist marketplace to browse, create, and manage listings using the Model Context Protocol. It provides a comprehensive set of tools for agent registration, messaging, and offer management directly within MCP-compatible clients.

cmux-agent-mcp

A programmable terminal control plane that enables AI agents to orchestrate, monitor, and interact with multiple parallel AI CLI sessions and browser instances within CMUX. It provides over 80 tools for workspace management, pane manipulation, and cross-agent communication to facilitate complex multi-project workflows.

Firestore Advanced MCP

A Model Context Protocol server that enables large language models like Claude to interact comprehensively with Firebase Firestore databases, supporting full CRUD operations, complex queries, and advanced features such as transactions and TTL management.
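
The operations it brokers map onto the standard Firestore client library; for a sense of scale, full CRUD in the Python client is just a few calls. A sketch with hypothetical collection and document names, not the server's code:

```python
from google.cloud import firestore  # pip install google-cloud-firestore

db = firestore.Client()  # uses application-default credentials

doc = db.collection("users").document("alice")   # hypothetical names
doc.set({"plan": "free", "logins": 0})           # create
print(doc.get().to_dict())                       # read
doc.update({"logins": firestore.Increment(1)})   # update
doc.delete()                                     # delete
```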

Prism MCP

Production-ready MCP server with session memory, Brave Search, Vertex AI Discovery Engine, Google Gemini analysis, and sandboxed code-mode transforms.

Edit-MCP

A Model Context Protocol server that integrates with Microsoft's Edit tool, allowing AI systems to perform file operations from simple reads/writes to complex code editing and refactoring.

Planning MCP Server

Provides comprehensive Australian planning property reports, including zoning, overlays, land size, and utility information for AI assistants. This high-performance MCP server is built for Cloudflare Workers and enables real-time property data retrieval through an HTTP-based interface.

Japanese Weather MCP Server

A Model Context Protocol (MCP) server that provides access to Japanese weather forecasts using the weather.tsukumijima.net API.
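
The upstream API takes a city ID and returns JSON, so a raw call looks like the sketch below. The endpoint path follows the API's documented Livedoor-compatible scheme, 130010 is Tokyo's city ID in that scheme, and the response fields shown are the commonly used ones; verify against the API before relying on them.

```python
import requests

# Fetch the Tokyo forecast from the weather.tsukumijima.net API.
resp = requests.get(
    "https://weather.tsukumijima.net/api/forecast/city/130010",
    timeout=30,
)
resp.raise_for_status()
data = resp.json()
print(data["title"])  # e.g. a "Tokyo weather" heading
for day in data["forecasts"]:
    print(day["dateLabel"], day["telop"])  # e.g. "Today", "Sunny"
```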

crawl4ai-mcp

A conceptual Python outline for an MCP server that wraps the Crawl4AI library. An HTTP server accepts JSON requests that name a function ("crawl_website" or "analyze_text") and its parameters; a wrapper layer translates them into Crawl4AI calls; results come back as JSON with a "status" field plus either "data" or an "error_message". The Crawl4AI calls below are placeholders to be replaced with the library's actual API.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

import crawl4ai  # adapt the import and calls to the actual Crawl4AI API


class Crawl4AIWrapper:
    """Translates MCP-style requests into Crawl4AI library calls."""

    def crawl_website(self, url, max_depth=1):
        try:
            results = crawl4ai.crawl_website(url, max_depth=max_depth)  # placeholder call
            return {"status": "success", "data": results}
        except Exception as e:
            return {"status": "error", "error_message": str(e)}

    def analyze_text(self, text):
        try:
            results = crawl4ai.analyze_text(text)  # placeholder call
            return {"status": "success", "data": results}
        except Exception as e:
            return {"status": "error", "error_message": str(e)}


wrapper = Crawl4AIWrapper()  # shared instance; avoids per-request setup


class MCPRequestHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        try:
            request = json.loads(self.rfile.read(length).decode("utf-8"))
            response, code = self.dispatch(request), 200
        except json.JSONDecodeError:
            response, code = {"status": "error", "error_message": "Invalid JSON"}, 400
        except Exception as e:
            response, code = {"status": "error", "error_message": str(e)}, 500
        self.send_response(code)
        self.send_header("Content-type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(response).encode("utf-8"))

    def dispatch(self, request):
        """Route the request to the matching wrapper function."""
        function = request.get("function")
        params = request.get("parameters", {})
        if function == "crawl_website":
            if "url" not in params:
                return {"status": "error", "error_message": "Missing 'url' parameter"}
            return wrapper.crawl_website(params["url"], params.get("max_depth", 1))
        if function == "analyze_text":
            if "text" not in params:
                return {"status": "error", "error_message": "Missing 'text' parameter"}
            return wrapper.analyze_text(params["text"])
        return {"status": "error", "error_message": "Invalid function name"}


if __name__ == "__main__":
    print("Starting MCP server on port 8000")
    HTTPServer(("", 8000), MCPRequestHandler).serve_forever()
```

Example request:

```bash
curl -X POST -H "Content-Type: application/json" \
  -d '{"function": "crawl_website", "parameters": {"url": "https://www.example.com", "max_depth": 1}}' \
  http://localhost:8000
```

For production use, the outline's own caveats apply: swap the bare HTTP server for a framework such as Flask or FastAPI with authentication and input validation, handle long-running crawls asynchronously, add error logging and rate limiting, set a descriptive User-Agent, and respect robots.txt on the sites being crawled.

Groq MCP Server

UniProt MCP Server

mcp-voice-hooks

Voice Mode for Claude Code