Discover Awesome MCP Servers
Extend your agent with 17,755 capabilities via MCP servers.
- All 17,755
- Developer Tools 3,867
- Search 1,714
- Research & Data 1,557
- AI Integration Systems 229
- Cloud Platforms 219
- Data & App Analysis 181
- Database Interaction 177
- Remote Shell Execution 165
- Browser Automation 147
- Databases 145
- Communication 137
- AI Content Generation 127
- OS Automation 120
- Programming Docs Access 109
- Content Fetching 108
- Note Taking 97
- File Systems 96
- Version Control 93
- Finance 91
- Knowledge & Memory 90
- Monitoring 79
- Security 71
- Image & Video Processing 69
- Digital Note Management 66
- AI Memory Systems 62
- Advanced AI Reasoning 59
- Git Management Tools 58
- Cloud Storage 51
- Entertainment & Media 43
- Virtualization 42
- Location Services 35
- Web Automation & Stealth 32
- Media Content Processing 32
- Calendar Management 26
- Ecommerce & Retail 18
- Speech Processing 18
- Customer Data Platforms 16
- Travel & Transportation 14
- Education & Learning Tools 13
- Home Automation & IoT 13
- Web Search Integration 12
- Health & Wellness 10
- Customer Support 10
- Marketing 9
- Games & Gamification 8
- Google Cloud Integrations 7
- Art & Culture 4
- Language Translation 3
- Legal & Compliance 2
Minimal MCP Hello Server
A minimal MCP server template that provides a basic endpoint for testing ChatGPT integration. Designed to be deployed on Render's free plan as a starting point for building custom MCP servers with external API integrations.
MoEngage Documentation MCP Server
Provides comprehensive access to MoEngage documentation from the developers, help, and partners portals, with full-text search, automatic updates, and intelligent filtering by platform, category, and source.
Homebrew MCP Server
A Model Context Protocol server that enables interacting with the Homebrew package manager on macOS through natural language, supporting commands like install, search, info, and doctor.
Dropbox MCP Server
Provides read access to Dropbox files with advanced search and content extraction capabilities. Supports browsing, reading, and searching within various file types including PDFs, DOCX, and text files.
Claude MCP Servers
A toolkit for managing Claude desktop app extension configurations, enabling features like Brave search, file operations, Git/GitHub integration, shell access, web scraping, HTTP requests, AWS documentation search, and Obsidian note management.
Unpaywall MCP Server
Enables AI clients to search for academic papers, fetch metadata by DOI, retrieve open access PDF links, and extract full text from research papers using the Unpaywall API. Provides seamless access to scholarly literature for research and analysis tasks.
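For a sense of the underlying calls, here is a minimal sketch of a DOI lookup against the public Unpaywall REST API, which requires an `email` query parameter; the helper name and example address are illustrative, not part of this server.

```python
import requests

# Fetch a paper's metadata and open-access PDF link by DOI via the
# public Unpaywall API. Unpaywall requires an email parameter;
# "you@example.com" is a placeholder for your own address.
def lookup_doi(doi: str, email: str = "you@example.com") -> dict:
    resp = requests.get(f"https://api.unpaywall.org/v2/{doi}", params={"email": email})
    resp.raise_for_status()
    record = resp.json()
    oa = record.get("best_oa_location") or {}
    return {"title": record.get("title"), "pdf_url": oa.get("url_for_pdf")}

print(lookup_doi("10.1038/nature12373"))
```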
golf-course-project-mcp-server
Golf Course Project MCP Server.
Mcp Android Adb Server
Enables AI large language models to operate Android devices through ADB.
Google Ads Content Generator MCP Server
An MCP server implementation that enables generation of Google Ads content through Claude Code, Claude Desktop, and other MCP-compatible clients.
GraphQL MCP Server by CData
DuckDB-RAG-MCP-Sample
An MCP server that enables RAG (Retrieval-Augmented Generation) on markdown documents by converting them to embedding vectors and performing vector search using DuckDB.
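The pipeline this entry describes (embed markdown chunks, store the vectors in DuckDB, rank by similarity at query time) fits in a few lines. Below is a minimal sketch assuming a recent DuckDB with `list_cosine_similarity`; the `embed` function is a toy stand-in for the server's real embedding model.

```python
import duckdb

# Hypothetical embedding function; a real server would call an embedding model.
def embed(text: str) -> list[float]:
    return [float(len(text)), float(text.count(" ")), 1.0]  # toy 3-dim vector

con = duckdb.connect()  # in-memory database
con.execute("CREATE TABLE docs (chunk TEXT, vec FLOAT[])")

# Index markdown chunks alongside their embedding vectors.
for chunk in ["# Setup\nInstall with pip.", "# Usage\nRun the server."]:
    con.execute("INSERT INTO docs VALUES (?, ?)", [chunk, embed(chunk)])

# Rank stored chunks by cosine similarity to the query embedding.
query_vec = embed("how do I install this?")
rows = con.execute(
    """
    SELECT chunk, list_cosine_similarity(vec, ?::FLOAT[]) AS score
    FROM docs ORDER BY score DESC LIMIT 3
    """,
    [query_vec],
).fetchall()
print(rows)  # the top-scoring chunks become the RAG context
```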
MCP Video Extraction Plus
Enables video text extraction using multiple speech recognition providers including local Whisper, JianYing/CapCut, and Bilibili Cut services. Supports video downloading, audio extraction, and automatic speech-to-text transcription with configurable providers.
Remote MCP AuthKit
Enables remote MCP server connections with WorkOS AuthKit authentication and user management. Supports organization-centric authentication and permission-based tool access control.
HackathonWatch MCP Server
An MCP server for interacting with the HackathonWatch API, allowing users to access hackathon data and related information through natural language commands.
Gemini Audio Upload
Enables audio file analysis using Google's Gemini multimodal models with support for additional context and system instructions to guide the model's behavior.
Dida365 (TickTick) MCP Server
Provides AI assistants with tools to interact with the Dida365 (TickTick) task management API, allowing them to manage tasks and projects after user authorization.
MCP Server with LLM Integration
Enables chat with multiple LLM providers (OpenAI and Anthropic) while maintaining persistent conversation memory. Provides extensible tool framework for various operations including echo functionality and conversation storage/retrieval.
MCP ChatGPT Multi-Server Suite
A comprehensive suite of four MCP servers providing real-time stock market data, currency conversion between 160+ currencies, world timezone conversion, and unit conversion across 10 measurement categories. Features beautiful web interfaces and ChatGPT integration capabilities.
Multi-Tool Control Platform (MCP) Server
A Python framework for developing and managing tool instances through a registry system, where developers can easily create new tools by inheriting from the BaseHandler class and implementing required methods.
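Only the `BaseHandler` name comes from the description above; the method name, registry, and decorator in this sketch are illustrative guesses at what such a registry system might look like.

```python
from abc import ABC, abstractmethod

# Illustrative base class: concrete tools subclass it and implement run().
class BaseHandler(ABC):
    name: str

    @abstractmethod
    def run(self, **kwargs) -> dict:
        ...

# Registry mapping tool names to instances; populated via a decorator.
REGISTRY: dict[str, BaseHandler] = {}

def register(handler_cls: type[BaseHandler]) -> type[BaseHandler]:
    REGISTRY[handler_cls.name] = handler_cls()
    return handler_cls

@register
class EchoHandler(BaseHandler):
    name = "echo"

    def run(self, **kwargs) -> dict:
        return {"echo": kwargs}

print(REGISTRY["echo"].run(message="hello"))  # {'echo': {'message': 'hello'}}
```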
Marvel MCP Server using Azure Functions
An Azure Functions-based MCP server that enables interaction with Marvel character and comic data through the official Marvel Developer API.
mcp-gomamayo
For "ゴママヨ", an MCP server could be translated as: **芝麻蛋黄酱服务器 (Zhīma dànhuángjiàng fúwùqì)** This is a literal translation: * **芝麻 (Zhīma):** Sesame * **蛋黄酱 (Dànhuángjiàng):** Mayonnaise * **服务器 (Fúwùqì):** Server While accurate, it might sound a bit clunky. Depending on the context, you might consider a more descriptive or brand-specific translation if "ゴママヨ" is a specific product or brand. For example, if it's a specific brand of sesame mayonnaise, you might use the brand name in Chinese followed by "服务器".
munich-mensa-mcp
A remote MCP server for listing and retrieving the menus of the official mensas (university canteens) in Munich.
Jira MCP Server
An MCP server for querying Jira tasks.
athenahealth MCP Server
Enables AI-powered clinical decision support by integrating with athenahealth's API to access patient data, manage prescriptions, check drug interactions, and generate clinical assessments. Provides HIPAA-compliant healthcare workflows with comprehensive audit logging and data sanitization.
MCP_claude
A demonstration of how to build an MCP server for the Claude Desktop MCP client.
crawl4ai-mcp
A conceptual Python outline for an MCP server that wraps the Crawl4AI library. A wrapper class translates JSON-over-HTTP MCP requests into Crawl4AI crawling and text-analysis calls, with per-function parameter validation, structured success/error responses, and notes on security, rate limiting, and robots.txt compliance.
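The outline reduces to a small dispatch pattern: a wrapper class around the library plus an HTTP handler that routes requests by function name. Below is a trimmed, runnable sketch of that pattern; the Crawl4AI calls are stubbed, since the original outline itself marked `crawl4ai.crawl_website` and `crawl4ai.analyze_text` as placeholders rather than verified APIs.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Thin wrapper around the library; the real Crawl4AI calls are stubbed here,
# as they were placeholders in the original outline too.
class Crawl4AIWrapper:
    def crawl_website(self, url: str, max_depth: int = 1) -> dict:
        try:
            results = {"url": url, "max_depth": max_depth}  # replace with the real crawl call
            return {"status": "success", "data": results}
        except Exception as e:
            return {"status": "error", "error_message": str(e)}

    def analyze_text(self, text: str) -> dict:
        try:
            return {"status": "success", "data": {"length": len(text)}}  # replace with the real analysis call
        except Exception as e:
            return {"status": "error", "error_message": str(e)}

wrapper = Crawl4AIWrapper()

class MCPRequestHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        try:
            response, code = self.dispatch(json.loads(body)), 200
        except json.JSONDecodeError:
            response, code = {"status": "error", "error_message": "Invalid JSON"}, 400
        except Exception as e:
            response, code = {"status": "error", "error_message": str(e)}, 500
        self.send_response(code)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(response).encode("utf-8"))

    def dispatch(self, request: dict) -> dict:
        # Route by function name, mirroring the outline's if/elif dispatch.
        params = request.get("parameters", {})
        if request.get("function") == "crawl_website":
            if "url" not in params:
                return {"status": "error", "error_message": "Missing 'url' parameter"}
            return wrapper.crawl_website(params["url"], params.get("max_depth", 1))
        if request.get("function") == "analyze_text":
            if "text" not in params:
                return {"status": "error", "error_message": "Missing 'text' parameter"}
            return wrapper.analyze_text(params["text"])
        return {"status": "error", "error_message": "Invalid function name"}

if __name__ == "__main__":
    HTTPServer(("", 8000), MCPRequestHandler).serve_forever()
```

To exercise it, POST a JSON body such as `{"function": "crawl_website", "parameters": {"url": "https://www.example.com", "max_depth": 1}}` to `http://localhost:8000`.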
mcp-voice-hooks
Voice Mode for Claude Code
Nuclei MCP
Connects Nuclei vulnerability scanner with MCP-compatible applications, enabling AI assistants to perform security testing through natural language interactions.
Apache AGE MCP Server
Enables AI agents to manage and interact with Apache AGE graph databases through natural language. Supports creating, updating, querying, and visualizing multiple graphs with vertices and edges.
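Below is a hedged sketch of the kind of query such a server would issue, assuming a PostgreSQL instance with the AGE extension loaded and a graph named `demo`; the graph name, labels, and connection string are illustrative.

```python
import psycopg2

# Connect to Postgres and enable Apache AGE for this session.
conn = psycopg2.connect("dbname=postgres user=postgres")
cur = conn.cursor()
cur.execute("LOAD 'age';")
cur.execute('SET search_path = ag_catalog, "$user", public;')

# Run a Cypher query against the (assumed) "demo" graph; AGE returns
# results as agtype columns declared in the AS clause.
cur.execute("""
    SELECT * FROM cypher('demo', $$
        MATCH (a:Person)-[:KNOWS]->(b:Person)
        RETURN a.name, b.name
    $$) AS (a_name agtype, b_name agtype);
""")
for a_name, b_name in cur.fetchall():
    print(a_name, "knows", b_name)
```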