Discover Awesome MCP Servers

Extend your agent with 16,080 capabilities via MCP servers.
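
All of these servers speak the same Model Context Protocol, so an agent client can discover and call their tools in a uniform way. For orientation only, here is a minimal sketch of what an MCP server looks like when built with the official Python SDK's FastMCP helper; the server name and the `add` tool are placeholders, not any specific entry from this list.

```python
# Minimal MCP server sketch using the official Python SDK (package "mcp").
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")  # placeholder server name


@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers -- a stand-in for a real capability."""
    return a + b


if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so an MCP client can connect
```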

Zoom API MCP Server

A comprehensive Model Context Protocol server that enables interaction with the full range of Zoom API endpoints, providing structured tools with proper validation and OAuth 2.0 authentication for managing meetings, users, webinars, and other Zoom resources.

Mcp Server

Network School Events MCP Server

Enables access to Network School Luma calendar events through Claude Desktop. Allows users to view today's events, search upcoming events, and register for events directly with their contact information.

Gemini Image MCP Server

Enables image generation and editing using Google Gemini AI with support for multiple aspect ratios, context images, custom styles, and watermark overlays. Optimized for creating social media content with automatic file saving and flexible output configuration.

Firewalla MCP Server

Provides real-time access to Firewalla firewall data through 28 specialized tools for network monitoring, security analysis, bandwidth tracking, and firewall rule management. Enables users to query security alerts, analyze network flows, monitor device status, and manage firewall configurations through natural language.

protolint-mcp

mcp-nckuhub-server

Salesforce MCP

Apache AGE MCP Server

Enables AI agents to manage and interact with Apache AGE graph databases through natural language. Supports creating, updating, querying, and visualizing multiple graphs with vertices and edges.
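
For context on what such an agent ends up executing: Apache AGE is a PostgreSQL extension that embeds Cypher inside SQL via the `cypher()` function. A hedged sketch of a direct query with psycopg2, where the connection string, graph name, and labels are placeholders:

```python
import psycopg2

# Placeholder connection details; point these at your own Postgres + AGE instance.
conn = psycopg2.connect("dbname=graphdb user=postgres")
cur = conn.cursor()

# AGE is loaded per session and its functions live in the ag_catalog schema.
cur.execute("LOAD 'age';")
cur.execute('SET search_path = ag_catalog, "$user", public;')
cur.execute("SELECT create_graph('demo_graph');")  # errors if the graph already exists

# Cypher embedded in SQL: create two vertices connected by an edge.
cur.execute("""
    SELECT * FROM cypher('demo_graph', $$
        CREATE (a:Person {name: 'Ada'})-[:KNOWS]->(b:Person {name: 'Grace'})
        RETURN a, b
    $$) AS (a agtype, b agtype);
""")
print(cur.fetchall())
conn.commit()
```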

Bilibili-Mcp-Server

Fathom-Simple-MCP

A Model Context Protocol (MCP) server for accessing Fathom AI API endpoints (meetings, recordings, transcripts, summaries, teams, team members) via GET operations.

Serpex MCP

Enables multi-engine web search capabilities across Google, Bing, DuckDuckGo, Brave, Yahoo, and Yandex through the Serpex API. Features automatic engine routing, time filtering, and structured JSON results for reliable web search integration.

munich-mensa-mcp

A remote MCP server for listing the official mensas (canteens) in Munich and retrieving their menus.

Amazon Rainforest Travel Guide

An AI-powered travel assistant connected to Claude via MCP that helps users plan Amazon Rainforest trips by providing directions, hotel recommendations, adventure spots, cultural sites, and travel precautions.

National Park Data MCP Server

A Python-based server that exposes all 29 tools available to developers through the National Park Service (NPS) API.
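
The underlying NPS Data API is a plain REST interface authenticated with an `api_key` query parameter. A sketch of a raw call to its parks endpoint, one of the resources a server like this would wrap (the park code and key are placeholders):

```python
import requests

API_KEY = "YOUR_NPS_API_KEY"  # free key from the NPS developer portal

# Look up Yellowstone via the /parks endpoint.
resp = requests.get(
    "https://developer.nps.gov/api/v1/parks",
    params={"parkCode": "yell", "api_key": API_KEY},
    timeout=30,
)
resp.raise_for_status()
for park in resp.json().get("data", []):
    print(park["fullName"])
```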

Bybit MCP Server

Enables cryptocurrency trading on Bybit exchange through comprehensive market data access, account management, and automated trading operations. Features smart position validation, trailing stop losses, and risk management tools with demo mode support for safe testing.

Data Labeling MCP Server

An MCP Server that enables interaction with Google's Data Labeling API, allowing users to manage datasets, annotations, and labeling tasks through natural language commands.

YAPI MCP Server

Commit Messages MCP Server

An MCP server that helps generate high-quality commit messages based on file changes.
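
The core of any such server is "read the staged diff, then summarize it." A rough sketch of that step, where the one-line heuristic stands in for whatever model or rules the real server uses to write the message:

```python
import subprocess


def staged_diff() -> str:
    """Return the staged changes -- the raw material for a commit message."""
    return subprocess.run(
        ["git", "diff", "--staged"],
        capture_output=True, text=True, check=True,
    ).stdout


def draft_commit_message(diff: str) -> str:
    """Placeholder heuristic: name the touched files in the subject line."""
    files = sorted({
        line.split(" b/")[-1]
        for line in diff.splitlines()
        if line.startswith("diff --git")
    })
    return f"Update {', '.join(files)}" if files else "Nothing staged"


if __name__ == "__main__":
    print(draft_commit_message(staged_diff()))
```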

Customer Reminder MCP

Enables automated customer reminder management by integrating with Google Sheets to read customer data and sending scheduled email reminders based on due dates. Supports personalized email templates, intelligent scheduling, and duplicate prevention with Gmail SMTP integration.
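
Assuming a sheet with `name`, `email`, and `due_date` columns, the read-then-remind flow described above can be sketched with gspread and Gmail SMTP; the spreadsheet title, credential file, and addresses are placeholders, not the project's actual configuration:

```python
import smtplib
from datetime import date
from email.message import EmailMessage

import gspread

gc = gspread.service_account(filename="service_account.json")   # Google API credentials
rows = gc.open("Customer Reminders").sheet1.get_all_records()    # assumed sheet layout

with smtplib.SMTP_SSL("smtp.gmail.com", 465) as smtp:
    smtp.login("you@gmail.com", "GMAIL_APP_PASSWORD")  # Gmail app password
    for row in rows:
        if row.get("due_date") == date.today().isoformat():  # assumed ISO date column
            msg = EmailMessage()
            msg["From"] = "you@gmail.com"
            msg["To"] = row["email"]
            msg["Subject"] = f"Payment reminder for {row['name']}"
            msg.set_content(f"Hi {row['name']}, your payment is due today.")
            smtp.send_message(msg)
```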

12306 MCP Server

A high-performance FastAPI backend for train ticket queries in China, supporting real-time ticket availability, station information, transfers, and train schedules through the Model Context Protocol for AI assistants and automation.
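
To illustrate the general shape of such a FastAPI backend (the endpoint path, parameters, and in-memory lookup below are hypothetical, not the project's actual API):

```python
from fastapi import FastAPI

app = FastAPI(title="Train ticket query (sketch)")

# Hypothetical stand-in for the real 12306 lookup logic.
FAKE_TICKETS = [
    {"train": "G1", "from": "Beijing South", "to": "Shanghai Hongqiao",
     "date": "2024-06-01", "seats_left": 12},
]


@app.get("/tickets")
def query_tickets(from_station: str, to_station: str, date: str):
    """Return tickets between two stations on a given date."""
    return [
        t for t in FAKE_TICKETS
        if t["from"] == from_station and t["to"] == to_station and t["date"] == date
    ]
```

Run it with `uvicorn sketch:app --reload` and query `/tickets?from_station=Beijing%20South&to_station=Shanghai%20Hongqiao&date=2024-06-01`.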

Remote MCP Server on Cloudflare

Firestore Advanced MCP

A Model Context Protocol server that enables large language models such as Claude to interact comprehensively with Firebase Firestore databases, supporting full CRUD operations, complex queries, and advanced features such as transactions and TTL management.
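
For orientation, the operations described above map directly onto the google-cloud-firestore client library; a hedged sketch where collection and field names are placeholders and credentials are assumed to come from the environment:

```python
from google.cloud import firestore

db = firestore.Client()  # uses GOOGLE_APPLICATION_CREDENTIALS from the environment

# Basic CRUD on a document.
doc = db.collection("users").document("alice")
doc.set({"name": "Alice", "visits": 1})
doc.update({"visits": firestore.Increment(1)})
print(doc.get().to_dict())

# A simple filtered query (newer clients prefer the FieldFilter form of where()).
for snap in db.collection("users").where("visits", ">", 0).stream():
    print(snap.id, snap.to_dict())


# A transaction that reads and writes atomically.
@firestore.transactional
def bump(transaction, ref):
    snapshot = ref.get(transaction=transaction)
    transaction.update(ref, {"visits": snapshot.get("visits") + 1})


bump(db.transaction(), doc)
```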

FastMCP SonarQube Metrics

A server that provides tools for retrieving SonarQube project metrics and quality data through a simplified message-based approach, allowing users to programmatically access metrics, historical data, and component-level information from SonarQube.
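
The metrics ultimately come from SonarQube's Web API; a direct call to the measures endpoint looks roughly like this, with the host, token, project key, and metric keys as placeholders:

```python
import requests

SONAR_URL = "https://sonarqube.example.com"  # your SonarQube host
TOKEN = "YOUR_SONARQUBE_TOKEN"               # user token, sent as the basic-auth username

resp = requests.get(
    f"{SONAR_URL}/api/measures/component",
    params={"component": "my-project-key", "metricKeys": "bugs,coverage,code_smells"},
    auth=(TOKEN, ""),  # token as username, empty password
    timeout=30,
)
resp.raise_for_status()
for measure in resp.json()["component"]["measures"]:
    print(measure["metric"], measure["value"])
```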

Edit-MCP

A Model Context Protocol server that integrates with Microsoft's Edit tool, allowing AI systems to perform file operations from simple reads/writes to complex code editing and refactoring.

Japanese Weather MCP Server

A Model Context Protocol (MCP) server that provides access to Japanese weather forecasts using the weather.tsukumijima.net API.
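
That API is a simple, unauthenticated JSON endpoint keyed by a city code; a direct call looks roughly like this (130010 is Tokyo, and the response fields shown are as I understand the API, so treat them as assumptions):

```python
import requests

# weather.tsukumijima.net forecast API; city code 130010 = Tokyo.
resp = requests.get("https://weather.tsukumijima.net/api/forecast/city/130010", timeout=30)
resp.raise_for_status()
forecast = resp.json()

print(forecast["title"])
for day in forecast["forecasts"]:
    print(day["dateLabel"], day["telop"])  # e.g. "今日 晴れ"
```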

crawl4ai-mcp

An MCP (Model Context Protocol) server that wraps the Crawl4AI library, exposing website crawling, single-page text extraction, and link extraction as functions that can be served over HTTP.

```python
from crawl4ai import Crawl4AI  # pip install crawl4ai


class MCP_Server:
    """MCP server wrapping the Crawl4AI library behind a simplified interface."""

    def __init__(self, api_key=None, proxy=None):
        # Initialize Crawl4AI with your API key and an optional proxy.
        self.crawl4ai = Crawl4AI(api_key=api_key, proxy=proxy)

    def crawl_website(self, url, max_depth=1, max_pages=10, follow_robots_txt=True,
                      include_images=False, include_pdfs=False):
        """Crawl a website and return the extracted content as a dict (likely keyed by
        'pages', 'images', 'pdfs', ...), or None if an error occurs."""
        try:
            return self.crawl4ai.crawl_website(
                url,
                max_depth=max_depth,
                max_pages=max_pages,
                follow_robots_txt=follow_robots_txt,
                include_images=include_images,
                include_pdfs=include_pdfs,
            )
        except Exception as e:
            print(f"Error during website crawl: {e}")
            return None

    def extract_text_from_url(self, url):
        """Extract the text content of a single URL, or None if an error occurs."""
        try:
            return self.crawl4ai.extract_text_from_url(url)
        except Exception as e:
            print(f"Error extracting text from URL: {e}")
            return None

    def extract_links_from_url(self, url):
        """Extract all links found on a single URL, or None if an error occurs."""
        try:
            return self.crawl4ai.extract_links_from_url(url)
        except Exception as e:
            print(f"Error extracting links from URL: {e}")
            return None


# Example usage -- replace "YOUR_CRAWL4AI_API_KEY" with your actual key.
if __name__ == '__main__':
    mcp_server = MCP_Server(api_key="YOUR_CRAWL4AI_API_KEY")

    crawl_data = mcp_server.crawl_website("https://www.example.com", max_depth=2, max_pages=5)
    if crawl_data:
        print(crawl_data)  # inspect the dict to find 'pages', titles, content, etc.

    extracted_text = mcp_server.extract_text_from_url("https://www.example.com/some-page")
    if extracted_text:
        print(extracted_text[:200] + "...")  # first 200 characters only

    extracted_links = mcp_server.extract_links_from_url("https://www.example.com/another-page")
    if extracted_links:
        for link in extracted_links:
            print(link)
```

Each wrapper method catches exceptions from the underlying Crawl4AI call, prints the error, and returns `None`; in a real server you would log errors rather than print them. The exact structure of the returned data depends on the Crawl4AI library's output.

**How to integrate into an MCP server:** pick a web framework (Flask, FastAPI, or gRPC), define endpoints that map to the wrapper functions, extract the parameters from each request, and return the results as JSON. A Flask version, reusing the `MCP_Server` class above:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)
mcp_server = MCP_Server(api_key="YOUR_CRAWL4AI_API_KEY")  # same class as above


@app.route('/crawl', methods=['POST'])
def crawl_endpoint():
    """Crawl a website; optional parameters fall back to defaults via .get()."""
    try:
        data = request.get_json()
        crawl_data = mcp_server.crawl_website(
            data['url'],
            max_depth=data.get('max_depth', 1),
            max_pages=data.get('max_pages', 10),
            follow_robots_txt=data.get('follow_robots_txt', True),
            include_images=data.get('include_images', False),
            include_pdfs=data.get('include_pdfs', False),
        )
        if crawl_data:
            return jsonify(crawl_data), 200
        return jsonify({"error": "Crawl failed"}), 500
    except Exception as e:
        return jsonify({"error": str(e)}), 400


@app.route('/extract_text', methods=['POST'])
def extract_text_endpoint():
    """Extract text content from a single URL."""
    try:
        text = mcp_server.extract_text_from_url(request.get_json()['url'])
        if text:
            return jsonify({"text": text}), 200
        return jsonify({"error": "Text extraction failed"}), 500
    except Exception as e:
        return jsonify({"error": str(e)}), 400


@app.route('/extract_links', methods=['POST'])
def extract_links_endpoint():
    """Extract all links from a single URL."""
    try:
        links = mcp_server.extract_links_from_url(request.get_json()['url'])
        if links:
            return jsonify({"links": links}), 200
        return jsonify({"error": "Link extraction failed"}), 500
    except Exception as e:
        return jsonify({"error": str(e)}), 400


if __name__ == '__main__':
    app.run(debug=True)  # use debug=False and a production WSGI server in production
```

To run the Flask example: `pip install Flask`, save the code as `mcp_server.py`, start it with `python mcp_server.py`, then exercise the endpoints with any HTTP client:

```bash
# Crawl a website
curl -X POST -H "Content-Type: application/json" -d '{"url": "https://www.example.com", "max_depth": 2}' http://127.0.0.1:5000/crawl

# Extract text from a URL
curl -X POST -H "Content-Type: application/json" -d '{"url": "https://www.example.com/some-page"}' http://127.0.0.1:5000/extract_text

# Extract links from a URL
curl -X POST -H "Content-Type: application/json" -d '{"url": "https://www.example.com/another-page"}' http://127.0.0.1:5000/extract_links
```

The Flask version parses JSON request bodies with `request.get_json()`, supplies defaults for optional parameters with `data.get(...)`, and returns JSON responses with appropriate HTTP status codes (200 OK, 400 Bad Request, 500 Internal Server Error) and informative error messages. Replace `"YOUR_CRAWL4AI_API_KEY"` with your actual key, and in production serve the app with Gunicorn or uWSGI and set `debug=False` rather than relying on the Flask development server.

Groq MCP Server

AFL (Australian Football League) MCP Server

A Model Context Protocol (MCP) server that provides AFL (Australian Football League) data from the Squiggle API.
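
The Squiggle API behind it is a plain query-string interface where the query type and filters are passed as `q=...;year=...;round=...`; a direct call might look like this (the exact parameters and the shape of the JSON payload are assumptions from my reading of the API, not taken from this server's code):

```python
import requests

# Ask Squiggle for round 1 games of a season; other query types include q=teams, q=standings.
url = "https://api.squiggle.com.au/?q=games;year=2024;round=1"
resp = requests.get(url, headers={"User-Agent": "example-afl-client/0.1"}, timeout=30)
resp.raise_for_status()

data = resp.json()
print(data)  # expected to be wrapped under a key matching the query type, e.g. "games"
```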

Marvel MCP Server using Azure Functions

An Azure Functions-based MCP server that enables interaction with Marvel character and comic data through the official Marvel Developer API.
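
The Marvel Developer API authenticates every request with a timestamp plus an MD5 hash of ts + privateKey + publicKey; a minimal direct call, with the keys as placeholders:

```python
import hashlib
import time

import requests

PUBLIC_KEY = "YOUR_MARVEL_PUBLIC_KEY"
PRIVATE_KEY = "YOUR_MARVEL_PRIVATE_KEY"

ts = str(int(time.time()))
digest = hashlib.md5((ts + PRIVATE_KEY + PUBLIC_KEY).encode()).hexdigest()

resp = requests.get(
    "https://gateway.marvel.com/v1/public/characters",
    params={"nameStartsWith": "Spider", "ts": ts, "apikey": PUBLIC_KEY, "hash": digest},
    timeout=30,
)
resp.raise_for_status()
for character in resp.json()["data"]["results"]:
    print(character["name"])
```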