Discover Awesome MCP Servers

Extend your agent with 20,526 capabilities via MCP servers.

Translate SRT MCP Server

Enables translation of SRT subtitle files from English to Japanese using local LLM servers like LM Studio. Parses SRT format files and returns translated subtitles in proper SRT format through OpenAI-compatible APIs.
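
A minimal sketch of the flow this entry describes, assuming LM Studio's default OpenAI-compatible endpoint on port 1234; the model name, prompt, and SRT cue are illustrative, not taken from the server itself.

```python
# Sketch only: translate one SRT cue via a local LM Studio server.
# The base_url, api_key, and model name below are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

def translate_cue(text: str) -> str:
    """Translate one subtitle cue from English to Japanese."""
    resp = client.chat.completions.create(
        model="local-model",  # whichever model LM Studio has loaded
        messages=[
            {"role": "system",
             "content": "Translate English subtitles to Japanese. Return only the translation."},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content.strip()

# An SRT cue is an index, a timing line, and the text to translate.
index, timing, text = "1\n00:00:01,000 --> 00:00:03,000\nHello, world!".split("\n", 2)
print(f"{index}\n{timing}\n{translate_cue(text)}")
```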

pulsar-mcp-server

Nuclei MCP

Connects Nuclei vulnerability scanner with MCP-compatible applications, enabling AI assistants to perform security testing through natural language interactions.

Apache AGE MCP Server

Enables AI agents to manage and interact with Apache AGE graph databases through natural language. Supports creating, updating, querying, and visualizing multiple graphs with vertices and edges.
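
Apache AGE exposes Cypher through plain SQL, which is presumably what sits beneath this server's tools. A minimal sketch with psycopg2; the connection string and graph name are placeholders.

```python
# Sketch: running Cypher against Apache AGE over a normal Postgres
# connection. Connection string and graph name are placeholders.
import psycopg2

conn = psycopg2.connect("dbname=demo user=postgres")
cur = conn.cursor()
cur.execute("CREATE EXTENSION IF NOT EXISTS age;")
cur.execute("LOAD 'age';")
cur.execute('SET search_path = ag_catalog, "$user", public;')
cur.execute("SELECT create_graph('demo_graph');")  # errors if it already exists

# Cypher embedded in SQL: create a vertex and return it.
cur.execute("""
    SELECT * FROM cypher('demo_graph', $$
        CREATE (p:Person {name: 'Alice'})
        RETURN p
    $$) AS (v agtype);
""")
print(cur.fetchone())
conn.commit()
```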

Sakila MCP Server

Enables natural language interaction with the MySQL Sakila database through intent-based tools for movie search, customer management, rental operations, and business analytics, without exposing the database schema.
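
What an "intent-based tool" can look like with the Python MCP SDK's FastMCP: the model sees a named capability, not the schema. A sketch only; SQLite stands in for MySQL, and the tool name and SQL are illustrative.

```python
# Sketch of an intent-based MCP tool that hides the Sakila schema.
# SQLite stands in for MySQL here; tool name and SQL are illustrative.
import sqlite3
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("sakila-demo")

@mcp.tool()
def search_movies(title_fragment: str) -> list[str]:
    """Find films whose title contains the given text."""
    conn = sqlite3.connect("sakila.db")
    rows = conn.execute(
        "SELECT title FROM film WHERE title LIKE ?",
        (f"%{title_fragment}%",),
    ).fetchall()
    return [r[0] for r in rows]

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```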

@sequel/mcp

MCP database servers for Claude, Cursor, and Windsurf.

Bilibili-Mcp-Server

Fathom-Simple-MCP

A Model Context Protocol (MCP) server for accessing Fathom AI API endpoints (meetings, recordings, transcripts, summaries, teams, team members) via GET operations.

LangSearch MCP Server

Provides access to LangSearch's Web Search and Semantic Rerank APIs for AI assistants. It enables web searching with advanced filtering and reranking of documents based on semantic relevance.

EnriWeb

An MCP server that provides web search and URL fetching capabilities by delegating execution to an EnriProxy server. It enables AI agents to perform structured web searches and retrieve content with support for filtering, recency limits, and pagination.

Bybit MCP Server

Enables cryptocurrency trading on Bybit exchange through comprehensive market data access, account management, and automated trading operations. Features smart position validation, trailing stop losses, and risk management tools with demo mode support for safe testing.
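
The trailing-stop idea reduces to simple bookkeeping: ratchet the stop toward the price, never away from it. A pure-logic sketch with hypothetical prices; real orders would go through Bybit's conditional-order endpoints, which are not shown here.

```python
# Trailing-stop bookkeeping only -- no exchange API calls.
def update_trailing_stop(stop: float, last_price: float, trail_pct: float) -> float:
    """For a long position: raise the stop as price rises, never lower it."""
    return max(stop, last_price * (1 - trail_pct))

stop = 0.0
for price in [100.0, 104.0, 110.0, 107.0]:  # hypothetical ticks
    stop = update_trailing_stop(stop, price, trail_pct=0.03)
    print(f"price={price:.2f} stop={stop:.2f}")
# The stop follows the price up, then holds at 106.70 on the pullback.
```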

HR Assistant Agent

An MCP-powered HR management system that automates employee onboarding, leave tracking, meeting scheduling, and IT ticketing. It allows users to manage organizational workflows and administrative tasks through natural language interactions with Claude.

Data Labeling MCP Server

An MCP Server that enables interaction with Google's Data Labeling API, allowing users to manage datasets, annotations, and labeling tasks through natural language commands.

YAPI MCP Server

MCP UI/UX Prompt Refiner

Transforms basic interface ideas into comprehensive, professional-grade UI/UX design specifications with detailed styling, animations, components, and accessibility requirements for websites, apps, and other digital interfaces.

mcp-k8s

An MCP server for Kubernetes (k8s).

Customer Reminder MCP

Enables automated customer reminder management by integrating with Google Sheets to read customer data and sending scheduled email reminders based on due dates. Supports personalized email templates, intelligent scheduling, and duplicate prevention with Gmail SMTP integration.
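
The duplicate-prevention piece is the interesting bit: a sketch of due-date filtering against a sent log, with the sheet rows and log format assumed rather than taken from the server.

```python
# Sketch of due-date scheduling with duplicate prevention. The row shape
# and in-memory sent log are assumptions; the real server reads Google
# Sheets and would persist what it has already sent.
from datetime import date

rows = [
    {"email": "a@example.com", "due": date(2024, 6, 1)},
    {"email": "b@example.com", "due": date(2024, 7, 1)},
]
sent_log: set[tuple[str, date]] = set()

def reminders_due(today: date):
    for row in rows:
        key = (row["email"], row["due"])
        if row["due"] <= today and key not in sent_log:
            sent_log.add(key)  # record the send so it never repeats
            yield row

for r in reminders_due(date(2024, 6, 15)):
    print("send reminder to", r["email"])  # only a@example.com fires
```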

Research Paper Ingestion MCP Server

Enables searching, downloading, and analyzing academic papers from arXiv and Semantic Scholar to extract key insights and citation metrics. It facilitates autonomous knowledge acquisition by processing research findings and integrating them into persistent AI memory systems.
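
arXiv's public Atom API is one of the sources such a server would query; a minimal sketch of that call (the search string is arbitrary, and the feedparser package is assumed to be installed):

```python
# Sketch: query arXiv's public Atom feed for recent papers.
import feedparser

query = "all:retrieval+augmented+generation"
url = f"http://export.arxiv.org/api/query?search_query={query}&max_results=3"
for entry in feedparser.parse(url).entries:
    print(entry.title, "--", entry.link)
```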

DuckDB-RAG-MCP-Sample

An MCP server that enables RAG (Retrieval-Augmented Generation) on markdown documents by converting them to embedding vectors and performing vector search using DuckDB.
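
A toy version of the DuckDB side, assuming a DuckDB build that provides `list_cosine_similarity`; the three-dimensional "embeddings" stand in for real model output.

```python
# Sketch of vector search in DuckDB with toy 3-dim "embeddings".
# Assumes list_cosine_similarity is available in your DuckDB version.
import duckdb

con = duckdb.connect()
con.execute("CREATE TABLE docs (id INT, chunk VARCHAR, emb DOUBLE[])")
con.execute(
    "INSERT INTO docs VALUES"
    " (1, 'duckdb basics', [0.9, 0.1, 0.0]),"
    " (2, 'markdown tips', [0.1, 0.9, 0.0])"
)

query_emb = [0.8, 0.2, 0.0]  # would come from an embedding model
print(con.execute(
    "SELECT chunk, list_cosine_similarity(emb, ?) AS score"
    " FROM docs ORDER BY score DESC LIMIT 1",
    [query_emb],
).fetchall())
```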

GitHub MCP Control Plane

Provides secure, controlled access to GitHub operations through the Model Context Protocol with enterprise-grade security features including secret detection, vulnerability scanning, rate limiting, and full audit trails. Supports repository management, file operations, branch creation, commits, and GitHub Actions workflows.
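
Secret detection before a commit usually comes down to pattern matching on the staged content. A sketch with two well-known token shapes; the server's actual rule set is not published here, so these patterns are examples only.

```python
# Sketch of pre-commit secret scanning. The two regexes match the
# well-known shapes of AWS access key IDs and GitHub PATs; they are
# illustrative, not the server's actual rule set.
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key ID shape
    re.compile(r"ghp_[A-Za-z0-9]{36}"),   # GitHub personal access token shape
]

def contains_secret(text: str) -> bool:
    return any(p.search(text) for p in SECRET_PATTERNS)

print(contains_secret("aws_key = 'AKIAABCDEFGHIJKLMNOP'"))  # True
```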

Remote MCP AuthKit

Enables remote MCP server connections with WorkOS AuthKit authentication and user management. Supports organization-centric authentication and permission-based tool access control.

Stock Valuation MCP Server

Provides professional-grade financial analysis tools for Thai stock markets, including PE Band Analysis, DDM, DCF valuation models, real-time SET Watch API data, complete financial statements, and historical ratio analysis with investment recommendations.
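
Of the models listed, the DDM has the simplest closed form (Gordon growth): price = next dividend / (required return - growth). A worked example with hypothetical inputs:

```python
# Dividend discount model, Gordon growth form. Inputs are hypothetical.
def ddm_price(next_dividend: float, required_return: float, growth: float) -> float:
    if required_return <= growth:
        raise ValueError("required return must exceed growth rate")
    return next_dividend / (required_return - growth)

# 2.00 THB dividend next year, 9% required return, 4% perpetual growth:
print(ddm_price(2.00, 0.09, 0.04))  # 40.0 THB fair value
```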

BigQuery MCP Server

Enables LLMs to interact with Google BigQuery by inspecting database schemas, listing tables, and executing SQL queries. This server facilitates seamless data analysis and management through natural language via the Model Context Protocol.
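
Schema inspection and query execution map directly onto the official google-cloud-bigquery client; a minimal sketch, with the project and dataset names as placeholders:

```python
# Sketch with the official BigQuery client; names are placeholders.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

# Schema inspection: list tables in a dataset.
for table in client.list_tables("my_project.my_dataset"):
    print(table.table_id)

# Query execution.
for row in client.query("SELECT 1 AS answer").result():
    print(row.answer)
```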

Firestore Advanced MCP

A Model Context Protocol server that lets large language models such as Claude interact comprehensively with Firebase Firestore databases, supporting full CRUD operations, complex queries, and advanced features such as transactions and TTL (time-to-live) management.

MCP_claude

A demonstration of how an MCP server can be built for the Claude Desktop MCP client.

At-Work API MCP Server

An MCP (Model Context Protocol) server that enables interaction with the At-Work API (api.at-work.biz), allowing agents to communicate with the service through transport modes such as stdio, SSE, and HTTP.
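
With the Python MCP SDK's FastMCP, transport selection is a single argument to `run()`; a sketch of how such a multi-transport entry point might look (this server's actual implementation may differ):

```python
# Sketch: choosing an MCP transport at launch time with FastMCP.
import sys
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("at-work-demo")

if __name__ == "__main__":
    # e.g. `python server.py sse`; defaults to stdio.
    transport = sys.argv[1] if len(sys.argv) > 1 else "stdio"
    mcp.run(transport=transport)
```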

Japanese Weather MCP Server

A Model Context Protocol (MCP) server that provides access to Japanese weather forecasts using the weather.tsukumijima.net API.
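
The underlying API needs no key, so it can be called directly; a sketch, assuming Tokyo's city code (130010) and the livedoor-compatible response fields this API advertises:

```python
# Sketch: fetch a forecast from weather.tsukumijima.net directly.
# 130010 is assumed to be Tokyo's city code, and the "title"/"forecasts"
# fields are assumed to follow the legacy livedoor format.
import json
import urllib.request

url = "https://weather.tsukumijima.net/api/forecast/city/130010"
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

print(data["title"])
for forecast in data["forecasts"]:
    print(forecast["dateLabel"], forecast["telop"])
```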

Mock Store MCP Server

Enables AI agents to explore and query a mock e-commerce store's data including customers, products, inventory, and orders through conversational interactions backed by PostgreSQL.

crawl4ai-mcp

Here's a Python outline for creating an MCP (Model Context Protocol) server that wraps the Crawl4AI library, along with explanations and considerations:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import json
import logging

# Assuming Crawl4AI is installed and importable
try:
    from crawl4ai import Crawl4AI  # Replace with the actual import if different
except ImportError:
    print("Error: Crawl4AI library not found. Please install it.")
    Crawl4AI = None  # Disable functionality if library is missing

# Configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')

# --- Configuration ---
HOST_NAME = "localhost"  # Or "0.0.0.0" to listen on all interfaces
PORT_NUMBER = 8080

# --- MCP Request Handler ---
class MCPRequestHandler(BaseHTTPRequestHandler):

    def _set_response(self, status_code=200, content_type="application/json"):
        self.send_response(status_code)
        self.send_header("Content-type", content_type)
        self.end_headers()

    def do_POST(self):
        """Handles POST requests, expecting JSON data."""
        content_length = int(self.headers['Content-Length'])
        post_data = self.rfile.read(content_length)
        try:
            request_data = json.loads(post_data.decode('utf-8'))
            logging.info(f"Received request: {request_data}")
        except json.JSONDecodeError:
            self._set_response(400)
            self.wfile.write(json.dumps({"error": "Invalid JSON"}).encode('utf-8'))
            return

        # Route the request based on the 'action' field (or similar)
        action = request_data.get('action')
        if action == "crawl":
            self.handle_crawl_request(request_data)
        elif action == "extract_data":
            self.handle_extract_data_request(request_data)  # Example
        else:
            self._set_response(400)
            self.wfile.write(json.dumps({"error": "Invalid action"}).encode('utf-8'))

    def handle_crawl_request(self, request_data):
        """Handles a crawl request using Crawl4AI."""
        if Crawl4AI is None:
            self._set_response(500)
            self.wfile.write(json.dumps({"error": "Crawl4AI library not available"}).encode('utf-8'))
            return

        url = request_data.get('url')
        if not url:
            self._set_response(400)
            self.wfile.write(json.dumps({"error": "Missing 'url' parameter"}).encode('utf-8'))
            return

        try:
            # Initialize Crawl4AI (adjust parameters as needed)
            crawler = Crawl4AI()  # You might need API keys or other setup here

            # Perform the crawl
            result = crawler.crawl(url)  # Assuming a crawl method exists

            # Prepare the response
            response_data = {"status": "success", "data": result}
            self._set_response(200)
            self.wfile.write(json.dumps(response_data).encode('utf-8'))
        except Exception as e:
            logging.exception("Error during crawl:")
            self._set_response(500)
            self.wfile.write(json.dumps({"error": str(e)}).encode('utf-8'))

    def handle_extract_data_request(self, request_data):
        """Example: Handles a data extraction request (if Crawl4AI supports it)."""
        # Implement data extraction logic here, using Crawl4AI functions.
        # This is just a placeholder. Adapt to Crawl4AI's capabilities.
        self._set_response(501)  # Not Implemented
        self.wfile.write(json.dumps({"error": "Data extraction not implemented"}).encode('utf-8'))


if __name__ == '__main__':
    if Crawl4AI is None:
        print("Crawl4AI library is missing. Server will not start.")
    else:
        webServer = HTTPServer((HOST_NAME, PORT_NUMBER), MCPRequestHandler)
        print(f"Server started http://{HOST_NAME}:{PORT_NUMBER}")
        try:
            webServer.serve_forever()
        except KeyboardInterrupt:
            pass
        webServer.server_close()
        print("Server stopped.")
```

Key points:

* **Error handling:** `try...except` blocks catch errors during JSON parsing, Crawl4AI execution, and other operations; exceptions are logged for debugging, and appropriate HTTP status codes are returned (400 for bad requests, 500 for server errors). Crucially, the code checks whether `Crawl4AI` was imported successfully and handles the case where it is missing, so the server refuses to start rather than crashing at request time.
* **JSON handling:** The POST body is explicitly decoded from bytes using UTF-8 and the response encoded back to bytes, which matters for handling a wide range of characters. Error responses carry descriptive messages, making problems easier to diagnose.
* **MCP structure:** `MCPRequestHandler` parses the JSON payload and routes each request to a handler function based on the `action` field. This is a basic structure; extend it with more actions and more sophisticated routing.
* **Crawl4AI integration:** `handle_crawl_request` extracts the URL from the request, initializes `Crawl4AI`, calls the `crawl` method (assuming one exists), and returns the result as JSON. **Important:** adapt this to the actual Crawl4AI API, including any authentication or API key requirements.
* **Configuration and logging:** `HOST_NAME` and `PORT_NUMBER` make the server's address and port easy to change, and the `logging` module records requests and errors, which is essential for debugging.
* **Example `extract_data` handler:** `handle_extract_data_request` is a placeholder showing how to support other Crawl4AI functionality; it returns 501 (Not Implemented).

How to use:

1. **Install Crawl4AI:** `pip install crawl4ai` (or the correct installation command for the library).
2. **Replace placeholders:** Modify `handle_crawl_request` and `handle_extract_data_request` to use the actual methods and parameters of the Crawl4AI library, paying close attention to authentication and API key requirements.
3. **Run the script:** `python your_script_name.py`
4. **Send POST requests:** Use `curl`, Python's `requests`, or any other HTTP client to POST to `http://localhost:8080` with a JSON body containing an `action` field and any needed parameters.

Example `curl` request:

```bash
curl -X POST -H "Content-Type: application/json" -d '{"action": "crawl", "url": "https://www.example.com"}' http://localhost:8080
```

Important considerations:

* **Security:** This is a very basic server. Production use requires authentication, authorization, and input validation to prevent malicious requests; consider a more robust web framework such as Flask or Django.
* **Asynchronous operations:** Crawling can be long-running. Use asynchronous programming (e.g., `asyncio` or threading) to handle multiple requests concurrently without blocking; a minimal sketch follows this list.
* **Scalability:** High traffic may call for a load balancer, multiple server instances, and a more efficient data storage solution.
* **Crawl4AI API:** Most importantly, study the Crawl4AI library's API thoroughly and adapt the code; the example assumes a `crawl` method that takes a URL.
* **Rate limiting:** Throttle requests to avoid abusing target sites and being blocked.
* **Data validation:** Validate input (e.g., URLs) to prevent errors and security vulnerabilities.
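
Following the asynchronous-operations note above, one low-effort improvement sketch: the standard library's `ThreadingHTTPServer` (Python 3.7+) serves each request on its own thread, so a slow crawl no longer blocks other clients. This reuses `HOST_NAME`, `PORT_NUMBER`, and `MCPRequestHandler` from the outline.

```python
# Drop-in concurrency tweak for the outline above: each request handler
# runs in its own thread instead of serializing on one.
from http.server import ThreadingHTTPServer

webServer = ThreadingHTTPServer((HOST_NAME, PORT_NUMBER), MCPRequestHandler)
webServer.serve_forever()
```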

UniProt MCP Server