Discover Awesome MCP Servers
Extend your agent with 26,654 capabilities via MCP servers.
- All (26,654)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)
Serpex MCP
Enables multi-engine web search capabilities across Google, Bing, DuckDuckGo, Brave, Yahoo, and Yandex through the Serpex API. Features automatic engine routing, time filtering, and structured JSON results for reliable web search integration.
cc-channel-mem
An MCP server and background daemon that captures Discord and Telegram conversations locally to provide persistent memory for Claude Code sessions. It enables hybrid search and automatic context injection of chat history to maintain continuity across development sessions.
ShadowGit MCP Server
Provides AI assistants with secure, read-only access to ShadowGit repositories for debugging and code analysis. Enables powerful analysis of fine-grained git history captured automatically by ShadowGit's continuous commit tracking.
MCP Inspector Assistant
Provides programmatic access to MCP server testing, debugging, and validation capabilities by wrapping the official MCP Inspector. Enables users to test tools, validate protocol compliance, discover server capabilities, and debug MCP servers through natural language interactions.
Stock Valuation MCP Server
Provides professional-grade financial analysis tools for Thai stock markets, including PE Band Analysis, DDM, DCF valuation models, real-time SET Watch API data, complete financial statements, and historical ratio analysis with investment recommendations.
Mcp Memory Bank
A powerful, production-ready context management system for Large Language Models (LLMs). Built with ChromaDB and modern embedding technologies, it provides persistent, project-specific memory capabilities that enhance your AI's understanding and response quality.
Everforth GitHub MCP Server
Bilibili-Mcp-Server
Fathom-Simple-MCP
A Model Context Protocol (MCP) server for accessing Fathom AI API endpoints (meetings, recordings, transcripts, summaries, teams, team members) via GET operations.
LangSearch MCP Server
Provides access to LangSearch's Web Search and Semantic Rerank APIs for AI assistants. It enables web searching with advanced filtering and reranking of documents based on semantic relevance.
EnriWeb
An MCP server that provides web search and URL fetching capabilities by delegating execution to an EnriProxy server. It enables AI agents to perform structured web searches and retrieve content with support for filtering, recency limits, and pagination.
DuckDB-RAG-MCP-Sample
An MCP server that enables RAG (Retrieval-Augmented Generation) on markdown documents by converting them to embedding vectors and performing vector search using DuckDB.
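The retrieval step such a server performs can be illustrated without DuckDB: embed each markdown chunk as a vector, then rank chunks by cosine similarity to the query embedding. In this sketch the toy vectors and file names are stand-ins for real model embeddings and documents; DuckDB would simply store the vectors and run the similarity query in SQL.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "embeddings" for markdown chunks (hypothetical names and values)
docs = {
    "intro.md": [1.0, 0.0, 0.2],
    "api.md": [0.1, 1.0, 0.0],
}
query = [0.9, 0.1, 0.1]  # embedding of the user's question

# Retrieve the most similar chunk to feed back to the LLM
best = max(docs, key=lambda name: cosine(docs[name], query))
```

In the real server this ranking runs inside DuckDB over persisted embeddings rather than in Python.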
MCP Video Extraction Plus
Enables video text extraction using multiple speech recognition providers including local Whisper, JianYing/CapCut, and Bilibili Cut services. Supports video downloading, audio extraction, and automatic speech-to-text transcription with configurable providers.
GitHub MCP Control Plane
Provides secure, controlled access to GitHub operations through the Model Context Protocol with enterprise-grade security features including secret detection, vulnerability scanning, rate limiting, and full audit trails. Supports repository management, file operations, branch creation, commits, and GitHub Actions workflows.
Remote MCP AuthKit
Enables remote MCP server connections with WorkOS AuthKit authentication and user management. Supports organization-centric authentication and permission-based tool access control.
BigQuery MCP Server
Enables LLMs to interact with Google BigQuery by inspecting database schemas, listing tables, and executing SQL queries. This server facilitates seamless data analysis and management through natural language via the Model Context Protocol.
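Under the Model Context Protocol, a client invokes such a server's capabilities via JSON-RPC `tools/call` requests. The sketch below shows the general shape of one; the tool name `execute-query` and its arguments are illustrative assumptions, not this server's documented API.

```python
import json

# Hypothetical MCP tool call a client might send to a BigQuery server.
# "tools/call" is the standard MCP method; the tool name is assumed.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "execute-query",
        "arguments": {"sql": "SELECT 1 AS answer"},
    },
}
print(json.dumps(request, indent=2))
```

The server would run the SQL against BigQuery and return the rows in the JSON-RPC result.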
enfusion-mcp
An MCP server for Arma Reforger and Enfusion engine modding that enables users to create mods, search API classes, and generate scripts through natural language. It provides a comprehensive suite of tools for scaffolding addons, generating prefabs, and building projects using the Workbench CLI.
MCP ChatGPT Multi-Server Suite
A comprehensive suite of four MCP servers providing real-time stock market data, currency conversion between 160+ currencies, world timezone conversion, and unit conversion across 10 measurement categories. Features beautiful web interfaces and ChatGPT integration capabilities.
UK Intelligence MCP Agent
Provides UK company lookups via Companies House, web scraping, and search tools designed to be context-efficient for AI agents. It includes integrated micro-payment monetization and supports data extraction and format conversion.
athenahealth MCP Server
Enables AI-powered clinical decision support by integrating with athenahealth's API to access patient data, manage prescriptions, check drug interactions, and generate clinical assessments. Provides HIPAA-compliant healthcare workflows with comprehensive audit logging and data sanitization.
cmux-agent-mcp
A programmable terminal control plane that enables AI agents to orchestrate, monitor, and interact with multiple parallel AI CLI sessions and browser instances within CMUX. It provides over 80 tools for workspace management, pane manipulation, and cross-agent communication to facilitate complex multi-project workflows.
Firestore Advanced MCP
A Model Context Protocol server that lets large language models such as Claude perform comprehensive interactions with Firebase Firestore databases, supporting full CRUD operations, complex queries, and advanced features such as transactions and TTL (time-to-live) management.
Prism MCP
Production-ready MCP server with session memory, Brave Search, Vertex AI Discovery Engine, Google Gemini analysis, and sandboxed code-mode transforms.
MCP_claude
A demonstration of how an MCP server can be built for the Claude Desktop MCP client.
Planning MCP Server
Provides comprehensive Australian planning property reports, including zoning, overlays, land size, and utility information for AI assistants. This high-performance MCP server is built for Cloudflare Workers and enables real-time property data retrieval through an HTTP-based interface.
At-Work API MCP Server
An MCP (Model Context Protocol) server that enables interaction with the At-Work API (api.at-work.biz), allowing agents to communicate with this service through transport modes such as stdio, SSE, and HTTP.
Mock Store MCP Server
Enables AI agents to explore and query a mock e-commerce store's data including customers, products, inventory, and orders through conversational interactions backed by PostgreSQL.
crawl4ai-mcp
An outline for a Model Context Protocol server that wraps the Crawl4AI library behind a minimal HTTP interface. JSON POST requests carrying an "action" field (e.g. "crawl" with a target URL) are routed to Crawl4AI, with JSON error responses, logging, and a configurable host and port; authentication, asynchronous crawling, rate limiting, and input validation are noted as requirements for production use.
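A minimal, stdlib-only sketch of the wrapper this entry describes. The Crawl4AI call is stubbed out, since the library's actual API (and any required keys) must be substituted in a real deployment; everything else follows the described design of routing JSON POST requests by an "action" field.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def fake_crawl(url):
    # Stand-in for the real Crawl4AI call; its API is assumed, not documented here.
    return {"url": url, "title": "stub"}

class MCPRequestHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        try:
            request = json.loads(self.rfile.read(length))
        except json.JSONDecodeError:
            return self._reply(400, {"error": "Invalid JSON"})
        if request.get("action") != "crawl":
            return self._reply(400, {"error": "Invalid action"})
        url = request.get("url")
        if not url:
            return self._reply(400, {"error": "Missing 'url' parameter"})
        self._reply(200, {"status": "success", "data": fake_crawl(url)})

    def _reply(self, status, payload):
        # Serialize the payload and send it with correct headers
        body = json.dumps(payload).encode("utf-8")
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging for the sketch

# To serve: HTTPServer(("localhost", 8080), MCPRequestHandler).serve_forever()
```

A client would then POST, for example, `{"action": "crawl", "url": "https://www.example.com"}` and receive a JSON result or a structured error.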
UniProt MCP Server
mcp-voice-hooks
Voice Mode for Claude Code