Discover Awesome MCP Servers
Extend your agent with 26,604 capabilities via MCP servers.
- All (26,604)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)
MCP Servers
MCP MariaDB Server
Enables AI assistants to interact with MariaDB databases through standard SQL operations and advanced vector/embedding-based search. Supports database management, schema inspection, and semantic document storage and retrieval with multiple embedding providers.
Synthetic Web Search MCP Server
Exposes the Synthetic API as an MCP tool to enable web searching within Claude and other compatible applications. It provides formatted search results including titles, URLs, and text snippets for enhanced model context.
mcp_command-line_tool
Local Mcp Server Tutorial
A tutorial for building a local MCP server (stdio).
Open Food Facts MCP Server
Enables AI assistants to access the Open Food Facts database to query detailed food product information, nutritional data, and environmental scores. Supports product lookup by barcode, smart search with filtering, nutritional analysis, product comparison, and dietary recommendations to help users make informed food choices.
Notion MCP Server
A simple MCP server implementation for Notion integration
VMware Fusion MCP Server
A Model Context Protocol server that allows managing VMware Fusion virtual machines through natural language, enabling users to list VMs, retrieve VM information, and perform power operations via the Fusion REST API.
MCP Notifications Server
Enables Claude and other MCP clients to display native macOS notifications with customizable titles, messages, icons, and sounds through the macOS Notification Center.
Firebase MCP Server
Enables inspection and querying of Firestore data and Cloud Function logs within the Firebase Emulator environment. It provides tools to navigate collections, retrieve documents, and filter function logs using regex and other criteria.
Remote MCP Server on Cloudflare
A template for deploying secure Model Context Protocol servers to Cloudflare Workers with built-in OAuth authentication. It enables hosting and connecting remote tools to Claude Desktop using SSE transport and a local proxy.
MaxKB
💬 MaxKB is a ready-to-use RAG chatbot with a robust workflow and MCP tool-use capabilities. It supports a wide range of mainstream large language models (LLMs), including DeepSeek-R1, Llama 3.3, and OpenAI, among others.
Apple Shortcuts MCP Server
Enables the generation, management, and validation of Apple Shortcuts (.shortcut files) by providing tools to search actions and build control flow blocks. It allows users to programmatically create and analyze shortcut structures for deployment on iOS and macOS devices.
UUID MCP Provider
A simple Model Context Protocol server that generates timestamp-based UUIDs (v7) when called by an LLM, providing chronologically sortable unique identifiers with no input parameters required.
vk-mcp-server
Model Context Protocol server for VK (VKontakte) social network API
pumpswap-mcp
EVM MCP Server
Jokes MCP Server
An MCP server that delivers jokes on demand, supporting different joke categories like Chuck Norris and Dad jokes, which can be integrated with Microsoft Copilot Studio.
Local Falcon MCP Server
Connects AI systems to Local Falcon API, enabling access to local SEO reporting tools including scan reports, trend analysis, keyword tracking, and competitor data through the Model Context Protocol.
Missive MCP Server
An MCP server that integrates with Missive App allowing users to create, manage, and send email drafts through AI tools like Claude Desktop or N8N.
domain-lookup-mcp
A mini MCP server for domain name lookups.
Tavily Web Search MCP Server
Enables web search capabilities through the Tavily API, allowing users to search the internet for information using natural language queries. Serves as a demonstration and educational project for building MCP servers with external API integrations.
YaVendió Tools
An MCP-based messaging system that allows AI systems to interact with various messaging platforms through standardized tools for sending text, images, documents, buttons, and alerts.
Model Context Protocol (MCP)
Here's a breakdown of a working pattern for SSE-based (Server-Sent Events) clients and servers for the Model Context Protocol (MCP), using the Gemini LLM.

**Core Concepts**

* **SSE (Server-Sent Events):** A lightweight, unidirectional protocol where the server pushes data updates to the client over a single HTTP connection. Ideal for real-time updates.
* **MCP (Model Context Protocol):** A protocol for structured communication between components. In this context, it defines the format of messages exchanged between the client and server. We'll assume a simple JSON-based message format for this example.
* **Gemini LLM:** Google's large language model, used on the server side to process client requests and generate responses.

**Architecture**

1. **Client (web browser, mobile app, etc.):**
   * Establishes an SSE connection to the server.
   * Sends initial requests (if needed) via standard HTTP (e.g., `POST` or `GET`) to set up the context for the SSE stream. These might include user authentication, initial data retrieval, or instructions for the LLM.
   * Listens for SSE events from the server.
   * Parses the SSE event data (in the MCP message format, likely JSON).
   * Updates the UI or performs other actions based on the received data.
   * Can send new requests to the server via standard HTTP (outside the SSE stream) to change the context or initiate new interactions.
2. **Server (Node.js, Python/Flask, Go, etc.):**
   * Handles initial HTTP requests from the client (e.g., for authentication or setup).
   * Establishes and maintains SSE connections with clients.
   * Receives requests (likely via HTTP `POST`) containing instructions or data for the Gemini LLM.
   * Uses the Gemini LLM to process the requests.
   * Formats the LLM's responses into MCP messages (JSON).
   * Sends these MCP messages as SSE events to the appropriate client(s).
   * Manages the state of each client connection (e.g., user context, conversation history).

**MCP Message Format (Example - JSON)**

```json
{
  "type": "llm_response",
  "data": {
    "text": "The capital of France is Paris.",
    "metadata": {
      "model": "gemini-pro",
      "timestamp": "2024-10-27T10:00:00Z"
    }
  }
}
```

Other possible message types:

* `status`: Server status updates (e.g., "LLM processing...", "Ready").
* `error`: Error messages.
* `context_update`: Updates to the client's context (e.g., new user settings).

**Workflow**

1. **Client Initialization:** The client sends an HTTP request (e.g., `POST /init`) to initialize the session, possibly including authentication tokens or initial instructions. The server authenticates the client, sets up a session, and returns a success response.
2. **SSE Connection Establishment:** The client opens an SSE connection to the server (e.g., `GET /events`); the server sets the appropriate headers for SSE (e.g., `Content-Type: text/event-stream`).
3. **Client Request (Example: Asking a Question):** The client sends an HTTP `POST` request to the server (e.g., `POST /ask`) with the question in the body:

   ```json
   { "query": "What is the capital of France?" }
   ```

4. **Server Processing:** The server extracts the query, sends it to the Gemini LLM, receives the response ("The capital of France is Paris."), and formats it into an MCP message as shown above.
5. **Server Sends SSE Event:** The server sends the MCP message as an SSE event to the client:

   ```
   event: message
   data: {"type": "llm_response", "data": {"text": "The capital of France is Paris.", "metadata": {"model": "gemini-pro", "timestamp": "2024-10-27T10:00:00Z"}}}
   ```

6. **Client Receives and Processes SSE Event:** The client receives the event, parses the JSON from the `data` field, and updates the UI with the LLM's response ("The capital of France is Paris.").

**Code Snippets (Illustrative - Python/Flask)**

**Server (Python/Flask):**

```python
from flask import Flask, Response, request, jsonify
import time
import json

# Assuming you have the Gemini API set up:
# from google.generativeai import GenerativeModel

app = Flask(__name__)

# Replace with your actual Gemini model:
# model = GenerativeModel('gemini-pro')

clients = {}  # Store connected clients


def generate_llm_response(query):
    """Simulates a Gemini LLM response."""
    # In a real implementation, you would call the Gemini API here:
    # response = model.generate_content(query)
    # return response.text

    # Mock response for demonstration (compare lowercase against lowercase)
    if "capital of france" in query.lower():
        return "The capital of France is Paris."
    return "I don't have information on that topic."


def format_sse_message(data):
    """Formats data as an SSE message."""
    return f"event: message\ndata: {json.dumps(data)}\n\n"


@app.route('/init', methods=['POST'])
def init():
    # Handle client initialization (e.g., authentication).
    # For simplicity, we just return a success message.
    return jsonify({"status": "success", "message": "Session initialized"})


@app.route('/events')
def events():
    """Handles SSE connections."""
    # Read the header before the generator runs: the request context is
    # not available inside a streaming generator.
    client_id = request.headers.get('X-Client-ID')  # Or generate a unique ID

    def event_stream():
        clients[client_id] = True  # Track connected clients
        try:
            while True:
                time.sleep(15)
                yield ": heartbeat\n\n"  # SSE comment line keeps the connection alive
        except GeneratorExit:
            del clients[client_id]  # Remove client when disconnected

    return Response(event_stream(), mimetype="text/event-stream")


@app.route('/ask', methods=['POST'])
def ask():
    """Handles client requests to the LLM."""
    query = request.json.get('query')
    if not query:
        return jsonify({"status": "error", "message": "Missing query"}), 400

    llm_response = generate_llm_response(query)
    mcp_message = {
        "type": "llm_response",
        "data": {
            "text": llm_response,
            "metadata": {
                "model": "mock-gemini",
                "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ")
            }
        }
    }

    # Send the SSE event to all connected clients (you might want to
    # target a specific client).
    for client_id in clients:
        sse_message = format_sse_message(mcp_message)
        # In a real implementation, you would need a way to deliver the SSE
        # message to the specific client's connection. This might involve a
        # message queue or a more sophisticated connection management system.
        print(f"Sending to client {client_id}: {sse_message}")  # Replace with actual sending logic

    return jsonify({"status": "success", "message": "Request processed"})


if __name__ == '__main__':
    app.run(debug=True)
```

**Client (JavaScript - Browser):**

```javascript
// Initialize the session
fetch('/init', {
  method: 'POST',
  // Add any necessary headers or body data
})
  .then(response => response.json())
  .then(data => {
    if (data.status === 'success') {
      console.log('Session initialized');
      startSSE();
    } else {
      console.error('Session initialization failed:', data.message);
    }
  });

function startSSE() {
  const eventSource = new EventSource('/events');

  eventSource.onmessage = (event) => {
    const data = JSON.parse(event.data);
    console.log('Received SSE event:', data);
    if (data.type === 'llm_response') {
      // Update the UI with the LLM's response
      document.getElementById('response').textContent = data.data.text;
    } else if (data.type === 'status') {
      // Handle status updates
      console.log('Status:', data.data.message);
    }
  };

  eventSource.onerror = (error) => {
    console.error('SSE error:', error);
    eventSource.close();
  };
}

function askQuestion() {
  const question = document.getElementById('question').value;
  fetch('/ask', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: question })
  })
    .then(response => response.json())
    .then(data => {
      if (data.status === 'success') {
        console.log('Question submitted');
      } else {
        console.error('Error submitting question:', data.message);
      }
    });
}

// Example HTML (for the client side):
//
// <input type="text" id="question" placeholder="Ask a question">
// <button onclick="askQuestion()">Ask</button>
// <div id="response"></div>
```

**Key Considerations and Improvements**

* **Error Handling:** Implement robust error handling on both the client and server. Send error messages as SSE events.
* **Authentication/Authorization:** Secure the SSE endpoint. Use authentication tokens or cookies to verify the client's identity.
* **Connection Management:** Handle client disconnections gracefully and clean up server resources when a client disconnects.
* **Scalability:** For high-traffic applications, consider a message queue (e.g., Redis, RabbitMQ) to distribute SSE events across multiple server instances. This decouples LLM processing from SSE event delivery.
* **Rate Limiting:** Implement rate limiting to prevent abuse of the Gemini LLM API.
* **Context Management:** Maintain conversation history or other relevant context for each client so the LLM can give more relevant responses. Consider a database or in-memory store for this.
* **Streaming Responses:** For long-running LLM tasks, stream the response back to the client as a series of SSE events for a better user experience. Gemini supports streaming responses.
* **MCP Design:** Design your MCP message format to be flexible and extensible; consider a schema validation library to ensure messages are valid.
* **Client ID:** Generate a unique client ID for each SSE connection so the server can target specific clients with SSE events. You can pass this ID in the `X-Client-ID` header when establishing the SSE connection.
* **Heartbeats:** Send periodic heartbeat messages from the server so the SSE connection doesn't time out.
* **Reconnection:** Implement automatic reconnection logic on the client in case the SSE connection is lost.

**Benefits of this Pattern**

* **Real-time Updates:** SSE pushes updates from the Gemini LLM to the client as they happen.
* **Unidirectional Communication:** SSE is well suited to scenarios where the server primarily pushes data to the client.
* **Lightweight:** SSE is a lighter-weight protocol than WebSockets.
* **Scalability:** With proper architecture (e.g., message queues), the pattern can handle a large number of concurrent clients.
* **Structured Communication:** MCP messages give the client and server a structured way to exchange data.

**When to Use This Pattern**

* Applications that require real-time updates from an LLM (e.g., chatbots, live dashboards).
* Scenarios where the server needs to push data to the client without the client constantly polling.
* Applications that need a structured way to communicate between the client and server.

This explanation and the code snippets should give you a solid foundation for building SSE-based MCP clients and servers using the Gemini LLM. Adapt the code to your specific needs and environment.
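The `/ask` handler in the Flask example only prints the SSE payload, because delivering a message to one specific client's open connection needs real plumbing. A minimal sketch of one approach, using one `queue.Queue` per client: the `/events` generator drains its client's queue, and `/ask` enqueues into it. The names here (`register`, `push`, `next_sse_frame`) are hypothetical helpers, not part of the original snippet.

```python
import json
import queue

# Per-client registry: client_id -> queue.Queue of pending MCP messages.
clients = {}


def register(client_id):
    """Called when a client opens its SSE connection (/events)."""
    return clients.setdefault(client_id, queue.Queue())


def push(client_id, message):
    """Called by the /ask handler to target one client instead of broadcasting.
    Returns False if the client has no open SSE connection."""
    q = clients.get(client_id)
    if q is None:
        return False
    q.put(message)
    return True


def next_sse_frame(client_id, timeout=15.0):
    """One iteration of the event-stream generator body: either a real
    message, or an SSE comment line used as a heartbeat on timeout."""
    try:
        message = clients[client_id].get(timeout=timeout)
        return f"event: message\ndata: {json.dumps(message)}\n\n"
    except queue.Empty:
        return ": heartbeat\n\n"
```

In the Flask server, `event_stream()` would loop over `yield next_sse_frame(client_id)`, and `/ask` would call `push(client_id, mcp_message)` with the ID taken from the request; a message queue such as Redis plays the same role across multiple server instances.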
My Mcp Tools
A collection of commonly used MCP tools and notes from learning MCP server development.
Kylas CRM MCP Server
Enables management of Kylas CRM lead operations, including creating leads, searching and filtering records, and resolving user, product, or pipeline IDs. It provides specialized tools for monitoring idle leads and accessing lead schema instructions through natural language.
DuckDuckGo MCP Server
A basic MCP server with example tools including message echo functionality and server information retrieval. Built with FastMCP framework and supports both stdio and HTTP transports.
Scrapeless MCP Server
A Model Context Protocol server implementation that lets AI assistants like Claude perform Google searches and retrieve web data directly through natural-language requests.
Files-DB-MCP
A local vector database system that gives LLM coding agents fast, efficient semantic search over software projects via the Model Context Protocol.
NetworksDB-MCP
Quick MCP integration for the NetworksDB API: query IP addresses, organizations, ASNs, and DNS records using natural language through the Model Context Protocol (MCP).