Discover Awesome MCP Servers
Extend your agent with 16,059 capabilities via MCP servers.
- All (16,059)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)
Files-DB-MCP
A local vector database system that gives LLM coding agents fast, efficient semantic search over software projects via the Model Context Protocol.
RAG Documentation MCP Server
Provides tools for retrieving and processing documentation through vector search, enabling AI assistants to augment their responses with relevant documentation context.
Wireshark MCP
A Model Context Protocol server that integrates Wireshark's network analysis capabilities with AI systems like Claude, allowing direct analysis of network packet data without manual copying.
SSH MCP Server
Enables SSH operations including connecting to remote servers, executing commands, and transferring files between local and remote systems. Supports multiple SSH connections with both password and private key authentication methods.
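As a rough illustration of the kind of operation such a server wraps, here is a minimal sketch using paramiko; the host, credentials, and paths are placeholders, and the server's actual implementation may differ:

```python
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

# Password auth shown; pass key_filename="/path/to/id_rsa" instead
# for private-key authentication.
client.connect("example.com", username="deploy", password="secret")

# Execute a command on the remote server.
stdin, stdout, stderr = client.exec_command("uptime")
print(stdout.read().decode())

# Transfer a file between local and remote systems via SFTP.
sftp = client.open_sftp()
sftp.put("local.txt", "/tmp/remote.txt")
sftp.close()
client.close()
```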
Financial Analysis MCP Server
An MCP server that equips your LLM with financial analysis capabilities.
Cloudflare Playwright MCP
Enables AI assistants to control a browser through Playwright on Cloudflare Workers, allowing them to perform web automation tasks like navigation, typing, clicking, and taking screenshots.
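The server itself runs Playwright inside Cloudflare Workers, but the actions it exposes map onto ordinary Playwright calls; a sketch using Playwright's Python API, with a placeholder URL and selectors, illustrates the idea:

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example.com")                 # navigation
    page.fill("#search", "model context protocol")   # typing
    page.click("button[type=submit]")                # clicking
    page.screenshot(path="result.png")               # screenshot
    browser.close()
```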
MCP-openproject
Puch AI MCP Starter
A starter template for creating MCP servers compatible with Puch AI, featuring built-in tools for job searching and analysis, plus basic image processing capabilities. Includes authentication and deployment guidance for extending Puch AI with custom tools.
Time Tools MCP Server
A Model Context Protocol server for time manipulation tasks, enabling AI models to get the current date/time and calculate duration between timestamps.
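A minimal sketch of how such tools might look with the official Python MCP SDK's FastMCP helper; the tool names and signatures here are illustrative, not the project's actual API:

```python
from datetime import datetime, timezone
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("time-tools")  # server name is illustrative

@mcp.tool()
def current_time() -> str:
    """Return the current UTC date/time in ISO 8601 format."""
    return datetime.now(timezone.utc).isoformat()

@mcp.tool()
def duration_between(start: str, end: str) -> float:
    """Return the duration in seconds between two ISO 8601 timestamps."""
    delta = datetime.fromisoformat(end) - datetime.fromisoformat(start)
    return delta.total_seconds()

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```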
Exegol MCP Server
Provides secure access to Exegol penetration testing tools for CTF competitions and red team learning. Enables AI assistants to perform web penetration testing, network reconnaissance, password cracking, OSINT analysis, and more for educational purposes.
Jupyter MCP Server
Enables interaction with Jupyter notebooks via the Model Context Protocol, supporting code execution and markdown insertion within JupyterLab environments.
arXiv Research Assistant MCP Server
An MCP server that allows Claude AI to search, explore, and compare arXiv papers efficiently through a custom-built local server.
Tavily Web Search MCP Server
Enables web search capabilities through the Tavily API, allowing users to search the internet for information using natural language queries. Serves as a demonstration and educational project for building MCP servers with external API integrations.
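A bare-bones sketch of the underlying call such a server makes; the endpoint and payload shape shown here are assumptions for illustration, so check Tavily's documentation for the current API:

```python
import os
import requests

# Assumed endpoint and payload shape; verify against Tavily's docs.
resp = requests.post(
    "https://api.tavily.com/search",
    json={
        "api_key": os.environ["TAVILY_API_KEY"],
        "query": "latest developments in the Model Context Protocol",
        "max_results": 5,
    },
    timeout=30,
)
resp.raise_for_status()
for result in resp.json().get("results", []):
    print(result.get("title"), "-", result.get("url"))
```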
UUID MCP Provider
A simple Model Context Protocol server that generates timestamp-based UUIDs (v7) when called by an LLM, providing chronologically sortable unique identifiers with no input parameters required.
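Since `uuid.uuid7()` is only slated to land in the Python standard library (3.14), v7 values can be assembled by hand per RFC 9562; a purely illustrative sketch, not this server's actual code:

```python
import os
import time
import uuid

def uuid7() -> uuid.UUID:
    """Assemble a UUIDv7: 48-bit Unix-ms timestamp, version and
    variant bits, and 74 random bits (RFC 9562 layout)."""
    ms = time.time_ns() // 1_000_000
    rand = int.from_bytes(os.urandom(10), "big")  # 80 random bits
    value = (ms << 80) | rand
    value &= ~(0xF << 76)
    value |= 0x7 << 76            # version 7 nibble
    value &= ~(0x3 << 62)
    value |= 0x2 << 62            # RFC 4122 variant bits (0b10)
    return uuid.UUID(int=value)

# Values sort chronologically by their leading timestamp bits.
print(uuid7())
```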
YaVendió Tools
An MCP-based messaging system that allows AI systems to interact with various messaging platforms through standardized tools for sending text, images, documents, buttons, and alerts.
pumpswap-mcp
Model Context Protocol (MCP)
A working pattern for SSE-based (Server-Sent Events) MCP (Model Context Protocol) clients and servers using the Gemini LLM.

**Core Concepts**

* **SSE (Server-Sent Events):** A lightweight, unidirectional protocol where the server pushes data updates to the client over a single HTTP connection. Ideal for real-time updates.
* **MCP (Model Context Protocol):** A protocol for structured communication between components. In this context, it defines the format of messages exchanged between the client and server. We'll assume a simple JSON-based MCP for this example.
* **Gemini LLM:** Google's large language model. It is used on the server side to process client requests and generate responses.

**Architecture**

1. **Client (Web Browser, Mobile App, etc.):**
   * Establishes an SSE connection to the server.
   * Sends initial requests (if needed) via standard HTTP (e.g., `POST` or `GET`) to set up the context for the SSE stream. These requests might include user authentication, initial data retrieval, or instructions for the LLM.
   * Listens for SSE events from the server.
   * Parses the SSE event data (which will be in MCP format, likely JSON).
   * Updates the UI or performs other actions based on the received data.
   * Can send new requests to the server via standard HTTP (outside the SSE stream) to change the context or initiate new interactions.
2. **Server (Node.js, Python/Flask, Go, etc.):**
   * Handles initial HTTP requests from the client (e.g., for authentication, setup).
   * Establishes and maintains SSE connections with clients.
   * Receives requests (likely via HTTP `POST`) containing instructions or data for the Gemini LLM.
   * Uses the Gemini LLM to process the requests.
   * Formats the LLM's responses into MCP messages (JSON).
   * Sends these MCP messages as SSE events to the appropriate client(s).
   * Manages the state of each client connection (e.g., user context, conversation history).

**MCP Message Format (Example - JSON)**

```json
{
  "type": "llm_response",
  "data": {
    "text": "The capital of France is Paris.",
    "metadata": {
      "model": "gemini-pro",
      "timestamp": "2024-10-27T10:00:00Z"
    }
  }
}
```

Other possible message types:

* `status`: Server status updates (e.g., "LLM processing...", "Ready").
* `error`: Error messages.
* `context_update`: Updates to the client's context (e.g., new user settings).

**Workflow**

1. **Client Initialization:**
   * Client sends an HTTP request (e.g., `POST /init`) to the server to initialize the session. This might include authentication tokens or initial instructions.
   * Server authenticates the client and sets up a session.
   * Server returns a success response.
2. **SSE Connection Establishment:**
   * Client opens an SSE connection to the server (e.g., `GET /events`).
   * Server sets the appropriate headers for SSE (e.g., `Content-Type: text/event-stream`).
3. **Client Request (Example: Asking a Question):**
   * Client sends an HTTP `POST` request to the server (e.g., `POST /ask`) with the question in the body:

   ```json
   { "query": "What is the capital of France?" }
   ```
4. **Server Processing:**
   * Server receives the `POST /ask` request and extracts the query ("What is the capital of France?").
   * Server sends the query to the Gemini LLM and receives its response ("The capital of France is Paris.").
   * Server formats the response into an MCP message (the `llm_response` format shown above).
5. **Server Sends SSE Event:**
   * Server sends the MCP message as an SSE event to the client:

   ```
   event: message
   data: {"type": "llm_response", "data": {"text": "The capital of France is Paris.", "metadata": {"model": "gemini-pro", "timestamp": "2024-10-27T10:00:00Z"}}}
   ```
6. **Client Receives and Processes SSE Event:**
   * Client receives the SSE event and parses the JSON data from the `data` field.
   * Client updates the UI with the LLM's response ("The capital of France is Paris.").

**Code Snippets (Illustrative - Python/Flask)**

**Server (Python/Flask):**

```python
from flask import Flask, Response, request, jsonify
import time
import json

# Assuming you have the Gemini API set up:
# from google.generativeai import GenerativeModel
# model = GenerativeModel('gemini-pro')

app = Flask(__name__)

clients = {}  # Store connected clients

def generate_llm_response(query):
    """Simulates a Gemini LLM response."""
    # In a real implementation, you would call the Gemini API here:
    # response = model.generate_content(query)
    # return response.text
    if "capital of france" in query.lower():
        return "The capital of France is Paris."
    return "I don't have information on that topic."

def format_sse_message(data):
    """Formats data as an SSE message."""
    return f"event: message\ndata: {json.dumps(data)}\n\n"

@app.route('/init', methods=['POST'])
def init():
    # Handle client initialization (e.g., authentication).
    # For simplicity, we just return a success message.
    return jsonify({"status": "success", "message": "Session initialized"})

@app.route('/events')
def events():
    """Handles SSE connections."""
    # Read the header before streaming starts, while the request
    # context is still available.
    client_id = request.headers.get('X-Client-ID')  # Or generate a unique ID

    def event_stream():
        clients[client_id] = True  # Track connected clients
        try:
            while True:
                time.sleep(15)
                yield ": heartbeat\n\n"  # SSE comment keeps the connection alive
        except GeneratorExit:
            del clients[client_id]  # Remove client when disconnected

    return Response(event_stream(), mimetype="text/event-stream")

@app.route('/ask', methods=['POST'])
def ask():
    """Handles client requests to the LLM."""
    query = request.json.get('query')
    if not query:
        return jsonify({"status": "error", "message": "Missing query"}), 400

    llm_response = generate_llm_response(query)
    mcp_message = {
        "type": "llm_response",
        "data": {
            "text": llm_response,
            "metadata": {
                "model": "mock-gemini",
                "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ")
            }
        }
    }

    # Send the SSE event to all connected clients (you might want to
    # target a specific client instead).
    for client_id in clients:
        sse_message = format_sse_message(mcp_message)
        # In a real implementation, you need a way to deliver the SSE
        # message to each client's open connection, e.g. a message queue
        # or per-client buffer (see the sketch at the end of this entry).
        print(f"Sending to client {client_id}: {sse_message}")

    return jsonify({"status": "success", "message": "Request processed"})

if __name__ == '__main__':
    app.run(debug=True)
```

**Client (JavaScript - Browser):**

```javascript
// Initialize the session
fetch('/init', {
  method: 'POST'
  // Add any necessary headers or body data
})
  .then(response => response.json())
  .then(data => {
    if (data.status === 'success') {
      console.log('Session initialized');
      startSSE();
    } else {
      console.error('Session initialization failed:', data.message);
    }
  });

function startSSE() {
  const eventSource = new EventSource('/events');

  eventSource.onmessage = (event) => {
    const data = JSON.parse(event.data);
    console.log('Received SSE event:', data);
    if (data.type === 'llm_response') {
      // Update the UI with the LLM's response
      document.getElementById('response').textContent = data.data.text;
    } else if (data.type === 'status') {
      // Handle status updates
      console.log('Status:', data.data.message);
    }
  };

  eventSource.onerror = (error) => {
    console.error('SSE error:', error);
    eventSource.close();
  };
}

function askQuestion() {
  const question = document.getElementById('question').value;
  fetch('/ask', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: question })
  })
    .then(response => response.json())
    .then(data => {
      if (data.status === 'success') {
        console.log('Question submitted');
      } else {
        console.error('Error submitting question:', data.message);
      }
    });
}

/* Example HTML (for the client side):
<input type="text" id="question" placeholder="Ask a question">
<button onclick="askQuestion()">Ask</button>
<div id="response"></div>
*/
```

**Key Considerations and Improvements**

* **Error Handling:** Implement robust error handling on both the client and server. Send error messages as SSE events.
* **Authentication/Authorization:** Secure the SSE endpoint. Use authentication tokens or cookies to verify the client's identity.
* **Connection Management:** Handle client disconnections gracefully. Clean up resources on the server when a client disconnects.
* **Scalability:** For high-traffic applications, consider using a message queue (e.g., Redis, RabbitMQ) to distribute SSE events to multiple server instances. This decouples the LLM processing from the SSE event delivery.
* **Rate Limiting:** Implement rate limiting to prevent abuse of the Gemini LLM API.
* **Context Management:** Maintain conversation history or other relevant context for each client so the LLM can provide more relevant responses. Consider using a database or in-memory store to manage context.
* **Streaming Responses:** For long-running LLM tasks, consider streaming the response back to the client as a series of SSE events for a better user experience. Gemini supports streaming responses.
* **MCP Design:** Carefully design your MCP message format to be flexible and extensible. Consider using a schema validation library to ensure messages are valid.
* **Client ID:** Generate a unique client ID for each SSE connection so the server can target specific clients with SSE events. You can pass this ID in the `X-Client-ID` header when establishing the SSE connection.
* **Heartbeats:** Send periodic heartbeat messages from the server to keep the SSE connection from timing out.
* **Reconnection:** Implement automatic reconnection logic on the client in case the SSE connection is lost.

**Benefits of this Pattern**

* **Real-time Updates:** SSE provides real-time updates from the Gemini LLM to the client.
* **Unidirectional Communication:** SSE is well suited to scenarios where the server primarily pushes data to the client.
* **Lightweight:** SSE is a lighter-weight protocol than WebSockets.
* **Scalability:** With proper architecture (e.g., message queues), this pattern can scale to a large number of concurrent clients.
* **Structured Communication:** MCP provides a structured way to exchange messages between the client and server.

**When to Use This Pattern**

* Applications that require real-time updates from an LLM (e.g., chatbots, live dashboards).
* Scenarios where the server needs to push data to the client without the client constantly polling.
* Applications that need a structured way to communicate between the client and server.

This explanation and the code snippets should give you a solid foundation for building SSE-based MCP clients and servers using the Gemini LLM. Adapt the code to your specific needs and environment.
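One way to close the delivery gap flagged in the Flask sketch above (actually routing messages to each client's open SSE connection) is a per-client queue. A minimal single-process sketch follows; across multiple workers you would need a shared broker such as Redis instead:

```python
import json
import queue

from flask import Flask, Response, request

app = Flask(__name__)
client_queues = {}  # client_id -> queue.Queue of pending MCP messages

@app.route('/events')
def events():
    client_id = request.headers.get('X-Client-ID', 'anonymous')
    q = client_queues.setdefault(client_id, queue.Queue())

    def stream():
        try:
            while True:
                try:
                    # Block until a message arrives for this client.
                    msg = q.get(timeout=15)
                    yield f"event: message\ndata: {json.dumps(msg)}\n\n"
                except queue.Empty:
                    yield ": heartbeat\n\n"  # keep the connection alive
        except GeneratorExit:
            client_queues.pop(client_id, None)

    return Response(stream(), mimetype="text/event-stream")

def send_to_client(client_id, mcp_message):
    """Called from /ask after the LLM responds; enqueues for delivery."""
    if client_id in client_queues:
        client_queues[client_id].put(mcp_message)
```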
AnyDocs MCP Server
Transforms any website's documentation into an MCP-compatible interactive knowledge base with universal scraping, advanced search, and AI-powered tools. Supports GitBook, Notion, Confluence, and custom documentation platforms with real-time synchronization.
Dynamic Reincarnation Story
Enables interactive reincarnation storytelling where users choose their path after death, becoming characters like a vengeful spirit, Bilbo Baggins, or Monkey D. Luffy. Features dynamic narrative generation with personalized story paths based on user choices and soul-searching questions.
MCP Databases Server
Enables LLMs and agents to interact with relational databases (SQL Server, MySQL, PostgreSQL) through MCP tools. Supports executing queries, inserting records, listing tables, and exposing database schemas with secure credential management.
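The core pattern behind such query tools is parameter binding, so the model's values are never spliced into SQL strings. A sketch using the stdlib `sqlite3` module for self-containment; the same DB-API shape applies to SQL Server, MySQL, and PostgreSQL drivers, and the helper names here are hypothetical:

```python
import sqlite3

def execute_query(conn: sqlite3.Connection, sql: str, params: tuple = ()):
    """Run a query with bound parameters and return rows as dicts."""
    conn.row_factory = sqlite3.Row
    cur = conn.execute(sql, params)  # placeholders, never string formatting
    return [dict(row) for row in cur.fetchall()]

def list_tables(conn: sqlite3.Connection):
    """Expose the schema: table names visible to the agent."""
    rows = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    return [r[0] for r in rows]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("Ada",))
print(list_tables(conn))
print(execute_query(conn, "SELECT * FROM users WHERE name = ?", ("Ada",)))
```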
Feishu Integration Server
Provides access to Feishu (Lark) documents for AI-powered coding tools such as Cursor, Windsurf, and Cline, built on an implementation of the Model Context Protocol.
Toast MCP Server
An MCP server that displays desktop notifications on Windows 10 and macOS, compatible with VSCode Cline and supporting customizable notification parameters.
MuseScore MCP Server
A Model Context Protocol server that provides programmatic control over MuseScore through a WebSocket-based plugin system, allowing AI assistants to compose music, add lyrics, navigate scores, and control MuseScore directly.
MarketAuxMcpServer
An MCP server implementation that integrates the Marketaux API, providing search by entity, country, industry, symbol, and more.
MCP-NOSTR
A bridge that lets AI language models publish content to the Nostr network via an implementation of the Model Context Protocol (MCP).
Matomo MCP Server
A Model Context Protocol server that provides tools for interacting with the Matomo Analytics API, enabling management of sites, users, goals, and segments, plus access to analytics reports, through an MCP interface.
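For orientation, Matomo exposes its Reporting and Management APIs through `index.php?module=API`; a hedged sketch of the kind of call such tools wrap, with a placeholder instance URL and token (the methods this server actually uses are not shown here):

```python
import requests

MATOMO_URL = "https://matomo.example.com/index.php"  # placeholder instance
params = {
    "module": "API",
    "method": "SitesManager.getAllSites",  # e.g. list managed sites
    "format": "JSON",
    "token_auth": "YOUR_TOKEN",            # placeholder credential
}
resp = requests.get(MATOMO_URL, params=params, timeout=30)
resp.raise_for_status()
for site in resp.json():
    print(site["idsite"], site["name"])
```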
MCP API Service
A Model Context Protocol (MCP) server that interacts with system APIs, allowing users to check connections, look up employees, register breakfast, and update per-shift chemical information.
FastAPI MCP Demo Server
A demonstration MCP server built with FastAPI that provides basic mathematical operations and greeting services. Integrates with Gemini CLI to showcase MCP protocol implementation with simple REST endpoints.
MCP Terminal
A server that lets AI assistants execute terminal commands and retrieve the results via the Model Context Protocol (MCP).
Self-Hosted Supabase MCP Server
Enables developers to interact with self-hosted Supabase instances, providing database introspection, migration management, auth user operations, storage management, and TypeScript type generation directly from MCP-compatible development environments.