Discover Awesome MCP Servers

Extend your agent with 14,392 capabilities via MCP servers.

MuseScore MCP Server

A Model Context Protocol server that provides programmatic control over MuseScore through a WebSocket-based plugin system, allowing AI assistants to compose music, add lyrics, navigate scores, and control MuseScore directly.
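
A minimal sketch of what driving such a WebSocket bridge could look like from Python, assuming a plugin listening on ws://localhost:8765 that accepts JSON commands (the port and the addNote message shape are illustrative guesses, not this project's documented protocol):

```python
# Sketch: send a note-entry command to a hypothetical MuseScore plugin socket.
# Requires: pip install websockets
import asyncio
import json

import websockets


async def add_note(pitch: int, duration: str) -> dict:
    # The URL and message schema are assumptions for illustration only
    async with websockets.connect("ws://localhost:8765") as ws:
        await ws.send(json.dumps({"command": "addNote", "pitch": pitch, "duration": duration}))
        return json.loads(await ws.recv())  # plugin's acknowledgement


if __name__ == "__main__":
    print(asyncio.run(add_note(60, "quarter")))  # middle C, quarter note
```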

MCP Filesystem Server

A Model Context Protocol server that provides secure, intelligent interaction with files and filesystems, offering smart context management and token-efficient operations for working with large files and complex directory structures.
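
One concrete reading of "token-efficient operations on large files" is returning a bounded window of lines instead of the whole file. A sketch using the official MCP Python SDK's FastMCP (the tool name and defaults are illustrative, not this server's actual interface):

```python
# Sketch of a token-efficient file-read tool using the MCP Python SDK (FastMCP).
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("filesystem-lite")


@mcp.tool()
def read_window(path: str, start_line: int = 0, max_lines: int = 200) -> str:
    """Return at most max_lines lines of a file, so large files don't flood context."""
    lines = Path(path).read_text(encoding="utf-8", errors="replace").splitlines()
    window = lines[start_line:start_line + max_lines]
    header = f"# {path}: lines {start_line}-{start_line + len(window)} of {len(lines)}\n"
    return header + "\n".join(window)


if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```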

MCP Docker Server

Enables secure Docker command execution from isolated environments such as containers through the MCP protocol. Provides tools for managing Docker containers, images, and Docker Compose services, with security validation and async operation support.
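
The security-validation idea can be sketched as an allow-list check in front of the Docker CLI; the permitted subcommands below are an example policy, not this server's actual rules:

```python
# Sketch: validate the Docker subcommand against an allow-list before executing.
import shlex
import subprocess

ALLOWED = {"ps", "images", "logs", "inspect", "compose"}


def run_docker(command: str, timeout: int = 30) -> str:
    args = shlex.split(command)
    if not args or args[0] not in ALLOWED:
        raise ValueError(f"Subcommand not permitted: {command!r}")
    result = subprocess.run(["docker", *args], capture_output=True, text=True, timeout=timeout)
    return result.stdout if result.returncode == 0 else result.stderr


print(run_docker("ps --format '{{.Names}}'"))  # requires a local Docker daemon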

Remote MCP Server

A deployable Model Context Protocol server for Cloudflare Workers that allows users to create custom AI tools without authentication requirements and connect them to Cloudflare AI Playground or Claude Desktop.

erpnext-server

A TypeScript-based MCP server that provides integration with the ERPNext/Frappe API, allowing AI assistants to interact with ERPNext data and functionality through the Model Context Protocol.
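
For context, a hedged sketch of the kind of ERPNext/Frappe REST call such a server wraps, following Frappe's documented /api/resource/&lt;doctype&gt; convention (the instance URL, credentials, and doctype are placeholders):

```python
# Sketch of a Frappe REST read that an ERPNext MCP tool might perform.
import requests

FRAPPE_URL = "https://erp.example.com"          # placeholder instance
API_KEY, API_SECRET = "key", "secret"           # issued in ERPNext user settings


def list_docs(doctype: str, limit: int = 20) -> list:
    resp = requests.get(
        f"{FRAPPE_URL}/api/resource/{doctype}",
        headers={"Authorization": f"token {API_KEY}:{API_SECRET}"},
        params={"limit_page_length": limit},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["data"]


print(list_docs("Customer"))
```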

kickstart-mcp

🚀 Kickstart-mcp is a tutorial for using MCP and building your own MCP server/client. We'll guide you through every step of your MCP journey.
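
The natural first step in such a tutorial is a "hello world" MCP server. A minimal sketch with the official Python SDK's FastMCP:

```python
# A minimal MCP server: one tool, served over stdio.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("kickstart-demo")


@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b


if __name__ == "__main__":
    mcp.run()  # point Claude Desktop or another MCP client at this script
```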

BlazeMeter MCP Server

Provides programmatic access to BlazeMeter's performance testing platform through MCP tools. Enables users to retrieve test runs, analyze performance data, view error reports, and manage testing resources via natural language interactions.

mcp-figma

Figma MCP with full API functionality
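
As a sketch of what "full API functionality" builds on, here is a plain Figma REST call that fetches a file's document tree (the file key and token are placeholders):

```python
# Sketch: fetch a Figma file via the public REST API.
import os

import requests


def get_file(file_key: str) -> dict:
    resp = requests.get(
        f"https://api.figma.com/v1/files/{file_key}",
        headers={"X-Figma-Token": os.environ["FIGMA_TOKEN"]},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()


doc = get_file("FILE_KEY")  # placeholder key
print(doc["name"], doc["document"]["type"])
```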

DocuMCP

An MCP server that enables Claude to generate, search, and manage documentation for codebases using vector embeddings and semantic search, providing tools for creating user guides, technical documentation, code explanations, and architectural diagrams.
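
The semantic-search core can be sketched as embed-then-rank by cosine similarity; the embed() below is a self-contained stand-in for whatever embedding model the server actually uses:

```python
# Sketch: rank documentation chunks against a query by cosine similarity.
import numpy as np


def embed(texts: list[str]) -> np.ndarray:
    # Placeholder pseudo-embeddings (hash-seeded) so the sketch runs standalone.
    rngs = [np.random.default_rng(abs(hash(t)) % 2**32) for t in texts]
    return np.stack([r.standard_normal(64) for r in rngs])


def search(query: str, chunks: list[str], top_k: int = 3) -> list[str]:
    vecs = embed(chunks)
    q = embed([query])[0]
    sims = vecs @ q / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(q))
    return [chunks[i] for i in np.argsort(sims)[::-1][:top_k]]


print(search("how do I configure logging?", ["Logging setup...", "Auth flow...", "Deploy guide..."]))
```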

OPS MCP Server

A collection of server monitoring and inspection tools that provides remote utilities for checking network interfaces, service states, and firewall configurations on servers.
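
A sketch of one such inspection utility: a TCP reachability check of the sort an MCP tool could expose for service-status monitoring:

```python
# Sketch: report whether a host:port is accepting TCP connections.
import socket


def check_service(host: str, port: int, timeout: float = 3.0) -> str:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return f"{host}:{port} is accepting connections"
    except OSError as exc:
        return f"{host}:{port} unreachable: {exc}"


print(check_service("localhost", 22))  # e.g. sshd
print(check_service("localhost", 80))  # e.g. a web server
```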

Fedspeak MCP Server

MCP Server

HWP MCP Server

A Node.js server implementing the Model Context Protocol (MCP) to control HWP (Korean word processor) documents, allowing AI assistants like Claude to create and manipulate Hangul documents.

nREPL MCP Server

Enables interaction with a running Clojure nREPL instance for evaluating Clojure code, inspecting namespaces, and retrieving connection statuses, compatible with MCP clients like Claude Desktop and CLine in VSCode.
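
Under the hood, nREPL speaks bencode over TCP. A hedged sketch of the raw round trip a server like this performs (the host/port are assumptions, and the response is left as raw bytes rather than decoded):

```python
# Sketch: send an nREPL "eval" op as bencode and read the raw reply.
import socket


def bencode(value) -> bytes:
    if isinstance(value, str):
        raw = value.encode()
        return str(len(raw)).encode() + b":" + raw
    if isinstance(value, dict):  # bencode dicts use sorted keys
        items = b"".join(bencode(k) + bencode(v) for k, v in sorted(value.items()))
        return b"d" + items + b"e"
    raise TypeError(type(value))


with socket.create_connection(("localhost", 7888)) as sock:  # assumed nREPL port
    sock.sendall(bencode({"op": "eval", "code": "(+ 1 2)", "id": "1"}))
    print(sock.recv(4096))  # raw bencoded response containing :value "3"
```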

hyperliquid-info-mcp

ONEDeFi MCP Server

Enables AI-powered DeFi operations across Ethereum, Polygon, and Solana with automated portfolio optimization, risk assessment, and yield farming strategies. Provides intelligent portfolio diagnostics, investment strategy generation, and multi-chain DeFi protocol integration through natural language.

MCP-NOSTR

A bridge that lets AI language models publish content to the Nostr network by implementing the Model Context Protocol (MCP).

Matomo MCP Server

A Model Context Protocol server that provides tools to interact with the Matomo Analytics API, enabling management of sites, users, goals, and segments, and access to analytics reports through an MCP interface.
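
A sketch of the underlying Matomo Reporting API call such tools wrap, using the documented module=API query convention (the instance URL and token are placeholders):

```python
# Sketch: fetch yesterday's visit summary from a Matomo instance.
import requests


def visits_summary(matomo_url: str, token: str, site_id: int) -> dict:
    resp = requests.get(matomo_url + "/index.php", params={
        "module": "API",
        "method": "VisitsSummary.get",
        "idSite": site_id,
        "period": "day",
        "date": "yesterday",
        "format": "JSON",
        "token_auth": token,
    }, timeout=10)
    resp.raise_for_status()
    return resp.json()


print(visits_summary("https://analytics.example.com", "anonymous", 1))
```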

MCP API Service

A Model Context Protocol (MCP) server that interacts with system APIs, allowing users to check connections, search for employees, register breakfast, and update shift-based chemical information.

MCP Terminal

A server that lets AI assistants execute terminal commands and retrieve the results through the Model Context Protocol (MCP).
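
A sketch of the core tool such a server exposes, using the MCP Python SDK's FastMCP; a real server would add sandboxing and allow-listing on top, since running shell commands for a model is inherently dangerous:

```python
# Sketch: an MCP tool that runs a shell command and returns its output.
import subprocess

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("terminal")


@mcp.tool()
def run_command(command: str, timeout: int = 30) -> str:
    """Execute a shell command and return exit code, stdout, and stderr."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True, timeout=timeout)
    return f"exit={result.returncode}\nstdout:\n{result.stdout}\nstderr:\n{result.stderr}"


if __name__ == "__main__":
    mcp.run()
```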

@container-inc/mcp

MCP server for automated deployments on Container Inc.

Open Food Facts MCP Server

Enables AI assistants to access the Open Food Facts database to query detailed food product information, nutritional data, and environmental scores. Supports product lookup by barcode, smart search with filtering, nutritional analysis, product comparison, and dietary recommendations to help users make informed food choices.
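
A sketch of the barcode lookup this builds on, via the public Open Food Facts v2 API (no API key required):

```python
# Sketch: look up a product by barcode in Open Food Facts.
import requests


def product_by_barcode(barcode: str) -> dict:
    resp = requests.get(
        f"https://world.openfoodfacts.org/api/v2/product/{barcode}.json",
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["product"]


p = product_by_barcode("3017620422003")  # a well-known sample EAN (Nutella)
print(p.get("product_name"), p.get("nutriscore_grade"))
```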

Notion MCP Server

A simple MCP server implementation for Notion integration

Local Falcon MCP Server

Connects AI systems to Local Falcon API, enabling access to local SEO reporting tools including scan reports, trend analysis, keyword tracking, and competitor data through the Model Context Protocol.

UUID MCP Provider

A simple Model Context Protocol server that generates timestamp-based UUIDs (v7) when called by an LLM, providing chronologically sortable unique identifiers with no input parameters required.
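
What "timestamp-based UUID (v7)" means per RFC 9562: the first 48 bits are a Unix-millisecond timestamp, so IDs sort by creation time. Python's stdlib has no uuid.uuid7() yet, so a sketch builds one by hand:

```python
# Sketch: construct a UUIDv7 (48-bit ms timestamp + version/variant + randomness).
import os
import time
import uuid


def uuid7() -> uuid.UUID:
    ts = (time.time_ns() // 1_000_000).to_bytes(6, "big")  # 48-bit ms timestamp
    rand = bytearray(os.urandom(10))
    rand[0] = (rand[0] & 0x0F) | 0x70  # set version nibble to 7
    rand[2] = (rand[2] & 0x3F) | 0x80  # set RFC 4122/9562 variant bits
    return uuid.UUID(bytes=ts + bytes(rand))


a = uuid7()
time.sleep(0.002)  # ensure a later millisecond
b = uuid7()
print(a, b, a < b)  # True: later UUIDs compare greater, so they sort by time
```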

YaVendió Tools

An MCP-based messaging system that allows AI systems to interact with various messaging platforms through standardized tools for sending text, images, documents, buttons, and alerts.

pumpswap-mcp

Model Context Protocol (MCP)

A breakdown of a working pattern for SSE-based (Server-Sent Events) MCP (Model Context Protocol) clients and servers using the Gemini LLM.

**Core Concepts**

* **SSE (Server-Sent Events):** A lightweight, unidirectional protocol where the server pushes data updates to the client over a single HTTP connection. Ideal for real-time updates.
* **MCP (Model Context Protocol):** A protocol for structured communication between components. In this context, it defines the format of messages exchanged between the client and server. We'll assume a simple JSON-based message format for this example.
* **Gemini LLM:** Google's large language model, used on the server side to process client requests and generate responses.

**Architecture**

1. **Client (web browser, mobile app, etc.):**
   * Establishes an SSE connection to the server.
   * Sends initial requests (if needed) via standard HTTP (e.g., `POST` or `GET`) to set up the context for the SSE stream: user authentication, initial data retrieval, or instructions for the LLM.
   * Listens for SSE events from the server.
   * Parses the SSE event data (in MCP format, likely JSON).
   * Updates the UI or performs other actions based on the received data.
   * Can send new requests via standard HTTP (outside the SSE stream) to change the context or initiate new interactions.
2. **Server (Node.js, Python/Flask, Go, etc.):**
   * Handles initial HTTP requests from the client (e.g., authentication, setup).
   * Establishes and maintains SSE connections with clients.
   * Receives requests (likely HTTP `POST`) containing instructions or data for the Gemini LLM.
   * Uses the Gemini LLM to process the requests.
   * Formats the LLM's responses into MCP messages (JSON).
   * Sends these MCP messages as SSE events to the appropriate client(s).
   * Manages the state of each client connection (e.g., user context, conversation history).

**MCP Message Format (Example - JSON)**

```json
{
  "type": "llm_response",
  "data": {
    "text": "The capital of France is Paris.",
    "metadata": {
      "model": "gemini-pro",
      "timestamp": "2024-10-27T10:00:00Z"
    }
  }
}
```

Other possible message types:

* `status`: Server status updates (e.g., "LLM processing...", "Ready").
* `error`: Error messages.
* `context_update`: Updates to the client's context (e.g., new user settings).

**Workflow**

1. **Client initialization:** The client sends an HTTP request (e.g., `POST /init`) to initialize the session, possibly including authentication tokens or initial instructions. The server authenticates the client, sets up a session, and returns a success response.
2. **SSE connection establishment:** The client opens an SSE connection (e.g., `GET /events`). The server sets the appropriate SSE headers (e.g., `Content-Type: text/event-stream`).
3. **Client request (example: asking a question):** The client sends `POST /ask` with the question in the request body:

```json
{ "query": "What is the capital of France?" }
```

4. **Server processing:** The server extracts the query, sends it to the Gemini LLM, receives the response ("The capital of France is Paris."), and formats it into an MCP message like the example above.
5. **Server sends SSE event:** The server delivers the MCP message as an SSE event:

```
event: message
data: {"type": "llm_response", "data": {"text": "The capital of France is Paris.", "metadata": {"model": "gemini-pro", "timestamp": "2024-10-27T10:00:00Z"}}}
```

6. **Client receives and processes the event:** The client parses the JSON in the `data` field and updates the UI with the LLM's response.

**Code Snippets (Illustrative - Python/Flask)**

**Server (Python/Flask):**

```python
from flask import Flask, Response, request, jsonify
import json
import queue
import time

# In a real implementation you would import and configure the Gemini SDK here.

app = Flask(__name__)

clients = {}  # client_id -> queue.Queue of pending SSE messages


def generate_llm_response(query):
    """Simulates a Gemini LLM response; replace with a real Gemini API call."""
    if "capital of france" in query.lower():
        return "The capital of France is Paris."
    return "I don't have information on that topic."


def format_sse_message(data):
    """Formats a dict as an SSE 'message' event."""
    return f"event: message\ndata: {json.dumps(data)}\n\n"


@app.route('/init', methods=['POST'])
def init():
    # Handle client initialization (e.g., authentication).
    return jsonify({"status": "success", "message": "Session initialized"})


@app.route('/events')
def events():
    """Handles SSE connections; each client gets its own message queue."""
    client_id = request.headers.get('X-Client-ID', request.remote_addr)
    clients[client_id] = queue.Queue()

    def event_stream():
        try:
            while True:
                try:
                    # Deliver pending messages; otherwise emit a heartbeat
                    yield clients[client_id].get(timeout=15)
                except queue.Empty:
                    yield ": keep-alive\n\n"  # SSE comment keeps the connection open
        except GeneratorExit:
            del clients[client_id]  # Clean up when the client disconnects

    return Response(event_stream(), mimetype="text/event-stream")


@app.route('/ask', methods=['POST'])
def ask():
    """Handles client requests to the LLM."""
    query = request.json.get('query')
    if not query:
        return jsonify({"status": "error", "message": "Missing query"}), 400

    llm_response = generate_llm_response(query)
    mcp_message = {
        "type": "llm_response",
        "data": {
            "text": llm_response,
            "metadata": {
                "model": "mock-gemini",
                "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
            },
        },
    }
    # Push the SSE event to every connected client's queue
    # (a real system would target only the requesting client).
    for q in clients.values():
        q.put(format_sse_message(mcp_message))
    return jsonify({"status": "success", "message": "Request processed"})


if __name__ == '__main__':
    app.run(debug=True, threaded=True)
```

**Client (JavaScript - Browser):**

```javascript
// Initialize the session, then open the SSE stream
fetch('/init', { method: 'POST' })
  .then(response => response.json())
  .then(data => {
    if (data.status === 'success') {
      console.log('Session initialized');
      startSSE();
    } else {
      console.error('Session initialization failed:', data.message);
    }
  });

function startSSE() {
  const eventSource = new EventSource('/events');

  eventSource.onmessage = (event) => {
    const data = JSON.parse(event.data);
    console.log('Received SSE event:', data);
    if (data.type === 'llm_response') {
      // Update the UI with the LLM's response
      document.getElementById('response').textContent = data.data.text;
    } else if (data.type === 'status') {
      console.log('Status:', data.data.message);
    }
  };

  eventSource.onerror = (error) => {
    console.error('SSE error:', error);
    eventSource.close();
  };
}

function askQuestion() {
  const question = document.getElementById('question').value;
  fetch('/ask', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: question })
  })
    .then(response => response.json())
    .then(data => {
      if (data.status === 'success') {
        console.log('Question submitted');
      } else {
        console.error('Error submitting question:', data.message);
      }
    });
}

// Example HTML (for the client side):
// <input type="text" id="question" placeholder="Ask a question">
// <button onclick="askQuestion()">Ask</button>
// <div id="response"></div>
```

**Key Considerations and Improvements**

* **Error handling:** Implement robust error handling on both client and server; send error messages as SSE events.
* **Authentication/authorization:** Secure the SSE endpoint with authentication tokens or cookies to verify the client's identity.
* **Connection management:** Handle client disconnections gracefully and clean up server resources when a client disconnects.
* **Scalability:** For high-traffic applications, use a message queue (e.g., Redis, RabbitMQ) to distribute SSE events across server instances, decoupling LLM processing from event delivery.
* **Rate limiting:** Implement rate limiting to prevent abuse of the Gemini LLM API.
* **Context management:** Maintain conversation history or other relevant context per client (in a database or in-memory store) so the LLM can give more relevant responses.
* **Streaming responses:** For long-running LLM tasks, stream the response to the client as a series of SSE events for a better user experience; Gemini supports streaming responses.
* **MCP design:** Keep the message format flexible and extensible; consider a schema validation library to ensure messages are valid.
* **Client ID:** Generate a unique client ID per SSE connection (e.g., passed in an `X-Client-ID` header) so the server can target specific clients.
* **Heartbeats:** Send periodic heartbeat messages so the SSE connection doesn't time out.
* **Reconnection:** Implement automatic client-side reconnection in case the SSE connection is lost.

**Benefits of this Pattern**

* **Real-time updates:** SSE pushes Gemini LLM output to the client as it becomes available.
* **Unidirectional communication:** SSE is well suited to scenarios where the server primarily pushes data to the client.
* **Lightweight:** SSE is a lighter-weight protocol than WebSockets.
* **Scalability:** With proper architecture (e.g., message queues), the pattern can handle a large number of concurrent clients.
* **Structured communication:** MCP gives the client-server exchange a well-defined message format.

**When to Use This Pattern**

* Applications that require real-time updates from an LLM (e.g., chatbots, live dashboards).
* Scenarios where the server needs to push data without the client constantly polling.
* Applications that need a structured way to communicate between client and server.

This explanation and the snippets above should give you a solid foundation for building SSE-based MCP clients and servers with the Gemini LLM. Adapt the code to your specific needs and environment.

Thought Space - MCP Advanced Branch-Thinking Tool

An MCP tool that enables structured thinking and analysis across multiple AI platforms through branch management, semantic analysis, and cognitive enhancement.

EVM MCP Server