Discover Awesome MCP Servers

Extend your agent with 26,375 capabilities via MCP servers.

Apple Shortcuts MCP Server

Enables the generation, management, and validation of Apple Shortcuts (.shortcut files) by providing tools to search actions and build control flow blocks. It allows users to programmatically create and analyze shortcut structures for deployment on iOS and macOS devices.

Tavily Web Search MCP Server

Enables web search capabilities through the Tavily API, allowing users to search the internet for information using natural language queries. Serves as a demonstration and educational project for building MCP servers with external API integrations.

UUID MCP Provider

A simple Model Context Protocol server that generates timestamp-based UUIDs (v7) when called by an LLM, providing chronologically sortable unique identifiers with no input parameters required.

vk-mcp-server

Model Context Protocol server for VK (VKontakte) social network API

YaVendió Tools

An MCP-based messaging system that allows AI systems to interact with various messaging platforms through standardized tools for sending text, images, documents, buttons, and alerts.

pumpswap-mcp

Model Context Protocol (MCP)

A working pattern for SSE-based (Server-Sent Events) MCP (Model Context Protocol) clients and servers using the Gemini LLM.

**Core Concepts**

* **SSE (Server-Sent Events):** A lightweight, unidirectional protocol in which the server pushes data updates to the client over a single HTTP connection. Ideal for real-time updates.
* **MCP (Model Context Protocol):** A protocol for structured communication between components. In this context, it defines the format of messages exchanged between the client and server; the examples below assume a simple JSON-based message format.
* **Gemini LLM:** Google's large language model, used on the server side to process client requests and generate responses.

**Architecture**

1. **Client (web browser, mobile app, etc.):**
   * Establishes an SSE connection to the server.
   * Sends initial requests (if needed) via standard HTTP (e.g., `POST` or `GET`) to set up the context for the SSE stream. These requests might include user authentication, initial data retrieval, or instructions for the LLM.
   * Listens for SSE events from the server.
   * Parses the SSE event data (MCP messages, likely JSON).
   * Updates the UI or performs other actions based on the received data.
   * Can send new requests to the server via standard HTTP (outside the SSE stream) to change the context or initiate new interactions.
2. **Server (Node.js, Python/Flask, Go, etc.):**
   * Handles initial HTTP requests from the client (e.g., authentication, setup).
   * Establishes and maintains SSE connections with clients.
   * Receives requests (likely via HTTP `POST`) containing instructions or data for the Gemini LLM.
   * Uses the Gemini LLM to process the requests.
   * Formats the LLM's responses into MCP messages (JSON).
   * Sends these MCP messages as SSE events to the appropriate client(s).
   * Manages the state of each client connection (e.g., user context, conversation history).
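The client's core job in this architecture is turning raw `event:`/`data:` lines from the SSE stream into structured MCP messages. A minimal sketch of that parsing step in Python (the `parse_sse_event` helper is illustrative, not part of any library):

```python
import json

def parse_sse_event(raw_event: str) -> dict:
    """Parse one SSE event block (the lines up to a blank line) into
    its event type and decoded JSON payload."""
    event_type = "message"  # SSE default when no 'event:' field is present
    data_lines = []
    for line in raw_event.splitlines():
        if line.startswith("event:"):
            event_type = line[len("event:"):].strip()
        elif line.startswith("data:"):
            # Per the SSE spec, multiple data lines are joined with newlines
            data_lines.append(line[len("data:"):].strip())
    payload = json.loads("\n".join(data_lines)) if data_lines else None
    return {"event": event_type, "data": payload}

raw = 'event: message\ndata: {"type": "llm_response", "data": {"text": "The capital of France is Paris."}}\n\n'
msg = parse_sse_event(raw)
print(msg["data"]["type"])          # llm_response
print(msg["data"]["data"]["text"])  # The capital of France is Paris.
```

In a browser the `EventSource` API does this parsing for you; the sketch shows what a non-browser client (CLI tool, server-side agent) would implement itself.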
**MCP Message Format (Example: JSON)**

```json
{
  "type": "llm_response",
  "data": {
    "text": "The capital of France is Paris.",
    "metadata": {
      "model": "gemini-pro",
      "timestamp": "2024-10-27T10:00:00Z"
    }
  }
}
```

Other possible message types:

* `status`: Server status updates (e.g., "LLM processing...", "Ready").
* `error`: Error messages.
* `context_update`: Updates to the client's context (e.g., new user settings).

**Workflow**

1. **Client initialization:** The client sends an HTTP request (e.g., `POST /init`) to initialize the session, possibly including authentication tokens or initial instructions. The server authenticates the client, sets up a session, and returns a success response.
2. **SSE connection establishment:** The client opens an SSE connection (e.g., `GET /events`); the server sets the appropriate headers (e.g., `Content-Type: text/event-stream`).
3. **Client request (example: asking a question):** The client sends an HTTP `POST` request (e.g., `POST /ask`) with the question in the body:

   ```json
   { "query": "What is the capital of France?" }
   ```

4. **Server processing:** The server extracts the query, sends it to the Gemini LLM, receives the response ("The capital of France is Paris."), and wraps it in an `llm_response` MCP message of the form shown above.
5. **Server sends SSE event:** The server sends the MCP message to the client as an SSE event:

   ```
   event: message
   data: {"type": "llm_response", "data": {"text": "The capital of France is Paris.", "metadata": {"model": "gemini-pro", "timestamp": "2024-10-27T10:00:00Z"}}}
   ```

6. **Client receives and processes the SSE event:** The client parses the JSON in the `data` field and updates the UI with the LLM's response.

**Code Snippets (Illustrative: Python/Flask)**

**Server (Python/Flask):**

```python
from flask import Flask, Response, request, jsonify
import json
import queue
import time
import uuid

# In a real implementation, set up the Gemini API here, e.g.:
# import google.generativeai as genai
# model = genai.GenerativeModel('gemini-pro')

app = Flask(__name__)

clients = {}  # client_id -> queue.Queue of pending SSE messages


def generate_llm_response(query):
    """Simulates a Gemini LLM response.

    In a real implementation you would call the Gemini API here:
        response = model.generate_content(query)
        return response.text
    """
    if "capital of france" in query.lower():
        return "The capital of France is Paris."
    return "I don't have information on that topic."


def format_sse_message(data):
    """Formats a dict as an SSE 'message' event."""
    return f"event: message\ndata: {json.dumps(data)}\n\n"


@app.route('/init', methods=['POST'])
def init():
    # Handle client initialization (e.g., authentication).
    # For simplicity, just return a success message.
    return jsonify({"status": "success", "message": "Session initialized"})


@app.route('/events')
def events():
    """Handles SSE connections."""
    client_id = request.headers.get('X-Client-ID') or str(uuid.uuid4())
    clients[client_id] = queue.Queue()  # Per-client outbound message queue

    def event_stream():
        try:
            while True:
                try:
                    # Block until a message arrives for this client;
                    # time out periodically to send a heartbeat.
                    yield clients[client_id].get(timeout=15)
                except queue.Empty:
                    yield ": keep-alive\n\n"  # SSE comment line prevents timeouts
        except GeneratorExit:
            del clients[client_id]  # Clean up when the client disconnects

    return Response(event_stream(), mimetype="text/event-stream")


@app.route('/ask', methods=['POST'])
def ask():
    """Handles client requests to the LLM."""
    query = request.json.get('query')
    if not query:
        return jsonify({"status": "error", "message": "Missing query"}), 400

    llm_response = generate_llm_response(query)
    mcp_message = {
        "type": "llm_response",
        "data": {
            "text": llm_response,
            "metadata": {
                "model": "mock-gemini",
                "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
            }
        }
    }
    # Queue the SSE event for all connected clients; in practice you would
    # target the specific client that asked (e.g., via its X-Client-ID).
    sse_message = format_sse_message(mcp_message)
    for client_queue in clients.values():
        client_queue.put(sse_message)

    return jsonify({"status": "success", "message": "Request processed"})


if __name__ == '__main__':
    app.run(debug=True)
```

**Client (JavaScript, browser):**

```javascript
// Initialize the session
fetch('/init', {
  method: 'POST'
  // Add any necessary headers or body data
})
  .then(response => response.json())
  .then(data => {
    if (data.status === 'success') {
      console.log('Session initialized');
      startSSE();
    } else {
      console.error('Session initialization failed:', data.message);
    }
  });

function startSSE() {
  const eventSource = new EventSource('/events');

  eventSource.onmessage = (event) => {
    const data = JSON.parse(event.data);
    console.log('Received SSE event:', data);
    if (data.type === 'llm_response') {
      // Update the UI with the LLM's response
      document.getElementById('response').textContent = data.data.text;
    } else if (data.type === 'status') {
      console.log('Status:', data.data.message);
    }
  };

  eventSource.onerror = (error) => {
    console.error('SSE error:', error);
    eventSource.close();
  };
}

function askQuestion() {
  const question = document.getElementById('question').value;
  fetch('/ask', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: question })
  })
    .then(response => response.json())
    .then(data => {
      if (data.status === 'success') {
        console.log('Question submitted');
      } else {
        console.error('Error submitting question:', data.message);
      }
    });
}

// Example HTML (for the client side):
// <input type="text" id="question" placeholder="Ask a question">
// <button onclick="askQuestion()">Ask</button>
// <div id="response"></div>
```

**Key Considerations and Improvements**

* **Error handling:** Implement robust error handling on both the client and server, and send error messages as SSE events.
* **Authentication/authorization:** Secure the SSE endpoint. Use authentication tokens or cookies to verify the client's identity.
* **Connection management:** Handle client disconnections gracefully and clean up server resources when a client disconnects.
* **Scalability:** For high-traffic applications, consider a message queue (e.g., Redis, RabbitMQ) to distribute SSE events across multiple server instances. This decouples LLM processing from SSE event delivery.
* **Rate limiting:** Implement rate limiting to prevent abuse of the Gemini LLM API.
* **Context management:** Maintain conversation history or other relevant context for each client so the LLM can give more relevant responses. Consider a database or in-memory store for this.
* **Streaming responses:** For long-running LLM tasks, stream the response back to the client as a series of SSE events for a better user experience. Gemini supports streaming responses.
* **MCP design:** Design the MCP message format to be flexible and extensible, and consider a schema validation library to ensure messages are valid.
* **Client ID:** Generate a unique client ID for each SSE connection so the server can target specific clients with SSE events; the client can pass this ID in an `X-Client-ID` header when establishing the connection.
* **Heartbeats:** Send periodic heartbeat messages from the server to keep the SSE connection from timing out.
* **Reconnection:** Implement automatic reconnection logic on the client in case the SSE connection is lost.

**Benefits of This Pattern**

* **Real-time updates:** SSE pushes Gemini LLM output to the client as it becomes available.
* **Unidirectional communication:** SSE is well suited to scenarios where the server primarily pushes data to the client.
* **Lightweight:** SSE is a lighter-weight protocol than WebSockets.
* **Scalability:** With proper architecture (e.g., message queues), the pattern scales to a large number of concurrent clients.
* **Structured communication:** MCP provides a structured way to exchange messages between client and server.

**When to Use This Pattern**

* Applications that require real-time updates from an LLM (e.g., chatbots, live dashboards).
* Scenarios where the server needs to push data to the client without constant polling.
* Applications that need structured client-server communication.

This explanation and these code snippets should give you a solid foundation for building SSE-based MCP clients and servers with the Gemini LLM. Adapt the code to your specific needs and environment.
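The streaming-responses consideration above can be sketched as a generator that emits a series of chunk messages followed by a final marker, instead of one `llm_response` event. This is a minimal illustration: the `llm_response_chunk`/`llm_response_done` message types and the string-slicing stand-in are assumptions of this sketch, not part of MCP or the Gemini API (with the real API you would iterate over a streaming `generate_content` call).

```python
import json

def stream_mcp_events(text: str, chunk_size: int = 16):
    """Yield SSE-formatted MCP messages that deliver an LLM response
    incrementally, ending with a 'done' marker.  Here the chunks come
    from slicing a precomputed string; a real server would yield as
    the LLM produces tokens."""
    for start in range(0, len(text), chunk_size):
        chunk = {"type": "llm_response_chunk",
                 "data": {"text": text[start:start + chunk_size]}}
        yield f"event: message\ndata: {json.dumps(chunk)}\n\n"
    done = {"type": "llm_response_done", "data": {}}
    yield f"event: message\ndata: {json.dumps(done)}\n\n"

# The client reassembles the chunks in arrival order:
events = list(stream_mcp_events("The capital of France is Paris.", chunk_size=10))
parts = [json.loads(e.split("data: ", 1)[1])["data"].get("text", "")
         for e in events]
print("".join(parts))  # The capital of France is Paris.
```

On the Flask side, such a generator can be returned directly as the body of the `text/event-stream` response, so the client's UI fills in as tokens arrive rather than waiting for the full answer.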

Oomol Connect MCP Server

Enables integration with Oomol Connect for executing tasks, managing blocks and packages, and uploading files. Supports task execution with real-time progress monitoring and intelligent polling across audio, video, and other processing workflows.

MuseScore MCP Server

A Model Context Protocol server that provides programmatic control over MuseScore through a WebSocket-based plugin system, allowing AI assistants to compose music, add lyrics, navigate scores, and control MuseScore directly.

Local Snowflake MCP Server

Enables Claude Desktop to interact with Snowflake databases through natural-language SQL queries. Built in Python, it allows secure local integration between LLMs and enterprise data systems for database operations and analysis.

MCP Filesystem Server

A Model Context Protocol server that provides secure, intelligent interaction with files and filesystems, offering smart context management and token-efficient operations for working with large files and complex directory structures.

MCP Docker Server

Enables secure Docker command execution from isolated environments like containers through MCP protocol. Provides tools for managing Docker containers, images, and Docker Compose services with security validation and async operation support.

Remote MCP Server

A deployable Model Context Protocol server for Cloudflare Workers that allows users to create custom AI tools without authentication requirements and connect them to Cloudflare AI Playground or Claude Desktop.

erpnext-server

A TypeScript-based MCP server that provides integration with the ERPNext/Frappe API, allowing AI assistants to interact with ERPNext data and functionality through the Model Context Protocol.

LinkedIn MCP Assistant

vSphere MCP Server

Enables AI agents to manage VMware vSphere virtual infrastructure through comprehensive operations including VM power control, snapshot management, resource monitoring, performance analytics, and bulk operations with built-in safety confirmations for destructive actions.

Interactive Feedback MCP

A Model Context Protocol server that enables AI assistants to request user feedback at critical points during interactions, improving communication and reducing unnecessary tool calls.

For Five Coffee MCP Server

Enables AI assistants to fetch, search, and organize menu information from For Five Coffee café. Provides access to complete menu data, category filtering, and item search capabilities through both MCP and REST API interfaces.

MCP-NOSTR

A bridge that lets AI language models publish content to the Nostr network by implementing the Model Context Protocol (MCP).

Matomo MCP Server

A Model Context Protocol server that provides tools to interact with the Matomo Analytics API, enabling management of sites, users, goals, and segments, and access to analytics reports through an MCP interface.

ReasonForge

Provides a suite of deterministic math tools powered by SymPy to handle algebra, calculus, linear algebra, and statistics via the Model Context Protocol. It enables smaller language models to delegate complex computations to a verified symbolic backend for accurate and reliable results.

MCP API Service

A Model Context Protocol (MCP) server that interacts with system APIs, allowing users to verify connections, search for employees, register breakfast, and update per-shift chemical information.

YaTracker Connector

Enables interaction with Yandex Tracker through its API for managing tasks, comments, and attachments. It supports issue searching, status transitions, and metadata retrieval for automated project management.

FastAPI MCP Demo Server

A demonstration MCP server built with FastAPI that provides basic mathematical operations and greeting services. Integrates with Gemini CLI to showcase MCP protocol implementation with simple REST endpoints.

MCP Terminal

A server that lets AI assistants execute terminal commands and retrieve the results through the Model Context Protocol (MCP).

Self-Hosted Supabase MCP Server

Enables developers to interact with self-hosted Supabase instances, providing database introspection, migration management, auth user operations, storage management, and TypeScript type generation directly from MCP-compatible development environments.

@container-inc/mcp

MCP server for automated deployments on Container Inc.

Kylas CRM MCP Server

Enables management of Kylas CRM lead operations, including creating leads, searching and filtering records, and resolving user, product, or pipeline IDs. It provides specialized tools for monitoring idle leads and accessing lead schema instructions through natural language.

Scrapeless MCP Server

A Model Context Protocol server implementation that enables AI assistants like Claude to perform Google searches and retrieve web data directly through natural-language requests.

Files-DB-MCP

A local vector database system that provides LLM coding agents with fast, efficient semantic search over software projects via the Model Context Protocol.