Discover Awesome MCP Servers

Extend your agent with 14,392 capabilities via MCP servers.

MCP Demo

An MCP server for Slack, built with Python.

@container-inc/mcp

MCP server for automated deployments on Container Inc.

Local Mcp Server Tutorial

A tutorial for creating a local MCP server (stdio).

Open Food Facts MCP Server

Enables AI assistants to access the Open Food Facts database to query detailed food product information, nutritional data, and environmental scores. Supports product lookup by barcode, smart search with filtering, nutritional analysis, product comparison, and dietary recommendations to help users make informed food choices.
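
As a rough illustration of the kind of lookup such a server wraps, here is a minimal sketch that fetches a product by barcode from the public Open Food Facts API (the endpoint pattern comes from the Open Food Facts documentation; the function name and field selection are illustrative, not this server's actual tools):

```python
import requests

def lookup_product(barcode: str) -> dict:
    """Fetch basic product info from the public Open Food Facts API by barcode."""
    url = f"https://world.openfoodfacts.org/api/v2/product/{barcode}.json"
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    product = resp.json().get("product", {})
    return {
        "name": product.get("product_name"),
        "nutriscore": product.get("nutriscore_grade"),
        "ecoscore": product.get("ecoscore_grade"),
        "nutriments": product.get("nutriments", {}),
    }

# Example barcode (Nutella, commonly used in the Open Food Facts docs)
print(lookup_product("3017620422003"))
```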

Notion MCP Server

A simple MCP server implementation for Notion integration

Local Falcon MCP Server

Connects AI systems to Local Falcon API, enabling access to local SEO reporting tools including scan reports, trend analysis, keyword tracking, and competitor data through the Model Context Protocol.

MaxKB

💬 MaxKB is a ready-to-use RAG chatbot that offers a robust workflow and MCP tool-use capabilities. It supports a wide range of mainstream large language models (LLMs), including DeepSeek-R1, Llama 3.3, OpenAI, and others.

UUID MCP Provider

A simple Model Context Protocol server that generates timestamp-based (v7) UUIDs when called by an LLM, providing chronologically sortable unique identifiers with no input parameters required.
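
For context on what "timestamp-based (v7)" means, here is a minimal sketch of how a v7 UUID can be assembled by hand (the bit layout follows RFC 9562; the function is illustrative and not this provider's actual implementation):

```python
import os
import time
import uuid

def uuid7() -> uuid.UUID:
    """Build a v7-style UUID: 48-bit Unix ms timestamp + version/variant bits + random bits."""
    ts_ms = time.time_ns() // 1_000_000
    rand_a = int.from_bytes(os.urandom(2), "big") & 0x0FFF           # 12 random bits
    rand_b = int.from_bytes(os.urandom(8), "big") & 0x3FFFFFFFFFFFFFFF  # 62 random bits

    value = (ts_ms & 0xFFFFFFFFFFFF) << 80  # timestamp in the top 48 bits
    value |= 0x7 << 76                      # version 7
    value |= rand_a << 64
    value |= 0x2 << 62                      # RFC variant bits (10)
    value |= rand_b
    return uuid.UUID(int=value)

# Successive calls sort chronologically by their leading timestamp bits.
print(uuid7())
```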

YaVendió Tools

An MCP-based messaging system that allows AI systems to interact with various messaging platforms through standardized tools for sending text, images, documents, buttons, and alerts.

pumpswap-mcp

Model Context Protocol (MCP)

Here's a breakdown of a working pattern for SSE-based (Server-Sent Events) MCP clients and servers using the Gemini LLM.

**Core Concepts**

* **SSE (Server-Sent Events):** A lightweight, unidirectional protocol where the server pushes data updates to the client over a single HTTP connection. Ideal for real-time updates.
* **MCP (Model Context Protocol):** A protocol for structured communication between components. In this context, it defines the format of messages exchanged between the client and server. We'll assume a simple JSON-based message format for this example.
* **Gemini LLM:** Google's large language model, used on the server side to process client requests and generate responses.

**Architecture**

1. **Client (web browser, mobile app, etc.):**
   * Establishes an SSE connection to the server.
   * Sends initial requests (if needed) via standard HTTP (e.g., `POST` or `GET`) to set up the context for the SSE stream. These requests might include user authentication, initial data retrieval, or instructions for the LLM.
   * Listens for SSE events from the server.
   * Parses the SSE event data (which will be in MCP format, likely JSON).
   * Updates the UI or performs other actions based on the received data.
   * Can send new requests to the server via standard HTTP (outside the SSE stream) to change the context or initiate new interactions.
2. **Server (Node.js, Python/Flask, Go, etc.):**
   * Handles initial HTTP requests from the client (e.g., for authentication and setup).
   * Establishes and maintains SSE connections with clients.
   * Receives requests (likely via HTTP `POST`) containing instructions or data for the Gemini LLM.
   * Uses the Gemini LLM to process the requests.
   * Formats the LLM's responses into MCP messages (JSON).
   * Sends these MCP messages as SSE events to the appropriate client(s).
   * Manages the state of each client connection (e.g., user context, conversation history).

**MCP Message Format (Example - JSON)**

```json
{
  "type": "llm_response",
  "data": {
    "text": "The capital of France is Paris.",
    "metadata": {
      "model": "gemini-pro",
      "timestamp": "2024-10-27T10:00:00Z"
    }
  }
}
```

Other possible message types:

* `status`: Server status updates (e.g., "LLM processing...", "Ready").
* `error`: Error messages.
* `context_update`: Updates to the client's context (e.g., new user settings).

**Workflow**

1. **Client initialization:**
   * Client sends an HTTP request (e.g., `POST /init`) to the server to initialize the session. This might include authentication tokens or initial instructions.
   * Server authenticates the client, sets up a session, and returns a success response.
2. **SSE connection establishment:**
   * Client opens an SSE connection to the server (e.g., `GET /events`).
   * Server sets the appropriate headers for SSE (e.g., `Content-Type: text/event-stream`).
3. **Client request (example: asking a question):**
   * Client sends an HTTP `POST` request to the server (e.g., `POST /ask`) with the question in the body: `{"query": "What is the capital of France?"}`.
4. **Server processing:**
   * Server receives the `POST /ask` request and extracts the query.
   * Server sends the query to the Gemini LLM and receives the response ("The capital of France is Paris.").
   * Server formats the response into an MCP message (as in the example above).
5. **Server sends SSE event:**
   * Server sends the MCP message as an SSE event to the client:

     ```
     event: message
     data: {"type": "llm_response", "data": {"text": "The capital of France is Paris.", "metadata": {"model": "gemini-pro", "timestamp": "2024-10-27T10:00:00Z"}}}
     ```
6. **Client receives and processes SSE event:**
   * Client receives the SSE event, parses the JSON in the `data` field, and updates the UI with the LLM's response ("The capital of France is Paris.").

**Code Snippets (Illustrative - Python/Flask)**

**Server (Python/Flask):**

```python
from flask import Flask, Response, request, jsonify
import time
import json

# Assuming you have the Gemini API set up:
# from google.generativeai import GenerativeModel
# model = GenerativeModel('gemini-pro')

app = Flask(__name__)

clients = {}  # Store connected clients


def generate_llm_response(query):
    """Simulates a Gemini LLM response."""
    # In a real implementation, you would call the Gemini API here:
    # response = model.generate_content(query)
    # return response.text
    if "capital of france" in query.lower():
        return "The capital of France is Paris."
    return "I don't have information on that topic."


def format_sse_message(data):
    """Formats data as an SSE message."""
    return f"event: message\ndata: {json.dumps(data)}\n\n"


@app.route('/init', methods=['POST'])
def init():
    # Handle client initialization (e.g., authentication).
    # For simplicity, we just return a success message.
    return jsonify({"status": "success", "message": "Session initialized"})


@app.route('/events')
def events():
    """Handles SSE connections."""
    # Read the client ID while the request context is still active.
    # (Browser EventSource cannot set custom headers, so a query parameter
    # such as ?client_id=... is a common alternative.)
    client_id = request.headers.get('X-Client-ID') or request.args.get('client_id')

    def event_stream():
        clients[client_id] = True  # Track connected clients
        try:
            while True:
                time.sleep(15)
                yield ": keep-alive\n\n"  # SSE comment line as a heartbeat to prevent timeouts
        except GeneratorExit:
            del clients[client_id]  # Remove client when disconnected

    return Response(event_stream(), mimetype="text/event-stream")


@app.route('/ask', methods=['POST'])
def ask():
    """Handles client requests to the LLM."""
    query = request.json.get('query')
    if not query:
        return jsonify({"status": "error", "message": "Missing query"}), 400

    llm_response = generate_llm_response(query)
    mcp_message = {
        "type": "llm_response",
        "data": {
            "text": llm_response,
            "metadata": {
                "model": "mock-gemini",
                "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
            }
        }
    }

    # Send the SSE event to all connected clients (you might want to target a specific client).
    for client_id in clients:
        sse_message = format_sse_message(mcp_message)
        # In a real implementation, you would need a way to deliver the SSE message
        # to the specific client's open connection, e.g. a per-client queue or a
        # message broker. Here we just log it.
        print(f"Sending to client {client_id}: {sse_message}")

    return jsonify({"status": "success", "message": "Request processed"})


if __name__ == '__main__':
    app.run(debug=True)
```

**Client (JavaScript - Browser):**

```javascript
// Initialize the session
fetch('/init', {
  method: 'POST'
  // Add any necessary headers or body data
})
  .then(response => response.json())
  .then(data => {
    if (data.status === 'success') {
      console.log('Session initialized');
      startSSE();
    } else {
      console.error('Session initialization failed:', data.message);
    }
  });

function startSSE() {
  const eventSource = new EventSource('/events');

  eventSource.onmessage = (event) => {
    const data = JSON.parse(event.data);
    console.log('Received SSE event:', data);

    if (data.type === 'llm_response') {
      const llmText = data.data.text;
      // Update the UI with the LLM's response
      document.getElementById('response').textContent = llmText;
    } else if (data.type === 'status') {
      // Handle status updates
      console.log('Status:', data.data.message);
    }
  };

  eventSource.onerror = (error) => {
    console.error('SSE error:', error);
    eventSource.close();
  };
}

function askQuestion() {
  const question = document.getElementById('question').value;
  fetch('/ask', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: question })
  })
    .then(response => response.json())
    .then(data => {
      if (data.status === 'success') {
        console.log('Question submitted');
      } else {
        console.error('Error submitting question:', data.message);
      }
    });
}

/* Example HTML (for the client side):
<input type="text" id="question" placeholder="Ask a question">
<button onclick="askQuestion()">Ask</button>
<div id="response"></div>
*/
```

**Key Considerations and Improvements**

* **Error handling:** Implement robust error handling on both the client and server. Send error messages as SSE events.
* **Authentication/authorization:** Secure the SSE endpoint. Use authentication tokens or cookies to verify the client's identity.
* **Connection management:** Handle client disconnections gracefully and clean up server-side resources when a client disconnects.
* **Scalability:** For high-traffic applications, consider a message queue (e.g., Redis, RabbitMQ) to distribute SSE events across multiple server instances. This decouples LLM processing from SSE event delivery.
* **Rate limiting:** Implement rate limiting to prevent abuse of the Gemini LLM API.
* **Context management:** Maintain conversation history or other relevant context for each client so the LLM can give more relevant responses. Consider a database or in-memory store for this.
* **Streaming responses:** For long-running LLM tasks, stream the response back to the client as a series of SSE events for a better user experience. Gemini supports streaming responses.
* **MCP design:** Design the message format to be flexible and extensible, and consider a schema validation library to ensure messages are valid.
* **Client ID:** Generate a unique client ID for each SSE connection so the server can target specific clients. Browser `EventSource` connections cannot set custom headers, so pass the ID as a query parameter or cookie; non-browser clients can use a header such as `X-Client-ID`.
* **Heartbeats:** Send periodic heartbeat messages from the server to keep the SSE connection from timing out.
* **Reconnection:** Implement automatic reconnection logic on the client in case the SSE connection is lost.

**Benefits of this Pattern**

* **Real-time updates:** SSE pushes updates from the Gemini LLM to the client as they happen.
* **Unidirectional communication:** SSE is well suited to scenarios where the server primarily pushes data to the client.
* **Lightweight:** SSE is a lighter-weight protocol than WebSockets.
* **Scalability:** With the right architecture (e.g., message queues), the pattern can serve a large number of concurrent clients.
* **Structured communication:** MCP provides a structured way to exchange messages between client and server.

**When to Use This Pattern**

* Applications that need real-time updates from an LLM (e.g., chatbots, live dashboards).
* Scenarios where the server must push data to the client without constant polling.
* Applications that need a structured way to communicate between client and server.

This explanation and the code snippets should give you a solid foundation for building SSE-based MCP clients and servers with the Gemini LLM. Adapt the code to your specific needs and environment.

Scrapeless MCP Server

A Model Context Protocol server implementation that enables AI assistants like Claude to perform Google searches and retrieve web data directly through natural-language requests.

RAG Documentation MCP Server

Provides tools for retrieving and processing documentation through vector search, enabling AI assistants to augment their responses with relevant documentation context.

Black-Scholes MCP Server

Enables calculation of European option prices and Greeks (like Delta, Vega, Theta) using the Black-Scholes model through a Model Context Protocol implementation.
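
For reference, the closed-form quantities such a server computes look roughly like the sketch below (standard Black-Scholes for a non-dividend-paying European call; the function names are illustrative, not the server's actual tool names):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S: float, K: float, T: float, r: float, sigma: float) -> tuple[float, float]:
    """European call price and Delta under Black-Scholes (no dividends)."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    price = S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
    delta = norm_cdf(d1)  # sensitivity of the price to the spot
    return price, delta

# Example: spot 100, strike 100, 1 year to expiry, 5% rate, 20% vol
print(bs_call(100, 100, 1.0, 0.05, 0.20))
```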

Wireshark MCP

A Model Context Protocol server that integrates Wireshark's network analysis capabilities with AI systems like Claude, allowing direct analysis of network packet data without manual copying.

Google Images Search MCP

MCP server for searching images with Google

SSH MCP Server

Enables SSH operations including connecting to remote servers, executing commands, and transferring files between local and remote systems. Supports multiple SSH connections with both password and private key authentication methods.
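
The operations described above are the usual SSH connect/execute primitives; a minimal sketch using the paramiko library (an assumption for illustration, not this server's actual implementation or tool names) might look like this:

```python
import paramiko

def run_remote_command(host: str, user: str, command: str,
                       password: str | None = None,
                       key_filename: str | None = None) -> str:
    """Connect over SSH (password or private-key auth) and run a single command."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # demo only; verify host keys in production
    client.connect(host, username=user, password=password, key_filename=key_filename)
    try:
        _stdin, stdout, _stderr = client.exec_command(command)
        return stdout.read().decode()
    finally:
        client.close()

# Example with a hypothetical host and key path:
# print(run_remote_command("example.com", "deploy", "uname -a",
#                          key_filename="/home/deploy/.ssh/id_ed25519"))
```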

Financial Analysis MCP Server

An MCP server to power up your LLM with financial analysis capabilities.

MCP-openproject

Puch AI MCP Starter

A starter template for creating MCP servers compatible with Puch AI, featuring built-in tools for job searching and analysis, plus basic image processing capabilities. Includes authentication and deployment guidance for extending Puch AI with custom tools.

AnalyticDB for MySQL MCP Server

Mirror of

Time Tools MCP Server

A Model Context Protocol server for time manipulation tasks, enabling AI models to get the current date/time and calculate duration between timestamps.
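
The two operations described (current date/time and duration between timestamps) boil down to something like this sketch in plain Python (illustrative only; the server's actual tool names and formats may differ):

```python
from datetime import datetime, timezone

def now_iso() -> str:
    """Current date/time as an ISO 8601 string in UTC."""
    return datetime.now(timezone.utc).isoformat()

def duration_between(start_iso: str, end_iso: str) -> float:
    """Duration in seconds between two ISO 8601 timestamps."""
    start = datetime.fromisoformat(start_iso)
    end = datetime.fromisoformat(end_iso)
    return (end - start).total_seconds()

print(now_iso())
print(duration_between("2024-10-27T10:00:00+00:00", "2024-10-27T12:30:00+00:00"))  # 9000.0
```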

MCP Gemini Server

Mirror of

Jupyter MCP Server

Enables interaction with Jupyter notebooks through the Model Context Protocol, supporting code execution and markdown insertion within JupyterLab environments.

Logstash MCP Server

A Model Context Protocol server that provides comprehensive tools for monitoring and identifying performance bottlenecks in Logstash instances through an interactive web UI and JSON-RPC interface.

MarketAuxMcpServer

An MCP server implementation that integrates the Marketaux API, providing search based on entities, countries, industries, symbols, etc.

PyMCP

Intended primarily as a template for developing MCP servers with FastMCP in Python, PyMCP is somewhat inspired by the official "everything" MCP server in TypeScript.

MCP MariaDB Server

Enables AI assistants to interact with MariaDB databases through standard SQL operations and advanced vector/embedding-based search. Supports database management, schema inspection, and semantic document storage and retrieval with multiple embedding providers.

mcp_command-line_tool

Interactive Feedback MCP

A Model Context Protocol server that enables AI assistants to request user feedback at critical points during interactions, improving communication and reducing unnecessary tool calls.