Discover Awesome MCP Servers
Extend your agent with 28,410 capabilities via MCP servers.
- All (28,410)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)
Date Operations MCP Server
Provides UK-centric date calculations including bank holiday integration, sprint planning tools, and specialized Asana workflow automation. It enables users to calculate working days, track upcoming holidays, and manage project schedules specifically within the Europe/London timezone.
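The working-day arithmetic such a server performs can be sketched in a few lines. This is a minimal illustration only, assuming a hardcoded sample of England & Wales bank holidays; the real server presumably pulls live holiday data rather than a static list.

```python
from datetime import date, timedelta

# Illustrative subset of England & Wales bank holidays for 2025;
# a real implementation would fetch the official feed instead.
BANK_HOLIDAYS = {
    date(2025, 1, 1),    # New Year's Day
    date(2025, 4, 18),   # Good Friday
    date(2025, 4, 21),   # Easter Monday
    date(2025, 5, 5),    # Early May bank holiday
    date(2025, 12, 25),  # Christmas Day
    date(2025, 12, 26),  # Boxing Day
}

def is_working_day(d: date) -> bool:
    """A working day is a weekday that is not a bank holiday."""
    return d.weekday() < 5 and d not in BANK_HOLIDAYS

def working_days_between(start: date, end: date) -> int:
    """Count working days in the half-open range [start, end)."""
    days = 0
    d = start
    while d < end:
        if is_working_day(d):
            days += 1
        d += timedelta(days=1)
    return days

# Mon 5 May 2025 is a bank holiday, so that Mon-Fri week
# contains only 4 working days.
print(working_days_between(date(2025, 5, 5), date(2025, 5, 10)))  # → 4
```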
Excalidraw MCP App Server
Enables users to create and interact with hand-drawn sketches and architecture diagrams directly within chat interfaces using Excalidraw. It leverages the Model Context Protocol to provide interactive HTML visualizations with smooth viewport control and fullscreen editing capabilities.
Chrome DevTools MCP
Enables AI assistants to control and inspect a live Chrome browser for automated web debugging, performance analysis, and Lighthouse audits. It allows agents to capture screenshots, monitor network requests, and measure Core Web Vitals using plain-English prompts.
MCP (Model Context Protocol) minimal Kotlin client/server sample
Spice MCP
Enables querying and analyzing blockchain data from Dune Analytics with Polars-optimized workflows, including schema discovery, Sui package exploration, and query management through natural language.
MCP Python REPL Server
Provides interactive Python REPL capabilities with persistent sessions, virtual environment support, package management via uv, and development tools like testing, linting, and code formatting for Python projects.
ChromaDB MCP Server
An MCP server that exposes ChromaDB vector database operations, enabling AI assistants to perform collection management and semantic document searches. It supports HTTP, persistent, and in-memory connection modes along with various embedding providers including OpenAI and HuggingFace.
gh-self-reviewer
An MCP server tool for self-reviewing GitHub pull requests.
Virtual Traveling Bot
An MCP server that creates a virtual travel environment on Google Maps, letting users guide an avatar on journeys with photo reports and social media (SNS) integration.
Thunderbird MCP
Provides a Model Context Protocol interface for Mozilla Thunderbird, allowing AI assistants to manage emails, filters, calendars, and contacts. It exposes 24 tools for tasks like searching messages, drafting replies, and organizing folders through a local bridge.
Twilio Microvisor MCP Server
An MCP server that enables communication with Twilio's Microvisor API, allowing interaction with IoT devices managed by Twilio's Microvisor through natural language.
Xero MCP Server
Enables interaction with the Xero Accounting API to manage contacts, invoices, payments, accounts, and financial reports. It provides a suite of tools for natural language access to accounting records and business performance data.
Notes MCP Server
Enables creating, managing, and searching Markdown notes with support for tags, timestamps, and full-text search. Includes AI prompts for analyzing and summarizing notes.
x-ai-mcp
An MCP server that integrates X (Twitter) API access with Grok-powered intelligence for real-time social media analysis and account management. It provides tools for reading and writing tweets, managing direct messages, and generating AI-powered topic summaries or daily briefings.
Project MCP
Intent-based project documentation server that automatically maps natural language queries to the right sources (plans, todos, roadmap, docs) and provides comprehensive task management with dependency tracking, backlog promotion, and archival workflows.
File System MCP Server
Enables safe file system operations including reading, writing, updating, and deleting files with built-in security safeguards, automatic backups, and comprehensive error handling. Provides directory listing, file metadata extraction, and protects against operations on system-critical paths.
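The "protects against operations on system-critical paths" safeguard can be sketched as a path guard that resolves symlinks and `..` segments before checking prefixes. A minimal sketch under assumed rules (a sandbox root plus an illustrative deny-list; the actual server's policy is not documented here):

```python
from pathlib import Path

# Illustrative deny-list of system-critical prefixes; a real server
# would likely pair this with an allow-listed sandbox root.
PROTECTED_PREFIXES = [Path("/etc"), Path("/usr"), Path("/bin"), Path("/boot")]

def is_safe_path(target: str, sandbox_root: str) -> bool:
    """Resolve the path (collapsing symlinks and '..') and require that
    it stays inside the sandbox and touches no protected prefix."""
    resolved = Path(sandbox_root, target).resolve()
    root = Path(sandbox_root).resolve()
    if not resolved.is_relative_to(root):  # blocks ../ traversal
        return False
    return not any(resolved.is_relative_to(p) for p in PROTECTED_PREFIXES)

print(is_safe_path("notes/todo.md", "/tmp/sandbox"))     # → True
print(is_safe_path("../../etc/passwd", "/tmp/sandbox"))  # → False
```

Resolving before checking is the important design choice: a naive string-prefix test would pass `../../etc/passwd` because the raw string starts inside the sandbox.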
Farm OS MCP Server
Enables management and monitoring of farm operations including field and crop tracking, livestock monitoring, equipment management, and sensor readings through a Model Context Protocol interface built with FastMCP.
Ant Design MCP Server
Provides AI assistants with comprehensive Ant Design component documentation, examples, API references, and best practices. Supports multiple versions and enables natural language queries for React UI component development.
ComplianceCow MCP Server
Enables AI agents to interact with the ComplianceCow platform to retrieve compliance insights, dashboard data, and auditable evidence through a Compliance Graph. It also supports automated remediation actions such as fixing policies and creating tickets in external tools.
interactive-mcp
SCP Local MCP Server
Enables secure access to customer e-commerce data (orders, loyalty points, offers, preferences) through the Shopper Context Protocol with OAuth 2.0 authentication. Supports discovering and connecting to SCP-enabled merchants for personalized shopping assistance.
Public APIs MCP
Enables semantic search and discovery of free public APIs from an extensive catalog. Provides embedding-based search over API names and descriptions, plus detailed API information retrieval.
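Embedding-based search of the kind described ranks catalog entries by cosine similarity between a query vector and each entry's vector. A toy sketch with hand-made 3-dimensional "embeddings" (a real catalog would embed API names and descriptions with an actual embedding model; the catalog entries below are invented for illustration):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy vectors standing in for learned embeddings.
catalog = {
    "Open-Meteo (weather forecasts)": [0.9, 0.1, 0.0],
    "CoinGecko (crypto prices)":      [0.0, 0.8, 0.2],
    "REST Countries (country data)":  [0.1, 0.1, 0.9],
}

def search(query_vec, k=1):
    """Return the k catalog entries most similar to the query vector."""
    ranked = sorted(catalog, key=lambda name: cosine(query_vec, catalog[name]),
                    reverse=True)
    return ranked[:k]

print(search([1.0, 0.0, 0.1]))  # → ['Open-Meteo (weather forecasts)']
```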
projectx-mcp
Enables users to manage time entries and log hours on the ProjectX platform using natural language. It provides tools for creating, viewing, and deleting entries, as well as retrieving a list of available projects.
Claude Code MCP Server
A server that allows LLMs to run Claude Code with all permissions bypassed automatically, enabling code execution and file editing without permission interruptions.
MCP Reminder
An MCP server for managing alarms and todo lists with support for natural language time parsing and persistent data storage. It enables AI assistants to set reminders, track tasks, and provide active notifications for upcoming events.
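Natural language time parsing, at its simplest, maps relative phrases onto a `timedelta`. A minimal sketch covering only the "in N units" form (a real reminder server would handle many more patterns: absolute dates, weekday names, and so on):

```python
import re
from datetime import datetime, timedelta

UNITS = {"minute": "minutes", "hour": "hours", "day": "days"}

def parse_relative(text: str, now: datetime):
    """Parse phrases like 'in 5 minutes' or 'in 2 hours' relative to `now`;
    returns None when no relative phrase is found."""
    m = re.search(r"in\s+(\d+)\s+(minute|hour|day)s?", text, re.IGNORECASE)
    if not m:
        return None
    amount, unit = int(m.group(1)), m.group(2).lower()
    return now + timedelta(**{UNITS[unit]: amount})

now = datetime(2025, 1, 1, 12, 0)
print(parse_relative("remind me in 90 minutes", now))  # → 2025-01-01 13:30:00
```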
Finance MCP
Enables financial research and analysis through AI agents that combine web search, content crawling, entity extraction, and deep research workflows. Supports extracting stock/fund entities with security codes and conducting structured financial investigations.
SQL Server MCP Service
A secure Model Context Protocol service that enables executing SQL Server queries with built-in protection against SQL injection and destructive operations.
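Protection of this kind usually combines two measures: reject statements that are not read-only, and bind all user values as parameters so they can never be interpreted as SQL. A sketch of that pattern, using `sqlite3` purely as a stand-in for SQL Server (the service's actual driver and rules are not documented here):

```python
import re
import sqlite3  # stands in for a SQL Server connection, for illustration

DESTRUCTIVE = re.compile(r"^\s*(drop|delete|truncate|alter|update|insert)\b",
                         re.IGNORECASE)

def safe_query(conn, sql: str, params: tuple = ()):
    """Reject destructive or non-SELECT statements, and always pass user
    values as bound parameters rather than splicing them into the SQL."""
    if DESTRUCTIVE.match(sql):
        raise PermissionError("destructive statements are blocked")
    if not re.match(r"^\s*select\b", sql, re.IGNORECASE):
        raise PermissionError("only SELECT statements are allowed")
    return conn.execute(sql, params).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice'), ('bob')")

print(safe_query(conn, "SELECT name FROM users WHERE name = ?", ("alice",)))  # → [('alice',)]
```

Parameter binding does the real injection defense; the statement allowlist guards against destructive operations even if a prompt smuggles one in.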
MCP Server Demo in Python
A basic demonstration of an MCP-style server in Python that uses Server-Sent Events (SSE) for transport over a network. It is a simplified example to illustrate the core concepts; production use would require hardened error handling, security, and more robust data serialization.

```python
import asyncio
import json
import uuid

from aiohttp import web

# In-memory model registry (replace with a database in a real application)
models = {}


async def handle_request(request):
    """Handle incoming MCP requests by dispatching on the "action" field."""
    try:
        data = await request.json()
        action = data.get("action")
        if action == "register_model":
            return await register_model(data)
        elif action == "predict":
            return await predict(data)
        return web.json_response({"error": "Invalid action"}, status=400)
    except json.JSONDecodeError:
        return web.json_response({"error": "Invalid JSON"}, status=400)
    except Exception as e:
        print(f"Error processing request: {e}")  # log the error
        return web.json_response({"error": "Internal server error"}, status=500)


async def register_model(data):
    """Register a new model with the server."""
    model_name = data.get("model_name")
    model_metadata = data.get("metadata", {})  # metadata is optional
    if not model_name:
        return web.json_response({"error": "Model name is required"}, status=400)
    model_id = str(uuid.uuid4())  # unique ID for the model
    models[model_id] = {"name": model_name, "metadata": model_metadata, "status": "ready"}
    print(f"Registered model: {model_name} with ID: {model_id}")
    return web.json_response(
        {"model_id": model_id, "message": "Model registered successfully"}, status=201
    )


async def predict(data):
    """Handle prediction requests."""
    model_id = data.get("model_id")
    input_data = data.get("input_data")
    if not model_id:
        return web.json_response({"error": "Model ID is required"}, status=400)
    if not input_data:
        return web.json_response({"error": "Input data is required"}, status=400)
    if model_id not in models:
        return web.json_response({"error": "Model not found"}, status=404)
    # Simulate model prediction (replace with actual model inference)
    await asyncio.sleep(0.1)  # simulate some processing time
    prediction_result = f"Prediction for model {model_id} with input: {input_data}"
    return web.json_response({"model_id": model_id, "prediction": prediction_result})


async def sse_handler(request):
    """Handle SSE connections and stream model status events."""
    response = web.StreamResponse(
        status=200,
        headers={
            "Content-Type": "text/event-stream",
            "Cache-Control": "no-cache",
            "Connection": "keep-alive",
        },
    )
    await response.prepare(request)
    try:
        while True:
            # Simulate sending model status updates (replace with real updates)
            for model_id, model_data in models.items():
                event_data = {
                    "model_id": model_id,
                    "status": model_data["status"],
                    "name": model_data["name"],
                }
                # SSE frames are "data: <payload>\n\n"
                await response.write(f"data: {json.dumps(event_data)}\n\n".encode("utf-8"))
            await asyncio.sleep(5)  # send updates every 5 seconds
    except asyncio.CancelledError:
        print("SSE connection closed")
    finally:
        await response.write_eof()
    return response


async def health_check(request):
    """Simple health-check endpoint."""
    return web.json_response({"status": "ok"})


async def main():
    """Main application setup."""
    app = web.Application()
    app.add_routes([
        web.post("/mcp", handle_request),  # MCP endpoint
        web.get("/events", sse_handler),   # SSE endpoint for model status
        web.get("/health", health_check),  # health check
    ])
    runner = web.AppRunner(app)
    await runner.setup()
    site = web.TCPSite(runner, "localhost", 8080)  # adjust host/port as needed
    await site.start()
    print("Server started at http://localhost:8080")
    try:
        await asyncio.Future()  # run forever
    except asyncio.CancelledError:
        print("Server shutting down...")
    finally:
        await runner.cleanup()


if __name__ == "__main__":
    asyncio.run(main())
```

Key points:

- **Asynchronous via `aiohttp`:** asynchronous request handling lets the server serve many concurrent requests efficiently.
- **SSE (Server-Sent Events):** `sse_handler` streams model status updates to connected clients, with `Content-Type: text/event-stream`, `Cache-Control: no-cache` to prevent caching of the event stream, and `Connection: keep-alive`. (The original draft also called `response.drain()` after each write; in aiohttp 3.x, `write()` itself awaits the flush.)
- **Dispatch structure:** `handle_request` parses the incoming JSON and routes on the `action` field; this is the core of the protocol handling.
- **Model registry:** a simple in-memory `models` dict stores model information; a real application would use a database (e.g., PostgreSQL, MongoDB).
- **UUID model IDs:** `uuid.uuid4()` generates a unique ID per registered model.
- **Error handling:** basic coverage for JSON decoding errors, missing parameters, and unknown models; errors are logged to the console, and responses carry appropriate HTTP status codes.
- **Health check:** a simple `/health` endpoint for monitoring.
- **Simulated prediction:** `predict` stands in for real inference with an `asyncio.sleep()` call; replace it with actual model inference.
- **Graceful shutdown:** `asyncio.CancelledError` is caught so the runner can clean up.

How to run:

1. Save the code as a Python file (e.g., `mcp_server.py`).
2. Install the dependency: `pip install aiohttp`
3. Start the server: `python mcp_server.py`

Example usage (client side):

```bash
# Register a model
curl -X POST -H "Content-Type: application/json" \
  -d '{"action": "register_model", "model_name": "MyModel", "metadata": {"version": "1.0"}}' \
  http://localhost:8080/mcp

# Make a prediction (replace YOUR_MODEL_ID with the ID returned at registration)
curl -X POST -H "Content-Type: application/json" \
  -d '{"action": "predict", "model_id": "YOUR_MODEL_ID", "input_data": {"feature1": 10, "feature2": 20}}' \
  http://localhost:8080/mcp

# Follow the SSE stream (-N disables curl's output buffering);
# model status updates print every 5 seconds
curl -N http://localhost:8080/events
```

Important considerations for production:

- **Security:** add authentication and authorization; serve over HTTPS.
- **Model management:** use a robust mechanism for loading, unloading, and serving models, e.g., TensorFlow Serving, TorchServe, or ONNX Runtime.
- **Scalability:** put a load balancer in front of multiple instances; consider a message queue (RabbitMQ, Kafka) for asynchronous task processing.
- **Monitoring and logging:** track server performance and surface issues.
- **Data validation:** validate incoming payloads to prevent errors and security vulnerabilities.
- **Serialization:** choose an efficient format (Protocol Buffers, Apache Arrow) for large data transfers.
- **Persistence:** use a proper database for model metadata and other persistent data.
- **Background tasks:** offload long-running work (model loading, prediction) with `asyncio.create_task()` or a task queue.
- **SSE reconnection:** clients should implement reconnection logic in case the stream drops.
Live Marketing Data MCP
Connects AI assistants to live Meta Ads, GA4, and Google Search Console data. Runs 100% locally, with credentials machine-locked and encrypted. Supports Claude Desktop, Cursor, Windsurf, Cline, and more.
ExpoSnap
Enables AI assistants to view and analyze screenshots from React Native/Expo applications for AI-powered mobile UI development. Integrates with Claude, Cursor, VS Code and other MCP-compatible editors.