Discover Awesome MCP Servers

Extend your agent with 13,726 capabilities via MCP servers.

All13,726
mcp-md-vector-search

mcp-md-vector-search

Una implementación ligera de servidor MCP que utiliza PGLite y pgvector para realizar búsquedas de similitud eficientes en documentos Markdown locales utilizando el Protocolo de Contexto de Modelo (MCP).

MCP File System Server

MCP File System Server

A simple Model Context Protocol server that enables AI assistants to interact with local file systems, allowing them to read, write, update, and delete files within a specified project directory.

MCP SERVER

MCP SERVER

The code samples here are not for production use

The code samples here are not for production use

Here are a few ways to translate "Secure agentic tool repo with execution environment" into Spanish, with slightly different nuances: * **Repositorio seguro de herramientas agenticas con entorno de ejecución:** This is a direct and generally good translation. It emphasizes the security of the repository itself. * **Repositorio de herramientas agenticas seguro, con entorno de ejecución:** This version places the emphasis on the tools being secure, rather than the repository. While subtle, it could imply the tools have built-in security features. * **Repositorio de herramientas agenticas con entorno de ejecución seguro:** This emphasizes the security of the execution environment. * **Repositorio seguro para herramientas agenticas, con un entorno de ejecución:** This is a more verbose but potentially clearer option, using "para" to specify the purpose of the repository. Which translation is best depends on the specific context and what aspect of security you want to highlight. If you want to emphasize the security of the *repository itself*, the first option is probably the best. If you want to emphasize the security of the *execution environment*, the third option is best.

GitHub MCP Tools

GitHub MCP Tools

Un programa Multi-Claude para interactuar con las APIs de GitHub a través de Claude Desktop, que permite a los usuarios buscar repositorios, gestionar incidencias, pull requests, la configuración del repositorio, flujos de trabajo y colaboradores.

MCP Framework

MCP Framework

Un framework de TypeScript para construir servidores de Protocolo de Contexto de Modelo (MCP) con descubrimiento y carga automática de herramientas, recursos y prompts.

Model Context Provider (MCP) Server

Model Context Provider (MCP) Server

Facilita una interacción mejorada con modelos de lenguaje grandes (LLMs) al proporcionar una gestión inteligente del contexto, integración de herramientas y coordinación de modelos de IA de múltiples proveedores para flujos de trabajo eficientes impulsados por la IA.

Dev.to MCP Server

Dev.to MCP Server

An MCP server implementation that allows AI assistants to access, search, and interact with Dev.to content, including fetching articles, retrieving user information, and publishing new content.

CCXT MCP Server

CCXT MCP Server

Servidor MCP CCXT de alto rendimiento para la integración de exchanges de criptomonedas.

Hacker News

Hacker News

Un servidor de Protocolo de Contexto de Modelo (MCP) que proporciona herramientas para buscar y obtener información de Hacker News.

Android Project MCP Server

Android Project MCP Server

Un servidor de Protocolo de Contexto de Modelo que permite construir proyectos de Android y ejecutar pruebas directamente en Visual Studio Code a través de extensiones como Cline o Roo Code.

argocd-mcp

argocd-mcp

argocd-mcp

Moling

Moling

MoLing es un servidor MCP basado en el uso de computadoras y navegadores. Es un asistente de IA de oficina implementado localmente y sin dependencias.

Shell Command MCP Server

Shell Command MCP Server

Permite ejecutar comandos de shell dentro de un contenedor Docker seguro a través de la interfaz MCP de Claude, proporcionando herramientas de Kubernetes y un entorno aislado sin acceso al demonio Docker del host.

Code Review MCP Tool for Cursor

Code Review MCP Tool for Cursor

OpenWeatherMap MCP Server

OpenWeatherMap MCP Server

A Model Context Protocol server that provides comprehensive weather data and forecasts through the OpenWeatherMap API, enabling AI assistants to access real-time weather information, forecasts, air quality data, and location services.

Cosmos DB MCP Server by CData

Cosmos DB MCP Server by CData

Cosmos DB MCP Server by CData

Jumpseller API MCP Server

Jumpseller API MCP Server

An MCP Server that provides access to the Jumpseller e-commerce platform API, allowing users to interact with Jumpseller's functionality through natural language commands.

aiohttp-mcp

aiohttp-mcp

Here are some tools and approaches for building Model Context Protocol (MCP) servers on top of `aiohttp` in Python: **Core Libraries & Concepts** * **`aiohttp`:** The foundation. You'll use `aiohttp.web` to create the HTTP server, define routes, and handle requests. Key components: * `aiohttp.web.Application`: The central application object. * `aiohttp.web.RouteTableDef`: For defining routes (e.g., `/model/load`, `/model/predict`). * `aiohttp.web.Request`: Represents an incoming HTTP request. You'll access data from the request body (usually JSON) and headers. * `aiohttp.web.Response`: Represents the HTTP response you send back to the client. You'll set the status code, headers (e.g., `Content-Type: application/json`), and the response body (usually JSON). * `aiohttp.web.json_response`: A helper function to create a JSON response. * **`asyncio`:** `aiohttp` is built on `asyncio`, so you'll need to understand asynchronous programming. Use `async def` to define coroutines (asynchronous functions) that handle requests. Use `await` to call other asynchronous functions (e.g., loading a model, making a prediction). * **JSON Serialization/Deserialization:** MCP typically uses JSON for request and response bodies. Use the built-in `json` module (`json.loads` to parse JSON from the request body, `json.dumps` to serialize data into JSON for the response). `aiohttp.web.json_response` handles JSON serialization for you. **Libraries for Structure and Validation** * **`marshmallow`:** A popular library for serializing and deserializing complex data structures (like Python objects) to and from JSON. It also provides validation capabilities. This is *highly* recommended for MCP servers to ensure data integrity. * Define schemas using `marshmallow.Schema` to specify the structure and data types of your request and response payloads. * Use `schema.load(request_data)` to validate and deserialize the request data into Python objects. This will raise a `marshmallow.exceptions.ValidationError` if the data doesn't conform to the schema. * Use `schema.dump(response_data)` to serialize Python objects into JSON for the response. * **`pydantic`:** Another popular data validation and settings management library. It uses Python type hints for validation and serialization. Similar to `marshmallow` in its purpose, but often considered more concise and Pythonic. It integrates well with FastAPI (see below). * **`attrs`:** A library for defining classes with attributes. It can be used in conjunction with `marshmallow` or `pydantic` to define the data structures that your schemas will serialize/deserialize. **Frameworks (Optional, but Recommended for Larger Projects)** * **`FastAPI`:** A modern, high-performance web framework for building APIs with Python. It's built on `starlette` (which is built on `asyncio`) and integrates seamlessly with `pydantic` for data validation and serialization. FastAPI provides automatic API documentation (using OpenAPI/Swagger) and is generally a great choice for building robust MCP servers. It handles a lot of the boilerplate for you. * **`Sanic`:** Another asynchronous web framework that's known for its speed. It's a good alternative to `aiohttp` if you're looking for maximum performance. However, it might have a slightly steeper learning curve than FastAPI. * **`aiohttp-apispec`:** A library that integrates `aiohttp` with `apispec` (which is used to generate OpenAPI/Swagger documentation). This is useful if you want to document your MCP API using the OpenAPI standard. **Model Serving Libraries (For Integrating with Machine Learning Models)** * **`Triton Inference Server`:** A high-performance inference server from NVIDIA that supports a wide range of models and frameworks (TensorFlow, PyTorch, ONNX, etc.). You can use `aiohttp` to build a front-end that interacts with Triton. This is a good choice if you need to serve models at scale. * **`TorchServe`:** A model serving framework from PyTorch. Similar to Triton, it provides a way to deploy and serve PyTorch models. * **`TensorFlow Serving`:** A model serving framework from TensorFlow. * **Custom Model Loading/Inference:** If you have simpler model serving needs, you can load your models directly into your `aiohttp` application and perform inference within the request handlers. This is suitable for smaller models and lower traffic. Libraries like `scikit-learn`, `transformers`, `torch`, or `tensorflow` would be used directly for model loading and prediction. **Example (Illustrative, using `aiohttp` and `marshmallow`)** ```python import asyncio import json from aiohttp import web import marshmallow # Define a schema for the request data class PredictionRequestSchema(marshmallow.Schema): input_data = marshmallow.fields.List(marshmallow.fields.Float(), required=True) # Define a schema for the response data class PredictionResponseSchema(marshmallow.Schema): prediction = marshmallow.fields.Float(required=True) # In-memory model (replace with your actual model loading) model = None # Placeholder for your ML model async def load_model(app): """Load the model into the application state.""" global model # Simulate loading a model (replace with actual model loading) await asyncio.sleep(1) # Simulate loading time model = lambda x: sum(x) / len(x) # Dummy model: average of inputs print("Model loaded") app['model'] = model # Store the model in the application state async def predict_handler(request): """Handles prediction requests.""" try: data = await request.json() schema = PredictionRequestSchema() try: validated_data = schema.load(data) except marshmallow.exceptions.ValidationError as err: return web.json_response({"error": err.messages}, status=400) input_data = validated_data['input_data'] # Perform prediction using the loaded model prediction = request.app['model'](input_data) # Access model from app state # Create the response response_data = {"prediction": prediction} response_schema = PredictionResponseSchema() result = response_schema.dump(response_data) # Serialize to JSON return web.json_response(result) except Exception as e: print(f"Error during prediction: {e}") return web.json_response({"error": str(e)}, status=500) async def load_handler(request): """Handles model loading requests (optional).""" try: # Simulate model loading (replace with actual loading logic) await load_model(request.app) return web.json_response({"message": "Model loaded successfully"}) except Exception as e: print(f"Error loading model: {e}") return web.json_response({"error": str(e)}, status=500) async def create_app(): app = web.Application() app.add_routes([ web.post('/model/predict', predict_handler), web.post('/model/load', load_handler), # Optional endpoint for loading ]) # Load the model when the application starts app.on_startup.append(load_model) return app if __name__ == '__main__': loop = asyncio.get_event_loop() app = loop.run_until_complete(create_app()) web.run_app(app, port=8080) ``` **Key Considerations for MCP Servers:** * **Model Loading:** How will you load your models? From disk? From a model registry? Will you load them on startup or on demand? The `load_model` function in the example is a placeholder. * **Model Management:** If you have multiple models, how will you manage them? Will you use a model versioning system? * **Scalability:** How will you scale your server to handle increasing traffic? Consider using a load balancer and running multiple instances of your server. * **Monitoring:** How will you monitor the performance of your server? Collect metrics like request latency, error rates, and resource usage. * **Security:** Implement appropriate security measures, such as authentication and authorization, to protect your models and data. * **Error Handling:** Implement robust error handling to gracefully handle unexpected errors and provide informative error messages to the client. * **Logging:** Use a logging library (e.g., `logging`) to log important events and errors. * **Asynchronous Operations:** Ensure that all I/O-bound operations (e.g., reading from disk, making network requests) are performed asynchronously to avoid blocking the event loop. * **Data Validation:** Always validate the input data to prevent errors and security vulnerabilities. Use `marshmallow` or `pydantic` for this. * **Concurrency:** Be mindful of concurrency issues when accessing shared resources (e.g., the model). Use appropriate locking mechanisms if necessary. However, avoid locks as much as possible in asynchronous code. **In summary:** 1. **Choose a Framework:** Start with `aiohttp` directly for simple cases. For more complex projects, `FastAPI` is highly recommended. 2. **Data Validation:** Use `marshmallow` or `pydantic` for robust data validation. 3. **Model Serving:** Select a model serving library (Triton, TorchServe, TensorFlow Serving) if you need to serve models at scale. Otherwise, load models directly into your application. 4. **Implement MCP Endpoints:** Create the necessary endpoints (e.g., `/model/load`, `/model/predict`) to conform to the MCP specification. 5. **Handle Errors and Logging:** Implement comprehensive error handling and logging. 6. **Consider Scalability and Monitoring:** Plan for scalability and monitoring from the beginning. This comprehensive guide should give you a solid foundation for building MCP servers on top of `aiohttp`. Remember to adapt the example code and choose the libraries that best fit your specific needs.

ADB MCP Server

ADB MCP Server

A Model Context Protocol server that provides Android Debug Bridge functionality for automating Android devices, enabling remote device management, screen operations, app management, file operations, and shell command execution.

Ollama Pydantic Project

Ollama Pydantic Project

Creé un proyecto de ejemplo para un agente Pydantic con un modelo Ollama local con integración del servidor MCP.

Discord MCP Server

Discord MCP Server

A secure server that enables interaction with Discord channels through JWT-authenticated API calls, allowing users to send messages, fetch channel data, search content, and perform moderation actions.

MCPeasy

MCPeasy

A production-grade multi-tenant MCP server that provides different tools and configurations to different clients using API key-based routing.

Remote MCP Server on Cloudflare

Remote MCP Server on Cloudflare

GrabMaps MCP Server

GrabMaps MCP Server

Provides access to GrabMaps services (Places, Maps, Routes) through AWS Location Service using the Model Context Protocol.

MCPServer

MCPServer

Un servidor MCP sencillo para habilitar flujos de trabajo agenticos.

remote-mcp-server

remote-mcp-server

github-mcp-server-test

github-mcp-server-test

Aviation Model Context Protocol

Aviation Model Context Protocol

Integration platform for aviation data sources including weather, NOTAMs, airport information, and flight planning APIs, enabling comprehensive pre-flight preparation and in-flight decision support.

YingDao RPA MCP Server

YingDao RPA MCP Server

A Model Context Protocol server that enables AI platforms to invoke RPA (Robotic Process Automation) capabilities from YingDao, allowing automated execution of repetitive tasks through AI interactions.