Discover Awesome MCP Servers
Extend your agent with 23,645 capabilities via MCP servers.
- All (23,645)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)
MCP Servers Hub
Mirror of
PubMed MCP Server
A bridge connecting AI agents to NCBI's PubMed database through the Model Context Protocol, enabling seamless searching, retrieval, and analysis of biomedical literature and data.
MCP Multi-Language Sandbox
Enables local, Docker-isolated code execution across six programming languages including Python, Rust, and TypeScript. It features pre-warmed container pooling, persistent sessions, and built-in support for machine learning libraries.
Claude MCP x Google Docs
A Model Context Protocol server that enables AI assistants like Claude to read from, append to, and format text in Google Documents programmatically.
MCP Epic Free Games
Provides tools to retrieve information about current and upcoming free games on the Epic Games Store. It allows users to access game details including titles, descriptions, and claim URLs through Model Context Protocol clients.
Memento Protocol Enhanced
An enhanced memory management system that wraps memento-mcp with sophisticated features including protocol enforcement, quality scoring, hybrid search strategies, and synthesis reports. Enables intelligent memory storage, retrieval, and analysis with automatic archival and confidence tracking.
MCP Financeiro
A multi-tenant SaaS API that integrates Hinova SGA ERP and Atomos chat platforms to automate billing and financial workflows. It enables AI agents to process vehicle information and manage customer payment requests via webhooks.
Weather MCP Server
Provides real-time weather information and forecasts, connecting AI assistants with live weather data for current conditions and multi-day forecasts for any location worldwide.
Example MCP Server
A simple demonstration MCP server that provides a basic 'say_hello' tool for greeting users by name, serving as a template for building MCP servers with TypeScript.
Moling
MoLing is an MCP server based on computer and browser use. It is a locally deployed office AI assistant with no dependencies.
MCP Framework
A TypeScript framework for building Model Context Protocol (MCP) servers, with automatic discovery and loading of tools, resources, and prompts.
Filesystem MCP Server
Enables comprehensive filesystem operations including reading/writing files, directory management, file searching, editing with diff preview, compression, hashing, and merging with dynamic directory access control.
Cosmos DB MCP Server by CData
Model Context Provider (MCP) Server
Facilitates enhanced interaction with large language models (LLMs) by providing intelligent context management, tool integration, and coordination of AI models from multiple providers for efficient AI-driven workflows.
NewRelic MCP Server
A comprehensive MCP server providing over 26 tools for querying, monitoring, and analyzing NewRelic data through NRQL queries and entity management. It enables interaction with NewRelic's NerdGraph API for managing alerts, logs, and incidents directly within Claude Code sessions.
Cursor10x MCP
The Cursor10x Memory System creates a persistent memory layer for AI assistants (specifically Claude), enabling them to retain and recall short-term, long-term, and episodic memories autonomously.
Wealthfolio MCP Server
Enables AI-powered portfolio analysis for Wealthfolio, allowing Claude to query and analyze investment holdings, asset allocation, real estate properties, and execute transactions through natural language.
Customer Registration MCP Server
Enables creating and managing customer records via an external API with Bearer token authentication, supporting required fields (name, email, phone) and extensive optional data including addresses, UTM parameters, and tags.
TestRail MCP Server
Enables comprehensive TestRail test management integration with support for projects, test cases, runs, results, advanced reporting, analytics, and AutoSpectra test automation framework synchronization.
aiohttp-mcp
Here are some tools and approaches for building Model Context Protocol (MCP) servers on top of `aiohttp` in Python:

**Core Libraries & Concepts**

* **`aiohttp`:** The foundation. You'll use `aiohttp.web` to create the HTTP server, define routes, and handle requests. Key components:
  * `aiohttp.web.Application`: The central application object.
  * `aiohttp.web.RouteTableDef`: For defining routes (e.g., `/model/load`, `/model/predict`).
  * `aiohttp.web.Request`: Represents an incoming HTTP request. You'll access data from the request body (usually JSON) and headers.
  * `aiohttp.web.Response`: Represents the HTTP response you send back to the client. You'll set the status code, headers (e.g., `Content-Type: application/json`), and the response body (usually JSON).
  * `aiohttp.web.json_response`: A helper function to create a JSON response.
* **`asyncio`:** `aiohttp` is built on `asyncio`, so you'll need to understand asynchronous programming. Use `async def` to define coroutines (asynchronous functions) that handle requests, and `await` to call other asynchronous functions (e.g., loading a model, making a prediction).
* **JSON Serialization/Deserialization:** MCP typically uses JSON for request and response bodies. Use the built-in `json` module (`json.loads` to parse JSON from the request body, `json.dumps` to serialize data for the response). `aiohttp.web.json_response` handles JSON serialization for you.

**Libraries for Structure and Validation**

* **`marshmallow`:** A popular library for serializing and deserializing complex data structures (like Python objects) to and from JSON. It also provides validation capabilities, which is *highly* recommended for MCP servers to ensure data integrity.
  * Define schemas using `marshmallow.Schema` to specify the structure and data types of your request and response payloads.
  * Use `schema.load(request_data)` to validate and deserialize the request data into Python objects. This raises a `marshmallow.exceptions.ValidationError` if the data doesn't conform to the schema.
  * Use `schema.dump(response_data)` to serialize Python objects into JSON for the response.
* **`pydantic`:** Another popular data validation and settings management library. It uses Python type hints for validation and serialization. Similar to `marshmallow` in purpose, but often considered more concise and Pythonic. It integrates well with FastAPI (see below).
* **`attrs`:** A library for defining classes with attributes. It can be used in conjunction with `marshmallow` or `pydantic` to define the data structures that your schemas will serialize/deserialize.

**Frameworks (Optional, but Recommended for Larger Projects)**

* **`FastAPI`:** A modern, high-performance web framework for building APIs with Python. It's built on `starlette` (which is built on `asyncio`) and integrates seamlessly with `pydantic` for data validation and serialization. FastAPI provides automatic API documentation (using OpenAPI/Swagger), handles a lot of boilerplate for you, and is generally a great choice for building robust MCP servers.
* **`Sanic`:** Another asynchronous web framework known for its speed. It's a good alternative to `aiohttp` if you're looking for maximum performance, though it may have a slightly steeper learning curve than FastAPI.
* **`aiohttp-apispec`:** A library that integrates `aiohttp` with `apispec` (used to generate OpenAPI/Swagger documentation). Useful if you want to document your MCP API using the OpenAPI standard.
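To make the validation discussion concrete before the full server example below, here is a minimal pydantic sketch of the same request payload the example validates with marshmallow. It assumes pydantic v2 (`model_validate`); the `PredictionRequest` model and `input_data` field are illustrative, mirroring the example that follows, not part of any real `aiohttp-mcp` API:

```python
from pydantic import BaseModel, ValidationError

# Hypothetical request payload for a /model/predict endpoint
class PredictionRequest(BaseModel):
    input_data: list[float]

try:
    req = PredictionRequest.model_validate({"input_data": [1.0, 2.5, 3.0]})
    print(req.input_data)  # [1.0, 2.5, 3.0]
    # A malformed payload raises ValidationError, analogous to
    # marshmallow.exceptions.ValidationError in the example below
    PredictionRequest.model_validate({"input_data": "oops"})
except ValidationError as err:
    print(err.errors())  # structured error details, like marshmallow's err.messages
```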
**Model Serving Libraries (For Integrating with Machine Learning Models)**

* **Triton Inference Server:** A high-performance inference server from NVIDIA that supports a wide range of models and frameworks (TensorFlow, PyTorch, ONNX, etc.). You can use `aiohttp` to build a front end that interacts with Triton. A good choice if you need to serve models at scale.
* **TorchServe:** A model serving framework from PyTorch. Similar to Triton, it provides a way to deploy and serve PyTorch models.
* **TensorFlow Serving:** A model serving framework from TensorFlow.
* **Custom Model Loading/Inference:** For simpler needs, you can load your models directly into your `aiohttp` application and perform inference within the request handlers. This is suitable for smaller models and lower traffic. Libraries like `scikit-learn`, `transformers`, `torch`, or `tensorflow` would be used directly for model loading and prediction.

**Example (Illustrative, using `aiohttp` and `marshmallow`)**

```python
import asyncio

from aiohttp import web
import marshmallow

# Define a schema for the request data
class PredictionRequestSchema(marshmallow.Schema):
    input_data = marshmallow.fields.List(marshmallow.fields.Float(), required=True)

# Define a schema for the response data
class PredictionResponseSchema(marshmallow.Schema):
    prediction = marshmallow.fields.Float(required=True)

async def load_model(app):
    """Load the model into the application state."""
    # Simulate loading a model (replace with actual model loading)
    await asyncio.sleep(1)  # Simulate loading time
    app['model'] = lambda x: sum(x) / len(x)  # Dummy model: average of inputs
    print("Model loaded")

async def predict_handler(request):
    """Handles prediction requests."""
    try:
        data = await request.json()
        schema = PredictionRequestSchema()
        try:
            validated_data = schema.load(data)
        except marshmallow.exceptions.ValidationError as err:
            return web.json_response({"error": err.messages}, status=400)
        input_data = validated_data['input_data']

        # Perform prediction using the model stored in the app state
        prediction = request.app['model'](input_data)

        # Serialize the response
        response_schema = PredictionResponseSchema()
        result = response_schema.dump({"prediction": prediction})
        return web.json_response(result)
    except Exception as e:
        print(f"Error during prediction: {e}")
        return web.json_response({"error": str(e)}, status=500)

async def load_handler(request):
    """Handles model (re)loading requests (optional)."""
    try:
        # Simulate model loading (replace with actual loading logic)
        await load_model(request.app)
        return web.json_response({"message": "Model loaded successfully"})
    except Exception as e:
        print(f"Error loading model: {e}")
        return web.json_response({"error": str(e)}, status=500)

async def create_app():
    app = web.Application()
    app.add_routes([
        web.post('/model/predict', predict_handler),
        web.post('/model/load', load_handler),  # Optional endpoint for (re)loading
    ])
    # Load the model when the application starts
    app.on_startup.append(load_model)
    return app

if __name__ == '__main__':
    # web.run_app accepts the coroutine directly; no manual event loop needed
    web.run_app(create_app(), port=8080)
```

**Key Considerations for MCP Servers**

* **Model Loading:** How will you load your models? From disk? From a model registry? Will you load them on startup or on demand? The `load_model` function in the example is a placeholder.
* **Model Management:** If you have multiple models, how will you manage them? Will you use a model versioning system?
* **Scalability:** How will you scale your server to handle increasing traffic? Consider using a load balancer and running multiple instances of your server.
* **Monitoring:** How will you monitor the performance of your server? Collect metrics like request latency, error rates, and resource usage.
* **Security:** Implement appropriate security measures, such as authentication and authorization, to protect your models and data.
* **Error Handling:** Implement robust error handling to gracefully handle unexpected errors and provide informative error messages to the client.
* **Logging:** Use a logging library (e.g., `logging`) to log important events and errors.
* **Asynchronous Operations:** Ensure that all I/O-bound operations (e.g., reading from disk, making network requests) are performed asynchronously to avoid blocking the event loop.
* **Data Validation:** Always validate input data to prevent errors and security vulnerabilities. Use `marshmallow` or `pydantic` for this.
* **Concurrency:** Be mindful of concurrency issues when accessing shared resources (e.g., the model). Use appropriate locking mechanisms if necessary, but avoid locks as much as possible in asynchronous code.

**In summary:**

1. **Choose a framework:** Start with `aiohttp` directly for simple cases. For more complex projects, `FastAPI` is highly recommended.
2. **Validate data:** Use `marshmallow` or `pydantic` for robust data validation.
3. **Serve models:** Select a model serving library (Triton, TorchServe, TensorFlow Serving) if you need to serve models at scale. Otherwise, load models directly into your application.
4. **Implement MCP endpoints:** Create the necessary endpoints (e.g., `/model/load`, `/model/predict`) to conform to the MCP specification.
5. **Handle errors and logging:** Implement comprehensive error handling and logging.
6. **Plan ahead:** Consider scalability and monitoring from the beginning.

This should give you a solid foundation for building MCP servers on top of `aiohttp`. Adapt the example code and choose the libraries that best fit your specific needs.
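As a usage sketch, the illustrative server above could be exercised with aiohttp's client API. The endpoint path, port, and payload shape simply mirror the example; adjust all of them for a real deployment:

```python
import asyncio

import aiohttp

async def main():
    async with aiohttp.ClientSession() as session:
        # POST a prediction request to the example server started above
        async with session.post(
            "http://localhost:8080/model/predict",
            json={"input_data": [1.0, 2.0, 3.0]},
        ) as resp:
            print(resp.status, await resp.json())  # 200 {'prediction': 2.0}

asyncio.run(main())
```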
Minted MCP Server
Enables interaction with Minted.com to retrieve address book contacts, order history, and delivery information for recent card orders.
hyperspell-mcp
Hyperspell lets you connect AI-powered applications to any data. It's like Plaid, but for unstructured and semi-structured data.
NetBrain MCP
An open-source network operations integration platform that connects large language models with network devices through the Model Context Protocol, allowing AI assistants to perform network configuration, diagnostics, and management tasks.
Bing Flights MCP Server
Enables flight search and price comparison by scraping flight information from Bing Flights. Supports one-way and round-trip searches with customizable passenger counts, cabin classes, and booking details.
defi-yields-mcp
An MCP server for AI agents to explore DeFi yield opportunities, powered by DefiLlama.
Webpage Design Analyzer
An MCP server that analyzes webpage design images using vision models and generates development documentation in Markdown format.
Webex Messaging MCP Server
Enables AI assistants to interact with Cisco Webex messaging through 52 comprehensive tools covering messages, rooms, teams, people management, webhooks, and enterprise features. Supports both personal and enterprise Webex environments with complete API coverage for messaging operations.
TSCodex MCP Images
Enables comprehensive image processing, stock image search from Pexels and Pixabay, and AI image generation using OpenAI DALL-E with support for resizing, format conversion, color extraction, and watermarking.
TrendRadar
An AI-powered news and trend aggregator that tracks real-time hot topics and RSS feeds with personalized filtering and summaries. It enables users to monitor global trends and receive automated reports across multiple platforms including WeChat, Telegram, and Slack.
AI Research MCP Server
Enables real-time tracking of AI/LLM research progress by searching and aggregating content from arXiv, GitHub, Hugging Face, and Papers with Code. Supports intelligent search, automated daily/weekly research summaries, and covers 15+ AI research areas with smart caching.