Discover Awesome MCP Servers
Extend your agent with 14,392 capabilities via MCP servers.
- All (14,392)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)

Binary Ninja MCP Server
A Model Context Protocol server that lets Large Language Models interact with Binary Ninja for reverse-engineering tasks such as viewing assembly and decompiled code, renaming functions, and adding comments.

BetterMCPFileServer
A redesigned Model Context Protocol server that gives AI models filesystem access through privacy-preserving path aliases, exposed via a streamlined six-function API.
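To make the path-alias idea concrete, here is a minimal sketch, not BetterMCPFileServer's actual code: an alias map hides real filesystem locations from the model while reads are resolved safely. The alias names and the `resolve` helper are hypothetical.

```python
from pathlib import Path

# Hypothetical alias map: the model only ever sees the alias names.
ALIASES = {"projects": Path("/home/user/work"), "notes": Path("/home/user/notes")}

def resolve(alias: str, relative: str) -> Path:
    """Map an aliased path like ('projects', 'app/main.py') to a real path,
    refusing anything that escapes the aliased root."""
    root = ALIASES[alias].resolve()
    target = (root / relative).resolve()
    if not target.is_relative_to(root):  # Python 3.9+
        raise PermissionError("path escapes the aliased root")
    return target

# Example: read a file through its alias without exposing the real root.
# print(resolve("projects", "app/main.py").read_text())
```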

Vibe Worldbuilding MCP
A Model Context Protocol server for building detailed fictional worlds with Claude, featuring structured worldbuilding prompts and automatic image generation through Google's Imagen API.

mcp_repob7b7df37-94c2-48e4-8721-6cc695c23d4c
This is a test repository created by a GitHub MCP Server test script.

Learning Path Generator MCP
Generates personalized learning paths by integrating with YouTube, Google Drive, and Notion to create comprehensive learning experiences based on user goals.
Caiyun Weather MCP Server
A Model Context Protocol (MCP) server for the Caiyun Weather API.

Spreadsheet MCP Server
Provides a Model Context Protocol (MCP) server that lets LLMs access and interact directly with Google Sheets data.

MCP - Model Context Protocol TypeScript SDK
A TypeScript wrapper library for the Model Context Protocol SDK that provides a simplified interface for creating MCP servers with tools, resources, and prompts without needing to work directly with the protocol.

Claude Conversation Logger
Enables intelligent conversation management with 4 AI agents that provide semantic analysis, pattern discovery, automatic documentation, and relationship mapping. Logs and analyzes Claude conversations with 70% token optimization and multi-language support.

MCP Server Template for Cursor IDE
A template and explanation for creating and connecting custom tools to Cursor IDE using the Model Context Protocol, with a focus on cheerful server responses. It provides a good starting point and emphasizes clear communication between your tool and the IDE.

**Conceptual Overview**

* **Model Context Protocol (MCP):** The communication standard Cursor uses to interact with external tools. It is based on JSON-RPC. Your tool acts as a server, and Cursor (the client) sends requests to it.
* **JSON-RPC:** A simple remote procedure call protocol using JSON for data encoding. Cursor sends JSON requests to your tool, and your tool sends back JSON responses.
* **Cheerful Responses:** Instead of just returning data, your tool should provide informative and positive messages to the user through Cursor. This enhances the user experience.

**Template Structure (Python - Flask Example)**

This example uses Python and Flask for simplicity. You can adapt it to other languages and frameworks.

```python
from flask import Flask, request, jsonify
import json
import os
import subprocess  # For running external commands
import logging

app = Flask(__name__)

# Configure logging (optional, but highly recommended)
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')

# --- Configuration ---
TOOL_NAME = "MyAwesomeTool"  # Replace with your tool's name
TOOL_DESCRIPTION = "A tool that does amazing things!"  # Replace with a description
# Example: TOOL_COMMAND = ["python", "/path/to/my_script.py"]
TOOL_COMMAND = ["echo", "Hello from MyAwesomeTool!"]  # Replace with your tool's command
# --- End Configuration ---


def run_command(command, input_str=None):
    """Executes a command and returns the output. Handles potential errors."""
    try:
        process = subprocess.Popen(
            command,
            stdin=subprocess.PIPE if input_str else None,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            text=True  # Important for handling text data
        )
        stdout, stderr = process.communicate(input=input_str)
        if process.returncode != 0:
            logging.error(f"Command failed with code {process.returncode}: {stderr}")
            return None, stderr  # Indicate failure
        return stdout, None  # Indicate success
    except FileNotFoundError:
        logging.error(f"Command not found: {command[0]}")
        return None, f"Error: Command not found: {command[0]}"
    except Exception as e:
        logging.exception("An unexpected error occurred:")
        return None, f"An unexpected error occurred: {str(e)}"


@app.route('/', methods=['POST'])
def handle_request():
    """Handles incoming requests from Cursor."""
    try:
        data = request.get_json()
        logging.info(f"Received request: {data}")
        method = data.get('method')
        params = data.get('params', {})  # Default to empty dict if no params

        if method == 'initialize':
            return handle_initialize()
        elif method == 'execute':
            return handle_execute(params)
        else:
            return create_error_response(f"Method not supported: {method}", code=-32601)  # Method not found
    except json.JSONDecodeError:
        logging.error("Invalid JSON received.")
        return create_error_response("Invalid JSON received.", code=-32700)  # Parse error
    except Exception as e:
        logging.exception("An unexpected error occurred:")
        return create_error_response(f"An unexpected error occurred: {str(e)}", code=-32000)  # Server error


def handle_initialize():
    """Handles the 'initialize' request. Returns tool information."""
    response = {
        "jsonrpc": "2.0",
        "result": {
            "name": TOOL_NAME,
            "description": TOOL_DESCRIPTION,
            "version": "1.0.0",  # Replace with your tool's version
            "capabilities": {
                "execute": True  # Indicates that the tool can execute commands
            }
        },
        "id": 0  # Important: use the ID from the request if available. This is a placeholder.
    }
    logging.info(f"Sending initialize response: {response}")
    return jsonify(response)


def handle_execute(params):
    """Handles the 'execute' request. Executes the tool's command."""
    query = params.get('query', '')  # Get the query from the parameters
    context = params.get('context', {})  # Get the context (e.g., selected text)
    selected_text = context.get('selectedText', '')

    logging.info(f"Executing with query: {query}, selected_text: {selected_text}")

    # --- Example: pass the query and selected text as input to the command ---
    input_data = f"{query}\n{selected_text}"
    stdout, stderr = run_command(TOOL_COMMAND, input_data)

    if stdout:
        # --- Cheerful success response ---
        result = stdout.strip()  # Remove leading/trailing whitespace
        response = {
            "jsonrpc": "2.0",
            "result": {
                "response": f"Great job! {TOOL_NAME} successfully processed your request. Here's the result:\n\n{result}",
                "success": True,
            },
            "id": 1  # Important: use the ID from the request if available. This is a placeholder.
        }
        logging.info(f"Sending execute success response: {response}")
        return jsonify(response)
    else:
        # --- Cheerful error response (even in error, be polite!) ---
        error_message = stderr or "Something went wrong, but I couldn't get the details. Please check the server logs."
        response = {
            "jsonrpc": "2.0",
            "result": {
                "response": f"Oops! {TOOL_NAME} encountered a problem. Don't worry, we'll get it sorted out. Here's what happened:\n\n{error_message}",
                "success": False,
            },
            "id": 1  # Important: use the ID from the request if available. This is a placeholder.
        }
        logging.error(f"Sending execute error response: {response}")
        return jsonify(response)


def create_error_response(message, code):
    """Creates a JSON-RPC error response."""
    response = {
        "jsonrpc": "2.0",
        "error": {
            "code": code,
            "message": message
        },
        "id": None  # Or the ID from the request, if available
    }
    return jsonify(response), 500  # Return a 500 status code for errors


if __name__ == '__main__':
    # Determine the port from the environment variable, default to 5000
    port = int(os.environ.get('PORT', 5000))
    app.run(debug=True, host='0.0.0.0', port=port)  # Set debug=False in production
```

**Explanation and Key Points**

1. **Imports:** Flask, `json`, `subprocess` for running external commands, and `logging` for debugging.
2. **Configuration:** `TOOL_NAME` (displayed in Cursor), `TOOL_DESCRIPTION`, and `TOOL_COMMAND`, the command to execute as a list of strings (as `subprocess.Popen` expects). Use absolute paths to your scripts or executables to avoid path issues; the example uses `echo` for demonstration only.
3. **`run_command(command, input_str=None)`:** Executes the command with `subprocess.Popen`, capturing both `stdout` and `stderr`, and handles errors such as `FileNotFoundError`. The `text=True` argument ensures the output is treated as text, which is crucial for handling strings correctly. On success it returns `(stdout, None)`; on failure it returns `(None, stderr)`.
4. **`@app.route('/', methods=['POST'])`:** Handles all incoming POST requests to the root URL (`/`), which is where Cursor sends its requests.
5. **`handle_request()`:** The main entry point. It parses the JSON body, extracts `method` and `params`, dispatches to the appropriate handler (`handle_initialize`, `handle_execute`), and uses `try...except` blocks plus `create_error_response` to return proper JSON-RPC errors for parse failures, unknown methods, and unexpected exceptions.
6. **`handle_initialize()`:** Returns the tool's name, description, version, and capabilities (here, `execute`). The `id` field in the response should match the `id` from the request so Cursor can correlate requests and responses; the example uses `0` as a placeholder.
7. **`handle_execute(params)`:** Extracts `query` and `context` (which often contains the selected text), runs the command, and builds a cheerful response: a positive message with the result and `"success": True` on success, or a polite explanation with the `stderr` output and `"success": False` on failure. Again, echo the request `id`; the example uses `1` as a placeholder.
8. **`create_error_response(message, code)`:** Builds a standard JSON-RPC error response with an error `code` and `message` (see the JSON-RPC specification for standard codes) and returns a 500 HTTP status.
9. **`if __name__ == '__main__':`** Reads the port from the `PORT` environment variable (defaulting to 5000) and starts the Flask development server. Set `debug=False` and use a proper WSGI server (Gunicorn or uWSGI) in production; `host='0.0.0.0'` makes the server reachable from outside the local machine.

**How to Use This Template**

1. Install Flask:

```bash
pip install Flask
```

2. Customize: replace `TOOL_NAME`, `TOOL_DESCRIPTION`, and `TOOL_COMMAND`; modify `handle_execute` to handle the `query` and `context` parameters appropriately for your tool (this is where you integrate your tool's logic); and adjust the cheerful response messages to be specific and engaging.

3. Run the server:

```bash
python your_script_name.py
```

4. Configure Cursor: go to Settings -> Tools, click "Add Tool", enter the URL of your server (e.g., `http://localhost:5000`) along with the name and description (these should match `TOOL_NAME` and `TOOL_DESCRIPTION`), and click "Save".

5. Test your tool: in Cursor, type `/` to open the command palette, select your tool, enter a query, and observe the cheerful response.

**Important Considerations**

* **Error handling:** Robust error handling is crucial. Provide informative error messages to the user and log errors server-side for debugging.
* **Security:** If your tool handles sensitive data, take appropriate measures (authentication, authorization, input validation).
* **Asynchronous operations:** If your tool's command takes a long time, use asynchronous processing (e.g., Celery, asyncio) so the server is not blocked; you can send a "processing" message to Cursor and deliver the final result when the command finishes.
* **Input validation:** Validate the input from Cursor to prevent security vulnerabilities and unexpected behavior.
* **JSON-RPC specification:** See https://www.jsonrpc.org/specification for protocol details.
* **Cursor documentation:** Check the Cursor documentation for the latest information on the Model Context Protocol.
* **IDs:** Always echo the `id` from the incoming request in your response; this is essential for Cursor to match requests and responses. If the request has no `id`, omit it or use `null` (see the short sketch after this entry).
* **Logging:** Log requests, responses, and errors extensively to ease debugging.
* **Deployment:** In production, use a proper WSGI server (Gunicorn or uWSGI) and set `debug=False` in Flask.

**Example with Selected Text**

If your tool is a code formatter, you could use the selected text as input to the formatter:

```python
def handle_execute(params):
    query = params.get('query', '')
    context = params.get('context', {})
    selected_text = context.get('selectedText', '')

    if selected_text:
        # Format the selected text (example: use clang-format)
        stdout, stderr = run_command(["clang-format"], selected_text)
        if stdout:
            response = {
                "jsonrpc": "2.0",
                "result": {
                    "response": f"Formatted code:\n\n{stdout}",
                    "success": True,
                },
                "id": 1
            }
        else:
            response = {
                "jsonrpc": "2.0",
                "result": {
                    "response": f"Error formatting code:\n\n{stderr}",
                    "success": False,
                },
                "id": 1
            }
        return jsonify(response)
    else:
        response = {
            "jsonrpc": "2.0",
            "result": {
                "response": "No code selected to format.",
                "success": False,
            },
            "id": 1
        }
        return jsonify(response)
```

This example uses `clang-format` (which must be installed); it passes `selected_text` as input to `clang-format` and returns the formatted code in the response. Adapt the template to your tool's requirements and focus on providing a positive, informative user experience.
nasustim/mcp-server

Fabric MCP Agent
Enables natural language querying of Microsoft Fabric Data Warehouses with intelligent SQL generation, metadata exploration, and business-friendly result summarization. Features two-layer architecture with MCP-compliant server and agentic AI reasoning for production-ready enterprise data access.

MCP Terminal Server
A lightweight FastAPI server that allows remote execution of shell commands on Windows, with real-time output streaming and security features like API key authentication and rate limiting.
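A minimal sketch of the pattern this entry describes (a FastAPI route, an API-key header check, shell execution); the route name, header, and environment variable are assumptions, not this server's actual interface, and streaming is omitted.

```python
import os
import subprocess

from fastapi import FastAPI, Header, HTTPException

app = FastAPI()
API_KEY = os.environ.get("TERMINAL_API_KEY", "change-me")  # assumed env var

@app.post("/run")
def run(command: str, x_api_key: str = Header(None)):
    # Reject callers that do not present the expected API key.
    if x_api_key != API_KEY:
        raise HTTPException(status_code=401, detail="invalid API key")
    # Execute the command and return its output in one shot.
    result = subprocess.run(command, shell=True, capture_output=True, text=True, timeout=30)
    return {"returncode": result.returncode, "stdout": result.stdout, "stderr": result.stderr}
```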

Remote MCP Server
A Cloudflare Workers-based Model Context Protocol server that enables AI assistants like Claude to access external tools via OAuth authentication.

Larkrs Mcp

Audio Player MCP Server
A server that lets Claude control audio playback on your computer, supporting MP3, WAV, and OGG files with play, list, and stop functions.
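A rough sketch of what play/list/stop tools can look like using the Python MCP SDK's FastMCP helper; the audio directory, server name, and the use of `ffplay` via `subprocess` are assumptions, not this server's implementation.

```python
import subprocess
from pathlib import Path

from mcp.server.fastmcp import FastMCP  # Python MCP SDK

AUDIO_DIR = Path.home() / "Music"   # assumed location of audio files
mcp = FastMCP("audio-player")       # hypothetical server name
_current = None                     # handle to the running player process

@mcp.tool()
def list_audio() -> list[str]:
    """List MP3/WAV/OGG files in the audio directory."""
    return [p.name for p in AUDIO_DIR.iterdir() if p.suffix.lower() in {".mp3", ".wav", ".ogg"}]

@mcp.tool()
def play_audio(filename: str) -> str:
    """Play a file with a system player (ffplay here, purely as an example)."""
    global _current
    _current = subprocess.Popen(["ffplay", "-nodisp", "-autoexit", str(AUDIO_DIR / filename)])
    return f"Playing {filename}"

@mcp.tool()
def stop_audio() -> str:
    """Stop the currently playing file, if any."""
    if _current:
        _current.terminate()
    return "Stopped"

if __name__ == "__main__":
    mcp.run()
```
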
MCP-Model-Context-Protocol-Projects
In this repository I am learning about and practically implementing MCP clients and servers in Python.

ToolFront MCP Server
Securely connects AI agents to multiple databases simultaneously while enabling collaborative learning from team query patterns, all while keeping data private by running locally.

sysauto Ask MCP Server
An MCP server that integrates with the Sonar API to give Claude real-time web search capabilities for in-depth research.

Toolhouse
An MCP server for toolhouse.ai. Unlike the official server, it does not depend on an external LLM.

Remote MCP Server
A Cloudflare Workers-based implementation of Model Context Protocol server that enables integration with Claude AI through OAuth login, allowing Claude to access and execute custom tools.

UNO-MCP
Unified Narrative Operator (UNO): seamlessly enriches and expands text, with a 5-in-1 agentic design.

Weather MCP Server
An MCP server that connects to the OpenWeatherMap API to provide current weather data and multi-day forecasts for locations worldwide, in different measurement units.
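For reference, the underlying call is a plain OpenWeatherMap request. A minimal sketch of fetching current conditions, assuming the API key lives in an environment variable (the city and `units` value are just examples):

```python
import os

import requests

def current_weather(city: str, units: str = "metric") -> dict:
    """Fetch current conditions from OpenWeatherMap's /data/2.5/weather endpoint."""
    resp = requests.get(
        "https://api.openweathermap.org/data/2.5/weather",
        params={"q": city, "units": units, "appid": os.environ["OPENWEATHER_API_KEY"]},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# Example: print(current_weather("Berlin")["main"]["temp"])
```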

Cloudflare API MCP
A lightweight MCP server that enables agents to interface with Cloudflare's REST API, allowing management of DNS records and other Cloudflare services.
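To show what DNS management through Cloudflare's REST API boils down to, here is a small sketch of creating a record with the v4 API; the zone ID, token variable, and record values are placeholders, and this is not the MCP server's own code.

```python
import os

import requests

def create_dns_record(zone_id: str, name: str, content: str) -> dict:
    """Create an A record via Cloudflare's v4 REST API."""
    resp = requests.post(
        f"https://api.cloudflare.com/client/v4/zones/{zone_id}/dns_records",
        headers={"Authorization": f"Bearer {os.environ['CLOUDFLARE_API_TOKEN']}"},
        json={"type": "A", "name": name, "content": content, "ttl": 300, "proxied": False},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["result"]
```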

PayPal
The PayPal Model Context Protocol server lets you integrate with PayPal's APIs through function calling. The protocol supports a variety of tools for interacting with different PayPal services.

Video RAG MCP Server
Enables natural language search and interaction with video content through three tools: ingesting videos to a Ragie index, retrieving relevant video segments based on queries, and creating video chunks from specific timestamps.

Aiven MCP Server
A Model Context Protocol server that provides access to Aiven services (PostgreSQL, Kafka, ClickHouse, Valkey, OpenSearch), letting LLMs build full-stack solutions by interacting with those services.

With-MCP
Middleware that converts a serverless handler with an OpenAPI spec into an MCP endpoint, following stateless MCP principles.
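To make the conversion concrete, here is a sketch (in Python rather than the middleware's own stack) of how OpenAPI operations can be mapped to MCP-style tool definitions; the output structure is illustrative, not With-MCP's actual schema.

```python
def openapi_to_tools(spec: dict) -> list[dict]:
    """Derive MCP-style tool definitions from the operations in an OpenAPI spec."""
    tools = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            tools.append({
                "name": op.get("operationId", f"{method}_{path.strip('/').replace('/', '_')}"),
                "description": op.get("summary", ""),
                # The JSON request-body schema doubles as the tool's input schema.
                "inputSchema": op.get("requestBody", {})
                                 .get("content", {})
                                 .get("application/json", {})
                                 .get("schema", {"type": "object"}),
            })
    return tools

# Example:
# spec = {"paths": {"/orders": {"post": {"operationId": "createOrder", "summary": "Create an order"}}}}
# print(openapi_to_tools(spec))
```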

godoc-mcp
godoc-mcp is a Model Context Protocol (MCP) server that provides efficient access to Go documentation, helping LLMs understand Go projects by giving them package documentation directly instead of requiring them to read entire source files.
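One plausible way to serve package docs without reading whole source trees is to shell out to the standard `go doc` command, roughly as sketched below; this is an illustration only, not godoc-mcp's implementation (which is written in Go).

```python
import subprocess

def package_doc(import_path: str, symbol: str | None = None) -> str:
    """Return the output of `go doc` for a package, or for one of its symbols."""
    args = ["go", "doc", import_path] if symbol is None else ["go", "doc", import_path, symbol]
    result = subprocess.run(args, capture_output=True, text=True, check=True)
    return result.stdout

# Example: print(package_doc("net/http", "Client"))
```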

MCP Tekmetric
A Model Context Protocol server that allows AI assistants to interact with Tekmetric data, enabling users to query appointment details, vehicle information, repair order status, and parts inventory through natural language.