Discover Awesome MCP Servers

Extend your agent with 10,066 capabilities via MCP servers.
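The servers listed below are all wired into an MCP client the same way. As a hedged illustration (the server name and package here are examples, not tied to any specific listing), Claude Desktop registers MCP servers in its `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]
    }
  }
}
```

Each entry maps a server name to the command the client launches; the client then discovers the server's tools over the MCP transport.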

Google Drive

File access and search capabilities for Google Drive

ClickHouse MCP Server

A Model Context Protocol server that lets AI assistants interact safely with ClickHouse databases, supporting table exploration and SQL query execution through a controlled interface.

ODBC MCP Server

Enables LLM tools such as Claude Desktop to query databases over ODBC connections, allowing data access, analysis, and insight generation while maintaining safety through read-only protections.

BICScan MCP Server

Provides blockchain address risk scoring and asset information via the BICScan API, letting users assess the risk of crypto addresses, domains, and dApps on a 0-100 scale.

MCP Web Search Tool

A Model Context Protocol server that provides real-time web search capabilities to AI assistants through pluggable search providers, currently integrated with the Brave Search API.

pyodide-mcp

Pyodide MCP Server

Weather MCP Server

An MCP server that provides real-time weather information, including temperature, humidity, wind speed, and sunrise/sunset times, via the OpenWeatherMap API.

@f4ww4z/mcp-mysql-server

Mirror of

MCP SSE demo

Demo of MCP SSE server limitations using the Bun runtime.

mcp-excalidraw

A Model Context Protocol server that enables LLMs to create, modify, and manipulate Excalidraw diagrams through a structured API.

S3 MCP Server

An Amazon S3 Model Context Protocol server that lets Large Language Models like Claude interact with AWS S3 storage, providing tools for listing buckets, listing objects, and retrieving object contents.

DALL-E MCP Server

An MCP server that allows users to generate, edit, and create variations of images through OpenAI's DALL-E API, supporting both DALL-E 2 and DALL-E 3 models.

Command Execution MCP Server for Claude Desktop

Memory MCP Server

A Model Context Protocol server that provides knowledge graph management capabilities.

ThemeParks.wiki API MCP Server

MCP server for the ThemeParks.wiki API.

Offline Cline Marketplace

A project that periodically syncs MCP servers from the official Cline Marketplace.

Sensei MCP

A Model Context Protocol server that provides expert guidance for Dojo and Cairo development on Starknet, offering specialized knowledge and assistance for building onchain worlds with the Dojo Entity Component System framework.

Kaltura Model Context Protocol (MCP) Server

A Model Context Protocol implementation that gives AI models standardized access to Kaltura's media management capabilities, including media upload, metadata retrieval, search, and category and permission management.

Docker image for the MCP Everything server with SSE transport

Mirror of

MCP Node.js Debugger

Lets Claude debug a Node.js server directly by setting breakpoints, inspecting variables, and stepping through code.

Web_Search_MCP

An MCP (Model Context Protocol) server with a web search tool.

G-Search MCP

A powerful MCP server that enables parallel Google searches across multiple keywords simultaneously, returning structured results while handling CAPTCHAs and simulating user browsing patterns.

WhatsUpDoc (downmarked)

A tool for scraping developer documentation, converting it to Markdown, and saving it locally, built around Anthropic's Model Context Protocol.

mcp-osint OSINT Server

MCP server for performing various OSINT tasks by leveraging common network reconnaissance tools.

AgentCraft MCP Server

Integrates with the AgentCraft framework to enable secure communication and data exchange between AI agents, supporting both predefined enterprise AI agents and custom ones.

MCP Server Coding Demo Guide

Smartsheet MCP Server

Provides seamless integration with Smartsheet, enabling automated operations on Smartsheet documents through a standardized interface that bridges AI-driven automation tools with Smartsheet's collaboration platform.

ISO 9001 MCP Server

ISO 9001 Model Context Protocol Server Implementation

Chroma MCP Server

A server that provides data retrieval capabilities powered by the Chroma embedding database, letting AI models create collections over generated data and user input, and retrieve that data using vector search, full-text search, and metadata filtering.

MCP Etherscan Server

Mirror of