Discover Awesome MCP Servers
Extend your agent with 10,066 capabilities via MCP servers.
- All (10,066)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)

Google Drive
File access and search capabilities for Google Drive
ClickHouse MCP Server
A Model Context Protocol server that lets AI assistants interact securely with ClickHouse databases, supporting table exploration and SQL query execution through a controlled interface.
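As a sketch of the kind of controlled interface described here, the snippet below uses the clickhouse-connect client with a naive SELECT-only guard; the host, credentials, and guard logic are illustrative assumptions, not this server's actual implementation.

```python
# Sketch only: a SELECT-only query helper over clickhouse-connect.
# Host and credentials are placeholders; the guard is a simplistic assumption.
import clickhouse_connect

client = clickhouse_connect.get_client(host="localhost", username="default", password="")

def run_readonly(sql: str):
    """Execute a query only if it looks like a plain SELECT."""
    if not sql.lstrip().upper().startswith("SELECT"):
        raise ValueError("Only SELECT statements are allowed")
    result = client.query(sql)
    return result.column_names, result.result_rows

# Table exploration and querying, the two operations the server exposes.
print(run_readonly("SELECT name FROM system.tables WHERE database = 'default'"))
```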
ODBC MCP Server
Lets LLM tools such as Claude Desktop query databases over ODBC connections, enabling access, analysis, and insight generation from database data while maintaining safety through read-only protections.
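The read-only pattern such a server relies on can be sketched with pyodbc; the DSN and the statement check below are placeholder assumptions, not this server's actual code.

```python
# Sketch only: read-only querying over ODBC with pyodbc.
import pyodbc

def query_readonly(dsn: str, sql: str) -> list[dict]:
    """Run a single SELECT over an ODBC connection, refusing write statements."""
    if not sql.lstrip().upper().startswith("SELECT"):
        raise ValueError("Read-only: only SELECT statements are permitted")
    conn = pyodbc.connect(dsn)  # e.g. "DSN=sales_db" (placeholder)
    try:
        cursor = conn.cursor()
        cursor.execute(sql)
        columns = [col[0] for col in cursor.description]
        return [dict(zip(columns, row)) for row in cursor.fetchall()]
    finally:
        conn.close()
```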
BICScan MCP Server
Provides blockchain address risk assessment and asset information through the BICScan API, letting users score the risk of crypto addresses, domains, and dApps on a scale of 0 to 100.
MCP Web Search Tool
A Model Context Protocol server that provides real-time web search capabilities to AI assistants through pluggable search providers, currently integrated with the Brave Search API.
pyodide-mcp
Pyodide MCP Server
Weather MCP Server
An MCP server that provides real-time weather information, including temperature, humidity, wind speed, and sunrise/sunset times, via the OpenWeatherMap API.
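A minimal sketch of such a tool on the MCP Python SDK, calling the public OpenWeatherMap current-weather endpoint; the tool name, environment variable, and response shape are assumptions, not this server's actual code.

```python
# Sketch only: an MCP weather tool backed by OpenWeatherMap.
import os

import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather")
API_KEY = os.environ["OPENWEATHERMAP_API_KEY"]  # assumed env var

@mcp.tool()
def current_weather(city: str) -> dict:
    """Temperature, humidity, wind speed, and sunrise/sunset for a city."""
    resp = requests.get(
        "https://api.openweathermap.org/data/2.5/weather",
        params={"q": city, "appid": API_KEY, "units": "metric"},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    return {
        "temperature_c": data["main"]["temp"],
        "humidity_pct": data["main"]["humidity"],
        "wind_speed_ms": data["wind"]["speed"],
        "sunrise_unix": data["sys"]["sunrise"],
        "sunset_unix": data["sys"]["sunset"],
    }

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```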
@f4ww4z/mcp-mysql-server
Mirror of
MCP SSE demo
Demo of MCP SSE server limitations using the Bun runtime.

mcp-excalidraw
A Model Context Protocol server that lets LLMs create, modify, and manipulate Excalidraw diagrams through a structured API.
S3 MCP Server
An Amazon S3 Model Context Protocol server that lets Large Language Models such as Claude interact with AWS S3 storage, providing tools for listing buckets, listing objects, and retrieving object contents.
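The three operations listed map directly onto boto3 calls; here is a sketch under the assumption that AWS credentials are already configured in the environment.

```python
# Sketch only: the bucket/object operations the server exposes, via boto3.
import boto3

s3 = boto3.client("s3")

def list_buckets() -> list[str]:
    return [b["Name"] for b in s3.list_buckets()["Buckets"]]

def list_objects(bucket: str, prefix: str = "") -> list[str]:
    resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
    return [obj["Key"] for obj in resp.get("Contents", [])]

def get_object_text(bucket: str, key: str) -> str:
    return s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
```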
DALL-E MCP Server
An MCP server that lets users generate, edit, and create variations of images through OpenAI's DALL-E API, with support for both the DALL-E 2 and DALL-E 3 models.
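For reference, generation and variations in the OpenAI Python SDK look roughly like this (variations are a DALL-E 2-only endpoint; the prompt and filename are placeholders):

```python
# Sketch only: image generation and variation via the OpenAI Images API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

gen = client.images.generate(
    model="dall-e-3", prompt="a lighthouse at dawn", size="1024x1024", n=1
)
print(gen.data[0].url)

# Variations require DALL-E 2 and an existing source image.
with open("lighthouse.png", "rb") as f:
    var = client.images.create_variation(model="dall-e-2", image=f, n=1, size="1024x1024")
print(var.data[0].url)
```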
Command Execution MCP Server for Claude Desktop
Memory MCP Server
A Model Context Protocol server that provides knowledge-graph management capabilities.
ThemeParks.wiki API MCP Server
MCP server for the ThemeParks.wiki API.
Offline Cline Marketplace
A project that periodically syncs MCP servers from the official Cline Marketplace.
Sensei MCP
A Model Context Protocol server that offers expert guidance for Dojo and Cairo development on Starknet, providing specialized knowledge and assistance for building onchain worlds with the Dojo Entity Component System framework.
Kaltura Model Context Protocol (MCP) Server
A Model Context Protocol implementation that gives AI models standardized access to Kaltura's media management capabilities, including upload, metadata retrieval, search, and category and permission management.
Docker image for the MCP Everything server with SSE transport
Mirror of
MCP Node.js Debugger
Lets Claude debug a Node.js server directly by setting breakpoints, inspecting variables, and stepping through code.
Web_Search_MCP
An MCP (Model Context Protocol) server with a web search tool.

G-Search MCP
A powerful MCP server that enables parallel Google searches across multiple keywords simultaneously, delivering structured results while handling CAPTCHAs and simulating human browsing patterns.
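The fan-out half of "parallel search across multiple keywords" reduces to a standard asyncio pattern; `search_google` below is a hypothetical stand-in for the server's browser-driven search, not its real API.

```python
# Sketch only: concurrent keyword fan-out; search_google is hypothetical.
import asyncio

async def search_google(keyword: str) -> dict:
    # The real server drives a browser, handles CAPTCHAs, and paces
    # requests to mimic human browsing; this stub just simulates latency.
    await asyncio.sleep(0.1)
    return {"keyword": keyword, "results": []}

async def parallel_search(keywords: list[str]) -> list[dict]:
    return await asyncio.gather(*(search_google(k) for k in keywords))

print(asyncio.run(parallel_search(["mcp servers", "model context protocol"])))
```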
WhatsUpDoc (downmarked)
A CLI tool that scrapes developer documentation, converts the HTML to Markdown, and saves it to your local machine, standardizing the workflow around Anthropic's MCP.
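A minimal sketch of that scrape-and-convert flow using requests, BeautifulSoup, and html2text; the URL and the content selector are site-specific placeholders.

```python
# Sketch only: fetch a docs page, extract the content element, emit Markdown.
import requests
from bs4 import BeautifulSoup
import html2text

resp = requests.get("https://example.com/docs/getting-started")
resp.raise_for_status()

soup = BeautifulSoup(resp.content, "html.parser")
content = soup.find("div", {"class": "documentation-content"})  # adjust per site

converter = html2text.HTML2Text()
converter.ignore_links = False  # keep hyperlinks in the output
markdown = converter.handle(str(content))

with open("getting-started.md", "w", encoding="utf-8") as f:
    f.write(markdown)
```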
mcp-osint OSINT Server
MCP server for carrying out various OSINT tasks by leveraging common network reconnaissance tools.
AgentCraft MCP Server
Integrates with the AgentCraft framework to enable secure communication and data exchange between AI agents, supporting both ready-made enterprise AI agents and custom ones.
MCP Server Coding Demo Guide
Smartsheet MCP Server
Provides seamless Smartsheet integration, enabling automated operations on Smartsheet documents through a standardized interface that connects AI-driven automation tools with Smartsheet's collaboration platform.
ISO 9001 MCP Server
ISO 9001 Model Context Protocol Server Implementation
Chroma MCP Server
A server that provides data retrieval capabilities powered by the Chroma embedding database, letting AI models create collections over generated data and user inputs, and retrieve that data using vector search, full-text search, and metadata filtering.
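The retrieval pattern described (collections, vector search, metadata filtering) can be sketched directly against the chromadb client; the collection name and documents below are illustrative.

```python
# Sketch only: the Chroma retrieval pattern the server wraps.
import chromadb

client = chromadb.Client()
collection = client.get_or_create_collection("notes")

collection.add(
    ids=["n1", "n2"],
    documents=["MCP servers expose tools to models.", "Chroma stores embeddings."],
    metadatas=[{"topic": "mcp"}, {"topic": "chroma"}],
)

# Vector search constrained by a metadata filter.
hits = collection.query(
    query_texts=["how do models call tools?"], n_results=1, where={"topic": "mcp"}
)
print(hits["documents"])
```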
MCP Etherscan Server
Mirror of