Discover Awesome MCP Servers

Extend your agent with 23,681 capabilities via MCP servers.

MCP Server Coding Demo Guide


ISO 9001 MCP Server


ISO 9001 Model Context Protocol Server Implementation

WhatsUpDoc (downmarked)


Okay, here's a breakdown of how you could approach this task, along with considerations and potential code snippets. I'll focus on the core concepts and provide a structure you can adapt. Keep in mind that this is a complex task, and the specific implementation will depend heavily on the structure of the developer documentation you're targeting.

**Understanding the Goal**

The goal is to:

1. **Scrape Developer Documentation:** Extract the content from a website (or set of websites) that constitutes the developer documentation.
2. **Convert to Markdown:** Transform the scraped content into Markdown format.
3. **Save Locally:** Store the Markdown files on your local machine.
4. **Standardize with Anthropic's MCP (Hypothetical):** This is the trickiest part. Since the exact requirements of Anthropic's MCP (Model Context Protocol) integration aren't covered here, we'll have to *assume* how it might be used. I'll assume it's a way to structure communication between a CLI tool and a documentation server, likely involving specific formatting or metadata. We'll aim to *prepare* the Markdown in a way that would be easily adaptable to MCP.

**High-Level Steps**

1. **Identify Target Documentation:** Determine the exact website(s) or online resources that contain the developer documentation you want to scrape. Understand their structure (e.g., is it a single page, a series of linked pages, or a documentation generator like Sphinx or MkDocs?).
2. **Web Scraping:** Use a library like `BeautifulSoup4` and `requests` (in Python) to fetch the HTML content of the documentation pages and extract the relevant text.
3. **HTML to Markdown Conversion:** Use a library like `html2text` or `markdownify` to convert the extracted HTML into Markdown.
4. **MCP-Related Formatting (Hypothetical):** This is where we'll make assumptions about the MCP side. We might need to:
   * Add specific metadata to the Markdown files (e.g., tags, categories, version information).
   * Structure the Markdown in a particular way (e.g., specific heading levels, consistent use of code blocks).
   * Potentially create a separate metadata file (e.g., JSON or YAML) that describes the Markdown content.
5. **Save to Local Files:** Write the Markdown content to `.md` files on your local machine. Organize the files into a directory structure that makes sense for your use case.

**Python Code Example (Illustrative)**

```python
import requests
from bs4 import BeautifulSoup
import html2text
import os
import json  # For potential MCP metadata

def scrape_and_convert(url, output_dir):
    """
    Scrapes a URL, converts the content to Markdown, and saves it locally.

    Args:
        url: The URL of the documentation page.
        output_dir: The directory to save the Markdown file.
    """
    try:
        response = requests.get(url)
        response.raise_for_status()  # Raise HTTPError for bad responses (4xx or 5xx)

        soup = BeautifulSoup(response.content, 'html.parser')

        # **Crucially, identify the specific HTML elements that contain the documentation content.**
        # This is the part that will vary the most depending on the website.
        content_div = soup.find('div', {'class': 'documentation-content'})  # Example: find a div with class "documentation-content"
        if not content_div:
            print(f"Warning: Could not find documentation content on {url}")
            return

        # Extract the HTML content from the identified element
        html_content = str(content_div)

        # Convert HTML to Markdown
        h = html2text.HTML2Text()
        h.ignore_links = False  # Keep links
        markdown_content = h.handle(html_content)

        # **MCP-Related Formatting (Hypothetical)**
        # Add metadata (example)
        metadata = {
            "source_url": url,
            "category": "API Reference",
            "version": "1.0"
        }
        # Prepend metadata to the Markdown (or create a separate file)
        # This is just an example; adjust based on MCP requirements
        markdown_content = f"---\n{json.dumps(metadata)}\n---\n\n{markdown_content}"

        # Create the output directory if it doesn't exist
        os.makedirs(output_dir, exist_ok=True)

        # Determine the filename (e.g., based on the URL)
        filename = url.split('/')[-1].split('.')[0] + ".md"  # Extract last part of URL as filename
        filepath = os.path.join(output_dir, filename)

        # Save the Markdown to a file
        with open(filepath, 'w', encoding='utf-8') as f:
            f.write(markdown_content)

        print(f"Successfully scraped and saved {url} to {filepath}")

    except requests.exceptions.RequestException as e:
        print(f"Error fetching {url}: {e}")
    except Exception as e:
        print(f"Error processing {url}: {e}")

# Example Usage
if __name__ == "__main__":
    documentation_urls = [
        "https://example.com/api/endpoint1",  # Replace with actual URLs
        "https://example.com/api/endpoint2"
    ]
    output_directory = "documentation_markdown"

    for url in documentation_urls:
        scrape_and_convert(url, output_directory)
```
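Before pointing the example above at a real site, it's worth wiring in the politeness measures discussed in the considerations below: a `robots.txt` check and a delay between requests. Here is a minimal sketch using only the standard library's `urllib.robotparser` and `time.sleep()`; it reuses `scrape_and_convert` from the example above, and the user agent and 2-second delay are arbitrary placeholder choices:

```python
import time
import urllib.robotparser
from urllib.parse import urlparse

def fetch_allowed(url, user_agent="*"):
    """Check the site's robots.txt; returns True if fetching this URL is permitted."""
    parts = urlparse(url)
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()  # Download and parse robots.txt
    return rp.can_fetch(user_agent, url)

# Throttled loop over the documentation URLs from the example above
for url in ["https://example.com/api/endpoint1", "https://example.com/api/endpoint2"]:
    if fetch_allowed(url):
        scrape_and_convert(url, "documentation_markdown")
        time.sleep(2)  # Be polite: wait between requests (tune to the site's tolerance)
    else:
        print(f"Skipping {url}: disallowed by robots.txt")
```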
**Explanation and Key Considerations**

* **`requests`:** Used to fetch the HTML content from the web. Handles HTTP requests.
* **`BeautifulSoup4`:** Parses the HTML, making it easy to navigate and extract specific elements.
* **`html2text` (or `markdownify`):** Converts HTML to Markdown. You might need to experiment with the settings to get the best conversion.
* **`os`:** Used for creating directories and handling file paths.
* **Error Handling:** The `try...except` blocks are crucial for handling potential errors during web scraping (e.g., network errors, invalid URLs, changes in the website structure).
* **`response.raise_for_status()`:** Checks for HTTP errors (4xx, 5xx) and raises an exception if one occurs.
* **`soup.find()`:** This is the *most important* part to customize. You need to inspect the HTML source of the target documentation pages and identify the specific HTML elements (e.g., `<div>`, `<article>`, `<section>`) that contain the main content you want to extract. Use the `class` or `id` attributes of these elements to target them accurately. You might need to use more complex CSS selectors with `soup.select()` for more precise targeting.
* **HTML Structure:** Websites vary *wildly* in their HTML structure. You'll likely need to write different scraping logic for each website you target.
* **Rate Limiting:** Be respectful of the website you're scraping. Don't make requests too quickly, or you might get blocked. Consider adding delays between requests using `time.sleep()`, as in the sketch above.
* **Robots.txt:** Check the website's `robots.txt` file to see if there are any restrictions on scraping (also shown in the sketch above). Adhere to those restrictions.
* **JavaScript:** If the documentation content is dynamically loaded using JavaScript, you might need to use a headless browser like Selenium or Puppeteer to render the page before scraping. This adds significant complexity.
* **MCP Integration:** The example includes a placeholder for MCP-related formatting. You'll need to replace this with the actual logic required by Anthropic's MCP. This might involve:
  * Adding specific headers or footers to the Markdown files.
  * Creating a separate metadata file (e.g., JSON or YAML) that describes the content of each Markdown file.
  * Structuring the Markdown content in a particular way (e.g., using specific heading levels, consistent code block formatting).
* **Encoding:** The `encoding='utf-8'` argument in `open()` is important to ensure that the Markdown files are saved with UTF-8 encoding, which supports a wide range of characters.
* **Organization:** Think about how you want to organize the scraped documentation. You might want to create a directory structure that mirrors the structure of the website.

**Example of using `soup.select()` with CSS selectors:**

```python
content_elements = soup.select("div.main-content > article > h1, div.main-content > article > p, div.main-content > article > pre")
```

This would select all `<h1>`, `<p>`, and `<pre>` elements that are direct children of an `<article>` element, which is itself a direct child of a `<div>` element with the class "main-content".

**Important Disclaimer:** Web scraping can be a gray area legally and ethically. Always respect the website's terms of service and robots.txt file. Don't scrape excessively or in a way that could harm the website. If you're unsure about the legality of scraping a particular website, consult a legal professional.

**Portuguese Translation Considerations**

After you have the Markdown files, you can use a translation service (like the Google Translate API, the DeepL API, or a similar service) to translate the content into Portuguese. Here's a general outline:

1. **Choose a Translation API:** Sign up for an account with a translation service and obtain an API key.
2. **Install the API Client:** Install the Python client library for the translation API you've chosen (e.g., `googletrans`, `deepl`).
3. **Translate the Markdown Files:** Read each Markdown file, split the content into smaller chunks (to avoid API limits), translate each chunk, and then reassemble the translated content into a new Markdown file.
**Example using Google Translate API (using `googletrans` - note that `googletrans` is somewhat unreliable and may require workarounds):**

```python
from googletrans import Translator

def translate_markdown(input_file, output_file, target_language='pt'):
    """Translates a Markdown file to the specified language."""
    try:
        translator = Translator()

        with open(input_file, 'r', encoding='utf-8') as infile:
            text = infile.read()

        # Split into smaller chunks (e.g., paragraphs)
        paragraphs = text.split('\n\n')  # Simple paragraph splitting

        translated_paragraphs = []
        for paragraph in paragraphs:
            try:
                translation = translator.translate(paragraph, dest=target_language)
                translated_paragraphs.append(translation.text)
            except Exception as e:
                print(f"Translation error for paragraph: {e}")
                translated_paragraphs.append(paragraph)  # Keep original if translation fails

        translated_text = '\n\n'.join(translated_paragraphs)

        with open(output_file, 'w', encoding='utf-8') as outfile:
            outfile.write(translated_text)

        print(f"Translated {input_file} to {output_file}")

    except Exception as e:
        print(f"Error translating {input_file}: {e}")

# Example Usage
if __name__ == "__main__":
    input_markdown_file = "documentation_markdown/example.md"  # Replace with your file
    output_markdown_file = "documentation_markdown/example_pt.md"
    translate_markdown(input_markdown_file, output_markdown_file)
```

**Important Considerations for Translation:**

* **API Limits:** Translation APIs often have limits on the number of characters you can translate per day or per minute. Handle these limits gracefully (e.g., by adding delays or using a paid plan).
* **Cost:** Translation APIs can be expensive, especially for large amounts of text.
* **Quality:** Machine translation is not perfect. The quality of the translation will depend on the language pair and the complexity of the text. Consider having a human reviewer proofread the translated content.
* **Code Blocks:** You might want to avoid translating code blocks. You can use regular expressions to identify code blocks and exclude them from the translation process (see the sketch after this list).
* **Markdown Syntax:** Be careful not to break the Markdown syntax during translation. Test the translated Markdown files to make sure they render correctly.

This comprehensive guide should give you a solid foundation for scraping, converting, and potentially translating developer documentation. Remember to adapt the code to the specific structure of the documentation you're targeting and to handle errors gracefully. Good luck!
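To make the code-block caveat above concrete, here is a minimal sketch that stashes fenced code blocks behind placeholder tokens before translation and restores them afterwards. The `[[CODEBLOCK_n]]` token format is an arbitrary choice, and some translators may still alter such tokens, so verify the output:

```python
import re

FENCE = "`" * 3  # A literal triple backtick, built programmatically so this snippet nests cleanly
CODE_FENCE_RE = re.compile(FENCE + r".*?" + FENCE, re.DOTALL)

def protect_code_blocks(markdown_text):
    """Swap fenced code blocks for placeholder tokens before translation."""
    blocks = []

    def stash(match):
        blocks.append(match.group(0))
        return f"[[CODEBLOCK_{len(blocks) - 1}]]"  # Token the translator should pass through

    return CODE_FENCE_RE.sub(stash, markdown_text), blocks

def restore_code_blocks(translated_text, blocks):
    """Put the original code blocks back after translation."""
    for i, block in enumerate(blocks):
        translated_text = translated_text.replace(f"[[CODEBLOCK_{i}]]", block)
    return translated_text

# Usage: protect -> translate only the prose -> restore
sample = f"Intro\n\n{FENCE}python\nprint('hi')\n{FENCE}\n\nOutro"
text, saved = protect_code_blocks(sample)
# translated = translator.translate(text, dest='pt').text  # Translate the prose here
print(restore_code_blocks(text, saved))
```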

mcp-osint OSINT Server


An MCP server for running various OSINT tasks, leveraging common network reconnaissance tools.

AgentCraft MCP Server


Integrates with the AgentCraft framework to enable secure communication and data exchange between AI agents, supporting both prebuilt enterprise AI agents and custom ones.

Web_Search_MCP


An MCP (Model Context Protocol) server with a web search tool.

MCP Compliance


An MCP (Management Control Plane) server to support compliance operations in AI agents.

dbx-mcp-server


A Model Context Protocol server that lets AI applications interact with Dropbox, providing tools for file operations, metadata retrieval, search, and account management through the Dropbox API.

pyodide-mcp


Pyodide MCP Server

Weather MCP Server


An MCP server that provides real-time weather information, including temperature, humidity, wind speed, and sunrise and sunset times, through the OpenWeatherMap API.

MCP SSE demo


A demo of MCP SSE server limitations using the Bun runtime.

mcp-excalidraw


A Model Context Protocol server that lets LLMs create, modify, and manipulate Excalidraw diagrams through a structured API.

Command Execution MCP Server for Claude Desktop


Memory MCP Server


A Model Context Protocol server that provides knowledge-graph management capabilities.

Offline Cline Marketplace


A project that periodically syncs MCP servers from the official Cline Marketplace.

Sensei MCP


A Model Context Protocol server that offers expert guidance for Dojo and Cairo development on Starknet, providing specialized knowledge and assistance for building onchain worlds with the Dojo Entity Component System framework.

Kaltura Model Context Protocol (MCP) Server


A Model Context Protocol implementation that gives AI models standardized access to Kaltura's media management capabilities, including upload, metadata retrieval, search, and category and permission management.

MCP Node.js Debugger


Lets Claude debug a Node.js server directly by setting breakpoints, inspecting variables, and stepping through code.

S3 MCP Server


An Amazon S3 Model Context Protocol server that lets Large Language Models like Claude interact with AWS S3 storage, providing tools for listing buckets, listing objects, and retrieving object contents.

MCP Server


Backstage MCP


A simple Backstage MCP server using quarkus-backstage.

MCP Etherscan Server


Mirror of

MCP SSH Server for Windsurf


An SSH MCP server for Windsurf integration.

Mcp Server Chatsum


An MCP server that summarizes your WeChat chat messages.

Japanese Text Analyzer MCP Server


Okay, I understand. I can't directly *execute* code or interact with files on your system. However, I can provide you with a Python script that accomplishes this task. I'll explain the code thoroughly so you can understand how it works and adapt it if needed. Here's the Python script:

```python
import os
import re
import argparse
import mojimoji  # For normalizing Japanese text
import subprocess  # For calling MeCab

def count_characters_and_words(filepath, language):
    """
    Counts characters (excluding spaces and line breaks) and words in a text file.

    Args:
        filepath (str): The path to the text file.
        language (str): The language of the text file ('en' for English, 'ja' for Japanese).

    Returns:
        tuple: A tuple containing (character_count, word_count). Returns (None, None) on error.
    """
    try:
        with open(filepath, 'r', encoding='utf-8') as f:
            text = f.read()
    except FileNotFoundError:
        print(f"Error: File not found: {filepath}")
        return None, None
    except UnicodeDecodeError:
        print(f"Error: Could not decode file {filepath} with UTF-8. Try a different encoding.")
        return None, None

    if language == 'en':
        # English: Simple word splitting and character counting
        text = text.strip()  # Remove leading/trailing whitespace
        character_count = len(re.sub(r'\s', '', text))  # Remove all whitespace characters
        words = text.split()
        word_count = len(words)
    elif language == 'ja':
        # Japanese: Use MeCab for morphological analysis
        try:
            # Normalize the text so MeCab sees consistent character widths
            normalized_text = mojimoji.zen_to_han(text, kana=False, ascii=False)
            normalized_text = mojimoji.han_to_zen(normalized_text, kana=True, ascii=False)

            # Call MeCab
            mecab_process = subprocess.Popen(['mecab'],
                                             stdin=subprocess.PIPE,
                                             stdout=subprocess.PIPE,
                                             stderr=subprocess.PIPE,
                                             text=True)
            mecab_output, mecab_error = mecab_process.communicate(normalized_text)

            if mecab_error:
                print(f"MeCab Error: {mecab_error}")
                return None, None

            # Count words based on MeCab output (the surface form before the first tab),
            # skipping blank lines and the EOS marker
            words = [line.split(',')[0].split('\t')[0]
                     for line in mecab_output.splitlines()
                     if line.strip() and line.strip() != 'EOS']
            word_count = len(words)

            # Count characters (excluding spaces and line breaks)
            character_count = len(re.sub(r'\s', '', text))
        except FileNotFoundError:
            print("Error: MeCab is not installed or not in your PATH.")
            print("Please install MeCab and ensure it's accessible from the command line.")
            return None, None
        except Exception as e:
            print(f"An error occurred during Japanese processing: {e}")
            return None, None
    else:
        print("Error: Invalid language specified. Use 'en' or 'ja'.")
        return None, None

    return character_count, word_count

def main():
    parser = argparse.ArgumentParser(description="Counts characters and words in text files.")
    parser.add_argument("filepath", help="The path to the text file.")
    parser.add_argument("language", help="The language of the text file ('en' for English, 'ja' for Japanese).")
    args = parser.parse_args()

    char_count, word_count = count_characters_and_words(args.filepath, args.language)

    if char_count is not None and word_count is not None:
        print(f"File: {args.filepath}")
        print(f"Language: {args.language}")
        print(f"Character Count (excluding spaces): {char_count}")
        print(f"Word Count: {word_count}")

if __name__ == "__main__":
    main()
```

**How to Use the Script:**

1. **Save the Code:** Save the code above as a Python file (e.g., `count_text.py`).
2. **Install Dependencies:** You'll need to install the `mojimoji` library and MeCab. Open your terminal or command prompt and run:

   ```bash
   pip install mojimoji
   ```

   * **MeCab Installation:** MeCab is a morphological analyzer for Japanese. The installation process varies depending on your operating system:
     * **Linux (Debian/Ubuntu):**

       ```bash
       sudo apt-get update
       sudo apt-get install mecab libmecab-dev mecab-ipadic-utf8
       ```

     * **macOS (using Homebrew):**

       ```bash
       brew install mecab
       brew install mecab-ipadic
       ```

     * **Windows:** The installation on Windows is more involved. I recommend following a tutorial like this one: [https://medium.com/@denis.akhapkin/installing-mecab-on-windows-10-8e318304985](https://medium.com/@denis.akhapkin/installing-mecab-on-windows-10-8e318304985). Make sure MeCab is added to your system's PATH environment variable.

3. **Run the Script:** Open your terminal or command prompt, navigate to the directory where you saved `count_text.py`, and run the script with the following command:

   ```bash
   python count_text.py <filepath> <language>
   ```

   * Replace `<filepath>` with the actual path to your text file (e.g., `my_english_text.txt` or `my_japanese_text.txt`).
   * Replace `<language>` with either `en` for English or `ja` for Japanese.

**Example:**

```bash
python count_text.py my_english_text.txt en
python count_text.py my_japanese_text.txt ja
```

**Explanation of the Code:**

* **`import` Statements:**
  * `os`: Not directly used in the current version, but included for potential file-system operations.
  * `re`: For regular expressions (used to remove spaces).
  * `argparse`: For parsing command-line arguments (filepath and language).
  * `mojimoji`: For normalizing Japanese text (converting between full-width and half-width characters). This is important for MeCab's accuracy.
  * `subprocess`: For running the MeCab command-line tool.
* **`count_characters_and_words(filepath, language)` Function:**
  * Takes the file path and language as input.
  * **File Handling:** Opens the file in UTF-8 encoding (important for handling Japanese characters). Includes error handling for `FileNotFoundError` and `UnicodeDecodeError`.
  * **English Processing (`language == 'en'`):**
    * Removes leading/trailing whitespace using `text.strip()`.
    * Counts characters by removing all whitespace characters (using `re.sub(r'\s', '', text)`) and then taking the length of the resulting string.
    * Splits the text into words using `text.split()` and counts them.
  * **Japanese Processing (`language == 'ja'`):**
    * **Normalization:** Uses `mojimoji` to normalize the text: full-width digits are converted to half-width, and half-width kana to full-width. Consistent character widths improve MeCab's accuracy.
    * **MeCab Integration:** Uses `subprocess.Popen` to run the `mecab` command, passes the text via standard input (`stdin`), captures MeCab's output from standard output (`stdout`), and captures any errors from standard error (`stderr`).
    * **Error Handling:** Checks for MeCab errors and prints them if any occur. Also includes a `FileNotFoundError` check in case MeCab is not installed.
    * **Word Counting:** Parses the MeCab output. MeCab prints each word on a separate line, with the word itself in the first column (before the first tab character). The code extracts these words and counts them, skipping the "EOS" (End of Sentence) marker; a sample of MeCab's output appears after this list.
    * **Character Counting:** Counts characters in the original text (excluding spaces and line breaks) using `len(re.sub(r'\s', '', text))`.
  * **Error Handling:** Handles invalid language input.
  * **Returns:** Returns the character count and word count as a tuple.
* **`main()` Function:** Uses `argparse` to handle command-line arguments, calls `count_characters_and_words()` to do the actual counting, and prints the results.
* **`if __name__ == "__main__":` Block:** Ensures that `main()` is only called when the script is run directly (not when it's imported as a module).
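For reference when reading the word-counting logic, MeCab's output looks roughly like this: the surface form, a tab, then comma-separated features. The exact feature columns depend on the installed dictionary, so treat this as an illustrative sample:

```
猫	名詞,一般,*,*,*,*,猫,ネコ,ネコ
が	助詞,格助詞,一般,*,*,*,が,ガ,ガ
好き	名詞,形容動詞語幹,*,*,*,*,好き,スキ,スキ
EOS
```

The script takes everything before the first tab on each line as a word and skips the final `EOS` line.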
**Key Improvements and Considerations:**

* **Japanese Morphological Analysis (MeCab):** The script uses MeCab for Japanese word counting. This is *essential* for accurate word counts in Japanese because Japanese doesn't use spaces to separate words.
* **Character Counting (Excluding Spaces):** The script correctly counts characters by removing spaces and line breaks using regular expressions.
* **UTF-8 Encoding:** The script opens the files with UTF-8 encoding to handle Japanese characters correctly.
* **Error Handling:** The script includes error handling for file-not-found errors, Unicode decoding errors, and MeCab errors.
* **Command-Line Arguments:** The script uses `argparse` to make it easy to specify the file path and language from the command line.
* **MeCab Installation:** The instructions above cover installing MeCab on different operating systems. This is a crucial step.
* **Normalization:** The script normalizes the Japanese text before passing it to MeCab, which can improve MeCab's accuracy.
* **MeCab PATH:** Make sure MeCab is in your system's PATH environment variable so the script can find it.
* **Alternative Japanese Tokenizers:** While MeCab is a good choice, other Japanese tokenizers exist (e.g., SudachiPy, Juman++). You could adapt the script to use a different tokenizer if you prefer (see the sketch at the end of this entry).
* **Large Files:** For very large files, you might want to consider reading the file in chunks to avoid loading the entire file into memory at once.

**Example Usage (with sample files):**

1. **Create `my_english_text.txt`:**

   ```
   This is a sample English text file.
   It has multiple lines.
   ```

2. **Create `my_japanese_text.txt`:**

   ```
   これは日本語のサンプルテキストファイルです。
   複数の行があります。
   ```

3. **Run the script:**

   ```bash
   python count_text.py my_english_text.txt en
   python count_text.py my_japanese_text.txt ja
   ```

The script will print the character and word counts for each file. Remember to install MeCab *before* running the script with a Japanese file.
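As a minimal sketch of the alternative-tokenizer idea mentioned above: SudachiPy is a pure-Python option that avoids spawning an external `mecab` process. This assumes the `sudachipy` and `sudachidict_core` packages are installed (`pip install sudachipy sudachidict_core`); treat it as a starting point rather than a drop-in replacement:

```python
from sudachipy import dictionary, tokenizer

def count_japanese_words_sudachi(text):
    """Tokenize Japanese text with SudachiPy and return a word count."""
    tokenizer_obj = dictionary.Dictionary().create()  # Loads the installed Sudachi dictionary
    mode = tokenizer.Tokenizer.SplitMode.C  # Coarsest segmentation, closest to "words"
    return len([m.surface() for m in tokenizer_obj.tokenize(text, mode) if m.surface().strip()])

if __name__ == "__main__":
    print(count_japanese_words_sudachi("これは日本語のサンプルテキストファイルです。"))
```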

OpenAPI MCP Server


This tool creates a Model Context Protocol (MCP) server that acts as a proxy for any API with an OpenAPI v3.1 specification, letting you use Claude Desktop to easily interact with both local and remote server APIs.

Hevy MCP Server


EVM MCP Server


A comprehensive server that lets AI agents interact with multiple EVM-compatible blockchain networks through a unified interface, supporting ENS resolution, token operations, and smart contract interactions.

PubMed Enhanced Search Server


Enables searching and retrieving academic articles from the PubMed database, with advanced features such as MeSH term search, publication statistics, and PICO-based evidence lookup.

MCP-server
