Discover Awesome MCP Servers
Extend your agent with 17,823 capabilities via MCP servers.
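Every entry below is an MCP (Model Context Protocol) server that a compatible client can connect to as an extra set of tools. As a rough illustration of what such a server looks like, here is a minimal sketch assuming the official MCP Python SDK (the `mcp` package) and its `FastMCP` helper; the server name and the example tool are hypothetical.

```python
# Minimal sketch of an MCP server, assuming the official MCP Python SDK
# (`pip install mcp`). The server name and the example tool are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("hello-mcp")

@mcp.tool()
def greet(name: str) -> str:
    """Return a greeting for the given name."""
    return f"Hello, {name}!"

if __name__ == "__main__":
    # Runs over stdio so an MCP client (e.g. Claude Desktop) can launch it.
    mcp.run()
```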
- All (17,823)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)
LLDB-MCP
An MCP server for LLDB that enables automated analysis, debugging, and disassembly of native applications.
Clarion Builder MCP Server
Mirror of
Google Drive
File access and search capabilities for Google Drive
Weather MCP Server
An MCP server that provides real-time weather information, including temperature, humidity, wind speed, and sunrise/sunset times, via the OpenWeatherMap API.
Docker image for the MCP Everything server with SSE transport
Mirror of
Web_Search_MCP
An MCP (Model Context Protocol) server with web search tools.
test-mcp-server
Cursor & Claude Think MCP
Enables Claude's explicit thinking mode in Cursor, allowing users to see Claude's step-by-step reasoning process when prefacing queries with 'think'.
WhatsUpDoc (downmarked)
A CLI tool for scraping developer documentation, converting it to Markdown, and storing it locally, using Anthropic's Model Context Protocol to standardize how the CLI requests documentation and how the documentation server responds.
Cloud Security Alliance Model Context Protocol Servers
Several MCP servers used by the CSA, collected into a single repo for ease of use.
ODBC MCP Server
Enables LLM tools such as Claude Desktop to query databases over ODBC connections, allowing data to be accessed, analyzed, and turned into insights while maintaining security through read-only protections.
MCP Web Search Tool
A Model Context Protocol server that provides real-time web search capabilities to AI assistants through pluggable search providers, currently integrated with the Brave Search API.
DALL-E MCP Server
An MCP server that lets users generate, edit, and create variations of images via OpenAI's DALL-E API, supporting both the DALL-E 2 and DALL-E 3 models.
Command Execution MCP Server for Claude Desktop
Memory MCP Server
A Model Context Protocol server that provides knowledge graph management capabilities.
Supabase MCP Server
Mirror of
Offline Cline Marketplace
A project that periodically syncs MCP servers from the official Cline Marketplace.
Task Planner MCP Server
An MCP (Model Context Protocol) server that helps AI assistants (such as Claude) break complex tasks into manageable steps, track progress, and manage hierarchical to-do lists.
MedAdapt Content Server
A specialized Model Context Protocol server that enhances AI-assisted medical learning by connecting Claude Desktop to PubMed, NCBI Bookshelf, and user documents to search, retrieve, and analyze medical education content.
ThemeParks.wiki API MCP Server
MCP server for the ThemeParks.wiki API.
Sensei MCP
A Model Context Protocol server that provides expert guidance for Dojo and Cairo development on Starknet, offering specialized knowledge and assistance for building onchain worlds with the Dojo Entity Component System framework.
Kaltura Model Context Protocol (MCP) Server
An implementation of the Model Context Protocol that gives AI models standardized access to Kaltura's media management capabilities, including uploading, retrieving metadata, searching, and managing categories and permissions.
PubMed Enhanced Search Server
Enables searching and retrieving academic papers from the PubMed database, with advanced features such as MeSH term search, publication statistics, and PICO-based evidence searches.
pyodide-mcp
Pyodide MCP Server
dbx-mcp-server
A Model Context Protocol server that lets AI applications interact with Dropbox, providing tools for file operations, metadata retrieval, search, and account management through the Dropbox API.
mcp-server-cli
Model Context Protocol server to run shell scripts or commands
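As a purely illustrative aside (not code from mcp-server-cli itself), a shell-command capability like the one described above could be exposed as an MCP tool roughly as follows, again assuming the official MCP Python SDK; the server name, tool name, and timeout parameter are hypothetical.

```python
# Illustration only: how a "run shell command" capability might be exposed as
# an MCP tool. This is a generic sketch, not code from mcp-server-cli itself.
import subprocess

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("shell-tools")  # hypothetical server name

@mcp.tool()
def run_command(command: str, timeout_seconds: int = 30) -> str:
    """Run a shell command and return its combined stdout and stderr."""
    result = subprocess.run(
        command,
        shell=True,
        capture_output=True,
        text=True,
        timeout=timeout_seconds,
    )
    return result.stdout + result.stderr

if __name__ == "__main__":
    mcp.run()
```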
S3 MCP Server
An Amazon S3 Model Context Protocol server that lets Large Language Models such as Claude interact with AWS S3 storage, providing tools for listing buckets, listing objects, and retrieving object contents.
Backstage MCP
A simple Backstage MCP server built with quarkus-backstage.
MCP Etherscan Server
Mirror of
MCP SSH Server for Windsurf
MCP SSH server for Windsurf integration