Discover Awesome MCP Servers
Extend your agent with 15,099 capabilities via MCP servers.
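Most MCP clients are wired up through a small JSON config file. As a hedged illustration of how a server from this directory is typically registered in Claude Desktop's `claude_desktop_config.json` (the server name, package, and environment variable below are placeholders, not a real listing), an entry looks roughly like this:

```json
{
  "mcpServers": {
    "example-server": {
      "command": "npx",
      "args": ["-y", "@example/mcp-server"],
      "env": {
        "EXAMPLE_API_KEY": "your-key-here"
      }
    }
  }
}
```

Each listing below documents its own `command`, `args`, and required credentials; check the server's README for the actual install command.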
- All (15,099)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)
1Panel MCP Server
1Panel MCP Server is a Model Context Protocol (MCP) server implementation for 1Panel.
OpenFGA MCP
An experimental Model Context Protocol server that enables Large Language Models to read, search, and manipulate OpenFGA authorization stores, unlocking fine-grained access control for agentic AI and natural-language interaction.
Jira MCP Server
A Model Context Protocol server that lets AI assistants such as Claude interact with Jira, enabling project-management tasks like listing projects, searching issues, creating tickets, and managing sprints through natural-language queries.
Cortellis MCP Server
BigQuery MCP Server
Mirror of

GoogleAdsServer
A tool that connects Google Ads with Claude AI, letting users analyze advertising data through natural-language conversations and access campaign information, performance metrics, keyword analysis, and ad management.
Clarion Builder MCP Server
Mirror of
ClickHouse MCP Server
A Model Context Protocol server that lets AI assistants interact securely with ClickHouse databases, supporting table exploration and SQL query execution through a controlled interface.
BICScan MCP Server
Provides blockchain address risk scoring and asset information via the BICScan API, letting users assess risk for crypto addresses, domains, and dApps on a 0-100 scale.
Meta MCP Server
A meta server for orchestrating and leveraging multiple MCP servers and tools based on user requests.
Mcp Kafka Processor
An MCP server for processing Kafka events, enabling LLMs to interact with event streams.
MedAdapt Content Server
A specialized Model Context Protocol server that enhances AI-assisted medical learning by connecting Claude Desktop to PubMed, NCBI Bookshelf, and user documents for searching, fetching, and analyzing medical education content.
mcp_server
NPM MCP Server
A Model Context Protocol server that enables AI models to fetch detailed information about npm packages and discover popular packages in the npm ecosystem.
Yazdani-Supabase-MCP-Server
An MCP server that enables natural language querying of Supabase PostgreSQL databases using Claude 3.7, allowing users to inspect schemas, execute SQL, manage migrations, and convert natural language to SQL queries.
EVM MCP Server
Comprehensive Model Context Protocol server that enables AI agents to interact with 30+ Ethereum-compatible blockchain networks, supporting token transfers, smart contract interactions, and ENS name resolution through a unified interface.
Mcp-server
MPC Tally API Server
A server that enables AI agents to interact with the Tally API to access DAO governance data, proposals, and metadata using the Model Context Protocol (MCP), built with TypeScript and GraphQL.
MCP Compliance
An MCP server to support compliance operations in AI agents.
WhatsUpDoc (downmarked)
Here's a breakdown of how you could approach this task, along with considerations and potential code snippets (using Python as a common scripting language). Keep in mind that this is a complex task, and the specific implementation will depend heavily on the structure of the developer documentation you're targeting.

**Overall Strategy**

1. **Identify the Target Documentation:** Determine the website or source where the developer documentation resides. This is crucial because the scraping method will vary depending on the site's structure.
2. **Web Scraping:** Use a library like `requests` and `Beautiful Soup` (or `Scrapy` for more complex sites) to fetch and parse the HTML content of the documentation pages.
3. **Content Extraction:** Identify the relevant content within the HTML (e.g., headings, paragraphs, code examples). Use CSS selectors or XPath expressions to target these elements.
4. **Markdown Conversion:** Convert the extracted content into Markdown format. You might need to handle specific HTML elements and translate them into their Markdown equivalents.
5. **Anthropic MCP Integration (Conceptual):** The idea is to use Anthropic's MCP (Model Context Protocol) to define a standardized way for the CLI (command-line interface) to request documentation and for the documentation server to respond. This likely involves defining a schema for the request and response messages.
6. **Local Storage:** Save the generated Markdown files locally.

**Python Code Snippets (Illustrative)**

```python
import requests
from bs4 import BeautifulSoup
import os
import re  # For cleaning up text

# --- Configuration ---
BASE_URL = "https://example.com/docs/"  # Replace with the actual base URL
START_PAGE = "index.html"  # Replace with the starting page
OUTPUT_DIR = "local_docs"

# --- Helper Functions ---

def clean_text(text):
    """Removes extra whitespace and cleans up text."""
    text = re.sub(r'\s+', ' ', text).strip()  # Collapse runs of whitespace
    return text

def scrape_page(url):
    """Scrapes a single page and returns the relevant content."""
    try:
        response = requests.get(url)
        response.raise_for_status()  # Raise HTTPError for bad responses (4xx or 5xx)
        soup = BeautifulSoup(response.content, 'html.parser')

        # --- Identify content areas (adjust these selectors!) ---
        main_content = soup.find("div", class_="main-content")  # Example: a div with class "main-content"
        if not main_content:
            print(f"Warning: No main content found on {url}")
            return None

        # Extract headings, paragraphs, code blocks, etc.
        # This is the most site-specific part. Inspect the HTML!
        headings = main_content.find_all(["h1", "h2", "h3", "h4", "h5", "h6"])
        paragraphs = main_content.find_all("p")
        code_blocks = main_content.find_all("pre")  # Often used for code

        return {
            "headings": headings,
            "paragraphs": paragraphs,
            "code_blocks": code_blocks,
            "url": url,
        }
    except requests.exceptions.RequestException as e:
        print(f"Error fetching {url}: {e}")
        return None

def convert_to_markdown(content):
    """Converts extracted content to Markdown."""
    if not content:
        return ""
    markdown = f"# {content['url']}\n\n"  # Add the URL as a heading
    for heading in content["headings"]:
        level = int(heading.name[1])  # h1 -> level 1, h2 -> level 2, etc.
        markdown += f"{'#' * level} {clean_text(heading.get_text())}\n\n"
    for paragraph in content["paragraphs"]:
        markdown += f"{clean_text(paragraph.get_text())}\n\n"
    for code_block in content["code_blocks"]:
        code = code_block.get_text()
        markdown += "```\n" + code + "\n```\n\n"
    return markdown

def save_to_file(filename, content):
    """Saves Markdown content to a file."""
    filepath = os.path.join(OUTPUT_DIR, filename + ".md")
    try:
        with open(filepath, "w", encoding="utf-8") as f:
            f.write(content)
        print(f"Saved to {filepath}")
    except Exception as e:
        print(f"Error saving to {filepath}: {e}")

def crawl_and_scrape(start_url):
    """Crawls the documentation site and scrapes content."""
    visited_urls = set()
    queue = [start_url]
    while queue:
        url = queue.pop(0)
        if url in visited_urls:
            continue
        visited_urls.add(url)
        print(f"Scraping: {url}")
        content = scrape_page(url)
        if content:
            markdown = convert_to_markdown(content)
            filename = url.replace(BASE_URL, "").replace("/", "_").replace(".html", "")  # Create a filename
            save_to_file(filename, markdown)

        # --- Find links on the page and add them to the queue (beware infinite loops!) ---
        try:
            response = requests.get(url)
            response.raise_for_status()
            soup = BeautifulSoup(response.content, 'html.parser')
            for link in soup.find_all('a', href=True):
                next_url = link['href']
                if next_url.startswith("/"):
                    next_url = BASE_URL + next_url[1:]  # Handle relative links
                elif not next_url.startswith("http"):
                    next_url = BASE_URL + next_url  # Handle relative links without a leading slash
                if BASE_URL in next_url:  # Only crawl within the documentation site
                    queue.append(next_url)
        except requests.exceptions.RequestException as e:
            print(f"Error finding links on {url}: {e}")

# --- Main Execution ---
if __name__ == "__main__":
    os.makedirs(OUTPUT_DIR, exist_ok=True)  # Create the output directory
    start_url = BASE_URL + START_PAGE
    crawl_and_scrape(start_url)
    print("Scraping complete!")
```

**Explanation of the Code:**

* **`requests`:** Fetches the HTML content from the web pages.
* **`Beautiful Soup`:** Parses the HTML, making it easy to navigate and extract data.
* **`BASE_URL` and `START_PAGE`:** You *must* replace these with the actual URL of the documentation you want to scrape.
* **`scrape_page()`:** This function fetches the HTML, parses it, and then *crucially* uses CSS selectors (`soup.find()`, `soup.find_all()`) to locate the specific parts of the page you want to extract (headings, paragraphs, code blocks). **This is the part you'll need to customize heavily based on the structure of the target website.** Inspect the HTML source of the documentation pages to determine the correct CSS selectors.
* **`convert_to_markdown()`:** Takes the extracted content and converts it into Markdown format. It handles headings, paragraphs, and code blocks; you might need to add more logic for other HTML elements (lists, tables, images, etc.).
* **`save_to_file()`:** Saves the Markdown content to a file in the `OUTPUT_DIR`.
* **`crawl_and_scrape()`:** Crawls the documentation site, following links to other pages, and prevents infinite loops by tracking visited URLs. **Be very careful with this part. Make sure you're only crawling within the documentation site's domain.**

**Important Considerations and Next Steps:**

* **Website Structure:** The most important thing is to understand the structure of the website you're scraping. Use your browser's developer tools (usually F12) to inspect the HTML source and identify the CSS selectors or XPath expressions that will extract the relevant content.
* **Robots.txt:** Always check the website's `robots.txt` file (e.g., `https://example.com/robots.txt`) to see if there are any restrictions on scraping. Respect the website's rules.
* **Rate Limiting:** Don't overload the website with requests. Implement a delay between requests (e.g., using `time.sleep()`) to avoid being blocked (see the polite-crawling sketch after this entry).
* **Error Handling:** The code includes basic error handling, but you should add more robust handling for issues such as network errors or unexpected HTML structure.
* **Dynamic Content:** If the documentation website uses JavaScript to load content dynamically, you might need a headless browser like Selenium or Puppeteer to render the page before scraping. This adds significant complexity.

**Anthropic MCP Integration (Detailed):**

* **Define the Schema:** Define a JSON schema (or similar) for the messages exchanged between the CLI and the documentation server. For example:

```json
// Request from CLI
{
  "type": "documentation_request",
  "query": "how to use the API",
  "format": "markdown"  // or "html", "text"
}

// Response from Documentation Server
{
  "type": "documentation_response",
  "query": "how to use the API",
  "result": "# Using the API\n\nHere's how to use the API...",
  "format": "markdown"
}
```

* **CLI Implementation:** The CLI would need to:
  * Construct a request message according to the schema.
  * Send the request to the documentation server (e.g., via HTTP).
  * Receive the response from the server.
  * Display the documentation to the user.
* **Documentation Server Implementation:** The documentation server would need to:
  * Receive requests from the CLI.
  * Parse the request message.
  * Search the documentation (potentially using an index or search engine).
  * Format the results according to the requested format.
  * Send the response back to the CLI.
* **MCP Benefits:**
  * **Standardization:** Ensures that the CLI and the documentation server communicate in a consistent way.
  * **Flexibility:** Allows you to change the documentation server implementation without affecting the CLI (as long as the message schema remains the same).
  * **Extensibility:** Makes it easier to add new features to the documentation system.

**Example of MCP usage (Conceptual):**

```python
# CLI (conceptual)
import requests

def get_documentation(query):
    request = {
        "type": "documentation_request",
        "query": query,
        "format": "markdown",
    }
    try:
        response = requests.post("http://your-documentation-server/api/docs", json=request)
        response.raise_for_status()
        data = response.json()
        if data["type"] == "documentation_response":
            print(data["result"])  # Display the markdown
        else:
            print("Unexpected response from server.")
    except requests.exceptions.RequestException as e:
        print(f"Error: {e}")
```

```python
# Documentation server (conceptual - Flask example)
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/api/docs', methods=['POST'])
def get_docs():
    data = request.get_json()
    if data["type"] == "documentation_request":
        query = data["query"]
        # ... search your documentation; replace with an actual search ...
        result = f"# Results for {query}\n\nSome documentation here."
        response = {
            "type": "documentation_response",
            "query": query,
            "result": result,
            "format": "markdown",
        }
        return jsonify(response)
    return jsonify({"error": "Invalid request"}), 400

if __name__ == '__main__':
    app.run(debug=True)
```

**In summary,** this is a complex project that requires careful planning and implementation. Start by understanding the structure of the target documentation website, and then gradually build up the scraping and conversion logic. The MCP integration adds another layer of complexity, but it can provide significant benefits in terms of standardization and flexibility. Remember to respect the website's terms of service and robots.txt file. Good luck!
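To make the robots.txt and rate-limiting advice above concrete, here is a minimal polite-fetch sketch using only the standard library's `urllib.robotparser` plus `requests`. The delay value and user-agent string are illustrative assumptions, not requirements of any particular site:

```python
import time
import requests
from urllib import robotparser

BASE_URL = "https://example.com/docs/"   # Same placeholder as the scraper above
USER_AGENT = "docs-scraper-example/0.1"  # Hypothetical user-agent string
CRAWL_DELAY_SECONDS = 1.0                # Assumed polite delay between requests

# Parse the site's robots.txt once up front.
robots = robotparser.RobotFileParser()
robots.set_url("https://example.com/robots.txt")
robots.read()

def polite_get(url):
    """Fetch a URL only if robots.txt allows it, pausing between requests."""
    if not robots.can_fetch(USER_AGENT, url):
        print(f"Skipping {url}: disallowed by robots.txt")
        return None
    time.sleep(CRAWL_DELAY_SECONDS)  # Crude rate limiting
    response = requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=30)
    response.raise_for_status()
    return response

if __name__ == "__main__":
    page = polite_get(BASE_URL + "index.html")
    if page is not None:
        print(f"Fetched {len(page.content)} bytes")
```

Dropping this in place of the bare `requests.get()` calls in `scrape_page()` and `crawl_and_scrape()` would cover both considerations at once.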
Cloud Security Alliance Model Context Protocol Servers
Several MCP servers used by the CSA, collected into a single repo for ease of use.
Smartsheet MCP Server
Provides seamless integration with Smartsheet, enabling automated operations on Smartsheet documents through a standardized interface that bridges AI-powered automation tools with the Smartsheet collaboration platform.
ISO 9001 MCP Server
ISO 9001 Model Context Protocol Server Implementation

Coinmarketcap
A Model Context Protocol implementation for the CoinMarketCap API that provides standardized access to cryptocurrency market data, exchange information, and blockchain metrics.
Chroma MCP Server
A server that provides data-retrieval capabilities powered by the Chroma embedding database, enabling AI models to create collections over generated data and user input, and to retrieve that data via vector search, full-text search, and metadata filtering.
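As a rough sketch of the retrieval model this server wraps (the collection name, documents, and filters below are invented for illustration; the MCP server exposes equivalent tools rather than this Python API directly):

```python
import chromadb

client = chromadb.Client()  # In-memory Chroma instance
collection = client.create_collection("notes")  # Hypothetical collection name

# Store generated output and user input alongside metadata.
collection.add(
    ids=["n1", "n2"],
    documents=["MCP servers expose tools to models.",
               "Chroma supports vector and full-text search."],
    metadatas=[{"source": "model"}, {"source": "user"}],
)

# Vector search combined with metadata and full-text filters.
results = collection.query(
    query_texts=["how do models call tools?"],
    n_results=1,
    where={"source": "model"},            # Metadata filter
    where_document={"$contains": "MCP"},  # Full-text filter
)
print(results["documents"])
```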
Web_Search_MCP
An MCP (Model Context Protocol) server with a web search tool.
test-mcp-server
Cursor & Claude Think MCP
Enables Claude's explicit thinking mode in Cursor, allowing users to see Claude's step-by-step reasoning process when prefacing queries with 'think'.
DALL-E MCP Server
An MCP server that lets users generate, edit, and create variations of images via OpenAI's DALL-E API, supporting both the DALL-E 2 and DALL-E 3 models.
Command Execution MCP Server for Claude Desktop