Discover Awesome MCP Servers
Extend your agent with 13,036 capabilities via MCP servers.
- All (13,036)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)

atdata-email-verification-mcp-server

sl-test

Letter Counter MCP Server
An MCP server, built as a Model Context Protocol learning example, that lets an LLM count the occurrences of a specific letter within a word.
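The core of such a tool can be sketched as a plain Python function; the name `count_letter` and its signature are illustrative, not taken from the project:

```python
def count_letter(word: str, letter: str) -> int:
    """Count case-insensitive occurrences of a single letter in a word.

    This is the kind of handler an MCP server would expose as a tool,
    letting the LLM delegate the counting instead of guessing at it.
    """
    if len(letter) != 1:
        raise ValueError("letter must be a single character")
    return word.lower().count(letter.lower())
```

For example, `count_letter("strawberry", "r")` returns 3, a question LLMs are famously unreliable at answering from tokens alone.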

Rongcloud Native
RongCloud Native MCP is a lightweight RongCloud IM service wrapper based on the Model Context Protocol (MCP). By directly wrapping the high-performance Rust IM SDK, it provides a simple and efficient instant-messaging solution for client-side and local applications.

ERPNext MCP Server
A production-ready server that enables AI assistants like Claude Desktop to seamlessly integrate with ERPNext for document operations, reporting, and custom workflows through natural language interaction.
Mcp Starwars
An MCP server for SWAPI, the Star Wars API. The main goal of this project is to demonstrate how to interact with an API through an MCP server.
GitLab MCP Server

MCP Selenium Server
A Model Context Protocol server implementation that enables browser automation through standardized MCP clients, supporting features like navigation, element interaction, and screenshots across Chrome, Firefox, and Edge browsers.

DeepSeek MCP Server
Allows seamless integration of DeepSeek's language models with MCP-compatible applications like Claude Desktop, supporting features such as model selection, temperature control, and multi-turn conversations with automatic model fallback.
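Automatic model fallback of the kind described here can be sketched as trying models in preference order; the model names and the `call_model` hook below are hypothetical, not the server's actual API:

```python
from typing import Callable, Sequence, Tuple


def complete_with_fallback(
    prompt: str,
    models: Sequence[str],
    call_model: Callable[[str, str], str],
) -> Tuple[str, str]:
    """Try each model in order; return (model_used, response).

    `call_model(model, prompt)` is a hypothetical hook that raises on
    failure (rate limit, model unavailable, etc.).
    """
    last_error = None
    for model in models:
        try:
            return model, call_model(model, prompt)
        except Exception as err:  # in practice, catch the SDK's specific errors
            last_error = err
    raise RuntimeError(f"all models failed: {last_error}")
```

The caller lists models from most to least preferred, so a transient failure on the first choice degrades gracefully instead of surfacing an error to the chat client.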

A2A MCP Server
A centralized server that tracks and manages connected agents, providing a web interface to monitor their status while enabling agent communication through a central point.
Stock MCP Server

Angular MCP Toolkit
Provides 11 powerful tools for Angular development with enterprise-grade templates, scaffolding capabilities, and project generation features for modern Angular 20.0.x applications.
QMT-MCP-Server
Mirror
A Cloud Automator MCP server
An unofficial MCP server for using the Cloud Automator REST API.
X_AI_AGENT_MCP
An X AI agent built with Gemini-Flash-2.0 and an MCP server.
MCP Server Markup Language (MCPML)
MCPML (MCP Server Markup Language) is a Python framework for building MCP servers, with CLI and OpenAI agent support.

Enrichment MCP Server
A Model Context Protocol server that enables users to perform third-party enrichment lookups for security observables (IP addresses, domains, URLs, emails) through services like VirusTotal, Shodan, and others.
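Routing an observable to the right enrichment service requires first classifying it; a minimal sketch, with patterns that are deliberately rough (real services validate far more strictly), might look like:

```python
import ipaddress
import re

# Rough patterns for illustration only; production validators are stricter.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
DOMAIN_RE = re.compile(r"^(?=.{1,253}$)[a-zA-Z0-9-]{1,63}(\.[a-zA-Z0-9-]{1,63})+$")


def classify_observable(value: str) -> str:
    """Guess an observable's type so it can be routed to the right lookup
    (e.g. IPs to Shodan, URLs to VirusTotal)."""
    try:
        ipaddress.ip_address(value)  # handles both IPv4 and IPv6
        return "ip"
    except ValueError:
        pass
    if value.startswith(("http://", "https://")):
        return "url"
    if EMAIL_RE.match(value):
        return "email"
    if DOMAIN_RE.match(value):
        return "domain"
    return "unknown"
```

The email check runs before the domain check because an address like `alice@example.com` contains a domain but should be enriched as an email.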

Claude Kali MCP Commander
An MCP server that provides secure access to Kali Linux cybersecurity tools through Claude's interface, enabling users to run Kali Linux commands directly from Claude Desktop.

Echo MCP Server
A simple server implementing the Model Context Protocol (MCP) that echoes messages back, for testing MCP clients.
Fetch MCP Server
## Playwright, SSE, and Node.js/Express: Fetching URLs from a Webpage

Here's a breakdown of how to fetch URLs from a webpage using Playwright, stream them to a client with Server-Sent Events (SSE), and manage the server with Node.js and Express.js. This example focuses on the core logic and assumes basic familiarity with these technologies.

**1. Project Setup:**

```bash
mkdir playwright-sse-example
cd playwright-sse-example
npm init -y
npm install playwright express sse-channel
```

**2. `server.js` (Node.js Express server with SSE and Playwright):**

```javascript
const express = require('express');
const { chromium } = require('playwright');
const SseChannel = require('sse-channel');

const app = express();
const port = 3000;

// SSE channel; the optional ping keeps long-lived connections alive.
const channel = new SseChannel({ pingInterval: 30000 });

// SSE endpoint: register each connecting client with the channel.
app.get('/stream', (req, res) => {
  channel.addClient(req, res);
});

// Route that triggers Playwright and streams the extracted URLs.
app.get('/fetch-urls', async (req, res) => {
  const urlToScrape = req.query.url;
  if (!urlToScrape) {
    return res
      .status(400)
      .send('Missing URL parameter. Example: /fetch-urls?url=https://example.com');
  }

  try {
    const browser = await chromium.launch();
    const page = await browser.newPage();
    await page.goto(urlToScrape);

    // Extract URLs. Adjust the selector to the site you are scraping.
    const urls = await page.evaluate(() =>
      Array.from(document.querySelectorAll('a')).map(link => link.href)
    );

    for (const url of urls) {
      channel.send({ data: url }); // one SSE event per URL
    }

    await browser.close();
    channel.send({ data: 'DONE' }); // signal completion to the client
    res.send('Scraping complete.');
    console.log('Finished scraping and streaming URLs.');
  } catch (error) {
    console.error('Error during scraping:', error);
    channel.send({ data: `ERROR: ${error.message}` });
    res.status(500).send(`Error: ${error.message}`);
  }
});

app.listen(port, () => {
  console.log(`Server listening at http://localhost:${port}`);
});
```

**3. `client.html` (simple HTML client that receives SSE):**

```html
<!DOCTYPE html>
<html>
<head>
  <title>SSE Client</title>
</head>
<body>
  <h1>SSE URL Stream</h1>
  <ul id="urlList"></ul>
  <script>
    const urlList = document.getElementById('urlList');
    const eventSource = new EventSource('http://localhost:3000/stream');

    function addItem(text) {
      const listItem = document.createElement('li');
      listItem.textContent = text;
      urlList.appendChild(listItem);
    }

    eventSource.onmessage = (event) => {
      const data = event.data;
      if (data === 'DONE') {
        addItem('Scraping Complete!');
        eventSource.close();
      } else if (data.startsWith('ERROR:')) {
        addItem(`Error: ${data.substring(7)}`);
        eventSource.close();
      } else {
        addItem(data);
      }
    };

    eventSource.onerror = (error) => {
      console.error('SSE error:', error);
      addItem('SSE Connection Error');
      eventSource.close();
    };

    function startScraping(url) {
      fetch(`http://localhost:3000/fetch-urls?url=${encodeURIComponent(url)}`)
        .then(response => {
          if (!response.ok) {
            throw new Error(`HTTP error! status: ${response.status}`);
          }
          console.log('Scraping started...');
        })
        .catch(error => {
          console.error('Error starting scraping:', error);
          addItem(`Error starting scraping: ${error.message}`);
          eventSource.close();
        });
    }

    window.onload = () => {
      const urlToScrape = prompt('Enter the URL to scrape:');
      if (urlToScrape) {
        startScraping(urlToScrape);
      } else {
        alert('No URL provided. Please refresh the page and enter a URL.');
      }
    };
  </script>
</body>
</html>
```

**How It Works:**

- **`server.js`:** The `/stream` endpoint registers each client with the `sse-channel` instance, which sets the correct SSE headers. The `/fetch-urls` route reads the target URL from the `url` query parameter, launches Chromium via Playwright, navigates to the page, and runs `page.evaluate` in the browser context to collect every `<a>` tag's `href`. Each URL is sent as a separate SSE event, the browser is closed to release resources, and a final `DONE` event signals completion. Errors are reported both as an `ERROR:` SSE event and as an HTTP 500 response.
- **`client.html`:** An `EventSource` connects to `/stream` and appends each incoming URL to the list. `DONE` and `ERROR:` messages close the connection, and the `onerror` handler covers failures of the SSE connection itself. `startScraping` kicks off the process with a `fetch` to `/fetch-urls`, and `window.onload` prompts the user for a URL once the page has loaded.

**How to Run:**

1. Install dependencies: `npm install` in the project directory.
2. Start the server: `node server.js`
3. Open `client.html` in your browser and enter a URL (e.g., `https://www.example.com`) at the prompt.
4. The URLs extracted from the page appear in the list as they arrive.

**Key Considerations:**

- **Selector adjustment:** The example selects all `<a>` tags with `document.querySelectorAll('a')`. Use your browser's developer tools to find a more specific CSS selector for the links you actually want.
- **Rate limiting:** Be mindful of rate limits when scraping; add delays between requests to avoid overloading the target site.
- **Security:** Respect the site's `robots.txt`, avoid scraping sensitive information, and sanitize URLs before rendering them on the client to prevent XSS.
- **SSE vs. WebSockets:** SSE is one-directional (server to client), which is sufficient for streaming URLs. If the client must send commands to the server mid-stream, consider WebSockets (`socket.io` or `ws`) instead. Note that the MCP in this server's name stands for Model Context Protocol; a full MCP server implementation is beyond the scope of this example.
- **Scalability:** For high-volume scraping, distribute tasks across workers with a message queue (e.g., RabbitMQ, Kafka).

This example provides a solid foundation for a URL scraper built on Playwright, SSE, and Node.js. Adapt the selectors to your target site and always respect the terms of service of the websites you scrape.
PC-MCP
Targets MCP servers running on a personal PC; currently used as a demo for the smart-pet-with-mcp project.
mcp_server
An MCP server for LLM integration.

chromium-arm64
MCP server that enables browser automation and web testing on ARM64 devices like Raspberry Pi, allowing users to navigate websites, take screenshots, execute JavaScript, and perform UI testing via Claude.

Weather MCP Server
A Model Context Protocol server that provides weather information and forecasts based on user location or address input.

Oracle Financials Cloud MCP Server by CData
mcd-demo
Creating a simple MCP server and testing its integration with a LangChain agent.
mcp_repo_9ebf5242
This repository was generated by a test script for the MCP server for GitHub.
Mcp Server Code Analyzer

stackzero-labs/mcp
A model context protocol server that enables applications to use stackzero-labs/ui components through the MCP protocol, supporting both standalone operation and integration with Claude Desktop and Cursor.

ABS MCP Server
An MCP server that provides AI assistants with access to Australian Bureau of Statistics data through the SDMX-ML API, enabling statistical data querying and analysis.