Discover Awesome MCP Servers

Extend your agent with 17,337 capabilities via MCP servers.

Remote MCP Server on Cloudflare

Union MCP

An MCP server that lets Claude models use Union tasks, workflows, and apps as tools within a conversation.

Claude Kali MCP Commander

An MCP server that provides secure access to Kali Linux cybersecurity tools through Claude's interface, enabling users to run commands directly from Claude Desktop.

Echo MCP Server

A simple server implementing the Model Context Protocol (MCP) that echoes messages back, designed for testing MCP clients.

Fetch MCP Server

Here's a breakdown of how to fetch URLs from a webpage using Playwright, stream them to clients from an SSE (Server-Sent Events) MCP (Management Control Plane) server, and orchestrate everything with Node.js and Express.js, with code snippets and explanations.

**Conceptual Overview**

1. **Playwright (web scraping):** launches a browser, navigates to the target webpage, and extracts the URLs you need.
2. **Node.js/Express.js (server):** creates a web server that handles requests to start the scraping process and streams the results back to the client.
3. **SSE (Server-Sent Events):** pushes the extracted URLs from the server to the client in real time as they are found. This is more efficient than repeatedly polling the server.
4. **MCP (Management Control Plane):** the more abstract part. It implies a system for managing and controlling the scraping process, which could involve:
   * Configuration: specifying the target URL, selectors for extracting URLs, etc.
   * Monitoring: tracking the progress of the scraping job.
   * Error handling: managing errors that occur during scraping.
   * Scaling: distributing the scraping workload across multiple instances.

**Code Example (Illustrative)**

**1. Project Setup**

```bash
mkdir playwright-sse-mcp
cd playwright-sse-mcp
npm init -y
npm install playwright express
```

**2. `server.js` (Node.js/Express.js Server)**

```javascript
const express = require('express');
const { chromium } = require('playwright');

const app = express();
const port = 3000;

app.use(express.json()); // For parsing JSON request bodies

// In-memory storage for SSE connections (replace with a more robust solution for production).
// Note: must be `let`, not `const` — the close handler below reassigns it.
let sseClients = [];

app.get('/sse', (req, res) => {
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');
  res.flushHeaders();

  const clientId = Date.now(); // Unique ID for the client
  sseClients.push({ id: clientId, res });
  console.log(`${clientId} Connection open`);

  req.on('close', () => {
    console.log(`${clientId} Connection closed`);
    sseClients = sseClients.filter((client) => client.id !== clientId);
  });
});

// Broadcast a JSON payload to every connected SSE client.
function sendSSE(data) {
  sseClients.forEach((client) => {
    client.res.write(`data: ${JSON.stringify(data)}\n\n`);
  });
}

app.post('/scrape', async (req, res) => {
  const { url, selector } = req.body; // Get URL and selector from the request body
  if (!url || !selector) {
    return res.status(400).send('URL and selector are required.');
  }

  console.log(`Starting scrape for ${url} with selector ${selector}`);
  try {
    const browser = await chromium.launch();
    const page = await browser.newPage();
    await page.goto(url);

    // Extract the href attribute of every element matching the selector.
    const links = await page
      .locator(selector)
      .evaluateAll((els) => els.map((el) => el.href));

    for (const link of links) {
      sendSSE({ url: link }); // Send each URL via SSE
    }

    await browser.close();
    console.log(`Scrape complete for ${url}`);
    res.status(200).send('Scrape started and URLs are being streamed.');
  } catch (error) {
    console.error('Scrape error:', error);
    sendSSE({ error: error.message }); // Send the error via SSE
    res.status(500).send('Scrape failed.');
  }
});

app.listen(port, () => {
  console.log(`Server listening at http://localhost:${port}`);
});
```
**3. `client.html` (Simple Client to Receive SSE)**

```html
<!DOCTYPE html>
<html>
<head>
  <title>SSE Client</title>
</head>
<body>
  <h1>SSE Stream</h1>
  <ul id="urlList"></ul>

  <script>
    const urlList = document.getElementById('urlList');
    const eventSource = new EventSource('http://localhost:3000/sse');

    eventSource.onmessage = (event) => {
      const data = JSON.parse(event.data);
      if (data.url) {
        const listItem = document.createElement('li');
        listItem.textContent = data.url;
        urlList.appendChild(listItem);
      } else if (data.error) {
        const listItem = document.createElement('li');
        listItem.textContent = `Error: ${data.error}`;
        urlList.appendChild(listItem);
      }
    };

    eventSource.onerror = (error) => {
      console.error('SSE error:', error);
    };
  </script>
</body>
</html>
```

**4. Running the Example**

1. **Start the server:** `node server.js`
2. **Open `client.html`** in your browser.
3. **Send a POST request to `/scrape`** using `curl`, Postman, or a similar tool. For example:

```bash
curl -X POST -H "Content-Type: application/json" \
  -d '{"url": "https://www.example.com", "selector": "a"}' \
  http://localhost:3000/scrape
```

Replace `"https://www.example.com"` with the URL you want to scrape and `"a"` with the CSS selector for the links you want to extract.

**Explanation**

* **`server.js`:**
  * Sets up an Express.js server.
  * `/sse` endpoint: handles SSE connections. It sets the correct headers for SSE and keeps track of connected clients.
  * `sendSSE(data)`: sends data to all connected SSE clients.
  * `/scrape` endpoint:
    * Receives the target URL and CSS selector from the request body.
    * Launches a Playwright browser and navigates to the URL.
    * Uses `page.locator(selector).evaluateAll()` to extract the `href` attributes of all elements matching the selector.
    * Sends each URL back to the client via SSE.
    * Handles errors and sends error messages via SSE.
* **`client.html`:**
  * Creates an `EventSource` object to connect to the `/sse` endpoint.
  * Listens for `message` events from the server.
  * Parses the JSON data and displays the URLs in a list.
  * Handles errors.

**Important Considerations for Production (MCP)**

* **Configuration management:** instead of hardcoding the URL and selector in the `curl` command, you'd typically store them in a database or configuration file. Your MCP would provide an interface for managing these configurations.
* **Job queuing:** use a message queue (e.g., RabbitMQ, Kafka) to queue scraping jobs. This allows you to handle a large number of requests without overloading the server.
* **Scaling:** run multiple instances of the scraping server behind a load balancer. The message queue will distribute the jobs across the instances.
* **Monitoring:** use a monitoring system (e.g., Prometheus, Grafana) to track the performance of the scraping servers, the number of jobs in the queue, and any errors that occur.
* **Error handling:** implement robust error handling and retry mechanisms. For example, if a scraping job fails, you might retry it a few times before giving up.
* **Rate limiting:** be respectful of the websites you are scraping. Implement rate limiting to avoid overloading their servers (a minimal sketch follows this list).
* **Data storage:** instead of just displaying the URLs in the browser, you'll likely want to store them in a database or other data store.
* **Authentication/authorization:** secure your MCP endpoints with authentication and authorization to prevent unauthorized access.
* **Headless mode:** run the Playwright browser in headless mode (the default) for better performance.
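As one concrete illustration of the rate-limiting point, here is a minimal per-host politeness delay. This is a sketch, not production-grade rate limiting; `POLITE_DELAY_MS`, `lastVisit`, and `politeDelay` are hypothetical names introduced for this example, and the helper is meant to be called inside the `/scrape` handler before `page.goto(url)`.

```javascript
// Minimal per-host politeness delay (sketch; not production-grade rate limiting).
const POLITE_DELAY_MS = 1000; // at most one request per host per second
const lastVisit = new Map(); // host -> timestamp of the most recent request

async function politeDelay(targetUrl) {
  const host = new URL(targetUrl).host;
  const elapsed = Date.now() - (lastVisit.get(host) ?? 0);
  if (elapsed < POLITE_DELAY_MS) {
    // Wait out the remainder of the delay window before hitting this host again.
    await new Promise((resolve) => setTimeout(resolve, POLITE_DELAY_MS - elapsed));
  }
  lastVisit.set(host, Date.now());
}

// Usage inside the /scrape handler, before page.goto(url):
//   await politeDelay(url);
```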
**Example of MCP Integration (Conceptual)**

Say you have an MCP API endpoint `/api/scrape-jobs` that allows you to create new scraping jobs, where `callbackUrl` tells the worker where to send the scraped data. The request body might look like this:

```json
{
  "url": "https://www.example.com",
  "selector": "a",
  "callbackUrl": "https://your-data-store.com/api/store-data"
}
```

Your server would then:

1. Receive the request to `/api/scrape-jobs`.
2. Validate the request.
3. Create a job in the message queue (e.g., RabbitMQ).
4. A worker process (one of your scraping server instances) picks the job up from the queue, as in the worker sketch below.
5. The worker scrapes the URL, extracts the data, and sends it to the `callbackUrl`.
6. The worker updates the job status in the MCP (e.g., "in progress", "completed", "failed").

This is a simplified example, but it illustrates the basic principles of integrating Playwright with an SSE server and an MCP. The specific implementation will depend on your requirements and the architecture of your MCP.
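A minimal sketch of the worker in step 4, assuming RabbitMQ is consumed via the `amqplib` package. The `scrape-jobs` queue name and the `scrapeAndPost` helper are assumptions made for this example; the scraping logic mirrors `server.js`, and the global `fetch` requires Node 18+.

```javascript
// Worker sketch (illustrative): pulls scrape jobs from RabbitMQ via amqplib.
// The 'scrape-jobs' queue name and scrapeAndPost helper are assumptions.
const amqp = require('amqplib');
const { chromium } = require('playwright');

// Hypothetical helper: scrape with Playwright, then POST the links to the callback.
async function scrapeAndPost({ url, selector, callbackUrl }) {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url);
  const links = await page
    .locator(selector)
    .evaluateAll((els) => els.map((el) => el.href));
  await browser.close();
  await fetch(callbackUrl, { // global fetch (Node 18+)
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ links }),
  });
}

async function main() {
  const conn = await amqp.connect('amqp://localhost');
  const channel = await conn.createChannel();
  await channel.assertQueue('scrape-jobs', { durable: true });
  channel.prefetch(1); // one job at a time per worker

  channel.consume('scrape-jobs', async (msg) => {
    if (!msg) return;
    const job = JSON.parse(msg.content.toString()); // { url, selector, callbackUrl }
    try {
      await scrapeAndPost(job);
      channel.ack(msg); // job completed
    } catch (err) {
      console.error('Job failed:', err);
      channel.nack(msg, false, true); // requeue for a retry
    }
  });
}

main().catch(console.error);
```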

Chuk Design System

Provides access to a universal design token system with 7 pre-built themes (colors, typography, spacing, motion) and multi-format export capabilities for Canva, Remotion, PPTX, CSS, and W3C JSON.

mcp_server

An MCP server for LLM integration.

shortcuts-mcp-server

MCP Remote Server

Enables execution of SSH commands on remote servers and management of Google Compute Engine (GCE) instances through Cursor IDE.

Acumatica MCP Server by CData

This read-only MCP server lets you connect to Acumatica data from Claude Desktop through CData JDBC Drivers. Free (beta) read/write servers are available at https://www.cdata.com/solutions/mcp

Stock MCP Server

Minecraft RCON MCP Server

A tiny Minecraft MCP server that uses the RCON interface.

Angular MCP Toolkit

Provides 11 powerful tools for Angular development with enterprise-grade templates, scaffolding capabilities, and project generation features for modern Angular 20.0.x applications.

Stock MCP Server

A real-time stock quote MCP server built on the Model Context Protocol (MCP).

Pydantic MCP Agent with Chainlit

This repository uses MCP servers to seamlessly integrate multiple agent tools.

MCP Codebase Index

Enables semantic search across your codebase using Google's Gemini embeddings and Qdrant Cloud vector storage. Supports 15+ programming languages with smart code chunking and real-time file change monitoring.
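The pattern this describes (embed the query with Gemini, search a Qdrant collection of code chunks) looks roughly like the sketch below. This is a generic illustration under assumed names, not this server's actual code: the `codebase` collection name, the environment variables, and the payload shape are all assumptions, using the `@google/generative-ai` and `@qdrant/js-client-rest` packages.

```javascript
// Sketch of the general embed-and-search pattern; not this server's implementation.
const { GoogleGenerativeAI } = require('@google/generative-ai');
const { QdrantClient } = require('@qdrant/js-client-rest');

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);
const embedder = genAI.getGenerativeModel({ model: 'text-embedding-004' });
const qdrant = new QdrantClient({ url: process.env.QDRANT_URL, apiKey: process.env.QDRANT_API_KEY });

// Embed the query, then return the top-k chunks from a 'codebase' collection (assumed name).
async function searchCode(query, limit = 5) {
  const { embedding } = await embedder.embedContent(query);
  const hits = await qdrant.search('codebase', { vector: embedding.values, limit });
  return hits.map((hit) => hit.payload); // payload shape depends on how chunks were indexed
}

searchCode('where do we parse JWT tokens?').then(console.log).catch(console.error);
```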

QMT-MCP-Server

Mirror.

MCP Web Extractor

An MCP server that extracts web page content using Readability.js.

DAV MCP Server

A Model Context Protocol server that allows users to interact with their CalDAV, CardDAV, and WebDAV services (calendars, contacts, and files) from Fastmail and Apple iCloud accounts.

FreeCAD MCP

Enables control of FreeCAD CAD software from Claude Desktop through natural language commands. Supports creating, editing, and managing 3D objects, executing Python code, and generating screenshots of designs.

MCP-server-using-python

We are building a local MCP server called mix_server that connects to Claude Desktop, so you can interact with your tools through natural language.

MyFitnessPal MCP Server

Enables retrieval and analysis of MyFitnessPal nutrition data including daily summaries, meal breakdowns, exercise tracking, and macro/micronutrient analysis. Uses browser cookie authentication to access your personal MyFitnessPal account data through natural language queries.

Tambo Docs MCP Server

Enables AI assistants to discover, fetch, and search through Tambo documentation from docs.tambo.co. Provides intelligent content parsing with caching for improved performance when accessing technical documentation.

MCP API Server Template

MCP Meeting Summary System

An AI-powered meeting assistant that combines FastAPI backend with React frontend to generate high-quality meeting summaries and provide Q&A functionality using OpenAI and Selenium.

TG_MCP

A lightweight Python interface that exposes TigerGraph database operations as structured tools and URI-based resources for Claude and other MCP agents.

Weather MCP Server

Provides current weather data and hourly forecasts for any location worldwide using the Open-Meteo API, accessible through HTTP transport without requiring an API key.
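For context, an Open-Meteo current-weather request is a single keyless HTTPS call; here is a minimal sketch (the coordinates are just an example, and this is not this server's actual implementation; the global `fetch` requires Node 18+):

```javascript
// Minimal Open-Meteo request; no API key required.
async function currentWeather(latitude, longitude) {
  const url = `https://api.open-meteo.com/v1/forecast?latitude=${latitude}&longitude=${longitude}&current_weather=true`;
  const res = await fetch(url); // global fetch (Node 18+)
  if (!res.ok) throw new Error(`Open-Meteo request failed: ${res.status}`);
  return (await res.json()).current_weather; // { temperature, windspeed, weathercode, ... }
}

// Example: current conditions in Berlin.
currentWeather(52.52, 13.41).then(console.log).catch(console.error);
```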

JetsonMCP

Connects AI assistants to NVIDIA Jetson Nano systems for edge computing management, enabling natural language control of AI workloads, hardware optimization, and system administration tasks.

mcp-confluent

An MCP server implementation designed to interact with Confluent Kafka and the Confluent Cloud REST APIs.

Pagos Data MCP Server

Enables Claude to retrieve BIN (Bank Identification Number) data for payment cards, with options for basic or enhanced insights through the Pagos API.