Discover Awesome MCP Servers

Extend your agent with 16,005 capabilities via MCP servers.

All16,005
Echo MCP Server

Echo MCP Server

一个简单的服务器,实现了模型上下文协议(MCP),会将消息回显,专为测试 MCP 客户端而设计。

Fetch MCP Server

Fetch MCP Server

Okay, here's a breakdown of how you can fetch URLs from a webpage using Playwright, integrate it with an SSE (Server-Sent Events) MCP (Management Control Plane) server, and use Node.js with Express.js to orchestrate everything. I'll provide code snippets and explanations to guide you. **Conceptual Overview** 1. **Playwright (Web Scraping):** Playwright will be used to launch a browser, navigate to the target webpage, and extract the URLs you need. 2. **Node.js/Express.js (Server):** Express.js will create a web server that handles requests to start the scraping process and stream the results back to the client. 3. **SSE (Server-Sent Events):** SSE will be used to push the extracted URLs from the server to the client in real-time as they are found. This is more efficient than repeatedly polling the server. 4. **MCP (Management Control Plane):** The MCP part is a bit more abstract. It implies you have a system for managing and controlling the scraping process. This could involve: * Configuration: Specifying the target URL, selectors for extracting URLs, etc. * Monitoring: Tracking the progress of the scraping job. * Error Handling: Managing errors that occur during scraping. * Scaling: Distributing the scraping workload across multiple instances. **Code Example (Illustrative)** **1. Project Setup** ```bash mkdir playwright-sse-mcp cd playwright-sse-mcp npm init -y npm install playwright express eventsource ``` **2. `server.js` (Node.js/Express.js Server)** ```javascript const express = require('express'); const { chromium } = require('playwright'); const app = express(); const port = 3000; app.use(express.json()); // For parsing JSON request bodies // In-memory storage for SSE connections (replace with a more robust solution for production) const sseClients = []; app.get('/sse', (req, res) => { res.setHeader('Content-Type', 'text/event-stream'); res.setHeader('Cache-Control', 'no-cache'); res.setHeader('Connection', 'keep-alive'); res.flushHeaders(); const clientId = Date.now(); // Unique ID for the client const newClient = { id: clientId, res, }; sseClients.push(newClient); console.log(`${clientId} Connection open`); req.on('close', () => { console.log(`${clientId} Connection closed`); sseClients = sseClients.filter((client) => client.id !== clientId); }); }); function sendSSE(data) { sseClients.forEach((client) => { client.res.write(`data: ${JSON.stringify(data)}\n\n`); }); } app.post('/scrape', async (req, res) => { const { url, selector } = req.body; // Get URL and selector from request body if (!url || !selector) { return res.status(400).send('URL and selector are required.'); } console.log(`Starting scrape for ${url} with selector ${selector}`); try { const browser = await chromium.launch(); const page = await browser.newPage(); await page.goto(url); const links = await page.locator(selector).evaluateAll(links => links.map(link => link.href)); for (const link of links) { sendSSE({ url: link }); // Send each URL via SSE } await browser.close(); console.log(`Scrape complete for ${url}`); res.status(200).send('Scrape started and URLs are being streamed.'); } catch (error) { console.error('Scrape error:', error); sendSSE({ error: error.message }); // Send error via SSE res.status(500).send('Scrape failed.'); } }); app.listen(port, () => { console.log(`Server listening at http://localhost:${port}`); }); ``` **3. `client.html` (Simple Client to Receive SSE)** ```html <!DOCTYPE html> <html> <head> <title>SSE Client</title> </head> <body> <h1>SSE Stream</h1> <ul id="urlList"></ul> <script> const urlList = document.getElementById('urlList'); const eventSource = new EventSource('http://localhost:3000/sse'); eventSource.onmessage = (event) => { const data = JSON.parse(event.data); if (data.url) { const listItem = document.createElement('li'); listItem.textContent = data.url; urlList.appendChild(listItem); } else if (data.error) { const listItem = document.createElement('li'); listItem.textContent = `Error: ${data.error}`; urlList.appendChild(listItem); } }; eventSource.onerror = (error) => { console.error('SSE error:', error); }; </script> </body> </html> ``` **4. Running the Example** 1. **Start the Server:** `node server.js` 2. **Open `client.html`** in your browser. 3. **Send a POST request to `/scrape`:** You can use `curl`, `Postman`, or a similar tool. For example: ```bash curl -X POST -H "Content-Type: application/json" -d '{"url": "https://www.example.com", "selector": "a"}' http://localhost:3000/scrape ``` Replace `"https://www.example.com"` with the URL you want to scrape and `"a"` with the CSS selector for the links you want to extract. **Explanation** * **`server.js`:** * Sets up an Express.js server. * `/sse` endpoint: Handles SSE connections. It sets the correct headers for SSE and keeps track of connected clients. * `sendSSE(data)`: Sends data to all connected SSE clients. * `/scrape` endpoint: * Receives the target URL and CSS selector from the request body. * Launches a Playwright browser. * Navigates to the URL. * Uses `page.locator(selector).evaluateAll()` to extract the `href` attributes of all elements matching the selector. * Sends each URL back to the client via SSE. * Handles errors and sends error messages via SSE. * **`client.html`:** * Creates an `EventSource` object to connect to the `/sse` endpoint. * Listens for `message` events from the server. * Parses the JSON data and displays the URLs in a list. * Handles errors. **Important Considerations for Production (MCP)** * **Configuration Management:** Instead of hardcoding the URL and selector in the `curl` command, you'd typically store them in a database or configuration file. Your MCP would provide an interface for managing these configurations. * **Job Queuing:** Use a message queue (e.g., RabbitMQ, Kafka) to queue scraping jobs. This allows you to handle a large number of requests without overloading the server. * **Scaling:** Run multiple instances of the scraping server behind a load balancer. The message queue will distribute the jobs across the instances. * **Monitoring:** Use a monitoring system (e.g., Prometheus, Grafana) to track the performance of the scraping servers, the number of jobs in the queue, and any errors that occur. * **Error Handling:** Implement robust error handling and retry mechanisms. For example, if a scraping job fails, you might retry it a few times before giving up. * **Rate Limiting:** Be respectful of the websites you are scraping. Implement rate limiting to avoid overloading their servers. * **Data Storage:** Instead of just displaying the URLs in the browser, you'll likely want to store them in a database or other data store. * **Authentication/Authorization:** Secure your MCP endpoints with authentication and authorization to prevent unauthorized access. * **Headless Mode:** Run the Playwright browser in headless mode (which is the default) for better performance. **Example of MCP Integration (Conceptual)** Let's say you have an MCP API endpoint `/api/scrape-jobs` that allows you to create new scraping jobs. The request body might look like this: ```json { "url": "https://www.example.com", "selector": "a", "callbackUrl": "https://your-data-store.com/api/store-data" // Where to send the scraped data } ``` Your server would then: 1. Receive the request to `/api/scrape-jobs`. 2. Validate the request. 3. Create a job in the message queue (e.g., RabbitMQ). 4. A worker process (one of your scraping server instances) would pick up the job from the queue. 5. The worker would scrape the URL, extract the data, and send it to the `callbackUrl`. 6. The worker would update the job status in the MCP (e.g., "in progress", "completed", "failed"). This is a simplified example, but it illustrates the basic principles of integrating Playwright with an SSE server and an MCP. The specific implementation will depend on your requirements and the architecture of your MCP. **Chinese Translation of Key Concepts** * **Web Scraping:** 网络爬虫 (wǎngluò páchóng) * **Server-Sent Events (SSE):** 服务器发送事件 (fúwùqì fāsòng shìjiàn) * **Management Control Plane (MCP):** 管理控制平面 (guǎnlǐ kòngzhì píngmiàn) * **Playwright:** Playwright (no direct translation, use the English name) * **Node.js:** Node.js (no direct translation, use the English name) * **Express.js:** Express.js (no direct translation, use the English name) * **URL:** 网址 (wǎngzhǐ) * **CSS Selector:** CSS 选择器 (CSS xuǎnzéqì) * **Endpoint:** 端点 (duāndiǎn) * **Message Queue:** 消息队列 (xiāoxi duìliè) * **Load Balancer:** 负载均衡器 (fùzài jūnhéngqì) * **Monitoring:** 监控 (jiānkòng) * **Error Handling:** 错误处理 (cuòwù chǔlǐ) * **Rate Limiting:** 速率限制 (sùlǜ xiànzhì) This comprehensive explanation and code example should give you a solid foundation for building your Playwright-based web scraping solution with SSE and MCP integration. Remember to adapt the code and architecture to your specific needs and environment. Good luck!

PC-MCP

PC-MCP

针对操作个人电脑的 MCP 服务器,该项目当前主要针对 smart-pet-with-mcp 项目的搭配演示用。 (This translation aims to be as accurate and natural as possible.)

Pylon MCP Server

Pylon MCP Server

Enables interaction with Pylon's customer support platform API to manage users, contacts, issues, and knowledge base articles through natural language commands.

mcp_server

mcp_server

LLM 集成用的 MCP 服务器

Cloudflare Remote PostgreSQL Database MCP Server

Cloudflare Remote PostgreSQL Database MCP Server

Enables chatting with PostgreSQL databases through secure GitHub OAuth authentication, supporting read operations for all users and write operations for privileged users. Deployable as a production-ready remote MCP server on Cloudflare Workers with automatic schema discovery and SQL injection protection.

DDG MCP2

DDG MCP2

A basic MCP server template built with FastMCP framework that provides example tools for echoing messages and retrieving server information. Serves as a starting point for developing custom MCP servers with Docker support and CI/CD integration.

chromium-arm64

chromium-arm64

MCP server that enables browser automation and web testing on ARM64 devices like Raspberry Pi, allowing users to navigate websites, take screenshots, execute JavaScript, and perform UI testing via Claude.

Weather MCP Server

Weather MCP Server

A Model Context Protocol server that provides weather information and forecasts based on user location or address input.

Oracle Financials Cloud MCP Server by CData

Oracle Financials Cloud MCP Server by CData

Oracle Financials Cloud MCP Server by CData

mcd-demo

mcd-demo

正在测试创建简单的 Minecraft (MCP) 服务器并与 LangChain 代理集成。

mcp_repo_9ebf5242

mcp_repo_9ebf5242

这是一个由 MCP 服务器的测试脚本为 GitHub 创建的测试仓库。

ABS MCP Server

ABS MCP Server

An MCP server that provides AI assistants with access to Australian Bureau of Statistics data through the SDMX-ML API, enabling statistical data querying and analysis.

ynab-mcp-server

ynab-mcp-server

镜子 (jìng zi)

Postman

Postman

Postman’s remote MCP server connects AI agents, assistants, and chatbots directly to your APIs on Postman. Use natural language to prompt AI to automate work across your Postman collections, environments, workspaces, and more.

Vitest MCP Server

Vitest MCP Server

AI-optimized Vitest interface that provides structured test output, visual debugging context, and intelligent coverage analysis for more effective AI assistance with testing.

code-to-tree

code-to-tree

code-to-tree

DAV MCP Server

DAV MCP Server

A Model Context Protocol server that allows users to interact with their CalDAV, CardDAV, and WebDAV services (calendars, contacts, and files) from Fastmail and Apple iCloud accounts.

Dash MCP Server

Dash MCP Server

Provides tools to interact with the Dash documentation browser API, allowing users to list installed docsets, search across documentation, and enable full-text search.

Linear MCP Server

Linear MCP Server

一个处理所有线性资源类型的线性 MCP 实现。

DuckDuckGo MCP Server

DuckDuckGo MCP Server

A basic MCP server template built with FastMCP framework that provides example tools for echoing messages and retrieving server information. This appears to be a starter template rather than a fully functional DuckDuckGo integration.

Tambo Docs MCP Server

Tambo Docs MCP Server

Enables AI assistants to discover, fetch, and search through Tambo documentation from docs.tambo.co. Provides intelligent content parsing with caching for improved performance when accessing technical documentation.

MCP API Server Template

MCP API Server Template

Weather MCP Server

Weather MCP Server

Provides current weather data and hourly forecasts for any location worldwide using the Open-Meteo API, accessible through HTTP transport without requiring an API key.

A Cloud Automator MCP server

A Cloud Automator MCP server

一个非官方的 MCP 服务器,用于使用 Cloud Automator REST API。

MCP Server Markup Language (MCPML)

MCP Server Markup Language (MCPML)

MCP 服务器标记语言 (MCPML) - 一个 Python 框架,用于构建具有 CLI 和 OpenAI 代理支持的 MCP 服务器。

XRAY MCP

XRAY MCP

Enables AI assistants to understand and navigate codebases through structural analysis. Provides code mapping, symbol search, and impact analysis using ast-grep for accurate parsing of Python, JavaScript, TypeScript, and Go projects.

GitLab MCP Server

GitLab MCP Server

MCP Selenium Server

MCP Selenium Server

A Model Context Protocol server implementation that enables browser automation through standardized MCP clients, supporting features like navigation, element interaction, and screenshots across Chrome, Firefox, and Edge browsers.

A2A MCP Server

A2A MCP Server

A centralized server that tracks and manages connected agents, providing a web interface to monitor their status while enabling agent communication through a central point.