Discover Awesome MCP Servers
Extend your agent with 26,519 capabilities via MCP servers.
- All (26,519)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)
Remote MCP Server
A serverless implementation of Model Context Protocol (MCP) on Cloudflare Workers that allows AI models to access custom tools without authentication.
mcp_repo_9ebf5242
A test repository created on GitHub by an MCP server's test script.
MCP Synth Controller
Enables LLMs to control synthesizer parameters in real-time by translating natural language commands into OSC messages sent to a JUCE synthesizer application.
MCP Server Code Analyzer
Trello MCP Server
An MCP server that connects Claude to Trello, enabling comprehensive management of boards, lists, and cards through natural language conversation. It supports a wide range of actions including searching cards, adding comments, managing checklists, and tracking board activity.
ABS MCP Server
An MCP server that provides AI assistants with access to Australian Bureau of Statistics data through the SDMX-ML API, enabling statistical data querying and analysis.
ynab-mcp-server
A mirror repository.
ImaginePro MCP Server
Enables AI assistants to generate images and videos through natural language using ImaginePro's API. Supports text-to-image generation, video creation, image upscaling, variants, inpainting, and multi-modal generation with real-time progress tracking.
MCP API Server Template
Weather MCP Server
Provides current weather data and hourly forecasts for any location worldwide using the Open-Meteo API, accessible through HTTP transport without requiring an API key.
Kedro RAG MCP
An MCP server that enables users to query Kedro framework documentation using retrieval-augmented generation. It builds a local knowledge base from documentation files to help users navigate and apply Kedro's data science pipeline framework.
Pagos Data MCP Server
Enables Claude to retrieve BIN (Bank Identification Number) data for payment cards, with options for basic or enhanced insights through the Pagos API.
A Cloud Automator MCP server
An unofficial MCP server for using the Cloud Automator REST API.
Trino MCP Server
Enables database schema analysis and management for Trino servers through dynamic connections. Supports DDL validation, dependency analysis, schema documentation generation, and safe SQL execution with multiple concurrent connections.
BFL MCP Server
An MCP server that enables generating and editing high-quality images with natural language using Black Forest Labs' FLUX.1 Kontext Pro model.
MCP Server Testing Web App
PyVista MCP Server
Enables 3D visualization capabilities through PyVista integration, allowing users to generate and export interactive 3D shape grids as HTML files. It provides tools for creating visual representations of geometric objects like cylinders, spheres, and arrows.
Letter Counter MCP Server
An MCP server that enables LLMs to count the occurrences of a specific letter in a word, created as a learning example for the Model Context Protocol.
MCP-Kit Developer Task Assignment System
Enables intelligent task assignment to developers using hybrid AI algorithms that match tasks based on past experience, skill sets, workload balance, and project alignment. Features enterprise-grade security with AES-256 encryption and 75% performance optimization through smart caching.
Meme MCP Server
A Model Context Protocol server for managing, searching, and retrieving local meme images. It automatically hosts memes via a built-in HTTP server to ensure images can be displayed within chat applications.
PortOne Global MCP Server
Enables searching and reading of PortOne documentation, including OpenAPI schemas and product guides, through the Model Context Protocol. It allows AI agents to easily access and integrate payment-related technical specifications into their workflows.
MCP Server Markup Language (MCPML)
MCP Server Markup Language (MCPML): a Python framework for building MCP servers with CLI and OpenAI agent support.
llmkit-mcp-server
Query AI spending data from LLMKit. Track costs, budgets, usage stats, and session summaries across 11 AI providers.
MCP Google Calendar Server
Enables creating and managing Google Calendar events through OAuth 2.0 authentication, supporting event details like title, time, location, description, and attendees.
MCP server for LogSeq
Interacts with LogSeq via its API.
Echo MCP Server
A simple server implementing the Model Context Protocol (MCP) that echoes messages back, designed for testing MCP clients.
Fetch MCP Server
Scrapes URLs from web pages with Playwright and streams the extracted links to connected clients in real time over Server-Sent Events (SSE). Built on Node.js and Express.js, it exposes a `/scrape` endpoint that accepts a target URL and a CSS selector, and an `/sse` endpoint that pushes each extracted link to subscribers as it is found.
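The streaming side of this design is easy to sketch. The snippet below shows how a server like this frames each scraped URL as an SSE message before writing it to a client connection; the `formatSSE` helper name is illustrative, not taken from the project, but the wire format (`data:` lines terminated by a blank line) is what the SSE specification requires.

```javascript
// Sketch: framing a payload as a Server-Sent Events message.
// An SSE frame is plain text: one or more "data:" lines followed by a
// blank line. The server would write this string to each open client
// response stream (e.g. res.write(frame) in Express).
function formatSSE(payload) {
  return `data: ${JSON.stringify(payload)}\n\n`;
}

// Each extracted link is pushed to subscribers as its own event.
const frame = formatSSE({ url: "https://www.example.com/about" });
console.log(frame);
```

On the browser side, an `EventSource` pointed at the `/sse` endpoint receives each frame as a `message` event whose `data` property is the JSON string.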
PC-MCP
An MCP server for operating a personal computer; the project is currently intended mainly as a companion demo for the smart-pet-with-mcp project.
BlackLotus MCP Server
Provides AI tools for generating project planning documents including technical design plans, phase-based task checklists, and detailed role definitions as plain text files.
MCP Server for LimeSurvey
Enables interaction with LimeSurvey API to create and manage surveys, questions, question groups, activate surveys, and export responses through standardized MCP endpoints.