Discover Awesome MCP Servers
Extend your agent with 16,638 capabilities via MCP servers.
- All (16,638)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)
Vitest MCP Server
AI-optimized Vitest interface that provides structured test output, visual debugging context, and intelligent coverage analysis for more effective AI assistance with testing.
EPICS MCP Server
A Python-based server that interacts with EPICS process variables, allowing users to retrieve PV values, set PV values, and fetch detailed information about PVs through a standardized interface.
MCP Server
A Python implementation of the Model Context Protocol (MCP) that connects client applications with AI models, primarily Anthropic's models, with setup instructions for local development and deployment.
VNDB MCP Server
A Model Context Protocol (MCP) server for accessing the Visual Novel Database (VNDB) API, enabling Claude AI to search for and retrieve information about visual novels.
Trino MCP Server
Enables database schema analysis and management for Trino servers through dynamic connections. Supports DDL validation, dependency analysis, schema documentation generation, and safe SQL execution with multiple concurrent connections.
MCP Server for LimeSurvey
Enables interaction with LimeSurvey API to create and manage surveys, questions, question groups, activate surveys, and export responses through standardized MCP endpoints.
code-to-tree
Dash MCP Server
Provides tools to interact with the Dash documentation browser API, allowing users to list installed docsets, search across documentation, and enable full-text search.
Linear MCP Server
An MCP implementation for Linear that handles all Linear resource types.
DuckDuckGo MCP Server
A basic MCP server template built with FastMCP framework that provides example tools for echoing messages and retrieving server information. This appears to be a starter template rather than a fully functional DuckDuckGo integration.
Azure Pricing MCP Server
Provides AI assistants with real-time access to Azure retail pricing information, enabling price searches, regional cost comparisons, monthly bill estimates, and SKU discovery through natural language queries.
Tambo Docs MCP Server
Enables AI assistants to discover, fetch, and search through Tambo documentation from docs.tambo.co. Provides intelligent content parsing with caching for improved performance when accessing technical documentation.
MCP API Server Template
MCP Server Markup Language (MCPML)
MCP Server Markup Language (MCPML) - A Python framework for building MCP servers with CLI and OpenAI Agent support.
PC-MCP
This project currently serves mainly as a demonstration alongside the smart-pet-with-mcp project, providing an MCP server for operating a personal computer.
Pylon MCP Server
Enables interaction with Pylon's customer support platform API to manage users, contacts, issues, and knowledge base articles through natural language commands.
mcp_server
An MCP server for LLM integration.
MCP Weather Notes Server
Provides real-time weather data from Open-Meteo API and enables creating and retrieving notes stored locally in JSON format.
API MCP Server
A TypeScript-based MCP server that implements a simple notes system, providing resources for accessing notes via URIs, tools for creating notes, and prompts for generating summaries.
C++ Style Guide MCP Server
Provides professional C++ code style checking, best practices analysis, and modern C++ upgrade suggestions with tools for naming conventions, memory safety, const correctness, and access to comprehensive C++ style documentation.
DDG MCP2
A basic MCP server template built with FastMCP framework that provides example tools for echoing messages and retrieving server information. Serves as a starting point for developing custom MCP servers with Docker support and CI/CD integration.
Claude Kali MCP Commander
An MCP server that provides secure access to Kali Linux cybersecurity tools through Claude's interface, enabling users to run Kali Linux commands directly from Claude Desktop.
Echo MCP Server
A simple server implementing the Model Context Protocol (MCP) that echoes messages back, designed for testing MCP clients.
Fetch MCP Server
Fetches URLs from a webpage using Playwright and streams the discovered links to clients in real time over Server-Sent Events (SSE), orchestrated by a Node.js/Express server.
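A minimal sketch of that architecture, assuming a single `/scrape` trigger endpoint and an `/events` SSE endpoint (the endpoint names and port are illustrative, not taken from the actual server):

```javascript
// Hypothetical sketch: Express server that scrapes <a href> values with
// Playwright and pushes them to connected clients over Server-Sent Events.
const express = require('express');
const { chromium } = require('playwright');

const app = express();
const clients = new Set(); // open SSE responses

// SSE endpoint: keep the response open and register it for broadcasts.
app.get('/events', (req, res) => {
  res.set({
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    Connection: 'keep-alive',
  });
  res.flushHeaders();
  clients.add(res);
  req.on('close', () => clients.delete(res));
});

// Write one SSE frame (event name + JSON payload) to every client.
function broadcast(event, data) {
  for (const res of clients) {
    res.write(`event: ${event}\ndata: ${JSON.stringify(data)}\n\n`);
  }
}

// Trigger endpoint: scrape the given URL and stream results to /events.
app.get('/scrape', async (req, res) => {
  const url = req.query.url;
  if (!url) return res.status(400).send('Missing url parameter');
  res.send('Scraping started; listen on /events for results');

  const browser = await chromium.launch({ headless: true });
  try {
    const page = await browser.newPage();
    await page.goto(url);
    // Collect the href of every anchor on the page.
    const links = await page.$$eval('a', (anchors) => anchors.map((a) => a.href));
    links.forEach((href) => broadcast('url', { url: href }));
    broadcast('complete', { count: links.length });
  } catch (err) {
    broadcast('error', { message: err.message });
  } finally {
    await browser.close();
  }
});

app.listen(3000);
```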
Chuk Design System
Provides access to a universal design token system with 7 pre-built themes (colors, typography, spacing, motion) and multi-format export capabilities for Canva, Remotion, PPTX, CSS, and W3C JSON.
shortcuts-mcp-server
Acumatica MCP Server by CData
This read-only MCP Server allows you to connect to Acumatica data from Claude Desktop through CData JDBC Drivers. Free (beta) read/write servers available at https://www.cdata.com/solutions/mcp
SuperMemory MCP
A tool that makes memories stored in ChatGPT accessible across various language models without requiring logins or paywalls.
Azure DevOps MCP Proxy
Enables interaction with Azure DevOps through Personal Access Token authentication. Supports work item management, wiki operations, project/repository listing, and build pipeline access through natural language.
Neo4j MCP Clients & Servers
Model Context Protocol with Neo4j