Discover Awesome MCP Servers
Extend your agent with 14,617 capabilities via MCP servers.
- All (14,617)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)
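Most MCP clients attach any of the servers listed below through a short configuration entry. A minimal sketch in the Claude Desktop style, where the server name, command, and module are placeholder values rather than any specific listing:

```json
{
  "mcpServers": {
    "example-server": {
      "command": "python",
      "args": ["-m", "example_mcp_server"]
    }
  }
}
```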

XRAY MCP
Enables AI assistants to understand and navigate codebases through structural analysis. Provides code mapping, symbol search, and impact analysis using ast-grep for accurate parsing of Python, JavaScript, TypeScript, and Go projects.
Mcp Starwars
An MCP server for SWAPI, the Star Wars API. The project's main goal is to show how an MCP server can be used to interact with external APIs.
Procesio MCP Server
An MCP server for interacting with the Procesio API.
llm-mcp-server-template
A project template for LLM MCP server development.
GitLab MCP Server
MCP API Server Template

MCP Meeting Summary System
An AI-powered meeting assistant that combines a FastAPI backend with a React frontend to generate high-quality meeting summaries and provide Q&A functionality using OpenAI and Selenium.

TG_MCP
A lightweight Python interface that exposes TigerGraph database operations as structured tools and URI-based resources for Claude and other MCP agents.
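As a rough illustration of the pattern TG_MCP describes, here is a hedged sketch that exposes one TigerGraph operation as an MCP tool and the graph schema as a URI-based resource, using the Python MCP SDK and pyTigerGraph. The connection details and names are assumptions, not TG_MCP's actual code:

```python
import pyTigerGraph as tg
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("tigergraph")
# Placeholder connection; a real deployment would read these from config.
conn = tg.TigerGraphConnection(host="http://localhost", graphname="MyGraph")

@mcp.tool()
def run_query(name: str, params: str = "") -> list:
    """Run an installed GSQL query and return its result set."""
    return conn.runInstalledQuery(name, params)

@mcp.resource("tg://schema")
def schema() -> dict:
    """Expose the graph schema as a URI-addressable MCP resource."""
    return conn.getSchema()

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```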

JetsonMCP
Connects AI assistants to NVIDIA Jetson Nano systems for edge computing management, enabling natural language control of AI workloads, hardware optimization, and system administration tasks.

mcp-confluent
An MCP server implementation built to interact with Confluent Kafka and the Confluent Cloud REST APIs.

Pagos Data MCP Server
Enables Claude to retrieve BIN (Bank Identification Number) data for payment cards, with options for basic or enhanced insights through the Pagos API.
MCP Server Markup Language (MCPML)
MCP Server Markup Language (MCPML): a Python framework for building MCP servers with CLI and OpenAI Agent support.

Enrichment MCP Server
A Model Context Protocol server that enables users to perform third-party enrichment lookups for security observables (IP addresses, domains, URLs, emails) through services like VirusTotal, Shodan, and others.
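To illustrate the kind of lookup such a server performs, here is a hedged sketch of a single IP-enrichment tool against VirusTotal's public v3 API; the tool name and environment variable are assumptions, not this server's actual interface:

```python
import os

import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("enrichment")

@mcp.tool()
def lookup_ip(ip: str) -> dict:
    """Fetch VirusTotal's report for an IP address observable."""
    resp = requests.get(
        f"https://www.virustotal.com/api/v3/ip_addresses/{ip}",
        headers={"x-apikey": os.environ["VT_API_KEY"]},  # key assumed in env
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    mcp.run()
```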
Remote MCP Server on Cloudflare

Claude Kali MCP Commander
An MCP server that provides secure access to Kali Linux cybersecurity tools through Claude's interface, enabling users to run Kali Linux commands directly from Claude Desktop.

Echo MCP Server
A simple server implementing the Model Context Protocol (MCP) that echoes messages back, designed for testing MCP client applications.
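An echo server like this is close to the smallest useful MCP server. A minimal sketch using the Python MCP SDK's FastMCP class (assuming the official `mcp` package), not this project's actual code:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("echo")

@mcp.tool()
def echo(message: str) -> str:
    """Return the message unchanged -- useful for testing MCP clients."""
    return message

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```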

chromium-arm64
MCP server that enables browser automation and web testing on ARM64 devices like Raspberry Pi, allowing users to navigate websites, take screenshots, execute JavaScript, and perform UI testing via Claude.

Weather MCP Server
A Model Context Protocol server that provides weather information and forecasts based on user location or address input.
Mcp Server Code Analyzer

stackzero-labs/mcp
A Model Context Protocol server that enables applications to use stackzero-labs/ui components, supporting both standalone operation and integration with Claude Desktop and Cursor.
ynab-mcp-server
Mirror of
File System MCP Server
Mirror of

PostgreSQL MCP Server
A Model Context Protocol server that provides read-only access to PostgreSQL databases, allowing LLMs to inspect database schemas and execute read-only queries.
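A hedged sketch of the read-only pattern this entry describes, using the Python MCP SDK and psycopg; the DSN and tool name are placeholders, and the real server's implementation may differ:

```python
import psycopg
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("postgres")
DSN = "postgresql://localhost/mydb"  # placeholder connection string

@mcp.tool()
def query(sql: str) -> list:
    """Run a SQL query in a read-only session and return the rows."""
    with psycopg.connect(DSN) as conn:
        conn.read_only = True  # the session rejects any statement that writes
        with conn.cursor() as cur:
            cur.execute(sql)
            return cur.fetchall()

if __name__ == "__main__":
    mcp.run()
```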

MCP-RAG
An MCP-compatible system that handles large files (up to 200MB) with intelligent chunking and multi-format document support for advanced retrieval-augmented generation.

Firefly III MCP Server
Enables AI tools to interact with Firefly III personal finance management instances through a cloud-deployed MCP server. Supports financial operations like account management, transactions, budgeting, and reporting with configurable tool presets.

Tambo Docs MCP Server
Enables AI assistants to discover, fetch, and search through Tambo documentation from docs.tambo.co. Provides intelligent content parsing with caching for improved performance when accessing technical documentation.
Fetch MCP Server
Fetches the URLs found on a webpage using Playwright and streams them to clients via Server-Sent Events (SSE), orchestrated by a Node.js/Express server that also acts as a management layer for starting, monitoring, and rate-limiting scrapes.
mcp_server
An MCP server for LLM integration.

shortcuts-mcp-server