Discover Awesome MCP Servers

Extend your agent with 28,766 capabilities via MCP servers.

All28,766
Gmail AutoAuth MCP Server

Gmail AutoAuth MCP Server

Enables AI assistants to manage Gmail through natural language, supporting operations like sending, searching, and reading emails. It includes comprehensive label management, batch processing, and a streamlined OAuth2 auto-authentication flow.

PyVista MCP Server

PyVista MCP Server

Enables 3D visualization capabilities through PyVista integration, allowing users to generate and export interactive 3D shape grids as HTML files. It provides tools for creating visual representations of geometric objects like cylinders, spheres, and arrows.

Letter Counter MCP Server

Letter Counter MCP Server

Một máy chủ MCP cho phép LLM đếm số lần xuất hiện của các chữ cái cụ thể trong các từ, được tạo ra như một ví dụ học tập cho Giao thức Ngữ cảnh Mô hình (Model Context Protocol).

Meme MCP Server

Meme MCP Server

A Model Context Protocol server for managing, searching, and retrieving local meme images. It automatically hosts memes via a built-in HTTP server to ensure images can be displayed within chat applications.

Satellite Tracking MCP Server

Satellite Tracking MCP Server

Provides access to real-time satellite tracking data using the N2YO API, enabling users to get satellite positions, predict visible passes, search satellites by name or category, and retrieve TLE orbital data.

Geocode MCP Server

Geocode MCP Server

Provides global geocoding capabilities to convert city names and addresses into latitude/longitude coordinates using the free OpenStreetMap Nominatim API.

XRAY MCP

XRAY MCP

Enables AI assistants to understand and navigate codebases through structural analysis. Provides code mapping, symbol search, and impact analysis using ast-grep for accurate parsing of Python, JavaScript, TypeScript, and Go projects.

A Cloud Automator MCP server

A Cloud Automator MCP server

Một máy chủ MCP không chính thức để sử dụng Cloud Automator REST API.

PortOne Global MCP Server

PortOne Global MCP Server

Enables searching and reading of PortOne documentation, including OpenAPI schemas and product guides, through the Model Context Protocol. It allows AI agents to easily access and integrate payment-related technical specifications into their workflows.

MCP Server Markup Language (MCPML)

MCP Server Markup Language (MCPML)

MCP Server Markup Language (MCPML) - Một framework Python để xây dựng các MCP Server với sự hỗ trợ CLI và OpenAI Agent.

llmkit-mcp-server

llmkit-mcp-server

Query AI spending data from LLMKit. Track costs, budgets, usage stats, and session summaries across 11 AI providers.

SimBrief MCP Server

SimBrief MCP Server

SimBrief MCP Server brings flight planning data directly into your AI workspace. Access your SimBrief flight plans (OFP), retrieve real-time METAR weather reports for any airport, and fetch pilot information - all through natural conversation. Built for virtual pilots who want to streamline their pre-flight briefing workflow in Microsoft Flight Simulator, X-Plane, and other flight simulation platf

MCP Google Calendar Server

MCP Google Calendar Server

Enables creating and managing Google Calendar events through OAuth 2.0 authentication, supporting event details like title, time, location, description, and attendees.

OpenTabs

OpenTabs

Open-source MCP server that gives AI agents access to 100+ web applications through the user's authenticated browser session. No API keys, no screenshots, no DOM scraping — talks directly to web app internal APIs via a Chrome extension. Supports Slack, Discord, GitHub, Jira, Notion, Reddit, X, and 100+ more services with ~2,000 tools. Works with Claude Code, Cursor, Windsurf, and any MCP client.

MCP server for LogSeq

MCP server for LogSeq

Interacts with LogSeq via its API.

MCP Weather Notes Server

MCP Weather Notes Server

Provides real-time weather data from Open-Meteo API and enables creating and retrieving notes stored locally in JSON format.

Bear MCP Server

Bear MCP Server

Provides comprehensive access to Bear notes using a hybrid sync-safe approach - direct database reads for performance and Bear's official API for writes, enabling natural language note management, search, organization, and analysis while maintaining iCloud sync compatibility.

API MCP Server

API MCP Server

A TypeScript-based MCP server that implements a simple notes system, providing resources for accessing notes via URIs, tools for creating notes, and prompts for generating summaries.

C++ Style Guide MCP Server

C++ Style Guide MCP Server

Provides professional C++ code style checking, best practices analysis, and modern C++ upgrade suggestions with tools for naming conventions, memory safety, const correctness, and access to comprehensive C++ style documentation.

Memory MCP Server

Memory MCP Server

Enables AI agents to maintain context across conversation sessions by saving and retrieving summaries of key points from past interactions.

DDG MCP2

DDG MCP2

A basic MCP server template built with FastMCP framework that provides example tools for echoing messages and retrieving server information. Serves as a starting point for developing custom MCP servers with Docker support and CI/CD integration.

headless-oracle

headless-oracle

Cryptographically signed market state verification for autonomous financial agents. Ed25519 receipts, fail-closed safety, 28 global exchanges (equities, derivatives, 24/7 crypto). MCP-native, x402-payable, SMA Protocol conformant.

Flux Cloudflare MCP

Flux Cloudflare MCP

An MCP server that enables AI assistants to generate images using Black Forest Labs' Flux model via Cloudflare Workers.

World Bank Documents MCP Server

World Bank Documents MCP Server

Enables discovery and retrieval of World Bank reports and publications through the Documents & Reports API. It supports full-text search, structured filtering by topic or country, and metadata extraction for research and data analysis.

Memory MCP Worker

Memory MCP Worker

Provides cross-device access to a persistent knowledge graph via Cloudflare Workers, enabling memory storage and retrieval through both MCP protocol and REST API with full-text search capabilities.

Fetch MCP Server

Fetch MCP Server

Okay, here's a breakdown of how you can fetch URLs from a webpage using Playwright, integrate it with an SSE (Server-Sent Events) MCP (Management Control Plane) server, and use Node.js with Express.js to orchestrate everything. I'll provide code snippets and explanations to guide you. **Conceptual Overview** 1. **Playwright:** This will be our web scraping tool. It allows us to launch a browser (Chromium, Firefox, or WebKit), navigate to a webpage, and extract the URLs we need. 2. **Node.js with Express.js:** This will be our backend server. It will: * Receive requests to start the scraping process. * Orchestrate Playwright to fetch the URLs. * Act as an SSE server, pushing updates (e.g., URLs found, progress status) to the client. * Potentially act as an MCP, managing the scraping process (start, stop, monitor). 3. **SSE (Server-Sent Events):** A one-way communication protocol where the server pushes updates to the client (typically a web browser). This is ideal for real-time updates on the scraping progress. 4. **MCP (Management Control Plane):** In this context, it's a conceptual layer for managing the scraping process. It could involve features like: * Starting and stopping the scraping. * Monitoring the scraping progress. * Configuring the scraping (e.g., target URL, selectors). * Error handling and reporting. **Code Example (Illustrative)** **1. Project Setup** ```bash mkdir playwright-sse-mcp cd playwright-sse-mcp npm init -y npm install playwright express sse ``` **2. `server.js` (Node.js/Express.js with SSE)** ```javascript const express = require('express'); const { chromium } = require('playwright'); const sse = require('sse'); const app = express(); const port = 3000; let sseStream = null; // Store the SSE stream // Function to send SSE events function sendSSEEvent(event, data) { if (sseStream) { sseStream.send({ event: event, data: JSON.stringify(data) }); } else { console.warn("SSE stream not initialized. Event:", event, "Data:", data); } } // SSE Endpoint app.get('/events', (req, res) => { res.setHeader('Content-Type', 'text/event-stream'); res.setHeader('Cache-Control', 'no-cache'); res.setHeader('Connection', 'keep-alive'); res.flushHeaders(); // send the headers immediately const sseHandler = new sse(req, res); sseStream = sseHandler.pipe(null, { end: false }); // Store the stream // Optional: Send a welcome event sendSSEEvent('connected', { message: 'SSE connection established' }); req.on('close', () => { console.log('Client disconnected from SSE'); sseStream = null; // Clear the stream }); }); // Playwright Scraping Function async function scrapeUrls(url) { try { const browser = await chromium.launch(); const page = await browser.newPage(); await page.goto(url); const links = await page.evaluate(() => { const anchors = Array.from(document.querySelectorAll('a')); return anchors.map(anchor => anchor.href); }); await browser.close(); return links; } catch (error) { console.error('Error during scraping:', error); sendSSEEvent('error', { message: 'Scraping failed', error: error.message }); return []; // Return an empty array in case of error } } // API Endpoint to Start Scraping app.get('/scrape', async (req, res) => { const targetUrl = req.query.url; if (!targetUrl) { return res.status(400).send('Missing URL parameter'); } sendSSEEvent('start', { message: 'Scraping started', url: targetUrl }); const urls = await scrapeUrls(targetUrl); if (urls.length > 0) { urls.forEach(url => { sendSSEEvent('url', { url: url }); }); sendSSEEvent('complete', { message: 'Scraping completed', urlCount: urls.length }); } else { sendSSEEvent('no-urls', { message: 'No URLs found' }); } res.status(200).send('Scraping initiated. Check /events for updates.'); // Acknowledge the request }); app.listen(port, () => { console.log(`Server listening at http://localhost:${port}`); }); ``` **3. `client.html` (Simple HTML/JavaScript Client)** ```html <!DOCTYPE html> <html> <head> <title>Playwright SSE Client</title> </head> <body> <h1>Playwright SSE Client</h1> <button id="startButton">Start Scraping</button> <ul id="urlList"></ul> <script> const urlList = document.getElementById('urlList'); const startButton = document.getElementById('startButton'); const eventSource = new EventSource('http://localhost:3000/events'); eventSource.onmessage = (event) => { console.log('Received event:', event); // Log the raw event }; eventSource.addEventListener('connected', (event) => { const data = JSON.parse(event.data); console.log('Connected:', data.message); }); eventSource.addEventListener('url', (event) => { const data = JSON.parse(event.data); const listItem = document.createElement('li'); listItem.textContent = data.url; urlList.appendChild(listItem); }); eventSource.addEventListener('start', (event) => { const data = JSON.parse(event.data); console.log('Scraping started:', data.url); urlList.innerHTML = ''; // Clear the list }); eventSource.addEventListener('complete', (event) => { const data = JSON.parse(event.data); console.log('Scraping completed. Found', data.urlCount, 'URLs'); }); eventSource.addEventListener('error', (event) => { const data = JSON.parse(event.data); console.error('Error:', data.message, data.error); }); startButton.addEventListener('click', () => { const targetUrl = prompt('Enter the URL to scrape:'); if (targetUrl) { fetch(`http://localhost:3000/scrape?url=${targetUrl}`) .then(response => { if (!response.ok) { console.error('Error starting scraping:', response.statusText); } }); } }); </script> </body> </html> ``` **How to Run** 1. **Save the files:** Save the code as `server.js` and `client.html` in your `playwright-sse-mcp` directory. 2. **Start the server:** Run `node server.js` in your terminal. 3. **Open the client:** Open `client.html` in your web browser. 4. **Click the "Start Scraping" button:** Enter the URL you want to scrape (e.g., `https://www.example.com`). 5. **Observe the results:** The URLs found on the page will be displayed in the list in your browser, and you'll see messages in the browser's console. The server console will also show activity. **Explanation** * **`server.js`:** * Sets up an Express.js server. * Defines an SSE endpoint (`/events`). When a client connects to this endpoint, the server keeps the connection open and sends events as they occur. * The `scrapeUrls` function uses Playwright to: * Launch a browser. * Navigate to the target URL. * Extract all `href` attributes from `<a>` tags. * Close the browser. * The `/scrape` endpoint: * Receives the target URL as a query parameter. * Calls `scrapeUrls` to perform the scraping. * Sends SSE events to the client: * `start`: Indicates that scraping has started. * `url`: Sends each URL found. * `complete`: Indicates that scraping is finished. * `error`: Indicates an error occurred. * The `sendSSEEvent` function encapsulates the logic for sending SSE messages. * **`client.html`:** * Creates an `EventSource` object to connect to the SSE endpoint. * Listens for the `url`, `start`, `complete`, and `error` events. * Updates the HTML page to display the URLs as they are received. * A button allows the user to enter the URL to scrape. **Key Improvements and Considerations** * **Error Handling:** The code includes basic error handling, but you should add more robust error handling, especially in the `scrapeUrls` function. Consider retries, logging, and more informative error messages. * **Scalability:** For high-volume scraping, consider using a message queue (e.g., RabbitMQ, Kafka) to decouple the scraping tasks from the web server. This allows you to scale the scraping workers independently. * **Rate Limiting:** Be respectful of the websites you are scraping. Implement rate limiting to avoid overloading their servers. You can use libraries like `bottleneck` to help with this. * **User-Agent:** Set a proper User-Agent header in your Playwright code to identify your scraper. * **Headless Mode:** Run Playwright in headless mode (`chromium.launch({ headless: true })`) for better performance on a server. * **Selectors:** Instead of just getting all `<a>` tags, use more specific CSS selectors to target the URLs you actually need. This will make your scraper more robust. * **Dynamic Content:** If the website uses JavaScript to load content dynamically, you might need to use `page.waitForSelector()` or `page.waitForTimeout()` to wait for the content to load before extracting the URLs. * **Anti-Scraping Measures:** Websites often have anti-scraping measures in place. You might need to use techniques like: * Rotating proxies. * Solving CAPTCHAs. * Using realistic browser behavior (e.g., moving the mouse, scrolling). * **MCP Features:** * **Start/Stop:** Add API endpoints to start and stop the scraping process. You'll need to manage the Playwright browser instance and potentially use a separate process for scraping. * **Configuration:** Allow users to configure the scraping parameters (e.g., target URL, selectors, rate limits) through an API or a web interface. * **Monitoring:** Track the scraping progress (e.g., URLs found, pages visited, errors) and provide a dashboard to monitor the process. * **Data Storage:** Instead of just displaying the URLs in the browser, you'll likely want to store them in a database (e.g., MongoDB, PostgreSQL). **Example of Rate Limiting (using `bottleneck`)** ```javascript const Bottleneck = require("bottleneck"); const limiter = new Bottleneck({ maxConcurrent: 1, // Only one request at a time minTime: 1000 // Wait 1 second between requests }); async function scrapeUrls(url) { try { const browser = await chromium.launch(); const page = await browser.newPage(); // Wrap the page.goto call with the rate limiter await limiter.schedule(() => page.goto(url)); const links = await page.evaluate(() => { const anchors = Array.from(document.querySelectorAll('a')); return anchors.map(anchor => anchor.href); }); await browser.close(); return links; } catch (error) { console.error('Error during scraping:', error); sendSSEEvent('error', { message: 'Scraping failed', error: error.message }); return []; // Return an empty array in case of error } } ``` **Vietnamese Translation of Key Concepts** * **Playwright:** Công cụ tự động hóa trình duyệt (crawling/scraping). * **SSE (Server-Sent Events):** Sự kiện được máy chủ gửi (một chiều). * **MCP (Management Control Plane):** Lớp điều khiển quản lý (quản lý quá trình cào dữ liệu). * **Node.js/Express.js:** Máy chủ backend. * **Web Scraping:** Cào dữ liệu web. * **URL:** Đường dẫn trang web. * **Endpoint:** Điểm cuối API. * **Rate Limiting:** Giới hạn tốc độ yêu cầu. * **Headless Mode:** Chế độ không đầu (chạy trình duyệt ẩn). * **CSS Selector:** Bộ chọn CSS (để chọn các phần tử HTML cụ thể). This comprehensive example should give you a solid foundation for building your Playwright-based web scraper with SSE and a basic MCP. Remember to adapt the code to your specific needs and website structure. Good luck!

PC-MCP

PC-MCP

Dự án này hiện tại chủ yếu dành cho việc trình diễn kết hợp với dự án smart-pet-with-mcp, nhắm đến máy chủ MCP để vận hành trên máy tính cá nhân.

Pylon MCP Server

Pylon MCP Server

Enables interaction with Pylon's customer support platform API to manage users, contacts, issues, and knowledge base articles through natural language commands.

mcp_server

mcp_server

Máy chủ MCP để tích hợp LLM

Anki MCP Server

Anki MCP Server

Provides programmatic access to Anki flashcard operations through the AnkiConnect API, allowing users to list and create decks and cards. It enables seamless management of flashcards directly from MCP-compatible clients like Claude Desktop.