Discover Awesome MCP Servers

Extend your agent with 28,766 capabilities via MCP servers.

Gmail AutoAuth MCP Server

Enables AI assistants to manage Gmail through natural language, supporting operations like sending, searching, and reading emails. It includes comprehensive label management, batch processing, and a streamlined OAuth2 auto-authentication flow.

PyVista MCP Server

Enables 3D visualization capabilities through PyVista integration, allowing users to generate and export interactive 3D shape grids as HTML files. It provides tools for creating visual representations of geometric objects like cylinders, spheres, and arrows.

Letter Counter MCP Server

An MCP server, built as a learning example for the Model Context Protocol, that lets an LLM count how many times a specific letter occurs in a word.
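The tool at the heart of a server like this reduces to a single counting function. A minimal Python sketch (the function name and case-insensitive behavior are assumptions for illustration, not taken from this project):

```python
def count_letter(word: str, letter: str) -> int:
    """Count case-insensitive occurrences of a single letter in a word."""
    if len(letter) != 1:
        raise ValueError("letter must be a single character")
    return word.lower().count(letter.lower())

# The classic example that LLMs often get wrong on their own:
print(count_letter("strawberry", "r"))  # → 3
```

In the real server this function would be exposed as an MCP tool so the model can call it instead of counting by token intuition.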

Meme MCP Server

A Model Context Protocol server for managing, searching, and retrieving local meme images. It automatically hosts memes via a built-in HTTP server to ensure images can be displayed within chat applications.

Satellite Tracking MCP Server

Provides access to real-time satellite tracking data using the N2YO API, enabling users to get satellite positions, predict visible passes, search satellites by name or category, and retrieve TLE orbital data.
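N2YO's REST API is addressed entirely through the request URL. A sketch of assembling the `positions` call (the path layout follows N2YO's published REST format, but treat it as an assumption to verify against the current docs; `DEMO-KEY` is a placeholder):

```python
N2YO_BASE = "https://api.n2yo.com/rest/v1/satellite"

def positions_url(norad_id: int, lat: float, lng: float, alt_m: float,
                  seconds: int, api_key: str) -> str:
    """Build an N2YO 'positions' request URL for a satellite's next few seconds."""
    return (f"{N2YO_BASE}/positions/{norad_id}/{lat}/{lng}/{alt_m}/"
            f"{seconds}/&apiKey={api_key}")

# ISS (NORAD 25544) as seen from Greenwich for the next 2 seconds:
url = positions_url(25544, 51.477, 0.0, 0, 2, "DEMO-KEY")
```

The JSON response contains per-second latitude/longitude/azimuth entries that a server like this can relay to the model.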

Geocode MCP Server

Provides global geocoding capabilities to convert city names and addresses into latitude/longitude coordinates using the free OpenStreetMap Nominatim API.
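Under the hood, a Nominatim geocoding lookup is a single GET request. A minimal sketch of building it (note that Nominatim's usage policy requires a descriptive User-Agent header when you actually fetch):

```python
import urllib.parse

NOMINATIM_SEARCH = "https://nominatim.openstreetmap.org/search"

def geocode_url(query: str, limit: int = 1) -> str:
    """Build a Nominatim search URL that returns JSON results."""
    params = urllib.parse.urlencode(
        {"q": query, "format": "json", "limit": limit})
    return f"{NOMINATIM_SEARCH}?{params}"

url = geocode_url("Berlin, Germany")
# Fetch with e.g. urllib.request, sending a User-Agent that identifies
# your application, then read result[0]["lat"] / result[0]["lon"].
```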

XRAY MCP

Enables AI assistants to understand and navigate codebases through structural analysis. Provides code mapping, symbol search, and impact analysis using ast-grep for accurate parsing of Python, JavaScript, TypeScript, and Go projects.

A Cloud Automator MCP server

An unofficial MCP server for using the Cloud Automator REST API.

PortOne Global MCP Server

Enables searching and reading of PortOne documentation, including OpenAPI schemas and product guides, through the Model Context Protocol. It allows AI agents to easily access and integrate payment-related technical specifications into their workflows.

MCP Server Markup Language (MCPML)

MCPML (MCP Server Markup Language) is a Python framework for building MCP servers, with CLI and OpenAI agent support.

llmkit-mcp-server

Query AI spending data from LLMKit. Track costs, budgets, usage stats, and session summaries across 11 AI providers.

SimBrief MCP Server

SimBrief MCP Server brings flight planning data directly into your AI workspace. Access your SimBrief flight plans (OFP), retrieve real-time METAR weather reports for any airport, and fetch pilot information, all through natural conversation. Built for virtual pilots who want to streamline their pre-flight briefing workflow in Microsoft Flight Simulator, X-Plane, and other flight simulation platforms.

MCP Google Calendar Server

Enables creating and managing Google Calendar events through OAuth 2.0 authentication, supporting event details like title, time, location, description, and attendees.
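The event fields the description lists map directly onto the Google Calendar API v3 event resource. A sketch of the request body (field names follow Google's API; the timezone, helper name, and attendee addresses are placeholders of mine):

```python
def make_event(title, start_iso, end_iso, location=None,
               description=None, attendees=()):
    """Build a Google Calendar API v3 event resource body."""
    event = {
        "summary": title,
        "start": {"dateTime": start_iso, "timeZone": "UTC"},
        "end": {"dateTime": end_iso, "timeZone": "UTC"},
        "attendees": [{"email": a} for a in attendees],
    }
    if location:
        event["location"] = location
    if description:
        event["description"] = description
    return event

# POSTed to /calendars/primary/events with an OAuth 2.0 bearer token.
body = make_event("Design review", "2025-01-15T10:00:00Z",
                  "2025-01-15T11:00:00Z", attendees=["a@example.com"])
```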

OpenTabs

Open-source MCP server that gives AI agents access to 100+ web applications through the user's authenticated browser session. No API keys, no screenshots, no DOM scraping — talks directly to web app internal APIs via a Chrome extension. Supports Slack, Discord, GitHub, Jira, Notion, Reddit, X, and 100+ more services with ~2,000 tools. Works with Claude Code, Cursor, Windsurf, and any MCP client.

MCP server for LogSeq

Interacts with LogSeq via its API.

MCP Weather Notes Server

Provides real-time weather data from Open-Meteo API and enables creating and retrieving notes stored locally in JSON format.
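Open-Meteo is a keyless HTTP API, so the weather side of such a server amounts to one GET request. A sketch of building the current-weather URL (parameter names follow Open-Meteo's documented forecast endpoint; verify against the current docs):

```python
import urllib.parse

def forecast_url(lat: float, lon: float) -> str:
    """Build an Open-Meteo current-weather request (no API key required)."""
    params = urllib.parse.urlencode({
        "latitude": lat,
        "longitude": lon,
        "current_weather": "true",
    })
    return f"https://api.open-meteo.com/v1/forecast?{params}"

url = forecast_url(52.52, 13.41)  # Berlin
```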

Bear MCP Server

Provides comprehensive access to Bear notes using a hybrid, sync-safe approach: direct database reads for performance and Bear's official API for writes. This enables natural language note management, search, organization, and analysis while maintaining iCloud sync compatibility.

API MCP Server

A TypeScript-based MCP server that implements a simple notes system, providing resources for accessing notes via URIs, tools for creating notes, and prompts for generating summaries.

C++ Style Guide MCP Server

Provides professional C++ code style checking, best practices analysis, and modern C++ upgrade suggestions with tools for naming conventions, memory safety, const correctness, and access to comprehensive C++ style documentation.

Memory MCP Server

Enables AI agents to maintain context across conversation sessions by saving and retrieving summaries of key points from past interactions.
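The save/retrieve pattern such a server implements can be sketched with a small JSON-file store. This is a generic illustration of the idea, not this project's actual schema or storage location (both `memory.json` and the function names are assumptions):

```python
import json
from pathlib import Path

MEMORY_FILE = Path("memory.json")  # assumed storage location

def save_summary(session_id: str, summary: str) -> None:
    """Append a session summary to the local memory store."""
    data = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}
    data.setdefault(session_id, []).append(summary)
    MEMORY_FILE.write_text(json.dumps(data, indent=2))

def recall(session_id: str) -> list:
    """Retrieve every summary saved for a session."""
    if not MEMORY_FILE.exists():
        return []
    return json.loads(MEMORY_FILE.read_text()).get(session_id, [])

save_summary("s1", "User prefers metric units.")
print(recall("s1"))
```

An MCP server wraps these two operations as tools so the agent can persist key points at the end of one session and reload them at the start of the next.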

DDG MCP2

A basic MCP server template built with FastMCP framework that provides example tools for echoing messages and retrieving server information. Serves as a starting point for developing custom MCP servers with Docker support and CI/CD integration.

headless-oracle

Cryptographically signed market state verification for autonomous financial agents. Ed25519 receipts, fail-closed safety, 28 global exchanges (equities, derivatives, 24/7 crypto). MCP-native, x402-payable, SMA Protocol conformant.

Flux Cloudflare MCP

An MCP server that enables AI assistants to generate images using Black Forest Labs' Flux model via Cloudflare Workers.

World Bank Documents MCP Server

Enables discovery and retrieval of World Bank reports and publications through the Documents & Reports API. It supports full-text search, structured filtering by topic or country, and metadata extraction for research and data analysis.
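Full-text search against the Documents & Reports API is a plain GET request. A sketch of building the query URL (the `wds` endpoint and `qterm` parameter follow the World Bank's public API, but verify the parameter set against the current documentation):

```python
import urllib.parse

def wds_search_url(query: str, rows: int = 5) -> str:
    """Build a World Bank Documents & Reports (WDS) API search URL."""
    params = urllib.parse.urlencode(
        {"format": "json", "qterm": query, "rows": rows})
    return "https://search.worldbank.org/api/v2/wds?" + params

url = wds_search_url("renewable energy", rows=3)
# The JSON response maps document IDs to metadata (title, country,
# document type, and a download URL).
```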

Memory MCP Worker

Provides cross-device access to a persistent knowledge graph via Cloudflare Workers, enabling memory storage and retrieval through both MCP protocol and REST API with full-text search capabilities.

Fetch MCP Server

## Playwright, SSE, and Node.js/Express: Fetching URLs from a Webpage

Here's a breakdown of how to fetch URLs from a webpage with Playwright, stream them to a client using Server-Sent Events (SSE), and manage the server with Node.js and Express. This example focuses on the core logic and assumes basic familiarity with these technologies.

**1. Project Setup:**

```bash
mkdir playwright-sse-example
cd playwright-sse-example
npm init -y
npm install playwright express sse-channel
```

**2. `server.js` (Node.js Express Server with SSE and Playwright):**

```javascript
const express = require('express');
const { chromium } = require('playwright');
const SSE = require('sse-channel');

const app = express();
const port = 3000;

// SSE channel; the optional pingInterval keeps connections alive
const channel = new SSE({ pingInterval: 30000 });

// Register each request on /stream as an SSE client
app.get('/stream', (req, res) => channel.addClient(req, res));

// Route to trigger Playwright and stream URLs
app.get('/fetch-urls', async (req, res) => {
  const urlToScrape = req.query.url; // Get the URL from the query parameter
  if (!urlToScrape) {
    return res
      .status(400)
      .send('Missing URL parameter. Please provide a URL like: /fetch-urls?url=https://example.com');
  }

  try {
    // Launch Playwright and navigate to the URL
    const browser = await chromium.launch();
    const page = await browser.newPage();
    await page.goto(urlToScrape);

    // Extract URLs (adjust the selector as needed)
    const urls = await page.evaluate(() =>
      Array.from(document.querySelectorAll('a')).map((link) => link.href)
    );

    // Stream each URL as a separate SSE event
    for (const url of urls) {
      channel.send({ data: url });
    }

    // Close the browser to release resources
    await browser.close();

    // Signal completion to SSE clients and answer the triggering request
    channel.send({ data: 'DONE' });
    res.send('Scraping complete.');
    console.log('Finished scraping and streaming URLs.');
  } catch (error) {
    console.error('Error during scraping:', error);
    channel.send({ data: `ERROR: ${error.message}` });
    res.status(500).send(`Error: ${error.message}`);
  }
});

app.listen(port, () => {
  console.log(`Server listening at http://localhost:${port}`);
});
```

**3. `client.html` (Simple HTML Client to Receive SSE):**

```html
<!DOCTYPE html>
<html>
<head>
  <title>SSE Client</title>
</head>
<body>
  <h1>SSE URL Stream</h1>
  <ul id="urlList"></ul>
  <script>
    const urlList = document.getElementById('urlList');
    // Connect to the SSE endpoint
    const eventSource = new EventSource('http://localhost:3000/stream');

    function addItem(text) {
      const listItem = document.createElement('li');
      listItem.textContent = text;
      urlList.appendChild(listItem);
    }

    eventSource.onmessage = (event) => {
      const data = event.data;
      if (data === 'DONE') {
        addItem('Scraping Complete!');
        eventSource.close(); // Close the connection when done
      } else if (data.startsWith('ERROR:')) {
        addItem(`Error: ${data.substring(7)}`); // Strip the "ERROR: " prefix
        eventSource.close(); // Close the connection on error
      } else {
        addItem(data);
      }
    };

    eventSource.onerror = (error) => {
      console.error('SSE error:', error);
      addItem('SSE Connection Error');
      eventSource.close();
    };

    // Kick off the scraping process on the server
    function startScraping(url) {
      fetch(`http://localhost:3000/fetch-urls?url=${encodeURIComponent(url)}`)
        .then((response) => {
          if (!response.ok) {
            throw new Error(`HTTP error! status: ${response.status}`);
          }
          console.log('Scraping started...');
        })
        .catch((error) => {
          console.error('Error starting scraping:', error);
          addItem(`Error starting scraping: ${error.message}`);
          eventSource.close();
        });
    }

    // Prompt for a URL once the page has loaded
    window.onload = () => {
      const urlToScrape = prompt('Enter the URL to scrape:');
      if (urlToScrape) {
        startScraping(urlToScrape);
      } else {
        alert('No URL provided. Please refresh the page and enter a URL.');
      }
    };
  </script>
</body>
</html>
```

**Explanation:**

1. **`server.js`:**
   - **SSE channel:** `new SSE(...)` creates the channel that fans events out to every connected client; the optional `pingInterval` sends a ping every 30 seconds so long-lived connections don't time out.
   - **`/stream` endpoint:** registers each incoming request as an SSE client via `channel.addClient(req, res)`, which sets the correct SSE headers.
   - **`/fetch-urls` route:** reads the target URL from the `url` query parameter (rejecting requests that omit it), launches a Chromium instance with Playwright, navigates to the page, and uses `page.evaluate` to run code in the browser context that collects the `href` of every `<a>` tag.
   - **Streaming:** each extracted URL is sent as its own SSE event with `channel.send({ data: url })`; the browser is then closed, and a final `DONE` event signals the end of the scrape.
   - **Error handling:** a `try...catch` block reports failures both as an `ERROR:` SSE event and as an HTTP 500 response to the triggering request.

2. **`client.html`:**
   - **SSE connection:** an `EventSource` pointed at `/stream` establishes the SSE connection.
   - **`onmessage`:** appends each received URL to the list; on `DONE` it shows "Scraping Complete!" and closes the connection, and on an `ERROR:` payload it shows the message and closes.
   - **`onerror`:** handles failures of the SSE connection itself, displaying an error and closing the stream.
   - **`startScraping`:** initiates the scrape with a `fetch` to `/fetch-urls`, passing the target URL as a query parameter and handling errors in that initial request.
   - **`window.onload`:** prompts the user for a URL once the page has fully loaded and starts scraping.

**How to Run:**

1. Install dependencies: `npm install` in the project directory.
2. Start the server: `node server.js`.
3. Open `client.html` in your browser and enter a URL to scrape (e.g., `https://www.example.com`).
4. The URLs extracted from the page appear in the list as they stream in.

**Key Considerations:**

* **Selector adjustment:** Crucially, adjust the CSS selector in `page.evaluate` to match the HTML structure of the site you are scraping. The example's `document.querySelectorAll('a')` selects every `<a>` tag; use your browser's developer tools to find a more specific selector if needed.
* **Rate limiting:** Be mindful of rate limits when scraping. Add delays between requests to avoid overloading the target server.
* **Security:** Respect the site's `robots.txt`, avoid scraping sensitive information, and sanitize URLs before rendering them in the client to prevent XSS vulnerabilities.
* **MCP:** SSE is a simple one-way stream from server to client. A full MCP (Model Context Protocol) server provides a richer, structured protocol; if you need the client to send commands that control the scrape, consider a bidirectional transport such as WebSockets (`ws` or `socket.io`). For simply streaming URLs, SSE is sufficient.
* **Scalability:** For high-volume scraping, distribute tasks across multiple workers with a message queue (e.g., RabbitMQ or Kafka).

This example provides a solid foundation for building a URL scraper with Playwright, SSE, and Node.js. Adapt it to your specific needs and always respect the terms of service of the websites you scrape.

PC-MCP

Targets MCP servers running on a personal PC; currently used as a demo for the smart-pet-with-mcp project.

Pylon MCP Server

Enables interaction with Pylon's customer support platform API to manage users, contacts, issues, and knowledge base articles through natural language commands.

mcp_server

An MCP server for LLM integration.

Anki MCP Server

Provides programmatic access to Anki flashcard operations through the AnkiConnect API, allowing users to list and create decks and cards. It enables seamless management of flashcards directly from MCP-compatible clients like Claude Desktop.
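AnkiConnect speaks a small JSON-over-HTTP protocol: every request is a JSON object with an `action`, an API `version`, and `params`, POSTed to the local add-on. A sketch of building two such payloads (the `createDeck` and `deckNames` actions and version 6 follow AnkiConnect's documented API; the deck name is a placeholder):

```python
import json

def anki_payload(action: str, **params) -> str:
    """Serialize an AnkiConnect request (API version 6)."""
    return json.dumps({"action": action, "version": 6, "params": params})

# Create a deck, then list all decks. POST each payload to
# http://127.0.0.1:8765 while the Anki desktop app (with the
# AnkiConnect add-on) is running.
create = anki_payload("createDeck", deck="Spanish::Verbs")
list_decks = anki_payload("deckNames")
```

The response is a JSON object with `result` and `error` fields, which an MCP server can relay back to the model.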