Discover Awesome MCP Servers
Extend your agent with 14,529 capabilities via MCP servers.
- All (14,529)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)

atdata-email-verification-mcp-server

Letter Counter MCP Server
An MCP server that lets an LLM count the occurrences of a specific letter within a word, built as a Model Context Protocol learning example.
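The core tool logic behind a server like this is tiny. Below is a sketch of the counting function in plain JavaScript; the function name is hypothetical and the MCP server wiring (e.g., via an MCP SDK) is omitted:

```javascript
// Count case-insensitive occurrences of a single letter in a word.
// This is only the tool's core logic; exposing it to an LLM requires
// the (omitted) MCP server wiring.
function countLetter(word, letter) {
  if (letter.length !== 1) {
    throw new Error('letter must be a single character');
  }
  const target = letter.toLowerCase();
  let count = 0;
  for (const ch of word.toLowerCase()) {
    if (ch === target) count++;
  }
  return count;
}

// countLetter('strawberry', 'r') → 3
```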

XRAY MCP
Enables AI assistants to understand and navigate codebases through structural analysis. Provides code mapping, symbol search, and impact analysis using ast-grep for accurate parsing of Python, JavaScript, TypeScript, and Go projects.

MCP Obsidian MongoDB Server
A Model Context Protocol server that connects Cursor AI IDE with Obsidian notes and MongoDB, creating an integrated knowledge management system with real-time synchronization and search capabilities.

MCP Selenium Server
A Model Context Protocol server implementation that enables browser automation through standardized MCP clients, supporting features like navigation, element interaction, and screenshots across Chrome, Firefox, and Edge browsers.

MCPy
A next-generation Minecraft server engine built with Python and Cython, aiming to deliver exceptional performance and flexibility through advanced scientific computing libraries and optimized architecture.
PC-MCP
Targets MCP servers running on a personal PC; currently used as a demo for the smart-pet-with-mcp project.
mcp_server
An MCP server for LLM integration.

GMail Manager MCP
Connects Claude Desktop to your Gmail so you can start managing your inbox using natural language. Bulk delete promos and newsletters, organize labels, and get useful insights.
Librarian MCP Server
Librarian is a Model Context Protocol (MCP) server that provides an API for listing, searching, and retrieving markdown files stored in a structured layout.
GitHub MCP Server

Violett MCP Server
A Model Context Protocol server that provides tools for fetching and posting HTTP data, with built-in prompts for URL summarization and API analysis.

IBM AS400 ISeries MCP Server
This project implements a customizable MCP server for executing the RTE protocol. It is designed to be easily extensible: you can add your own MCP tools using the RteUtils class. You also need to define the RTE connection parameters and the RTE flow in YAML format.

OneNote MCP Server by CData
This read-only MCP Server allows you to connect to OneNote data from Claude Desktop through CData JDBC Drivers. Free (beta) read/write servers available at https://www.cdata.com/solutions/mcp

CodingConverse
An MCP server that provides user dialogue capabilities for AI code editors, allowing AI to interact with users through dialog boxes when needing input during the coding process.
MCP server for news-data

EPICS MCP Server
A Python-based server that interacts with EPICS process variables, allowing users to retrieve PV values, set PV values, and fetch detailed information about PVs through a standardized interface.

MCP Server
A Python implementation of the Model Context Protocol (MCP) that connects client applications with AI models, primarily Anthropic's models, with setup instructions for local development and deployment.
Fetch MCP Server
## Playwright, SSE MCP Server, and Node.js Express: Fetching URLs from a Webpage

Here's how to fetch URLs from a webpage using Playwright, stream them to a client with Server-Sent Events (SSE) from an MCP (Model Context Protocol) server, and manage the server with Node.js and Express. This example focuses on the core logic and assumes basic familiarity with these technologies.

**1. Project Setup:**

```bash
mkdir playwright-sse-example
cd playwright-sse-example
npm init -y
npm install playwright express sse-channel
```

**2. `server.js` (Node.js Express Server with SSE and Playwright):**

```javascript
const express = require('express');
const { chromium } = require('playwright');
const SSE = require('sse-channel');

const app = express();
const port = 3000;

// SSE channel; the optional ping keeps idle connections alive,
// and the CORS option lets clients on other origins connect.
const channel = new SSE({
  pingInterval: 30000,
  cors: { origins: ['*'] }
});

// Allow cross-origin fetch() calls to the trigger route as well
app.use((req, res, next) => {
  res.setHeader('Access-Control-Allow-Origin', '*');
  next();
});

// SSE endpoint: register each connecting client with the channel
app.get('/stream', (req, res) => {
  channel.addClient(req, res);
});

// Route to trigger Playwright and stream URLs
app.get('/fetch-urls', async (req, res) => {
  const urlToScrape = req.query.url; // Get the URL from the query parameter
  if (!urlToScrape) {
    return res
      .status(400)
      .send('Missing URL parameter. Please provide a URL like: /fetch-urls?url=https://example.com');
  }

  try {
    // Launch Playwright and navigate to the target page
    const browser = await chromium.launch();
    const page = await browser.newPage();
    await page.goto(urlToScrape);

    // Extract URLs (adjust the selector as needed)
    const urls = await page.evaluate(() => {
      const links = Array.from(document.querySelectorAll('a')); // Select all <a> tags
      return links.map((link) => link.href);
    });

    // Stream each URL as a separate SSE event
    for (const url of urls) {
      channel.send({ data: url });
    }

    // Close the browser to release resources
    await browser.close();

    // Signal completion to SSE clients and answer the triggering request
    channel.send({ data: 'DONE' });
    res.send('Scraping complete.');
    console.log('Finished scraping and streaming URLs.');
  } catch (error) {
    console.error('Error during scraping:', error);
    channel.send({ data: `ERROR: ${error.message}` });
    res.status(500).send(`Error: ${error.message}`); // Also answer the initial request
  }
});

app.listen(port, () => {
  console.log(`Server listening at http://localhost:${port}`);
});
```

**3. `client.html` (Simple HTML Client to Receive SSE):**

```html
<!DOCTYPE html>
<html>
<head>
  <title>SSE Client</title>
</head>
<body>
  <h1>SSE URL Stream</h1>
  <ul id="urlList"></ul>
  <script>
    const urlList = document.getElementById('urlList');
    // Connect to the SSE endpoint
    const eventSource = new EventSource('http://localhost:3000/stream');

    function addItem(text) {
      const listItem = document.createElement('li');
      listItem.textContent = text;
      urlList.appendChild(listItem);
    }

    eventSource.onmessage = (event) => {
      const data = event.data;
      if (data === 'DONE') {
        addItem('Scraping Complete!');
        eventSource.close(); // Close the connection when done
      } else if (data.startsWith('ERROR:')) {
        addItem(`Error: ${data.substring(7)}`); // Remove the "ERROR: " prefix
        eventSource.close(); // Close the connection on error
      } else {
        addItem(data);
      }
    };

    eventSource.onerror = (error) => {
      console.error('SSE error:', error);
      addItem('SSE Connection Error');
      eventSource.close();
    };

    // Kick off the scraping process on the server
    function startScraping(url) {
      fetch(`http://localhost:3000/fetch-urls?url=${encodeURIComponent(url)}`)
        .then((response) => {
          if (!response.ok) {
            throw new Error(`HTTP error! status: ${response.status}`);
          }
          console.log('Scraping started...');
        })
        .catch((error) => {
          console.error('Error starting scraping:', error);
          addItem(`Error starting scraping: ${error.message}`);
          eventSource.close();
        });
    }

    // Prompt for a URL once the page has loaded
    window.onload = () => {
      const urlToScrape = prompt('Enter the URL to scrape:');
      if (urlToScrape) {
        startScraping(urlToScrape);
      } else {
        alert('No URL provided. Please refresh the page and enter a URL.');
      }
    };
  </script>
</body>
</html>
```

**Explanation:**

- **`server.js`:** The `/stream` endpoint registers each SSE client with the `sse-channel` instance, which sets the correct SSE headers; the optional `pingInterval` keeps long-lived connections from timing out. The `/fetch-urls` route reads the target URL from the `url` query parameter, launches a Chromium instance with Playwright, navigates to the page, and collects the `href` of every `<a>` tag via `page.evaluate`. Each URL is streamed as its own SSE event with `channel.send({ data: url })`; the browser is then closed to release resources, and a `DONE` event signals completion. Errors are reported both as an SSE `ERROR:` event and as an HTTP 500 response to the triggering request. **Important:** adjust the `document.querySelectorAll('a')` selector to the HTML structure of the site you are scraping; you will often need a more specific CSS selector to target the right links.
- **`client.html`:** An `EventSource` connects to `/stream`. The `onmessage` handler appends each received URL to the list, treats `DONE` as the signal to display a completion message and close the connection, and surfaces `ERROR:` events. The `onerror` handler covers failures of the SSE connection itself. `startScraping` triggers the server-side scrape with a `fetch` request, and `window.onload` prompts the user for a URL once the page has loaded.

**How to Run:**

1. **Install dependencies:** `npm install` in the project directory.
2. **Start the server:** `node server.js`
3. **Open `client.html` in your browser** and enter a URL (e.g., `https://www.example.com`) at the prompt.
4. **Observe the results:** the URLs extracted from the page appear in the list as they arrive.

**Key Improvements and Considerations:**

* **Error handling:** Errors are reported both via SSE and as HTTP error responses, on the server and the client.
* **URL parameter:** The server expects the URL to scrape as a query parameter (`/fetch-urls?url=https://example.com`), the standard way to pass data to a route.
* **Selector adjustment:** The example selects every `<a>` tag. Use your browser's developer tools to inspect the page and choose a more specific selector for the links you actually want.
* **Rate limiting:** Be mindful of rate limits when scraping. Insert delays between requests to avoid overloading the target site.
* **Security:** Respect the site's `robots.txt`, avoid scraping sensitive information, and sanitize URLs before rendering them on the client to prevent XSS.
* **SSE vs. a full MCP server:** SSE is a simple one-way channel from server to client. If the client needs to send commands back (for example, to control the scrape), consider WebSockets via `ws` or `socket.io`. For streaming a list of URLs, SSE is sufficient; implementing a full MCP server is beyond the scope of this example.
* **Scalability:** For high-volume scraping, distribute tasks across multiple workers with a message queue such as RabbitMQ or Kafka.

This example provides a solid foundation for building a URL scraper with Playwright, SSE, and Node.js. Adapt the code to your specific needs and always respect the terms of service of the websites you scrape.
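The rate-limiting advice above can be sketched as a small helper that inserts a fixed gap between consecutive page visits. The `sleep` and `visitAll` helpers below are hypothetical additions, not part of the server code:

```javascript
// Promise-based sleep built on setTimeout
function sleep(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

// Visit each URL sequentially, pausing gapMs between consecutive
// requests so the target site is not hammered.
async function visitAll(urls, visit, gapMs = 500) {
  for (let i = 0; i < urls.length; i++) {
    if (i > 0) await sleep(gapMs); // no delay before the first request
    await visit(urls[i]);
  }
}
```

In the scraper, `visit` would wrap `page.goto(url)`; Playwright's own `page.waitForTimeout(ms)` can serve the same purpose inside a page context.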

shortcuts-mcp-server

Acumatica MCP Server by CData
This read-only MCP Server allows you to connect to Acumatica data from Claude Desktop through CData JDBC Drivers. Free (beta) read/write servers available at https://www.cdata.com/solutions/mcp

SuperMemory MCP
A tool that makes memories stored in ChatGPT accessible across various language models without requiring logins or paywalls.

Azure DevOps MCP Proxy
Enables interaction with Azure DevOps through Personal Access Token authentication. Supports work item management, wiki operations, project/repository listing, and build pipeline access through natural language.
Neo4j MCP Clients & Servers
Model Context Protocol with Neo4j

MCP AgentRun Server
Enables safe Python code execution in isolated Docker containers through the AgentRun framework. Provides automatic container lifecycle management and comprehensive error handling for secure and reproducible code execution.

XML Documents MCP Server by CData
🚀 Go-Tapd-SDK
Go Tapd SDK is a Go client library for accessing the Tapd API, with support for the latest MCP server.
Playwright Testing Framework
AI-powered test automation using the Playwright Cursor MCP server.
rest-to-mcp
A tutorial project for converting REST APIs into MCP servers.

DeepSeek MCP Server
Allows seamless integration of DeepSeek's language models with MCP-compatible applications like Claude Desktop, supporting features such as model selection, temperature control, and multi-turn conversations with automatic model fallback.