Discover Awesome MCP Servers
Extend your agent with 26,375 capabilities via MCP servers.
- All (26,375)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)
ImaginePro MCP Server
Enables AI assistants to generate images and videos through natural language using ImaginePro's API. Supports text-to-image generation, video creation, image upscaling, variants, inpainting, and multi-modal generation with real-time progress tracking.
MCP Server Markup Language (MCPML)
MCP Server Markup Language (MCPML) - A Python framework for building MCP servers with CLI and OpenAI Agent support.
llmkit-mcp-server
Query AI spending data from LLMKit. Track costs, budgets, usage stats, and session summaries across 11 AI providers.
MCP Google Calendar Server
Enables creating and managing Google Calendar events through OAuth 2.0 authentication, supporting event details like title, time, location, description, and attendees.
MCP server for LogSeq
Interacts with LogSeq via its API.
Echo MCP Server
A simple server implementing the Model Context Protocol (MCP) that echoes messages back, designed for testing MCP clients.
Fetch MCP Server
Fetches URLs from a target webpage using Playwright and streams the extracted links to clients in real time over Server-Sent Events (SSE), built with Node.js and Express.
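An SSE-based server like this writes one `data:` frame per extracted URL, followed by a completion frame so the client knows when to close its `EventSource`. A minimal sketch of that event framing (the `formatSseEvent` helper and the `{ url }` / `{ status }` payload shapes are illustrative assumptions, not a published API):

```javascript
// Sketch of SSE event framing: each event is a "data:" line carrying a
// JSON payload, terminated by a blank line (two newlines), per the SSE spec.
// The payload shapes below are illustrative assumptions for this server.
function formatSseEvent(payload) {
  return `data: ${JSON.stringify(payload)}\n\n`;
}

// A server would res.write() one frame per extracted link, then a
// final status frame to signal completion to the browser's EventSource:
console.log(formatSseEvent({ url: 'https://www.example.com/about' }));
console.log(formatSseEvent({ status: 'complete' }));
```

On the client, an `EventSource` receives each frame via its `onmessage` handler and can `JSON.parse(event.data)` to recover the payload.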
PC-MCP
This project is an MCP server that runs on a personal computer, currently focused mainly on demonstrations alongside the smart-pet-with-mcp project.
Pylon MCP Server
Enables interaction with Pylon's customer support platform API to manage users, contacts, issues, and knowledge base articles through natural language commands.
mcp_server
MCP server for LLM integration.
Anki MCP Server
Provides programmatic access to Anki flashcard operations through the AnkiConnect API, allowing users to list and create decks and cards. It enables seamless management of flashcards directly from MCP-compatible clients like Claude Desktop.
shortcuts-mcp-server
Package README Core MCP Server
Intelligently detects package managers and provides unified access to documentation and information across 15+ package ecosystems including npm, PyPI, and others. Automatically routes requests to appropriate package-specific MCP servers for README retrieval, package information, and cross-ecosystem package search.
MCP Weather Notes Server
Provides real-time weather data from Open-Meteo API and enables creating and retrieving notes stored locally in JSON format.
Bear MCP Server
Provides comprehensive access to Bear notes using a hybrid sync-safe approach - direct database reads for performance and Bear's official API for writes, enabling natural language note management, search, organization, and analysis while maintaining iCloud sync compatibility.
Tree-Sitter MCP
Enables fast semantic code search and analysis across 15+ programming languages. Supports searching for functions and classes, tracing code usage, detecting syntax errors, and analyzing code quality and structure.
API MCP Server
A TypeScript-based MCP server that implements a simple notes system, providing resources for accessing notes via URIs, tools for creating notes, and prompts for generating summaries.
Cloudflare Remote PostgreSQL Database MCP Server
Enables chatting with PostgreSQL databases through secure GitHub OAuth authentication, supporting read operations for all users and write operations for privileged users. Deployable as a production-ready remote MCP server on Cloudflare Workers with automatic schema discovery and SQL injection protection.
C++ Style Guide MCP Server
Provides professional C++ code style checking, best practices analysis, and modern C++ upgrade suggestions with tools for naming conventions, memory safety, const correctness, and access to comprehensive C++ style documentation.
Memory MCP Server
Enables AI agents to maintain context across conversation sessions by saving and retrieving summaries of key points from past interactions.
DDG MCP2
A basic MCP server template built with FastMCP framework that provides example tools for echoing messages and retrieving server information. Serves as a starting point for developing custom MCP servers with Docker support and CI/CD integration.
MCP API Server Template
Weather MCP Server
Provides current weather data and hourly forecasts for any location worldwide using the Open-Meteo API, accessible through HTTP transport without requiring an API key.
Valkey MCP Task Management Server
Enables AI agents to create, manage, and track tasks within plans using Valkey as the persistence layer. Supports plan and task management with Markdown notes, status tracking, and prioritization through multiple transport protocols (SSE, Streamable HTTP, STDIO).
Kedro RAG MCP
An MCP server that enables users to query Kedro framework documentation using retrieval-augmented generation. It builds a local knowledge base from documentation files to help users navigate and apply Kedro's data science pipeline framework.
Pagos Data MCP Server
Enables Claude to retrieve BIN (Bank Identification Number) data for payment cards, with options for basic or enhanced insights through the Pagos API.
A Cloud Automator MCP server
An unofficial MCP server for using the Cloud Automator REST API.
PortOne Global MCP Server
Enables searching and reading of PortOne documentation, including OpenAPI schemas and product guides, through the Model Context Protocol. It allows AI agents to easily access and integrate payment-related technical specifications into their workflows.
Satellite Tracking MCP Server
Provides access to real-time satellite tracking data using the N2YO API, enabling users to get satellite positions, predict visible passes, search satellites by name or category, and retrieve TLE orbital data.
ERPNext MCP Server
A production-ready server that enables AI assistants like Claude Desktop to seamlessly integrate with ERPNext for document operations, reporting, and custom workflows through natural language interaction.