Discover Awesome MCP Servers

Extend your agent with 17,090 capabilities via MCP servers.

Claude Kali MCP Commander

An MCP server that provides secure access to Kali Linux cybersecurity tools through Claude's interface, enabling users to run Kali Linux commands directly from Claude Desktop.

Echo MCP Server

A simple server implementing the Model Context Protocol (MCP) that echoes messages back, designed for testing MCP clients.
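An echo server like this is the "hello world" of MCP testing. For context, MCP messages are JSON-RPC 2.0; the sketch below shows the rough shape of a `tools/call` request and the reply an echo tool would send back (the helper names `buildToolCallRequest` and `echoToolResult` are hypothetical, used here only for illustration):

```javascript
// Hypothetical sketch of the JSON-RPC 2.0 framing the Model Context Protocol
// uses for a tools/call request, and the reply an echo server sends back.
function buildToolCallRequest(id, toolName, args) {
  return {
    jsonrpc: '2.0',
    id,
    method: 'tools/call',
    params: { name: toolName, arguments: args },
  };
}

function echoToolResult(request) {
  // An echo server simply returns the message argument as text content.
  return {
    jsonrpc: '2.0',
    id: request.id,
    result: {
      content: [{ type: 'text', text: request.params.arguments.message }],
    },
  };
}

const request = buildToolCallRequest(1, 'echo', { message: 'hello' });
const response = echoToolResult(request);
console.log(response.result.content[0].text); // "hello"
```

A real client would send the request over stdio or HTTP via an MCP SDK rather than constructing objects by hand; the point is that an echo round-trip exercises the full request/response framing with no other dependencies.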

Fetch MCP Server

A Node.js/Express server that fetches URLs from a webpage using Playwright and streams them to clients in real time over Server-Sent Events (SSE), exposed as a Model Context Protocol (MCP) server.

**Conceptual Overview**

1. **Playwright (web scraping):** launches a browser, navigates to the target webpage, and extracts the URLs you need.
2. **SSE (Server-Sent Events):** pushes the extracted URLs to the client in real time as they are found, which is more efficient than repeatedly polling the server.
3. **Node.js/Express.js (server):** provides the framework that starts the scraping process, manages the SSE connection, and sends the extracted URLs as events.

**Server Code Example**

```javascript
// server.js (Node.js/Express.js)
const express = require('express');
const { chromium } = require('playwright');

const app = express();
const port = 3000;

app.get('/events', (req, res) => {
  // SSE headers
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');
  res.flushHeaders(); // send the headers immediately

  // Format and send a single SSE event
  const sendEvent = (data) => {
    res.write(`data: ${JSON.stringify(data)}\n\n`);
  };

  // Start the Playwright scraping process
  (async () => {
    const browser = await chromium.launch();
    const page = await browser.newPage();
    try {
      await page.goto('https://www.example.com'); // replace with your target URL

      // Extract the href of every <a> element on the page
      const links = await page.locator('a').evaluateAll(
        (anchors) => anchors.map((a) => a.href)
      );

      // Send each URL as its own SSE event
      for (const link of links) {
        sendEvent({ url: link });
      }
      sendEvent({ status: 'complete' }); // signal completion
    } catch (error) {
      console.error('Error during scraping:', error);
      sendEvent({ error: error.message }); // send the error to the client
    } finally {
      await browser.close();
      res.end(); // close the SSE connection
    }
  })();

  req.on('close', () => {
    console.log('Client disconnected');
  });
});

app.listen(port, () => {
  console.log(`Server listening at http://localhost:${port}`);
});
```

**How it works**

1. **Dependencies:** install with `npm install express playwright`.
2. **SSE endpoint (`/events`):** the `Content-Type: text/event-stream`, `Cache-Control: no-cache`, and `Connection: keep-alive` headers establish the SSE stream, and `res.flushHeaders()` sends them immediately, which SSE requires.
3. **`sendEvent(data)`:** formats each message as `data: <JSON>\n\n`; the `data:` prefix and the two trailing newlines are required by the SSE protocol, and `JSON.stringify()` serializes the payload.
4. **Scraping:** `chromium.launch()` starts a browser (`firefox.launch()` or `webkit.launch()` also work); `page.locator('a')` selects every link (any CSS selector works here), and `.evaluateAll(...)` runs in the browser context to collect each `href`. Each URL is streamed as its own event, followed by a final `{status: "complete"}` event so the client knows when to stop listening.
5. **Cleanup:** the `finally` block closes the browser and ends the SSE response even when an error occurs, and `req.on('close', ...)` lets you clean up or log when the client disconnects.

**Client-Side (Browser) Code Example**

```html
<!DOCTYPE html>
<html>
<head>
  <title>SSE Example</title>
</head>
<body>
  <h1>Extracted URLs:</h1>
  <ul id="urlList"></ul>
  <script>
    // Replace with your server URL
    const eventSource = new EventSource('http://localhost:3000/events');

    eventSource.onmessage = (event) => {
      const data = JSON.parse(event.data);
      if (data.url) {
        const listItem = document.createElement('li');
        listItem.textContent = data.url;
        document.getElementById('urlList').appendChild(listItem);
      } else if (data.status === 'complete') {
        console.log('Scraping complete!');
        eventSource.close(); // close the connection when done
      } else if (data.error) {
        console.error('Error from server:', data.error);
        eventSource.close();
      }
    };

    eventSource.onerror = (error) => {
      console.error('SSE error:', error);
      eventSource.close();
    };
  </script>
</body>
</html>
```

The `EventSource` object opens the SSE connection to the server's `/events` endpoint. `onmessage` fires for each event: URL events are appended to the list, a `complete` event closes the connection, and an `error` event is logged before closing. `onerror` handles connection-level failures.

**How to Run**

1. Save the server-side code as `server.js` and the client-side code as `index.html`.
2. Run `npm install express playwright`.
3. Start the server with `node server.js`.
4. Open `index.html` in your web browser.

**Important Considerations and Improvements**

* **Error handling:** the handling in both server and client is basic; add more robust handling before production use.
* **Scalability:** for high-volume scraping, consider a message queue (e.g., RabbitMQ, Kafka) to decouple the scraping process from the SSE server.
* **Rate limiting:** throttle your requests to avoid overloading the target website and getting blocked.
* **User-Agent:** set a realistic User-Agent header in Playwright to avoid being identified as a bot.
* **Headless mode:** run Playwright headless (`chromium.launch({ headless: true })`) for better performance on a server.
* **Selectors:** use the browser's developer tools to find CSS selectors that target exactly the elements containing the URLs you want.
* **Dynamic content:** if the site loads content with JavaScript, wait with `page.waitForSelector()` (or, as a last resort, `page.waitForTimeout()`) before extracting URLs.
* **Control protocol:** the example only streams results; a fuller MCP-style server would add Express routes for commands (start scraping, stop scraping, change the target URL), with the SSE stream reporting status.
* **Authentication:** if the target site requires login, handle it in your Playwright code by filling forms, clicking buttons, and storing cookies.
* **Legal and ethical considerations:** always respect the website's `robots.txt` file and terms of service, avoid scraping sites that prohibit it, and be mindful of the load you place on their servers.
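The `data: ...\n\n` framing is the entire wire format for a simple SSE message, so it is easy to isolate and unit-test. A minimal sketch (the `formatSseEvent` helper name is hypothetical):

```javascript
// Sketch of the SSE wire format used by the server above: each message is a
// `data:` line followed by a blank line (two newlines end the event).
function formatSseEvent(data) {
  return `data: ${JSON.stringify(data)}\n\n`;
}

console.log(formatSseEvent({ url: 'https://www.example.com' }));
// data: {"url":"https://www.example.com"}
```

Keeping the framing in one small function means the scraping logic never touches protocol details, and the same helper can later emit optional `event:` or `id:` fields if the stream grows beyond plain data messages.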

Chuk Design System

Provides access to a universal design token system with 7 pre-built themes (colors, typography, spacing, motion) and multi-format export capabilities for Canva, Remotion, PPTX, CSS, and W3C JSON.

mcp_server

MCP server for LLM integration.

shortcuts-mcp-server

MCP Remote Server

Enables execution of SSH commands on remote servers and management of Google Compute Engine (GCE) instances through Cursor IDE.

Acumatica MCP Server by CData

This read-only MCP Server allows you to connect to Acumatica data from Claude Desktop through CData JDBC Drivers. Free (beta) read/write servers available at https://www.cdata.com/solutions/mcp

API MCP Server

A TypeScript-based MCP server that implements a simple notes system, providing resources for accessing notes via URIs, tools for creating notes, and prompts for generating summaries.

SuperMemory MCP

A tool that makes memories stored in ChatGPT accessible across various language models without requiring logins or paywalls.

DDG MCP2

A basic MCP server template built with FastMCP framework that provides example tools for echoing messages and retrieving server information. Serves as a starting point for developing custom MCP servers with Docker support and CI/CD integration.

Azure DevOps MCP Proxy

Enables interaction with Azure DevOps through Personal Access Token authentication. Supports work item management, wiki operations, project/repository listing, and build pipeline access through natural language.

Neo4j MCP Clients & Servers

Model Context Protocol with Neo4j

akhq-mcp-server

Experimental Model Context Protocol server for the AKHQ Kafka monitoring tool.

MCP AgentRun Server

Enables safe Python code execution in isolated Docker containers through the AgentRun framework. Provides automatic container lifecycle management and comprehensive error handling for secure and reproducible code execution.

XML Documents MCP Server by CData

Pabal MCP

Manages App Store Connect and Google Play Console metadata, releases, and ASO workflows locally through MCP tools, enabling store management directly from AI clients without manual console navigation.

🚀 Go-Tapd-SDK

The Go Tapd SDK is a Go client library for accessing the Tapd API, with support for the latest MCP server.

Polymarket MCP Tool

A Model Context Protocol server that enables interaction with Polymarket prediction markets through Claude Desktop.

WAHA MCP Server

Enables AI assistants to interact with WhatsApp through the WAHA (WhatsApp HTTP API) platform. Supports chat management, message operations including sending/receiving messages, and marking chats as read.

MCP Demo Server

A demonstration server based on Model Context Protocol (MCP) that showcases how to build custom tools for AI assistants, providing mathematical calculation and multilingual greeting capabilities.

MCP REST API Server

A server implementation of the Model Context Protocol (MCP) that provides REST API endpoints for managing and interacting with MCP resources.

Databricks MCP Server Template

Enables AI assistants like Claude to interact with Databricks workspaces through a secure, authenticated interface. Supports custom prompts and tools that leverage the Databricks SDK for workspace management, job execution, and SQL operations.

Trino MCP Server

Enables database schema analysis and management for Trino servers through dynamic connections. Supports DDL validation, dependency analysis, schema documentation generation, and safe SQL execution with multiple concurrent connections.

Cursor Talk to Figma MCP

Enables Cursor AI to communicate with Figma for reading designs and modifying them programmatically, allowing users to automate design tasks through natural language.

Tambo Docs MCP Server

Enables AI assistants to discover, fetch, and search through Tambo documentation from docs.tambo.co. Provides intelligent content parsing with caching for improved performance when accessing technical documentation.

MCP API Server Template

MCP Meeting Summary System

An AI-powered meeting assistant that combines a FastAPI backend with a React frontend to generate high-quality meeting summaries and provide Q&A functionality using OpenAI and Selenium.

TG_MCP

A lightweight Python interface that exposes TigerGraph database operations as structured tools and URI-based resources for Claude and other MCP agents.

Weather MCP Server

Provides current weather data and hourly forecasts for any location worldwide using the Open-Meteo API, accessible through HTTP transport without requiring an API key.
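As a rough illustration of the kind of request such a server issues under the hood, the sketch below builds an Open-Meteo forecast URL (assuming Open-Meteo's documented `/v1/forecast` endpoint and its `latitude`, `longitude`, `current_weather`, and `hourly` query parameters; the `buildForecastUrl` helper name is hypothetical):

```javascript
// Sketch of building an Open-Meteo forecast request. The endpoint takes
// latitude/longitude as query parameters and requires no API key.
function buildForecastUrl(latitude, longitude) {
  const params = new URLSearchParams({
    latitude: String(latitude),
    longitude: String(longitude),
    current_weather: 'true',   // include current conditions
    hourly: 'temperature_2m',  // request an hourly temperature forecast
  });
  return `https://api.open-meteo.com/v1/forecast?${params}`;
}

console.log(buildForecastUrl(52.52, 13.41)); // Berlin coordinates
```

An MCP weather tool would then `fetch()` this URL and relay the JSON response to the model, with no credentials to manage.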