Discover Awesome MCP Servers

Extend your agent with 14,265 capabilities via MCP servers.

FreeCAD MCP

Enables control of FreeCAD CAD software from Claude Desktop through natural language commands. Supports creating, editing, and managing 3D objects, executing Python code, and generating screenshots of designs.

MCP-server-using-python

We are building a local MCP server called mix_server, a working connection to Claude for desktop so you can talk to your tools in natural language.

YouTube Knowledge MCP

Transforms YouTube into a queryable knowledge source with search, video details, transcript analysis, and AI-powered tools for summaries, learning paths, and knowledge graphs. Features quota-aware API access with caching and optional OpenAI/Anthropic integration for advanced content analysis.

Playwright Testing Framework

AI-driven test automation with Playwright, Cursor, and an MCP server.

shortcuts-mcp-server

Acumatica MCP Server by CData

This read-only MCP Server allows you to connect to Acumatica data from Claude Desktop through CData JDBC Drivers. Free (beta) read/write servers available at https://www.cdata.com/solutions/mcp

SuperMemory MCP

A tool that makes memories stored in ChatGPT accessible across various language models without requiring logins or paywalls.

Azure DevOps MCP Proxy

Enables interaction with Azure DevOps through Personal Access Token authentication. Supports work item management, wiki operations, project/repository listing, and build pipeline access through natural language.

Neo4j MCP Clients & Servers

Model Context Protocol with Neo4j

akhq-mcp-server

Experimental Model Context Protocol server for the AKHQ Kafka monitoring tool.

MCP AgentRun Server

Enables safe Python code execution in isolated Docker containers through the AgentRun framework. Provides automatic container lifecycle management and comprehensive error handling for secure and reproducible code execution.

XML Documents MCP Server by CData

GitHub-Jira MCP Server

Enables secure integration between GitHub and Jira with permission controls, allowing users to manage repositories, create issues and pull requests, and handle Jira project workflows through natural language. Supports OAuth authentication and comprehensive security enforcement for both platforms.

🚀 Go-Tapd-SDK

The Go Tapd SDK is a Go client library for accessing the Tapd API, and it also supports the latest MCP server.

Polymarket MCP Tool

A Model Context Protocol server that enables interaction with Polymarket prediction markets through Claude Desktop.

MCP Meeting Summary System

An AI-powered meeting assistant that combines FastAPI backend with React frontend to generate high-quality meeting summaries and provide Q&A functionality using OpenAI and Selenium.

TG_MCP

A lightweight Python interface that exposes TigerGraph database operations as structured tools and URI-based resources for Claude and other MCP agents.

JetsonMCP

Connects AI assistants to NVIDIA Jetson Nano systems for edge computing management, enabling natural language control of AI workloads, hardware optimization, and system administration tasks.

mcp-confluent

An MCP server implementation built to interact with the Confluent Kafka and Confluent Cloud REST APIs.

Pagos Data MCP Server

Enables Claude to retrieve BIN (Bank Identification Number) data for payment cards, with options for basic or enhanced insights through the Pagos API.
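For context, a BIN is simply the leading six to eight digits of a payment card number. A hypothetical helper (illustrative only, not part of the Pagos API) for extracting the BIN before a lookup might look like:

```python
def extract_bin(card_number: str, length: int = 6) -> str:
    """Return the BIN: the leading digits of a payment card number."""
    if length not in (6, 8):
        raise ValueError("BINs are conventionally 6 or 8 digits long")
    # Strip spaces, dashes, and any other non-digit separators first
    digits = "".join(ch for ch in card_number if ch.isdigit())
    return digits[:length]
```

A server like this would pass only the BIN, never the full card number, to the enrichment API.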

Z3 Theorem Prover with Functional Programming

An MCP server for the Z3 theorem prover.

Enrichment MCP Server

A Model Context Protocol server that enables users to perform third-party enrichment lookups for security observables (IP addresses, domains, URLs, emails) through services like VirusTotal, Shodan, and others.
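A server like this must first decide which lookup service an observable should be routed to. As an illustrative sketch (assumed, not taken from this project), a minimal Python classifier for the four observable types mentioned could be:

```python
import ipaddress
import re


def classify_observable(value: str) -> str:
    """Classify a security observable as ip, url, email, or domain."""
    try:
        ipaddress.ip_address(value)  # accepts both IPv4 and IPv6 literals
        return "ip"
    except ValueError:
        pass
    if re.match(r"^[a-z][a-z0-9+.-]*://", value, re.IGNORECASE):
        return "url"  # has a URI scheme such as http:// or https://
    if "@" in value:
        return "email"
    return "domain"  # fallback for bare hostnames
```

Real servers typically apply stricter validation per type, but the routing idea is the same.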

Remote MCP Server on Cloudflare

Union MCP

An MCP server that lets Claude models use Union tasks, workflows, and apps as tools in conversations.

Claude Kali MCP Commander

An MCP server that provides secure access to Kali Linux cybersecurity tools through Claude's interface, enabling users to run Kali Linux commands directly from Claude Desktop.

Echo MCP Server

A simple server implementing the Model Context Protocol (MCP) that echoes messages back, designed for testing MCP clients.

Fetch MCP Server

Fetches URLs from a webpage using Playwright and streams the extracted links to clients in real time over Server-Sent Events (SSE), built on Node.js and Express.js.
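The entry above streams results over Server-Sent Events. As a language-neutral illustration (shown in Python, with a hypothetical payload shape), each SSE message is a `data:` field carrying the payload, terminated by a blank line:

```python
import json


def format_sse_event(payload: dict) -> str:
    """Frame a JSON payload as a single Server-Sent Events message."""
    # The SSE wire format: a "data:" field, then two newlines to end the event
    return f"data: {json.dumps(payload)}\n\n"
```

A streaming endpoint would call this once per extracted URL and write the result to the open response.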

godoc-mcp-server

An MCP server that provides pkg.go.dev documentation lookups. For all Golang programmers.

mxHERO Mail2Cloud MCP

High-performance email search for teams and companies. Secure and scalable across email accounts, with custom filters (e.g., customer-facing communications only).

Miro MCP Server

Model Context Protocol (MCP) server for Miro.