Discover Awesome MCP Servers

Extend your agent with 20,402 capabilities via MCP servers.

MCP Multi-Server System

A dual-server MCP system with PostgreSQL integration that provides financial tools (stock prices, portfolio calculation, financial news) on one server and utility tools (weather, time, data processing, text analysis) on another server.
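A dual-server layout like this is typically wired into an MCP client by registering each server as a separate entry. A hypothetical Claude Desktop configuration for it (the commands and file names below are illustrative assumptions, not taken from the project) might look like:

```json
{
  "mcpServers": {
    "financial-tools": {
      "command": "python",
      "args": ["financial_server.py"]
    },
    "utility-tools": {
      "command": "python",
      "args": ["utility_server.py"]
    }
  }
}
```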

PostgreSQL MCP Server

A Model Context Protocol server that provides read-only access to PostgreSQL databases, enabling LLMs (large language models) to inspect database schemas and execute read-only queries.
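A typical client configuration for a read-only PostgreSQL server of this kind, sketched with the reference-server package name and a placeholder connection string (both assumptions for illustration):

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://localhost/mydb"
      ]
    }
  }
}
```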

MCPDemo - Visual SQL Chat Platform

An AI-driven data visualization and SQL chat platform that enables natural language to SQL conversion, interactive chart generation, and database querying through a comprehensive MCP interface. Supports multiple databases, intelligent chart recommendations, and secure data management with temporary link sharing.

Planning Data GOV.UK MCP Server

Auto-generated Model Context Protocol server that provides an interface to the planning.data.gov.uk API, allowing users to interact with UK planning data resources through natural language commands.

Gitea MCP Tool

Enables AI assistants to interact with Gitea repositories through intelligent tools for issue/PR management, workflow analysis, compliance checking, and content generation, plus 200+ CLI commands for complete CRUD operations.

MyFitnessPal MCP Server

Enables retrieval and analysis of MyFitnessPal nutrition data including daily summaries, meal breakdowns, exercise tracking, and macro/micronutrient analysis. Uses browser cookie authentication to access your personal MyFitnessPal account data through natural language queries.

Yandex Search MCP Server

Enables AI assistants to perform real-time web searches and retrieve AI-generated answers using the Yandex Search API. It provides tools for accessing up-to-date internet information with support for both raw search results and summarized content via the Yazeka model.

YouTube Knowledge MCP

Transforms YouTube into a queryable knowledge source with search, video details, transcript analysis, and AI-powered tools for summaries, learning paths, and knowledge graphs. Features quota-aware API access with caching and optional OpenAI/Anthropic integration for advanced content analysis.

Playwright Testing Framework

AI-driven test automation with Playwright and the Cursor MCP Server.

Cloudflare Remote MCP Server Template

A template for deploying MCP servers on Cloudflare Workers without authentication. Enables easy creation and deployment of custom MCP tools that can be accessed remotely via Claude Desktop or Cloudflare AI Playground.
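Since a Worker-hosted MCP server is reached over a URL rather than a local process, one common way to point Claude Desktop at it is the mcp-remote proxy; the Worker URL below is a placeholder:

```json
{
  "mcpServers": {
    "my-remote-mcp": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://my-mcp-server.my-account.workers.dev/sse"
      ]
    }
  }
}
```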

rest-to-mcp

A tutorial project for converting a REST API into an MCP server.

Stellar MCP

Enables interaction with Stellar Classic and Soroban smart contracts, allowing users to manage accounts, process payments, and handle asset operations. It provides comprehensive tools for building, deploying, and retrieving the interfaces of smart contracts on the Stellar network.

Decodo MCP Server

Enables web scraping and data extraction from websites with geographic flexibility, privacy features, and anti-detection capabilities. Supports scraping general websites, Google Search, Amazon Search, and Reddit with customizable parameters for rendering, geolocation, and locale.

MCP Server Markup Language (MCPML)

MCP Server Markup Language (MCPML) is a Python framework for building MCP servers with CLI and OpenAI Agent support.

Enrichment MCP Server

A Model Context Protocol server that enables users to perform third-party enrichment lookups for security observables (IP addresses, domains, URLs, emails) through services like VirusTotal, Shodan, and others.

City By Api Ninjas MCP Server

Provides access to the City By Api Ninjas API to retrieve detailed city information worldwide. Users can search and filter results by name, country, population range, and geographic coordinates.

SQLx MCP Server

Provides comprehensive database management tools for PostgreSQL, MySQL, and SQLite databases. Enables querying table structures, executing read-only and write queries, exporting DDL statements, and managing database metadata through natural language.

MCP server for LogSeq

Interacts with LogSeq via its API.

Claude Kali MCP Commander

An MCP server that provides secure access to Kali Linux cybersecurity tools through Claude's interface, enabling users to run Kali Linux commands directly from Claude Desktop.

CockroachDB MCP Server

The CockroachDB MCP Server is a natural language interface designed for agentic applications and LLMs to manage, monitor and query data in CockroachDB.

Echo MCP Server

A simple server implementing the Model Context Protocol (MCP) that echoes messages back, designed for testing MCP clients.

Fetch MCP Server

Fetches URLs from a target webpage using Playwright and streams the extracted links to clients in real time over Server-Sent Events (SSE), with a Node.js/Express server managing the scraping process.
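A minimal sketch of this pattern in Node.js, assuming Express for the SSE endpoint and Chromium via Playwright for the scraping (the target URL and port are placeholders, and error handling is kept minimal):

```javascript
// server.js: scrape page links with Playwright and stream them over SSE
const express = require('express');
const { chromium } = require('playwright');

const app = express();
const port = 3000; // placeholder port

app.get('/events', (req, res) => {
  // Standard SSE headers; flush them so the stream opens immediately
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');
  res.flushHeaders();

  // Each SSE event is a "data:" line followed by a blank line
  const sendEvent = (data) => res.write(`data: ${JSON.stringify(data)}\n\n`);

  (async () => {
    const browser = await chromium.launch({ headless: true });
    try {
      const page = await browser.newPage();
      await page.goto('https://www.example.com'); // placeholder target URL

      // Collect the href of every <a> element on the page
      const links = await page
        .locator('a')
        .evaluateAll((anchors) => anchors.map((a) => a.href));

      for (const link of links) sendEvent({ url: link });
      sendEvent({ status: 'complete' }); // signal the client to stop listening
    } catch (error) {
      sendEvent({ error: error.message });
    } finally {
      await browser.close();
      res.end(); // close the SSE stream
    }
  })();
});

app.listen(port, () => console.log(`Listening on http://localhost:${port}`));
```

A browser client can consume the stream with `new EventSource('http://localhost:3000/events')`, closing the connection once it receives the `complete` event.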

Spotify MCP Node Server

Enables AI assistants to control Spotify playback, manage playlists, search music, and access listening history. Requires Spotify Premium and uses secure OAuth 2.0 with PKCE authentication.

CST MCP

An MCP server that integrates CST Studio Suite with the Model Context Protocol to enable automated electromagnetic simulation workflows. It provides specialized tools for material definition management and direct interaction with the CST Studio Suite environment.

Angular MCP Toolkit

Provides 11 powerful tools for Angular development with enterprise-grade templates, scaffolding capabilities, and project generation features for modern Angular 20.0.x applications.

SBA MCP Server

Provides access to SBA small business size standards and SBIR/STTR federal R&D funding data. It enables users to look up NAICS thresholds, verify business eligibility, and search for specific firms or award records across various federal agencies.

MCP Codebase Index

Enables semantic search across your codebase using Google's Gemini embeddings and Qdrant Cloud vector storage. Supports 15+ programming languages with smart code chunking and real-time file change monitoring.
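A hypothetical client registration for a server like this, showing the kind of credentials it would need for Gemini embeddings and Qdrant Cloud (the package name and environment variable names are illustrative assumptions, not taken from the project):

```json
{
  "mcpServers": {
    "codebase-index": {
      "command": "npx",
      "args": ["-y", "mcp-codebase-index"],
      "env": {
        "GEMINI_API_KEY": "<your-gemini-key>",
        "QDRANT_URL": "https://<your-cluster>.qdrant.io",
        "QDRANT_API_KEY": "<your-qdrant-key>"
      }
    }
  }
}
```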

QMT-MCP-Server

Mirror of

FireScrape MCP Server

Enables AI models to search the web using DuckDuckGo, scrape webpage content in markdown format, and browse/read local files for code analysis and debugging.

MCP Web Extractor

MCP server that extracts web content using Readability.js.