Discover Awesome MCP Servers
Extend your agent with 16,005 capabilities via MCP servers.
- All (16,005)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)
Echo MCP Server
A simple server implementing the Model Context Protocol (MCP) that echoes messages back, designed for testing MCP clients.
Fetch MCP Server
Fetches URLs from web pages using a Playwright-driven headless browser and streams progress and results to clients in real time over Server-Sent Events (SSE), orchestrated by a Node.js/Express server that exposes endpoints to start, stop, and check the status of a fetch job.
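For reference, a minimal sketch of the Playwright-plus-SSE pattern described above (assuming Node.js with the `express` and `playwright` packages installed; the `/fetch-urls` endpoint and its query parameter are illustrative, not this server's actual API):

```javascript
const express = require('express');
const { chromium } = require('playwright');

const app = express();

// Stream every link found on the requested page back to the client
// as Server-Sent Events, e.g. GET /fetch-urls?url=https://example.com
app.get('/fetch-urls', async (req, res) => {
  if (!req.query.url) {
    return res.status(400).json({ error: 'url query parameter is required' });
  }

  // Open the SSE stream.
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.flushHeaders();

  const browser = await chromium.launch({ headless: true });
  try {
    const page = await browser.newPage();
    await page.goto(req.query.url);
    // Collect every href on the page inside the browser context.
    const urls = await page.evaluate(() =>
      Array.from(document.querySelectorAll('a'), (a) => a.href)
    );
    for (const url of urls) {
      res.write(`data: ${JSON.stringify({ url })}\n\n`);
    }
  } catch (err) {
    res.write(`data: ${JSON.stringify({ error: err.message })}\n\n`);
  } finally {
    await browser.close();
    res.end();
  }
});

app.listen(3000, () => console.log('Listening on http://localhost:3000'));
```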
PC-MCP
This project provides an MCP server that runs on a personal computer; it is currently used mainly for demonstrations paired with the smart-pet-with-mcp project.
Pylon MCP Server
Enables interaction with Pylon's customer support platform API to manage users, contacts, issues, and knowledge base articles through natural language commands.
mcp_server
MCP server for LLM integration.
Cloudflare Remote PostgreSQL Database MCP Server
Enables chatting with PostgreSQL databases through secure GitHub OAuth authentication, supporting read operations for all users and write operations for privileged users. Deployable as a production-ready remote MCP server on Cloudflare Workers with automatic schema discovery and SQL injection protection.
DDG MCP2
A basic MCP server template built with FastMCP framework that provides example tools for echoing messages and retrieving server information. Serves as a starting point for developing custom MCP servers with Docker support and CI/CD integration.
chromium-arm64
MCP server that enables browser automation and web testing on ARM64 devices like Raspberry Pi, allowing users to navigate websites, take screenshots, execute JavaScript, and perform UI testing via Claude.
Weather MCP Server
A Model Context Protocol server that provides weather information and forecasts based on user location or address input.
Oracle Financials Cloud MCP Server by CData
mcd-demo
Tests building a simple MCP server and integrating it with a LangChain agent.
mcp_repo_9ebf5242
A test repository created by the MCP Server test script for GitHub.
ABS MCP Server
An MCP server that provides AI assistants with access to Australian Bureau of Statistics data through the SDMX-ML API, enabling statistical data querying and analysis.
ynab-mcp-server
Mirror of
Postman
Postman’s remote MCP server connects AI agents, assistants, and chatbots directly to your APIs on Postman. Use natural language to prompt AI to automate work across your Postman collections, environments, workspaces, and more.
Vitest MCP Server
AI-optimized Vitest interface that provides structured test output, visual debugging context, and intelligent coverage analysis for more effective AI assistance with testing.
code-to-tree
DAV MCP Server
A Model Context Protocol server that allows users to interact with their CalDAV, CardDAV, and WebDAV services (calendars, contacts, and files) from Fastmail and Apple iCloud accounts.
Dash MCP Server
Provides tools to interact with the Dash documentation browser API, allowing users to list installed docsets, search across documentation, and enable full-text search.
Linear MCP Server
A Linear MCP implementation that handles all Linear resource types.
DuckDuckGo MCP Server
A basic MCP server template built with FastMCP framework that provides example tools for echoing messages and retrieving server information. This appears to be a starter template rather than a fully functional DuckDuckGo integration.
Tambo Docs MCP Server
Enables AI assistants to discover, fetch, and search through Tambo documentation from docs.tambo.co. Provides intelligent content parsing with caching for improved performance when accessing technical documentation.
MCP API Server Template
Weather MCP Server
Provides current weather data and hourly forecasts for any location worldwide using the Open-Meteo API, accessible through HTTP transport without requiring an API key.
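As a point of reference, a minimal sketch of the kind of Open-Meteo request a server like this wraps (run with Node.js 18+ for the built-in `fetch`; the Berlin coordinates are just an example):

```javascript
// Fetch current conditions and an hourly temperature series from
// Open-Meteo; no API key is required. Coordinates below are Berlin.
const url = 'https://api.open-meteo.com/v1/forecast'
  + '?latitude=52.52&longitude=13.41'
  + '&current_weather=true&hourly=temperature_2m';

fetch(url)
  .then((res) => res.json())
  .then((data) => {
    console.log('Current temperature:', data.current_weather.temperature, '°C');
    console.log('Next hours:', data.hourly.temperature_2m.slice(0, 3));
  })
  .catch((err) => console.error('Request failed:', err));
```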
A Cloud Automator MCP server
An unofficial MCP server for using the Cloud Automator REST API.
MCP Server Markup Language (MCPML)
MCP Server Markup Language (MCPML) - A Python framework for building MCP servers with CLI and OpenAI Agent support.
XRAY MCP
Enables AI assistants to understand and navigate codebases through structural analysis. Provides code mapping, symbol search, and impact analysis using ast-grep for accurate parsing of Python, JavaScript, TypeScript, and Go projects.
GitLab MCP Server
MCP Selenium Server
A Model Context Protocol server implementation that enables browser automation through standardized MCP clients, supporting features like navigation, element interaction, and screenshots across Chrome, Firefox, and Edge browsers.
A2A MCP Server
A centralized server that tracks and manages connected agents, providing a web interface to monitor their status while enabling agent communication through a central point.