Discover Awesome MCP Servers

Extend your agent with 26,715 capabilities via MCP servers.

n8n MCP Server

Enables Large Language Models to interact with n8n automation instances through the Model Context Protocol. Supports workflow management, execution, credentials handling, and security audits through natural language commands.

Arche Browser

An MCP server for comprehensive browser automation and full local PC control via shell commands and Python execution. It enables tasks like web navigation, file system management, and remote access through SSE with token authentication.

CortexScout

CortexScout is the Deep Research & Web Extraction module within the Cortex-Works ecosystem, designed for agent workloads that require token-efficient web retrieval, reliable anti-bot handling, and optional Human-in-the-Loop (HITL) fallback.

Brevo MCP Server

A comprehensive MCP server providing Claude with full access to Brevo's marketing automation platform through the official SDK, featuring tools for email operations, contact management, campaigns, SMS, conversations, webhooks, e-commerce, and account management.

Hound MCP

The dependency bloodhound for AI coding agents. Hound is a free, open-source MCP server that gives AI coding agents a nose for supply chain security. It scans packages for vulnerabilities, checks licenses, inspects dependency trees, and detects typosquatting — with zero API keys, zero config, and zero cost.

brain-trust

Enables AI agents to ask questions and review planning documents by connecting to OpenAI's GPT-4. Provides context-aware question answering and multi-level plan analysis with structured feedback including strengths, weaknesses, and suggestions.

Basic MCP Server

A minimal Model Context Protocol server template demonstrating basic implementation of tools, resources, and prompts built with Smithery SDK.

Open MCP

An open-source Model Context Protocol application management platform that allows users to create applications from GitHub repositories with automatic information extraction and database integration.

Alibaba Cloud FC MCP Server

A Model Context Protocol server that enables agent applications like Cursor and Cline to integrate with Alibaba Cloud Function Compute, allowing them to deploy and manage serverless functions through natural language interactions.

Useful-mcps

A list of small, useful MCP servers, including: docx_replace (replace tags in Word documents), yt-dlp (extract chapters and chapter-based subtitles), and mermaid (generate and render images using the mermaidchart.com API).

Google Analytics MCP Server by CData

HPE Aruba Networking Central MCP Server

Exposes 90 production-grade tools for interacting with the complete HPE Aruba Networking Central REST API surface, including network inventory, configuration, and security management. It features enterprise-ready OAuth2 handling and semantic tool filtering for optimized performance with both hosted and local LLMs.

Rootstock MCP Server

A backend service that enables seamless interaction with the Rootstock blockchain using the Model Context Protocol, providing standardized APIs for querying, transacting, and managing assets on Rootstock.

TechMCP - PSG College of Technology MCP Server

Enables AI assistants to access PSG College of Technology e-campus portal data including CA marks, attendance records, timetable schedules, and course information through natural language queries.

MCP Jupyter Server

Enables inspection and editing of Jupyter notebook files (.ipynb) through tools for reading, adding, updating, deleting, moving, and converting cells while preserving metadata.
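Since an `.ipynb` file is plain JSON in the Jupyter nbformat, cell operations like these ultimately reduce to JSON manipulation. A minimal sketch of the underlying format (the `addMarkdownCell` helper is hypothetical for illustration, not this server's actual tool API):

```javascript
// ipynb-sketch.js — appends a markdown cell to a notebook object.
// Assumes nbformat 4: top-level {nbformat, nbformat_minor, metadata, cells}.
const addMarkdownCell = (notebook, text) => {
  notebook.cells.push({
    cell_type: 'markdown',
    metadata: {},                  // existing cells' metadata is left untouched
    source: text.split(/(?<=\n)/), // nbformat stores source as an array of lines
  });
  return notebook;
};

// A minimal empty notebook, then one added cell.
const nb = { nbformat: 4, nbformat_minor: 5, metadata: {}, cells: [] };
addMarkdownCell(nb, '# Title\nHello');
```

Writing the result back is then just `fs.writeFile(path, JSON.stringify(nb, null, 1))`.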

Fusion MCP

A Model Context Protocol server that connects LLMs with Autodesk Fusion, enabling CAD operations through natural language dialogue.

Shaka Packager MCP Server

An MCP server that integrates Shaka Packager with Claude AI applications, enabling Claude to analyze, transcode, and package video files for streaming in formats such as HLS and DASH.

EpicMe MCP

An application that demonstrates the future of user interactions through natural language with LLMs, enabling user registration, authentication, and data interaction exclusively via Model Context Protocol (MCP) tools.

Stock Analysis MCP Server

An MCP server that provides real-time stock data, technical indicators, and financial metrics using the date.590.net data source. It enables AI-powered intelligent analysis and batch reporting through integration with the Deepseek API.

Google Drive MCP Server

Let's create an MCP server in Google Drive! We'll start with the spreadsheet.

N Lobby MCP Server

A Model Context Protocol server that provides secure access to N Lobby school portal data including announcements, schedules, and learning resources through browser-based authentication.

Remote MCP Server (Authless)

A template for deploying authentication-free MCP servers on Cloudflare Workers. Enables connecting remote MCP tools to clients like Claude Desktop or the Cloudflare AI Playground via SSE.
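Clients that only speak stdio are commonly bridged to such an SSE endpoint with the `mcp-remote` npm package; a hedged sketch of a `claude_desktop_config.json` entry, where the server name and workers.dev URL are placeholders:

```json
{
  "mcpServers": {
    "my-worker": {
      "command": "npx",
      "args": ["mcp-remote", "https://my-mcp-server.example.workers.dev/sse"]
    }
  }
}
```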

Keboola Explorer MCP Server

This server facilitates interaction with Keboola's Storage API, enabling users to browse and manage project buckets, tables, and components efficiently through Claude Desktop.

MCP OpenDART

Enables AI assistants to access South Korea's financial disclosure system (OpenDART), allowing users to retrieve corporate financial reports, disclosure documents, shareholder information, and automatically extract and search financial statement notes through natural language queries.

Terra Config MCP Server

Enables LLMs to configure the TerraAPI dashboard by managing health and fitness integrations, destinations, and provider credentials. It allows users to programmatically interact with the Terra ecosystem to handle developer settings and data source configurations.

fastmcp-opengauss

Enables interaction with openGauss databases through multiple transport methods (Stdio, SSE, Streamable-Http). Supports database operations and queries with configurable connection parameters.

TimeChimp MCP Server

Enables interaction with the TimeChimp API v2 to manage projects, time entries, expenses, and invoices through natural language. It supports full CRUD operations across all major TimeChimp resources, including advanced OData query filtering and pagination.

doc-tools-mcp

Here are a few ways to approach implementing concurrent Word document reading and writing in Node.js, along with considerations and code snippets.

**Understanding the Challenge**

* **Word processing libraries:** Node.js has no built-in Word processing. You'll need a library such as `docx`, `mammoth`, or `officegen` to handle the complexities of the DOCX format.
* **Concurrency:** Node.js is single-threaded. To achieve concurrency, you'll typically use one of:
  * **Child processes:** Spawn separate Node.js processes to handle document processing, via `child_process.fork` or `child_process.spawn`.
  * **Worker threads:** (Node.js 12+) Use `worker_threads` for true parallelism within the same Node.js process. Often more efficient than child processes for CPU-bound tasks.
  * **Message queues (Redis, RabbitMQ):** Decouple the main application from the document processing tasks. The main app puts processing requests onto a queue, and worker processes/threads consume and process them.

**General Architecture**

1. **Main process/thread:**
   * Receives requests to read or write Word documents (e.g., from an HTTP endpoint).
   * Serializes the document data and any processing instructions.
   * Sends a message to a worker process/thread or a message queue.
2. **Worker process/thread (or queue consumer):**
   * Receives the message.
   * Uses a Word processing library to perform the read or write operation.
   * Serializes the result (e.g., extracted text, success/failure status).
   * Sends a message back to the main process/thread (or updates a database).
3. **Main process/thread:**
   * Receives the result from the worker.
   * Sends a response back to the client (e.g., the extracted text, a confirmation message).

**Example using child processes and `mammoth` (for reading)**

```javascript
// main.js (Main process)
const { fork } = require('child_process');
const express = require('express');

const app = express();
const port = 3000;

app.use(express.json()); // for parsing application/json

app.post('/extract-text', (req, res) => {
  const filePath = req.body.filePath; // Get the file path from the request
  if (!filePath) {
    return res.status(400).send({ error: 'File path is required' });
  }

  const worker = fork('./worker.js'); // Path to the worker script
  worker.send({ type: 'extractText', filePath });

  worker.on('message', (message) => {
    if (message.type === 'textExtracted') {
      res.send({ text: message.text });
    } else if (message.type === 'error') {
      res.status(500).send({ error: message.error });
    }
    worker.kill(); // Terminate the worker after processing
  });

  worker.on('exit', (code) => {
    if (code !== 0) {
      console.error(`Worker process exited with code ${code}`);
    }
  });
});

app.listen(port, () => {
  console.log(`Server listening at http://localhost:${port}`);
});
```

```javascript
// worker.js (Worker process)
// Note: the `docx` package only generates documents; mammoth does extraction.
const mammoth = require('mammoth');
const fs = require('fs').promises; // Use promises for async file operations

process.on('message', async (message) => {
  if (message.type === 'extractText') {
    try {
      const buffer = await fs.readFile(message.filePath);
      const result = await mammoth.extractRawText({ buffer });
      process.send({ type: 'textExtracted', text: result.value });
    } catch (error) {
      console.error('Error extracting text:', error);
      process.send({ type: 'error', error: error.message });
    }
  }
});
```

**Explanation:**

* **`main.js`:**
  * Sets up an Express.js server with a `/extract-text` endpoint.
  * On each request, it uses `child_process.fork` to create a new Node.js process running `worker.js` and sends it a message with the file path.
  * It listens for messages from the worker: `textExtracted` sends the extracted text back to the client; `error` sends an error response.
  * It kills the worker process after processing.
* **`worker.js`:**
  * Listens for messages from the parent process.
  * On an `extractText` message, it reads the document with `fs.readFile`, extracts the text with `mammoth.extractRawText`, and sends the text back to the parent.
  * Errors are caught and reported back as `error` messages.

**How to Run:**

1. Install dependencies:
   ```bash
   npm install express mammoth
   ```
2. Create `main.js` and `worker.js` (as shown above).
3. Run the server:
   ```bash
   node main.js
   ```
4. Send a POST request to `http://localhost:3000/extract-text` with a JSON body like:
   ```json
   { "filePath": "/path/to/your/document.docx" }
   ```
   (Replace `/path/to/your/document.docx` with the actual path to your Word document.)

**Important Considerations:**

* **Error handling:** Robust error handling is crucial. Catch errors in both the main process and the worker, and send appropriate error messages.
* **Security:** Be very careful about accepting file paths from the client. Validate them to prevent malicious users from accessing arbitrary files on your server; consider a dedicated upload directory and generated unique filenames.
* **Resource management:** Word processing can be memory-intensive. Limit the size of documents that can be processed, and terminate workers after each task to release resources.
* **Scalability:** For high-volume document processing, use a message queue (such as Redis or RabbitMQ) to distribute the workload across multiple workers. This improves scalability and resilience.
* **Choosing a library:**
  * **`docx`:** Good for creating DOCX files programmatically; it does not read existing documents.
  * **`mammoth`:** Designed for converting DOCX files to HTML and extracting text. Excellent for extracting formatted text.
  * **`officegen`:** Another option for generating Office documents (DOCX, PPTX, XLSX).
* **Worker threads:** On Node.js 12 or later, `worker_threads` can be a more efficient alternative to child processes, especially for CPU-bound tasks. The code structure is similar, but you use `worker_threads.Worker` instead of `child_process.fork`. Be mindful of data sharing between threads (transfer `ArrayBuffer`s via the transfer-list argument of `postMessage` for efficient data transfer).

**Example using worker threads (Node.js 12+)**

```javascript
// main.js (Main thread)
const { Worker } = require('worker_threads');
const express = require('express');

const app = express();
const port = 3000;

app.use(express.json());

app.post('/extract-text', (req, res) => {
  const filePath = req.body.filePath;
  if (!filePath) {
    return res.status(400).send({ error: 'File path is required' });
  }

  const worker = new Worker('./worker.js');
  worker.postMessage({ type: 'extractText', filePath });

  worker.on('message', (message) => {
    if (message.type === 'textExtracted') {
      res.send({ text: message.text });
    } else if (message.type === 'error') {
      res.status(500).send({ error: message.error });
    }
    worker.terminate(); // Release the thread after responding
  });

  worker.on('error', (err) => {
    console.error('Worker error:', err);
    res.status(500).send({ error: 'Worker error' });
  });

  worker.on('exit', (code) => {
    if (code !== 0) {
      console.error(`Worker stopped with exit code ${code}`);
    }
  });
});

app.listen(port, () => {
  console.log(`Server listening at http://localhost:${port}`);
});
```

```javascript
// worker.js (Worker thread)
const { parentPort } = require('worker_threads');
const mammoth = require('mammoth');
const fs = require('fs').promises;

parentPort.on('message', async (message) => {
  if (message.type === 'extractText') {
    try {
      const buffer = await fs.readFile(message.filePath);
      const result = await mammoth.extractRawText({ buffer });
      parentPort.postMessage({ type: 'textExtracted', text: result.value });
    } catch (error) {
      console.error('Error extracting text:', error);
      parentPort.postMessage({ type: 'error', error: error.message });
    }
  }
});
```

**Key Differences with Worker Threads:**

* **`worker_threads` module:** `require('worker_threads')` instead of `child_process`.
* **`Worker` constructor:** `new Worker('./worker.js')`.
* **`parentPort`:** The worker thread uses `parentPort` to communicate with the main thread.
* **`postMessage`:** Messages are sent with `postMessage` rather than `send`.

**Choosing Between Child Processes and Worker Threads:**

* **Child processes:**
  * *Pros:* Good isolation (each process has its own memory space); a crashing worker won't bring down the main process.
  * *Cons:* Higher overhead (process creation, inter-process communication).
* **Worker threads:**
  * *Pros:* Lower overhead (threads share the same memory space); potentially faster for CPU-bound tasks.
  * *Cons:* Less isolation (a crash in a worker thread can crash the entire process); careful synchronization is needed to avoid race conditions when sharing data.

**Message Queues (Advanced)**

For a more scalable and robust solution, use a message queue like Redis or RabbitMQ. The main process publishes messages to the queue, and worker processes/threads consume them. This decouples the main application from the document processing tasks, allowing you to scale the workers independently.

**Example (conceptual, with Redis):**

1. **Main process:** receives a request to extract text and pushes a message onto a Redis queue (e.g., `docx_extraction_queue`) with the file path.
2. **Worker process/thread:** subscribes to `docx_extraction_queue`; when a message arrives, it extracts the text from the Word document, then pushes the result onto another queue (e.g., `docx_extraction_results`) together with a correlation ID that matches the result to the original request.
3. **Main process:** subscribes to `docx_extraction_results`; when a result arrives, it uses the correlation ID to find the original request and sends the extracted text back to the client.

This approach is more complex to set up but provides significant benefits in scalability, resilience, and decoupling.

**Summary**

Implementing concurrent Word document reading and writing in Node.js requires weighing the trade-offs between child processes, worker threads, and message queues. Choose the approach that best suits your application's requirements for performance, scalability, and robustness, and prioritize error handling, security, and resource management throughout.

mcp-server-playground

@excalimate/mcp-server

Turn Excalidraw diagrams into keyframe animations. AI-powered creation via MCP, E2E encrypted sharing, export to MP4/WebM/GIF/SVG.