Discover Awesome MCP Servers

Extend your agent with 26,794 capabilities via MCP servers.

EpicMe MCP

An application that demonstrates the future of user interactions through natural language with LLMs, enabling user registration, authentication, and data interaction exclusively via Model Context Protocol (MCP) tools.

Stock Analysis MCP Server

An MCP server that provides real-time stock data, technical indicators, and financial metrics using the date.590.net data source. It enables AI-powered intelligent analysis and batch reporting through integration with the Deepseek API.

Google Drive MCP Server

Let's create an MCP server for Google Drive! We'll start with the spreadsheet.

TunnelHub MCP

Connects MCP clients to TunnelHub to monitor automations, inspect executions, and analyze logs or traces. It enables users to manage environments and troubleshoot integration failures through natural language commands.

N Lobby MCP Server

A Model Context Protocol server that provides secure access to N Lobby school portal data including announcements, schedules, and learning resources through browser-based authentication.

Keboola Explorer MCP Server

This server facilitates interaction with Keboola's Storage API, enabling users to browse and manage project buckets, tables, and components efficiently through Claude Desktop.

MCP OpenDART

Enables AI assistants to access South Korea's financial disclosure system (OpenDART), allowing users to retrieve corporate financial reports, disclosure documents, shareholder information, and automatically extract and search financial statement notes through natural language queries.

Coconuts MCP Server

Enables management of Google Maps saved places through a SQLite database. Allows users to store, query, and organize their Google Maps places data locally.

Terra Config MCP Server

Enables LLMs to configure the TerraAPI dashboard by managing health and fitness integrations, destinations, and provider credentials. It allows users to programmatically interact with the Terra ecosystem to handle developer settings and data source configurations.

fastmcp-opengauss

Enables interaction with openGauss databases through multiple transport methods (Stdio, SSE, Streamable-Http). Supports database operations and queries with configurable connection parameters.

Graphistry MCP

GPU-accelerated graph visualization and analytics server for Large Language Models that integrates with the Model Context Protocol (MCP), enabling AI assistants to visualize and analyze complex network data.

TimeChimp MCP Server

Enables interaction with the TimeChimp API v2 to manage projects, time entries, expenses, and invoices through natural language. It supports full CRUD operations across all major TimeChimp resources, including advanced OData query filtering and pagination.

doc-tools-mcp

Here are a few ways to approach implementing Word document reading and writing with MCP (presumably referring to Message Passing Concurrency) in Node.js, along with considerations and code snippets.

**Understanding the Challenge**

* **Word Processing Libraries:** Node.js doesn't have built-in Word processing capabilities. You'll need a library like `docx`, `mammoth`, or `officegen`. These libraries handle the complexities of the DOCX format.
* **Message Passing Concurrency (MCP):** Node.js is single-threaded. To achieve concurrency, you'll typically use techniques like:
  * **Child Processes:** Spawn separate Node.js processes to handle document processing tasks, using `child_process.fork` or `child_process.spawn`.
  * **Worker Threads:** (Node.js 12+) Use `worker_threads` for true parallelism within the same Node.js process. This is often more efficient than child processes for CPU-bound tasks.
  * **Message Queues (Redis, RabbitMQ):** Decouple the main application from the document processing tasks. The main app puts document processing requests onto a queue, and worker processes/threads consume and process them.

**General Architecture**

1. **Main Process/Thread:**
   * Receives requests to read or write Word documents (e.g., from an HTTP endpoint).
   * Serializes the document data and any processing instructions.
   * Sends a message to a worker process/thread or a message queue.
2. **Worker Process/Thread (or Queue Consumer):**
   * Receives the message.
   * Uses a Word processing library (`docx`, `mammoth`, `officegen`) to perform the read or write operation.
   * Serializes the result (e.g., extracted text, success/failure status).
   * Sends a message back to the main process/thread (or updates a database).
3. **Main Process/Thread:**
   * Receives the result from the worker.
   * Sends a response back to the client (e.g., the extracted text, a confirmation message).

**Example using Child Processes and `mammoth` (for reading)**

```javascript
// main.js (Main process)
const { fork } = require('child_process');
const express = require('express');

const app = express();
const port = 3000;

app.use(express.json()); // for parsing application/json

app.post('/extract-text', (req, res) => {
  const filePath = req.body.filePath; // Get the file path from the request
  if (!filePath) {
    return res.status(400).send({ error: 'File path is required' });
  }

  const worker = fork('./worker.js'); // Path to the worker script
  worker.send({ type: 'extractText', filePath: filePath });

  worker.on('message', (message) => {
    if (message.type === 'textExtracted') {
      res.send({ text: message.text });
      worker.kill(); // Terminate the worker after processing
    } else if (message.type === 'error') {
      res.status(500).send({ error: message.error });
      worker.kill();
    }
  });

  worker.on('exit', (code) => {
    if (code !== 0) {
      console.error(`Worker process exited with code ${code}`);
    }
  });
});

app.listen(port, () => {
  console.log(`Server listening at http://localhost:${port}`);
});
```

```javascript
// worker.js (Worker process)
const mammoth = require('mammoth');

process.on('message', async (message) => {
  if (message.type === 'extractText') {
    try {
      // mammoth reads the DOCX file and extracts its raw text content.
      const result = await mammoth.extractRawText({ path: message.filePath });
      process.send({ type: 'textExtracted', text: result.value });
    } catch (error) {
      console.error('Error extracting text:', error);
      process.send({ type: 'error', error: error.message });
    }
  }
});
```

**Explanation:**

* **`main.js`:**
  * Sets up an Express.js server with a `/extract-text` endpoint.
  * When a request is received, it uses `child_process.fork` to create a new Node.js process running `worker.js`.
  * It sends a message to the worker with the file path.
  * It listens for messages from the worker:
    * `textExtracted`: Sends the extracted text back to the client.
    * `error`: Sends an error response.
  * It kills the worker process after processing.
* **`worker.js`:**
  * Listens for messages from the parent process.
  * When it receives an `extractText` message:
    * Uses `mammoth.extractRawText` to read the document and extract its text.
    * Sends the extracted text back to the parent process.
    * Handles errors and sends an error message back to the parent.

**How to Run:**

1. **Install dependencies:**
   ```bash
   npm install express mammoth
   ```
2. **Create `main.js` and `worker.js`** (as shown above).
3. **Run the server:**
   ```bash
   node main.js
   ```
4. **Send a POST request to `http://localhost:3000/extract-text`** with a JSON body like:
   ```json
   { "filePath": "/path/to/your/document.docx" }
   ```
   (Replace `/path/to/your/document.docx` with the actual path to your Word document.)

**Important Considerations:**

* **Error Handling:** Robust error handling is crucial. Catch errors in both the main process and the worker process and send appropriate error messages.
* **Security:** Be very careful about accepting file paths from the client. Validate the file path to prevent malicious users from accessing arbitrary files on your server. Consider using a dedicated upload directory and generating unique filenames.
* **Resource Management:** Word processing can be memory-intensive. Limit the size of documents that can be processed to prevent your server from running out of memory. Terminate worker processes after they've finished their task to release resources.
* **Scalability:** For high-volume document processing, consider using a message queue (like Redis or RabbitMQ) to distribute the workload across multiple worker processes/threads. This will improve the scalability and resilience of your application.
* **Choosing a Library:**
  * **`docx`:** Good for creating and manipulating DOCX files programmatically; it is not designed for text extraction.
  * **`mammoth`:** Specifically designed for converting DOCX files to HTML and extracting their text. Excellent for extracting formatted text.
  * **`officegen`:** Another option for creating Office documents (DOCX, PPTX, XLSX).
* **Worker Threads:** If you're using Node.js 12 or later, `worker_threads` can be a more efficient alternative to child processes, especially for CPU-bound tasks. The code structure is similar, but you'll use `worker_threads.Worker` instead of `child_process.fork`. Be mindful of data sharing between threads (pass `ArrayBuffer`s via the `transferList` argument of `postMessage` for efficient data transfer).

**Example using Worker Threads (Node.js 12+)**

```javascript
// main.js (Main thread)
const { Worker } = require('worker_threads');
const express = require('express');

const app = express();
const port = 3000;

app.use(express.json());

app.post('/extract-text', (req, res) => {
  const filePath = req.body.filePath;
  if (!filePath) {
    return res.status(400).send({ error: 'File path is required' });
  }

  const worker = new Worker('./worker.js');
  worker.postMessage({ type: 'extractText', filePath: filePath });

  worker.on('message', (message) => {
    if (message.type === 'textExtracted') {
      res.send({ text: message.text });
    } else if (message.type === 'error') {
      res.status(500).send({ error: message.error });
    }
  });

  worker.on('error', (err) => {
    console.error('Worker error:', err);
    res.status(500).send({ error: 'Worker error' });
  });

  worker.on('exit', (code) => {
    if (code !== 0) {
      console.error(`Worker stopped with exit code ${code}`);
    }
  });
});

app.listen(port, () => {
  console.log(`Server listening at http://localhost:${port}`);
});
```

```javascript
// worker.js (Worker thread)
const { parentPort } = require('worker_threads');
const mammoth = require('mammoth');

parentPort.on('message', async (message) => {
  if (message.type === 'extractText') {
    try {
      const result = await mammoth.extractRawText({ path: message.filePath });
      parentPort.postMessage({ type: 'textExtracted', text: result.value });
    } catch (error) {
      console.error('Error extracting text:', error);
      parentPort.postMessage({ type: 'error', error: error.message });
    }
  }
});
```

**Key Differences with Worker Threads:**

* **`worker_threads` module:** Uses `require('worker_threads')`.
* **`Worker` constructor:** `new Worker('./worker.js')`.
* **`parentPort`:** The worker thread uses `parentPort` to communicate with the main thread.
* **`postMessage`:** Uses `postMessage` to send messages.

**Choosing Between Child Processes and Worker Threads:**

* **Child Processes:**
  * **Pros:** Good isolation (each process has its own memory space). More robust if a worker crashes (it won't bring down the main process).
  * **Cons:** Higher overhead (process creation, inter-process communication).
* **Worker Threads:**
  * **Pros:** Lower overhead (threads share the same memory space). Potentially faster for CPU-bound tasks.
  * **Cons:** Less isolation (a crash in a worker thread can crash the entire process). Requires careful synchronization to avoid race conditions when sharing data.

**Message Queues (Advanced)**

For a more scalable and robust solution, use a message queue like Redis or RabbitMQ. The main process publishes messages to the queue, and worker processes/threads consume messages from the queue. This decouples the main application from the document processing tasks, allowing you to scale the workers independently.

**Example (Conceptual with Redis):**

1. **Main Process:**
   * Receives a request to extract text.
   * Pushes a message onto a Redis queue (e.g., `docx_extraction_queue`) with the file path.
2. **Worker Process/Thread:**
   * Subscribes to the `docx_extraction_queue` in Redis.
   * When a message arrives, it extracts the text from the Word document.
   * Pushes a message onto another Redis queue (e.g., `docx_extraction_results`) with the extracted text and a correlation ID (to match the result to the original request).
3. **Main Process:**
   * Subscribes to the `docx_extraction_results` queue.
   * When a message arrives, it uses the correlation ID to find the original request and send the extracted text back to the client.

This approach is more complex to set up but provides significant benefits in terms of scalability, resilience, and decoupling.

**Summary**

Implementing Word document reading and writing with concurrency in Node.js requires careful consideration of the trade-offs between child processes, worker threads, and message queues. Choose the approach that best suits your application's requirements for performance, scalability, and robustness. Remember to prioritize error handling, security, and resource management.

MBTA Worcester Line MCP Server

Provides real-time train schedule information and live predictions for all 18 stations on the MBTA Worcester Line. It enables users to query upcoming departures and filter by direction or date through natural language.

MaiAgent MCP Server 3.0

An intelligent AI assistant routing and management platform that integrates 107 production-grade AI assistants into a unified enterprise intelligence service center.

Pega DX MCP Server

Transforms complex Pega Platform interactions into intuitive, conversational experiences by exposing Pega DX APIs through the standardized Model Context Protocol, enabling AI applications to interact with Pega through natural language.

Layer Prompt Manager

Helps you create an MCP server that your IDE can access to create and save prompts for your codebase.

MWAA MCP Server

Enables management of Amazon Managed Workflows for Apache Airflow (MWAA) environments and operations including DAG management, workflow execution monitoring, and access to Airflow connections and variables through a unified interface.

Search Intent MCP

A Model Context Protocol (MCP) based service that analyzes users' search keywords to determine their intent, providing classifications, reasoning, references, and search suggestions to support SEO analysis.

Nearest Tailwind Colors

Finds the closest Tailwind CSS palette colors to any given CSS color value. Supports multiple color spaces and customizable result filtering to help match designs to Tailwind's color system.

Sora MCP Server

Integrates with OpenAI's Sora 2 API to generate, remix, and manage AI-generated videos from text prompts. Supports video creation, status monitoring, downloading, and remixing through natural language commands.

Org-roam MCP Server

Enables interaction with org-roam knowledge bases, allowing search, retrieval, creation, and linking of notes while respecting org-roam's file structure and conventions.

FastMCP Document Analyzer

A comprehensive document analysis server that performs sentiment analysis, keyword extraction, readability scoring, and text statistics while providing document management capabilities including storage, search, and organization.

Local RAG

Privacy-first local document search using semantic search. Runs entirely on your machine with no cloud services, supporting PDF, DOCX, TXT, and Markdown files.

aws-pricing-mcp

Logseq MCP Server

Connects AI assistants to Logseq knowledge graphs to read, write, and search pages, blocks, and journals via the Model Context Protocol. It features 17 tools for full graph management, including CRUD operations, batch block insertion, and full-text search.

Docker MCP Server

Enables AI assistants to manage Docker containers, images, networks, volumes, and Compose services through the Model Context Protocol. It supports system operations, command execution within containers, and integration with Docker Hub and GitHub Container Registry.

DrissionPage MCP Server

A professional browser automation server that enables MCP clients to perform structured web navigation, element interaction, and data extraction using the DrissionPage framework. It features 14 deterministic tools optimized for LLMs to automate web workflows efficiently without relying on vision-based models.

Comax Payment Link MCP

Allows integration with Comax ERP/payment systems to create payment links, manage orders, and retrieve customer information using the MCP protocol.

Qdrant Sync MCP Server

Enables bidirectional synchronization of vector database collections between local and remote Qdrant instances with tools for comparison, dry-run previews, and sync operations in both directions.