Discover Awesome MCP Servers
Extend your agent with 20,299 capabilities via MCP servers.
- All (20,299)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)
ExcelReadMCP
Enables reading and searching Excel files through MCP-compatible clients. Provides tools to retrieve workbook metadata, read sheet contents, and search across all sheets using absolute file paths.
AirNow MCP Server
A Model Context Protocol implementation that enables LLMs to access real-time, forecasted, and historical U.S. air quality data through the AirNow API.
chainlist-mcp
Node Omnibus MCP Server
Mirror
Crunchbase MCP Server
A Model Context Protocol (MCP) server that gives AI assistants access to Crunchbase data. Through this server, AI assistants can search for companies and retrieve company details, funding information, acquisitions, and people data from Crunchbase.
Mcp Server Docker
## Example: Creating a Minecraft (MCP) Server with Docker

The following is a simple example of building a Minecraft (MCP) server with Docker. It covers only the basic setup; adjust the configuration to your needs.

**1. Create a Dockerfile:**

First, create a Dockerfile. This file contains the instructions used to build the Docker image.

```dockerfile
FROM openjdk:17-slim

# Install wget (the slim base image does not include it)
RUN apt-get update && apt-get install -y --no-install-recommends wget \
    && rm -rf /var/lib/apt/lists/*

# Download the Minecraft server file (change to the desired version)
ARG MINECRAFT_VERSION=1.20.4
RUN wget https://piston-data.mojang.com/v1/objects/8f3dd283551f30656562e643b1a2145182c5a63a/server.jar -O minecraft_server.jar

# Accept the EULA by creating the server configuration file (eula.txt)
RUN echo "eula=true" > eula.txt

# Expose the default Minecraft port
EXPOSE 25565

# Volume for persistent data
VOLUME /data

# Server start command
ENTRYPOINT ["java", "-Xmx2G", "-Xms2G", "-jar", "minecraft_server.jar", "nogui"]
```

**Explanation:**

* `FROM openjdk:17-slim`: uses a Java 17 base image; the Minecraft server requires Java. Because the slim image ships without `wget`, the first `RUN` step installs it.
* `ARG MINECRAFT_VERSION=1.20.4`: specifies the Minecraft server version; change it to the version you want.
* `RUN wget ...`: downloads the Minecraft server file. The URL above is an example; check the official Minecraft website for the latest download link.
* `RUN echo "eula=true" > eula.txt`: accepts the Minecraft EULA (End User License Agreement). Read and agree to the EULA before running the server.
* `EXPOSE 25565`: exposes the Minecraft server's default port, 25565, outside the Docker container.
* `VOLUME /data`: declares `/data` as a volume. This directory is used to store the Minecraft server data (world, settings, etc.).
* `ENTRYPOINT [...]`: the command that runs the server. `-Xmx2G` and `-Xms2G` set the maximum and minimum memory allocation to 2 GB; `nogui` runs the server without a GUI.

**2. Build the Docker image:**

Run the following command in the directory containing the Dockerfile:

```bash
docker build -t minecraft-server .
```

**Explanation:**

* `docker build`: builds a Docker image.
* `-t minecraft-server`: tags the image `minecraft-server`.
* `.`: uses the current directory as the build context.

**3. Run the Docker container:**

```bash
docker run -d -p 25565:25565 -v minecraft_data:/data minecraft-server
```

**Explanation:**

* `docker run`: runs a Docker container.
* `-d`: runs the container in the background.
* `-p 25565:25565`: maps port 25565 on the host to port 25565 in the container.
* `-v minecraft_data:/data`: mounts a Docker volume named `minecraft_data` at the container's `/data` directory, so the server data is kept even if the container is deleted.
* `minecraft-server`: the name of the Docker image to run.

**4. Connect to the server:**

You can now connect with a Minecraft client. The server address is the IP address or domain name of the host running the Docker container.

**Notes:**

* This example covers only the basic setup; adjust the configuration to your needs.
* Minecraft server performance depends on the host system's resources.
* Use a Docker volume to store server data so it persists when the container is deleted.
* Check your firewall settings to keep the Minecraft server secure.

Hopefully this example helps you set up a Minecraft server with Docker.
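As a convenience, the build and run commands above can also be captured in a Compose file. This is a sketch; the service name is illustrative, while the image and volume names follow the example above:

```yaml
# docker-compose.yml - equivalent of the docker build / docker run commands above
services:
  minecraft:
    build: .                   # build the Dockerfile in this directory
    image: minecraft-server
    ports:
      - "25565:25565"          # expose the default Minecraft port
    volumes:
      - minecraft_data:/data   # persist server data across container restarts
    restart: unless-stopped

volumes:
  minecraft_data:
```

With this file in place, `docker compose up -d` replaces both the build and run steps.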
Data Visualization MCP Server
data-viz-mcp-server
Product MCP Server
Enables AI agents to manage an in-memory product catalog with tools to add products, search by name or category, and retrieve product details by ID.
Remote MCP Server
A serverless solution for deploying authentication-free Model Context Protocol (MCP) servers on Cloudflare Workers, allowing users to create custom AI tools that can be accessed from Cloudflare AI Playground or Claude Desktop.
Snowflake Cortex MCP Server
Enables users to query both structured and unstructured data in Snowflake using Cortex AI features including Cortex Search for RAG applications, Cortex Analyst for semantic modeling, and Cortex Agent for agentic orchestration across data types.
interactive-mcp
SCP Local MCP Server
Enables secure access to customer e-commerce data (orders, loyalty points, offers, preferences) through the Shopper Context Protocol with OAuth 2.0 authentication. Supports discovering and connecting to SCP-enabled merchants for personalized shopping assistance.
Claude Code MCP Server
A server that allows LLMs to run Claude Code with all permissions bypassed automatically, enabling code execution and file editing without permission interruptions.
Finance MCP
Enables financial research and analysis through AI agents that combine web search, content crawling, entity extraction, and deep research workflows. Supports extracting stock/fund entities with security codes and conducting structured financial investigations.
MCP Server Demo in python
Here's a basic implementation of a Model Context Protocol (MCP) server in Python, served over the network using Server-Sent Events (SSE) transport. This is a simplified example and will need to be adapted to your specific MCP requirements (message format, error handling, authentication, etc.).

```python
import asyncio
import json
import uuid

from aiohttp import web

# In-memory storage for connected clients (replace with a database for persistence)
clients = {}  # {client_id: response}


async def handle_mcp_request(request):
    """Handles incoming MCP requests (e.g., POST requests with model input)."""
    try:
        data = await request.json()  # Assuming JSON payload

        # Process the data (e.g., send it to your model)
        # Replace this with your actual model inference logic
        model_output = await run_model(data)  # Simulate model inference

        client_id = data.get("client_id")  # Assuming client_id is in the request
        if not client_id:
            return web.json_response({"error": "client_id is required"}, status=400)

        if client_id not in clients:
            return web.json_response({"error": "Client not connected"}, status=404)

        # Send the model output back to the client via SSE
        await send_sse_message(clients[client_id], json.dumps(model_output))

        return web.json_response({"status": "success"})

    except json.JSONDecodeError:
        return web.json_response({"error": "Invalid JSON"}, status=400)
    except Exception as e:
        print(f"Error processing request: {e}")
        return web.json_response({"error": str(e)}, status=500)


async def run_model(data):
    """Simulates running a model. Replace with your actual model inference."""
    await asyncio.sleep(1)  # Simulate processing time
    # Example: echo back the input with a "processed" flag
    return {"input": data, "processed": True}


async def sse_handler(request):
    """Handles SSE connections from clients."""
    client_id = str(uuid.uuid4())  # Generate a unique client ID

    response = web.StreamResponse(
        status=200,
        headers={
            'Content-Type': 'text/event-stream',
            'Cache-Control': 'no-cache',
            'Connection': 'keep-alive',
        },
    )
    await response.prepare(request)

    clients[client_id] = response  # Store the client's response object
    print(f"Client connected: {client_id}")

    try:
        # Send a welcome message with the client ID
        await send_sse_message(
            response, json.dumps({"message": "Connected", "client_id": client_id})
        )

        # Keep the connection alive until the client disconnects
        while True:
            await asyncio.sleep(0.1)  # Small delay to prevent busy-waiting
    except asyncio.CancelledError:
        print(f"Client disconnected: {client_id}")
    except Exception as e:
        print(f"Error with client {client_id}: {e}")
    finally:
        del clients[client_id]  # Remove the client when disconnected
        await response.write_eof()

    return response


async def send_sse_message(response, message):
    """Sends an SSE message to the client."""
    event_data = f"data: {message}\n\n"
    # In aiohttp 3.x, write() is a coroutine that applies flow control,
    # so the data is flushed to the client as soon as this await returns.
    await response.write(event_data.encode('utf-8'))


async def on_shutdown(app):
    """Gracefully close all client connections on shutdown."""
    # Iterate over a copy, since handlers delete entries as they are cancelled
    for client_id, response in list(clients.items()):
        print(f"Closing connection for client: {client_id}")
        await send_sse_message(response, json.dumps({"message": "Server shutting down"}))
        await response.write_eof()


async def main():
    app = web.Application()
    app.add_routes([
        web.get('/sse', sse_handler),
        web.post('/mcp', handle_mcp_request),  # MCP endpoint
    ])
    app.on_shutdown.append(on_shutdown)

    runner = web.AppRunner(app)
    await runner.setup()
    site = web.TCPSite(runner, 'localhost', 8080)  # Change host/port as needed
    await site.start()

    print("Server started at http://localhost:8080")

    # Keep the server running indefinitely
    try:
        await asyncio.Future()  # Run forever
    except asyncio.CancelledError:
        pass  # Handle cancellation if needed
    finally:
        await runner.cleanup()


if __name__ == '__main__':
    asyncio.run(main())
```

**Key improvements and explanations:**

* **Uses `aiohttp`:** `aiohttp` is an asynchronous web framework, crucial for handling SSE efficiently. SSE relies on long-lived connections, and asynchronous handling prevents blocking the server.
* **SSE content type:** Sets the correct `Content-Type` header (`text/event-stream`) for SSE, along with `Cache-Control: no-cache` and `Connection: keep-alive`, which are important for SSE.
* **Client tracking:** The `clients` dictionary stores connected clients (keyed by `client_id`) and their `StreamResponse` objects. This is *essential* for sending data back to specific clients. **Important:** this is an in-memory store; for production you'll need a persistent store (e.g., Redis, a database) to handle server restarts.
* **Unique client IDs:** Generates a unique `client_id` for each connecting client using `uuid`. This is how you identify which client to send data to; the client must send this ID back in subsequent requests.
* **`send_sse_message` function:** Encapsulates the logic for sending SSE messages. It formats the message correctly (`data: ...\n\n`) and encodes it as UTF-8. Since `StreamResponse.write()` is a coroutine in aiohttp 3.x, awaiting it is enough to flush the data to the client; the older explicit `drain()` call is deprecated.
* **Error handling:** Includes basic `try...except` blocks to catch potential errors (JSON decoding, model errors, client disconnects). More robust error handling is needed for production.
* **Asynchronous model simulation:** The `run_model` function is asynchronous, using `asyncio.sleep` to simulate model processing time. This is important to avoid blocking the event loop. **Replace this with your actual asynchronous model inference code.**
* **MCP request handling:** `handle_mcp_request` expects a `client_id` in the incoming JSON data, uses it to look up the client in the `clients` dictionary, and sends the model output back via SSE.
* **Graceful shutdown:** `on_shutdown` iterates over a copy of the connected clients and sends a "Server shutting down" message before closing the connections, for a clean shutdown.
* **Keep-alive:** `sse_handler` includes a `while True` loop with a small `asyncio.sleep` to keep the connection alive and prevent it from closing prematurely.
* **Client disconnect handling:** The `try...except...finally` block in `sse_handler` ensures the client is removed from the `clients` dictionary on disconnect (normal or error), preventing memory leaks and attempts to send data to a disconnected client.
* **Clear comments:** Comments explain the purpose of each section of the code.

**How to run:**

1. **Install `aiohttp`:**

   ```bash
   pip install aiohttp
   ```

2. **Save:** Save the code as a Python file (e.g., `mcp_server.py`).

3. **Run:** Execute the file from your terminal:

   ```bash
   python mcp_server.py
   ```

**Example client (JavaScript/HTML):**

```html
<!DOCTYPE html>
<html>
<head>
    <title>MCP Client</title>
</head>
<body>
    <h1>MCP Client</h1>
    <div id="output"></div>

    <script>
        const outputDiv = document.getElementById('output');
        let clientId = null; // Store the client ID

        // Establish SSE connection
        const eventSource = new EventSource('http://localhost:8080/sse');

        eventSource.onopen = () => {
            outputDiv.innerHTML += '<p>Connected to server.</p>';
        };

        eventSource.onmessage = (event) => {
            const data = JSON.parse(event.data);
            outputDiv.innerHTML += `<p>Received: ${JSON.stringify(data)}</p>`;

            // Store the client ID when received
            if (data.client_id) {
                clientId = data.client_id;
                outputDiv.innerHTML += `<p>Client ID: ${clientId}</p>`;
            }
        };

        eventSource.onerror = (error) => {
            console.error('SSE error:', error);
            outputDiv.innerHTML += '<p>Error connecting to server.</p>';
        };

        // Function to send an MCP request
        function sendMCPRequest(inputData) {
            if (!clientId) {
                outputDiv.innerHTML += '<p>Error: Client ID not yet received. Wait for connection.</p>';
                return;
            }

            fetch('http://localhost:8080/mcp', {
                method: 'POST',
                headers: {
                    'Content-Type': 'application/json',
                },
                body: JSON.stringify({ ...inputData, client_id: clientId }), // Include client_id
            })
                .then(response => response.json())
                .then(data => {
                    outputDiv.innerHTML += `<p>MCP Request Status: ${JSON.stringify(data)}</p>`;
                })
                .catch(error => {
                    console.error('Error sending MCP request:', error);
                    outputDiv.innerHTML += `<p>Error sending MCP request.</p>`;
                });
        }

        // Example usage: send a request after a delay
        setTimeout(() => {
            sendMCPRequest({ message: 'Hello from the client!' });
        }, 2000); // Wait 2 seconds to ensure connection is established
    </script>
</body>
</html>
```

**Explanation of the client:**

1. **SSE connection:** Creates an `EventSource` object to connect to the `/sse` endpoint on the server.
2. **Event handlers:**
   - `onopen`: called when the connection is established.
   - `onmessage`: called when the server sends an SSE message. It parses the JSON data, displays it in `outputDiv`, and *stores* the `client_id` from the server's initial connection message.
   - `onerror`: called if there's an error with the SSE connection.
3. **`sendMCPRequest` function:** Sends a `POST` request to the `/mcp` endpoint. Crucially, it includes the `client_id` in the JSON payload; this is how the server knows which client to send the response to. It then handles the server's response and displays the status.
4. **Example usage:** `setTimeout` sends an example MCP request after a 2-second delay, giving the SSE connection time to establish and receive the `client_id`.

**Important considerations for production:**

* **Authentication/authorization:** Implement proper authentication and authorization to secure your MCP endpoint; you don't want unauthorized clients sending requests to your model.
* **Error handling:** Implement more robust error handling on both the server and the client: log errors, provide informative error messages, and handle unexpected situations gracefully.
* **Scalability:** For high-traffic applications, consider a message queue (e.g., RabbitMQ, Kafka) to decouple the MCP server from the model inference process, so the two can scale independently.
* **Connection management:** Add heartbeats to detect dead connections and automatic reconnection logic on the client.
* **Data validation:** Validate incoming data for correct format and acceptable ranges to prevent errors and security vulnerabilities.
* **Model deployment:** Consider a model serving framework (e.g., TensorFlow Serving, TorchServe, Triton Inference Server) for production; these provide model versioning, A/B testing, and monitoring.
* **Security:** Be aware of potential risks such as denial-of-service attacks and injection vulnerabilities, and implement appropriate countermeasures.
* **Load balancing:** Use a load balancer to distribute traffic across multiple MCP server instances.
* **Monitoring:** Track the performance of your MCP server and model so you can identify and resolve issues quickly.
* **Client ID persistence:** If clients need to reconnect with the same ID, persist it (e.g., using cookies, local storage, or a database); the current implementation generates a new ID on each connection.

This example provides a solid foundation for building an MCP server using Python and SSE. Adapt it to your specific needs and keep the production considerations above in mind.
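The `data: ...\n\n` framing used by `send_sse_message` above can be checked in isolation with a small standalone parser. This is a sketch of the SSE wire format only (the `parse_sse_events` helper is illustrative, not part of the server):

```python
import json

def parse_sse_events(stream_text: str):
    """Parse raw SSE stream text into a list of decoded JSON payloads.

    Events are blocks separated by a blank line; lines starting with
    "data:" carry the payload.
    """
    events = []
    for block in stream_text.split("\n\n"):
        data_lines = [line[len("data:"):].strip()
                      for line in block.split("\n")
                      if line.startswith("data:")]
        if data_lines:
            # Multi-line data fields are joined with newlines per the SSE spec
            events.append(json.loads("\n".join(data_lines)))
    return events

# Two frames as the server above would emit them
raw = ('data: {"message": "Connected", "client_id": "abc-123"}\n\n'
       'data: {"input": {"x": 1}, "processed": true}\n\n')
events = parse_sse_events(raw)
print(events[0]["client_id"])  # abc-123
print(events[1]["processed"])  # True
```

The browser's `EventSource` performs this parsing automatically; the helper is only useful for testing the server's output with a raw HTTP client.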
PBIXRay MCP Server
Mirror
AI-Powered Image Generation Worker
MCP Browser Screenshot Server
Enables AI assistants to capture screenshots of web pages using automated browser sessions. Supports full-page and element-specific screenshots, device simulation, and JavaScript execution for comprehensive web testing and monitoring.
Awels PDF Processing Server
Enables conversion of PDF files to Markdown format with optional image extraction using docling. Supports batch processing of multiple PDFs with structured output including metadata and processing statistics.
Claude Parallel Tasks MCP Server
Enables running multiple Claude prompts simultaneously in parallel with support for file contexts and output redirection to individual files.
RAG-MCP Server
A server that integrates Retrieval-Augmented Generation (RAG) with the Model Context Protocol (MCP) to provide web search capabilities and document analysis for AI assistants.
AI_SOC_MCP_Server_Sher
Jokes MCP Server
Fetches jokes from multiple APIs including Chuck Norris, Dad jokes, and Yo Mama jokes, providing humor-focused content through standardized MCP tools.
🎭 Playwright Test Automation Framework
🎭 Modern E2E testing framework | Playwright + TypeScript + MCP server | Data-driven POM architecture | Advanced test recording and playback
ASUS Merlin Router MCP Server
Enables management of ASUS routers running Asuswrt-Merlin firmware via SSH/SCP. Supports system monitoring, device management, WiFi control, service restarts, NVRAM operations, file transfers, VPN management, and custom command execution.
MCP HTTP TAVILY DATE OAUTH
Enables web searches using the Tavily API with fallback to DuckDuckGo, plus datetime queries and optional Ollama AI processing. Features HTTP transport with OAuth2 authentication for secure access to search capabilities.
MCP Demo Project
A demonstration server implementing the Model Context Protocol (MCP) with both STDIO and Server-Sent Events (SSE) support, featuring a word reversal tool that transforms input text by reversing characters.
E-commerce MCP Server
A Model Context Protocol server that provides real-time access to MongoDB product data, enabling sophisticated e-commerce queries with price range filters, category searching, and product recommendations through a conversational interface.
Remote MCP Server on Cloudflare
Stock Analysis MCP Server
A FastMCP-based server that provides tools for analyzing stock market data, including concept sector strength, financial indicators, F10 information, market emotion indicators, and tracking limit-up stocks.