Discover Awesome MCP Servers
Extend your agent with 15,860 capabilities via MCP servers.
- All (15,860)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)
Tavily Web Search MCP Server
Enables web search capabilities through the Tavily API, allowing users to search the internet for information using natural language queries. Serves as a demonstration and educational project for building MCP servers with external API integrations.
YaVendió Tools
An MCP-based messaging system that allows AI systems to interact with various messaging platforms through standardized tools for sending text, images, documents, buttons, and alerts.
Model Context Protocol (MCP)
Here's a working pattern for SSE-based (Server-Sent Events) MCP (Model Context Protocol) clients and servers using the Gemini LLM.

**Core Idea:** This pattern leverages SSE for real-time, unidirectional (server-to-client) streaming of LLM-generated content, while MCP provides a structured way to manage the conversation flow and metadata. The server uses Gemini to generate responses and streams them to the client via SSE; the client displays the content as it arrives.

**Components:**

1. **MCP Client (Frontend - e.g., Web Browser, Mobile App):**
   * **Initiates Conversation:** Sends an initial message (e.g., a user query) to the server via a standard HTTP request (POST or GET), including MCP metadata (conversation ID, message ID, user ID).
   * **Establishes SSE Connection:** After the initial request, the client opens an SSE connection to a server endpoint dedicated to streaming responses for that conversation.
   * **Receives SSE Events:** Listens for `message` events on the SSE stream; each event carries a chunk of the LLM-generated response plus MCP metadata.
   * **Reconstructs and Displays Response:** As events arrive, the client appends the data to a display area, providing a real-time streaming experience.
   * **Handles Errors and Completion:** Listens for dedicated SSE events (e.g., `error`, `done`) to surface errors or detect the end of the LLM response.
   * **Manages Conversation State:** Stores the conversation ID and other metadata needed to maintain context for subsequent requests.
   * **Sends Subsequent Messages:** After a complete response, the client can continue the conversation with new HTTP requests; a new SSE stream is established for each response.

2. **MCP Server (Backend - e.g., Node.js, Python/Flask, Java/Spring Boot):**
   * **Receives Initial Request:** Handles the initial HTTP request containing the user's query and MCP metadata.
   * **Validates Request:** Validates the request and its MCP metadata.
   * **Interacts with Gemini:** Sends the user's query to the Gemini API, using the API's streaming capabilities (where available) so the response arrives in chunks.
   * **Generates SSE Events:** As Gemini generates text, the server creates SSE `message` events, each containing a chunk of text and the relevant MCP metadata (conversation ID, message ID, chunk ID).
   * **Manages SSE Connections:** Maintains the set of active SSE connections, each associated with a conversation ID, and pushes events to the matching client connection.
   * **Handles Errors:** If an error occurs during LLM generation, sends an `error` event to the client via SSE.
   * **Sends Completion Event:** When the LLM response is complete, sends a `done` event via SSE.
   * **Manages Conversation State:** Stores the conversation history and related metadata, which is essential for maintaining context across multiple turns. A database (e.g., PostgreSQL, MongoDB) is typically used for this.
   * **MCP Implementation:** Implements the MCP protocol itself: message formatting, routing, and error handling.

**Gemini LLM Integration:**

* **Streaming API:** Use Gemini's streaming API (if available) so the server receives the response in chunks, each of which can be relayed to the client via SSE immediately.
* **Prompt Engineering:** Carefully design prompts to guide the LLM's responses and keep them appropriate for the conversation.
* **Rate Limiting:** Implement rate limiting to prevent abuse of the LLM API.
* **Error Handling:** Handle LLM API errors gracefully; on failure, send an `error` event to the client via SSE.

**MCP Considerations:**

* **Message Format:** Define a clear format for MCP messages (see the sketch after this list) with fields for:
  * Conversation ID
  * Message ID
  * User ID
  * Message Type (e.g., `user_message`, `llm_response`, `error`, `done`)
  * Payload (the actual text of the message)
  * Chunk ID (for SSE streaming)
* **Routing:** Implement a routing mechanism to direct messages to the correct conversation.
* **Error Handling:** Define a standard way to report errors, including error codes and error messages.
* **Security:** Implement security measures to protect against unauthorized access and data breaches.
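As a concrete illustration of the fields listed above, here is a minimal sketch of what one streamed MCP message might look like on the wire. The field names mirror the list but are an assumption for illustration, not a formal spec; adapt them to your own protocol definition.

```python
import json

# Hypothetical MCP message as it might appear in a single SSE event.
# Field names are illustrative, mirroring the list above.
mcp_message = {
    "conversation_id": "conv-42",
    "message_id": "msg-7",
    "user_id": "user-123",
    "message_type": "llm_response",  # or: user_message, error, done
    "payload": "Here is the first chunk of the answer...",
    "chunk_id": 0,
}

# Framed as a Server-Sent Event: a "data:" line followed by a blank line.
sse_frame = f"data: {json.dumps(mcp_message)}\n\n"
print(sse_frame)
```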
**Example (Conceptual - Python/Flask):**

```python
from flask import Flask, request, Response, stream_with_context
import google.generativeai as genai
import os
import json
import time

app = Flask(__name__)

# Configure the Gemini API (set GOOGLE_API_KEY in your environment)
GOOGLE_API_KEY = os.environ.get("GOOGLE_API_KEY")
genai.configure(api_key=GOOGLE_API_KEY)
model = genai.GenerativeModel('gemini-pro')  # or 'gemini-pro-vision'

# In-memory conversation store (replace with a database in production)
conversations = {}


def generate_llm_response_stream(conversation_id, user_message):
    """Generates a streaming response from the Gemini LLM."""
    history = conversations.setdefault(conversation_id, [])
    try:
        # Start the chat from prior history. send_message() supplies the
        # new user message itself, so it must not be in history yet.
        chat = model.start_chat(history=history)
        response = chat.send_message(user_message, stream=True)
        full_reply = []
        for chunk in response:
            llm_text = chunk.text
            full_reply.append(llm_text)
            mcp_message = {
                "conversation_id": conversation_id,
                "message_type": "llm_response",
                "payload": llm_text,
            }
            yield f"data: {json.dumps(mcp_message)}\n\n"
            time.sleep(0.1)  # simulate processing time
        # Persist the completed turn: user message plus full model reply
        history.append({"role": "user", "parts": [user_message]})
        history.append({"role": "model", "parts": ["".join(full_reply)]})
        mcp_done_message = {
            "conversation_id": conversation_id,
            "message_type": "done",
        }
        yield f"data: {json.dumps(mcp_done_message)}\n\n"
    except Exception as e:
        mcp_error_message = {
            "conversation_id": conversation_id,
            "message_type": "error",
            "payload": str(e),
        }
        yield f"data: {json.dumps(mcp_error_message)}\n\n"


@app.route('/chat', methods=['POST'])
def chat_handler():
    """Handles the chat request and streams the SSE response."""
    data = request.get_json()
    conversation_id = data.get('conversation_id')
    user_message = data.get('message')
    if not conversation_id or not user_message:
        return "Missing conversation_id or message", 400

    def stream():
        yield from generate_llm_response_stream(conversation_id, user_message)

    return Response(stream_with_context(stream()), mimetype='text/event-stream')


if __name__ == '__main__':
    app.run(debug=True, port=5000)
```

**Client-Side Example (JavaScript):**
```javascript
const conversationId = 'unique-conversation-id'; // generate a unique ID
const messageInput = document.getElementById('messageInput');
const chatOutput = document.getElementById('chatOutput');
const sendButton = document.getElementById('sendButton');

sendButton.addEventListener('click', async () => {
  const message = messageInput.value;
  messageInput.value = '';
  try {
    // Send the message via HTTP POST. The Flask server above streams its
    // SSE events back on this same response, so we read the response body
    // directly. (EventSource can't be used against this endpoint: it only
    // issues GET requests, while /chat accepts POST.)
    const response = await fetch('/chat', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ conversation_id: conversationId, message: message })
    });
    const reader = response.body.getReader();
    const decoder = new TextDecoder();
    let buffer = '';
    for (;;) {
      const { value, done } = await reader.read();
      if (done) break;
      buffer += decoder.decode(value, { stream: true });
      const events = buffer.split('\n\n');
      buffer = events.pop(); // keep any incomplete trailing event
      for (const event of events) {
        if (!event.startsWith('data: ')) continue;
        const data = JSON.parse(event.slice('data: '.length));
        console.log('Received SSE event:', data);
        if (data.message_type === 'llm_response') {
          chatOutput.textContent += data.payload;
        } else if (data.message_type === 'done') {
          console.log('LLM response complete.');
        } else if (data.message_type === 'error') {
          console.error('Error from server:', data.payload);
          chatOutput.textContent += `Error: ${data.payload}`;
        }
      }
    }
  } catch (error) {
    console.error('Error sending message:', error);
  }
});
```

**Key Improvements and Considerations:**

* **Error Handling:** Robust error handling is crucial. The server should catch exceptions during LLM generation and send error events; the client should display them to the user.
* **Conversation History:** The server *must* maintain conversation history to provide context for subsequent requests, typically in a database. The example uses an in-memory store, which is not suitable for production.
* **Security:** Implement appropriate security measures, such as authentication and authorization, to protect against unauthorized access.
* **Scalability:** For high-traffic applications, consider a message queue (e.g., RabbitMQ, Kafka) to decouple the server from the LLM API, improving scalability and resilience.
* **Rate Limiting:** Implement rate limiting to prevent abuse of the LLM API.
* **Prompt Engineering:** Experiment with different prompts to optimize the LLM's responses.
* **Token Management:** Be mindful of input and output token limits; truncate or summarize the conversation history when necessary.
* **User Interface:** Design a clear, intuitive experience with features such as loading indicators, error messages, and visible conversation history.
* **Metadata:** Include timestamps, user IDs, and message IDs in MCP messages; this helps with debugging and analysis.
* **Chunking Strategy:** Experiment with chunk sizes: smaller chunks give a more responsive UI but increase overhead.
* **Cancellation:** Provide a way for the user to cancel a long-running request by sending a cancellation signal to the server, which then terminates the LLM call.
* **Context Management:** Consider a more sophisticated strategy, such as retrieval-augmented generation (RAG), to give the LLM access to external knowledge sources.

This pattern provides a solid foundation for building SSE-based MCP clients and servers with Gemini; adapt the code and configuration to your specific needs and environment.
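To smoke-test the Flask endpoint above without the browser client, a small script using the `requests` library (an assumption; any HTTP client that can stream works) can POST a message and print events as they arrive:

```python
import json
import requests  # assumed available: pip install requests

# Hedged usage sketch for the /chat endpoint defined above.
resp = requests.post(
    "http://localhost:5000/chat",
    json={"conversation_id": "test-1", "message": "Hello!"},
    stream=True,  # don't buffer; read SSE events as they arrive
)
for line in resp.iter_lines(decode_unicode=True):
    if line and line.startswith("data: "):
        event = json.loads(line[len("data: "):])
        print(event["message_type"], event.get("payload", ""))
```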
AnyDocs MCP Server
Transforms any website's documentation into an MCP-compatible interactive knowledge base with universal scraping, advanced search, and AI-powered tools. Supports GitBook, Notion, Confluence, and custom documentation platforms with real-time synchronization.
Dynamic Reincarnation Story
Enables interactive reincarnation storytelling where users choose their path after death, becoming characters like a vengeful spirit, Bilbo Baggins, or Monkey D. Luffy. Features dynamic narrative generation with personalized story paths based on user choices and soul-searching questions.
Toast MCP Server
An MCP server that displays desktop notifications on Windows 10 and macOS, is compatible with VSCode Cline, and supports custom notification parameters.
mcp-dev
An experiment with MCP.
SVM-MCP
A Model Context Protocol server that connects Claude AI to SOON and other SVM-based blockchains, allowing users to check account balances, fetch recent transactions, and view token holdings on SOON's testnet and mainnet.
DeepSource MCP Server
A Model Context Protocol (MCP) server for DeepSource.
ONEDeFi MCP Server
Enables AI-powered DeFi operations across Ethereum, Polygon, and Solana with automated portfolio optimization, risk assessment, and yield farming strategies. Provides intelligent portfolio diagnostics, investment strategy generation, and multi-chain DeFi protocol integration through natural language.
Yourware MCP
An MCP server that allows users to upload projects (single files or directories) to the Yourware platform, enabling easy deployment and showcasing of projects.
PyMCP
PyMCP is primarily a template for developing MCP servers with FastMCP in Python, somewhat inspired by the official everything MCP server in TypeScript.
DuckDuckGo MCP Server
A basic MCP server with example tools including message echo functionality and server information retrieval. Built with FastMCP framework and supports both stdio and HTTP transports.
Files-DB-MCP
A local vector database system that provides LLM coding agents with fast, efficient semantic search capabilities for software projects via the Model Context Protocol.
SendForSign MCP Server
Enables interaction with the SendForSign API to list document templates and read template content for electronic signature workflows.
SSH MCP Server
Enables SSH operations including connecting to remote servers, executing commands, and transferring files between local and remote systems. Supports multiple SSH connections with both password and private key authentication methods.
MCP-openproject
Exegol MCP Server
Provides secure access to Exegol penetration testing tools for CTF competitions and red team learning. Enables AI assistants to perform web penetration testing, network reconnaissance, password cracking, OSINT analysis, and more for educational purposes.
kill-process-mcp
An MCP server that allows users to list and kill operating system processes through natural language queries, making process management more intuitive across Windows, macOS, and Linux.
arXiv Research Assistant MCP Server
An MCP server that allows Claude AI to search, explore, and compare arXiv papers efficiently through a custom-built local server.
MCP Memory
Enables MCP clients to remember user information, preferences, and behaviors across conversations using vector search technology. Built on Cloudflare infrastructure with AI-powered semantic search to find relevant memories based on meaning rather than keywords.
G Suite Integration
An integration server that supports interaction with Google services, including Gmail and Calendar, allowing users to manage email, draft replies, and schedule events across multiple Google accounts.
Webby MCP Server
Provides comprehensive website validation across performance, accessibility, SEO, and security dimensions using multiple testing services including WebPageTest, Google PageSpeed Insights, Axe DevTools, Mozilla Observatory, and SSL Labs. Enables automated website health assessments through browser automation and API integrations.
Earth Engine MCP Server
Enables satellite imagery analysis through Google Earth Engine, allowing users to search datasets, calculate vegetation indices like NDVI, filter collections by location and date, and export imagery to cloud storage. Supports major satellite datasets including Sentinel-2, Landsat, and MODIS for applications like agriculture monitoring and deforestation tracking.
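For background on the NDVI index mentioned above, NDVI = (NIR - Red) / (NIR + Red). A minimal sketch (not taken from this server) of computing it with the Earth Engine Python API follows; the collection ID and Sentinel-2 band names B8 (near-infrared) and B4 (red) are standard, but the specific query is illustrative.

```python
import ee

ee.Initialize()  # assumes Earth Engine credentials are already configured

# Grab one Sentinel-2 surface-reflectance image over an example point.
image = ee.Image(
    ee.ImageCollection('COPERNICUS/S2_SR')
    .filterDate('2024-06-01', '2024-06-30')
    .filterBounds(ee.Geometry.Point(-122.45, 37.77))
    .first()
)

# NDVI = (NIR - Red) / (NIR + Red); B8 is NIR, B4 is red on Sentinel-2.
ndvi = image.normalizedDifference(['B8', 'B4']).rename('NDVI')
```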
FFmpeg Video Processor
A Node.js server that enables video processing through natural language requests, including resizing videos to different resolutions (from 360p to 1080p) and extracting audio in various formats (MP3, AAC, WAV, OGG). A sketch of the underlying ffmpeg invocations follows.
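For reference, the kind of ffmpeg commands such a server might run under the hood (a hedged sketch, not this server's actual code) can be reproduced from Python via subprocess; the flags shown are standard ffmpeg options.

```python
import subprocess

# Resize a video to 720p; scale=-2:720 preserves the aspect ratio with an
# even-numbered width, which most codecs require.
subprocess.run(
    ["ffmpeg", "-i", "input.mp4", "-vf", "scale=-2:720", "output_720p.mp4"],
    check=True,
)

# Extract the audio track as MP3 (-vn drops the video stream).
subprocess.run(
    ["ffmpeg", "-i", "input.mp4", "-vn", "-acodec", "libmp3lame", "audio.mp3"],
    check=True,
)
```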
kickstart-mcp
🚀 Kickstart-mcp is a tutorial on using MCP that teaches you how to create your own MCP server/client, guiding you through every step of your MCP journey.
BlazeMeter MCP Server
Provides programmatic access to BlazeMeter's performance testing platform through MCP tools. Enables users to retrieve test runs, analyze performance data, view error reports, and manage testing resources via natural language interactions.
Payload CMS MCP Server
Enables AI assistants to interact with Payload CMS instances through authenticated REST API operations. Supports creating, searching, and updating documents in collections with automatic JWT authentication and localization support.
MiAI_MCP
A demo of building and integrating MCP servers.
Jira MCP Server
An MCP server that enables communication with Jira, allowing users to perform operations like getting, searching, creating, and editing issues through natural language interaction.