Discover Awesome MCP Servers
Extend your agent with 26,375 capabilities via MCP servers.
- All (26,375)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)
Apple Shortcuts MCP Server
Enables the generation, management, and validation of Apple Shortcuts (.shortcut files) by providing tools to search actions and build control flow blocks. It allows users to programmatically create and analyze shortcut structures for deployment on iOS and macOS devices.
Tavily Web Search MCP Server
Enables web search capabilities through the Tavily API, allowing users to search the internet for information using natural language queries. Serves as a demonstration and educational project for building MCP servers with external API integrations.
UUID MCP Provider
A simple Model Context Protocol server that generates timestamp-based (v7) UUIDs when called by an LLM, providing chronologically sortable unique identifiers without requiring any input parameters.
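For illustration, a UUIDv7 of the kind this server returns can be assembled from a 48-bit millisecond Unix timestamp followed by random bits. This is a minimal sketch, not the server's actual implementation, and the `uuid7` helper name is made up here:

```python
import os
import time
import uuid

def uuid7() -> uuid.UUID:
    """Build a UUIDv7: 48-bit Unix-ms timestamp, then random bits."""
    ms = int(time.time() * 1000)
    value = (ms << 80) | int.from_bytes(os.urandom(10), "big")
    # Stamp the version (7) and variant (RFC 4122) fields
    value = (value & ~(0xF << 76)) | (0x7 << 76)
    value = (value & ~(0x3 << 62)) | (0x2 << 62)
    return uuid.UUID(int=value)
```

Because the timestamp occupies the most significant bits, sorting such UUIDs numerically (or as strings) orders them chronologically.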
vk-mcp-server
A Model Context Protocol server for the VK (VKontakte) social network API.
YaVendió Tools
An MCP-based messaging system that allows AI systems to interact with various messaging platforms through standardized tools for sending text, images, documents, buttons, and alerts.
pumpswap-mcp
Model Context Protocol (MCP)
Okay, I understand. You want a working pattern (a design pattern or architectural approach) for building SSE (Server-Sent Events) based MCP (Model Context Protocol) clients and servers, leveraging the Gemini LLM. Here's a breakdown of a potential working pattern, along with explanations and considerations:

**Core Idea:**

The pattern uses SSE for real-time communication between MCP clients and servers, with Gemini playing a role in processing or generating messages within this communication flow. This could involve:

* **LLM-Enhanced Message Generation:** The server uses Gemini to generate dynamic, context-aware messages sent to the client via SSE.
* **LLM-Powered Client Interaction:** The client uses Gemini to understand and respond to SSE messages, potentially generating new messages to send back to the server (perhaps via a separate channel such as a standard HTTP POST).
* **LLM-Driven Data Enrichment:** The server uses Gemini to enrich data before sending it to the client via SSE, providing more insightful or personalized information.

**Pattern Name:** LLM-Augmented Real-Time Messaging

**Components:**

1. **MCP Client:**
   * **SSE Connection Handler:** Establishes and maintains the SSE connection to the server, and handles incoming SSE events.
   * **Message Parser:** Parses the data received in SSE events. This might involve JSON parsing or another format, depending on your MCP implementation.
   * **Gemini Integration (Optional):**
     * **Message Understanding:** Uses Gemini to understand the meaning and intent of incoming SSE messages.
     * **Response Generation:** Uses Gemini to generate appropriate responses to the server, based on the content of the SSE messages and the user's current context.
     * **Data Presentation:** Uses Gemini to format and present the data received via SSE in a user-friendly way.
   * **User Interface (UI):** Displays information to the user and allows them to interact with the system.
   * **Request Sender (Optional):** If the client needs to send data back to the server (e.g., user input, acknowledgements), this component sends HTTP requests (e.g., POST, PUT) to the server.

2. **MCP Server:**
   * **SSE Endpoint:** An HTTP endpoint that serves SSE events.
   * **Message Generator:** Creates the messages to be sent to the client via SSE.
   * **Gemini Integration (Crucial):**
     * **Message Generation:** Uses Gemini to generate dynamic, context-aware messages. This could involve:
       * Generating personalized recommendations.
       * Creating summaries of data.
       * Crafting engaging notifications.
       * Adapting messages based on user behavior.
     * **Data Enrichment:** Uses Gemini to enrich data before sending it to the client, for example by adding sentiment analysis, topic extraction, or contextual information.
   * **Request Processing (Optional):** Handles incoming HTTP requests from the client (e.g., POST requests with user input). May use Gemini to process or understand these requests.
   * **Data Source:** The source of the data being sent to the client: a database, an API, or another data source.
   * **Event Stream:** A mechanism for triggering the generation of SSE events, based on database changes, external events, or scheduled tasks.

**Communication Flow:**

1. **Client Connection:** The client establishes an SSE connection to the server's SSE endpoint.
2. **Server Event Trigger:** An event occurs on the server (e.g., a data update or timer expiry).
3. **Message Generation (Server):** The server's message generator, potentially using Gemini, creates a message for the client. Gemini might personalize the message, summarize data, or add context.
4. **SSE Event Transmission:** The server sends the message to the client as an SSE event.
5. **Client Event Reception:** The client receives the SSE event.
6. **Message Parsing (Client):** The client parses the message data.
7. **Gemini Processing (Optional, Client):** The client optionally uses Gemini to understand the message and generate a response.
8. **UI Update (Client):** The client updates the UI to display the information to the user.
9. **Client Request (Optional):** The client optionally sends a request back to the server (e.g., user input or an acknowledgement).
10. **Server Request Processing (Optional):** The server processes the client's request, potentially using Gemini to understand it.

**Example Scenario: Real-Time Stock Updates with LLM-Powered Insights**

* **MCP Client:** A stock trading application.
* **MCP Server:** A server providing real-time stock data.
* **SSE:** Used to push stock price updates to the client.
* **Gemini (Server):** Used to generate insights and summaries about stock price movements. For example:
  * "Stock XYZ is up 5% today, driven by positive news about their new product launch. Analysts predict further gains in the short term."
  * "Warning: Stock ABC has experienced a sudden drop of 10% in the last hour. This may be due to a negative earnings report."
* **Gemini (Client, Optional):** Could let the user ask questions about the stock data, such as "Why is stock XYZ going up?", and receive a Gemini-generated answer based on the SSE data and external information.
**Code Snippets (Illustrative - Python with FastAPI and `sse_starlette`):**

**Server (Python/FastAPI):**

```python
import asyncio
import json
import os
import random

import google.generativeai as genai
from fastapi import FastAPI
from sse_starlette.sse import EventSourceResponse

app = FastAPI()

# Configure Gemini (replace with your API key)
GOOGLE_API_KEY = os.environ.get("GOOGLE_API_KEY")
genai.configure(api_key=GOOGLE_API_KEY)
model = genai.GenerativeModel('gemini-pro')

# Mock stock data (replace with your actual data source)
stock_prices = {"XYZ": 100.00, "ABC": 50.00}

async def stock_price_stream():
    while True:
        # Simulate stock price changes
        stock_prices["XYZ"] += (random.random() - 0.5) * 2  # +/- 1
        stock_prices["ABC"] -= (random.random() - 0.5) * 1  # +/- 0.5

        # Generate an LLM-powered insight
        prompt = (
            f"Summarize the current stock prices: "
            f"XYZ is {stock_prices['XYZ']:.2f}, ABC is {stock_prices['ABC']:.2f}."
        )
        response = model.generate_content(prompt)
        insight = response.text

        event_data = {
            "XYZ": stock_prices["XYZ"],
            "ABC": stock_prices["ABC"],
            "insight": insight,
        }
        yield {"event": "stock_update", "data": json.dumps(event_data)}
        await asyncio.sleep(5)  # Send updates every 5 seconds

@app.get('/stream')
async def stream():
    return EventSourceResponse(stock_price_stream())

if __name__ == '__main__':
    import uvicorn
    uvicorn.run(app, port=5000)
```

**Client (JavaScript):**

```html
<!DOCTYPE html>
<html>
<head>
  <title>Stock Updates</title>
</head>
<body>
  <h1>Stock Prices</h1>
  <div id="stock-data"></div>
  <script>
    const eventSource = new EventSource('/stream');

    // The server sends named "stock_update" events, so listen for that
    // event type; the default "message" handler would never fire.
    eventSource.addEventListener('stock_update', function(event) {
      console.log("Raw event data:", event.data);
      try {
        const data = JSON.parse(event.data);
        console.log("Parsed data:", data);
        const stockDataDiv = document.getElementById('stock-data');
        stockDataDiv.innerHTML = `
          XYZ: ${data.XYZ.toFixed(2)}<br>
          ABC: ${data.ABC.toFixed(2)}<br>
          Insight: ${data.insight}
        `;
      } catch (error) {
        console.error("Error parsing JSON:", error);
        console.error("Event data that caused the error:", event.data);
      }
    });

    eventSource.onerror = function(error) {
      console.error("SSE error:", error);
    };
  </script>
</body>
</html>
```

**Key Considerations:**

* **Gemini API Usage:** Be mindful of Gemini's API usage limits and costs. Implement caching and rate limiting to avoid exceeding your quota.
* **Error Handling:** Implement robust error handling on both the client and server. Handle connection errors, API errors, and data parsing errors gracefully.
* **Security:** Secure your SSE endpoint. Use authentication and authorization to prevent unauthorized access, and sanitize any user input before sending it to Gemini.
* **Data Format:** Choose a data format for SSE events that is efficient and easy to parse (e.g., JSON).
* **Scalability:** Consider the scalability of your SSE server. Use a scalable SSE library or framework; you may need a message queue (e.g., RabbitMQ, Kafka) to handle a large number of clients.
* **Real-time Requirements:** SSE is near real-time, but not guaranteed to be perfectly real-time. If you need true real-time, bidirectional communication, consider WebSockets.
* **Complexity:** Integrating an LLM adds complexity. Carefully consider whether the benefits of using Gemini outweigh it.
* **Context Management:** For more complex interactions, you will need to manage conversational context between the client and server, for example by storing conversation history or using Gemini's context management features.
* **Prompt Engineering:** The quality of Gemini's output depends heavily on your prompts. Experiment with different prompts to find the ones that produce the best results.

**Benefits of this Pattern:**

* **Real-time Updates:** SSE pushes updates to the client as they happen.
* **LLM-Powered Intelligence:** Gemini can generate dynamic, personalized, and insightful messages.
* **Improved User Experience:** The combination of real-time updates and LLM-powered intelligence can lead to a more engaging and informative user experience.
* **Flexibility:** The pattern can be adapted to a variety of use cases.

**Alternatives:**

* **WebSockets:** Full-duplex communication, useful if the client needs to send a lot of data back to the server.
* **Long Polling:** A simpler alternative to SSE, but less efficient.
* **Traditional HTTP Requests:** Sufficient for applications that don't require real-time updates.

This explanation and code example should give you a solid foundation for building SSE-based MCP clients and servers using the Gemini LLM. Remember to adapt the pattern to your specific needs and requirements. Good luck!
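The caching consideration above can be sketched concretely. The helper below, a minimal illustration with made-up names (`TTLCache`, `cached_insight`), reuses a recent LLM response for a short window instead of calling the model on every tick:

```python
import time

class TTLCache:
    """Tiny time-to-live cache for reusing recent LLM responses."""

    def __init__(self, ttl_seconds=30.0):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is not None and time.monotonic() - entry[1] < self.ttl:
            return entry[0]
        return None  # missing or expired

    def put(self, key, value):
        self._store[key] = (value, time.monotonic())

cache = TTLCache(ttl_seconds=30.0)

def cached_insight(prompt, generate):
    """Call generate(prompt) only when no fresh cached answer exists."""
    hit = cache.get(prompt)
    if hit is not None:
        return hit
    value = generate(prompt)
    cache.put(prompt, value)
    return value
```

Wrapping a call like `model.generate_content` in such a helper means repeated ticks with identical inputs reuse the previous insight rather than spending API quota.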
Oomol Connect MCP Server
Enables integration with Oomol Connect for executing tasks, managing blocks and packages, and uploading files. Supports task execution with real-time progress monitoring and intelligent polling across audio, video, and other processing workflows.
MuseScore MCP Server
A Model Context Protocol server that provides programmatic control over MuseScore through a WebSocket-based plugin system, allowing AI assistants to compose music, add lyrics, navigate scores, and control MuseScore directly.
Local Snowflake MCP Server
Enables Claude Desktop to interact with Snowflake databases through natural-language SQL queries. Built in Python, it allows secure local integration between LLMs and enterprise data systems for database operations and analysis.
MCP Filesystem Server
A Model Context Protocol server that provides safe, intelligent interaction with files and file systems, offering smart context management and token-efficient operations for working with large files and complex directory structures.
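The token-efficiency idea can be illustrated by previewing only the head and tail of a large file instead of returning it whole. This is a sketch, not this server's actual API; the function name and line counts are illustrative:

```python
def preview_file(path: str, head: int = 20, tail: int = 20) -> str:
    """Return the first and last lines of a file with an elision marker,
    so inspecting a large file costs a bounded number of tokens."""
    with open(path, "r", encoding="utf-8", errors="replace") as f:
        lines = f.readlines()
    if len(lines) <= head + tail:
        return "".join(lines)
    omitted = len(lines) - head - tail
    return (
        "".join(lines[:head])
        + f"... [{omitted} lines omitted] ...\n"
        + "".join(lines[-tail:])
    )
```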
MCP Docker Server
Enables secure Docker command execution from isolated environments like containers through MCP protocol. Provides tools for managing Docker containers, images, and Docker Compose services with security validation and async operation support.
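Security validation of this kind can be sketched as an allowlist check on the requested command before anything is executed. The allowlist and function name below are illustrative, not this server's actual policy:

```python
import shlex

# Illustrative allowlist: read-only Docker subcommands only
ALLOWED_SUBCOMMANDS = {"ps", "images", "logs", "inspect", "compose"}

def validate_docker_command(command: str) -> list:
    """Split a shell command and reject anything outside the allowlist."""
    parts = shlex.split(command)
    if not parts or parts[0] != "docker":
        raise ValueError("only 'docker ...' commands are accepted")
    if len(parts) < 2 or parts[1] not in ALLOWED_SUBCOMMANDS:
        raise ValueError(f"subcommand not allowlisted: {parts[1:2]}")
    return parts
```

A real implementation would pass the validated argument list to a subprocess runner rather than a shell, so no unvalidated string is ever interpreted.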
Remote MCP Server
A deployable Model Context Protocol server for Cloudflare Workers that allows users to create custom AI tools without authentication requirements and connect them to Cloudflare AI Playground or Claude Desktop.
erpnext-server
A TypeScript-based MCP server that provides integration with the ERPNext/Frappe API. It allows AI assistants to interact with ERPNext data and functionality through the Model Context Protocol.
LinkedIn MCP Assistant
vSphere MCP Server
Enables AI agents to manage VMware vSphere virtual infrastructure through comprehensive operations including VM power control, snapshot management, resource monitoring, performance analytics, and bulk operations with built-in safety confirmations for destructive actions.
Interactive Feedback MCP
A Model Context Protocol server that enables AI assistants to request user feedback at critical points during interactions, improving communication and reducing unnecessary tool calls.
For Five Coffee MCP Server
Enables AI assistants to fetch, search, and organize menu information from For Five Coffee café. Provides access to complete menu data, category filtering, and item search capabilities through both MCP and REST API interfaces.
MCP-NOSTR
A bridge that allows AI language models to publish content to the Nostr network by implementing the Model Context Protocol (MCP).
Matomo MCP Server
A Model Context Protocol server that provides tools to interact with Matomo Analytics API, enabling management of sites, users, goals, segments, and access to analytics reports through a MCP interface.
ReasonForge
Provides a suite of deterministic math tools powered by SymPy to handle algebra, calculus, linear algebra, and statistics via the Model Context Protocol. It enables smaller language models to delegate complex computations to a verified symbolic backend for accurate and reliable results.
MCP API Service
A Model Context Protocol (MCP) server that interacts with internal system APIs, allowing users to check connectivity, search for employees, register for breakfast, and update chemical information by shift.
YaTracker Connector
Enables interaction with Yandex Tracker through its API for managing tasks, comments, and attachments. It supports issue searching, status transitions, and metadata retrieval for automated project management.
FastAPI MCP Demo Server
A demonstration MCP server built with FastAPI that provides basic mathematical operations and greeting services. Integrates with Gemini CLI to showcase MCP protocol implementation with simple REST endpoints.
MCP Terminal
A server that allows AI assistants to execute terminal commands and retrieve the results through the Model Context Protocol (MCP).
Self-Hosted Supabase MCP Server
Enables developers to interact with self-hosted Supabase instances, providing database introspection, migration management, auth user operations, storage management, and TypeScript type generation directly from MCP-compatible development environments.
@container-inc/mcp
An MCP server for automated deployments to Container Inc.
Kylas CRM MCP Server
Enables management of Kylas CRM lead operations, including creating leads, searching and filtering records, and resolving user, product, or pipeline IDs. It provides specialized tools for monitoring idle leads and accessing lead schema instructions through natural language.
Scrapeless MCP Server
A Model Context Protocol server implementation that enables AI assistants like Claude to perform Google searches and retrieve web data directly through natural-language requests.
Files-DB-MCP
A local vector database system that provides LLM coding agents with fast, efficient semantic search over software projects via the Model Context Protocol.