Discover Awesome MCP Servers

Extend your agent with 24,070 capabilities via MCP servers.

MCP Docker Server

Enables secure Docker command execution from isolated environments such as containers through the MCP protocol. Provides tools for managing Docker containers, images, and Docker Compose services, with security validation and async operation support.

Remote MCP Server

A deployable Model Context Protocol server for Cloudflare Workers that allows users to create custom AI tools without authentication requirements and connect them to Cloudflare AI Playground or Claude Desktop.

erpnext-server

A TypeScript-based MCP server that provides integration with the ERPNext/Frappe API, enabling AI assistants to interact with ERPNext data and functionality through the Model Context Protocol.

MCP-openproject

Puch AI MCP Starter

A starter template for creating MCP servers compatible with Puch AI, featuring built-in tools for job searching and analysis, plus basic image processing capabilities. Includes authentication and deployment guidance for extending Puch AI with custom tools.

AnalyticDB for MySQL MCP Server

Mirror (Kagami)

Time Tools MCP Server

A Model Context Protocol server for time manipulation tasks, enabling AI models to get the current date/time and calculate duration between timestamps.

MCP Gemini Server

Mirror (Kagami)

Jupyter MCP Server

Enables interaction with Jupyter Notebooks through the Model Context Protocol inside a JupyterLab environment, supporting code execution and Markdown insertion.

Optimization MCP

Provides nine specialized production-ready solvers for advanced resource allocation, network flow, and multi-objective optimization with native Monte Carlo integration. It enables users to perform constraint-based decision-making and performance analysis directly through Claude Code.

MCP Weather & Files AI Server

An advanced Model Context Protocol server that integrates real-time weather data, local file system navigation, and geographic information with AI-powered analysis tools. It enables users to perform complex data evaluations and manage files while accessing live climatic and demographic data through compatible MCP clients.

Logstash MCP Server

A Model Context Protocol server that provides comprehensive tools for monitoring and identifying performance bottlenecks in Logstash instances through an interactive web UI and JSON-RPC interface.

MarketAuxMcpServer

An MCP server implementation that integrates the Marketaux API, providing news search by entity, country, industry, symbol, and more.

MCP-NOSTR

A bridge that enables AI language models to publish content to the Nostr network by implementing the Model Context Protocol (MCP).

Matomo MCP Server

A Model Context Protocol server that provides tools to interact with the Matomo Analytics API, enabling management of sites, users, goals, and segments, plus access to analytics reports through an MCP interface.

MCP API Service

A Model Context Protocol (MCP) server that integrates with internal system APIs. Users can check connectivity, search for employees, register breakfasts, and update per-shift chemical-substance information.

Local Falcon MCP Server

Connects AI systems to Local Falcon API, enabling access to local SEO reporting tools including scan reports, trend analysis, keyword tracking, and competitor data through the Model Context Protocol.

MaxKB

MaxKB is a ready-to-use RAG chatbot with robust workflows and MCP tool-use support. It works with a wide range of mainstream LLMs, including DeepSeek-R1, Llama 3.3, and OpenAI models.

Apple Shortcuts MCP Server

Enables the generation, management, and validation of Apple Shortcuts (.shortcut files) by providing tools to search actions and build control flow blocks. It allows users to programmatically create and analyze shortcut structures for deployment on iOS and macOS devices.

UUID MCP Provider

A simple Model Context Protocol server that generates timestamp-based UUIDs (v7) when called by an LLM. It requires no input parameters and provides unique identifiers that sort chronologically.

YaVendió Tools

An MCP-based messaging system that allows AI systems to interact with various messaging platforms through standardized tools for sending text, images, documents, buttons, and alerts.

pumpswap-mcp

Model Context Protocol (MCP)

This entry describes a working pattern (a design pattern or architectural approach) for building SSE (Server-Sent Events) based MCP clients and servers that leverage the Gemini LLM. It is a complex topic, so it is broken down into key components and considerations.

**I. Core Concepts & Technologies**

* **SSE (Server-Sent Events):** A unidirectional communication protocol in which the server pushes data to the client over a single long-lived HTTP connection. Ideal for real-time updates and streaming data.
* **MCP (Message Channel Protocol):** A protocol for exchanging messages between components of a system. In this context it defines the structure and semantics of the messages sent over the SSE connection; you will need to define your own MCP schema.
* **Gemini LLM:** Google's large language model, used here for:
    * **Content generation:** Producing responses, summaries, or other text-based data to send to the client.
    * **Intent recognition:** Understanding the client's requests (if the client sends any data back to the server, perhaps via a separate API).
    * **Data transformation:** Converting data from one format to another.
    * **Error handling:** Generating helpful error messages.
* **JSON (JavaScript Object Notation):** A common data format for structuring messages in both SSE and MCP. Highly recommended for interoperability.

**II. Architecture Diagram**

```
+---------------------+      +---------------------+      +---------------------+
|     MCP Client      | <--> |     SSE Server      | <--> |     Gemini LLM      |
| (e.g., Web Browser) |      | (e.g., Node.js App) |      | (Google Cloud/API)  |
+---------------------+      +---------------------+      +---------------------+
          |                            |                            |
          |  SSE Connection            |  MCP Messages              |
          V                            V                            V
  (Receives Updates)           (Handles Logic)          (Generates Content)
```

**III. Detailed Pattern**

1. **MCP Message Definition (Crucial):**
    * **Define a clear MCP schema.** This is the most important step. Consider using JSON Schema to formally define the structure of your messages; this helps with validation and consistency.
    * **Example MCP messages (JSON):**

      ```json
      {
        "type": "news_update",
        "timestamp": "2024-10-27T10:00:00Z",
        "headline": "Breaking News: ...",
        "content": "Detailed news article content...",
        "source": "Associated Press"
      }
      ```

      ```json
      {
        "type": "summary",
        "query": "What is the capital of France?",
        "answer": "The capital of France is Paris."
      }
      ```
    * **Message types:** Define different message types to represent different kinds of data or events, e.g. `news_update`, `status_update`, `error_message`, `summary`, `recommendation`.

2. **SSE Server Implementation:**
    * **Technology:** Node.js with `express` (optionally an SSE helper middleware such as `sse-express`) is a common choice. Python with `Flask` or `FastAPI` is also viable.
    * **Endpoint:** Create an SSE endpoint (e.g., `/stream`).
    * **Connection handling:** When a client connects to the SSE endpoint:
        * Set the appropriate headers: `Content-Type: text/event-stream` and `Cache-Control: no-cache`.
        * Keep the connection open.
    * **Message generation & sending:**
        * **Trigger:** The server needs a trigger to generate new messages. This could be a timer (e.g., send updates every 5 seconds), an event (e.g., a new news article is published), or a request from the client via a separate API endpoint.
        * **Data processing:**
            * **Call the Gemini LLM:** Use the Gemini API to generate content based on the trigger. For example, when a new news article is published, ask Gemini to generate a summary.
            * **Format as an MCP message:** Take the data from Gemini and format it into a valid MCP message (JSON).
            * **Send via SSE:** Use the SSE protocol to send the MCP message to the client.
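Since the pattern leaves the MCP schema up to you, incoming messages should be checked before they are dispatched. The sketch below validates the example message types from above; the per-type required fields are an assumption for illustration, and a production system would use a formal JSON Schema validator instead.

```javascript
// Minimal structural validation for the example MCP messages.
// REQUIRED_FIELDS is a hypothetical mapping derived from the sample
// messages; adjust it to match your own MCP schema.
const REQUIRED_FIELDS = {
  news_update: ['timestamp', 'headline', 'content', 'source'],
  summary: ['query', 'answer'],
  error_message: ['message'],
};

function validateMcpMessage(raw) {
  let msg;
  try {
    msg = JSON.parse(raw);
  } catch (err) {
    return { ok: false, error: 'invalid JSON' };
  }
  const required = REQUIRED_FIELDS[msg.type];
  if (!required) return { ok: false, error: `unknown type: ${msg.type}` };
  const missing = required.filter((field) => !(field in msg));
  if (missing.length > 0) {
    return { ok: false, error: `missing fields: ${missing.join(', ')}` };
  }
  return { ok: true, message: msg };
}
```

A client can run every incoming `event.data` string through `validateMcpMessage` and drop or log anything that fails, instead of letting a malformed message reach the UI code.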
The SSE wire format for each event is:

```
event: <event_name>   (optional)
data: <JSON_string>
id: <unique_id>       (optional)
```

Example:

```
event: news_update
data: {"type": "news_update", "timestamp": "...", "headline": "...", "content": "..."}
id: 12345
```

* **Error handling:** If the Gemini API call fails, or there is an error formatting the message, send an `error_message` MCP message to the client.
* **Example (Node.js with Express):**

```javascript
const express = require('express');
const { GoogleGenerativeAI } = require('@google/generative-ai');

const app = express();
const port = 3000;

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);
const model = genAI.getGenerativeModel({ model: 'gemini-1.5-pro-latest' });

app.get('/stream', (req, res) => {
  // Set SSE headers and keep the connection open.
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    Connection: 'keep-alive',
  });

  let messageId = 0;

  // Simulate a news update every 5 seconds.
  const timer = setInterval(async () => {
    messageId++;
    try {
      const prompt = 'Write a short, breaking news headline about a technological advancement.';
      const result = await model.generateContent(prompt);
      const mcpMessage = {
        type: 'news_update',
        timestamp: new Date().toISOString(),
        headline: result.response.text(),
        content: 'More details to follow...',
        source: 'AI News Service',
      };
      // No `event:` field, so the client's default `message` handler fires.
      res.write(`data: ${JSON.stringify(mcpMessage)}\nid: ${messageId}\n\n`);
    } catch (error) {
      console.error('Error generating news update:', error);
      const errorMessage = { type: 'error_message', message: 'Failed to generate news update.' };
      res.write(`data: ${JSON.stringify(errorMessage)}\nid: ${messageId}\n\n`);
    }
  }, 5000);

  // Stop generating updates when the client disconnects.
  req.on('close', () => clearInterval(timer));
});

app.listen(port, () => {
  console.log(`SSE server listening at http://localhost:${port}`);
});
```

3. **SSE Client Implementation:**
    * **Technology:** JavaScript in a web browser is the most common choice, using the `EventSource` API.
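The wire format above can be factored into a small helper so that every code path (updates and errors alike) emits well-formed frames. This is a sketch; `toSseFrame` is a hypothetical helper, not part of any library:

```javascript
// Serialize an MCP message object into a single SSE frame string.
// `eventName` and `id` are optional, mirroring the optional SSE fields.
function toSseFrame(mcpMessage, { eventName, id } = {}) {
  let frame = '';
  if (eventName) frame += `event: ${eventName}\n`;
  frame += `data: ${JSON.stringify(mcpMessage)}\n`;
  if (id !== undefined) frame += `id: ${id}\n`;
  return frame + '\n'; // a blank line terminates the event
}
```

The server can then call `res.write(toSseFrame(mcpMessage, { id: messageId }))` instead of building the frame inline. Note that setting `eventName` means the browser only sees the event via `addEventListener(eventName, ...)`, not `onmessage`.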
    * **Connect to the SSE endpoint:** Create an `EventSource` object pointing at your SSE endpoint.
    * **Event listeners:** Add event listeners to the `EventSource` object to handle incoming SSE events.
        * **`message` event:** The default event; it receives the `data` field of the SSE event. Parse the JSON to get the MCP message.
        * **Custom events:** If you use the `event` field in your SSE messages (e.g., `event: news_update`), add listeners for those specific event names.
        * **`error` event:** Handle errors in the SSE connection.
    * **MCP message handling:** Based on the `type` field of the MCP message, take the appropriate action. For example, if the `type` is `news_update`, display the headline and content in the UI.
    * **Error handling:** Display error messages to the user.
    * **Example (JavaScript in a web browser):**

```html
<!DOCTYPE html>
<html>
<head>
  <title>SSE Client</title>
</head>
<body>
  <h1>News Updates</h1>
  <div id="news-container"></div>
  <script>
    const eventSource = new EventSource('http://localhost:3000/stream'); // Replace with your server URL
    const newsContainer = document.getElementById('news-container');

    eventSource.onmessage = (event) => {
      try {
        const mcpMessage = JSON.parse(event.data);
        switch (mcpMessage.type) {
          case 'news_update': {
            // In production, escape model-generated text before inserting it into the DOM.
            const newsItem = document.createElement('div');
            newsItem.innerHTML = `<h3>${mcpMessage.headline}</h3><p>${mcpMessage.content}</p>` +
              `<p>Source: ${mcpMessage.source}</p><p>Timestamp: ${mcpMessage.timestamp}</p><hr>`;
            newsContainer.appendChild(newsItem);
            break;
          }
          case 'error_message': {
            const errorItem = document.createElement('div');
            errorItem.innerHTML = `<p style="color: red;">Error: ${mcpMessage.message}</p><hr>`;
            newsContainer.appendChild(errorItem);
            break;
          }
          default:
            console.warn('Unknown message type:', mcpMessage.type);
        }
      } catch (error) {
        console.error('Error parsing JSON:', error);
      }
    };

    eventSource.onerror = (error) => {
      console.error('SSE error:', error);
      newsContainer.innerHTML = '<p style="color: red;">Connection error. Check the server.</p>';
    };
  </script>
</body>
</html>
```

4. **Gemini LLM Integration:**
    * **Authentication:** Use the appropriate authentication method for the Gemini API (e.g., API key, service account). *Never* expose your API key in client-side code.
    * **Prompt engineering:** Carefully craft your prompts to Gemini to get the desired results; experiment with different prompts to optimize the quality of the generated content.
    * **Rate limiting:** Be aware of the Gemini API's rate limits and implement appropriate throttling in your server code.
    * **Error handling:** Handle errors from the Gemini API gracefully. Retry failed requests (with exponential backoff) or send an error message to the client.
    * **Content filtering:** Consider using Gemini's content-filtering features to prevent the generation of harmful or inappropriate content.
    * **Model selection:** Choose the Gemini model appropriate for your use case; models differ in capability and cost. The example uses `gemini-1.5-pro-latest`.

**IV. Considerations and Best Practices**

* **Security:**
    * **Authentication/authorization:** Protect your SSE endpoint so unauthorized clients cannot receive data; consider using JWTs (JSON Web Tokens).
    * **Data validation:** Validate all incoming data (especially if the client can send data to the server) to prevent injection attacks.
    * **CORS:** Configure CORS (Cross-Origin Resource Sharing) properly if your client and server are on different domains.
* **Scalability:**
    * **Load balancing:** Use a load balancer to distribute traffic across multiple SSE servers.
    * **Connection limits:** Be aware of the connection limits of your server and database.
    * **Asynchronous operations:** Use asynchronous operations (e.g., `async/await` in JavaScript) to avoid blocking the main thread.
* **Reliability:**
    * **Heartbeats:** Implement heartbeats to detect broken connections.
The server can send a periodic "ping" message; since SSE is one-way, the client would answer with a "pong" over a separate request channel if two-way liveness checking is needed.
    * **Automatic reconnection:** The `EventSource` API automatically attempts to reconnect if the connection is lost; the server can suggest the reconnection delay via the SSE `retry:` field.
    * **Message queues:** Consider using a message queue (e.g., RabbitMQ, Kafka) to buffer messages if the Gemini API is unavailable or the server is overloaded.
* **Monitoring:**
    * **Metrics:** Collect metrics on the number of connected clients, message throughput, and error rates.
    * **Logging:** Log important events and errors.
* **Client-to-server communication (optional):** While SSE is primarily unidirectional, you might need the client to send data to the server (e.g., to request a specific summary). In that case, expose a separate REST API endpoint for client requests; the server processes the request, calls Gemini, and sends the result back to the client via SSE.
* **MCP evolution:** Design your MCP schema to be extensible; you may need to add new message types or fields in the future. Use versioning to handle changes gracefully.
* **Data serialization:** JSON is recommended, but consider other serialization formats such as Protocol Buffers or Avro if you need higher performance or more compact messages.

**V. Example Use Cases**

* **Real-time news updates:** The server streams news headlines and summaries generated by Gemini to the client.
* **Live chat:** The server streams chat messages generated by Gemini (e.g., suggested responses) to the client.
* **Personalized recommendations:** The server streams personalized recommendations generated by Gemini to the client.
* **Live translation:** The server streams translated text generated by Gemini to the client.
* **AI-powered customer support:** The server streams suggested responses generated by Gemini to a customer support agent's interface.

**VI. Notes for Japanese-Language Deployments**

* **Character encoding:** Ensure your server and client use UTF-8 so Japanese characters are handled correctly.
* **Gemini's Japanese-language capabilities:** Test Gemini's performance with Japanese text. It is generally good, but there may be nuances or specific tasks where it is less accurate; fine-tuning may be necessary for optimal results.
* **Cultural considerations:** When generating content with Gemini, be mindful of Japanese cultural norms and sensitivities; avoid generating content that could be offensive or inappropriate.

This detailed pattern should give you a solid foundation for building SSE-based MCP clients and servers with the Gemini LLM. Adapt it to your specific requirements and test thoroughly.
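The heartbeat recommendation above can be implemented without adding a "ping" type to the MCP schema at all: SSE lines beginning with `:` are comments that `EventSource` ignores, so they keep intermediaries from timing out idle connections. A minimal sketch, assuming `res` is an Express response with SSE headers already set:

```javascript
// Send an SSE comment line (': ping') on a fixed interval as a keep-alive.
// Returns a stop function to be called from the request's 'close' handler.
function startHeartbeat(res, intervalMs = 15000) {
  const timer = setInterval(() => {
    res.write(': ping\n\n'); // comment frame; invisible to EventSource handlers
  }, intervalMs);
  return () => clearInterval(timer);
}
```

Usage inside the `/stream` handler: `const stopHeartbeat = startHeartbeat(res); req.on('close', stopHeartbeat);`.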

Arcaelas MCP

Provides AI-powered tools for image generation, text-to-speech conversion, background removal, and image upscaling through OpenAI and image processing APIs.

watsonx MCP Server

Enables Claude to delegate tasks to IBM watsonx.ai foundation models (Granite, Llama, Mistral) for text generation, chat, embeddings, and document analysis. Supports two-agent architectures where Claude can leverage IBM's enterprise AI capabilities for specialized workloads.

MySQL MCP Server

Enables secure interaction with MySQL databases through listing tables, reading data, and executing SQL queries with proper error handling and controlled access.

Kylas CRM MCP Server

Enables management of Kylas CRM lead operations, including creating leads, searching and filtering records, and resolving user, product, or pipeline IDs. It provides specialized tools for monitoring idle leads and accessing lead schema instructions through natural language.

Scrapeless MCP Server

A Model Context Protocol server implementation that lets AI assistants such as Claude run Google searches and retrieve web data directly through natural-language requests.

RAG Documentation MCP Server

Provides tools for retrieving and processing documentation through vector search, enabling AI assistants to augment their responses with relevant documentation context.

test-newsapi-mcp

Gives AI agents access to news aggregation from 80,000+ sources via NewsAPI. Supports both authenticated access with API keys and pay-per-use via the HTTP 402 protocol with blockchain payment verification.