Discover Awesome MCP Servers

Extend your agent with 14,529 capabilities via MCP servers.

All 14,529
mcp-server-bluesky

Mirror

Kubectl MCP Tool

A Model Context Protocol server that enables AI assistants to interact with Kubernetes clusters through natural language, supporting core Kubernetes operations, monitoring, security, and diagnostics.

Apache Doris MCP Server

MCP server for Apache Doris and VeloDB

Choose MCP Server Setup

Mirror

mock-assistant-mcp-server

MCP server assistant for mock data.

Creating an MCP Server in Go and Serving it with Docker

McpDocs

Here's a breakdown of how you can provide Elixir project documentation (including dependencies) to an LLM via an SSE (Server-Sent Events) MCP (Model Context Protocol) server. This is a complex task with several moving parts, so I'll outline the steps, technologies, and considerations.

**1. Project Setup and Documentation Generation**

* **Elixir Project:** Assume you have an Elixir project with modules, functions, and proper documentation using `ExDoc`.
* **ExDoc Configuration:** Ensure your `mix.exs` file has the `ex_doc` dependency and is configured correctly. This is crucial for generating the documentation in a usable format.

```elixir
def deps do
  [
    {:ex_doc, "~> 0.31", only: :dev, runtime: false}
  ]
end

def project do
  [
    app: :my_elixir_app,
    version: "0.1.0",
    elixir: "~> 1.14",
    start_permanent: Mix.env() == :prod,
    deps: deps(),
    docs: [
      # Optional: the main docs page, a link to your source code,
      # and other files to include in the documentation
      main: "MyElixirApp",
      source_url: "https://github.com/your-username/my-elixir-app",
      extras: ["README.md"]
    ]
  ]
end
```

* **Generate Documentation:** Run `mix docs` in your project directory. This creates a `doc` directory containing the HTML documentation.

**2. Extracting Documentation Data**

The key challenge is extracting the relevant information from the generated documentation in a format suitable for an LLM. Here are a few approaches:

* **Parsing HTML:** The simplest approach is to parse the generated HTML files with a library like `Floki` (optionally backed by the `html5ever` parser) to extract module names, function names, arguments, and documentation strings. This is fragile because the HTML structure `ExDoc` emits can change between versions.
```elixir
# Example using Floki (requires adding :floki to your dependencies)
defmodule DocExtractor do
  require Logger

  def extract_function_docs(html_file) do
    html_content = File.read!(html_file)
    {:ok, document} = Floki.parse_document(html_content)

    # Example: extract function names and descriptions (adjust selectors as needed)
    document
    |> Floki.find(".detail-header")   # Adjust selector to match ExDoc's HTML
    |> Enum.map(fn element ->
      function_name =
        element |> Floki.find(".detail-name") |> Floki.text() |> String.trim()

      description =
        element |> Floki.find(".detail-doc") |> Floki.text() |> String.trim()

      %{name: function_name, description: description}
    end)
  end
end
```

* **Using `ExDoc.Markdown` (More Robust):** A better approach is to leverage `ExDoc.Markdown` directly and parse the Markdown source files that `ExDoc` uses internally. You'll need to understand how `ExDoc` structures its Markdown files. This is more complex, but less prone to breaking with `ExDoc` updates than parsing HTML.

```elixir
# Example (conceptual; requires more detailed implementation)
defmodule MarkdownDocExtractor do
  require Logger

  def extract_function_docs_from_markdown(markdown_file) do
    markdown_content = File.read!(markdown_file)

    # Use ExDoc.Markdown to parse the Markdown content
    # (this part requires more research and implementation), e.g.:
    #
    #   parsed_markdown = ExDoc.Markdown.parse(markdown_content)
    #
    # Then traverse the parsed structure to extract function details.
    # ...
  end
end
```

* **Directly Accessing `ExDoc`'s Internal Data (Advanced, Risky):** You *could* try to access `ExDoc`'s internal data structures directly. This is the most fragile approach and is highly discouraged: it relies on undocumented internal APIs that are subject to change without notice, and you'd need to read `ExDoc`'s source code to understand how it stores the documentation data.

**3. Dependency Documentation**

* **Include Dependencies:** When generating documentation with `mix docs`, `ExDoc` will also generate documentation for your project's dependencies (if they have `ExDoc` documentation). The dependency documentation lives in the `doc` directory, organized by package name.
* **Extract Dependency Docs:** Use the same extraction methods (HTML parsing or `ExDoc.Markdown`) on the dependency documentation files. Make sure to clearly identify the source of each entry (i.e., the dependency name and version).

**4. Structuring the Data for the LLM**

The LLM needs structured data. A good format is JSON, with each function represented as an object; `dependency` is `null` for your own code, or the dependency name otherwise:

```json
[
  {
    "module": "MyElixirApp.MyModule",
    "function": "my_function",
    "signature": "my_function(arg1, arg2)",
    "docstring": "This function does something important.",
    "source_file": "lib/my_elixir_app/my_module.ex",
    "dependency": null
  },
  {
    "module": "SomeDependency.SomeModule",
    "function": "another_function",
    "signature": "another_function(x)",
    "docstring": "This function is from a dependency.",
    "source_file": "deps/some_dependency/lib/some_dependency/some_module.ex",
    "dependency": "SomeDependency"
  }
]
```

**5. SSE MCP Server Implementation**

* **Choose a Web Server:** Use `Phoenix` or `Plug` to create the SSE endpoint. `Phoenix` is a full-featured framework, while `Plug` is a lightweight option.
* **SSE Endpoint:** Create an endpoint that streams the documentation data as Server-Sent Events. SSE is a simple protocol for pushing data from the server to the client over HTTP.
* **MCP (Model Context Protocol):** MCP is a higher-level protocol that can be layered on top of SSE. It provides features like message IDs, acknowledgments, and error handling. You'll need to implement the MCP logic on both the server and client sides.
There isn't a standard Elixir MCP library, so you'll likely need to build your own or adapt an existing one from another language. Here's a basic example using `Plug` for the SSE server:

```elixir
defmodule DocServer do
  use Plug.Router
  require Logger

  plug(:match)
  plug(:dispatch)

  get "/sse" do
    conn
    |> put_resp_header("content-type", "text/event-stream")
    |> put_resp_header("cache-control", "no-cache")
    |> send_chunked(200)
    |> stream_docs()
  end

  defp stream_docs(conn) do
    # Replace with your actual documentation extraction logic
    docs = [
      %{module: "MyModule", function: "my_func", docstring: "Does something"},
      %{module: "AnotherModule", function: "another_func", docstring: "Does something else"}
    ]

    Enum.reduce_while(docs, conn, fn doc, conn ->
      data = Jason.encode!(doc)  # Use Jason for JSON encoding

      case chunk(conn, "event: doc\ndata: #{data}\n\n") do
        {:ok, conn} -> {:cont, conn}
        {:error, _reason} -> {:halt, conn}
      end
    end)
  end
end
```

* **MCP Layer (Conceptual):**
  * **Message IDs:** Assign a unique ID to each documentation chunk sent.
  * **Acknowledgments:** The LLM client should send an acknowledgment message back to the server after receiving each chunk.
  * **Error Handling:** Implement error handling to deal with network issues or data corruption.
  * **Framing:** MCP often involves framing messages to ensure proper delivery. This might involve adding a header with the message length.

**6. LLM Client**

* **SSE Client:** The LLM client needs to connect to the SSE endpoint and receive the documentation data. Most programming languages have libraries for handling SSE connections.
* **MCP Client:** Implement the MCP client logic to send acknowledgments and handle errors.
* **Data Processing:** The LLM client needs to parse the JSON data and feed it to the LLM.

**7. Considerations**

* **Scalability:** For large projects, consider a more robust streaming solution like Apache Kafka or RabbitMQ; SSE is suitable for smaller datasets.
* **Error Handling:** Implement comprehensive error handling throughout the system.
* **Security:** If the documentation contains sensitive information, secure the SSE endpoint with authentication and authorization.
* **Rate Limiting:** Implement rate limiting to prevent the LLM client from overwhelming the server.
* **Data Size:** Consider the size of the documentation data. If it's very large, you may need to send it in smaller chunks.
* **LLM Context Window:** Be mindful of the LLM's context window; break the documentation into chunks that fit within it.
* **Tokenization:** Consider the tokenization used by the LLM. You might need to pre-tokenize the documentation data to optimize performance.
* **Incremental Updates:** Implement a mechanism for sending incremental updates to the LLM when the documentation changes.

**Example Client (Conceptual, Python, using the `requests` and `sseclient-py` packages):**

```python
import json

import requests
import sseclient

url = "http://localhost:4000"  # Replace with your server URL


def main():
    response = requests.get(url, stream=True)
    client = sseclient.SSEClient(response)

    for event in client.events():
        if event.event == "doc":
            try:
                doc = json.loads(event.data)
                print(f"Received doc: {doc['module']}.{doc['function']}")
                # Process the documentation data and feed it to the LLM
                # ...
                # Send an acknowledgment (MCP)
                # ...
            except json.JSONDecodeError as e:
                print(f"Error decoding JSON: {e}")
                # Handle the error (MCP)
                # ...
        else:
            print(f"Received event: {event.event}, data: {event.data}")


if __name__ == "__main__":
    main()
```

**Summary**

This is a complex project that requires a good understanding of Elixir, `ExDoc`, web servers, SSE, MCP, and LLMs. Start with the basics (documentation generation and HTML parsing) and add complexity as needed. Consider a more robust streaming solution for a large project, and prioritize error handling and security throughout. Good luck!
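The MCP-layer and context-window points above can be sketched in Python. This is a minimal illustration under stated assumptions, not a real MCP implementation: `frame_event`, `parse_ack`, and `chunk_docs` are hypothetical helpers showing one way to add message IDs to SSE frames, parse a client acknowledgment, and pack doc entries into context-window-sized chunks.

```python
import json


def frame_event(msg_id: int, doc: dict) -> str:
    """Frame one documentation entry as an SSE event carrying a message ID."""
    payload = json.dumps({"id": msg_id, **doc})
    return f"id: {msg_id}\nevent: doc\ndata: {payload}\n\n"


def parse_ack(raw: str) -> int:
    """Parse the acknowledgment a client sends back after receiving a chunk."""
    return json.loads(raw)["ack"]


def chunk_docs(docs: list, max_chars: int = 4000) -> list:
    """Greedily pack doc entries into chunks whose JSON stays under max_chars,
    so each chunk fits comfortably within the LLM's context window."""
    chunks, current, size = [], [], 0
    for doc in docs:
        encoded_len = len(json.dumps(doc))
        if current and size + encoded_len > max_chars:
            chunks.append(current)
            current, size = [], 0
        current.append(doc)
        size += encoded_len
    if current:
        chunks.append(current)
    return chunks
```

A server along these lines would send `frame_event(i, doc)` for each entry and wait for `parse_ack` before advancing; `max_chars` is a crude stand-in for real token counting.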

🐋 Docker MCP server

Mirror

mcp-server-test

Test MCP server

MCP Server for Prom.ua

MCP server for interacting with the Prom.ua API

MCP-DeanMachines

testmcpgithubdemo1

Created from an MCP server demo.

Linear MCP Server

Mirror

untapped-mcp

An untapped MCP server for use with Claude.

Simple Memory Extension MCP Server

An MCP server that extends AI agents' context windows by providing tools to store, retrieve, and search memories, allowing agents to maintain history and context across long interactions.

Telegram MCP Server

MCP server for sending notifications to Telegram

ChatGPT MCP Server

Mirror

Mcp Servers Wiki Website

Binance Market Data MCP Server

create-mcp-server

A comprehensive architecture for building powerful Model Context Protocol (MCP) servers with integrated web capabilities

MCP System Monitor

A system monitoring tool that exposes system metrics via the Model Context Protocol (MCP), allowing large language models (LLMs) to retrieve real-time system information through an MCP-compatible interface.

mpc-csharp-semantickernel

Okay, here's an example demonstrating how to use Microsoft Semantic Kernel with OpenAI and a hypothetical "MCP Server" (assuming MCP stands for something like "My Custom Processing Server" or "Message Control Protocol Server"). Since "MCP Server" is vague, I'll make some assumptions about its functionality and how it might interact with Semantic Kernel. You'll need to adapt this to your specific MCP Server's capabilities.

**Conceptual Overview**

The core idea is to use Semantic Kernel to orchestrate interactions between OpenAI (for language understanding and generation) and your MCP Server (for specialized processing, data retrieval, or control actions).

**Assumptions about the MCP Server**

* **API Endpoint:** It exposes an API endpoint (e.g., a REST API) for receiving requests and sending responses.
* **Functionality:** Let's assume it can perform a specific task, such as:
  * **Data Lookup:** Retrieve information from a database based on a query.
  * **System Control:** Execute a command on a system.
  * **Message Routing:** Route a message to a specific destination.
* **Input/Output:** It expects structured input (e.g., JSON) and returns structured output (e.g., JSON).

**Example Scenario: Smart Home Control**

Let's imagine an MCP Server that controls smart home devices. We want to use Semantic Kernel and OpenAI to let users control their home with natural language.
**Code Example (C#)**

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.SemanticKernel;

public class SmartHomePlugin
{
    private readonly HttpClient _httpClient;
    private readonly string _mcpServerEndpoint;

    public SmartHomePlugin(string mcpServerEndpoint)
    {
        _httpClient = new HttpClient();
        _mcpServerEndpoint = mcpServerEndpoint;
    }

    [KernelFunction, Description("Controls a smart home device.")]
    public async Task<string> ControlDevice(
        [Description("The device to control (e.g., lights, thermostat).")] string device,
        [Description("The action to perform (e.g., turn on, turn off, set temperature).")] string action,
        [Description("The value to set (e.g., 22 for temperature).")] string value = "")
    {
        // 1. Prepare the request to the MCP Server
        var requestData = new { device, action, value };
        string jsonRequest = JsonSerializer.Serialize(requestData);
        var content = new StringContent(jsonRequest, Encoding.UTF8, "application/json");

        // 2. Send the request to the MCP Server
        HttpResponseMessage response = await _httpClient.PostAsync(_mcpServerEndpoint, content);

        // 3. Handle the response from the MCP Server
        if (response.IsSuccessStatusCode)
        {
            string jsonResponse = await response.Content.ReadAsStringAsync();
            try
            {
                // Deserialize the JSON response (assuming the MCP Server returns JSON)
                var responseObject = JsonSerializer.Deserialize<Dictionary<string, string>>(jsonResponse);
                return responseObject?["status"] ?? "Unknown status"; // Assuming a "status" field
            }
            catch (JsonException ex)
            {
                Console.WriteLine($"Error deserializing MCP Server response: {ex.Message}");
                return "Error processing MCP Server response.";
            }
        }

        Console.WriteLine($"MCP Server request failed: {response.StatusCode}");
        return $"MCP Server request failed with status code: {response.StatusCode}";
    }
}

public class Example
{
    public static async Task Main()
    {
        // 1. Configure Semantic Kernel
        string apiKey = "YOUR_OPENAI_API_KEY";
        string orgId = "YOUR_OPENAI_ORG_ID"; // Optional

        Kernel kernel = Kernel.CreateBuilder()
            .AddOpenAIChatCompletion("gpt-3.5-turbo", apiKey, orgId) // Or "gpt-4"
            .Build();

        // 2. Define the MCP Server endpoint
        string mcpServerEndpoint = "http://your-mcp-server.com/api/control"; // Replace with your actual endpoint

        // 3. Import the SmartHomePlugin
        var smartHomePlugin = new SmartHomePlugin(mcpServerEndpoint);
        kernel.ImportPluginFromObject(smartHomePlugin, "SmartHome");

        // 4. Create a semantic function (prompt) that calls the plugin
        string prompt = @"
Control the smart home device.
Device: {{$device}}
Action: {{$action}}
Value: {{$value}}

{{SmartHome.ControlDevice $device $action $value}}
";
        var smartHomeFunction = kernel.CreateFunctionFromPrompt(prompt);

        // 5. Run the semantic function with user input
        var arguments = new KernelArguments
        {
            ["device"] = "lights",
            ["action"] = "turn on",
            ["value"] = ""
        };
        var result = await smartHomeFunction.InvokeAsync(kernel, arguments);
        Console.WriteLine($"Result: {result.GetValue<string>()}");

        // Example 2: more natural language input, using OpenAI to extract parameters
        string naturalLanguagePrompt = "Turn on the living room lights.";

        // Define a prompt to extract device, action, and value from the natural language input
        string extractionPrompt = @"
Extract the device, action, and value from the following text:

Text: {{$text}}

Device:
Action:
Value:
";
        var extractionFunction = kernel.CreateFunctionFromPrompt(extractionPrompt);
        var extractionResult = await extractionFunction.InvokeAsync(
            kernel, new KernelArguments { ["text"] = naturalLanguagePrompt });
        string extractedText = extractionResult.GetValue<string>()!;

        // Parse the extracted text (a simplified example; you may need more robust parsing)
        string extractedDevice = extractedText.Split("Device:")[1].Split("Action:")[0].Trim();
        string extractedAction = extractedText.Split("Action:")[1].Split("Value:")[0].Trim();
        string extractedValue = extractedText.Split("Value:")[1].Trim();

        Console.WriteLine($"Extracted Device: {extractedDevice}");
        Console.WriteLine($"Extracted Action: {extractedAction}");
        Console.WriteLine($"Extracted Value: {extractedValue}");

        // Now use the extracted parameters with the SmartHome.ControlDevice function
        var controlArguments = new KernelArguments
        {
            ["device"] = extractedDevice,
            ["action"] = extractedAction,
            ["value"] = extractedValue
        };
        var controlResult = await smartHomeFunction.InvokeAsync(kernel, controlArguments);
        Console.WriteLine($"Control Result: {controlResult.GetValue<string>()}");
    }
}
```

**Explanation:**

1. **`SmartHomePlugin`:**
   * This class is a Semantic Kernel plugin that interacts with the MCP Server; it takes the MCP Server endpoint as a constructor parameter.
   * The `ControlDevice` function is decorated with `[KernelFunction]` to make it available to Semantic Kernel.
   * It builds a JSON request from the input parameters (`device`, `action`, `value`), sends a POST request to the MCP Server, and handles the response, deserializing the JSON and returning a status message. Error handling is included.
2. **`Example.Main`:**
   * **Configure Semantic Kernel:** Sets up the kernel with your OpenAI API key and organization ID.
   * **Define MCP Server Endpoint:** Replace `"http://your-mcp-server.com/api/control"` with the actual URL of your MCP Server's API endpoint.
   * **Import Plugin:** Creates an instance of `SmartHomePlugin` and imports it into the kernel, making `ControlDevice` available for use in prompts.
   * **Create Semantic Function (Prompt):** Defines a prompt that uses `SmartHome.ControlDevice` and takes `device`, `action`, and `value` as input parameters.
   * **Run Semantic Function:** Builds a `KernelArguments` object with the desired device, action, and value, then invokes the semantic function. The result from the MCP Server is printed to the console.
   * **Natural Language Example:** Demonstrates how to use OpenAI to extract the device, action, and value from a natural language prompt, so users can control their smart home with more natural commands. A separate prompt is used for extraction, and the extracted parameters are then passed to `SmartHome.ControlDevice`.

**Key Points and Considerations:**

* **MCP Server API:** The most important part is understanding the API of your MCP Server: the endpoint, the expected request format (JSON schema), and the format of the response.
* **Error Handling:** The example includes basic error handling for network requests and JSON deserialization. Add more robust error handling for production code.
* **Security:** If your MCP Server requires authentication, add authentication headers to the `HttpClient` requests. Never hardcode sensitive information like API keys directly in your code; use environment variables or a secure configuration mechanism.
* **Prompt Engineering:** The prompts are crucial for getting the desired behavior. Experiment with different prompts to improve accuracy and reliability, and consider techniques like few-shot learning to provide examples to the language model.
* **JSON Serialization/Deserialization:** The example uses `System.Text.Json`; you can use another JSON library like Newtonsoft.Json if you prefer.
* **Dependency Injection:** For larger applications, consider using dependency injection to manage the `HttpClient` and other dependencies.
* **Asynchronous Operations:** The example uses `async` and `await`, which is important for avoiding blocking the main thread and improving performance.
* **Parameter Extraction:** The natural language example uses a simple string-splitting approach to extract parameters.
For more complex scenarios, you might use regular expressions or a dedicated natural language processing library; Semantic Kernel also offers more advanced techniques for parameter extraction.

* **Semantic Kernel Plugins:** Consider breaking your MCP Server functionality into multiple Semantic Kernel plugins for better organization and reusability.
* **Testing:** Write unit tests to verify your Semantic Kernel plugins and their interactions with the MCP Server.

**How to Adapt This Example:**

1. **Replace Placeholders:** Replace `"YOUR_OPENAI_API_KEY"`, `"YOUR_OPENAI_ORG_ID"`, and `"http://your-mcp-server.com/api/control"` with your actual values.
2. **Implement MCP Server Interaction:** Modify `SmartHomePlugin` to match the API of your MCP Server, adjusting the request format, response handling, and error handling accordingly.
3. **Customize Prompts:** Adjust the prompts to match the specific tasks you want to perform.
4. **Add Error Handling:** Implement more robust error handling for potential issues with the MCP Server or the OpenAI API.
5. **Add Security:** Implement appropriate security measures to protect your API keys and other sensitive information.

**Chinese Translation of Key Concepts:**

* **Microsoft Semantic Kernel:** 微软语义内核 (Wēiruǎn yǔyì nèihé)
* **OpenAI:** 开放人工智能 (Kāifàng réngōng zhìnéng)
* **MCP Server:** Translate based on what MCP stands for in your context; for "My Custom Processing Server", you could use 我的自定义处理服务器 (Wǒ de zì dìngyì chǔlǐ fúwùqì)
* **Plugin:** 插件 (Chājiàn)
* **Kernel Function:** 内核函数 (Nèihé hánshù)
* **Prompt:** 提示 (Tíshì)
* **Semantic Function:** 语义函数 (Yǔyì hánshù)
* **API Endpoint:** 应用程序接口端点 (Yìngyòng chéngxù jiēkǒu duāndiǎn)
* **Natural Language:** 自然语言 (Zìrán yǔyán)

This comprehensive example should give you a solid foundation for using Microsoft Semantic Kernel with OpenAI and your MCP Server.
Remember to adapt the code to your specific needs and to thoroughly test your implementation. Good luck!
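As a sketch of the more robust regex-based parsing mentioned under **Parameter Extraction**, here is the same parsing step in Python. The field names mirror the extraction prompt above; `extract_parameters` is a hypothetical helper for illustration, not part of Semantic Kernel.

```python
import re


def extract_parameters(model_output: str) -> dict:
    """Pull Device/Action/Value fields out of the model's completion using
    anchored, line-based regexes instead of brittle string splitting."""
    fields = {}
    for name in ("Device", "Action", "Value"):
        # Match "Name: <rest of line>"; a missing field yields an empty string.
        match = re.search(rf"^{name}:\s*(.*)$", model_output, re.MULTILINE)
        fields[name.lower()] = match.group(1).strip() if match else ""
    return fields
```

Unlike `Split("Device:")`, this degrades gracefully when the model omits a field or reorders the lines.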

MCP Server Pool

A collection of MCP servers

google-workspace-mcp

Linear

mcp-server-fetch-typescript MCP Server

Mirror

MCP Server Runner

A WebSocket server implementation for running Model Context Protocol (MCP) servers. This application makes MCP servers accessible over WebSocket connections, easing integration with web applications and other network-enabled clients.

Google Scholar

🔍 Enables AI assistants to search and access Google Scholar papers through a simple MCP interface.

comment-stripper-mcp

A flexible MCP server that batch-processes code files to strip comments across multiple programming languages. It currently supports JavaScript, TypeScript, and Vue files using regex-based pattern matching, and can handle individual files, directories (including subdirectories), and text input. Built for clean code maintenance and preparation.

Model Context Protocol Community

Easily run, deploy, and connect to MCP servers