Discover Awesome MCP Servers
Extend your agent with 12,378 capabilities via MCP servers.
- All12,378
- Developer Tools3,867
- Search1,714
- Research & Data1,557
- AI Integration Systems229
- Cloud Platforms219
- Data & App Analysis181
- Database Interaction177
- Remote Shell Execution165
- Browser Automation147
- Databases145
- Communication137
- AI Content Generation127
- OS Automation120
- Programming Docs Access109
- Content Fetching108
- Note Taking97
- File Systems96
- Version Control93
- Finance91
- Knowledge & Memory90
- Monitoring79
- Security71
- Image & Video Processing69
- Digital Note Management66
- AI Memory Systems62
- Advanced AI Reasoning59
- Git Management Tools58
- Cloud Storage51
- Entertainment & Media43
- Virtualization42
- Location Services35
- Web Automation & Stealth32
- Media Content Processing32
- Calendar Management26
- Ecommerce & Retail18
- Speech Processing18
- Customer Data Platforms16
- Travel & Transportation14
- Education & Learning Tools13
- Home Automation & IoT13
- Web Search Integration12
- Health & Wellness10
- Customer Support10
- Marketing9
- Games & Gamification8
- Google Cloud Integrations7
- Art & Culture4
- Language Translation3
- Legal & Compliance2
Telegram MCP Server
MCP服务器向Telegram发送通知
Mcp Servers Wiki Website
Binance Market Data MCP Server
Prompt Decorators
一个标准化的框架,旨在通过可组合的装饰器来增强大型语言模型(LLM)处理和响应提示的方式。该框架包含一个官方的开放标准规范和一个带有 MCP 服务器集成的 Python 参考实现。
Apache Doris MCP Server
Apache Doris 和 VeloDB 的 MCP 服务器
Choose MCP Server Setup
镜子 (jìng zi)
mock-assistant-mcp-server
MCP 服务器助手,用于模拟数据。
Model Context Protocol (MCP) Server 🚀
McpDocs
Okay, this is a complex task involving several moving parts. Here's a breakdown of how you can provide Elixir project documentation (including dependencies) to an LLM via an SSE (Server-Sent Events) MCP (Message Channel Protocol) server. I'll outline the steps, technologies, and considerations. **1. Project Setup and Documentation Generation** * **Elixir Project:** Assume you have an Elixir project with modules, functions, and proper documentation using `ExDoc`. * **ExDoc Configuration:** Ensure your `mix.exs` file has the `ex_doc` dependency and is configured correctly. This is crucial for generating the documentation in a usable format. ```elixir def deps do [ {:ex_doc, "~> 0.31", only: :dev, runtime: false} ] end def project do [ app: :my_elixir_app, version: "0.1.0", elixir: "~> 1.14", start_permanent: Mix.env() == :prod, deps: deps(), docs: [ main: "MyElixirApp", # Optional: The main page for your documentation source_url: "https://github.com/your-username/my-elixir-app", # Optional: Link to your source code extras: ["README.md"] # Optional: Include other files in the documentation ] ] end ``` * **Generate Documentation:** Run `mix docs` in your project directory. This will create a `doc` directory containing the HTML documentation. **2. Extracting Documentation Data** The key challenge is extracting the relevant information from the generated documentation in a format suitable for an LLM. Here are a few approaches: * **Parsing HTML:** The simplest approach is to parse the generated HTML files. You can use libraries like `Floki` or `HTML5ever` in Elixir to extract the function names, arguments, documentation strings, and module names. This is fragile because the HTML structure of `ExDoc` can change between versions. ```elixir # Example using Floki (requires adding Floki to your dependencies) defmodule DocExtractor do require Logger import Floki def extract_function_docs(html_file) do html_content = File.read!(html_file) {:ok, html} = HTML5.parse(html_content) # Example: Extract function names and descriptions (adjust selectors as needed) function_data = html |> Floki.find(".detail-header") # Adjust selector to match ExDoc's HTML |> Enum.map(fn element -> function_name = element |> Floki.find(".detail-name") |> Floki.text() |> List.first() description = element |> Floki.find(".detail-doc") |> Floki.text() |> List.first() %{name: function_name, description: description} end) function_data end end ``` * **Using `ExDoc.Markdown` (More Robust):** A better approach is to leverage `ExDoc.Markdown` directly. This allows you to parse the Markdown source files that `ExDoc` uses internally. This is more stable than parsing HTML. You'll need to understand how `ExDoc` structures its Markdown files. This is more complex but less prone to breaking with `ExDoc` updates. ```elixir # Example (Conceptual - requires more detailed implementation) defmodule DocExtractor do require Logger def extract_function_docs_from_markdown(markdown_file) do markdown_content = File.read!(markdown_file) # Use ExDoc.Markdown to parse the Markdown content # (This part requires more research and implementation) # Example: # parsed_markdown = ExDoc.Markdown.parse(markdown_content) # Then, traverse the parsed_markdown structure to extract function details # ... end end ``` * **Directly Accessing `ExDoc`'s Internal Data (Advanced, Risky):** You *could* try to access `ExDoc`'s internal data structures directly. This is the most fragile approach and is highly discouraged because it relies on undocumented internal APIs that are subject to change without notice. You'd need to delve into `ExDoc`'s source code to understand how it stores the documentation data. **3. Dependency Documentation** * **Include Dependencies:** When generating documentation with `mix docs`, `ExDoc` will also generate documentation for your project's dependencies (if they have `ExDoc` documentation). The documentation for dependencies will be located in the `doc` directory, organized by package name. * **Extract Dependency Docs:** Use the same extraction methods (HTML parsing or `ExDoc.Markdown`) to extract documentation from the dependency documentation files. Make sure to clearly identify the source of the documentation (i.e., the dependency name and version). **4. Structuring the Data for the LLM** The LLM needs structured data. A good format is JSON, with each function represented as a JSON object. ```json [ { "module": "MyElixirApp.MyModule", "function": "my_function", "signature": "my_function(arg1, arg2)", "docstring": "This function does something important.", "source_file": "lib/my_elixir_app/my_module.ex", "dependency": null // or "SomeDependency" if it's from a dependency }, { "module": "SomeDependency.SomeModule", "function": "another_function", "signature": "another_function(x)", "docstring": "This function is from a dependency.", "source_file": "deps/some_dependency/lib/some_dependency/some_module.ex", "dependency": "SomeDependency" } ] ``` **5. SSE MCP Server Implementation** * **Choose a Web Server:** Use a web server like `Phoenix` or `Plug` to create the SSE endpoint. `Phoenix` is a full-featured framework, while `Plug` is a lightweight option. * **SSE Endpoint:** Create an endpoint that streams the documentation data as Server-Sent Events. SSE is a simple protocol for pushing data from the server to the client over HTTP. * **MCP (Message Channel Protocol):** MCP is a higher-level protocol that can be built on top of SSE. It provides features like message IDs, acknowledgments, and error handling. You'll need to implement the MCP logic on both the server and client sides. There isn't a standard Elixir MCP library, so you'll likely need to build your own or adapt an existing one from another language. Here's a basic example using `Plug` for the SSE server: ```elixir defmodule DocServer do use Plug.Router require Logger plug(:match) plug(:dispatch) def init(_opts) do [] end def call(conn, _opts) do conn |> Plug.Conn.put_resp_header("content-type", "text/event-stream") |> Plug.Conn.send_resp(200, stream_docs()) end defp stream_docs() do # Replace with your actual documentation extraction logic docs = [ %{module: "MyModule", function: "my_func", docstring: "Does something"}, %{module: "AnotherModule", function: "another_func", docstring: "Does something else"} ] Enum.map_join(docs, "\n", fn doc -> data = Jason.encode!(doc) # Use Jason for JSON encoding "event: doc\ndata: #{data}\n\n" end) end end ``` * **MCP Layer (Conceptual):** * **Message IDs:** Assign a unique ID to each documentation chunk sent. * **Acknowledgments:** The LLM client should send an acknowledgment message back to the server after receiving each chunk. * **Error Handling:** Implement error handling to deal with network issues or data corruption. * **Framing:** MCP often involves framing messages to ensure proper delivery. This might involve adding a header with the message length. **6. LLM Client** * **SSE Client:** The LLM client needs to connect to the SSE endpoint and receive the documentation data. Most programming languages have libraries for handling SSE connections. * **MCP Client:** Implement the MCP client logic to send acknowledgments and handle errors. * **Data Processing:** The LLM client needs to parse the JSON data and feed it to the LLM. **7. Considerations** * **Scalability:** For large projects, consider using a more robust streaming solution like Apache Kafka or RabbitMQ. SSE is suitable for smaller datasets. * **Error Handling:** Implement comprehensive error handling throughout the system. * **Security:** If the documentation contains sensitive information, secure the SSE endpoint with authentication and authorization. * **Rate Limiting:** Implement rate limiting to prevent the LLM client from overwhelming the server. * **Data Size:** Consider the size of the documentation data. If it's very large, you might need to chunk it into smaller pieces. * **LLM Context Window:** Be mindful of the LLM's context window. You might need to break the documentation into smaller chunks that fit within the context window. * **Tokenization:** Consider the tokenization used by the LLM. You might need to pre-tokenize the documentation data to optimize performance. * **Incremental Updates:** Implement a mechanism for sending incremental updates to the LLM when the documentation changes. **Example Client (Conceptual - Python):** ```python import sseclient import requests import json url = "http://localhost:4000" # Replace with your server URL def main(): response = requests.get(url, stream=True) client = sseclient.SSEClient(response) for event in client.events(): if event.event == 'doc': try: doc = json.loads(event.data) print(f"Received doc: {doc['module']}.{doc['function']}") # Process the documentation data and feed it to the LLM # ... # Send an acknowledgment (MCP) # ... except json.JSONDecodeError as e: print(f"Error decoding JSON: {e}") # Handle the error (MCP) # ... else: print(f"Received event: {event.event}, data: {event.data}") if __name__ == "__main__": main() ``` **Summary** This is a complex project that requires a good understanding of Elixir, `ExDoc`, web servers, SSE, MCP, and LLMs. Start with the basics (documentation generation and HTML parsing) and gradually add complexity as needed. Consider using a more robust streaming solution if you have a large project. Remember to prioritize error handling and security. Good luck!
mcp-server-bluesky
镜子 (jìng zi)
MCP System Monitor
一个系统监控工具,通过模型上下文协议(MCP)暴露系统指标。该工具允许大型语言模型(LLM)通过兼容 MCP 的接口检索实时系统信息。
mpc-csharp-semantickernel
Okay, here's an example demonstrating how to use Microsoft Semantic Kernel with OpenAI and a hypothetical "MCP Server" (assuming MCP stands for something like "My Custom Processing Server" or "Message Control Protocol Server"). Since "MCP Server" is vague, I'll make some assumptions about its functionality and how it might interact with Semantic Kernel. You'll need to adapt this to your specific MCP Server's capabilities. **Conceptual Overview** The core idea is to use Semantic Kernel to orchestrate interactions between OpenAI (for language understanding and generation) and your MCP Server (for specialized processing, data retrieval, or control actions). **Assumptions about the MCP Server** * **API Endpoint:** It exposes an API endpoint (e.g., REST API) for receiving requests and sending responses. * **Functionality:** Let's assume it can perform a specific task, like: * **Data Lookup:** Retrieve information from a database based on a query. * **System Control:** Execute a command on a system. * **Message Routing:** Route a message to a specific destination. * **Input/Output:** It expects structured input (e.g., JSON) and returns structured output (e.g., JSON). **Example Scenario: Smart Home Control** Let's imagine an MCP Server that controls smart home devices. We want to use Semantic Kernel and OpenAI to allow users to control their home with natural language. **Code Example (C#)** ```csharp using Microsoft.SemanticKernel; using Microsoft.SemanticKernel.Connectors.OpenAI; using System.Net.Http; using System.Text; using System.Text.Json; using System.Threading.Tasks; public class SmartHomePlugin { private readonly HttpClient _httpClient; private readonly string _mcpServerEndpoint; public SmartHomePlugin(string mcpServerEndpoint) { _httpClient = new HttpClient(); _mcpServerEndpoint = mcpServerEndpoint; } [KernelFunction, Description("Controls a smart home device.")] public async Task<string> ControlDevice( [Description("The device to control (e.g., lights, thermostat).")] string device, [Description("The action to perform (e.g., turn on, turn off, set temperature).")] string action, [Description("The value to set (e.g., 22 for temperature).")] string value = "" ) { // 1. Prepare the request to the MCP Server var requestData = new { device = device, action = action, value = value }; string jsonRequest = JsonSerializer.Serialize(requestData); var content = new StringContent(jsonRequest, Encoding.UTF8, "application/json"); // 2. Send the request to the MCP Server HttpResponseMessage response = await _httpClient.PostAsync(_mcpServerEndpoint, content); // 3. Handle the response from the MCP Server if (response.IsSuccessStatusCode) { string jsonResponse = await response.Content.ReadAsStringAsync(); // Deserialize the JSON response (assuming MCP Server returns JSON) try { var responseObject = JsonSerializer.Deserialize<Dictionary<string, string>>(jsonResponse); return responseObject?["status"] ?? "Unknown status"; // Assuming MCP returns a "status" field } catch (JsonException ex) { Console.WriteLine($"Error deserializing MCP Server response: {ex.Message}"); return "Error processing MCP Server response."; } } else { Console.WriteLine($"MCP Server request failed: {response.StatusCode}"); return $"MCP Server request failed with status code: {response.StatusCode}"; } } } public class Example { public static async Task Main() { // 1. Configure Semantic Kernel string apiKey = "YOUR_OPENAI_API_KEY"; string orgId = "YOUR_OPENAI_ORG_ID"; // Optional Kernel kernel = Kernel.CreateBuilder() .AddOpenAIChatCompletion("gpt-3.5-turbo", apiKey, orgId) // Or "gpt-4" .Build(); // 2. Define the MCP Server endpoint string mcpServerEndpoint = "http://your-mcp-server.com/api/control"; // Replace with your actual endpoint // 3. Import the SmartHomePlugin var smartHomePlugin = new SmartHomePlugin(mcpServerEndpoint); kernel.ImportPluginFromObject(smartHomePlugin, "SmartHome"); // 4. Create a Semantic Function (Prompt) string prompt = @" Control the smart home device. Device: {{$device}} Action: {{$action}} Value: {{$value}} {{SmartHome.ControlDevice $device $action $value}} "; var smartHomeFunction = kernel.CreateFunction(prompt); // 5. Run the Semantic Function with user input var arguments = new KernelArguments { ["device"] = "lights", ["action"] = "turn on", ["value"] = "" }; var result = await smartHomeFunction.InvokeAsync(kernel, arguments); Console.WriteLine($"Result: {result.GetValue<string>()}"); // Example 2: More natural language input using OpenAI to extract parameters string naturalLanguagePrompt = "Turn on the living room lights."; // Define a prompt to extract device, action, and value from the natural language input string extractionPrompt = @" Extract the device, action, and value from the following text: Text: {{$text}} Device: Action: Value: "; var extractionFunction = kernel.CreateFunction(extractionPrompt); var extractionResult = await extractionFunction.InvokeAsync(kernel, new KernelArguments { ["text"] = naturalLanguagePrompt }); string extractedText = extractionResult.GetValue<string>()!; // Parse the extracted text (this is a simplified example; you might need more robust parsing) string extractedDevice = extractedText.Split("Device:")[1].Split("Action:")[0].Trim(); string extractedAction = extractedText.Split("Action:")[1].Split("Value:")[0].Trim(); string extractedValue = extractedText.Split("Value:")[1].Trim(); Console.WriteLine($"Extracted Device: {extractedDevice}"); Console.WriteLine($"Extracted Action: {extractedAction}"); Console.WriteLine($"Extracted Value: {extractedValue}"); // Now use the extracted parameters with the SmartHome.ControlDevice function var controlArguments = new KernelArguments { ["device"] = extractedDevice, ["action"] = extractedAction, ["value"] = extractedValue }; var controlResult = await smartHomeFunction.InvokeAsync(kernel, controlArguments); Console.WriteLine($"Control Result: {controlResult.GetValue<string>()}"); } } ``` **Explanation:** 1. **`SmartHomePlugin`:** * This class represents a Semantic Kernel plugin that interacts with the MCP Server. * It takes the MCP Server endpoint as a constructor parameter. * The `ControlDevice` function is decorated with `[KernelFunction]` to make it available to Semantic Kernel. * It constructs a JSON request based on the input parameters (`device`, `action`, `value`). * It sends a POST request to the MCP Server. * It handles the response from the MCP Server, deserializing the JSON and returning a status message. Error handling is included. 2. **`Example.Main`:** * **Configure Semantic Kernel:** Sets up the Semantic Kernel with your OpenAI API key and organization ID. * **Define MCP Server Endpoint:** Replace `"http://your-mcp-server.com/api/control"` with the actual URL of your MCP Server's API endpoint. * **Import Plugin:** Creates an instance of the `SmartHomePlugin` and imports it into the Semantic Kernel. This makes the `ControlDevice` function available for use in prompts. * **Create Semantic Function (Prompt):** Defines a prompt that uses the `SmartHome.ControlDevice` function. The prompt takes `device`, `action`, and `value` as input parameters. * **Run Semantic Function:** Creates a `KernelArguments` object with the desired device, action, and value, and then invokes the semantic function. The result from the MCP Server is printed to the console. * **Natural Language Example:** Demonstrates how to use OpenAI to extract the device, action, and value from a natural language prompt. This allows users to control their smart home with more natural commands. A separate prompt is used for extraction. The extracted parameters are then used to call the `SmartHome.ControlDevice` function. **Key Points and Considerations:** * **MCP Server API:** The most important part is understanding the API of your MCP Server. You need to know the endpoint, the expected request format (JSON schema), and the format of the response. * **Error Handling:** The example includes basic error handling for network requests and JSON deserialization. You should add more robust error handling for production code. * **Security:** If your MCP Server requires authentication, you'll need to add authentication headers to the `HttpClient` requests. Never hardcode sensitive information like API keys directly in your code. Use environment variables or a secure configuration mechanism. * **Prompt Engineering:** The prompts are crucial for getting the desired behavior. Experiment with different prompts to improve the accuracy and reliability of the system. Consider using techniques like few-shot learning to provide examples to the language model. * **JSON Serialization/Deserialization:** The example uses `System.Text.Json`. You can use other JSON libraries like Newtonsoft.Json if you prefer. * **Dependency Injection:** For larger applications, consider using dependency injection to manage the `HttpClient` and other dependencies. * **Asynchronous Operations:** The example uses `async` and `await` for asynchronous operations. This is important for avoiding blocking the main thread and improving performance. * **Parameter Extraction:** The natural language example uses a simple string splitting approach to extract parameters. For more complex scenarios, you might need to use more sophisticated techniques like regular expressions or a dedicated natural language processing library. Semantic Kernel also offers more advanced techniques for parameter extraction. * **Semantic Kernel Plugins:** Consider breaking down your MCP Server functionality into multiple Semantic Kernel plugins for better organization and reusability. * **Testing:** Write unit tests to verify the functionality of your Semantic Kernel plugins and the interactions with the MCP Server. **How to Adapt This Example:** 1. **Replace Placeholders:** Replace `"YOUR_OPENAI_API_KEY"`, `"YOUR_OPENAI_ORG_ID"`, and `"http://your-mcp-server.com/api/control"` with your actual values. 2. **Implement MCP Server Interaction:** Modify the `SmartHomePlugin` to match the API of your MCP Server. Adjust the request format, response handling, and error handling accordingly. 3. **Customize Prompts:** Adjust the prompts to match the specific tasks you want to perform. 4. **Add Error Handling:** Implement more robust error handling to handle potential issues with the MCP Server or the OpenAI API. 5. **Add Security:** Implement appropriate security measures to protect your API keys and other sensitive information. **Chinese Translation of Key Concepts:** * **Microsoft Semantic Kernel:** 微软语义内核 (Wēiruǎn yǔyì kènèi) * **OpenAI:** 开放人工智能 (Kāifàng réngōng zhìnéng) * **MCP Server:** (You'll need to translate this based on what MCP stands for in your context. For example, if it's "My Custom Processing Server," you could translate it as: 我的自定义处理服务器 (Wǒ de zì dìngyì chǔlǐ fúwùqì)) * **Plugin:** 插件 (Chājiàn) * **Kernel Function:** 内核函数 (Nèihé hánshù) * **Prompt:** 提示 (Tíshì) * **Semantic Function:** 语义函数 (Yǔyì hánshù) * **API Endpoint:** 应用程序接口端点 (Yìngyòng chéngxù jiēkǒu duāndiǎn) * **Natural Language:** 自然语言 (Zìrán yǔyán) This comprehensive example should give you a solid foundation for using Microsoft Semantic Kernel with OpenAI and your MCP Server. Remember to adapt the code to your specific needs and to thoroughly test your implementation. Good luck!
Weather MCP Server
Flights Mcp Server
谷歌航班的 MCP 服务器! (Gǔgē hángbān de MCP fúwùqì!)
MCP Server Pool
MCP 服务合集 (MCP fúwù héjí)
gatherings MCP Server
一个模型上下文协议服务器,帮助追踪社交活动的开销并计算报销金额,从而方便朋友之间结算账单。
google-workspace-mcp
create-mcp-server
构建具有集成 Web 功能的强大模型上下文协议 (MCP) 服务器的综合架构
Knowledge Graph Memory Server
镜子 (jìng zi)
MCP Server Docker
Docker 的 MCP 服务器
🐋 Docker MCP server
镜子 (jìng zi)

mcp-server-testWhat is MCP Server Test?How to use MCP Server Test?Key features of MCP Server Test?Use cases of MCP Server Test?FAQ from MCP Server Test?
测试 MCP 服务器 (Cèshì MCP fúwùqì)
Server
Okay, here's a basic outline and code snippets for a simple "Weather MCP" (presumably meaning "Minecraft Protocol") server in Python. I'll break it down into sections and explain the key concepts. Keep in mind that this is a *very* simplified example and doesn't implement the full Minecraft protocol. It's designed to illustrate the core idea of a server that responds to Minecraft client requests with weather data. **Important Considerations:** * **Minecraft Protocol Complexity:** The actual Minecraft protocol is complex and constantly evolving. This example *does not* implement it fully. It's a simplified demonstration. For real Minecraft server development, you'd need a robust library like `mcstatus` or a more complete server implementation. * **MCP (Minecraft Coder Pack):** MCP is primarily for *modding* the Minecraft client and server, not for creating entirely new servers. I'm assuming you're using "MCP" loosely to mean "something that interacts with Minecraft." * **Weather Data Source:** This example uses a placeholder for weather data. You'll need to integrate a real weather API (e.g., OpenWeatherMap, AccuWeather) to get actual weather information. * **Security:** This is a basic example and doesn't include any security measures. Real Minecraft servers need proper security to prevent exploits. **Conceptual Outline:** 1. **Socket Setup:** Create a TCP socket to listen for incoming connections from Minecraft clients (or a proxy/mod that simulates a client). 2. **Client Connection Handling:** When a client connects, accept the connection and create a new thread or process to handle it. 3. **Simplified Protocol:** Define a very simple protocol for the client to request weather data. For example, the client might send a specific string like "WEATHER_REQUEST". 4. **Weather Data Retrieval:** When a request is received, fetch weather data from your chosen source (API or placeholder). 5. **Response Formatting:** Format the weather data into a string or a simple data structure that the client can understand. 6. **Sending the Response:** Send the formatted weather data back to the client through the socket. 7. **Closing the Connection:** Close the connection with the client. **Python Code (Illustrative Example):** ```python import socket import threading import time # For simulating weather updates import random # For simulating weather updates # Configuration HOST = '127.0.0.1' # Listen on localhost PORT = 25566 # Choose a port (not the default Minecraft port) WEATHER_REQUEST_COMMAND = "WEATHER_REQUEST" SIMULATE_WEATHER = True # Set to False if using a real API # Placeholder for weather data (replace with API integration) weather_data = { "temperature": 25, "condition": "Clear", "humidity": 60 } def update_weather(): """Simulates weather updates (replace with API calls).""" global weather_data while SIMULATE_WEATHER: weather_data["temperature"] = random.randint(15, 35) conditions = ["Clear", "Rain", "Cloudy", "Thunderstorm"] weather_data["condition"] = random.choice(conditions) weather_data["humidity"] = random.randint(40, 80) print(f"Weather updated: {weather_data}") time.sleep(60) # Update every 60 seconds def handle_client(conn, addr): """Handles a single client connection.""" print(f"Connected by {addr}") try: while True: data = conn.recv(1024) # Receive up to 1024 bytes if not data: break # Client disconnected message = data.decode('utf-8').strip() print(f"Received: {message}") if message == WEATHER_REQUEST_COMMAND: # Format the weather data into a string weather_string = f"Temperature: {weather_data['temperature']}°C, Condition: {weather_data['condition']}, Humidity: {weather_data['humidity']}%" conn.sendall(weather_string.encode('utf-8')) else: conn.sendall("Unknown command".encode('utf-8')) except Exception as e: print(f"Error handling client: {e}") finally: conn.close() print(f"Connection closed with {addr}") def start_server(): """Starts the weather server.""" server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) # Avoid address already in use error try: server_socket.bind((HOST, PORT)) server_socket.listen() print(f"Weather server listening on {HOST}:{PORT}") # Start the weather update thread if SIMULATE_WEATHER: weather_thread = threading.Thread(target=update_weather) weather_thread.daemon = True # Exit when the main thread exits weather_thread.start() while True: conn, addr = server_socket.accept() client_thread = threading.Thread(target=handle_client, args=(conn, addr)) client_thread.start() except Exception as e: print(f"Server error: {e}") finally: server_socket.close() if __name__ == "__main__": start_server() ``` **Explanation:** 1. **Imports:** Imports necessary modules: `socket` for network communication, `threading` for handling multiple clients concurrently, `time` for simulating weather updates, and `random` for generating random weather data. 2. **Configuration:** * `HOST`: The IP address to listen on (localhost in this case). * `PORT`: The port number to listen on. Choose a port that's not already in use. *Do not use the default Minecraft port (25565) unless you know what you're doing.* * `WEATHER_REQUEST_COMMAND`: The string that the client sends to request weather data. * `SIMULATE_WEATHER`: A flag to control whether to simulate weather updates or use a real API. 3. **`weather_data`:** A dictionary to store the current weather information. This is a placeholder; you'll replace this with data from a real weather API. 4. **`update_weather()`:** This function simulates weather updates. It randomly changes the temperature, condition, and humidity every 60 seconds. *Replace this with code that calls a weather API.* The `time.sleep(60)` call pauses the thread for 60 seconds. The `weather_thread.daemon = True` line ensures that the thread exits when the main program exits. 5. **`handle_client(conn, addr)`:** This function handles the communication with a single client. * It receives data from the client using `conn.recv(1024)`. * It decodes the data from bytes to a string using `data.decode('utf-8')`. * It checks if the received message is the `WEATHER_REQUEST_COMMAND`. * If it is, it formats the weather data into a string and sends it back to the client using `conn.sendall(weather_string.encode('utf-8'))`. The `encode('utf-8')` converts the string to bytes before sending. * If the message is not recognized, it sends an "Unknown command" message. * It closes the connection using `conn.close()`. 6. **`start_server()`:** This function sets up the server socket and listens for incoming connections. * It creates a TCP socket using `socket.socket(socket.AF_INET, socket.SOCK_STREAM)`. * It binds the socket to the specified host and port using `server_socket.bind((HOST, PORT))`. * It starts listening for connections using `server_socket.listen()`. * It enters a loop that accepts incoming connections using `server_socket.accept()`. * For each connection, it creates a new thread to handle the client using `threading.Thread(target=handle_client, args=(conn, addr))`. * It starts the thread using `client_thread.start()`. * The `server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)` line is important. It allows you to quickly restart the server after it crashes or is stopped without getting an "Address already in use" error. 7. **`if __name__ == "__main__":`:** This ensures that the `start_server()` function is only called when the script is run directly (not when it's imported as a module). **How to Run:** 1. Save the code as a Python file (e.g., `weather_server.py`). 2. Run the script from your terminal: `python weather_server.py` **Client-Side (Simplified Example - Requires Modification for Minecraft):** This is a *very* basic Python client to test the server. **This will NOT work directly with Minecraft.** You'll need to adapt it to send the request from within a Minecraft mod or through a proxy. ```python import socket HOST = '127.0.0.1' PORT = 25566 WEATHER_REQUEST_COMMAND = "WEATHER_REQUEST" with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: s.connect((HOST, PORT)) s.sendall(WEATHER_REQUEST_COMMAND.encode('utf-8')) data = s.recv(1024) print('Received:', repr(data.decode('utf-8'))) ``` **Important Notes and Next Steps:** * **Minecraft Integration:** The biggest challenge is integrating this with Minecraft. You'll need to: * **Create a Minecraft Mod:** This is the most common approach. Your mod would need to: * Establish a connection to your Python server. * Send the `WEATHER_REQUEST_COMMAND`. * Receive the weather data. * Display the weather data in the game (e.g., in the chat, on a custom GUI). * **Use a Proxy:** You could create a proxy server that sits between the Minecraft client and the real Minecraft server. The proxy would intercept weather-related packets and replace them with data from your Python server. This is more complex. * **Weather API Integration:** Replace the placeholder weather data with calls to a real weather API. You'll need to: * Sign up for an API key from a weather service (e.g., OpenWeatherMap). * Install the `requests` library: `pip install requests` * Modify the `update_weather()` function to make API calls. * **Error Handling:** Add more robust error handling to the server and client code. * **Data Formatting:** Consider using a more structured data format like JSON for sending weather data between the server and client. This will make it easier to parse the data on the client side. * **Security:** Implement security measures to protect your server from unauthorized access. **Example of Weather API Integration (OpenWeatherMap):** ```python import requests # Replace with your OpenWeatherMap API key API_KEY = "YOUR_OPENWEATHERMAP_API_KEY" CITY = "London" # Or any city you want def get_weather_from_api(): """Gets weather data from OpenWeatherMap.""" url = f"http://api.openweathermap.org/data/2.5/weather?q={CITY}&appid={API_KEY}&units=metric" try: response = requests.get(url) response.raise_for_status() # Raise HTTPError for bad responses (4xx or 5xx) data = response.json() weather_data = { "temperature": data["main"]["temp"], "condition": data["weather"][0]["description"], "humidity": data["main"]["humidity"] } return weather_data except requests.exceptions.RequestException as e: print(f"Error fetching weather data: {e}") return None # In your update_weather() function: def update_weather(): global weather_data while SIMULATE_WEATHER: new_weather = get_weather_from_api() if new_weather: weather_data = new_weather print(f"Weather updated from API: {weather_data}") else: print("Failed to update weather from API.") time.sleep(60) ``` Remember to replace `"YOUR_OPENWEATHERMAP_API_KEY"` with your actual API key. You'll also need to adjust the `CITY` variable to the city you want weather data for. This comprehensive explanation and code should give you a solid starting point for building your weather MCP server in Python. Good luck!
SkySQL MCP Integration
mcp-cbs-cijfers-open-data
用于处理 CBS 统计开放数据的 MCP 服务器
testmcpgithubdemo1
从 MCP 服务器演示创建。
Linear MCP Server
镜子 (jìng zi)
MCP Server для Prom.ua
MCP 服务器,用于与 Prom.ua API 交互
MCP-DeanMachines
MCP Server Runner
一个用于运行模型上下文协议(MCP)服务器的 WebSocket 服务器实现。此应用程序允许通过 WebSocket 连接访问 MCP 服务器,从而方便与 Web 应用程序和其他支持网络的客户端集成。