Discover Awesome MCP Servers
Extend your agent with 14,392 capabilities via MCP servers.
- All (14,392)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)
@modelcontextprotocol/server-terminal
A terminal server implementation for the Model Context Protocol.
🎯 Kubernetes MCP Server
An AI-powered MCP server that understands natural-language queries about your Kubernetes cluster.
Bilibili MCP Server
Learning MCP servers.
MCP Workers AI
An MCP server SDK for Cloudflare Workers.
s-GitHubTestRepo-Henry
Created from the MCP server demo.
iOS Simulator MCP Server
Mirror.
Fiberflow MCP Gateway
Runs the Fiberflow MCP SSE server over stdio.
MCPClient Python Application
An implementation that lets an MCP server (here taken to mean "Minecraft Protocol" server) interact with an Ollama model. This is a complex task involving several pieces; below is a breakdown of the concepts, the possible interaction models, and a *conceptual* implementation outline. It is a high-level overview; a complete, working solution would require significant coding effort.

**Understanding the Components**

* **MCP Server (Minecraft Protocol Server):** The server that handles Minecraft client connections, game logic, and world management. We need to be able to intercept or inject messages into this server, which likely requires a Minecraft server mod (e.g., using Fabric, Forge, or a custom server implementation).
* **Ollama Model:** A large language model (LLM) served by Ollama. We need to be able to send text prompts to the Ollama API and receive text responses.
* **Interaction:** The core of the problem is *how* the MCP server and the Ollama model interact. Some possibilities:
  * **Chatbot:** Players type commands or messages in the Minecraft chat, which are sent to the Ollama model; the model's response is displayed back in the chat.
  * **NPC Dialogue:** Non-player characters (NPCs) have dialogue powered by the Ollama model, which generates responses based on player interactions or game events.
  * **World Generation/Modification:** The model generates descriptions of terrain, structures, or quests, which are then used to modify the Minecraft world.
  * **Game Logic:** The model makes decisions for AI entities or influences game events based on player actions.

**Conceptual Implementation Outline**

This outline focuses on the "Chatbot" interaction, as it is the most straightforward to explain.

1. **Minecraft Server Mod (e.g., Fabric/Forge):**
   * **Dependency:** Add the necessary dependencies for your chosen mod loader (Fabric or Forge).
   * **Event Listener:** Create an event listener that intercepts chat messages sent by players. This is the crucial part where you "hook" into the Minecraft server.
   * **Command Handling (Optional):** Register a custom command (e.g., `/ask <prompt>`) that players can use to trigger the Ollama model explicitly. This is cleaner than intercepting *all* chat messages.
   * **Configuration:** Allow configuration of the Ollama API endpoint (e.g., `http://localhost:11434/api/generate`).
   * **Asynchronous Task:** When a chat message (or command) is received, create an asynchronous task to send the prompt to the Ollama API, so the Minecraft server does not block while waiting for the model's response.
2. **Ollama API Interaction (Java/Kotlin Code within the Mod):**
   * **HTTP Client:** Use a Java HTTP client library (e.g., `java.net.http.HttpClient`, OkHttp, or Apache HttpClient) to make POST requests to the Ollama API.
   * **JSON Payload:** Construct a JSON payload for the `/api/generate` endpoint. It should include:
     * `model`: the name of the Ollama model to use (e.g., "llama2");
     * `prompt`: the player's chat message (or the command argument);
     * optionally `stream`: `false` for a single response, `true` for streaming responses.
   * **Error Handling:** Implement robust error handling to catch network errors, API errors, and JSON parsing errors.
   * **Rate Limiting (Important):** Implement rate limiting to avoid overwhelming the Ollama server with requests; this is crucial for performance and stability (a minimal sketch follows this entry).
3. **Response Handling:**
   * **Parse JSON Response:** Parse the JSON response from the Ollama API; it contains the generated text.
   * **Send Message to Minecraft Chat:** Send the generated text back to the Minecraft chat, either to the player who sent the original message or to all players, using the Minecraft server's API.
   * **Formatting:** Format the response appropriately for the chat (e.g., add a prefix indicating the message comes from the Ollama model).

**Example (Conceptual Java Code Snippet - Fabric Mod)**

```java
import net.fabricmc.api.ModInitializer;
import net.fabricmc.fabric.api.event.lifecycle.v1.ServerLifecycleEvents;
import net.fabricmc.fabric.api.command.v2.CommandRegistrationCallback;
import net.minecraft.server.MinecraftServer;
import net.minecraft.server.network.ServerPlayerEntity;
import net.minecraft.text.Text;
import com.mojang.brigadier.CommandDispatcher;
import static net.minecraft.server.command.CommandManager.*;
import static com.mojang.brigadier.arguments.StringArgumentType.*;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;
import com.google.gson.Gson;
import com.google.gson.JsonObject;

public class OllamaMod implements ModInitializer {
    private static final String OLLAMA_API_URL = "http://localhost:11434/api/generate";
    private static final String OLLAMA_MODEL = "llama2"; // Or your chosen model
    private static final HttpClient httpClient = HttpClient.newHttpClient();
    private static final Gson gson = new Gson();

    @Override
    public void onInitialize() {
        ServerLifecycleEvents.SERVER_STARTED.register(this::onServerStarted);
        CommandRegistrationCallback.EVENT.register(this::registerCommands);
    }

    private void onServerStarted(MinecraftServer server) {
        System.out.println("Ollama Mod Initialized!");
    }

    private void registerCommands(CommandDispatcher<net.minecraft.server.command.ServerCommandSource> dispatcher,
                                  net.minecraft.server.command.CommandRegistryAccess registryAccess,
                                  net.minecraft.server.command.CommandManager.RegistrationEnvironment environment) {
        // Register /ask <prompt> so players can explicitly query the model.
        dispatcher.register(literal("ask")
            .then(argument("prompt", string())
                .executes(context -> {
                    String prompt = getString(context, "prompt");
                    ServerPlayerEntity player = context.getSource().getPlayer();
                    askOllama(prompt, player);
                    return 1;
                })));
    }

    private void askOllama(String prompt, ServerPlayerEntity player) {
        // Run off the server thread so the game loop never blocks on the HTTP call.
        CompletableFuture.runAsync(() -> {
            try {
                JsonObject requestBody = new JsonObject();
                requestBody.addProperty("model", OLLAMA_MODEL);
                requestBody.addProperty("prompt", prompt);
                requestBody.addProperty("stream", false); // Get a single response

                HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(OLLAMA_API_URL))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(gson.toJson(requestBody)))
                    .build();

                HttpResponse<String> response = httpClient.send(request, HttpResponse.BodyHandlers.ofString());

                if (response.statusCode() == 200) {
                    JsonObject jsonResponse = gson.fromJson(response.body(), JsonObject.class);
                    String ollamaResponse = jsonResponse.get("response").getAsString(); // Adjust based on Ollama's actual response format
                    player.sendMessage(Text.literal("Ollama: " + ollamaResponse));
                } else {
                    player.sendMessage(Text.literal("Error communicating with Ollama: " + response.statusCode()));
                }
            } catch (Exception e) {
                player.sendMessage(Text.literal("An error occurred: " + e.getMessage()));
                e.printStackTrace();
            }
        });
    }
}
```

**Key Considerations and Challenges**

* **Asynchronous Operations:** Crucial to avoid blocking the Minecraft server thread; use `CompletableFuture` or a similar mechanism.
* **Error Handling:** Network errors, API errors, JSON parsing errors: handle them all gracefully.
* **Rate Limiting:** Protect the Ollama server from being overwhelmed.
* **Security:** If you expose this to the internet, be very careful; sanitize inputs to prevent prompt-injection attacks.
* **Ollama API Changes:** The Ollama API may change in the future, so keep your code up to date.
* **Minecraft Server Version:** Ensure your mod is compatible with the specific Minecraft version you target.
* **Mod Loader (Fabric/Forge):** Choose the mod loader that best suits your needs and experience.
* **Context:** The model performs better when given context about the game world, the player's inventory, and recent events; gathering that data from the Minecraft server is more complex.
* **Streaming Responses:** Streaming responses from the Ollama API give a more interactive experience but require more complex handling of the response data (see the streaming sketch after this entry).
* **Resource Management:** Be mindful of memory usage, especially with large models.

**Next Steps**

1. **Choose a Mod Loader:** Fabric is generally considered more lightweight and modern, while Forge has a larger ecosystem of mods.
2. **Set up a Development Environment:** Follow the setup instructions for your chosen mod loader.
3. **Implement the Basic Chatbot Functionality:** Start with the code snippet above and get the basic chatbot working.
4. **Add Error Handling and Rate Limiting:** Make the code more robust.
5. **Experiment with Different Interaction Models:** Explore other ways to integrate the Ollama model into the game.
6. **Consider Context:** Add context to the prompts sent to the Ollama model to improve its responses.

A challenging but rewarding project: break it down into small, manageable steps.
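The rate-limiting point above is flagged as crucial but left unimplemented. Below is a minimal sketch of a per-player cooldown guard; the class name `OllamaRateLimiter`, the five-second window, and keying on the player name are all illustrative assumptions, not part of any existing mod.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical helper: allows one Ollama request per player per cooldown window.
public final class OllamaRateLimiter {
    private static final long COOLDOWN_MILLIS = 5_000; // assumed window; tune as needed
    private final Map<String, Long> lastRequest = new ConcurrentHashMap<>();

    /** Returns true if the player may send a prompt now, recording the attempt if so. */
    public boolean tryAcquire(String playerName) {
        long now = System.currentTimeMillis();
        // merge() atomically keeps the previous timestamp while the cooldown is still
        // active, and replaces it with `now` once the window has elapsed (or on first use).
        long updated = lastRequest.merge(playerName, now,
                (prev, candidate) -> candidate - prev >= COOLDOWN_MILLIS ? candidate : prev);
        return updated == now;
    }
}
```

In `askOllama`, one would call `tryAcquire` before building the HTTP request and reply with a "please wait" chat message when it returns false; keying on the player's UUID rather than the display name would be more robust.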
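For the streaming-responses consideration: with `"stream": true` in the payload, Ollama's `/api/generate` endpoint returns newline-delimited JSON, where each line carries a partial `response` field and the final line has `"done": true`. Here is a sketch of a streaming variant, meant to live inside the `OllamaMod` class from the snippet above and reuse its `httpClient` and `gson` fields (the method name is an assumption):

```java
// Streaming variant of the request/response section of askOllama; assumes the
// request body was built with requestBody.addProperty("stream", true).
private void readStreamingResponse(HttpRequest request, ServerPlayerEntity player)
        throws java.io.IOException, InterruptedException {
    // ofLines() hands the NDJSON body back one line at a time.
    HttpResponse<java.util.stream.Stream<String>> response =
            httpClient.send(request, HttpResponse.BodyHandlers.ofLines());

    StringBuilder accumulated = new StringBuilder();
    response.body().forEach(line -> {
        JsonObject chunk = gson.fromJson(line, JsonObject.class);
        if (chunk.has("response")) {
            accumulated.append(chunk.get("response").getAsString());
            // For a live "typing" effect, partial text could be sent to the
            // player here instead of waiting for the complete reply.
        }
    });
    player.sendMessage(Text.literal("Ollama: " + accumulated));
}
```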
Dockerized Salesforce MCP Server
A Dockerized Salesforce MCP server for REST API integration.
mcp-server-datahub
The official MCP server for DataHub.
Browser JavaScript Evaluator
A reference design for an MCP server that hosts a web page which connects back to the server over SSE and lets Claude execute JavaScript on that page.
mcp_server_local_files
A local filesystem MCP server.
MCP Expert Server
Mirror.
Basilisp nREPL MCP Bridge
A simple MCP server for nREPL.
Zoom MCP Server
An MCP server for Zoom.
Exa MCP Server 🔍
Claude can perform web searches via MCP (Model Context Protocol) | Exa.
GooseTeam
Look, a gaggle of geese! An MCP server and protocol for goose agent collaboration.
generator-mcp
A Yeoman generator for quickly scaffolding new MCP servers.
Initial thoughts
Convert OpenAPI specifications to MCP-server-ready tools.
Financial Analysis MCP Server
Mirror.
Ghost MCP Server
Mirror.
Weather MCP Server
A sample MCP server implementation for fetching weather forecasts.
glif-mcp
Mirror.
ConnectWise Manage MCP Server
A Model Context Protocol (MCP) server for ConnectWise Manage API integration.
NSAF MCP Server
Mirror.
MCP Server Playwright
MCP Server Playwright - a browser automation service for Claude Desktop.
🏆 LinkedIn DI MCP Server
The Audiense Digital Intelligence LinkedIn MCP Server is a Model Context Protocol (MCP) based server that allows Claude and other MCP-compatible clients to interact with your Audiense DI LinkedIn account.
Supergateway
Runs MCP stdio servers over SSE, and SSE servers over stdio. An AI gateway.
Spring AI MCP Server Sample Project
Gmail MCP Server
Mirror.