Discover Awesome MCP Servers
Extend your agent with 16,230 capabilities via MCP servers.
- All (16,230)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)
Share MCP - Model Context Protocol (MCP) Server Navigation Site
Share MCP is a navigation site focused on the Model Context Protocol (MCP). It offers a wide range of MCP-related resources, tools, and services, organized into categories so developers can quickly find the MCP solutions they need.
paloalto-mcp-servers
A collection of Model Context Protocol (MCP) servers for managing Palo Alto Networks firewalls and services
s-GitHubTestRepo-Henry
Created from an MCP server demo.
MCP Servers Hub
Discover interesting MCP servers and clients.
Bilibili MCP Server
A learning project for MCP servers (the original description, "mcp-server学习", translates to "MCP server study").
MCP Workers AI
MCP server SDK for Cloudflare Workers.
🎯 Kubernetes MCP Server
AI-powered MCP server that understands natural-language queries about your Kubernetes cluster.
Exa MCP Server 🔍
Claude can perform web searches | Exa with MCP (Model Context Protocol).
@modelcontextprotocol/server-terminal
Terminal server implementation for Model Context Protocol
GooseTeam
Look, a flock of geese! An MCP server and protocol for Goose agent collaboration.
Remote MCP Server on Cloudflare
Dockerized Salesforce MCP Server
Dockerized Salesforce MCP Server for REST API integration
Basilisp nREPL MCP Bridge
A simple MCP server for nREPL.
Zoom MCP Server
MCP server for Zoom
Model Context Protocol (MCP) Implementation
Learn MCP by building it from scratch.
MCPClient Python Application
Okay, I can help you outline the implementation for interacting between an MCP (presumably Minecraft Protocol) server and an Ollama model. This is a complex task, and the specific implementation will depend heavily on what you want to *do* with the Ollama model in relation to the Minecraft server. Here's a breakdown of the key components, considerations, and potential approaches:

**I. Understanding the Goal**

Before diving into code, clarify the purpose of this interaction. What do you want the Ollama model to *do*? Here are some examples:

* **Dynamic NPC Dialogue:** The Ollama model generates dialogue for non-player characters (NPCs) based on player interactions, game events, or world state.
* **World Generation Assistance:** The model provides suggestions or inspiration for world generation, perhaps influencing terrain features, structures, or biome placement.
* **Game Event Triggering:** The model analyzes player actions or in-game events and triggers specific game events or challenges.
* **Chat Enhancement:** The model enhances the in-game chat by providing more intelligent responses, translations, or filtering.
* **Automated Game Mastering:** The model acts as a game master, dynamically adjusting the game's difficulty, introducing new quests, or creating unexpected scenarios.
* **Code Generation:** The model generates code for Minecraft functions or datapacks based on natural language prompts.

The specific goal will dictate the data you need to send to Ollama and the actions you take based on Ollama's responses.

**II. Core Components**

1. **Minecraft Server (MCP):** This is your standard Minecraft server, likely running with a modding framework like Forge or Fabric. You'll need to write a mod to handle the interaction with the external Ollama model.
2. **Ollama Server:** This is where your Ollama model is running. It needs to be accessible over a network (e.g., `localhost`, a local network IP, or a public IP if you're careful about security).
3. **Mod (Forge/Fabric):** This is the crucial bridge. The mod will:
   * Listen for relevant Minecraft events (player chat, NPC interactions, world changes, etc.).
   * Format the relevant data into a prompt for the Ollama model.
   * Send the prompt to the Ollama server via an HTTP request.
   * Receive the response from the Ollama server.
   * Parse the response.
   * Take actions in the Minecraft world based on the response (e.g., change NPC dialogue, spawn entities, modify world data, send chat messages).
4. **Communication Protocol (HTTP/JSON):** The mod and the Ollama server will communicate using HTTP requests and responses, with data formatted as JSON. This is a standard and relatively easy-to-implement approach.

**III. Implementation Steps (General Outline)**

1. **Set up Ollama:**
   * Install Ollama: Follow the instructions on the Ollama website ([https://ollama.com/](https://ollama.com/)).
   * Pull a model: `ollama pull <model_name>` (e.g., `ollama pull llama2`). Choose a model appropriate for your task. Smaller models are faster but may be less capable.
   * Run the model: Ollama typically runs in the background.
2. **Create a Minecraft Mod (Forge/Fabric):**
   * Set up a mod development environment (using Forge MDK or Fabric Loom).
   * Add necessary dependencies (e.g., libraries for HTTP requests, JSON parsing). Popular choices include:
     * **Forge:** Use the built-in `net.minecraft.util.GsonHelper` for JSON. For HTTP, you might use Java's `java.net.http` (available in Java 11+) or a library like Apache HttpClient.
     * **Fabric:** Use Gson for JSON. For HTTP, use Java's `java.net.http` or a library like Apache HttpClient.
3. **Implement Event Listeners:**
   * Register event listeners for the Minecraft events you care about. Examples:
     * `ServerChatEvent` (Forge) / `ServerChatEvent` (Fabric): For listening to player chat messages.
     * `EntityInteractEvent` (Forge) / `UseEntityCallback` (Fabric): For detecting interactions with NPCs.
     * `WorldEvent.Load` (Forge) / `ServerWorldEvents.LOAD` (Fabric): For detecting when a world is loaded.
4. **Create the Prompt:**
   * Inside your event listener, construct a prompt for the Ollama model. This is *critical*. The quality of the prompt directly affects the quality of the response.
   * Include relevant information in the prompt:
     * Player name
     * NPC name (if applicable)
     * Chat message (if applicable)
     * Game state (e.g., time of day, weather, player location)
     * Any other relevant context.
   * Format the prompt clearly and concisely. Use natural language that the model can understand. Consider using a template or a prompt engineering technique.
   * Example (for NPC dialogue):

```java
String playerName = event.getPlayer().getName().getString();
String npcName = "Villager"; // Example
String chatMessage = event.getMessage();
String prompt = "The player " + playerName + " said: '" + chatMessage + "'. You are a villager named " + npcName + ". Respond to the player in a friendly and helpful way.";
```

5. **Send the HTTP Request to Ollama:**
   * Create an HTTP client.
   * Construct a JSON payload containing the prompt. The exact format depends on the Ollama API. Typically, it looks something like this:

```json
{
  "prompt": "The player said...",
  "model": "llama2",   // Or your chosen model
  "stream": false      // Set to true for streaming responses
}
```

   * Send a POST request to the Ollama API endpoint (usually `/api/generate`). You'll need the correct URL (e.g., `http://localhost:11434/api/generate`).
   * Set the `Content-Type` header to `application/json`.

```java
// Example using Java's java.net.http
HttpClient client = HttpClient.newHttpClient();
String jsonPayload = "{\"prompt\": \"" + prompt + "\", \"model\": \"llama2\", \"stream\": false}";
HttpRequest request = HttpRequest.newBuilder()
    .uri(URI.create("http://localhost:11434/api/generate"))
    .header("Content-Type", "application/json")
    .POST(HttpRequest.BodyPublishers.ofString(jsonPayload))
    .build();
HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
String responseBody = response.body();
```

6. **Parse the Response:**
   * Parse the JSON response from the Ollama server. The response will typically contain the generated text.
   * Extract the relevant text from the JSON.

```java
// Example using Gson
Gson gson = new Gson();
JsonObject jsonObject = gson.fromJson(responseBody, JsonObject.class);
String generatedText = jsonObject.get("response").getAsString();
```

7. **Take Action in Minecraft:**
   * Based on the generated text, take appropriate actions in the Minecraft world.
   * Examples:
     * **NPC Dialogue:** Set the NPC's dialogue to the generated text. You'll need to find the appropriate method for setting NPC dialogue in your modding framework.
     * **Chat Message:** Send the generated text as a chat message to the player. Use `ServerPlayer.sendSystemMessage(Component.literal(generatedText))` (Forge/Fabric).
     * **World Modification:** Modify the world based on the generated text (e.g., spawn entities, change blocks). Be very careful when modifying the world, as it can cause lag or crashes.
8. **Error Handling:**
   * Implement robust error handling. Catch exceptions that may occur during HTTP requests, JSON parsing, or world modification.
   * Log errors to the Minecraft server log.
   * Provide informative error messages to the player (if appropriate).

**IV. Code Example (Illustrative - Forge)**

```java
import com.google.gson.Gson;
import com.google.gson.JsonObject;
import net.minecraft.network.chat.Component;
import net.minecraft.server.level.ServerPlayer;
import net.minecraftforge.event.ServerChatEvent;
import net.minecraftforge.eventbus.api.SubscribeEvent;
import net.minecraftforge.fml.common.Mod;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

@Mod.EventBusSubscriber(bus = Mod.EventBusSubscriber.Bus.FORGE)
public class ChatEventHandler {

    @SubscribeEvent
    public static void onServerChat(ServerChatEvent event) {
        ServerPlayer player = event.getPlayer();
        String playerName = player.getName().getString();
        String chatMessage = event.getMessage();
        String prompt = "The player " + playerName + " said: '" + chatMessage + "'. Respond in a funny way.";

        try {
            HttpClient client = HttpClient.newHttpClient();
            String jsonPayload = "{\"prompt\": \"" + prompt + "\", \"model\": \"llama2\", \"stream\": false}";
            HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:11434/api/generate"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(jsonPayload))
                .build();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            String responseBody = response.body();

            Gson gson = new Gson();
            JsonObject jsonObject = gson.fromJson(responseBody, JsonObject.class);
            String generatedText = jsonObject.get("response").getAsString();

            player.sendSystemMessage(Component.literal(generatedText));
        } catch (Exception e) {
            System.err.println("Error communicating with Ollama: " + e.getMessage());
            player.sendSystemMessage(Component.literal("Error: Could not generate response."));
        }
    }
}
```

**Important Considerations:**

* **Performance:** Calling an external API for every chat message or NPC interaction can be slow. Consider using caching, rate limiting, or asynchronous processing to improve performance. Streaming responses from Ollama can also help.
* **Security:** Be very careful about exposing your Ollama server to the internet. Use authentication and authorization to prevent unauthorized access. Sanitize the prompts to prevent prompt injection attacks.
* **Prompt Engineering:** Experiment with different prompts to get the best results from the Ollama model. Use clear, concise language and provide as much context as possible.
* **Rate Limiting:** Ollama might have rate limits. Implement retry logic with exponential backoff to handle rate limiting errors.
* **Model Choice:** The choice of Ollama model is crucial. Smaller models are faster but may be less capable. Larger models are more capable but may be slower and require more resources.
* **Asynchronous Processing:** Use asynchronous tasks (e.g., `CompletableFuture` in Java) to avoid blocking the main Minecraft server thread. This will prevent the server from freezing while waiting for responses from Ollama. (A brief sketch follows at the end of this entry.)
* **Configuration:** Make the Ollama server URL, model name, and other parameters configurable via a configuration file. This will make it easier to deploy and manage the mod.
* **Data Storage:** If you need to remember previous interactions, consider using a database or file to store the conversation history.

**V. Next Steps**

1. **Choose a Modding Framework:** Decide whether to use Forge or Fabric.
2. **Set up your Development Environment:** Follow the instructions for your chosen framework.
3. **Implement the Basic Communication:** Get the mod to send a simple HTTP request to the Ollama server and print the response to the console.
4. **Implement Event Listeners:** Start with a simple event listener (e.g., `ServerChatEvent`).
5. **Create Prompts:** Experiment with different prompts to see what works best.
6. **Implement Error Handling:** Add error handling to your code.
7. **Test and Refine:** Test your mod thoroughly and refine the prompts and code as needed.

This is a complex project, but by breaking it down into smaller steps, you can make it more manageable. Good luck!
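The Asynchronous Processing consideration above is described but not shown in code. Here is a minimal sketch, assuming the same local Ollama endpoint (`http://localhost:11434/api/generate`) and model used in the tutorial, of how the blocking HTTP call could be moved off the main server thread with `CompletableFuture`; the class and helper names (`AsyncOllamaClient`, `queryOllama`) are hypothetical, not part of any framework API.

```java
// Hedged sketch: offload the Ollama call so the main server thread is not blocked.
// Assumes the same endpoint and JSON shape as the tutorial above; queryOllama is
// a hypothetical helper, not an existing API.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

public final class AsyncOllamaClient {
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    // Sends the prompt asynchronously and returns a future holding the raw JSON body.
    public static CompletableFuture<String> queryOllama(String prompt) {
        String payload = "{\"prompt\": \"" + prompt.replace("\"", "\\\"")
                + "\", \"model\": \"llama2\", \"stream\": false}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:11434/api/generate"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();
        // sendAsync returns immediately; the response body arrives on a worker thread.
        return CLIENT.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                .thenApply(HttpResponse::body);
    }
}
```

In an event handler you would chain `.thenAccept(...)` on the returned future and hand the result back to the server thread before touching game state, since world objects are not thread-safe.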
MCP2HTTP
MCP2HTTP is a minimal transport adapter that bridges MCP clients using stdio with stateless HTTP servers.
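The one-line description is all the listing gives. To make the stdio-to-HTTP idea concrete, here is a minimal sketch (in Java, for consistency with the other code on this page) of what such a transport adapter could look like. The endpoint URL and the assumption that each stdin line is a self-contained JSON message forwarded as one HTTP POST are illustrative assumptions, not MCP2HTTP's actual behavior.

```java
// Hedged sketch of a stdio <-> HTTP bridge in the spirit of MCP2HTTP, not its real code.
// Assumption: each line on stdin is one JSON message, forwarded as an HTTP POST;
// the response body is written back to stdout as one line. The URL is a placeholder.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public final class StdioHttpBridge {
    public static void main(String[] args) throws Exception {
        String endpoint = args.length > 0 ? args[0] : "http://localhost:8080/mcp"; // placeholder
        HttpClient client = HttpClient.newHttpClient();
        try (BufferedReader in = new BufferedReader(new InputStreamReader(System.in))) {
            String line;
            while ((line = in.readLine()) != null) {            // one message per stdin line
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create(endpoint))
                        .header("Content-Type", "application/json")
                        .POST(HttpRequest.BodyPublishers.ofString(line))
                        .build();
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                System.out.println(response.body());            // reply goes back over stdout
            }
        }
    }
}
```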
Rails MCP Server
A Ruby gem implementation of a Model Context Protocol (MCP) server for Rails projects. This server allows LLMs (Large Language Models) to interact with Rails projects through the Model Context Protocol.
mcp_server_local_files
Local File System MCP Server
MCP Expert Server
Mirror of
iOS Simulator MCP Server
Mirror of
Modular Outlook MCP Server
MCP server for Claude to access Outlook data via Microsoft Graph API
mcp-server-datahub
The official MCP server for DataHub (
Browser JavaScript Evaluator
A reference design for an MCP server that hosts a web page which connects back to the server via SSE, allowing Claude to execute JavaScript on the page.
Gmail MCP Server
Mirror of
Supergateway
Run MCP stdio servers over SSE and SSE over stdio. AI gateway.
generator-mcp
Yeoman Generator to quickly create a new MCP Server
Fiberflow MCP Gateway
Run Fiberflow MCP SSE Server over stdio.
Spring AI MCP Server Sample Project
Initial thoughts
Okay, here's a breakdown of how to convert OpenAPI specifications to MCP (presumably, you mean Media Control Protocol) server-ready tools, along with considerations and potential approaches. Since "MCP server-ready tools" is a bit vague, I'll cover a few interpretations and provide general guidance.

**Understanding the Goal**

First, let's clarify what you mean by "MCP server-ready tools." Here are a few possibilities:

* **Code Generation (Server Stubs):** You want to automatically generate server-side code (stubs, skeletons) from your OpenAPI specification that can handle MCP requests. This is the most common interpretation.
* **API Gateway Configuration:** You want to configure an API gateway (like Kong, Tyk, or AWS API Gateway) to handle MCP requests based on your OpenAPI definition.
* **Testing Tools:** You want to generate test cases or testing frameworks that can validate your MCP server implementation against the OpenAPI specification.
* **Documentation Generation:** You want to generate documentation for your MCP server based on the OpenAPI specification.
* **Data Transformation/Mapping:** You need to transform data from the format defined in your OpenAPI spec to the format expected by your MCP server, or vice versa.

**General Workflow**

The general process involves these steps:

1. **Define your API with OpenAPI:** Create a well-defined OpenAPI specification (YAML or JSON) that accurately describes your MCP API. This is the *most important* step. The quality of your OpenAPI definition directly impacts the quality of the generated tools.
2. **Choose a Tool/Approach:** Select a tool or method that suits your specific needs (code generation, API gateway configuration, testing, etc.).
3. **Configure the Tool:** Configure the chosen tool with your OpenAPI specification and any necessary options (e.g., target language for code generation, API gateway settings).
4. **Generate/Deploy:** Run the tool to generate the desired output (code, configuration files, test scripts, documentation).
5. **Customize (if needed):** The generated output might require some customization to fully integrate with your MCP server environment.

**Specific Approaches and Tools**

Here are some tools and approaches, categorized by common use cases:

**1. Code Generation (Server Stubs)**

* **OpenAPI Generator:** This is a very popular and versatile tool. It supports a wide range of server-side languages and frameworks (e.g., Java Spring, Python Flask, Node.js Express, Go).
  * **How to use:**

    ```bash
    # Example: Generate a Python Flask server stub
    openapi-generator generate -i your_openapi.yaml -g python-flask -o output_directory
    ```

    Replace `your_openapi.yaml` with the path to your OpenAPI file, `python-flask` with the desired generator, and `output_directory` with the output directory.
  * **Considerations:** You'll need to install OpenAPI Generator (usually via `brew install openapi-generator` or downloading the JAR file). You'll also need to choose the appropriate generator for your target language and framework. The generated code will typically provide the basic structure for your API endpoints; you'll need to implement the actual business logic.
* **Swagger Codegen (Older, but still used):** A predecessor to OpenAPI Generator. It's still functional, but OpenAPI Generator is generally preferred due to its wider support and active development.
* **Commercial Tools:** Some commercial API management platforms offer code generation capabilities as part of their suite.

**2. API Gateway Configuration**

* **Kong:** A popular open-source API gateway. You can import your OpenAPI specification into Kong to automatically configure routes, request validation, authentication, and other features.
  * **How to use:** Kong provides a plugin called `kong-plugin-openapi` that allows you to import your OpenAPI specification. You'll need to install and configure Kong first. Then, you can use the Kong Admin API or Kong Manager UI to import the OpenAPI file.
  * **Considerations:** You'll need to understand Kong's configuration model and how it maps to OpenAPI concepts. You might need to customize the Kong configuration to handle specific MCP requirements.
* **Tyk:** Another open-source API gateway with OpenAPI support. Similar to Kong, you can import your OpenAPI definition to configure routes, security policies, and other features.
* **AWS API Gateway:** Amazon's API Gateway allows you to import OpenAPI definitions to create and manage APIs.
* **Azure API Management:** Microsoft's API Management service also supports importing OpenAPI specifications.

**3. Testing Tools**

* **Dredd:** A command-line tool that validates API implementations against OpenAPI specifications. It reads your OpenAPI definition and sends HTTP requests to your API endpoints, verifying that the responses match the specification.
  * **How to use:**

    ```bash
    dredd your_openapi.yaml http://your-mcp-server
    ```

    Replace `your_openapi.yaml` with the path to your OpenAPI file and `http://your-mcp-server` with the base URL of your MCP server.
  * **Considerations:** Dredd requires a running MCP server to test against. You might need to write custom hooks to handle authentication or other specific requirements.
* **Postman:** While not strictly a code generator, Postman can import OpenAPI specifications and generate collections of API requests. You can then use these collections to manually test your API or run automated tests.
* **Swagger Inspector:** A tool that allows you to send requests to your API and automatically generate an OpenAPI definition based on the responses. This can be useful for creating an OpenAPI specification from an existing API.

**4. Documentation Generation**

* **Swagger UI:** A popular tool for rendering OpenAPI specifications as interactive documentation. You can host Swagger UI and point it to your OpenAPI file to create a user-friendly interface for exploring your API.
* **Redoc:** Another documentation generator that focuses on a clean and modern design. It's an alternative to Swagger UI.

**5. Data Transformation/Mapping**

* **Custom Code:** In many cases, you'll need to write custom code to transform data between the format defined in your OpenAPI specification and the format expected by your MCP server. This might involve mapping fields, converting data types, or handling different encoding schemes. (A small sketch appears at the end of this entry.)
* **Data Mapping Tools:** Some specialized data mapping tools can help automate the process of transforming data between different formats. However, these tools might require some configuration and customization to work with your specific OpenAPI definition and MCP server.

**Example: Using OpenAPI Generator for Python Flask**

Let's say you have an OpenAPI specification named `mcp_api.yaml` that describes your MCP API. You want to generate a Python Flask server stub.

1. **Install OpenAPI Generator:**

   ```bash
   brew install openapi-generator  # (if you're on macOS with Homebrew)
   # Or download the JAR file from https://openapi-generator.tech/docs/installation/
   ```

2. **Generate the code:**

   ```bash
   openapi-generator generate -i mcp_api.yaml -g python-flask -o mcp_server
   ```

3. **Customize the generated code:** The `mcp_server` directory will contain the generated Flask application. You'll need to:
   * Implement the business logic for each API endpoint in the generated `controllers` directory.
   * Configure the Flask application (e.g., database connections, logging).
   * Handle any MCP-specific protocols or data formats.

**Important Considerations**

* **Accuracy of OpenAPI Specification:** The quality of your generated tools depends entirely on the accuracy and completeness of your OpenAPI specification. Make sure your OpenAPI definition is up-to-date and accurately reflects your API.
* **MCP-Specific Requirements:** Consider any MCP-specific protocols, data formats, or security requirements that are not covered by the standard OpenAPI specification. You might need to add custom extensions to your OpenAPI definition or write custom code to handle these requirements.
* **Iteration:** Code generation is often an iterative process. You might need to regenerate the code multiple times as you refine your OpenAPI specification and implement your API.
* **Security:** Pay close attention to security considerations when generating code or configuring API gateways. Make sure to implement appropriate authentication, authorization, and input validation mechanisms.
* **Error Handling:** Implement robust error handling in your MCP server to gracefully handle invalid requests or unexpected errors.

**In Summary**

Converting OpenAPI specifications to MCP server-ready tools involves choosing the right tool for the job (code generation, API gateway configuration, testing, etc.), configuring the tool with your OpenAPI definition, and customizing the generated output to meet your specific requirements. The key is to start with a well-defined OpenAPI specification and carefully consider any MCP-specific requirements. Good luck!

I hope this comprehensive explanation helps! If you can provide more details about your specific MCP server environment and what you want to achieve, I can give you more tailored advice.
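The "Custom Code" bullet under Data Transformation/Mapping above describes field mapping and type conversion without showing it. Below is a minimal sketch (in Java with Gson, matching the other code on this page) of what such a mapping layer could look like; the field names (`request_id`, `requestId`, `volume_percent`) are hypothetical examples, not taken from any real OpenAPI spec or MCP server.

```java
// Hedged sketch of the "custom code" data-mapping approach described above.
// All field names are illustrative assumptions.
import com.google.gson.JsonObject;
import com.google.gson.JsonParser;

public final class McpPayloadMapper {
    // Maps a payload shaped like the OpenAPI schema into the shape the server expects.
    public static JsonObject toServerFormat(String openApiJson) {
        JsonObject src = JsonParser.parseString(openApiJson).getAsJsonObject();
        JsonObject dst = new JsonObject();
        dst.addProperty("requestId", src.get("request_id").getAsString());      // rename field
        dst.addProperty("volume", src.get("volume_percent").getAsInt() / 100.0); // 0-100 int -> 0.0-1.0 float
        return dst;
    }

    public static void main(String[] args) {
        String sample = "{\"request_id\": \"abc-123\", \"volume_percent\": 75}";
        System.out.println(toServerFormat(sample)); // {"requestId":"abc-123","volume":0.75}
    }
}
```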