Discover Awesome MCP Servers
Extend your agent with 12,711 capabilities via MCP servers.
- All (12,711)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)
Model Context Protocol (MCP) Implementation
Learn MCP by building from scratch
MCP2HTTP
MCP2HTTP is a minimal transport adapter that bridges MCP clients using stdio with stateless HTTP servers.
Remote MCP Server on Cloudflare
Dockerized Salesforce MCP Server
Dockerized Salesforce MCP Server for REST API integration
Basilisp nREPL MCP Bridge
simple MCP server for nREPL
Exa MCP Server 🔍
Claude can perform web searches | Exa with MCP (Model Context Protocol).
@modelcontextprotocol/server-terminal
Terminal server implementation for Model Context Protocol
Gmail MCP Server
Mirror of
Zoom MCP Server
MCP server for Zoom
GooseTeam
Look, a flock of geese! An MCP server and protocol for Goose agent collaboration.
mcp_server_local_files
Local File System MCP Server
MCP Expert Server
Mirror of
NSAF MCP Server
Mirror of
MCP Server Playwright
MCP Server Playwright - A browser automation service for Claude Desktop
iOS Simulator MCP Server
Mirror of
Supergateway
Run MCP stdio servers over SSE and SSE over stdio. AI gateway.
generator-mcp
Yeoman Generator to quickly create a new MCP Server
Initial thoughts
Converting OpenAPI specifications into MCP (Model Context Protocol) server-ready tools can mean several different things, so it helps to cover the common interpretations and give general guidance for each.

**Understanding the Goal**

"MCP server-ready tools" could mean any of the following:

* **Code generation (server stubs):** Automatically generate server-side code (stubs, skeletons) from your OpenAPI specification. This is the most common interpretation.
* **API gateway configuration:** Configure an API gateway (such as Kong, Tyk, or AWS API Gateway) to handle requests based on your OpenAPI definition.
* **Testing tools:** Generate test cases or testing frameworks that validate your server implementation against the OpenAPI specification.
* **Documentation generation:** Generate documentation for your server based on the OpenAPI specification.
* **Data transformation/mapping:** Transform data between the format defined in your OpenAPI spec and the format expected by your MCP server, or vice versa.

**General Workflow**

1. **Define your API with OpenAPI:** Create a well-defined OpenAPI specification (YAML or JSON) that accurately describes your API. This is the most important step; the quality of your OpenAPI definition directly determines the quality of the generated tools.
2. **Choose a tool/approach:** Select a tool or method that suits your needs (code generation, gateway configuration, testing, etc.).
3. **Configure the tool:** Point the chosen tool at your OpenAPI specification and set any necessary options (e.g., target language for code generation, gateway settings).
4. **Generate/deploy:** Run the tool to produce the desired output (code, configuration files, test scripts, documentation).
5. **Customize (if needed):** The generated output usually requires some customization to integrate fully with your MCP server environment.

**Specific Approaches and Tools**

**1. Code Generation (Server Stubs)**

* **OpenAPI Generator:** A popular and versatile tool that supports a wide range of server-side languages and frameworks (e.g., Java Spring, Python Flask, Node.js Express, Go).

  ```bash
  # Example: generate a Python Flask server stub
  openapi-generator generate -i your_openapi.yaml -g python-flask -o output_directory
  ```

  Replace `your_openapi.yaml` with the path to your OpenAPI file, `python-flask` with the desired generator, and `output_directory` with the output directory. You need to install OpenAPI Generator first (usually via `brew install openapi-generator` or by downloading the JAR file) and choose the generator that matches your target language and framework. The generated code provides the basic structure for your API endpoints; you still implement the actual business logic.
* **Swagger Codegen:** The predecessor of OpenAPI Generator. It still works, but OpenAPI Generator is generally preferred for its wider support and more active development.
* **Commercial tools:** Some commercial API management platforms offer code generation as part of their suite.

**2. API Gateway Configuration**

* **Kong:** A popular open-source API gateway. You can import your OpenAPI specification into Kong to automatically configure routes, request validation, authentication, and other features. Kong provides the `kong-plugin-openapi` plugin for importing OpenAPI specifications; install and configure Kong first, then import the file via the Kong Admin API or Kong Manager UI. You will need to understand Kong's configuration model and how it maps to OpenAPI concepts, and you may have to customize the configuration for MCP-specific requirements.
* **Tyk:** Another open-source gateway with OpenAPI support. Like Kong, importing your definition configures routes, security policies, and other features.
* **AWS API Gateway:** Amazon's API Gateway lets you import OpenAPI definitions to create and manage APIs.
* **Azure API Management:** Microsoft's API Management service also supports importing OpenAPI specifications.

**3. Testing Tools**

* **Dredd:** A command-line tool that validates API implementations against OpenAPI specifications. It reads your definition, sends HTTP requests to your endpoints, and verifies that the responses match the specification.

  ```bash
  dredd your_openapi.yaml http://your-mcp-server
  ```

  Replace `your_openapi.yaml` with the path to your OpenAPI file and `http://your-mcp-server` with the base URL of your server. Dredd requires a running server to test against, and you may need custom hooks for authentication or other specific requirements.
* **Postman:** Not strictly a code generator, but it can import OpenAPI specifications and generate collections of API requests for manual or automated testing.
* **Swagger Inspector:** Sends requests to an existing API and generates an OpenAPI definition from the responses, which is useful when you need a specification for an API that already exists.

**4. Documentation Generation**

* **Swagger UI:** Renders OpenAPI specifications as interactive documentation; host it and point it at your OpenAPI file for a user-friendly way to explore your API.
* **Redoc:** An alternative documentation generator with a clean, modern design.

**5. Data Transformation/Mapping**

* **Custom code:** In many cases you will write code to map fields, convert data types, or handle different encodings between the format in your OpenAPI specification and what your MCP server expects.
* **Data mapping tools:** Specialized tools can automate some of this transformation, but they usually require configuration and customization to work with your specific definition and server.

**Example: Using OpenAPI Generator for Python Flask**

Suppose you have an OpenAPI specification named `mcp_api.yaml` and want a Python Flask server stub:

1. **Install OpenAPI Generator:**

   ```bash
   brew install openapi-generator   # macOS with Homebrew
   # Or download the JAR from https://openapi-generator.tech/docs/installation/
   ```

2. **Generate the code:**

   ```bash
   openapi-generator generate -i mcp_api.yaml -g python-flask -o mcp_server
   ```

3. **Customize the generated code:** The `mcp_server` directory contains the generated Flask application. You still need to implement the business logic for each endpoint in the generated `controllers` directory, configure the application (e.g., database connections, logging), and handle any MCP-specific protocols or data formats.

**Important Considerations**

* **Accuracy of the OpenAPI specification:** The generated tools are only as good as the specification. Keep it up to date and make sure it accurately reflects your API.
* **MCP-specific requirements:** Protocols, data formats, or security requirements not covered by standard OpenAPI may need custom extensions in the definition or custom code.
* **Iteration:** Code generation is usually iterative; expect to regenerate as the specification and implementation evolve.
* **Security:** Implement appropriate authentication, authorization, and input validation when generating code or configuring gateways.
* **Error handling:** Implement robust error handling so the server degrades gracefully on invalid requests or unexpected errors.

**In Summary**

Choose the right tool for the job (code generation, gateway configuration, testing, documentation, or data mapping), configure it with your OpenAPI definition, and customize the output for your environment. The key is to start with a well-defined OpenAPI specification and carefully account for any MCP-specific requirements.
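Beyond generating REST stubs, the pieces of an OpenAPI document map fairly naturally onto tool definitions (operation becomes tool name, summary becomes description, parameters become an input schema). As a rough illustration only, assuming the hypothetical `mcp_api.yaml` from the example above and using PyYAML, the sketch below walks an OpenAPI file and prints candidate tool definitions; it is not the output of any of the tools listed here.

```python
# Sketch: enumerate OpenAPI operations as candidate MCP-style tool definitions.
# Assumes a hypothetical mcp_api.yaml; requires PyYAML (pip install pyyaml).
import json
import yaml

HTTP_METHODS = {"get", "post", "put", "patch", "delete"}

def openapi_to_tool_stubs(spec_path: str) -> list[dict]:
    with open(spec_path, "r", encoding="utf-8") as f:
        spec = yaml.safe_load(f)

    tools = []
    for path, item in spec.get("paths", {}).items():
        for method, op in item.items():
            if method.lower() not in HTTP_METHODS:
                continue  # skip path-level keys such as "parameters"
            tools.append({
                # operationId (or method + path) becomes the tool name
                "name": op.get("operationId")
                        or f"{method}_{path.strip('/').replace('/', '_')}",
                "description": op.get("summary") or op.get("description", ""),
                # Parameters and requestBody would be folded into a JSON Schema here
                "inputSchema": {"type": "object", "properties": {}},
            })
    return tools

if __name__ == "__main__":
    print(json.dumps(openapi_to_tool_stubs("mcp_api.yaml"), indent=2))
```

The mapping of parameters to a full JSON Schema is left as a stub; in practice that is where most of the conversion work lives.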
Financial Analysis MCP Server
Mirror of
Ghost MCP Server
Mirror of
Weather MCP Server
Here is a sample MCP (Model Context Protocol) server implementation concept for fetching weather forecasts, with explanations of the key parts. The example is in Python, which is commonly used for this kind of task thanks to its readability and available libraries.

**Conceptual Overview**

1. **MCP server:** The core component. It listens for incoming requests, parses them, and responds with the requested data.
2. **Weather data source:** Where the actual weather information comes from. This could be:
   * A public weather API (e.g., OpenWeatherMap, AccuWeather, WeatherAPI.com). This is the most common and practical approach.
   * A local weather station, if you have one.
   * A database of weather data.
3. **Request handling:** The server needs to understand the format of incoming requests. Typically this involves:
   * **Parsing:** Extracting the relevant parameters from the request (e.g., location, units).
   * **Validation:** Ensuring the parameters are valid (e.g., the location is a real city, the units are supported).
   * **Data fetching:** Querying the weather data source with those parameters.
   * **Response formatting:** Structuring the weather data into a valid response.

**Python Example (Flask for the HTTP server, `requests` for the API call)**

```python
from flask import Flask, request, jsonify
import requests
import os  # For accessing environment variables

app = Flask(__name__)

# Replace with your actual API key from OpenWeatherMap or another provider
WEATHER_API_KEY = os.environ.get("WEATHER_API_KEY")  # Get API key from environment variable
if not WEATHER_API_KEY:
    print("Error: WEATHER_API_KEY environment variable not set.")
    exit()

WEATHER_API_URL = "https://api.openweathermap.org/data/2.5/weather"  # Example: OpenWeatherMap


def get_weather_data(city, units="metric"):
    """Fetches weather data from the OpenWeatherMap API."""
    params = {
        "q": city,
        "appid": WEATHER_API_KEY,
        "units": units,
    }
    try:
        response = requests.get(WEATHER_API_URL, params=params)
        response.raise_for_status()  # Raise HTTPError for bad responses (4xx or 5xx)
        data = response.json()
        return data
    except requests.exceptions.RequestException as e:
        print(f"Error fetching weather data: {e}")
        return None


def format_weather_response(data):
    """Formats the weather data into a simplified MCP-like response."""
    if not data:
        return None
    try:
        temperature = data["main"]["temp"]
        description = data["weather"][0]["description"]
        humidity = data["main"]["humidity"]
        wind_speed = data["wind"]["speed"]
        response = {
            "status": "ok",
            "temperature": temperature,
            "description": description,
            "humidity": humidity,
            "wind_speed": wind_speed,
        }
        return response
    except KeyError as e:
        print(f"Error formatting weather data: Missing key {e}")
        return None


@app.route("/weather", methods=["GET"])
def weather_endpoint():
    """Handles the /weather endpoint for MCP requests."""
    city = request.args.get("city")
    units = request.args.get("units", "metric")  # Default to metric

    if not city:
        return jsonify({"status": "error", "message": "City parameter is required"}), 400

    weather_data = get_weather_data(city, units)
    if not weather_data:
        return jsonify({"status": "error", "message": "Failed to fetch weather data"}), 500

    formatted_response = format_weather_response(weather_data)
    if not formatted_response:
        return jsonify({"status": "error", "message": "Failed to format weather data"}), 500

    return jsonify(formatted_response)


if __name__ == "__main__":
    app.run(debug=True, host="0.0.0.0", port=5000)
```

**Key points in this example:**

* **Error handling:** `try...except` blocks handle failures during API requests and data formatting, which is crucial for a robust server. `requests.exceptions.RequestException` covers network errors, timeouts, and HTTP errors; `KeyError` in `format_weather_response` covers changes in the API response format.
* **API key from an environment variable:** `WEATHER_API_KEY` is read from the environment. Never hardcode API keys directly in your code; environment variables are a much safer way to manage sensitive information. The code checks that the variable is set and exits if it is not.
* **`response.raise_for_status()`:** Checks the HTTP status code of the API response and raises an `HTTPError` for error codes (e.g., 404 Not Found, 500 Internal Server Error), which the `except` block then handles gracefully.
* **Clear error messages:** The `jsonify` error responses are informative, which makes debugging easier.
* **`units` parameter:** The endpoint accepts an optional `units` parameter so the client can request Celsius, Fahrenheit, or other units supported by the API; it defaults to `metric` (Celsius).
* **`format_weather_response` function:** Encapsulates the logic for turning the raw API data into a simplified dictionary, keeping the code modular and maintainable.
* **`host="0.0.0.0"`:** Makes the server reachable from other machines on the network; use `host="127.0.0.1"` for local-only access.
* **JSON responses and status codes:** `jsonify` returns JSON, the standard for web APIs, and the endpoint returns appropriate HTTP status codes (400 for bad requests, 500 for internal errors).

**How to Run**

1. **Install dependencies:**

   ```bash
   pip install Flask requests
   ```

2. **Set the environment variable:**

   ```bash
   export WEATHER_API_KEY="YOUR_OPENWEATHERMAP_API_KEY"  # Replace with your actual key
   ```

   (Or set it in your system's environment variables.)

3. **Run the script:**

   ```bash
   python your_script_name.py
   ```

4. **Test the endpoint** in a browser or with `curl`:

   ```
   http://localhost:5000/weather?city=London
   http://localhost:5000/weather?city=New York&units=imperial
   ```

**Important Considerations for a Real MCP Implementation**

* **MCP specification:** This example is a simplified HTTP server, not a true MCP implementation. A real one must adhere to the MCP specification, including the required message formats, transports, and error handling; you would likely use a dedicated MCP library or implement the protocol parsing and serialization yourself (a sketch follows below).
* **Security:** Implement proper authentication and authorization to prevent unauthorized access, and consider HTTPS to encrypt communication.
* **Scalability:** For high-traffic use, run behind a more scalable server (e.g., Gunicorn, uWSGI) and a load balancer, and cache weather data to reduce load on the weather API.
* **Rate limiting:** Respect the rate limits of the weather API you use, and implement rate limiting in your own server to avoid being blocked.
* **Data validation:** Thoroughly validate all input parameters to prevent injection attacks and other security vulnerabilities.
* **Configuration:** Manage settings such as the API key, API URL, and port number via a configuration file or environment variables.
* **Logging:** Log requests, errors, and other important events to help with debugging and monitoring.
* **Testing:** Write unit tests and integration tests to confirm the server works correctly.
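For the dedicated-library route mentioned above, a minimal sketch might expose the same weather lookup as an MCP tool. This assumes the FastMCP helper from the official `mcp` Python SDK (`pip install "mcp[cli]"`); the exact API can differ between SDK versions, so treat the names below as an assumption and check the SDK documentation.

```python
# Sketch only: the weather lookup exposed as a Model Context Protocol tool,
# assuming the FastMCP helper from the official `mcp` Python SDK.
# API names may vary by SDK version -- verify against the SDK docs.
import os
import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather")

@mcp.tool()
def get_weather(city: str, units: str = "metric") -> dict:
    """Return current weather for a city via OpenWeatherMap."""
    response = requests.get(
        "https://api.openweathermap.org/data/2.5/weather",
        params={"q": city, "appid": os.environ["WEATHER_API_KEY"], "units": units},
        timeout=10,
    )
    response.raise_for_status()
    data = response.json()
    return {
        "temperature": data["main"]["temp"],
        "description": data["weather"][0]["description"],
        "humidity": data["main"]["humidity"],
        "wind_speed": data["wind"]["speed"],
    }

if __name__ == "__main__":
    mcp.run()  # serve the tool over stdio so an MCP client can call it
```

The difference from the Flask version is that the transport, message framing, and tool schema come from the SDK rather than from hand-rolled HTTP routes.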
glif-mcp
Mirror of
mcp-server-datahub
The official MCP server for DataHub (
Browser JavaScript Evaluator
This is a reference design for an MCP server that hosts a web page that connects back to the server via SSE and allows Claude to execute JavaScript on the page.
Spring AI MCP Server Example Project
Fiberflow MCP Gateway
Run Fiberflow MCP SSE Server over stdio.
MCPClient Python Application
Here is an outline for implementing interaction between an MCP server (interpreted here as a modded Minecraft server) and an Ollama model. This is a complex task, and the specific implementation depends heavily on what you want the Ollama model to *do* in relation to the Minecraft server.

**I. Understanding the Goal**

Before diving into code, clarify the purpose of the interaction. Some examples:

* **Dynamic NPC dialogue:** The model generates dialogue for non-player characters (NPCs) based on player interactions, game events, or world state.
* **World generation assistance:** The model provides suggestions or inspiration for world generation, perhaps influencing terrain features, structures, or biome placement.
* **Game event triggering:** The model analyzes player actions or in-game events and triggers specific game events or challenges.
* **Chat enhancement:** The model enhances in-game chat with more intelligent responses, translations, or filtering.
* **Automated game mastering:** The model acts as a game master, dynamically adjusting difficulty, introducing new quests, or creating unexpected scenarios.
* **Code generation:** The model generates code for Minecraft functions or datapacks from natural-language prompts.

The specific goal dictates what data you send to Ollama and what actions you take based on its responses.

**II. Core Components**

1. **Minecraft server:** A standard Minecraft server, likely running a modding framework such as Forge or Fabric. You write a mod to handle the interaction with the external Ollama model.
2. **Ollama server:** Where your Ollama model runs. It must be reachable over the network (e.g., `localhost`, a local network IP, or a public IP if you are careful about security).
3. **Mod (Forge/Fabric):** The crucial bridge. The mod will:
   * Listen for relevant Minecraft events (player chat, NPC interactions, world changes, etc.).
   * Format the relevant data into a prompt for the Ollama model.
   * Send the prompt to the Ollama server via an HTTP request.
   * Receive and parse the response.
   * Take actions in the Minecraft world based on the response (change NPC dialogue, spawn entities, modify world data, send chat messages).
4. **Communication protocol (HTTP/JSON):** The mod and the Ollama server communicate over HTTP with JSON payloads, a standard and relatively easy-to-implement approach.

**III. Implementation Steps**

1. **Set up Ollama:**
   * Install Ollama: follow the instructions at [https://ollama.com/](https://ollama.com/).
   * Pull a model: `ollama pull <model_name>` (e.g., `ollama pull llama2`). Choose a model appropriate for your task; smaller models are faster but may be less capable.
   * Run the model: Ollama typically runs in the background.
2. **Create a Minecraft mod (Forge/Fabric):**
   * Set up a mod development environment (Forge MDK or Fabric Loom).
   * Add dependencies for HTTP requests and JSON parsing. On Forge you can use the built-in `net.minecraft.util.GsonHelper` for JSON; on Fabric, use Gson. For HTTP, Java's `java.net.http` (Java 11+) or a library like Apache HttpClient works on both.
3. **Implement event listeners:** Register listeners for the Minecraft events you care about, for example:
   * `ServerChatEvent` (Forge and Fabric): player chat messages.
   * `EntityInteractEvent` (Forge) / `UseEntityCallback` (Fabric): interactions with NPCs.
   * `WorldEvent.Load` (Forge) / `ServerWorldEvents.LOAD` (Fabric): world load.
4. **Create the prompt:** Inside your event listener, construct a prompt for the Ollama model. This is critical: prompt quality directly affects response quality. Include relevant context (player name, NPC name if applicable, chat message, game state such as time of day, weather, or player location), and keep the prompt clear and concise; consider a template or prompt-engineering technique. Example (NPC dialogue):

   ```java
   String playerName = event.getPlayer().getName().getString();
   String npcName = "Villager"; // Example
   String chatMessage = event.getMessage();
   String prompt = "The player " + playerName + " said: '" + chatMessage + "'. You are a villager named "
           + npcName + ". Respond to the player in a friendly and helpful way.";
   ```

5. **Send the HTTP request to Ollama:** Create an HTTP client and build a JSON payload containing the prompt. The exact format depends on the Ollama API; typically it looks like this:

   ```json
   {
     "prompt": "The player said...",
     "model": "llama2",
     "stream": false
   }
   ```

   Send a POST request to the Ollama API endpoint (usually `http://localhost:11434/api/generate`) with the `Content-Type` header set to `application/json`:

   ```java
   // Example using Java's java.net.http
   HttpClient client = HttpClient.newHttpClient();
   String jsonPayload = "{\"prompt\": \"" + prompt + "\", \"model\": \"llama2\", \"stream\": false}";

   HttpRequest request = HttpRequest.newBuilder()
           .uri(URI.create("http://localhost:11434/api/generate"))
           .header("Content-Type", "application/json")
           .POST(HttpRequest.BodyPublishers.ofString(jsonPayload))
           .build();

   HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
   String responseBody = response.body();
   ```

6. **Parse the response:** Parse the JSON response from the Ollama server and extract the generated text:

   ```java
   // Example using Gson
   Gson gson = new Gson();
   JsonObject jsonObject = gson.fromJson(responseBody, JsonObject.class);
   String generatedText = jsonObject.get("response").getAsString();
   ```

7. **Take action in Minecraft:** Based on the generated text, act in the world:
   * **NPC dialogue:** Set the NPC's dialogue to the generated text, using the appropriate method in your modding framework.
   * **Chat message:** Send the generated text to the player with `ServerPlayer.sendSystemMessage(Component.literal(generatedText))` (Forge/Fabric).
   * **World modification:** Spawn entities or change blocks based on the text. Be very careful when modifying the world, as it can cause lag or crashes.
8. **Error handling:** Catch exceptions from HTTP requests, JSON parsing, and world modification; log errors to the Minecraft server log; and show informative error messages to the player where appropriate.

**IV. Code Example (Illustrative, Forge)**

```java
import com.google.gson.Gson;
import com.google.gson.JsonObject;
import net.minecraft.network.chat.Component;
import net.minecraft.server.level.ServerPlayer;
import net.minecraftforge.event.ServerChatEvent;
import net.minecraftforge.eventbus.api.SubscribeEvent;
import net.minecraftforge.fml.common.Mod;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

@Mod.EventBusSubscriber(bus = Mod.EventBusSubscriber.Bus.FORGE)
public class ChatEventHandler {

    @SubscribeEvent
    public static void onServerChat(ServerChatEvent event) {
        ServerPlayer player = event.getPlayer();
        String playerName = player.getName().getString();
        String chatMessage = event.getMessage();

        String prompt = "The player " + playerName + " said: '" + chatMessage + "'. Respond in a funny way.";

        try {
            HttpClient client = HttpClient.newHttpClient();
            String jsonPayload = "{\"prompt\": \"" + prompt + "\", \"model\": \"llama2\", \"stream\": false}";

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:11434/api/generate"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(jsonPayload))
                    .build();

            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            String responseBody = response.body();

            Gson gson = new Gson();
            JsonObject jsonObject = gson.fromJson(responseBody, JsonObject.class);
            String generatedText = jsonObject.get("response").getAsString();

            player.sendSystemMessage(Component.literal(generatedText));
        } catch (Exception e) {
            System.err.println("Error communicating with Ollama: " + e.getMessage());
            player.sendSystemMessage(Component.literal("Error: Could not generate response."));
        }
    }
}
```

**Important Considerations**

* **Performance:** Calling an external API for every chat message or NPC interaction can be slow. Consider caching, rate limiting, or asynchronous processing; streaming responses from Ollama can also help.
* **Security:** Be very careful about exposing your Ollama server to the internet. Use authentication and authorization, and sanitize prompts to prevent prompt injection attacks.
* **Prompt engineering:** Experiment with different prompts; use clear, concise language and provide as much context as possible.
* **Rate limiting:** Ollama might have rate limits; implement retry logic with exponential backoff.
* **Model choice:** Smaller models are faster but less capable; larger models are more capable but slower and require more resources.
* **Asynchronous processing:** Use asynchronous tasks (e.g., `CompletableFuture` in Java) so the main Minecraft server thread never blocks while waiting for Ollama responses.
* **Configuration:** Make the Ollama server URL, model name, and other parameters configurable via a configuration file for easier deployment and management.
* **Data storage:** If you need to remember previous interactions, store the conversation history in a database or file.

**V. Next Steps**

1. **Choose a modding framework:** Forge or Fabric.
2. **Set up your development environment** for the chosen framework.
3. **Implement the basic communication:** Get the mod to send a simple HTTP request to the Ollama server and print the response to the console (a standalone sketch for checking the endpoint follows below).
4. **Implement event listeners**, starting with something simple such as `ServerChatEvent`.
5. **Create prompts** and experiment with what works best.
6. **Implement error handling.**
7. **Test and refine** the prompts and code as needed.

Breaking the project into these smaller steps makes it much more manageable.
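Before wiring anything into a mod, it can help to verify the Ollama endpoint on its own. The following is a minimal standalone sketch in Python using `requests`, with the endpoint and payload fields exactly as described above; adjust the URL and model name for your setup.

```python
# Minimal standalone check of the Ollama /api/generate endpoint described above.
# Assumes Ollama is running locally with the llama2 model pulled.
import requests

def ask_ollama(prompt: str, model: str = "llama2") -> str:
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=60,
    )
    response.raise_for_status()
    # Non-streaming responses return the generated text in the "response" field
    return response.json()["response"]

if __name__ == "__main__":
    print(ask_ollama("The player Steve said: 'hello'. Respond in a funny way."))
```

Once this works, the Java code in the mod only has to reproduce the same request and parse the same `response` field.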
simple_mcp_server
Test Simple MCP server

ChatSum
Since I don't have access to your personal chat messages (that would be a privacy violation), I can't query and summarize *your* specific chats. I can, however, demonstrate how the task would be approached using a hypothetical scenario.

**Hypothetical Scenario**

Suppose I had access to a chat log between you and a friend planning a trip to Mexico, covering:

* Flights
* Accommodation (hotels/Airbnb)
* Activities (visiting ruins, beaches, restaurants)
* Budget

**Example chat log:**

* **You:** "Hey! Thinking about Mexico in November. What do you think?"
* **Friend:** "Sounds amazing! Where in Mexico?"
* **You:** "Maybe Cancun or Tulum? Flights are pretty cheap to Cancun."
* **Friend:** "Cancun is good. Lots of resorts. Let's check flights on Skyscanner."
* **You:** "Okay, I'm seeing roundtrip flights for around $400."
* **Friend:** "Not bad! Let's aim for something under $500. What about hotels?"
* **You:** "Hotels in Cancun are pricey. Maybe an Airbnb in Tulum would be cheaper?"
* **Friend:** "Good idea. I'll look at Airbnbs. What do you want to do besides relax on the beach?"
* **You:** "Definitely want to see some Mayan ruins. And eat lots of tacos!"
* **Friend:** "Agreed! Chichen Itza is a must-see. And we should try some authentic Mexican food."
* **You:** "Okay, so flights under $500, Airbnb in Tulum, Chichen Itza, and tacos. What's our budget?"
* **Friend:** "Let's say $1500 each for the whole trip, including flights."

**Example queries:**

* What is the proposed destination?
* What is the budget per person?
* What activities are planned?
* What is the plan for accommodation?
* What is the target price range for flights?

**Summary:**

"The chat is about planning a trip to Mexico in November, possibly to Cancun or Tulum. The budget is $1,500 per person, including flights. They are looking for roundtrip flights for under $500. They are considering staying in an Airbnb in Tulum to save money. Planned activities include visiting Mayan ruins, such as Chichen Itza, and eating tacos. In short, they are organizing an affordable trip to Mexico focused on culture and gastronomy."

**How the process works:**

* **Data access (hypothetical):** Access to the raw chat log data is required.
* **Natural language processing (NLP):**
  * **Identify key entities:** locations (Cancun, Tulum, Chichen Itza), amounts ($400, $500, $1,500), activities (visiting ruins, eating tacos), and so on.
  * **Understand the intent:** determine what the users are trying to achieve (e.g., find cheap flights, decide on accommodation).
  * **Extract relevant information:** pull out the specific details related to each query.
* **Summarization:** Condense the chat log into a concise, informative summary.
* **Translation (optional):** Queries and summaries can be translated into another language, such as Spanish, if needed.

**Important considerations:**

* **Privacy:** Accessing and processing personal chat data requires strict adherence to privacy regulations; personal information should never be accessed or shared without explicit consent and proper authorization.
* **Context:** Understanding the context of the chat is crucial for accurate querying and summarization.
* **Ambiguity:** Natural language is often ambiguous, so sophisticated NLP techniques are needed to resolve it and keep the queries and summary accurate.
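As a toy illustration of the "extract relevant information" step only (not how ChatSum itself works), a naive keyword scan over the hypothetical chat log above could pull out the lines relevant to each query before a model summarizes them. The log lines and keyword lists below are made up for the example.

```python
# Toy sketch of the extraction step: scan a chat log for lines relevant to a query.
# Purely illustrative -- real summarization would use an LLM or NLP pipeline, not keywords.
CHAT_LOG = [
    "You: Thinking about Mexico in November. What do you think?",
    "Friend: Cancun is good. Lots of resorts. Let's check flights on Skyscanner.",
    "You: Okay, I'm seeing roundtrip flights for around $400.",
    "Friend: Let's say $1500 each for the whole trip, including flights.",
]

QUERY_KEYWORDS = {
    "budget": ["$", "budget"],
    "destination": ["mexico", "cancun", "tulum"],
}

def relevant_lines(query: str) -> list[str]:
    keywords = QUERY_KEYWORDS.get(query, [])
    return [line for line in CHAT_LOG
            if any(kw.lower() in line.lower() for kw in keywords)]

if __name__ == "__main__":
    for q in QUERY_KEYWORDS:
        print(q, "->", relevant_lines(q))
```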
Google Drive & Sheets MCP Server
A Model Context Protocol (MCP) server built in Rust for interacting with Google Drive and Google Sheets.