Discover Awesome MCP Servers

Extend your agent with 19,294 capabilities via MCP servers.

Redmine MCP Server

Mirror of

Cohere MCP Server

Cohere MCP Server

mcp-server-email

An mcp-server demo for sending email, written in Go

Sefaria Jewish Library MCP Server

Mirror of

s-GitHubTestRepo-Henry

created from MCP server demo

MCP Servers Hub

Discover interesting MCP Servers and Clients.

Bilibili MCP Server

mcp-server learning

MCP Workers AI

MCP servers sdk for Cloudflare Workers

paloalto-mcp-servers

A collection of Model Context Protocol (MCP) servers for managing Palo Alto Networks firewalls and services

🎯 Kubernetes MCP Server

An AI-powered MCP server that understands natural-language queries about your Kubernetes cluster

Code Reviewer Fixer Agent

This AI agent analyzes code repositories, detects potential security vulnerabilities, reviews code quality, and suggests fixes based on Sentry error logs using Sentry and GitHub MCP servers!

Modular Outlook MCP Server

MCP server for Claude to access Outlook data via Microsoft Graph API

mcp-server-datahub

The official MCP server for DataHub (

Browser JavaScript Evaluator

A reference design for an MCP server that hosts a web page which connects back to the server via SSE, allowing Claude to execute JavaScript on the page.

Rails MCP Server

A Ruby gem implementation of a Model Context Protocol (MCP) server for Rails projects. This server allows LLMs (Large Language Models) to interact with Rails projects through the Model Context Protocol.

MCPClient Python Application

A description of how to implement interaction between an MCP server (here, a Minecraft Protocol server) and an Ollama model. This is a complex task, as it involves bridging two very different systems. Below is a breakdown of the implementation considerations, along with code snippets (in Python and Java) to illustrate key parts.

**1. Understanding the Goal**

First, clarify *what* kind of interaction you want. Some possibilities:

* **In-Game Chatbot:** The Ollama model responds to player chat messages.
* **World Generation/Modification:** The Ollama model influences the creation or alteration of the Minecraft world (e.g., suggesting structures, biomes, or quests).
* **NPC Dialogue:** Ollama powers the dialogue of Non-Player Characters (NPCs).
* **Game Event Triggering:** Ollama analyzes game events and triggers actions (e.g., spawning mobs, changing weather).

The specific goal will heavily influence the implementation. Let's assume we're aiming for a **simple in-game chatbot** for this example.

**2. Core Components**

* **Minecraft Server:** The actual Minecraft server (e.g., Vanilla, Spigot, Paper).
* **Minecraft Server Plugin/Mod:** Code that runs *within* the Minecraft server to intercept chat messages and send commands. This is usually written in Java (for Spigot/Paper) or uses a scripting engine like Python (if the server supports it).
* **Bridge/Middleware:** A separate program (likely in Python) that acts as the intermediary. It receives chat messages from the Minecraft plugin, sends them to the Ollama model, receives the response, and sends the response back to the Minecraft plugin to display in-game.
* **Ollama Server:** The Ollama server running with your chosen model.

**3. Implementation Steps**

**A. Minecraft Server Plugin/Mod (Java/Python)**

* **Intercept Chat Messages:** Use the server's API to listen for player chat messages.
For example, in Spigot/Paper (Java):

```java
import org.bukkit.event.EventHandler;
import org.bukkit.event.Listener;
import org.bukkit.event.player.AsyncPlayerChatEvent;
import org.bukkit.plugin.java.JavaPlugin;

public class ChatPlugin extends JavaPlugin implements Listener {

    @Override
    public void onEnable() {
        getServer().getPluginManager().registerEvents(this, this);
    }

    @EventHandler
    public void onPlayerChat(AsyncPlayerChatEvent event) {
        String message = event.getMessage();
        String playerName = event.getPlayer().getName();
        // Send the message to the bridge (e.g., using a socket)
        sendToBridge(playerName + ": " + message);
        // Optionally, cancel the original message if you only want the bot's response
        // event.setCancelled(true);
    }

    private void sendToBridge(String message) {
        // Implement socket communication to send the message to the Python bridge
        // (See example below)
    }
}
```

If using a Python scripting engine (like IronPython or Jython within the Minecraft server), the code would be similar but in Python.

* **Send Messages to the Bridge:** Establish a communication channel (e.g., TCP socket, HTTP request) to send the chat message to the bridge. A simple TCP socket is often a good choice. Here's a basic example of the `sendToBridge` function using sockets (in Java):

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.net.Socket;

private void sendToBridge(String message) {
    try (Socket socket = new Socket("localhost", 12345); // Replace with bridge's address/port
         PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
        out.println(message);
    } catch (IOException e) {
        getLogger().severe("Error sending message to bridge: " + e.getMessage());
    }
}
```

* **Receive Responses from the Bridge:** Listen for responses from the bridge. Again, use the same communication channel (e.g., the same TCP socket).
```java
// In the ChatPlugin class (Java)
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

private void receiveFromBridge() {
    new Thread(() -> {
        try (ServerSocket serverSocket = new ServerSocket(12346); // Different port for receiving
             Socket clientSocket = serverSocket.accept();
             BufferedReader in = new BufferedReader(new InputStreamReader(clientSocket.getInputStream()))) {
            String response;
            while ((response = in.readLine()) != null) {
                // Display the response in-game, on the main server thread
                String finalResponse = response;
                getServer().getScheduler().runTask(this, () -> {
                    getServer().broadcastMessage(finalResponse); // Or send to the specific player
                });
            }
        } catch (IOException e) {
            getLogger().severe("Error receiving message from bridge: " + e.getMessage());
        }
    }).start();
}

@Override
public void onEnable() {
    getServer().getPluginManager().registerEvents(this, this);
    receiveFromBridge(); // Start listening for responses
}
```

* **Display Responses In-Game:** Use the server's API to display the response in the Minecraft chat. The `getServer().broadcastMessage()` method in the example above sends the message to all players. You can use `player.sendMessage()` to send it only to the player who initiated the chat.

**B. Bridge/Middleware (Python)**

* **Listen for Messages from the Plugin:** Create a TCP socket server (or HTTP server) to listen for incoming messages from the Minecraft plugin.
```python
import socket
import threading
import requests

OLLAMA_API_URL = "http://localhost:11434/api/generate"  # Default Ollama API endpoint
MODEL_NAME = "llama2"  # Replace with your desired Ollama model

def handle_client(connection, address):
    try:
        print(f"Connection from {address}")
        while True:
            data = connection.recv(1024).decode()
            if not data:
                break
            print(f"Received: {data}")
            # Process the message with Ollama
            response = get_ollama_response(data)
            # Send the response back to the Minecraft plugin
            connection.sendall(response.encode())
    except Exception as e:
        print(f"Error handling client: {e}")
    finally:
        connection.close()
        print(f"Connection with {address} closed")

def get_ollama_response(prompt):
    """Sends a prompt to the Ollama API and returns the response."""
    try:
        data = {
            "prompt": prompt,
            "model": MODEL_NAME,
            "stream": False  # Set to True for streaming responses
        }
        response = requests.post(OLLAMA_API_URL, json=data, stream=False)
        response.raise_for_status()  # Raise HTTPError for bad responses (4xx or 5xx)
        json_response = response.json()
        return json_response.get("response", "No response from Ollama.")
    except requests.exceptions.RequestException as e:
        print(f"Error communicating with Ollama: {e}")
        return "Error: Could not connect to Ollama."

def start_server():
    host = "localhost"
    port = 12345  # Same port as in the Java plugin
    server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server_socket.bind((host, port))
    server_socket.listen(5)  # Listen for up to 5 incoming connections
    print(f"Listening on {host}:{port}")
    while True:
        connection, address = server_socket.accept()
        client_thread = threading.Thread(target=handle_client, args=(connection, address))
        client_thread.start()

if __name__ == "__main__":
    start_server()
```

* **Send Messages to Ollama:** Use the `requests` library (or similar) to send the chat message to the Ollama API. You'll need to format the message as a prompt for the model.
```python
import requests

OLLAMA_API_URL = "http://localhost:11434/api/generate"  # Default Ollama API endpoint
MODEL_NAME = "llama2"  # Replace with your desired Ollama model

def get_ollama_response(prompt):
    """Sends a prompt to the Ollama API and returns the response."""
    try:
        data = {
            "prompt": prompt,
            "model": MODEL_NAME,
            "stream": False  # Set to True for streaming responses
        }
        response = requests.post(OLLAMA_API_URL, json=data, stream=False)
        response.raise_for_status()  # Raise HTTPError for bad responses (4xx or 5xx)
        json_response = response.json()
        return json_response.get("response", "No response from Ollama.")
    except requests.exceptions.RequestException as e:
        print(f"Error communicating with Ollama: {e}")
        return "Error: Could not connect to Ollama."

# Example usage:
user_message = "What is the capital of France?"
ollama_response = get_ollama_response(user_message)
print(f"Ollama's response: {ollama_response}")
```

* **Receive Responses from Ollama:** Parse the JSON response from the Ollama API to extract the generated text.
* **Send Responses to the Plugin:** Send the generated text back to the Minecraft plugin using the same communication channel (e.g., TCP socket).

**C. Ollama Server**

* **Run Ollama:** Make sure your Ollama server is running with the desired model loaded. Follow the Ollama documentation for installation and model management. The default API endpoint is usually `http://localhost:11434/api/generate`.

**4. Key Considerations and Enhancements**

* **Prompt Engineering:** The quality of the Ollama response depends heavily on the prompt. Experiment with different prompts to get the desired behavior. You might include context about the game, the player, or previous interactions. For example:

```python
def get_ollama_response(player_name, message):
    prompt = (
        f"You are a helpful assistant in a Minecraft game. "
        f"{player_name} said: {message}. Respond in a friendly and helpful way."
    )
    # ... (rest of the function)
```

* **Context Management:** For more complex interactions, you'll need to maintain context across multiple turns of the conversation. This could involve storing previous messages and including them in the prompt. Consider using a simple list or a more sophisticated memory system.
* **Rate Limiting:** To prevent overloading the Ollama server or the Minecraft server, implement rate limiting. Limit the number of requests sent to Ollama per player or globally.
* **Error Handling:** Implement robust error handling to catch exceptions and provide informative error messages. Handle cases where the Ollama server is unavailable or returns an error.
* **Security:** If you're exposing the bridge to the internet, take security precautions to prevent unauthorized access.
* **Asynchronous Operations:** Use asynchronous operations (e.g., `asyncio` in Python) to avoid blocking the main thread of the Minecraft server or the bridge. This is especially important for long-running Ollama requests.
* **Configuration:** Make the bridge configurable (e.g., using a configuration file) so you can easily change the Ollama API endpoint, model name, and other parameters.
* **Streaming Responses:** Ollama supports streaming responses. This allows you to display the response in the Minecraft chat as it's being generated, rather than waiting for the entire response to be complete. Set `stream: True` in the Ollama API request and handle the streamed data accordingly.
**Example of Streaming Responses (Python Bridge)**

```python
import requests
import json

OLLAMA_API_URL = "http://localhost:11434/api/generate"
MODEL_NAME = "llama2"

def get_ollama_streaming_response(prompt, callback):
    """Sends a prompt to Ollama and streams the response to a callback function."""
    try:
        data = {
            "prompt": prompt,
            "model": MODEL_NAME,
            "stream": True
        }
        with requests.post(OLLAMA_API_URL, json=data, stream=True) as response:
            response.raise_for_status()
            for line in response.iter_lines():
                if line:
                    decoded_line = line.decode('utf-8')
                    try:
                        json_data = json.loads(decoded_line)
                        response_content = json_data.get("response")
                        if response_content:
                            callback(response_content)  # Call the callback with each chunk
                    except json.JSONDecodeError:
                        print(f"Error decoding JSON: {decoded_line}")
    except requests.exceptions.RequestException as e:
        print(f"Error communicating with Ollama: {e}")
        callback(f"Error: Could not connect to Ollama: {e}")

# Example usage (replace with your actual callback function):
def process_chunk(chunk):
    print(f"Received chunk: {chunk}")
    # In your actual implementation, you would send this chunk back to the
    # Minecraft plugin to display it in the chat.

user_message = "Tell me a story about a Minecraft adventure."
get_ollama_streaming_response(user_message, process_chunk)
```

**Important Notes:**

* **Replace Placeholders:** Remember to replace placeholders like `"localhost"`, `12345`, `"llama2"`, and the Ollama API URL with your actual values.
* **Threading:** The examples use threads to handle concurrent connections. For more complex applications, consider using `asyncio` for better concurrency.
* **Error Handling:** The error handling in the examples is basic. Implement more robust error handling in a production environment.
* **Security:** Be mindful of security implications, especially if you're exposing the bridge to the internet.
This detailed explanation and the code snippets should give you a solid foundation for implementing interaction between a Minecraft server and an Ollama model. Remember to adapt the code to your specific needs and experiment with different prompts and configurations to achieve the desired behavior. Good luck!
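The context-management consideration above can be sketched concretely. This is a minimal illustration, not part of the original write-up: it keeps a bounded per-player history in a `collections.deque` and folds it into the prompt; the `build_prompt` and `record_reply` helpers and the `MAX_TURNS` limit are assumptions made for the example.

```python
from collections import deque, defaultdict

MAX_TURNS = 6  # keep only the last few chat lines per player (assumed limit)

# player name -> bounded list of "speaker: text" lines
histories = defaultdict(lambda: deque(maxlen=MAX_TURNS))

def build_prompt(player_name, message):
    """Fold recent chat history into the prompt sent to Ollama."""
    context = "\n".join(histories[player_name])
    prompt = (
        "You are a helpful assistant in a Minecraft game.\n"
        f"Recent conversation:\n{context}\n"
        f"{player_name} said: {message}. Respond in a friendly and helpful way."
    )
    # Record the player's message now; record the model's reply once it arrives.
    histories[player_name].append(f"{player_name}: {message}")
    return prompt

def record_reply(player_name, reply):
    """Store the model's reply so the next prompt includes it as context."""
    histories[player_name].append(f"assistant: {reply}")
```

Because the deque is bounded, the prompt stays a predictable size no matter how long a player chats; older turns simply fall off the front.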

MCP2HTTP

MCP2HTTP is a minimal transport adapter that bridges MCP clients using stdio with stateless HTTP servers.

Remote MCP Server on Cloudflare

Dockerized Salesforce MCP Server

Dockerized Salesforce MCP Server for REST API integration

Basilisp nREPL MCP Bridge

A simple MCP server for nREPL

Zoom MCP Server

MCP server for Zoom

Exa MCP Server 🔍

Claude can perform Web Search | Exa with MCP (Model Context Protocol).

@modelcontextprotocol/server-terminal

Terminal server implementation for Model Context Protocol

GooseTeam

Look, a flock of geese! An MCP server and protocol for Goose agent collaboration.

Model Context Protocol (MCP) Implementation

Learn MCP by building it from scratch

mcp_server_local_files

Local File System MCP Server

MCP Expert Server

Mirror of

NSAF MCP Server

Mirror of

MCP Server Playwright

MCP Server Playwright - A browser automation service for Claude Desktop

iOS Simulator MCP Server

Mirror of