Discover Awesome MCP Servers

Extend your agent with 30,124 capabilities via MCP servers.

Sefaria Jewish Library MCP Server

Mirror of

🎯 Kubernetes MCP Server

An AI-powered MCP server that understands natural-language queries about your Kubernetes cluster

MCP Servers Hub

Discover interesting MCP servers and clients.

paloalto-mcp-servers

A collection of Model Context Protocol (MCP) servers for managing Palo Alto Networks firewalls and services

Code Reviewer Fixer Agent

This AI agent analyzes code repositories, detects potential security vulnerabilities, reviews code quality, and suggests fixes based on Sentry error logs using Sentry and GitHub MCP servers!

Share MCP - Model Context Protocol MCP Server Navigation Site

Share MCP is a navigation site focused on the Model Context Protocol (MCP). It provides a categorized view of MCP-related resources, tools, and services, helping developers quickly find the MCP solutions they need.

mcp-client-and-server MCP server

Mirror of

MCP Client Configuration Server

Mirror of

@modelcontextprotocol/server-terminal

Terminal server implementation for Model Context Protocol

s-GitHubTestRepo-Henry

created from MCP server demo

Remote MCP Server on Cloudflare

Rails MCP Server

A Ruby gem implementation of a Model Context Protocol (MCP) server for Rails projects. This server allows LLMs (Large Language Models) to interact with Rails projects through the Model Context Protocol.

Bilibili MCP Server

An MCP server learning project ("mcp-server belajar").

MCP Workers AI

MCP servers sdk for Cloudflare Workers

Model Context Protocol (MCP) Implementation

Learn MCP by building from scratch

MCP2HTTP

MCP2HTTP is a minimal transport adapter that bridges MCP clients using stdio with stateless HTTP servers.
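As a rough sketch of what such an adapter does, the loop below reads newline-delimited JSON-RPC messages from stdin and relays each to an HTTP endpoint (the endpoint URL and framing here are assumptions for illustration, not MCP2HTTP's actual implementation):

```python
import json
import sys
import urllib.request

# Hypothetical endpoint -- replace with your HTTP MCP server's URL.
HTTP_ENDPOINT = "http://localhost:8080/mcp"

def forward(raw_line: str, post=None) -> str:
    """Relay one newline-delimited JSON-RPC message to the HTTP server
    and return the server's JSON-RPC response as a string."""
    message = json.loads(raw_line)  # validate the frame before forwarding
    body = json.dumps(message).encode()
    if post is not None:
        return post(body)  # injected transport, useful for testing
    req = urllib.request.Request(
        HTTP_ENDPOINT, data=body,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

def main():
    # stdio side: one JSON-RPC message per line, responses back on stdout
    for line in sys.stdin:
        if line.strip():
            sys.stdout.write(forward(line) + "\n")
            sys.stdout.flush()

if __name__ == "__main__":
    main()
```

Because the HTTP side is stateless, each request/response pair is independent; any session state has to live in the MCP client itself.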

Dockerized Salesforce MCP Server

Dockerized Salesforce MCP Server for REST API integration

Exa MCP Server 🔍

Claude can perform web search with Exa via MCP (Model Context Protocol).

mcp_server_local_files

Local File System MCP Server

MCP Expert Server

Mirror of

Basilisp nREPL MCP Bridge

simple MCP server for nREPL

Modular Outlook MCP Server

MCP server for Claude to access Outlook data via Microsoft Graph API

mcp-server-datahub

The official MCP server for DataHub (

Zoom MCP Server

MCP server for Zoom

Fiberflow MCP Gateway

Run Fiberflow MCP SSE Server over stdio.

MCPClient Python Application

Okay, I understand. You want an implementation for interacting between an MCP (presumably Minecraft Protocol) server and an Ollama model. This is a complex task that involves several steps. Here's a breakdown of the concepts, potential approaches, and a simplified example to get you started. Keep in mind that a full, production-ready implementation would be quite extensive.

**Core Concepts**

* **Minecraft Protocol (MCP):** This is the communication protocol used between Minecraft clients and servers. It's binary and relatively complex. You'll need a library to handle the encoding and decoding of Minecraft packets.
* **Ollama:** This is a tool that allows you to run large language models (LLMs) locally. You interact with Ollama via its API, typically using HTTP requests.
* **Bridge/Middleware:** You'll need a piece of software (the "bridge") that sits between the Minecraft server and the Ollama model. This bridge will:
  * Receive messages from the Minecraft server (e.g., player chat).
  * Format those messages into a prompt for the Ollama model.
  * Send the prompt to the Ollama API.
  * Receive the response from Ollama.
  * Format the response into a Minecraft message.
  * Send the message back to the Minecraft server.

**High-Level Architecture**

1. **Minecraft Server:** The standard Minecraft server.
2. **MCP Library:** A library (e.g., `mcproto`, `python-minecraft-protocol`, `node-minecraft-protocol`) to handle Minecraft packet encoding/decoding. This will be used in the bridge.
3. **Bridge (Your Code):** The core of the integration. This will:
   * Connect to the Minecraft server.
   * Listen for specific packets (e.g., chat messages).
   * Process the messages.
   * Interact with the Ollama API.
   * Send messages back to the Minecraft server.
4. **Ollama Server:** Running locally or on a server. You'll need to have a model loaded in Ollama (e.g., `llama2`, `mistral`).

**Implementation Steps**

1. **Choose a Programming Language and Libraries:**
   * **Python:** Popular choice due to its ease of use and available libraries. Libraries like `mcproto`, `requests`, and `asyncio` are helpful.
   * **Node.js:** Another good option with libraries like `node-minecraft-protocol` and `node-fetch`.
   * **Go:** A performant option, but might require more manual work with the MCP.
2. **Set up Ollama:**
   * Install Ollama: Follow the instructions on the Ollama website ([https://ollama.com/](https://ollama.com/)).
   * Pull a model: `ollama pull llama2` (or another model of your choice).
   * Run the model: `ollama run llama2` (This starts the Ollama API server).
3. **Implement the Bridge:** This is the most complex part. Here's a simplified Python example using `mcproto` and `requests`:

```python
import asyncio
import json

import mcproto
import requests

# Minecraft Server Configuration
SERVER_HOST = "localhost"  # Or your server's IP address
SERVER_PORT = 25565
USERNAME = "OllamaBot"  # Bot's username

# Ollama API Configuration
OLLAMA_API_URL = "http://localhost:11434/api/generate"  # Default Ollama API endpoint
OLLAMA_MODEL = "llama2"  # The model you're using in Ollama


async def handle_chat_message(client, message):
    """Handles incoming chat messages from the Minecraft server."""
    try:
        chat_data = json.loads(message)
        text = chat_data["content"]  # Extract the actual message text
        print(f"Received chat message: {text}")

        # Create a prompt for Ollama
        prompt = (
            "Respond to the following Minecraft chat message "
            f"as a helpful and friendly player: {text}"
        )

        # Send the prompt to Ollama
        response = requests.post(OLLAMA_API_URL, json={
            "prompt": prompt,
            "model": OLLAMA_MODEL,
            "stream": False,  # Get the full response at once
        })
        response.raise_for_status()  # Raise an exception for bad status codes (4xx or 5xx)
        ollama_response = response.json()
        bot_response = ollama_response["response"]
        print(f"Ollama response: {bot_response}")

        # Format the response as a Minecraft chat message
        minecraft_message = f"[Ollama]: {bot_response}"

        # Send the message back to the Minecraft server
        await send_chat_message(client, minecraft_message)
    except json.JSONDecodeError as e:
        print(f"Error decoding JSON: {e}")
    except requests.exceptions.RequestException as e:
        print(f"Error communicating with Ollama: {e}")
    except Exception as e:
        print(f"An unexpected error occurred: {e}")


async def send_chat_message(client, message):
    """Sends a chat message to the Minecraft server."""
    await client.send("chat_message", {"message": message, "position": 0})  # Position 0 is chat


async def main():
    """Main function to connect to the Minecraft server and handle messages."""
    try:
        client = await mcproto.create_client(SERVER_HOST, SERVER_PORT, username=USERNAME)
        print(f"Connected to Minecraft server as {USERNAME}")

        # Register a handler for incoming chat messages
        client.register("chat_message", handle_chat_message)

        # Keep the client running
        await client.run()
    except mcproto.exceptions.MinecraftError as e:
        print(f"Minecraft error: {e}")
    except Exception as e:
        print(f"An unexpected error occurred: {e}")


if __name__ == "__main__":
    asyncio.run(main())
```

**Explanation of the Python Example:**

* **Imports:** Imports necessary libraries.
* **Configuration:** Sets up the server address, port, username, Ollama API URL, and model name. *Crucially, change these to match your setup.*
* **`handle_chat_message`:**
  * Receives a chat message from the Minecraft server.
  * Extracts the text of the message.
  * Constructs a prompt for Ollama. *This is where you can customize how you want the model to respond.*
  * Sends the prompt to the Ollama API using `requests.post`.
  * Parses the JSON response from Ollama.
  * Formats the response into a Minecraft chat message.
  * Sends the message back to the Minecraft server using `send_chat_message`.
* **`send_chat_message`:** Sends a chat message to the Minecraft server using the `chat_message` packet.
* **`main`:**
  * Connects to the Minecraft server using `mcproto.create_client`.
  * Registers the `handle_chat_message` function to be called when a `chat_message` packet is received.
  * Runs the client using `client.run()`.

**To Run the Example:**

1. **Install Libraries:** `pip install mcproto requests`
2. **Start Ollama:** `ollama run llama2` (or your chosen model)
3. **Run the Python script:** `python your_script_name.py`
4. **Join your Minecraft server with a client.**
5. **Send a chat message in Minecraft.** The bot should respond.

**Important Considerations and Next Steps:**

* **Error Handling:** The example includes basic error handling, but you'll need to add more robust error handling for production use. Consider logging errors to a file.
* **Prompt Engineering:** The prompt is crucial for getting good responses from the LLM. Experiment with different prompts to get the desired behavior. Consider adding context to the prompt (e.g., the player's name, the current game state).
* **Rate Limiting:** Ollama has rate limits. Implement rate limiting in your bridge to avoid overloading the Ollama API.
* **Security:** Be careful about the prompts you send to the LLM. Avoid sending sensitive information. Sanitize user input to prevent prompt injection attacks.
* **Asynchronous Operations:** Use `asyncio` (or similar) to handle network operations concurrently. This will prevent the bridge from blocking while waiting for responses from the Minecraft server or the Ollama API.
* **Minecraft Protocol Version:** Make sure your `mcproto` (or equivalent) library is compatible with the Minecraft server version you're using.
* **Authentication:** If your Minecraft server requires authentication, you'll need to implement the authentication handshake in your bridge.
* **Configuration:** Use a configuration file (e.g., JSON, YAML) to store the server address, port, username, Ollama API URL, and other settings.
* **More Sophisticated Interactions:** Instead of just responding to chat messages, you could use the LLM to:
  * Generate quests.
  * Create dynamic storylines.
  * Control non-player characters (NPCs).
  * Analyze player behavior.
* **Context Management:** LLMs often perform better with context. Consider storing a history of recent chat messages and including them in the prompt. Be mindful of the LLM's context window (the maximum length of the prompt).
* **Filtering:** Implement filtering to prevent the LLM from generating inappropriate or offensive content.

This is a starting point. Building a fully functional bridge between a Minecraft server and an Ollama model requires significant effort and experimentation. Good luck!
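The context-management suggestion above can be sketched as a small history buffer that folds recent chat into the prompt. This is illustrative only: `ChatContext` and its parameters are hypothetical names, and the character budget is a crude stand-in for real token counting against the model's context window.

```python
from collections import deque

class ChatContext:
    """Keep the last N chat lines and fold them into the LLM prompt."""

    def __init__(self, max_lines=10, max_chars=2000):
        self.history = deque(maxlen=max_lines)  # oldest lines fall off automatically
        self.max_chars = max_chars

    def add(self, player, text):
        self.history.append(f"<{player}> {text}")

    def build_prompt(self, instruction):
        lines, total = [], 0
        # Walk newest-to-oldest so the freshest messages survive the budget.
        for line in reversed(self.history):
            if total + len(line) > self.max_chars:
                break
            lines.append(line)
            total += len(line)
        transcript = "\n".join(reversed(lines))  # restore chronological order
        return f"{instruction}\n\nRecent chat:\n{transcript}"
```

The prompt built this way would replace the single-message `prompt` string in `handle_chat_message`, at the cost of larger Ollama requests.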

GooseTeam

Look, a flock of geese! An MCP server and protocol for Goose agent collaboration.

Browser JavaScript Evaluator

This is a reference design for an MCP server that hosts a web page which connects back to the server via SSE, allowing Claude to execute JavaScript on the page.

iOS Simulator MCP Server

Mirror of

Supergateway

Run MCP stdio servers over SSE and SSE over stdio. AI gateway.