Discover Awesome MCP Servers

Extend your agent with 17,166 capabilities via MCP servers.

mcp-server-email

A demo MCP server for sending email, written in Go.

@waldzellai/mcp-servers

Waldzell AI's monorepo of MCP servers for model enhancement. Use it in Claude Desktop, Cline, Roo Code, and more!

Redmine MCP Server

Mirror of

paloalto-mcp-servers

A collection of Model Context Protocol (MCP) servers for managing Palo Alto Networks firewalls and services

Code Reviewer Fixer Agent

This AI agent analyzes code repositories, detects potential security vulnerabilities, reviews code quality, and suggests fixes based on Sentry error logs using Sentry and GitHub MCP servers!

Share MCP - Model Context Protocol MCP Server Directory

Share MCP is a directory site focused on the Model Context Protocol (MCP). It offers a wide range of MCP-related resources, tools, and services, organized into categories to help developers quickly find the MCP solutions they need.

Cohere MCP Server

Cohere MCP server.

MCP Client Configuration Server

Mirror of

teable-mcp-server

An MCP server for interacting with the Teable database.

UIThub MCP Server

Simple MCP server for uithub.com

Exa MCP Server 🔍

Claude can perform web searches | Exa with MCP (Model Context Protocol).

@modelcontextprotocol/server-terminal

Terminal server implementation for Model Context Protocol

🎯 Kubernetes MCP Server

AI-powered MCP server that understands natural-language queries about your Kubernetes cluster.

s-GitHubTestRepo-Henry

created from MCP server demo

MCP Servers Hub

Discover interesting MCP servers and clients.

MCP Workers AI

MCP server SDK for Cloudflare Workers.

MCP2HTTP

MCP2HTTP is a minimal transport adapter that bridges MCP clients using stdio with stateless HTTP servers.
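The adapter described above can be sketched as a line-oriented loop: each message a stdio MCP client writes is forwarded as a stateless HTTP POST, and the reply is written back a line at a time. This is a minimal illustration, not MCP2HTTP's actual code; the `post` function below is a stand-in for the real HTTP transport so the sketch runs without a live server.

```java
import java.io.BufferedReader;
import java.io.StringReader;
import java.util.function.UnaryOperator;

public class Mcp2HttpSketch {
    // Core bridge loop: each line from the MCP client's stdio stream is
    // forwarded through `post` (standing in for a stateless HTTP POST) and
    // the server's reply is written back to stdout as a line.
    static void bridge(BufferedReader in, UnaryOperator<String> post) throws Exception {
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(post.apply(line));
        }
    }

    public static void main(String[] args) throws Exception {
        // Fake transport for the demo: wrap the JSON-RPC message in a result.
        UnaryOperator<String> fakePost = msg -> "{\"result\":" + msg + "}";
        bridge(new BufferedReader(new StringReader(
                "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"ping\"}\n")), fakePost);
    }
}
```

In a real deployment the `post` function would be an actual HTTP request; the point of the sketch is that the bridge itself holds no session state between lines.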

Remote MCP Server on Cloudflare

Dockerized Salesforce MCP Server

Dockerized Salesforce MCP Server for REST API integration

Basilisp nREPL MCP Bridge

simple MCP server for nREPL

Zoom MCP Server

MCP server for Zoom

Modular Outlook MCP Server

MCP server for Claude to access Outlook data via Microsoft Graph API

Rails MCP Server

A Ruby gem implementation of a Model Context Protocol (MCP) server for Rails projects. This server allows LLMs (Large Language Models) to interact with Rails projects through the Model Context Protocol.

Bilibili MCP Server

A learning project for MCP servers (mcp-server学习).

Model Context Protocol (MCP) Implementation

Learn MCP by building from scratch.

Fiberflow MCP Gateway

Run Fiberflow MCP SSE Server over stdio.

MCPClient Python Application

Okay, I can help you outline the implementation for interacting between an MCP (presumably Minecraft Protocol) server and an Ollama model. This is a complex task, and the specific implementation will depend heavily on what you want to *do* with the Ollama model in relation to the Minecraft server. Here's a breakdown of the key components, considerations, and potential approaches:

**I. Understanding the Goal**

Before diving into code, clarify the purpose of this interaction. What do you want the Ollama model to *do*? Here are some examples:

* **Dynamic NPC Dialogue:** The Ollama model generates dialogue for non-player characters (NPCs) based on player interactions, game events, or world state.
* **World Generation Assistance:** The model provides suggestions or inspiration for world generation, perhaps influencing terrain features, structures, or biome placement.
* **Game Event Triggering:** The model analyzes player actions or in-game events and triggers specific game events or challenges.
* **Chat Enhancement:** The model enhances the in-game chat by providing more intelligent responses, translations, or filtering.
* **Automated Game Mastering:** The model acts as a game master, dynamically adjusting the game's difficulty, introducing new quests, or creating unexpected scenarios.
* **Code Generation:** The model generates code for Minecraft functions or datapacks based on natural language prompts.

The specific goal will dictate the data you need to send to Ollama and the actions you take based on Ollama's responses.

**II. Core Components**

1. **Minecraft Server (MCP):** This is your standard Minecraft server, likely running with a modding framework like Forge or Fabric. You'll need to write a mod to handle the interaction with the external Ollama model.
2. **Ollama Server:** This is where your Ollama model is running. It needs to be accessible over a network (e.g., `localhost`, a local network IP, or a public IP if you're careful about security).
3. **Mod (Forge/Fabric):** This is the crucial bridge. The mod will:
   * Listen for relevant Minecraft events (player chat, NPC interactions, world changes, etc.).
   * Format the relevant data into a prompt for the Ollama model.
   * Send the prompt to the Ollama server via an HTTP request.
   * Receive the response from the Ollama server.
   * Parse the response.
   * Take actions in the Minecraft world based on the response (e.g., change NPC dialogue, spawn entities, modify world data, send chat messages).
4. **Communication Protocol (HTTP/JSON):** The mod and the Ollama server will communicate using HTTP requests and responses, with data formatted as JSON. This is a standard and relatively easy-to-implement approach.

**III. Implementation Steps (General Outline)**

1. **Set up Ollama:**
   * Install Ollama: Follow the instructions on the Ollama website ([https://ollama.com/](https://ollama.com/)).
   * Pull a model: `ollama pull <model_name>` (e.g., `ollama pull llama2`). Choose a model appropriate for your task. Smaller models are faster but may be less capable.
   * Run the model: Ollama typically runs in the background.
2. **Create a Minecraft Mod (Forge/Fabric):**
   * Set up a mod development environment (using Forge MDK or Fabric Loom).
   * Add necessary dependencies (e.g., libraries for HTTP requests, JSON parsing). Popular choices include:
     * **Forge:** Use the built-in `net.minecraft.util.GsonHelper` for JSON. For HTTP, you might use Java's `java.net.http` (available in Java 11+) or a library like Apache HttpClient.
     * **Fabric:** Use Gson for JSON. For HTTP, use Java's `java.net.http` or a library like Apache HttpClient.
3. **Implement Event Listeners:**
   * Register event listeners for the Minecraft events you care about. Examples:
     * `ServerChatEvent` (Forge) / `ServerChatEvent` (Fabric): For listening to player chat messages.
     * `EntityInteractEvent` (Forge) / `UseEntityCallback` (Fabric): For detecting interactions with NPCs.
     * `WorldEvent.Load` (Forge) / `ServerWorldEvents.LOAD` (Fabric): For detecting when a world is loaded.
4. **Create the Prompt:**
   * Inside your event listener, construct a prompt for the Ollama model. This is *critical*. The quality of the prompt directly affects the quality of the response.
   * Include relevant information in the prompt:
     * Player name
     * NPC name (if applicable)
     * Chat message (if applicable)
     * Game state (e.g., time of day, weather, player location)
     * Any other relevant context.
   * Format the prompt clearly and concisely. Use natural language that the model can understand. Consider using a template or a prompt engineering technique.
   * Example (for NPC dialogue):

     ```java
     String playerName = event.getPlayer().getName().getString();
     String npcName = "Villager"; // Example
     String chatMessage = event.getMessage();
     String prompt = "The player " + playerName + " said: '" + chatMessage
             + "'. You are a villager named " + npcName
             + ". Respond to the player in a friendly and helpful way.";
     ```

5. **Send the HTTP Request to Ollama:**
   * Create an HTTP client.
   * Construct a JSON payload containing the prompt. The exact format depends on the Ollama API. Typically, it looks something like this:

     ```json
     {
       "prompt": "The player said...",
       "model": "llama2",
       "stream": false
     }
     ```

     Here `"model"` is your chosen model; set `"stream"` to `true` for streaming responses.
   * Send a POST request to the Ollama API endpoint (usually `/api/generate`). You'll need the correct URL (e.g., `http://localhost:11434/api/generate`).
   * Set the `Content-Type` header to `application/json`.

     ```java
     // Example using Java's java.net.http
     HttpClient client = HttpClient.newHttpClient();
     String jsonPayload = "{\"prompt\": \"" + prompt + "\", \"model\": \"llama2\", \"stream\": false}";
     HttpRequest request = HttpRequest.newBuilder()
             .uri(URI.create("http://localhost:11434/api/generate"))
             .header("Content-Type", "application/json")
             .POST(HttpRequest.BodyPublishers.ofString(jsonPayload))
             .build();
     HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
     String responseBody = response.body();
     ```

6. **Parse the Response:**
   * Parse the JSON response from the Ollama server. The response will typically contain the generated text.
   * Extract the relevant text from the JSON.

     ```java
     // Example using Gson
     Gson gson = new Gson();
     JsonObject jsonObject = gson.fromJson(responseBody, JsonObject.class);
     String generatedText = jsonObject.get("response").getAsString();
     ```

7. **Take Action in Minecraft:**
   * Based on the generated text, take appropriate actions in the Minecraft world.
   * Examples:
     * **NPC Dialogue:** Set the NPC's dialogue to the generated text. You'll need to find the appropriate method for setting NPC dialogue in your modding framework.
     * **Chat Message:** Send the generated text as a chat message to the player. Use `ServerPlayer.sendSystemMessage(Component.literal(generatedText))` (Forge/Fabric).
     * **World Modification:** Modify the world based on the generated text (e.g., spawn entities, change blocks). Be very careful when modifying the world, as it can cause lag or crashes.
8. **Error Handling:**
   * Implement robust error handling. Catch exceptions that may occur during HTTP requests, JSON parsing, or world modification.
   * Log errors to the Minecraft server log.
   * Provide informative error messages to the player (if appropriate).

**IV. Code Example (Illustrative - Forge)**

```java
import com.google.gson.Gson;
import com.google.gson.JsonObject;
import net.minecraft.network.chat.Component;
import net.minecraft.server.level.ServerPlayer;
import net.minecraftforge.event.ServerChatEvent;
import net.minecraftforge.eventbus.api.SubscribeEvent;
import net.minecraftforge.fml.common.Mod;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

@Mod.EventBusSubscriber(bus = Mod.EventBusSubscriber.Bus.FORGE)
public class ChatEventHandler {

    @SubscribeEvent
    public static void onServerChat(ServerChatEvent event) {
        ServerPlayer player = event.getPlayer();
        String playerName = player.getName().getString();
        String chatMessage = event.getMessage();
        String prompt = "The player " + playerName + " said: '" + chatMessage
                + "'. Respond in a funny way.";
        try {
            HttpClient client = HttpClient.newHttpClient();
            String jsonPayload = "{\"prompt\": \"" + prompt
                    + "\", \"model\": \"llama2\", \"stream\": false}";
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:11434/api/generate"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(jsonPayload))
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            String responseBody = response.body();
            Gson gson = new Gson();
            JsonObject jsonObject = gson.fromJson(responseBody, JsonObject.class);
            String generatedText = jsonObject.get("response").getAsString();
            player.sendSystemMessage(Component.literal(generatedText));
        } catch (Exception e) {
            System.err.println("Error communicating with Ollama: " + e.getMessage());
            player.sendSystemMessage(Component.literal("Error: Could not generate response."));
        }
    }
}
```

**Important Considerations:**

* **Performance:** Calling an external API for every chat message or NPC interaction can be slow. Consider using caching, rate limiting, or asynchronous processing to improve performance. Streaming responses from Ollama can also help.
* **Security:** Be very careful about exposing your Ollama server to the internet. Use authentication and authorization to prevent unauthorized access. Sanitize the prompts to prevent prompt injection attacks.
* **Prompt Engineering:** Experiment with different prompts to get the best results from the Ollama model. Use clear, concise language and provide as much context as possible.
* **Rate Limiting:** Ollama might have rate limits. Implement retry logic with exponential backoff to handle rate limiting errors.
* **Model Choice:** The choice of Ollama model is crucial. Smaller models are faster but may be less capable. Larger models are more capable but may be slower and require more resources.
* **Asynchronous Processing:** Use asynchronous tasks (e.g., `CompletableFuture` in Java) to avoid blocking the main Minecraft server thread. This will prevent the server from freezing while waiting for responses from Ollama.
* **Configuration:** Make the Ollama server URL, model name, and other parameters configurable via a configuration file. This will make it easier to deploy and manage the mod.
* **Data Storage:** If you need to remember previous interactions, consider using a database or file to store the conversation history.

**V. Next Steps**

1. **Choose a Modding Framework:** Decide whether to use Forge or Fabric.
2. **Set up your Development Environment:** Follow the instructions for your chosen framework.
3. **Implement the Basic Communication:** Get the mod to send a simple HTTP request to the Ollama server and print the response to the console.
4. **Implement Event Listeners:** Start with a simple event listener (e.g., `ServerChatEvent`).
5. **Create Prompts:** Experiment with different prompts to see what works best.
6. **Implement Error Handling:** Add error handling to your code.
7. **Test and Refine:** Test your mod thoroughly and refine the prompts and code as needed.

This is a complex project, but by breaking it down into smaller steps, you can make it more manageable. Good luck!
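The asynchronous-processing advice above (using `CompletableFuture` so the server thread never blocks on Ollama) can be sketched as follows. This is a minimal illustration: `queryOllama` is a hypothetical stand-in for the blocking HTTP call, not a real API, and the final `join()` exists only so the demo finishes before exiting.

```java
import java.util.concurrent.CompletableFuture;

public class AsyncOllamaSketch {
    // Hypothetical stand-in for the blocking HTTP request to Ollama;
    // in a real mod this would be the java.net.http call shown earlier.
    static String queryOllama(String prompt) {
        return "response to: " + prompt;
    }

    public static void main(String[] args) {
        // Run the slow call off the caller's thread, then handle the reply
        // when it arrives instead of freezing the main server thread.
        CompletableFuture
                .supplyAsync(() -> queryOllama("hello"))
                .thenAccept(System.out::println)
                .join(); // demo only: a real mod would not block here
    }
}
```

In the Forge handler, the `thenAccept` stage is where `player.sendSystemMessage(...)` would go, ideally scheduled back onto the server thread via the framework's task queue.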

iOS Simulator MCP Server

Mirror of

Supergateway

Run MCP stdio servers over SSE and SSE over stdio. AI gateway.

GooseTeam

Look, a flock of geese! An MCP server and protocol for Goose agent collaboration.