Discover Awesome MCP Servers

Extend your agent with 25,254 capabilities via MCP servers.

Perplexity MCP Server

Mirror of

MCP Games Server

GitHub MCP Server

mariadb-mcp-server

An MCP server that provides read-only access to MariaDB.

TimezoneToolkit MCP Server

An advanced MCP server providing comprehensive time and timezone tools.

AWS Model Context Protocol (MCP) Server

A lightweight service that lets AI assistants execute AWS CLI commands through the Model Context Protocol (MCP), allowing AI tools to retrieve AWS documentation and interact with AWS services.

MCP Command History

A powerful tool for exploring, searching, and managing your shell command history through an MCP (Model Control Protocol) interface. This project lets you easily access, search, and retrieve previously executed shell commands.

Gitee MCP Server

Integrates the Gitee API to manage repositories, issues, pull requests, and more.

Memory Bank MCP Server 2.2.1

A server for managing project documentation and context across Claude AI sessions via global and branch-specific memory banks, enabling consistent knowledge management with structured JSON document storage.

MCP Server

MCP server implementation for handling run_python requests

MCP Server - Oracle DB Context

An MCP server for working with large Oracle databases.

MCP Chunk Editor

An MCP server that provides an efficient and safe text editor for LLMs.

kagi-server MCP Server

Mirror of

vigilant-adventure

Hey, I wanted to play with some mods, but when I try to open the game it says: "The game crashed whilst initializing game. Error: java.lang.NoClassDefFoundError: cpw/mods/fml/common/IPlayerTracker. Exit Code: -1". What can I do to make it work? The log is: ---- Minecraft Crash Report ---- WARNING: coremods are present: MekanismCoremod (Mekanism-1.12.2-9…

Better Qdrant MCP Server

A Model Context Protocol server that enables semantic search by providing tools to manage Qdrant vector database collections, process and embed documents using various embedding services, and run semantic searches over the resulting vector embeddings.

.NET MCP Servers

Collection of my MCP (Model Context Protocol) servers written in .NET

Figma to Vue MCP Server

MCP server that generates Vue components from Figma designs following Hostinger's design system

MCP Image Generation Server

Mirror of

WCGW

Sends code snippets and paths to Claude. Designed to work with the wcgw MCP server.

artifacts-mcp

MCP Server for Artifacts MMO

Knowledge Graph Memory Server

Mirror of

mcp_server

mem0 MCP Server

A TypeScript implementation of a Model Context Protocol server for creating, managing, and semantically searching memory streams, with Mem0 integration.

mcp-server-bluesky

Mirror of

Model Context Protocol (MCP) Server 🚀

mcp-server-web3

A web3 function plugin server based on Anthropic's MCP.

Prompt Decorators

A standardized framework for enhancing how LLMs process and respond to prompts through composable decorators, featuring an official open standard specification and Python reference implementation with MCP server integration.

McpDocs

Provides documentation about your Elixir project's functions and those of its dependencies to an LLM through an SSE MCP server.

mpc-csharp-semantickernel

Okay, here's an example demonstrating how to use Microsoft Semantic Kernel with OpenAI and a hypothetical "MCP Server" (assuming MCP stands for something like "Model Control Plane" or "Model Configuration Provider"). I'll outline the code structure, explain the concepts, and provide a basic implementation. Keep in mind that the "MCP Server" part is conceptual, as there isn't a standard, universally defined MCP Server. You'll need to adapt the MCP Server interaction to your specific implementation.

**Conceptual Overview**

1. **Semantic Kernel:** The core framework for orchestrating AI tasks. It allows you to define skills (functions) that can be chained together to achieve complex goals.
2. **OpenAI:** Provides the large language models (LLMs) that power the AI capabilities. We'll use the OpenAI connector in Semantic Kernel to interact with OpenAI's APIs.
3. **MCP Server (Hypothetical):** This server is responsible for managing model configurations, potentially including:
   * API keys for OpenAI (or other LLM providers).
   * Model names (e.g., "gpt-3.5-turbo", "gpt-4").
   * Temperature, top_p, and other model parameters.
   * Rate limiting policies.
   * Potentially, even A/B testing configurations for different models.

**Code Structure (Conceptual)**

```python
# Assuming you have installed:
# pip install semantic-kernel python-dotenv openai

import os

import requests  # For interacting with the MCP Server
import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
from dotenv import load_dotenv

# Load environment variables (e.g., from a .env file)
load_dotenv()

# --- 1. MCP Server Interaction (Hypothetical) ---

MCP_SERVER_URL = os.getenv("MCP_SERVER_URL", "http://localhost:8000")  # Default URL


def get_model_config(model_name: str):
    """
    Fetches model configuration from the MCP Server.

    Args:
        model_name: The name of the model to retrieve configuration for.

    Returns:
        A dictionary containing the model configuration, or None if not found.
    """
    try:
        response = requests.get(f"{MCP_SERVER_URL}/models/{model_name}")
        response.raise_for_status()  # Raise HTTPError for bad responses (4xx or 5xx)
        return response.json()
    except requests.exceptions.RequestException as e:
        print(f"Error fetching model config from MCP Server: {e}")
        return None


# --- 2. Semantic Kernel Setup ---

async def main():
    kernel = sk.Kernel()

    # Get OpenAI configuration from MCP Server
    model_name = "gpt-3.5-turbo"  # Or "gpt-4", etc.
    model_config = get_model_config(model_name)

    if model_config is None:
        print("Failed to retrieve model configuration. Using environment variables.")
        # Fallback to environment variables if MCP Server is unavailable
        api_key = os.getenv("OPENAI_API_KEY")
        org_id = os.getenv("OPENAI_ORG_ID")
        if not api_key:
            raise Exception("OpenAI API key not found in environment variables or MCP Server.")
        kernel.add_chat_service("openai", OpenAIChatCompletion(model_name, api_key, org_id))
    else:
        # Use configuration from MCP Server
        api_key = model_config.get("api_key")
        org_id = model_config.get("org_id")
        temperature = model_config.get("temperature", 0.7)  # Default temperature
        top_p = model_config.get("top_p", 1.0)  # Default top_p
        if not api_key:
            raise Exception("OpenAI API key not found in MCP Server configuration.")
        kernel.add_chat_service(
            "openai",
            OpenAIChatCompletion(
                model_name, api_key, org_id, temperature=temperature, top_p=top_p
            ),
        )

    # --- 3. Define a Skill ---
    # Simple example: A skill to summarize text
    summarize_prompt = """
    Summarize the following text:

    {{$text}}
    """

    summarize_function = kernel.create_semantic_function(
        prompt_template=summarize_prompt,
        description="Summarizes text",
        max_tokens=200,  # Limit the output length
        temperature=0.7,
        top_p=1.0,
    )

    # --- 4. Use the Skill ---
    text_to_summarize = """
    Microsoft Semantic Kernel is a powerful framework for building AI-powered
    applications. It allows developers to easily integrate large language models
    (LLMs) like OpenAI's GPT-3 and GPT-4 into their applications. Semantic Kernel
    provides a set of tools and abstractions for orchestrating AI tasks, managing
    prompts, and chaining skills together. This makes it easier to create complex
    AI workflows.
    """

    context_variables = sk.ContextVariables(text_to_summarize)
    summary = await summarize_function.invoke(context_variables)

    print("Original Text:")
    print(text_to_summarize)
    print("\nSummary:")
    print(summary)


if __name__ == "__main__":
    import asyncio

    asyncio.run(main())
```

**Explanation:**

1. **MCP Server Interaction:**
   * `get_model_config(model_name)`: This function is the key to integrating with your MCP Server. It sends an HTTP GET request to the server to retrieve the configuration for a specific model. The URL (`MCP_SERVER_URL/models/{model_name}`) is just an example; adjust it to match your server's API.
   * Error Handling: The `try...except` block handles potential network errors or HTTP errors (e.g., 404 Not Found if the model isn't configured).
   * Configuration Retrieval: The function expects the MCP Server to return a JSON response containing the model's configuration (API key, model name, temperature, etc.).
2. **Semantic Kernel Setup:**
   * `kernel = sk.Kernel()`: Creates a Semantic Kernel instance.
   * `get_model_config()` is called to fetch the OpenAI configuration.
   * Conditional Logic: If the MCP Server is unavailable or doesn't have the configuration, the code falls back to using environment variables (a common practice). This provides a backup mechanism.
   * `kernel.add_chat_service()`: This is where you register the OpenAI connector with the Semantic Kernel. It's crucial to pass the API key and other parameters (temperature, top_p) obtained from either the MCP Server or the environment variables.
3. **Skill Definition:**
   * `summarize_prompt`: A simple prompt that instructs the LLM to summarize the input text. The `{{$text}}` is a placeholder that will be replaced with the actual text to be summarized.
   * `kernel.create_semantic_function()`: Creates a semantic function (a skill) from the prompt. You can also specify other parameters like `max_tokens` (to limit the output length), `temperature`, and `top_p`.
4. **Skill Usage:**
   * `text_to_summarize`: The text you want to summarize.
   * `context_variables = sk.ContextVariables(text_to_summarize)`: Creates a context object to pass the input text to the skill.
   * `summary = await summarize_function.invoke(context_variables)`: Invokes the skill and gets the result.
   * Prints the original text and the summary.

**MCP Server Implementation (Example - Flask)**

Here's a very basic example of how you might implement the MCP Server using Flask (a Python web framework):

```python
from flask import Flask, jsonify
import os
from dotenv import load_dotenv

load_dotenv()

app = Flask(__name__)

# In a real application, this would likely be stored in a database
model_configurations = {
    "gpt-3.5-turbo": {
        "api_key": os.getenv("OPENAI_API_KEY"),
        "org_id": os.getenv("OPENAI_ORG_ID"),
        "temperature": 0.7,
        "top_p": 1.0,
    },
    "gpt-4": {
        "api_key": os.getenv("OPENAI_API_KEY"),
        "org_id": os.getenv("OPENAI_ORG_ID"),
        "temperature": 0.5,
        "top_p": 0.9,
    },
}


@app.route("/models/<model_name>")
def get_model(model_name):
    if model_name in model_configurations:
        return jsonify(model_configurations[model_name])
    else:
        return jsonify({"error": "Model not found"}), 404


if __name__ == "__main__":
    app.run(debug=True, port=8000)
```

**To Run the Example:**

1. **Install Dependencies:**

   ```bash
   pip install semantic-kernel python-dotenv openai flask requests
   ```

2. **Set Environment Variables:**
   * Create a `.env` file in the same directory as your Python scripts.
   * Add your OpenAI API key and organization ID to the `.env` file:

     ```
     OPENAI_API_KEY=YOUR_OPENAI_API_KEY
     OPENAI_ORG_ID=YOUR_OPENAI_ORG_ID
     MCP_SERVER_URL=http://localhost:8000  # Optional, if you want to override the default
     ```

   * Replace `YOUR_OPENAI_API_KEY` and `YOUR_OPENAI_ORG_ID` with your actual credentials.
3. **Run the MCP Server:**

   ```bash
   python your_mcp_server_script.py  # Replace with the actual name of your MCP server script
   ```

4. **Run the Semantic Kernel Script:**

   ```bash
   python your_semantic_kernel_script.py  # Replace with the actual name of your Semantic Kernel script
   ```

**Important Considerations:**

* **Security:** Never hardcode API keys directly into your code. Use environment variables or a secure configuration management system (like the MCP Server) to store sensitive information.
* **Error Handling:** Implement robust error handling to gracefully handle network errors, API errors, and other potential issues.
* **Scalability:** For production environments, consider using a more scalable MCP Server implementation (e.g., using a database to store model configurations and a more robust web framework).
* **MCP Server Design:** The design of your MCP Server will depend on your specific requirements. You might want to add features like:
  * Authentication and authorization.
  * Rate limiting.
  * Model versioning.
  * A/B testing.
  * Monitoring and logging.
* **Asynchronous Operations:** Use `async` and `await` for I/O-bound operations (like network requests) to improve performance.
* **Prompt Engineering:** Experiment with different prompts to optimize the performance of your skills.
* **Semantic Kernel Documentation:** Refer to the official Microsoft Semantic Kernel documentation for the most up-to-date information and examples: [https://learn.microsoft.com/en-us/semantic-kernel/](https://learn.microsoft.com/en-us/semantic-kernel/)

This comprehensive example should give you a solid foundation for using Microsoft Semantic Kernel with OpenAI and a custom MCP Server. Remember to adapt the MCP Server implementation to your specific needs and security requirements.
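The hypothetical `GET /models/<model_name>` config endpoint described above can also be sketched without Flask at all. The following is a minimal, stdlib-only stand-in with the same behavior (JSON config on 200, `{"error": ...}` on 404); the model name and parameter values are illustrative placeholders, not real configuration:

```python
import json
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholder configurations; a real MCP-style server would load these
# from a database or secret store, never hardcode credentials.
MODEL_CONFIGURATIONS = {
    "gpt-3.5-turbo": {"temperature": 0.7, "top_p": 1.0},
}


class ConfigHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expect paths of the form /models/<model_name>
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "models" and parts[1] in MODEL_CONFIGURATIONS:
            body = json.dumps(MODEL_CONFIGURATIONS[parts[1]]).encode()
            self.send_response(200)
        else:
            body = json.dumps({"error": "Model not found"}).encode()
            self.send_response(404)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep request logging quiet


# Port 0 asks the OS for a free port; serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), ConfigHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_address[1]}"

# Known model: 200 with its JSON configuration
with urllib.request.urlopen(f"{base}/models/gpt-3.5-turbo") as resp:
    config = json.loads(resp.read())
print(config["temperature"])  # 0.7

# Unknown model: 404
try:
    urllib.request.urlopen(f"{base}/models/unknown-model")
    status = 200
except urllib.error.HTTPError as err:
    status = err.code
print(status)  # 404

server.shutdown()
```

A client like the `get_model_config()` helper above would consume this endpoint unchanged, since it only depends on the URL shape and JSON response, not on the server framework.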

Linear MCP Server

Mirror of