Discover Awesome MCP Servers

Extend your agent with 10,066 capabilities via MCP servers.

BigQuery Analysis MCP Server

A server that lets you run and validate SQL queries against Google BigQuery, with safety features that prevent data modification and excessive processing.
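
The server's source isn't reproduced here, but the safety checks it describes map naturally onto BigQuery's dry-run feature, which compiles a query and estimates its cost without executing it. A minimal sketch of that pattern with the google-cloud-bigquery client (the `validate_query` helper and the 10 GiB limit are assumptions for illustration):

```python
from google.cloud import bigquery

def validate_query(sql: str, max_bytes: int = 10 * 1024**3) -> int:
    """Validates a query without running it; rejects DML/DDL and oversized scans."""
    client = bigquery.Client()
    # dry_run=True compiles the query and estimates cost without executing it.
    job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
    job = client.query(sql, job_config=job_config)

    if job.statement_type != "SELECT":  # blocks UPDATE, DELETE, DROP, etc.
        raise ValueError(f"Only SELECT statements are allowed, got {job.statement_type}")
    if job.total_bytes_processed > max_bytes:
        raise ValueError(f"Query would scan {job.total_bytes_processed} bytes, over the limit")
    return job.total_bytes_processed
```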

Unreal Engine Generative AI Support Plugin

UnrealMCP is here!! Automatic blueprint and scene generation from AI!! An Unreal Engine plugin for LLM/GenAI models and an MCP UE5 server. Supports the Claude Desktop App, Windsurf, and Cursor; also includes OpenAI's GPT-4o, DeepSeek R1, and Claude 3.7 Sonnet APIs, with plans to add Gemini, Grok 3, and audio & realtime APIs soon.

mcp-flux-schnell MCP Server

A TypeScript-based MCP server that enables text-to-image generation using Cloudflare's Flux Schnell model API.

MailchimpMCP

Some utilities for developing an MCP server for the Mailchimp API

DVMCP: Data Vending Machine Context Protocol

DVMCP is a bridge implementation that connects Model Context Protocol (MCP) servers to Nostr's Data Vending Machine (DVM) ecosystem.

GitHub MCP Server

Mirror of

MCP Server for Stock Market Analysis

MCP Server Reddit

Mirror of

mcp-google-sheets: A Google Sheets MCP server

A Model Context Protocol server that integrates with Google Drive and Google Sheets, letting users create, read, update, and manage spreadsheets through natural-language commands.
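
The natural-language layer aside, the underlying spreadsheet operations resemble the sketch below, which uses the gspread library as a stand-in for whatever client the server actually wraps (the credentials filename and sheet contents are placeholders):

```python
import gspread

# Authenticate with a Google service account (assumed credentials file).
gc = gspread.service_account(filename="service_account.json")

sh = gc.create("Quarterly Budget")           # create a spreadsheet
ws = sh.sheet1
ws.update_acell("A1", "Item")                # update a single cell
ws.append_row(["Office chairs", 12, 149.0])  # append a row
print(ws.get_all_values())                   # read everything back
```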

MCP Ayd Server

Mirror of

Quantitative Researcher MCP Server

Provides tools for managing quantitative research knowledge graphs, enabling structured representation of research projects, datasets, variables, hypotheses, statistical tests, models, and results.

LiteMCP

A TypeScript framework for building MCP servers elegantly

eRegulations MCP Server

A Model Context Protocol server implementation that provides structured, AI-friendly access to eRegulations data, making it easier for AI models to answer user questions about administrative procedures.

mcp-weather-server

Here's an example Model Context Protocol (MCP) server written in Python that provides weather data to LLMs. This is a simplified example to illustrate the core concepts. It uses Flask for the web server and assumes a basic understanding of how MCP works (i.e., the LLM sends a request with a query, and the server responds with relevant context).

```python
from flask import Flask, request, jsonify
import datetime
import random  # For simulating weather data

app = Flask(__name__)

# In a real application, you'd replace this with a database or API call
# to a real weather service. This is just for demonstration.
def get_weather_data(city):
    """Simulates fetching weather data for a given city."""
    temperature = random.randint(10, 35)  # Temperature in Celsius
    conditions = random.choice(["Sunny", "Cloudy", "Rainy", "Windy"])
    humidity = random.randint(40, 90)  # Humidity percentage
    return {
        "city": city,
        "temperature": temperature,
        "conditions": conditions,
        "humidity": humidity,
        "timestamp": datetime.datetime.now().isoformat(),
    }

@app.route("/context", methods=["POST"])
def provide_context():
    """
    Endpoint that receives a query from the LLM and returns weather context.
    This is the core of the MCP server.
    """
    try:
        data = request.get_json()
        query = data.get("query")

        # Extract the city from the query (very basic example)
        if "weather in" in query.lower():
            city = query.lower().split("weather in ")[1].split("?")[0].strip()
        elif "tiempo en" in query.lower():
            city = query.lower().split("tiempo en ")[1].split("?")[0].strip()
        else:
            return jsonify({"error": "Could not determine city from query"}), 400

        weather_data = get_weather_data(city)

        # Format the weather data into a context string
        context = (
            f"The current weather in {weather_data['city']} is {weather_data['conditions']}, "
            f"with a temperature of {weather_data['temperature']}°C and humidity of "
            f"{weather_data['humidity']}%. This data was retrieved at {weather_data['timestamp']}."
        )

        response = {
            "context": context,
            "source": "MyWeatherService",  # Identify the source of the data
            "confidence": 0.8,  # Indicate the confidence level (0.0 to 1.0)
        }
        return jsonify(response), 200

    except Exception as e:
        print(f"Error processing request: {e}")
        return jsonify({"error": str(e)}), 500

if __name__ == "__main__":
    app.run(debug=True, host="0.0.0.0", port=5000)
```

Key improvements and explanations:

* **Clearer Structure:** The code is organized into functions for better readability and maintainability.
* **Error Handling:** Includes a `try...except` block to catch potential errors during request processing and return appropriate error responses. This is *crucial* for a production system.
* **City Extraction:** The city-extraction logic is more robust. It handles cases where the city name might have extra spaces or punctuation, and it checks that a city was actually found. Crucially, it handles Spanish queries as well.
* **Context Formatting:** The `context` string is formatted to be informative and natural-sounding for the LLM. It includes the city, temperature, conditions, humidity, and timestamp.
* **Source and Confidence:** The response includes `source` and `confidence` fields, which are important for the LLM to understand the origin and reliability of the data. The confidence level is a placeholder; in a real system, you'd calculate this based on the accuracy and reliability of your data source.
* **Realistic Data Simulation:** The `get_weather_data` function simulates realistic weather data, including temperature, conditions, and humidity.
* **Flask Setup:** The Flask app is configured to listen on all interfaces (`0.0.0.0`) and port 5000, making it accessible from other machines on the network. The `debug=True` option is useful for development but should be disabled in production.
* **JSON Handling:** Uses `jsonify` to ensure proper JSON formatting in the response.
* **Comments:** Includes detailed comments to explain the purpose of each section of the code.
* **Spanish Query Handling:** The code attempts to extract the city name from queries in Spanish, using the phrase "tiempo en".

**How to Run:**

1. **Save:** Save the code as a Python file (e.g., `weather_server.py`).
2. **Install Flask:** `pip install Flask`
3. **Run:** `python weather_server.py`

**How to Test (using `curl`):**

Open a terminal and run the following `curl` command:

```bash
curl -X POST -H "Content-Type: application/json" -d '{"query": "What is the weather in London?"}' http://localhost:5000/context
```

Or, in Spanish:

```bash
curl -X POST -H "Content-Type: application/json" -d '{"query": "Cuál es el tiempo en Madrid?"}' http://localhost:5000/context
```

You should see a JSON response similar to this:

```json
{
  "context": "The current weather in London is Sunny, with a temperature of 25°C and humidity of 60%. This data was retrieved at 2023-10-27T10:30:00.000000.",
  "source": "MyWeatherService",
  "confidence": 0.8
}
```

**Important Considerations for Production:**

* **Authentication/Authorization:** Implement proper authentication and authorization to protect your MCP server from unauthorized access. Use API keys, OAuth, or other security mechanisms.
* **Data Source:** Replace the simulated weather data with a real weather API (e.g., OpenWeatherMap, AccuWeather). Handle API rate limits and errors gracefully.
* **Scalability:** For high-volume usage, consider using a more scalable web server (e.g., Gunicorn, uWSGI) and deploying your MCP server on a cloud platform (e.g., AWS, Google Cloud, Azure).
* **Monitoring and Logging:** Implement monitoring and logging to track the performance and health of your MCP server. Use tools like Prometheus, Grafana, and the ELK stack.
* **Data Validation:** Validate the data you receive from the weather API to ensure it's accurate and consistent.
* **Caching:** Implement caching to reduce the load on your weather API and improve response times (see the sketch after this entry).
* **Rate Limiting:** Implement rate limiting to prevent abuse of your MCP server.
* **Security:** Follow security best practices to protect your MCP server from vulnerabilities. Use HTTPS, sanitize inputs, and keep your software up to date.
* **Context Engineering:** Experiment with different context formats and content to optimize the performance of the LLM. Consider including additional information, such as historical weather data or forecasts.
* **Asynchronous Operations:** For long-running operations (e.g., complex API calls), use asynchronous tasks to avoid blocking the main thread. Libraries like Celery or asyncio can be helpful.
* **Model Context Protocol Specification:** Refer to the official Model Context Protocol specification for the latest guidelines and best practices.

This example provides a solid foundation for building a real-world MCP server. Remember to adapt it to your specific needs and requirements. Good luck!
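
As an illustration of the caching point above, here is a minimal sketch of an in-process TTL cache wrapped around `get_weather_data`; the 60-second TTL is an arbitrary assumption:

```python
import time

_cache = {}  # city -> (fetched_at, data)
CACHE_TTL_SECONDS = 60  # arbitrary; tune to your data source's freshness

def get_weather_data_cached(city):
    """Returns cached weather data if still fresh, otherwise fetches and caches it."""
    now = time.time()
    hit = _cache.get(city)
    if hit and now - hit[0] < CACHE_TTL_SECONDS:
        return hit[1]
    data = get_weather_data(city)  # the simulated fetch from the example above
    _cache[city] = (now, data)
    return data
```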

Data Visualization MCP Server

Mirror of

Wikipedia MCP Image Crawler

A Wikipedia image search tool. It respects the images' Creative Commons licenses, letting you use them in your projects through Claude Desktop/Cline.

Deep Research MCP Server 🚀

An MCP deep research server that uses Gemini to create a research AI agent.

MCP Server Example

An MCP server FastDemo with a web UI.

Jira MCP Server

A Model Context Protocol server that provides Jira integration, allowing Large Language Models to interact with Jira projects, boards, sprints, and issues through natural language.
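
This project's code isn't shown here; for context, the Jira side of such an integration typically goes through Jira's REST API. A minimal sketch of creating an issue (the site URL and issue type are placeholder assumptions, not this server's configuration):

```python
import requests

JIRA_URL = "https://your-domain.atlassian.net"  # placeholder Jira Cloud site

def create_issue(email: str, api_token: str, project_key: str, summary: str) -> str:
    """Creates a Jira issue via the REST API and returns its key (e.g. 'PROJ-42')."""
    resp = requests.post(
        f"{JIRA_URL}/rest/api/2/issue",
        auth=(email, api_token),  # Jira Cloud uses email + API token for basic auth
        json={"fields": {
            "project": {"key": project_key},
            "summary": summary,
            "issuetype": {"name": "Task"},
        }},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["key"]
```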

AISDK MCP Bridge

Bridge package enabling seamless integration between Model Context Protocol (MCP) servers and AI SDK tools. Supports multiple server types, real-time communication, and TypeScript.

Wordware MCP Server

Anki MCP Server

A Model Context Protocol server that lets LLMs interact with the Anki flashcard software, enabling features such as creating decks, adding notes, searching for cards, and managing card content through natural language.
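
The transport isn't specified in this listing, but MCP servers for Anki commonly talk to the AnkiConnect add-on over local HTTP. A minimal sketch of that pattern, assuming AnkiConnect on its default port 8765:

```python
import json
import urllib.request

def anki_request(action, **params):
    """Sends one AnkiConnect API call (version 6) and returns its result."""
    payload = json.dumps({"action": action, "version": 6, "params": params}).encode()
    req = urllib.request.Request("http://127.0.0.1:8765", data=payload)
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    if reply.get("error"):
        raise RuntimeError(reply["error"])
    return reply["result"]

# Create a deck, then add a basic note to it.
anki_request("createDeck", deck="Spanish::Vocabulary")
anki_request("addNote", note={
    "deckName": "Spanish::Vocabulary",
    "modelName": "Basic",
    "fields": {"Front": "la manzana", "Back": "the apple"},
    "tags": ["mcp-example"],
})
```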

MCP Prompt Server

A Model Context Protocol-based server that provides predefined prompt templates for tasks such as code review and API documentation generation, enabling more efficient workflows in the Cursor/Windsurf editors.
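
A prompt-template server of this kind can be as simple as named templates with placeholders. A minimal sketch, where the template names and fields are invented for illustration rather than taken from this server:

```python
# Hypothetical prompt templates keyed by task name.
PROMPT_TEMPLATES = {
    "code_review": (
        "Review the following {language} code for bugs, style issues, and "
        "security problems. Respond with a numbered list.\n\n{code}"
    ),
    "api_docs": "Write reference documentation for this API endpoint:\n\n{spec}",
}

def render_prompt(name, **fields):
    """Fills the named template with the caller's fields."""
    return PROMPT_TEMPLATES[name].format(**fields)

print(render_prompt("code_review", language="Python", code="def f(x): return x / 0"))
```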

LlamaCloud MCP Server

Mirror of

MCP Server Giphy

Lets AI models search, retrieve, and use GIFs from Giphy, with features such as content filtering, multiple search methods, and comprehensive metadata.
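
Under the hood, a server like this wraps Giphy's public search endpoint. A minimal sketch of one search method with rating-based content filtering (the helper name is an assumption):

```python
import requests

def search_gifs(query: str, api_key: str, rating: str = "g", limit: int = 5) -> list:
    """Searches Giphy, filtering content by rating; returns title/URL metadata."""
    resp = requests.get(
        "https://api.giphy.com/v1/gifs/search",
        params={"api_key": api_key, "q": query, "rating": rating, "limit": limit},
        timeout=10,
    )
    resp.raise_for_status()
    return [
        {"title": gif["title"], "url": gif["images"]["original"]["url"]}
        for gif in resp.json()["data"]
    ]
```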

piapi-mcp-server

Mirror of

Mattermost MCP Server

An MCP server that lets Claude and other MCP clients interact with Mattermost workspaces, providing channel management, messaging capabilities, and topic-monitoring functionality.
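
For context, messaging a Mattermost workspace goes through its v4 REST API; the sketch below shows the general pattern for posting to a channel (the server URL is a placeholder, and this is not necessarily this project's code):

```python
import requests

MATTERMOST_URL = "https://chat.example.com"  # placeholder workspace URL

def post_message(token: str, channel_id: str, message: str) -> dict:
    """Creates a post in a Mattermost channel via the v4 REST API."""
    resp = requests.post(
        f"{MATTERMOST_URL}/api/v4/posts",
        headers={"Authorization": f"Bearer {token}"},
        json={"channel_id": channel_id, "message": message},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```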

EdgeOne Pages MCP Server

A service that enables rapid deployment of HTML content to EdgeOne Pages and automatically generates publicly accessible URLs for the deployed content.

Azure Log Analytics MCP Server

Here are a few ways to approach building an MCP (Model Context Protocol) server for querying Azure Log Analytics using natural language, along with considerations for each:

**Conceptual Approaches**

1. **Direct Natural Language to KQL (Kusto Query Language) Translation:**

   * **Concept:** Take the user's natural language query and translate it directly into a KQL query that can be executed against Azure Log Analytics.
   * **Components:**
     * **Natural Language Understanding (NLU) Engine:** This is the heart of the system. It needs to understand the intent, entities, and relationships within the user's query. Options include:
       * **Pre-trained Language Models (LLMs):** Models like GPT-3.5, GPT-4, or open-source alternatives (e.g., Llama 2, Falcon) can be fine-tuned for this specific task. They are powerful but require careful prompting and potentially fine-tuning with KQL examples.
       * **Custom NLU Models:** Built using frameworks like Rasa, Dialogflow, or Microsoft LUIS. These offer more control but require significant training data and expertise.
     * **KQL Query Builder:** A module that takes the output from the NLU engine (intent, entities) and constructs a valid KQL query. This might involve:
       * **Template-based generation:** Using predefined KQL templates and filling them in with the extracted entities.
       * **Rule-based generation:** Applying rules to map natural language concepts to KQL syntax.
       * **Neural Machine Translation (NMT):** Training a model to directly translate natural language to KQL. This is more complex but potentially more flexible.
     * **Azure Log Analytics API Integration:** Code to execute the generated KQL query against Azure Log Analytics and retrieve the results.
     * **Response Formatting:** A module to format the results from Azure Log Analytics into a user-friendly natural language response.
   * **Pros:**
     * Potentially very powerful and flexible.
     * Can handle complex queries if the NLU and KQL generation are well-designed.
   * **Cons:**
     * Very complex to build and maintain.
     * Requires a deep understanding of both natural language processing and KQL.
     * Performance can be an issue if the NLU and KQL generation are not optimized.
     * LLMs can be expensive to run, especially for complex queries.
   * **Example:**
     * **User Query:** "Show me the number of errors in the last hour for the web server."
     * **NLU Output:**
       * Intent: `count_events`
       * Entity: `event_type` = "error"
       * Entity: `time_range` = "last hour"
       * Entity: `source` = "web server"
     * **KQL Query:**

       ```kusto
       AppEvents
       | where EventType == "error"
       | where TimeGenerated > ago(1h)
       | where Source == "web server"
       | summarize count()
       ```

2. **Intent-Based Querying with Predefined KQL Queries:**

   * **Concept:** Instead of translating arbitrary natural language into KQL, define a set of common intents (e.g., "get_cpu_usage", "list_failed_logins") and associate each intent with a pre-written KQL query. The NLU engine identifies the user's intent and then executes the corresponding KQL query.
   * **Components:**
     * **NLU Engine:** Primarily focused on intent recognition. Entity extraction is still important for parameterizing the KQL queries. Options include Rasa, Dialogflow, LUIS, or fine-tuned LLMs.
     * **Intent-KQL Mapping:** A database or configuration file that maps each intent to its corresponding KQL query. The KQL queries can include placeholders for entities extracted by the NLU engine.
     * **KQL Query Execution:** Code to execute the selected KQL query against Azure Log Analytics, substituting the extracted entities into the placeholders.
     * **Response Formatting:** A module to format the results from Azure Log Analytics into a user-friendly natural language response.
   * **Pros:**
     * Simpler to implement than direct KQL translation.
     * More predictable performance.
     * Easier to maintain.
   * **Cons:**
     * Less flexible. Can only handle queries that have a predefined intent.
     * Requires careful planning to define the set of intents and KQL queries.
   * **Example:**
     * **Intent:** `get_cpu_usage`
     * **KQL Query Template:**

       ```kusto
       Perf
       | where CounterName == "Processor Utilization"
       | where Computer == "{computer_name}"
       | summarize avg(CounterValue) by bin(TimeGenerated, 1m)
       ```

     * **User Query:** "What is the CPU usage for server1?"
     * **NLU Output:**
       * Intent: `get_cpu_usage`
       * Entity: `computer_name` = "server1"
     * **Executed KQL Query:**

       ```kusto
       Perf
       | where CounterName == "Processor Utilization"
       | where Computer == "server1"
       | summarize avg(CounterValue) by bin(TimeGenerated, 1m)
       ```

3. **Hybrid Approach:**

   * **Concept:** Combine the strengths of both approaches. Use intent-based querying for common tasks and direct KQL translation for more complex or ad-hoc queries.
   * **Components:**
     * **NLU Engine:** Capable of both intent recognition and entity extraction.
     * **Intent-KQL Mapping:** As in the intent-based approach.
     * **KQL Query Builder:** As in the direct KQL translation approach.
     * **Decision Logic:** A module that determines whether to use the intent-based approach or the direct KQL translation approach based on the complexity of the user's query.
     * **Azure Log Analytics API Integration.**
     * **Response Formatting.**
   * **Pros:**
     * More flexible than the intent-based approach.
     * More manageable than the direct KQL translation approach.
   * **Cons:**
     * More complex to implement than either of the individual approaches.

**Key Considerations for All Approaches**

* **Security:** Carefully sanitize user input to prevent KQL injection attacks. Implement role-based access control to ensure that users can only access the data they are authorized to see.
* **Scalability:** Design the system to handle a large number of concurrent users and queries. Consider using caching to improve performance.
* **Error Handling:** Provide informative error messages to the user when a query fails. Implement logging to help diagnose problems.
* **Data Schema Awareness:** The system needs to "know" the schema of your Log Analytics data (tables, columns, data types). This is crucial for accurate KQL generation. You can achieve this by:
  * **Hardcoding:** Defining the schema in the code (suitable for simple cases).
  * **Metadata API:** Using the Azure Resource Manager API to retrieve the schema information dynamically.
  * **Schema Registry:** Maintaining a separate schema registry that the system can query.
* **KQL Best Practices:** The generated KQL queries should follow KQL best practices for performance and efficiency.
* **User Experience:** Provide a clear and intuitive user interface. Offer suggestions and auto-completion to help users formulate their queries.
* **Context Management:** Maintain context across multiple turns of a conversation. For example, if the user asks "Show me errors," and then "What about warnings?", the system should understand that the user is still referring to the same data source and time range.
* **Hallucinations (for LLMs):** LLMs can sometimes generate incorrect or nonsensical KQL queries. Implement mechanisms to detect and mitigate hallucinations, such as:
  * **Validation:** Validate the generated KQL query against a KQL parser before executing it (a minimal guard is sketched after this entry).
  * **Confidence Scores:** Use the LLM's confidence scores to identify potentially unreliable queries.
  * **Human-in-the-Loop:** Involve a human to review and approve complex queries.

**Example Implementation using Python and Azure OpenAI (Illustrative)**

This is a simplified example to give you a starting point. It uses Azure OpenAI to translate natural language to KQL. You'll need an Azure subscription, an Azure OpenAI resource, and an Azure Log Analytics workspace.

```python
import os
from datetime import timedelta

import openai
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Configure Azure OpenAI
openai.api_type = "azure"
openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")  # Your endpoint
openai.api_version = "2023-05-15"  # Or the latest version
openai.api_key = os.getenv("AZURE_OPENAI_KEY")  # Your API key

# Configure Azure Log Analytics
workspace_id = os.getenv("AZURE_LOG_ANALYTICS_WORKSPACE_ID")  # Your workspace ID
credential = DefaultAzureCredential()
logs_client = LogsQueryClient(credential)

def generate_kql(natural_language_query):
    """Generates a KQL query from natural language using Azure OpenAI."""
    prompt = f"""
You are an expert in Azure Log Analytics Kusto Query Language (KQL).
Translate the following natural language query into a KQL query that can be
executed against Azure Log Analytics. Only return the KQL query. Do not
include any other text or explanations.

Natural Language Query: {natural_language_query}
"""
    try:
        response = openai.Completion.create(
            engine="your-deployment-name",  # Replace with your deployment name
            prompt=prompt,
            max_tokens=200,
            n=1,
            stop=None,
            temperature=0.2,  # Adjust for desired creativity
        )
        return response.choices[0].text.strip()
    except Exception as e:
        print(f"Error generating KQL: {e}")
        return None

def execute_kql_query(kql_query, workspace_id):
    """Executes a KQL query against Azure Log Analytics."""
    try:
        response = logs_client.query_workspace(
            workspace_id, kql_query, timespan=timedelta(hours=1)  # Last hour
        )
        return response.tables[0].rows  # Assuming one table in the result
    except Exception as e:
        print(f"Error executing KQL: {e}")
        return None

def format_results(results):
    """Formats the results into a user-friendly string."""
    if not results:
        return "No results found."
    formatted_output = ""
    for row in results:
        formatted_output += str(row) + "\n"  # Simple formatting
    return formatted_output

def main():
    natural_language_query = input("Enter your query: ")
    kql_query = generate_kql(natural_language_query)
    if kql_query:
        print(f"Generated KQL Query: {kql_query}")
        results = execute_kql_query(kql_query, workspace_id)
        if results:
            print("Results:\n", format_results(results))
        else:
            print("No results returned from Log Analytics.")
    else:
        print("Failed to generate KQL query.")

if __name__ == "__main__":
    main()
```

**To run this example:**

1. **Set Environment Variables:** Set the `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_KEY`, and `AZURE_LOG_ANALYTICS_WORKSPACE_ID` environment variables.
2. **Install Libraries:** `pip install "openai<1" azure-identity azure-monitor-query` (the script uses the legacy `openai.Completion` API).
3. **Replace Placeholders:** Replace `"your-deployment-name"` with the name of your Azure OpenAI deployment.
4. **Run the Script:** `python your_script_name.py`

**Important Notes about the Example:**

* **Error Handling:** The error handling is basic. You'll need to add more robust error handling for a production system.
* **Security:** This example doesn't include any security measures. You'll need to implement proper authentication and authorization.
* **Prompt Engineering:** The prompt used to generate the KQL query is simple. Experiment with different prompts to improve the accuracy of the generated queries. Consider adding examples of natural language queries and their corresponding KQL queries to the prompt.
* **Validation:** The generated KQL query is not validated before execution. You should add validation to prevent KQL injection attacks and other errors.
* **Cost:** Using Azure OpenAI can be expensive. Monitor your usage and consider using caching to reduce costs.
* **Response Formatting:** The response formatting is very basic. You'll want to create a more sophisticated response-formatting module to present the results in a user-friendly way.

**Choosing the Right Approach**

* **Start with Intent-Based Querying:** If you have a well-defined set of common queries, start with the intent-based approach. It's simpler to implement and maintain.
* **Consider a Hybrid Approach:** If you need more flexibility, consider a hybrid approach.
* **Use Direct KQL Translation as a Last Resort:** Only use direct KQL translation if you need to support arbitrary natural language queries and you have the resources to build and maintain a complex system.

Good luck building your MCP server! Let me know if you have more questions.
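
To make the validation point above concrete, here is a minimal sketch of a pre-execution guard. It is not a real KQL parser; the blocked-pattern list and the allowed-table check are illustrative assumptions:

```python
# Illustrative guard; a production system should use a real KQL parser.
BLOCKED_PATTERNS = (".drop", ".delete", ".set", ".ingest", ".alter")  # control commands

def sanity_check_kql(kql_query: str, allowed_tables: set) -> str:
    """Rejects KQL that looks like a control command or starts from an unknown table."""
    stripped = kql_query.strip()
    if not stripped:
        raise ValueError("Empty query")
    lowered = stripped.lower()
    if any(pattern in lowered for pattern in BLOCKED_PATTERNS):
        raise ValueError("Query contains a blocked management command")
    # A tabular KQL query begins with a table name before the first pipe.
    first_token = stripped.split("|")[0].strip().split()[0]
    if first_token not in allowed_tables:
        raise ValueError(f"Query must start from an allowed table, got {first_token!r}")
    return stripped

# Example: sanity_check_kql(kql_query, {"AppEvents", "Perf", "Heartbeat"})
```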

Token Minter MCP

An MCP server that provides tools for AI agents to mint ERC-20 tokens across multiple blockchains.
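
In practice, "minting across multiple blockchains" reduces to calling a token contract's `mint` function through a per-chain RPC endpoint. A hedged sketch with web3.py; the RPC URLs, ABI fragment, and mint signature are placeholder assumptions, not this server's actual configuration:

```python
from web3 import Web3

# Placeholder per-chain RPC endpoints; a real deployment would configure these.
RPC_URLS = {
    "ethereum": "https://rpc.example/eth",
    "polygon": "https://rpc.example/polygon",
}
MINT_ABI = [{  # minimal ABI fragment for an assumed mintable ERC-20
    "name": "mint", "type": "function", "stateMutability": "nonpayable",
    "inputs": [{"name": "to", "type": "address"},
               {"name": "amount", "type": "uint256"}],
    "outputs": [],
}]

def mint_tokens(chain, token_address, to, amount, private_key):
    """Signs and sends a mint transaction on the given chain; returns the tx hash."""
    w3 = Web3(Web3.HTTPProvider(RPC_URLS[chain]))
    account = w3.eth.account.from_key(private_key)
    token = w3.eth.contract(address=Web3.to_checksum_address(token_address), abi=MINT_ABI)
    tx = token.functions.mint(Web3.to_checksum_address(to), amount).build_transaction({
        "from": account.address,
        "nonce": w3.eth.get_transaction_count(account.address),
    })
    signed = account.sign_transaction(tx)
    # web3.py v7 exposes .raw_transaction (older releases used .rawTransaction)
    tx_hash = w3.eth.send_raw_transaction(signed.raw_transaction)
    return tx_hash.hex()
```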