Discover Awesome MCP Servers
Extend your agent with 15,975 capabilities via MCP servers.
- All (15,975)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)
File Merger MCP Server
Merges multiple files into a single file through a simple MCP interface. Provides a safe way to combine files by restricting access to permitted directories only.
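A minimal sketch of how such a tool might look, assuming the official `mcp` Python SDK's FastMCP helper; the `ALLOWED_DIRS` list and the `merge_files` tool name are illustrative, not this server's actual API:

```python
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("file-merger")

# Hypothetical allow-list; the real server's configuration may differ.
ALLOWED_DIRS = [Path("/data/allowed").resolve()]

def _is_allowed(path: Path) -> bool:
    """Reject any path that escapes the allowed directories."""
    resolved = path.resolve()  # normalizes "..", symlink-free prefix check
    return any(resolved.is_relative_to(root) for root in ALLOWED_DIRS)

@mcp.tool()
def merge_files(sources: list[str], destination: str) -> str:
    """Concatenate the source files into the destination file."""
    paths = [Path(p) for p in sources] + [Path(destination)]
    if not all(_is_allowed(p) for p in paths):
        raise ValueError("path outside the allowed directories")
    with open(destination, "w", encoding="utf-8") as out:
        for src in sources:
            out.write(Path(src).read_text(encoding="utf-8"))
    return f"Merged {len(sources)} files into {destination}"

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```

The key design point is resolving every path before the prefix check, so `..` tricks cannot escape the sandbox.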
GraphQL MCP Server
A TypeScript server that gives Claude AI seamless access to any GraphQL API through the Model Context Protocol.
serverMCprt
Test.
mcp-server-iris: An InterSystems IRIS MCP server
Execute SQL queries on InterSystems IRIS. Perform monitoring and manipulations with Interoperability.
Make.com MCP Server
An MCP server implementation that integrates parts of the Make.com API.
MCP Server for JIRA
A Model Context Protocol server that lets ChatGPT and other AI assistants interact directly with JIRA issues, currently offering the ability to retrieve issue details.
Vibe-Coder MCP Server
An MCP server that implements a structured workflow for LLM-based coding, guiding development through feature clarification, documentation generation, phased implementation, and progress tracking.
Maccam912_searxng MCP Server
Mirror of
Python CLI Tool for Generating MCP Servers from API Specs
Generates an MCP server using Anthropic's SDK from OpenAPI or GraphQL specs provided as input.
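The core move in such a generator, mapping one spec operation to one MCP tool, can be sketched roughly as follows (assuming the official `mcp` Python SDK and the `httpx` client; the Swagger Petstore URL and operation are placeholders standing in for whatever spec you feed the generator):

```python
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("petstore")
BASE_URL = "https://petstore3.swagger.io/api/v3"  # placeholder spec server

# One OpenAPI operation (here GET /pet/{petId}, operationId "getPetById")
# becomes one MCP tool; the docstring doubles as the tool description.
@mcp.tool()
def get_pet_by_id(pet_id: int) -> dict:
    """Find a pet by ID (generated from the OpenAPI operation getPetById)."""
    response = httpx.get(f"{BASE_URL}/pet/{pet_id}", timeout=10.0)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    mcp.run()
```

A real generator would walk the spec's `paths` object and register one such handler per operation; the snippet shows the shape of the generated output rather than the generator itself.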
Linear MCP Server
A Model Context Protocol server that enables large language models to interact with Linear's issue tracking system, allowing management of issues, projects, teams, and other Linear resources.
AlphaVantage MCP Server
An MCP server that integrates with the AlphaVantage financial data API, providing access to stock market data, technical indicators, and fundamental financial information.
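As an illustration of the pattern (not this server's actual tool surface), a single AlphaVantage endpoint wrapped as an MCP tool might look like this; `TIME_SERIES_DAILY` is a real AlphaVantage function, while the tool name and environment variable are assumptions:

```python
import os

import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("alphavantage")
API_KEY = os.environ["ALPHAVANTAGE_API_KEY"]  # assumed env var name

@mcp.tool()
def daily_prices(symbol: str) -> dict:
    """Fetch daily OHLCV data for a ticker via AlphaVantage."""
    response = httpx.get(
        "https://www.alphavantage.co/query",
        params={
            "function": "TIME_SERIES_DAILY",  # documented AlphaVantage function
            "symbol": symbol,
            "apikey": API_KEY,
        },
        timeout=10.0,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    mcp.run()
```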
Mattermost MCP Server
An MCP server that lets Claude and other MCP clients interact with Mattermost workspaces, providing channel management, messaging capabilities, and topic monitoring functionality.
EdgeOne Pages MCP Server
A service that enables rapid deployment of HTML content to EdgeOne Pages and automatically generates publicly accessible URLs for the deployed content.
MCP Server DevOps Bridge 🚀
Azure Log Analytics MCP Server
An MCP server design for querying Azure Log Analytics in natural language. Three architectures are outlined:

- **Direct natural-language-to-KQL translation:** an NLU engine (a pre-trained LLM such as GPT-4 or an open-source alternative like Llama 2, or a custom model built with Rasa, Dialogflow, or Microsoft LUIS) extracts the intent and entities from the user's request, and a query builder turns them into KQL (Kusto Query Language) via templates, rules, or a translation model. This is the most flexible design but also the most complex to build, tune, and run. For example, "Show me the number of errors in the last hour for the web server" yields intent `count_events` with entities `event_type = "error"`, `time_range = "last hour"`, `source = "web server"`, and the query:

```kusto
AppEvents
| where EventType == "error"
| where TimeGenerated > ago(1h)
| where Source == "web server"
| summarize count()
```

- **Intent-based querying with predefined KQL:** each recognized intent (e.g. `get_cpu_usage`) maps to a pre-written KQL template whose placeholders are filled with extracted entities. Simpler, more predictable, and easier to maintain, but limited to the predefined intents. "What is the CPU usage for server1?" selects this template and substitutes `{computer_name}` = "server1":

```kusto
Perf
| where CounterName == "Processor Utilization"
| where Computer == "{computer_name}"
| summarize avg(CounterValue) by bin(TimeGenerated, 1m)
```

- **Hybrid:** route common requests through intent templates and fall back to direct translation for ad-hoc queries, trading extra implementation complexity for flexibility.

Cross-cutting concerns for any of these: sanitize user input to prevent KQL injection and enforce role-based access control; design for concurrent users and cache where possible; return informative errors and log failures; make the system schema-aware (hardcode the schema, retrieve it via the Azure Resource Manager API, or maintain a schema registry); keep context across conversation turns (after "Show me errors," the follow-up "What about warnings?" should reuse the same source and time range); and mitigate LLM hallucinations by validating generated KQL with a parser before execution, checking model confidence scores, or adding human review for complex queries.

A simplified, illustrative implementation in Python using Azure OpenAI and Azure Log Analytics (requires an Azure subscription, an Azure OpenAI resource, and a Log Analytics workspace):

```python
import os
from datetime import timedelta

import openai  # pre-1.0 openai SDK, which exposed the Azure completions API
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Azure OpenAI configuration
openai.api_type = "azure"
openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
openai.api_version = "2023-05-15"
openai.api_key = os.getenv("AZURE_OPENAI_KEY")

# Azure Log Analytics configuration
workspace_id = os.getenv("AZURE_LOG_ANALYTICS_WORKSPACE_ID")
logs_client = LogsQueryClient(DefaultAzureCredential())


def generate_kql(natural_language_query):
    """Generate a KQL query from natural language using Azure OpenAI."""
    prompt = (
        "You are an expert in Azure Log Analytics Kusto Query Language (KQL).\n"
        "Translate the following natural language query into a KQL query that "
        "can be executed against Azure Log Analytics. Only return the KQL "
        "query. Do not include any other text or explanations.\n\n"
        f"Natural Language Query: {natural_language_query}"
    )
    try:
        response = openai.Completion.create(
            engine="your-deployment-name",  # replace with your deployment name
            prompt=prompt,
            max_tokens=200,
            temperature=0.2,  # low temperature for more deterministic output
        )
        return response.choices[0].text.strip()
    except Exception as e:
        print(f"Error generating KQL: {e}")
        return None


def execute_kql_query(kql_query):
    """Execute a KQL query against Azure Log Analytics (last hour)."""
    try:
        response = logs_client.query_workspace(
            workspace_id, kql_query, timespan=timedelta(hours=1)
        )
        return response.tables[0].rows  # assumes one table in the result
    except Exception as e:
        print(f"Error executing KQL: {e}")
        return None


def main():
    natural_language_query = input("Enter your query: ")
    kql_query = generate_kql(natural_language_query)
    if not kql_query:
        print("Failed to generate KQL query.")
        return
    print(f"Generated KQL Query: {kql_query}")
    results = execute_kql_query(kql_query)
    if results:
        print("Results:")
        for row in results:
            print(row)
    else:
        print("No results returned from Log Analytics.")


if __name__ == "__main__":
    main()
```

To run it: set the `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_KEY`, and `AZURE_LOG_ANALYTICS_WORKSPACE_ID` environment variables; `pip install openai azure-identity azure-monitor-query`; and replace `"your-deployment-name"` with the name of your Azure OpenAI deployment. Before production use: harden the error handling, add authentication and authorization, validate the generated KQL before executing it, refine the prompt (few-shot examples of natural-language/KQL pairs improve accuracy), build proper result formatting, and monitor Azure OpenAI costs (caching helps).

Choosing an approach: start with intent-based querying if your common queries are well defined; move to the hybrid design when you need more flexibility; reserve full direct translation for cases where arbitrary queries must be supported and you have the resources to build and maintain a complex system.
Gemini Context MCP Server
An MCP server implementation that maximizes Gemini's 2-million-token context window, with tools for efficient context management and caching across multiple AI client applications.
OpenAI MCP Server
Mirror of
MCP Server for Running E2E Tests
An E2E MCP server that automates validation of your AI-generated code.
py-poetry
MCP Tools
A command-line interface for interacting with MCP (Model Context Protocol) servers over stdio and HTTP transports.
Wikipedia
Algorand MCP Implementation
A comprehensive MCP server for tool interactions (40+) and resource accessibility (60+) with the Algorand blockchain, plus many useful prompts.
Petstore3
A proxy server that connects AI agents to external APIs by dynamically translating OpenAPI specifications into standardized MCP tools, enabling seamless interaction without custom integration code.
mcp-server-zenn: Unofficial MCP server for Zenn
An unofficial Model Context Protocol server for Zenn that fetches articles and books from the Zenn platform through its API.
Agentis MCP
A Python framework for creating AI agents that use MCP servers as tools. Compatible with any MCP server and model provider.
Code Summarizer
Enables LLM tools like Claude Desktop and Cursor AI to access and summarize code files through a Model Context Protocol server, providing structured access to codebase content without manual copying.
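A rough sketch of the idea, assuming the `mcp` Python SDK; the truncation cap and the `read_source` tool name are illustrative, not this project's actual interface:

```python
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("code-summarizer")
MAX_CHARS = 20_000  # illustrative cap so large files fit the model's context

@mcp.tool()
def read_source(path: str) -> str:
    """Return a file's contents (truncated) for the client LLM to summarize."""
    text = Path(path).read_text(encoding="utf-8", errors="replace")
    if len(text) > MAX_CHARS:
        text = text[:MAX_CHARS] + "\n[truncated]"
    return text

if __name__ == "__main__":
    mcp.run()
```

The summarization itself happens on the client side; the server's job is only to expose code content in a structured, size-bounded way.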
memos-mcp-server
An MCP (Model Context Protocol) server for Memos.
Azure DevOps MCP (Model Context Protocol)
A reference server implementation for the Model Context Protocol that enables AI assistants to interact with Azure DevOps resources and programmatically perform operations such as project management, work item tracking, repository operations, and code search.
Krep MCP Server
A high-performance string search utility with Model Context Protocol integration that enables AI assistants to perform efficient pattern searches across files and text strings.
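The tool surface could be imagined along these lines (a sketch only, not krep's actual implementation, which is a high-performance native utility; plain `re` stands in here):

```python
import re
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("pattern-search")

@mcp.tool()
def search_file(pattern: str, path: str) -> list[dict]:
    """Return line numbers and text of lines matching a regex pattern."""
    regex = re.compile(pattern)
    text = Path(path).read_text(encoding="utf-8", errors="replace")
    matches = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if regex.search(line):
            matches.append({"line": lineno, "text": line})
    return matches

if __name__ == "__main__":
    mcp.run()
```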
Scast
Converts code into UML diagrams and flowcharts through static analysis, enabling visualization of code structure and explanation of its functionality.
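The static-analysis core of such a tool can be approximated in a few lines with Python's standard `ast` module, emitting a Mermaid class diagram (illustrative only; Scast's own analysis and output format may differ):

```python
import ast

def to_mermaid(source: str) -> str:
    """Emit a Mermaid class diagram for the classes in a Python source string."""
    tree = ast.parse(source)
    lines = ["classDiagram"]
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            lines.append(f"    class {node.name} {{")
            for item in node.body:
                if isinstance(item, (ast.FunctionDef, ast.AsyncFunctionDef)):
                    lines.append(f"        +{item.name}()")
            lines.append("    }")
            # Record inheritance edges for simple `class Child(Base)` cases.
            for base in node.bases:
                if isinstance(base, ast.Name):
                    lines.append(f"    {base.id} <|-- {node.name}")
    return "\n".join(lines)

sample = (
    "class Animal:\n    def speak(self): ...\n\n"
    "class Dog(Animal):\n    def speak(self): ..."
)
print(to_mermaid(sample))  # prints both classes plus "Animal <|-- Dog"
```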