Discover Awesome MCP Servers
Extend your agent with 27,225 capabilities via MCP servers.
- All (27,225)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)
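Every server in this directory speaks the same wire protocol: JSON-RPC 2.0 messages exchanged over stdio or HTTP. As a rough sketch of what an agent sends (the method names follow the MCP specification; the tool name and arguments below are hypothetical, and exact params vary by server):

```python
import json

# Minimal JSON-RPC 2.0 messages in the shape the MCP specification uses.
# Method names ("initialize", "tools/list", "tools/call") come from the spec;
# the "search" tool and its arguments are illustrative placeholders.

def rpc(method, params, msg_id):
    """Build a JSON-RPC 2.0 request as a JSON string."""
    return json.dumps({"jsonrpc": "2.0", "id": msg_id,
                       "method": method, "params": params})

initialize = rpc("initialize", {"protocolVersion": "2025-03-26",
                                "capabilities": {},
                                "clientInfo": {"name": "my-agent", "version": "0.1"}}, 1)
list_tools = rpc("tools/list", {}, 2)
call_tool = rpc("tools/call", {"name": "search",
                               "arguments": {"query": "tokenized assets"}}, 3)

print(call_tool)
```

The same three-message pattern (initialize, discover tools, call a tool) applies whether the transport is a stdio subprocess, SSE, or Streamable HTTP.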
RWA Pipe MCP Server
Connects AI agents to Real World Asset (RWA) data, enabling queries about tokenized assets, market trends, TVL analytics, token holders, and portfolio tracking across multiple blockchains.
NotePlan MCP Server
A Model Context Protocol server that enables Claude Desktop to interact with NotePlan.co, allowing users to query, search, create, and update notes directly from Claude conversations.
TermPipe MCP
Provides AI assistants with direct terminal access to execute commands, manage files, and run persistent REPL sessions. It features automated installation scripts that educate AI assistants on its capabilities for seamless integration.
Grist MCP Server
Enables interaction with Grist documents, workspaces, and records via the Model Context Protocol. It supports comprehensive operations including SQL querying, schema management, and record CRUD functionality.
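Grist exposes a per-document SQL endpoint in its REST API, which is presumably what backs the SQL-querying tool above. A hypothetical sketch of building such a request (the base URL, document ID, and API key are placeholders, and the `/sql` path should be verified against your Grist version's API docs):

```python
from urllib.parse import urlencode, quote

# Hypothetical sketch: building a read-only SQL request against a Grist doc.
# BASE_URL, DOC_ID, and API_KEY are placeholders; the /sql endpoint follows
# Grist's REST API, but confirm against your deployment before relying on it.
BASE_URL = "https://docs.getgrist.com/api"
DOC_ID = "your-doc-id"    # placeholder
API_KEY = "your-api-key"  # placeholder

def build_sql_request(sql):
    """Return (url, headers) for a read-only SQL query against a Grist doc."""
    url = f"{BASE_URL}/docs/{quote(DOC_ID)}/sql?{urlencode({'q': sql})}"
    headers = {"Authorization": f"Bearer {API_KEY}"}
    return url, headers

url, headers = build_sql_request("SELECT id, Name FROM Contacts LIMIT 5")
print(url)
```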
BigQuery FinOps MCP Server
Enables cost optimization and financial operations for Google BigQuery through natural language interactions. Provides insights into BigQuery spending, usage patterns, and cost management recommendations.
Owlvin MCP Server
Enables AI agents to access a wide range of AI tools and services through the Owlvin platform with a single integration. It manages authentication and billing automatically, allowing users to list available services, check credit balances, and execute specialized API calls.
Mail MCP Server
An MCP server that provides email sending capabilities via SMTP, featuring tools for sending standard and template-based emails. It utilizes the FastMCP Streamable HTTP transport for flexible client connectivity over HTTP without requiring stdio subprocesses.
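Under the hood, a template-email tool like this typically just fills a template and builds a MIME message before handing it to SMTP. A minimal standard-library sketch of that step (the template text and field names are illustrative, not this server's actual code):

```python
from email.message import EmailMessage
from string import Template

# Illustrative sketch of a "send template email" tool body. The template
# and its fields are made up; a real server would load these from config.
TEMPLATE = Template("Hello $name,\n\nYour order $order_id has shipped.\n")

def build_email(sender, recipient, subject, **fields):
    """Render the template and return a ready-to-send EmailMessage."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(TEMPLATE.substitute(**fields))
    return msg

msg = build_email("noreply@example.com", "user@example.com",
                  "Shipping update", name="Ada", order_id="A-1042")
# Sending would then be: smtplib.SMTP(host, port) -> starttls() -> login()
# -> send_message(msg). Omitted here so the sketch runs offline.
print(msg["Subject"])
```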
browser-mcp
An MCP server that enables AI assistants to interact with the browser, including fetching page content as markdown, modifying page styles, and searching browser history.
CourtListener Legal Research MCP Server
Enables legal research across 3,352 U.S. courts using the CourtListener API, providing access to case search, precedent analysis, judge patterns, citation validation, and federal PACER dockets through natural language queries.
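CourtListener's public REST API (v4) is what a server like this wraps. A sketch of constructing an opinion-search request (the API token is a placeholder; the endpoint and `type=o` opinion filter follow CourtListener's documented search API, but confirm parameter names before relying on them):

```python
from urllib.parse import urlencode
from urllib.request import Request

# Sketch: constructing (not sending) a CourtListener search request.
# TOKEN is a placeholder; /api/rest/v4/search/ is CourtListener's search
# endpoint, with "type": "o" selecting case-law opinions.
BASE = "https://www.courtlistener.com/api/rest/v4/search/"
TOKEN = "your-api-token"  # placeholder

def build_search(query, court=None):
    """Return a ready-to-send urllib Request for an opinion search."""
    params = {"q": query, "type": "o"}
    if court:
        params["court"] = court
    url = f"{BASE}?{urlencode(params)}"
    return Request(url, headers={"Authorization": f"Token {TOKEN}"})

req = build_search("qualified immunity", court="scotus")
print(req.full_url)
```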
MCP with RAG Demo
A demonstration project showing how to implement a Model Context Protocol (MCP) server with Retrieval-Augmented Generation (RAG) capabilities. The demo lets AI models interact with a knowledge base, search for information, and add new documents.
Poke-MCP
A Model Context Protocol server that provides comprehensive Pokemon data and battle simulation capabilities to AI assistants. It enables users to access detailed stats, types, and moves while simulating battles with realistic mechanics like type effectiveness and status effects.
Spotify MCP Server
Enables interaction with Spotify through natural language for music discovery, playback control, library management, and playlist creation. Supports searching for music, controlling playback, managing saved tracks, and getting personalized recommendations based on mood and preferences.
PitchLink MCP
An MCP server that reads startup pitch drafts from Notion to provide comprehensive investor-style analysis and scoring. It evaluates key areas like market opportunity and team strength, delivering feedback through a visual dashboard.
Openfort MCP Server
Enables AI assistants to interact with Openfort's wallet infrastructure, allowing them to create projects, manage configurations, generate wallets and users, and query documentation through 42 integrated tools.
面试鸭 MCP Server
A Spring AI-based MCP server for Interview Duck (面试鸭) question search, enabling AI to quickly look up real company interview questions and their answers.
Zen MCP Server
Orchestrates multiple AI models (Gemini, OpenAI, Claude, local models) within a single conversation context, enabling collaborative workflows like multi-model code reviews, consensus building, and CLI-to-CLI bridging for specialized tasks.
Somnia MCP Server
Enables interaction with Somnia blockchain data, providing tools to retrieve block information, token balances, transaction history, and NFT metadata through the ORMI API.
Aws Sample Gen Ai Mcp Server
A sample project showing how to invoke an Amazon Bedrock generative AI model through an MCP server endpoint, with an optional direct Bedrock call for comparison. The bundled example script (replace the placeholder region, model ID, and endpoint with your own values):

```python
import json
import boto3
import requests

# Configuration -- replace the placeholders with your own values.
REGION_NAME = 'your-aws-region'          # e.g. 'us-east-1'
MODEL_ID = 'anthropic.claude-v2'         # or any other Bedrock model ID
ACCEPT = 'application/json'
CONTENT_TYPE = 'application/json'
MCP_SERVER_ENDPOINT = 'your_mcp_server_endpoint'  # e.g. 'http://localhost:8080/predictions/bedrock'

bedrock = boto3.client(service_name='bedrock-runtime', region_name=REGION_NAME)

def invoke_model_mcp(prompt, max_tokens=200, temperature=0.5, top_p=0.9):
    """Invoke the Bedrock model through the MCP server; return the text or None."""
    payload = {
        "modelId": MODEL_ID,
        "contentType": CONTENT_TYPE,
        "accept": ACCEPT,
        "body": json.dumps({
            "prompt": prompt,
            "max_tokens_to_sample": max_tokens,
            "temperature": temperature,
            "top_p": top_p,
        }),
    }
    try:
        response = requests.post(MCP_SERVER_ENDPOINT, json=payload)
        response.raise_for_status()  # raise on 4xx/5xx status codes
        return response.json()['completion']
    except requests.exceptions.RequestException as e:
        print(f"Error invoking MCP server: {e}")
    except KeyError as e:
        print(f"Error parsing MCP server response: missing key {e}")
    return None

def invoke_model_bedrock(prompt, max_tokens=200, temperature=0.5, top_p=0.9):
    """Invoke the Bedrock model directly (for comparison; requires IAM access to Bedrock)."""
    body = json.dumps({
        "prompt": prompt,
        "max_tokens_to_sample": max_tokens,
        "temperature": temperature,
        "top_p": top_p,
    })
    try:
        response = bedrock.invoke_model(
            modelId=MODEL_ID, contentType=CONTENT_TYPE, accept=ACCEPT, body=body)
        return json.loads(response['body'].read().decode('utf-8'))['completion']
    except Exception as e:
        print(f"Error invoking Bedrock directly: {e}")
        return None

if __name__ == "__main__":
    text = invoke_model_mcp("Write a short story about a cat who goes on an adventure.")
    print(text if text else "Failed to generate text via MCP server.")
```

To use it: install `requests` (`pip install requests`), configure AWS credentials, and fill in the placeholder configuration. The script assumes an MCP server is already set up to forward requests to Bedrock with the appropriate IAM permissions; in production, put HTTPS and authentication in front of the endpoint, and note that Bedrock invocations incur cost.
imagic-mcp
MCP server for image conversion, resizing, and merging. Runs locally, with no uploads.
MCP WebScout
A Model Context Protocol server that provides web search capabilities via DuckDuckGo and advanced content extraction using Crawl4AI and LLM-powered analysis. It enables users to perform web-wide searches and fetch processed website data through automated browser interaction and intelligent summarization.
Workday MCP Server by CData
QGISMCP
Connects QGIS to Claude AI via the Model Context Protocol, enabling AI-assisted project creation, layer manipulation, execution of processing algorithms, and running Python code inside QGIS.
Dummy MCP Server
A simple Model Context Protocol server built with the FastMCP framework that provides 'echo' and 'dummy' tools via Server-Sent Events for demonstration and testing purposes.
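Server-Sent Events, the transport this demo uses, is just a line-oriented text framing over HTTP. A standard-library sketch of how a JSON-RPC result from an 'echo'-style tool would be framed as one SSE event (the payload shape is illustrative):

```python
import json

def sse_event(data, event=None):
    """Frame a payload as a Server-Sent Events message ('event:'/'data:' lines)."""
    lines = []
    if event:
        lines.append(f"event: {event}")
    lines.append(f"data: {data}")
    lines.append("")  # blank line terminates the event
    return "\n".join(lines) + "\n"

# An illustrative JSON-RPC response from an echo tool, framed for SSE.
payload = json.dumps({"jsonrpc": "2.0", "id": 1,
                      "result": {"content": [{"type": "text", "text": "hello"}]}})
frame = sse_event(payload, event="message")
print(frame)
```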
Mcp
A hobby project for testing MCP servers.
Google Sheets API MCP Server
Mem0 Memory MCP Server
Enables AI agents to store and retrieve memories with user-specific context using Mem0, allowing them to maintain conversation history and make informed decisions based on past interactions.
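User-scoped memory of this kind boils down to "store text keyed by user, retrieve by relevance". A toy standard-library illustration of the idea (this is not mem0's actual API; real systems rank by embedding similarity, while this sketch uses naive word overlap):

```python
from collections import defaultdict

# Toy illustration of user-scoped agent memory -- not mem0's real API.
# Retrieval here is shared-word counting; production systems use embeddings.
class MemoryStore:
    def __init__(self):
        self._memories = defaultdict(list)  # user_id -> list of memory strings

    def add(self, text, user_id):
        self._memories[user_id].append(text)

    def search(self, query, user_id, top_k=3):
        """Rank this user's memories by how many words they share with the query."""
        q = set(query.lower().split())
        scored = [(len(q & set(m.lower().split())), m)
                  for m in self._memories[user_id]]
        scored.sort(key=lambda s: s[0], reverse=True)
        return [m for score, m in scored[:top_k] if score > 0]

store = MemoryStore()
store.add("Alice prefers window seats on flights", user_id="alice")
store.add("Alice is allergic to peanuts", user_id="alice")
print(store.search("what seats does she prefer on flights", user_id="alice"))
```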
News MCP Server
Aggregates news from 7 APIs and unlimited RSS feeds with AI-powered bias removal and synthesis. Provides over 7,300 free daily requests with conversation-aware caching and 25 comprehensive news analysis tools.
Flutter Package MCP Server
Integrates the Pub.dev API with AI assistants to provide real-time Flutter package information, documentation, and trend analysis. It enables users to search for packages, compare versions, and evaluate quality scores through natural language commands.
Medical MCP Chatbot
A FastAPI backend service that connects to Azure's Managed Chat Project using GPT-4o to provide medical chatbot functionality through a simple HTML interface.
GitLab MCP Server
Connects AI assistants to GitLab projects, enabling natural language queries for merge requests, code reviews, pipeline status, test reports, and discussions.