Discover Awesome MCP Servers
Extend your agent with 27,288 capabilities via MCP servers.
- All (27,288)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)
MariaDB MCP Server
Enables querying local MariaDB and MySQL databases with optimized output formats designed to significantly reduce token consumption. It provides secure tools for exploring schemas and executing read-only SQL queries while automatically blocking destructive commands.
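Blocking destructive commands in a read-only SQL tool usually comes down to screening the statement's leading keyword before execution. A minimal sketch of such a guard (a hypothetical illustration, not this server's actual implementation):

```python
import re

# Statements whose leading keyword mutates data or schema.
# Deliberately simplistic: e.g. a CTE ending in DELETE would slip through.
DESTRUCTIVE = re.compile(
    r"^\s*(INSERT|UPDATE|DELETE|DROP|ALTER|TRUNCATE|CREATE|GRANT|REVOKE)\b",
    re.IGNORECASE,
)

def is_read_only(sql: str) -> bool:
    """Return True if the statement looks like a safe read-only query."""
    return not DESTRUCTIVE.match(sql)
```

A real server would likely combine a check like this with a read-only database account so the database itself enforces the policy.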
MCP Doctor
A diagnostic tool that evaluates the contract quality of MCP servers across dimensions like safety, efficiency, and documentation to provide actionable improvements. It helps developers ensure their servers are optimized for human users, distribution platforms, and AI agents.
Canvas LMS MCP Server
Enables AI systems to interact with Canvas Learning Management System data, allowing users to access courses, assignments, quizzes, planner items, files, and syllabi through natural language queries.
Openfort MCP Server
Enables AI assistants to interact with Openfort's wallet infrastructure, allowing them to create projects, manage configurations, generate wallets and users, and query documentation through 42 integrated tools.
面试鸭 MCP Server
An MCP server for searching Interview Duck (面试鸭) questions, built on Spring AI, allowing AI to quickly look up real corporate interview questions and answers.
literature-agent-mcp
Exposes a local biomedical literature pipeline as MCP tools for automated research workflows. Enables literature search, open-access paper retrieval, and draft generation for biomedical and pathology domains through standard MCP clients.
Aws Sample Gen Ai Mcp Server
```python
import boto3
import json
import os

# --- Configuration ---
MODEL_ID = "anthropic.claude-v2"  # Or another supported model
ACCEPT = "application/json"
CONTENT_TYPE = "application/json"
# Replace with your MCP server endpoint if needed
MCP_SERVER_ENDPOINT = os.environ.get("MCP_SERVER_ENDPOINT", "http://localhost:8080")

# --- Helper Functions ---
def invoke_model(prompt, max_tokens=200, temperature=0.5, top_p=0.9):
    """
    Invokes the Bedrock model through the MCP server.

    Args:
        prompt (str): The prompt to send to the model.
        max_tokens (int): The maximum number of tokens to generate.
        temperature (float): Controls the randomness of the output.
        top_p (float): Controls the diversity of the output.

    Returns:
        str: The generated text from the model, or None if an error occurred.
    """
    try:
        # Construct the request body
        body = json.dumps({
            "prompt": prompt,
            "max_tokens_to_sample": max_tokens,
            "temperature": temperature,
            "top_p": top_p,
            "modelId": MODEL_ID,  # Include modelId for MCP server routing
            "accept": ACCEPT,
            "contentType": CONTENT_TYPE
        })

        # Use boto3 to invoke the MCP server (assuming it's running as an endpoint)
        bedrock = boto3.client(
            'bedrock-runtime',
            endpoint_url=MCP_SERVER_ENDPOINT,
            region_name="us-east-1"  # Required, but the value doesn't matter for MCP
        )

        response = bedrock.invoke_model(
            modelId=MODEL_ID,  # Redundant, but included for clarity
            contentType=CONTENT_TYPE,
            accept=ACCEPT,
            body=body
        )

        response_body = json.loads(response.get('body').read())
        return response_body.get('completion')  # Adjust based on the model's response format

    except Exception as e:
        print(f"Error invoking model: {e}")
        return None

# --- Main Execution ---
if __name__ == "__main__":
    prompt = "Write a short poem about the ocean."
    generated_text = invoke_model(prompt)
    if generated_text:
        print("Generated Text:")
        print(generated_text)
    else:
        print("Failed to generate text.")
```

Key points:

* **MCP server endpoint:** The code reads `MCP_SERVER_ENDPOINT` from an environment variable, which is the best practice for configuration and lets you change the endpoint without modifying the code. It defaults to `http://localhost:8080`, but you *must* change this to the actual address where your MCP server is running.
* **`modelId` in the request body:** The `modelId` is included in the JSON request body sent to the MCP server, which is essential for the server to route the request to the appropriate model.
* **`boto3.client('bedrock-runtime', endpoint_url=...)`:** The `endpoint_url` parameter tells `boto3` to send requests to your MCP server instead of the real AWS Bedrock service. `region_name` is required when creating a `boto3` client, but its value doesn't matter when using a custom endpoint; `"us-east-1"` is a common default.
* **Error handling:** A `try...except` block catches potential errors during model invocation, which is important for debugging.
* **`response_body.get('completion')`:** This assumes the model's response is a JSON object with a `completion` field containing the generated text. You may need to adjust this based on the actual response format of the model you're using; check the documentation for the specific model (e.g., Claude v2).

How to run this code:

1. **Install boto3:** `pip install boto3`
2. **Set the MCP server endpoint** environment variable to the address of your MCP server. On Linux/macOS:

   ```bash
   export MCP_SERVER_ENDPOINT="http://your-mcp-server:8080"
   ```

   On Windows:

   ```powershell
   $env:MCP_SERVER_ENDPOINT="http://your-mcp-server:8080"
   ```

   Replace `http://your-mcp-server:8080` with the actual address.
3. **Run the script:** `python your_script_name.py`

Important considerations:

* **MCP server setup:** This code assumes you already have a working MCP server configured to route requests to Bedrock; setting up the MCP server is a separate process.
* **Authentication:** If your MCP server requires authentication, you'll need to add authentication headers to the `invoke_model` call. This might involve setting up AWS credentials or using a custom mechanism; the details depend on how your server is configured.
* **Model-specific parameters:** `max_tokens`, `temperature`, and `top_p` are common to many language models, but the available parameters and their meanings vary. Consult the documentation for your specific model to understand how to tune them.
* **Error handling:** The error handling in this example is basic. Add more robust handling to catch specific exceptions and produce informative error messages.
* **Security:** Be careful about exposing your MCP server to the public internet. In a production environment, secure it with appropriate authentication and authorization mechanisms.

Remember to replace the placeholder values with your actual configuration.
OfficeRnD MCP Server
A read-only MCP server that connects AI assistants to the OfficeRnD coworking and flex-space management platform. It enables natural language queries for community members, space bookings, billing records, and office resources.
PDFSizeAnalyzer-MCP
Enables comprehensive PDF analysis and manipulation including page size analysis, chapter extraction, splitting, compression, merging, and conversion to images. Provides both MCP server interface for AI assistants and Streamlit web interface for direct user interaction.
mcp-altegio
MCP server for Altegio API — appointments, clients, services, staff schedules
database-updater MCP Server
Mirror of
GitHub MCP Server
Exposes GitHub repository actions (listing PRs/issues, creating issues, merging PRs) as OpenAPI endpoints using FastAPI, designed for LLM agent orchestration frameworks.
Amazon MCP Server
Enables scraping Amazon product details and searching for products on Amazon through natural language queries. No API keys required as it scrapes publicly available Amazon pages.
Security MCP Server
Enables security scanning of codebases through integrated tools for secret detection, SCA, SAST, and DAST vulnerabilities, with AI-powered remediation suggestions based on findings.
GitHub Integration Hub
Enables AI agents to interact with GitHub through OAuth-authenticated operations including starting authorization flows, listing repositories, and creating issues using stored access tokens.
MCP Knowledge Base Server
Provides semantic search and data retrieval capabilities over a knowledge base with multiple tools including keyword search, category filtering, and ID-based lookup with in-memory caching.
Universal Crypto MCP
Enables AI agents to interact with any EVM-compatible blockchain through natural language, supporting token swaps, cross-chain bridges, staking, lending, governance, gas optimization, and portfolio tracking across networks like Ethereum, BSC, Polygon, Arbitrum, and more.
MCP Unity Bridge Asset
An asset imported into Unity that hosts a WebSocket server for MCP communication with LLMs.
DOMShell
MCP server that turns your browser into a filesystem. 38 tools let AI agents ls, cd, grep, click, and type through Chrome via the DOMShell extension.
ncbi-mcp
MCP server for the National Center for Biotechnology Information (NCBI) of the NIH (National Institutes of Health).
url-download-mcp
A Model Context Protocol (MCP) server that enables AI assistants to download files from URLs to the local filesystem.
Black Orchid
A hot-reloadable MCP proxy server that enables users to create and manage custom Python tools through dynamic module loading. Users can build their own utilities, wrap APIs, and extend functionality by simply adding Python files to designated folders.
lynxprompt-mcp
MCP server that exposes any LynxPrompt instance to LLMs, enabling browsing, searching, and managing AI configuration blueprints and prompt hierarchies.
MCP Skeleton
A starter template for building Model Context Protocol (MCP) servers with Node.js. It provides a foundational structure and an example tool to help developers quickly scaffold and deploy new MCP capabilities.
MCPHub: Deploy Your Own MCP Servers in Minutes
A centralized server that consolidates multiple MCP servers into a single SSE endpoint.
AI-Scholarly-Mode
Enables AI assistants to search and retrieve peer-reviewed academic articles exclusively from Springer Nature's open access collection. It provides a specialized mode for research-driven conversations, allowing users to toggle scholarly-only search and fetch full article content.
ShopOracle
E-Commerce Intelligence MCP Server — 11 tools for product search, price comparison, competitor pricing across Amazon, eBay, Google Shopping. 18 countries. Part of ToolOracle (tooloracle.io).
MCP Memory
Enables AI assistants to remember user information and preferences across conversations using vector search technology. Built on Cloudflare infrastructure with isolated user namespaces for secure, persistent memory storage.
XRootD MCP Server
An MCP server providing access to XRootD file systems, allowing LLMs to browse directories, read file metadata, and access contents via the root:// protocol. It supports advanced features like campaign discovery, file searching, and ROOT file analysis for scientific data management.