Discover Awesome MCP Servers
Extend your agent with 13,827 capabilities via MCP servers.
- All (13,827)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)

Interactive Feedback MCP
MCP server that enables human-in-the-loop workflow in AI-assisted development tools by allowing users to provide direct feedback to AI agents without consuming additional premium requests.

MLB Projections MCP Server
An MCP server that enables interaction with MLB (Major League Baseball) v3 projections through the SportsData.io API, allowing access to baseball statistics and projections through natural language.
MCP Host Installation
An MCP that installs other MCPs. This last MCP you install manually by adding commands to your `mcp.json` file; add it to your favorite host and ask it to install whichever server you want.

DoubleClick Bid Manager MCP Server
An MCP Server that provides a conversational interface to the DoubleClick Bid Manager API, allowing users to manage programmatic advertising campaigns through natural language interactions.
MCP YNAB Server 💰
Custom MCP server for YNAB API (TypeScript)
MCP Server Implementations
A custom server implementation for the Model Context Protocol (MCP) using Server-Sent Events (SSE).

MCP PDF Server
A Model Context Protocol (MCP) based server that efficiently manages PDF files, allowing AI coding tools like Cursor to read, summarize, and extract information from PDF datasheets to assist embedded development work.

Israel Statistics MCP
MCP server that provides programmatic access to the Israeli Central Bureau of Statistics (CBS) price indices and economic data.

MCP Odoo Shell
A bridge server that provides access to an Odoo shell environment, allowing execution of Python code within an Odoo database context for model introspection and database operations.

MCP-Confirm
An MCP server implementing AI-user confirmation protocols, providing tools for LLMs to seek user confirmation when uncertain through yes/no questions, action confirmations, intent clarification, understanding verification, and satisfaction ratings.

PentestThinkingMCP
An AI-powered penetration testing reasoning engine that provides automated attack path planning, step-by-step guidance for CTFs/HTB challenges, and tool recommendations using Beam Search and MCTS algorithms.
Bitwig MCP Server
An MCP server for Bitwig Studio.
Sample Model Context Protocol Demos
A collection of examples and concepts for using the Model Context Protocol (MCP) with AWS, focusing on how it can be applied and what benefits it offers. The Model Context Protocol is a relatively new and evolving concept, and its adoption within AWS services may vary; this covers general principles and potential applications.

**Understanding the Model Context Protocol (MCP)**

The Model Context Protocol aims to provide a standardized way for models to access contextual information during inference. This context can include:

- **User information:** user ID, location, preferences.
- **Session information:** current session ID, history of interactions.
- **Device information:** device type, operating system.
- **Environment information:** time of day, weather conditions.
- **External data:** real-time data from databases, APIs, or other services.

The goal is to make models more aware of their environment, leading to more accurate and personalized predictions. Instead of hardcoding context into the model or passing it directly in the inference request, MCP provides a structured and potentially more efficient way to manage and access this information.

**How MCP Could Be Used with AWS Services**

While a fully fledged "MCP service" might not exist as a standalone AWS offering, the principles of MCP can be implemented and leveraged using various AWS services:

1. **Amazon SageMaker**
   - **Custom inference containers:** Build custom inference containers for SageMaker that implement MCP by defining a context provider (a component within the container that fetches context data from sources such as DynamoDB, Redis, or external APIs), integrating it with the model (modifying the inference code to query the provider for relevant information before making predictions), and deploying the container to SageMaker endpoints.
   - **SageMaker Inference Pipelines:** Create inference pipelines in which one step, implemented with AWS Lambda or a custom processing container, fetches and prepares context data and passes its output to the model inference step.
   - **SageMaker Feature Store:** Not MCP per se, but Feature Store provides a centralized repository of features that can serve as context. Inference code can retrieve features by key (e.g., user ID) and use them during inference; this is a common way to provide contextual information.
   - **Example scenario:** A recommendation engine deployed on SageMaker. The inference container uses the user ID from the request to query a DynamoDB table (acting as a context provider) for the user's past purchase history, browsing behavior, and demographic information, which is then fed into the recommendation model to generate personalized recommendations.

2. **AWS Lambda**
   - **Context enrichment:** A Lambda function receives the initial inference request, fetches context from sources such as DynamoDB, API Gateway, or S3, and passes the augmented request to the model endpoint (e.g., a SageMaker endpoint).
   - **Example scenario:** An image recognition service. The Lambda function receives an image upload request, uses the user's location (from request headers or a user profile) to fetch weather data from an external API, and adds the weather data to the request payload so the model can better recognize objects that are more likely to be present in certain weather conditions.

3. **Amazon API Gateway**
   - **Request transformation:** API Gateway can transform incoming requests to add context information, extracting data from request headers or query parameters, or calling other AWS services (e.g., Lambda) to fetch context data.
   - **Example scenario:** A fraud detection service. API Gateway receives a transaction request, extracts the user's IP address and device information from the request headers, and uses a Lambda function to geolocate the IP address and identify the device type before forwarding the enriched payload to the fraud detection model.

4. **Amazon DynamoDB**
   - **Context storage:** DynamoDB can serve as a fast, scalable store for context data such as user profiles and session information, retrieved during inference.
   - **Example scenario:** A personalized marketing campaign. DynamoDB stores user profiles (age, gender, interests, past interactions with ads); the inference code retrieves this information and uses it to personalize the ad-click prediction.

5. **Amazon ElastiCache (Redis/Memcached)**
   - **Caching context data:** ElastiCache can cache frequently accessed context data, reducing latency and improving performance; this is particularly useful for context data that is relatively static or changes infrequently.
   - **Example scenario:** A real-time bidding (RTB) system. ElastiCache stores frequently accessed data such as user demographics, website categories, and ad performance metrics, which the inference code retrieves to make a fast and accurate bid.

**Key Considerations for Implementing MCP-like Functionality on AWS**

- **Data consistency:** Keep context data consistent and up to date with appropriate caching strategies and data synchronization mechanisms.
- **Latency:** Minimize the latency of fetching context data by using fast storage (e.g., DynamoDB, ElastiCache) and optimized queries.
- **Security:** Protect context data from unauthorized access with appropriate authentication and authorization mechanisms.
- **Scalability:** Design the system to handle a large number of inference requests using scalable services such as DynamoDB, Lambda, and API Gateway.
- **Cost optimization:** Optimize the cost of fetching and storing context data with caching strategies and cost-effective service choices.
- **Monitoring and logging:** Monitor system performance and log errors using Amazon CloudWatch.

**Example Code Snippet (Conceptual, Python with Boto3)**

```python
import boto3
import json

# Assume you have a SageMaker endpoint and a DynamoDB table for user context
sagemaker_client = boto3.client('sagemaker-runtime')
dynamodb_client = boto3.client('dynamodb')

def get_user_context(user_id):
    """Fetches user context from DynamoDB."""
    try:
        response = dynamodb_client.get_item(
            TableName='user_context_table',
            Key={'user_id': {'S': user_id}}
        )
        if 'Item' in response:
            return response['Item']
        else:
            return None  # User not found
    except Exception as e:
        print(f"Error fetching user context: {e}")
        return None

def invoke_sagemaker_endpoint(user_id, input_data):
    """Invokes the SageMaker endpoint with user context."""
    user_context = get_user_context(user_id)
    if user_context:
        # Transform DynamoDB item to a more usable format (e.g., a dictionary)
        context_data = {k: list(v.values())[0] for k, v in user_context.items()}  # Simple conversion, adjust as needed
        # Augment the input data with context
        input_data['context'] = context_data
    # Convert input data to JSON for SageMaker
    payload = json.dumps(input_data)
    try:
        response = sagemaker_client.invoke_endpoint(
            EndpointName='your-sagemaker-endpoint',
            ContentType='application/json',
            Body=payload
        )
        result = json.loads(response['Body'].read().decode())
        return result
    except Exception as e:
        print(f"Error invoking SageMaker endpoint: {e}")
        return None

# Example usage
user_id = 'user123'
input_data = {'feature1': 0.5, 'feature2': 0.8}  # Initial input data
prediction = invoke_sagemaker_endpoint(user_id, input_data)
if prediction:
    print(f"Prediction: {prediction}")
else:
    print("Failed to get prediction.")
```

**Explanation of the Code**

1. `get_user_context(user_id)` retrieves user context from a DynamoDB table using `boto3`, returning `None` if the user is not found or an error occurs. The conversion of the DynamoDB item to a dictionary is a crucial step that you must adapt to the structure of your data.
2. `invoke_sagemaker_endpoint(user_id, input_data)` orchestrates the process: it fetches the user's context, augments `input_data` with it (structured to be compatible with your model's input requirements), converts the augmented data to a JSON payload, invokes the SageMaker endpoint via the `sagemaker-runtime` client, and parses the response. Error handling is included.
3. The example usage shows how to call `invoke_sagemaker_endpoint` with a `user_id` and some initial `input_data`.

**Important Notes**

- **Replace placeholders:** You *must* replace placeholder values (e.g., `'user_context_table'`, `'your-sagemaker-endpoint'`) with your actual resource names.
- **IAM permissions:** The Lambda function, or the IAM role associated with your SageMaker endpoint, needs permissions to access DynamoDB and invoke the endpoint.
- **Data transformation:** How you transform the DynamoDB item into a dictionary depends on the structure of your data and the expected input format of your model; pay close attention to this step.
- **Error handling:** The code includes basic error handling; add more robust handling and logging in a production environment.
- **Context data structure:** The `context_data` dictionary should match your model's expected input format; additional transformations may be needed.
- **Alternative context sources:** `get_user_context` can easily be adapted to fetch context from other sources, such as ElastiCache, S3, or external APIs.

**Benefits of Using MCP Principles with AWS**

- **Improved model accuracy:** Access to relevant context yields more informed predictions.
- **Personalization:** Predictions can reflect user preferences, location, and other contextual factors.
- **Flexibility:** Context data can be updated without retraining the model.
- **Scalability:** AWS services provide the scale and reliability to handle large volumes of inference requests.
- **Centralized context management:** Context data lives in one place, making it easier to maintain and update.

In summary, while a dedicated "Model Context Protocol" service is not explicitly available on AWS, you can effectively implement MCP principles with SageMaker, Lambda, API Gateway, DynamoDB, and ElastiCache. The key is to design a system that lets your models access relevant context data during inference; the example code provides a starting point. Remember to adapt the code and architecture to your specific use case and requirements.
OpsNow MCP Cost Server
Python Mcp Server Sample
MCP Neo4j Knowledge Graph Memory Server
☢️ NOT READY DO NOT USE ☢️

LogAnalyzer MCP Server
An AI-powered server that provides rapid debugging of server logs with actionable fixes in under 30 seconds, featuring real-time monitoring and root cause analysis through Google Gemini integration.
uuid-mcp-server-example
A simple MCP server that generates UUIDs (v4).

Todoist MCP Server
A Model Context Protocol server that enables advanced task and project management in Todoist via Claude Desktop and other MCP-compatible clients.

Concordium MCP Server
A Concordium MCP server for interacting with the Concordium chain.
Excel Reader MCP Server

MCP Document Server
A local development server that provides an interface for managing and accessing markdown documents using the Model Context Protocol (MCP).
Weather MCP Server
Image Process MCP Server
An MCP server for image processing that uses the Sharp library to provide image manipulation functionality.
🧠 MCP PID Wallet Verifier
A lightweight, AI-friendly MCP server that lets any MCP-compatible AI agent or assistant initiate and verify a PID (Personal Identity Data) credential presentation via OIDC4VP.

Remote MCP Server Authless
A Cloudflare Workers-based Model Context Protocol server without authentication requirements, allowing users to deploy and customize AI tools that can be accessed from Claude Desktop or Cloudflare AI Playground.

MCP Memory
An MCP server that enables clients like Cursor, Claude, and Windsurf to remember user information and preferences across conversations using vector search technology.

Databricks MCP Server
A Model Context Protocol server that enables AI assistants to interact with Databricks workspaces, allowing them to browse Unity Catalog, query metadata, sample data, and execute SQL queries.
MCP Server with Azure Communication Services Email
An MCP server for sending email via Azure Communication Services.