Discover Awesome MCP Servers

Extend your agent with 17,103 capabilities via MCP servers.

Remote MCP Server Template

A template for deploying authentication-free MCP servers on Cloudflare Workers. Enables users to create custom tool servers that can be connected to Claude Desktop or other MCP clients remotely.

OpenFeature MCP Server

Provides OpenFeature SDK installation guidance for various programming languages and enables feature flag evaluation through the OpenFeature Remote Evaluation Protocol (OFREP). Supports multiple AI clients and can connect to any OFREP-compatible feature flag service.
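The OFREP flow this entry mentions reduces to a plain HTTP request. Below is a minimal sketch of constructing a single-flag evaluation request, assuming the `POST /ofrep/v1/evaluate/flags/{key}` endpoint path from the OFREP specification; the base URL, flag key, and context fields are illustrative placeholders.

```python
import json
from urllib.parse import urljoin

def build_ofrep_request(base_url: str, flag_key: str, context: dict) -> tuple[str, bytes]:
    """Build the URL and JSON body for an OFREP single-flag evaluation.

    The endpoint path follows the OFREP spec's POST /ofrep/v1/evaluate/flags/{key};
    the evaluation context travels in the request body.
    """
    url = urljoin(base_url.rstrip("/") + "/", f"ofrep/v1/evaluate/flags/{flag_key}")
    body = json.dumps({"context": context}).encode("utf-8")
    return url, body

# Example: evaluate a hypothetical "new-checkout" flag for one targeting key
url, body = build_ofrep_request(
    "https://flags.example.com", "new-checkout", {"targetingKey": "user-42"}
)
```

Sending the request (and handling the provider's response schema) is left to the caller, since any OFREP-compatible backend can serve it.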

Weather MCP Server

A weather-lookup MCP server built on FastMCP.

Secure MCP Server Template

A template for creating secure, remotely accessible MCP servers with OAuth authentication and Cloudflare ZeroTrust Access protection. Enables deployment of containerized MCP servers that can be safely accessed by Claude Desktop and other MCP clients through authenticated connections.

SAP Ariba Procurement MCP Server by CData

This project builds a read-only MCP server. For full read, write, update, delete, and action capabilities and a simplified setup, check out our free CData MCP Server for SAP Ariba Procurement (beta): https://www.cdata.com/download/download.aspx?sku=PAZK-V&type=beta

Multi-Feature MCP Server

Provides comprehensive functionality including weather data, system utilities, Azure cloud management, and AI-powered image generation and editing. Features interactive parameter selection through MCP elicitation for enhanced user experience.

Pendle Finance MCP Server

Enables interaction with Pendle Finance DeFi protocol to fetch live yields, simulate staking and swaps, retrieve portfolio data, and get AI-based token recommendations. Provides comprehensive DeFi portfolio management and yield optimization through natural language.

Theneo MCP Server

Enables AI assistants to automatically create, update, and publish API documentation through Theneo's platform. Supports OpenAPI specs, Postman collections, AI-powered description generation, and natural language interactions for seamless documentation workflows.

YouTube Uploader MCP

- Upload videos to YouTube from an MCP client (Claude/Cursor/VS Code)
- OAuth2 authentication flow
- Access token and refresh token management
- Multi-channel support

Bitbucket MCP Server

Enables AI assistants to interact with Bitbucket Cloud repositories, allowing users to manage pull requests, comments, tasks, and branches through natural language commands.

Fact Check Tools MCP Server

An MCP server that enables interaction with Google's Fact Check Tools API, allowing users to query and manage fact-check claims and publishers through natural language commands.
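A query against the Fact Check Tools API is a keyed GET. A hedged sketch of building the `claims:search` request URL follows; the endpoint path reflects the public v1alpha1 API surface, `YOUR_API_KEY` is a placeholder, and sending the request and paging through results are omitted:

```python
from urllib.parse import urlencode

API_BASE = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def build_claim_search_url(query: str, api_key: str, language_code: str = "en") -> str:
    """Build a claims:search request URL for the Fact Check Tools API.

    Query parameters are URL-encoded; the response is JSON containing
    matched claims and their reviewing publishers.
    """
    params = urlencode({"query": query, "languageCode": language_code, "key": api_key})
    return f"{API_BASE}?{params}"

url = build_claim_search_url("vaccines cause autism", "YOUR_API_KEY")
```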

remote-mcp-server

E0N-MCP-FOR-ETH

A simple test server for educational purposes.

Chess MCP Server

Enables chess gameplay and interaction through the MCP protocol. Allows users to play chess games, make moves, and manage chess sessions through natural language commands.

MCP Server AMQ

An MCP server for interacting with AWS AmazonMQ APIs.

🚀 MCP Server Tester

MCP Server Tester is a simple web tool for testing and validating MCP server installation code. Built with Node.js, Express, and EJS, it provides real-time server response results through a clean, responsive user interface.

Anki MCP

PentestThinkingMCP

An AI-powered penetration testing reasoning engine that provides automated attack path planning, step-by-step guidance for CTFs/HTB challenges, and tool recommendations using Beam Search and MCTS algorithms.

Bitwig MCP Server

An MCP server for Bitwig Studio.

Sample Model Context Protocol Demos

Here's a collection of examples and concepts related to using the Model Context Protocol (MCP) with AWS, focusing on how it can be applied and what benefits it offers. Keep in mind that the Model Context Protocol is a relatively new and evolving concept, and its adoption within AWS services might vary. This overview covers the general principles and potential applications.

**Understanding Model Context Protocol (MCP)**

The Model Context Protocol aims to provide a standardized way for models to access contextual information during inference. This context can include:

* **User Information:** User ID, location, preferences.
* **Session Information:** Current session ID, history of interactions.
* **Device Information:** Device type, operating system.
* **Environment Information:** Time of day, weather conditions.
* **External Data:** Real-time data from databases, APIs, or other services.

The goal is to make models more aware of their environment, leading to more accurate and personalized predictions. Instead of hardcoding context into the model or passing it directly in the inference request, MCP provides a structured and potentially more efficient way to manage and access this information.

**How MCP Could Be Used with AWS Services**

While a direct, fully-fledged "MCP service" might not exist as a standalone AWS offering, the principles of MCP can be implemented and leveraged using various AWS services. Here's how:

1. **Amazon SageMaker:**
   * **Custom Inference Containers:** You can build custom inference containers for SageMaker that implement the MCP. This involves:
     * **Defining a Context Provider:** A component within your container that fetches context data from various sources (e.g., DynamoDB, Redis, external APIs).
     * **Integrating with the Model:** Modifying your model's inference code to query the context provider for relevant information before making predictions.
     * **Deployment:** Deploying the container to SageMaker endpoints.
   * **SageMaker Inference Pipelines:** You can create inference pipelines where one step is dedicated to fetching and preparing context data. This step could use AWS Lambda or a custom processing container. The output of this step is then passed to the model inference step.
   * **SageMaker Feature Store:** While not directly MCP, SageMaker Feature Store provides a centralized repository for features that can be used as context. Your inference code can retrieve features from the Feature Store based on a key (e.g., user ID) and use them during inference. This is a common way to provide contextual information.
   * **Example Scenario:** A recommendation engine deployed on SageMaker. The inference container uses the user ID from the request to query a DynamoDB table (acting as a context provider) for the user's past purchase history, browsing behavior, and demographic information. This information is then fed into the recommendation model to generate personalized recommendations.

2. **AWS Lambda:**
   * **Context Enrichment:** Lambda functions can be used to enrich incoming inference requests with context data. The Lambda function receives the initial request, fetches context from various sources (e.g., DynamoDB, API Gateway, S3), and then passes the augmented request to the model endpoint (e.g., a SageMaker endpoint).
   * **Example Scenario:** An image recognition service. The Lambda function receives an image upload request. It then uses the user's location (obtained from the request headers or a user profile) to fetch weather data from an external API. The weather data is added to the request payload and sent to the image recognition model, which might use this information to improve its accuracy (e.g., recognizing objects that are more likely to be present in certain weather conditions).

3. **Amazon API Gateway:**
   * **Request Transformation:** API Gateway can be configured to transform incoming requests and add context information. This can involve extracting data from request headers, query parameters, or even making calls to other AWS services (e.g., Lambda) to fetch context data.
   * **Example Scenario:** A fraud detection service. API Gateway receives a transaction request. It extracts the user's IP address and device information from the request headers. It then uses a Lambda function to geolocate the IP address and identify the device type. This information is added to the request payload and sent to the fraud detection model.

4. **Amazon DynamoDB:**
   * **Context Storage:** DynamoDB can be used as a fast and scalable storage solution for context data. You can store user profiles, session information, and other relevant data in DynamoDB and retrieve it during inference.
   * **Example Scenario:** A personalized marketing campaign. The model needs to predict the likelihood of a user clicking on an ad. DynamoDB stores user profiles with information such as age, gender, interests, and past interactions with ads. The inference code retrieves this information from DynamoDB and uses it to personalize the ad prediction.

5. **Amazon ElastiCache (Redis/Memcached):**
   * **Caching Context Data:** ElastiCache can be used to cache frequently accessed context data, reducing latency and improving performance. This is particularly useful for context data that is relatively static or changes infrequently.
   * **Example Scenario:** A real-time bidding (RTB) system. The model needs to predict the value of an ad impression. ElastiCache stores frequently accessed data such as user demographics, website categories, and ad performance metrics. The inference code retrieves this information from ElastiCache to make a fast and accurate bid.

**Key Considerations for Implementing MCP-like Functionality on AWS:**

* **Data Consistency:** Ensure that the context data is consistent and up-to-date. Use appropriate caching strategies and data synchronization mechanisms.
* **Latency:** Minimize the latency of fetching context data. Use fast storage solutions (e.g., DynamoDB, ElastiCache) and optimize your queries.
* **Security:** Protect the context data from unauthorized access. Use appropriate authentication and authorization mechanisms.
* **Scalability:** Design your system to scale to handle a large number of inference requests. Use scalable AWS services such as DynamoDB, Lambda, and API Gateway.
* **Cost Optimization:** Optimize the cost of fetching and storing context data. Use appropriate caching strategies and choose the most cost-effective AWS services.
* **Monitoring and Logging:** Monitor the performance of your system and log any errors. Use AWS CloudWatch to monitor metrics and logs.

**Example Code Snippet (Conceptual - Python with Boto3):**

```python
import boto3
import json

# Assume you have a SageMaker endpoint and a DynamoDB table for user context
sagemaker_client = boto3.client('sagemaker-runtime')
dynamodb_client = boto3.client('dynamodb')

def get_user_context(user_id):
    """Fetches user context from DynamoDB."""
    try:
        response = dynamodb_client.get_item(
            TableName='user_context_table',
            Key={'user_id': {'S': user_id}}
        )
        if 'Item' in response:
            return response['Item']
        else:
            return None  # User not found
    except Exception as e:
        print(f"Error fetching user context: {e}")
        return None

def invoke_sagemaker_endpoint(user_id, input_data):
    """Invokes the SageMaker endpoint with user context."""
    user_context = get_user_context(user_id)
    if user_context:
        # Transform DynamoDB item to a more usable format (e.g., a dictionary)
        context_data = {k: list(v.values())[0] for k, v in user_context.items()}  # Simple conversion, adjust as needed
        # Augment the input data with context
        input_data['context'] = context_data
    # Convert input data to JSON for SageMaker
    payload = json.dumps(input_data)
    try:
        response = sagemaker_client.invoke_endpoint(
            EndpointName='your-sagemaker-endpoint',
            ContentType='application/json',
            Body=payload
        )
        result = json.loads(response['Body'].read().decode())
        return result
    except Exception as e:
        print(f"Error invoking SageMaker endpoint: {e}")
        return None

# Example usage
user_id = 'user123'
input_data = {'feature1': 0.5, 'feature2': 0.8}  # Initial input data
prediction = invoke_sagemaker_endpoint(user_id, input_data)
if prediction:
    print(f"Prediction: {prediction}")
else:
    print("Failed to get prediction.")
```

**Explanation of the Code:**

1. **`get_user_context(user_id)`:** This function retrieves user context from a DynamoDB table based on the `user_id`. It uses the `boto3` library to interact with DynamoDB. Error handling is included. It returns `None` if the user is not found or if there's an error. The conversion of the DynamoDB item to a dictionary is a crucial step, and you'll need to adapt it based on the structure of your DynamoDB data.
2. **`invoke_sagemaker_endpoint(user_id, input_data)`:** This function orchestrates the process:
   * It calls `get_user_context()` to retrieve the user's context.
   * If context is found, it augments the `input_data` with the context information. This is where you'd structure the context data to be compatible with your model's input requirements.
   * It converts the augmented `input_data` to a JSON payload.
   * It invokes the SageMaker endpoint using the `sagemaker-runtime` client.
   * It parses the response from the endpoint and returns the result. Error handling is included.
3. **Example Usage:** Shows how to call the `invoke_sagemaker_endpoint` function with a `user_id` and some initial `input_data`.

**Important Notes:**

* **Replace Placeholders:** You *must* replace the placeholder values (e.g., `'user_context_table'`, `'your-sagemaker-endpoint'`) with your actual resource names.
* **IAM Permissions:** Ensure that your Lambda function (or the IAM role associated with your SageMaker endpoint) has the necessary IAM permissions to access DynamoDB and invoke the SageMaker endpoint.
* **Data Transformation:** The way you transform the DynamoDB item into a dictionary (or other format) will depend on the structure of your data and the expected input format of your model. Pay close attention to this step.
* **Error Handling:** The code includes basic error handling, but you should add more robust error handling and logging in a production environment.
* **Context Data Structure:** The structure of the `context_data` dictionary should match the expected input format of your model. You might need to perform additional data transformations to ensure compatibility.
* **Alternative Context Sources:** You can easily adapt the `get_user_context` function to fetch context data from other sources, such as ElastiCache, S3, or external APIs.

**Benefits of Using MCP Principles with AWS:**

* **Improved Model Accuracy:** By providing models with access to relevant context, you can improve their accuracy and make more informed predictions.
* **Personalization:** MCP enables you to personalize model predictions based on user preferences, location, and other contextual factors.
* **Flexibility:** You can easily update and modify the context data without retraining the model.
* **Scalability:** AWS services provide the scalability and reliability needed to handle a large number of inference requests.
* **Centralized Context Management:** You can manage context data in a centralized location, making it easier to maintain and update.

In summary, while a dedicated "Model Context Protocol" service might not be explicitly available on AWS, you can effectively implement the principles of MCP by leveraging various AWS services such as SageMaker, Lambda, API Gateway, DynamoDB, and ElastiCache. The key is to design a system that allows your models to access and utilize relevant context data during inference, leading to more accurate and personalized predictions. The example code provides a starting point for building such a system. Remember to adapt the code and architecture to your specific use case and requirements.

Android MCP

A lightweight bridge enabling AI agents to perform real-world tasks on Android devices such as app navigation, UI interaction, and automated QA testing without requiring computer-vision pipelines or preprogrammed scripts.

claude-skills-mcp

Enables intelligent semantic search and discovery of relevant Claude Agent Skills using vector embeddings. Provides access to curated scientific skills and supports both GitHub repositories and local skill directories.
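Vector-embedding skill search of the kind described boils down to ranking stored skill vectors by similarity to a query embedding. A minimal, dependency-free sketch; the skill names and 3-dimensional vectors are toy placeholders, since a real deployment would use model-generated embeddings with hundreds of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_skills(query_vec, skill_vecs, k=2):
    """Rank skills by cosine similarity to the query embedding."""
    scored = sorted(
        skill_vecs.items(),
        key=lambda item: cosine_similarity(query_vec, item[1]),
        reverse=True,
    )
    return [name for name, _ in scored[:k]]

# Toy embeddings: each skill is a point in a shared vector space
skills = {
    "protein-folding": [0.9, 0.1, 0.0],
    "pdf-parsing": [0.1, 0.9, 0.2],
    "unit-conversion": [0.0, 0.2, 0.9],
}
best = top_skills([0.8, 0.2, 0.1], skills, k=1)  # → ["protein-folding"]
```

Swapping the toy vectors for embeddings from an actual model (and the dict for a vector index) gives the semantic-search behavior the entry describes.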

LogAnalyzer MCP Server

An AI-powered server that provides rapid debugging of server logs with actionable fixes in under 30 seconds, featuring real-time monitoring and root cause analysis through Google Gemini integration.

Todoist MCP Server

A Model Context Protocol server that enables advanced task and project management in Todoist via Claude Desktop and other MCP-compatible clients.

Yahoo Finance MCP Server

A Model Context Protocol (MCP) server that provides comprehensive financial data from Yahoo Finance. It lets you retrieve detailed stock information, including historical prices, company details, financial statements, options data, and market news.

Image Process MCP Server

An MCP server for image processing that uses the Sharp library to provide image-manipulation capabilities.

OpsNow MCP Cost Server

Python MCP Server Sample

MCP Neo4j Knowledge Graph Memory Server

☢️ NOT READY DO NOT USE ☢️
