Discover Awesome MCP Servers
Extend your agent with 20,381 capabilities via MCP servers.
- All (20,381)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)
Xava Labs MCP Template
A template repository for building Model Context Protocol (MCP) servers that enables developers to create interactive AI agents with real-time bidirectional communication capabilities through WebSocket and SSE endpoints.
MCP Server Demo
A Model Context Protocol server that provides tools and resources for Java CI/CD workflows and GitLab CI template management.
MCP Echo Server
Theneo MCP Server
Enables AI assistants to automatically create, update, and publish API documentation through Theneo's platform. Supports OpenAPI specs, Postman collections, AI-powered description generation, and natural language interactions for seamless documentation workflows.
MCP Simple Server
A simple server that implements the Model Context Protocol for document search.
PentestThinkingMCP
An AI-powered penetration testing reasoning engine that provides automated attack path planning, step-by-step guidance for CTFs/HTB challenges, and tool recommendations using Beam Search and MCTS algorithms.
Cold Email Assistant
Automates cold email outreach for job applications by parsing job postings, generating personalized emails using AI, and sending them or saving as drafts in Gmail with resume attachments.
Bitwig MCP Server
An MCP server for Bitwig Studio.
Directmedia MCP
Provides programmatic access to the Directmedia Publishing 'Digitale Bibliothek' collection, a 1990s German electronic book library containing 101 volumes of classic literature and philosophy with text extraction, search, and navigation capabilities.
Sample Model Context Protocol Demos
Here's a collection of examples and concepts related to using the Model Context Protocol (MCP) with AWS, focusing on how it can be applied and what benefits it offers. Keep in mind that the Model Context Protocol is a relatively new and evolving concept, and its adoption within AWS services may vary. This overview covers the general principles and potential applications.

**Understanding Model Context Protocol (MCP)**

The Model Context Protocol aims to provide a standardized way for models to access contextual information during inference. This context can include:

* **User information:** user ID, location, preferences.
* **Session information:** current session ID, history of interactions.
* **Device information:** device type, operating system.
* **Environment information:** time of day, weather conditions.
* **External data:** real-time data from databases, APIs, or other services.

The goal is to make models more aware of their environment, leading to more accurate and personalized predictions. Instead of hardcoding context into the model or passing it directly in the inference request, MCP provides a structured and potentially more efficient way to manage and access this information.

**How MCP Could Be Used with AWS Services**

While a direct, fully-fledged "MCP service" might not exist as a standalone AWS offering, the principles of MCP can be implemented and leveraged using various AWS services. Here's how:

1. **Amazon SageMaker:**
   * **Custom inference containers:** You can build custom inference containers for SageMaker that implement the MCP. This involves:
     * **Defining a context provider:** a component within your container that fetches context data from various sources (e.g., DynamoDB, Redis, external APIs).
     * **Integrating with the model:** modifying your model's inference code to query the context provider for relevant information before making predictions.
     * **Deployment:** deploying the container to SageMaker endpoints.
   * **SageMaker Inference Pipelines:** You can create inference pipelines where one step is dedicated to fetching and preparing context data. This step could use AWS Lambda or a custom processing container. The output of this step is then passed to the model inference step.
   * **SageMaker Feature Store:** While not directly MCP, SageMaker Feature Store provides a centralized repository for features that can be used as context. Your inference code can retrieve features from the Feature Store based on a key (e.g., user ID) and use them during inference. This is a common way to provide contextual information.
   * **Example scenario:** A recommendation engine deployed on SageMaker. The inference container uses the user ID from the request to query a DynamoDB table (acting as a context provider) for the user's past purchase history, browsing behavior, and demographic information. This information is then fed into the recommendation model to generate personalized recommendations.

2. **AWS Lambda:**
   * **Context enrichment:** Lambda functions can enrich incoming inference requests with context data. The Lambda function receives the initial request, fetches context from various sources (e.g., DynamoDB, API Gateway, S3), and then passes the augmented request to the model endpoint (e.g., a SageMaker endpoint).
   * **Example scenario:** An image recognition service. The Lambda function receives an image upload request. It uses the user's location (obtained from the request headers or a user profile) to fetch weather data from an external API. The weather data is added to the request payload and sent to the image recognition model, which might use this information to improve its accuracy (e.g., recognizing objects that are more likely to be present in certain weather conditions).

3. **Amazon API Gateway:**
   * **Request transformation:** API Gateway can be configured to transform incoming requests and add context information. This can involve extracting data from request headers or query parameters, or even calling other AWS services (e.g., Lambda) to fetch context data.
   * **Example scenario:** A fraud detection service. API Gateway receives a transaction request and extracts the user's IP address and device information from the request headers. A Lambda function then geolocates the IP address and identifies the device type. This information is added to the request payload and sent to the fraud detection model.

4. **Amazon DynamoDB:**
   * **Context storage:** DynamoDB can serve as a fast and scalable storage solution for context data. You can store user profiles, session information, and other relevant data in DynamoDB and retrieve it during inference.
   * **Example scenario:** A personalized marketing campaign. The model needs to predict the likelihood of a user clicking on an ad. DynamoDB stores user profiles with information such as age, gender, interests, and past interactions with ads. The inference code retrieves this information and uses it to personalize the ad prediction.

5. **Amazon ElastiCache (Redis/Memcached):**
   * **Caching context data:** ElastiCache can cache frequently accessed context data, reducing latency and improving performance. This is particularly useful for context data that is relatively static or changes infrequently.
   * **Example scenario:** A real-time bidding (RTB) system. The model needs to predict the value of an ad impression. ElastiCache stores frequently accessed data such as user demographics, website categories, and ad performance metrics. The inference code retrieves this information from ElastiCache to make a fast and accurate bid.

**Key Considerations for Implementing MCP-like Functionality on AWS**

* **Data consistency:** Ensure that context data is consistent and up to date. Use appropriate caching strategies and data synchronization mechanisms.
* **Latency:** Minimize the latency of fetching context data. Use fast storage solutions (e.g., DynamoDB, ElastiCache) and optimize your queries.
* **Security:** Protect the context data from unauthorized access. Use appropriate authentication and authorization mechanisms.
* **Scalability:** Design your system to handle a large number of inference requests. Use scalable AWS services such as DynamoDB, Lambda, and API Gateway.
* **Cost optimization:** Optimize the cost of fetching and storing context data. Use appropriate caching strategies and choose the most cost-effective AWS services.
* **Monitoring and logging:** Monitor the performance of your system and log any errors. Use Amazon CloudWatch for metrics and logs.

**Example Code Snippet (Conceptual - Python with Boto3)**

```python
import boto3
import json

# Assume you have a SageMaker endpoint and a DynamoDB table for user context
sagemaker_client = boto3.client('sagemaker-runtime')
dynamodb_client = boto3.client('dynamodb')

def get_user_context(user_id):
    """Fetches user context from DynamoDB."""
    try:
        response = dynamodb_client.get_item(
            TableName='user_context_table',
            Key={'user_id': {'S': user_id}}
        )
        return response.get('Item')  # None if the user is not found
    except Exception as e:
        print(f"Error fetching user context: {e}")
        return None

def invoke_sagemaker_endpoint(user_id, input_data):
    """Invokes the SageMaker endpoint with user context."""
    user_context = get_user_context(user_id)
    if user_context:
        # Transform the DynamoDB item into a plain dictionary
        # (simple conversion; adjust to the structure of your data)
        context_data = {k: list(v.values())[0] for k, v in user_context.items()}
        input_data['context'] = context_data

    # Convert input data to JSON for SageMaker
    payload = json.dumps(input_data)
    try:
        response = sagemaker_client.invoke_endpoint(
            EndpointName='your-sagemaker-endpoint',
            ContentType='application/json',
            Body=payload
        )
        return json.loads(response['Body'].read().decode())
    except Exception as e:
        print(f"Error invoking SageMaker endpoint: {e}")
        return None

# Example usage
user_id = 'user123'
input_data = {'feature1': 0.5, 'feature2': 0.8}  # Initial input data
prediction = invoke_sagemaker_endpoint(user_id, input_data)
if prediction:
    print(f"Prediction: {prediction}")
else:
    print("Failed to get prediction.")
```

**Explanation of the Code**

1. **`get_user_context(user_id)`:** Retrieves user context from a DynamoDB table based on the `user_id`, using `boto3`. It returns `None` if the user is not found or if there is an error. The conversion of the DynamoDB item to a dictionary is a crucial step, and you will need to adapt it to the structure of your data.
2. **`invoke_sagemaker_endpoint(user_id, input_data)`:** Orchestrates the process:
   * Calls `get_user_context()` to retrieve the user's context.
   * If context is found, augments `input_data` with it. This is where you would structure the context data to match your model's input requirements.
   * Converts the augmented `input_data` to a JSON payload.
   * Invokes the SageMaker endpoint using the `sagemaker-runtime` client.
   * Parses the response from the endpoint and returns the result, with basic error handling.
3. **Example usage:** Shows how to call `invoke_sagemaker_endpoint` with a `user_id` and some initial `input_data`.

**Important Notes**

* **Replace placeholders:** You *must* replace the placeholder values (e.g., `'user_context_table'`, `'your-sagemaker-endpoint'`) with your actual resource names.
* **IAM permissions:** Ensure that your Lambda function (or the IAM role associated with your SageMaker endpoint) has the necessary permissions to access DynamoDB and invoke the SageMaker endpoint.
* **Data transformation:** How you transform the DynamoDB item into a dictionary (or other format) depends on the structure of your data and the expected input format of your model. Pay close attention to this step.
* **Error handling:** The code includes basic error handling, but you should add more robust error handling and logging in a production environment.
* **Context data structure:** The structure of the `context_data` dictionary should match the expected input format of your model. You might need additional data transformations to ensure compatibility.
* **Alternative context sources:** You can easily adapt `get_user_context` to fetch context data from other sources, such as ElastiCache, S3, or external APIs.

**Benefits of Using MCP Principles with AWS**

* **Improved model accuracy:** Giving models access to relevant context can improve their accuracy and lead to more informed predictions.
* **Personalization:** MCP enables you to personalize model predictions based on user preferences, location, and other contextual factors.
* **Flexibility:** You can update and modify the context data without retraining the model.
* **Scalability:** AWS services provide the scalability and reliability needed to handle a large number of inference requests.
* **Centralized context management:** Context data lives in one place, making it easier to maintain and update.

In summary, while a dedicated "Model Context Protocol" service might not be explicitly available on AWS, you can effectively implement the principles of MCP by combining services such as SageMaker, Lambda, API Gateway, DynamoDB, and ElastiCache. The key is to design a system that lets your models access and use relevant context data during inference, leading to more accurate and personalized predictions. The example code provides a starting point; remember to adapt the code and architecture to your specific use case and requirements.
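The Lambda "context enrichment" pattern described above boils down to merging looked-up context into the request payload before forwarding it to the model endpoint. A minimal sketch of that step, with the context lookup injected as a callable so the logic is testable offline (the `fetch_context` callable and field names are illustrative assumptions, not part of any AWS API):

```python
import json

def enrich_request(event, fetch_context):
    """Merge contextual attributes into an inference request payload.

    `fetch_context` is any callable mapping a user_id to a dict of
    context attributes -- in production this might be a DynamoDB or
    ElastiCache lookup; here it is injected to keep the sketch testable.
    """
    body = json.loads(event["body"])
    body["context"] = fetch_context(body["user_id"])
    return {"body": json.dumps(body)}

# Example usage with a stubbed context source
stub = lambda user_id: {"segment": "premium", "user_id": user_id}
enriched = enrich_request(
    {"body": json.dumps({"user_id": "u1", "feature1": 0.5})}, stub
)
print(json.loads(enriched["body"])["context"]["segment"])  # premium
```

The same function body could sit directly in a Lambda handler, with the enriched payload then passed to `invoke_endpoint` as shown in the snippet above.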
COTI MCP Server
Enables AI applications to interact with the COTI blockchain for private token operations, supporting account management, private ERC20/ERC721 tokens, and secure transactions using Multi-Party Computation (MPC) technology.
MCP Neo4j Knowledge Graph Memory Server
☢️ NOT READY DO NOT USE ☢️
long-context-mcp
An MCP server implementing Recursive Language Models (RLM) to process arbitrarily large contexts through a programmatic probe, recurse, and synthesize loop. It enables LLMs to perform multi-step investigations and evidence-backed extraction across massive file sets without being limited by standard context windows.
MY MCP
A template MCP server that provides a basic structure and configuration for building Model Context Protocol servers. Includes Docker support, environment configuration, and examples for both STDIO and SSE transport modes.
Excel MCP Server
Enables conversational data analysis of Excel/CSV files through natural language queries, powered by 395 Excel functions via HyperFormula and multi-provider AI. Supports advanced analytics, bulk operations, financial modeling, and large file processing with intelligent chunking.
OpsNow MCP Cost Server
Kolosal Vision MCP
Provides AI-powered image analysis and OCR capabilities using the Kolosal Vision API. Supports analyzing images from URLs, local files, or base64 data with natural language queries for object detection, scene description, text extraction, and visual assessment.
TOTP MCP Server
Generates time-based one-time password (TOTP) 2FA codes for configured accounts, enabling Claude to automate workflows requiring two-factor authentication.
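For context, TOTP generation itself (RFC 6238) is a small, self-contained algorithm: an HMAC-SHA1 over a 30-second counter derived from the clock, dynamically truncated to 6-8 digits. A stdlib-only sketch of the generic algorithm (this illustrates the standard, not this server's actual implementation):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, digits=6, period=30, now=None):
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((now if now is not None else time.time()) // period)
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)
```

With the RFC 6238 test secret (`"12345678901234567890"` in base32) and `now=59`, this yields the published vector `94287082` for 8 digits.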
Python Mcp Server Sample
Model Context Protocol (MCP) MSPaint App Automation
Okay, this is a more complex request involving inter-process communication (MCP), mathematical problem solving, and integration with MSPaint. Here's a conceptual outline and a simplified Python example to illustrate the core ideas. Keep in mind that a fully robust solution would require significantly more code and error handling.

**Conceptual Outline**

1. **MCP server (Python):**
   * Listens for incoming connections on a specific port.
   * Receives a mathematical problem (as a string) from the client.
   * Parses and solves the problem.
   * Generates a solution string (including steps).
   * Sends the solution string back to the client.
2. **MCP client (Python):**
   * Connects to the MCP server.
   * Prompts the user to enter a math problem.
   * Sends the problem to the server and receives the solution.
   * Creates a temporary image file (e.g., using PIL/Pillow), draws the solution text onto it, and saves it.
   * Opens the image in MSPaint using `subprocess`.

**Simplified Python Example (Illustrative)**

```python
# server.py
import socket
import threading
import traceback

HOST = '127.0.0.1'  # Standard loopback interface address (localhost)
PORT = 65432        # Port to listen on (non-privileged ports are > 1023)

def solve_problem(problem):
    """A very basic problem solver. Expand this significantly!"""
    try:
        # WARNING: Using eval() is DANGEROUS with untrusted input.
        # This is ONLY for demonstration. Use a proper math parser.
        result = eval(problem)
        return f"Problem: {problem}\nSolution: {result}"
    except Exception as e:
        return f"Error solving problem: {e}\n{traceback.format_exc()}"

def handle_client(conn, addr):
    print(f"Connected by {addr}")
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:
                break
            problem = data.decode()
            print(f"Received problem: {problem}")
            solution = solve_problem(problem)
            conn.sendall(solution.encode())
            print("Sent solution")

def server_main():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind((HOST, PORT))
        s.listen()
        print(f"Server listening on {HOST}:{PORT}")
        while True:
            conn, addr = s.accept()
            thread = threading.Thread(target=handle_client, args=(conn, addr))
            thread.start()

if __name__ == "__main__":
    server_main()
```

```python
# client.py
import socket
import subprocess
from PIL import Image, ImageDraw, ImageFont

HOST = '127.0.0.1'          # The server's hostname or IP address
PORT = 65432                # The port used by the server
IMAGE_FILE = "solution.png"  # Name of the image file

def create_image(text, filename):
    """Creates an image with the given text."""
    img = Image.new('RGB', (800, 600), color='white')  # Adjust size as needed
    d = ImageDraw.Draw(img)
    font = ImageFont.truetype("arial.ttf", 20)  # Or another font you have
    d.text((10, 10), text, fill='black', font=font)
    img.save(filename)

def client_main():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect((HOST, PORT))
        problem = input("Enter a math problem: ")
        s.sendall(problem.encode())
        data = s.recv(4096)  # Increase buffer size if needed
    solution = data.decode()
    print(f"Received solution:\n{solution}")
    create_image(solution, IMAGE_FILE)
    # Open MSPaint (Windows-specific)
    try:
        subprocess.run(["mspaint", IMAGE_FILE])  # more robust than os.system
    except FileNotFoundError:
        print("MSPaint not found. Make sure it's in your PATH.")
    except Exception as e:
        print(f"Error opening MSPaint: {e}")

if __name__ == "__main__":
    client_main()
```

**Key Improvements and Explanations**

* **Error handling:** `try...except` blocks catch potential errors during problem solving, image creation, and MSPaint execution. The server also prints a traceback to help debug server-side errors.
* **Image creation (PIL/Pillow):** Uses the Pillow library to create an image and draw the solution text onto it. Install it with `pip install Pillow`, and specify a font file that exists on your system (e.g., "arial.ttf").
* **MSPaint integration:** Uses `subprocess.run(["mspaint", IMAGE_FILE])` to open the image in MSPaint, which is generally more robust than `os.system`. It also handles the case where MSPaint is not found.
* **Encoding/decoding:** Explicitly encodes and decodes strings when sending data over the socket.
* **Threading (server):** The server uses threads to handle multiple client connections concurrently.
* **`solve_problem` function:** The solver is now a function, making the code more organized. **IMPORTANT:** `eval()` is extremely dangerous with untrusted input; see the warnings below.
* **Buffer size:** The client's receive buffer is 4096 bytes. Adjust as needed based on the expected size of the solution string.

**How to Run**

1. **Save** the code as `server.py` and `client.py`.
2. **Install Pillow:** `pip install Pillow`
3. **Run the server:** open a terminal and run `python server.py`.
4. **Run the client:** open another terminal and run `python client.py`.
5. **Enter a problem** when prompted (e.g., `2 + 2`).
6. The client creates an image with the solution and attempts to open it in **MSPaint**.
**Important Considerations and Next Steps**

* **Security (VERY IMPORTANT):** **DO NOT USE `eval()` IN PRODUCTION CODE!** It is extremely vulnerable to code injection if the input is not carefully sanitized. Use a safe math parsing library such as `ast.literal_eval()` (for very simple expressions) or a more robust library like `sympy`. `ast.literal_eval()` only supports basic Python literals (strings, numbers, tuples, lists, dicts, booleans, `None`); `sympy` is a full-featured symbolic mathematics library.

```python
# Example using ast.literal_eval (SAFER for simple expressions)
import ast

def solve_problem_safe(problem):
    try:
        result = ast.literal_eval(problem)  # Safer than eval()
        return f"Problem: {problem}\nSolution: {result}"
    except (ValueError, SyntaxError) as e:
        return f"Error: Invalid expression: {e}"

# Example using sympy (for more complex math)
# import sympy
# def solve_problem_sympy(problem):
#     try:
#         parsed_expr = sympy.sympify(problem)  # sympy's default parser
#         result = sympy.simplify(parsed_expr)
#         return f"Problem: {problem}\nSolution: {result}"
#     except Exception as e:
#         return f"Error: {e}"
```

* **Error handling:** Add more comprehensive error handling to both the client and server. Handle socket errors, file I/O errors, and MSPaint errors gracefully.
* **Problem parsing:** Implement a more sophisticated problem parser. Consider a library like `sympy` to handle a wider range of mathematical expressions.
* **Solution formatting:** Improve the formatting of the solution text in the image. Use different fonts, colors, and layout techniques to make it more readable.
* **User interface:** Consider a GUI library like Tkinter, PyQt, or Kivy for a more user-friendly client.
* **MCP protocol:** Define a more formal MCP protocol for communication between the client and server, including message types, error codes, and data formats. Consider JSON or Protocol Buffers for serialization.
* **Platform independence:** The MSPaint integration is Windows-specific. To make the client platform-independent, use a different image viewer available on other operating systems, or let the user specify the viewer.
* **Security (again):** If this will run in any kind of networked environment, think carefully about security: authentication, authorization, and encryption may be necessary.

This expanded example provides a much more solid foundation for building your MCP math problem solver. Remember to prioritize security and error handling as you add more features. Good luck!
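The platform-independence point above can be handled by dispatching on the operating system. A minimal sketch that separates choosing the viewer command from running it, so the dispatch logic is easy to test (the chosen viewers are common defaults, not guaranteed to exist on every system):

```python
import platform
import subprocess

def viewer_command(image_path, system=None):
    """Return the argv list that opens an image on the given OS.

    `system` defaults to the current platform; it is a parameter
    only so the dispatch can be exercised without launching anything.
    """
    system = system or platform.system()
    if system == "Windows":
        return ["mspaint", image_path]
    if system == "Darwin":
        return ["open", image_path]       # macOS default opener
    return ["xdg-open", image_path]       # most Linux desktops

def open_image(image_path):
    """Open the image with the platform-appropriate viewer."""
    subprocess.run(viewer_command(image_path))
```

In the client above, the `subprocess.run(["mspaint", IMAGE_FILE])` call would simply become `open_image(IMAGE_FILE)`.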
MCP Firebird
A server implementing Anthropic's Model Context Protocol (MCP) for Firebird SQL databases, allowing Claude and other LLMs to securely access, analyze, and manipulate data in Firebird databases through natural language.
Sassy Fact Check Bot
Generates witty, citation-backed responses to health myths and misinformation with automatic tone adjustment for sensitive topics. Integrates with Instagram DMs to fact-check viral claims with sass and sources.
🔥🔒 Awesome MCP (Model Context Protocol) Security 🖥️
Voice Call MCP Server
A Model Context Protocol server that lets AI assistants like Claude initiate and manage real-time voice calls using Twilio and OpenAI's voice models.
EduChain MCP Server
Integrates the EduChain library with Claude Desktop to generate educational content such as multiple-choice questions, lesson plans, and flashcards. It utilizes Gemini LLMs through LangChain to provide local and secure content generation tools.
MCP-Odoo
A bridge that allows AI agents to access and manipulate Odoo ERP data through a standardized Model Context Protocol interface, supporting partner information, accounting data, financial records reconciliation, and invoice queries.
EliteMCP
Analyzes directory structures with .gitignore awareness and executes Python code in secure sandboxed environments. Combines intelligent codebase analysis with safe code execution for development workflows.
Amazon Product Search MCP
Enables AI-powered Amazon product searches and recommendations by integrating the Amazon API with Hugging Face models. It allows users to filter products by price and specific features to receive tailored shopping suggestions.
Domain Availability Checker MCP