Discover Awesome MCP Servers

Extend your agent with 25,308 capabilities via MCP servers.

DeBank MCP Server

Enables querying DeFi data through DeBank API, including wallet balances, token prices, NFT collections, protocol positions, transaction history, gas prices, and security analysis across 93+ blockchains through natural language.

mcp-server-myweight

Exchange Rate MCP Server

A toy MCP server that provides access to exchange rate data from the Norges Bank API.

Bilibili MCP Server

Enables interaction with Bilibili (B站) platform through API and web scraping. Supports video search, article search, video info retrieval, comment fetching, danmaku extraction, and article content access.

Text-Toolkit

An MCP server that provides text conversion, formatting, and analysis functions, which can be directly integrated into the development workflow.

PsyFlow-MCP

A lightweight FastMCP server that enables language models to discover, clone, transform, and localize PsyFlow task templates through a streamlined workflow with standardized tools.

Minecraft MCP Server

A client library that connects AI agents to Minecraft servers, providing full game control with 30 verified skills for common tasks including movement, combat, crafting, and building.

Google Docs MCP Server

Enables AI assistants to create, read, edit, and manage Google Docs and Drive files with support for formatting, comments, tables, images, and bulk operations.

MCP Perplexity Server

Provides AI-powered search, research, and reasoning capabilities through integration with Perplexity.ai, offering three specialized tools: general conversational AI, deep research with citations, and advanced reasoning.

cobroya

Connects AI agents to Mercado Pago, the leading payment platform in Latin America. Create payment links, search payments, get payment details, issue refunds, and retrieve merchant info.

MCP Master Puppeteer

An advanced MCP server for browser automation using Puppeteer, specifically optimized for token efficiency through minimal data returns and progressive enhancement. It enables agents to navigate pages, capture LLM-optimized screenshots, extract structured content, and perform batch interactions.

Washington Law MCP Server

Provides offline access to Washington State's Revised Code of Washington (RCW) and Washington Administrative Code (WAC) for AI agents. Enables fast retrieval, full-text search, and navigation of all Washington state laws through natural language queries.

AI Intervention Agent

Enables real-time user intervention for MCP agents through a web UI, allowing users to review context and provide feedback when AI agents drift from intent, keeping them on track.

GitLab MCP Server

Enables AI assistants to interact with GitLab projects, allowing users to query merge requests, review discussions, view pipeline test results, search by branch, and respond to comments through natural language.

Toy MCP Server

A simple reference implementation demonstrating MCP server basics with two toy tools: generating random animals and simulating 20-sided die rolls.
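
For readers new to MCP, a server in this spirit is only a few lines with the official Python SDK's FastMCP helper. The sketch below is illustrative only: it is not taken from this repository, and the tool names `roll_d20` and `random_animal` are hypothetical stand-ins for the tools described above.

```python
import random

from mcp.server.fastmcp import FastMCP

# A minimal MCP server exposing two toy tools, in the spirit of the entry above.
mcp = FastMCP("toy-server")


@mcp.tool()
def roll_d20() -> int:
    """Simulate a 20-sided die roll."""
    return random.randint(1, 20)


@mcp.tool()
def random_animal() -> str:
    """Return a randomly chosen animal name."""
    return random.choice(["axolotl", "capybara", "heron", "okapi", "wombat"])


if __name__ == "__main__":
    # Serve over stdio so an MCP client (e.g., Claude Desktop) can launch it.
    mcp.run()
```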

clipboard-mcp

MCP server that reads and writes the system clipboard — tables, text, code, JSON, URLs, images, and more. Preserves spreadsheet structure (rows/columns) that is lost when pasting into Claude directly.

kintone MCP Server (Python3)

Enables AI assistants to interact with kintone data by providing comprehensive tools for record CRUD operations, file management, and workflow status updates. It supports secure authentication and automatic pagination to handle large datasets efficiently through the Model Context Protocol.

Gelbooru MCP

A Python MCP server that wraps the Gelbooru API. Connect it to any MCP-compatible client (Claude Desktop, Cursor, etc.) to search posts, look up tags, and generate Stable Diffusion prompts from real character appearance data — all directly from your AI assistant.

MinionWorks – Modular browser agents that work for bananas 🍌

MCP Adapter

Automatically converts OpenAPI specifications into Model Context Protocol applications, enabling HTTP APIs to be managed as MCP services. It features a dynamic architecture that monitors file systems or Kubernetes ConfigMaps to update MCP tools in real-time.

BitScale MCP Server

Connects your BitScale workspace to Claude, allowing you to manage grids, fetch row data, and trigger enrichment runs via natural language. It provides tools for listing grids, checking run status, and retrieving workspace details.

Perplexity MCP Server

Integrates Perplexity AI's search-enhanced language models with Claude Desktop, providing three tools with different complexity levels for quick fact-checking, technical analysis, and deep research.

POHODA MCP Server

An MCP server that integrates with Stormware POHODA accounting software via the mServer XML API. It provides 48 tools to manage invoices, stock, orders, warehouse documents, and accounting reports through any MCP-compatible client.

Cold Email Assistant

Automates cold email outreach for job applications by parsing job postings, generating personalized emails using AI, and sending them or saving them as drafts in Gmail with resume attachments.

Vercel MCP Server Template

A starter template for deploying Model Context Protocol (MCP) servers on Vercel using TypeScript and Vercel Functions. It includes example tools for rolling dice and checking weather to demonstrate tool integration patterns.

Directmedia MCP

Provides programmatic access to the Directmedia Publishing 'Digitale Bibliothek' collection, a 1990s German electronic book library containing 101 volumes of classic literature and philosophy with text extraction, search, and navigation capabilities.

Sample Model Context Protocol Demos

Here's a collection of examples and concepts related to using the Model Context Protocol (MCP) with AWS, focusing on how it can be applied and what benefits it offers. Keep in mind that the Model Context Protocol is a relatively new and evolving concept, and its adoption within AWS services may vary. This overview covers the general principles and potential applications.

**Understanding Model Context Protocol (MCP)**

The Model Context Protocol aims to provide a standardized way for models to access contextual information during inference. This context can include:

* **User Information:** User ID, location, preferences.
* **Session Information:** Current session ID, history of interactions.
* **Device Information:** Device type, operating system.
* **Environment Information:** Time of day, weather conditions.
* **External Data:** Real-time data from databases, APIs, or other services.

The goal is to make models more aware of their environment, leading to more accurate and personalized predictions. Instead of hardcoding context into the model or passing it directly in the inference request, MCP provides a structured and potentially more efficient way to manage and access this information.

**How MCP Could Be Used with AWS Services**

While a direct, fully-fledged "MCP service" might not exist as a standalone AWS offering, the principles of MCP can be implemented and leveraged using various AWS services. Here's how:

1. **Amazon SageMaker:**
   * **Custom Inference Containers:** You can build custom inference containers for SageMaker that implement the MCP. This involves:
     * **Defining a Context Provider:** A component within your container that fetches context data from various sources (e.g., DynamoDB, Redis, external APIs).
     * **Integrating with the Model:** Modifying your model's inference code to query the context provider for relevant information before making predictions.
     * **Deployment:** Deploying the container to SageMaker endpoints.
   * **SageMaker Inference Pipelines:** You can create inference pipelines where one step is dedicated to fetching and preparing context data. This step could use AWS Lambda or a custom processing container. The output of this step is then passed to the model inference step.
   * **SageMaker Feature Store:** While not directly MCP, SageMaker Feature Store provides a centralized repository for features that can be used as context. Your inference code can retrieve features from the Feature Store based on a key (e.g., user ID) and use them during inference. This is a common way to provide contextual information.
   * **Example Scenario:** A recommendation engine deployed on SageMaker. The inference container uses the user ID from the request to query a DynamoDB table (acting as a context provider) for the user's past purchase history, browsing behavior, and demographic information. This information is then fed into the recommendation model to generate personalized recommendations.
2. **AWS Lambda:**
   * **Context Enrichment:** Lambda functions can be used to enrich incoming inference requests with context data. The Lambda function receives the initial request, fetches context from various sources (e.g., DynamoDB, API Gateway, S3), and then passes the augmented request to the model endpoint (e.g., a SageMaker endpoint).
   * **Example Scenario:** An image recognition service. The Lambda function receives an image upload request. It then uses the user's location (obtained from the request headers or a user profile) to fetch weather data from an external API. The weather data is added to the request payload and sent to the image recognition model, which might use this information to improve its accuracy (e.g., recognizing objects that are more likely to be present in certain weather conditions).
3. **Amazon API Gateway:**
   * **Request Transformation:** API Gateway can be configured to transform incoming requests and add context information. This can involve extracting data from request headers, query parameters, or even making calls to other AWS services (e.g., Lambda) to fetch context data.
   * **Example Scenario:** A fraud detection service. API Gateway receives a transaction request. It extracts the user's IP address and device information from the request headers. It then uses a Lambda function to geolocate the IP address and identify the device type. This information is added to the request payload and sent to the fraud detection model.
4. **Amazon DynamoDB:**
   * **Context Storage:** DynamoDB can be used as a fast and scalable storage solution for context data. You can store user profiles, session information, and other relevant data in DynamoDB and retrieve it during inference.
   * **Example Scenario:** A personalized marketing campaign. The model needs to predict the likelihood of a user clicking on an ad. DynamoDB stores user profiles with information such as age, gender, interests, and past interactions with ads. The inference code retrieves this information from DynamoDB and uses it to personalize the ad prediction.
5. **Amazon ElastiCache (Redis/Memcached):**
   * **Caching Context Data:** ElastiCache can be used to cache frequently accessed context data, reducing latency and improving performance. This is particularly useful for context data that is relatively static or changes infrequently.
   * **Example Scenario:** A real-time bidding (RTB) system. The model needs to predict the value of an ad impression. ElastiCache stores frequently accessed data such as user demographics, website categories, and ad performance metrics. The inference code retrieves this information from ElastiCache to make a fast and accurate bid.

**Key Considerations for Implementing MCP-like Functionality on AWS:**

* **Data Consistency:** Ensure that the context data is consistent and up to date. Use appropriate caching strategies and data synchronization mechanisms.
* **Latency:** Minimize the latency of fetching context data. Use fast storage solutions (e.g., DynamoDB, ElastiCache) and optimize your queries.
* **Security:** Protect the context data from unauthorized access. Use appropriate authentication and authorization mechanisms.
* **Scalability:** Design your system to scale to handle a large number of inference requests. Use scalable AWS services such as DynamoDB, Lambda, and API Gateway.
* **Cost Optimization:** Optimize the cost of fetching and storing context data. Use appropriate caching strategies and choose the most cost-effective AWS services.
* **Monitoring and Logging:** Monitor the performance of your system and log any errors. Use AWS CloudWatch to monitor metrics and logs.

**Example Code Snippet (Conceptual - Python with Boto3):**

```python
import boto3
import json

# Assume you have a SageMaker endpoint and a DynamoDB table for user context
sagemaker_client = boto3.client('sagemaker-runtime')
dynamodb_client = boto3.client('dynamodb')


def get_user_context(user_id):
    """Fetches user context from DynamoDB."""
    try:
        response = dynamodb_client.get_item(
            TableName='user_context_table',
            Key={'user_id': {'S': user_id}}
        )
        if 'Item' in response:
            return response['Item']
        else:
            return None  # User not found
    except Exception as e:
        print(f"Error fetching user context: {e}")
        return None


def invoke_sagemaker_endpoint(user_id, input_data):
    """Invokes the SageMaker endpoint with user context."""
    user_context = get_user_context(user_id)
    if user_context:
        # Transform DynamoDB item to a more usable format (e.g., a dictionary)
        context_data = {k: list(v.values())[0] for k, v in user_context.items()}  # Simple conversion, adjust as needed
        # Augment the input data with context
        input_data['context'] = context_data

    # Convert input data to JSON for SageMaker
    payload = json.dumps(input_data)
    try:
        response = sagemaker_client.invoke_endpoint(
            EndpointName='your-sagemaker-endpoint',
            ContentType='application/json',
            Body=payload
        )
        result = json.loads(response['Body'].read().decode())
        return result
    except Exception as e:
        print(f"Error invoking SageMaker endpoint: {e}")
        return None


# Example usage
user_id = 'user123'
input_data = {'feature1': 0.5, 'feature2': 0.8}  # Initial input data
prediction = invoke_sagemaker_endpoint(user_id, input_data)
if prediction:
    print(f"Prediction: {prediction}")
else:
    print("Failed to get prediction.")
```

**Explanation of the Code:**

1. **`get_user_context(user_id)`:** This function retrieves user context from a DynamoDB table based on the `user_id`. It uses the `boto3` library to interact with DynamoDB. Error handling is included; it returns `None` if the user is not found or if there's an error. The conversion of the DynamoDB item to a dictionary is a crucial step, and you'll need to adapt it based on the structure of your DynamoDB data.
2. **`invoke_sagemaker_endpoint(user_id, input_data)`:** This function orchestrates the process:
   * It calls `get_user_context()` to retrieve the user's context.
   * If context is found, it augments the `input_data` with the context information. This is where you'd structure the context data to be compatible with your model's input requirements.
   * It converts the augmented `input_data` to a JSON payload.
   * It invokes the SageMaker endpoint using the `sagemaker-runtime` client.
   * It parses the response from the endpoint and returns the result. Error handling is included.
3. **Example Usage:** Shows how to call the `invoke_sagemaker_endpoint` function with a `user_id` and some initial `input_data`.

**Important Notes:**

* **Replace Placeholders:** You *must* replace the placeholder values (e.g., `'user_context_table'`, `'your-sagemaker-endpoint'`) with your actual resource names.
* **IAM Permissions:** Ensure that your Lambda function (or the IAM role associated with your SageMaker endpoint) has the necessary IAM permissions to access DynamoDB and invoke the SageMaker endpoint.
* **Data Transformation:** The way you transform the DynamoDB item into a dictionary (or other format) will depend on the structure of your data and the expected input format of your model. Pay close attention to this step.
* **Error Handling:** The code includes basic error handling, but you should add more robust error handling and logging in a production environment.
* **Context Data Structure:** The structure of the `context_data` dictionary should match the expected input format of your model. You might need to perform additional data transformations to ensure compatibility.
* **Alternative Context Sources:** You can easily adapt the `get_user_context` function to fetch context data from other sources, such as ElastiCache, S3, or external APIs.

**Benefits of Using MCP Principles with AWS:**

* **Improved Model Accuracy:** By providing models with access to relevant context, you can improve their accuracy and make more informed predictions.
* **Personalization:** MCP enables you to personalize model predictions based on user preferences, location, and other contextual factors.
* **Flexibility:** You can easily update and modify the context data without retraining the model.
* **Scalability:** AWS services provide the scalability and reliability needed to handle a large number of inference requests.
* **Centralized Context Management:** You can manage context data in a centralized location, making it easier to maintain and update.

In summary, while a dedicated "Model Context Protocol" service might not be explicitly available on AWS, you can effectively implement the principles of MCP by leveraging various AWS services such as SageMaker, Lambda, API Gateway, DynamoDB, and ElastiCache. The key is to design a system that allows your models to access and utilize relevant context data during inference, leading to more accurate and personalized predictions. The example code provides a starting point for building such a system. Remember to adapt the code and architecture to your specific use case and requirements.
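
As a complementary illustration of the Lambda-based context-enrichment pattern described in item 2 above, here is a minimal sketch of a Lambda handler that looks up session context in DynamoDB and forwards the enriched payload to a SageMaker endpoint. It is conceptual, not code from this repository; the table name `session_context_table`, the endpoint name `your-sagemaker-endpoint`, and the event fields are hypothetical placeholders.

```python
import json

import boto3

# Hypothetical resource names; replace with your own.
dynamodb = boto3.client('dynamodb')
sagemaker_runtime = boto3.client('sagemaker-runtime')


def handler(event, context):
    """Enrich an inference request with session context, then invoke the model."""
    body = json.loads(event.get('body', '{}'))
    session_id = body.get('session_id', 'unknown')

    # Fetch session context (e.g., recent interactions) from DynamoDB.
    item = dynamodb.get_item(
        TableName='session_context_table',
        Key={'session_id': {'S': session_id}}
    ).get('Item', {})

    # Flatten DynamoDB attribute values into plain strings/numbers.
    body['context'] = {k: list(v.values())[0] for k, v in item.items()}

    # Forward the augmented payload to the model endpoint.
    response = sagemaker_runtime.invoke_endpoint(
        EndpointName='your-sagemaker-endpoint',
        ContentType='application/json',
        Body=json.dumps(body)
    )
    prediction = json.loads(response['Body'].read().decode())

    return {'statusCode': 200, 'body': json.dumps(prediction)}
```

Placed behind an API Gateway Lambda proxy integration, this keeps the context lookup out of the model container, at the cost of one extra DynamoDB read per request.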

OpsNow MCP Cost Server

MCP Adobe Experience Platform Server

A Node.js server that provides a comprehensive API interface for Adobe Experience Platform (AEP) integration. It enables users to manage schemas, datasets, segments, and profiles while supporting data ingestion and query services.

Python Mcp Server Sample
