Discover Awesome MCP Servers

Extend your agent with 20,526 capabilities via MCP servers.

play-sound-mcp-server

An MCP server that plays local sound files on macOS using the afplay command, allowing AI assistants to trigger audio notifications after responding.
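A minimal sketch of what such a tool might do under the hood, assuming the server shells out to macOS's `afplay` (the helper names `build_afplay_command` and `play_sound` are illustrative, not the project's actual API):

```python
import shutil
import subprocess
from pathlib import Path

def build_afplay_command(sound_path, volume=None):
    """Builds the afplay argument list for a local sound file (macOS only)."""
    cmd = ["afplay", str(Path(sound_path))]
    if volume is not None:
        cmd += ["-v", str(volume)]  # afplay's -v flag sets playback volume
    return cmd

def play_sound(sound_path, volume=None):
    """Plays a sound if afplay is available; returns False off-macOS."""
    if shutil.which("afplay") is None:
        return False  # afplay is macOS-only
    subprocess.run(build_afplay_command(sound_path, volume), check=True)
    return True
```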

BigQuery FinOps MCP Server

Enables cost optimization and financial operations for Google BigQuery through natural language interactions. Provides insights into BigQuery spending, usage patterns, and cost management recommendations.

Model Context Protocol servers

3D Asset Processing MCP

Enables processing, validation, optimization, and analysis of 3D models with glTF/GLB support, including format conversion, compression (Meshopt/Draco), texture optimization, and detailed model statistics.

News MCP Server

Aggregates news from 7 APIs and unlimited RSS feeds with AI-powered bias removal and synthesis. Provides over 7,300 free daily requests with conversation-aware caching and 25 comprehensive news analysis tools.

Dokploy MCP Server

Enables AI assistants to manage Dokploy deployments, including creating and deploying applications, managing databases, configuring domains with SSL, and monitoring application status through a standardized interface.

🏆 Audiense Demand MCP Server

This MCP helps you connect to your Audiense Demand account. It provides tools for creating and analyzing demand reports, tracking entity performance, and gathering insights across different channels and countries.

WikiJS MCP Server

Enables AI assistants to search and retrieve content from WikiJS knowledge bases, allowing integration with your Wiki through simple search and retrieval tools.

Okta MCP Server

Enables LLM agents to manage Okta organizations through natural language, providing full CRUD operations for users, groups, applications, policies, and system logs via Okta's Admin Management APIs.

Sentry Issues MCP

A server that makes Sentry issues retrievable through two simple tools: fetch a specific issue by URL or ID, or fetch a list of issues from a project.

Healthcare MCP Server

A Model Context Protocol server that provides AI assistants with access to healthcare data tools, including FDA drug information, PubMed research, health topics, clinical trials, and medical terminology lookup.

MCP-server

llms-txt-mcp

Enables fast, token-efficient access to large documentation files in llms.txt format through semantic search. Solves token limit issues by searching first and retrieving only relevant sections instead of dumping entire documentation.
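The search-first idea can be sketched in a few lines, assuming sections are delimited by markdown-style `#` headings and relevance is naive keyword overlap (the real server uses semantic search; `split_sections` and `best_section` are hypothetical names for illustration):

```python
def split_sections(text):
    """Splits llms.txt-style docs into (heading, body) pairs at '#' headings."""
    sections, heading, body = [], "", []
    for line in text.splitlines():
        if line.startswith("#"):
            if heading or body:
                sections.append((heading, "\n".join(body).strip()))
            heading, body = line.lstrip("# ").strip(), []
        else:
            body.append(line)
    sections.append((heading, "\n".join(body).strip()))
    return sections

def best_section(text, query):
    """Returns only the section sharing the most words with the query,
    instead of dumping the entire document into the context window."""
    words = set(query.lower().split())
    def score(sec):
        return len(words & set((sec[0] + " " + sec[1]).lower().split()))
    return max(split_sections(text), key=score)
```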

AMiner MCP Server

Enables academic paper search and analysis through the AMiner API. Supports keyword, author, and venue-based searches with advanced filtering and citation data for research assistance.

Bitbucket MCP Server

An MCP server that enables interaction with Bitbucket repositories through the Model Context Protocol, supporting both Bitbucket Cloud and Server with features for PR lifecycle management and code review.

Crypto Trader MCP Tool

Provides cryptocurrency market data using the CoinGecko API.
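As a sketch of the kind of call such a tool wraps, here is a minimal query against CoinGecko's public `/simple/price` endpoint (the helper names are illustrative; network access and CoinGecko's rate limits apply to the fetch):

```python
import json
import urllib.parse
import urllib.request

API_BASE = "https://api.coingecko.com/api/v3"  # CoinGecko's free public API

def build_price_url(coin_ids, vs_currencies):
    """Builds the /simple/price URL for the given coins and quote currencies."""
    query = urllib.parse.urlencode({
        "ids": ",".join(coin_ids),
        "vs_currencies": ",".join(vs_currencies),
    })
    return f"{API_BASE}/simple/price?{query}"

def fetch_prices(coin_ids, vs_currencies):
    """Fetches current prices as a dict, e.g. {'bitcoin': {'usd': ...}}."""
    with urllib.request.urlopen(build_price_url(coin_ids, vs_currencies)) as resp:
        return json.load(resp)
```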

PDF Redaction MCP Server

Enables loading, reviewing, and redacting sensitive content in PDF documents through text-based or area-based redaction methods. Supports customizable redaction appearance and saves redacted PDFs with comprehensive error handling.

ncbi-mcp

An MCP server for the National Center for Biotechnology Information (NCBI) at the U.S. National Institutes of Health (NIH).

url-download-mcp

A Model Context Protocol (MCP) server that enables AI assistants to download files from URLs to the local filesystem.
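A minimal sketch of the underlying operation, assuming the local filename is derived from the URL path (the helpers `filename_from_url` and `download` are illustrative names, not the server's actual tool surface):

```python
import os
import urllib.parse
import urllib.request

def filename_from_url(url, default="download.bin"):
    """Derives a local filename from the URL path, falling back to a default."""
    name = os.path.basename(urllib.parse.urlparse(url).path)
    return name or default

def download(url, dest_dir="."):
    """Downloads the URL to dest_dir and returns the local path."""
    dest = os.path.join(dest_dir, filename_from_url(url))
    urllib.request.urlretrieve(url, dest)  # streams the response body to disk
    return dest
```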

Delphi Build Server

Enables building and cleaning Delphi projects (.dproj/.groupproj) on Windows using MSBuild with RAD Studio environment initialization. Supports both individual projects and group projects with configurable build configurations and platforms.

Black Orchid

A hot-reloadable MCP proxy server that enables users to create and manage custom Python tools through dynamic module loading. Users can build their own utilities, wrap APIs, and extend functionality by simply adding Python files to designated folders.

Dummy MCP Server

A simple Model Context Protocol server built with the FastMCP framework that provides 'echo' and 'dummy' tools via Server-Sent Events for demonstration and testing purposes.

Claude Jester MCP

Transforms Claude from a code generator into a programming partner capable of testing, debugging, and optimizing code automatically through a secure execution environment.

@depthark/css-first

This server integrates with Mozilla Developer Network (MDN) documentation to suggest CSS properties, check browser support, and provide implementation guidance with user consent mechanisms.

MCP Cheat Engine Server

Provides safe, read-only access to memory analysis and debugging functionality through the Model Context Protocol, allowing users to examine computer memory for software development, security research, and educational purposes.

Openfort MCP Server

Enables AI assistants to interact with Openfort's wallet infrastructure, allowing them to create projects, manage configurations, generate wallets and users, and query documentation through 42 integrated tools.

面试鸭 MCP Server

An MCP Server service built on Spring AI for searching 面试鸭 (Mianshiya) interview questions, enabling AI to quickly look up real company interview questions and answers.

Pearch

This project provides a tool for searching for people using the Pearch.ai API, implemented as a FastMCP service.

MCPStudio: The Postman for Model Context Protocol

Postman for MCP servers.

Aws Sample Gen Ai Mcp Server

A sample code snippet demonstrating how to use Gen-AI (Bedrock) with an MCP (Message Control Protocol) server. This example focuses on the core concepts and assumes you have the necessary libraries and configurations set up. It's a simplified illustration and will need adaptation based on your specific MCP server and Bedrock use case.

**Conceptual Overview**

1. **MCP Server:** This acts as a central point for receiving requests. It could be a simple TCP server or a more sophisticated message queue system. The code below uses a basic TCP server for demonstration.
2. **Bedrock (Gen-AI):** This is where the AI model resides. You'll use the Bedrock API to send prompts and receive responses.
3. **Workflow:**
   * The MCP server receives a request (e.g., a text prompt).
   * The server forwards the prompt to Bedrock.
   * Bedrock processes the prompt and returns a response.
   * The server sends the response back to the client.

**Python Example (using `socket` for MCP and `boto3` for Bedrock)**

```python
import socket
import boto3
import json

# Configuration (replace with your actual values)
MCP_HOST = 'localhost'        # Or your MCP server's IP address
MCP_PORT = 12345              # Or your MCP server's port
BEDROCK_REGION = 'us-east-1'  # Or your Bedrock region
BEDROCK_MODEL_ID = 'anthropic.claude-v2'  # Or your desired Bedrock model ID
ACCEPTABLE_ORIGINS = ["localhost", "127.0.0.1"]  # Add any other acceptable origins here

# Initialize Bedrock client
bedrock = boto3.client(service_name='bedrock-runtime', region_name=BEDROCK_REGION)

def handle_request(client_socket, client_address):
    """Handles a single request from a client."""
    try:
        data = client_socket.recv(1024).decode('utf-8')
        if not data:
            return  # Client disconnected
        print(f"Received from {client_address}: {data}")

        # Check origin (very basic example - improve this for production!)
        try:
            request_json = json.loads(data)
            origin = request_json.get("origin", None)
            prompt = request_json.get("prompt", None)
        except json.JSONDecodeError:
            print("Invalid JSON received")
            client_socket.sendall("Invalid JSON".encode('utf-8'))
            return

        if origin not in ACCEPTABLE_ORIGINS:
            print(f"Request from unacceptable origin: {origin}")
            client_socket.sendall("Origin not allowed".encode('utf-8'))
            return

        if not prompt:
            print("No prompt provided")
            client_socket.sendall("No prompt provided".encode('utf-8'))
            return

        # Call Bedrock
        try:
            response = invoke_bedrock(prompt)
            client_socket.sendall(response.encode('utf-8'))
        except Exception as e:
            print(f"Bedrock error: {e}")
            client_socket.sendall(f"Bedrock error: {e}".encode('utf-8'))
    except Exception as e:
        print(f"Error handling request: {e}")
    finally:
        client_socket.close()

def invoke_bedrock(prompt):
    """Invokes the Bedrock model with the given prompt."""
    # Construct the request body (adjust based on the model)
    body = json.dumps({
        "prompt": prompt,
        "max_tokens_to_sample": 200,  # Adjust as needed
        "temperature": 0.5,           # Adjust as needed
        "top_p": 0.9                  # Adjust as needed
    })
    try:
        response = bedrock.invoke_model(
            modelId=BEDROCK_MODEL_ID,
            contentType='application/json',
            accept='application/json',
            body=body
        )
        response_body = json.loads(response['body'].read().decode('utf-8'))
        completion = response_body['completion']  # Adjust based on model's response format
        return completion
    except Exception as e:
        print(f"Error invoking Bedrock: {e}")
        return f"Error: {e}"

def start_mcp_server():
    """Starts the MCP server."""
    server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server_socket.bind((MCP_HOST, MCP_PORT))
    server_socket.listen(5)  # Listen for up to 5 incoming connections
    print(f"MCP server listening on {MCP_HOST}:{MCP_PORT}")
    while True:
        client_socket, client_address = server_socket.accept()
        print(f"Accepted connection from {client_address}")
        handle_request(client_socket, client_address)  # Handle the request in a separate function

if __name__ == "__main__":
    start_mcp_server()
```

**Explanation:**

* **Imports:** Imports the necessary libraries (`socket`, `boto3`, `json`).
* **Configuration:** Sets up configuration variables for the MCP server address, port, Bedrock region, and model ID. **Crucially, replace these with your actual values.**
* **`handle_request()`:**
  * Receives data from the client socket and decodes it (assuming UTF-8 encoding).
  * **Important:** Includes a very basic origin check. **This is a placeholder and needs to be significantly improved for any production environment.** You should implement robust authentication and authorization. The example expects a JSON payload with `origin` and `prompt` fields.
  * Calls `invoke_bedrock()` to send the prompt to Bedrock.
  * Sends the response back to the client, handles potential errors, and closes the client socket.
* **`invoke_bedrock()`:**
  * Constructs the request body for the Bedrock API. **This is highly model-dependent.** The example shows a basic structure for Anthropic Claude. Consult the Bedrock documentation for the specific model you're using to determine the correct request format.
  * Calls the `bedrock.invoke_model()` method.
  * Parses the response from Bedrock. **Again, the response format is model-dependent.** The example assumes a `completion` field in the response.
  * Handles potential errors.
* **`start_mcp_server()`:**
  * Creates a TCP socket, binds it to the specified host and port, and listens for incoming connections.
  * Accepts connections in a loop and calls `handle_request()` to process each connection.
* **`if __name__ == "__main__":`:** Starts the MCP server when the script is run.

**How to Run:**

1. **Install Libraries:**
   ```bash
   pip install boto3
   ```
2. **Configure AWS Credentials:** Make sure you have configured your AWS credentials (e.g., using `aws configure` or environment variables) so that `boto3` can access Bedrock. The IAM role or user you're using must have permissions to invoke the Bedrock model.
3. **Replace Placeholders:** Update the configuration variables at the top of the script with your actual values.
4. **Run the Script:**
   ```bash
   python your_script_name.py
   ```
5. **Test with a Client:** You'll need a client application to send requests to the MCP server. Here's a simple Python client example:

```python
import socket
import json

MCP_HOST = 'localhost'
MCP_PORT = 12345

def send_request(prompt, origin):
    """Sends a request to the MCP server."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect((MCP_HOST, MCP_PORT))
        message = json.dumps({"prompt": prompt, "origin": origin})
        s.sendall(message.encode('utf-8'))
        data = s.recv(1024)
        print(f"Received: {data.decode('utf-8')}")

if __name__ == "__main__":
    prompt = "Write a short poem about the ocean."
    origin = "localhost"  # Or "127.0.0.1"
    send_request(prompt, origin)
```

**Important Considerations:**

* **Error Handling:** The error handling in the example is basic. You should implement more robust error handling, including logging and retries.
* **Security:** The origin check is extremely basic. For production environments, you *must* implement proper authentication and authorization to prevent unauthorized access. Consider using TLS/SSL for secure communication.
* **Scalability:** For high-volume traffic, consider a more scalable MCP server architecture, such as a message queue (e.g., RabbitMQ, Kafka) or a load balancer. You might also need to scale your Bedrock usage.
* **Bedrock Model Configuration:** The `invoke_bedrock()` function needs to be carefully configured for the specific Bedrock model you're using. Refer to the Bedrock documentation for the model's input and output formats, available parameters, and best practices.
* **Asynchronous Processing:** For better performance, consider asynchronous programming (e.g., `asyncio`) to handle multiple requests concurrently.
* **Rate Limiting:** Be aware of Bedrock's rate limits and implement appropriate rate limiting in your MCP server to avoid exceeding them.
* **Data Validation:** Validate the data received from clients to prevent malicious input.
* **Logging:** Implement comprehensive logging to track requests, responses, and errors.

This example provides a starting point. You'll need to adapt it to your specific requirements and environment. Remember to prioritize security, error handling, and scalability as you develop your application.