Discover Awesome MCP Servers

Extend your agent with 28,691 capabilities via MCP servers.

GitHub Summary MCP

Generates daily GitHub work summaries by analyzing commits across all repositories a user owns or contributes to. It provides tools to fetch today's commit activity and deduplicate repository-specific updates for easy reporting.
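
The grouping-and-deduplication step it describes can be sketched with stdlib tools (illustrative data and field names, not the server's actual code):

```python
from collections import defaultdict

commits = [
    {"repo": "org/app", "message": "Fix login bug"},
    {"repo": "org/app", "message": "Fix login bug"},   # duplicate update
    {"repo": "org/lib", "message": "Add retry helper"},
]

# Group today's commits by repository, dropping repeated messages per repo
by_repo = defaultdict(list)
for c in commits:
    if c["message"] not in by_repo[c["repo"]]:
        by_repo[c["repo"]].append(c["message"])

print(dict(by_repo))  # {'org/app': ['Fix login bug'], 'org/lib': ['Add retry helper']}
```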

Model Context Protocol servers

MCP Sampling Demo

Demonstrates how to implement sampling in MCP servers, allowing tools to request LLM content generation from the client without requiring external API integrations or credentials.
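
At the protocol level, sampling works by the server sending a `sampling/createMessage` JSON-RPC request to the client, which runs the LLM call on its side. A minimal sketch of such a request, per the MCP specification (field values here are illustrative placeholders):

```python
import json

# JSON-RPC request a server sends to ask the client for an LLM completion
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {"role": "user", "content": {"type": "text", "text": "Summarize this diff."}},
        ],
        "maxTokens": 200,
    },
}

print(json.dumps(request, indent=2))
```

Because the client performs the actual generation, the server needs no model API keys of its own.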

Berghain Events MCP Server

A server that allows AI agents to query and retrieve information about upcoming events at Berghain nightclub through a DynamoDB-backed FastAPI service.

USDC MCP Server

Enables AI agents and LLMs to interact with USDC API endpoints through a standardized Model Context Protocol interface. It provides tools for efficient async handling and seamless integration of USDC functionalities into automated workflows.
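
The async handling it mentions amounts to issuing multiple endpoint calls concurrently. A minimal sketch with `asyncio` (the endpoint names and return shape are hypothetical stand-ins):

```python
import asyncio

async def call_endpoint(name: str) -> dict:
    # Stand-in for an async USDC API call; a real client would await an HTTP request
    await asyncio.sleep(0)
    return {"endpoint": name, "ok": True}

async def main():
    # Fan out several calls concurrently and collect results in order
    return await asyncio.gather(call_endpoint("balances"), call_endpoint("transfers"))

results = asyncio.run(main())
print([r["endpoint"] for r in results])  # ['balances', 'transfers']
```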

Obsidian Todos MCP Server

Enables AI assistants to manage tasks within an Obsidian vault by listing, adding, and updating todos via the Local REST API. It allows users to create new todos in daily notes and retrieve task statistics through natural language.

ncbi-mcp

An MCP server for the National Center for Biotechnology Information (NCBI), part of the U.S. National Institutes of Health (NIH).

Delphi Build Server

Enables building and cleaning Delphi projects (.dproj/.groupproj) on Windows using MSBuild with RAD Studio environment initialization. Supports both individual projects and group projects with configurable build configurations and platforms.

Cloudflare Playwright MCP

Enables AI assistants to perform web automation tasks such as navigation, typing, clicking, and taking screenshots using Playwright on Cloudflare Workers. This server allows LLMs to interact with and control a browser across platforms like Claude Desktop and GitHub Copilot.

Security MCP Server

Enables security scanning of codebases through integrated tools for secret detection, SCA, SAST, and DAST vulnerabilities, with AI-powered remediation suggestions based on findings.

MCP Docs Server

Aggregates documentation from multiple sources (llms.txt format or web scraping) and provides semantic search capabilities using vector embeddings and hybrid search for each documentation source.
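
Hybrid search typically blends a keyword score with an embedding similarity score. A toy illustration of that blending (the weighting scheme and tiny vectors are assumptions, not this server's implementation):

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def hybrid_score(query_terms, doc_terms, query_vec, doc_vec, alpha=0.5):
    """Blend keyword overlap with embedding similarity, weighted by alpha."""
    keyword = len(set(query_terms) & set(doc_terms)) / max(len(set(query_terms)), 1)
    return alpha * keyword + (1 - alpha) * cosine(query_vec, doc_vec)

score = hybrid_score(["vector", "search"], ["vector", "index"], [1.0, 0.0], [1.0, 0.0])
print(round(score, 2))  # 0.75
```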

JSON MCP Server

Provides powerful JSON manipulation tools through Model Context Protocol, enabling complex queries, schema generation, and validation with jq notation and native Node.js operations.
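
As a rough illustration of what a jq-notation query such as `.users[] | select(.active) | .name` expresses, here is the equivalent in plain Python (the sample document is invented):

```python
import json

doc = json.loads('{"users": [{"name": "ada", "active": true}, {"name": "bob", "active": false}]}')

# jq: .users[] | select(.active) | .name
names = [u["name"] for u in doc["users"] if u["active"]]
print(names)  # ['ada']
```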

mcp-instagram-dm

MCP server for Instagram Direct Messages. Read inbox, send messages, search conversations, react to messages, manage pending requests, and more. 15 tools with cookie-based authentication.

Database MCP Server

Provides universal database operations for AI assistants through MCP, supporting 40+ databases including PostgreSQL, MySQL, MongoDB, Redis, and SQLite with built-in introspection tools for schema exploration.
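
Schema introspection of the kind described can be sketched against SQLite's catalog tables (a minimal illustration, not the server's actual code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# List tables from the catalog, then the columns of each table
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
columns = [r[1] for r in conn.execute("PRAGMA table_info(users)")]
print(tables, columns)  # ['users'] ['id', 'name']
```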

NEXUS Memory MCP App

A sovereign, six-layer permanent memory system that provides users with a structured and persistent personal knowledge base across VS Code, Claude, and ChatGPT. It utilizes a neural mesh architecture and ENGRAM O(1) lookup to ensure data ownership and constant-time memory retrieval.

PT-MCP (Paul Test Man Context Protocol)

Provides comprehensive codebase analysis and semantic understanding through integrated knowledge graphs, enabling AI assistants to understand project structure, patterns, dependencies, and context through multiple analysis tools and format generators.

Google Search MCP Server

An MCP server implementation that integrates the Google Custom Search JSON API to provide web search functionality.

Pytest MCP Server

Enables AI assistants to run and analyze pytest tests for desktop applications through interactive commands. Supports test execution, filtering, result analysis, and debugging for comprehensive test automation workflows.

Jira Prompts MCP Server

An MCP server providing multiple commands for generating prompts or context from Jira content.

Knowledge Graph Memory Server

An improved persistent memory implementation using a local knowledge graph, with a customizable `--memory-path` argument. This lets Claude remember information about the user across multiple conversations.
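
The essentials of a file-backed knowledge-graph memory with a configurable path can be sketched like this (the graph shape and argument handling are illustrative assumptions):

```python
import argparse
import json
import pathlib
import tempfile

# Accept a customizable --memory-path argument
parser = argparse.ArgumentParser()
parser.add_argument("--memory-path", default="memory.json",
                    help="file where the knowledge graph is persisted")
memory_file = pathlib.Path(tempfile.gettempdir()) / "kg-demo.json"
args = parser.parse_args(["--memory-path", str(memory_file)])

# A toy knowledge graph: entities plus (subject, relation, object) triples
graph = {"entities": ["user"], "triples": [["user", "likes", "hiking"]]}
pathlib.Path(args.memory_path).write_text(json.dumps(graph))

# A later conversation reloads the same file and recovers the stored facts
restored = json.loads(pathlib.Path(args.memory_path).read_text())
print(restored["triples"][0])  # ['user', 'likes', 'hiking']
```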

ffmpeg-mcp

Enables comprehensive video and audio processing using FFmpeg, supporting tasks like metadata extraction, clipping, scaling, and adding transitions or overlays. It provides a high-performance interface for building media processing microservices via FastMCP.

MCP Servers

A collection of MCP (Model Context Protocol) servers distributed as dotnet tools.

HashiCorp Vault MCP Server

Enables interaction with HashiCorp Vault for secret management operations including reading, writing, listing, and deleting secrets through the Model Context Protocol.

literature-agent-mcp

Exposes a local biomedical literature pipeline as MCP tools for automated research workflows. Enables literature search, open-access paper retrieval, and draft generation for biomedical and pathology domains through standard MCP clients.

Aws Sample Gen Ai Mcp Server

Provides a Python sample showing how to invoke Amazon Bedrock generative AI models (e.g., Claude) both directly via boto3 and through an MCP server endpoint, with API-key authentication and error handling. The core of the sample, condensed (the placeholder region, model ID, URL, and API key must be replaced with your own values):

```python
import json

import boto3
import requests

# Configuration (replace with your actual values)
AWS_REGION = "us-west-2"
BEDROCK_MODEL_ID = "anthropic.claude-v2"
MCP_SERVER_URL = "http://your-mcp-server:8000/infer"
MCP_API_KEY = "your_mcp_api_key"  # leave empty if your MCP server needs no key

bedrock = boto3.client("bedrock-runtime", region_name=AWS_REGION)


def generate_text_with_bedrock(prompt, model_id=BEDROCK_MODEL_ID, max_tokens=200):
    """Generate text by calling Amazon Bedrock directly."""
    body = json.dumps({
        "prompt": prompt,
        "max_tokens_to_sample": max_tokens,
        "temperature": 0.5,
        "top_p": 0.9,
    })
    response = bedrock.invoke_model(
        body=body,
        modelId=model_id,
        accept="application/json",
        contentType="application/json",
    )
    return json.loads(response["body"].read())["completion"]


def generate_text_with_mcp(prompt, model_id=BEDROCK_MODEL_ID, max_tokens=200):
    """Generate text through the MCP server endpoint."""
    response = None  # initialized so the except block can inspect it safely
    try:
        headers = {"Content-Type": "application/json"}
        if MCP_API_KEY:
            headers["X-API-Key"] = MCP_API_KEY  # adjust to your server's auth header
        response = requests.post(MCP_SERVER_URL, headers=headers, json={
            "model_id": model_id,
            "prompt": prompt,
            "max_tokens_to_sample": max_tokens,
            "temperature": 0.5,
            "top_p": 0.9,
        })
        response.raise_for_status()  # surface 4xx/5xx errors from the MCP server
        return response.json()["completion"]  # adjust if your server returns another field
    except requests.exceptions.RequestException as e:
        print(f"Error calling MCP server: {e}")
        if response is not None:
            print(f"Status: {response.status_code}, body: {response.text}")
        return None
```

The full sample also documents the IAM permissions required (`bedrock:InvokeModel`, `bedrock:InvokeModelWithResponseStream`, `bedrock:ListFoundationModels`) and operational caveats: keep API keys out of source code, implement rate limiting and error handling on the MCP server side, confirm the Bedrock model is available in your region, and prefer asynchronous Bedrock calls in production.

PDFSizeAnalyzer-MCP

Enables comprehensive PDF analysis and manipulation including page size analysis, chapter extraction, splitting, compression, merging, and conversion to images. Provides both MCP server interface for AI assistants and Streamlit web interface for direct user interaction.

mcp-altegio

MCP server for the Altegio API: appointments, clients, services, and staff schedules.

database-updater MCP Server

Mirror.

FastMail MCP Server

An MCP server that integrates with FastMail's JMAP API to manage mailboxes, search for emails, and send messages. It enables users to interact with their FastMail account for tasks like reading email content and managing folders through natural language.
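
A JMAP mail search, per RFC 8620/8621, is expressed as a method call inside a request body like the one below (the `accountId` and filter text are illustrative placeholders):

```python
import json

# Minimal JMAP request body for searching email via Email/query
request = {
    "using": [
        "urn:ietf:params:jmap:core",
        "urn:ietf:params:jmap:mail",
    ],
    "methodCalls": [
        ["Email/query", {"accountId": "u123", "filter": {"text": "invoice"}}, "0"],
    ],
}

print(json.dumps(request, indent=2))
```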

GitHub MCP Server

Exposes GitHub repository actions (listing PRs/issues, creating issues, merging PRs) as OpenAPI endpoints using FastAPI, designed for LLM agent orchestration frameworks.