Discover Awesome MCP Servers

Extend your agent with 20,381 capabilities via MCP servers.
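
Each listing below is an MCP server that exposes tools (capabilities) an MCP-compatible client such as Claude Desktop can call. As a minimal sketch of what one such capability looks like on the server side, assuming the official `mcp` Python SDK (FastMCP) and a made-up `echo` tool:

```python
# Minimal sketch of an MCP server exposing one tool over stdio, using the
# official `mcp` Python SDK (FastMCP). The tool itself is illustrative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("echo-server")


@mcp.tool()
def echo(text: str) -> str:
    """Return the given text unchanged (placeholder capability)."""
    return text


if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport used by desktop clients
```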

Jira MCP Server

Enables natural language interaction with Jira through the Model Context Protocol to manage projects, issues, tasks, and workflows, letting users delegate project management tasks from Claude Desktop.

JavaScript
ChatGPT MCP Server

A Model Context Protocol server that enables Docker container management through natural language interaction via a custom GPT interface.

TypeScript
Telegram MCP Server

A bridge that allows Claude Desktop to access Telegram chats and messages via the Model Context Protocol, providing read-only functionality for retrieving conversations and messages from Telegram.

Python
Cloudinary MCP Server

Provides tools for uploading images and videos directly to Cloudinary with Claude/Cline, simplifying asset management with customizable options such as resource type and public ID.
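
For context, the underlying Cloudinary call such a server presumably wraps looks roughly like this, using the official `cloudinary` Python package; the credentials, file path, and public ID are placeholders:

```python
# Rough sketch of a direct Cloudinary upload with the options the description
# mentions (resource type, public ID). Credentials and paths are placeholders.
import cloudinary
import cloudinary.uploader

cloudinary.config(
    cloud_name="my-cloud",       # placeholder
    api_key="MY_API_KEY",        # placeholder
    api_secret="MY_API_SECRET",  # placeholder
)

result = cloudinary.uploader.upload(
    "demo-clip.mp4",
    resource_type="video",   # "image", "video", or "raw"
    public_id="demos/clip",  # where the asset lives in your media library
)
print(result["secure_url"])
```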

JavaScript
Kibela MCP Server

Enables integration with the Kibela API for searching and retrieving notes, letting LLMs interact seamlessly with Kibela content.

TypeScript
MCP Docling Server

A server that provides document processing capabilities via the Model Context Protocol, supporting conversion of documents to Markdown, table extraction, and processing of document images.

Python
MCP Tavily

An MCP server for Tavily - kshern/mcp-tavily.

JavaScript
OpenAPI

TypeScript
datadog

A guide to accessing monitor and cluster logs from Datadog, broken down into steps and considerations.

**1. Accessing Monitor Logs**

* **From the Monitor Page:**
    1. **Navigate to Monitors:** In the Datadog UI, go to "Monitors" -> "Manage Monitors".
    2. **Find the Monitor:** Locate the specific monitor you're interested in. You can use the search bar, filters (e.g., by tag, name, status), or browse the list.
    3. **Monitor Status and Events:** Click on the monitor's name to open its details page. Here you'll see:
        * **Monitor Status:** The current status of the monitor (OK, Alert, Warning, No Data).
        * **Events Timeline:** A timeline of events related to the monitor: when it triggered, when it recovered, and any associated messages.
        * **Event Details:** Click on a specific event in the timeline to see more details, which often include the time the event occurred, the message associated with it (which may point to the cause of the alert), and links to related logs, metrics, or traces (if configured). These links are crucial for troubleshooting.
* **Using the Event Explorer:**
    1. **Navigate to Event Explorer:** In the Datadog UI, go to "Events" -> "Explorer".
    2. **Filter by Monitor:** Use the search bar or filters to narrow the events down to those related to your specific monitor. You can filter by:
        * `monitor:<monitor_name>` (replace `<monitor_name>` with the name of your monitor)
        * `monitor_id:<monitor_id>` (replace `<monitor_id>` with the ID of your monitor, shown on the monitor's details page)
        * `status:<alert|warning|ok|no data>` to filter by the status of the monitor
    3. **Analyze Events:** The Event Explorer shows a stream of events related to your monitor. You can sort events by time, view the event message, click on an event for more details, and use the facets on the left-hand side to refine your search.
* **Linking Monitors to Logs (important for effective troubleshooting):**
    * **Use tags:** The most effective way to link monitors to logs is consistent tagging. When you create a monitor, add tags that are also present in your logs. For example, if the monitor covers a specific service, tag both the monitor and that service's logs with `service:my-service` (see the sketch after this section).
    * **Use log patterns in monitor messages:** If your monitor message includes patterns that also appear in your logs (e.g., an error code or a transaction ID), you can use those patterns to search for related logs.
    * **Use the `{{log.id}}` variable in monitor messages:** For monitors based on log patterns, include `{{log.id}}` in the monitor message. It carries the unique ID of the log that triggered the monitor, making it easy to find the exact log in the Log Explorer.
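
The same monitor state and event stream are also reachable programmatically. Below is a minimal sketch, assuming the official `datadog-api-client` Python package with `DD_API_KEY`/`DD_APP_KEY` set in the environment; the monitor ID and `service:` tag are placeholders, not values from this listing.

```python
# Minimal sketch (assumptions noted above): fetch a monitor's state and the
# recent events that share its service tag via the Datadog API.
import time

from datadog_api_client import ApiClient, Configuration
from datadog_api_client.v1.api.events_api import EventsApi
from datadog_api_client.v1.api.monitors_api import MonitorsApi

configuration = Configuration()  # picks up DD_API_KEY / DD_APP_KEY / DD_SITE

with ApiClient(configuration) as api_client:
    monitors = MonitorsApi(api_client)
    events = EventsApi(api_client)

    # Equivalent of the monitor details page: name and overall state.
    monitor = monitors.get_monitor(12345678)  # hypothetical monitor ID
    print(monitor.name, monitor.overall_state)

    # Equivalent of the Event Explorer filtered by a shared tag: events from
    # the last hour carrying the tag used on both the monitor and its logs.
    now = int(time.time())
    stream = events.list_events(start=now - 3600, end=now, tags="service:my-service")
    for event in stream.events:
        print(event.date_happened, event.title)
```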
**2. Accessing Cluster Logs (Kubernetes, ECS, etc.)**

* **Ensure log collection is configured:** The first and most important step is to make sure Datadog is properly configured to collect logs from your cluster. This typically involves:
    * **Installing the Datadog Agent:** The Agent needs to be running on your cluster nodes (or as a DaemonSet in Kubernetes).
    * **Configuring log collection:** Tell the Agent where to find the logs, usually by configuring it to monitor specific log files or to collect the standard output/standard error of your containers. Datadog provides specific integrations for Kubernetes, ECS, and other container orchestration platforms; follow the official Datadog documentation for your platform.
    * **Using the Datadog Operator for Kubernetes (recommended):** For Kubernetes, the Datadog Operator simplifies deployment and management of the Agent and related resources, and can automatically configure log collection based on your Kubernetes resources.
* **Using the Log Explorer:**
    1. **Navigate to Log Explorer:** In the Datadog UI, go to "Logs" -> "Explorer".
    2. **Filter by cluster:** Use the search bar or facets to narrow the logs to those from your cluster. Common filters include:
        * `kubernetes.cluster.name:<cluster_name>` (for Kubernetes)
        * `ecs.cluster.name:<cluster_name>` (for ECS)
        * `host:<hostname>` (specific nodes in the cluster)
        * `source:<source_name>` (if you've configured a specific source for your cluster logs)
        * `service:<service_name>` (specific services running in the cluster)
        * `container_name:<container_name>` (specific containers)
    3. **Analyze logs:** The Log Explorer provides powerful tools for analyzing cluster logs:
        * **Search:** Find specific keywords, error messages, or patterns.
        * **Facets:** Filter and group logs using the facets on the left-hand side.
        * **Time series:** Graph log data over time (e.g., count error logs over time).
        * **Live Tail:** View a live stream of logs as they are generated.
        * **Log patterns:** Identify common log patterns to understand the behavior of your applications.
* **Using Dashboards:**
    1. **Create or edit a dashboard:** In the Datadog UI, go to "Dashboards" -> "New Dashboard" or edit an existing dashboard.
    2. **Add log widgets:** Add widgets that display log data:
        * **Log Stream widget:** A stream of logs matching your query.
        * **Log Count widget:** The number of matching logs over a time period.
        * **Top List widget:** The top values for a log attribute (e.g., the top error messages).
    3. **Configure the widget:** Filter the logs to those from your cluster and display the information you're interested in.

**Example: Kubernetes log access**

Say you want to see the logs from a specific pod in your Kubernetes cluster:

1. **Ensure the Datadog Agent is running as a DaemonSet** in your cluster; this is the recommended way to collect logs.
2. **Verify that the Agent is configured to collect logs from your containers.** The Datadog Operator can automate this.
3. **In the Log Explorer, use the following filters** (the same query can also be run via the Logs API, as sketched below):
    * `kubernetes.cluster.name:<your_cluster_name>`
    * `kubernetes.pod.name:<your_pod_name>`
    * `kubernetes.namespace.name:<your_namespace>`
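
A minimal sketch of that pod query against the Logs Search API, again assuming the `datadog-api-client` Python package; the cluster, namespace, and pod names are placeholders.

```python
# Minimal sketch (assumptions noted above): search the last 15 minutes of logs
# for one pod, using the same query syntax as the Log Explorer.
from datadog_api_client import ApiClient, Configuration
from datadog_api_client.v2.api.logs_api import LogsApi
from datadog_api_client.v2.model.logs_list_request import LogsListRequest
from datadog_api_client.v2.model.logs_list_request_page import LogsListRequestPage
from datadog_api_client.v2.model.logs_query_filter import LogsQueryFilter
from datadog_api_client.v2.model.logs_sort import LogsSort

query = (
    "kubernetes.cluster.name:my-cluster "
    "kubernetes.namespace.name:my-namespace "
    "kubernetes.pod.name:my-pod"
)

configuration = Configuration()  # picks up DD_API_KEY / DD_APP_KEY / DD_SITE

with ApiClient(configuration) as api_client:
    logs = LogsApi(api_client)
    response = logs.list_logs(
        body=LogsListRequest(
            filter=LogsQueryFilter(query=query, _from="now-15m", to="now"),
            sort=LogsSort.TIMESTAMP_ASCENDING,
            page=LogsListRequestPage(limit=50),
        )
    )
    for log in response.data:
        print(log.attributes.timestamp, log.attributes.message)
```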
**Important considerations**

* **Log volume:** Collecting logs from a large cluster can generate a significant amount of data. Consider log filtering and sampling to reduce the volume you collect.
* **Security:** Be careful about what you log. Avoid logging sensitive data such as passwords or API keys, and use log masking or redaction to protect sensitive information.
* **Retention:** Datadog has log retention policies. Make sure you understand them and that you retain logs for as long as you need them.
* **Cost:** Datadog's pricing is based on log volume, so be aware of the cost implications of collecting logs from your cluster.
* **Structured logging:** Using structured logging (e.g., JSON) makes it much easier to query and analyze your logs in Datadog. Encourage your developers to use structured logging in their applications (see the sketch after this list).

**In summary, accessing monitor and cluster logs in Datadog requires proper configuration of the Datadog Agent, familiarity with the Log Explorer and Event Explorer, and the use of appropriate filters and queries. Linking monitors to logs through tagging and log patterns is crucial for effective troubleshooting.**
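
As a concrete illustration of the structured-logging point, here is a minimal Python sketch that emits one JSON object per log line to stdout, where the Agent's container log collection can pick it up; the field names and `service` value are illustrative.

```python
# Minimal structured-logging sketch: one JSON object per line to stdout so the
# Datadog Agent can parse attributes without extra pipeline rules.
# Standard library only; field names here are illustrative.
import json
import logging
import sys


class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            "service": "my-service",  # match the tag used on the monitor
        })


handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logging.basicConfig(level=logging.INFO, handlers=[handler])

logging.getLogger("checkout").info("payment accepted")
```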

Python
Penrose MCP Server

Facilitates creating mathematical diagrams from natural language using Penrose's domain-specific language, allowing mathematical types, relations, and visual representation rules to be defined.

JavaScript
Together AI Image Server

An MCP server that enables Claude and other MCP-compatible assistants to generate images from text prompts using Together AI's image generation models.

TypeScript
OKX MCP Server

Provides real-time cryptocurrency price data from the OKX exchange through a Model Context Protocol interface, with access to historical candlestick data and current market prices for any trading instrument.

JavaScript
Twitch MCP Server

Enables interaction with the Twitch API, allowing users to retrieve comprehensive information about channels, streams, games, and more, with additional support for searching and for accessing chat elements such as emotes and badges.

TypeScript
Microsoft SQL Server MCP Server

A Model Context Protocol server that enables secure interaction with Microsoft SQL Server databases, allowing AI assistants to list tables, read data, and execute SQL queries through a controlled interface.

Python
Lichess MCP

An MCP server that enables natural language interaction with the Lichess chess platform, allowing users to play games, analyze positions, manage their account, and join tournaments through Claude.

JavaScript
Unofficial dubco-mcp-server

A Model Context Protocol server that enables AI assistants to create, update, and delete Dub.co short links via the Dub.co API.

JavaScript
Systemprompt MCP Gmail Server

Enables users to manage a Gmail account through AI-agent-assisted operations over the MCP protocol, supporting email search, reading, deletion, and sending, with a voice-driven interface.

TypeScript
RagDocs MCP Server

Provides RAG capabilities for semantic document search using a Qdrant vector database and Ollama/OpenAI embeddings, allowing users to add, search, list, and delete documents, with metadata support.

TypeScript
Finnhub MCP Server

This server provides an interface to the Finnhub API, enabling users to fetch the latest market news, stock market data, basic financials, and recommendation trends for specific stocks.

Python
Academic Paper Search MCP Server

Supports real-time search and retrieval of academic paper information from multiple sources, providing paper metadata, abstracts, and full text where available, with structured data responses for integration with AI models that support tool/function calling.

Python
Resend MCP Server

Enables sending email through the Resend service via a standardized Model Context Protocol (MCP) interface, with TypeScript support.

TypeScript
FRED MCP Server

An MCP server for accessing the FRED (Federal Reserve Economic Data) API - kablewy/fred-mcp-server.

TypeScript
Python Jira MCP Server

A server implementation that lets AI models interact with Jira through the Model Context Protocol, enabling tasks such as JQL searches and retrieving issue details.

Python
Strapi MCP Server

Provides access to Strapi CMS content through the MCP protocol, allowing users to create, read, update, and delete content entries via Claude.

JavaScript
OpenDota MCP Server

A Model Context Protocol server that enables large language models and AI assistants to retrieve real-time Dota 2 statistics, match data, player information, and game metrics through a standardized interface.

Python
MCPunk

Chat with your codebase through intelligent code search, no embeddings required. Breaks files into logical chunks, gives the LLM tools to search those chunks, and lets it find the specific code needed to answer your question.
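
The chunking idea is roughly this (an illustrative sketch only, not MCPunk's actual code): split a Python file into its top-level functions and classes so a model can search and fetch just the relevant block.

```python
# Illustrative sketch of "files broken into logical chunks": split a Python
# source file into top-level function/class blocks using the stdlib ast module.
import ast


def chunk_python_file(path: str) -> dict[str, str]:
    source = open(path, encoding="utf-8").read()
    lines = source.splitlines()
    chunks = {}
    for node in ast.parse(source).body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            # lineno/end_lineno give the 1-based span of each top-level block
            chunks[node.name] = "\n".join(lines[node.lineno - 1 : node.end_lineno])
    return chunks


if __name__ == "__main__":
    for name, body in chunk_python_file(__file__).items():
        print(name, "->", len(body.splitlines()), "lines")
```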

Python
mcp-git-ingest

Helps AI read GitHub repository structure and important files. Want to quickly understand what a repository is about? Prompt it with "read https://github.com/adhikasp/mcp-git-ingest and determine how the code works technically".

Python
Gmail MCP Server

Enables interaction with Gmail through the Gmail API to read, send, and manage email. Supports multiple Gmail accounts, with real-time monitoring and advanced features for email search and attachment handling.

Python
CoinGecko MCP Server

Enables interaction with the CoinGecko Pro API to access cryptocurrency data, including price history and market metrics, via MCP and OpenAI function calling.

JavaScript
AWS Resources MCP Server

A Model Context Protocol server that enables Claude to execute Python code using boto3, allowing AWS resources to be queried and managed directly from a conversation.
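
For context, the kind of boto3 snippet such a server would run on Claude's behalf looks like this (an illustrative sketch; region and credentials come from your normal AWS configuration):

```python
# Illustrative sketch of the sort of boto3 query such a server executes:
# list S3 buckets and running EC2 instances using the caller's AWS credentials.
import boto3

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print("bucket:", bucket["Name"])

ec2 = boto3.client("ec2")
reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]
for reservation in reservations:
    for instance in reservation["Instances"]:
        print("instance:", instance["InstanceId"], instance["InstanceType"])
```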

Python