Discover Awesome MCP Servers
Extend your agent with 15,860 capabilities via MCP servers.
- All (15,860)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)
javaDemo
javaDemo. Contribute to zf0497/mydemo development by creating an account on GitHub.
MCP Server for X/Twitter
Automate your X account using a real browser API - JoshMayerr/mcp-x
Minima
On-premises conversational RAG with configurable containers - dmayboroda/minima
NutJS Windows Control
Cross-platform MCP server for OS automation. Contribute to Cheffromspace/MCPControl development by creating an account on GitHub.
WebSearch
Web Search tools is a collection of tools that lets Claude access the internet through an MCP server.
GitHub Support Assistant
Helps support engineers find similar GitHub issues by searching a repository and computing similarity scores against an issue description, speeding up troubleshooting.
Cloudflare API MCP Server
Lightweight MCP server that gives your Cursor Agent access to the Cloudflare API. - zueai/cloudflare-api-mcp
MCP Server for eSignatures
Facilitates contract and template management for e-signatures, letting users create, send, update, and manage contracts and templates with customizable options through a user-friendly interface.
Inoyu Apache Unomi MCP Server
A Model Context Protocol server that enables Claude to maintain user context through Apache Unomi profile management.
JSON MCP Server
A Model Context Protocol server implementation that enables large language models (LLMs) to query and manipulate JSON data using JSONPath syntax, with extended operations for filtering, sorting, transforming, and aggregating data.
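To illustrate the kind of JSONPath query such a server evaluates, here is a generic sketch using the `jsonpath-ng` Python package; the data and field names are made up, and the server's actual tool names and parameters may differ.

```python
# Minimal JSONPath sketch using the jsonpath-ng package (pip install jsonpath-ng).
# Illustrative only; this is not the MCP server's own tool interface.
from jsonpath_ng import parse

data = {
    "orders": [
        {"id": 1, "total": 40, "status": "shipped"},
        {"id": 2, "total": 120, "status": "pending"},
        {"id": 3, "total": 75, "status": "shipped"},
    ]
}

# "$.orders[*].total" selects the total of every order.
expr = parse("$.orders[*].total")
totals = [match.value for match in expr.find(data)]
print(totals)          # [40, 120, 75]
print(sorted(totals))  # a sort step, like the extended operations the server describes
```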
Zotero MCP Server
This server lets users interact with their Zotero library through the Model Context Protocol, providing tools to search items, retrieve metadata, and access full text using natural-language queries.
Radarr and Sonarr MCP Server
A Python-based Model Context Protocol server that lets AI assistants like Claude access and query your movie and TV show collections through the Radarr and Sonarr APIs.
Coding Standards MCP Server
Provides tools for accessing coding style guides and best practices for various technologies, including Java, Python, and React.
Semantic Scholar MCP Server
An interface to the Semantic Scholar API, providing comprehensive access to academic paper data, author information, and citation networks.
GitHub MCP Server
Model Context Protocol servers. Contribute to modelcontextprotocol/servers development by creating an account on GitHub.
MCP Docling Server
A server that provides document-processing capabilities via the Model Context Protocol, allowing documents to be converted to Markdown, tables to be extracted, and document images to be processed.
MCP Tavily
MCP server for Tavily. Contribute to kshern/mcp-tavily development by creating an account on GitHub.
OpenAPI
datadog
Here's how you can access monitor and cluster logs from Datadog, broken down into steps and considerations:

**1. Accessing Monitor Logs**

* **From the Monitor Page:**
  1. **Navigate to Monitors:** In the Datadog UI, go to "Monitors" -> "Manage Monitors".
  2. **Find the Monitor:** Locate the specific monitor you're interested in. You can use the search bar, filters (e.g., by tag, name, status), or browse the list.
  3. **Monitor Status and Events:** Click on the monitor's name to open its details page. Here you'll see:
     * **Monitor Status:** The current status of the monitor (OK, Alert, Warning, No Data).
     * **Events Timeline:** A timeline of events related to the monitor, showing when the monitor triggered, when it recovered, and any associated messages.
     * **Event Details:** Click on a specific event in the timeline to see more details, typically the time the event occurred, the event message (which may describe the cause of the alert), and links to related logs, metrics, or traces (if configured). These links are crucial for troubleshooting.
* **Using the Event Explorer:**
  1. **Navigate to Event Explorer:** In the Datadog UI, go to "Events" -> "Explorer".
  2. **Filter by Monitor:** Use the search bar or filters to narrow the events down to those related to your specific monitor. You can filter by:
     * `monitor:<monitor_name>` (replace `<monitor_name>` with the name of your monitor)
     * `monitor_id:<monitor_id>` (replace `<monitor_id>` with the ID of your monitor, found on the monitor's details page)
     * `status:<alert|warning|ok|no data>` to filter by the status of the monitor
  3. **Analyze Events:** The Event Explorer shows a stream of events related to your monitor. You can sort events by time, view the event message, click on an event for more detail, and use the facets on the left-hand side to further refine your search.
* **Linking Monitors to Logs (Important for Effective Troubleshooting):**
  * **Use Tags:** The most effective way to link monitors to logs is consistent tagging. When you create your monitor, add tags that are also present in your logs. For example, if your monitor covers a specific service, tag both the monitor and that service's logs with `service:my-service`.
  * **Use Log Patterns in Monitor Messages:** If your monitor message includes specific patterns that appear in your logs (e.g., an error code, a transaction ID), you can use those patterns to search for related logs.
  * **Use the `{{log.id}}` variable in monitor messages:** If you create monitors based on log patterns, include the `{{log.id}}` variable in the monitor message. It inserts the unique ID of the log message that triggered the monitor, making it easy to find the exact log in the Log Explorer.

**2. Accessing Cluster Logs (Kubernetes, ECS, etc.)**

* **Ensure Log Collection is Configured:** The first and most important step is to make sure Datadog is properly configured to collect logs from your cluster. This typically involves:
  * **Installing the Datadog Agent:** The Agent needs to be running on your cluster nodes (or as a DaemonSet in Kubernetes).
  * **Configuring Log Collection:** You need to tell the Agent where to find the logs, usually by configuring it to monitor specific log files or to collect logs from the standard output/standard error of your containers. Datadog provides specific integrations for Kubernetes, ECS, and other container orchestration platforms; follow the official Datadog documentation for your platform.
  * **Using the Datadog Operator for Kubernetes (Recommended):** For Kubernetes, the Datadog Operator simplifies deployment and management of the Datadog Agent and related resources, and can automatically configure log collection based on your Kubernetes resources.
* **Using the Log Explorer:**
  1. **Navigate to Log Explorer:** In the Datadog UI, go to "Logs" -> "Explorer".
  2. **Filter by Cluster:** Use the search bar or facets to filter the logs down to those from your cluster. Common filters include:
     * `kubernetes.cluster.name:<cluster_name>` (for Kubernetes)
     * `ecs.cluster.name:<cluster_name>` (for ECS)
     * `host:<hostname>` (to filter by specific nodes in the cluster)
     * `source:<source_name>` (if you've configured a specific source for your cluster logs)
     * `service:<service_name>` (to filter by specific services running in the cluster)
     * `container_name:<container_name>` (to filter by specific containers)
  3. **Analyze Logs:** The Log Explorer provides powerful tools for analyzing your cluster logs:
     * **Search:** Use the search bar to find specific keywords, error messages, or patterns.
     * **Facets:** Use the facets on the left-hand side to filter and group your logs.
     * **Time Series:** Create time series graphs based on log data (e.g., the number of error logs over time).
     * **Live Tail:** View a live stream of logs as they are generated.
     * **Log Patterns:** Identify common log patterns to help you understand the behavior of your applications.
* **Using Dashboards:**
  1. **Create or Edit a Dashboard:** In the Datadog UI, go to "Dashboards" -> "New Dashboard" or edit an existing dashboard.
  2. **Add Log Widgets:** Add widgets that display log data, such as a **Log Stream Widget** (a stream of logs matching your query), a **Log Count Widget** (the number of matching logs over a time period), or a **Top List Widget** (the top values for a specific log attribute, e.g., the most frequent error messages).
  3. **Configure the Widget:** Filter the widget to the logs from your cluster and to the information you're interested in.

**Example: Kubernetes Log Access**

Say you want to see the logs from a specific pod in your Kubernetes cluster:

1. **Ensure the Datadog Agent is running as a DaemonSet in your cluster.** This is the recommended way to collect logs.
2. **Verify that the Agent is configured to collect logs from your containers.** The Datadog Operator can automate this.
3. **In the Log Explorer, use the following filters:**
   * `kubernetes.cluster.name:<your_cluster_name>`
   * `kubernetes.pod.name:<your_pod_name>`
   * `kubernetes.namespace.name:<your_namespace>`

**Important Considerations**

* **Log Volume:** Collecting logs from a large cluster can generate a significant amount of data. Consider log filtering and sampling to reduce the volume you collect.
* **Security:** Be careful about what you log. Avoid logging sensitive data such as passwords or API keys, and use log masking or redaction to protect sensitive information.
* **Retention:** Datadog has log retention policies. Make sure you understand them and retain logs for as long as you need them.
* **Cost:** Datadog's pricing is based on log volume, so be aware of the cost implications of collecting logs from your cluster.
* **Structured Logging:** Structured logging (e.g., JSON) makes it much easier to query and analyze your logs in Datadog. Encourage your developers to use it in their applications.

**In summary, accessing monitor and cluster logs in Datadog requires proper configuration of the Datadog Agent, familiarity with the Log Explorer and Event Explorer, and the use of appropriate filters and queries. Linking monitors to logs through tagging and log patterns is crucial for effective troubleshooting.**
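Beyond the UI, the same Log Explorer query syntax can be used programmatically. Below is a minimal sketch against Datadog's public v2 Logs Search endpoint; the cluster, namespace, and pod names are placeholders, and you should verify the endpoint, payload shape, and your Datadog site (e.g. `datadoghq.eu`) against the current API documentation.

```python
# Hedged sketch: query cluster logs via Datadog's v2 Logs Search API using requests.
# Cluster/namespace/pod names are placeholders; check the current Datadog API docs
# for the exact payload fields and the correct base URL for your Datadog site.
import os
import requests

DD_SITE = "https://api.datadoghq.com"
headers = {
    "DD-API-KEY": os.environ["DD_API_KEY"],
    "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
    "Content-Type": "application/json",
}

body = {
    "filter": {
        # Same query syntax as the Log Explorer search bar.
        "query": (
            "kubernetes.cluster.name:my-cluster "
            "kubernetes.namespace.name:payments "
            "kubernetes.pod.name:checkout-7d9f* status:error"
        ),
        "from": "now-15m",
        "to": "now",
    },
    "page": {"limit": 50},
    "sort": "-timestamp",
}

resp = requests.post(f"{DD_SITE}/api/v2/logs/events/search", headers=headers, json=body)
resp.raise_for_status()
for event in resp.json().get("data", []):
    attrs = event.get("attributes", {})
    print(attrs.get("timestamp"), attrs.get("message"))
```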
Penrose MCP Server
Facilitates the creation of mathematical diagrams from natural language using Penrose's domain-specific language, allowing mathematical types, relationships, and visual representation rules to be defined.
Together AI Image Server
An MCP server that enables Claude and other MCP-compatible assistants to generate images from text prompts using Together AI's image generation models.
OKX MCP Server
Provides real-time cryptocurrency price data from the OKX exchange through a Model Context Protocol interface, allowing access to historical candlestick data and current market prices for any trading instrument.
Twitch MCP Server
Interacts with the Twitch API, letting users retrieve comprehensive information about channels, streams, games, and more, with additional support for searching and accessing chat elements such as emotes and badges.
Microsoft SQL Server MCP Server
A Model Context Protocol server that enables secure interaction with Microsoft SQL Server databases, allowing AI assistants to list tables, read data, and execute SQL queries through a controlled interface.
Lichess MCP
An MCP server that supports natural-language interaction with the Lichess chess platform, letting users play games, analyze positions, manage their account, and join tournaments through Claude.
Systemprompt MCP Gmail Server
Enables users to manage Gmail accounts through the MCP protocol with AI-agent-assisted operations, supporting email search, reading, deletion, and sending, with a voice-driven interface.
Unofficial dubco-mcp-server
A Model Context Protocol server that enables AI assistants to create, update, and delete Dub.co short links via the Dub.co API.
RagDocs MCP Server
Provides RAG capabilities for semantic document search using a Qdrant vector database and Ollama/OpenAI embeddings, letting users add, search, list, and delete documents with metadata support.
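For context, the underlying Qdrant-plus-embeddings pattern looks roughly like the sketch below; the collection name, embedding model, and payload fields are illustrative assumptions, not RagDocs' actual implementation.

```python
# Hedged sketch of semantic search with Qdrant + OpenAI embeddings.
# Collection name, model, and payload fields are made up for illustration.
from openai import OpenAI
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
qdrant = QdrantClient(url="http://localhost:6333")

def embed(text: str) -> list[float]:
    # text-embedding-3-small returns 1536-dimensional vectors.
    resp = openai_client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding

# Create a collection sized to the embedding dimension.
qdrant.recreate_collection(
    collection_name="docs",
    vectors_config=VectorParams(size=1536, distance=Distance.COSINE),
)

# "Add" a document: store its embedding plus metadata as the payload.
qdrant.upsert(
    collection_name="docs",
    points=[PointStruct(id=1, vector=embed("Qdrant is a vector database."),
                        payload={"source": "notes.md"})],
)

# "Search": embed the query and return the nearest stored documents.
hits = qdrant.search(collection_name="docs", query_vector=embed("what is qdrant?"), limit=3)
for hit in hits:
    print(hit.score, hit.payload)
```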
Finnhub MCP Server
This server provides an interface to the Finnhub API, enabling users to fetch the latest market news, stock market data, basic financial data, and recommendation trends for specific stocks.
Academic Paper Search MCP Server
Supports real-time search and retrieval of academic paper information from multiple sources, providing paper metadata, abstracts, and full-text content where available, with structured data responses for integration with AI models that support tool/function calling.