Discover Awesome MCP Servers
Extend your agent with 10,234 capabilities via MCP servers.
- All (10,234)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)

javaDemo
A Java demo repository (zf0497/mydemo).
MCP Server for X/Twitter
Automate your X account using the real browser API - JoshMayerr/mcp-x

Minima
On-premises conversational RAG with configurable containers - dmayboroda/minima

NutJS Windows Control
Cross-platform MCP server for OS automation (Cheffromspace/MCPControl).
WebSearch
A set of web search tools that allow Claude to access the internet through an MCP server.
GitHub Support Assistant
Helps support engineers find similar GitHub issues to speed up troubleshooting by searching repositories and computing similarity scores based on issue descriptions.
Cloudflare API MCP Server
Lightweight MCP server to give your Cursor Agent access to the Cloudflare API

MCP Server for eSignatures
Facilitates contract and template management for electronic signatures (eSignatures), letting users create, send, update, and manage contracts and templates with customizable options through an easy-to-use interface.
Inoyu Apache Unomi MCP Server
A Model Context Protocol server that enables Claude to maintain user context through Apache Unomi profile management.

JSON MCP Server
A Model Context Protocol server implementation that enables LLMs to query and manipulate JSON data using JSONPath syntax, with extended operations for filtering, sorting, transforming, and aggregating data.
Zotero MCP Server
Lets users interact with their Zotero library through the Model Context Protocol, providing tools to search items, retrieve metadata, and access full text using natural-language queries.

Radarr and Sonarr MCP Server
A Python-based Model Context Protocol server that enables AI assistants like Claude to access and query your movie and TV show collections through the Radarr and Sonarr APIs.
Coding Standards MCP Server
Provides tools for accessing coding style guides and best practices for various technologies, including Java, Python, and React.
Semantic Scholar MCP Server
Wraps the Semantic Scholar API, providing comprehensive access to academic paper data, author information, and citation networks.

GitHub MCP Server
Repository management, file operations, and GitHub API integration
MCP Docling Server
A server that provides document processing capabilities via the Model Context Protocol, enabling document-to-markdown conversion, table extraction, and document image processing.

MCP Tavily
A Model Context Protocol server enabling advanced search and content extraction using the Tavily API, with rich customization and integration options.

OpenAPI

Penrose MCP Server
Facilitates the creation of mathematical diagrams from natural language through the Penrose domain-specific language, allowing definitions of mathematical types, relationships, and visual representation rules.

datadog
How to access monitor and cluster logs from Datadog, broken down into steps and considerations.

**Accessing Monitor Logs**

Monitors in Datadog trigger alerts based on specific conditions. Accessing logs related to these monitors helps you understand why a monitor triggered and troubleshoot the underlying issue.

1. **From the Monitor's Alert:**
   - **Navigate to the Monitor:** Find the monitor that triggered the alert. You can usually do this from the "Events" page, the "Monitors" page, or from a notification (email, Slack, etc.).
   - **View the Alert Details:** Click on the alert event to open a detailed view of the alert.
   - **Look for Log Links:** The alert details often include links directly to relevant logs, usually generated from tags or attributes associated with the monitor and the data it's monitoring. Look for phrases like "View Logs" or "Related Logs," or links to the Log Explorer. These links are the *easiest* way to find the logs relevant to that specific alert.
2. **Using the Log Explorer:**
   - **Navigate to the Log Explorer:** In the Datadog UI, go to `Logs` -> `Explorer`.
   - **Filter by Monitor Name/ID:** The most effective way to find logs related to a specific monitor is to filter by its name or ID, e.g. `monitor:"<your_monitor_name>"` or `monitor_id:<your_monitor_id>`. You can find the monitor ID in the monitor's settings.
   - **Filter by Tags:** Monitors often have tags associated with them (e.g., `environment:production`, `service:web`). Use these to refine your search, for example: `environment:production service:web monitor:"My Web Service Monitor"`.
   - **Filter by Time:** Set the time range to the period around when the monitor triggered, so you focus on the logs most likely related to the alert.
   - **Search for Keywords:** If you have an idea of what might be causing the issue, search for relevant keywords. For example, if the monitor alerts on high CPU usage, search for "CPU," "high," or "usage."
   - **Use Facets:** The Log Explorer's facets (on the left-hand side) let you quickly filter logs by attributes like hostname, service, and status.
3. **Using Dashboards:**
   - **Create a Dashboard:** If you frequently need logs related to specific monitors, create a dashboard that includes a log widget.
   - **Configure the Log Widget:** Filter the widget by the monitor's name, ID, tags, or other relevant attributes for a dedicated view of that monitor's logs.

**Accessing Cluster Logs (e.g., Kubernetes, ECS)**

Accessing cluster logs is crucial for understanding the health and performance of your containerized applications.

1. **Ensure Log Collection is Configured:**
   - **Datadog Agent:** The Agent must be configured to collect logs from your cluster, typically by deploying it as a DaemonSet (in Kubernetes) or running it on your ECS instances.
   - **Log Configuration:** Configure the Agent to collect the specific logs you're interested in, either by specifying log file paths or by using auto-discovery to detect container logs automatically. Refer to the Datadog documentation for your specific cluster environment (Kubernetes, ECS, etc.) for detailed instructions.
2. **Using the Log Explorer (Similar to Monitor Logs):**
   - **Filter by Cluster Name/ID:** If you have multiple clusters, filter by cluster name or ID; the attribute name depends on your setup (e.g., `kube_cluster_name`, `ecs_cluster_name`).
   - **Filter by Namespace (Kubernetes):** Use the `kube_namespace` attribute to focus on logs from a specific application or team.
   - **Filter by Pod/Container Name:** Filter by pod name (`kube_pod_name`) or container name (`container_name`) to focus on a specific container.
   - **Filter by Service or Host:** Narrow logs to a particular service or host.
   - **Use Kubernetes Metadata:** Datadog automatically enriches Kubernetes logs with metadata like pod labels, annotations, and deployment names; use these fields to filter and analyze your logs.
   - **Example:** `kube_cluster_name:"my-production-cluster" kube_namespace:"my-app" kube_pod_name:"my-app-pod-12345"`
3. **Using Dashboards:** Create cluster-specific dashboards with log widgets, and use template variables (e.g., for the namespace) so you can easily switch between views.
4. **Using the Container View:** Go to `Infrastructure` -> `Containers` and select the container you're interested in; the container view often provides a direct link to that container's logs.

**Important Considerations:**
- **Log Volume:** Collecting logs from a large cluster can generate a significant amount of data. Ensure you have sufficient retention, avoid collecting unnecessary logs, and consider log sampling or filtering to reduce volume.
- **Log Format:** Use a format Datadog can easily parse; structured logging (e.g., JSON) is generally preferred.
- **Security:** Be careful about what you log; avoid logging sensitive data like passwords or API keys.
- **Retention Policies:** Understand your Datadog log retention policies; logs are typically retained for a limited time.
- **Agent Configuration:** Properly configuring the Datadog Agent is critical for successful log collection; refer to the Datadog documentation.
- **Context Tags:** Add context tags (e.g., application name, environment, user ID) to make logs easier to filter and analyze.
- **Log Pipelines:** Use Datadog's log pipelines to filter, redact, and transform your logs before they are indexed.

**Example Scenario (Kubernetes):** Suppose you want to troubleshoot a pod named `my-app-pod-12345` running in the `my-app` namespace.

1. Navigate to `Logs` -> `Explorer`.
2. Add the filters `kube_cluster_name:"your-cluster-name"` (replace with your actual cluster name), `kube_namespace:"my-app"`, and `kube_pod_name:"my-app-pod-12345"`.
3. Set the time range to the period when the issue occurred.
4. Examine the logs for errors, warnings, or other relevant information, using the facets to filter further.

Remember to consult the official Datadog documentation for the most up-to-date information and best practices.
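The Log Explorer queries shown in the Datadog notes above are just space-separated `attribute:"value"` pairs. A minimal sketch of a helper that assembles such a query string; the function name `build_log_query` is hypothetical and not part of any Datadog SDK:

```python
def build_log_query(**attrs: str) -> str:
    """Assemble a Datadog Log Explorer query from attribute filters.

    Each keyword argument becomes one attribute:"value" term; values are
    quoted so names containing spaces (e.g. a monitor title) stay intact.
    """
    parts = []
    for key, value in attrs.items():
        parts.append(f'{key}:"{value}"')
    return " ".join(parts)


query = build_log_query(
    kube_cluster_name="my-production-cluster",
    kube_namespace="my-app",
    kube_pod_name="my-app-pod-12345",
)
print(query)
# kube_cluster_name:"my-production-cluster" kube_namespace:"my-app" kube_pod_name:"my-app-pod-12345"
```

The resulting string can be pasted into the Log Explorer search bar, or used as the query when automating log retrieval against Datadog's Logs Search API.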

OKX MCP Server
Provides real-time cryptocurrency price data from the OKX exchange through a Model Context Protocol interface, with access to historical candlestick data and current market prices for any trading instrument.
Together AI Image Server
An MCP server that enables Claude and other MCP-compatible assistants to generate images from text prompts using Together AI's image generation models.
Twitch MCP Server
Enables interaction with the Twitch API, letting users retrieve comprehensive information about channels, streams, games, and more, with additional support for search and access to chat elements such as emotes and badges.

Microsoft SQL Server MCP Server
A Model Context Protocol server that enables secure interaction with Microsoft SQL Server databases, allowing AI assistants to list tables, read data, and run SQL queries through a controlled interface.
Lichess MCP
An MCP server that enables natural-language interaction with the Lichess chess platform, letting users play games, analyze positions, manage their accounts, and participate in tournaments through Claude.
Unofficial dubco-mcp-server
A Model Context Protocol server that enables AI assistants to create, update, and delete Dub.co short links via the Dub.co API.
Systemprompt MCP Gmail Server
Lets users manage their Gmail accounts through AI-agent-assisted operations over the MCP protocol, supporting searching, reading, deleting, and sending email with a voice-powered interface.
RagDocs MCP Server
Provides RAG (Retrieval-Augmented Generation) capabilities for semantic document search using the Qdrant vector database and Ollama/OpenAI embeddings, letting users add, search, list, and delete documentation with metadata support.

Jenkins MCP
Enables management of Jenkins operations such as listing jobs, triggering builds, and checking build status through a configurable MCP server.
MCP Server for Ticketmaster Events
Provides a tool for discovering events at Madison Square Garden via the Ticketmaster API, returning structured data with event details such as name, date, price, and ticket purchase links.