Datadog MCP Server
A Model Context Protocol (MCP) server that provides comprehensive Datadog monitoring capabilities through Claude Desktop and other MCP clients.
Features
This MCP server enables Claude to:
- CI/CD Pipeline Management: List CI pipelines, extract fingerprints
- Service Logs Analysis: Retrieve and analyze service logs with environment and time filtering
- Metrics Monitoring: Query any Datadog metric with flexible filtering, aggregation, and field discovery
- Monitoring & Alerting: List and manage Datadog monitors and Service Level Objectives (SLOs)
- Service Definitions: List and retrieve detailed service definitions with metadata, ownership, and configuration
- Team Management: List teams, view member details, and manage team information
Quick Start
Choose your preferred method to run the Datadog MCP server:
🚀 UVX Direct Run (Recommended)
export DD_API_KEY="your-datadog-api-key" DD_APP_KEY="your-datadog-application-key"
# Latest version (HEAD)
uvx --from git+https://github.com/shelfio/datadog-mcp.git datadog-mcp
# Specific version (recommended for production)
uvx --from git+https://github.com/shelfio/datadog-mcp.git@v0.0.5 datadog-mcp
# Specific branch
uvx --from git+https://github.com/shelfio/datadog-mcp.git@main datadog-mcp
🔧 UV Quick Run (Development)
export DD_API_KEY="your-datadog-api-key" DD_APP_KEY="your-datadog-application-key"
git clone https://github.com/shelfio/datadog-mcp.git /tmp/datadog-mcp && cd /tmp/datadog-mcp && uv run ddmcp/server.py
🐳 Podman (Optional)
podman run -e DD_API_KEY="your-datadog-api-key" -e DD_APP_KEY="your-datadog-application-key" -i $(podman build -q https://github.com/shelfio/datadog-mcp.git)
Method Comparison:
| Method | Speed | Latest Code | Setup | Best For |
|---|---|---|---|---|
| 🚀 UVX Direct Run | ⚡⚡⚡ | ✅ (versioned) | Minimal | Production, Claude Desktop |
| 🔧 UV Quick Run | ⚡⚡ | ✅ (bleeding edge) | Clone required | Development, Testing |
| 🐳 Podman | ⚡ | ✅ (bleeding edge) | Podman required | Containerized environments |
Requirements
For UVX/UV Methods
- Python 3.13+
- UV package manager (includes uvx)
- Datadog API Key and Application Key
For Podman Method
- Podman
- Datadog API Key and Application Key
Version Management
When using UVX, you can specify exact versions for reproducible deployments:
Version Formats
- Latest: git+https://github.com/shelfio/datadog-mcp.git (HEAD)
- Specific Tag: git+https://github.com/shelfio/datadog-mcp.git@v0.0.5
- Branch: git+https://github.com/shelfio/datadog-mcp.git@main
- Commit Hash: git+https://github.com/shelfio/datadog-mcp.git@59f0c15
Recommendations
- Production: Use specific tags (e.g., @v0.0.5) for stability
- Development: Use latest or a specific branch for the newest features
- Testing: Use commit hashes for exact reproducibility
See GitHub releases for all available versions.
Claude Desktop Integration
Using UVX (Recommended)
Add to Claude Desktop configuration:
Latest version (bleeding edge):
{
"mcpServers": {
"datadog": {
"command": "uvx",
"args": ["--from", "git+https://github.com/shelfio/datadog-mcp.git", "datadog-mcp"],
"env": {
"DD_API_KEY": "your-datadog-api-key",
"DD_APP_KEY": "your-datadog-application-key"
}
}
}
}
Specific version (recommended for production):
{
"mcpServers": {
"datadog": {
"command": "uvx",
"args": ["--from", "git+https://github.com/shelfio/datadog-mcp.git@v0.0.5", "datadog-mcp"],
"env": {
"DD_API_KEY": "your-datadog-api-key",
"DD_APP_KEY": "your-datadog-application-key"
}
}
}
}
For EU region (see Multi-Region Support for other regions):
{
"mcpServers": {
"datadog": {
"command": "uvx",
"args": ["--from", "git+https://github.com/shelfio/datadog-mcp.git", "datadog-mcp"],
"env": {
"DD_API_KEY": "your-datadog-api-key",
"DD_APP_KEY": "your-datadog-application-key",
"DD_SITE": "datadoghq.eu"
}
}
}
}
Using Local Development Setup
For development with local cloned repository:
git clone https://github.com/shelfio/datadog-mcp.git
cd datadog-mcp
Add to Claude Desktop configuration:
{
"mcpServers": {
"datadog": {
"command": "uv",
"args": ["run", "ddmcp/server.py"],
"cwd": "/path/to/datadog-mcp",
"env": {
"DD_API_KEY": "your-datadog-api-key",
"DD_APP_KEY": "your-datadog-application-key"
}
}
}
}
Installation Options
UVX Installation (Recommended)
Install and run directly from GitHub without cloning:
export DD_API_KEY="your-datadog-api-key"
export DD_APP_KEY="your-datadog-application-key"
# Latest version
uvx --from git+https://github.com/shelfio/datadog-mcp.git datadog-mcp
# Specific version (recommended for production)
uvx --from git+https://github.com/shelfio/datadog-mcp.git@v0.0.5 datadog-mcp
Development Installation
For local development and testing:
1. Clone the repository:
   git clone https://github.com/shelfio/datadog-mcp.git
   cd datadog-mcp
2. Install dependencies:
   uv sync
3. Run the server:
   export DD_API_KEY="your-datadog-api-key"
   export DD_APP_KEY="your-datadog-application-key"
   uv run ddmcp/server.py
Podman Installation (Optional)
For containerized environments:
podman run -e DD_API_KEY="your-key" -e DD_APP_KEY="your-app-key" -i $(podman build -q https://github.com/shelfio/datadog-mcp.git)
Tools
The server provides these tools to Claude:
list_ci_pipelines
Lists all CI pipelines registered in Datadog with filtering options.
Arguments:
- repository (optional): Filter by repository name
- pipeline_name (optional): Filter by pipeline name
- format (optional): Output format - "table", "json", or "summary"
get_pipeline_fingerprints
Extracts pipeline fingerprints for use in Terraform service definitions.
Arguments:
- repository (optional): Filter by repository name
- pipeline_name (optional): Filter by pipeline name
- format (optional): Output format - "table", "json", or "summary"
list_metrics
Lists all available metrics from Datadog for metric discovery.
Arguments:
- filter (optional): Filter to search for metrics by tags (e.g., 'aws:', 'env:', 'service:web')
- limit (optional): Maximum number of metrics to return (default: 100, max: 10000)
get_metrics
Queries any Datadog metric with flexible filtering and aggregation.
Arguments:
- metric_name (required): The metric name to query (e.g., 'aws.apigateway.count', 'system.cpu.user')
- time_range (optional): "1h", "4h", "8h", "1d", "7d", "14d", "30d"
- aggregation (optional): "avg", "sum", "min", "max", "count"
- filters (optional): Dictionary of filters to apply (e.g., {'service': 'web', 'env': 'prod'})
- aggregation_by (optional): List of fields to group results by
- format (optional): "table", "summary", "json", "timeseries"
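As an illustration, an MCP client might invoke get_metrics with an arguments object shaped like this (the metric, filter, and grouping values are hypothetical examples, not defaults):

```json
{
  "metric_name": "aws.apigateway.count",
  "time_range": "4h",
  "aggregation": "sum",
  "filters": {"service": "web", "env": "prod"},
  "aggregation_by": ["account"],
  "format": "table"
}
```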
get_metric_fields
Retrieves all available fields (tags) for a specific metric.
Arguments:
- metric_name (required): The metric name to get fields for
- time_range (optional): "1h", "4h", "8h", "1d", "7d", "14d", "30d"
get_metric_field_values
Retrieves all values for a specific field of a metric.
Arguments:
- metric_name (required): The metric name
- field_name (required): The field name to get values for
- time_range (optional): "1h", "4h", "8h", "1d", "7d", "14d", "30d"
list_service_definitions
Lists all service definitions from Datadog with pagination and filtering.
Arguments:
- page_size (optional): Number of service definitions per page (default: 10, max: 100)
- page_number (optional): Page number for pagination (0-indexed, default: 0)
- schema_version (optional): Filter by schema version (e.g., 'v2', 'v2.1', 'v2.2')
- format (optional): Output format - "table", "json", or "summary"
get_service_definition
Retrieves the definition of a specific service with detailed metadata.
Arguments:
- service_name (required): Name of the service to retrieve
- schema_version (optional): Schema version to retrieve (default: "v2.2", options: "v1", "v2", "v2.1", "v2.2")
- format (optional): Output format - "formatted", "json", or "yaml"
get_service_logs
Retrieves service logs with comprehensive filtering capabilities.
Arguments:
- service_name (required): Name of the service
- time_range (required): "1h", "4h", "8h", "1d", "7d", "14d", "30d"
- environment (optional): "prod", "staging", "backoffice"
- log_level (optional): "INFO", "ERROR", "WARN", "DEBUG"
- format (optional): "table", "text", "json", "summary"
list_monitors
Lists all Datadog monitors with comprehensive filtering options.
Arguments:
- name (optional): Filter monitors by name (substring match)
- tags (optional): Filter monitors by tags (e.g., 'env:prod,service:web')
- monitor_tags (optional): Filter monitors by monitor tags (e.g., 'team:backend')
- page_size (optional): Number of monitors per page (default: 50, max: 1000)
- page (optional): Page number (0-indexed, default: 0)
- format (optional): Output format - "table", "json", or "summary"
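For example, a client could call list_monitors with arguments like the following (tag values are illustrative):

```json
{
  "tags": "env:prod,service:web",
  "monitor_tags": "team:backend",
  "page_size": 50,
  "format": "summary"
}
```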
list_slos
Lists Service Level Objectives (SLOs) from Datadog with filtering capabilities.
Arguments:
- query (optional): Filter SLOs by name or description (substring match)
- tags (optional): Filter SLOs by tags (e.g., 'team:backend,env:prod')
- limit (optional): Maximum number of SLOs to return (default: 50, max: 1000)
- offset (optional): Number of SLOs to skip (default: 0)
- format (optional): Output format - "table", "json", or "summary"
get_teams
Lists teams and their members.
Arguments:
- team_name (optional): Filter by team name
- include_members (optional): Include member details (default: false)
- format (optional): "table", "json", "summary"
Examples
Ask Claude to help you with:
"Show me all CI pipelines for the shelf-api repository"
"Get error logs for the content service in the last 4 hours"
"List all available AWS metrics"
"What are the latest metrics for aws.apigateway.count grouped by account?"
"Get all available fields for the system.cpu.user metric"
"List all service definitions in my organization"
"Get the definition for the user-api service"
"List all teams and their members"
"Show all monitors for the web service"
"List SLOs with less than 99% uptime"
"Extract pipeline fingerprints for Terraform configuration"
Configuration
Environment Variables
| Variable | Description | Required | Default |
|---|---|---|---|
| DD_API_KEY | Datadog API Key | Yes | - |
| DD_APP_KEY | Datadog Application Key | Yes | - |
| DD_SITE | Datadog site/region (see table below) | No | datadoghq.com |
Multi-Region Support
Datadog operates in multiple regions. Set the DD_SITE environment variable to connect to your Datadog region:
| Region | DD_SITE Value | Description |
|---|---|---|
| US1 | datadoghq.com | US (default) |
| US3 | us3.datadoghq.com | US3 |
| US5 | us5.datadoghq.com | US5 |
| EU1 | datadoghq.eu | Europe |
| AP1 | ap1.datadoghq.com | Asia Pacific (Japan) |
| US1-FED | ddog-gov.com | US Government |
Example for EU region:
export DD_SITE="datadoghq.eu"
export DD_API_KEY="your-api-key"
export DD_APP_KEY="your-app-key"
uvx --from git+https://github.com/shelfio/datadog-mcp.git datadog-mcp
See Datadog's Getting Started with Sites for more information.
Obtaining Datadog Credentials
1. Log in to your Datadog account
2. Go to Organization Settings → API Keys
3. Create or copy your API Key (this is your DD_API_KEY)
4. Go to Organization Settings → Application Keys
5. Create or copy your Application Key (this is your DD_APP_KEY)
Note: These are two different keys:
- API Key: Used for authentication with Datadog's API
- Application Key: Used for authorization and is tied to a specific user account
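To sanity-check the API key before wiring up the server, you can hit Datadog's key-validation endpoint directly. The sketch below builds the URL from DD_SITE (falling back to the US1 default) and only exercises the API key; the Application Key is checked later, when the server makes app-scoped calls:

```shell
# Build the validation URL from DD_SITE (defaults to datadoghq.com).
DD_SITE="${DD_SITE:-datadoghq.com}"
echo "Validating against https://api.${DD_SITE}/api/v1/validate"

# A valid key returns a JSON body containing "valid": true.
curl -s -H "DD-API-KEY: ${DD_API_KEY}" \
  "https://api.${DD_SITE}/api/v1/validate"
```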