hny-mcp

Server for interacting with Honeycomb observability data. This server enables LLMs like Claude to directly analyze and query your Honeycomb datasets.


Tools

list_datasets

Lists available datasets for the active environment with pagination, sorting, and search support. Returns dataset names, slugs, descriptions, and timestamps.

list_columns

Lists all columns available in the specified dataset, including their names, types, descriptions, and hidden status. Supports pagination, sorting by type/name/created_at, and searching by name/description. Note: __all__ is NOT supported as a dataset name.

run_query

Executes a Honeycomb query, returning results with statistical summaries.

CRITICAL RULE: For COUNT operations, NEVER include a "column" field in your calculation, even as null or undefined. Use {"op": "COUNT"}, NOT {"op": "COUNT", "column": "anything"}.

Additional rules:

  1) All parameters must be at the TOP LEVEL (not nested inside a 'query' property).
  2) Field names must be exact: use 'op' (not 'operation') and 'breakdowns' (not 'group_by').
  3) Only use the exact operation names listed in the schema (e.g., use "P95" for the 95th percentile, not "PERCENTILE").
  4) For all operations EXCEPT COUNT and CONCURRENCY, you must specify a "column" field.
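
For example, a minimal valid call (the environment, dataset, and column names are illustrative) counts events broken down by service over the last hour:

{
  "environment": "production",
  "dataset": "api-requests",
  "calculations": [{"op": "COUNT"}],
  "breakdowns": ["service.name"],
  "time_range": 3600
}

Note that the COUNT calculation carries no "column" field and all parameters sit at the top level.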

analyze_columns

Analyzes specific columns in a dataset by running statistical queries and returning computed metrics. This tool allows users to get statistical information about a specific column, including value distribution, top values, and numeric statistics (for numeric columns). Supports analyzing up to 10 columns at once by specifying an array of column names in the 'columns' parameter. When multiple columns are specified, they will be analyzed together as a group, showing the distribution of their combined values. Use this tool before running queries to get a better understanding of the data in your dataset.

list_boards

Lists available boards (dashboards) for a specific environment with pagination, sorting, and search support. Returns board IDs, names, descriptions, creation times, and last update times.

get_board

Retrieves a specific board (dashboard) from a Honeycomb environment. This tool returns a detailed object containing the board's ID, name, description, creation time, and last update time.

list_markers

Lists available markers (deployment events) for a specific dataset or environment with pagination, sorting, and search support. Returns IDs, messages, types, URLs, creation times, start times, and end times.

list_recipients

Lists available recipients for notifications in a specific environment. This tool returns a list of all recipients available in the specified environment, including their names, types, targets, and metadata.

list_slos

Lists available SLOs (Service Level Objectives) for a specific dataset. This tool returns a list of all SLOs available in the specified environment, including their names, descriptions, time periods, and target per million events expected to succeed. NOTE: __all__ is NOT supported as a dataset name -- it is not possible to list all SLOs in an environment.

get_slo

Retrieves a specific SLO (Service Level Objective) by ID with detailed information. This tool returns a detailed object containing the SLO's ID, name, description, time period, target per million, compliance, budget remaining, SLI alias, and timestamps.

list_triggers

Lists available triggers (alerts) for a specific dataset. This tool returns a list of all triggers available in the specified dataset, including their names, descriptions, thresholds, and other metadata. NOTE: __all__ is NOT supported as a dataset name -- it is not possible to list all triggers in an environment.

get_trigger

Retrieves a specific trigger (alert) by ID with detailed information. This tool returns a detailed object containing the trigger's ID, name, description, threshold, frequency, alert type, triggered status, disabled status, recipients, evaluation schedule type, and timestamps.

get_trace_link

Generates a direct deep link to a specific trace in the Honeycomb UI. This tool creates a URL that opens a specific distributed trace, optionally positioned on a particular span and time range. If no time range is specified, the trace must have been generated within the last two hours. If only the start time is provided, the end time is assumed to be 10 minutes after the start time.

get_instrumentation_help

Provides important guidance for how to instrument code with OpenTelemetry traces and logs. It is intended to be used when someone wants to instrument their code, or improve instrumentation (such as getting advice on improving their logs or tracing, or creating new instrumentation). It is BEST used after inspecting existing code and telemetry data to understand some operational characteristics. However, if there is no telemetry data to read from Honeycomb, it can still provide guidance on how to instrument code.

README

Honeycomb MCP Server

A Model Context Protocol server for interacting with Honeycomb observability data. This server enables LLMs like Claude to directly analyze and query your Honeycomb datasets across multiple environments.

Honeycomb Enterprise Only

Currently, this is only available for Honeycomb Enterprise customers.

How it works

Today, this is a single server process that you must run on your own computer. It is not authenticated. All communication between your client and the server happens over stdio.

Installation

pnpm install
pnpm run build

The build artifact goes into the /build folder.

Configuration

To use this MCP server, you need to provide Honeycomb API keys via environment variables in your MCP config.

{
    "mcpServers": {
      "honeycomb": {
        "command": "node",
        "args": [
          "/fully/qualified/path/to/honeycomb-mcp/build/index.mjs"
        ],
        "env": {
          "HONEYCOMB_API_KEY": "your_api_key"
        }
      }
    }
}

For multiple environments:

{
    "mcpServers": {
      "honeycomb": {
        "command": "node",
        "args": [
          "/fully/qualified/path/to/honeycomb-mcp/build/index.mjs"
        ],
        "env": {
          "HONEYCOMB_ENV_PROD_API_KEY": "your_prod_api_key",
          "HONEYCOMB_ENV_STAGING_API_KEY": "your_staging_api_key"
        }
      }
    }
}

Important: These environment variables must be set in the env block of your MCP config.

EU Configuration

EU customers must also set HONEYCOMB_API_ENDPOINT, since the MCP server defaults to the non-EU instance.

# Optional custom API endpoint (defaults to https://api.honeycomb.io)
HONEYCOMB_API_ENDPOINT=https://api.eu1.honeycomb.io/
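
Like the API keys, this variable goes in the env block of your MCP config; for example (the path and key are placeholders):

{
    "mcpServers": {
      "honeycomb": {
        "command": "node",
        "args": [
          "/fully/qualified/path/to/honeycomb-mcp/build/index.mjs"
        ],
        "env": {
          "HONEYCOMB_API_KEY": "your_api_key",
          "HONEYCOMB_API_ENDPOINT": "https://api.eu1.honeycomb.io/"
        }
      }
    }
}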

Caching Configuration

The MCP server implements caching for all non-query Honeycomb API calls to improve performance and reduce API usage. Caching can be configured using these environment variables:

# Enable/disable caching (default: true)
HONEYCOMB_CACHE_ENABLED=true

# Default TTL in seconds (default: 300)
HONEYCOMB_CACHE_DEFAULT_TTL=300

# Resource-specific TTL values in seconds (defaults shown)
HONEYCOMB_CACHE_DATASET_TTL=900    # 15 minutes
HONEYCOMB_CACHE_COLUMN_TTL=900     # 15 minutes
HONEYCOMB_CACHE_BOARD_TTL=900      # 15 minutes
HONEYCOMB_CACHE_SLO_TTL=900        # 15 minutes
HONEYCOMB_CACHE_TRIGGER_TTL=900    # 15 minutes
HONEYCOMB_CACHE_MARKER_TTL=900     # 15 minutes
HONEYCOMB_CACHE_RECIPIENT_TTL=900  # 15 minutes
HONEYCOMB_CACHE_AUTH_TTL=3600      # 1 hour

# Maximum cache size (items per resource type)
HONEYCOMB_CACHE_MAX_SIZE=1000

Client compatibility

Honeycomb MCP has been tested with several MCP clients and will likely work with others.

Features

  • Query Honeycomb datasets across multiple environments
  • Run analytics queries with support for:
    • Multiple calculation types (COUNT, AVG, P95, etc.)
    • Breakdowns and filters
    • Time-based analysis
  • Monitor SLOs and their status (Enterprise only)
  • Analyze columns and data patterns
  • View and analyze Triggers
  • Access dataset metadata and schema information
  • Optimized performance with TTL-based caching for all non-query API calls

Resources

Access Honeycomb datasets using URIs in the format: honeycomb://{environment}/{dataset}

For example:

  • honeycomb://production/api-requests
  • honeycomb://staging/backend-services

The resource response includes:

  • Dataset name
  • Column information (name, type, description)
  • Schema details

Tools

  • list_datasets: List all datasets in an environment

    { "environment": "production" }
    
  • list_columns: List column information for a dataset

    {
      "environment": "production",
      "dataset": "api-requests"
    }
    
  • run_query: Run analytics queries with rich options

    {
      "environment": "production",
      "dataset": "api-requests",
      "calculations": [
        { "op": "COUNT" },
        { "op": "P95", "column": "duration_ms" }
      ],
      "breakdowns": ["service.name"],
      "time_range": 3600
    }
    
  • analyze_columns: Analyze specific columns in a dataset by running statistical queries and returning computed metrics
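
    A sketch of a call analyzing two columns together (the column names are illustrative; 'columns' accepts up to 10 names):

    {
      "environment": "production",
      "dataset": "api-requests",
      "columns": ["http.status_code", "service.name"]
    }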

  • list_slos: List all SLOs for a dataset

    {
      "environment": "production",
      "dataset": "api-requests"
    }
    
  • get_slo: Get detailed SLO information

    {
      "environment": "production",
      "dataset": "api-requests",
      "sloId": "abc123"
    }
    
  • list_triggers: List all triggers for a dataset

    {
      "environment": "production",
      "dataset": "api-requests"
    }
    
  • get_trigger: Get detailed trigger information

    {
      "environment": "production",
      "dataset": "api-requests",
      "triggerId": "xyz789"
    }
    
  • get_trace_link: Generate a deep link to a specific trace in the Honeycomb UI
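
    A hypothetical call (the exact parameter names here are assumptions, not confirmed by this document):

    {
      "environment": "production",
      "dataset": "api-requests",
      "traceId": "your-trace-id"
    }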

  • get_instrumentation_help: Provides OpenTelemetry instrumentation guidance

    {
      "language": "python",
      "filepath": "app/services/payment_processor.py"
    }
    

Example Queries with Claude

Ask Claude things like:

  • "What datasets are available in the production environment?"
  • "Show me the P95 latency for the API service over the last hour"
  • "What's the error rate broken down by service name?"
  • "Are there any SLOs close to breaching their budget?"
  • "Show me all active triggers in the staging environment"
  • "What columns are available in the production API dataset?"

Optimized Tool Responses

All tool responses are optimized to reduce context window usage while maintaining essential information:

  • List datasets: Returns only name, slug, and description
  • List columns: Returns streamlined column information focusing on name, type, and description
  • Run query:
    • Includes actual results and necessary metadata
    • Adds automatically calculated summary statistics
    • Only includes series data for heatmap queries
    • Omits verbose metadata, links and execution details
  • Analyze column:
    • Returns top values, counts, and key statistics
    • Automatically calculates numeric metrics when appropriate
  • SLO information: Streamlined to key status indicators and performance metrics
  • Trigger information: Focused on trigger status, conditions, and notification targets

This optimization ensures that responses are concise but complete, allowing LLMs to process more data within context limitations.

Query Specification for run_query

The run_query tool supports a comprehensive query specification:

  • calculations: Array of operations to perform

    • Supported operations: COUNT, CONCURRENCY, COUNT_DISTINCT, HEATMAP, SUM, AVG, MAX, MIN, P001, P01, P05, P10, P25, P50, P75, P90, P95, P99, P999, RATE_AVG, RATE_SUM, RATE_MAX
    • Some operations like COUNT and CONCURRENCY don't require a column
    • Example: {"op": "HEATMAP", "column": "duration_ms"}
  • filters: Array of filter conditions

    • Supported operators: =, !=, >, >=, <, <=, starts-with, does-not-start-with, exists, does-not-exist, contains, does-not-contain, in, not-in
    • Example: {"column": "error", "op": "=", "value": true}
  • filter_combination: "AND" or "OR" (default is "AND")

  • breakdowns: Array of columns to group results by

    • Example: ["service.name", "http.status_code"]
  • orders: Array specifying how to sort results

    • Must reference columns from breakdowns or calculations
    • HEATMAP operation cannot be used in orders
    • Example: {"op": "COUNT", "order": "descending"}
  • time_range: Relative time range in seconds (e.g., 3600 for last hour)

    • Can be combined with either start_time or end_time but not both
  • start_time and end_time: UNIX timestamps for absolute time ranges

  • having: Filter results based on calculation values

    • Example: {"calculate_op": "COUNT", "op": ">", "value": 100}

Example Queries

Here are some real-world example queries:

Find Slow API Calls

{
  "environment": "production",
  "dataset": "api-requests",
  "calculations": [
    {"column": "duration_ms", "op": "HEATMAP"},
    {"column": "duration_ms", "op": "MAX"}
  ],
  "filters": [
    {"column": "trace.parent_id", "op": "does-not-exist"}
  ],
  "breakdowns": ["http.target", "name"],
  "orders": [
    {"column": "duration_ms", "op": "MAX", "order": "descending"}
  ]
}

Distribution of DB Calls (Last Week)

{
  "environment": "production",
  "dataset": "api-requests",
  "calculations": [
    {"column": "duration_ms", "op": "HEATMAP"}
  ],
  "filters": [
    {"column": "db.statement", "op": "exists"}
  ],
  "breakdowns": ["db.statement"],
  "time_range": 604800
}

Exception Count by Exception and Caller

{
  "environment": "production",
  "dataset": "api-requests",
  "calculations": [
    {"op": "COUNT"}
  ],
  "filters": [
    {"column": "exception.message", "op": "exists"},
    {"column": "parent_name", "op": "exists"}
  ],
  "breakdowns": ["exception.message", "parent_name"],
  "orders": [
    {"op": "COUNT", "order": "descending"}
  ]
}

Development

pnpm install
pnpm run build

Requirements

  • Node.js 16+
  • Honeycomb API keys with appropriate permissions:
    • Query access for analytics
    • Read access for SLOs and Triggers
    • Environment-level access for dataset operations

License

MIT
