
<table style="border:none;"> <tr> <td width="300"> <img alt="Detrix" src="assets/logo.png" width="300"> </td> <td>

Detrix

Give your AI agent eyes inside any running program.

  • Watch any variable at any line — agent sets an observation point in seconds, zero code changes during debugging
  • Local or cloud — same workflow for Docker containers and remote hosts
  • Python, Go, Rust — observation points capture values without pausing, without restarting
  • Built for agents — observe, query, manage observations via natural language; Claude Code, Cursor, Windsurf

</td> </tr> </table>



See It in Action

<video src="https://github.com/user-attachments/assets/2f6cc317-e09b-48ae-a098-d553d59a26e4" controls width="100%"></video>

1-minute demo: A Go service in Docker starts reporting wrong numbers. Nothing in the client code changed. Claude watches it live and finds the cause.

Here's what the session looks like:

You:    "The order-service in Docker is reporting wildly inflated revenue.
         Daily total is in the thousands when transactions should be $20-200."

Agent:  → reads the source, spots dead unit-conversion code
        → wakes the service via Detrix
        → observes txn.Amount, txn.Unit, total
          (no line number — find_variable locates it in the running container)

        "Found it. The pricing API changed what it's returning.

        txn.Amount = 8957, txn.Unit = "cents" → should be $89.57, summed as $8957

        In calculateRevenue, the unit field is fetched and discarded:

        unit := txn.Unit
        _ = unit          // ← conversion was intended, never implemented
        total += amount   // ← adds cents as if they were dollars

        This didn't matter before. It matters now."

        → fixed the client code
        → reported the API contract change upstream

No instrumentation code was added. No restarts. The old workflow of adding a log line, rebuilding, redeploying, and waiting for the bug to reproduce is replaced by watching it live.

You don't need to know the line number either — describe the behavior and the agent finds where to look.


Why Detrix?

You hit a bug. The old workflow: add a print, restart, reproduce, remove the print, repeat. If it's in production, redeploy. If it's in a Docker container, get into the container. If it's intermittent, wait.

With Detrix, you just ask the agent. It finds the right line, plants an observation point, and tells you what it sees in real time, with nothing restarted.

That bug that cost you hours last week — redeploy after redeploy, still can't reproduce — your agent can investigate it in minutes, while your app keeps running.

|                     | print() / logging                        | Detrix                                                                                    |
| ------------------- | ---------------------------------------- | ----------------------------------------------------------------------------------------- |
| Iteration speed     | Hours (edit → rebuild → deploy)          | Minutes                                                                                   |
| Add new observation | Edit code → restart                      | Ask the agent — no code, no restart¹                                                      |
| Production-safe     | Output pollution, perf risk              | Non-breaking observation points                                                           |
| Events              | Ephemeral stream                         | Stored, queryable by metric and time                                                      |
| Capture control     | Every hit, no filtering                  | Throttle, sample, first-hit, interval                                                     |
| Cleanup             | Manual (easy to forget, ships to prod)   | One command — or automatic expiry                                                         |
| Sensitive data      | Secrets can leak via log output          | Sensitive-named vars blocked by default; configurable blacklist + whitelist in detrix.toml |

¹ Embed detrix.init() once for zero restarts forever. Or restart once to attach the debugger (--debugpy, dlv, lldb-dap) — from that point on, the agent adds and removes observations without any further restarts.
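The sensitive-data row above boils down to a name-based gate in front of every capture. A minimal sketch of the idea (illustration only, not Detrix's actual implementation; the real blacklist and whitelist live in detrix.toml):

```python
# Sketch of name-based capture filtering. The default blacklist entries below
# come from Detrix's documented defaults; the matching logic is illustrative.
DEFAULT_BLACKLIST = {"password", "api_key", "token", "secret", "private_key"}

def is_capturable(var_name: str, whitelist: frozenset = frozenset()) -> bool:
    """Allow capture unless the name contains a sensitive pattern,
    unless the exact name is explicitly whitelisted."""
    if var_name in whitelist:
        return True
    name = var_name.lower()
    return not any(pattern in name for pattern in DEFAULT_BLACKLIST)

print(is_capturable("txn_amount"))                                    # True
print(is_capturable("stripe_api_key"))                                # False
print(is_capturable("stripe_api_key", frozenset({"stripe_api_key"})))  # True
```

Blocked captures surface as a named error, so the agent can explain why a variable was refused instead of silently dropping it.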


Quick Start

Try it in 2 minutes. Your agent handles everything after step 3.

1. Install Detrix

macOS (Homebrew):

brew install flashus/tap/detrix

macOS / Linux (shell script):

curl --proto '=https' --tlsv1.2 -LsSf \
  https://github.com/flashus/detrix/releases/latest/download/detrix-installer.sh | sh

Windows (PowerShell):

irm https://github.com/flashus/detrix/releases/latest/download/detrix-installer.ps1 | iex

Docker (linux/amd64, linux/arm64):

docker pull ghcr.io/flashus/detrix:latest

Build from source:

cargo install --git https://github.com/flashus/detrix detrix

Then initialise (creates config and sets up local storage):

detrix init

2. Add to your app

One line — the debugger sleeps until your agent needs it, zero overhead when idle:

import detrix
detrix.init(name="my-app")

Go and Rust work the same way — see App Integration.

3. Connect your agent

Claude Code:

claude mcp add --scope user detrix -- detrix mcp

Cursor / Windsurf — add to .mcp.json in your project root:

{
  "mcpServers": {
    "detrix": {
      "command": "detrix",
      "args": ["mcp"]
    }
  }
}

For cloud setup and other editors, see the setup guide.

That's it. Ask your agent to observe any line in your running app — no restarts, nothing ships to prod.


Alternative: connect without embedding

Don't want to add a dependency? Start your app directly under a debugger instead:

# Python
python -m debugpy --listen 127.0.0.1:5678 app.py

# Go
dlv debug --headless --listen=127.0.0.1:5678 --api-version=2 main.go

# Rust
lldb-dap --port 5678

Listens on 127.0.0.1 — local only. See the language setup guide for remote and Docker.


How It Works

Detrix is a daemon that runs locally or in the cloud and connects your AI agent to any running process via 29 MCP tools. Under the hood, it talks to your app's debugger via the Debug Adapter Protocol (DAP). It sets logpoints — breakpoints that evaluate an expression and log the result instead of pausing. Your application runs at full speed; Detrix captures the values.

  AI Agent                 Detrix Daemon              Debugger (DAP)         Your App
  (Claude Code, Cursor,    (local or Docker/cloud)    debugpy / dlv /        (Python/Go/Rust,
    Windsurf, local)                                  lldb-dap               local/cloud)
      │                         │                          │                      │
      │── "observe line 127" ──▶│                          │                      │
      │                         │── set logpoint ─────────▶│                      │
      │                         │                          │── captures value ───▶│
      │                         │◀────────────── captured values ─────────────────│
      │◀── structured events ───│                          │                      │
      │                         │                          │                      │
      │         App never pauses. No code changes. No restarts.                   │

The daemon runs locally or alongside your service in Docker — same protocol either way. In cloud mode, source files are fetched automatically so the agent can find the right lines without them on your machine. See the Installation Guide for cloud setup.
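On the wire, a logpoint is just a standard DAP `setBreakpoints` request with a `logMessage` attached: the adapter evaluates the braced expressions on each hit and emits an event instead of pausing the thread. A sketch of the payload (the path, line, and expressions are illustrative, echoing the demo above):

```python
import json

# DAP setBreakpoints request carrying a logpoint: the logMessage field tells
# the debug adapter to interpolate the {…} expressions and log the result
# rather than stop execution at the line.
request = {
    "seq": 1,
    "type": "request",
    "command": "setBreakpoints",
    "arguments": {
        "source": {"path": "service/revenue.go"},  # illustrative path
        "breakpoints": [
            {
                "line": 127,  # illustrative line
                "logMessage": "amount={txn.Amount} unit={txn.Unit} total={total}",
            }
        ],
    },
}
print(json.dumps(request, indent=2))
```

Because this is plain DAP, the same request shape works across debugpy, delve, and lldb-dap; Detrix's job is to manage these requests, collect the resulting output events, and store them.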


App Integration

import detrix
detrix.init(name="my-app")   # That's it. Agent controls the rest.
| Language | Install                                        | Docs          |
| -------- | ---------------------------------------------- | ------------- |
| Python   | `pip install detrix-py`                        | Python Client |
| Go       | `go get github.com/flashus/detrix/clients/go`  | Go Client     |
| Rust     | `detrix-rs = "1.1.1"` in Cargo.toml            | Rust Client   |

Production pattern: Build one service instance with debug symbols and a Detrix client. Route suspect traffic to it via Kafka, a sidecar, or your load balancer. The rest of your fleet runs unaffected — full-speed, no instrumentation overhead. You get deep observability on one instance without touching production.

See the Clients Manual for full documentation.


Features

No code changes. The agent instruments your running code via observation points — nothing gets committed, nothing ships to prod.

No pausing. Observation points evaluate expressions at full execution speed, with no breakpoint-style halting. For high-frequency code paths, use sample or throttle modes to control event volume.
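The capture modes can be pictured as gates in front of the event stream. A sketch of throttle and first-hit semantics (a conceptual illustration, not Detrix's internals):

```python
import time

class ThrottleGate:
    """Pass at most one event per min_interval seconds (throttle mode)."""
    def __init__(self, min_interval: float):
        self.min_interval = min_interval
        self.last_emit = float("-inf")

    def allow(self, now=None) -> bool:
        now = time.monotonic() if now is None else now
        if now - self.last_emit >= self.min_interval:
            self.last_emit = now
            return True
        return False

class FirstHitGate:
    """Pass only the first event, then stay closed (first-hit mode)."""
    def __init__(self):
        self.fired = False

    def allow(self) -> bool:
        if self.fired:
            return False
        self.fired = True
        return True

gate = ThrottleGate(min_interval=1.0)
print([gate.allow(now=t) for t in (0.0, 0.5, 1.0, 2.5)])  # [True, False, True, True]
```

Sample mode is the probabilistic sibling (pass each event with probability p); periodic sampling captures on a timer regardless of hit rate.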

No forgotten cleanup. Metrics expire automatically via TTL, or remove everything with one command.

| Feature | Details |
| ------- | ------- |
| Agent tools | 29 MCP tools — observe any line, query events, enable/disable observation groups, and clean up; no line number needed |
| Zero-downtime instrumentation | Add metrics without restarting your app |
| Multi-variable capture | Capture multiple variables per observation point |
| Capture modes | Stream, sample, throttle, first-hit, periodic sampling (every N sec) |
| Runtime introspection | Stack traces, memory snapshots, variable inspection, expression evaluation |
| Multi-language | Python (debugpy), Go (delve), Rust (lldb-dap) |
| Cloud debugging | Observe Docker containers and remote hosts — no VPN, no port forwarding |
| Durable storage | Events stored in SQLite on the daemon host. Run Detrix on a remote server, connect your agent in the morning and ask what happened overnight. Daemon auto-reconnects to the debug adapter if it restarts. |
| Extensible | New frontends via open API; new language support by implementing a language adapter — Adding Languages |
| Safety validation | Sensitive variable names (password, api_key, token, secret, private_key, etc.) blocked before capture. Configurable blacklist + whitelist for variable names and functions in detrix.toml. Enable safe mode per connection to allow only variable watching — no expression execution, no stack traces, no memory snapshots. Blocked operations return a clear named error so the agent can explain the constraint. |
| Auth | Bearer token auth (static or JWT/JWKS) — designed to run behind your reverse proxy |
| Event streaming | Forward captured events to Graylog |
| 4 API protocols | MCP (stdio), gRPC, REST, WebSocket |
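Durable storage means the "what happened overnight" question is an ordinary query over SQLite on the daemon host. A sketch using a simplified, hypothetical schema (the real table layout is Detrix-internal; the metric name and values echo the demo above):

```python
import sqlite3

# Hypothetical simplified events table; the actual Detrix schema is internal.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (metric TEXT, value REAL, captured_at TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [
        ("revenue.total", 8957.0, "2024-01-01T02:13:00Z"),
        ("revenue.total", 41.0, "2024-01-01T02:14:00Z"),
    ],
)

# "What happened overnight?": filter by metric and time window, oldest first.
rows = conn.execute(
    "SELECT value, captured_at FROM events "
    "WHERE metric = ? AND captured_at >= ? ORDER BY captured_at",
    ("revenue.total", "2024-01-01T00:00:00Z"),
).fetchall()
print(rows)  # [(8957.0, '2024-01-01T02:13:00Z'), (41.0, '2024-01-01T02:14:00Z')]
```

In practice the agent issues this kind of query through the MCP query tools rather than raw SQL; the point is that events persist and are addressable by metric and time.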

Documentation

| Guide | Covers |
| ----- | ------ |
| Installation Guide | Install, language setup, agent config, cloud debugging |
| CLI Reference | Command-line interface |
| Clients Manual | Python, Go, Rust client libraries |
| Architecture | Clean Architecture with 13 Rust crates |
| Adding Languages | Extend Detrix to new languages |

Contributing

cargo fmt --all && cargo clippy --all -- -D warnings && cargo test --all
  1. Fork the repository
  2. Create a feature branch
  3. Run the checks above
  4. Submit a Pull Request

License

MIT License — see LICENSE.

Found a bug? Open an issue. Found in minutes what took you days? Tell us in Discussions.
