Hedgehog 🦔

A Model Context Protocol (MCP) server for structured spike investigations and Architecture Decision Record (ADR) generation.

What It Does

Manages technical spike investigations with:

  • Enforced 4-phase workflow: Meta-design → Divergent exploration → Adversarial challenge → Synthesis
  • State machine: Prevents skipping phases or invalid transitions
  • Checkpoint/rollback: Save and restore investigation state
  • ADR generation: Auto-generates ADRs from exploration artifacts
  • Dead-end tracking: Documents abandoned approaches for future reference
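The enforced workflow is easiest to picture as a small state machine that only ever moves forward. A minimal sketch of the idea (illustrative only — names and structure are not Hedgehog's actual internals):

```python
from enum import Enum


class Phase(Enum):
    META_DESIGN = 0   # frame the problem
    EXPLORATION = 1   # diverge across 3+ branches
    CHALLENGE = 2     # adversarial review of each branch
    SYNTHESIS = 3     # compare, recommend, generate the ADR
    COMPLETE = 4


class Spike:
    """A spike always starts in meta-design and can never skip a phase."""

    def __init__(self, name: str):
        self.name = name
        self.phase = Phase.META_DESIGN

    def advance(self) -> Phase:
        """Move to the next phase in order; raises once the spike is done."""
        if self.phase is Phase.COMPLETE:
            raise ValueError("spike already complete")
        self.phase = Phase(self.phase.value + 1)
        return self.phase
```

Each `spike_*` tool maps onto one of these transitions, which is why you can't jump straight from framing to an ADR.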

Installation

git clone https://github.com/jpalmerr/Hedgehog.git
cd Hedgehog
pip install .

Then add to your Claude Code MCP settings (~/.claude.json):

{
  "mcpServers": {
    "hedgehog": {
      "command": "hedgehog",
      "args": []
    }
  }
}

Or via the CLI:

claude mcp add hedgehog --scope user -- hedgehog

Example Flow

Here's how a real investigation looks in Claude Code. You talk to Claude naturally — Hedgehog manages the structure behind the scenes.

Session 1: Frame the problem

You: I need to investigate options for migrating our Kafka topic IDs from numeric to string-based. Create a spike for this.

Claude calls spike_create("kafka-topic-migration", "Evaluate approaches for migrating Kafka topic IDs from numeric to string-based identifiers") and generates a meta-design template covering the problem statement, key questions, success criteria, and constraints.

You: The meta-design looks good. The key constraint is zero downtime — we can't stop consumers during migration. Approve it and let's start exploring.

Claude calls spike_approve_meta → advances to Phase 1. State is checkpointed automatically.

You: Let's explore three approaches: dual-write, shadow topics, and a proxy translation layer.

Claude registers all three branches and starts investigating the first one — reading docs, considering trade-offs, and documenting findings for each branch as it goes.

Session 2: Continue exploration (next day)

You: Pick up the kafka-topic-migration spike. Where did we leave off?

Claude calls spike_get_state → sees Phase 1 with one branch explored, two remaining. Continues investigating the remaining branches.

When the third branch is completed, Hedgehog automatically advances to Phase 2 (adversarial challenge) and checkpoints the state.

Session 3: Challenge and synthesize

You: Continue the spike. Challenge each approach — focus on failure modes and hidden assumptions.

Claude systematically challenges each branch: What happens during a dual-write if one write fails? What's the rollback story for shadow topics? How does the proxy handle schema evolution?

When all branches are challenged, Hedgehog auto-advances to Phase 3. Claude synthesizes the findings, generates an ADR with the recommendation, and you approve it.

The tools behind the scenes

Throughout this flow, Claude is calling Hedgehog tools:

spike_create / spike_approve_meta          → Phase 0 (framing)
spike_add_branch / spike_complete_branch   → Phase 1 (exploration)
spike_add_challenge                        → Phase 2 (adversarial)
spike_synthesize / spike_generate_adr      → Phase 3 (synthesis)
spike_approve_adr                          → Complete

Checkpoints are created automatically at each phase transition. Use spike_checkpoint for manual saves and spike_rollback to revert if an exploration path goes nowhere.
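Conceptually, checkpointing is a snapshot stack: each phase transition (or manual spike_checkpoint) pushes a deep copy of the spike state, and spike_rollback restores one. A rough sketch of that semantics, assuming nothing about Hedgehog's real storage format:

```python
import copy


class CheckpointStore:
    """Keeps snapshots of spike state so a dead-end exploration can be undone."""

    def __init__(self):
        self._snapshots: list[dict] = []

    def checkpoint(self, state: dict) -> int:
        """Save a deep copy of the current state; returns the checkpoint index."""
        self._snapshots.append(copy.deepcopy(state))
        return len(self._snapshots) - 1

    def rollback(self, index: int = -1) -> dict:
        """Return a copy of the given checkpoint (the latest by default)."""
        return copy.deepcopy(self._snapshots[index])
```

The deep copies matter: later edits to the live state can't corrupt a saved snapshot, so rolling back always recovers exactly what was checkpointed.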

Usage Guidance

Investigations span multiple sessions

A full spike investigation is a significant piece of work — comparable to a multi-day task you'd do at work. Hedgehog persists all state to disk (~/.claude/spikes/), so you can spread an investigation across as many Claude Code sessions as you need.

Natural session boundaries:

  • Session 1: Frame the problem (Phase 0), start exploration
  • Sessions 2-3: Complete branch explorations (Phase 1)
  • Session 4: Adversarial challenges + synthesis (Phases 2-3)

Use spike_get_state at the start of any session to pick up where you left off.
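The state is plain JSON on disk, so resuming is cheap. A hypothetical spike_get_state payload for the Kafka example above (field names are illustrative, not Hedgehog's actual schema):

```json
{
  "name": "kafka-topic-migration",
  "phase": 1,
  "branches": {
    "dual-write": "explored",
    "shadow-topics": "pending",
    "proxy-translation": "pending"
  },
  "checkpoints": 2
}
```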

Right-sizing your investigation

Not every technical question needs a full spike. Use Hedgehog when:

  • The decision is hard to reverse (infrastructure, data model, core architecture)
  • There are genuinely 3+ viable approaches worth comparing
  • You need a defensible ADR for your team

For smaller questions, just ask Claude directly — no ceremony needed.

Pro plan considerations

Hedgehog's own overhead is minimal (small JSON tool calls). The real token cost is Claude thinking about your problem: reading code, researching approaches, writing analysis. A full 3-branch investigation is token-intensive because the work is intensive.

On a Pro plan, lean into the multi-session workflow. Do one branch per session if needed. The checkpoint system means you never lose progress.

The Four Phases

Phase 0: Meta-Design

Define the investigation scope, key questions, and success criteria. Forces you to think about whether you're solving the right problem.

Phase 1: Divergent Exploration

Explore at least 3 distinct approaches. Don't converge prematurely—document advantages, disadvantages, and open questions for each.

Phase 2: Adversarial Challenge

For each branch, systematically identify failure modes, challenge assumptions, and find second-order effects.

Phase 3: Synthesis

Compare branches, produce a recommendation with uncertainty bounds, and generate an ADR documenting the decision.

Available Tools

Tool                    Description
spike_create            Create new spike investigation
spike_list              List all spikes
spike_get_state         Get current spike state
spike_approve_meta      Approve meta-design, advance to Phase 1
spike_add_branch        Register a branch for exploration
spike_complete_branch   Mark branch as explored with findings
spike_mark_dead_end     Document abandoned approach
spike_add_challenge     Add adversarial challenge to branch
spike_synthesize        Create synthesis document
spike_generate_adr      Generate ADR from artifacts
spike_approve_adr       Mark spike complete
spike_checkpoint        Save current state
spike_rollback          Restore to checkpoint
spike_archive           Archive completed spike
spike_delete            Delete spike

Available Resources

URI                                  Description
spike://{name}/state                 Current spike state as JSON
spike://{name}/meta-design           Meta-design document
spike://{name}/branches              List of branch documents
spike://{name}/branches/{branch}     Specific branch content
spike://{name}/challenges/{branch}   Challenge document for branch
spike://{name}/adr                   Generated ADR
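For the Kafka example earlier, the concrete URIs would look like this (branch names are whatever you registered during exploration):

```
spike://kafka-topic-migration/state
spike://kafka-topic-migration/branches/dual-write
spike://kafka-topic-migration/challenges/dual-write
spike://kafka-topic-migration/adr
```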

Development

# Clone and setup
git clone https://github.com/jpalmerr/Hedgehog
cd Hedgehog
python3.11 -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"

# Run tests
pytest

# Type check
mypy src

# Lint
ruff check src tests

License

MIT
