# simple-ai-provenance
Track every AI prompt you send in Claude Code, annotate git commits with what was asked, and query the full history of any session.
## What it does
- Auto-captures every prompt you send in Claude Code via a hook — no manual steps
- Annotates commits with the prompts that produced the code, via global git hooks
- Answers "what did I do in this session?" through MCP tools callable inside Claude
- Stays compact — commit messages switch from verbose (all prompts) to condensed (summary) above a configurable threshold
## Install

```bash
pip install simple-ai-provenance
provenance-setup
```
Then restart Claude Code and Claude Desktop.
That's it. Every prompt from that point forward is recorded automatically.
## How it works

```
You type a prompt
        ↓
UserPromptSubmit hook fires → written to ~/.claude/provenance/provenance.db
        ↓
Claude works...
        ↓
git commit -m "fix: ..."
        ↓
prepare-commit-msg hook appends AI provenance block
        ↓
post-commit hook marks those prompts as committed
```
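Conceptually, the prepare-commit-msg step boils down to appending commented lines to the message file that git hands the hook as its first argument. A minimal sketch (the real hook is installed by `provenance-setup`; the function name and block format here are illustrative):

```python
def append_provenance_block(msg_path, prompts):
    """Append an AI-provenance comment block to a commit message file.

    git invokes prepare-commit-msg hooks with the path to the message
    file as the first argument; lines starting with '#' are stripped
    from the message that is actually stored.
    """
    lines = ["", "# ── AI Provenance ──"]
    lines += [f"# • {p}" for p in prompts]
    lines.append("# ──")
    with open(msg_path, "a", encoding="utf-8") as f:
        f.write("\n".join(lines) + "\n")
```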
### Commit message (≤ 5 prompts: verbose)

```
fix: auth bug

# ── AI Provenance ──────────────────────────────────────────
#
# Session 1 (2026-02-26 14:30, id: a1b2c3d4, 3 prompts)
# • fix the auth bug in login.py
# • add error handling for the edge cases
# • write unit tests for the new endpoints
#
# Files: src/auth/login.py, tests/test_auth.py
#
# ─────────────────────────────────────────────────────────
```
### Commit message (> 5 prompts: condensed)

```
refactor: connection pooling

# ── AI Provenance ──────────────────────────────────────────
#
# 12 prompts · 2 sessions over 1h 23m
#
# Session 1 (09:00, id: a1b2c3d4, 5 prompts)
# Session 2 (10:30, id: e5f6g7h8, 7 prompts)
#
# First: refactor the database connection pooling module
# Last: add retry logic with exponential backoff
#
# Full history: call get_session_summary in Claude
# Files: src/db/pool.py, src/db/retry.py (+3 more)
#
# ─────────────────────────────────────────────────────────
```
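Which style a commit gets is just a count check against `verbose_threshold` (a sketch matching the examples above; the function name is illustrative):

```python
def choose_block_style(prompt_count, verbose_threshold=5):
    """Return 'verbose' (list every prompt) at or below the threshold,
    'condensed' (counts plus first/last prompt) above it."""
    return "verbose" if prompt_count <= verbose_threshold else "condensed"
```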
The `#` lines are git comment lines: visible in your editor while committing, but stripped from the stored commit message.
## MCP Tools

Once installed, these tools are available inside any Claude session:

| Tool | What it does |
|---|---|
| `get_session_summary` | Prompts + files touched + tools used for a session |
| `get_uncommitted_work` | All prompts since last commit, grouped by session |
| `generate_commit_context` | Formatted provenance block for a commit message |
| `mark_committed` | Mark pending prompts as committed (auto-called by git hook) |
| `list_sessions` | Recent sessions with prompt counts |
| `configure` | Get or set config (e.g. `verbose_threshold`) |
## Configuration

Config lives at `~/.claude/simple-ai-provenance-config.json`:

```json
{
  "settings": {
    "verbose_threshold": 5
  }
}
```
Change it via the MCP tool inside Claude:
```
configure verbose_threshold=10
```
Or directly edit the JSON file.
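Editing the file from a script amounts to a read-modify-write of that JSON (a sketch; `settings.verbose_threshold` is the only key documented here, and the helper name is illustrative):

```python
import json
from pathlib import Path

CONFIG = Path.home() / ".claude" / "simple-ai-provenance-config.json"

def set_verbose_threshold(n, path=CONFIG):
    """Set settings.verbose_threshold, preserving any other config keys."""
    cfg = json.loads(path.read_text()) if path.exists() else {"settings": {}}
    cfg.setdefault("settings", {})["verbose_threshold"] = n
    path.write_text(json.dumps(cfg, indent=2) + "\n")
```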
## Requirements
- Python 3.9+
- Claude Code (Claude CLI)
- Claude Desktop (optional — for MCP tools in the desktop app)
- Git
## How sessions are scoped
Each Claude Code session is automatically scoped to the git repository detected from the working directory. Prompts from different projects never mix.
```
Session in ~/projects/api → recorded under repo /Users/you/projects/api
Session in ~/projects/web → recorded under repo /Users/you/projects/web
```
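One way to implement that scoping is to ask git for the repository root of the session's working directory (a sketch; the package's actual detection logic may differ):

```python
import subprocess

def detect_repo_root(cwd):
    """Return the absolute repo root for cwd, or None outside a work tree."""
    try:
        result = subprocess.run(
            ["git", "rev-parse", "--show-toplevel"],
            cwd=cwd, capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()
    except subprocess.CalledProcessError:
        return None
```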
## Uninstall

```bash
# Remove git hooks
git config --global --unset core.hooksPath

# Remove the UserPromptSubmit block from ~/.claude/settings.json
# Remove the simple-ai-provenance entry from Claude Desktop config

# Remove data (optional)
rm -rf ~/.claude/provenance/
rm ~/.claude/simple-ai-provenance-config.json

pip uninstall simple-ai-provenance
```
## License
AGPL-3.0-or-later