
# Mason – the context builder for LLMs 👷


Mason gives LLMs a persistent map of your codebase so they stop exploring from scratch every session.

**The problem:** Every time an LLM starts a new conversation about your code, it greps, reads files, and pieces together the architecture — burning tokens on context it already understood yesterday. On a 164-file project, answering "what features does this app have?" requires reading 8+ files across multiple tool calls.

**Mason's fix:** A concept map that persists across sessions. One tool call returns a feature-to-file lookup table — the LLM knows exactly where to look, without exploring.

**Measured result** (deepeval, Claude Sonnet, 164-file KMP project):

| Question | With Mason | Without Mason | Token saving |
| --- | --- | --- | --- |
| List all features | 10,258 tok | 31,346 tok | 67% |
| Trace data flow | 12,010 tok | 15,258 tok | 21% |
| Compare platforms | 10,897 tok | 19,353 tok | 44% |
| Onboarding flow | 10,271 tok | 11,432 tok | 10% |
| **Average** | | | **36%** |

Same answer quality (0.9/1.0 on all tests, both paths). Reproduce: bench/.

## Quick start

```shell
claude mcp add mason --scope user -- npx -p mason-context mason-mcp
```

Restart Claude Code, then ask: "use mason to analyze this project and create a snapshot."

That's it — Mason will analyze your codebase and create a concept map. Next session, it loads the map instead of re-exploring everything.

## How it works

### Concept map

This is Mason's core feature: a feature-to-file map persisted in `.mason/snapshot.json` that survives across conversations. When the LLM needs to understand your project, it reads this map instead of grepping through your entire codebase:

```json
{
  "features": {
    "home screen": {
      "files": ["HomeScreen.kt", "HomeViewModel.kt", "GetWeatherDataUseCase.kt"]
    }
  },
  "flows": {
    "weather fetch": {
      "chain": ["HomeViewModel.kt", "WeatherRepositoryImpl.kt", "WeatherServiceImpl.kt"]
    }
  }
}
```

The map is generated by the LLM itself — Mason provides the analysis tools, and the LLM interprets your code to decide what the features and flows are. This means the map captures architectural understanding, not just file listings.
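Consuming the map programmatically is straightforward. Here is a minimal sketch, assuming the snapshot shape shown above — the `Snapshot` interface and `filesForFeature` helper are illustrative, not part of Mason's API:

```typescript
// Illustrative types for the snapshot structure shown above.
interface Snapshot {
  features: Record<string, { files: string[] }>;
  flows: Record<string, { chain: string[] }>;
}

// Hypothetical helper: resolve a feature name to its implementing files.
function filesForFeature(snapshot: Snapshot, feature: string): string[] {
  return snapshot.features[feature]?.files ?? [];
}

// In practice the snapshot would be read from .mason/snapshot.json;
// it is inlined here so the sketch is self-contained.
const snapshot: Snapshot = {
  features: {
    "home screen": {
      files: ["HomeScreen.kt", "HomeViewModel.kt", "GetWeatherDataUseCase.kt"],
    },
  },
  flows: {
    "weather fetch": {
      chain: ["HomeViewModel.kt", "WeatherRepositoryImpl.kt", "WeatherServiceImpl.kt"],
    },
  },
};

console.log(filesForFeature(snapshot, "home screen"));
// → ["HomeScreen.kt", "HomeViewModel.kt", "GetWeatherDataUseCase.kt"]
```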

Create one by asking your AI assistant to "create a mason snapshot", or via CLI:

```shell
mason set-llm gemini          # configure a provider (no API key needed)
mason snapshot ~/my-project   # generate concept map
mason snapshot --install-hook # auto-update on every commit
```

### Change impact analysis

Before editing a file, Mason can tell you what else might be affected. It combines three signals that would each require multiple tool calls to gather manually:

- **Co-change history** — files that historically change together in git commits
- **References** — files that import or mention the target by name
- **Related tests** — test files paired to the target by naming convention

```shell
mason impact WeatherRepository.kt -d ~/my-project
```

Also available as the get_impact MCP tool — ask your assistant "what would be affected if I changed WeatherRepository?"
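The co-change signal can be approximated from git history alone. A simplified sketch of the idea — not Mason's actual implementation — that counts how often each file appears in the same commit as the target:

```typescript
// Each commit is represented by the list of files it touched
// (in practice this would come from `git log --name-only`).
type Commit = string[];

// Count how often each file is committed together with `target`.
function coChangeCounts(commits: Commit[], target: string): Map<string, number> {
  const counts = new Map<string, number>();
  for (const files of commits) {
    if (!files.includes(target)) continue;
    for (const file of files) {
      if (file === target) continue;
      counts.set(file, (counts.get(file) ?? 0) + 1);
    }
  }
  return counts;
}

// Hypothetical history for illustration.
const history: Commit[] = [
  ["WeatherRepository.kt", "WeatherRepositoryImpl.kt"],
  ["WeatherRepository.kt", "WeatherRepositoryImpl.kt", "HomeViewModel.kt"],
  ["HomeScreen.kt"],
];

console.log(coChangeCounts(history, "WeatherRepository.kt"));
// WeatherRepositoryImpl.kt co-changed twice, HomeViewModel.kt once.
```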

### Git history analysis

Mason aggregates hundreds of commits into actionable stats: which files change most often (hot files you should be careful with), which directories haven't been touched in months (potentially stale code), and what commit conventions the team follows. This is the kind of analysis that would take dozens of git log calls to compute manually.

```shell
mason analyze ~/my-project
```
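The "hot files" part of this analysis reduces to frequency counting over commit history. A simplified sketch — the ranking heuristic here is an assumption, not Mason's exact scoring:

```typescript
// Rank files by how often they changed, given per-commit file lists
// (in practice gathered from `git log --name-only`).
function hotFiles(commits: string[][], top: number): [string, number][] {
  const counts = new Map<string, number>();
  for (const files of commits) {
    for (const file of files) {
      counts.set(file, (counts.get(file) ?? 0) + 1);
    }
  }
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, top);
}

// Hypothetical log for illustration.
const log = [
  ["HomeViewModel.kt", "HomeScreen.kt"],
  ["HomeViewModel.kt"],
  ["WeatherServiceImpl.kt", "HomeViewModel.kt"],
];

console.log(hotFiles(log, 2));
// HomeViewModel.kt changed in all three commits, so it ranks first.
```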

## MCP tools

Mason exposes 6 tools via the Model Context Protocol. Any MCP-compatible client (Claude Code, Cursor, etc.) can use them:

| Tool | What it does |
| --- | --- |
| `get_snapshot` | Load the concept map — maps features/flows to files |
| `save_snapshot` | Persist the concept map for future sessions |
| `get_impact` | Change impact: co-change history, references, related tests |
| `analyze_project` | Git history: commit patterns, hot files, stale dirs |
| `full_analysis` | All-in-one first visit: git stats + structure + code samples + test map |
| `get_code_samples` | Smart file previews selected by architectural role |
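Under the Model Context Protocol, clients invoke these as standard JSON-RPC `tools/call` requests. A representative request for `get_impact` — the argument name `file` is illustrative here; a client would discover the real schema via `tools/list`:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_impact",
    "arguments": { "file": "WeatherRepository.kt" }
  }
}
```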

## CLI usage

Mason also works as a standalone CLI for generating CLAUDE.md files and running analysis without an MCP client. Configure an LLM provider once, then use any command:

```shell
mason set-llm claude|gemini|ollama|openai  # configure provider
mason generate                # analyze codebase + LLM -> CLAUDE.md
mason analyze                 # git stats only (no LLM needed)
mason impact File.kt          # change impact analysis
mason snapshot                # create/update concept map
```

Most providers work without an API key — claude, gemini, and ollama all use their respective CLIs directly.

## Security

**What the snapshot contains:** Feature names, relative file paths, and flow descriptions. No source code, secrets, or business logic.

**What it doesn't touch:** Mason respects .gitignore (via git ls-files) and has a deny-list that blocks .env, .pem, .key, credentials, and other sensitive files from being sampled. Path traversal protection ensures all file access stays within the project root.
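A sketch of what such guards typically look like — this is illustrative, not Mason's actual code; the deny-list patterns mirror the list above:

```typescript
import * as path from "path";

// Sensitive-file patterns that should never be sampled
// (mirrors the deny-list described above; illustrative only).
const DENY = [/\.env(\..*)?$/, /\.pem$/, /\.key$/, /credentials/i];

// Reject paths that escape the project root or match the deny-list.
function isSafeToSample(root: string, candidate: string): boolean {
  const resolved = path.resolve(root, candidate);
  if (!resolved.startsWith(path.resolve(root) + path.sep)) return false;
  return !DENY.some((p) => p.test(path.basename(resolved)));
}

console.log(isSafeToSample("/project", "src/App.kt"));      // true
console.log(isSafeToSample("/project", "../outside/file")); // false
console.log(isSafeToSample("/project", ".env.local"));      // false
```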

**LLM data flow:** Generating a snapshot via CLI sends sampled file contents to your configured LLM provider — the same way any AI coding assistant reads your code. Use ollama for fully local generation. The MCP server tools (get_snapshot, get_impact, etc.) only read local files.

## Language support

Mason is completely language-agnostic. It uses file naming patterns and git history rather than language-specific parsing, so it works with any project that has source files and a git repository — TypeScript, Kotlin, Python, Go, Rust, Swift, Java, C#, Dart, and more.
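The naming-convention approach to test pairing is part of what keeps this language-agnostic. A sketch of the idea — the exact conventions Mason recognizes are not documented here, so these patterns are assumptions:

```typescript
// Given a source file, generate candidate test-file names by common
// naming conventions, then keep the ones that actually exist.
function relatedTests(sourceFile: string, allFiles: string[]): string[] {
  const dot = sourceFile.lastIndexOf(".");
  const base = dot === -1 ? sourceFile : sourceFile.slice(0, dot);
  const ext = dot === -1 ? "" : sourceFile.slice(dot);
  const candidates = new Set([
    `${base}Test${ext}`,  // Kotlin/Java: FooTest.kt
    `${base}.test${ext}`, // JS/TS: foo.test.ts
    `${base}_test${ext}`, // Go: foo_test.go
    `test_${base}${ext}`, // Python: test_foo.py
  ]);
  return allFiles.filter((f) => candidates.has(f));
}

const files = ["WeatherRepository.kt", "WeatherRepositoryTest.kt", "HomeScreen.kt"];
console.log(relatedTests("WeatherRepository.kt", files));
// → ["WeatherRepositoryTest.kt"]
```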

## License

MIT
