codecortex
Persistent codebase knowledge layer for AI agents. Pre-digests codebases into structured knowledge (symbols, dependency graphs, co-change patterns, architectural decisions) and serves via MCP. 28 languages, 14 tools, ~85% token reduction.
# CodeCortex

Persistent codebase knowledge layer for AI agents. Your AI shouldn't re-learn your codebase every session.
<a href="https://glama.ai/mcp/servers/@rushikeshmore/codecortex"> <img width="380" height="200" src="https://glama.ai/mcp/servers/@rushikeshmore/codecortex/badge" alt="codecortex MCP server" /> </a>
## The Problem
Every AI coding session starts from scratch. When context compacts or a new session begins, the AI re-scans the entire codebase. Same files, same tokens, same wasted time. It's like hiring a new developer every session who has to re-learn everything before writing a single line.
The data backs this up:

- AI agents increase defect risk by 30% on unfamiliar code (CodeScene + Lund University, 2025)
- Code churn grew 2.5x in the AI era (GitClear, 211M lines analyzed)

And no existing tool combines structural, semantic, temporal, and decision knowledge in one portable layer.
## The Solution
CodeCortex pre-digests codebases into layered knowledge files and serves them to any AI agent via MCP. Instead of re-understanding your codebase every session, the AI starts with knowledge.
Hybrid extraction: tree-sitter native N-API for structure (symbols, imports, calls across 28 languages) + host LLM for semantics (what modules do, why they're built that way). Zero extra API keys.
Quick Start
# Install
npm install -g codecortex-ai
# Initialize knowledge for your project
cd /path/to/your-project
codecortex init
# Start MCP server (for AI agent access)
codecortex serve
# Check knowledge freshness
codecortex status
## Connect to Claude Code

Add to your MCP config:

```json
{
  "mcpServers": {
    "codecortex": {
      "command": "codecortex",
      "args": ["serve"],
      "cwd": "/path/to/your-project"
    }
  }
}
```
## What Gets Generated

All knowledge lives in `.codecortex/` as flat files in your repo:

```
.codecortex/
  cortex.yaml       # project manifest
  constitution.md   # project overview for agents
  overview.md       # module map + entry points
  graph.json        # dependency graph (imports, calls, modules)
  symbols.json      # full symbol index (functions, classes, types...)
  temporal.json     # git coupling, hotspots, bug history
  modules/*.md      # per-module deep analysis
  decisions/*.md    # architectural decision records
  sessions/*.md     # session change logs
  patterns.md       # coding patterns and conventions
```
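Once generated, these files are plain JSON/Markdown that any tool can consume. The sketch below shows one way a consumer might query the dependency graph; the `DepGraph` shape and field names are assumptions for illustration, not CodeCortex's documented schema:

```typescript
// Hypothetical shape for .codecortex/graph.json -- the real schema may differ.
interface DepGraph {
  edges: { from: string; to: string; kind: "import" | "call" }[];
}

// Reverse lookup: which files import the given module?
function importersOf(graph: DepGraph, target: string): string[] {
  return graph.edges
    .filter((e) => e.kind === "import" && e.to === target)
    .map((e) => e.from);
}

// Tiny inline example standing in for a parsed graph.json.
const graph: DepGraph = {
  edges: [
    { from: "src/routes.ts", to: "src/db.ts", kind: "import" },
    { from: "src/worker.ts", to: "src/db.ts", kind: "import" },
    { from: "src/routes.ts", to: "src/worker.ts", kind: "call" },
  ],
};

console.log(importersOf(graph, "src/db.ts")); // ["src/routes.ts", "src/worker.ts"]
```

Because the knowledge is flat files in the repo, it versions with the code and survives across sessions and machines.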
## Six Knowledge Layers
| Layer | What | File |
|---|---|---|
| 1. Structural | Modules, deps, symbols, entry points | graph.json + symbols.json |
| 2. Semantic | What each module does, data flow, gotchas | modules/*.md |
| 3. Temporal | Git behavioral fingerprint - coupling, hotspots, bug history | temporal.json |
| 4. Decisions | Why things are built this way | decisions/*.md |
| 5. Patterns | How code is written here | patterns.md |
| 6. Sessions | What changed between sessions | sessions/*.md |
## The Temporal Layer

This is the killer differentiator. The temporal layer tells agents "if you touch file X, you MUST also touch file Y" even when there is no import between them. It comes from git co-change analysis, not static code analysis.

Example from a real codebase:

- `routes.ts` and `worker.ts` co-changed in 9/12 commits (75%) with zero imports between them
- Without this knowledge, an AI editing one file alone would miss the paired change roughly 75% of the time
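The coupling number above can be derived from git history alone. Here is a minimal sketch of the idea (function and variable names are illustrative, not CodeCortex's internals; in practice the commit list would come from something like `git log --name-only`):

```typescript
// Each commit is represented by the list of files it touched.
type Commit = string[];

// Fraction of commits touching `a` that also touched `b`.
function coupling(a: string, b: string, commits: Commit[]): number {
  const touchingA = commits.filter((files) => files.includes(a));
  if (touchingA.length === 0) return 0;
  const both = touchingA.filter((files) => files.includes(b));
  return both.length / touchingA.length;
}

// Toy history: routes.ts and worker.ts co-change in 9 of 12 commits.
const history: Commit[] = [
  ...Array.from({ length: 9 }, () => ["routes.ts", "worker.ts"]),
  ...Array.from({ length: 3 }, () => ["routes.ts"]),
];

console.log(coupling("routes.ts", "worker.ts", history)); // 0.75
```

Note the measure is directional: every commit touching `worker.ts` in this toy history also touched `routes.ts`, so the reverse coupling is 1.0.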
## MCP Tools (14)

### Read Tools (9)

| Tool | Description |
|---|---|
| `get_project_overview` | Constitution + overview + graph summary |
| `get_module_context` | Module doc by name, includes temporal signals |
| `get_session_briefing` | Changes since last session |
| `search_knowledge` | Keyword search across all knowledge |
| `get_decision_history` | Decision records filtered by topic |
| `get_dependency_graph` | Import/export graph, filterable |
| `lookup_symbol` | Symbol lookup by name/file/kind |
| `get_change_coupling` | What files must also be edited if you touch X |
| `get_hotspots` | Files ranked by risk (churn × coupling) |
### Write Tools (5)

| Tool | Description |
|---|---|
| `analyze_module` | Returns source files + structured prompt for LLM analysis |
| `save_module_analysis` | Persists LLM analysis to `modules/*.md` |
| `record_decision` | Saves an architectural decision to `decisions/*.md` |
| `update_patterns` | Merges a coding pattern into `patterns.md` |
| `report_feedback` | Agent reports incorrect knowledge for the next analysis |
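To give a feel for the write path, here is a sketch of how a `record_decision`-style tool might render a decision record to Markdown. The `Decision` fields and the section layout are assumptions; the actual `decisions/*.md` format is up to CodeCortex:

```typescript
// Hypothetical decision-record shape -- the real on-disk format may differ.
interface Decision {
  title: string;
  date: string;     // ISO date
  context: string;
  decision: string;
}

// Render a record as a small Markdown document.
function renderDecision(d: Decision): string {
  return [
    `# ${d.title}`,
    ``,
    `Date: ${d.date}`,
    ``,
    `## Context`,
    d.context,
    ``,
    `## Decision`,
    d.decision,
    ``,
  ].join("\n");
}

const md = renderDecision({
  title: "Use stdio transport for MCP",
  date: "2025-01-15",
  context: "Agents launch the server as a child process.",
  decision: "Serve over stdio; no network port required.",
});
console.log(md.startsWith("# Use stdio transport")); // true
```

The key property is that decisions become plain files next to the code, so the "why" survives long after the chat session that produced it.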
## CLI Commands

| Command | Description |
|---|---|
| `codecortex init` | Discover project, extract symbols, analyze git history |
| `codecortex serve` | Start MCP server (stdio transport) |
| `codecortex update` | Re-extract changed files, update affected modules |
| `codecortex status` | Show knowledge freshness, stale modules, symbol counts |
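The freshness reported by `codecortex status` boils down to comparing timestamps. A minimal sketch of one plausible staleness rule (the types and thresholds here are illustrative assumptions, not the actual implementation):

```typescript
// Hypothetical model: a module's knowledge is stale when any of its
// source files changed after the module was last analyzed.
interface ModuleKnowledge {
  name: string;
  analyzedAt: number;     // epoch ms of the last analysis
  sourceMtimes: number[]; // epoch ms mtimes of the module's files
}

function staleModules(mods: ModuleKnowledge[]): string[] {
  return mods
    .filter((m) => m.sourceMtimes.some((t) => t > m.analyzedAt))
    .map((m) => m.name);
}

const mods: ModuleKnowledge[] = [
  // "api" was edited after its last analysis, so it is stale.
  { name: "api", analyzedAt: 2_000, sourceMtimes: [1_500, 2_500] },
  { name: "core", analyzedAt: 2_000, sourceMtimes: [1_000] },
];
console.log(staleModules(mods)); // ["api"]
```

Running `codecortex update` on a stale project then re-extracts only the changed files rather than rebuilding everything.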
## Token Efficiency

CodeCortex uses a three-tier memory model to minimize token usage:

- Session start (HOT only): ~4,300 tokens
- Working on a module (+WARM): ~5,000 tokens
- Need coding patterns (+COLD): ~5,900 tokens
- Raw scan of the entire codebase: ~37,800 tokens

That's an 85-90% token reduction, a 7-10x efficiency gain.
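The headline figures follow from the tier sizes above. This snippet just reproduces the arithmetic (tier values are the README's own estimates):

```typescript
const rawScan = 37_800; // tokens for a full codebase scan
const tiers = { hot: 4_300, warm: 5_000, cold: 5_900 };

const reduction = (t: number) => 1 - t / rawScan; // fraction of tokens saved
const speedup = (t: number) => rawScan / t;       // efficiency multiple

console.log((reduction(tiers.hot) * 100).toFixed(1)); // "88.6" percent saved at HOT
console.log((reduction(tiers.warm) * 100).toFixed(1)); // "86.8" percent saved at +WARM
console.log(speedup(tiers.hot).toFixed(1)); // "8.8" x efficiency at HOT
```

Actual savings depend on codebase size; the larger the raw scan, the bigger the relative gain from serving pre-digested knowledge.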
## Supported Languages (28)
| Category | Languages |
|---|---|
| Web | TypeScript, TSX, JavaScript, Liquid |
| Systems | C, C++, Objective-C, Rust, Zig, Go |
| JVM | Java, Kotlin, Scala |
| .NET | C# |
| Mobile | Swift, Dart |
| Scripting | Python, Ruby, PHP, Lua, Bash, Elixir |
| Functional | OCaml, Elm, Emacs Lisp |
| Other | Solidity, Vue, CodeQL |
## Tech Stack

- TypeScript ESM, Node.js 20+
- `tree-sitter` (native N-API) + 28 language grammar packages
- `@modelcontextprotocol/sdk` - MCP server
- `commander` - CLI
- `simple-git` - git integration
- `yaml`, `zod`, `glob`
## License
MIT