<h1 align="center">LORE — Architectural Memory for AI Coding</h1>
<p align="center"> <img src="logo.png" width="350" alt="LORE"/> </p>
<p align="center"> <a href="https://npmjs.com/package/lore-mcp"><img src="https://img.shields.io/npm/v/lore-mcp?color=4f6ef7&label=npm&style=flat-square" alt="npm"/></a> <a href="https://npmjs.com/package/lore-mcp"><img src="https://img.shields.io/npm/dm/lore-mcp?color=4f6ef7&style=flat-square" alt="downloads"/></a> <img src="https://img.shields.io/badge/license-MIT-green?style=flat-square" alt="license"/> <img src="https://img.shields.io/github/actions/workflow/status/EliotShift/lore-mcp/test.yml?style=flat-square&label=CI" alt="CI"/> <img src="https://img.shields.io/badge/tested%20on-Linux%20%26%20macOS-blue?style=flat-square" alt="platforms"/> <img src="https://img.shields.io/badge/local--first-no%20data%20leaves-brightgreen?style=flat-square" alt="local-first"/> </p>
<p align="center"><em>AI forgets why your code was built this way. LORE remembers.</em></p>
## See it in action
<p align="center"> <img src="demo.svg" alt="LORE Demo" width="700"/> </p>
## Why LORE?
Every time you open Claude Code or Cursor, it starts with zero context.
Without LORE, you re-explain the same context every session:

- "We use PostgreSQL because we need ACID transactions"
- "JWT expiry is 24h due to mobile requirements"
- "4 of our API routes have no auth middleware"

With LORE, one command gives the AI full context automatically:

```bash
npx lore-mcp init
```
## Quick Start

```bash
npm install -g lore-mcp
cd your-project
lore init
lore status
```
## What LORE detects

| Source | What it finds |
|---|---|
| `package.json` | Databases, frameworks, auth, security libs |
| Source code | Unprotected routes, error-handling %, MVC patterns |
| Git history | Bug-fix ratio, high-churn files, commit quality |
| Manual input | The WHY behind decisions, via `lore decide` |
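To give a feel for the `package.json` row, here is a hypothetical sketch of dependency-based detection — not LORE's actual implementation, just an illustration of the idea: map well-known package names to decision categories and scan the manifest.

```javascript
// Illustrative only: a minimal dependency → decision-category mapping.
// The package names and labels below are examples, not LORE's real rules.
const KNOWN = {
  pg: "database: PostgreSQL",
  mongoose: "database: MongoDB",
  jsonwebtoken: "auth: JWT",
  bcrypt: "security: password hashing",
  helmet: "security: HTTP headers",
};

function detectDecisions(pkg) {
  // Merge runtime and dev dependencies, then keep only recognized names.
  const deps = { ...pkg.dependencies, ...pkg.devDependencies };
  return Object.keys(deps)
    .filter((name) => KNOWN[name])
    .map((name) => KNOWN[name]);
}

const example = { dependencies: { pg: "^8.0.0", helmet: "^7.0.0" } };
console.log(detectDecisions(example));
// → ["database: PostgreSQL", "security: HTTP headers"]
```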
## CLI

```bash
lore init             # Analyze project → extract decisions
lore status           # View all decisions by category
lore decide "reason"  # Record the WHY behind a decision
lore doctor           # Diagnose setup issues
lore --version        # Show version
```
## Capture the WHY

Automated extraction finds the WHAT. `lore decide` captures the WHY:

```bash
lore decide "chose PostgreSQL over MongoDB — need ACID for payments"
lore decide "rejected Redis sessions — JWT scales better for microservices"
lore decide "helmet enabled — security audit requirement Q1 2026"
```
## MCP Integration

Add to your Claude Code or Cursor settings:

```json
{
  "mcpServers": {
    "lore": {
      "command": "node",
      "args": ["/path/to/lore-mcp/dist/index.js"]
    }
  }
}
```
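If you installed the package globally rather than cloning the repo, you can likely point the server at the installed binary instead of a local path (assumption: the published package exposes the MCP server as its `npx` entry point — adjust to your setup):

```json
{
  "mcpServers": {
    "lore": {
      "command": "npx",
      "args": ["lore-mcp"]
    }
  }
}
```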
## What LORE finds in a real project

```text
SECURITY
  ● bcrypt for password hashing
  ● Helmet.js for HTTP security headers
  ● 3 of 5 routes may lack auth middleware   ← security gap
  ● JWT secrets must be in environment variables

RISK
  ● High bug-fix ratio: 3/5 recent commits are fixes
  ● Low commit-message quality: 0%
  ● High-churn file: src/services/userService.ts
```
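The bug-fix ratio above can be approximated from commit subjects alone. This is a hedged sketch of the general technique, not LORE's actual heuristic: count recent commit messages that look like fixes and divide by the total.

```javascript
// Illustrative only: classify commit subjects as fixes via a keyword regex.
// The keyword list is an assumption; real tools use richer heuristics.
function bugFixRatio(subjects) {
  const fixes = subjects.filter((s) => /\b(fix|bug|hotfix)\b/i.test(s));
  return { fixes: fixes.length, total: subjects.length };
}

const recent = [
  "fix: null check in userService",
  "feat: add JWT refresh",
  "fix crash on empty payload",
  "docs: update README",
  "hotfix: rollback migration",
];
console.log(bugFixRatio(recent)); // → { fixes: 3, total: 5 }
```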
## Built by

@TheEliotShift — Developer from Morocco 🇲🇦

If LORE saved you time, leave a ⭐