<div align="center"> <img src="logo.svg" width="96" height="96" alt="Grimoire logo" /> <h1>Grimoire</h1> <p><strong>Your LLM re-reads the same reference docs every conversation. Grimoire indexes them once.</strong></p> <p> <a href="https://tannner.com">tannner.com</a> · <a href="https://github.com/tannernicol/grimoire">GitHub</a> </p> </div>
<p align="center"> <img src="docs/demo.png" alt="Grimoire search demo" width="700" /> </p>
The Problem
Your LLM agent needs to reference CWE-89 during a code review. Without Grimoire, it either hallucinates the details, or you paste 50 pages of NIST docs into the context window and hope it finds the right paragraph. Every conversation. Every time.
The Solution
Grimoire indexes security reference material once — CVEs, CWEs, OWASP, audit findings, your internal standards — into a single SQLite file with both FTS5 keyword search and semantic embeddings. Your LLM agent searches it mid-conversation via MCP. Exact matches when you need "CWE-89". Conceptual recall when you need "authentication bypass techniques". Both in one query.
One SQLite file. Zero cloud. Instant retrieval via MCP.
+------------------+
| Data Sources |
| CVE MD CSV .. |
+--------+---------+
|
ingest()
|
+--------v---------+
| SQLite DB |
| +------------+ |
| | documents | |
| +------------+ |
| | docs_fts5 | | <-- FTS5 keyword index
| +------------+ |
| | embeddings | | <-- semantic vectors
| +------------+ |
+--------+---------+
|
+--------v---------+
| Search Engine |
| |
| keyword (BM25) |
| semantic (cos) |
| hybrid (both) |
+--------+---------+
|
+-------------+-------------+
| |
+------v------+ +--------v--------+
| Python API | | MCP Server |
| | | |
| Grimoire() | | grimoire_search |
| .search() | | grimoire_status |
| .add_doc() | | grimoire_quality|
+-------------+ +-----------------+
Quick Start
git clone https://github.com/tannernicol/grimoire.git
cd grimoire
pip install -e .
# Fetch and index real security data (NVD CVEs + CWE catalog + OWASP Top 10)
python scripts/fetch_sources.py all
# Search
python examples/search_demo.py "SQL injection"
python examples/search_demo.py "access control" --severity critical
python examples/search_demo.py --status
Auto-Fetch Security Data
Grimoire fetches from reputable public sources — no manual downloads:
# Everything: NVD + CWE + OWASP
python scripts/fetch_sources.py all
# Recent CVEs from NIST NVD (last 90 days, critical only)
python scripts/fetch_sources.py nvd --days 90 --severity CRITICAL
# Full CWE catalog from MITRE
python scripts/fetch_sources.py cwe
# With embeddings for semantic search (requires Ollama)
python scripts/fetch_sources.py all --embeddings
Enable Semantic Search
Requires Ollama with nomic-embed-text:
ollama pull nomic-embed-text
python scripts/fetch_sources.py all --embeddings
python examples/search_demo.py "authentication bypass" --mode hybrid
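Under the hood, semantic mode scores documents by cosine similarity between the query embedding and each stored vector. A minimal sketch of that comparison in pure Python (the function and vectors are illustrative, not Grimoire's actual API; real inputs would be 768-dimensional nomic-embed-text vectors):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Vectors pointing the same way score 1.0; orthogonal vectors score 0.0.
print(cosine_similarity([1.0, 0.0], [2.0, 0.0]))  # 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```

Because cosine similarity ignores magnitude, only the direction of the embedding matters, which is why it pairs well with normalized text embeddings.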
Why Not Just Use RAG?
Most RAG setups do one thing: chunk documents, embed them, vector search. That works until you need an exact CVE number, a specific NIST control ID, or a CWE by name. Vector search alone misses exact matches.
Grimoire runs both:
- FTS5 (BM25) for keyword precision — finds "CWE-89" when you search "CWE-89"
- Semantic embeddings (cosine similarity) for conceptual recall — finds SQL injection variants when you search "database manipulation"
- Hybrid mode combines both with configurable weighting (default 40/60 keyword/semantic)
Everything lives in a single SQLite file. No Postgres, no Pinecone, no cloud anything.
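The keyword side is plain SQLite FTS5, which ships with Python's standard sqlite3 module. A standalone sketch of why that matters for identifiers (the table and rows are made up for illustration, not Grimoire's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# An FTS5 virtual table, in the spirit of Grimoire's docs_fts5 index.
conn.execute("CREATE VIRTUAL TABLE docs USING fts5(title, content)")
conn.executemany(
    "INSERT INTO docs VALUES (?, ?)",
    [
        ("CWE-89", "Improper Neutralization of Special Elements in an SQL Command"),
        ("CWE-79", "Improper Neutralization of Input During Web Page Generation"),
    ],
)
# Quoting the identifier makes FTS5 treat it as the phrase "cwe 89",
# so the exact weakness comes back -- no embedding model required.
rows = conn.execute(
    "SELECT title FROM docs WHERE docs MATCH ?", ('"CWE-89"',)
).fetchall()
print(rows)  # [('CWE-89',)]
```

A pure vector index has no such guarantee: "CWE-89" and "CWE-79" embed to nearly identical points, so exact-identifier queries can rank the wrong weakness first.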
Security Knowledge Retrieval
Reviewers click through looking for real security signal. Grimoire makes it obvious:
- Auto-ingest CVEs, CWEs, OWASP, and your own Markdown findings with a single command.
- Store structured metadata (severity, categories, tags) so you can filter for "critical auth bypass" or "RCE" instantly.
- Serve the exact same SQLite file over MCP so any LLM agent can cite sources in the middle of a conversation.
python scripts/fetch_sources.py nvd --days 30 --severity CRITICAL
python scripts/fetch_sources.py cwe --embeddings
python examples/search_demo.py "JWT kid bypass" --mode hybrid --limit 5
The search demo will show hybrid BM25 + cosine hits with severity labels, while
the MCP server (pip install -e '.[mcp]') exposes identical results to your
agents without copying a single document into the prompt.
Python API
from grimoire.core import Grimoire

g = Grimoire("security_kb.db")

# Add documents
g.add_document(
    source="advisory",
    title="CVE-2024-1234",
    content="Buffer overflow in example library allows RCE via crafted input...",
    severity="critical",
    categories=["buffer-overflow", "RCE"],
)

# Search
results = g.search("buffer overflow", mode="hybrid", limit=10)
for r in results:
    print(f"[{r.score:.3f}] {r.title} ({r.severity})")

# Check index health
status = g.index_status()
health = g.health_check()
Ingest Anything
Built-in ingestors for common security data formats:
# CVE/NVD feeds (API 2.0, 1.1, or JSON array)
from grimoire.ingest.cve import CVEIngestor
CVEIngestor().ingest_to_grimoire(g, "cve_data.json")
# Markdown files (recursively scan directories)
from grimoire.ingest.markdown import MarkdownIngestor
MarkdownIngestor(source_label="audit-findings").ingest_to_grimoire(g, "findings/")
# CSV with column mapping
from grimoire.ingest.csv import CSVIngestor
CSVIngestor(
    source_label="vuln-db",
    column_map={"vuln_name": "title", "details": "content"},
).ingest_to_grimoire(g, "vulns.csv")
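The column_map idea can be sketched standalone with the stdlib csv module. This mirrors the renaming behavior, not Grimoire's actual ingestor code, and the sample CSV is invented:

```python
import csv
import io

# Map CSV column names to Grimoire's document fields.
column_map = {"vuln_name": "title", "details": "content"}

raw = io.StringIO("vuln_name,details\nHeap overflow,Crash in parser\n")
docs = []
for row in csv.DictReader(raw):
    # Rename mapped columns; pass unmapped ones through unchanged.
    docs.append({column_map.get(k, k): v for k, v in row.items()})

print(docs)  # [{'title': 'Heap overflow', 'content': 'Crash in parser'}]
```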
Add your own by subclassing BaseIngestor:
from grimoire.ingest.base import BaseIngestor

class MyIngestor(BaseIngestor):
    source_name = "my-source"

    def ingest(self, path):
        for item in read_my_data(path):
            yield {
                "source": self.source_name,
                "title": item["name"],
                "content": item["description"],
                "severity": item.get("severity"),
                "categories": item.get("tags"),
            }
MCP Integration
Grimoire ships an MCP server so LLM agents can search your knowledge base mid-conversation.
Note: The MCP server is an optional dependency. Install it with:
pip install -e ".[mcp]"
# Start the server
grimoire-mcp --db security_kb.db
Add to Claude Code or Claude Desktop:
{
  "mcpServers": {
    "grimoire": {
      "command": "grimoire-mcp",
      "args": ["--db", "/path/to/security_kb.db"]
    }
  }
}
| Tool | What it does |
|---|---|
| grimoire_search | Keyword, semantic, or hybrid search with severity/source filters |
| grimoire_index_status | Document count, embedding coverage, sources, last update |
| grimoire_quality | Health check; optionally test a query for result quality |
Configuration
database:
  path: grimoire.db

ollama:
  url: http://localhost:11434
  model: nomic-embed-text

search:
  default_mode: hybrid
  semantic_weight: 0.6  # 60% semantic, 40% keyword
  default_limit: 20
  min_similarity: 0.3

quality:
  min_cases: 5
  gate_on_missing_eval: false
Search Algorithm
- FTS5 keyword — BM25 on title, content, and categories
- Semantic — cosine similarity between query and document embeddings (via Ollama nomic-embed-text)
- Score fusion — hybrid = (0.4 * normalized_bm25) + (0.6 * cosine_sim)
- Dedup — merge by document ID, sum scores
- Filter — min score, severity, source, max results
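The fusion and dedup steps above can be sketched in a few lines of pure Python. The function name, score dictionaries, and max-based BM25 normalization are illustrative assumptions, not Grimoire's exact implementation:

```python
def fuse_scores(bm25: dict[str, float], cosine: dict[str, float],
                semantic_weight: float = 0.6) -> list[tuple[str, float]]:
    """Combine keyword and semantic scores per document ID, then rank."""
    # Normalize raw BM25 scores into [0, 1] so they are comparable to cosine.
    max_bm25 = max(bm25.values(), default=1.0) or 1.0
    fused: dict[str, float] = {}
    for doc_id, score in bm25.items():
        fused[doc_id] = (1.0 - semantic_weight) * (score / max_bm25)
    for doc_id, sim in cosine.items():
        # Dedup: a document hit by both engines accumulates both contributions.
        fused[doc_id] = fused.get(doc_id, 0.0) + semantic_weight * sim
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

ranked = fuse_scores(
    bm25={"CWE-89": 8.0, "CWE-79": 2.0},
    cosine={"CWE-89": 0.9, "CWE-943": 0.7},
)
print(ranked[0][0])  # CWE-89 ranks first: strong in both engines
```

A document that scores well in only one engine can still surface, but one that both engines agree on wins, which is the point of hybrid retrieval.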
Requirements
- Python 3.10+
- SQLite with FTS5 (included in Python's sqlite3)
- Ollama + nomic-embed-text (only needed for semantic/hybrid search — keyword works without it)
Development
pip install -e ".[dev]"
pytest
Threat Model
In scope — what Grimoire defends against:
- Knowledge staleness — auto-fetch from NVD, MITRE CWE, and OWASP keeps the index current; agents reference live CVE data, not training-cutoff snapshots
- Retrieval hallucination — hybrid search (BM25 + semantic) returns sourced, scored documents with provenance metadata; the agent cites indexed material, not confabulated details
- Exact-match failure — FTS5 keyword search guarantees that queries for specific identifiers (CVE-2024-1234, CWE-89) return exact matches, unlike pure vector search which can miss or rank them poorly
- Data exfiltration via cloud RAG — the entire pipeline runs offline against a local SQLite file; no document content is sent to external embedding APIs or vector databases
Out of scope — what Grimoire intentionally does not defend against:
- Poisoned source data — if upstream NVD or CWE feeds contain inaccurate information, Grimoire indexes it as-is; there is no cross-validation of ingested content
- Embedding model compromise — semantic search trusts the local Ollama embedding model; adversarial inputs crafted to manipulate nomic-embed-text could influence ranking
- Access control on the knowledge base — the SQLite file and MCP server have no authentication; any process with filesystem or MCP access can query the full index
- Agent misuse of results — Grimoire returns relevant documents, but the consuming LLM may still misinterpret, selectively quote, or ignore them
Architecture
flowchart TB
subgraph Data Sources
NVD[NVD / CVE Feeds]
CWE[MITRE CWE Catalog]
OWASP[OWASP Top 10]
Custom[Markdown / CSV\nAudit Findings]
end
NVD --> Ingest
CWE --> Ingest
OWASP --> Ingest
Custom --> Ingest
subgraph Grimoire Core
Ingest[Ingestor Pipeline]
Ingest -->|documents| DB[(SQLite DB)]
DB -->|FTS5 index| FTS[BM25 Keyword Search]
DB -->|embedding vectors| Sem[Cosine Semantic Search]
Ollama[Local Ollama\nnomic-embed-text] -->|embeddings| DB
FTS --> Fusion[Score Fusion\n40% keyword + 60% semantic]
Sem --> Fusion
Fusion --> Results[Ranked Results\nwith provenance]
end
Results --> API[Python API\nGrimoire.search]
Results --> MCP[MCP Server\ngrimoire_search\ngrimoire_status\ngrimoire_quality]
MCP --> Agent([LLM Agent\ne.g. Claude Code])
API --> Scripts([Scripts / Pipelines])
Author
Tanner Nicol — tannner.com · GitHub · LinkedIn
License
MIT