<p align="center"> <img src="packages/website/public/favicon.svg" alt="Libragen Logo" width="80" height="80"> </p>
<h1 align="center">libragen</h1>
<p align="center"> <em>(pronounced "LIB-ruh-jen")</em> </p>
<p align="center"> <strong>Stop your AI from hallucinating code, and ground it in your actual documentation</strong> </p>
<p align="center"> <a href="https://www.npmjs.com/package/@libragen/cli"><img src="https://img.shields.io/npm/v/@libragen/cli.svg?label=cli" alt="npm cli"></a> <a href="https://www.npmjs.com/package/@libragen/core"><img src="https://img.shields.io/npm/v/@libragen/core.svg?label=core" alt="npm core"></a> <a href="https://www.npmjs.com/package/@libragen/mcp"><img src="https://img.shields.io/npm/v/@libragen/mcp.svg?label=mcp" alt="npm mcp"></a> <a href="https://github.com/libragen/libragen/actions"><img src="https://github.com/libragen/libragen/actions/workflows/ci.yml/badge.svg" alt="CI"></a> <a href="https://github.com/libragen/libragen/blob/main/LICENSE"><img src="https://img.shields.io/badge/license-MIT-blue.svg" alt="License"></a> </p>
<p align="center"> <a href="https://libragen.dev">Documentation</a> • <a href="https://libragen.dev/docs/getting-started">Getting Started</a> • <a href="https://github.com/libragen/libragen/discussions">Discussions</a> </p>
Create private, local RAG libraries that ground your AI in real documentation — not 2-year-old training data. No cloud, no API keys, just single files you can share with your whole team.
> **What's RAG?** Retrieval-Augmented Generation lets AI retrieve relevant context before responding, instead of relying solely on training data. libragen packages your docs into searchable libraries your AI can query.
## 🎯 Why libragen?
- Ground AI in truth — Give your coding agents authoritative docs to cite, dramatically reducing hallucinations
- Always current — Rebuild libraries when docs change; your AI gets the latest APIs, not stale training data
- Private & local — Everything runs on your machine. No API keys, no cloud bills, no data leaving your network
- Shareable — Single `.libragen` files work anywhere. Share via git, S3, or install from curated collections
## ✨ Features

- 🔍 Hybrid Search — Combines vector similarity with BM25 keyword matching
- 🏆 Reranking — Optional cross-encoder reranking for improved relevance
- 📦 Portable — Single-file SQLite databases with embedded vectors
- 🧠 Smart Chunking — Language-aware splitting that respects code boundaries
- 🌐 Multiple Sources — Build from local files or git repositories
- 🤖 MCP Native — Works directly in Claude Desktop, VS Code, and any MCP client
## 📦 Packages

| Package | Description |
|---|---|
| `@libragen/core` | Core library for embedding, chunking, and storage |
| `@libragen/cli` | Command-line interface for building and querying |
| `@libragen/mcp` | Model Context Protocol server for AI assistants |
## 🚀 Quick Start

### Installation

```bash
npm install -g @libragen/cli
```
### Build a Library

```bash
# From your internal docs
libragen build ./internal-api-docs --name internal-api

# From a private git repository
libragen build https://github.com/your-org/private-docs -o company-docs.libragen

# From any public repo
libragen build https://github.com/facebook/react -o react.libragen
```
### Query a Library

```bash
libragen query "how to authenticate users" -l my-project.libragen
```
### Use with AI Assistants

Install the MCP server globally:

```bash
npm install -g @libragen/mcp
```

Add to your Claude Desktop config (on macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`):

```json
{
  "mcpServers": {
    "libragen": {
      "command": "npx",
      "args": ["-y", "@libragen/mcp"]
    }
  }
}
```
Then install libraries to make them available:

```bash
libragen install my-project.libragen
```
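The same server works in other MCP clients. For example, VS Code can load MCP servers from a workspace-level `.vscode/mcp.json`; a configuration along these lines should work (top-level `servers` key per VS Code's MCP documentation — check your client's docs if the shape differs):

```json
{
  "servers": {
    "libragen": {
      "command": "npx",
      "args": ["-y", "@libragen/mcp"]
    }
  }
}
```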
## 🔧 CLI Commands

| Command | Description |
|---|---|
| `build <source>` | Build a library from files or a git repo |
| `query <query>` | Search a library for relevant content |
| `info <library>` | Display library metadata |
| `list` | List installed libraries and collections |
| `install <source>` | Install a library or collection |
| `uninstall <name>` | Remove an installed library or collection |
| `update [name]` | Update installed libraries to newer versions |
| `collection create` | Create a collection file |
| `config` | Display configuration and paths |
| `completions <action>` | Manage shell completions (bash, zsh, fish) |
## 📚 Collections

Collections are JSON files that group libraries together for easy installation:

```json
{
  "name": "my-stack",
  "description": "Libraries for my project",
  "version": "1.0.0",
  "items": [
    { "library": "https://example.com/react.libragen" },
    { "library": "https://example.com/typescript.libragen" },
    { "library": "https://example.com/testing.libragen", "required": false },
    { "collection": "https://example.com/base-web.json" }
  ]
}
```
Create a collection:

```bash
# Initialize a template
libragen collection init my-stack.json

# Or create with libraries directly
libragen collection create my-stack.json \
  -l ./react.libragen \
  -l ./typescript.libragen \
  -o ./testing.libragen
```
Install a collection:

```bash
libragen install ./my-stack.json        # Required libraries only
libragen install ./my-stack.json --all  # Include optional libraries
```
Collections support:

- Nesting — Collections can include other collections
- Deduplication — Libraries are only installed once
- Optional items — Mark libraries as `"required": false`
- Reference counting — Uninstalling removes only unreferenced libraries
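To make the nesting and deduplication semantics concrete, here is an illustrative sketch (the types and function are hypothetical, not the `CollectionResolver` API exported by `@libragen/core`):

```typescript
// Hypothetical types mirroring the collection JSON above.
type Item = { library: string; required?: boolean } | { collection: string };
interface Collection {
  name: string;
  items: Item[];
}

// Resolve a collection (and any nested collections) into a flat,
// deduplicated list of library URLs. `includeOptional` mirrors `--all`.
function resolve(
  root: Collection,
  lookup: Record<string, Collection>,
  includeOptional = false,
  seen = new Set<string>(),
): string[] {
  const libs: string[] = [];
  for (const item of root.items) {
    if ('library' in item) {
      if (item.required === false && !includeOptional) continue;
      if (!seen.has(item.library)) {
        // Deduplication: each library is installed at most once,
        // even if referenced by several nested collections.
        seen.add(item.library);
        libs.push(item.library);
      }
    } else if (lookup[item.collection]) {
      // Nesting: recurse into the referenced collection,
      // sharing `seen` so duplicates are skipped.
      libs.push(...resolve(lookup[item.collection], lookup, includeOptional, seen));
    }
  }
  return libs;
}
```

With the `my-stack` example above, resolving without `--all` skips `testing.libragen`, and a library listed both directly and via a nested collection appears only once.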
## ⚙️ Configuration

### Storage Location
By default, libragen stores libraries and configuration in a platform-specific directory:
| Platform | Default Location |
|---|---|
| macOS | ~/Library/Application Support/libragen |
| Windows | %APPDATA%\libragen |
| Linux | $XDG_DATA_HOME/libragen (defaults to ~/.local/share/libragen) |
Override this by setting the `LIBRAGEN_HOME` environment variable:

```bash
export LIBRAGEN_HOME=/custom/path/to/libragen
```
The directory structure is:

```
$LIBRAGEN_HOME/
  libraries/         # Installed .libragen files
  manifest.json      # Tracks installed libraries and collections
  collections.json   # Collection configuration
  cache/             # Cached collection indexes
```
## 📄 Library Format

A `.libragen` file is a SQLite database containing:

- Metadata — Library name, version, description, embedding model info
- Chunks — Code/documentation segments with source file info
- Embeddings — Vector representations using `Xenova/bge-small-en-v1.5` (384 dimensions)
- FTS Index — Full-text search index for keyword matching
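At query time those 384-dimension embeddings are ranked by vector similarity against the query's embedding. As a rough, self-contained illustration of that comparison (not libragen's internal code, which does this in-database via sqlite-vec), cosine similarity looks like:

```typescript
// Cosine similarity between two embedding vectors: the dot product
// normalized by both vector lengths, yielding 1 for identical directions
// and 0 for orthogonal ones.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error('dimension mismatch');
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```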
## 📖 Programmatic Usage

Use `@libragen/core` directly in your TypeScript/JavaScript projects:

```typescript
import { Library, Searcher, Embedder, Reranker } from '@libragen/core';

// Open an existing library and search it
const library = await Library.open('./my-docs.libragen');

const embedder = new Embedder();
await embedder.initialize();

const reranker = new Reranker();
await reranker.initialize();

const searcher = new Searcher(embedder, library.getStore(), { reranker });
const results = await searcher.search({
  query: 'how do I authenticate?',
  k: 5,
  rerank: true, // Use cross-encoder reranking
});

for (const result of results) {
  console.log(`[${result.score.toFixed(3)}] ${result.sourceFile}`);
  console.log(result.content);
}

await library.close();
```
```typescript
import { Builder } from '@libragen/core';

// Build a library from source files
const builder = new Builder();
const result = await builder.build('./docs', {
  name: 'my-docs',
  description: 'Internal API documentation',
  include: ['**/*.md', '**/*.mdx'],
});

console.log(`Built ${result.outputPath} with ${result.stats.chunkCount} chunks`);
```
## 🛠️ Development

```bash
# Install dependencies
npm install

# Run tests
npm test

# Run linting
npm run standards

# Build all packages
npm run build
```
## 🏗️ Architecture

```
@libragen/cli (build, query, install, manage)
        │
        ▼
@libragen/core
 ├── Embedder (bge-small-en-v1.5)
 ├── Chunker (language-aware splitting)
 ├── VectorStore (SQLite + sqlite-vec + FTS5)
 ├── Searcher (hybrid search with RRF)
 ├── Reranker (mxbai-rerank-xsmall-v1)
 ├── Library (create/open/validate)
 ├── LibraryManager (install/uninstall/update)
 ├── Manifest (tracks installations)
 ├── CollectionResolver (nested collections)
 └── Sources (FileSource, GitSource)
        │
        ▼
@libragen/mcp (MCP server for AI assistants)
  Tools: libragen_search, libragen_list, libragen_build,
         libragen_install, libragen_uninstall, libragen_update,
         libragen_collection
```
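The Searcher merges the vector and BM25 rankings with Reciprocal Rank Fusion. A minimal sketch of the general RRF algorithm (not libragen's implementation):

```typescript
// Reciprocal Rank Fusion: merge several ranked result lists into one.
// Each document scores sum(1 / (k + rank)) across the lists it appears in;
// the constant k (commonly 60) damps the dominance of top ranks.
function rrf(rankings: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((doc, i) => {
      // rank is 1-based: the first result contributes 1 / (k + 1)
      scores.set(doc, (scores.get(doc) ?? 0) + 1 / (k + i + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([doc]) => doc);
}
```

Fusing a vector ranking `['a', 'b', 'c']` with a BM25 ranking `['b', 'd', 'a']` ranks the documents found by both lists ('b', then 'a') ahead of the single-list hits.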
## 🙏 Acknowledgments

libragen uses the following open-source models:

- BGE-small-en-v1.5 — Embedding model by BAAI (MIT License)
- mxbai-rerank-xsmall-v1 — Reranking model by Mixedbread (Apache-2.0)

If you use libragen in academic work, please cite the underlying models:
```bibtex
@misc{bge_embedding,
  title={C-Pack: Packaged Resources To Advance General Chinese Embedding},
  author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff},
  year={2023},
  eprint={2309.07597},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}

@online{rerank2024mxbai,
  title={Boost Your Search With The Crispy Mixedbread Rerank Models},
  author={Aamir Shakir and Darius Koenig and Julius Lipp and Sean Lee},
  year={2024},
  url={https://www.mixedbread.ai/blog/mxbai-rerank-v1},
}
```
## 📄 License

MIT — see LICENSE for details.