# Alethea 🌍
A narrative graph engine for generating, tracking, and visualizing fictional worlds, driven by LLMs or purely procedurally.
## 🌍 Overview
World History Engine is a narrative framework that can work in two modes:
- AI-Assisted: As an MCP Server for LLMs (like Claude), allowing them to query and mutate the world state consistently.
- Procedural (Standalone): As a classic generator where you use the GUI or CLI to spawn worlds based on YAML templates, without needing an API key or LLM.
It maintains a consistent internal graph database of entities (Factions, Characters, Locations) and their relationships.
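To illustrate the idea (this sketch is not the engine's actual API), a graph of entities with typed relationships can enforce consistency by refusing to link entities that do not exist yet:

```python
from dataclasses import dataclass, field

# Illustrative sketch only -- the engine's real classes and method names differ.

@dataclass
class Entity:
    name: str
    kind: str  # "faction", "character", or "location"

@dataclass
class World:
    entities: dict = field(default_factory=dict)
    relations: list = field(default_factory=list)  # (source, relation, target)

    def add(self, entity: Entity) -> None:
        self.entities[entity.name] = entity

    def relate(self, source: str, relation: str, target: str) -> None:
        # Consistency rule: both endpoints must already exist in the graph.
        if source not in self.entities or target not in self.entities:
            raise KeyError("both entities must exist before relating them")
        self.relations.append((source, relation, target))

world = World()
world.add(Entity("Iron Pact", "faction"))
world.add(Entity("Sun Court", "faction"))
world.relate("Iron Pact", "war", "Sun Court")
print(world.relations)  # [('Iron Pact', 'war', 'Sun Court')]
```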
## ✨ Key Features
- 🕵️‍♂️ RAG for Fiction: Keeps track of thousands of entities without filling up the LLM context window.
- 🎲 Dual Mode: Works with Claude/OpenAI, or as a standalone offline generator.
- 🕸️ Graph-Based Consistency: Entities have strict relationships (e.g., `Faction A --[war]--> Faction B`).
- ⏳ Time-Travel Debugging: Includes a web-based visualizer (`world_viz.html`) with a timeline slider. Roll back history to see how the world looked 50 epochs ago.
## 🏛 Architecture
Here is the internal structure of the world engine entities:
```mermaid
graph TD
    %% --- Styles ---
    classDef browser fill:#f9f,stroke:#333,stroke-width:2px;
    classDef mcp fill:#ffecb3,stroke:#ff6f00,stroke-width:2px,stroke-dasharray: 5 5;
    classDef storage fill:#e0e0e0,stroke:#333,stroke-width:2px;
    classDef core fill:#e1f5fe,stroke:#0277bd,stroke-width:2px;

    %% --- Clients ---
    subgraph Clients ["Clients & Interfaces"]
        BrowserUI[Browser<br>Web Visualizer / GUI]:::browser
        ClaudeApp[Claude Desktop<br>AI Assistant]:::mcp
    end

    %% --- Backend ---
    subgraph Backend ["Backend (Python)"]
        %% Entry Points
        subgraph EntryPoints ["Entry Points"]
            Server[server.py<br>HTTP API & GUI]:::core
            CLI[main.py<br>CLI Generator]:::core
            MCPSrv[mcp_server.py<br>MCP Server]:::mcp
        end

        DI((Dishka IOC))

        subgraph Services ["Services"]
            TES[TemplateEditorService]
            SIM_S[SimulationService]
            ST_S[StorytellerService]
            WQS[WorldQueryService]
            NS[NamingService]
        end

        %% Core Logic
        subgraph CoreEngine ["Core Engine"]
            WG[WorldGenerator]
            Repo[InMemoryRepository]
        end

        %% Connections
        ClaudeApp == Stdio/SSE ==> MCPSrv
        BrowserUI == HTTP ==> Server
        Server & MCPSrv & CLI --> DI
        DI --> Services
        Services --> CoreEngine
    end

    %% --- Storage ---
    subgraph Storage ["Storage"]
        YAML[(YAML Templates)]:::storage
        JSON[(World JSON)]:::storage
    end

    Repo -.-> JSON
    TES -.-> YAML
```
## 🚀 Quick Start
### 🐳 Docker Deployment
1. Build the Image

Build the container image from the root of the repository:

```shell
docker build -t world-engine .
```
2. Run the Container

Run the image, exposing the two required ports. Replace the `API_KEY` value with your actual key; you can omit `BASE_URL` if you use the standard OpenAI endpoint:

```shell
docker run -d \
  --name world-engine \
  -p 8000:8000 \
  -p 8001:8001 \
  -e API_KEY="sk-..." \
  -e MODEL="claude-4-5-sonnet-latest" \
  -e BASE_URL="https://api.anthropic.com/v1" \
  world-engine
```
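If you prefer Compose, the same setup can be sketched as a `docker-compose.yml`; the service name and layout here are illustrative, not files shipped with the repository:

```yaml
services:
  world-engine:
    build: .
    ports:
      - "8000:8000"   # MCP server
      - "8001:8001"   # Web UI
    environment:
      API_KEY: "sk-..."
      MODEL: "claude-4-5-sonnet-latest"
      BASE_URL: "https://api.anthropic.com/v1"  # omit for standard OpenAI
```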
3. Access

- Web UI (Standalone Generation): open the graphical interface at http://localhost:8001.
- MCP Server (AI Integration): connect Claude Desktop or another MCP client to http://localhost:8000.
- Logs: view combined logs for both services with `docker logs world-engine`.
### Prerequisites for deployment without Docker

- Python 3.11+
- `uv` (recommended) or `pip`
### Installation

```shell
# Clone the repository
git clone https://github.com/your-username/world-history-engine.git
cd world-history-engine

# Install dependencies
uv sync
```
## 🎲 Generating Worlds (Standalone)

You can generate worlds without configuring any AI.

Option 1: Graphical Interface (GUI)

Start the web server to generate and visualize worlds interactively:

```shell
uv run server.py
# Open http://127.0.0.1:8001 in your browser
```

Option 2: Command Line (CLI)

Run the main generation script to create a fresh world snapshot in `world_output/`:

```shell
uv run main.py
```
## 🤖 Running with an LLM (MCP Server)

To use this engine as a tool inside Claude (for interactive storytelling), run the MCP server:

```shell
uv run mcp_server.py
```

Add this to your `claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "world-engine": {
      "command": "uv",
      "args": [
        "run",
        "mcp_server.py"
      ],
      "env": {
        "PYTHONUNBUFFERED": "1"
      }
    }
  }
}
```
To use this engine as a tool inside Qwen Desktop, paste the following configuration into the MCP settings:

```json
{
  "mcpServers": {
    "world-builder": {
      "url": "http://0.0.0.0:8000"
    }
  }
}
```

Then add a description for the server in the same settings.
## 📊 Visualizing Your World

The engine comes with a standalone HTML visualizer:

1. Generate a world using the GUI, CLI, or MCP.
2. Open `static/world_viz.html` in your browser.
3. Upload the JSON export (from `world_output/`).
4. Explore: drag nodes, filter by faction, and use the timeline slider to replay history.
## ⚙️ Configuration & Templates

The engine's logic is data-driven. You can modify the simulation rules in `data/templates/`:

- `factions.yaml`: define cultures, taboos, and aggression levels.
- `biomes.yaml`: configure environmental generation.
- `resources.yaml`: manage economy items.

Naming rules live in `data/naming/`.
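As an illustration, a faction template entry might look like the following; the field names here are hypothetical and may not match the actual schema, so check `factions.yaml` for the real structure:

```yaml
# Hypothetical example entry -- see factions.yaml for the real schema.
- name: "Iron Pact"
  culture: "mountain-clans"
  aggression: 0.7        # 0.0 (pacifist) to 1.0 (warlike)
  taboos:
    - "necromancy"
    - "crossing rivers in winter"
```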
## 🗺️ Roadmap

- [ ] Persistent storage (PostgreSQL/Neo4j support)
- [ ] AI-driven quest generator
## 🤝 Contributing
Contributions are welcome! Please check out the issues tab or submit a PR.
## 📄 License
This project is licensed under the MIT License - see the LICENSE file for details.