DPSCoach: AI-Powered Combat Log Analyzer for Throne & Liberty

DPSCoach is a desktop application and MCP toolkit that parses Throne & Liberty combat logs into actionable DPS metrics, powered by a local AI coach that answers natural-language questions about your performance using SQL-first planning and DuckDB analytics.

Prototype note: class-specific context is not wired up yet; the class dropdown is intentionally disabled until backend class filtering is implemented.

Fair-play disclaimer: This app only reads your exported text combat logs; it does not hook, modify, or automate the game client, and it does not confer any in-game advantage beyond offline analytics.
Demo

<p align="center"> <img src="visuals/dps-coach.png" alt="DPSCoach UI" width="420" /> <img src="visuals/runs.png" alt="Runs table" width="420" /> </p>
TODO: Record a 30-second GIF showing: log analysis -> coach question -> SQL trace -> answer. See docs/DEMO.md for recording instructions.
Features
- AI Coach with Intent Routing: Ask "Why did my crit rate drop?" and the coach detects intent (CRIT_BUCKET_TREND, SKILL_DELTA, RUNS, etc.) and routes to deterministic handlers or LLM-planned SQL queries.
- Single-Call MCP Analysis Packet: All metrics (run summary, top skills, timeline buckets, skill deltas, windows, action levers) returned in one `get_analysis_packet` call; no iterative prompting required.
- DuckDB Event Store: Combat events loaded into an in-memory DuckDB table for fast, safe, read-only SQL queries via MCP tools (`query_dps`, `get_events_schema`).
- Strict Model Verification: GGUF model integrity enforced via SHA-256 hash and minimum file size checks; models downloaded on demand to user app data (never bundled).
- Read-Only Safety: All tools are SELECT-only; no INSERT/UPDATE/DELETE/file writes. Caps on result sizes (50 runs, 200 timeline buckets) prevent resource abuse.
- Deterministic Fallbacks: If the LLM misbehaves, the coach falls back to a safe default query and still produces an answer; it never just tells the user to "use Quick Questions."
- PySide6 Desktop UI: Native Windows app with tabbed views (Summary, Runs, Skills, Coach) and background workers for non-blocking analysis.
- PyInstaller One-Click Build: Ships as a standalone EXE with Python runtime and all dependencies (except the GGUF model, which downloads on first launch).
- Class Context (coming soon): UI shows class dropdown today; backend class-filtered analysis will land in the next iteration.
- Fair Play: Reads UTF-8 combat log files only; no game hooks, memory reads, packet interception, or automation.
Architecture
```
+------------------+
|   PySide6 UI     |  (main.py)
|   (Desktop)      |
+--------+---------+
         | QPushButton clicks
         v
+--------------------+
|  SQLCoach          |  (coach_local.py)
|  Intent Router     |  <-- Detects RUNS, SKILLS, CRIT_BUCKET_TREND, etc.
|  + LLM Planner     |  <-- Plans SQL or returns deterministic answer
+--------+-----------+
         | MCP client calls
         v
+--------------------+
|  MCP Client        |  (mcp_client.py)
|  (Subprocess)      |  <-- Spawns `python -m mcp_server`
+--------+-----------+
         | stdio JSON-RPC
         v
+--------------------+
|  MCP Server        |  (mcp_server.py, FastMCP)
|  + DuckDB Store    |  <-- analyze_dps_logs, query_dps, get_analysis_packet
|  (event_store)     |  <-- Safe SELECT templates, no mutations
+--------+-----------+
         | read logs
         v
+--------------------+
|  Combat Logs       |  (UTF-8 .txt/.log files)
|  (TL SAVED dir)    |
+--------------------+
```
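The UI-to-server hop shown above is plain stdio JSON-RPC. As a hedged sketch, framing a request the way an MCP client might could look like this; the `tools/call` method and payload shape mirror common MCP conventions, but the exact wire format here is an assumption, not this project's contract:

```python
import json

def make_request(request_id: int, method: str, params: dict) -> str:
    """Frame a JSON-RPC 2.0 request as one newline-terminated line,
    as used by line-delimited stdio transports."""
    msg = {"jsonrpc": "2.0", "id": request_id, "method": method, "params": params}
    return json.dumps(msg) + "\n"

# Illustrative request against the hypothetical tool name from this README:
line = make_request(1, "tools/call",
                    {"name": "query_dps", "arguments": {"sql": "SELECT 1"}})
print(line, end="")

# The client would spawn the server as a child process (no shell=True), e.g.:
#   subprocess.Popen([sys.executable, "-m", "mcp_server"],
#                    stdin=subprocess.PIPE, stdout=subprocess.PIPE)
# and write `line` to the child's stdin.
```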
Key Design Choices:
- MCP as the contract boundary: UI and coach communicate via MCP tools, ensuring the same payload shape for CLI, desktop, and future integrations.
- Intent-first routing: Deterministic handlers (90% of questions) bypass the LLM entirely, guaranteeing consistent formatting and trace clarity.
- DuckDB instead of pandas: In-memory SQL engine enables ad-hoc queries without shipping raw events to the UI; query results are capped and safe.
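Intent-first routing as described above can be sketched with a pattern table consulted before any LLM call. The regexes and the fall-through behavior below are illustrative assumptions; the real router's rules live in coach_local.py:

```python
import re
from typing import Optional

# Illustrative intent table; patterns here are assumptions, not the project's.
INTENT_PATTERNS: list[tuple[str, str]] = [
    (r"\bcrit\b.*\b(drop|rate|trend)\b", "CRIT_BUCKET_TREND"),
    (r"\b(best|worst|last)\b.*\brun", "RUNS"),
    (r"\bskill\b.*\b(fell off|delta|efficiency)\b", "SKILL_DELTA"),
]

def detect_intent(question: str) -> Optional[str]:
    """Return the first matching intent label, or None to fall
    through to the LLM planner."""
    q = question.lower()
    for pattern, intent in INTENT_PATTERNS:
        if re.search(pattern, q):
            return intent
    return None

print(detect_intent("Why did my crit rate drop?"))  # CRIT_BUCKET_TREND
print(detect_intent("Which skill fell off?"))       # SKILL_DELTA
print(detect_intent("Explain mana regen"))          # None -> LLM planner
```

Because matched intents go straight to deterministic handlers, the common questions never touch the model, which is what guarantees the consistent formatting mentioned above.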
Safety Guarantees
- Read-Only Tools: `query_dps` enforces SELECT-only via AST parse; INSERT/UPDATE/DELETE raise exceptions.
- Clamped Parameters: `last_n_runs` limited to [1, 50], `top_k_skills` to [1, 50], `bucket_seconds` coerced to allowed values (1, 2, 5, 10, 15, 30, 60).
- No File Writes in Tools: The MCP server never writes files; all outputs go to stdout (CLI) or are returned as JSON (tools).
- Model Integrity: GGUF files must pass SHA-256 hash and minimum size checks before loading; corrupt or tampered models are rejected.
- Deterministic Self-Test: Model must respond "OK" to a trivial prompt before the UI enables the coach chat.
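A minimal sketch of the query and parameter guards above. The function names are illustrative, and the real `query_dps` validates via a full AST parse rather than the simple token check shown here:

```python
import re

FORBIDDEN = re.compile(r"\b(INSERT|UPDATE|DELETE|DROP|ALTER|CREATE|COPY|ATTACH)\b", re.I)
ALLOWED_BUCKETS = (1, 2, 5, 10, 15, 30, 60)

def validate_select_only(sql: str) -> str:
    """Reject anything that is not a single SELECT statement.
    (Token check only; the real tool parses the SQL AST.)"""
    stripped = sql.strip().rstrip(";")
    if (not stripped.upper().startswith("SELECT")
            or ";" in stripped
            or FORBIDDEN.search(stripped)):
        raise ValueError(f"read-only violation: {sql!r}")
    return stripped

def clamp(value: int, lo: int, hi: int) -> int:
    """Clamp a user-supplied parameter into its allowed range."""
    return max(lo, min(hi, value))

def coerce_bucket(seconds: int) -> int:
    """Snap a requested bucket size to the nearest allowed value."""
    return min(ALLOWED_BUCKETS, key=lambda b: abs(b - seconds))

print(validate_select_only("SELECT run_id, dps FROM events;"))
print(clamp(999, 1, 50))  # 50
print(coerce_bucket(7))   # 5
```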
Engineering Signals (for hiring managers)
This project demonstrates professional software engineering practices suitable for production systems:
1. Contract-Driven Design
- MCP as API boundary: UI, CLI, and third-party clients consume identical JSON payloads.
- Stable schemas: `runs_last_n` normalized to `list[dict]` with explicit keys (`run_id`, `dps`, `total_damage`, etc.); consumers never rely on positional indexes.
- Test parity: Smoke tests (`smoke_mcp_tool.py`) verify MCP tool output matches CLI output (modulo `generated_at` timestamps).
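The normalization step the schema contract describes can be sketched as follows; the sample rows are invented for illustration:

```python
from typing import Any

# Explicit contract keys, as listed above; further columns would extend this tuple.
RUN_COLUMNS = ("run_id", "dps", "total_damage")

def normalize_runs(rows: list[tuple]) -> list[dict[str, Any]]:
    """Convert positional DB rows into keyed dicts so consumers
    never index by position."""
    return [dict(zip(RUN_COLUMNS, row)) for row in rows]

rows = [("run_001", 15230.5, 914000), ("run_002", 14880.0, 893000)]
for run in normalize_runs(rows):
    print(run["run_id"], run["dps"])
```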
2. Defensive Programming & Validation
- Input sanitization: SQL string inputs quoted via `.replace("'", "''")`; user file paths resolved with `Path(...).expanduser()`.
- Schema enforcement: Combat logs with unexpected column counts are skipped (not fatal); parsers yield instead of loading entire files into memory.
- Graceful degradation: Missing or corrupt models trigger fallback UIs; malformed LLM outputs route to deterministic handlers.
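The two sanitization steps above reduce to small helpers; a hedged sketch (the function names are illustrative, not the project's):

```python
from pathlib import Path

def quote_sql_literal(value: str) -> str:
    """Escape embedded single quotes by doubling them, then wrap the
    result in quotes so it is safe to splice into a SQL string literal."""
    return "'" + value.replace("'", "''") + "'"

def resolve_user_path(raw: str) -> Path:
    """Expand ~ and normalize a user-supplied path before touching disk."""
    return Path(raw).expanduser().resolve()

print(quote_sql_literal("O'Brien's run"))  # 'O''Brien''s run'
```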
3. Testability & Observability
- 73 unit tests covering intent detection, route handlers, DPS bound checks, session persistence, skill delta rendering.
- Trace logging: Every coach answer includes a tool trace showing which MCP calls were made, with counts for runs/skills/timeline buckets.
- Reproducible builds: `scripts/test_all.ps1` runs all tests plus smoke checks in one command; CI/CD-ready.
4. Performance & Resource Management
- Streaming parsers: Log files parsed as iterators (`yield` per run) to avoid loading 100 MB+ files into RAM.
- Background threads: Qt workers (`QThread`) for model downloads, analysis, and coach inference keep the UI responsive.
- DuckDB in-memory: Query results are row-limited and columnar; no unbounded memory growth.
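A generator-based parser in the spirit of the streaming bullet above might look like this sketch. The blank-line run delimiter is an assumption for illustration; the game's actual log layout is handled in dps_logs/parser.py:

```python
from typing import Iterator

def iter_runs(path: str) -> Iterator[list[str]]:
    """Yield one run's worth of lines at a time so a 100 MB+ log never
    needs to be fully resident in RAM.

    Assumes runs are separated by blank lines (illustrative only)."""
    current: list[str] = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.rstrip("\n")
            if not line:          # assumed run delimiter
                if current:
                    yield current
                    current = []
            else:
                current.append(line)
    if current:                   # flush the final run
        yield current
```

Because the file handle is consumed lazily, downstream code can stop after the newest N runs without ever reading the rest of the file.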
5. Security & Isolation
- No `shell=True`: Model download uses `urllib.request`; the MCP client spawns `python -m mcp_server` safely.
- Subprocess sandboxing: The MCP server runs in a child process; the UI never directly touches combat log files.
- User-controlled models: GGUF weights stored in `%APPDATA%\DPSCoach\models\`, never bundled in the EXE, so users can verify or replace files independently.
6. User Experience & Polish
- Intent-aware routing: 90% of questions (RUNS, SKILLS, CRIT_BUCKET_TREND) skip the LLM and return instant, deterministic answers.
- Self-documenting UI: "Quick Questions" buttons demonstrate capabilities; SQL trace shows exactly what was queried.
- Transparent errors: Model validation failures display the exact error message; "Copy Error" button for support requests.
7. Maintainable Codebase
- Modular architecture: Parser (`dps_logs/parser.py`), metrics (`metrics.py`), reporting (`reporting.py`), server (`server.py`), and UI (`app/main.py`) are independently testable.
- Type hints: All functions annotated with `-> Dict[str, Any]`, `Optional[str]`, etc.; mypy-compatible.
- Docstrings: Public APIs documented with Google-style docstrings; test names are descriptive (e.g., `test_runs_analysis_dps_not_exceeding_bound`).
Setup & Installation
Prerequisites
- Python 3.11+ (tested with Windows default install)
- Git (for cloning the repo)
- Throne & Liberty combat logs exported as UTF-8 `.txt` or `.log` files
Development Install
```shell
# Clone the repository
git clone https://github.com/stalcup-dev/tl-dps-mcp.git
cd tl-dps-mcp

# Install all dependencies (core + dev + UI + coach)
pip install -r requirements.txt

# Install coach-specific dependencies (llama-cpp-python)
pip install -r app/requirements_coach.txt

# Run CLI smoke test
python -m server --sample --pretty

# Run desktop app
python -m app.main
```

(Optional) Showcase for data analytics skill growth:
- Swap log directories to compare player sessions and build a portfolio of analyses
- Export JSON/MD outputs to demonstrate exploratory data analysis and reporting
- Extend the MCP server with custom tools to highlight SQL + Python + DuckDB skills
Model Download
The desktop app will prompt to download the required GGUF model (~4.4 GB) on first launch. Models are stored at:
%APPDATA%\DPSCoach\models\model.gguf
Alternatively, download manually:
- Primary model: Qwen2.5-7B-Instruct Q4_K_M
- Place in `models/model.gguf` (repo root) or `%APPDATA%\DPSCoach\models\model.gguf`
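The integrity gate described under Safety Guarantees can be sketched as follows. The expected hash and size floor below are placeholders, not the real model's values:

```python
import hashlib
from pathlib import Path

MIN_MODEL_BYTES = 4_000_000_000  # placeholder floor, roughly a 4.4 GB model
EXPECTED_SHA256 = "0" * 64       # placeholder; not the real model's hash

def verify_model(path: Path, expected_sha256: str, min_bytes: int) -> bool:
    """Accept the model only if it exists, meets the size floor,
    and its SHA-256 digest matches the pinned value."""
    if not path.is_file() or path.stat().st_size < min_bytes:
        return False
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        # Hash in 1 MiB chunks so the multi-GB file is never held in RAM.
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256
```

A corrupt or truncated download fails either check and is rejected before llama.cpp ever loads it.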
Where Combat Logs Are Stored
Throne & Liberty saves logs to:
%LOCALAPPDATA%\TL\Saved\COMBATLOGS\
Point the UI or CLI at this directory to analyze your recent runs.
Running Tests
Official test contract (single command for all tests):
```powershell
powershell -ExecutionPolicy Bypass -File .\scripts\test_all.ps1
```
Tip for your analytics portfolio: include the test contract in writeups to demonstrate reliability and reproducibility alongside your data insights.
This runs:
- `python -m unittest discover -s tests -v` (73 unit tests)
- `python -m tests.smoke_mcp_tool --sample` (MCP parity check)
Additional validation:
```shell
# Verify DPS math
python validate_output.py --sample

# Packaged build smoke test
python -m server --smoke-packaged
```
Usage Examples
CLI Analysis
```shell
# Analyze bundled sample log
python -m server --sample --pretty

# Analyze your own logs (newest 5 runs)
python -m server --input-path %LOCALAPPDATA%\TL\Saved\COMBATLOGS --limit-runs 5 --output-json reports/latest.json --output-md reports/latest.md

# Smoke test (verifies MCP tools + DuckDB queries)
python -m server --smoke-packaged
```
CLI Options
| Flag | Description |
|---|---|
| `--input-path PATH` | File or directory containing TL combat logs. Defaults to `sample_data/example_log_01.txt`. |
| `--sample` | Shortcut to always use the bundled sample log. |
| `--pretty` | Pretty-print the JSON output using an indent of 2 spaces. |
| `--limit-runs N` | When pointing at a directory, only parse the newest N `.txt`/`.log` files. |
| `--output-json PATH` | Write the full JSON payload (always indented) to the given path while still printing to stdout. |
| `--output-md PATH` | Write a Markdown report built from the JSON payload. |
| `--smoke-packaged` | Run the bundled smoke workflow, write `reports/packaged_smoke.json`, and print SMOKE PASS/SMOKE FAIL. |
MCP Tool (from Claude Desktop or other MCP clients)
Add to your MCP config (`claude_desktop_config.json` or similar), adjusting `cwd` to your clone location:

```json
{
  "mcpServers": {
    "tl-dps-mcp": {
      "command": "python",
      "args": ["-m", "mcp_server"],
      "cwd": "C:\\Users\\Allen\\Desktop\\Data Analyst Projects\\tl-dps-mcp"
    }
  }
}
```
Then ask Claude:
- "Analyze my TL logs at C:\Users...\COMBATLOGS"
- "What's my average DPS across the last 10 runs?"
- "Show me skill efficiency for run_123"
Desktop App Workflow
- Launch `python -m app.main`
- Click "Download Model" (one-time, ~4.4 GB)
- After the model self-test passes, click "Browse Combat Logs Directory"
- Select your `COMBATLOGS` folder
- Click "Analyze Logs"
- Switch to the Coach tab and ask questions like:
  - "Why did my crit rate drop?"
  - "Which skill fell off?"
  - "Show me my best run"
Roadmap
- [ ] Multi-Run Comparisons: Side-by-side view of best vs. worst runs with delta highlights.
- [ ] Rotation Suggestions: Detect opener sequences and suggest reordering based on early damage frontloading.
- [ ] Benchmarks & Percentiles: Compare your DPS to class/spec benchmarks (user-submitted data or scraped leaderboards).
- [ ] Export Report: One-click PDF/HTML export with charts (DPS over time, skill breakdown pie chart, crit rate timeline).
- [ ] Performance Optimizations: Stream timeline buckets to SQLite on disk for sessions >1000 runs; add indexes for skill_name queries.
Help Wanted: Class Data
If you want to contribute, the biggest need is class reference data in plain text for every Throne & Liberty class:
- Class benefits and unique passives
- All skills with descriptions (damage types, DoT limits, cooldowns)
- Known combos/rotations and synergy notes
- Edge cases: caps on stacking or DoT application limits
Any text format works (TXT/MD/CSV). Drop links or files via an issue or PR so we can wire class-aware analysis faster.
License
This project does not yet have a license file. A permissive open-source license (MIT or Apache 2.0) will be added before public release.
Acknowledgments
- Throne & Liberty by NCSoft for the combat log format.
- FastMCP for the Model Context Protocol server framework.
- DuckDB for the in-memory SQL engine.
- llama.cpp and llama-cpp-python for local GGUF inference.
- Qwen2.5-7B-Instruct (Alibaba) for the coach model weights.
Contact & Links
- GitHub: github.com/stalcup-dev/tl-dps-mcp
- Author Email: allen.stalc@gmail.com
- Author GitHub: github.com/stalcup-dev
- Issues: github.com/stalcup-dev/tl-dps-mcp/issues
Built by a senior engineer who cares about contracts, testing, and user experience.