1. Click "Install Server".
2. Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
3. In the chat, type @ followed by the MCP server name and your instructions, e.g., "@DPSCoach Why did my critical hit rate drop in the last run?"
4. That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
DPSCoach: AI-Powered Combat Log Analyzer for Throne & Liberty
DPSCoach is a desktop application and MCP toolkit that parses Throne & Liberty combat logs into actionable DPS metrics, powered by a local AI coach that answers natural-language questions about your performance using SQL-first planning and DuckDB analytics.

Prototype note: class-specific context is not wired yet; the class dropdown is intentionally disabled until backend class filtering is implemented.

Fair-play disclaimer: This app only reads your exported text combat logs; it does not hook, modify, or automate the game client and does not confer any in-game advantage beyond offline analytics.
Demo

TODO: Record a 30-second GIF showing: log analysis → coach question → SQL trace → answer. See docs/DEMO.md for recording instructions.
Features
AI Coach with Intent Routing: Ask "Why did my crit rate drop?" and the coach detects intent (CRIT_BUCKET_TREND, SKILL_DELTA, RUNS, etc.) and routes to deterministic handlers or LLM-planned SQL queries.
Single-Call MCP Analysis Packet: All metrics (run summary, top skills, timeline buckets, skill deltas, windows, action levers) returned in one `get_analysis_packet` call; no iterative prompting required.
DuckDB Event Store: Combat events loaded into an in-memory DuckDB table for fast, safe, read-only SQL queries via MCP tools (`query_dps`, `get_events_schema`).
Strict Model Verification: GGUF model integrity enforced via SHA-256 hash and minimum file size checks; models downloaded on-demand to user app data (never bundled).
Read-Only Safety: All tools are SELECT-only; no INSERT/UPDATE/DELETE/file writes. Caps on result sizes (50 runs, 200 timeline buckets) prevent resource abuse.
Deterministic Fallbacks: If the LLM misbehaves, the coach falls back to a safe default query and still produces an answer; it never displays instructions to "use Quick Questions."
PySide6 Desktop UI: Native Windows app with tabbed views (Summary, Runs, Skills, Coach) and background workers for non-blocking analysis.
PyInstaller One-Click Build: Ships as a standalone EXE with Python runtime and all dependencies (except the GGUF model, which downloads on first launch).
Class Context (coming soon): UI shows class dropdown today; backend class-filtered analysis will land in the next iteration.
Fair Play: Reads UTF-8 combat log files only; no game hooks, memory reads, packet interception, or automation.
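The intent-routing idea can be sketched in a few lines. The intent names below come from this README, but the regex patterns and the `LLM_SQL` fallback label are illustrative assumptions, not the project's actual detector:

```python
import re

# Hypothetical pattern table; intent names come from the README, the regexes do not.
INTENT_PATTERNS = {
    "CRIT_BUCKET_TREND": re.compile(r"\bcrit(ical)?\b.*\b(rate|chance)\b", re.I),
    "SKILL_DELTA": re.compile(r"\b(skill|ability)\b.*\b(fell off|dropped|delta)\b", re.I),
    "RUNS": re.compile(r"\b(best|last|recent)\b.*\brun", re.I),
}

def detect_intent(question: str) -> str:
    """Return the first matching deterministic intent, else fall back to LLM planning."""
    for intent, pattern in INTENT_PATTERNS.items():
        if pattern.search(question):
            return intent
    return "LLM_SQL"  # fallback label is an assumption
```

Deterministic matches skip the model entirely, which is what makes the answers to common questions instant and consistently formatted.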
Architecture
Key Design Choices:
MCP as the contract boundary: UI and coach communicate via MCP tools, ensuring the same payload shape for CLI, desktop, and future integrations.
Intent-first routing: Deterministic handlers (90% of questions) bypass the LLM entirely, guaranteeing consistent formatting and trace clarity.
DuckDB instead of pandas: In-memory SQL engine enables ad-hoc queries without shipping raw events to the UI; query results are capped and safe.
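The event-store pattern is simple to sketch. DuckDB is the engine the project actually uses; the stand-in below uses Python's standard-library sqlite3 so it runs anywhere, and the schema, sample rows, and helper name are illustrative assumptions. Only the pattern (in-memory table, read-only row-capped queries) mirrors the design:

```python
import sqlite3

MAX_ROWS = 200  # mirrors the documented timeline-bucket cap

# In-memory event table; column names are assumptions for illustration.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE events (run_id TEXT, skill_name TEXT, damage INTEGER, is_crit INTEGER)"
)
con.executemany(
    "INSERT INTO events VALUES (?, ?, ?, ?)",
    [
        ("run_1", "Fireball", 1200, 1),
        ("run_1", "Ice Spike", 800, 0),
        ("run_2", "Fireball", 950, 0),
    ],
)

def query_dps(sql: str) -> list:
    """Run a read-only, row-capped ad-hoc query against the in-memory store."""
    if not sql.lstrip().upper().startswith("SELECT"):
        raise ValueError("read-only: only SELECT statements are allowed")
    return con.execute(sql).fetchmany(MAX_ROWS)

rows = query_dps(
    "SELECT skill_name, SUM(damage) FROM events GROUP BY skill_name ORDER BY 2 DESC"
)
```

The row cap and the SELECT-only gate mean the UI can expose ad-hoc querying without risking unbounded results or mutation.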
Safety Guarantees
Read-Only Tools: `query_dps` enforces SELECT-only via AST parse; INSERT/UPDATE/DELETE raise exceptions.
Clamped Parameters: `last_n_runs` limited to [1, 50], `top_k_skills` to [1, 50], `bucket_seconds` coerced to allowed values (1, 2, 5, 10, 15, 30, 60).
No File Writes in Tools: The MCP server never writes files; all outputs go to stdout (CLI) or are returned as JSON (tools).
Model Integrity: GGUF files must pass SHA-256 hash and minimum size checks before loading; corrupt or tampered models are rejected.
Deterministic Self-Test: Model must respond "OK" to a trivial prompt before the UI enables the coach chat.
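The clamping rules can be sketched directly. The snap-to-nearest policy for `bucket_seconds` is an assumption; the README only says values are coerced to the allowed set:

```python
ALLOWED_BUCKETS = (1, 2, 5, 10, 15, 30, 60)

def clamp(value: int, lo: int, hi: int) -> int:
    """Clamp value into the inclusive [lo, hi] range."""
    return max(lo, min(hi, value))

def coerce_bucket_seconds(value: int) -> int:
    """Snap to the nearest allowed bucket size (nearest-value policy is an assumption)."""
    return min(ALLOWED_BUCKETS, key=lambda b: abs(b - value))

last_n_runs = clamp(999, 1, 50)    # clamped to 50
top_k_skills = clamp(0, 1, 50)     # clamped to 1
bucket = coerce_bucket_seconds(7)  # snapped to 5
```

Clamping at the tool boundary means a misbehaving client (or LLM) can never request an unbounded result set.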
Engineering Signals (for hiring managers)
This project demonstrates professional software engineering practices suitable for production systems:
1. Contract-Driven Design
MCP as API boundary: UI, CLI, and third-party clients consume identical JSON payloads.
Stable schemas: `runs_last_n` normalized to `list[dict]` with explicit keys (`run_id`, `dps`, `total_damage`, etc.); consumers never rely on positional indexes.
Test parity: Smoke tests (`smoke_mcp_tool.py`) verify MCP tool output matches CLI output (modulo `generated_at` timestamps).
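The normalization idea looks roughly like this; the key names match the README, while the helper name and row data are made up for illustration:

```python
def normalize_runs(raw_rows: list) -> list:
    """Convert positional result rows into dicts with explicit keys.

    The key set matches the README; the row data here is hypothetical.
    """
    keys = ("run_id", "dps", "total_damage")
    return [dict(zip(keys, row)) for row in raw_rows]

runs_last_n = normalize_runs([("run_1", 15230.5, 913830), ("run_2", 14010.2, 840612)])
best = max(runs_last_n, key=lambda r: r["dps"])  # consumers index by key, never position
```

Because every consumer reads named keys, columns can be added to the payload later without breaking the UI, CLI, or third-party clients.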
2. Defensive Programming & Validation
Input sanitization: SQL inputs quoted via `.replace("'", "''")`; user file paths resolved with `Path(...).expanduser()`.
Schema enforcement: Combat logs with unexpected column counts are skipped (not fatal); parsers yield instead of loading entire files into memory.
Graceful degradation: Missing or corrupt models trigger fallback UIs; malformed LLM outputs route to deterministic handlers.
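Both sanitization techniques fit in a short sketch (the helper names are mine, not the repo's):

```python
from pathlib import Path

def quote_sql_literal(value: str) -> str:
    """Escape single quotes so user text is safe inside a SQL string literal."""
    return "'" + value.replace("'", "''") + "'"

def resolve_user_path(raw: str) -> Path:
    """Expand ~ and normalize whatever path the user typed."""
    return Path(raw).expanduser().resolve()

lit = quote_sql_literal("Assassin's Mark")  # "'Assassin''s Mark'"
```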
3. Testability & Observability
73 unit tests covering intent detection, route handlers, DPS bound checks, session persistence, skill delta rendering.
Trace logging: Every coach answer includes a tool trace showing which MCP calls were made, with counts for runs/skills/timeline buckets.
Reproducible builds: `scripts/test_all.ps1` runs all tests + smoke checks in one command; CI/CD-ready.
4. Performance & Resource Management
Streaming parsers: Log files parsed as iterators (`yield` per run) to avoid loading 100MB+ files into RAM.
Background threads: Qt workers (`QThread`) for model downloads, analysis, and coach inference keep the UI responsive.
DuckDB in-memory: Query results are row-limited and columnar; no unbounded memory growth.
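A streaming parser along these lines might look like the following; the "Run Start" delimiter is a guess at the log format, so treat this as a pattern sketch rather than the project's parser:

```python
from typing import Iterator, List

def iter_runs(path: str) -> Iterator[List[str]]:
    """Yield one run's worth of lines at a time instead of reading the whole log.

    The "Run Start" delimiter is an assumed marker, not the real TL log syntax.
    """
    current: List[str] = []
    with open(path, encoding="utf-8") as f:
        for raw in f:
            line = raw.rstrip("\n")
            if line.startswith("Run Start") and current:
                yield current  # hand one complete run to the caller
                current = []
            current.append(line)
    if current:
        yield current  # final run has no trailing delimiter
```

Since the file handle is iterated line by line, memory stays proportional to one run, not the whole log.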
5. Security & Isolation
No shell=True: Model download uses `urllib.request`; the MCP client spawns `python -m mcp_server` safely.
Subprocess sandboxing: MCP server runs in a child process; UI never directly touches combat log files.
User-controlled models: GGUF weights stored in `%APPDATA%\DPSCoach\models\`, never bundled in the EXE, so users verify/replace files independently.
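The no-shell spawn pattern, sketched with a generic helper (the helper name is mine; only the `python -m mcp_server` invocation comes from this README):

```python
import subprocess
import sys

def spawn_module(module: str) -> subprocess.Popen:
    """Start `python -m <module>` as a sandboxed child process.

    The argument-list form (never shell=True) means no shell ever parses
    user-controlled strings; sys.executable pins the current interpreter.
    """
    return subprocess.Popen(
        [sys.executable, "-m", module],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
    )

# The UI would call spawn_module("mcp_server") and speak MCP over the pipes.
```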
6. User Experience & Polish
Intent-aware routing: 90% of questions (RUNS, SKILLS, CRIT_BUCKET_TREND) skip the LLM and return instant, deterministic answers.
Self-documenting UI: "Quick Questions" buttons demonstrate capabilities; SQL trace shows exactly what was queried.
Transparent errors: Model validation failures display the exact error message; "Copy Error" button for support requests.
7. Maintainable Codebase
Modular architecture: Parser (`dps_logs/parser.py`), metrics (`metrics.py`), reporting (`reporting.py`), server (`server.py`), and UI (`app/main.py`) are independently testable.
Type hints: All functions annotated with `-> Dict[str, Any]`, `Optional[str]`, etc.; mypy-compatible.
Docstrings: Public APIs documented with Google-style docstrings; test names are descriptive (e.g., `test_runs_analysis_dps_not_exceeding_bound`).
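As an illustration of the annotation and docstring convention (this function is hypothetical, not from the repo):

```python
from typing import Any, Dict, Optional

def run_summary(run_id: str, dps: float, notes: Optional[str] = None) -> Dict[str, Any]:
    """Build a summary payload for one run.

    Args:
        run_id: Stable identifier of the run.
        dps: Damage per second for the run.
        notes: Optional free-form annotation.

    Returns:
        A JSON-serializable dict with explicit keys.
    """
    return {"run_id": run_id, "dps": dps, "notes": notes}
```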
Setup & Installation
Prerequisites
Python 3.11+ (tested with Windows default install)
Git (for cloning the repo)
Throne & Liberty combat logs exported as UTF-8 `.txt` or `.log` files
Development Install
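A typical development install might look like the following, where the repository name and the presence of a `requirements.txt` are assumptions:

```shell
git clone https://github.com/stalcup-dev/<repo-name>.git
cd <repo-name>
python -m venv .venv
.venv\Scripts\activate        # Windows (the app targets Windows)
pip install -r requirements.txt
```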
Model Download
The desktop app will prompt to download the required GGUF model (~4.4 GB) on first launch. Models are stored at `%APPDATA%\DPSCoach\models\`.
Alternatively, download manually:
Primary model: Qwen2.5-7B-Instruct Q4_K_M
Place it in `models/model.gguf` (repo root) or `%APPDATA%\DPSCoach\models\model.gguf`
Where Combat Logs Are Stored
Throne & Liberty saves logs to:
Point the UI or CLI at this directory to analyze your recent runs.
Running Tests
Official test contract (single command for all tests): `scripts/test_all.ps1`
Tip: include the test contract in project writeups to demonstrate reliability and reproducibility alongside your data insights.
This runs:
`python -m unittest discover -s tests -v` (73 unit tests)
`python -m tests.smoke_mcp_tool --sample` (MCP parity check)
Additional validation:
Usage Examples
CLI Analysis
CLI Options
| Flag | Description |
| --- | --- |
| | File or directory containing TL combat logs. Defaults to |
| | Shortcut to always use the bundled sample log. |
| | Pretty-print the JSON output using an indent of 2 spaces. |
| | When pointing at a directory, only parse the newest |
| | Write the full JSON payload (always indented) to the given path while still printing to stdout. |
| | Write a Markdown report built from the JSON payload. |
| | Run the bundled smoke workflow, write |
MCP Tool (from Claude Desktop or other MCP clients)
Add to your MCP config (claude_desktop_config.json or similar):
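A minimal entry might look like this; the server name "dpscoach" is an assumption, the module invocation `python -m mcp_server` appears elsewhere in this README, and the client should be launched where that module resolves (e.g., the repo root):

```json
{
  "mcpServers": {
    "dpscoach": {
      "command": "python",
      "args": ["-m", "mcp_server"]
    }
  }
}
```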
Then ask Claude:
"Analyze my TL logs at C:\Users...\COMBATLOGS"
"What's my average DPS across the last 10 runs?"
"Show me skill efficiency for run_123"
Desktop App Workflow
1. Launch `python -m app.main`
2. Click "Download Model" (one-time, ~4.4 GB)
3. After the model self-test passes, click "Browse Combat Logs Directory"
4. Select your `COMBATLOGS` folder
5. Click "Analyze Logs"
6. Switch to the Coach tab and ask questions like:
"Why did my crit rate drop?"
"Which skill fell off?"
"Show me my best run"
Roadmap
Multi-Run Comparisons: Side-by-side view of best vs. worst runs with delta highlights.
Rotation Suggestions: Detect opener sequences and suggest reordering based on early damage frontloading.
Benchmarks & Percentiles: Compare your DPS to class/spec benchmarks (user-submitted data or scraped leaderboards).
Export Report: One-click PDF/HTML export with charts (DPS over time, skill breakdown pie chart, crit rate timeline).
Performance Optimizations: Stream timeline buckets to SQLite on disk for sessions >1000 runs; add indexes for skill_name queries.
Help Wanted: Class Data
If you want to contribute, the biggest need is class reference data in plain text for every Throne & Liberty class:
Class benefits and unique passives
All skills with descriptions (damage types, DoT limits, cooldowns)
Known combos/rotations and synergy notes
Edge cases: caps on stacking or DoT application limits
Any text format works (TXT/MD/CSV). Drop links or files via an issue or PR so we can wire class-aware analysis faster.
License
This project does not yet have a license file. A permissive open-source license (MIT or Apache 2.0) will be added before public release.
Acknowledgments
Throne & Liberty by NCSoft for the combat log format.
FastMCP for the Model Context Protocol server framework.
DuckDB for the in-memory SQL engine.
llama.cpp and llama-cpp-python for local GGUF inference.
Qwen2.5-7B-Instruct (Alibaba) for the coach model weights.
Contact & Links
Author Email: allen.stalc@gmail.com
Author GitHub: github.com/stalcup-dev
Built by a senior engineer who cares about contracts, testing, and user experience.