# LoreKeeper MCP
A Model Context Protocol (MCP) server that gives AI assistants fast, cached access to comprehensive Dungeons & Dragons 5th Edition data through the Open5e API.
## Features

- **Comprehensive D&D 5e Data**: Access spells, monsters, classes, races, equipment, and rules
- **Semantic Search**: Milvus Lite vector database with natural-language search capabilities
- **Open5e API Integration**: All content served through the Open5e API
- **Type-Safe Configuration**: Pydantic-based configuration management
- **Modern Python Stack**: Built with Python 3.11+, async/await patterns, and FastMCP
- **Production Ready**: Comprehensive test suite, code quality tools, and pre-commit hooks
## Quick Start

### Prerequisites

- Python 3.11 or higher
- [uv](https://docs.astral.sh/uv/) for package management

### Installation
```bash
# Clone the repository
git clone https://github.com/your-org/lorekeeper-mcp.git
cd lorekeeper-mcp

# Install dependencies
uv sync

# Set up pre-commit hooks
uv run pre-commit install

# Copy environment configuration
cp .env.example .env
```

### Running the Server
```bash
# Start the MCP server (recommended)
lorekeeper serve

# Or with custom configuration
lorekeeper -v serve
lorekeeper --db-path /custom/path.db serve

# Backward compatible: start the server without the CLI
uv run python -m lorekeeper_mcp
```

## Available Tools
LoreKeeper provides 6 MCP tools for querying D&D 5e game data:
- `search_spell` - Search spells by name, level, school, class, and properties
- `search_creature` - Find monsters by name, CR, type, and size
- `search_character_option` - Get classes, races, backgrounds, and feats
- `search_equipment` - Search weapons, armor, and magic items
- `search_rule` - Look up game rules, conditions, and reference information
- `search_all` - Unified search across all content types with semantic search

See `docs/tools.md` for detailed usage and examples.
### Document Filtering
All lookup tools and the search tool support filtering by source document:
```python
# List available documents first
documents = await list_documents()

# Filter spells to SRD only
srd_spells = await search_spell(
    level=3,
    documents=["srd-5e"],
)

# Filter creatures from multiple sources
creatures = await search_creature(
    type="dragon",
    documents=["srd-5e", "tce", "phb"],
)

# Search with a document filter
results = await search_all(
    query="fireball",
    documents=["srd-5e"],
)
```

This allows you to:
- Limit searches to SRD (free) content only
- Filter by specific published books or supplements
- Separate homebrew from official content
- Control which sources you're using for licensing reasons
See `docs/document-filtering.md` for a comprehensive guide and cross-source filtering examples.
## CLI Usage
LoreKeeper includes a command-line interface for importing D&D content:
```bash
# Import content from an OrcBrew file
lorekeeper import MegaPak_-_WotC_Books.orcbrew

# Show help
lorekeeper --help
lorekeeper import --help
```

See `docs/cli-usage.md` for detailed CLI documentation.
## Configuration

LoreKeeper uses environment variables for configuration. All settings use the `LOREKEEPER_` prefix. Create a `.env` file:
```bash
# Cache backend settings
LOREKEEPER_CACHE_BACKEND=milvus  # "milvus" (default) or "sqlite"
LOREKEEPER_MILVUS_DB_PATH=~/.local/share/lorekeeper/milvus.db  # or $XDG_DATA_HOME/lorekeeper/milvus.db
LOREKEEPER_EMBEDDING_MODEL=all-MiniLM-L6-v2

# SQLite settings (if using the sqlite backend)
LOREKEEPER_DB_PATH=./data/cache.db

# Cache TTL settings
LOREKEEPER_CACHE_TTL_DAYS=7
LOREKEEPER_ERROR_CACHE_TTL_SECONDS=300

# Logging
LOREKEEPER_LOG_LEVEL=INFO
LOREKEEPER_DEBUG=false

# API endpoints
LOREKEEPER_OPEN5E_BASE_URL=https://api.open5e.com
```

## Semantic Search
LoreKeeper uses Milvus Lite as the default cache backend, providing semantic search capabilities powered by vector embeddings.
### Features
- **Semantic Search**: Find content by meaning, not just exact text matches
- **Vector Embeddings**: Uses sentence-transformers for high-quality text embeddings
- **Hybrid Search**: Combine semantic queries with structured filters
- **Zero Configuration**: Works out of the box with sensible defaults
- **Lightweight**: Embedded database, no external services required
### Usage Examples
```python
# Find spells by concept (not just keywords)
healing = await search_spell(search="restore health and cure wounds")
# Returns: Cure Wounds, Healing Word, Mass Cure Wounds, etc.

# Find creatures by behavior
flyers = await search_creature(search="flying creatures with ranged attacks")
# Returns: Dragon, Wyvern, Harpy, etc.

# Hybrid search: semantic + structured filters
fire_evocation = await search_spell(
    search="area fire damage",
    level=3,
    school="evocation",
)
# Returns: Fireball (exact match for both semantic query and filters)

# Search across all content types
results = await search_all(query="dragon breath weapon")
```

### First-Run Setup
On first run, LoreKeeper downloads the embedding model (~80MB). This is a one-time download:
```bash
# First run will show:
# Downloading model 'all-MiniLM-L6-v2'...
lorekeeper serve
```

### Configuration
Configure Milvus via environment variables:
```bash
# Use the Milvus backend (default)
LOREKEEPER_CACHE_BACKEND=milvus

# Custom database path (defaults to $XDG_DATA_HOME/lorekeeper/milvus.db)
LOREKEEPER_MILVUS_DB_PATH=/path/to/milvus.db

# Alternative embedding model
LOREKEEPER_EMBEDDING_MODEL=all-MiniLM-L6-v2
```

### Migrating from SQLite
If you were using an older version with SQLite caching:

1. Set the backend to Milvus (the default):

   ```bash
   LOREKEEPER_CACHE_BACKEND=milvus
   ```

2. Re-import your data (the Milvus cache starts empty):

   ```bash
   lorekeeper import /path/to/content.orcbrew
   ```

   Or let it repopulate from the APIs on first query.

**Rollback**: To keep using SQLite (no semantic search):

```bash
LOREKEEPER_CACHE_BACKEND=sqlite
LOREKEEPER_DB_PATH=./data/cache.db
```

**Note**: The SQLite cache does not support semantic search; it offers only exact and pattern matching.
## Development

### Project Structure
```
lorekeeper-mcp/
├── src/lorekeeper_mcp/         # Main package
│   ├── cache/                  # Vector database caching layer
│   │   ├── milvus.py           # Milvus Lite cache implementation
│   │   ├── embedding.py        # Embedding service for semantic search
│   │   ├── protocol.py         # Cache protocol definition
│   │   └── factory.py          # Cache factory
│   ├── api_clients/            # External API clients
│   ├── repositories/           # Repository pattern for data access
│   ├── tools/                  # MCP tool implementations
│   ├── config.py               # Configuration management
│   ├── server.py               # FastMCP server setup
│   └── __main__.py             # Package entry point
├── tests/                      # Test suite
│   ├── test_cache/             # Cache layer tests
│   ├── test_config.py          # Configuration tests
│   ├── test_server.py          # Server tests
│   └── conftest.py             # Pytest fixtures
├── docs/                       # Documentation
├── pyproject.toml              # Project configuration
├── .pre-commit-config.yaml     # Code quality hooks
└── README.md                   # This file
```

### Running Tests
```bash
# Run all tests
uv run pytest

# Run with coverage
uv run pytest --cov=lorekeeper_mcp

# Run a specific test file
uv run pytest tests/test_cache/test_db.py
```

### Code Quality
The project uses several code quality tools:
- **Black**: Code formatting (100-character line length)
- **Ruff**: Linting and import sorting
- **MyPy**: Static type checking
- **Pre-commit**: Git hooks for automated checks
```bash
# Run all quality checks
uv run ruff check src/
uv run ruff format src/
uv run mypy src/

# Run pre-commit hooks manually
uv run pre-commit run --all-files
```

## Vector Database Cache
LoreKeeper uses Milvus Lite for semantic search and efficient caching:
- **Vector Storage**: 384-dimensional embeddings for semantic search
- **Entity Collections**: Separate collections for spells, creatures, equipment, etc.
- **Hybrid Search**: Combine vector similarity with scalar filters
- **Source Tracking**: Records which API provided cached data
- **Zero Configuration**: Embedded database with no external dependencies
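Conceptually, a hybrid query narrows candidates with scalar filters first, then ranks the survivors by vector similarity. A stdlib-only sketch of that two-stage idea (the names and data layout are illustrative, not LoreKeeper's internals):

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Similarity of two embedding vectors in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


def hybrid_search(query_vec, entries, filters, top_k=3):
    """entries: list of (name, embedding, metadata) tuples."""
    # Stage 1: scalar filtering on metadata (e.g. level=3, school="evocation")
    candidates = [
        e for e in entries
        if all(e[2].get(key) == value for key, value in filters.items())
    ]
    # Stage 2: rank the remaining entries by vector similarity
    candidates.sort(key=lambda e: cosine_similarity(query_vec, e[1]), reverse=True)
    return [name for name, _, _ in candidates[:top_k]]
```

Milvus performs the same combination natively at scale; the point here is only that scalar filters prune the search space before the vector ranking runs.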
## API Strategy

The project follows a strategic API assignment:

- Use the Open5e API for all content lookups
- Prefer Open5e v2 over v1 when available
- **Unified source**: A single API ensures consistent behavior and simplified maintenance
See `docs/tools.md` for detailed API mapping and implementation notes.
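The v2-over-v1 preference can be expressed as a small URL helper; the endpoint layout here is an assumption for illustration, not a guaranteed Open5e contract:

```python
from urllib.parse import urlencode

BASE_URL = "https://api.open5e.com"  # matches LOREKEEPER_OPEN5E_BASE_URL


def open5e_url(resource: str, version: str = "v2", **params: str) -> str:
    """Build an Open5e query URL, defaulting to the preferred v2 endpoints."""
    query = f"?{urlencode(params)}" if params else ""
    return f"{BASE_URL}/{version}/{resource}/{query}"
```

A client would try `open5e_url("spells", search="fireball")` first and fall back to `version="v1"` only if the v2 resource is unavailable.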
## 📋 OpenSpec Integration
This project uses OpenSpec as its core development tooling for specification management and change tracking. OpenSpec provides:
- **Structured Specifications**: All features, APIs, and architectural changes are documented in detailed specs
- **Change Management**: Comprehensive change tracking with proposals, designs, and implementation tasks
- **Living Documentation**: Specifications evolve alongside the codebase, ensuring documentation stays current
- **Development Workflow**: Integration between specs, implementation, and testing
The `openspec/` directory contains:

- Current specifications for all project components
- Historical change records with full context
- Design documents and implementation plans
- Task breakdowns for development work
When contributing, please review the relevant specifications in `openspec/` and follow the established change management process.
## Contributing

We welcome contributions! Please see our Contributing Guidelines for details.
### Development Workflow

1. Fork the repository
2. Create a feature branch:

   ```bash
   git checkout -b feature-name
   ```

3. Make your changes and ensure tests pass
4. Run code quality checks:

   ```bash
   uv run pre-commit run --all-files
   ```

5. Commit your changes
6. Push to your fork and create a pull request
### Testing

All contributions must include tests:

- New features should have corresponding unit tests
- Maintain test coverage above 90%
- Use pytest fixtures for consistent test setup
- Follow async/await patterns for async code
## License

This project is licensed under the MIT License - see the `LICENSE` file for details.
## Acknowledgments

- [Open5e](https://open5e.com/) for the open D&D 5e API and data that power this server