LoreKeeper MCP

by frap129
spec.md (4.74 kB)
# base-client Specification

## Purpose

TBD - created by archiving change build-api-clients. Update Purpose after archive.

## Requirements

### Requirement: HTTP Client Configuration

The system SHALL provide configurable HTTP client settings for different use cases and environments while maintaining sensible defaults.

#### Scenario: Configurable client settings

When creating API clients, the system must support configurable settings including timeouts, retry limits, and rate limits while providing sensible defaults.

**Acceptance Criteria:**

- Configurable base URLs for different API services (Open5e v1, v2, D&D 5e API)
- Configurable timeouts (default: 30s), retry limits (default: 5), and rate limits
- Environment-specific configuration support via environment variables or config files
- Default user agent header identifying LoreKeeper MCP client
- Support for custom headers per request or client instance

### Requirement: Parallel cache queries with API calls

The base HTTP client MUST query the entity cache in parallel with API requests to minimize latency and enable offline fallback.
#### Scenario: Parallel cache and API query

**Given** some entities are cached and others are not
**When** making an API request for multiple entities
**Then** cache query starts immediately in parallel with API request
**And** both operations complete concurrently
**And** results are merged with API taking precedence
**And** total latency is approximately max(cache_time, api_time), not cache_time + api_time

#### Scenario: Cache fills gaps in API response

**Given** an API request that times out after returning partial results
**And** remaining entities exist in cache
**When** the request completes with timeout error
**Then** cached entities are merged with partial API results
**And** user receives complete dataset from cache + API
**And** no error is raised for the partial timeout

### Requirement: Offline fallback using cache

The base HTTP client MUST fall back to cached entities when APIs are unreachable, providing graceful degradation.

#### Scenario: Network failure serves from cache

**Given** multiple entities cached locally
**When** an API request fails with NetworkError
**Then** the client catches the error
**And** returns all matching entities from cache
**And** logs a warning about offline mode
**And** does not raise an exception

#### Scenario: Offline with partial cache data

**Given** 5 out of 10 requested entities are cached
**When** API request fails due to network error
**Then** the 5 cached entities are returned
**And** response indicates partial data (via metadata or logging)
**And** user receives useful partial results instead of total failure

#### Scenario: Complete cache miss during offline

**Given** no relevant entities in cache
**When** API request fails with network error
**Then** an empty list is returned
**And** error is logged about offline mode with no cache
**And** appropriate exception is raised indicating no data available

### Requirement: Entity-based cache storage from responses

The base HTTP client MUST cache individual entities from API responses instead of caching entire URL responses.

#### Scenario: Cache entities from paginated API response

**Given** an API endpoint returns a paginated response with 20 spells
**When** the response is successfully received
**Then** each of the 20 spells is cached individually in the spells table
**And** the URL itself is NOT cached
**And** subsequent requests for any of those 20 spells hit cache

#### Scenario: Preserve entity metadata during caching

**Given** an API response with entities containing source_api metadata
**When** caching entities
**Then** each entity retains its source_api field
**And** created_at timestamp is set to current time for new entities
**And** updated_at is set to current time
**And** slug is extracted from entity for primary key

### Requirement: Cache-first query option for offline-preferred mode

The base HTTP client MUST support a cache-first mode where cache results are returned immediately while API refreshes asynchronously in the background.

#### Scenario: Return cached data immediately for faster UX

**Given** entities exist in cache
**When** making a request with `cache_first=True` flag
**Then** cached entities are returned immediately to caller
**And** API request starts in background to refresh cache
**And** user experiences instant response time

#### Scenario: Background API refresh updates cache

**Given** a cache-first request returned stale data
**When** background API request completes successfully
**Then** cache is updated with fresh data
**And** subsequent requests get the refreshed data
**And** original caller is not blocked on API response
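The "HTTP Client Configuration" requirement could be modeled as a small settings object. This is a minimal sketch, not the project's actual API: the `ClientConfig` name, the field names, the rate-limit default, the user-agent string, and the `LOREKEEPER_*` environment-variable prefix are all illustrative assumptions; only the 30s timeout and 5-retry defaults come from the spec.

```python
import os
from dataclasses import dataclass, field


@dataclass
class ClientConfig:
    """Settings for one API client instance (defaults per the spec)."""

    base_url: str
    timeout_s: float = 30.0            # spec default: 30s
    max_retries: int = 5               # spec default: 5
    rate_limit_per_s: float = 10.0     # hypothetical default; spec only says "configurable"
    user_agent: str = "lorekeeper-mcp" # assumed identifier string
    extra_headers: dict = field(default_factory=dict)

    @classmethod
    def from_env(cls, base_url: str, prefix: str = "LOREKEEPER_") -> "ClientConfig":
        # Environment variables, when set, override the defaults above.
        return cls(
            base_url=base_url,
            timeout_s=float(os.environ.get(prefix + "TIMEOUT", 30.0)),
            max_retries=int(os.environ.get(prefix + "RETRIES", 5)),
        )


# One config per upstream service, e.g. Open5e v1.
open5e_v1 = ClientConfig(base_url="https://api.open5e.com/v1")
```

Per-request custom headers could then be merged over `extra_headers` at call time.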
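The "parallel cache queries" requirement amounts to starting both lookups at once and merging with API precedence, so total latency is roughly max(cache_time, api_time). A minimal sketch using `asyncio.gather`, with stub dictionaries standing in for the real cache table and HTTP call (all names here are hypothetical):

```python
import asyncio

# Stub lookups standing in for the real cache table and HTTP round trip.
CACHE = {"fireball": {"slug": "fireball", "source": "cache"}}


async def query_cache(slugs):
    return {s: CACHE[s] for s in slugs if s in CACHE}


async def query_api(slugs):
    await asyncio.sleep(0)  # placeholder for network latency
    return {s: {"slug": s, "source": "api"} for s in slugs}


async def fetch(slugs):
    # Both lookups start immediately and run concurrently.
    cached, fresh = await asyncio.gather(query_cache(slugs), query_api(slugs))
    # Merge with API results taking precedence over cached copies.
    return {**cached, **fresh}


result = asyncio.run(fetch(["fireball", "magic-missile"]))
```

If the API call times out after partial results, the same merge lets cached entities fill the gaps, as the second scenario describes.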
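The offline-fallback requirement is a catch-and-degrade pattern: trap the network error, log a warning, and serve whatever the cache holds. A sketch under stated assumptions: `ConnectionError` stands in for the spec's `NetworkError`, and the dict again stands in for the real cache table.

```python
import asyncio
import logging

CACHE = {"fireball": {"slug": "fireball"}, "shield": {"slug": "shield"}}


async def query_api(slugs):
    raise ConnectionError("network unreachable")  # simulate offline


async def fetch_with_fallback(slugs):
    try:
        return await query_api(slugs)
    except ConnectionError:
        # Spec: catch the error, warn about offline mode, serve from cache.
        hits = {s: CACHE[s] for s in slugs if s in CACHE}
        logging.warning("offline mode: serving %d/%d entities from cache",
                        len(hits), len(slugs))
        return hits


result = asyncio.run(fetch_with_fallback(["fireball", "shield", "haste"]))
```

Here two of three requested entities come back from cache; the complete-miss case (empty cache) would return nothing and escalate, per the last scenario.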
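Entity-based cache storage means iterating a paginated response and upserting each entity under its slug, preserving `source_api` and the timestamp fields, rather than keying anything by URL. A sketch with a plain dict standing in for the spells table (the `cache_entities` helper and response shape are illustrative):

```python
from datetime import datetime, timezone

spells_table = {}  # stand-in for the real spells table, keyed by slug


def cache_entities(response: dict, source_api: str) -> None:
    # Cache each entity individually; the URL itself is never a cache key.
    now = datetime.now(timezone.utc).isoformat()
    for entity in response["results"]:
        existing = spells_table.get(entity["slug"], {})
        row = {
            **entity,
            "source_api": source_api,                      # preserved metadata
            "created_at": existing.get("created_at", now), # set once for new rows
            "updated_at": now,                             # always refreshed
        }
        spells_table[row["slug"]] = row


cache_entities(
    {"results": [{"slug": "fireball", "level": 3},
                 {"slug": "shield", "level": 1}]},
    source_api="open5e_v1",
)
```

After this, a later request for `fireball` alone can hit the cache even though it originally arrived in a 20-item page.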
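Cache-first mode can be sketched as: serve the cached rows immediately, and schedule the API refresh as a background task so the caller never blocks on it. Assumptions: `asyncio.create_task` for the background refresh, stub cache and API helpers, and a hypothetical `cache_first` flag matching the spec's wording.

```python
import asyncio

CACHE = {"fireball": {"slug": "fireball", "stale": True}}


async def query_api(slugs):
    return {s: {"slug": s, "stale": False} for s in slugs}


async def refresh_cache(slugs):
    CACHE.update(await query_api(slugs))


async def fetch(slugs, cache_first=False):
    if cache_first and all(s in CACHE for s in slugs):
        # Serve cached rows now; refresh in the background without blocking.
        asyncio.create_task(refresh_cache(slugs))
        return [CACHE[s] for s in slugs]
    return list((await query_api(slugs)).values())


async def demo():
    served = await fetch(["fireball"], cache_first=True)
    await asyncio.sleep(0)  # yield so the background refresh task can run
    return served, CACHE["fireball"]


served, refreshed = asyncio.run(demo())
```

The caller gets the (stale) cached entity instantly, while the cache ends up holding the fresh copy for subsequent requests.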
