## Server Configuration

Describes the environment variables required to run the server.

| Name | Required | Description | Default |
|---|---|---|---|
| OPENAI_API_KEY | Yes | OpenAI API key used for embeddings in the ask_book tool. | |
| MOTHERDUCK_TOKEN | Yes | MotherDuck access token for the knowledge graph database. | |
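For local development, both required variables can be supplied through the shell environment. The variable names come from the table above; the values below are placeholders, not real credentials:

```shell
# Required: OpenAI API key, used for embeddings in the ask_book tool
export OPENAI_API_KEY="sk-..."    # placeholder value

# Required: MotherDuck access token for the knowledge graph database
export MOTHERDUCK_TOKEN="eyJ..."  # placeholder value
```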
## Capabilities

Features and capabilities supported by this server.

| Capability | Details |
|---|---|
| tools | `{"listChanged": false}` |
| prompts | `{"listChanged": false}` |
| experimental | `{}` |
## Tools

Functions exposed to the LLM to take actions.

| Name | Description |
|---|---|
| health_check | Check server health and graph scope. Returns database connection status, graph statistics (concept count, relationship count, avg confidence), and pipeline status. Call this first to understand how large the knowledge graph is and whether the database is reachable. |
| match_concepts | ENTRY POINT — Deterministically match a project description to knowledge graph concepts via embedding similarity. Returns ranked concepts with scores and creates a consultation_id that tracks the session. The same description always produces the same concept ranking and fingerprint. Pass the returned consultation_id to get_subgraph and ask_book for step logging. |
| list_concepts | BROWSE — List all 138 concepts in the knowledge graph. Returns compact output (id, name, category) by default. Use search to filter by name, and include_definitions for full definition text. Use this to browse the catalogue; for consultation workflows, prefer match_concepts as the entry point. |
| get_subgraph | QUERY PLANNER — Bounded graph traversal from seed concepts. Given one or more concept IDs (from match_concepts or list_concepts), performs BFS up to max_hops and returns all reachable nodes and edges. Use relationship types to discover what the user is missing: alternative_to for competing approaches, requires for prerequisites, conflicts_with for incompatibilities, complements for synergies. Pass consultation_id to log traversal steps for coverage tracking. |
| ask_book | DEEP CONTEXT — RAG search against book sections. Embeds a natural language question and returns the most relevant book passages with full text, chapter, page numbers, and section title. ALWAYS scope with concept_ids from get_subgraph for precision. Returns suggested_questions derived deterministically from graph edges. Pass consultation_id to log retrieval steps. |
| consultation_report | COVERAGE CHECK — Compute coverage metrics for a consultation session. Concept coverage counts matched concepts that were either traversed (get_subgraph seeds) or assessed (log_pattern_assessment). Also shows relationship type coverage, passage diversity, prerequisite/conflict edge checks, and specific gaps. Call before synthesizing to ensure thorough coverage. Optionally compare two sessions with the same project fingerprint to see diffs. |
| score_architecture | MATURITY SCORECARD — Deterministic architecture scoring from stored pattern assessments. Reads pattern_assessment steps logged during graph traversal and computes: maturity level (L1-L6), pattern status with goals (target status after recommendations), gap analysis with severity, recommended metrics from the book, and implementation roadmap. Same consultation always produces same results. Requires pattern_assessment steps to have been logged during step 3 (traverse graph). |
| log_pattern_assessment | LOG ASSESSMENT — Record a pattern assessment for a consultation session. Call this during graph traversal (step 3) for each architectural pattern you identify in the user's codebase or confirm is missing. These stored assessments are what score_architecture uses to compute deterministic maturity scores. |
| validate_subagent | VALIDATE — Schema validation for subagent responses from scatter-gather graph traversal. Checks that a subagent response contains the required fields (concept, key_relationships, recommendation, discovered_ids) with correct types. Returns validation result with errors and warnings. No LLM calls — pure structural validation. |
| critique_consultation | CRITIQUE — Deterministic quality critique of a consultation session. Analyzes logged steps for workflow completeness, traversal depth, pattern assessment coverage, passage diversity, and critical edge checks. Returns issues with severity (error/warning), categories, and actionable suggestions. No LLM calls — pure structural analysis. |
| write_state | SHARED STATE (write) — Upsert a key-value pair in consultation shared state. Use for subagent coordination: store discovered concepts, current phase, conflict markers, or any JSON-serializable value. Logs a state_write step to the consultation. |
| read_state | SHARED STATE (read) — Read shared state from a consultation. Returns one entry if key is specified, or all entries if omitted. Use for subagent coordination and progress tracking. |
| emit_event | EVENT (emit) — Emit a consultation event for reactive processing. Valid types: gap_found, pattern_assessed, coverage_threshold_reached, coverage_dropped, plan_created, state_conflict. Returns a reactive suggestion based on the event type. |
| get_events | EVENT (poll) — Poll consultation events with optional filters. Use since_id to get only new events since a previous poll. Use event_type to filter by type. |
| plan_consultation | PLAN — Generate an adaptive consultation plan after match_concepts. Assesses project complexity (simple/moderate/complex) based on concept count, description keywords, and relationship density. Returns a step-by-step plan with tool names and parameters. Call once after match_concepts, then follow the generated plan. |
| supervise_consultation | SUPERVISE — Track consultation progress and suggest the next action. Returns workflow phase progress (percent complete), the recommended next tool call with parameters, step summary, recent event alerts, and shared state entries. Call after each major step for guided workflow. |
| generate_failure_scenarios | STRESS TEST — Generate concrete failure scenario walkthroughs for missing/partial patterns. Each scenario shows a realistic cascading failure: trigger event, step-by-step propagation through the architecture (with file:line references when code evidence is available), downstream impact, and book-cited recovery recommendation. Also maps coverage against Ch. 7's five-step failure recovery chain. Flags inverted pyramid warnings when advanced patterns depend on missing foundations. Deterministic — same consultation always produces same scenarios. Requires pattern_assessment steps from step 3. |
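As a rough illustration of the purely structural check that validate_subagent describes (no LLM calls), the sketch below validates a subagent response against the four required fields named in the table. The expected types and the shape of the result are assumptions for illustration, not the server's actual implementation:

```python
# Hypothetical sketch of the structural validation validate_subagent performs.
# Field names come from the tool description above; the types are assumptions.
REQUIRED_FIELDS = {
    "concept": str,             # the concept the subagent explored
    "key_relationships": list,  # edges it found relevant
    "recommendation": str,      # its architectural recommendation
    "discovered_ids": list,     # new concept IDs worth traversing next
}

def validate_subagent_response(response: dict) -> dict:
    """Return a validation result with errors and warnings (no LLM calls)."""
    errors, warnings = [], []
    for field, expected in REQUIRED_FIELDS.items():
        if field not in response:
            errors.append(f"missing required field: {field}")
        elif not isinstance(response[field], expected):
            errors.append(
                f"field {field!r} should be {expected.__name__}, "
                f"got {type(response[field]).__name__}"
            )
    for key in response:
        if key not in REQUIRED_FIELDS:
            warnings.append(f"unexpected field: {key}")
    return {"valid": not errors, "errors": errors, "warnings": warnings}
```

A response missing fields or carrying wrong types fails with one error per problem, while unknown extra fields only produce warnings, mirroring the errors-and-warnings split in the tool description.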
## Prompts

Interactive templates invoked by user choice.

| Name | Description |
|---|---|
| consult | Start an architecture consultation. Provide your project context and get expert multi-agent system design advice grounded in the book. |
## Resources

Contextual data attached and managed by the client.

| Name | Description |
|---|---|
| No resources | |