Iconsult MCP is an architecture consulting server for multi-agent systems. It analyzes codebases against a knowledge graph of 141 concepts and 462 relationships derived from Agentic Architectural Patterns for Building Multi-Agent Systems, delivering book-grounded recommendations with exact chapter/page citations.
Core Consultation Workflow
Match concepts: Embed a project description to rank relevant architectural patterns and create a tracked session
Plan consultation: Generate an adaptive step-by-step plan based on project complexity
Traverse the knowledge graph: BFS from seed concepts to discover prerequisites, conflicts, alternatives, and complements
Query the book: RAG search for exact passages with chapter numbers, page references, and section titles
Log pattern assessments: Record whether patterns are implemented, partial, missing, or not applicable — with code evidence
Score architecture: Compute a deterministic L1–L6 maturity scorecard with gap analysis and an implementation roadmap
Generate failure scenarios: Produce cascading failure walkthroughs for missing/partial patterns with recovery recommendations
Critique the consultation: Structural quality check for workflow completeness, traversal depth, and coverage gaps
Render an HTML report: Interactive before/after architecture review with SVG diagrams, tooltips, zoom controls, and animations
Coverage analysis: Compute concept/relationship coverage metrics and diff two consultation sessions
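The first workflow step — matching concepts — can be pictured as a deterministic cosine-similarity ranking over embeddings. The sketch below is illustrative only: the concept names and vectors are invented, and the server's real ranking runs over OpenAI embeddings of the 141-concept catalogue.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_concepts(project_vec, concept_vecs, top_k=3):
    # Sort by similarity, then by name, so equal scores rank deterministically.
    ranked = sorted(
        concept_vecs.items(),
        key=lambda kv: (-cosine(project_vec, kv[1]), kv[0]),
    )
    return [name for name, _ in ranked[:top_k]]

# Toy embeddings — real ones would come from an embedding model.
concepts = {
    "Orchestrator":   [0.9, 0.1, 0.0],
    "Planner-Worker": [0.7, 0.6, 0.1],
    "Blackboard":     [0.1, 0.2, 0.9],
}
print(match_concepts([0.8, 0.4, 0.1], concepts, top_k=2))
# → ['Planner-Worker', 'Orchestrator']
```

The secondary sort key is what makes the ranking deterministic: ties never depend on dictionary iteration order.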
Supervision & Implementation Tracking
Supervise consultation progress across 9 workflow phases with recommended next steps
Generate phased implementation checklists classifying steps as mechanical or design decisions
Track plan step status (pending/in-progress/completed/skipped)
Multi-Agent Coordination
Shared key-value state store for subagent coordination
Typed, versioned blackboard facts with conflict detection, confidence scores, and TTL
Event-driven reactivity (emit/poll typed events like `gap_found`, `coverage_threshold_reached`)
Schema-validate subagent JSON responses
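The blackboard idea above can be sketched as a versioned fact store that stamps each fact with a confidence score and a TTL, and flags a conflict when a new value disagrees with a still-live fact. The class and field names here are illustrative, not the server's actual schema.

```python
import time
from dataclasses import dataclass

@dataclass
class Fact:
    key: str
    value: object
    confidence: float
    version: int = 1
    expires_at: float = 0.0

class Blackboard:
    def __init__(self):
        self._facts = {}

    def post(self, key, value, confidence, ttl=60.0):
        now = time.time()
        existing = self._facts.get(key)
        # Conflict: a non-expired fact holds a different value for this key.
        conflict = (
            existing is not None
            and existing.expires_at > now
            and existing.value != value
        )
        version = existing.version + 1 if existing else 1
        self._facts[key] = Fact(key, value, confidence, version, now + ttl)
        return {"version": version, "conflict": conflict}

bb = Blackboard()
print(bb.post("gap:verifier", "terminal", 0.9))       # {'version': 1, 'conflict': False}
print(bb.post("gap:verifier", "feedback-loop", 0.7))  # {'version': 2, 'conflict': True}
```

Versioning means a subagent can detect that a fact changed underneath it; the TTL keeps stale findings from silently outliving the traversal that produced them.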
Quality & Analytics
Rate consultations (1–5) and surface quality trends across sessions
Browse and filter the full 141-concept catalogue with definitions
Health check with graph statistics
Uses Mermaid syntax to generate and render interactive architectural diagrams that visualize agent relationships, system flows, and recommended design improvements.
Integrates with OpenAI's API for generating embeddings to support RAG-based searches of architectural patterns and provides specialized analysis for systems built with the OpenAI Agents SDK.
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type `@` followed by the MCP server name and your instructions, e.g., "@Iconsult MCP Review my multi-agent architecture for gaps and provide a maturity score."
That's it! The server will respond to your query, and you can continue using it as needed.
Iconsult MCP
Architecture consulting for multi-agent systems, grounded in the textbook.
Iconsult is an MCP server that reviews your multi-agent architecture against a knowledge graph of 141 concepts and 462 relationships extracted from Agentic Architectural Patterns for Building Multi-Agent Systems (Arsanjani & Bustos, Packt 2026). Every recommendation comes with chapter numbers, page references, and concrete code-level changes — not abstract advice.
See It In Action
We pointed Iconsult at OpenAI's Financial Research Agent — a 5-stage multi-agent pipeline from their Agents SDK — and asked it to assess architectural maturity.
View the full interactive architecture review →
The agent's current architecture
The Financial Research Agent uses a 5-stage sequential pipeline orchestrated by FinancialResearchManager. Search is the only concurrent stage — everything else runs in sequence, and the verifier is a terminal dead end:
```mermaid
flowchart TD
    Q["User Query"] --> MGR["FinancialResearchManager"]
    MGR --> PLAN["PlannerAgent (o3-mini)"]
    PLAN -->|"FinancialSearchPlan"| FAN{"Fan-out N searches"}
    FAN --> S1["SearchAgent"]
    FAN --> S2["SearchAgent"]
    FAN --> SN["SearchAgent"]
    S1 --> W["WriterAgent (gpt-5.2)"]
    S2 --> W
    SN --> W
    W -.-> FA["FundamentalsAgent (.as_tool)"]
    W -.-> RA["RiskAgent (.as_tool)"]
    W --> V["VerifierAgent"]
    V --> OUT["Output"]
```

What Iconsult found
Solid foundation — and Iconsult's knowledge graph traversal identified 4 key opportunities for growth:
| # | Finding | Recommended Pattern | Book Reference |
|---|---------|---------------------|----------------|
| R1 | Verifier flags issues but pipeline terminates — no self-correction | Auto-Healing Agent Resuscitation | Ch. 7, p. 216 |
| R2 | Raw search results pass unfiltered to writer | Hybrid Planner+Scorer | Ch. 12, pp. 387–390 |
| R3 | All agents share same trust level — no capability boundaries | Supervision Tree with Guarded Capabilities | Ch. 5, pp. 142–145 |
| R4 | Zero reliability patterns composed (book recommends 2–3 minimum) | Shared Epistemic Memory + Persistent Instruction Anchoring | Ch. 6, p. 203 |
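Finding R1 — a verifier that flags issues but cannot trigger rework — is exactly what a feedback loop fixes. A minimal sketch of the verify-and-retry shape, with toy stand-in functions rather than the Agents SDK's actual API:

```python
def run_with_verification(write, verify, max_attempts=3):
    """Retry the writer until the verifier passes or attempts run out."""
    feedback = None
    for attempt in range(1, max_attempts + 1):
        draft = write(feedback)          # feedback from the last failed check
        ok, feedback = verify(draft)
        if ok:
            return draft, attempt
    raise RuntimeError(f"unverified after {max_attempts} attempts: {feedback}")

# Toy agents: the writer only cites sources once it receives feedback.
def write(feedback):
    return "report with sources" if feedback else "report"

def verify(draft):
    return ("sources" in draft, "missing source citations")

print(run_with_verification(write, verify))  # ('report with sources', 2)
```

The bounded attempt count matters: without it, a verifier that can never be satisfied turns the feedback edge into an infinite loop.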
Recommended architecture
The natural next evolution — adding a feedback loop, quality gate, shared memory, and retry logic:
```mermaid
flowchart TD
    Q["User Query"] --> SUP["SupervisorManager"]
    SUP --> MEM[("Shared Epistemic Memory")]
    SUP --> PLAN["PlannerAgent"]
    PLAN --> FAN{"Fan-out + Retry Logic"}
    FAN --> S1["SearchAgent"]
    FAN --> S2["SearchAgent"]
    S1 & S2 --> SCR["ScorerAgent (quality gate)"]
    SCR --> W["WriterAgent"]
    W -.-> FA["FundamentalsAgent"]
    W -.-> RA["RiskAgent"]
    W --> V["VerifierAgent"]
    V -->|"issues found"| W
    V -->|"verified"| OUT["Output"]
    MEM -.-> W
    MEM -.-> V
```

How it got there
The consultation followed Iconsult's guided workflow:
1. Read the codebase — Fetched all source files from `manager.py` and `agents/*.py`. Identified the orchestrator pattern in `FinancialResearchManager`, the `.as_tool()` composition, the broad `except Exception: return None` in search, and the terminal verifier.
2. Match concepts — `match_concepts` embedded the project description and deterministically ranked the most relevant patterns: Orchestrator, Planner-Worker, Agent Delegates to Agent, Tool Use, and Supervisor.
2b. Plan — `plan_consultation` assessed complexity and generated an adaptive plan — how many concepts to traverse, whether to use subagents, and which critique steps to include.
3. Traverse the graph — `get_subgraph` explored each seed concept's neighborhood. The `requires` edges revealed that the Supervisor pattern requires Auto-Healing — an opportunity not yet in place. The `complements` edges surfaced Hybrid Planner+Scorer as a natural addition. `log_pattern_assessment` recorded each finding for deterministic scoring.
4. Retrieve book passages — `ask_book`, scoped to the discovered concepts, returned exact citations: chapter numbers, page ranges, and quotes grounding each recommendation.
5. Score + stress test + synthesize — `score_architecture` computed the maturity scorecard from logged assessments. `generate_failure_scenarios` produced concrete resilience scenarios for each opportunity — illustrating how the architecture responds under stress and where it would benefit from additional patterns. Then `render_report` generated the interactive before/after architecture diagram server-side — pulling scores, scenarios, and coverage from the database and merging them with narrative content to produce the complete HTML report with zoom controls, SVG tooltips, and animations. All recommended patterns are complementary — no conflicts detected.
What It Does
Point it at a codebase (or describe your architecture), and it runs a structured consultation: matching concepts, traversing the knowledge graph for prerequisites and conflicts, scoring maturity against a 6-level model, and generating an interactive HTML review with before/after architecture diagrams.
Tools (25)
Consultation workflow:
| Tool | Role | What it does |
|------|------|--------------|
| `match_concepts` | Entry point | Embeds a project description → deterministic concept ranking + tracked session creation |
| `plan_consultation` | Planning | Assesses complexity (simple/moderate/complex) and generates an adaptive step-by-step plan |
| `get_subgraph` | Graph traversal | Priority-queue BFS from seed concepts — discovers alternatives, prerequisites, conflicts, complements |
| `log_pattern_assessment` | Assessment | Records whether each pattern is implemented, partial, missing, or not applicable |
| `ask_book` | Deep context | RAG search against the book — returns passages with chapter, page numbers, and full text |
| | Coverage | Computes concept/relationship coverage, identifies opportunities, optionally diffs two sessions |
| `score_architecture` | Scoring | Deterministic maturity scorecard (L1–L6) from logged pattern assessments; pattern ID aliases bridge KG ↔ maturity model IDs |
| `generate_failure_scenarios` | Resilience analysis | Resilience scenarios for each opportunity — code-grounded or book-grounded, with Ch. 7 recovery chain mapping |
| | Quality | Structural critique with actionable fix suggestions; multi-iteration mode (1–3 passes) with convergence detection |
| `render_report` | Report rendering | Server-side HTML rendering — pulls scores/scenarios/coverage from DB, merges with narrative content, writes complete HTML with CSS/JS/zoom/tooltips |
| | Supervision | Tracks workflow progress across 9 phases, suggests next action with tool + params |
| | Implementation | Phased markdown checklist from consultation results; classifies steps as mechanical or design decisions |
| | Implementation | Retrieves a previously generated plan with progress summary |
| | Implementation | Updates step status (pending/in_progress/completed/skipped); recomputes summary |
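The graph-traversal row can be sketched as a best-first expansion from seed concepts: a priority queue pops the highest-weight edge next, and every relationship encountered is reported. The tiny graph and its weights are invented for illustration; the real traversal runs over the 462-relationship knowledge graph.

```python
import heapq

# edges: node -> list of (neighbor, relation, weight); invented sample data
GRAPH = {
    "Supervisor": [("Auto-Healing", "requires", 0.9),
                   ("Orchestrator", "alternative_to", 0.6)],
    "Auto-Healing": [("Shared Memory", "complements", 0.7)],
    "Orchestrator": [],
    "Shared Memory": [],
}

def get_subgraph(seeds, max_nodes=4):
    # heapq is a min-heap, so negate weights to pop strongest edges first.
    heap = [(0.0, seed, None, None) for seed in seeds]
    visited, edges = set(), []
    while heap and len(visited) < max_nodes:
        _, node, parent, rel = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if parent is not None:
            edges.append((parent, rel, node))
        for nbr, relation, weight in GRAPH.get(node, []):
            if nbr not in visited:
                heapq.heappush(heap, (-weight, nbr, node, relation))
    return edges

print(get_subgraph(["Supervisor"]))
```

The `max_nodes` budget is what makes the traversal adaptive: a simple project can stop after a shallow neighborhood, while a complex one expands further before the frontier is cut off.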
Coordination:
| Tool | What it does |
|------|--------------|
| | Shared key-value state for subagent coordination during traversal |
| | Blackboard Knowledge Hub — typed, versioned facts with conflict detection, confidence scores, and TTL |
| | Event-driven reactivity — emit/poll typed events like `gap_found` and `coverage_threshold_reached` |
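The emit/poll model can be sketched as a cursor-based read over an append-only event log, so each consumer sees every event exactly once. The event names mirror the examples above; the structure itself is illustrative, not the server's implementation.

```python
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._log = []                    # append-only (type, payload) log
        self._cursors = defaultdict(int)  # consumer -> next index to read

    def emit(self, event_type, payload):
        self._log.append((event_type, payload))

    def poll(self, consumer, event_type=None):
        start = self._cursors[consumer]
        self._cursors[consumer] = len(self._log)  # advance past everything seen
        batch = self._log[start:]
        if event_type is not None:
            batch = [e for e in batch if e[0] == event_type]
        return batch

bus = EventBus()
bus.emit("gap_found", {"pattern": "Auto-Healing"})
bus.emit("coverage_threshold_reached", {"coverage": 0.8})
print(bus.poll("scorer", "gap_found"))  # [('gap_found', {'pattern': 'Auto-Healing'})]
print(bus.poll("scorer"))               # [] — cursor already advanced
```

Per-consumer cursors let multiple subagents react to the same event stream independently, without one consumer's poll draining events another still needs.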
Quality & utility:
| Tool | What it does |
|------|--------------|
| | Record user quality score (1–5) and/or feedback with metadata snapshot |
| | Surface quality trends across consultations (avg rating, coverage, distribution) |
| | Browse/filter the full 141-concept catalogue |
| | Schema validation for subagent responses; optional semantic validation against the knowledge graph |
| | Server health + graph stats |
Prompt
| Prompt | What it does |
|--------|--------------|
| | Kick off a full architecture consultation — provide your project context and get the guided workflow |
The Knowledge Graph
141 concepts · 786 sections · 462 relationships · 1,248 concept-section mappings

Relationship types span `uses`, `extends`, `alternative_to`, `component_of`, `requires`, `enables`, `complements`, `specializes`, `precedes`, and `conflicts_with` — discovered through five extraction phases including cross-chapter semantic analysis.
Explore the interactive knowledge graph →
Setup
Prerequisites
Python 3.10+
A MotherDuck account (free tier works)
OpenAI API key (for embeddings used by `ask_book`)
Claude Code (optionally with the visual-explainer skill for ad-hoc diagrams outside consultations)
Database Access
The knowledge graph is hosted on MotherDuck and shared publicly. The server automatically detects whether you own the database or need to attach the public share — no extra configuration needed. Just provide your MotherDuck token and it works.
Install visual-explainer (optional)
The visual-explainer skill is no longer required for consultations — render_report now handles HTML rendering server-side. However, it remains useful for ad-hoc diagrams outside consultations:
```shell
git clone https://github.com/nicobailon/visual-explainer.git ~/.claude/skills/visual-explainer
mkdir -p ~/.claude/commands
cp ~/.claude/skills/visual-explainer/prompts/*.md ~/.claude/commands/
```

Install
```shell
pip install git+https://github.com/marcus-waldman/Iconsult_mcp.git
```

For development:
```shell
git clone https://github.com/marcus-waldman/Iconsult_mcp.git
cd Iconsult_mcp
pip install -e .
```

Environment Variables
```shell
export MOTHERDUCK_TOKEN="your-token"   # Required — database
export OPENAI_API_KEY="sk-..."         # Required — embeddings for ask_book
```

MCP Configuration
Add to your Claude Desktop config (claude_desktop_config.json) or Claude Code settings:
```json
{
  "mcpServers": {
    "iconsult": {
      "command": "iconsult-mcp",
      "env": {
        "MOTHERDUCK_TOKEN": "your-token",
        "OPENAI_API_KEY": "sk-..."
      }
    }
  }
}
```

Verify
```shell
iconsult-mcp --check
```

License
AGPL-3.0 — see LICENSE for details.