Iconsult MCP
Iconsult MCP is an architecture consulting server for multi-agent systems. It analyzes codebases against a knowledge graph of 141 concepts and 462 relationships derived from Agentic Architectural Patterns for Building Multi-Agent Systems, delivering book-grounded recommendations with exact chapter/page citations.
Core Consultation Workflow
- Match concepts: Embed a project description to rank relevant architectural patterns and create a tracked session
- Plan consultation: Generate an adaptive step-by-step plan based on project complexity
- Traverse the knowledge graph: BFS from seed concepts to discover prerequisites, conflicts, alternatives, and complements
- Query the book: RAG search for exact passages with chapter numbers, page references, and section titles
- Log pattern assessments: Record whether patterns are implemented, partial, missing, or not applicable — with code evidence
- Score architecture: Compute a deterministic L1–L6 maturity scorecard with gap analysis and an implementation roadmap
- Generate failure scenarios: Produce cascading failure walkthroughs for missing/partial patterns with recovery recommendations
- Critique the consultation: Structural quality check for workflow completeness, traversal depth, and coverage gaps
- Render an HTML report: Interactive before/after architecture review with SVG diagrams, tooltips, zoom controls, and animations
- Coverage analysis: Compute concept/relationship coverage metrics and diff two consultation sessions
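The scorecard computation above is deterministic, so it can be sketched in a few lines. The snippet below is an illustrative reduction only: the status values match the assessment vocabulary above, but the weights, thresholds, and level names (borrowed from the ratings shown later in the example review) are assumptions, not Iconsult's actual rubric.

```python
from collections import defaultdict

# Illustrative status weights: implemented counts fully, partial counts half.
WEIGHTS = {"implemented": 1.0, "partial": 0.5, "missing": 0.0}

def score_categories(assessments):
    """Roll logged pattern assessments up into per-category maturity levels.

    assessments: list of (category, status) tuples; "not_applicable"
    entries are excluded from the denominator entirely.
    """
    totals = defaultdict(lambda: [0.0, 0])  # category -> [score, count]
    for category, status in assessments:
        if status == "not_applicable":
            continue
        totals[category][0] += WEIGHTS[status]
        totals[category][1] += 1
    levels = {}
    for category, (score, count) in totals.items():
        ratio = score / count if count else 0.0
        if ratio >= 0.8:
            levels[category] = "Established"
        elif ratio >= 0.4:
            levels[category] = "Emerging"
        else:
            levels[category] = "Not Started"
    return levels

levels = score_categories([
    ("Robustness", "missing"),
    ("Robustness", "missing"),
    ("Coordination", "implemented"),
    ("Coordination", "partial"),
])
```

Because every input assessment is an explicit status rather than free text, the same inputs always produce the same scorecard.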
Supervision & Implementation Tracking
- Supervise consultation progress across 9 workflow phases with recommended next steps
- Generate phased implementation checklists classifying steps as mechanical or design decisions
- Track plan step status (pending/in-progress/completed/skipped)
Multi-Agent Coordination
- Shared key-value state store for subagent coordination
- Typed, versioned blackboard facts with conflict detection, confidence scores, and TTL
- Event-driven reactivity (emit/poll typed events like `gap_found`, `coverage_threshold_reached`)
- Schema-validate subagent JSON responses
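A minimal sketch of what a typed, versioned blackboard with TTL and conflict detection can look like. The class and field names are hypothetical illustrations, not Iconsult's API:

```python
import time
from dataclasses import dataclass, field

@dataclass
class Fact:
    key: str
    value: object
    agent: str
    confidence: float
    version: int = 1
    ttl: float = 300.0          # seconds until the fact expires
    created: float = field(default_factory=time.time)

    def expired(self):
        return time.time() - self.created > self.ttl

class Blackboard:
    """Minimal shared blackboard: versioned writes plus conflict detection."""

    def __init__(self):
        self._facts = {}
        self.conflicts = []

    def post(self, fact):
        prev = self._facts.get(fact.key)
        if prev and not prev.expired():
            fact.version = prev.version + 1
            # Two agents disagreeing on a still-live fact is a conflict.
            if prev.agent != fact.agent and prev.value != fact.value:
                self.conflicts.append((fact.key, prev.agent, fact.agent))
        self._facts[fact.key] = fact
        return fact.version

    def get(self, key):
        fact = self._facts.get(key)
        return None if fact is None or fact.expired() else fact

bb = Blackboard()
bb.post(Fact("coverage", 0.42, agent="traverser-1", confidence=0.9))
bb.post(Fact("coverage", 0.55, agent="traverser-2", confidence=0.8))
```

Readers always see the latest non-expired version, and the conflict log gives a supervisor something concrete to arbitrate.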
Quality & Analytics
- Rate consultations (1–5) and surface quality trends across sessions
- Browse and filter the full 141-concept catalogue with definitions
- Health check with graph statistics
Uses Mermaid syntax to generate and render interactive architectural diagrams that visualize agent relationships, system flows, and recommended design improvements.
Integrates with OpenAI's API for generating embeddings to support RAG-based searches of architectural patterns and provides specialized analysis for systems built with the OpenAI Agents SDK.
Iconsult MCP
Architecture consulting for multi-agent systems, grounded in the textbook.
Iconsult is an MCP server that reviews your multi-agent architecture against a knowledge graph of 141 concepts and 462 relationships extracted from Agentic Architectural Patterns for Building Multi-Agent Systems (Arsanjani & Bustos, Packt 2026). Every recommendation comes with chapter numbers, page references, and concrete code-level changes — not abstract advice.
This project was influenced by Piaget's theories of cognitive development in which learning occurs through the adaptation of schemas.
See It In Action
We pointed Iconsult at OpenAI's Financial Research Agent — a 5-stage multi-agent pipeline from their Agents SDK — and asked it to assess architectural maturity.
View the full interactive architecture review →
The agent's current architecture
The Financial Research Agent uses a 5-stage sequential pipeline orchestrated by FinancialResearchManager. Search is the only concurrent stage — everything else runs in sequence, and the verifier is a terminal dead end:
```mermaid
flowchart TD
    User(["User Query"]) --> Manager["FinancialResearchManager"]
    Manager --> Planner["PlannerAgent\no3-mini"]
    Planner -->|"FinancialSearchPlan"| FanOut{"Parallel Fan-Out"}
    FanOut --> S1["SearchAgent 1"]
    FanOut --> S2["SearchAgent 2"]
    FanOut --> SN["SearchAgent N"]
    S1 --> Collect["Collect Results"]
    S2 --> Collect
    SN --> Collect
    Collect --> Writer["WriterAgent\ngpt-5.4"]
    Writer -.->|"as_tool"| Fundamentals["FundamentalsAnalystAgent"]
    Writer -.->|"as_tool"| Risk["RiskAnalystAgent"]
    Fundamentals -.-> Writer
    Risk -.-> Writer
    Writer -->|"FinancialReportData"| Verifier["VerifierAgent\ngpt-5.4"]
    Verifier --> Output(["Print Report"])
```

What Iconsult found
The foundation is solid — and Iconsult's knowledge graph traversal identified key opportunities across 7 categories:
| Category | Rating | Key Finding |
| --- | --- | --- |
| Coordination & Planning | Established | Solid supervisor + agent-as-tool delegation |
| Human-Agent Interaction | Emerging | Agent delegation works; no HITL checkpoints |
| Agent Capabilities | Emerging | WebSearchTool + structured outputs in place |
| Robustness | Not Started | 0% failure chain coverage; no retry, no timeout |
| Explainability | Not Started | No instruction anchoring or fidelity auditing |
| Infrastructure | Not Started | No event system, no auth, no registry |
| Continuous Improvement | Not Started | Verification is informational only |
Recommended architecture
The natural next evolution — adding retry logic, checkpointing, shared memory, and a verification feedback loop:
```mermaid
flowchart TD
    User(["User Query"]) --> Manager["FinancialResearchManager"]
    Manager --> Planner["PlannerAgent\no3-mini"]
    Planner -->|"FinancialSearchPlan"| FanOut{"Parallel Fan-Out"}
    FanOut --> S1["SearchAgent 1"]
    FanOut --> S2["SearchAgent 2"]
    FanOut --> SN["SearchAgent N"]
    S1 --> Collect["Collect Results"]
    S2 --> Collect
    SN --> Collect
    FanOut -.-> WD["Watchdog Timeout\nSupervisor"]:::opportunity
    S1 -.-> RT["Adaptive Retry\n+ Prompt Mutation"]:::opportunity
    S2 -.-> RT
    SN -.-> RT
    Collect --> CP1["Checkpoint\nSearch Results"]:::opportunity
    CP1 --> SharedMem[("Shared Epistemic\nMemory")]:::newpattern
    SharedMem --> Writer["WriterAgent\ngpt-5.4"]
    Writer -.->|"as_tool"| Fundamentals["FundamentalsAnalystAgent"]
    Writer -.->|"as_tool"| Risk["RiskAnalystAgent"]
    Fundamentals -.-> Writer
    Risk -.-> Writer
    Writer -->|"FinancialReportData"| Verifier["VerifierAgent\ngpt-5.4\n+ Scoring Rubric"]:::newpattern
    Verifier -->|"Pass"| Output(["Print Report"])
    Verifier -->|"Fail + Feedback"| Writer
    Verifier -.-> Metrics["Custom Evaluation\nMetrics"]:::opportunity
    classDef opportunity fill:none,stroke:#E74C3C,stroke-dasharray:5 5,color:#E74C3C
    classDef newpattern fill:#27AE60,stroke:#333,color:white
```

How it got there
The consultation followed Iconsult's 7-step guided workflow — view the visual workflow →
| Step | Tool(s) | What happened |
| --- | --- | --- |
| 1. Read codebase | — | Fetched |
| 2. Match concepts | | Embedded the project description (OpenAI |
| 2b. Plan | | Assessed complexity as complex (score 86/100 — 20 concepts, high relationship density). Generated 11-step adaptive plan. Complexity controls traversal depth: simple (3 concepts, 1 hop, 8 steps) → moderate (5 concepts, 2 hops, adds follow-up questions + optional critique, 10 steps) → complex (8 concepts, 2 hops, parallel subagents, second traversal round, mandatory critique, 11 steps). |
| 3. Traverse graph | | 4 parallel subagents explored concept clusters across two traversal rounds (39 nodes, 45 edges). Logged 20 pattern assessments (7 implemented, 3 partial, 7 missing, 3 N/A). Emitted |
| 4. Retrieve passages | | Book passages scoped to discovered concepts — returned chapter numbers, page ranges, and quotes grounding each recommendation. |
| 5. Coverage + Score + Stress test | | |
| 5b. Critique | | No LLM — 7 rule-based checks against fixed thresholds (workflow completeness, traversal depth >= 3, assessments >= 5, coverage >= 50%, critical edges examined, etc.). Flagged 2 issues; backfilled 6 unexplored concepts. |
| 6. Render report | | Generated the interactive HTML report server-side — scores, scenarios, and coverage pulled from DB, merged with narrative content. |
| 7. Implementation plan | | Offered step-by-step phased checklist (mechanical code changes vs. design decisions). |
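Step 2b's gating can be pictured as a simple threshold lookup. The per-tier parameters below come straight from the plan description in the table; the score cut-offs themselves are assumptions for illustration.

```python
# Tier parameters drawn from the consultation plan above; the
# numeric score thresholds are illustrative assumptions.
TIERS = [
    (0, "simple", {"seed_concepts": 3, "max_hops": 1, "plan_steps": 8}),
    (40, "moderate", {"seed_concepts": 5, "max_hops": 2, "plan_steps": 10}),
    (70, "complex", {"seed_concepts": 8, "max_hops": 2, "plan_steps": 11}),
]

def plan_for(complexity_score):
    """Map a 0-100 complexity score to a consultation tier and its parameters."""
    tier_name, params = TIERS[0][1], TIERS[0][2]
    for threshold, name, tier_params in TIERS:
        if complexity_score >= threshold:
            tier_name, params = name, tier_params
    return tier_name, params

tier, params = plan_for(86)  # the example consultation scored 86/100
```

A score of 86 lands in the top tier, which is why the example consultation ran 8 seed concepts, parallel subagents, and an 11-step plan.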
What It Does
Point it at a codebase (or describe your architecture), and it runs a structured consultation: matching concepts, traversing the knowledge graph for prerequisites and conflicts, scoring maturity against a category-based rubric (7 categories × 3 levels from Ch. 12), and generating an interactive HTML review with before/after architecture diagrams.
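The first of those steps, concept matching, typically reduces to cosine-similarity ranking over embeddings. Here is a sketch with toy 2-D vectors standing in for real embedding vectors; the function names are illustrative, not the server's internals.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def rank_concepts(query_vec, concept_vecs, top_k=3):
    """Rank concepts by similarity to the query embedding.

    Ties are broken alphabetically so the ranking is deterministic.
    """
    scored = sorted(
        ((cosine(query_vec, vec), name) for name, vec in concept_vecs.items()),
        key=lambda sv: (-sv[0], sv[1]),
    )
    return [name for _, name in scored[:top_k]]

# Toy 2-D stand-ins for real embedding vectors.
concept_vecs = {
    "supervisor": [1.0, 0.0],
    "blackboard": [0.0, 1.0],
    "retry": [0.9, 0.1],
}
ranking = rank_concepts([1.0, 0.1], concept_vecs)
```

The deterministic tie-break matters for a tracked session: re-running the same description against the same graph should always seed the same consultation.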
Tools (25)
Consultation workflow:
| Tool | Role | What it does |
| --- | --- | --- |
| | Entry point | Embeds a project description → deterministic concept ranking + |
| | Planning | Assesses complexity (simple/moderate/complex) and generates an adaptive step-by-step plan |
| | Graph traversal | Priority-queue BFS from seed concepts — discovers alternatives, prerequisites, conflicts, complements |
| | Assessment | Records whether each pattern is implemented, partial, missing, or not applicable |
| | Deep context | RAG search against the book — returns passages with chapter, page numbers, and full text |
| | Coverage | Computes concept/relationship coverage, identifies opportunities, optionally diffs two sessions |
| | Scoring | Category-based maturity scorecard (7 categories × 3 levels) from logged pattern assessments; pattern ID aliases bridge KG ↔ rubric IDs |
| | Resilience analysis | Resilience scenarios for each opportunity — code-grounded or book-grounded, with Ch. 7 recovery chain mapping |
| | Quality | Structural critique with actionable fix suggestions; multi-iteration mode (1-3 passes) with convergence detection |
| | Report rendering | Server-side HTML rendering — pulls scores/scenarios/coverage from DB, merges with narrative content, writes complete HTML with CSS/JS/zoom/tooltips |
| | Supervision | Tracks workflow progress across 9 phases, suggests next action with tool + params |
| | Implementation | Phased markdown checklist from consultation results; classifies steps as mechanical or design decision |
| | Implementation | Retrieve a previously generated plan with progress summary |
| | Implementation | Update step status (pending/in_progress/completed/skipped); recomputes summary |
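The priority-queue BFS used for graph traversal can be sketched as follows. The edge-type priorities and the toy graph are illustrative assumptions, not the server's actual weighting:

```python
import heapq

# Illustrative edge-type priorities: lower values are explored earlier.
EDGE_PRIORITY = {"requires": 0, "conflicts_with": 1, "alternative_to": 2, "complements": 3}

def traverse(graph, seeds, max_hops=2):
    """Priority-queue BFS from seed concepts over typed edges.

    graph: {concept: [(relation, neighbor), ...]}
    Returns each discovered concept mapped to the hop count at which
    it was first reached.
    """
    heap = [(0, 0, seed) for seed in seeds]  # (priority, hops, concept)
    heapq.heapify(heap)
    discovered = {}
    while heap:
        priority, hops, concept = heapq.heappop(heap)
        if concept in discovered or hops > max_hops:
            continue
        discovered[concept] = hops
        for relation, neighbor in graph.get(concept, []):
            if neighbor not in discovered:
                heapq.heappush(
                    heap,
                    (priority + EDGE_PRIORITY.get(relation, 4), hops + 1, neighbor),
                )
    return discovered

graph = {
    "supervisor": [("requires", "task-routing"), ("complements", "blackboard")],
    "task-routing": [("alternative_to", "broadcast")],
}
found = traverse(graph, ["supervisor"])
```

The priority ordering means hard constraints such as prerequisites surface before softer relationships, while `max_hops` is where the complexity tier (1 hop for simple projects, 2 for moderate and complex) would plug in.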
Coordination:
| Tool | What it does |
| --- | --- |
| | Shared key-value state for subagent coordination during traversal |
| | Blackboard Knowledge Hub — typed, versioned facts with conflict detection, confidence scores, and TTL |
| | Event-driven reactivity — emit events like |
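The emit/poll pattern behind the event tool is easy to picture: each consumer keeps a cursor into an append-only event log, so polling returns only what that consumer has not yet seen. A minimal sketch with a hypothetical class, not the tool's API:

```python
class EventBus:
    """Minimal emit/poll event channel with per-consumer cursors."""

    def __init__(self):
        self._events = []
        self._cursors = {}

    def emit(self, event_type, payload=None):
        """Append a typed event to the shared log."""
        self._events.append({"type": event_type, "payload": payload or {}})

    def poll(self, consumer, event_type=None):
        """Return events this consumer has not seen yet, oldest first."""
        start = self._cursors.get(consumer, 0)
        self._cursors[consumer] = len(self._events)
        fresh = self._events[start:]
        if event_type is not None:
            fresh = [e for e in fresh if e["type"] == event_type]
        return fresh

bus = EventBus()
bus.emit("gap_found", {"concept": "retry"})
bus.emit("coverage_threshold_reached", {"coverage": 0.5})
```

Because cursors are per consumer, a supervisor and several subagents can react to the same `gap_found` event independently without consuming it for each other.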
Quality & utility:
| Tool | What it does |
| --- | --- |
| | Record user quality score (1-5) and/or feedback with metadata snapshot |
| | Surface quality trends across consultations (avg rating, coverage, distribution) |
| | Browse/filter the full 141-concept catalogue |
| | Schema validation for subagent responses; optional semantic validation against the knowledge graph |
| | Server health + graph stats |
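At its core, schema validation of a subagent response is a check that required fields exist and carry the right types. A stdlib-only sketch; the schema shown is hypothetical, not the server's actual response contract:

```python
def validate_response(response, schema):
    """Check a subagent JSON response against a minimal field schema.

    schema: {field_name: expected_type}; returns a list of problems,
    so an empty list means the response is valid.
    """
    problems = []
    for name, expected in schema.items():
        if name not in response:
            problems.append(f"missing field: {name}")
        elif not isinstance(response[name], expected):
            problems.append(
                f"{name}: expected {expected.__name__}, "
                f"got {type(response[name]).__name__}"
            )
    return problems

# Hypothetical shape for a pattern-assessment response.
SCHEMA = {"concept": str, "status": str, "confidence": float}
issues = validate_response({"concept": "retry", "status": "missing"}, SCHEMA)
```

Returning a problem list rather than raising lets the orchestrator decide whether to retry the subagent, repair the response, or log the failure.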
Prompt
| Prompt | What it does |
| --- | --- |
| | Kick off a full architecture consultation — provide your project context and get the guided workflow |
The Knowledge Graph
141 concepts · 786 sections · 462 relationships · 1,248 concept-section mappings

Relationship types span `uses`, `extends`, `alternative_to`, `component_of`, `requires`, `enables`, `complements`, `specializes`, `precedes`, and `conflicts_with`.
How it was built
The graph was extracted from the book in 4 phases using Claude and OpenAI embeddings:
| Phase | What it does | Output |
| --- | --- | --- |
1a — Parse Index | Extract concept entries from the book's index (OCR-corrected) | 138 concepts with page references |
1b — Parse Book | Segment the book into sections by heading structure | 786 sections across 16 chapters |
2 — Tag Concepts | Claude maps each concept to relevant sections using index page numbers + semantic context | 1,248 concept-section mappings |
3a — Explicit Relationships | Claude identifies relationships between concepts within each chapter | Typed edges (uses, requires, extends, etc.) |
3b — Semantic Pairs | OpenAI embeddings find similar concepts across chapters; Claude validates and types the relationship | Cross-chapter semantic edges |
3c–3e — Cross-Chapter | Three additional passes: knowledge-based, cross-chapter semantic, and summary-based structural relationships | 462 total relationships at avg 0.695 confidence |
4 — Build Graph | Deduplicate edges, validate confidence thresholds, compute final embeddings from section content | Production-ready graph on MotherDuck |
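Phase 4's deduplication and confidence filtering can be sketched directly. The 0.6 default threshold and the sample edges here are illustrative; the pipeline's actual thresholds live in docs/development.md:

```python
def build_graph(raw_edges, min_confidence=0.6):
    """Deduplicate typed edges and drop those below a confidence threshold.

    raw_edges: (source, relation, target, confidence) tuples. When the
    same (source, relation, target) triple appears more than once, the
    highest confidence seen is kept.
    """
    best = {}
    for src, rel, dst, conf in raw_edges:
        if conf < min_confidence:
            continue
        key = (src, rel, dst)
        if conf > best.get(key, 0.0):
            best[key] = conf
    return [(s, r, d, c) for (s, r, d), c in best.items()]

edges = build_graph([
    ("supervisor", "requires", "task-routing", 0.9),
    ("supervisor", "requires", "task-routing", 0.7),   # duplicate, lower confidence
    ("retry", "complements", "timeout", 0.4),          # below threshold, dropped
])
```

With five extraction passes feeding the graph, this kind of merge is what keeps the final edge set at a single confidence-weighted entry per relationship.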
See docs/development.md for pipeline commands and technical details.
Explore the interactive knowledge graph →
Setup
Prerequisites
- Python 3.10+
- A MotherDuck account (free tier works)
- OpenAI API key (for embeddings used by `ask_book`)
- Claude Code (optionally with the visual-explainer skill for ad-hoc diagrams outside consultations)
Database Access
The knowledge graph is hosted on MotherDuck and shared publicly. The server automatically detects whether you own the database or need to attach the public share — no extra configuration needed. Just provide your MotherDuck token and it works.
Install visual-explainer (optional)
The visual-explainer skill is no longer required for consultations — render_report now handles HTML rendering server-side. However, it remains useful for ad-hoc diagrams outside consultations:
```shell
git clone https://github.com/nicobailon/visual-explainer.git ~/.claude/skills/visual-explainer
mkdir -p ~/.claude/commands
cp ~/.claude/skills/visual-explainer/prompts/*.md ~/.claude/commands/
```

Install

```shell
pip install git+https://github.com/marcus-waldman/Iconsult_mcp.git
```

For development:

```shell
git clone https://github.com/marcus-waldman/Iconsult_mcp.git
cd Iconsult_mcp
pip install -e .
```

Environment Variables

```shell
export MOTHERDUCK_TOKEN="your-token"   # Required — database
export OPENAI_API_KEY="sk-..."         # Required — embeddings for ask_book
```

MCP Configuration
Add to your Claude Desktop config (claude_desktop_config.json) or Claude Code settings:
```json
{
  "mcpServers": {
    "iconsult": {
      "command": "iconsult-mcp",
      "env": {
        "MOTHERDUCK_TOKEN": "your-token",
        "OPENAI_API_KEY": "sk-..."
      }
    }
  }
}
```

Verify

```shell
iconsult-mcp --check
```

License
AGPL-3.0 — see LICENSE for details.