# Adaptive MCP Assistant - Activity #2
A production-ready multi-step agent with intelligent tool selection, quality validation, and MCP integration.
## 🎯 Features

✅ **Intelligent Tool Selection** - Pattern-based classification with sophisticated regex (negative lookahead)
✅ **Multi-Step Workflows** - Declarative workflow configuration for each task type
✅ **Quality Validation** - Validates analysis quality before proceeding (skips for dice_action)
✅ **Retry Logic** - Automatically retries low-quality responses with feedback
✅ **Content Synthesis** - Uses Claude to create coherent answers from tool results
✅ **Professional Formatting** - Formatted output with metadata and processing steps
✅ **Detailed Logging** - Shows which tools are called with parameters and results
✅ **Dynamic Adaptation** - Adds web search to general queries when current info is needed
## 📁 Architecture
```
langgraph_app/
├── config/                       # Configuration module
│   ├── __init__.py               # Clean exports
│   ├── settings.py               # AgentConfig, config, PROJECT_ROOT
│   └── task_mappings.py          # Task patterns & workflows
│
├── utils/                        # Utility functions
│   ├── __init__.py
│   ├── query_classifier.py       # Pattern matching, should_use_web_search
│   └── tool_logger.py            # Detailed logging utilities
│
├── nodes/                        # Workflow nodes (6 nodes)
│   ├── __init__.py
│   ├── analyze_query.py          # Query analysis & planning
│   ├── tool_executor.py          # Intelligent tool execution
│   ├── synthesize_with_claude.py # Content synthesis
│   ├── quality_check.py          # Quality validation
│   ├── retry.py                  # Retry handling
│   └── format_output.py          # Output formatting
│
├── routing/                      # Conditional routing logic
│   ├── __init__.py
│   ├── quality_router.py         # Route by quality score
│   └── retry_router.py           # Route by retry count
│
├── state.py                      # ResearchState schema
├── agent.py                      # Production workflow builder
└── README.md                     # This file
```
## 🔄 Complete Workflow
### Visual Diagram
```mermaid
graph TD
START([START]) --> A[analyze_query_node]
A -->|Creates execution plan| B[tool_executor_node]
B -->|Executes tools| C[synthesize_with_claude_node]
C -->|Creates coherent answer| D[quality_check_node]
D -->|pass: score ≥ 7.0| E[format_output_node]
D -->|retry: score < 7.0| F[retry_handler_node]
F -->|retry: count < 2| A
F -->|give_up: count ≥ 2| E
E --> END([END])
style A fill:#e1f5ff,stroke:#0066cc,stroke-width:2px,color:#000
style B fill:#fff4e1,stroke:#cc8800,stroke-width:2px,color:#000
style C fill:#f0e1ff,stroke:#8800cc,stroke-width:2px,color:#000
style D fill:#ffe1e1,stroke:#cc0000,stroke-width:2px,color:#000
style F fill:#fff0e1,stroke:#cc6600,stroke-width:2px,color:#000
style E fill:#e1ffe1,stroke:#00cc00,stroke-width:2px,color:#000
```
### Workflow Description
```
START
  ↓
1. analyze_query_node
   - Classifies query (dice_action, research, general)
   - Selects tools based on task_mappings
   - Checks should_use_web_search() for dynamic adaptation
   - Creates execution plan
  ↓
2. tool_executor_node
   - Executes workflow steps from plan
   - Logs each tool call with parameters
   - Returns results
  ↓
3. synthesize_with_claude_node
   - Creates coherent answer from tool results
   - Skips for dice_action (results are final)
   - Uses creative mode for research
  ↓
4. quality_check_node
   - Validates answer quality (0-10)
   - Skips for dice_action (auto-pass 10.0)
  ↓
(conditional routing)
  ├─ pass?  → format_output_node → END
  └─ retry? → retry_handler_node → analyze_query_node (re-analyze)
  ↓
5. format_output_node
   - Formats with headers and metadata
   - Includes processing steps
   - Shows workflow execution details
  ↓
END
```
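The two conditional routes described above can be sketched as plain functions over the workflow state. This is an illustrative sketch, not the actual code: the real implementations live in `routing/quality_router.py` and `routing/retry_router.py`, and the thresholds come from the config (`quality_threshold=7.0`, `max_retries=2`).

```python
# Sketch of the conditional routing; thresholds mirror the defaults
# AGENT_QUALITY_THRESHOLD=7.0 and AGENT_MAX_RETRIES=2.

def check_quality(state: dict) -> str:
    """Route to 'pass' when the answer meets the quality bar, else 'retry'."""
    return "pass" if state["quality_score"] >= 7.0 else "retry"

def check_retry(state: dict) -> str:
    """Allow another analysis pass only while retries remain."""
    return "retry" if state["retry_count"] < 2 else "give_up"
```

LangGraph would use these as conditional-edge functions: `check_quality` after `quality_check_node`, `check_retry` after `retry_handler_node`.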
## 🎨 Task Types & Workflows
### dice_action
**Patterns:**
- `roll.*dice`, `dice.*roll`, `\d+d\d+`
**Workflow:**
```
Step 1: roll_dice → Returns actual dice rolls
```
**Example:** "roll a dice 5 times" → `ROLLS: 4, 3, 3, 1, 1`
### research
**Patterns:**
- `research|study|analyze|investigate`
- `latest|recent|current|news|update`
- `what is.*(latest|current|comprehensive|detailed)` (depth indicators)
- `compare|contrast|difference between`
**Workflow:**
```
Step 1: web_search → Search for current information
Step 2: ask_specialized_claude(summarize) → Summarize findings
Step 3: ask_specialized_claude(explain) → Explain simply
```
**Example:** "What is the latest AI news?" → Web search + Summary + Explanation
### general
**Patterns:**
- `what is\b(?!.*(latest|current|recent))` (negative lookahead!)
- `define|definition`
- `^(who|what|when|where|why|how)\s`
**Workflow:**
```
Step 1: ask_specialized_claude(general) → Direct answer
```
**Dynamic:** Adds web_search if `should_use_web_search()` detects current info keywords
**Example:** "What is MCP?" → Just Claude (no web search)
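The classification patterns listed above can be sketched as a first-match lookup. This is a minimal sketch assuming the patterns shown in this section; the real implementation lives in `utils/query_classifier.py`.

```python
import re

# Patterns copied from the task-type sections above; order matters,
# since the first matching task type wins.
PATTERNS = {
    "dice_action": [r"roll.*dice", r"dice.*roll", r"\d+d\d+"],
    "research": [
        r"research|study|analyze|investigate",
        r"latest|recent|current|news|update",
        r"what is.*(latest|current|comprehensive|detailed)",
        r"compare|contrast|difference between",
    ],
    "general": [
        r"what is\b(?!.*(latest|current|recent))",  # negative lookahead
        r"define|definition",
        r"^(who|what|when|where|why|how)\s",
    ],
}

def classify_query(query: str) -> str:
    """Return the first task type whose patterns match the query."""
    q = query.lower()
    for task_type, patterns in PATTERNS.items():
        if any(re.search(p, q) for p in patterns):
            return task_type
    return "general"  # safe default when nothing matches
```

Because `dice_action` is checked before `research`, and `research` before `general`, "What is the latest AI news?" is routed to research rather than falling through to the plain `what is` rule.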
## 🚀 Usage
### Basic Usage
```python
from langgraph_app import create_research_agent
from langchain_core.messages import HumanMessage

# Create agent with context manager
async with create_research_agent() as agent:
    result = await agent.ainvoke({
        "query": "What is quantum computing?",
        "messages": [HumanMessage(content="What is quantum computing?")],
        "task_type": "general",
        "selected_tools": [],
        "workflow_plan": [],
        "processing_steps": [],
        "question_type": "factual",
        "tools_used": [],
        "workflow_steps": [],
        "search_results": "",
        "analysis": "",
        "quality_score": 0.0,
        "retry_count": 0,
        "final_answer": "",
        "error": None,
    })

    # Get formatted answer
    print(result["final_answer"])

# MCP connection automatically closed
```
### Run Demo
```bash
# Run demo with example queries
uv run python examples/research_demo.py
# Run single query
uv run python examples/research_demo.py -q "What is AI?"
# Interactive mode
uv run python examples/research_demo.py -i
```
## ⚙️ Configuration
Configure via environment variables with `AGENT_` prefix:
```bash
# .env
AGENT_MODEL_NAME=openai:gpt-4o
AGENT_TEMPERATURE=0.7
AGENT_MAX_TOKENS=2048
AGENT_QUALITY_THRESHOLD=7.0
AGENT_MAX_RETRIES=2
```
Or in code:
```python
from langgraph_app.config import config
config.quality_threshold = 8.0
config.max_retries = 3
```
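As an illustration, `settings.py` might map the `AGENT_`-prefixed variables onto a config object roughly like this. The field names follow the variables shown above, but the shape is an assumption; the real module may use a settings library instead.

```python
import os
from dataclasses import dataclass

@dataclass
class AgentConfig:
    # Defaults mirror the example .env values above
    model_name: str = "openai:gpt-4o"
    temperature: float = 0.7
    max_tokens: int = 2048
    quality_threshold: float = 7.0
    max_retries: int = 2

    @classmethod
    def from_env(cls) -> "AgentConfig":
        """Override each default from AGENT_<FIELD_NAME>, if set."""
        cfg = cls()
        for name, default in vars(cfg).items():
            raw = os.environ.get(f"AGENT_{name.upper()}")
            if raw is not None:
                # Coerce the string to the field's default type
                setattr(cfg, name, type(default)(raw))
        return cfg
```

Coercing via `type(default)(raw)` keeps the env parsing declarative: adding a field to the dataclass automatically adds its environment variable.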
## 🔧 Design Principles
### Single Responsibility Principle (SRP)
- Each node has ONE job
- Each file has ONE clear purpose
- Separation of analysis, execution, synthesis, validation, formatting
### Open/Closed Principle (OCP)
- Easy to add new task types (just update config/task_mappings.py)
- Easy to add new workflow steps
- Extensible through configuration
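For illustration, `config/task_mappings.py` might take a shape like this, where adding a task type is a config change rather than a code change. The names and step format are assumptions, not the actual file contents.

```python
# Hypothetical shape of config/task_mappings.py: each task type maps to
# an ordered list of workflow steps executed by tool_executor_node.
TASK_WORKFLOWS = {
    "dice_action": [
        {"tool": "roll_dice"},
    ],
    "research": [
        {"tool": "web_search"},
        {"tool": "ask_specialized_claude", "action": "summarize"},
        {"tool": "ask_specialized_claude", "action": "explain"},
    ],
    "general": [
        {"tool": "ask_specialized_claude", "action": "general"},
    ],
}
```

A new task type then only needs a pattern entry and a workflow list here; no node code changes.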
### DRY (Don't Repeat Yourself)
- Shared state schema
- Reusable routing functions
- Centralized logging utilities
- Declarative workflow configuration
### KISS (Keep It Simple, Stupid)
- Small, focused files (20-80 lines each)
- Clear naming conventions
- Configuration over code
## 🧪 Testing
```python
# Test query classification
from langgraph_app.utils import classify_query
assert classify_query("roll a dice") == "dice_action"
assert classify_query("latest AI news") == "research"
assert classify_query("What is Python?") == "general"
# Test should_use_web_search
from langgraph_app.utils import should_use_web_search
assert should_use_web_search("latest news") == True
assert should_use_web_search("What is Python?") == False
# Test routing logic
from langgraph_app.routing import check_quality
state = {"quality_score": 8.0, "retry_count": 0}
assert check_quality(state) == "pass"
```
## 🔌 MCP Tools Used
### From Activity #1:
1. **web_search** (Tavily API)
- Used by: research workflow (step 1)
- Purpose: Gather current web information
- Logged with query parameters
2. **ask_specialized_claude** (Anthropic API)
- Used by: All workflows with different profiles
- Profiles: general, summarize, explain, creative
- Logged with task_type and action
3. **roll_dice** (D&D dice roller)
- Used by: dice_action workflow
- Purpose: Generate random dice rolls
- Logged with notation and num_rolls
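The per-call logging for these tools might look like the sketch below. This is illustrative only; the real helpers live in `utils/tool_logger.py` and the log format is an assumption.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("tool_executor")

def log_tool_call(tool_name: str, params: dict, result: str) -> str:
    """Log one tool invocation with its parameters and result."""
    args = ", ".join(f"{k}={v!r}" for k, v in params.items())
    line = f"[TOOL] {tool_name}({args}) -> {result}"
    logger.info(line)
    return line
```

For example, the dice workflow would produce a line like `[TOOL] roll_dice(notation='1d6', num_rolls=5) -> ...`, which is what makes the tool executor's behavior easy to audit.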
## 📚 Learning Outcomes
This implementation demonstrates:
✅ **LangGraph Fundamentals**
- StateGraph construction
- Conditional edges
- Multi-node workflows

✅ **MCP Integration**
- Connecting to MCP servers
- Loading and using MCP tools
- Tool orchestration with logging

✅ **Production Patterns**
- Sophisticated pattern matching (negative lookahead)
- Error handling and retry logic
- Quality validation
- Detailed observability

✅ **Software Engineering**
- Modular architecture (config/, nodes/, utils/, routing/)
- Design principles (SOLID)
- Declarative configuration
- Clean code practices
## 🌟 Advanced Features
### Sophisticated Pattern Matching
```python
# Negative lookahead for precise classification
r"what is\b(?!.*(latest|current|recent))" # Matches "what is X" but not "what is the latest X"
# Conditional depth matching
r"what is.*\b(latest|current|comprehensive|detailed)" # Only research if depth indicators present
```
### Dynamic Web Search
```python
# Automatically adds web search to general queries when needed
if task_type == "general" and should_use_web_search(query):
    # Prepend a web_search step to the plan (step shape illustrative)
    workflow_plan.insert(0, {"tool": "web_search"})
```
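A minimal `should_use_web_search()` could look like this. The keyword list is an assumption for illustration; the real one lives in `utils/query_classifier.py`.

```python
import re

# Hypothetical "current information" keywords; the actual list may differ.
CURRENT_INFO = re.compile(r"\b(latest|recent|current|news|today|update)\b", re.IGNORECASE)

def should_use_web_search(query: str) -> bool:
    """True when the query asks for information that may be time-sensitive."""
    return bool(CURRENT_INFO.search(query))
```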
### Tool Name Clarity
Logged tool names include the task profile for clarity:
```
Tools: web_search, ask_specialized_claude(summarize), ask_specialized_claude(explain)
```
## 📊 Example Outputs
### Dice Action
```
Query: "roll a dice 5 times"
Task Type: DICE_ACTION
Tools: roll_dice
Result: Roll 1: ROLLS: 4 -> RETURNS: 4
Roll 2: ROLLS: 3 -> RETURNS: 3
...
```
### Research
```
Query: "What is the latest AI news?"
Task Type: RESEARCH
Tools: web_search, ask_specialized_claude(summarize), ask_specialized_claude(explain)
Result: [Comprehensive answer with current information]
```
### General
```
Query: "What is MCP?"
Task Type: GENERAL
Tools: ask_specialized_claude(general)
Result: [Direct answer without web search]
```
## 🎨 Visualization
### Generate PNG Diagram
Run the visualization script to generate a PNG diagram:
```bash
uv run python visualize_graph.py
```
This creates `langgraph_app/workflow_diagram.png` showing the complete graph structure.
**Note:** Requires graphviz: `brew install graphviz`
### LangGraph Studio (Interactive Visualization)
For interactive visualization and debugging:
1. **Install the LangGraph CLI (provides `langgraph dev`):**
```bash
pip install -U "langgraph-cli[inmem]"
```
2. **Run the dev server:**
```bash
langgraph dev
```
3. **Open Studio in your browser:** `langgraph dev` prints the local API URL and a LangGraph Studio link on startup
4. **Test queries interactively:**
- Type: "roll a dice 5 times"
- Watch nodes execute in real-time
- Inspect state at each step
- See tool calls and results
**Features:**
- ✅ Visual graph with execution highlighting
- ✅ State inspection at each node
- ✅ Step-through debugging
- ✅ Interactive query testing
- ✅ Perfect for screenshots and demos
## 🔗 Related
- **Activity #1**: MCP Server (`server.py`, `tools/`, `core/`)
- **MCP Docs**: https://modelcontextprotocol.io
- **LangGraph Docs**: https://langchain-ai.github.io/langgraph
- **LangChain MCP Adapters**: https://github.com/langchain-ai/langchain-mcp-adapters
- **LangGraph Studio**: https://github.com/langchain-ai/langgraph-studio