# Model Selection Configuration
This document explains how Amicus selects models for different tasks to optimize cost and performance.
## Current System
The `get_best_model(task_description)` tool selects models based on keyword matching in task descriptions.
## Configuration
Edit `.amicus/config.json` to configure model selection:
```json
{
  "model_selection": {
    "default_model": "claude-sonnet-4.5",
    "keywords": {
      "simple": "claude-haiku-4.5",
      "quick": "claude-haiku-4.5",
      "trivial": "claude-haiku-4.5",
      "test": "claude-haiku-4.5",
      "docs": "claude-haiku-4.5",
      "fix": "claude-sonnet-4.5",
      "implement": "claude-sonnet-4.5",
      "research": "claude-sonnet-4.5",
      "review": "claude-sonnet-4.5",
      "refactor": "claude-sonnet-4.5",
      "design": "claude-opus-4.5",
      "architecture": "claude-opus-4.5",
      "complex": "claude-opus-4.5",
      "critical": "claude-opus-4.5"
    }
  }
}
```
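A minimal sketch of how this lookup could behave (the `resolve_model` helper below is illustrative, not the actual Amicus implementation; it assumes case-insensitive substring matching with the first matching keyword winning):
```python
import json

# Illustrative sketch only -- not the actual Amicus implementation.
# Assumes case-insensitive substring matching, first match wins.
def resolve_model(task_description, config_path=".amicus/config.json"):
    with open(config_path) as f:
        selection = json.load(f)["model_selection"]
    description = task_description.lower()
    for keyword, model in selection["keywords"].items():
        if keyword in description:
            return model
    return selection["default_model"]

resolve_model("Quick bug fix")      # -> "claude-haiku-4.5"
resolve_model("Summarize meeting")  # -> "claude-sonnet-4.5" (default)
```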
## Model Tiers
### Fast/Cheap: Claude Haiku 4.5
- **Use for**: Simple tasks, quick fixes, tests, documentation updates
- **Cost**: ~$0.25 per million input tokens
- **Speed**: Fastest responses
- **Keywords**: simple, quick, trivial, test, docs
### Standard: Claude Sonnet 4.5
- **Use for**: General development, bug fixes, reviews, research
- **Cost**: ~$3 per million input tokens
- **Speed**: Balanced
- **Keywords**: fix, implement, review, refactor, research
### Premium: Claude Opus 4.5
- **Use for**: Complex architecture, critical design decisions
- **Cost**: ~$15 per million input tokens
- **Speed**: Slower but highest quality
- **Keywords**: design, architecture, complex, critical
## Usage
### Manual Model Selection
```python
# Keyword matching on the task description drives selection
model = amicus.get_best_model("Simple test file creation")
# Returns: "claude-haiku-4.5"
model = amicus.get_best_model("Complex architecture design for distributed system")
# Returns: "claude-opus-4.5"
```
### Automatic Selection
When spawning subagents with the `task` tool, Amicus can automatically select an appropriate model from keywords in the `description` field:
```python
# Low-cost task
task(
    agent_type="developer",
    description="Quick bug fix",
    prompt="Fix typo in README.md"
)
# Automatically uses Haiku

# Standard task
task(
    agent_type="developer",
    description="Feature implementation",
    prompt="Implement user authentication"
)
# Automatically uses Sonnet

# Complex task
task(
    agent_type="architect",
    description="System architecture design",
    prompt="Design microservices architecture for scale"
)
# Automatically uses Opus
```
## Cost Optimization Strategies
### 1. Task Granularity
Break large tasks into smaller subtasks with appropriate models:
- Planning (Opus) → Implementation (Sonnet) → Testing (Haiku)
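For example, a feature could be split into three `task` calls whose descriptions carry the right tier keywords (the agent types, descriptions, and prompts below are hypothetical):
```python
# Phase 1: planning -- "design"/"architecture" route to Opus
task(
    agent_type="architect",
    description="Architecture design for payment service",
    prompt="Produce a design doc for the payment service"
)
# Phase 2: build -- "implement" routes to Sonnet
task(
    agent_type="developer",
    description="Implement payment service endpoints",
    prompt="Implement the payment service endpoints and data model"
)
# Phase 3: verification -- "quick"/"test" route to Haiku
task(
    agent_type="developer",
    description="Quick tests for payment service",
    prompt="Write a test suite covering the new endpoints"
)
```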
### 2. Cascading
Start with a cheaper model and escalate if needed:
- Try Haiku → If insufficient, try Sonnet → If still insufficient, use Opus
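In code, cascading might look like the loop below (a hypothetical sketch: it assumes the `model` override parameter described under "Default Assignments by Role", plus an `is_sufficient` acceptance check that you supply yourself):
```python
# Hypothetical escalation loop. Assumes the `model` override parameter
# and a user-defined is_sufficient() quality check -- neither is
# guaranteed by Amicus itself.
def run_with_escalation(description, prompt):
    for model in ("claude-haiku-4.5", "claude-sonnet-4.5", "claude-opus-4.5"):
        result = task(
            agent_type="developer",
            description=description,
            prompt=prompt,
            model=model  # explicit override, cheapest tier first
        )
        if is_sufficient(result):  # your own acceptance check
            return result
    return result  # Opus output is the final fallback
```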
### 3. Batch Simple Tasks
Group simple tasks into a single Haiku agent call rather than spawning one agent per task.
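For instance, several small documentation edits can travel in one prompt instead of one spawn each (the file names below are hypothetical):
```python
# One Haiku-tier agent handles the whole batch in a single call.
task(
    agent_type="developer",
    description="Simple quick docs batch update",
    prompt=(
        "Apply these small documentation fixes:\n"
        "1. Fix the typo in README.md\n"
        "2. Update the install command in INSTALL.md\n"
        "3. Refresh the version number in CHANGELOG.md"
    )
)
```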
### 4. Monitoring
Track model usage with metrics:
```python
# Metrics automatically track model usage
amicus.query_metrics(metric="node.registered")
# Shows which models were used for which tasks
```
## Example Cost Analysis
### Inefficient Approach
```
Task: "Update 5 documentation files"
Model: Claude Opus 4.5 (premium)
Cost: ~$0.15 for the task
Time: 3 minutes
```
### Efficient Approach
```
Task: "Update 5 documentation files"
Model: Claude Haiku 4.5 (fast/cheap)
Cost: ~$0.01 for the task
Time: 30 seconds
Savings: 93% cost reduction, 6x faster
```
### Complex Task Example
```
Task: "Design and implement new feature"
Phase 1: Architecture (Opus)
- Cost: $0.30
- Deliverable: Design doc
Phase 2: Implementation (Sonnet)
- Cost: $0.20
- Deliverable: Working code
Phase 3: Testing (Haiku)
- Cost: $0.02
- Deliverable: Test suite
Total: $0.52 (vs $0.90 if all Opus)
Savings: 42%
```
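The arithmetic in these examples can be approximated with a small estimator built on the per-tier input prices listed earlier (a sketch: it counts input tokens only, and the token counts below are hypothetical guesses chosen to match the phase costs above):
```python
# Rough cost estimate from the per-tier input prices above.
# Input tokens only -- real costs also include output tokens.
PRICE_PER_M_INPUT = {
    "claude-haiku-4.5": 0.25,
    "claude-sonnet-4.5": 3.00,
    "claude-opus-4.5": 15.00,
}

def estimate_cost(phases):
    """phases: list of (model, estimated_input_tokens) pairs."""
    return sum(PRICE_PER_M_INPUT[model] * tokens / 1_000_000
               for model, tokens in phases)

total = estimate_cost([
    ("claude-opus-4.5", 20_000),    # architecture phase  -> ~$0.30
    ("claude-sonnet-4.5", 66_000),  # implementation      -> ~$0.20
    ("claude-haiku-4.5", 80_000),   # testing             -> ~$0.02
])
print(f"~${total:.2f}")  # ~$0.52
```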
## Default Assignments by Role
Each role's default model is chosen to match its typical workload:
| Role | Default Model | Rationale |
|------|---------------|-----------|
| bootstrap_manager | Sonnet | Solid reasoning needed, but tasks are rarely complex |
| architect | Opus | Complex design decisions |
| developer | Sonnet | General coding quality |
| reviewer | Sonnet | Thorough analysis needed |
| researcher | Sonnet | Balanced research quality |
Override with the `model` parameter when spawning agents, as shown below.
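For example (a sketch reusing the `task` call shape from earlier sections; the scenario is hypothetical):
```python
# Force Opus for a reviewer that would otherwise default to Sonnet.
task(
    agent_type="reviewer",
    description="Review authentication changes",
    prompt="Audit the new authentication flow for security issues",
    model="claude-opus-4.5"  # explicit override of the role default
)
```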
## Configuration Best Practices
1. **Start conservative**: Use Sonnet as default
2. **Monitor costs**: Track metrics to find optimization opportunities
3. **Iterate keywords**: Refine keyword matching based on actual usage
4. **Document decisions**: Note why certain keywords map to certain models
5. **Review regularly**: Adjust as new models become available
## Integration with Cluster
The model selection integrates with:
- **Task system**: Automatically assigns model when claiming tasks
- **Metrics**: Tracks model usage per task
- **Bootstrap manager**: Uses model recommendations when spawning
- **Cost tracking**: Future feature to estimate costs before execution