# Multi-Domain MCP Server Project Brief
## Project Overview
**Project Name:** `personal-productivity-mcp`
**Goal:** Build a single MCP server with multiple domains (career, fitness, family) that provides Claude with tools to help me manage different aspects of my life. This is both a learning exercise to understand MCP architecture and a practical tool for daily use.
**Deployment:** Docker container in home lab (avoiding local Python environment issues)
## Architecture
### Single MCP Server, Multiple Domains
```
personal-productivity-mcp/
├── src/
│   └── mcp_server/
│       ├── __init__.py
│       ├── server.py          # Main MCP server setup
│       ├── models.py          # Shared Pydantic models
│       ├── config.py          # Configuration management
│       ├── domains/
│       │   ├── __init__.py
│       │   ├── career/
│       │   │   ├── __init__.py
│       │   │   ├── read.py        # Read tools (search, fetch)
│       │   │   ├── write.py       # Write tools (track apps, update)
│       │   │   ├── analysis.py    # Analysis tools (resume match, etc.)
│       │   │   └── scrapers/
│       │   ├── fitness/
│       │   │   ├── __init__.py
│       │   │   ├── read.py        # Read health data
│       │   │   ├── write.py       # Log workouts, update routines
│       │   │   └── analysis.py    # Progress analysis, recommendations
│       │   └── family/
│       │       ├── __init__.py
│       │       ├── read.py        # Query balances, invoices
│       │       ├── write.py       # Record transactions
│       │       └── analysis.py    # Spending patterns, reports
│       └── utils/
│           ├── cache.py
│           ├── logging.py
│           └── storage.py
├── tests/
├── Dockerfile
├── docker-compose.yml
├── pyproject.toml
├── README.md
└── .env.example
```
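One of the open questions below is how to register tools across domains. A minimal, stdlib-only sketch of one approach — each domain module registers its tools under a `domain.tool_name` key with a shared registry, which `server.py` then exposes through the MCP SDK. The `ToolRegistry` class and its methods are illustrative names, not part of the `mcp` SDK:

```python
# Sketch of namespaced tool registration across domains. The real server
# would delegate to the MCP SDK's registration API; ToolRegistry/register
# are illustrative names, not SDK functions.
from typing import Callable, Dict


class ToolRegistry:
    """Collects tools under 'domain.tool_name' keys so each domain
    module can register itself independently on import."""

    def __init__(self) -> None:
        self._tools: Dict[str, Callable] = {}

    def register(self, domain: str, name: str) -> Callable:
        def decorator(fn: Callable) -> Callable:
            self._tools[f"{domain}.{name}"] = fn
            return fn
        return decorator

    def get(self, qualified_name: str) -> Callable:
        return self._tools[qualified_name]


registry = ToolRegistry()


@registry.register("career", "track_application")
def track_application(job_url: str, company: str) -> dict:
    # Placeholder body; the real tool would persist to SQLite.
    return {"job_url": job_url, "company": company, "status": "interested"}
```

With this pattern, `server.py` can iterate over `registry` to register every tool with the SDK, and each domain stays an independent module.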
## Tech Stack
- **Runtime:** Python 3.11+
- **MCP SDK:** `mcp` (Anthropic's Python SDK)
- **Web Scraping:** Playwright (async)
- **Data Validation:** Pydantic v2
- **HTTP Client:** httpx (async)
- **Storage:** SQLite for persistence (simple, file-based)
- **Caching:** Redis (optional, start with in-memory)
- **Deployment:** Docker + Docker Compose
## Domain Specifications
### 1. Career Domain (`career.*`)
**Purpose:** Job search automation, application tracking, interview preparation
#### Read Tools
- `career.search_greenhouse_boards(company_name: str) -> List[JobListing]`
- `career.search_lever_boards(company_name: str) -> List[JobListing]`
- `career.scrape_company_careers(url: str, keywords: List[str]) -> List[JobListing]`
- `career.extract_job_details(job_url: str) -> JobDetails`
- `career.get_applications(status: Optional[str]) -> List[Application]`
- `career.find_hiring_manager(company: str, role: str) -> ContactInfo` (future)
#### Write Tools
- `career.track_application(job_url: str, company: str, status: str, notes: str) -> Application`
- `career.update_application_status(app_id: str, status: str, notes: str) -> Application`
- `career.save_target_company(name: str, url: str, priority: int) -> Company`
#### Analysis Tools
- `career.analyze_job_fit(job_url: str, resume_text: str) -> FitAnalysis`
- `career.suggest_resume_modifications(job_description: str, resume_text: str) -> Suggestions`
- `career.generate_cover_letter_points(job_description: str, background: str) -> List[str]`
- `career.analyze_application_pipeline() -> PipelineStats`
**Data Models:**
```python
from datetime import datetime
from typing import List

from pydantic import BaseModel


class JobListing(BaseModel):
    title: str
    url: str
    company: str
    location: str | None = None
    posted_date: datetime | None = None
    source: str  # greenhouse, lever, custom


class JobDetails(JobListing):
    description: str
    requirements: List[str]
    qualifications: List[str]
    salary_range: str | None = None


class Application(BaseModel):
    id: str
    job: JobListing
    applied_date: datetime
    status: str  # interested, applied, interviewing, offered, rejected
    notes: str
    next_action: str | None = None
```
### 2. Fitness Domain (`fitness.*`)
**Purpose:** Health tracking, workout management, progress analysis
#### Read Tools
- `fitness.get_health_metrics(start_date: date, end_date: date) -> HealthMetrics`
- From Health Auto Export CSV
- `fitness.get_workout_history(start_date: date, end_date: date) -> List[Workout]`
- From Hevy CSV/API
- `fitness.get_current_routine() -> WorkoutRoutine`
- `fitness.get_activity_summary(period: str) -> ActivitySummary`
#### Write Tools
- `fitness.log_workout(exercise: str, sets: int, reps: int, weight: float) -> Workout`
- `fitness.update_routine(routine: WorkoutRoutine) -> WorkoutRoutine`
- `fitness.set_goal(goal_type: str, target: float, deadline: date) -> Goal`
#### Analysis Tools
- `fitness.analyze_progress(metric: str, period: str) -> ProgressAnalysis`
- `fitness.identify_weaknesses() -> List[Weakness]`
- `fitness.suggest_routine_modifications() -> RoutineAdjustments`
- `fitness.generate_weekly_summary() -> WeeklySummary`
- `fitness.compare_periods(period1: str, period2: str) -> Comparison`
**Data Models:**
```python
from datetime import datetime
from typing import List

from pydantic import BaseModel


class HealthMetrics(BaseModel):
    steps: int
    active_calories: int
    heart_rate_avg: float
    sleep_hours: float


# Defined leaf-first so nested models resolve without forward references
class Set(BaseModel):
    reps: int
    weight: float
    rpe: int | None = None  # Rate of perceived exertion


class Exercise(BaseModel):
    name: str
    sets: List[Set]


class Workout(BaseModel):
    date: datetime
    exercises: List[Exercise]
    duration: int  # minutes
    notes: str
```
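As a concrete example of what the analysis tools above could compute from `Set` data: estimated one-rep max via the Epley formula (1RM ≈ weight × (1 + reps/30)). The formula choice is an assumption — the brief doesn't specify one — and plain dataclasses stand in for the Pydantic models to keep the sketch self-contained:

```python
# Illustrative progress metric: estimated 1RM via the Epley formula.
# The formula is a common convention, not something the brief mandates.
from dataclasses import dataclass


@dataclass
class Set:
    reps: int
    weight: float


def estimated_1rm(s: Set) -> float:
    """Epley estimate; exact only when reps == 1."""
    return s.weight * (1 + s.reps / 30)


def best_e1rm(sets: list[Set]) -> float:
    """Best estimated 1RM across a session's sets for one exercise."""
    return max(estimated_1rm(s) for s in sets)
```

`fitness.analyze_progress` could then trend `best_e1rm` per exercise over time.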
### 3. Family Domain (`family.*`)
**Purpose:** Kids' billing, Math Academy tracking, expense management
#### Read Tools
- `family.get_balance(person: str) -> Balance`
- `family.get_transactions(person: str, start_date: date) -> List[Transaction]`
- `family.get_invoice(person: str, week: int) -> Invoice`
- `family.get_math_academy_status(person: str) -> MathAcademyStatus`
#### Write Tools
- `family.record_transaction(person: str, amount: float, category: str, description: str) -> Transaction`
- `family.generate_invoice(person: str, week: int) -> Invoice`
- `family.update_math_academy_progress(person: str, mastery: float, xp: int) -> Progress`
- `family.apply_penalty(person: str, amount: float, reason: str) -> Transaction`
- `family.apply_bonus(person: str, amount: float, reason: str) -> Transaction`
#### Analysis Tools
- `family.analyze_spending_patterns(person: str, period: str) -> SpendingAnalysis`
- `family.predict_balance(person: str, weeks_ahead: int) -> BalancePrediction`
- `family.evaluate_math_progress(person: str) -> Evaluation`
- `family.generate_monthly_report(person: str) -> MonthlyReport`
**Data Models:**
```python
from datetime import datetime

from pydantic import BaseModel


class Transaction(BaseModel):
    id: str
    person: str
    date: datetime
    amount: float
    category: str
    description: str
    source: str  # manual, yay_lunch, math_academy


class Balance(BaseModel):
    person: str
    current: float
    escrow: float
    pending_penalties: float
```
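A design option worth considering: derive balances from the transaction log rather than storing them, so `family.get_balance` can never drift out of sync with `family.record_transaction`. A stdlib sketch (plain dataclass standing in for the model above; sign convention is an assumption):

```python
# Sketch: balance as a fold over the transaction log. Convention assumed
# here: positive amount = credit, negative = debit (penalty, spending).
from dataclasses import dataclass


@dataclass
class Txn:
    person: str
    amount: float
    category: str


def current_balance(person: str, txns: list[Txn]) -> float:
    """Sum of all of one person's transactions."""
    return sum(t.amount for t in txns if t.person == person)
```

Escrow and pending penalties could be computed the same way by filtering on `category`.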
## Cross-Cutting Concerns
### Configuration
- Store credentials in `.env` (API keys, Google Sheets, etc.)
- Support multiple profiles/environments
- Config validation with Pydantic
### Storage
- SQLite database for all structured data
- File storage for CSVs, exports
- Google Sheets integration for family data (existing system)
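A sketch of what `utils/storage.py` could look like for the SQLite layer — table and column names here are placeholders, not a final schema:

```python
# Sketch of the SQLite persistence layer. Schema is illustrative only.
import sqlite3


def init_db(path: str = ":memory:") -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS applications (
               id INTEGER PRIMARY KEY AUTOINCREMENT,
               job_url TEXT NOT NULL,
               company TEXT NOT NULL,
               status TEXT NOT NULL,
               notes TEXT DEFAULT ''
           )"""
    )
    return conn


def track_application(conn: sqlite3.Connection, job_url: str,
                      company: str, status: str = "interested") -> int:
    """Insert an application row and return its id."""
    cur = conn.execute(
        "INSERT INTO applications (job_url, company, status) VALUES (?, ?, ?)",
        (job_url, company, status),
    )
    conn.commit()
    return cur.lastrowid
```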
### Caching
- Cache scraped jobs for 1 hour
- Cache health data for 15 minutes
- Invalidation strategies per domain
### Error Handling
- Graceful degradation when scraping fails
- Retry logic with exponential backoff
- Detailed error messages for Claude to interpret
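The retry-with-backoff idea can be sketched as follows. Shown synchronously for brevity; the real version would be async (`await asyncio.sleep`) and add jitter, and the injectable `sleep` parameter is a sketch convenience for testing:

```python
# Sketch of retry with exponential backoff for flaky scrapes.
import time
from typing import Callable, TypeVar

T = TypeVar("T")


def retry(fn: Callable[[], T], attempts: int = 3, base_delay: float = 0.5,
          sleep: Callable[[float], None] = time.sleep) -> T:
    last_exc: Exception | None = None
    for attempt in range(attempts):
        try:
            return fn()
        except Exception as exc:  # narrow to scraper errors in practice
            last_exc = exc
            if attempt < attempts - 1:
                sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
    raise last_exc
```

On final failure, the MCP tool should turn the exception into a detailed error message rather than letting the server crash.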
### Logging
- Structured logging (JSON)
- Request/response logging
- Performance metrics
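JSON-structured logging needs nothing beyond the stdlib; `structlog` or `python-json-logger` would be reasonable drop-in upgrades later. A sketch for `utils/logging.py`:

```python
# Stdlib sketch of JSON-structured logging for utils/logging.py.
import json
import logging


class JsonFormatter(logging.Formatter):
    """Render each record as one JSON object per line."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })


def setup_logging(level: str = "INFO") -> None:
    handler = logging.StreamHandler()
    handler.setFormatter(JsonFormatter())
    logging.basicConfig(level=level, handlers=[handler])
```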
## Docker Setup
### Dockerfile
```dockerfile
FROM python:3.11-slim

# System packages needed to fetch Playwright's browsers
RUN apt-get update && apt-get install -y \
        wget \
        gnupg \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app

# Copy source before the editable install so the package can resolve
COPY pyproject.toml .
COPY src/ ./src/
RUN pip install --no-cache-dir -e .

# Install Playwright's Chromium build and its OS dependencies
RUN playwright install chromium && playwright install-deps chromium

CMD ["python", "-m", "mcp_server"]
```
### docker-compose.yml
```yaml
version: '3.8'

services:
  mcp-server:
    build: .
    container_name: personal-productivity-mcp
    volumes:
      - ./data:/app/data
      - ./config:/app/config
    environment:
      - MCP_SERVER_PORT=3000
      - LOG_LEVEL=INFO
    env_file:
      - .env
    restart: unless-stopped
    networks:
      - homelab
    # Expose via Tailscale or local network
    ports:
      - "3000:3000"

networks:
  homelab:
    external: true
```
## Development Phases
### Phase 1: Foundation (Start Here)
**Goal:** Get basic MCP server running with one domain working
- [ ] Set up Docker environment
- [ ] MCP server scaffolding
- [ ] Career domain: Greenhouse scraper only
- [ ] Career domain: Job tracking (write)
- [ ] Basic SQLite storage
- [ ] Test with Claude
### Phase 2: Career Domain Complete
- [ ] Add Lever scraper
- [ ] Add generic career page scraper
- [ ] Job details extraction
- [ ] Application tracking
- [ ] Resume analysis tool
- [ ] Cover letter generator
### Phase 3: Fitness Domain
- [ ] Health Auto Export CSV parser
- [ ] Hevy API integration
- [ ] Basic read tools
- [ ] Progress analysis
- [ ] Routine suggestions
### Phase 4: Family Domain
- [ ] Google Sheets integration
- [ ] Transaction recording
- [ ] Invoice generation
- [ ] Math Academy parser
- [ ] Analysis tools
### Phase 5: Polish
- [ ] Error handling improvements
- [ ] Caching optimization
- [ ] Monitoring/observability
- [ ] Documentation
- [ ] Example conversations
## Learning Objectives
This project is designed to teach:
- ✅ MCP server architecture and tool design
- ✅ Multi-domain service organization
- ✅ Async Python patterns
- ✅ Web scraping at scale
- ✅ API integration patterns
- ✅ Data modeling with Pydantic
- ✅ Containerization best practices
- ✅ Error handling and retry logic
- ✅ Caching strategies
- ✅ Testing async code
## Integration with Existing Systems
### n8n Workflows
The MCP server can be called from n8n for scheduled tasks:
- Daily job board checks
- Weekly fitness summaries
- Monthly family billing
### Google Sheets
Family domain should integrate with existing sheets:
- Read current balances
- Write new transactions
- Sync bidirectionally
### Health Auto Export
- Monitor export directory
- Parse new CSV files
- Sync to database
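Parsing a Health Auto Export CSV can be a thin stdlib layer. The column names below (`Date`, `Steps (count)`) are assumptions — verify them against an actual export before wiring this into the fitness read tools:

```python
# Sketch of parsing a Health Auto Export CSV. Header names are
# assumptions; check the real export's columns.
import csv
import io


def parse_health_csv(text: str) -> list[dict]:
    """Return one {date, steps} dict per row."""
    reader = csv.DictReader(io.StringIO(text))
    return [
        {"date": row["Date"], "steps": int(float(row["Steps (count)"]))}
        for row in reader
    ]
```

A watcher on the export directory would feed new files through this parser and upsert rows into SQLite.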
## Testing Strategy
### Unit Tests
- Each tool function independently
- Mock external services (Playwright, APIs)
- Pydantic model validation
### Integration Tests
- Real scraping against test companies
- Database operations
- End-to-end tool calls
### Manual Testing
- Use Claude to exercise tools
- Document example conversations
- Test error conditions
## Success Criteria
**Minimum Viable Product:**
- [ ] Docker container runs reliably
- [ ] Career domain finds jobs from 3+ sources
- [ ] Can track applications through Claude conversation
- [ ] Resume analysis provides useful feedback
- [ ] No pyenv/local Python headaches
**Stretch Goals:**
- [ ] All three domains fully functional
- [ ] Integration with n8n for automation
- [ ] Notification system for new jobs
- [ ] Weekly automated reports
- [ ] Mobile-friendly dashboard (future)
## Getting Started with Claude Code
1. **Copy this brief** into Claude Code
2. **Start with:** "Set up the Docker environment and basic MCP server structure for the career domain only"
3. **Iterate:** Get Greenhouse scraper working first
4. **Test:** Connect to Claude and try: "Search for SE jobs at Anthropic"
5. **Expand:** Add tools one at a time based on what you actually need
## Notes
- Keep each domain independent (separate files, clear boundaries)
- Use dependency injection for shared services (database, cache)
- Design for extensibility - easy to add new domains later
- Document tool usage with examples for Claude to understand
- Start simple, add complexity only when needed
- This is a learning exercise AND a useful tool - balance both
## Questions for Claude Code to Resolve
1. Best way to structure tool registration across domains?
2. Shared vs domain-specific Pydantic models?
3. Database schema design for flexibility?
4. How to handle Playwright browser lifecycle in async context?
5. Best practices for MCP error responses?