# Setup Complete! 🎉
Your Personal Productivity MCP Server is ready to use.
## What Was Created
### Core Infrastructure
**Docker Environment**
- `Dockerfile` - Python 3.11 with Playwright support
- `docker-compose.yml` - Container orchestration
- `.dockerignore` - Optimized build context
**Project Configuration**
- `pyproject.toml` - Python dependencies and build config
- `.env` - Environment variables (from template)
- `.gitignore` - Git ignore rules
**Database Setup**
- SQLite schema for applications and companies
- Automatic initialization on first run
- Data directory for persistent storage
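The actual DDL lives in the storage layer; as a rough sketch (table and column names here are assumptions, not the real schema), initialization might look like:
```python
# Illustrative only: table and column names are assumed, not taken from the project.
import sqlite3

conn = sqlite3.connect("data/productivity.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS companies (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL UNIQUE
);
CREATE TABLE IF NOT EXISTS applications (
    id         INTEGER PRIMARY KEY,
    company_id INTEGER NOT NULL REFERENCES companies(id),
    role       TEXT NOT NULL,
    status     TEXT NOT NULL DEFAULT 'applied',
    notes      TEXT,
    applied_at TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX IF NOT EXISTS idx_applications_status ON applications(status);
""")
conn.close()
```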
### MCP Server (17 Python files)
**Core Server**
- `src/mcp_server/server.py` - Main MCP server with tool registration
- `src/mcp_server/__main__.py` - Module entry point
- `src/mcp_server/config.py` - Configuration management with Pydantic
- `src/mcp_server/models.py` - Shared data models
**Career Domain**
- `src/mcp_server/domains/career/read.py` - Job search tools
- `src/mcp_server/domains/career/write.py` - Application tracking
- `src/mcp_server/domains/career/analysis.py` - Pipeline statistics
- `src/mcp_server/domains/career/scrapers/greenhouse.py` - Greenhouse API scraper
- `src/mcp_server/domains/career/scrapers/lever.py` - Lever API scraper
**Utilities**
- `src/mcp_server/utils/storage.py` - Database connection management
- `src/mcp_server/utils/logging.py` - Structured logging
- `src/mcp_server/utils/cache.py` - In-memory caching with TTL
**Testing**
- `tests/test_scrapers.py` - Scraper integration tests
- Test framework configured in `pyproject.toml`
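A scraper test plausibly follows this shape, assuming `pytest` with `pytest-asyncio` and a hypothetical `search_greenhouse` coroutine (neither detail is confirmed by the project config):
```python
# Hypothetical sketch: assumes pytest-asyncio is configured and that the
# Greenhouse scraper exposes a search_greenhouse coroutine (name assumed).
import pytest

from mcp_server.domains.career.scrapers.greenhouse import search_greenhouse

@pytest.mark.asyncio
async def test_search_greenhouse_returns_listings():
    jobs = await search_greenhouse(company="anthropic")
    assert isinstance(jobs, list)  # a live board should return a (possibly empty) list
```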
**Scripts**
- `scripts/start.sh` - Quick start script
- `scripts/test.sh` - Test runner script
**Documentation**
- `README.md` - Main documentation
- `QUICKSTART.md` - Step-by-step getting started guide
- This file!
## Available MCP Tools
### Career Domain (Phase 1)
**Read Tools:**
1. `career_search_greenhouse` - Search Greenhouse job boards
2. `career_search_lever` - Search Lever job boards
3. `career_get_applications` - Get tracked applications (with optional status filter)
**Write Tools:**
4. `career_track_application` - Track a new job application
5. `career_update_application` - Update application status and notes
**Analysis Tools:**
6. `career_analyze_pipeline` - Get application pipeline statistics
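As an illustration, a `career_track_application` call might carry a payload like the one below; the field names are assumptions, and the authoritative JSON schema lives in `server.py`.
```python
# Hypothetical argument payload; field names are illustrative, not the real schema.
arguments = {
    "company": "Anthropic",
    "role": "Software Engineer",
    "status": "applied",
    "notes": "Referred by a former colleague",
}
```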
## Quick Start (30 seconds)
```bash
# Start the server
./scripts/start.sh
# Or manually
docker-compose up --build -d
docker-compose logs -f
```
## Next Steps
### 1. Verify the Server (1 min)
```bash
docker ps | grep personal-productivity-mcp
docker-compose logs
```
Look for: "Server running on stdio"
### 2. Configure Claude Desktop (2 min)
Edit: `~/Library/Application Support/Claude/claude_desktop_config.json`
```json
{
  "mcpServers": {
    "personal-productivity": {
      "command": "docker",
      "args": ["exec", "-i", "personal-productivity-mcp", "python", "-m", "mcp_server.server"]
    }
  }
}
```
Restart Claude Desktop completely.
### 3. Test It! (2 min)
In Claude Desktop, try:
```
Search for software engineering jobs at Anthropic
```
If you see job listings, it works! 🎉
## Architecture Highlights
### Clean Domain Separation
- Each domain (career, fitness, family) is independent
- Easy to add new domains later
- Clear boundaries between read/write/analysis operations
### Async-First Design
- All scrapers and database operations use async/await
- Efficient for multiple concurrent job searches
- Ready for rate limiting and retries
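A minimal sketch of the pattern, assuming `httpx` as the HTTP client (the real scrapers may differ in structure and error handling):
```python
# Sketch only: hits Greenhouse's public job-board API; the caching and
# retry logic of the real scraper are omitted.
import asyncio
import httpx

async def fetch_greenhouse_jobs(company: str) -> list[dict]:
    url = f"https://boards-api.greenhouse.io/v1/boards/{company}/jobs"
    async with httpx.AsyncClient(timeout=10.0) as client:
        resp = await client.get(url)
        resp.raise_for_status()
        return resp.json().get("jobs", [])

async def search_many(companies: list[str]) -> dict[str, list[dict]]:
    # gather() runs every board lookup concurrently instead of one at a time.
    results = await asyncio.gather(*(fetch_greenhouse_jobs(c) for c in companies))
    return dict(zip(companies, results))

if __name__ == "__main__":
    print(asyncio.run(search_many(["anthropic", "netflix"])))
```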
### Extensible Tool Registration
- Tools are defined in `server.py` with JSON schemas
- Adding a new tool means registering it in `list_tools()` and handling it in `call_tool()`
- Pydantic models ensure type safety
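In outline, registration follows the official `mcp` Python SDK pattern. This is a simplified sketch, and the domain helper import is an assumed name:
```python
# Sketch of the registration pattern; the tool schema and dispatch are simplified.
from mcp.server import Server
from mcp.types import TextContent, Tool

from mcp_server.domains.career.read import search_greenhouse  # helper name assumed

server = Server("personal-productivity")

@server.list_tools()
async def list_tools() -> list[Tool]:
    return [
        Tool(
            name="career_search_greenhouse",
            description="Search a company's Greenhouse job board",
            inputSchema={
                "type": "object",
                "properties": {"company": {"type": "string"}},
                "required": ["company"],
            },
        ),
    ]

@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
    if name == "career_search_greenhouse":
        jobs = await search_greenhouse(**arguments)
        return [TextContent(type="text", text=str(jobs))]
    raise ValueError(f"Unknown tool: {name}")
```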
### Production-Ready Foundations
- Structured logging to stderr
- SQLite with proper indexes
- In-memory caching with TTL
- Error handling throughout
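For instance, the TTL cache could work along these lines (an illustrative sketch; the real `utils/cache.py` API may differ):
```python
# Illustrative TTL cache; the real implementation may differ in API and eviction.
import time
from typing import Any

class TTLCache:
    def __init__(self, ttl_seconds: float = 300.0) -> None:
        self._ttl = ttl_seconds
        self._store: dict[str, tuple[float, Any]] = {}

    def get(self, key: str) -> Any | None:
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() > expires_at:
            del self._store[key]  # lazily evict expired entries on read
            return None
        return value

    def set(self, key: str, value: Any) -> None:
        self._store[key] = (time.monotonic() + self._ttl, value)
```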
## Project Structure
```
personal-productivity-mcp/
├── src/mcp_server/            # Main application code
│   ├── server.py              # MCP server setup
│   ├── models.py              # Data models
│   ├── config.py              # Configuration
│   ├── domains/
│   │   └── career/            # Career domain (Phase 1)
│   │       ├── read.py
│   │       ├── write.py
│   │       ├── analysis.py
│   │       └── scrapers/
│   └── utils/                 # Shared utilities
├── tests/                     # Test suite
├── data/                      # SQLite database
├── config/                    # Runtime config
├── scripts/                   # Helper scripts
├── Dockerfile                 # Container definition
├── docker-compose.yml         # Orchestration
├── pyproject.toml             # Python config
└── README.md                  # Full documentation
```
## Development Workflow
### Making Changes
1. Edit code in `src/mcp_server/`
2. Rebuild: `docker-compose up --build -d`
3. Test: `./scripts/test.sh`
4. Check logs: `docker-compose logs -f`
### Adding a New Tool
1. Add to `list_tools()` in `server.py`
2. Add handler in `call_tool()` in `server.py`
3. Implement function in appropriate domain file
4. Add tests in `tests/`
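For step 3, a write-tool implementation could look like this sketch, assuming `aiosqlite` for async database access and the table names sketched earlier (neither is confirmed):
```python
# Hypothetical implementation sketch; assumes aiosqlite plus the schema
# sketched under "Database Setup" (table and column names are not confirmed).
import aiosqlite

async def track_application(db_path: str, company: str, role: str,
                            status: str = "applied") -> int:
    async with aiosqlite.connect(db_path) as db:
        await db.execute(
            "INSERT OR IGNORE INTO companies (name) VALUES (?)", (company,)
        )
        cursor = await db.execute(
            "INSERT INTO applications (company_id, role, status) "
            "VALUES ((SELECT id FROM companies WHERE name = ?), ?, ?)",
            (company, role, status),
        )
        await db.commit()
        return cursor.lastrowid  # id of the newly tracked application
```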
### Debugging
```bash
# Access container
docker-compose exec mcp-server /bin/bash
# View database
docker-compose exec mcp-server sqlite3 /app/data/productivity.db
# Python shell inside container
docker-compose exec mcp-server python
```
## What's Next? (From brief.md)
### Phase 2: Complete Career Domain
- [ ] Generic career page scraper (Playwright)
- [ ] Job details extraction
- [ ] Resume analysis tool
- [ ] Cover letter generator
### Phase 3: Fitness Domain
- [ ] Health Auto Export CSV parser
- [ ] Hevy API integration
- [ ] Progress analysis
- [ ] Routine suggestions
### Phase 4: Family Domain
- [ ] Google Sheets integration
- [ ] Transaction recording
- [ ] Invoice generation
- [ ] Math Academy tracking
## Performance Considerations
**Current Setup:**
- ~500ms for Greenhouse/Lever API calls
- In-memory cache reduces repeat searches
- SQLite is fast enough for personal use
**Future Optimizations:**
- Add Redis for distributed caching
- Rate limiting for scraping
- Background job queue for slow operations
## Security Notes
**Current Status:**
- No authentication required (single user, local Docker)
- Database stored in mounted volume (backup recommended)
- No sensitive data in code (use .env)
**Before Production:**
- Add API authentication
- Implement rate limiting
- Encrypt sensitive fields in database
- Add audit logging
## Troubleshooting
### Container won't start
- Check Docker is running
- Check port availability (if using network mode)
- Review `.env` file syntax
### Tools not appearing in Claude
- Restart Claude Desktop completely
- Check config file path and JSON syntax
- Verify container is running: `docker ps`
### Scrapers return empty results
- Check company name spelling
- Try a different company (e.g., "anthropic", "netflix")
- Check logs for HTTP errors
### Database errors
- Ensure `data/` directory exists and is writable
- Reset: `docker-compose down && rm data/*.db && docker-compose up -d`
## Support
- Full docs: `README.md`
- Quick start: `QUICKSTART.md`
- Project plan: `brief.md`
- Example tests: `tests/test_scrapers.py`
## Success Criteria ✅
From brief.md Phase 1:
- [x] Set up Docker environment
- [x] MCP server scaffolding
- [x] Career domain: Greenhouse scraper
- [x] Career domain: Lever scraper
- [x] Career domain: Job tracking (write)
- [x] Basic SQLite storage
- [x] Ready to test with Claude
**You're all set! Start the server and try it out! 🎉**
```bash
./scripts/start.sh
```
Then in Claude Desktop:
```
Search for jobs at Anthropic
```