# Personal Productivity MCP Server
A multi-domain Model Context Protocol (MCP) server providing Claude with tools to manage different aspects of personal productivity.
## Overview
This MCP server provides tools across three domains:
1. **Career** (Phase 1 - Current): Job search automation, application tracking, resume analysis
2. **Fitness** (Phase 3 - Planned): Health tracking, workout management, progress analysis
3. **Family** (Phase 4 - Planned): Kids billing, expense tracking, Math Academy monitoring
## Current Features (Phase 1)
### Career Domain Tools
**Read Tools:**
- `career_search_greenhouse` - Search jobs on Greenhouse job boards
- `career_search_lever` - Search jobs on Lever job boards
- `career_get_applications` - Get tracked applications with optional status filter
**Write Tools:**
- `career_track_application` - Track a new job application
- `career_update_application` - Update application status and notes
**Analysis Tools:**
- `career_analyze_pipeline` - Get statistics about your application pipeline
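To illustrate the kind of aggregation `career_analyze_pipeline` could return, here is a minimal sketch. It is not the server's actual implementation: the real tool reads from the SQLite store, while this version takes an in-memory list, and the field names (`status`, `company`) are assumptions.

```python
from collections import Counter


def analyze_pipeline(applications: list[dict]) -> dict:
    """Summarize tracked applications by status.

    Assumes each application dict has a "status" key, e.g.
    "applied", "interviewing", "offer", or "rejected".
    """
    by_status = Counter(app["status"] for app in applications)
    total = len(applications)
    active = total - by_status.get("rejected", 0)
    return {
        "total": total,
        "active": active,
        "by_status": dict(by_status),
    }


stats = analyze_pipeline([
    {"company": "Anthropic", "status": "applied"},
    {"company": "ExampleCo", "status": "interviewing"},
    {"company": "OtherCo", "status": "rejected"},
])
print(stats)  # {'total': 3, 'active': 2, 'by_status': {...}}
```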
## Quick Start
### Prerequisites
- Docker and Docker Compose
- Claude Desktop or compatible MCP client
### Setup
1. Clone the repository:
```bash
git clone <repository-url> personal-productivity-mcp
cd personal-productivity-mcp
```
2. Copy the environment template:
```bash
cp .env.example .env
```
3. Build and run with Docker:
```bash
docker-compose up --build
```
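For reference, a `docker-compose.yml` for a stdio-based MCP server might look roughly like the sketch below. This is a hedged illustration, not the repository's actual file; only the container name `personal-productivity-mcp` is taken from the Claude Desktop config further down.

```yaml
# Illustrative sketch only -- see the repository's docker-compose.yml
# for the real definition.
services:
  mcp-server:
    build: .
    container_name: personal-productivity-mcp
    stdin_open: true        # MCP communicates over stdio
    env_file: .env
    volumes:
      - ./data:/app/data    # persist the SQLite database across restarts
```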
### Connecting to Claude Desktop
Add to your Claude Desktop configuration (`~/Library/Application Support/Claude/claude_desktop_config.json` on macOS):
```json
{
  "mcpServers": {
    "personal-productivity": {
      "command": "docker",
      "args": [
        "exec",
        "-i",
        "personal-productivity-mcp",
        "python",
        "-m",
        "mcp_server.server"
      ]
    }
  }
}
```
Or for local development without Docker:
```json
{
  "mcpServers": {
    "personal-productivity": {
      "command": "python",
      "args": ["-m", "mcp_server.server"],
      "cwd": "/path/to/personal-productivity-mcp",
      "env": {
        "PYTHONPATH": "/path/to/personal-productivity-mcp/src"
      }
    }
  }
}
```
## Usage Examples
### Searching for Jobs
```
Search for software engineering jobs at Anthropic
```
Claude will use the `career_search_greenhouse` and `career_search_lever` tools to find available positions.
### Tracking Applications
```
Track a new application:
- Company: Anthropic
- Title: Senior Software Engineer
- URL: https://jobs.lever.co/anthropic/example
- Status: applied
```
### Viewing Your Pipeline
```
Show me my application pipeline statistics
```
## Project Structure
```
personal-productivity-mcp/
├── src/
│   └── mcp_server/
│       ├── __init__.py
│       ├── server.py           # Main MCP server
│       ├── models.py           # Pydantic data models
│       ├── config.py           # Configuration management
│       ├── domains/
│       │   └── career/
│       │       ├── read.py     # Read tools
│       │       ├── write.py    # Write tools
│       │       ├── analysis.py # Analysis tools
│       │       └── scrapers/
│       │           ├── greenhouse.py
│       │           └── lever.py
│       └── utils/
│           ├── storage.py      # Database utilities
│           ├── logging.py      # Logging setup
│           └── cache.py        # Caching utilities
├── data/                       # SQLite database storage
├── Dockerfile
├── docker-compose.yml
├── pyproject.toml
└── README.md
```
## Development
### Local Development Setup
1. Create a virtual environment:
```bash
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
```
2. Install dependencies:
```bash
pip install -e ".[dev]"
```
3. Install Playwright browsers:
```bash
playwright install chromium
```
4. Run the server:
```bash
python -m mcp_server.server
```
### Running Tests
```bash
pytest
```
### Code Formatting
```bash
black src/
ruff check src/
```
## Configuration
Environment variables (see `.env.example`):
- `MCP_SERVER_PORT` - Server port (default: 3000)
- `LOG_LEVEL` - Logging level (default: INFO)
- `DATABASE_PATH` - Path to SQLite database
- `CACHE_TTL_JOBS` - Job listing cache TTL in seconds (default: 3600)
- `PLAYWRIGHT_HEADLESS` - Run Playwright in headless mode (default: true)
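The variables above could be loaded along these lines. This is a sketch rather than the project's actual `config.py`; the `Settings` class name and the `DATABASE_PATH` fallback are assumptions, while the variable names and defaults come from the list above.

```python
import os
from dataclasses import dataclass


@dataclass
class Settings:
    """Illustrative settings object; the real config.py may differ."""
    port: int
    log_level: str
    database_path: str
    cache_ttl_jobs: int
    playwright_headless: bool


def load_settings() -> Settings:
    # Defaults mirror the environment-variable list above.
    # The database path fallback is a placeholder, not a documented default.
    return Settings(
        port=int(os.environ.get("MCP_SERVER_PORT", "3000")),
        log_level=os.environ.get("LOG_LEVEL", "INFO"),
        database_path=os.environ.get("DATABASE_PATH", "data/productivity.db"),
        cache_ttl_jobs=int(os.environ.get("CACHE_TTL_JOBS", "3600")),
        playwright_headless=os.environ.get("PLAYWRIGHT_HEADLESS", "true").lower() == "true",
    )
```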
## Roadmap
- [x] Phase 1: Basic career domain with Greenhouse/Lever scrapers
- [ ] Phase 2: Complete career domain with resume analysis
- [ ] Phase 3: Fitness domain with health tracking
- [ ] Phase 4: Family domain with billing automation
- [ ] Phase 5: Polish and monitoring
## Architecture Decisions
### Why Docker?
- Avoids local Python environment issues (especially with Playwright)
- Consistent deployment across different machines
- Easy to run in home lab environment
### Why SQLite?
- Simple file-based storage, no separate database server needed
- Perfect for single-user personal productivity use case
- Easy backups (just copy the database file)
### Why MCP?
- Native integration with Claude
- Structured tool interface with validation
- Async-first design for scraping and API calls
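The async-first point can be sketched as follows: both job boards are queried concurrently instead of sequentially. The fetchers here are stand-ins that simulate network I/O; the real scrapers would hit the Greenhouse and Lever boards.

```python
import asyncio


async def search_greenhouse(query: str) -> list[str]:
    # Stand-in for the real scraper, which would call the Greenhouse board.
    await asyncio.sleep(0)  # simulate network I/O
    return [f"greenhouse:{query}"]


async def search_lever(query: str) -> list[str]:
    await asyncio.sleep(0)
    return [f"lever:{query}"]


async def search_all(query: str) -> list[str]:
    # Both boards are queried concurrently; gather preserves argument order.
    results = await asyncio.gather(search_greenhouse(query), search_lever(query))
    return [job for board in results for job in board]


jobs = asyncio.run(search_all("software engineer"))
```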
## Contributing
This is a personal project, but feel free to fork and adapt for your own needs!
## License
MIT