# Developer Guide
## Development Environment Setup
This project uses `uv` for fast and reliable Python package and environment management.
### Prerequisites
- Python 3.11 or higher
- [uv](https://github.com/astral-sh/uv) package manager
### Installing uv
**macOS/Linux**:
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```
**Windows**:
```powershell
irm https://astral.sh/uv/install.ps1 | iex
```
Or via Homebrew (macOS):
```bash
brew install uv
```
### Initial Setup
1. **Create virtual environment**:
```bash
uv venv
```
This creates a `.venv` directory containing a virtual environment that uses your installed Python (3.11 or newer).
2. **Activate the environment**:
**macOS/Linux**:
```bash
source .venv/bin/activate
```
**Windows**:
```powershell
.venv\Scripts\activate
```
3. **Install the project in editable mode with dev dependencies**:
```bash
uv pip install -e ".[dev]"
```
This installs:
- Core dependencies: `pydantic`
- Dev dependencies: `pytest`, `pytest-cov`, `ruff`
### Daily Development Workflow
#### Installing New Dependencies
**Add a core dependency**:
```bash
uv pip install <package-name>
# Then update pyproject.toml dependencies list
```
**Add a dev dependency**:
```bash
uv pip install <package-name>
# Then update pyproject.toml dev dependencies list
```
#### Running Tests
**All tests**:
```bash
pytest
```
**With coverage report**:
```bash
pytest --cov=src/mcp_debug_tool --cov-report=html --cov-report=term
```
**Specific test file**:
```bash
pytest tests/unit/test_sessions.py -v
```
**Integration tests only**:
```bash
pytest tests/integration/ -v
```
**With markers**:
```bash
pytest -m unit # Only unit tests
pytest -m integration # Only integration tests
pytest -m "not slow" # Skip slow tests
```
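pytest warns about unknown marks unless `unit`, `integration`, and `slow` are registered. If they are not already declared under `[tool.pytest.ini_options]` in `pyproject.toml`, a minimal sketch of registering them from a top-level `conftest.py` (hypothetical file) looks like this:
```python
# conftest.py (project root): hypothetical sketch; the project may instead
# register these markers in pyproject.toml under [tool.pytest.ini_options].
def pytest_configure(config):
    config.addinivalue_line("markers", "unit: fast, isolated unit tests")
    config.addinivalue_line("markers", "integration: end-to-end tests with real execution")
    config.addinivalue_line("markers", "slow: long-running tests, skipped with -m 'not slow'")
```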
#### Code Quality
**Lint with ruff**:
```bash
ruff check .
```
**Auto-fix lint issues**:
```bash
ruff check --fix .
```
**Format code**:
```bash
ruff format .
```
**Check types** (if using mypy):
```bash
mypy src/
```
#### Running the Server Locally
**Start MCP server**:
```bash
mcp-debug --workspace /path/to/test/project
```
**Or run directly**:
```bash
python -m mcp_debug_tool.server --workspace .
```
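Both commands start the same server. If you want a small local launcher (for example, to run the server under a debugger), a minimal sketch is shown below; it assumes `MCPServerV2` is importable from `mcp_debug_tool.server` as described in the Async Patterns section, and the `argparse`-based `--workspace` handling is an illustration rather than the project's actual CLI code:
```python
# run_local.py: hypothetical launcher, not part of the project.
# Assumes MCPServerV2 is importable from mcp_debug_tool.server
# (see the "Async Patterns" section below).
import argparse
import asyncio
from pathlib import Path

from mcp_debug_tool.server import MCPServerV2


def main() -> None:
    parser = argparse.ArgumentParser(prog="mcp-debug-local")
    parser.add_argument(
        "--workspace",
        type=Path,
        default=Path.cwd(),
        help="Root directory the debugger may access",
    )
    args = parser.parse_args()
    asyncio.run(MCPServerV2(workspace_root=args.workspace).run())


if __name__ == "__main__":
    main()
```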
### Project Structure Conventions
```
src/mcp_debug_tool/
├── __init__.py # Package metadata
├── schemas.py # Pydantic models (requests/responses)
├── utils.py # Shared utilities and constants
├── sessions.py # Session lifecycle management
├── debugger.py # bdb-based debugger controller
├── runner.py # Subprocess execution wrapper
└── server.py # MCP SDK-based async server (v0.2.0+)
tests/
├── unit/ # Fast, isolated tests
│ ├── test_sessions.py
│ ├── test_schemas.py
│ └── test_debugger.py
└── integration/ # End-to-end tests with real execution
├── samples/ # Test scripts
└── test_*.py
```
### Code Style Guidelines
This project follows:
- **Python 3.11+** syntax and features
- **Type hints** for all function signatures
- **Docstrings** in Google style for all public APIs
- **100 character** line length (configured in ruff.toml)
- **snake_case** for functions and variables
- **PascalCase** for classes
- **camelCase** for API field names, to match the OpenAPI contract (see the sketch below)
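One way to keep snake_case attribute names in Python while exposing camelCase field names over the API is Pydantic's alias generator. A minimal sketch; the model name and fields are illustrative, not the project's actual schema:
```python
from pydantic import BaseModel, ConfigDict
from pydantic.alias_generators import to_camel


class BreakpointRequest(BaseModel):
    """Illustrative model: snake_case in Python, camelCase on the wire."""

    model_config = ConfigDict(alias_generator=to_camel, populate_by_name=True)

    file_path: str
    line_number: int


# Serializing with by_alias=True yields {"filePath": "app.py", "lineNumber": 42}.
req = BreakpointRequest(file_path="app.py", line_number=42)
print(req.model_dump(by_alias=True))
```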
### Async Patterns (v0.2.0+)
The MCP server uses **async/await** via the official MCP SDK:
**Server Structure**:
```python
from mcp.server import Server
from mcp.server.stdio import stdio_server
class MCPServerV2:
def __init__(self, workspace_root: Path | None = None):
self.server = Server("python-debug")
self.session_manager = SessionManager(workspace_root)
async def run(self):
async with stdio_server() as (read_stream, write_stream):
await self.server.run(read_stream, write_stream, ...)
```
**Tool Handlers**:
```python
@self.server.list_tools()
async def handle_list_tools() -> list[Tool]:
return [...]
@self.server.call_tool()
async def handle_call_tool(name: str, arguments: dict) -> list[TextContent]:
# Call sync SessionManager methods via asyncio
result = await asyncio.to_thread(
self.session_manager.create_session, request
)
return [TextContent(type="text", text=json.dumps(result.model_dump()))]
```
**Key Patterns**:
- Use `@server.list_tools()` and `@server.call_tool()` decorators
- Wrap sync operations with `asyncio.to_thread()` for non-blocking execution
- Return `list[TextContent]` from tool handlers
- Use `async with stdio_server()` for I/O streams
- Run server with `asyncio.run(main())`
**Testing Async Code**:
```python
import pytest
@pytest.mark.asyncio
async def test_async_feature():
server = MCPServerV2()
result = await server.some_async_method()
assert result.success
```
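Note that the `@pytest.mark.asyncio` marker comes from the `pytest-asyncio` plugin; if it is not already in the dev dependencies, install it with `uv pip install pytest-asyncio`.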
### Adding New Features
Follow the SpecKit methodology:
1. **Read the spec**: Check `specs/001-python-debug-tool/spec.md`
2. **Check tasks**: Review `specs/001-python-debug-tool/tasks.md`
3. **Write tests first**: Create unit tests before implementation
4. **Implement feature**: Follow the task breakdown
5. **Verify quality**: Run `ruff check .` and `pytest`
6. **Update docs**: Keep README.md and this guide current
7. **Mark task complete**: Update tasks.md with `[X]`
### Testing Best Practices
**Unit Tests**:
- Fast (< 1s per test)
- No file I/O or network
- Mock external dependencies (see the sketch after this list)
- Test one thing per test
- Use pytest fixtures for setup
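For example, a unit test can stub out `subprocess` so nothing is actually executed. The `run_script` helper below is hypothetical and exists only to illustrate `monkeypatch`:
```python
import subprocess


def run_script(path: str) -> int:
    """Hypothetical helper that shells out to Python."""
    return subprocess.run(["python", path]).returncode


def test_run_script_without_a_real_process(monkeypatch):
    """Replace subprocess.run so the test never spawns a process."""

    class FakeCompleted:
        returncode = 0

    monkeypatch.setattr(subprocess, "run", lambda *args, **kwargs: FakeCompleted())
    assert run_script("whatever.py") == 0
```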
**Integration Tests**:
- Test real execution flows
- Use temporary directories (pytest's `tmp_path`)
- Mark with `@pytest.mark.integration`
- Clean up resources in teardown
**Example Test Structure**:
```python
import pytest
from pathlib import Path
@pytest.fixture
def sample_script(tmp_path):
"""Create a test script."""
script = tmp_path / "test.py"
script.write_text("x = 1\nprint(x)")
return script
def test_feature(sample_script):
"""Test description."""
# Arrange
...
# Act
...
# Assert
...
```
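An integration test follows the same shape but exercises a real subprocess and carries the `integration` marker. A minimal, self-contained sketch:
```python
import subprocess
import sys

import pytest


@pytest.mark.integration
def test_script_runs_end_to_end(tmp_path):
    """Write a sample script to a temporary directory and execute it for real."""
    script = tmp_path / "sample.py"
    script.write_text("print('hello')\n")

    result = subprocess.run(
        [sys.executable, str(script)],
        capture_output=True,
        text=True,
        timeout=20,
    )

    assert result.returncode == 0
    assert "hello" in result.stdout
```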
### Performance Considerations
When implementing debug features:
1. **Timeouts**: Default 20s, configurable via `DEFAULT_TIMEOUT_SECONDS`
2. **Output limits**: 32KB max via `MAX_OUTPUT_BYTES`
3. **Depth limits**: 2 levels via `MAX_DEPTH`
4. **Container limits**: 50 items via `MAX_CONTAINER_ITEMS`
5. **String limits**: 256 chars via `MAX_REPR_LENGTH`
These are defined in `src/mcp_debug_tool/utils.py`.
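To illustrate how such limits are typically applied, here is a hypothetical helper; only the constant name comes from `utils.py`, the function itself is not the project's code:
```python
# Hypothetical truncation helper. MAX_REPR_LENGTH mirrors the constant in
# src/mcp_debug_tool/utils.py; the actual truncation logic may differ.
MAX_REPR_LENGTH = 256


def truncate_repr(value: object, limit: int = MAX_REPR_LENGTH) -> str:
    """Return repr(value), cut to `limit` characters with a trailing ellipsis."""
    text = repr(value)
    if len(text) > limit:
        return text[: limit - 3] + "..."
    return text


print(len(truncate_repr("x" * 1000)))  # 256
```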
### Debugging the Debugger
**Enable verbose logging**:
```python
import logging
logging.basicConfig(level=logging.DEBUG)
```
**Use pdb to debug**:
```bash
python -m pdb -m pytest tests/unit/test_sessions.py
```
**Inspect subprocess output**:
Add temporary debug prints to the IPC message handling in `runner.py` to see what the debugger subprocess sends and receives.
### Common Issues
**Import errors**:
```bash
# Reinstall in editable mode
uv pip install -e .
```
**Test discovery issues**:
```bash
# Ensure __init__.py exists in test directories
touch tests/__init__.py
touch tests/unit/__init__.py
```
**Type checking errors**:
```bash
# Install type stubs if needed
uv pip install types-<package-name>
```
### Release Checklist
Before releasing a new version:
1. ✅ All tests pass: `pytest`
2. ✅ Code quality checks: `ruff check .`
3. ✅ Coverage ≥ 80%: `pytest --cov`
4. ✅ Documentation updated: README.md, CHANGELOG.md
5. ✅ Version bumped: `pyproject.toml`
6. ✅ Tasks marked complete: `specs/001-python-debug-tool/tasks.md`
### Resources
- [uv documentation](https://docs.astral.sh/uv/)
- [pytest documentation](https://docs.pytest.org/)
- [ruff documentation](https://docs.astral.sh/ruff/)
- [Pydantic documentation](https://docs.pydantic.dev/)
- [Python bdb module](https://docs.python.org/3/library/bdb.html)