# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
GPU-accelerated MCP (Model Context Protocol) servers for computational mathematics, physics simulations, and machine learning. The system provides 4 specialized MCP servers that enable AI assistants to perform real scientific computing.
## Build & Development Commands
```bash
# Install all dependencies
uv sync --all-extras
# Install MCP servers in editable mode (required for entry points)
uv pip install --python .venv/bin/python \
  -e servers/math-mcp \
  -e servers/quantum-mcp \
  -e servers/molecular-mcp \
  -e servers/neural-mcp
# Run tests (CPU only)
uv run pytest -m "not gpu"
# Run all tests including GPU
uv run pytest
# Run tests for specific server
uv run pytest servers/math-mcp/tests/
# Run single test
uv run pytest servers/math-mcp/tests/test_symbolic.py::test_solve_quadratic
# Lint and format
uv run ruff check --fix .
uv run ruff format .
# Type checking
uv run mypy shared/ servers/
# Run pre-commit hooks
uv run pre-commit run --all-files
# Run with coverage
uv run pytest --cov=shared --cov=servers
```
## Architecture
### Workspace Structure
This is a uv workspace with 6 packages:
- **servers/**: 4 MCP server implementations (math-mcp, quantum-mcp, molecular-mcp, neural-mcp)
- **shared/**: 2 shared packages (mcp-common, compute-core)
### Shared Packages
- **mcp-common**: GPUManager (CUDA/CPU backend selection; see the sketch below), TaskManager (async task handling), Config (KDL configuration)
- **compute-core**: Unified NumPy/CuPy array interface, FFT operations, linear algebra
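The actual GPUManager and compute-core interfaces live in `shared/`; the following is only a minimal sketch of the CUDA/CPU fallback pattern they implement (the names `_HAS_GPU` and `get_array_module` are illustrative, not the real API):
```python
# Illustrative CUDA/CPU backend selection (not the actual GPUManager interface).
import numpy as np

try:
    import cupy as cp  # GPU backend, if CuPy and a CUDA device are available
    _HAS_GPU = cp.cuda.runtime.getDeviceCount() > 0
except Exception:
    cp = None
    _HAS_GPU = False


def get_array_module():
    """Return CuPy when a CUDA device is usable, otherwise NumPy."""
    return cp if _HAS_GPU else np
```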
### MCP Server Pattern
Each server follows this structure in `servers/<name>/src/<name>/server.py`:
```python
from typing import Any
from uuid import uuid4

import numpy as np
from mcp.server import Server

mcp = Server("server-name")

# Storage for stateful objects (avoid passing large arrays through the protocol)
_results: dict[str, np.ndarray] = {}


@mcp.tool()
async def tool_name(param: str) -> dict[str, Any]:
    """Tool description."""
    result = ...  # compute the (potentially large) result here
    result_id = f"result://{uuid4()}"
    _results[result_id] = result
    return {"result_id": result_id}
```
### URI-Based References
Large data is stored by reference to minimize token usage (a sketch of the store/resolve pattern follows the list):
- `array://uuid`, `potential://uuid`, `system://uuid`, `trajectory://uuid`, `model://uuid`, `simulation://uuid`
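As a rough illustration of how a tool hands out and later resolves one of these references (the helper names below are hypothetical, not part of any server):
```python
# Hypothetical store/resolve helpers for array:// references.
from uuid import uuid4

import numpy as np

_arrays: dict[str, np.ndarray] = {}


def store_array(arr: np.ndarray) -> str:
    """Store an array and return an array:// reference instead of the raw data."""
    ref = f"array://{uuid4()}"
    _arrays[ref] = arr
    return ref


def resolve_array(ref: str) -> np.ndarray:
    """Look up the array previously stored under an array:// reference."""
    return _arrays[ref]
```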
## Demo Generation
**The demos showcase what users can expect when using the MCP servers through an AI assistant.** They can ONLY be generated by prompting an LLM with the MCP tools enabled - this is the entire point. Do not create standalone scripts that bypass the MCP servers.
Use `claude -p` for one-shot generation:
```bash
claude -p "Create a double-slit interference demo with sensor line at x=220, save to /tmp/demo.gif"
```
Or interactively:
```bash
claude
> Simulate two galaxies colliding and render to /tmp/galaxy.gif
```
### Critical Demo Parameters (from lessons learned)
**Slit Experiments:**
- Wavefunction velocity is HALF the momentum value (use momentum=0.2)
- Slits at x=85 (1/3 of 256 grid), NOT halfway
- Wavepacket width=35, time_steps=1400, dt=0.1
- Sensor line at x=220 with fixed scale
**Bragg Scattering:**
- Use tight Gaussian point centers (width=3), NOT cosine waves
- Lattice spacing=25, depth=100
**Galaxy Collision:**
- View bounds from INITIAL frame only (prevents postage stamp effect)
- Slow approach velocity (0.15) for merge near end
- Per-particle colors: blue=#4da6ff, red=#ff6b6b
## Code Style
- Line length: 100 characters
- Type hints required for all functions
- All MCP tool functions must be async
- Use Google-style docstrings
- Conventional commits: `feat(math-mcp):`, `fix(quantum-mcp):`, `docs:`, etc.
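A hypothetical tool illustrating these conventions (the function, its parameters, and the `array://` reference are examples only, not part of any server):
```python
from typing import Any


async def analyze_spectrum(array_id: str, window: str = "hann") -> dict[str, Any]:
    """Compute a power spectrum for a stored array.

    Args:
        array_id: ``array://`` reference to a previously stored signal.
        window: Window function applied before the FFT.

    Returns:
        Dict with an ``array://`` reference to the computed spectrum.
    """
    ...
```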
## Test Markers
```python
@pytest.mark.gpu # Requires CUDA
@pytest.mark.slow # Long-running
@pytest.mark.integration # Cross-component
@pytest.mark.benchmark # Performance tests
```