# Katamari MCP - Agent Development Guide
## Build/Test Commands
- `poetry install` - Install dependencies
- `python -m katamari_mcp.server` - Start server
- `pytest` - Run all tests
- `pytest tests/test_adaptive_learning.py` - Run Phase 2 adaptive learning tests
- `pytest tests/test_specific.py` - Run single test file
- `pytest -k "test_name"` - Run tests matching a keyword expression
- `poetry run ruff check .` - Lint code
- `poetry run black .` - Format code
- `poetry run mypy .` - Type checking
## Code Style Guidelines
- **Python 3.9+** with type hints required
- **Async/await** for all I/O operations
- **Pydantic models** for data validation
- **Environment isolation** - each capability in separate venv
- **Error handling** - fail fast with detailed error messages
- **Naming**: snake_case for functions/variables, PascalCase for classes
- **Imports**: group stdlib, third-party, local imports
- **Documentation**: docstrings for all public functions/classes
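The conventions above can be sketched in one hypothetical module; `FetchRequest` and `fetch_resource` are illustrative names, not part of the codebase:

```python
# Illustrative module: grouped imports, type hints, async I/O,
# Pydantic validation, snake_case/PascalCase naming, docstrings.
import asyncio  # stdlib imports first
from typing import Optional

from pydantic import BaseModel, Field  # third-party imports second
# Local imports (e.g. from katamari_mcp.*) would follow here.


class FetchRequest(BaseModel):
    """Validated input for a fetch operation."""

    url: str = Field(..., min_length=1)
    timeout_s: float = Field(5.0, gt=0)


async def fetch_resource(request: FetchRequest) -> Optional[str]:
    """Fetch a resource, failing fast with a detailed error message."""
    try:
        await asyncio.sleep(0)  # stand-in for real async I/O
        return f"fetched:{request.url}"
    except OSError as exc:
        raise RuntimeError(f"fetch failed for {request.url!r}: {exc}") from exc
```

Pydantic raises on invalid input at construction time, so every capability entry point gets validation for free.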
## Architecture Rules
- **Capabilities** are isolated modules in `capabilities/` directory
- **Router** handles all call routing via a tiny LLM
- **ACP System** - self-modification through heuristic governance
- **Adaptive Learning** - dynamic improvement through feedback loops
- **Security** - all code must pass validation before integration
- **Stateless** by default, opt-in state with explicit tracking
- **Asset provenance** - track origin of every component
### ACP Development Guidelines
- **Heuristic Tags** - all operations use the 7-tag safety system
- **Feedback Collection** - integrate with adaptive learning components
- **Performance Tracking** - monitor capability health and trends
- **Parallel Testing** - core/security changes require isolated testing
- **Git Integration** - all ACP changes tracked with metadata
## Testing Requirements
- **Isolation** - each capability tested in separate environment
- **Async tests** - use pytest-asyncio for async functions
- **Coverage** - maintain >90% test coverage
- **Integration tests** - test capability interactions
- **Adaptive Learning Tests** - test feedback loops and learning cycles
- **Performance Tests** - validate tracking and analytics functionality
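An async test following the requirements above might look like this; `run_capability` is a stand-in, and a real test would import the capability under test:

```python
# Hypothetical pytest-asyncio test; the capability call is a stub.
import asyncio

import pytest


async def run_capability(payload: dict) -> dict:
    """Stand-in for an isolated capability invocation."""
    await asyncio.sleep(0)  # simulate async work
    return {"status": "ok", **payload}


@pytest.mark.asyncio
async def test_capability_returns_ok() -> None:
    result = await run_capability({"task": "demo"})
    assert result["status"] == "ok"
```

The `@pytest.mark.asyncio` marker lets pytest drive the coroutine on an event loop instead of skipping it.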
## Security Guidelines
- **Sandboxed execution** for all capabilities
- **Package validation** before adding to ecosystem
- **No hardcoded secrets** - use environment variables
- **Input validation** using Pydantic models
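A minimal sketch of the secrets and validation rules; `KATAMARI_API_KEY`, `ToolInput`, and `parse_input` are assumed names, not defined by the project:

```python
# Secrets from the environment, untrusted input through Pydantic.
import os

from pydantic import BaseModel, Field, ValidationError


class ToolInput(BaseModel):
    """Validate untrusted input before it reaches any capability."""

    command: str = Field(..., min_length=1, max_length=256)
    args: list[str] = Field(default_factory=list)


def get_api_key() -> str:
    """Read the secret from the environment; fail fast if it is missing."""
    key = os.environ.get("KATAMARI_API_KEY")  # assumed variable name
    if not key:
        raise RuntimeError("KATAMARI_API_KEY is not set")
    return key


def parse_input(raw: dict) -> ToolInput:
    """Reject malformed input with a detailed error instead of guessing."""
    try:
        return ToolInput(**raw)
    except ValidationError as exc:
        raise ValueError(f"invalid tool input: {exc}") from exc
```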