# Optimization & Resource Detection
This document describes the runtime optimization and resource-detection features added to domin8.
## Overview
- `domin8.resources` provides functions for detecting CPU cores, process affinity, and GPUs.
- `domin8.config.OptimizationConfig` exposes repo-level tuning parameters, loaded from `config/optimization.toml` or `config/optimization.json`.
- `domin8.tools.import_graph` uses process-based parallelism to speed up repository indexing when enabled.
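For illustration, here is a minimal sketch of the detect-then-parallelize pattern described above, using only the standard library. The helper names (`detect_max_workers`, `index_one`) are hypothetical placeholders, not the actual `domin8.resources` or `domin8.tools.import_graph` API:

```python
# Sketch of affinity-aware worker detection feeding a process pool.
# Standard-library only; the helper names are illustrative, not domin8's API.
import multiprocessing
import os

def detect_max_workers() -> int:
    """Prefer the process's CPU affinity mask; fall back to the CPU count."""
    try:
        return len(os.sched_getaffinity(0))  # honors taskset/cgroup limits (Linux)
    except AttributeError:
        return os.cpu_count() or 1  # sched_getaffinity is unavailable on macOS/Windows

def index_one(path: str) -> tuple[str, int]:
    """Stand-in for a per-file indexing task (here: just the byte length)."""
    with open(path, "rb") as fh:
        return path, len(fh.read())

if __name__ == "__main__":
    paths = ["pyproject.toml", "README.md"]
    with multiprocessing.Pool(processes=detect_max_workers()) as pool:
        for path, size in pool.imap_unordered(index_one, paths, chunksize=1):
            print(path, size)
```

Using the affinity mask rather than the raw CPU count matters in containers and CI, where a process may be restricted to a subset of the machine's cores.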
## Configuration
You can add `config/optimization.json` (or the equivalent `config/optimization.toml`) to the repository with the following options (a sample file follows the list):
- `enable_parallel_indexing` (bool, default true): Run index builds and test-reference checks in parallel.
- `max_workers` (int or null): Number of worker processes to use. If `null`, auto-detected from CPU affinity or number of CPUs.
- `chunksize` (int, default 1): Chunk size passed to some parallel map operations (currently used in only a few places).
- `maxtasksperchild` (int or null): If provided, passed to process pool implementations for worker recycling.
- `use_gpu_auto` (bool, default true): Attempt to enable GPU-backed ML acceleration if a GPU is detected.
- `auto_install_deps` (bool, default false): When true and a GPU is present, the MCP server will attempt to install user-specified ML packages via `uv add --upgrade`.
- `preferred_gpu_backend` (string or null): Preferred ML backend (e.g., "torch").
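As a reference, a `config/optimization.json` spelling out the documented defaults might look like the following. `max_workers` and `maxtasksperchild` are left as `null` for auto-detection; `preferred_gpu_backend` is shown as `null` here, though it could be set to a backend name such as `"torch"`:

```json
{
  "enable_parallel_indexing": true,
  "max_workers": null,
  "chunksize": 1,
  "maxtasksperchild": null,
  "use_gpu_auto": true,
  "auto_install_deps": false,
  "preferred_gpu_backend": null
}
```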
## Notes on GPU
GPU support is only relevant to ML-based components (e.g., the ML classifiers used in heuristics). CPU-bound parsing and AST tasks benefit far more from parallel CPU execution than from GPUs.
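As a sketch of what a GPU auto-detection probe might look like (the actual `domin8.resources` implementation may differ), one can combine a cheap check for the NVIDIA driver tool with an optional `torch` probe:

```python
# Hedged sketch of GPU detection: checks for the NVIDIA driver tool first,
# then probes torch only if it happens to be installed. Not domin8's actual code.
import importlib.util
import shutil

def gpu_available() -> bool:
    """Best-effort check: NVIDIA driver tool present, or torch sees CUDA."""
    if shutil.which("nvidia-smi") is not None:
        return True
    if importlib.util.find_spec("torch") is not None:
        import torch  # imported lazily so torch stays an optional dependency
        return torch.cuda.is_available()
    return False

if __name__ == "__main__":
    print("GPU detected:", gpu_available())
```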
Automatic dependency installation is opt-in and controlled via `auto_install_deps` and the MCP server configuration.