check
Verify available quantization backends and hardware compatibility for model compression. Reports installed engines (GGUF/GPTQ/AWQ), GPU support, and system resources.
Instructions
Check available quantization backends on this system.
Reports which quantization engines (GGUF/GPTQ/AWQ) are installed, whether PyTorch and transformers are available, GPU information (CUDA or Apple MPS), and system RAM.
No arguments required. Lightweight system check.
Returns: Dictionary of available backends and hardware info.
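The returned dictionary has four top-level keys: `backends`, `core_dependencies`, `hardware`, and `server_version`, mirroring the handler in the Implementation Reference. A sketch of the shape (the values here are illustrative placeholders, not real output from any system):

```python
# Illustrative shape of the `check` tool's return value; actual values
# depend on the host system. Version strings below are made up.
example_response = {
    "backends": {
        "gguf": {"available": False, "install": "pip install llama-cpp-python"},
        "gptq": {"available": False, "install": "pip install auto-gptq datasets"},
        "awq": {"available": False, "install": "pip install autoawq"},
    },
    "core_dependencies": {
        "torch": {"available": True, "version": "2.3.0", "install": "pip install torch"},
        "transformers": {"available": True, "version": "4.41.0", "install": "pip install transformers"},
    },
    "hardware": {
        "platform": "Linux",
        "arch": "x86_64",
        "system_ram_gb": 32,
        "accelerator": "cpu",  # "cuda" or "mps" when a GPU is detected
    },
    "server_version": "0.1.0",
}
```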
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| *(none)* | | No arguments required. | |
Output Schema
| Name | Required | Description | Default |
|---|---|---|---|
| backends | yes | Availability and install command for each quantization engine (gguf, gptq, awq) | |
| core_dependencies | yes | Availability and version of torch and transformers | |
| hardware | yes | Platform, architecture, system RAM, and GPU/accelerator details | |
| server_version | yes | Version of the TurboQuant MCP server | |
Implementation Reference
- mcp_turboquant/server.py:90-153 (handler): The `check` tool handler, which checks available quantization backends, PyTorch/transformers versions, and hardware information. It calls `check_dependencies` to gather system details.
```python
def check() -> dict[str, Any]:
    """Check available quantization backends on this system.

    Reports which quantization engines (GGUF/GPTQ/AWQ) are installed,
    whether PyTorch and transformers are available, GPU information
    (CUDA or Apple MPS), and system RAM.

    No arguments required. Lightweight system check.

    Returns:
        Dictionary of available backends and hardware info.
    """
    deps = check_dependencies()
    backends = {
        "gguf": {
            "available": deps.get("gguf", False),
            "install": "pip install llama-cpp-python",
        },
        "gptq": {
            "available": deps.get("gptq", False),
            "install": "pip install auto-gptq datasets",
        },
        "awq": {
            "available": deps.get("awq", False),
            "install": "pip install autoawq",
        },
    }
    hardware = {
        "platform": deps.get("platform", "unknown"),
        "arch": deps.get("arch", "unknown"),
        "system_ram_gb": deps.get("system_ram_gb", 0),
    }
    if deps.get("cuda"):
        hardware["gpu"] = deps.get("gpu_name", "CUDA GPU")
        hardware["gpu_mem_gb"] = deps.get("gpu_mem_gb", 0)
        hardware["accelerator"] = "cuda"
    elif deps.get("mps"):
        hardware["accelerator"] = "mps"
        hardware["gpu"] = "Apple Silicon (Metal Performance Shaders)"
    else:
        hardware["accelerator"] = "cpu"
    core = {
        "torch": {
            "available": deps.get("torch", False),
            "version": deps.get("torch_version", None),
            "install": "pip install torch",
        },
        "transformers": {
            "available": deps.get("transformers", False),
            "version": deps.get("transformers_version", None),
            "install": "pip install transformers",
        },
    }
    return {
        "backends": backends,
        "core_dependencies": core,
        "hardware": hardware,
        "server_version": __version__,
    }
```
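The handler delegates all probing to `check_dependencies`, which is not shown in this reference. A minimal sketch of what such a helper might look like, using only the standard library plus optional torch/transformers imports; the real implementation in mcp_turboquant/server.py may differ (for example, it may use psutil for RAM):

```python
import importlib.util
import os
import platform
from typing import Any


def check_dependencies() -> dict[str, Any]:
    """Hypothetical sketch of the dependency/hardware probe.

    Returns the flat dict of keys that the `check` handler reads via
    deps.get(...): backend flags, torch/transformers info, platform,
    arch, RAM, and accelerator details.
    """

    def installed(module: str) -> bool:
        # A backend counts as available if its module can be imported.
        return importlib.util.find_spec(module) is not None

    deps: dict[str, Any] = {
        "gguf": installed("llama_cpp"),
        "gptq": installed("auto_gptq"),
        "awq": installed("awq"),
        "torch": installed("torch"),
        "transformers": installed("transformers"),
        "platform": platform.system(),
        "arch": platform.machine(),
        "system_ram_gb": 0,  # filled in below where sysconf is available
    }
    if hasattr(os, "sysconf") and "SC_PAGE_SIZE" in os.sysconf_names:
        deps["system_ram_gb"] = round(
            os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1024**3, 1
        )
    if deps["torch"]:
        import torch

        deps["torch_version"] = torch.__version__
        deps["cuda"] = torch.cuda.is_available()
        mps = getattr(torch.backends, "mps", None)
        deps["mps"] = mps is not None and mps.is_available()
        if deps["cuda"]:
            deps["gpu_name"] = torch.cuda.get_device_name(0)
            props = torch.cuda.get_device_properties(0)
            deps["gpu_mem_gb"] = round(props.total_memory / 1024**3, 1)
    if deps["transformers"]:
        import transformers

        deps["transformers_version"] = transformers.__version__
    return deps
```

Because every backend lookup goes through `deps.get(...)` with a default, the handler degrades gracefully when any of these keys are missing.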