Server Configuration

Describes the environment variables required to run the server.

Name | Required | Description | Default

No arguments

Capabilities

Features and capabilities supported by this server

Capability | Details
tools | {}

Tools

Functions exposed to the LLM to take actions

Name | Description
gpu_status

One-shot summary of all AMD GPUs: product name, GPU utilization %, VRAM used/total bytes and %, edge/junction/memory temperatures, average and max power, fan speed % and RPM. Returns one entry per card. Fields that the card does not support are returned as null (not omitted). (See the example client call after this tool list.)

gpu_metrics

Full rocm-smi -a --json output for every GPU (clocks, voltages, PCIe link width/speed, firmware versions, per-engine activity, throttle status, energy counters). Use when gpu_status is not enough. The shape is rocm-smi’s native JSON, unmodified.

gpu_processes

List compute processes using the GPU (KFD PIDs) with their VRAM usage and card index. Returns an empty list when no compute workloads are running.

gpu_watch

Take N snapshots of gpu_status at a fixed interval and return both the raw frames and per-card min/max/avg statistics for utilization, temperature, power, and VRAM usage. Useful for answering “is this training run stable?”. Default: 5 samples at 1000ms intervals.

rocm_info

Report the rocm-smi version, kernel driver version, whether the amdgpu module is loaded, installed ROCm/HIP/HSA packages (from dpkg), and whether amdgpu_top is available. Useful for checking ROCm install health before running workloads.
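
As a quick orientation, the sketch below shows how an MCP client could call these tools over stdio using the MCP Python SDK. The launch command ("claude-rocm-mcp") and the choice of the stdio transport are assumptions made for the example, not details taken from this page; substitute whatever command you actually use to run the server.

# Hedged sketch: calling gpu_status and gpu_watch from an MCP client.
# The "claude-rocm-mcp" launch command below is hypothetical.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    params = StdioServerParameters(command="claude-rocm-mcp", args=[])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # One-shot summary of every AMD GPU; unsupported fields come back as null.
            status = await session.call_tool("gpu_status", arguments={})
            print(status.content)

            # gpu_watch defaults to 5 samples at 1000 ms intervals,
            # so no arguments are required for a basic stability check.
            watch = await session.call_tool("gpu_watch", arguments={})
            print(watch.content)


asyncio.run(main())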

Prompts

Interactive templates invoked by user choice

Name | Description

No prompts

Resources

Contextual data attached and managed by the client

Name | Description

No resources

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/LukeLamb/claude-rocm-mcp'
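
The same lookup from Python, as a minimal sketch. The response is assumed to be JSON; its exact schema is not documented on this page, so the example simply pretty-prints whatever is returned.

# Hedged sketch: fetch this server's directory entry and pretty-print it.
import json
import urllib.request

URL = "https://glama.ai/api/mcp/v1/servers/LukeLamb/claude-rocm-mcp"

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))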

If you have feedback or need assistance with the MCP directory API, please join our Discord server.