mcp_call_gpu_available

Verify GPU availability in PyTorch or TensorFlow within a conda environment, check MPS support, and benchmark performance. This helps confirm that GPU acceleration via Apple's Metal framework is properly configured.

Instructions

Check whether a GPU is available to PyTorch (or TensorFlow) inside a specific conda environment. Input: "torch" or "tensorflow"; if no framework is provided, it defaults to torch. Returns a detailed dictionary with the following information:

- "torch_version": PyTorch version string
- "python_version": Python version string
- "platform": Platform information string
- "processor": Processor type
- "architecture": CPU architecture
- "mps_available": True if MPS (Metal Performance Shaders) is available
- "mps_built": True if PyTorch was built with MPS support
- "mps_functional": True if MPS is functional, False otherwise
- "benchmarks": A list of benchmark results for different matrix sizes, each containing:
  - "size": Matrix size used for the benchmark
  - "cpu_time": Time taken on CPU (seconds)
  - "mps_time": Time taken on MPS (seconds)
  - "speedup": Ratio of CPU time to MPS time (higher means MPS is faster)

This helps determine whether GPU acceleration via Apple's Metal is properly configured and functioning, with performance benchmarks for comparison.
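As a rough illustration, the kind of check and benchmark behind these fields can be sketched in plain PyTorch as follows. This is a minimal sketch, not the tool's actual implementation: the function name `gpu_report` and the benchmark sizes are hypothetical, and the real tool runs inside the conda environment you name.

```python
import platform
import time


def gpu_report(benchmark_sizes=(256, 1024)):
    """Collect platform and MPS info, mirroring the fields the tool returns."""
    info = {
        "python_version": platform.python_version(),
        "platform": platform.platform(),
        "processor": platform.processor(),
        "architecture": platform.machine(),
        "benchmarks": [],
    }
    try:
        import torch
    except ImportError:
        # PyTorch is not installed in this environment.
        info.update(torch_version=None, mps_available=False,
                    mps_built=False, mps_functional=False)
        return info

    info["torch_version"] = torch.__version__
    info["mps_available"] = torch.backends.mps.is_available()
    info["mps_built"] = torch.backends.mps.is_built()
    info["mps_functional"] = False

    if info["mps_available"]:
        try:
            for size in benchmark_sizes:
                x = torch.randn(size, size)
                t0 = time.perf_counter()
                x @ x
                cpu_time = time.perf_counter() - t0

                xm = x.to("mps")
                xm @ xm                      # warm-up run
                torch.mps.synchronize()
                t0 = time.perf_counter()
                xm @ xm
                torch.mps.synchronize()      # wait for the GPU to finish
                mps_time = time.perf_counter() - t0

                info["benchmarks"].append({
                    "size": size,
                    "cpu_time": cpu_time,
                    "mps_time": mps_time,
                    "speedup": cpu_time / mps_time,
                })
            info["mps_functional"] = True
        except RuntimeError:
            # MPS reported as available but a kernel failed at runtime.
            info["mps_functional"] = False
    return info


print(gpu_report())
```

Note the explicit `torch.mps.synchronize()` calls: MPS kernels launch asynchronously, so without synchronizing, the timer would only measure kernel launch overhead rather than actual compute time.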

Input Schema

Name      | Required | Description                     | Default
--------- | -------- | ------------------------------- | -------
env_name  | Yes      | Conda environment to check      |
framework | No       | "torch" or "tensorflow"         | torch
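A call to this tool might therefore supply arguments like the following (the environment name "ml-env" is a hypothetical example; `framework` may be omitted to get the default of torch):

```json
{
  "env_name": "ml-env",
  "framework": "torch"
}
```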


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/zhongmingyuan/mcp-my-mac'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.