mcp-my-mac

mcp_call_gpu_available

Verify GPU availability and configuration in PyTorch or TensorFlow within a conda environment. Includes detailed version, platform, and benchmark data to assess Metal Performance Shaders (MPS) functionality and acceleration performance.

Instructions

Check whether a GPU is available to a given framework inside a specific conda environment. The framework input accepts "torch" or "tensorflow"; if it is not provided, it defaults to "torch". Returns a detailed dictionary with the following information:

- "torch_version": PyTorch version string
- "python_version": Python version string
- "platform": Platform information string
- "processor": Processor type
- "architecture": CPU architecture
- "mps_available": True if MPS (Metal Performance Shaders) is available
- "mps_built": True if PyTorch was built with MPS support
- "mps_functional": True if MPS is functional, False otherwise
- "benchmarks": A list of benchmark results for different matrix sizes, each containing:
  - "size": Matrix size used for the benchmark
  - "cpu_time": Time taken on CPU (seconds)
  - "mps_time": Time taken on MPS (seconds)
  - "speedup": Ratio of CPU time to MPS time (higher means MPS is faster)

This helps determine whether GPU acceleration via Apple's Metal is properly configured and functioning, with performance benchmarks for comparison; a sketch of this kind of check follows the list above.
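For reference, here is a minimal Python sketch of the kind of MPS check and CPU-versus-MPS matrix-multiplication benchmark the output above describes. It assumes PyTorch is installed in the target environment; the function name and matrix sizes are illustrative, not part of the tool itself.

```python
import platform
import time

import torch


def check_mps(sizes=(512, 1024, 2048)):
    """Illustrative MPS availability check and CPU-vs-MPS matmul benchmark."""
    info = {
        "torch_version": torch.__version__,
        "python_version": platform.python_version(),
        "platform": platform.platform(),
        "processor": platform.processor(),
        "architecture": platform.machine(),
        "mps_available": torch.backends.mps.is_available(),
        "mps_built": torch.backends.mps.is_built(),
        "benchmarks": [],
    }
    if not info["mps_available"]:
        info["mps_functional"] = False
        return info

    try:
        # A tiny op on the MPS device confirms the backend actually works.
        torch.ones(1, device="mps")
        info["mps_functional"] = True
    except RuntimeError:
        info["mps_functional"] = False
        return info

    for size in sizes:
        a = torch.rand(size, size)
        b = torch.rand(size, size)

        # Time the matmul on CPU.
        start = time.perf_counter()
        _ = a @ b
        cpu_time = time.perf_counter() - start

        # Time the same matmul on the MPS device.
        a_mps, b_mps = a.to("mps"), b.to("mps")
        start = time.perf_counter()
        _ = a_mps @ b_mps
        torch.mps.synchronize()  # wait for the asynchronous MPS kernel to finish
        mps_time = time.perf_counter() - start

        info["benchmarks"].append({
            "size": size,
            "cpu_time": cpu_time,
            "mps_time": mps_time,
            "speedup": cpu_time / mps_time,
        })
    return info
```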

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| env_name | Yes | Name of the conda environment to check | |
| framework | No | Framework to check: "torch" or "tensorflow" | torch |

Input Schema (JSON Schema)

{ "properties": { "env_name": { "title": "Env Name", "type": "string" }, "framework": { "default": "torch", "title": "Framework", "type": "string" } }, "required": [ "env_name" ], "title": "mcp_call_gpu_availableArguments", "type": "object" }
