mcp_call_gpu_available
Verify GPU availability in PyTorch or TensorFlow within a conda environment, check MPS support, and benchmark performance. Ensures GPU acceleration via Apple's Metal framework is properly set up.
Instructions
Check whether a GPU is available in PyTorch (or TensorFlow) for a specific conda environment.
Input: "torch" or "tensorflow".
If framework is not provided, it defaults to "torch".
Returns a detailed dictionary with the following information:
- "torch_version": PyTorch version string
- "python_version": Python version string
- "platform": Platform information string
- "processor": Processor type
- "architecture": CPU architecture
- "mps_available": True if MPS (Metal Performance Shaders) is available
- "mps_built": True if PyTorch was built with MPS support
- "mps_functional": True if MPS is functional, False otherwise
- "benchmarks": A list of benchmark results for different matrix sizes, each containing:
  - "size": Matrix size used for the benchmark
  - "cpu_time": Time taken on the CPU (seconds)
  - "mps_time": Time taken on MPS (seconds)
  - "speedup": Ratio of CPU time to MPS time (higher means MPS is faster)
This helps determine if GPU acceleration via Apple's Metal is properly configured
and functioning, with performance benchmarks for comparison.
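The check this tool performs can be sketched in PyTorch directly. This is a minimal illustration, not the tool's actual implementation: the function names and matrix sizes are illustrative, and it assumes a recent PyTorch (`torch.backends.mps` requires 1.12+, `torch.mps.synchronize` roughly 2.0+).

```python
# Sketch of an MPS availability check and benchmark, assuming recent PyTorch.
# Field names mirror the dictionary this tool returns.
import platform
import time

try:
    import torch
except ImportError:  # PyTorch not installed in this environment
    torch = None


def mps_functional():
    """Return True if a trivial op actually runs on the MPS device."""
    if not (torch and torch.backends.mps.is_available()):
        return False
    try:
        torch.ones(1, device="mps") * 2
        return True
    except RuntimeError:
        return False


def gpu_report(sizes=(512, 1024)):
    report = {
        "torch_version": torch.__version__ if torch else None,
        "python_version": platform.python_version(),
        "platform": platform.platform(),
        "processor": platform.processor(),
        "architecture": platform.machine(),
        "mps_available": bool(torch and torch.backends.mps.is_available()),
        "mps_built": bool(torch and torch.backends.mps.is_built()),
        "mps_functional": mps_functional(),
        "benchmarks": [],
    }
    if report["mps_functional"]:
        for n in sizes:
            a = torch.rand(n, n)
            start = time.perf_counter()
            a @ a  # matrix multiply on the CPU
            cpu_time = time.perf_counter() - start

            m = a.to("mps")
            start = time.perf_counter()
            m @ m  # same multiply on the Metal GPU
            torch.mps.synchronize()  # block until the Metal kernels finish
            mps_time = time.perf_counter() - start

            report["benchmarks"].append({
                "size": n,
                "cpu_time": cpu_time,
                "mps_time": mps_time,
                "speedup": cpu_time / mps_time,  # > 1 means MPS is faster
            })
    return report
```

Note the explicit `torch.mps.synchronize()`: MPS kernels launch asynchronously, so without it the timer would stop before the GPU work actually completes.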
Input Schema
| Name | Required | Description | Default |
| --- | --- | --- | --- |
| env_name | Yes | Name of the conda environment to check | |
| framework | No | "torch" or "tensorflow" | torch |
Input Schema (JSON Schema)
```json
{
  "properties": {
    "env_name": {
      "title": "Env Name",
      "type": "string"
    },
    "framework": {
      "default": "torch",
      "title": "Framework",
      "type": "string"
    }
  },
  "required": [
    "env_name"
  ],
  "title": "mcp_call_gpu_availableArguments",
  "type": "object"
}
```
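For example, a call targeting a conda environment named `ml-env` (a hypothetical name used here for illustration) would pass arguments like:

```json
{
  "env_name": "ml-env",
  "framework": "torch"
}
```

Since `framework` defaults to `"torch"`, passing only `env_name` is equivalent.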