
RunPod MCP Server

by runpod
MIT License
  • Apple
  • Linux

create-endpoint

Create scalable GPU or CPU endpoints on RunPod by specifying template configurations, worker counts, and compute resources for deploying containerized applications.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| computeType | No | GPU or CPU endpoint | |
| dataCenterIds | No | List of data centers | |
| gpuCount | No | Number of GPUs per worker | |
| gpuTypeIds | No | List of acceptable GPU types | |
| name | No | Name for the endpoint | |
| templateId | Yes | Template ID to use | |
| workersMax | No | Maximum number of workers | |
| workersMin | No | Minimum number of workers | |

Input Schema (JSON Schema)

```json
{
  "properties": {
    "computeType": {
      "description": "GPU or CPU endpoint",
      "enum": ["GPU", "CPU"],
      "type": "string"
    },
    "dataCenterIds": {
      "description": "List of data centers",
      "items": { "type": "string" },
      "type": "array"
    },
    "gpuCount": {
      "description": "Number of GPUs per worker",
      "type": "number"
    },
    "gpuTypeIds": {
      "description": "List of acceptable GPU types",
      "items": { "type": "string" },
      "type": "array"
    },
    "name": {
      "description": "Name for the endpoint",
      "type": "string"
    },
    "templateId": {
      "description": "Template ID to use",
      "type": "string"
    },
    "workersMax": {
      "description": "Maximum number of workers",
      "type": "number"
    },
    "workersMin": {
      "description": "Minimum number of workers",
      "type": "number"
    }
  },
  "required": ["templateId"],
  "type": "object"
}
```
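For illustration, a minimal sketch of an arguments object that satisfies this schema is shown below. The template ID, endpoint name, and GPU type string are placeholders, not real RunPod resources; only `templateId` is required, and the remaining fields are optional per the schema above.

```python
import json

# Illustrative create-endpoint arguments; every value here is a placeholder.
args = {
    "templateId": "my-template-id",            # required by the schema
    "name": "example-endpoint",                # optional display name
    "computeType": "GPU",                      # must be "GPU" or "CPU"
    "gpuTypeIds": ["NVIDIA GeForce RTX 4090"], # acceptable GPU types
    "gpuCount": 1,                             # GPUs per worker
    "workersMin": 0,                           # scale to zero when idle
    "workersMax": 3,                           # cap concurrent workers
}

# The schema's only required property is templateId.
assert "templateId" in args

print(json.dumps(args, indent=2))
```

Setting `workersMin` to 0 lets the endpoint scale down completely when idle, while `workersMax` bounds cost under load; both are optional knobs in the schema.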

MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/runpod/runpod-mcp'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.