
Novita MCP Server

Official
by novitalabs

create-gpu-instance

Deploy GPU instances on the Novita AI platform by specifying configurations like GPU count, container image, and storage to run compute-intensive workloads.

Input Schema

name (required): Name for the instance. Must contain only numbers, letters, and hyphens.
productId (required): ID of the product used to deploy the instance. The availableGpuNumber of the product must be greater than 0. You can use the `list-products` tool to get or check the product ID if needed. Before calling the MCP tool, MUST show me the details of the product to help me identify it, including name, price, etc.
kind (optional, default "gpu"): Type of the instance.
gpuNum (required): Number of GPUs allocated to the instance. The availableGpuNumber of the product must be greater than or equal to the gpuNum.
rootfsSize (required): Root filesystem size (container disk size) in GB. Free tier includes 60GB.
imageUrl (required): Docker image URL to initialize the instance.
imageAuthId (optional): ID of the container registry auth. Required only when the imageUrl is private. You can use the `list-container-registry-auths` tool to check the ID if you're not sure.
command (optional): Container start command to run when the instance starts.
ports (optional): Ports to expose (e.g., '8888/http', '22/tcp'), separated by commas if multiple. Maximum of 10 ports.
env (optional): Environment variables.
networkStorages (optional): Network storages to mount.
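
As a rough illustration (not taken from the server's documentation), a call to create-gpu-instance might pass arguments shaped like the sketch below. The product ID, image URL, and network storage ID are placeholders; in practice they come from the `list-products` and `list-network-storage` tools (and `list-container-registry-auths` if a private image requires an imageAuthId).

    // Hypothetical arguments for a create-gpu-instance call; IDs are placeholders.
    const exampleArgs = {
      name: "llm-finetune-01",                      // letters, numbers, and hyphens only
      productId: "example-product-id",              // from the list-products tool
      kind: "gpu",                                  // only "gpu" is accepted (also the default)
      gpuNum: 2,                                    // must not exceed the product's availableGpuNumber
      rootfsSize: 60,                               // container disk size in GB (minimum 10)
      imageUrl: "docker.io/pytorch/pytorch:latest", // public image, so no imageAuthId needed
      ports: "8888/http,22/tcp",                    // comma-separated, at most 10 ports
      env: [{ key: "HF_HOME", value: "/workspace/hf" }],
      networkStorages: [
        { Id: "example-storage-id", mountPoint: "/workspace" }, // Id from list-network-storage
      ],
    };

An object of this shape would be rejected by the schema above if, for example, gpuNum were 0 or rootfsSize were below 10.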

Implementation Reference

  • The handler function for the "create-gpu-instance" tool. It sends a POST request to the Novita API's /gpu/instance/create endpoint using the novitaRequest helper and returns the result as a formatted text content block.
    }, async (params) => {
      const result = await novitaRequest("/gpu/instance/create", "POST", params);
      return {
        content: [
          {
            type: "text",
            text: JSON.stringify(result, null, 2),
          },
        ],
      };
    });
  • Zod schema defining the input parameters for the create-gpu-instance tool, including name, productId, gpuNum, imageUrl, and optional fields like env and networkStorages. A standalone validation sketch follows this list.
    name: z.string().max(255).trim()
      .describe("Name for the instance. Must contain only numbers, letters, and hyphens"),
    productId: z.string().nonempty()
      .describe("ID of the product used to deploy the instance. The availableGpuNumber of the product must be greater than 0. You can use the `list-products` tool to get or check the product ID if needed. Before calling the MCP tool, MUST show me the details of the product to help me identify it, including name, price, etc."),
    kind: z.enum(["gpu"]).default("gpu")
      .describe("Type of the instance"),
    gpuNum: z.number().min(1)
      .describe("Number of GPUs allocated to the instance. The availableGpuNumber of the product must be greater than or equal to the gpuNum."),
    rootfsSize: z.number().min(10)
      .describe("Root filesystem size (container disk size) in GB. Free tier includes 60GB."),
    imageUrl: z.string().trim().nonempty().max(500)
      .describe("Docker image URL to initialize the instance"),
    imageAuthId: z.string().optional()
      .describe("ID of the container registry auth. Required only when the imageUrl is private. You can use the `list-container-registry-auths` tool to check the ID if you're not sure."),
    command: z.string().max(2048).optional()
      .describe("Container start command to run when the instance starts"),
    ports: z.string().optional()
      .describe("Ports to expose (e.g., '8888/http', '22/tcp'), separated by commas if multiple. Maximum of 10 ports."),
    env: z.array(z.object({
        key: z.string().nonempty().max(2048).describe("Environment variable key"),
        value: z.string().max(2048).describe("Environment variable value"),
      })).optional()
      .describe("Environment variables"),
    networkStorages: z.array(z.object({
        Id: z.string().nonempty().describe("ID of the network storage to mount. You can use the `list-network-storage` tool to get or check the ID if needed. The network storage's cluster must match the product's cluster."),
        mountPoint: z.string().nonempty().describe("Path to mount the network storage"),
      })).optional()
      .describe("Network storages to mount"),
  • src/tools.ts:141-207 (registration)
    Registration of the "create-gpu-instance" tool within the registerGPUInstanceTools function using the McpServer.tool method. The input schema and handler passed to server.tool are exactly the two snippets shown above; a sketch of how this registration is typically wired into a server follows this list.
    server.tool("create-gpu-instance", {
      // ...input schema fields exactly as in the Zod snippet above (name, productId,
      // kind, gpuNum, rootfsSize, imageUrl, imageAuthId, command, ports, env,
      // networkStorages)...
    }, async (params) => {
      const result = await novitaRequest("/gpu/instance/create", "POST", params);
      return {
        content: [
          {
            type: "text",
            text: JSON.stringify(result, null, 2),
          },
        ],
      };
    });
  • Helper utility function novitaRequest used by the tool handler to perform API calls to the Novita GPU instance service.
    export async function novitaRequest(
      endpoint: string,
      method: string = "GET",
      body: any = null
    ) {
      // Base URL for Novita AI API
      const API_BASE_URL = "https://api.novita.ai/gpu-instance/openapi/v1";

      // Get API key from environment variable
      const API_KEY = process.env.NOVITA_API_KEY;

      const url = `${API_BASE_URL}${endpoint}`;
      const headers = {
        Authorization: `Bearer ${API_KEY}`,
        "Content-Type": "application/json",
      };

      const options: any = {
        method,
        headers,
      };

      if (body && (method === "POST" || method === "PATCH")) {
        options.body = JSON.stringify(body);
      }

      try {
        const response = await fetch(url, options);

        if (!response.ok) {
          const errorText = await response.text();
          throw new Error(`Novita AI API Error: ${response.status} - ${errorText}`);
        }

        // Some endpoints might not return JSON
        const contentType = response.headers.get("content-type");
        if (contentType && contentType.includes("application/json")) {
          return await response.json();
        }
        return { success: true, status: response.status };
      } catch (error) {
        console.error("Error calling Novita AI API:", error);
        throw error;
      }
    }
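
For readers unfamiliar with Zod, the standalone sketch below (not part of the server's source) shows how field definitions like the ones above behave once wrapped in z.object(): parse returns the typed value for a valid payload, while safeParse reports which constraint was violated.

    import { z } from "zod";

    // Stand-in for a slice of the create-gpu-instance input schema.
    const createGpuInstanceInput = z.object({
      name: z.string().max(255).trim(),
      gpuNum: z.number().min(1),
      rootfsSize: z.number().min(10),
    });

    // Valid payload: parse succeeds and returns the typed object.
    const ok = createGpuInstanceInput.parse({ name: "demo", gpuNum: 1, rootfsSize: 60 });
    console.log(ok.name);

    // Invalid payload: gpuNum violates .min(1), so safeParse reports an issue.
    const bad = createGpuInstanceInput.safeParse({ name: "demo", gpuNum: 0, rootfsSize: 60 });
    if (!bad.success) {
      console.error(bad.error.issues); // e.g. "Number must be greater than or equal to 1"
    }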
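
As for how the registration in src/tools.ts typically gets hooked up, the following is a minimal sketch, assuming the standard @modelcontextprotocol/sdk imports and that registerGPUInstanceTools accepts the McpServer instance; the actual entry point of this server is not shown on this page.

    import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
    import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
    import { registerGPUInstanceTools } from "./tools.js"; // assumed export of src/tools.ts

    async function main() {
      // Register create-gpu-instance (and the other GPU instance tools) on the server.
      const server = new McpServer({ name: "novita-mcp-server", version: "0.0.0" }); // placeholder version
      registerGPUInstanceTools(server); // assumed to take the McpServer instance

      // Serve over stdio; NOVITA_API_KEY must be set in the environment because
      // novitaRequest reads it to build the Authorization header.
      await server.connect(new StdioServerTransport());
    }

    main().catch((err) => {
      console.error("Failed to start MCP server:", err);
      process.exit(1);
    });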

MCP directory API

We provide all the information about MCP servers via our MCP directory API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/novitalabs/novita-mcp-server'
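
The same request from TypeScript, as a minimal sketch (the response shape is whatever the directory API returns):

    // Fetch this server's entry from the Glama MCP directory API.
    // Assumes an ES module / top-level-await context.
    const res = await fetch(
      "https://glama.ai/api/mcp/v1/servers/novitalabs/novita-mcp-server"
    );
    if (!res.ok) {
      throw new Error(`MCP directory API error: ${res.status}`);
    }
    const serverInfo = await res.json(); // structure defined by the directory API
    console.log(serverInfo);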

If you have feedback or need assistance with the MCP directory API, please join our Discord server.