
venice_list_models

List available Venice AI models by type to identify suitable options for text generation, image creation, speech synthesis, embeddings, and other AI tasks.

Instructions

List available Venice AI models by type

Input Schema

| Name | Required | Description | Default |
| ---- | -------- | ----------- | ------- |
| type | No | Filter by model type (text, image, embedding, tts, asr, upscale, inpaint, video, or all) | all |
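
The defaulting behavior is worth noting: when `type` is omitted, the schema resolves it to `"all"` before the handler runs, so the handler always has a concrete value to put in the query string. A minimal sketch of that behavior in plain TypeScript (the `resolveType` helper is illustrative, not part of the server):

```typescript
// Hypothetical helper mirroring the schema's behavior:
// an unset input falls back to "all", invalid values are rejected.
const MODEL_TYPES = [
  "text", "image", "embedding", "tts", "asr",
  "upscale", "inpaint", "video", "all",
] as const;
type ModelType = typeof MODEL_TYPES[number];

function resolveType(input?: string): ModelType {
  if (input === undefined) return "all"; // default when the parameter is omitted
  if ((MODEL_TYPES as readonly string[]).includes(input)) return input as ModelType;
  throw new Error(`Invalid model type: ${input}`);
}

console.log(resolveType());        // "all"
console.log(resolveType("image")); // "image"
```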

Implementation Reference

  • The main handler function for the 'venice_list_models' tool. It calls the Venice API with the provided type filter, parses the ModelsResponse, formats a list of models, and returns it as text content.

    ```typescript
    async ({ type }) => {
      // Always pass the type parameter - the API may return only text models without it
      const endpoint = `/models?type=${type}`;
      const response = await veniceAPI(endpoint);
      const data = await response.json() as ModelsResponse;
      if (!response.ok) {
        return {
          content: [{
            type: "text" as const,
            text: `Error: ${data.error?.message || response.statusText}`,
          }],
        };
      }
      const models = data.data || [];
      const list = models
        .map((m) => `- ${m.id} (${m.type || m.object || "unknown"})`)
        .join("\n");
      return {
        content: [{
          type: "text" as const,
          text: `Available models (${models.length}):\n${list}`,
        }],
      };
    }
    ```
  • Zod input schema for the tool, defining an optional 'type' parameter to filter models.

    ```typescript
    {
      type: z.enum(["text", "image", "embedding", "tts", "asr", "upscale", "inpaint", "video", "all"])
        .optional()
        .default("all")
        .describe("Filter by model type (text, image, embedding, tts, asr, upscale, inpaint, video, or all)"),
    }
    ```
  • Registration of the 'venice_list_models' tool on the MCP server, specifying name, description, input schema, and handler function.

    ```typescript
    server.tool(
      "venice_list_models",
      "List available Venice AI models by type",
      {
        type: z.enum(["text", "image", "embedding", "tts", "asr", "upscale", "inpaint", "video", "all"])
          .optional()
          .default("all")
          .describe("Filter by model type (text, image, embedding, tts, asr, upscale, inpaint, video, or all)"),
      },
      async ({ type }) => {
        // Always pass the type parameter - the API may return only text models without it
        const endpoint = `/models?type=${type}`;
        const response = await veniceAPI(endpoint);
        const data = await response.json() as ModelsResponse;
        if (!response.ok) {
          return {
            content: [{
              type: "text" as const,
              text: `Error: ${data.error?.message || response.statusText}`,
            }],
          };
        }
        const models = data.data || [];
        const list = models
          .map((m) => `- ${m.id} (${m.type || m.object || "unknown"})`)
          .join("\n");
        return {
          content: [{
            type: "text" as const,
            text: `Available models (${models.length}):\n${list}`,
          }],
        };
      }
    );
    ```
  • Helper function 'veniceAPI' used by the tool handler to perform authenticated HTTP requests to the Venice AI API.

    ```typescript
    export async function veniceAPI(endpoint: string, options: RequestInit = {}): Promise<Response> {
      const url = `${BASE_URL}${endpoint}`;
      const headers: Record<string, string> = {
        "Authorization": `Bearer ${API_KEY}`,
        "Content-Type": "application/json",
        ...(options.headers as Record<string, string> || {}),
      };
      return fetch(url, { ...options, headers });
    }
    ```
  • TypeScript interface 'ModelsResponse' used to type the API response data in the handler.

    ```typescript
    export interface ModelsResponse extends VeniceAPIError {
      data?: Model[];
    }
    ```
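
The handler's output formatting can be exercised in isolation, without hitting the Venice API. A minimal sketch with stubbed response data (the interfaces are simplified and the model ids are made up for illustration):

```typescript
// Simplified local stand-ins for the server's types.
interface Model { id: string; type?: string; object?: string; }
interface ModelsResponse { data?: Model[]; }

// Reproduces the handler's list-formatting step against stubbed data.
function formatModels(data: ModelsResponse): string {
  const models = data.data || [];
  const list = models
    .map((m) => `- ${m.id} (${m.type || m.object || "unknown"})`)
    .join("\n");
  return `Available models (${models.length}):\n${list}`;
}

const stub: ModelsResponse = {
  data: [
    { id: "example-text-model", type: "text" },      // hypothetical ids
    { id: "example-image-model", object: "model" },  // no `type`, falls back to `object`
  ],
};
console.log(formatModels(stub));
// Available models (2):
// - example-text-model (text)
// - example-image-model (model)
```

Note the fallback chain `m.type || m.object || "unknown"`: if a model entry lacks a `type` field, the generic `object` field is shown instead.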


MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/georgeglarson/venice-mcp'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.