
venice_list_models

List available Venice AI models by type to help users select appropriate models for text, image, embedding, TTS, ASR, upscale, inpaint, or video tasks.

Instructions

List available Venice AI models by type

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| type | No | Filter by model type (text, image, embedding, tts, asr, upscale, inpaint, video, or all) | all |
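Given this schema, a client invokes the tool through an MCP `tools/call` request. For example, to list only image models (the argument value is illustrative; omitting `arguments.type` lets the server-side default of `all` apply):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "venice_list_models",
    "arguments": { "type": "image" }
  }
}
```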

Implementation Reference

  • The asynchronous handler function that implements the core logic of the venice_list_models tool. It calls the Venice API to fetch models filtered by type, handles errors, and formats the response as a text list.
```typescript
async ({ type }) => {
  // Always pass the type parameter - the API may return only text models without it
  const endpoint = `/models?type=${type}`;
  const response = await veniceAPI(endpoint);
  const data = await response.json() as ModelsResponse;
  if (!response.ok) {
    return {
      content: [{
        type: "text" as const,
        text: `Error: ${data.error?.message || response.statusText}`,
      }],
    };
  }
  const models = data.data || [];
  const list = models
    .map((m) => `- ${m.id} (${m.type || m.object || "unknown"})`)
    .join("\n");
  return {
    content: [{
      type: "text" as const,
      text: `Available models (${models.length}):\n${list}`,
    }],
  };
}
```
  • Zod input schema for the tool, defining an optional 'type' parameter to filter models by type, defaulting to 'all'.
```typescript
{
  type: z
    .enum(["text", "image", "embedding", "tts", "asr", "upscale", "inpaint", "video", "all"])
    .optional()
    .default("all")
    .describe("Filter by model type (text, image, embedding, tts, asr, upscale, inpaint, video, or all)"),
}
```
  • Registration of the 'venice_list_models' tool on the MCP server, specifying name, description, input schema, and handler function.
```typescript
server.tool(
  "venice_list_models",
  "List available Venice AI models by type",
  {
    type: z
      .enum(["text", "image", "embedding", "tts", "asr", "upscale", "inpaint", "video", "all"])
      .optional()
      .default("all")
      .describe("Filter by model type (text, image, embedding, tts, asr, upscale, inpaint, video, or all)"),
  },
  async ({ type }) => {
    // Always pass the type parameter - the API may return only text models without it
    const endpoint = `/models?type=${type}`;
    const response = await veniceAPI(endpoint);
    const data = await response.json() as ModelsResponse;
    if (!response.ok) {
      return {
        content: [{
          type: "text" as const,
          text: `Error: ${data.error?.message || response.statusText}`,
        }],
      };
    }
    const models = data.data || [];
    const list = models
      .map((m) => `- ${m.id} (${m.type || m.object || "unknown"})`)
      .join("\n");
    return {
      content: [{
        type: "text" as const,
        text: `Available models (${models.length}):\n${list}`,
      }],
    };
  },
);
```
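The handler calls a `veniceAPI` helper and casts the response to a `ModelsResponse` type, neither of which appears on this page. Below is a minimal sketch of what they might look like, inferred from how the handler uses them; the base URL, environment-variable name, and exact field shapes are assumptions, not the server's actual code:

```typescript
// Assumed shape of one entry in the Venice /models response,
// inferred from the handler's use of id, type, and object.
interface VeniceModel {
  id: string;
  type?: string;
  object?: string;
}

// Assumed shape of the /models response body.
interface ModelsResponse {
  data?: VeniceModel[];
  error?: { message?: string };
}

// Hypothetical veniceAPI helper: prefixes the endpoint with a Venice base
// URL and attaches a bearer token. Both names here are assumptions.
const VENICE_BASE_URL = "https://api.venice.ai/api/v1";
const VENICE_API_KEY: string =
  (globalThis as any).process?.env?.VENICE_API_KEY ?? "";

function veniceAPI(endpoint: string) {
  return fetch(`${VENICE_BASE_URL}${endpoint}`, {
    headers: { Authorization: `Bearer ${VENICE_API_KEY}` },
  });
}

// The handler's success-path formatting, extracted as a pure function.
function formatModels(models: VeniceModel[]): string {
  const list = models
    .map((m) => `- ${m.id} (${m.type || m.object || "unknown"})`)
    .join("\n");
  return `Available models (${models.length}):\n${list}`;
}
```

Keeping the formatting separate from the fetch makes the text-list output easy to verify without hitting the network.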


MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/georgeglarson/venice-mcp'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.