
venice_generate_image

Create images from text descriptions using AI models, with options to specify size, style, and content to avoid.

Instructions

Generate an image from a text prompt using Venice AI

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| prompt | Yes | Text description of the image to generate | |
| model | No | Image model (e.g., fluently-xl, flux-dev) | fluently-xl |
| size | No | Image size (e.g., 512x512, 1024x1024, 1792x1024) | 1024x1024 |
| style_preset | No | Style preset name | |
| negative_prompt | No | What to avoid in the image | |
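
Example Usage

A client can invoke the tool with only a prompt and let model and size fall back to their defaults. The sketch below is illustrative: it assumes the @modelcontextprotocol/sdk client package and that the server is started with node dist/index.js, which may not match how this server is actually launched.

    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

    // Launch the Venice MCP server over stdio (command/args are placeholders).
    const transport = new StdioClientTransport({
      command: "node",
      args: ["dist/index.js"],
    });

    const client = new Client(
      { name: "example-client", version: "1.0.0" },
      { capabilities: {} },
    );
    await client.connect(transport);

    // Call venice_generate_image; omitted fields fall back to their defaults.
    const result = await client.callTool({
      name: "venice_generate_image",
      arguments: {
        prompt: "A watercolor painting of a canal at sunset",
        negative_prompt: "text, watermark",
      },
    });

    // On success the tool returns a text content item containing JSON like
    // {"success":true,"path":"<home>/venice-images/venice-<timestamp>.png"}
    console.log(result.content);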

Implementation Reference

  • The handler function for the venice_generate_image tool. It constructs the request body, calls the Venice AI /images/generations endpoint, handles API errors, decodes and saves a b64_json image to ~/venice-images/ (or returns the image URL), and formats the result as MCP text content. A sketch of the veniceAPI helper and ImageGenerationResponse type it references appears after this list.
    async ({ prompt, model, size, style_preset, negative_prompt }) => {
      const body: Record<string, unknown> = {
        model,
        prompt,
        size,
        n: 1,
        response_format: "b64_json",
      };
      if (style_preset) body.style_preset = style_preset;
      if (negative_prompt) body.negative_prompt = negative_prompt;

      const response = await veniceAPI("/images/generations", {
        method: "POST",
        body: JSON.stringify(body),
      });
      const data = await response.json() as ImageGenerationResponse;

      if (!response.ok) {
        return {
          content: [{
            type: "text" as const,
            text: `Error: ${data.error?.message || response.statusText}`,
          }],
        };
      }

      const img = data.data?.[0];
      if (img?.b64_json) {
        const outputDir = getImageOutputDir();
        const filename = `venice-${Date.now()}.png`;
        const filepath = join(outputDir, filename);
        writeFileSync(filepath, Buffer.from(img.b64_json, "base64"));
        return {
          content: [{
            type: "text" as const,
            text: JSON.stringify({ success: true, path: filepath }),
          }],
        };
      }

      if (img?.url) {
        return {
          content: [{
            type: "text" as const,
            text: JSON.stringify({ success: true, url: img.url }),
          }],
        };
      }

      return {
        content: [{
          type: "text" as const,
          text: JSON.stringify({ success: false, error: "No image data returned" }),
        }],
      };
    }
  • Input schema for the venice_generate_image tool parameters, defined with Zod: prompt (string, required), model (string, optional, default "fluently-xl"), size (string, optional, default "1024x1024"), style_preset (string, optional), negative_prompt (string, optional).
    {
      prompt: z.string().describe("Text description of the image to generate"),
      model: z.string().optional().default("fluently-xl")
        .describe("Image model (e.g., fluently-xl, flux-dev)"),
      size: z.string().optional().default("1024x1024")
        .describe("Image size (e.g., 512x512, 1024x1024, 1792x1024)"),
      style_preset: z.string().optional().describe("Style preset name"),
      negative_prompt: z.string().optional().describe("What to avoid in the image"),
    },
  • Direct registration of the 'venice_generate_image' tool on the MCP server via server.tool(), specifying name, description, input schema, and execution handler.
    server.tool(
      "venice_generate_image",
      "Generate an image from a text prompt using Venice AI",
      {
        prompt: z.string().describe("Text description of the image to generate"),
        model: z.string().optional().default("fluently-xl")
          .describe("Image model (e.g., fluently-xl, flux-dev)"),
        size: z.string().optional().default("1024x1024")
          .describe("Image size (e.g., 512x512, 1024x1024, 1792x1024)"),
        style_preset: z.string().optional().describe("Style preset name"),
        negative_prompt: z.string().optional().describe("What to avoid in the image"),
      },
      async ({ prompt, model, size, style_preset, negative_prompt }) => {
        const body: Record<string, unknown> = {
          model,
          prompt,
          size,
          n: 1,
          response_format: "b64_json",
        };
        if (style_preset) body.style_preset = style_preset;
        if (negative_prompt) body.negative_prompt = negative_prompt;

        const response = await veniceAPI("/images/generations", {
          method: "POST",
          body: JSON.stringify(body),
        });
        const data = await response.json() as ImageGenerationResponse;

        if (!response.ok) {
          return {
            content: [{
              type: "text" as const,
              text: `Error: ${data.error?.message || response.statusText}`,
            }],
          };
        }

        const img = data.data?.[0];
        if (img?.b64_json) {
          const outputDir = getImageOutputDir();
          const filename = `venice-${Date.now()}.png`;
          const filepath = join(outputDir, filename);
          writeFileSync(filepath, Buffer.from(img.b64_json, "base64"));
          return {
            content: [{
              type: "text" as const,
              text: JSON.stringify({ success: true, path: filepath }),
            }],
          };
        }

        if (img?.url) {
          return {
            content: [{
              type: "text" as const,
              text: JSON.stringify({ success: true, url: img.url }),
            }],
          };
        }

        return {
          content: [{
            type: "text" as const,
            text: JSON.stringify({ success: false, error: "No image data returned" }),
          }],
        };
      }
    );
  • src/index.ts:15-18 (registration)
    Top-level registration of tool groups in the main server entrypoint; registerInferenceTools includes the venice_generate_image tool. A sketch of the surrounding entrypoint wiring is shown after this list.
    // Register all tool categories
    registerInferenceTools(server);
    registerDiscoveryTools(server);
    registerAdminTools(server);
  • Helper function to get or create the local directory ~/venice-images/ for saving generated images.
    function getImageOutputDir(): string {
      const dir = join(homedir(), "venice-images");
      if (!existsSync(dir)) {
        mkdirSync(dir, { recursive: true });
      }
      return dir;
    }
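
The snippets above reference a few pieces defined elsewhere in the repository: the veniceAPI fetch wrapper, the ImageGenerationResponse type, and the imports used by the schema and file-saving code. The sketch below shows plausible shapes for those pieces; the base URL, the VENICE_API_KEY environment variable name, and the exact type fields are assumptions rather than verbatim source.

    // Assumed imports for the snippets above (actual import layout may differ).
    import { homedir } from "os";
    import { join } from "path";
    import { existsSync, mkdirSync, writeFileSync } from "fs";
    import { z } from "zod";

    // Hypothetical shape of the Venice image-generation response used by the handler.
    interface ImageGenerationResponse {
      data?: Array<{ b64_json?: string; url?: string }>;
      error?: { message?: string };
    }

    // Hypothetical fetch wrapper: prepends a Venice base URL and attaches an API
    // key from the environment. Both the URL and the variable name are assumptions.
    const VENICE_BASE_URL = "https://api.venice.ai/api/v1";

    async function veniceAPI(path: string, init: RequestInit = {}): Promise<Response> {
      return fetch(`${VENICE_BASE_URL}${path}`, {
        ...init,
        headers: {
          Authorization: `Bearer ${process.env.VENICE_API_KEY}`,
          "Content-Type": "application/json",
        },
      });
    }

For context, the main entrypoint (src/index.ts) plausibly wires these registrations into an MCP server over stdio roughly as follows; the import paths and server metadata are assumptions.

    import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
    import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
    // Assumed module paths for the tool-group registration helpers.
    import { registerInferenceTools } from "./tools/inference.js";
    import { registerDiscoveryTools } from "./tools/discovery.js";
    import { registerAdminTools } from "./tools/admin.js";

    const server = new McpServer({ name: "venice-mcp", version: "1.0.0" });

    // Register all tool categories (registerInferenceTools adds venice_generate_image).
    registerInferenceTools(server);
    registerDiscoveryTools(server);
    registerAdminTools(server);

    const transport = new StdioServerTransport();
    await server.connect(transport);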
