venice_generate_image

Create images from text descriptions using AI models, with options to specify size, style, and elements to exclude.

Instructions

Generate an image from a text prompt using Venice AI

Input Schema

Name            | Required | Description                                      | Default
prompt          | Yes      | Text description of the image to generate       | (none)
model           | No       | Image model (e.g., fluently-xl, flux-dev)       | fluently-xl
size            | No       | Image size (e.g., 512x512, 1024x1024, 1792x1024)| 1024x1024
style_preset    | No       | Style preset name                               | (none)
negative_prompt | No       | What to avoid in the image                      | (none)
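
For illustration, this is how an MCP client could call the tool with these parameters. The sketch uses the @modelcontextprotocol/sdk client over stdio; the launch command ("node dist/index.js"), the client name, and the argument values are assumptions chosen for the example, not details documented on this page.

    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

    // Spawn the server over stdio (command/args are assumptions) and connect.
    const transport = new StdioClientTransport({ command: "node", args: ["dist/index.js"] });
    const client = new Client({ name: "example-client", version: "1.0.0" });
    await client.connect(transport);

    // Call the tool; only `prompt` is required, everything else falls back to the defaults above.
    const result = await client.callTool({
      name: "venice_generate_image",
      arguments: {
        prompt: "A watercolor painting of Venice at sunset",
        size: "1024x1024",
        negative_prompt: "text, watermark",
      },
    });

    // On success the tool returns a text content item whose body is JSON,
    // e.g. {"success":true,"path":"<home>/venice-images/venice-<timestamp>.png"}.
    console.log(result.content);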

Implementation Reference

  • Handler function that sends a POST request to Venice AI's /images/generations endpoint with the provided prompt and parameters (via the veniceAPI helper, sketched after this list), decodes the base64 JSON response, saves the image to a local file in ~/venice-images/, and returns the file path as JSON.
    async ({ prompt, model, size, style_preset, negative_prompt }) => {
      const body: Record<string, unknown> = { model, prompt, size, n: 1, response_format: "b64_json" };
      if (style_preset) body.style_preset = style_preset;
      if (negative_prompt) body.negative_prompt = negative_prompt;

      const response = await veniceAPI("/images/generations", { method: "POST", body: JSON.stringify(body) });
      const data = await response.json() as ImageGenerationResponse;
      if (!response.ok) {
        return { content: [{ type: "text" as const, text: `Error: ${data.error?.message || response.statusText}` }] };
      }

      const img = data.data?.[0];
      if (img?.b64_json) {
        // Decode the base64 payload and write it to ~/venice-images/venice-<timestamp>.png
        const outputDir = getImageOutputDir();
        const filename = `venice-${Date.now()}.png`;
        const filepath = join(outputDir, filename);
        writeFileSync(filepath, Buffer.from(img.b64_json, "base64"));
        return { content: [{ type: "text" as const, text: JSON.stringify({ success: true, path: filepath }) }] };
      }
      if (img?.url) {
        return { content: [{ type: "text" as const, text: JSON.stringify({ success: true, url: img.url }) }] };
      }
      return { content: [{ type: "text" as const, text: JSON.stringify({ success: false, error: "No image data returned" }) }] };
    }
  • Input schema using Zod validation for parameters: prompt (required string), model (optional, default 'fluently-xl'), size (optional, default '1024x1024'), style_preset (optional), negative_prompt (optional).
    {
      prompt: z.string().describe("Text description of the image to generate"),
      model: z.string().optional().default("fluently-xl").describe("Image model (e.g., fluently-xl, flux-dev)"),
      size: z.string().optional().default("1024x1024").describe("Image size (e.g., 512x512, 1024x1024, 1792x1024)"),
      style_preset: z.string().optional().describe("Style preset name"),
      negative_prompt: z.string().optional().describe("What to avoid in the image"),
    }
  • The tool is registered with the McpServer.tool() method inside the registerInferenceTools function, passing the name, description, input schema, and handler.
    server.tool(
      "venice_generate_image",
      "Generate an image from a text prompt using Venice AI",
      {
        prompt: z.string().describe("Text description of the image to generate"),
        model: z.string().optional().default("fluently-xl").describe("Image model (e.g., fluently-xl, flux-dev)"),
        size: z.string().optional().default("1024x1024").describe("Image size (e.g., 512x512, 1024x1024, 1792x1024)"),
        style_preset: z.string().optional().describe("Style preset name"),
        negative_prompt: z.string().optional().describe("What to avoid in the image"),
      },
      async ({ prompt, model, size, style_preset, negative_prompt }) => {
        const body: Record<string, unknown> = { model, prompt, size, n: 1, response_format: "b64_json" };
        if (style_preset) body.style_preset = style_preset;
        if (negative_prompt) body.negative_prompt = negative_prompt;
        const response = await veniceAPI("/images/generations", { method: "POST", body: JSON.stringify(body) });
        const data = await response.json() as ImageGenerationResponse;
        if (!response.ok) {
          return { content: [{ type: "text" as const, text: `Error: ${data.error?.message || response.statusText}` }] };
        }
        const img = data.data?.[0];
        if (img?.b64_json) {
          const outputDir = getImageOutputDir();
          const filename = `venice-${Date.now()}.png`;
          const filepath = join(outputDir, filename);
          writeFileSync(filepath, Buffer.from(img.b64_json, "base64"));
          return { content: [{ type: "text" as const, text: JSON.stringify({ success: true, path: filepath }) }] };
        }
        if (img?.url) {
          return { content: [{ type: "text" as const, text: JSON.stringify({ success: true, url: img.url }) }] };
        }
        return { content: [{ type: "text" as const, text: JSON.stringify({ success: false, error: "No image data returned" }) }] };
      }
    );
  • Helper function to get or create the local directory ~/venice-images/ for saving generated images.
    function getImageOutputDir(): string {
      const dir = join(homedir(), "venice-images");
      if (!existsSync(dir)) {
        mkdirSync(dir, { recursive: true });
      }
      return dir;
    }
  • src/index.ts:16 (registration)
    Top-level call to registerInferenceTools(server), which includes the venice_generate_image tool registration.
    registerInferenceTools(server);
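
The veniceAPI helper and the ImageGenerationResponse type used by the handler are not shown on this page. The sketch below illustrates what such a wrapper could look like; the base URL and the VENICE_API_KEY environment variable are assumptions, not details confirmed by this listing, while the response interface is inferred from how the handler reads the response.

    // Hypothetical fetch wrapper for the Venice AI API (base URL and env var are assumptions).
    async function veniceAPI(path: string, init: { method?: string; body?: string } = {}): Promise<Response> {
      return fetch(`https://api.venice.ai/api/v1${path}`, {
        ...init,
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${process.env.VENICE_API_KEY}`,
        },
      });
    }

    // Response shape implied by the handler's use of data.data[0].b64_json / .url and error.message.
    interface ImageGenerationResponse {
      data?: Array<{ b64_json?: string; url?: string }>;
      error?: { message?: string };
    }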

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/georgeglarson/venice-mcp'
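
The same endpoint can be queried from TypeScript; the response is logged as-is here because its schema is not documented on this page.

    // Fetch this server's directory entry from the Glama MCP API.
    const res = await fetch("https://glama.ai/api/mcp/v1/servers/georgeglarson/venice-mcp");
    if (!res.ok) throw new Error(`Request failed: HTTP ${res.status}`);
    console.log(await res.json());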

If you have feedback or need assistance with the MCP directory API, please join our Discord server.