generate_image
Generate AI images from text prompts with smart model selection. Supports Flux, SDXL, and Imagen 4 for diverse styles and quality needs.
Instructions
Generate an image using AI.
Supports 3 AI models: Flux, SDXL, Imagen 4. Smart routing automatically picks the best model for your prompt. Cost: ~6 credits per image.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | Description of the image to generate (Turkish or English) | |
| model | No | AI model to use — "auto" (smart routing), "flux" (best quality), "sdxl" (fast), "imagen4" (Google, photorealistic) | auto |
| negative_prompt | No | Things to avoid in the image | |
| width | No | Image width in pixels (64-2048, default 512) | |
| height | No | Image height in pixels (64-2048, default 512) | |
| style | No | Style preset (realistic, anime, cinematic, artistic) | |
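The schema bounds width and height to 64-2048 pixels, but the handler shown under Implementation Reference passes both values through as given. A client-side guard like the following is therefore an assumption, not part of the tool (`clamp_dimension` is a hypothetical helper):

```python
def clamp_dimension(value: int, lo: int = 64, hi: int = 2048) -> int:
    """Clamp a requested width/height into the documented 64-2048 range.

    Hypothetical helper: the generate_image handler does not validate
    dimensions itself, so any clamping like this is an assumption.
    """
    return max(lo, min(hi, value))


# Building a request payload with clamped dimensions:
payload = {
    "type": "image",
    "prompt": "a cat in the snow",
    "width": clamp_dimension(4096),   # out of range, clamped to 2048
    "height": clamp_dimension(512),   # in range, unchanged
}
```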
Output Schema
| Name | Required | Description | Default |
|---|---|---|---|
| status | Yes | "success" when the job completes | |
| image_url | Yes | URL of the generated image | |
| job_id | Yes | Identifier of the generation job | |
| credits_used | No | Credits consumed by the job (~6 per image) | |
| balance_remaining | No | Credit balance after the job | |
Implementation Reference
- src/yaparai/tools/generate.py:10-58 (handler): the main handler function for the generate_image tool. It takes a prompt, optional model (auto/flux/sdxl/imagen4), negative_prompt, width, height, and style, then creates a payload and calls YaparAIClient.generate() to start an image generation job, polls for the result, and returns the image URL.
```python
async def generate_image(
    prompt: str,
    model: Literal["auto", "flux", "sdxl", "imagen4"] = "auto",
    negative_prompt: str = "",
    width: int = 512,
    height: int = 512,
    style: Literal["realistic", "anime", "cinematic", "artistic"] | None = None,
) -> dict:
    """
    Generate an image using AI.

    Supports 3 AI models: Flux, SDXL, Imagen 4. Smart routing automatically
    picks the best model for your prompt. Cost: ~6 credits per image.

    Args:
        prompt: Description of the image to generate (Turkish or English)
        model: AI model to use — "auto" (smart routing), "flux" (best quality),
            "sdxl" (fast), "imagen4" (Google, photorealistic)
        negative_prompt: Things to avoid in the image
        width: Image width in pixels (64-2048, default 512)
        height: Image height in pixels (64-2048, default 512)
        style: Style preset (realistic, anime, cinematic, artistic)

    Returns:
        Dict with job_id, status, result_url (image URL when done),
        credits_used, and balance_remaining.
    """
    client = YaparAIClient()
    payload: dict = {
        "type": "image",
        "prompt": prompt,
        "negative_prompt": negative_prompt,
        "width": width,
        "height": height,
        "style": style,
    }
    if model != "auto":
        payload["model"] = model
    job = await client.generate(payload)
    result = await client.wait_for_result(job["job_id"], timeout=60)
    return {
        "status": "success",
        "image_url": result.get("result_url"),
        "job_id": result.get("job_id"),
        "credits_used": job.get("credits_used"),
        "balance_remaining": job.get("balance_remaining"),
    }
```

- src/yaparai/server.py:24-29 (registration): import of generate_image from the generate module into the server.
```python
from yaparai.tools.generate import (
    generate_image,
    generate_video,
    generate_music,
    generate_music_video,
)
```

- src/yaparai/server.py:123-123 (registration): registration of generate_image as an MCP tool via mcp.tool(generate_image).
```python
mcp.tool(generate_image)
```

- src/yaparai/client.py:126-128 (helper): the YaparAIClient.generate() method that sends the POST request to /v1/public/generate to start the generation job.
```python
async def generate(self, request: dict) -> dict:
    """Start a generation job."""
    return await self._request("POST", "/v1/public/generate", json=request)
```

- src/yaparai/client.py:142-163 (helper): the YaparAIClient.wait_for_result() method that polls the job status until completion or timeout.
```python
async def wait_for_result(
    self,
    job_id: str,
    timeout: int = 120,
    poll_interval: int = 3,
) -> dict:
    """Poll job status until completed or timeout."""
    elapsed = 0
    while elapsed < timeout:
        job = await self.get_job(job_id)
        status = job.get("status", "")
        if status == "succeeded":
            return job
        if status == "failed":
            error = job.get("error_message") or job.get("error") or "Unknown error"
            raise RuntimeError(f"Generation failed: {error}")
        await asyncio.sleep(poll_interval)
        elapsed += poll_interval
    raise TimeoutError(
        f"Job {job_id} is still processing after {timeout}s. "
        f"Use get_job_status('{job_id}') to check later."
    )
```
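A minimal way to exercise this generate-then-poll flow without the real backend is a stub client that mimics generate() and the succeeded/failed states wait_for_result() checks for. The StubClient class and its canned responses below are invented for illustration; only the overall flow mirrors the code above:

```python
import asyncio


class StubClient:
    """Stand-in for YaparAIClient; all responses here are invented."""

    def __init__(self):
        self.polls = 0

    async def generate(self, request: dict) -> dict:
        # Pretend the job was accepted and credits were deducted.
        return {"job_id": "job-123", "credits_used": 6, "balance_remaining": 94}

    async def get_job(self, job_id: str) -> dict:
        # Report "processing" once, then "succeeded", to exercise the poll loop.
        self.polls += 1
        if self.polls < 2:
            return {"job_id": job_id, "status": "processing"}
        return {
            "job_id": job_id,
            "status": "succeeded",
            "result_url": "https://example.com/image.png",
        }

    async def wait_for_result(self, job_id: str, timeout: int = 120,
                              poll_interval: int = 0) -> dict:
        # Same poll-until-terminal-state loop as the real client.
        elapsed = 0
        while elapsed <= timeout:
            job = await self.get_job(job_id)
            if job.get("status") == "succeeded":
                return job
            if job.get("status") == "failed":
                raise RuntimeError(job.get("error", "Unknown error"))
            await asyncio.sleep(poll_interval)
            elapsed += poll_interval or 1
        raise TimeoutError(job_id)


async def main() -> dict:
    client = StubClient()
    job = await client.generate({"type": "image", "prompt": "a red fox"})
    result = await client.wait_for_result(job["job_id"])
    return {
        "status": "success",
        "image_url": result["result_url"],
        "credits_used": job["credits_used"],
    }


print(asyncio.run(main()))
```

The stub's zero poll_interval keeps the demo instant; the real client sleeps 3 seconds between polls and gives up after the timeout with a pointer to get_job_status.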