Glama

generate_image

Create custom images from text prompts using the Flux.1 Schnell model. Specify dimensions and model preferences to generate high-quality visuals tailored to your needs.

Instructions

Generate an image based on the text prompt, model, and optional dimensions

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| height | No | Optional height for the image | |
| model | Yes | The exact model name as it appears in Together AI. If incorrect, it will fall back to the default model (black-forest-labs/FLUX.1-schnell). | |
| prompt | Yes | The text prompt for image generation | |
| width | No | Optional width for the image | |

Input Schema (JSON Schema)

{
  "properties": {
    "height": {
      "description": "Optional height for the image",
      "type": "number"
    },
    "model": {
      "description": "The exact model name as it appears in Together AI. If incorrect, it will fallback to the default model (black-forest-labs/FLUX.1-schnell).",
      "type": "string"
    },
    "prompt": {
      "description": "The text prompt for image generation",
      "type": "string"
    },
    "width": {
      "description": "Optional width for the image",
      "type": "number"
    }
  },
  "required": ["prompt", "model"],
  "type": "object"
}
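As a rough illustration (not part of the server code), the required/optional split in this schema can be mirrored by a small argument checker; the function name `validate_args` is hypothetical:

```python
def validate_args(arguments: dict) -> list[str]:
    """Return error messages for arguments that don't satisfy the schema."""
    errors = []
    # prompt and model are required, non-empty strings
    for key in ("prompt", "model"):
        if not isinstance(arguments.get(key), str) or not arguments[key]:
            errors.append(f"Missing or invalid required parameter: {key}")
    # width and height are optional numbers
    for key in ("width", "height"):
        if key in arguments and not isinstance(arguments[key], (int, float)):
            errors.append(f"Parameter {key} must be a number")
    return errors

print(validate_args({"prompt": "a red fox", "model": "black-forest-labs/FLUX.1-schnell"}))  # []
```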

Implementation Reference

  • Executes the generate_image tool by extracting parameters from arguments, validating required fields, calling the make_together_request helper to interact with the Together AI API (with model fallback), and returning either ImageContent with base64 image data or TextContent carrying an error message.
    if name == "generate_image":
        prompt = arguments.get("prompt")
        model = arguments.get("model")
        width = arguments.get("width")
        height = arguments.get("height")

        if not prompt or not model:
            return [
                types.TextContent(type="text", text="Missing prompt or model parameter")
            ]

        async with httpx.AsyncClient() as client:
            response_data = await make_together_request(
                client=client,
                prompt=prompt,
                model=model,  # User-provided model (or fallback will be used)
                width=width,
                height=height,
            )

            if "error" in response_data:
                return [types.TextContent(type="text", text=response_data["error"])]

            try:
                b64_image = response_data["data"][0]["b64_json"]
                return [
                    types.ImageContent(
                        type="image", data=b64_image, mimeType="image/jpeg"
                    )
                ]
            except (KeyError, IndexError) as e:
                return [
                    types.TextContent(
                        type="text", text=f"Failed to parse API response: {e}"
                    )
                ]
  • Registers the generate_image tool in the list_tools() handler, specifying name, description, and JSON schema for inputs (required: prompt, model; optional: width, height).
    types.Tool(
        name="generate_image",
        description="Generate an image based on the text prompt, model, and optional dimensions",
        inputSchema={
            "type": "object",
            "properties": {
                "prompt": {
                    "type": "string",
                    "description": "The text prompt for image generation",
                },
                "model": {
                    "type": "string",
                    "description": "The exact model name as it appears in Together AI. If incorrect, it will fallback to the default model (black-forest-labs/FLUX.1-schnell).",
                },
                "width": {
                    "type": "number",
                    "description": "Optional width for the image",
                },
                "height": {
                    "type": "number",
                    "description": "Optional height for the image",
                },
            },
            "required": ["prompt", "model"],
        },
    )
]
  • JSON schema defining the input parameters for the generate_image tool: object with properties prompt (string, required), model (string, required), width (number, optional), height (number, optional).
    inputSchema={
        "type": "object",
        "properties": {
            "prompt": {
                "type": "string",
                "description": "The text prompt for image generation",
            },
            "model": {
                "type": "string",
                "description": "The exact model name as it appears in Together AI. If incorrect, it will fallback to the default model (black-forest-labs/FLUX.1-schnell).",
            },
            "width": {
                "type": "number",
                "description": "Optional width for the image",
            },
            "height": {
                "type": "number",
                "description": "Optional height for the image",
            },
        },
        "required": ["prompt", "model"],
    },
  • Supporting function that constructs and sends POST request to Together AI image generation API, adds optional width/height, handles authentication, parses response, detects invalid model errors and automatically falls back to the default model (black-forest-labs/FLUX.1-schnell), returns API response dict or error dict.
    async def make_together_request(
        client: httpx.AsyncClient,
        prompt: str,
        model: str,
        width: Optional[int] = None,
        height: Optional[int] = None,
    ) -> dict[str, Any]:
        """Make a request to the Together API with error handling and fallback for incorrect model."""
        request_body = {"model": model, "prompt": prompt, "response_format": "b64_json"}
        headers = {"Authorization": f"Bearer {API_KEY}"}

        if width is not None:
            request_body["width"] = width
        if height is not None:
            request_body["height"] = height

        async def send_request(body: dict) -> tuple[int, dict]:
            response = await client.post(TOGETHER_AI_BASE, headers=headers, json=body)
            try:
                data = response.json()
            except Exception:
                data = {}
            return response.status_code, data

        # First request with user-provided model
        status, data = await send_request(request_body)

        # Check if the request failed due to an invalid model error
        if status != 200 and "error" in data:
            error_info = data["error"]
            error_msg = error_info.get("message", "").lower()
            error_code = error_info.get("code", "").lower()
            if (
                "model" in error_msg and "not available" in error_msg
            ) or error_code == "model_not_available":
                # Fall back to the default model
                request_body["model"] = DEFAULT_MODEL
                status, data = await send_request(request_body)
                if status != 200 or "error" in data:
                    return {
                        "error": f"Fallback API error: {data.get('error', 'Unknown error')} (HTTP {status})"
                    }
                return data
            else:
                return {"error": f"Together API error: {data.get('error')}"}
        elif status != 200:
            return {"error": f"HTTP error {status}"}

        return data
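The model-fallback trigger in the helper above reduces to a pure predicate, which makes the condition easy to unit-test in isolation. This is a sketch; `should_fallback` is a hypothetical name, not part of the server:

```python
def should_fallback(status: int, data: dict) -> bool:
    """True when a non-200 response indicates an invalid or unavailable model."""
    if status == 200 or "error" not in data:
        return False
    error_info = data["error"]
    error_msg = error_info.get("message", "").lower()
    error_code = error_info.get("code", "").lower()
    return (
        "model" in error_msg and "not available" in error_msg
    ) or error_code == "model_not_available"

print(should_fallback(404, {"error": {"message": "Model flux-9000 not available", "code": ""}}))  # True
print(should_fallback(429, {"error": {"message": "rate limited", "code": "rate_limit"}}))  # False
```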

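The `b64_json` field the handler extracts is plain base64, so turning a response into raw image bytes needs only the standard library. A minimal sketch with a dummy payload standing in for a real API response (`decode_image` is a hypothetical helper):

```python
import base64

def decode_image(response_data: dict) -> bytes:
    """Decode the first image in a Together-style response to raw bytes."""
    b64_image = response_data["data"][0]["b64_json"]
    return base64.b64decode(b64_image)

# Dummy payload standing in for a real API response (JPEG magic bytes)
fake = {"data": [{"b64_json": base64.b64encode(b"\xff\xd8\xff").decode()}]}
print(decode_image(fake) == b"\xff\xd8\xff")  # True
```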

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/sarthakkimtani/mcp-image-gen'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.