post-prompt-inferences

Generate, complete, or invent new prompts for AI image generation using various modes like completion, contextual, inventive, or structured approaches.

Instructions

Generate, complete, or invent new prompts.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| `dryRun` | No | | |
| `mode` | Yes | The mode used to generate new prompt(s). | |
| `ensureIPCleared` | No | Whether to attempt to ensure removal of protected intellectual property from newly generated prompts. | |
| `image` | No | The input image as a data URL (example: `"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVQYV2NgYAAAAAMAAWgmWQ0AAAAASUVORK5CYII="`) or an asset ID (example: `"asset_GTrL3mq4SXWyMxkOHRxlpw"`). Required when `mode` is `image-editing-prompt`. | |
| `images` | No | | |
| `seed` | No | If specified, the API will make a best effort to produce the same results, so that repeated requests with the same `seed` and parameters return the same outputs. Must be used with identical parameters, including the prompt, the model's state, etc. | |
| `modelId` | No | The model ID used to condition the generation. When provided, the generation takes the model's training images and examples into account. When `mode` is `image-editing-prompt`, only `gemini-2.0-flash`, `gemini-2.5-flash`, `gpt-image-1`, `flux-kontext`, and `runway-gen4-image` are currently supported. | |
| `temperature` | No | The sampling temperature to use. Higher values like `0.8` make the output more random, while lower values like `0.2` make it more focused and deterministic. We generally recommend altering this or `topP`, but not both. | |
| `assetIds` | No | | |
| `numResults` | No | The number of results to return. | |
| `prompt` | No | The initial prompt spark fed to the `completion`, `inventive`, or `structured` modes. | |
| `topP` | No | An alternative to sampling with temperature, called nucleus sampling, in which the model considers only the tokens comprising the top `topP` probability mass. So `0.1` means only the tokens in the top `10%` probability mass are considered. We generally recommend altering this or `temperature`, but not both. | |
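The schema above implies a few cross-parameter rules: `mode` is the only required field, `image` becomes required when `mode` is `image-editing-prompt`, and `temperature` and `topP` should not be set together. A small client-side check can catch violations before a request is sent. The sketch below is a hypothetical helper written against the schema as documented — the function name, the mode list, and the strictness of each rule are assumptions, not part of an official client.

```python
# Hypothetical request-body builder for the post-prompt-inferences tool.
# The validation rules are inferred from the input schema documented above;
# this is an illustrative assumption, not an official client library.

VALID_MODES = {"completion", "contextual", "inventive", "structured",
               "image-editing-prompt"}  # assumed from the tool description

def build_prompt_inference_request(mode: str, **params) -> dict:
    """Build and sanity-check a request body before sending it to the API."""
    if mode not in VALID_MODES:
        raise ValueError(f"unknown mode: {mode!r}")
    # `image` is documented as required only for image-editing-prompt mode.
    if mode == "image-editing-prompt" and "image" not in params:
        raise ValueError("`image` is required when mode is 'image-editing-prompt'")
    # The docs recommend altering `temperature` or `topP`, but not both.
    if "temperature" in params and "topP" in params:
        raise ValueError("set `temperature` or `topP`, not both")
    return {"mode": mode, **params}

# Example: an inventive-mode request asking for three results.
body = build_prompt_inference_request(
    "inventive", prompt="a foggy harbor at dawn", numResults=3, temperature=0.8
)
```

Treating the temperature/topP recommendation as a hard error is a design choice here; a real client might only warn, since the API itself merely discourages the combination.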


MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/pasie15/scenario.com-mcp-server'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.