
replicate-flux-mcp

create_prediction

Generate high-quality images from text prompts using the Flux Schnell model. Customize aspect ratio, output format, and quality for tailored results.

Instructions

Generate a prediction from a text prompt using the Flux Schnell model

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| prompt | Yes | Prompt for generated image | — |
| seed | No | Random seed. Set for reproducible generation | — |
| go_fast | No | Run faster predictions with a model optimized for speed (currently fp8-quantized); disable to run in the original bf16 | true |
| megapixels | No | Approximate number of megapixels for the generated image | 1 |
| num_outputs | No | Number of outputs to generate (1–4) | 1 |
| aspect_ratio | No | Aspect ratio for the generated image | 1:1 |
| output_format | No | Format of the output images (webp, jpg, or png) | webp |
| output_quality | No | Quality when saving the output images, from 0 to 100 (100 is best quality). Not relevant for .png outputs | 80 |
| num_inference_steps | No | Number of denoising steps (1–4). 4 is recommended; fewer steps run faster but produce lower-quality outputs | 4 |
| disable_safety_checker | No | Disable the safety checker for generated images | false |
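As a quick illustration, here is a hypothetical arguments object that satisfies the schema above. The prompt and all values are made up for the example; any optional field omitted falls back to the default shown in the table.

```typescript
// Hypothetical example arguments for create_prediction; only `prompt` is required.
const exampleInput = {
  prompt: "a watercolor fox in a misty forest", // required
  aspect_ratio: "16:9", // one of the listed ratios
  num_outputs: 2, // integer between 1 and 4
  output_format: "png", // webp | jpg | png
  seed: 42, // set for reproducible generation
};

console.log(JSON.stringify(exampleInput));
```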

Implementation Reference

  • The handler function that executes the create_prediction tool: creates a prediction via Replicate API with the given input, polls for completion, and returns the result as text JSON.
    export const registerCreatePredictionTool = async (
      input: CreatePredictionParams
    ): Promise<CallToolResult> => {
      try {
        const prediction = await replicate.predictions.create({
          model: CONFIG.imageModelId,
          input,
        });
    
        // Poll the prediction until it reaches a terminal state or times out
        const completed = await pollForCompletion(prediction.id);
    
        return {
          content: [
            {
              type: "text",
              text: JSON.stringify(completed || "Processing timed out", null, 2),
            },
          ],
        };
      } catch (error) {
        handleError(error);
      }
    };
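The helper `pollForCompletion` is referenced above but not shown. A minimal sketch of what such a helper might look like: it is parameterized here over the fetch function so it can be exercised without network access (the real helper presumably closes over the Replicate client and calls `replicate.predictions.get`), and the interval and attempt cap are illustrative assumptions, not the package's actual values.

```typescript
// Sketch only: polls a prediction until it reaches a terminal state or gives up.
type Prediction = { id: string; status: string; output?: unknown };

async function pollForCompletion(
  getPrediction: (id: string) => Promise<Prediction>, // e.g. replicate.predictions.get
  id: string,
  { intervalMs = 1000, maxAttempts = 60 } = {} // assumed limits
): Promise<Prediction | null> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const prediction = await getPrediction(id);
    // Replicate predictions end in one of these terminal statuses
    if (["succeeded", "failed", "canceled"].includes(prediction.status)) {
      return prediction;
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return null; // the handler above maps null to "Processing timed out"
}
```

Returning `null` on timeout matches the handler's `completed || "Processing timed out"` fallback.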
  • Input schema using Zod for validating parameters of the create_prediction tool, including prompt, seed, aspect ratio, etc.
    export const createPredictionSchema = {
      prompt: z.string().min(1).describe("Prompt for generated image"),
      seed: z
        .number()
        .int()
        .optional()
        .describe("Random seed. Set for reproducible generation"),
      go_fast: z
        .boolean()
        .default(true)
        .describe(
          "Run faster predictions with model optimized for speed (currently fp8 quantized); disable to run in original bf16"
        ),
      megapixels: z
        .enum(["1", "0.25"])
        .default("1")
        .describe("Approximate number of megapixels for generated image"),
      num_outputs: z
        .number()
        .int()
        .min(1)
        .max(4)
        .default(1)
        .describe("Number of outputs to generate"),
      aspect_ratio: z
        .enum([
          "1:1",
          "16:9",
          "21:9",
          "3:2",
          "2:3",
          "4:5",
          "5:4",
          "3:4",
          "4:3",
          "9:16",
          "9:21",
        ])
        .default("1:1")
        .describe("Aspect ratio for the generated image"),
      output_format: z
        .enum(["webp", "jpg", "png"])
        .default("webp")
        .describe("Format of the output images"),
      output_quality: z
        .number()
        .int()
        .min(0)
        .max(100)
        .default(80)
        .describe(
          "Quality when saving the output images, from 0 to 100. 100 is best quality, 0 is lowest quality. Not relevant for .png outputs"
        ),
      num_inference_steps: z
        .number()
        .int()
        .min(1)
        .max(4)
        .default(4)
        .describe(
          "Number of denoising steps. 4 is recommended, and lower number of steps produce lower quality outputs, faster."
        ),
      disable_safety_checker: z
        .boolean()
        .default(false)
        .describe("Disable safety checker for generated images."),
    };
  • Registration of the 'create_prediction' tool on the MCP server, providing name, description, schema, and handler function.
    server.tool(
      "create_prediction",
      "Generate an prediction from a text prompt using Flux Schnell model",
      createPredictionSchema,
      registerCreatePredictionTool
    );
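Once registered, an MCP client invokes the tool with a standard `tools/call` JSON-RPC request. A minimal sketch of such a message (the prompt value is illustrative):

```typescript
// Illustrative JSON-RPC 2.0 message an MCP client sends to invoke the tool
const callRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "create_prediction",
    arguments: {
      prompt: "a lighthouse at dawn, volumetric light", // illustrative prompt
      aspect_ratio: "3:2",
    },
  },
};

console.log(JSON.stringify(callRequest, null, 2));
```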