# flux_kontext

Generates high-quality images that adhere closely to text prompts, with precise typography and customizable parameters such as size, quantity, and inference steps.
## Instructions

FLUX Kontext Pro - State-of-the-art prompt adherence and typography
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| guidance_scale | No | How strongly the model follows the prompt (1-20) | 3.5 |
| image_size | No | Output size: `square_hd`, `square`, `portrait_4_3`, `portrait_16_9`, `landscape_4_3`, `landscape_16_9` | `landscape_4_3` |
| num_images | No | Number of images to generate (1-4) | 1 |
| num_inference_steps | No | Number of denoising steps (1-50) | 25 |
| prompt | Yes | Text prompt for image generation | |
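For illustration, a client call to this tool might pass arguments like the following. The prompt and override values here are hypothetical; any optional field that is omitted falls back to the schema defaults in the table above:

```typescript
// Hypothetical MCP tools/call parameters for the flux_kontext tool.
// Only `prompt` is required; the remaining fields override schema defaults.
const callParams = {
  name: 'flux_kontext',
  arguments: {
    prompt: 'A vintage storefront sign that reads "FLUX BAKERY" in gold serif lettering',
    image_size: 'landscape_16_9',
    num_images: 2,
    num_inference_steps: 30,
    guidance_scale: 4.0,
  },
};

console.log(JSON.stringify(callParams, null, 2));
```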
## Implementation Reference
- `src/index.ts:102` (registration): Model registry entry for the `flux_kontext` tool, defining its ID, FAL endpoint, name, and description. This registers it as an `imageGeneration` model.

  ```typescript
  {
    id: 'flux_kontext',
    endpoint: 'fal-ai/flux-pro/kontext/text-to-image',
    name: 'FLUX Kontext Pro',
    description: 'State-of-the-art prompt adherence and typography',
  },
  ```
- `src/index.ts:346-393` (schema): Dynamic schema generator for tool inputs, called during tool listing. Because the `flux_kontext` ID contains `'flux'`, it adds `num_inference_steps` and `guidance_scale` to the base image-generation schema (`prompt`, `image_size`, `num_images`).

  ```typescript
  private generateToolSchema(model: any, category: string) {
    const baseSchema = {
      name: model.id,
      description: `${model.name} - ${model.description}`,
      inputSchema: {
        type: 'object',
        properties: {} as any,
        required: [] as string[],
      },
    };

    if (category === 'imageGeneration') {
      baseSchema.inputSchema.properties = {
        prompt: { type: 'string', description: 'Text prompt for image generation' },
        image_size: {
          type: 'string',
          enum: ['square_hd', 'square', 'portrait_4_3', 'portrait_16_9', 'landscape_4_3', 'landscape_16_9'],
          default: 'landscape_4_3',
        },
        num_images: { type: 'number', default: 1, minimum: 1, maximum: 4 },
      };
      baseSchema.inputSchema.required = ['prompt'];

      // Add model-specific parameters
      if (model.id.includes('flux') || model.id.includes('stable_diffusion')) {
        baseSchema.inputSchema.properties.num_inference_steps = { type: 'number', default: 25, minimum: 1, maximum: 50 };
        baseSchema.inputSchema.properties.guidance_scale = { type: 'number', default: 3.5, minimum: 1, maximum: 20 };
      }
      if (model.id.includes('stable_diffusion') || model.id === 'ideogram_v3') {
        baseSchema.inputSchema.properties.negative_prompt = { type: 'string', description: 'Negative prompt' };
      }
    } else if (category === 'textToVideo') {
      baseSchema.inputSchema.properties = {
        prompt: { type: 'string', description: 'Text prompt for video generation' },
        duration: { type: 'number', default: 5, minimum: 1, maximum: 30 },
        aspect_ratio: { type: 'string', enum: ['16:9', '9:16', '1:1', '4:3', '3:4'], default: '16:9' },
      };
      baseSchema.inputSchema.required = ['prompt'];
    } else if (category === 'imageToVideo') {
      baseSchema.inputSchema.properties = {
        image_url: { type: 'string', description: 'URL of the input image' },
        prompt: { type: 'string', description: 'Motion description prompt' },
        duration: { type: 'string', enum: ['5', '10'], default: '5', description: 'Video duration in seconds' },
        aspect_ratio: { type: 'string', enum: ['16:9', '9:16', '1:1'], default: '16:9' },
        negative_prompt: { type: 'string', description: 'What to avoid in the video' },
        cfg_scale: { type: 'number', default: 0.5, minimum: 0, maximum: 1, description: 'How closely to follow the prompt' },
      };
      baseSchema.inputSchema.required = ['image_url', 'prompt'];
    }

    return baseSchema;
  }
  ```
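Tracing the `imageGeneration` branch for a model ID containing `'flux'`, the schema the generator produces for `flux_kontext` should work out to the object below. This is reconstructed from the branch logic, not copied from runtime output:

```typescript
// Expected shape of the generated flux_kontext schema (reconstructed sketch).
// negative_prompt is absent: the ID contains neither 'stable_diffusion'
// nor equals 'ideogram_v3'.
const fluxKontextSchema = {
  name: 'flux_kontext',
  description: 'FLUX Kontext Pro - State-of-the-art prompt adherence and typography',
  inputSchema: {
    type: 'object',
    properties: {
      prompt: { type: 'string', description: 'Text prompt for image generation' },
      image_size: {
        type: 'string',
        enum: ['square_hd', 'square', 'portrait_4_3', 'portrait_16_9', 'landscape_4_3', 'landscape_16_9'],
        default: 'landscape_4_3',
      },
      num_images: { type: 'number', default: 1, minimum: 1, maximum: 4 },
      // Added because 'flux_kontext'.includes('flux') is true:
      num_inference_steps: { type: 'number', default: 25, minimum: 1, maximum: 50 },
      guidance_scale: { type: 'number', default: 3.5, minimum: 1, maximum: 20 },
    },
    required: ['prompt'],
  },
};

console.log(Object.keys(fluxKontextSchema.inputSchema.properties));
```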
- `src/index.ts:495-563` (handler): Core handler for all image-generation tools, including `flux_kontext`. It builds the input parameters (with flux-specific steps and guidance), calls `fal.subscribe` on the model endpoint, downloads and processes the output images, and returns a JSON result.

  ```typescript
  private async handleImageGeneration(args: any, model: any) {
    const {
      prompt,
      image_size = 'landscape_4_3',
      num_inference_steps = 25,
      guidance_scale = 3.5,
      num_images = 1,
      negative_prompt,
      safety_tolerance,
      raw,
    } = args;

    try {
      // Configure FAL client lazily with query config override
      configureFalClient(this.currentQueryConfig);

      const inputParams: any = { prompt };

      // Add common parameters
      if (image_size) inputParams.image_size = image_size;
      if (num_images > 1) inputParams.num_images = num_images;

      // Add model-specific parameters based on model capabilities
      if (model.id.includes('flux') || model.id.includes('stable_diffusion')) {
        if (num_inference_steps) inputParams.num_inference_steps = num_inference_steps;
        if (guidance_scale) inputParams.guidance_scale = guidance_scale;
      }
      if ((model.id.includes('stable_diffusion') || model.id === 'ideogram_v3') && negative_prompt) {
        inputParams.negative_prompt = negative_prompt;
      }
      if (model.id.includes('flux_pro') && safety_tolerance) {
        inputParams.safety_tolerance = safety_tolerance;
      }
      if (model.id === 'flux_pro_ultra' && raw !== undefined) {
        inputParams.raw = raw;
      }

      const result = await fal.subscribe(model.endpoint, { input: inputParams });
      const imageData = result.data as FalImageResult;
      const processedImages = await downloadAndProcessImages(imageData.images, model.id);

      return {
        content: [
          {
            type: 'text',
            text: JSON.stringify({
              model: model.name,
              id: model.id,
              endpoint: model.endpoint,
              prompt,
              images: processedImages,
              metadata: inputParams,
              download_path: DOWNLOAD_PATH,
              data_url_settings: {
                enabled: ENABLE_DATA_URLS,
                max_size_mb: Math.round(MAX_DATA_URL_SIZE / 1024 / 1024),
              },
              autoopen_settings: {
                enabled: AUTOOPEN,
                note: AUTOOPEN
                  ? 'Files automatically opened with default application'
                  : 'Auto-open disabled',
              },
            }, null, 2),
          },
        ],
      };
    } catch (error) {
      throw new Error(`${model.name} generation failed: ${error}`);
    }
  }
  ```
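The parameter assembly in the handler can be sketched as a standalone function (`buildFluxInput` is a hypothetical name for illustration, not present in `src/index.ts`):

```typescript
// Standalone sketch of how the handler assembles the FAL input for
// flux_kontext. buildFluxInput is hypothetical; the real code does this
// inline inside handleImageGeneration.
function buildFluxInput(args: {
  prompt: string;
  image_size?: string;
  num_images?: number;
  num_inference_steps?: number;
  guidance_scale?: number;
}): Record<string, any> {
  const {
    prompt,
    image_size = 'landscape_4_3',
    num_inference_steps = 25,
    guidance_scale = 3.5,
    num_images = 1,
  } = args;

  const input: Record<string, any> = { prompt };
  if (image_size) input.image_size = image_size;
  // num_images is only sent when it differs from the default of 1
  if (num_images > 1) input.num_images = num_images;
  // 'flux_kontext'.includes('flux'), so steps and guidance are attached
  if (num_inference_steps) input.num_inference_steps = num_inference_steps;
  if (guidance_scale) input.guidance_scale = guidance_scale;
  return input;
}

console.log(buildFluxInput({ prompt: 'Poster with the word "KONTEXT"' }));
```

With only a prompt supplied, the result carries `prompt`, `image_size`, `num_inference_steps`, and `guidance_scale`; `num_images` is omitted because it equals the default.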
- `src/index.ts:456-492` (handler): MCP `CallTool` handler that dispatches `flux_kontext` (`name === 'flux_kontext'`) to `handleImageGeneration` after model lookup.

  ```typescript
  this.server.setRequestHandler(CallToolRequestSchema, async (request) => {
    const { name, arguments: args } = request.params;

    try {
      // Handle special tools first
      if (name === 'list_available_models') {
        return await this.handleListModels(args);
      } else if (name === 'execute_custom_model') {
        return await this.handleCustomModel(args);
      }

      const model = getModelById(name);
      if (!model) {
        throw new McpError(ErrorCode.MethodNotFound, `Unknown model: ${name}`);
      }

      // Determine category and handle accordingly
      if (MODEL_REGISTRY.imageGeneration.find(m => m.id === name)) {
        return await this.handleImageGeneration(args, model);
      } else if (MODEL_REGISTRY.textToVideo.find(m => m.id === name)) {
        return await this.handleTextToVideo(args, model);
      } else if (MODEL_REGISTRY.imageToVideo.find(m => m.id === name)) {
        return await this.handleImageToVideo(args, model);
      }

      throw new McpError(ErrorCode.MethodNotFound, `Unsupported model category for: ${name}`);
    } catch (error) {
      const errorMessage = error instanceof Error ? error.message : String(error);
      throw new McpError(ErrorCode.InternalError, errorMessage);
    }
  });
  ```
- `src/index.ts:139-143` (helper): Helper to retrieve a model config by ID (here, `'flux_kontext'`), used in dispatch and execution.

  ```typescript
  // Helper function to get model by ID
  function getModelById(id: string) {
    const allModels = getAllModels();
    return allModels.find(model => model.id === id);
  }
  ```