Glama

iterate_screen

Apply design feedback to an existing screen mockup, such as changing colors or adding sections.

Instructions

Refine an existing generated screen based on feedback. Use this for follow-up edits like 'change color', 'add a section', 'make it more spacious'.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| screen_id | Yes | ID of the screen to iterate on (from generate_screen output). | — |
| feedback | Yes | What to change: 'make the hero card larger', 'use orange accent instead of blue', etc. | — |
| name | No | Optional new name for the iteration. | Original name + ' (v2)' |
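A typical arguments object for a call to this tool might look like the following sketch (the `screen_id` and `name` values are illustrative placeholders, not real IDs from the server):

```typescript
// Hypothetical arguments for an iterate_screen call. In practice,
// screen_id comes from a prior generate_screen response.
const args = {
  screen_id: "scr_example_123",                   // required
  feedback: "use orange accent instead of blue",  // required
  name: "Checkout (orange)",                      // optional; omit to get "<name> (v2)"
};
```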

Implementation Reference

  • handleIterate function is the main handler for the 'iterate_screen' tool. It fetches the original screen by ID, reads its previous HTML, calls generateScreen with feedback to produce an iteration, renders the HTML to PNG, saves the new screen, and returns the image along with a summary.
    async function handleIterate(input: z.infer<typeof iterateInput>) {
      const original = await getScreen(input.screen_id);
      if (!original) throw new Error(`Screen not found: ${input.screen_id}`);
      const previousHtml = await readHtml(original);
    
      const result = await generateScreen({
        prompt: original.prompt,
        designSystem: original.designSystem,
        feedback: input.feedback,
        previousHtml,
      });
      const render = await renderHtml(result.html);
      const saved = await saveScreen({
        project: original.project,
        name: input.name ?? `${original.name} (v2)`,
        prompt: original.prompt,
        designSystem: original.designSystem,
        html: result.html,
        png: render.png,
        parentId: original.id,
        tokens: {
          input: result.inputTokens,
          output: result.outputTokens,
          cacheRead: result.cacheReadTokens,
        },
        model: result.model,
      });
    
      return {
        content: [
          {
            type: "image" as const,
            data: render.png.toString("base64"),
            mimeType: "image/png",
          },
          {
            type: "text" as const,
            text: summarize(saved, render, { iteratedFrom: original.id, result }),
          },
        ],
      };
    }
  • iterateInput Zod schema defining the input parameters for the iterate_screen tool: screen_id (string, required), feedback (string, required), and name (string, optional).
    const iterateInput = z.object({
      screen_id: z.string().describe("ID of the screen to iterate on (from generate_screen output)."),
      feedback: z.string().describe("What to change: 'make the hero card larger', 'use orange accent instead of blue', etc."),
      name: z.string().optional().describe("Optional new name for the iteration. Defaults to original name + ' (v2)'."),
    });
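For readers unfamiliar with zod, the contract that `iterateInput.parse` enforces can be sketched as a plain type guard (a minimal illustration only; `parseIterateInput` is a hypothetical name and not part of the server):

```typescript
interface IterateInput {
  screen_id: string;
  feedback: string;
  name?: string;
}

// Mirrors the zod schema above: screen_id and feedback must be strings,
// name is an optional string. Throws on invalid input, like zod's .parse().
function parseIterateInput(raw: unknown): IterateInput {
  const obj = raw as Record<string, unknown> | null;
  if (typeof obj?.screen_id !== "string") throw new Error("screen_id must be a string");
  if (typeof obj?.feedback !== "string") throw new Error("feedback must be a string");
  if (obj.name !== undefined && typeof obj.name !== "string")
    throw new Error("name must be a string when provided");
  return {
    screen_id: obj.screen_id,
    feedback: obj.feedback,
    name: obj.name as string | undefined,
  };
}
```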
  • src/server.ts:55-60 (registration)
    Tool registration entry for 'iterate_screen' with its description and inputSchema in the TOOLS array.
    {
      name: "iterate_screen",
      description:
        "Refine an existing generated screen based on feedback. Use this for follow-up edits like 'change color', 'add a section', 'make it more spacious'.",
      inputSchema: zodToJson(iterateInput),
    },
  • src/server.ts:126-127 (registration)
    Switch-case dispatch routing 'iterate_screen' calls to the handleIterate function.
    case "iterate_screen":
      return await handleIterate(iterateInput.parse(args));
  • buildUserPrompt helper that constructs the prompt for iteration: includes previous HTML and revision feedback when feedback/previousHtml are provided.
    export function buildUserPrompt(input: {
      prompt: string;
      designSystem?: string;
      feedback?: string;
      previousHtml?: string;
    }): string {
      const parts: string[] = [];
    
      if (input.designSystem) {
        parts.push("## DESIGN SYSTEM\n\n" + input.designSystem.trim());
      }
    
      if (input.previousHtml && input.feedback) {
        parts.push("## PREVIOUS VERSION\n\n```html\n" + input.previousHtml.trim() + "\n```");
        parts.push("## REVISION FEEDBACK\n\n" + input.feedback.trim());
        parts.push("Output the revised HTML following the same scaffold. Apply the feedback precisely.");
      } else {
        parts.push("## SCREEN BRIEF\n\n" + input.prompt.trim());
        parts.push("Output the complete self-contained HTML for this screen.");
      }
    
      return parts.join("\n\n");
    }
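To make the branching concrete, the helper above can be exercised directly. This reproduces `buildUserPrompt` as given (minus the `export`) and calls it in both modes; the prompt, feedback, and HTML strings are illustrative:

```typescript
function buildUserPrompt(input: {
  prompt: string;
  designSystem?: string;
  feedback?: string;
  previousHtml?: string;
}): string {
  const parts: string[] = [];

  if (input.designSystem) {
    parts.push("## DESIGN SYSTEM\n\n" + input.designSystem.trim());
  }

  if (input.previousHtml && input.feedback) {
    parts.push("## PREVIOUS VERSION\n\n```html\n" + input.previousHtml.trim() + "\n```");
    parts.push("## REVISION FEEDBACK\n\n" + input.feedback.trim());
    parts.push("Output the revised HTML following the same scaffold. Apply the feedback precisely.");
  } else {
    parts.push("## SCREEN BRIEF\n\n" + input.prompt.trim());
    parts.push("Output the complete self-contained HTML for this screen.");
  }

  return parts.join("\n\n");
}

// Iteration mode: previousHtml + feedback selects the revision branch,
// so the original prompt is carried along but not re-stated as a brief.
const revision = buildUserPrompt({
  prompt: "pricing page",
  feedback: "make the hero card larger",
  previousHtml: "<html><body>v1</body></html>",
});
// revision contains "## PREVIOUS VERSION" and "## REVISION FEEDBACK"

// Initial mode: with no feedback, the screen-brief branch is used instead.
const initial = buildUserPrompt({ prompt: "pricing page" });
// initial contains "## SCREEN BRIEF"
```

Note that both `previousHtml` and `feedback` must be present to enter the revision branch; supplying only one falls back to the initial screen-brief mode.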
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden of behavioral disclosure. It implies mutation ("refine") but does not specify whether changes are reversible, what permissions are needed, or how the tool handles repeated iterations. The examples are non-destructive, but explicit security and side-effect information is missing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the primary action, and includes practical examples without extraneous text. Every sentence adds value and the structure is efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description adequately explains the tool's purpose and gives usage examples. However, it does not describe the return value (e.g., updated screen object) or confirm that screen_id must come from generate_screen, which is noted only in the schema. A bit more context on how to use the result would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage, so the baseline is 3. The description adds minimal extra meaning beyond the schema's parameter descriptions (e.g., the "feedback" examples). The optional "name" parameter is described in both places; the description reinforces the schema but does not significantly augment it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool refines an existing generated screen based on feedback, distinguishing it from siblings like generate_screen (creation) and get_screen (retrieval). The examples of feedback ("change color", "add a section") further clarify its purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description specifies that this tool is for follow-up edits after generation, providing clear usage context. However, it does not explicitly state when not to use it (e.g., for new screens or non-generated screens) or mention prerequisites like valid screen_id from generate_screen.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
