mimic_generate_build_report

Read-only · Idempotent

Generate a structured build report compiling design system compliance, learned patterns, gap recommendations, and build metadata after a Mimic build. Accepts compliance data from validate_ds_compliance and returns the report in markdown or HTML format.

Instructions

Generate a build report after a Mimic build. Compiles DS compliance data, learned patterns, DS gap recommendations, and build metadata into a structured report. Call validate_ds_compliance first to get the complianceData, then pass it here. Returns the report as markdown (default) or HTML. Optionally saves to a file.
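As a minimal sketch of that sequencing, using the MCP TypeScript SDK: the two tool names and the complianceData hand-off come from this page, while the client setup, the argument values, and validate_ds_compliance's own arguments are assumptions.

```typescript
// Sketch of the prescribed two-step sequence. Assumes an already-connected
// MCP Client; validate_ds_compliance's arguments are not documented here,
// so they are elided.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

async function generateReport(client: Client) {
  // Step 1: run compliance validation to obtain complianceData.
  const compliance = await client.callTool({
    name: "validate_ds_compliance",
    arguments: { /* ...per that tool's schema... */ },
  });

  // Step 2: hand the { stats, violations, summary } output to the report tool.
  return client.callTool({
    name: "mimic_generate_build_report",
    arguments: {
      screenName: "Dataflow Landing Page",
      dsName: "Acme DS",          // hypothetical design-system name
      complianceData: compliance, // in practice, extract the structured result
      format: "markdown",
    },
  });
}
```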

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| screenName | Yes | Name of the screen that was built (e.g., "Dataflow Landing Page"). | |
| dsName | Yes | Name of the design system used. | |
| artboardNodeId | No | Artboard node ID (for metadata). | |
| complianceData | No | Output from validate_ds_compliance: { stats, violations, summary }. | |
| sectionsBuilt | No | List of section names built (e.g., ["Nav", "Hero", "Metrics"]). | |
| dsComponents | No | DS components used: [{ name, count, variant }]. | |
| primitives | No | Elements built as primitives: [{ name, reason, instances }]. instances = count of this element in the build (e.g., 4 metric cards). Used for efficiency estimates. | |
| format | No | Output format. | markdown |
| savePath | No | Optional file path to save the report. | |
| toolCallCount | No | Total tool calls made during this build (use_figma + get_screenshot + get_metadata). Tracked in-memory by the build orchestrator. | |
| cacheHits | No | Number of patterns resolved from cache (skipped DS search). | |
| coldBuildEstimate | No | Estimated tool calls for the same build with no cache. | toolCallCount * 2 |
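
To make these shapes concrete, here is a hypothetical arguments object. Every value is invented, and the inner fields of complianceData are assumptions, since the schema only names its top-level keys (stats, violations, summary).

```typescript
// Hypothetical arguments for mimic_generate_build_report. All values are
// invented for illustration; complianceData's inner fields are assumed.
const args = {
  screenName: "Dataflow Landing Page",
  dsName: "Acme DS",            // assumed design-system name
  artboardNodeId: "12:345",     // assumed Figma-style node ID
  complianceData: {
    stats: { checked: 120, violations: 3 },                  // assumed shape
    violations: [{ node: "Hero/CTA", rule: "color-token" }], // assumed shape
    summary: "97% of styles resolve to DS tokens",
  },
  sectionsBuilt: ["Nav", "Hero", "Metrics"],
  dsComponents: [{ name: "Button", count: 6, variant: "primary" }],
  primitives: [{ name: "MetricCard", reason: "no DS equivalent", instances: 4 }],
  format: "markdown",           // or "html"
  savePath: "reports/dataflow-landing.md",
  toolCallCount: 42,            // use_figma + get_screenshot + get_metadata
  cacheHits: 17,
  // coldBuildEstimate omitted: defaults to toolCallCount * 2 = 84
};
```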
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true, so the description adds value by specifying the output format (markdown or HTML) and file save option, which are behavioral traits beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description runs only a few sentences, front-loads the purpose, and wastes no words; every sentence adds necessary context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 12 parameters and no output schema, the description clearly states what the tool does, which input is critical (complianceData), and what format it returns. That is sufficient for an agent to understand the tool's role in the workflow.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

All 12 parameters carry schema descriptions (100% coverage), so the tool description adds little new meaning at the parameter level. It mentions complianceData, screenName, and so on, but mostly restates schema information, meriting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Generate a build report after a Mimic build' and lists the data it compiles, distinguishing it from siblings such as `mimic_generate_design_md` (which generates design markdown) and the other Mimic tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It explicitly instructs agents to call `validate_ds_compliance` first and pass its complianceData output here, a key prerequisite. It also notes the output format and optional file saving, but lacks exclusions and references to alternative tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
