
compose

Execute design system tasks by describing them in natural language. The tool classifies your intent, creates a multi-step plan, and runs it to generate specs, code, or audits.

Instructions

Run the agent orchestrator with a natural language design intent — classifies the task, builds a multi-step plan, and executes it.

Prerequisites: No Figma connection is required for spec or code tasks. Tasks that touch Figma (design generation, audits) require the bridge to be running. The orchestrator automatically dispatches to registered agent workers when available, or falls back to internal execution.

Returns on success: Orchestrator result object with shape { success: boolean, plan: { steps: [] }, results: [], summary: string, errors?: [] }. Each step includes the agent role that handled it and its output.

Error behavior: Returns success=false with an errors array if planning fails or execution throws. Individual step failures are captured per-step and do not abort the entire plan.
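The success and error behavior above can be sketched as a TypeScript type. Only the top-level shape ({ success, plan, results, summary, errors? }) is documented; the field names inside a plan step (`agent`, `output`) are assumptions based on the description:

```typescript
// Hypothetical sketch of the documented result shape.
interface PlanStep {
  agent: string;   // role that handled the step (field name assumed)
  output: unknown; // step output (field name assumed)
}

interface OrchestratorResult {
  success: boolean;
  plan: { steps: PlanStep[] };
  results: unknown[];
  summary: string;
  errors?: string[]; // present when planning or execution fails
}

// Illustrative success result: every step ran, no errors array.
const ok: OrchestratorResult = {
  success: true,
  plan: { steps: [{ agent: "spec-writer", output: "SearchBarSpec" }] },
  results: ["SearchBarSpec"],
  summary: "Generated 1 spec",
};

// Illustrative failure: success=false with an errors array, as documented.
const failed: OrchestratorResult = {
  success: false,
  plan: { steps: [] },
  results: [],
  summary: "Planning failed",
  errors: ["could not classify intent"],
};
```

Note that a per-step failure would appear inside `results` for that step rather than flipping the top-level `success` flag, per the documented behavior.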

Intent examples:

  • "create a dashboard page with KPI cards, a chart, and a data table" — generates specs and code

  • "audit button variants for WCAG contrast and touch target compliance" — runs accessibility checks

  • "generate a login page with email/password form and OAuth buttons" — spec + codegen

  • "pull design system, then generate all missing component specs" — chained multi-step pipeline

  • "create a molecule spec for a search bar composing Input and Button atoms" — atomic design authoring

Be specific — vague intents like "make something nice" produce generic plans. Include component names, atomic levels, and target pages when relevant.
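A dry-run-first workflow follows from the guidance above. In this sketch, only the parameter names (`intent`, `dryRun`) come from the schema; the `composeArgs` helper and its validation rule are invented for illustration:

```typescript
// Hypothetical helper that assembles arguments for the compose tool.
type ComposeArgs = { intent: string; dryRun?: boolean };

function composeArgs(intent: string, dryRun = false): ComposeArgs {
  // Illustrative guard: vague or empty intents produce generic plans.
  if (intent.trim().length === 0) {
    throw new Error("intent is required and should be specific");
  }
  return { intent, dryRun };
}

// First pass: dry run to inspect the plan without executing any steps.
const inspect = composeArgs(
  "create a molecule spec for a search bar composing Input and Button atoms",
  true,
);

// Second pass: the same intent, executed for real (dryRun defaults to false).
const execute = composeArgs(inspect.intent);
```

The two-pass pattern mirrors the documented purpose of dryRun: inspect what the orchestrator intends to do before committing.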

Input Schema

intent (required) — Natural language design task. Be specific about what to create, modify, or check. Include the atomic level if relevant (atom/molecule/organism/template/page), component names, and target output (spec, code, audit). Examples: 'create a KPI card atom with value, label, and trend props', 'audit all organism specs for WCAG 2.2 compliance', 'generate the LoginPage template from the AuthForm organism spec'.

dryRun (optional, default: false) — If true, returns the execution plan without running any steps. Use this to inspect what the orchestrator intends to do before committing.
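The parameters above correspond to a JSON Schema along these lines. This is a reconstruction: only the property names, the required field, and the dryRun default are taken from the documentation; everything else is assumed:

```typescript
// Hypothetical reconstruction of the tool's input schema.
const inputSchema = {
  type: "object",
  required: ["intent"],
  properties: {
    intent: { type: "string" },                  // natural language design task
    dryRun: { type: "boolean", default: false }, // plan-only mode
  },
} as const;
```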
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure and does so effectively. It explains the tool's orchestration behavior, fallback mechanisms, success/error handling (including that individual step failures don't abort the plan), and return format. It also covers prerequisites and execution modes. The only minor gap is the lack of explicit rate-limit or performance characteristics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well structured, with clear sections: purpose, prerequisites, return behavior, error handling, and examples. While comprehensive, it could be slightly more concise; some of the information about prerequisites and fallback mechanisms could be condensed. However, every sentence adds value, and the front-loaded purpose statement is strong.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (orchestrator with multi-step execution), no annotations, and no output schema, the description does an excellent job covering behavior, prerequisites, and examples. It explains the return format in detail despite lacking an output schema. The only minor gap is that without annotations, it doesn't explicitly state whether this is a read-only or mutating operation, though the examples imply both creation and audit capabilities.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description adds value by providing context for the 'intent' parameter through multiple detailed examples that illustrate specificity requirements and domain concepts (atomic levels, component names). It also explains the practical use of 'dryRun' in the context of inspecting plans before execution, which complements the schema's technical description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Run the agent orchestrator with a natural language design intent — classifies the task, builds a multi-step plan, and executes it.' It specifies the verb ('run'), resource ('agent orchestrator'), and distinguishes from siblings by focusing on orchestration of multi-step tasks rather than single operations like 'create_spec' or 'generate_code'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool vs alternatives. It states prerequisites (Figma connection requirements for design tasks), mentions it 'automatically dispatches to registered agent workers when available, or falls back to internal execution,' and includes intent examples that illustrate appropriate use cases. The 'Be specific' warning also helps distinguish from vague intents better handled elsewhere.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
