
Smasher Studio — AI Fashion Design

Server Details

AI fashion design — product photos, videos, tech packs, colorways & fabric sims.

Status
Healthy
Last Tested
Transport
Streamable HTTP
URL

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.8/5 across 9 of 9 tools scored. Lowest: 2.9/5.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose targeting specific fashion design tasks, such as generating images, videos, color variants, fabric simulations, multi-angle views, tech packs, checking credits, listing collections, and monitoring video status. There is no overlap in functionality, making tool selection straightforward for an agent.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern using snake_case, starting with 'generate_', 'check_', or 'list_' followed by a descriptive noun (e.g., generate_fashion_image, check_credits, list_collections). This uniformity enhances readability and predictability across the toolset.

Tool Count: 5/5

With 9 tools, the server is well-scoped for AI fashion design, covering key workflows like image/video generation, design variations, technical documentation, and user management. Each tool serves a unique and necessary function without being excessive or insufficient for the domain.

Completeness: 4/5

The toolset provides comprehensive coverage for fashion design tasks, including creation (images, videos, colorways, fabrics, angles, tech packs), status checking, and user management. A minor gap exists in editing or updating existing designs, but agents can work around this by regenerating assets as needed.

Available Tools

9 tools
check_credits (Check Credits): C

Check the user's current credit balance, subscription plan, and monthly allocation. Free to use.

Parameters (JSON Schema)

- verbose (optional): If true, include detailed plan info and upgrade suggestions

Output Schema (JSON Schema)

- tip (optional): Guidance for unauthenticated users
- plan (required): Subscription plan: free, creator, studio, pro, or guest
- authenticated (required): Whether the user is authenticated
- credits_per_month (optional): Monthly credit allocation (authenticated users)
- credits_remaining (required): Current credit balance
- credits_per_month_guest (optional): Monthly credit limit (guest users)
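As a concrete illustration, the output schema above can be exercised against a hypothetical response payload. The field names come from the schema; the plan and credit values are invented for this sketch.

```python
# Hypothetical check_credits response shaped to the documented output schema.
# Field names come from the schema above; the values are invented.
sample_response = {
    "authenticated": True,       # required: whether the user is authenticated
    "plan": "creator",           # required: free, creator, studio, pro, or guest
    "credits_remaining": 320,    # required: current credit balance
    "credits_per_month": 500,    # optional: monthly allocation (authenticated users)
}

# Required fields per the output schema.
REQUIRED_FIELDS = {"authenticated", "plan", "credits_remaining"}

def is_valid_credits_response(resp: dict) -> bool:
    """Return True when every required output-schema field is present."""
    return REQUIRED_FIELDS <= resp.keys()

print(is_valid_credits_response(sample_response))  # True
```

An agent could run a check like this before a costly generation call, since check_credits itself is free.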
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states 'Free to use,' which hints at no cost, but doesn't cover other behavioral traits like rate limits, authentication needs, error handling, or response format. For a tool with no annotations, this is insufficient, as it leaves key operational aspects unspecified.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded, stating the core purpose in the first part and adding a useful note ('Free to use') in the second. Both sentences earn their place by providing essential information without redundancy. It could be slightly improved by integrating usage context, but it's efficiently structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no annotations, but with an output schema), the description is somewhat complete. It covers the purpose and cost aspect, but lacks behavioral details and usage guidelines. The presence of an output schema means return values are documented elsewhere, so the description doesn't need to explain them, but it should still address other contextual gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description doesn't mention any parameters, while the input schema has one parameter ('verbose') with 100% schema description coverage. Since the schema fully documents the parameter, the description doesn't need to add extra semantics. This meets the baseline of 3, as the schema handles the parameter information adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: checking credit balance, subscription plan, and monthly allocation. It uses specific verbs ('check') and resources ('user's current credit balance, subscription plan, and monthly allocation'), making the purpose unambiguous. However, it doesn't explicitly differentiate from sibling tools, which are unrelated (e.g., generate_fashion_image, list_collections), so it falls short of a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides minimal usage guidance: 'Free to use' implies no cost, but it doesn't specify when to use this tool versus alternatives or any prerequisites. There's no mention of context for usage, such as checking before resource-intensive operations, nor exclusions. This lack of explicit guidance limits its helpfulness for an AI agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_video_status (Check Video Status): A

Check the status of an async video generation job. Call after generate_fashion_video with the returned job_id. Returns status and video URL when complete. Free (no credits).

Parameters (JSON Schema)

- job_id (required): The job_id returned by generate_fashion_video

Output Schema (JSON Schema)

- tip (optional): Guidance on what to do next
- error (optional): Error message when status is failed
- status (required): Current job status
- success (required): Whether the status check succeeded
- duration (optional): Video duration in seconds
- progress (optional): Completion progress 0-100 (Runway only)
- provider (optional): Video provider: kling or runway
- video_url (optional): Permanent Storj URL when status is completed
- elapsed_seconds (optional): Seconds since job was submitted
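The submit-then-poll workflow this tool's description references can be sketched with a stubbed client. The `call_tool` function and its canned responses are stand-ins for the real MCP transport, not part of the server's API; only the status/video_url field names come from the schema above.

```python
import itertools

# Canned responses mimicking the documented output fields: two in-progress
# polls, then completion. A real agent would call check_video_status over MCP.
_responses = itertools.chain(
    [
        {"success": True, "status": "processing", "progress": 40},
        {"success": True, "status": "processing", "progress": 80},
    ],
    itertools.repeat(
        {"success": True, "status": "completed",
         "video_url": "https://example.com/video.mp4"}
    ),
)

def call_tool(name: str, args: dict) -> dict:
    """Stand-in for an MCP client's tool invocation (stubbed for this sketch)."""
    assert name == "check_video_status" and "job_id" in args
    return next(_responses)

def poll_video(job_id: str, max_attempts: int = 10) -> str:
    """Poll check_video_status until the job completes or fails."""
    for _ in range(max_attempts):
        result = call_tool("check_video_status", {"job_id": job_id})
        if result["status"] == "completed":
            return result["video_url"]
        if result["status"] == "failed":
            raise RuntimeError(result.get("error", "video generation failed"))
        # A real agent would sleep between polls; omitted in this sketch.
    raise TimeoutError("job did not complete within max_attempts polls")

print(poll_video("job_123"))  # https://example.com/video.mp4
```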
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: it's a polling/status-check operation (implied by 'Check the status'), returns status and video URL when complete (output behavior), and mentions 'Free (no credits)' which is useful cost/rate limit context. However, it doesn't specify error handling or timeout behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with four information-dense sentences that each serve a distinct purpose: stating the action, providing usage timing, describing outputs, and adding cost context. No wasted words, front-loaded with the core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simple polling nature, single parameter with full schema coverage, and existence of an output schema (which handles return values), the description provides complete contextual information. It covers purpose, usage timing, output indication, and cost context—everything needed beyond the structured fields.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents the single parameter. The description adds minimal value beyond the schema by mentioning 'job_id returned by generate_fashion_video' which reinforces the sibling relationship but doesn't provide additional semantic context about the parameter itself.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Check the status') and resource ('async video generation job'), distinguishing it from sibling tools like generate_fashion_video (which creates jobs) and other generation tools. It explicitly identifies the tool's role in a workflow.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'Call after generate_fashion_video with the returned job_id.' This tells the agent exactly when to use this tool and references the specific sibling tool that provides the required input, eliminating ambiguity.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

generate_colorways (Generate Colorways): A

Generate color variants of a garment design. Creates product shots in multiple colors with Pantone references. Uses Nano Banana 2 (primary) with Flux 2 Pro fallback. Costs 4 credits per colorway.

Parameters (JSON Schema)

- style (optional, default: product_shot): Photography style for colorway shots: product_shot (catalog), on_model (lifestyle), flat_lay (social), editorial (magazine)
- prompt (required): Base garment prompt WITHOUT color (color will be added per variant)
- quality (optional, default: hd): Image quality: standard (fast), hd (recommended), ultra (maximum detail)
- colorways (required): Array of colorway variants to generate, each with a name and color description
- background (optional): Background description: "pure white seamless", "gradient beige to cream"
- aspect_ratio (optional, default: 1:1): Aspect ratio: 1:1 (square), 4:3 (landscape), 3:4 (portrait), 16:9 (wide), 9:16 (stories)
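A hypothetical request assembled from the parameter table above might look like the following. The garment prompt deliberately omits color (it is added per variant, as the schema notes); the garment, variant names, and color descriptions are invented for illustration.

```python
# Hypothetical generate_colorways request built from the parameter table above.
# The prompt omits color; each colorway entry supplies its own.
request = {
    "prompt": "oversized bomber jacket, matte finish, minimalist styling",
    "colorways": [
        {"name": "Sage", "color": "muted sage green"},
        {"name": "Rust", "color": "burnt rust orange"},
    ],
    "style": "product_shot",  # default
    "quality": "hd",          # default
    "aspect_ratio": "1:1",    # default
}

# At the documented rate of 4 credits per colorway, cost scales with the array.
estimated_cost = 4 * len(request["colorways"])
print(estimated_cost)  # 8
```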
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: it specifies the AI models used ('Nano Banana 2 with Flux 2 Pro fallback'), cost implications ('4 credits per colorway'), and the generative nature of the operation. It doesn't mention rate limits, authentication needs, or error conditions, but provides substantial operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly front-loaded with the core purpose in the first sentence, followed by operational details. Every sentence earns its place: the first states what it does, the second specifies output characteristics, the third reveals implementation details, and the fourth discloses cost. Zero wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a generative tool with 6 parameters, 100% schema coverage, and no output schema, the description provides good contextual completeness. It covers purpose, implementation details, and cost structure. The main gap is lack of output format description (what gets returned), but given the tool's name and context, the agent can reasonably infer image generation results.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already documents all 6 parameters thoroughly. The description doesn't add meaningful parameter semantics beyond what's in the schema; it mentions 'multiple colors', which relates to the colorways parameter, but provides no additional syntax, format, or usage guidance for any parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('generate color variants', 'creates product shots') and resources ('garment design', 'multiple colors with Pantone references'). It distinguishes from siblings by focusing specifically on colorway generation rather than fabric simulation, multi-angle generation, or other fashion-related tasks.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through the credit cost mention ('Costs 4 credits per colorway'), suggesting this is a premium operation. However, it doesn't explicitly state when to use this tool versus alternatives like generate_fashion_image or generate_fabric_sim, nor does it provide clear exclusion criteria or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

generate_fabric_sim (Generate Fabric Simulation): A

Visualize a garment design in different fabrics and materials (cotton, silk, denim, leather, etc.). Uses Nano Banana 2 (primary) with Flux 2 Pro fallback. Costs 5 credits per fabric variant.

Parameters (JSON Schema)

- style (optional, default: product_shot): Photography style for fabric shots: product_shot (catalog), on_model (lifestyle), flat_lay (social), editorial (magazine)
- prompt (required): Base garment prompt WITHOUT fabric/material (added automatically per variant)
- fabrics (required): Fabric names: ["cotton twill", "raw denim", "silk charmeuse"]
- quality (optional, default: hd): Image quality: standard (fast), hd (recommended), ultra (maximum detail)
- background (optional): Background description: "pure white seamless", "gradient beige to cream"
- aspect_ratio (optional, default: 1:1): Aspect ratio: 1:1 (square), 4:3 (landscape), 3:4 (portrait), 16:9 (wide), 9:16 (stories)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully reveals key behavioral traits: the generative nature of the tool, the multi-model approach with fallback, and the cost implication ('Costs 5 credits per fabric variant'). It doesn't mention rate limits, authentication needs, or error handling, but provides substantial operational context beyond basic functionality.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with three information-dense sentences: purpose statement, implementation details, and cost information. Every sentence earns its place by providing distinct, valuable information without redundancy. The structure is front-loaded with the core functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex generative tool with 6 parameters, 100% schema coverage, and no output schema, the description provides good contextual completeness. It covers purpose, implementation approach, and cost structure. The main gap is the lack of output format description (what the simulation returns), but given the tool's name and context, this is partially inferable.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already documents all 6 parameters thoroughly. The description adds no parameter-specific information beyond what's in the schema. The baseline score of 3 reflects that the schema does the heavy lifting for parameter documentation, and the description doesn't compensate with additional semantic context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Visualize a garment design'), target resource ('in different fabrics and materials'), and scope ('cotton, silk, denim, leather, etc.'). It distinguishes from siblings like generate_colorways (color variations) and generate_fashion_image (single image generation) by focusing specifically on fabric/material simulation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use this tool ('Visualize a garment design in different fabrics and materials') and mentions implementation details ('Uses Nano Banana 2 (primary) with Flux 2 Pro fallback'). However, it doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools, though the purpose differentiation is implied.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

generate_fashion_image (Generate Fashion Image): A

Generate AI fashion product photography. Creates professional-quality product shots, on-model photos, flat lays, and editorial imagery. Uses Nano Banana 2 (primary) with Flux 2 Pro fallback. Costs 5 credits per image.

Parameters (JSON Schema)

- style (required): Photography style: product_shot (catalog), on_model (lifestyle), flat_lay (social), editorial (magazine), campaign (advertising)
- prompt (required): Detailed prompt: subject, lighting, background, angle, style, mood
- quality (optional, default: hd): Image quality: standard (fast), hd (recommended), ultra (maximum detail)
- background (optional): Background: "pure white", "gradient beige", "urban street"
- aspect_ratio (optional, default: 1:1): Aspect ratio: 1:1 (square), 4:3 (landscape), 3:4 (portrait), 16:9 (wide), 9:16 (stories)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively reveals key behavioral traits: the AI models used (Nano Banana 2 with Flux 2 Pro fallback), cost implications (5 credits per image), and the professional-quality nature of outputs. It doesn't mention rate limits, authentication needs, or error conditions, but provides substantial operational context beyond basic functionality.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly front-loaded with the core purpose in the first sentence, followed by specific output types, technical details about AI models, and cost information. Every sentence earns its place by adding distinct value: the first establishes purpose, the second enumerates output formats, the third specifies technical implementation, and the fourth provides cost implications. No wasted words or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 5 parameters, 100% schema coverage, but no annotations and no output schema, the description provides excellent contextual completeness. It covers the tool's purpose, output types, technical implementation (AI models), and cost structure. The main gap is the lack of information about return values (image format, size, URL structure) since there's no output schema, but the description compensates well with other operational context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 5 parameters thoroughly with descriptions and enum values. The description doesn't add any parameter-specific information beyond what's in the schema. However, it provides overall context about the tool's purpose that helps understand parameter usage collectively, meeting the baseline expectation when schema coverage is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Generate AI fashion product photography') and resources ('fashion product photography'), listing concrete output types like product shots, on-model photos, flat lays, and editorial imagery. It distinguishes itself from siblings like generate_colorways or generate_fashion_video by focusing specifically on image generation rather than color variations, fabric simulations, or video content.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use this tool by specifying it's for creating professional-quality fashion imagery, implying it's for marketing, catalog, or editorial purposes. However, it doesn't explicitly state when NOT to use it or mention alternatives among siblings (e.g., using generate_colorways for color variations instead). The cost information (5 credits per image) offers practical usage consideration but not comparative guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

generate_fashion_video (Generate Fashion Video): A

Submit an async video generation job from an existing image. Returns a job_id immediately — call check_video_status to poll for completion. Uses Kling 3.0 (primary, 1080p, native audio) with Seedance and Runway Gen-4 Turbo fallbacks. Costs 100-300 credits based on duration.

Parameters (JSON Schema)

- style (required): Video motion style: 360_turntable (product pages), gentle_animation (social media), catwalk (runway), zoom_pan (cinematic)
- prompt (required): Video motion description: "smooth 360 rotation, consistent lighting"
- quality (optional, default: hd): Video quality: hd (recommended), 4k (maximum resolution)
- duration (optional, default: 10): Duration in seconds: 5 (short), 10 (standard), 15 (long)
- source_image_url (required): URL from a previous generate_fashion_image result
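Putting the parameter table together, a hypothetical submission payload might look like this. The source_image_url would come from a previous generate_fashion_image result; the URL below is a placeholder, and the prompt is taken from the schema's own example.

```python
# Hypothetical generate_fashion_video request. The image URL is a placeholder
# standing in for a previous generate_fashion_image result.
video_request = {
    "style": "360_turntable",
    "prompt": "smooth 360 rotation, consistent lighting",
    "duration": 10,   # default; documented values are 5, 10, or 15 seconds
    "quality": "hd",  # default
    "source_image_url": "https://example.com/generated/jacket.png",
}

# Required fields per the parameter table.
REQUIRED = {"style", "prompt", "source_image_url"}
print(REQUIRED <= video_request.keys())  # True
```

The response returns a job_id immediately; completion is then observed by polling check_video_status, as the description notes.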
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: async operation with job_id return, polling requirement via check_video_status, underlying technologies (Kling 3.0 with fallbacks), cost implications (100-300 credits), and quality/resolution details (1080p, native audio). It doesn't mention rate limits or authentication needs, but covers most critical behavioral aspects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly front-loaded with the core purpose in the first sentence, followed by essential behavioral details. Every sentence earns its place: async nature, polling requirement, technology stack, and cost information. No wasted words, and the structure flows logically from action to implementation details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex video generation tool with 5 parameters, no annotations, and no output schema, the description does an excellent job covering critical context: async behavior, polling workflow, technology details, cost implications, and quality specifications. The main gap is lack of explicit error handling or rate limit information, but it provides sufficient context for an agent to use the tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema (e.g., it doesn't explain prompt formatting or image URL requirements). However, it does provide context about the source_image_url being 'from a previous generate_fashion_image result,' which adds some semantic value. Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Submit an async video generation job'), the resource ('from an existing image'), and distinguishes it from siblings by mentioning the polling requirement ('call check_video_status to poll for completion'). It goes beyond just restating the name/title by explaining the async nature and immediate return of job_id.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: it specifies when to use this tool ('Submit an async video generation job from an existing image'), when not to use it (implied: not for real-time generation), and names an alternative tool for checking completion ('call check_video_status to poll for completion'). It also mentions fallback systems, which helps set expectations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

generate_multi_angle (Generate Multi-Angle Views): B

Generate coordinated multi-angle views of a garment (front, back, side, etc.) with consistent style across all angles. Uses Nano Banana 2 (primary) with Flux 2 Pro fallback. Costs 4 credits per angle.

Parameters (JSON Schema)

- style (optional, default: product_shot): Photography style: product_shot (catalog), on_model (lifestyle), flat_lay (social), editorial (magazine)
- angles (required): Camera angles to generate: front, back, side_left, side_right, three_quarter, detail_close
- prompt (required): Base garment prompt WITHOUT angle direction (added automatically per view)
- quality (optional, default: hd): Image quality: standard (fast), hd (recommended), ultra (maximum detail)
- background (optional): Background description: "pure white seamless", "gradient beige to cream"
- aspect_ratio (optional, default: 1:1): Aspect ratio: 1:1 (square), 4:3 (landscape), 3:4 (portrait), 16:9 (wide), 9:16 (stories)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses useful behavioral traits: the AI models used (Nano Banana 2 with Flux 2 Pro fallback) and cost information (4 credits per angle). However, it doesn't mention rate limits, authentication needs, error conditions, or what happens when generation fails. The cost disclosure is helpful but other operational aspects remain unspecified.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded with essential information in just two sentences. The first sentence states the core purpose, and the second provides critical operational details (models and cost). Every word earns its place with zero waste or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 6 parameters, no annotations, and no output schema, the description is somewhat incomplete. While it covers purpose, models, and cost, it lacks information about return values (image URLs? metadata?), error handling, or typical use cases. The 100% schema coverage helps, but for a generation tool with cost implications, more behavioral context would be beneficial.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 6 parameters thoroughly with descriptions and enum values. The description adds no additional parameter semantics beyond what's in the schema. The baseline score of 3 reflects that the schema does the heavy lifting for parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: generating coordinated multi-angle views of a garment with consistent style across angles. It specifies the resource (garment) and verb (generate views), but doesn't explicitly differentiate from sibling tools like 'generate_fashion_image' or 'generate_colorways' beyond mentioning multi-angle coordination.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'generate_fashion_image' (which might be for single images) or 'generate_colorways' (which might be for color variations), nor does it specify prerequisites or scenarios where this multi-angle approach is preferred.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

generate_tech_pack (Generate Tech Pack): Grade A

Generate a complete manufacturing tech pack with measurements, materials, BOM, construction steps, size chart, and colorways. Uses Claude AI for fashion-specific technical specifications. Costs 50 credits.

Parameters (JSON Schema)
Name | Required | Description | Default
season | No | Season/collection: SS26, FW26 |
garment_type | Yes | Garment type: blazer, dress, t-shirt, pants, jacket, etc. |
style_number | No | Style number if known |
additional_notes | No | Extra requirements or notes |
garment_description | Yes | Detailed garment description: style, fit, details, closures, pockets |
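A hedged sketch of invoking this tool, assuming the same JSON-RPC `tools/call` framing used by MCP servers. The tool name and parameter names come from the listing above; the transport details are an assumption. Only `garment_type` and `garment_description` are required, so optional fields are sent only when provided.

```python
# Sketch only: the JSON-RPC framing is assumed; parameter names match the
# schema table above (garment_type and garment_description are required).
def build_tech_pack_request(garment_type, garment_description,
                            season=None, style_number=None,
                            additional_notes=None, request_id=1):
    args = {
        "garment_type": garment_type,
        "garment_description": garment_description,
    }
    # Include optional fields only when the caller supplies them.
    for key, value in (("season", season),
                       ("style_number", style_number),
                       ("additional_notes", additional_notes)):
        if value is not None:
            args[key] = value
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "generate_tech_pack", "arguments": args},
    }

req = build_tech_pack_request(
    "blazer",
    "Single-breasted blazer, relaxed fit, notch lapel, two-button closure, flap pockets",
    season="SS26",
)
```

Since the description states each call costs 50 credits, a client would reasonably check remaining credits (e.g. via the server's check_credits tool) before issuing this request.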
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: it reveals the AI engine used ('Claude AI for fashion-specific technical specifications'), the cost implication ('Costs 50 credits'), and the comprehensive output scope. It doesn't mention rate limits, error conditions, or authentication needs, but provides substantial operational context beyond basic functionality.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly front-loaded with the core purpose in the first clause, followed by supporting details about components, AI engine, and cost. Every sentence earns its place with zero wasted words, making it highly efficient while remaining comprehensive.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of generating a complete tech pack with no output schema and no annotations, the description provides strong context about output components, AI specialization, and cost. It could benefit from mentioning the format of the returned tech pack (e.g., PDF, structured data) or any limitations, but covers the essential operational context well for a tool with comprehensive input documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 5 parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema properties. This meets the baseline expectation when schema coverage is high, but doesn't enhance understanding of individual parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('generate a complete manufacturing tech pack') and lists the comprehensive components included (measurements, materials, BOM, construction steps, size chart, colorways). It distinguishes itself from siblings like 'generate_colorways' or 'generate_fashion_image' by emphasizing the complete technical package rather than individual elements.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by mentioning 'fashion-specific technical specifications' and 'costs 50 credits', suggesting this is for fashion manufacturing planning with resource costs. However, it doesn't explicitly state when to use this versus alternatives like 'generate_fabric_sim' or 'list_collections', nor does it provide exclusion criteria or prerequisites beyond the credit cost.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_collections (List Collections): Grade C

List the user's design collections with session counts and asset counts. Requires authentication.

Parameters (JSON Schema)

Name | Required | Description | Default
limit | No | Maximum number of collections to return (default: all) |

Output Schema

Parameters (JSON Schema)

Name | Required | Description
tip | No | Guidance for unauthenticated users
total | Yes | Total number of collections
collections | Yes | List of design collections
authenticated | Yes | Whether the user is authenticated
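A small sketch of consuming this tool's result according to the output schema above: check `authenticated` first, surface `tip` when the user is not signed in, and otherwise summarize `total` and `collections`. The sample payload and the `name` field on collection items are illustrative assumptions; the schema does not document the shape of individual collection entries.

```python
# Sketch of handling a list_collections result per the documented output
# schema (authenticated, total, collections, tip). The per-item "name"
# field is an assumption; the schema leaves item structure unspecified.
def summarize_collections(result):
    """Return a short status string; surface `tip` for unauthenticated users."""
    if not result["authenticated"]:
        return result.get("tip", "Sign in to see your collections.")
    return f"{result['total']} collection(s): " + ", ".join(
        c["name"] for c in result["collections"]
    )

sample = {
    "authenticated": True,
    "total": 2,
    "collections": [{"name": "SS26 Capsule"}, {"name": "Denim Lab"}],
}
print(summarize_collections(sample))
```

Because `tip` is only populated for unauthenticated users, a client can branch on `authenticated` alone rather than probing for the field's presence.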
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It mentions authentication requirement which is valuable, but doesn't describe other important behaviors: whether this is a read-only operation, what happens when limit is exceeded, pagination behavior, error conditions, or rate limits. For a listing tool with zero annotation coverage, this leaves significant gaps in understanding how the tool behaves.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise with two clear sentences. The first sentence states the core functionality, and the second adds the authentication requirement. There's no wasted verbiage, though it could be slightly more structured by separating functional description from requirements.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that there's an output schema (which handles return values) and 100% schema coverage for the single parameter, the description provides basic functional context. However, for a tool with no annotations, it should ideally provide more behavioral context about what 'list' means operationally: pagination, sorting, default ordering, and so on. The description is minimally adequate but leaves room for improvement.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents the single 'limit' parameter. The description doesn't add any parameter-specific information beyond what's in the schema. The baseline score of 3 is appropriate when the schema does all the parameter documentation work.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'List the user's design collections with session counts and asset counts.' It specifies the resource (design collections) and what information is included (session counts, asset counts). However, it doesn't explicitly differentiate from sibling tools, which are mostly generation-focused rather than listing operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides minimal usage guidance. It mentions 'Requires authentication' which is a prerequisite but doesn't offer guidance on when to use this tool versus alternatives. There's no mention of when-not-to-use scenarios or comparison with sibling tools, leaving the agent with little context for tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
