Glama
Ownership verified

Server Details

web3d-mcp is an MCP server for AI-powered 3D scene and ad generation on the web. It provides tools to generate, edit, animate, preview, validate, and export 3D scenes built with React Three Fiber (R3F). Designed for web developers and creative teams, it enables programmatic creation of production-ready interactive 3D web experiences and advertisements through natural language.

Status: Unhealthy
Last Tested:
Transport: Streamable HTTP
URL:

Tool Descriptions: A

Average 4.3/5 across 12 of 12 tools scored. Lowest: 3.5/5.

Server Coherence: A

Disambiguation: 3/5

Significant overlap exists between apply_animation and edit_scene, as both can add or replace animations on objects. An agent cannot easily determine which to use for simple animation requests without reading detailed descriptions. Other tools have distinct purposes, but this dual path for animation modification creates confusion.

Naming Consistency: 4/5

Most tools follow a consistent verb_noun pattern (generate_scene, validate_scene, export_asset). Minor deviations exist with integration_help (noun_noun), preview (single noun), and optimize_for_web (prepositional), but the overall convention is readable and predictable.

Tool Count: 5/5

Twelve tools is well-scoped for a 3D scene generation pipeline covering the full lifecycle: intent refinement, planning, generation, validation, preview, geometry synthesis, code generation, optimization, and export. Each tool earns its place without redundancy.

Completeness: 4/5

The toolset covers the core CRUD workflow for 3D scene generation with clear pipeline stages. Minor gaps exist in post-creation modification (cannot add new objects or change camera position via edit_scene), but these appear to be intentional scope limitations rather than oversights.

Available Tools

12 tools
apply_animation: A

Apply or stack animations on objects in an existing 3D scene.

Single animation (backward compatible): Provide animation_type (string) to apply one animation. Existing animation config fields are PRESERVED by default. Only missing fields are filled from defaults.

Stacked animations (new): Provide animations[] array to apply multiple animations at once. Each entry can target a different object and carry its own config. Compatible animations on the same object are merged safely. Channel conflicts (e.g. float + bounce both on position.y) are detected and reported as warnings — not errors.

Config merge behavior (override field):

  • override: false (default) — existing config fields win; preserves range, speed, amplitude set by generate_scene.

  • override: true — incoming config fully replaces existing.

Rotate range semantics:

  • range >= 3.14 → CONTINUOUS SPIN (robot.rotation.y = t * speed)

  • range < 3.14 → OSCILLATION (robot.rotation.y = sin(t) * range)

  • Default range for rotate is 6.28 (full continuous spin).
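The rotate range threshold above can be sketched as a small function. `rotationY` is a hypothetical helper for illustration, not part of the server API:

```typescript
// Sketch of the documented rotate semantics: a range at or above 3.14
// means continuous spin; below that, oscillation.
function rotationY(t: number, range: number, speed: number): number {
  if (range >= 3.14) {
    // Continuous spin: the angle grows linearly with time.
    return t * speed;
  }
  // Oscillation: the angle swings between -range and +range.
  return Math.sin(t) * range;
}
```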

Merge flag:

  • merge: true (default) — new animations are added alongside existing ones.

  • merge: false — existing animations for the same target+type are replaced.

Parameters (JSON Schema):

  • merge — optional
  • animations — optional
  • scene_data — required
  • animation_type — optional
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Comprehensive disclosure of merge logic, config preservation defaults, conflict detection (warnings not errors), and critical rotate range threshold semantics (>=3.14 continuous vs oscillation).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-organized with clear section headers and front-loaded purpose; dense but justified by complexity and lack of schema descriptions.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Thoroughly covers complex nested schema behavior, default values, and error handling despite zero annotations or schema descriptions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Compensates well for 0% schema coverage by explaining animations array, target_id concept, config merge flags, and rotate range semantics, though could explicitly map more config properties like amplitude/scale.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Explicitly states 'Apply or stack animations on objects in an existing 3D scene' with specific verbs and distinguishes from generate_scene by referencing 'existing' scene and preservation of fields set by generate_scene.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Clearly delineates single vs stacked animation modes and override/merge behaviors, but lacks explicit comparison to sibling alternatives like edit_scene.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

edit_scene: A

Apply targeted modifications to an existing scene_data object.

WHEN TO CALL:

  • After validate_scene returns is_valid: false

  • When the user requests a style, material, animation, or position change to an already-generated scene

  • Do NOT call this to create a new scene — use generate_scene instead

WHAT THIS TOOL CAN MODIFY:

  • background: color and style preset

  • material: for all objects or a named object

  • animation: add or replace animations on objects

  • position: move a named object or the primary object

  • lighting: intensity adjustments (darker / lighter)

  • design_tokens: kept in sync with all changes automatically

WHAT THIS TOOL CANNOT DO:

  • Add new objects to the scene (use generate_scene for this)

  • Remove existing objects (out of scope in current version)

  • Change camera position or FOV

  • Modify individual mesh geometry

INPUT:

  • scene_data: the full scene_data object from generate_scene or a previous edit_scene call

  • edit_prompt: a plain-language description of the desired change

EDIT PROMPT EXAMPLES:

  • "make it darker" → dims ambient lighting, deepens background

  • "make the material glass" → applies glass_frost to all objects

  • "add spinning motion" → appends rotate animation, keeps existing

  • "move the robot up" → moves object named "robot" up by 1 unit

  • "change animation to float only" → replaces all animations with float

  • "make it neon" → applies neon material + neon_edge lighting

OUTPUT:

  • scene_data: updated scene with all changes applied

  • edit_summary: { applied[], skipped[], warnings[] }

PIPELINE POSITION: generate_scene → validate_scene → [edit_scene if invalid] → validate_scene (re-run) → synthesize_geometry → generate_r3f_code
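The "move the robot up" example above can be sketched in a few lines. This is a minimal illustration of the documented behavior, assuming a simple object shape with a name and an [x, y, z] position; it is not the server's implementation:

```typescript
// Hypothetical shapes for illustration only.
type Vec3 = [number, number, number];
interface SceneObject { name: string; position: Vec3; }

// Move the named object up by the given number of units (default 1),
// leaving every other object untouched.
function moveUp(objects: SceneObject[], name: string, units = 1): SceneObject[] {
  return objects.map((o) => {
    if (o.name !== name) return o;
    const lifted: Vec3 = [o.position[0], o.position[1] + units, o.position[2]];
    return { ...o, position: lifted };
  });
}
```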

Parameters (JSON Schema):

  • scene_data — required
  • edit_prompt — required
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided; description carries full burden with comprehensive disclosure of sync behavior (design_tokens), output structure (edit_summary fields), pipeline position, and explicit limitations (cannot add/remove objects).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear headers and front-loaded purpose; information-dense but slightly verbose for an MCP description (could combine INPUT/OUTPUT sections).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive coverage including pipeline workflow, examples, limitations, and output structure despite no output schema being provided.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage; description compensates with detailed semantics for scene_data (source object from previous calls) and edit_prompt (plain-language description with 6 concrete examples showing expected syntax).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb ('Apply targeted modifications') + specific resource ('scene_data object'), clearly distinguishes from generate_scene (create vs edit) and other siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit 'WHEN TO CALL' section with preconditions (validate_scene failure) and clear exclusion (Do NOT call this to create a new scene — use generate_scene instead).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

export_asset: A

Package generated 3D scene output into downloadable files.

Formats: r3f -> Packages R3F code into a named .tsx file. Requires r3f_code string from generate_r3f_code. Does NOT regenerate code - it packages what you give it.

json -> Packages scene_data into a named .json file. Requires scene_data object from generate_scene.

Call order:

For a .tsx file: generate_r3f_code(scene_data) -> export_asset({ r3f_code, format: "r3f" })

For a .json file: generate_scene(scene_plan) -> export_asset({ scene_data, format: "json" })

For a visual preview of the scene layout, use the preview tool instead; it returns an SVG wireframe plus spatial validation. export_asset does not generate previews.

Do NOT pass synthesized_components to export_asset. Pass them to generate_r3f_code, then pass the resulting r3f_code here.
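The format-to-file mapping implied by the description ("r3f" packages a .tsx file, "json" a .json file) can be sketched as follows; `exportFilename` is an illustrative helper, not the server's code:

```typescript
// Map the documented export formats to their output file extensions.
function exportFilename(base: string, format: "r3f" | "json"): string {
  return format === "r3f" ? `${base}.tsx` : `${base}.json`;
}
```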

Parameters (JSON Schema):

  • format — required
  • typing — optional (default: none)
  • filename — optional
  • r3f_code — optional
  • framework — optional (default: plain)
  • scene_data — optional
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses critical behavioral constraints (does not regenerate code, does not generate previews, requires specific inputs from sibling tools) despite no annotations being present.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear sections (Formats, Call order), uses formatting effectively (arrows, code blocks), and front-loads the core purpose without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive for input workflows and sibling relationships, but lacks explicit description of return value/output format since no output schema exists.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema coverage, description compensates by explaining format options, r3f_code source requirements, and scene_data provenance, though typing and framework parameters lack explicit semantic explanation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Opens with specific action (package) and resource (3D scene output), clearly distinguishes from siblings like preview and generate_r3f_code by explicitly stating what it does NOT do.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit call order workflows for both formats, explicitly warns against passing synthesized_components here, and directs users to preview tool for visual wireframes.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

generate_r3f_code: A

Convert structured scene data into React Three Fiber code.

Returns a complete React component that renders the scene.

Material translation:

  • glass / glass_frost → MeshTransmissionMaterial (drei)

  • metal / metal_chrome → meshPhysicalMaterial with metalness:1

  • neon / high emissive → meshStandardMaterial with emissive + companion pointLight

  • matte / standard → meshStandardMaterial

Framework support:

  • "nextjs" adds "use client" directive (required for App Router)

  • "vite" / "plain" omit it
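The material translation table above can be sketched as a lookup. The component names come straight from the description; the function itself is a hypothetical illustration, not the server's generator:

```typescript
// Map the documented material presets to the R3F material component emitted.
function materialComponent(material: string): string {
  if (material === "glass" || material === "glass_frost") return "MeshTransmissionMaterial";
  if (material === "metal" || material === "metal_chrome") return "meshPhysicalMaterial";
  // neon / high emissive and matte / standard both use meshStandardMaterial;
  // neon additionally gets a companion pointLight per the description.
  return "meshStandardMaterial";
}
```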

Parameters (JSON Schema):

  • typing — optional
  • framework — optional
  • scene_data — required
  • synthesized_components — optional
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses detailed material translation mappings (glass→MeshTransmissionMaterial, etc.) and framework-specific code generation behavior ('use client' directive) despite having no annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Efficiently structured with clear heading-like sections (Material translation, Framework support); every line conveys essential mapping logic or behavioral constraints without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequately covers the core transformation logic and return value (React component) given no output schema, but incomplete due to undocumented parameters (typing, synthesized_components) given the schema lacks descriptions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Explains scene_data input and framework enum values via the framework support section, but fails to document 'typing' or 'synthesized_components' parameters despite 0% schema description coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific action ('Convert structured scene data into React Three Fiber code') and output format, implicitly distinguishes from siblings like export_asset or preview by specifying React Three Fiber output.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides framework-specific usage guidance (Next.js vs Vite) and material translation logic, but lacks explicit guidance on when to use this versus sibling tools like export_asset or generate_scene.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

generate_scene: A

Generate a complete 3D scene from a structured scene plan.

Your job:

  • Convert the scene plan into structured scene data

  • Drive materials, lighting, background, and layout from design_tokens when present

  • Apply user-specified color hints to background and accent colors

  • Propagate design_tokens through to scene_data.metadata for downstream tools

Rules:

  • Do NOT modify the scene plan

  • Do NOT add new objects

  • Use provided objects exactly

  • First object is the main subject

  • Apply style and animation as given

  • Consume design_tokens directly when present

This tool is deterministic and does not interpret intent.
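The design_tokens propagation described above can be sketched as a pass-through. The shapes here are assumptions for illustration, not the real schema:

```typescript
// Hypothetical plan shape; design_tokens flow through unchanged so
// downstream tools (edit_scene, generate_r3f_code) can consume them.
interface ScenePlan { objects: string[]; design_tokens?: Record<string, string>; }

function buildMetadata(plan: ScenePlan): { design_tokens?: Record<string, string> } {
  return plan.design_tokens ? { design_tokens: plan.design_tokens } : {};
}
```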

Parameters (JSON Schema):

  • scene_plan — required
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses deterministic nature, lack of intent interpretation, and downstream metadata propagation for other tools, covering key behavioral traits absent from annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear 'Your job' and 'Rules' sections; every sentence conveys essential information about function, constraints, or behavior without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complex nested schema and lack of output schema, description adequately covers input handling logic and hints at output structure (scene_data.metadata), though explicit return value description would strengthen it further.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema coverage, it effectively compensates by explaining scene_plan structure (first object = main subject), design_tokens effects (drive materials/lighting), and color_hints application (background/accent).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Explicitly states the tool generates a 3D scene from a structured plan, with specific actions (convert, drive, apply, propagate) and clear differentiation from siblings via negative constraints (do NOT modify/add objects).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-not guidance via rules (don't modify plan, don't add objects, deterministic), though it doesn't explicitly name sibling tools as alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

generate_scene_plan: A

Create a structured 3D scene plan from the refined prompt.

Your job:

  • Identify the main object(s) in the scene

  • Extract style, environment, and animation intent

  • Reuse upstream design tokens and object hints when provided

Rules:

  • Include 1 to 4 objects MAXIMUM

  • Objects must be meaningful nouns (e.g., "shoe", "bottle", "phone")

  • Do NOT include adjectives or effects as objects (e.g., "glowing", "stylish", "background" are NOT objects)

  • Do NOT repeat objects

  • Choose ONE primary object (first in list)

  • Prefer structured style tokens when provided, but correct obvious approximations when the prompt contains stronger style evidence

id="hero_rule" Rules:

  • First object is the main subject

  • Additional objects (if any) must support the main object

  • Avoid unrelated objects in the same scene

Style rules:

  • Style should describe visual feel (e.g., premium, minimal, futuristic)

  • Style must be ONLY ONE keyword

  • Allowed styles:

    • premium

    • minimal

    • futuristic

    • playful

    • dark

  • Do NOT return multiple words (e.g., "premium dark moody" is invalid)

Object rules:

  • Only include PHYSICAL objects that exist as visible 3D items

  • Do NOT include:

    • lighting elements (e.g., "light", "spotlight", "glow")

    • effects (e.g., "particles", "smoke", "sparkles")

    • environment words (e.g., "background", "atmosphere")

  • Do NOT include environment elements

  • Objects must be real-world items (e.g., "shoe", "bottle", "phone")

Examples of VALID objects:

  • "shoe", "bottle", "phone", "chair"

Examples of INVALID objects:

  • "light", "particles", "glow", "shadow"

  • Animation should be simple (rotate, float, none)

Examples:

User: "3D rotating sneaker ad with dark premium feel"
Output: { "objects": ["sneaker"], "style": "premium", "animation": "rotate", "use_case": "advertisement" }

User: "modern website hero section with floating phone"
Output: { "objects": ["phone"], "style": "minimal", "animation": "float", "use_case": "website" }

Return structured scene plan data plus warnings and constraints.
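The object rules above (1 to 4 unique, physical nouns; lighting, effect, and environment words rejected) can be sketched as a validator. The banned-word list is illustrative, drawn only from the examples in the description:

```typescript
// Words the description explicitly rejects as objects.
const BANNED = new Set([
  "light", "spotlight", "glow", "particles", "smoke",
  "sparkles", "background", "atmosphere", "shadow",
]);

// True when the object list satisfies the documented constraints.
function validObjects(objects: string[]): boolean {
  if (objects.length < 1 || objects.length > 4) return false;        // 1-4 maximum
  if (new Set(objects).size !== objects.length) return false;        // no repeats
  return objects.every((o) => !BANNED.has(o));                       // physical items only
}
```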

Parameters (JSON Schema):

  • context — required
  • refined_prompt — required
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Excellent disclosure of constraints without annotations: lists allowed style keywords, animation types, object limits (1-4), physical object requirements, and provides clear output examples.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Excessively verbose with redundant 'Object rules' sections, an HTML-like id artifact, and repetitive constraints; information density is low relative to length.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Compensates well for missing output schema by providing detailed JSON examples and explaining return structure including warnings and constraints.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fails to adequately document the `context` parameter (mentioned once vaguely) or explain that `refined_prompt` is the natural language input to be parsed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States it creates a structured 3D scene plan from a refined prompt, distinguishing it from sibling generation/refinement tools, though could explicitly clarify it is a preprocessing planning step.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides examples of valid inputs and workflow context (upstream design tokens), but lacks explicit guidance on when to use alternatives like `generate_scene` or `refine_prompt`.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

integration_help: A

Provide guidance on how to integrate a generated 3D scene into an application.

Your job:

  • Explain how to use exported assets

  • Provide step-by-step instructions

  • Include code examples when helpful

Supported platforms:

  • react (React Three Fiber)

  • nextjs

  • html (basic usage)

Next.js router modes:

  • app_router

  • pages_router

router parameter:

  • Required when platform is nextjs. Omitting this when platform is nextjs will cause a validation error. Default for non-nextjs platforms: not applicable.

Rules:

  • Keep instructions simple and practical

  • Focus on helping the user run the scene quickly
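The router validation rule above can be sketched as a check. The error text is illustrative, not the server's actual message:

```typescript
// Router is required when platform is "nextjs" and not applicable otherwise,
// per the description.
function validateRouter(platform: string, router?: "app_router" | "pages_router"): string[] {
  if (platform === "nextjs" && !router) {
    return ["router is required when platform is nextjs"];
  }
  return [];
}
```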

Parameters (JSON Schema):

  • format — optional
  • router — optional
  • platform — required
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses validation behavior (router required for nextjs) and output style (practical instructions, code examples), but lacks info on side effects or rate limits with no annotations provided.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear sections (job, platforms, rules); front-loaded purpose with no redundant sentences.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a documentation tool with no output schema; covers integration context, supported platforms, and conditional validation logic.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Compensates for 0% schema coverage by explaining platform values and router conditional logic, but omits format parameter (r3f vs json) entirely.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States clear purpose (integration guidance for 3D scenes) and distinguishes from generation/export siblings, though 'provide guidance' is slightly generic.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides platform-specific parameter guidance (router requirement) but lacks explicit guidance on when to use this tool vs alternatives like generate_r3f_code or export_asset.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

optimize_for_web: A

Optimize a 3D scene for web performance.

Your job:

  • Reduce rendering cost

  • Ensure smooth performance on web and mobile

  • Analyze scene cost before and after optimization

  • Report every optimization decision

Rules:

  • Do NOT change scene intent

  • Do NOT remove main object

  • Only simplify and optimize

Optimizations include:

  • reducing object count

  • simplifying materials

  • adjusting lighting

  • improving performance

  • capping heavy geometry / particle settings for mobile

  • returning a detailed optimization report
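The "capping heavy geometry / particle settings for mobile" item above can be sketched as a clamp. The numeric limits here are made-up placeholders, not the server's actual values:

```typescript
// Clamp a particle count to a per-target budget; the scene is simplified,
// never gutted, in line with the "only simplify and optimize" rule.
function capParticleCount(count: number, target: "web" | "mobile"): number {
  const limit = target === "mobile" ? 500 : 2000; // hypothetical caps
  return Math.min(count, limit);
}
```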

Parameters (JSON Schema):

  • target — optional
  • scene_data — required
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses specific behavioral constraints (preserves intent/main objects), analysis behavior (before/after cost analysis), and output format (detailed report) without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear sections (Your job, Rules, Optimizations include), front-loaded purpose statement, and scannable bullet points with no redundant text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive coverage of constraints, optimization types, and output (detailed report) despite lack of output schema and annotations; only minor gap is explicit parameter mapping.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description implies parameter meanings through context (scene optimization, mobile/desktop targeting) but never explicitly maps description text to the 'scene_data' or 'target' parameter names.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Explicitly states the specific action (optimize) and resource (3D scene) with clear performance focus, distinguishing it from sibling tools like edit_scene or generate_scene.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit 'Rules' section stating constraints (do NOT change intent, do NOT remove main object) that define boundaries versus general editing tools, though doesn't explicitly name alternative tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

preview (A)

Preview a 3D scene before generating code.

Returns two outputs:

  1. An SVG wireframe — a 2D top-down orthographic view of all objects, lights, and camera frustum in the scene.

  2. A structured text description — scene overview, object list, lighting summary, animation summary, and spatial validation checks.

Use this tool AFTER generate_scene and BEFORE synthesize_geometry to validate that objects are correctly positioned, lights are placed, animations have valid targets, and no objects overlap.

The spatial_validation section runs 6 automated checks and returns a confidence_score (0-10). If score < 7, fix the issues before proceeding to generate_r3f_code.

Parameters (JSON Schema)

  • view — optional; default: top. Camera view angle for the wireframe.

  • scene_data — required. The scene_data object produced by generate_scene or edit_scene.
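Putting the description and parameters together, a call to preview might look like the following JSON-RPC 2.0 `tools/call` request, the envelope defined by the MCP specification. The `scene_data` value is a placeholder; a real call would pass the full object returned by generate_scene or edit_scene.

```python
import json

# Sketch of an MCP "tools/call" request for the preview tool.
# The scene_data body here is illustrative, not a valid scene.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "preview",
        "arguments": {
            "view": "top",  # optional; defaults to "top"
            "scene_data": {"scene_id": "demo", "objects": []},  # placeholder
        },
    },
}
print(json.dumps(request, indent=2))
```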
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses specific behavioral details absent from schema: 6 automated validation checks, confidence_score range (0-10), and specific output composition (orthographic view, spatial validation sections).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear sections (outputs, usage, validation); front-loaded purpose with actionable sequencing guidance; no redundant text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Compensates effectively for missing output schema by detailing both return formats (SVG characteristics, text description sections) and validation metrics.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage; description adds minimal param semantics beyond schema, though it reinforces scene_data provenance from generate_scene.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it previews 3D scenes via SVG wireframe and text description, with specific workflow positioning that distinguishes it from sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly specifies temporal sequencing ('AFTER generate_scene and BEFORE synthesize_geometry') and conditional logic ('If score < 7, fix issues before proceeding').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

refine_prompt (A)

Refine a user's request for creating a 3D scene.

Your job:

  • Understand the user's intent clearly

  • Identify the purpose (advertisement, website, showcase, etc.)

  • Extract typed design tokens

  • Detect if animation is implied

Return these structured fields when possible:

  • use_case

  • theme / style

  • material_preset

  • animation

  • lighting_preset

  • background_preset

  • composition

  • confirmed_objects

  • object_hints

  • discarded_hints

Rules:

  • Do NOT generate objects here

  • Do NOT create a scene

  • Only clarify and structure intent

  • Keep richer scene-object detail in confirmed_objects for downstream tools

Return a refined prompt and structured context for the next step.

Parameters (JSON Schema)

  • user_prompt — required.
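Since refine_prompt has no output schema, the structured fields enumerated in its description are the only contract an agent can rely on. A minimal sketch, assuming the field names match the bullets above (the description lists "theme / style" as one item; "theme" is used here as the assumed key):

```python
# Hypothetical arguments for refine_prompt; the prompt text is invented.
arguments = {"user_prompt": "A floating sneaker ad with neon lighting"}

# Fields the description says refine_prompt returns "when possible".
# Names are taken from the tool description, not a published schema.
expected_fields = [
    "use_case", "theme", "material_preset", "animation",
    "lighting_preset", "background_preset", "composition",
    "confirmed_objects", "object_hints", "discarded_hints",
]
```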
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden and discloses the return structure (lists 10 output fields), clarifies it only structures intent without scene creation, and notes data flows to downstream tools.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear sections (Your job, Return fields, Rules); front-loaded purpose; lists necessary output fields to compensate for missing output schema without excessive verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and 10+ siblings, description adequately explains return values via enumerated fields and establishes clear pipeline position relative to generation tools.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% coverage (just 'user_prompt' with no description); description implies the parameter contains the raw user request through process context but never explicitly documents the input parameter semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Explicitly states it refines user requests for 3D scenes and distinguishes itself from siblings via 'Do NOT generate objects'/'Do NOT create a scene' rules.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Clear 'Rules' section specifies what not to do (generating objects, creating scenes) and mentions 'downstream tools,' though could explicitly state call timing relative to siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

synthesize_geometry (A)

Explicitly request a synthesis contract for a named 3D object.

Use this tool when generate_r3f_code returns status SYNTHESIS_REQUIRED, or to pre-generate geometry constraints before calling generate_r3f_code.

Complexity tiers:

  • low — 4 to 7 parts. Only Box, Sphere, Cylinder geometries. Best for: mobile banners, thumbnails, low-end devices.

  • medium — 10 to 20 parts. Adds Capsule and Torus geometries. Best for: website sections, embedded widgets, tablets.

  • high — 28+ parts. All geometries. Full emissive detail. Best for: hero sections, desktop showcase, ad campaigns.

If target is set to "mobile" and complexity is not explicitly provided, complexity defaults to "low" automatically.

This tool does NOT generate geometry. It returns the synthesis_contract with constraints calibrated to the requested complexity tier. The LLM generates the actual JSX and passes it to generate_r3f_code via synthesized_components.

Parameters (JSON Schema)

  • style — required.

  • target — optional. When "mobile" is set and complexity is not explicitly provided, complexity defaults to "low" automatically.

  • object_id — optional.

  • base_color — optional; default: #e8edf8.

  • complexity — optional; default: medium. Controls mesh part count and geometry detail. low = 4-7 parts, mobile banners, thumbnails. medium = 10-20 parts, website sections, widgets. high = 28+ parts, hero sections, showcase scenes. Defaults to medium if not provided. Automatically overrides to low when target is mobile and complexity is not explicitly set.

  • object_name — required.

  • accent_color — optional; default: #00F5FF.

  • material_preset — required.
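The complexity defaulting rule stated in the description can be sketched as a small resolver. This is a paraphrase of the documented behavior, not server code: an explicit complexity always wins, a mobile target falls back to "low", and everything else falls back to "medium".

```python
def resolve_complexity(target=None, complexity=None):
    """Sketch of synthesize_geometry's documented default logic."""
    if complexity is not None:
        return complexity        # explicit value is never overridden
    if target == "mobile":
        return "low"             # mobile target forces the low tier
    return "medium"              # documented default otherwise
```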
Behavior5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Excellent disclosure of what the tool does NOT do (no geometry generation), explains the abstract output (synthesis_contract), and documents automatic behavioral defaults (mobile target forcing low complexity).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear visual hierarchy: one-line purpose → usage conditions → detailed complexity tiers → default behavior → negative capability statement. No fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, adequately explains the return value concept (synthesis_contract) and downstream workflow (LLM generates JSX → generate_r3f_code), though explicit return structure details would perfect it.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Compensates well for low schema coverage (25%) with detailed tier breakdowns for the critical 'complexity' parameter (part counts, geometry types, use cases), though could briefly contextualize 'style' or 'material_preset' enums.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Explicitly states it requests a 'synthesis contract' (specific verb+resource) and clearly distinguishes from sibling generate_r3f_code by stating 'This tool does NOT generate geometry'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit trigger conditions ('when generate_r3f_code returns status SYNTHESIS_REQUIRED'), workflow guidance ('pre-generate geometry constraints before calling'), and complexity selection criteria tied to specific use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

validate_scene (A)

Validate scene_data before generating 3D code.

Runs 12 structural checks across 4 categories:

  • S — Structure (4 rules): scene_id, objects array, camera validity

  • O — Objects (5 rules): ids, positions, frustum bounds, overlap, pending synthesis contracts

  • L — Lighting (2 rules): non-ambient light presence, intensity range

  • A — Animation (2 rules): target_id resolution, config fields

Severity levels:

  • error → blocks codegen. Must fix before generate_r3f_code.

  • warn → does not block. Review before proceeding.

Returns is_valid: true only when zero "error" rules fail. Returns next_step string with exact instruction for what to do next.

Call this tool AFTER generate_scene and BEFORE synthesize_geometry. If is_valid is false, call edit_scene to fix errors, then re-run validate_scene before proceeding to codegen.

Parameters (JSON Schema)

  • strict — optional. When true, treat "warn" severity as "error". Useful for CI/CD pipelines or production exports.

  • scene_data — required. The scene_data object from generate_scene or edit_scene.
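The fix-and-revalidate loop the description prescribes can be sketched as a routing function. The result shape (`is_valid`, `next_step`) is assumed from the description, since no output schema is published:

```python
def next_action(result):
    """Route a validate_scene result per the documented workflow:
    if is_valid is false, go to edit_scene and re-validate; otherwise
    follow next_step toward geometry synthesis and codegen."""
    if not result.get("is_valid", False):
        return "edit_scene"
    return result.get("next_step", "synthesize_geometry")
```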
Behavior5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided; description fully compensates by detailing 12 validation checks across 4 categories, severity semantics (error blocks/warn doesn't), and return value behavior (is_valid logic).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with S-O-L-A categorization, severity levels, and workflow section; every sentence provides actionable information without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Compensates for missing output schema by documenting return values (is_valid, next_step) and validation categories; sufficient for complex validation workflow.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage (baseline 3); description adds value by specifying scene_data provenance (from generate_scene/edit_scene) and workflow integration context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb (validate) + resource (scene_data), clearly distinguishes from siblings via explicit workflow positioning (after generate_scene, before synthesize_geometry).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit temporal sequencing with siblings (after generate_scene, before codegen), clear error handling loop (edit_scene → re-validate), and when-to-call guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
