
generate_hyper3d_model_via_images

Create 3D models with materials from images and import them into Blender for 3D modeling workflows.

Instructions

Generate a 3D asset using Hyper3D by providing images of the desired asset, and import the generated asset into Blender. The 3D asset has built-in materials. The generated model is normalized in size, so re-scaling after generation may be needed.

Parameters:

  • input_image_paths: The absolute paths of the input images. Even if only one image is provided, wrap it in a list. Required if Hyper3D Rodin is in MAIN_SITE mode.

  • input_image_urls: The URLs of the input images. Even if only one image is provided, wrap it in a list. Required if Hyper3D Rodin is in FAL_AI mode.

  • bbox_condition: Optional. If given, it must be a list of three integers controlling the [Length, Width, Height] ratio of the model.

Only one of {input_image_paths, input_image_urls} should be given at a time, depending on Hyper3D Rodin's current mode. Returns a message indicating success or failure.
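The constraints above can be sketched as a small client-side validator. This is a hypothetical helper, not part of the tool itself: the function name and call pattern are assumptions, but the rules it enforces (exactly one image parameter, list-wrapped inputs, bbox_condition as three ints) come straight from the description.

```python
# Hypothetical argument builder for generate_hyper3d_model_via_images,
# enforcing only the constraints stated in the tool description.

def build_arguments(image_paths=None, image_urls=None, bbox_condition=None):
    """Build an argument dict, rejecting inputs that violate the documented rules."""
    # Exactly one of the two image parameters must be supplied.
    if (image_paths is None) == (image_urls is None):
        raise ValueError("Provide exactly one of input_image_paths or input_image_urls")
    # bbox_condition, if present, must be a list of three ints.
    if bbox_condition is not None:
        if len(bbox_condition) != 3 or not all(isinstance(v, int) for v in bbox_condition):
            raise ValueError("bbox_condition must be a list of 3 ints")

    args = {}
    if image_paths is not None:
        args["input_image_paths"] = list(image_paths)  # MAIN_SITE mode
    else:
        args["input_image_urls"] = list(image_urls)    # FAL_AI mode
    if bbox_condition is not None:
        args["bbox_condition"] = bbox_condition
    return args

# MAIN_SITE mode: absolute local path, wrapped in a list even though it is a single image.
print(build_arguments(image_paths=["/tmp/chair_front.png"], bbox_condition=[2, 1, 3]))
```

In FAL_AI mode the same helper would be called with `image_urls` instead, mirroring the mode dependency described above.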

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| input_image_paths | No | | |
| input_image_urls | No | | |
| bbox_condition | No | | |
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the key behaviors: the tool generates a 3D asset with built-in materials, imports it into Blender, produces normalized-size models (suggesting rescaling may be needed), and returns a success/failure message. It also clarifies the mode dependency (MAIN_SITE vs. FAL_AI) for input types. While it covers mutation (generation and import) and output behavior, it lacks details on permissions, rate limits, and error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and appropriately sized. It front-loads the core purpose, follows with key behavioral details, and ends with a clear parameter section. Each sentence adds value, such as explaining material inclusion, normalized size, and mode dependencies. The repeated 'Even if only one image is provided, wrap it into a list' could be stated once, but overall the text is efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (3 parameters, 0% schema coverage, no annotations, no output schema), the description is largely complete. It explains the tool's purpose, usage context, parameters, and behavioral traits such as import into Blender and normalized sizing. However, it says nothing about output structure beyond the success/failure message and does not address potential errors or integration with sibling tools, leaving some gaps for a mutation tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must fully compensate. It provides detailed semantics for all three parameters: input_image_paths (absolute paths, list-wrapped, required for MAIN_SITE mode), input_image_urls (URLs, list-wrapped, required for FAL_AI mode), and bbox_condition (optional list of 3 ints controlling length, width, height ratio). It also clarifies the exclusive choice between input_image_paths and input_image_urls based on mode, adding critical context beyond the schema.
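Because bbox_condition expresses a ratio rather than absolute dimensions, a caller working from real-world measurements needs to reduce them to the smallest integer triple. The helper below is illustrative only (its name and the workflow around it are assumptions), but the reduction itself is plain arithmetic.

```python
# Hypothetical helper: reduce integer dimensions to the smallest
# [Length, Width, Height] ratio suitable for bbox_condition.
from functools import reduce
from math import gcd

def bbox_ratio(length: int, width: int, height: int) -> list[int]:
    """Divide all three dimensions by their greatest common divisor."""
    divisor = reduce(gcd, (length, width, height))
    return [length // divisor, width // divisor, height // divisor]

# A 40 x 20 x 60 cm object reduces to a [2, 1, 3] ratio.
print(bbox_ratio(40, 20, 60))  # → [2, 1, 3]
```

Since the generated model is normalized in size, the ratio only shapes proportions; absolute scale still has to be restored by rescaling in Blender after import.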

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Generate 3D asset using Hyper3D by giving images of the wanted asset, and import the generated asset into Blender.' It specifies the verb (generate and import), resource (3D asset via Hyper3D), and distinguishes it from sibling tools like generate_hyper3d_model_via_text (which uses text input) and import_generated_asset (which only imports).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for usage: it specifies that input images are required and distinguishes between input_image_paths (for MAIN_SITE mode) and input_image_urls (for FAL_AI mode). However, it does not explicitly state when to use this tool rather than alternatives like generate_hunyuan3d_model or import_generated_asset, nor does it mention prerequisites or exclusions beyond the mode dependency.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/SolonaBot/blender-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.