Server Details

Twinmotion rendering via APS — import Revit, set environments, render images, export video.

Status: Healthy
Transport: Streamable HTTP

Tool Descriptions (Grade: B)

Average score: 3/5, with all 5 tools scored.

Server Coherence (Grade: A)
Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap: export_video handles animated rendering, import_rvt manages model import/translation, list_scenes provides scene/model structure, render_image generates still images, and set_environment configures visualization settings. The descriptions reinforce these distinct functions, eliminating any ambiguity.

Naming Consistency: 5/5

All tools follow a consistent 'tm_' prefix with snake_case naming and clear verb_noun patterns (e.g., export_video, import_rvt, list_scenes). This uniformity makes the tool set predictable and easy to understand, with no deviations in style or convention.

Tool Count: 5/5

With 5 tools, the set is well-scoped for a Twinmotion visualization server, covering core workflows like model import, scene management, rendering (both image and video), and environment configuration. Each tool earns its place without feeling excessive or insufficient for the domain.

Completeness: 4/5

The tools provide strong coverage for visualization workflows, including import, scene listing, rendering (image and video), and environment setup. A minor gap exists in operations like updating or deleting scenes/models, but agents can likely work around this given the core functionality is well-covered for the stated purpose.

Available Tools

5 tools
tm_export_video (Grade: C)

Prepare a model for an animated walkthrough / video export by verifying the manifest is complete, then starting a secondary Model Derivative job that produces OBJ geometry (suitable for ingestion into offline rendering pipelines, Blender, or Unreal Engine). Also returns the list of available named views so the operator can stitch them into a camera path. Does NOT itself produce an mp4 — video encoding happens in the downstream UE/Twinmotion pipeline.

When to use: when a user wants a walkthrough/flythrough video of a BIM model (e.g. 'make a 30-second tour of Tower A') — this tool gets the geometry into a UE-ingestible form (.obj, plus suggests FBX/glTF/USD naming like TowerA_walkthrough.fbx for the exported asset) and enumerates named views to guide camera path authoring.

When NOT to use: not to actually encode video (no runtime renderer in this worker — output must be finished in Unreal/Twinmotion/Blender), not before tm_import_rvt, not if the manifest is still 'inprogress' (the tool will short-circuit and return status='pending'). Not for still images (use tm_render_image) or clash animations (use navisworks-mcp).

APS scopes required: data:read data:write viewables:read. Write scopes are needed because this kicks off a new Model Derivative translation job (OBJ + thumbnail).

Rate limits: APS default ~50 req/min; Model Derivative translation jobs ~60 req/min. OBJ derivatives of large BIM models can be multi-GB and take 10–45 min — rely on manifest polling with exponential backoff, not re-calling this tool.

Errors: 401/403 = token/scope (data:write commonly missing); 404 = URN not found; 409 = OBJ derivative already queued (treat as success); 422 = input format does not support OBJ output (some IFC variants / proprietary formats — fall back to FBX/glTF via a different derivative format); 429 = back off 60s; 5xx = APS upstream.

Side effects: STARTS a new translation job on an existing URN (consumes APS cloud credits). Writes usage_log. NOT idempotent per-call (each call creates a new job record), but APS will dedupe identical output requests internally if the manifest already contains the derivative.

Parameters (JSON Schema)

- format (optional): Intended final video container (metadata hint for the downstream UE/Twinmotion render step). mp4 = H.264 web-friendly, mov = ProRes for editing, webm = VP9/AV1 for web.
- project_id (required): Base64-URL-safe URN of a fully-translated model (manifest.status must equal 'success'). If status != success, the tool returns status='pending' without starting a job.
- resolution (optional): Intended final video resolution (metadata hint). 4K (3840x2160) roughly quadruples UE render time vs 1080p.
- animation_name (required): Human-readable label for the walkthrough/animation (used in downstream asset naming; suggest matching the exported video/USD filename base, e.g. 'tower_a_lobby_tour' → tower_a_lobby_tour.mp4 / .fbx / .glb / .usd).
- duration_seconds (optional): Target duration of the final video in seconds (integer). Used only as metadata for the downstream UE Movie Render Queue; this tool does not encode video. Typical: 15–120s.
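The documented error codes map cleanly onto a dispatch helper on the caller side. A minimal sketch, assuming the caller sees raw HTTP status codes; the function name and action strings are illustrative, not part of the tool's API:

```python
# Sketch of caller-side handling for tm_export_video's documented error codes.
# The status-to-action mapping follows the tool description; names are illustrative.

def handle_export_status(status_code: int) -> str:
    """Map an APS HTTP status to a caller action per the documented error table."""
    if status_code in (200, 201):
        return "ok"                # job accepted
    if status_code == 409:
        return "ok"                # OBJ derivative already queued: treat as success
    if status_code in (401, 403):
        return "reauth"            # token/scope problem (data:write commonly missing)
    if status_code == 404:
        return "bad_urn"           # URN not found
    if status_code == 422:
        return "fallback_format"   # source can't produce OBJ: try FBX/glTF instead
    if status_code == 429:
        return "backoff_60s"       # rate-limited: wait 60s before retrying
    if status_code >= 500:
        return "retry_backoff"     # APS upstream issue: retry with backoff
    return "error"
```

Note that 409 is deliberately collapsed into success, matching the description's guidance to treat an already-queued derivative as a completed request.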
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool starts a rendering job, implying an asynchronous or long-running process, but doesn't mention whether it returns a job ID, status updates, or completion time, nor does it address potential rate limits, authentication needs, or error conditions. This leaves significant gaps in understanding the tool's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with two clauses, front-loading the main action ('Start a rendering job for animated visualization') and adding a clarifying note. There's no wasted text, but it could be slightly more structured by explicitly separating purpose from context or output details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of starting a rendering job (an asynchronous operation), no annotations, and no output schema, the description is incomplete. It doesn't explain what the tool returns (e.g., a job ID, status), how to check progress, or potential side effects, making it inadequate for an agent to use this tool effectively without additional context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the input schema fully documents all parameters. The description adds no additional meaning beyond what's in the schema, such as explaining relationships between parameters (e.g., how 'duration_seconds' interacts with 'animation_name') or providing usage examples. Baseline 3 is appropriate as the schema handles parameter documentation adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Start a rendering job') and the resource ('animated visualization'), specifying it translates a model with additional output formats. However, it doesn't explicitly differentiate from sibling tools like 'tm_render_image' (which might handle static images), leaving some ambiguity about when to choose this tool over others.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions 'additional output formats' but doesn't specify what makes this tool unique compared to siblings like 'tm_render_image' or 'tm_list_scenes', nor does it indicate prerequisites or exclusions for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tm_import_rvt (Grade: B)

Import a Revit/BIM model into the Twinmotion visualization pipeline: downloads the source file from a public URL, uploads it to an APS OSS transient bucket, and kicks off an SVF2 + thumbnail translation job. Returns the base64 URN (project_id) used by every other tm_* tool.

When to use: when a user wants to prepare a Revit (.rvt), IFC (.ifc), or other BIM/CAD model for real-time visualization in Unreal Engine / Twinmotion — typically the first step before rendering stills, defining scenes, or exporting FBX/glTF/OBJ geometry for a UE import. Also use when you need thumbnails or view metadata from a source file that has not yet been translated by APS.

When NOT to use: not for MEP clash review (use navisworks-mcp), not for quantity takeoff or cost estimation (use qto-mcp), not for Twinmotion presets editing — Twinmotion itself has no public REST API, so scene/material authoring must happen manually in the UE editor after FBX/USD export.

APS scopes required: data:read data:write data:create bucket:read bucket:create viewables:read. Uses Model Derivative API (translation) + OSS (upload). Twinmotion has no public REST API; all automation is APS Model Derivative + manual Unreal Engine export.

Rate limits: APS default ~50 req/min per app per endpoint; Model Derivative translation jobs ~60 req/min; large .rvt/.nwd/.ifc files are often multi-GB and translation can take 5–60 min — poll the manifest with exponential backoff (start 5s, cap 60s) rather than retrying this tool. Worker request ceiling is ~100MB body; extremely large files may need signed-URL upload instead.

Errors: 401 = APS token failed (check APS_CLIENT_ID/APS_CLIENT_SECRET, re-auth); 403 = scope missing (bucket:create/data:write not granted — have user re-consent); 404 = file_url unreachable; 409 = bucket key collision (rare — retry, tool uses timestamp); 413/507 = file too large for worker memory (advise signed-URL upload); 422 = unsupported source format (only Autodesk-accepted types: rvt, ifc, nwd, dwg, dgn, 3dm, stp, etc.); 429 = back off 60s before retrying; 5xx = APS upstream outage, retry with backoff.

Side effects: CREATES a new transient OSS bucket (scanbim-viz-, auto-expires in 24h), CREATES an object in OSS, STARTS a translation job consuming APS cloud credits. NOT idempotent — each call creates a new bucket + URN. Writes a row to usage_log D1 table.

Parameters (JSON Schema)

- file_url (required): Public HTTPS URL to download the source BIM/CAD file. Must be reachable without auth from Cloudflare Workers egress. Supports rvt, ifc, nwd, dwg, dgn, 3dm, stp, obj, and other APS-supported formats. Signed URLs (S3/GCS) work if the signature is embedded in the query string.
- file_name (required): Filename with extension used as the OSS object key. Non-alphanumeric characters are sanitized to underscores. Extension drives APS translator selection (.rvt → Revit, .ifc → IFC, etc.). For downstream Twinmotion/UE import, keep the base name meaningful (e.g. 'TowerA_L01-L20.rvt' → later exported as TowerA_L01-L20.fbx / .glb / .usd).
- lighting_preset (optional): Lighting preset label stored alongside the import — purely metadata for downstream tm_render_image / UE scene setup; does not affect the APS translation itself. 'natural' = daylight sun+sky, 'studio' = neutral 3-point, 'evening' = warm low sun.
- include_materials (optional): If true (default), the translation preserves material/texture data so the derivative is visually meaningful in Twinmotion/UE. Set false only for geometry-only pipelines (faster, smaller derivatives).
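The polling guidance in the description (start at 5s, cap at 60s) can be sketched as a small loop. `fetch_manifest_status` is a hypothetical caller-supplied function that returns the APS manifest status string ('success', 'failed', or 'inprogress'); the helper itself is a sketch under those assumptions, not part of the server:

```python
import time

def wait_for_translation(fetch_manifest_status, start=5.0, cap=60.0, max_wait=3600.0):
    """Poll the APS manifest with exponential backoff (start 5s, cap 60s).

    fetch_manifest_status is a hypothetical caller-supplied callable returning
    'success', 'failed', or 'inprogress'.
    """
    delay, elapsed = start, 0.0
    while elapsed < max_wait:
        status = fetch_manifest_status()
        if status in ("success", "failed"):
            return status              # terminal states: stop polling
        time.sleep(delay)
        elapsed += delay
        delay = min(delay * 2, cap)    # double the wait each round, capped at 60s
    return "timeout"
```

The returned URN from tm_import_rvt would typically be fed into this loop before calling any downstream tm_* tool.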
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the process ('uploads to APS OSS and starts SVF2 translation with thumbnail generation') but fails to disclose critical traits such as whether this is a long-running operation, error handling, rate limits, or authentication requirements. For a tool with no annotations and complex operations, this is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose ('Import a Revit model for visualization') and adds essential process details without waste. Every part earns its place by clarifying the tool's scope and actions.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of the tool (involving upload, translation, and thumbnail generation), no annotations, and no output schema, the description is incomplete. It lacks information on behavioral traits, error handling, and what the tool returns (e.g., success indicators, job IDs). This leaves significant gaps for an AI agent to use it effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters. The description adds no additional meaning beyond what the schema provides (e.g., it doesn't explain the implications of 'lighting_preset' or 'include_materials' in the context of import/translation). Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Import a Revit model for visualization') and the resource ('Revit model'), with explicit details about the process ('uploads to APS OSS and starts SVF2 translation with thumbnail generation'). It distinguishes from sibling tools like 'tm_export_video' or 'tm_render_image' by focusing on import/translation rather than export or rendering.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'tm_list_scenes' or 'tm_set_environment', nor does it mention prerequisites (e.g., file accessibility, authentication needs) or exclusions. It implies usage for Revit model import but lacks context for decision-making.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tm_list_scenes (Grade: B)

Enumerate every 2D/3D view ('scene') baked into the translated model, plus a shallow dump of the model object tree (first 50 top-level nodes across all 3D views), plus the list of completed derivatives (svf2, thumbnail, obj, etc.) available via APS. The canonical discovery tool for anything downstream that needs a view name or GUID.

When to use: before tm_render_image (to pick a valid camera_preset), before tm_export_video (to plan a camera path across named views), to audit what was translated ('did the 3D coordination view survive translation?'), or to expose the top-level model hierarchy for UI display. Also a useful health check — if scene_count=0, the translation is incomplete or failed.

When NOT to use: not for full property queries on individual objects (this tool returns names + GUIDs + child counts only — use a dedicated property-query tool for full attribute dumps), not for geometry data (use tm_export_video for OBJ export), not on a URN that has not yet started translating.

APS scopes required: viewables:read data:read. Read-only across Model Derivative manifest + metadata + object-tree endpoints.

Rate limits: APS default ~50 req/min. This tool fans out across every 3D view to fetch object trees — for models with many 3D views (10+) it can burn a chunk of the budget in one call. Prefer caching the result on the caller side rather than re-invoking.

Errors: 401/403 = token/scope; 404 = URN not found; 422 = n/a; 429 = back off 60s (this tool makes multiple APS calls per invocation, so 429 is more likely than on single-call tools); 5xx = APS upstream. A 202 on object-tree means APS is still building the tree — the tool retries once internally.

Side effects: NONE on APS (read-only). Writes a usage_log row. Idempotent.

Parameters (JSON Schema)

- project_id (required): Base64-URL-safe URN of the translated model. Should have manifest.status='success' for full results; if still translating, scene_count may be 0 or partial.
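Since the description recommends caching results on the caller side rather than re-invoking, here is a minimal TTL-cache sketch; the class name and the 5-minute TTL are illustrative choices, not part of the server:

```python
import time

class SceneCache:
    """Caller-side TTL cache for tm_list_scenes results.

    The tool is read-only and idempotent, so a stale entry only risks
    missing newly translated views, never serving incorrect mutations.
    """

    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl = ttl_seconds
        self._entries = {}  # project_id -> (stored_at, result)

    def get(self, project_id: str):
        entry = self._entries.get(project_id)
        if entry and time.time() - entry[0] < self.ttl:
            return entry[1]
        return None  # missing or expired

    def put(self, project_id: str, result) -> None:
        self._entries[project_id] = (time.time(), result)
```

A caller would check `get()` before invoking the tool and `put()` the response afterward, avoiding the multi-call fan-out on every agent turn.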
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. While 'list' implies a read-only operation, the description doesn't explicitly state this or address other behavioral aspects like authentication requirements, rate limits, pagination, or what happens when the project_id is invalid. It provides minimal behavioral context beyond the basic operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that clearly states the tool's purpose with zero wasted words. It's appropriately sized for a simple listing tool and front-loads the essential information without unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple listing tool with one well-documented parameter and no output schema, the description is minimally adequate. However, with no annotations and no output schema, it should ideally provide more context about what the listing returns (format, structure, limitations) and behavioral constraints. The description meets basic requirements but leaves gaps in completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with the single parameter 'project_id' fully documented as 'Base64-encoded URN'. The description adds no additional parameter semantics beyond what the schema provides. Since the schema does the heavy lifting, the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('list all available') and resources ('views, scenes, and model structure from a translated model'). It distinguishes itself from siblings like tm_export_video or tm_render_image by focusing on listing rather than exporting or rendering. However, it doesn't explicitly differentiate from potential overlapping list functions in other tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, context for usage, or exclusions. While the purpose is clear, there's no explicit direction on when this listing tool should be preferred over other operations on the same model.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tm_render_image (Grade: C)

Render a still preview image of the model at a specified resolution by pulling the APS Model Derivative thumbnail (capped at 800x800 by the APS endpoint). Also resolves the camera_preset against model metadata to identify which 3D view it maps to, and applies any stored environment config from tm_set_environment for reference.

When to use: when you need a quick visual sanity-check of an imported model (e.g. 'show me what Tower A looks like'), to preview a specific named view before committing to a full UE/Twinmotion render, or to embed a low-res preview in a chat/report. Pair with tm_list_scenes first to discover valid view names/GUIDs.

When NOT to use: not for production-quality renders (APS thumbnails are low-res and raster-only; for cinematic output use Unreal Engine Movie Render Queue after FBX/USD export), not for arbitrary custom camera angles (only named views from the source file are resolvable — there is no runtime camera placement API here), not for 2D sheet exports (use tm_list_scenes to find 2D roles and fetch directly).

APS scopes required: viewables:read data:read. Hits Model Derivative thumbnail + metadata endpoints only.

Rate limits: APS default ~50 req/min per app per endpoint. Thumbnail endpoint is usually fast (<2s) once the model has translated; if called while status='inprogress' it returns no thumbnail. Do not loop-poll this tool — poll the manifest via tm_set_environment or tm_list_scenes instead.

Errors: 401/403 = token/scope; 404 = URN not found or thumbnail not yet generated (model still translating — retry after manifest reports success); 409 = n/a; 422 = n/a; 429 = back off 30s; 5xx = APS upstream.

Side effects: NONE (read-only on APS). Reads KV env_config_. Writes a row to usage_log. Idempotent.

Parameters (JSON Schema)

- quality (optional): Quality label — metadata only, since APS thumbnails have fixed quality. Use 'cinematic' as an intent signal that the operator should do a post-export UE render instead.
- project_id (required): Base64-URL-safe URN of the translated model (from tm_import_rvt). Model must have reached manifest.status='success' or at least have a thumbnail derivative available.
- resolution (optional): Requested output resolution. Note: APS thumbnail endpoint hard-caps at 800x800 — selecting 1920x1080 will be clamped to 800x800. For true HD/4K renders, export FBX/USD and render in UE Movie Render Queue.
- camera_preset (optional): View name (e.g. '3D View 1', '{3D}', 'Perspective - Lobby') or metadata GUID to render from. Discover valid values via tm_list_scenes. If omitted or unmatched, the first 3D view is used. Custom ad-hoc camera placements are not supported — only views baked into the source file.
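Because the thumbnail endpoint hard-caps output at 800x800, a caller should compute the effective size before promising a resolution downstream. A one-function sketch of the clamping behavior described in the resolution parameter (the function itself is illustrative, not part of the API):

```python
def effective_thumbnail_size(width: int, height: int, cap: int = 800) -> tuple:
    """Clamp a requested resolution to the APS thumbnail endpoint's 800x800 cap.

    Per the description, requesting 1920x1080 yields an 800x800 thumbnail.
    """
    return min(width, cap), min(height, cap)
```

This makes it explicit that asking for HD only changes metadata, never the delivered pixels.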
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions generating a thumbnail and specifies resolution, but fails to describe critical behaviors such as whether this is a read-only operation, potential performance impacts, authentication needs, or rate limits. For a rendering tool with zero annotation coverage, this leaves significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core action ('Render a still image') and includes key details. There's no wasted text, though it could be slightly more structured (e.g., separating purpose from constraints).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a rendering tool with no annotations and no output schema, the description is incomplete. It doesn't explain what the tool returns (e.g., image data or a URL), error conditions, or dependencies on other tools (like needing a processed model first). This leaves the agent with insufficient context for reliable use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already documents all parameters (quality, project_id, resolution, camera_preset) with descriptions and enums. The description adds marginal value by implying thumbnail generation and resolution specification, but doesn't provide additional syntax or format details beyond what the schema offers.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Render a still image') and specifies the resource source ('from APS Model Derivative'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like tm_export_video (which likely renders video) or tm_list_scenes (which likely lists scenes rather than rendering).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like tm_export_video or tm_list_scenes. It mentions generating a thumbnail at a specified resolution, but lacks context about prerequisites (e.g., needing a processed model) or exclusions (e.g., not for animations).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tm_set_environment (Grade: C)

Configure the visualization environment (weather, time-of-day, surround context) for a previously imported model. Validates the model exists via APS Model Derivative manifest, then stores the environment config in KV (24h TTL) so tm_render_image and tm_export_video can apply it.

When to use: after tm_import_rvt completes and the manifest status is 'success' (or in-progress if you just want to pre-stage config), when the user wants to set scene context — e.g. 'render the tower at 17:00 in an urban setting with clear weather' — before generating images or video walkthroughs. Typical step 2 in the Twinmotion flow.

When NOT to use: not for editing geometry, materials, or UE post-process volumes (those live in the Unreal Engine editor after FBX/USD import — Twinmotion has no public REST API). Do not call before tm_import_rvt — there is no URN to attach config to.

APS scopes required: viewables:read data:read (manifest + metadata fetch only — read-only for this tool). No bucket or write scopes needed.

Rate limits: APS default ~50 req/min per app per endpoint; manifest/metadata are cheap but polling-heavy if the model is still translating — prefer a single call per user intent, not a status-poll loop. KV writes are effectively unlimited at this scale.

Errors: 401 = APS token expired/invalid; 403 = viewables:read not granted; 404 = URN unknown to APS (wrong project_id, or translation never started); 409 = n/a; 422 = n/a; 429 = back off 30s; 5xx = APS Model Derivative outage.

Side effects: WRITES the env config to KV under key env_config_ (TTL 86400s). Idempotent — calling again overwrites the prior config. Writes a row to usage_log.

Parameters (JSON Schema)

- weather (optional): Weather condition label stored with the scene config. Drives UE sky/atmosphere presets during manual Twinmotion scene authoring.
- project_id (required): Base64-URL-safe URN returned by tm_import_rvt (the `project_id` / `urn` field). This is the Autodesk design URN — NOT an object ID, NOT a bucket key. Format: base64url of 'urn:adsk.objects:os.object:<bucket>/<object>', trailing '=' stripped.
- environment (optional): Surround/context preset for the UE scene. Purely metadata — applied when the operator builds the Twinmotion scene post-FBX-export. 'custom' means the user will supply their own HDRI/backdrop in UE.
- time_of_day (optional): 24-hour clock time as HH:MM. Used for sun position in the UE scene. Default if omitted is '12:00' (noon).
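The project_id format described above (base64url of the OSS object URN, trailing '=' stripped) can be reproduced in a few lines, which is handy for verifying a URN by hand. A minimal sketch; the bucket and object names used when calling it would be whatever tm_import_rvt actually created:

```python
import base64

def encode_design_urn(bucket: str, object_key: str) -> str:
    """Build the base64url design URN format described above:
    base64url('urn:adsk.objects:os.object:<bucket>/<object>'), '=' padding stripped.
    """
    raw = f"urn:adsk.objects:os.object:{bucket}/{object_key}"
    return base64.urlsafe_b64encode(raw.encode()).decode().rstrip("=")
```

Decoding in the other direction (re-adding padding, then base64url-decoding) is a quick way to confirm a project_id points at the bucket and object you expect.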
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions storing scene config and retrieving model metadata, which implies read-write operations, but doesn't specify if this is destructive, requires specific permissions, or has side effects like overwriting existing settings. For a configuration tool with zero annotation coverage, this leaves significant gaps in understanding behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded, using a single sentence to state the core purpose. Every part earns its place by covering configuration, storage, and retrieval aspects. However, it could be slightly more structured by separating the dual actions (configure vs. retrieve) for clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a configuration tool with no annotations and no output schema, the description is incomplete. It doesn't explain what 'stores scene config' entails (e.g., persistence, format) or what 'retrieves model metadata' returns, leaving the agent uncertain about outcomes. For a tool with 4 parameters and behavioral implications, more detail is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters (weather, project_id, environment, time_of_day) with descriptions and enums. The description adds no additional meaning beyond implying these parameters configure the environment, which is redundant. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('configure', 'stores', 'retrieves') and resources ('visualization environment settings', 'scene config', 'model metadata'). It distinguishes the tool from siblings like tm_render_image or tm_export_video by focusing on environment configuration rather than rendering or exporting. However, it doesn't explicitly differentiate from tm_list_scenes, which might be related.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, such as needing a project_id from tm_import_rvt, or when to choose this over tm_list_scenes for scene management. Usage is implied through the action verbs but lacks explicit context or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
