
Twinmotion MCP

Server Details

Twinmotion rendering via APS — import Revit, set environments, render images, export video.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3/5 across 5 of 5 tools scored.

Server Coherence: A

Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap: export_video handles animated rendering, import_rvt manages model import/translation, list_scenes provides scene/model structure, render_image generates still images, and set_environment configures visualization settings. The descriptions reinforce these distinct functions, eliminating any ambiguity.

Naming Consistency: 5/5

All tools follow a consistent 'tm_' prefix with snake_case naming and clear verb_noun patterns (e.g., export_video, import_rvt, list_scenes). This uniformity makes the tool set predictable and easy to understand, with no deviations in style or convention.

Tool Count: 5/5

With 5 tools, the set is well-scoped for a Twinmotion visualization server, covering core workflows like model import, scene management, rendering (both image and video), and environment configuration. Each tool earns its place without feeling excessive or insufficient for the domain.

Completeness: 4/5

The tools provide strong coverage for visualization workflows, including import, scene listing, rendering (image and video), and environment setup. A minor gap exists in operations like updating or deleting scenes/models, but agents can likely work around this given the core functionality is well-covered for the stated purpose.

Available Tools

5 tools
tm_export_video: C

Start a rendering job for animated visualization — translates model with additional output formats

Parameters (JSON Schema)

- format (optional): Output video format
- project_id (required): Base64-encoded URN
- resolution (optional): Output resolution
- animation_name (required): Label for this animation/walkthrough
- duration_seconds (optional): Target duration
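For orientation, a tools/call request for tm_export_video could be assembled in MCP's JSON-RPC 2.0 framing as in the sketch below. The project_id value and argument choices are illustrative placeholders, not values from a real project:

```python
import json

def make_tool_call(name: str, arguments: dict, request_id: int = 1) -> dict:
    """Build an MCP tools/call JSON-RPC 2.0 request envelope."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Illustrative arguments; in practice project_id would come from tm_import_rvt.
request = make_tool_call("tm_export_video", {
    "project_id": "dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6YnVja2V0L0J1aWxkaW5nLnJ2dA",  # placeholder Base64 URN
    "animation_name": "lobby_walkthrough",  # required label
    "resolution": "1920x1080",              # optional
    "duration_seconds": 30,                 # optional
})
print(json.dumps(request, indent=2))
```

The envelope shape follows the generic MCP tools/call convention; the server's actual response (job ID, status polling) is not documented in the listing above.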
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool starts a rendering job, implying an asynchronous or long-running process, but doesn't mention whether it returns a job ID, status updates, or completion time, nor does it address potential rate limits, authentication needs, or error conditions. This leaves significant gaps in understanding the tool's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with two clauses, front-loading the main action ('Start a rendering job for animated visualization') and adding a clarifying note. There's no wasted text, but it could be slightly more structured by explicitly separating purpose from context or output details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of starting a rendering job (an asynchronous operation), no annotations, and no output schema, the description is incomplete. It doesn't explain what the tool returns (e.g., a job ID, status), how to check progress, or potential side effects, making it inadequate for an agent to use this tool effectively without additional context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the input schema fully documents all parameters. The description adds no additional meaning beyond what's in the schema, such as explaining relationships between parameters (e.g., how 'duration_seconds' interacts with 'animation_name') or providing usage examples. Baseline 3 is appropriate as the schema handles parameter documentation adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Start a rendering job') and the resource ('animated visualization'), specifying it translates a model with additional output formats. However, it doesn't explicitly differentiate from sibling tools like 'tm_render_image' (which might handle static images), leaving some ambiguity about when to choose this tool over others.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions 'additional output formats' but doesn't specify what makes this tool unique compared to siblings like 'tm_render_image' or 'tm_list_scenes', nor does it indicate prerequisites or exclusions for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tm_import_rvt: B

Import a Revit model for visualization — uploads to APS OSS and starts SVF2 translation with thumbnail generation

Parameters (JSON Schema)

- file_url (required): Public URL to download the Revit file
- file_name (required): File name (e.g. 'Building.rvt')
- lighting_preset (optional): Scene lighting preset label
- include_materials (optional): Include material data in translation
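Since file_url and file_name are the only required arguments, a caller might run a minimal pre-flight check before invoking tm_import_rvt. The parameter names come from the table above; the validation helper itself is a hypothetical sketch:

```python
# Required arguments per the tm_import_rvt parameter table.
REQUIRED = {"file_url", "file_name"}

def validate_import_args(args: dict) -> list[str]:
    """Return the names of required arguments that are missing."""
    return sorted(REQUIRED - args.keys())

args = {
    "file_url": "https://example.com/models/Building.rvt",  # illustrative public URL
    "file_name": "Building.rvt",
    "include_materials": True,  # optional
}
print(validate_import_args(args))  # an empty list means the call can proceed
```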
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the process ('uploads to APS OSS and starts SVF2 translation with thumbnail generation') but fails to disclose critical traits such as whether this is a long-running operation, error handling, rate limits, or authentication requirements. For a tool with no annotations and complex operations, this is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose ('Import a Revit model for visualization') and adds essential process details without waste. Every part earns its place by clarifying the tool's scope and actions.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of the tool (involving upload, translation, and thumbnail generation), no annotations, and no output schema, the description is incomplete. It lacks information on behavioral traits, error handling, and what the tool returns (e.g., success indicators, job IDs). This leaves significant gaps for an AI agent to use it effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters. The description adds no additional meaning beyond what the schema provides (e.g., it doesn't explain the implications of 'lighting_preset' or 'include_materials' in the context of import/translation). Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Import a Revit model for visualization') and the resource ('Revit model'), with explicit details about the process ('uploads to APS OSS and starts SVF2 translation with thumbnail generation'). It distinguishes from sibling tools like 'tm_export_video' or 'tm_render_image' by focusing on import/translation rather than export or rendering.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'tm_list_scenes' or 'tm_set_environment', nor does it mention prerequisites (e.g., file accessibility, authentication needs) or exclusions. It implies usage for Revit model import but lacks context for decision-making.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tm_list_scenes: B

List all available views, scenes, and model structure from a translated model

Parameters (JSON Schema)

- project_id (required): Base64-encoded URN
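The project_id expected here is a Base64-encoded URN. APS's Model Derivative API conventionally uses URL-safe Base64 with the trailing '=' padding stripped; treating that convention as an assumption, the encoding might look like this (the URN below is illustrative):

```python
import base64

def encode_urn(urn: str) -> str:
    # URL-safe Base64 with padding stripped, as APS conventionally
    # expects for Model Derivative URNs (assumption worth verifying).
    return base64.urlsafe_b64encode(urn.encode()).decode().rstrip("=")

urn = "urn:adsk.objects:os.object:my-bucket/Building.rvt"  # illustrative URN
project_id = encode_urn(urn)
print(project_id)
```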
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. While 'list' implies a read-only operation, the description doesn't explicitly state this or address other behavioral aspects like authentication requirements, rate limits, pagination, or what happens when the project_id is invalid. It provides minimal behavioral context beyond the basic operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that clearly states the tool's purpose with zero wasted words. It's appropriately sized for a simple listing tool and front-loads the essential information without unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple listing tool with one well-documented parameter and no output schema, the description is minimally adequate. However, with no annotations and no output schema, it should ideally provide more context about what the listing returns (format, structure, limitations) and behavioral constraints. The description meets basic requirements but leaves gaps in completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with the single parameter 'project_id' fully documented as 'Base64-encoded URN'. The description adds no additional parameter semantics beyond what the schema provides. Since the schema does the heavy lifting, the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('list all available') and resources ('views, scenes, and model structure from a translated model'). It distinguishes itself from siblings like tm_export_video or tm_render_image by focusing on listing rather than exporting or rendering. However, it doesn't explicitly differentiate from potential overlapping list functions in other tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, context for usage, or exclusions. While the purpose is clear, there's no explicit direction on when this listing tool should be preferred over other operations on the same model.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tm_render_image: C

Render a still image — generates thumbnail from APS Model Derivative at specified resolution

Parameters (JSON Schema)

- quality (optional): Render quality
- project_id (required): Base64-encoded URN
- resolution (optional): Output resolution
- camera_preset (optional): View name or GUID to render from
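The optional resolution parameter is only described as "Output resolution". Assuming the common "WIDTHxHEIGHT" string convention (the listing does not specify the format), a caller might parse and sanity-check it before calling:

```python
def parse_resolution(value: str) -> tuple[int, int]:
    """Parse an assumed 'WIDTHxHEIGHT' resolution string into integers."""
    width, sep, height = value.partition("x")
    if sep != "x" or not (width.isdigit() and height.isdigit()):
        raise ValueError(f"expected WIDTHxHEIGHT, got {value!r}")
    return int(width), int(height)

print(parse_resolution("1920x1080"))  # (1920, 1080)
```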
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions generating a thumbnail and specifies resolution, but fails to describe critical behaviors such as whether this is a read-only operation, potential performance impacts, authentication needs, or rate limits. For a rendering tool with zero annotation coverage, this leaves significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core action ('Render a still image') and includes key details. There's no wasted text, though it could be slightly more structured (e.g., separating purpose from constraints).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a rendering tool with no annotations and no output schema, the description is incomplete. It doesn't explain what the tool returns (e.g., image data or a URL), error conditions, or dependencies on other tools (like needing a processed model first). This leaves the agent with insufficient context for reliable use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already documents all parameters (quality, project_id, resolution, camera_preset) with descriptions and enums. The description adds marginal value by implying thumbnail generation and resolution specification, but doesn't provide additional syntax or format details beyond what the schema offers.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Render a still image') and specifies the resource source ('from APS Model Derivative'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like tm_export_video (which likely renders video) or tm_list_scenes (which likely lists scenes rather than rendering).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like tm_export_video or tm_list_scenes. It mentions generating a thumbnail at a specified resolution, but lacks context about prerequisites (e.g., needing a processed model) or exclusions (e.g., not for animations).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tm_set_environment: C

Configure visualization environment settings — stores scene config and retrieves model metadata for context

Parameters (JSON Schema)

- weather (optional): Weather condition
- project_id (required): Base64-encoded URN of the translated model
- environment (optional): Environment preset
- time_of_day (optional): Time of day (e.g. '14:30')
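The time_of_day parameter shows only a single example ('14:30'). Assuming 24-hour HH:MM strings, which is the only format the listing hints at, a caller could validate the value before sending it:

```python
import re

# 24-hour HH:MM, e.g. '14:30'; single-digit hours like '9:30' are
# rejected under this assumed format.
TIME_RE = re.compile(r"([01]\d|2[0-3]):[0-5]\d")

def is_valid_time_of_day(value: str) -> bool:
    return TIME_RE.fullmatch(value) is not None

print(is_valid_time_of_day("14:30"))  # True
print(is_valid_time_of_day("25:00"))  # False
```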
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions storing scene config and retrieving model metadata, which implies read-write operations, but doesn't specify if this is destructive, requires specific permissions, or has side effects like overwriting existing settings. For a configuration tool with zero annotation coverage, this leaves significant gaps in understanding behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded, using a single sentence to state the core purpose. Every part earns its place by covering configuration, storage, and retrieval aspects. However, it could be slightly more structured by separating the dual actions (configure vs. retrieve) for clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a configuration tool with no annotations and no output schema, the description is incomplete. It doesn't explain what 'stores scene config' entails (e.g., persistence, format) or what 'retrieves model metadata' returns, leaving the agent uncertain about outcomes. For a tool with 4 parameters and behavioral implications, more detail is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters (weather, project_id, environment, time_of_day) with descriptions and enums. The description adds no additional meaning beyond implying these parameters configure the environment, which is redundant. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('configure', 'stores', 'retrieves') and resources ('visualization environment settings', 'scene config', 'model metadata'). It distinguishes the tool from siblings like tm_render_image or tm_export_video by focusing on environment configuration rather than rendering or exporting. However, it doesn't explicitly differentiate from tm_list_scenes, which might be related.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, such as needing a project_id from tm_import_rvt, or when to choose this over tm_list_scenes for scene management. Usage is implied through the action verbs but lacks explicit context or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
