ScanBIM MCP

Server Details

AI Hub for AEC — 50+ 3D formats, clash detection, ACC integration via Autodesk Platform Services.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: C

Average 2.9/5 across 19 of 19 tools scored.

Server Coherence: A

Disambiguation: 4/5

Most tools have distinct purposes targeting specific resources or actions (e.g., acc_create_issue vs. acc_create_rfi, twinmotion_render vs. twinmotion_walkthrough). However, some tools, such as lumion_render and twinmotion_render, could be confused because both generate architectural visualizations, though their descriptions differentiate the styles and features.

Naming Consistency: 4/5

The naming follows a mostly consistent snake_case pattern with clear verb_noun structures (e.g., acc_list_projects, get_model_metadata). Minor deviations exist, such as xr_launch_ar_session using 'launch' instead of a more uniform verb like 'start', but overall the conventions are readable and predictable.

Tool Count: 4/5

With 19 tools, the count is slightly high but reasonable for the server's broad scope covering BIM model management, rendering, XR, and ACC/Forma integration. Each tool appears to serve a specific function, though some overlap in visualization tools might indicate slight bloat.

Completeness: 5/5

The tool set provides comprehensive coverage for BIM workflows, including model upload and management (upload_model, list_models), metadata and clash detection (get_model_metadata, detect_clashes), rendering (lumion_render, twinmotion_render), XR sessions (xr_launch_ar_session, xr_list_sessions), and ACC/Forma integration (acc_create_issue, acc_search_documents). No obvious gaps are present, supporting full lifecycle operations from model handling to collaboration and visualization.
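The lifecycle coverage described above can be sketched as an ordered pipeline. This is illustrative only: the tool names come from this report, but treating them as steps of a single linear workflow is an assumption for the sketch, not documented server behavior.

```python
# Illustrative ordering of the server's tools across one BIM workflow.
# The tool names are real tools from this report; the sequencing is an
# assumption made for the sketch.
WORKFLOW = [
    ("upload_model",       "ingest a 3D model"),
    ("get_model_metadata", "confirm format and translation status"),
    ("detect_clashes",     "run VDC clash detection"),
    ("acc_create_issue",   "push findings into ACC/Forma"),
    ("twinmotion_render",  "produce a visualization"),
]

def describe(workflow):
    """Render the workflow as a readable chain of tool names."""
    return " -> ".join(name for name, _ in workflow)
```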

Available Tools

19 tools
acc_create_issue (C)

Create a real issue in ACC/Forma via APS Issues API.

Parameters (JSON Schema)

Name             Required  Description
title            Yes
due_date         No        ISO date string (YYYY-MM-DD)
priority         No
project_id       Yes       ACC project ID (b.xxxx format)
assigned_to      No        User ID or email to assign
description      Yes
linked_model_id  No

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It states this creates a 'real issue' (implying a write/mutation operation) but doesn't disclose behavioral traits like required permissions, whether it's idempotent, rate limits, or what happens on success/failure. This is inadequate for a mutation tool with zero annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
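For comparison, the MCP specification's tool annotations give a structured way to disclose exactly the traits this dimension asks about. The field names below follow the spec's ToolAnnotations; the values are assumptions about how acc_create_issue likely behaves, not anything the server declares.

```python
# Hedged sketch: ToolAnnotations a server could attach to a mutation tool
# like acc_create_issue. Field names follow the MCP specification; the
# values are assumptions about this tool, not declared behavior.
annotations = {
    "title": "Create ACC Issue",
    "readOnlyHint": False,     # assumption: creating an issue mutates project state
    "destructiveHint": False,  # assumption: it adds data rather than deleting it
    "idempotentHint": False,   # assumption: repeated calls create duplicate issues
    "openWorldHint": True,     # assumption: it calls an external service (APS)
}
```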

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero wasted words. It's appropriately sized and front-loaded with the core action, making it easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given this is a mutation tool with 7 parameters, low schema coverage (43%), no annotations, and no output schema, the description is incomplete. It lacks crucial context like expected outcomes, error handling, or dependencies, leaving significant gaps for the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is low (43%), with only 3 of 7 parameters having descriptions. The description adds no parameter semantics beyond what's implied by the tool name (e.g., it doesn't explain what 'project_id' or 'linked_model_id' mean in context). This fails to compensate for the schema gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
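The 43% figure is simply described parameters over total parameters. A minimal sketch, using the parameter table above (3 of 7 parameters carry a description); the `"type": "string"` entries are assumptions, since the report only lists parameter names:

```python
def schema_description_coverage(properties: dict) -> float:
    """Fraction of input-schema parameters that carry a description."""
    if not properties:
        return 1.0  # nothing to document
    described = sum(1 for spec in properties.values() if spec.get("description"))
    return described / len(properties)

# acc_create_issue's parameters as listed above: 3 of 7 are described.
# Types are assumed; only names and descriptions appear in the report.
props = {
    "title": {"type": "string"},
    "due_date": {"type": "string", "description": "ISO date string (YYYY-MM-DD)"},
    "priority": {"type": "string"},
    "project_id": {"type": "string", "description": "ACC project ID (b.xxxx format)"},
    "assigned_to": {"type": "string", "description": "User ID or email to assign"},
    "description": {"type": "string"},
    "linked_model_id": {"type": "string"},
}
coverage = schema_description_coverage(props)  # 3/7, i.e. about 0.43
```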

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Create') and target resource ('real issue in ACC/Forma via APS Issues API'), making the purpose unambiguous. However, it doesn't explicitly differentiate from sibling tools like 'acc_create_rfi' or 'acc_list_issues', which would require a 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. There are no mentions of prerequisites, context, or exclusions, leaving the agent with no usage direction beyond the basic purpose.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

acc_create_rfi (C)

Create a real RFI in ACC/Forma via APS RFIs API.

Parameters (JSON Schema)

Name             Required
subject          Yes
priority         No
question         Yes
project_id       Yes
assigned_to      No
linked_clash_id  No
linked_model_id  No

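Since none of these parameters carry schema descriptions, it is worth showing what full coverage could look like. The parameter names are from the table above; every description string below is a hypothetical suggestion, not server documentation.

```python
# Sketch: what 100% description coverage could look like for acc_create_rfi.
# Parameter names come from the table above; the description texts are
# hypothetical suggestions, not the server's actual documentation.
improved_properties = {
    "subject":         {"type": "string", "description": "Short RFI title"},
    "priority":        {"type": "string", "description": "RFI priority level"},
    "question":        {"type": "string", "description": "The question being asked"},
    "project_id":      {"type": "string", "description": "ACC project ID"},
    "assigned_to":     {"type": "string", "description": "Assignee user ID or email"},
    "linked_clash_id": {"type": "string", "description": "Clash to attach, if any"},
    "linked_model_id": {"type": "string", "description": "Model to attach, if any"},
}
```
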
Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It states this creates a 'real RFI', implying a write operation, but doesn't cover critical aspects like authentication requirements, rate limits, error handling, or what 'real' entails (e.g., immediate persistence). This leaves significant gaps for an agent to understand the tool's behavior.

Conciseness: 5/5

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's appropriately sized and front-loaded, making it easy to parse quickly.

Completeness: 2/5

Given the complexity of a creation tool with 7 parameters, 0% schema coverage, no annotations, and no output schema, the description is incomplete. It doesn't address parameter meanings, behavioral traits, or usage context, leaving the agent with insufficient information to effectively invoke the tool.

Parameters: 2/5

The schema description coverage is 0%, meaning none of the 7 parameters are documented in the schema. The description adds no information about parameters beyond implying 'project_id', 'subject', and 'question' are involved in creation. It doesn't explain what 'linked_clash_id' or 'assigned_to' mean, failing to compensate for the schema's lack of documentation.

Purpose: 4/5

The description clearly states the action ('Create') and resource ('a real RFI in ACC/Forma via APS RFIs API'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'acc_create_issue' or 'acc_list_rfis', which would require mentioning what distinguishes RFI creation from issue creation or listing operations.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, such as needing a valid project_id, or compare it to sibling tools like 'acc_create_issue' for context on when RFIs versus issues are appropriate, leaving usage unclear.

acc_list_issues (C)

List and filter live issues from an ACC/Forma project.

Parameters (JSON Schema)

Name         Required  Description
status       No        open, closed, in_review, draft
priority     No
project_id   Yes
assigned_to  No

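The documented status values could be expressed as a JSON Schema enum, letting agents see the valid filter values without prose. The four values come from the parameter table above; the surrounding schema shape is a suggestion, not what the server ships.

```python
# Sketch: declaring the documented status values as a JSON Schema enum.
# The four values are from the parameter table above; the wrapping schema
# is a suggestion, not the server's actual definition.
status_schema = {
    "type": "string",
    "enum": ["open", "closed", "in_review", "draft"],
    "description": "Filter issues by workflow state",
}

def is_valid_status(value: str) -> bool:
    """Check a candidate filter value against the declared enum."""
    return value in status_schema["enum"]
```
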
Behavior: 2/5

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'list and filter' but does not specify if this is a read-only operation, what permissions are required, how results are returned (e.g., pagination), or any rate limits. This is inadequate for a tool with multiple parameters and no output schema.

Conciseness: 5/5

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It is front-loaded and appropriately sized, making it easy to understand quickly.

Completeness: 2/5

Given the complexity of a filtering tool with 4 parameters, low schema coverage (25%), no annotations, and no output schema, the description is incomplete. It lacks details on behavioral traits, parameter usage, and return values, making it insufficient for effective tool invocation by an AI agent.

Parameters: 3/5

Schema description coverage is low at 25%, with only the 'status' parameter documented. The description adds value by implying filtering capabilities ('filter live issues'), which aligns with parameters like 'status', 'priority', and 'assigned_to', but it does not detail what these parameters mean or how to use them, partially compensating for the coverage gap.

Purpose: 4/5

The description clearly states the action ('List and filter') and resource ('live issues from an ACC/Forma project'), providing a specific purpose. However, it does not explicitly differentiate from sibling tools like 'acc_list_projects' or 'acc_list_rfis', which list different resources, so it falls short of a perfect score.

Usage Guidelines: 2/5

The description offers no guidance on when to use this tool versus alternatives. It does not mention prerequisites, such as needing a project ID, or compare it to similar tools like 'acc_search_documents' for document-related queries, leaving usage context implied but not stated.

acc_list_projects (B)

List all ACC/BIM360 projects you have access to via APS.

Parameters (JSON Schema)

No parameters

Behavior: 2/5

No annotations are provided, so the description carries full burden. It states the tool lists projects but lacks behavioral details: it doesn't specify if this is a read-only operation, mention pagination or rate limits, describe the output format, or note any access restrictions beyond 'you have access to'. For a tool with zero annotation coverage, this is insufficient.

Conciseness: 4/5

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's front-loaded with the core action. However, it could be slightly more structured by including key behavioral hints, but it earns high marks for brevity and clarity.

Completeness: 2/5

Given the complexity (listing projects in a BIM system) and lack of annotations and output schema, the description is incomplete. It doesn't explain what information is returned (e.g., project names, IDs, status), how results are formatted, or any limitations. For a tool with no structured data support, this leaves significant gaps for an agent.

Parameters: 4/5

The input schema has 0 parameters, so coverage is trivially complete: there is nothing to document. The description doesn't add parameter details, which is appropriate since there are none to explain. The baseline score is 4 for tools with 0 parameters, as no compensation is needed.

Purpose: 4/5

The description clearly states the action ('List all') and resource ('ACC/BIM360 projects'), specifying the scope as projects accessible via APS. It distinguishes from siblings like acc_list_issues or acc_list_rfis by focusing on projects rather than issues or RFIs. However, it doesn't explicitly differentiate from acc_project_summary, which might overlap in purpose.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites like authentication, compare to acc_project_summary for detailed project info, or indicate when listing projects is appropriate over searching or creating. This leaves the agent without context for tool selection.

acc_list_rfis (C)

List and filter live RFIs from an ACC/Forma project.

Parameters (JSON Schema)

Name        Required
status      No
project_id  Yes

Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'list and filter live RFIs' but doesn't cover critical aspects like whether it's read-only, pagination behavior, rate limits, authentication needs, or what 'live' means operationally. This leaves significant gaps for an agent.

Conciseness: 5/5

The description is a single, efficient sentence that directly states the tool's function without unnecessary words. It's appropriately sized and front-loaded with the core action.

Completeness: 2/5

Given the complexity (2 parameters, no annotations, no output schema), the description is incomplete. It lacks details on behavior, parameter usage, output format, and how it fits with sibling tools, making it inadequate for full agent understanding.

Parameters: 2/5

The input schema has 0% description coverage, so the description must compensate. It implies filtering by status but doesn't explain the 'status' parameter's purpose, possible values, or how 'project_id' is used. This adds minimal semantic value beyond the bare schema.

Purpose: 4/5

The description clearly states the action ('List and filter') and resource ('live RFIs from an ACC/Forma project'), making the purpose understandable. It doesn't explicitly differentiate from sibling tools like 'acc_list_issues' or 'acc_list_projects', which would require a 5.

Usage Guidelines: 2/5

No guidance is provided on when to use this tool versus alternatives. The description mentions filtering but doesn't specify when to use it over other list tools or how it relates to siblings like 'acc_search_documents' or 'acc_project_summary'.

acc_project_summary (C)

Get a full ACC/Forma project summary including hub, project metadata, and stats.

Parameters (JSON Schema)

Name        Required
hub_id      No
project_id  Yes

Behavior: 2/5

No annotations are provided, so the description carries the full burden. The verb 'Get' implies a read operation, but the description doesn't disclose behavioral traits such as whether it requires authentication, has rate limits, returns structured data, or handles errors. For a tool with no annotation coverage, this leaves significant gaps in understanding its behavior.

Conciseness: 5/5

The description is a single, efficient sentence that front-loads the core purpose and details. Every word earns its place, with no redundancy or wasted space. It's appropriately sized for a simple retrieval tool.

Completeness: 2/5

Given no annotations, 0% schema coverage, and no output schema, the description is incomplete. It doesn't explain what the summary includes beyond high-level categories, how parameters affect results, or the return format. For a tool with two parameters and complex output implied by 'full...summary', more context is needed to guide effective use.

Parameters: 2/5

Schema description coverage is 0%, so the description must compensate. It mentions 'project summary' but doesn't explain the parameters 'project_id' (required) or 'hub_id' (optional), their formats, or how they relate to the output. The description adds no meaning beyond what the bare schema provides, failing to address the coverage gap.

Purpose: 4/5

The description clearly states the tool's purpose with a specific verb ('Get') and resource ('ACC/Forma project summary'), and specifies what information is included ('hub, project metadata, and stats'). It distinguishes from siblings like 'acc_list_projects' by focusing on a detailed summary rather than listing. However, it doesn't explicitly contrast with other tools that might retrieve project data.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, when not to use it, or refer to sibling tools like 'acc_list_projects' for different use cases. The agent must infer usage from the tool name and description alone.

acc_search_documents (C)

Search drawings, specs, submittals and documents in ACC/Forma via APS.

Parameters (JSON Schema)

Name           Required
query          Yes
project_id     Yes
document_type  No

Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It states it's a search operation, implying read-only behavior, but doesn't cover critical aspects like authentication needs, rate limits, pagination, error handling, or what the search returns (e.g., a list of documents with metadata). For a tool with 3 parameters and no output schema, this leaves significant gaps in understanding how it behaves.

Conciseness: 5/5

The description is a single, efficient sentence that front-loads the core purpose without unnecessary words. Every part ('Search drawings, specs, submittals and documents in ACC/Forma via APS') contributes directly to understanding the tool's function.

Completeness: 2/5

Given the complexity (a search tool with 3 parameters), lack of annotations, 0% schema description coverage, and no output schema, the description is incomplete. It covers the basic purpose but fails to provide necessary context on behavior, parameters, or results, making it inadequate for effective tool use.

Parameters: 2/5

Schema description coverage is 0%, so the description must compensate but adds no parameter information. It doesn't explain what 'project_id', 'query', or 'document_type' mean, their formats, or how they affect the search. With 3 parameters (2 required) and no schema descriptions, this is a major shortfall.

Purpose: 4/5

The description clearly states the action ('Search') and the target resources ('drawings, specs, submittals and documents'), with the platform context ('in ACC/Forma via APS'). It distinguishes from siblings like 'acc_list_projects' or 'list_models' by focusing on document search rather than listing or other operations. However, it doesn't explicitly differentiate itself from similar search tools, so it's not a perfect 5.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing a project_id), exclusions, or compare to siblings like 'acc_list_issues' for issue-related searches. Usage is implied by the name and purpose but not explicitly stated.

detect_clashes (C)

Run VDC clash detection between two element categories in a BIM model. Uses 20+ years of field-tested VDC intelligence to assess severity, suggest fixes, and estimate rework hours.

Parameters (JSON Schema)

Name        Required  Description
model_id    Yes       APS URN or model ID from upload_model
category_a  Yes       First category (e.g. Ducts, Pipes, Structure, Electrical)
category_b  Yes       Second category to clash against

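Putting the three documented parameters together, a call might look like the sketch below. The category names are the schema's own examples; the model_id value is a placeholder, since the report doesn't show a real URN.

```python
# Hypothetical detect_clashes arguments built from the documented schema.
# Category names are taken from the schema's own examples; the model_id
# value is a placeholder, not a real APS URN.
arguments = {
    "model_id": "<urn-or-id-from-upload_model>",
    "category_a": "Ducts",
    "category_b": "Structure",
}

required = {"model_id", "category_a", "category_b"}
assert required <= arguments.keys()  # all required parameters are present
```
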
Behavior: 2/5

No annotations are provided, so the description carries the full burden of behavioral disclosure. It adds some context beyond basic functionality by mentioning 'Uses 20+ years of field-tested VDC intelligence' and outputs like 'assess severity, suggest fixes, and estimate rework hours.' However, it lacks critical details such as whether this is a read-only or destructive operation, performance characteristics (e.g., runtime, rate limits), authentication needs, or error handling. This is a significant gap for a tool with no annotation coverage.

Conciseness: 4/5

The description is concise and well-structured in two sentences. The first sentence clearly states the core functionality, and the second adds valuable context about intelligence and outputs. There's no wasted text, and it's front-loaded with the main purpose. However, it could be slightly more efficient by integrating the context into the first sentence without losing clarity.

Completeness: 2/5

Given the complexity of clash detection (a potentially resource-intensive analysis tool), no annotations, and no output schema, the description is incomplete. It hints at outputs ('assess severity, suggest fixes, and estimate rework hours') but doesn't detail the return format, error conditions, or behavioral traits like side effects. For a tool with no structured data to rely on, this leaves significant gaps for an AI agent to understand how to invoke it correctly.

Parameters: 3/5

The input schema has 100% description coverage, so the schema already documents all three parameters (model_id, category_a, category_b) with clear descriptions. The tool description doesn't add any parameter-specific information beyond what's in the schema, such as examples for categories or constraints on model_id. With high schema coverage, the baseline score of 3 is appropriate, as the description doesn't compensate but also doesn't need to given the schema's completeness.

Purpose: 4/5

The description clearly states the tool's purpose: 'Run VDC clash detection between two element categories in a BIM model.' It specifies the verb ('Run VDC clash detection'), resource ('BIM model'), and scope ('between two element categories'). However, it doesn't explicitly differentiate from sibling tools like 'acc_create_issue' or 'get_model_metadata', which might handle related BIM tasks but not clash detection specifically.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. It mentions the tool's capabilities (e.g., 'assess severity, suggest fixes, and estimate rework hours'), but doesn't specify prerequisites, exclusions, or recommend other tools for different tasks. For example, it doesn't clarify if this should be used before or after tools like 'acc_create_issue' for reporting clashes.

get_model_metadata C

Get detailed metadata for a model including element count, format, translation status, and properties via APS Model Derivative.

Parameters (JSON Schema)
Name | Required | Description | Default
model_id | Yes | APS URN or model ID | —
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states this is a read operation ('Get'), implying it's likely non-destructive, but doesn't clarify authentication needs, rate limits, error conditions, or what happens if the model_id is invalid. For a tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose and lists key metadata details. It avoids unnecessary words, but could be slightly more structured (e.g., by explicitly stating it's for retrieving metadata rather than listing models).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of retrieving model metadata, the lack of annotations, and no output schema, the description is incomplete. It doesn't explain the return format, potential errors, or how to interpret the metadata fields (e.g., what 'translation status' means). For a tool with these gaps, more context is needed to be fully helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% description coverage, with the single parameter 'model_id' documented as 'APS URN or model ID'. The description adds no additional parameter semantics beyond what the schema provides, such as format examples or where to obtain the model_id. Since schema coverage is high, the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and resource ('detailed metadata for a model'), specifying what information is retrieved (element count, format, translation status, properties) and the underlying service (APS Model Derivative). However, it doesn't explicitly differentiate from sibling tools like 'list_models' or 'get_supported_formats', which would be needed for a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing a model ID from 'list_models'), exclusions, or comparisons to siblings like 'list_models' (which might list models without metadata) or 'get_supported_formats' (which might provide format info without model-specific details).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
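
The kind of guidance the rubric asks for can live directly in the description string. A hedged sketch of a rewrite for get_model_metadata — the sibling tool names come from this server's own listing, but the wording is illustrative:

```python
# Illustrative rewrite of the get_model_metadata description with
# explicit prerequisites and sibling-tool guidance (wording is a sketch).
description = (
    "Get detailed metadata for a single model (element count, format, "
    "translation status, properties) via APS Model Derivative. "
    "Requires a model_id obtained from list_models or upload_model. "
    "Use list_models to enumerate models without metadata; use "
    "get_supported_formats for format support unrelated to a specific model."
)
```

One sentence of prerequisites plus one "use X instead when Z" sentence would likely lift both the Usage Guidelines and Purpose scores.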

get_supported_formats B

List all 50+ supported 3D file formats by tier.

Parameters (JSON Schema)
Name | Required | Description | Default

No parameters

Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool lists formats but doesn't describe traits like whether it's read-only, if it requires authentication, rate limits, or how the data is structured (e.g., as a list or grouped by tier). This leaves significant gaps in understanding the tool's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the key information: 'List all 50+ supported 3D file formats by tier.' It wastes no words and clearly communicates the core purpose without redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (0 parameters, no output schema, no annotations), the description is minimally adequate. It states what the tool does but lacks context on behavior, usage, or output format. For a simple listing tool, this might suffice, but it doesn't fully compensate for the absence of annotations or output details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description doesn't add parameter details, which is appropriate. A baseline of 4 is given since no parameters exist, and the description doesn't contradict or add unnecessary information.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'List all 50+ supported 3D file formats by tier.' It specifies the verb ('List'), resource ('supported 3D file formats'), and scope ('by tier'), making it easy to understand. However, it doesn't explicitly differentiate from sibling tools like 'list_models' or 'get_model_metadata', which might also involve file formats, so it misses full sibling distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, context for usage, or exclusions. For example, it doesn't clarify if this is for checking compatibility before upload or for general reference, leaving the agent without usage direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_models C

List all uploaded models in APS OSS storage.

Parameters (JSON Schema)
Name | Required | Description | Default
format | No | — | —
project_name | No | — | —
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states it's a list operation, implying read-only behavior, but doesn't mention any constraints like pagination, rate limits, authentication needs, or what 'all' entails (e.g., completeness, ordering). This leaves significant gaps for a tool with parameters.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that is front-loaded with the core purpose. There is no wasted verbiage, making it appropriately concise for a simple list tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (2 parameters, no annotations, no output schema), the description is incomplete. It doesn't explain the parameters, return values, or behavioral traits, leaving the agent with insufficient information to use the tool effectively beyond its basic purpose.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 2 parameters with 0% description coverage, and the tool description provides no information about what 'format' or 'project_name' mean, their expected values, or how they affect the listing. This fails to compensate for the low schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
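
For a schema with 0% description coverage, even one sentence per parameter changes the picture. A sketch of how the two list_models parameters could be documented; the semantics shown (filtering by extension and by project) are plausible guesses, not confirmed behavior:

```python
# Sketch of documented list_models parameters.
# The filter semantics are assumptions about likely behavior.
list_models_schema = {
    "type": "object",
    "properties": {
        "format": {
            "type": "string",
            "description": "Optional filter by file extension, e.g. 'rvt' or 'ifc'.",
        },
        "project_name": {
            "type": "string",
            "description": "Optional filter to models uploaded under this project name.",
        },
    },
    "required": [],
}
```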

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('List all uploaded models') and resource ('in APS OSS storage'), providing a specific verb+resource combination. However, it doesn't explicitly differentiate from sibling tools like 'get_model_metadata' or 'get_supported_formats', which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description lacks context about prerequisites, exclusions, or comparisons to sibling tools such as 'acc_list_projects' or 'get_model_metadata', leaving usage unclear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lumion_render C

Generate Lumion-style architectural visualization with landscaping, people, vehicles, and atmospheric effects.

Parameters (JSON Schema)
Name | Required | Description | Default
style | No | — | —
model_id | Yes | — | —
add_people | No | — | —
add_vehicles | No | — | —
add_landscaping | No | — | —
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions what visual elements are generated but doesn't cover critical aspects like whether this is a read/write operation, processing time, authentication needs, rate limits, or output format. For a tool that likely involves significant computation, this lack of transparency is a notable gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys the core functionality without unnecessary words. It's front-loaded with the main action and includes relevant details in a compact list format, making every word count.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a rendering tool with 5 parameters, 0% schema coverage, no annotations, and no output schema, the description is insufficient. It doesn't explain what the tool returns, how long processing takes, error conditions, or dependencies on other tools like 'list_models'. The agent lacks critical context to use this tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate for all 5 parameters. It partially covers them by mentioning 'landscaping, people, vehicles' (mapping to add_landscaping, add_people, and add_vehicles) but omits model_id and style entirely. The mention of atmospheric effects hints at style options without naming them. This leaves key parameters undocumented.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'generate' and the resource 'Lumion-style architectural visualization', specifying the visual elements included (landscaping, people, vehicles, atmospheric effects). It distinguishes from sibling tools like 'twinmotion_render' by specifying the Lumion style, though it doesn't explicitly contrast with other rendering tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'twinmotion_render' or 'get_viewer_link'. It mentions the visual elements but doesn't specify use cases, prerequisites, or exclusions, leaving the agent to infer usage from the tool name and description alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

twinmotion_render C

Generate photorealistic Twinmotion-style render with time-of-day, weather, season, and camera controls.

Parameters (JSON Schema)
Name | Required | Description | Default
season | No | — | —
weather | No | — | —
model_id | Yes | — | —
resolution | No | — | —
time_of_day | No | — | —
camera_preset | No | — | —
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden but provides minimal behavioral information. It mentions 'generate', which implies a creation operation, but doesn't disclose execution time, output format, file storage, permissions needed, rate limits, or whether it's destructive. The description doesn't contradict annotations (none exist), but it fails to provide essential behavioral context for a render-generation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single sentence that efficiently communicates the core function: no wasted words, and the main action is front-loaded. Every element earns its place in this compact description.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex render generation tool with 6 parameters, 0% schema coverage, no annotations, and no output schema, the description is inadequate. It doesn't explain what the tool returns (image file, URL, status), execution characteristics, or important constraints. The completeness gap is significant given the tool's complexity and lack of supporting structured data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate but only mentions parameter categories (time-of-day, weather, season, camera controls) without explaining their purpose or relationships. It doesn't clarify that 'model_id' is required or explain what camera_preset entails. For 6 parameters with 4 having enums, this minimal coverage is insufficient.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
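
Enum parameters benefit most from documentation, since an agent otherwise has to guess valid values. A sketch of how twinmotion_render's four enum parameters could be documented; every enum value listed here is a plausible guess for illustration, not the server's actual list:

```python
# Hypothetical documentation for twinmotion_render's enum parameters.
# All enum values below are illustrative guesses, not actual server values.
render_params = {
    "time_of_day": {
        "enum": ["dawn", "noon", "dusk", "night"],
        "description": "Lighting preset for the scene.",
    },
    "weather": {
        "enum": ["clear", "overcast", "rain", "snow"],
        "description": "Atmospheric condition applied to the render.",
    },
    "season": {
        "enum": ["spring", "summer", "autumn", "winter"],
        "description": "Controls vegetation and ground cover.",
    },
    "camera_preset": {
        "enum": ["aerial", "eye_level", "interior"],
        "description": "Framing preset for the camera position.",
    },
}
```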

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'generate' and resource 'photorealistic Twinmotion-style render' with specific controls (time-of-day, weather, season, camera). It distinguishes from sibling 'lumion_render' by specifying Twinmotion style, but doesn't explicitly differentiate from 'twinmotion_walkthrough' which suggests a different output type.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like 'lumion_render' or 'twinmotion_walkthrough'. The description implies it's for static renders but doesn't state this explicitly or provide any context about prerequisites, dependencies, or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

twinmotion_walkthrough C

Generate animated cinematic walkthrough video of the model.

Parameters (JSON Schema)
Name | Required | Description | Default
style | No | — | —
model_id | Yes | — | —
duration_seconds | No | — | —
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool generates a video, implying a potentially resource-intensive or time-consuming operation, but fails to mention critical details like required permissions, whether it's asynchronous, expected runtime, output format, or any rate limits. This leaves significant gaps in understanding how the tool behaves in practice.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's function without unnecessary words. It's front-loaded with the core action ('generate'), making it easy to parse quickly, though this brevity contributes to gaps in other dimensions.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of video generation (a non-trivial operation), no annotations, no output schema, and 0% schema description coverage, the description is insufficient. It lacks details on behavioral traits, parameter meanings, expected outputs, and usage context, making it incomplete for effective agent decision-making.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate but adds no parameter information. It doesn't explain what 'model_id' refers to, the meaning of 'style' enum values (cinematic, technical, presentation), or how 'duration_seconds' affects the output. This leaves all three parameters semantically unclear beyond their schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('generate') and the resource ('animated cinematic walkthrough video of the model'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'twinmotion_render' or 'lumion_render', which might offer similar rendering capabilities, leaving some ambiguity about when to choose this specific tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing a pre-existing model), exclusions, or compare it to siblings like 'twinmotion_render' or 'xr_launch_vr_session', leaving the agent to infer usage context from the tool name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

upload_model A

Upload a 3D model file to APS/ScanBIM (Revit .rvt, Navisworks .nwd/.nwc, IFC, FBX, OBJ, SolidWorks, point clouds, 50+ formats). Translates via Autodesk Platform Services and returns a browser-based 3D viewer link and QR code.

Parameters (JSON Schema)
Name | Required | Description | Default
file_url | Yes | Public URL to the 3D model file | —
file_name | Yes | Filename with extension (e.g. building.rvt) | —
project_name | No | Project name for organization (optional) | —
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses key behaviors: translation via Autodesk Platform Services and return of a viewer link and QR code. However, it lacks details on permissions, rate limits, file size constraints, or error handling, which are important for a file upload operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by format examples and outcomes, all in two efficient sentences. There is no wasted text, and every element (e.g., format list, translation detail, return values) serves to clarify the tool's function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description is moderately complete: it covers the upload action, formats, translation process, and return values. However, for a tool that mutates state (uploading files), it lacks details on authentication, side effects, or error scenarios, leaving gaps in contextual understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents the three parameters. The description does not add any parameter-specific semantics beyond what the schema provides, such as format requirements for 'file_url' or naming conventions for 'file_name'. Baseline score of 3 is appropriate as the schema handles the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Upload a 3D model file'), identifies the target system ('to APS/ScanBIM'), enumerates supported formats with examples, and distinguishes this tool from siblings like 'list_models' or 'get_viewer_link' by focusing on the upload and translation process.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when a 3D model needs uploading and translation, but does not explicitly state when to use this versus alternatives like 'get_supported_formats' for format checks or 'get_viewer_link' if a link already exists. No exclusions or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
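
The missing "use X instead of Y when Z" guidance could be a single appended sentence. An illustrative sketch, reusing sibling tool names from this server's own listing:

```python
# Illustrative usage-guidance sentence for the upload_model description.
# Tool names are from this server's listing; the wording is a sketch.
usage_note = (
    "Call get_supported_formats first if you are unsure the file type is "
    "supported; if the model is already uploaded, call get_viewer_link "
    "instead of re-uploading."
)
```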

xr_launch_ar_session C

Launch AR passthrough session — overlay BIM on real jobsite via camera.

Parameters (JSON Schema)
Name | Required | Description | Default
scale | No | — | —
model_id | Yes | — | —
session_name | No | — | —
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions launching a session and overlaying BIM, implying a real-time, interactive operation, but fails to describe critical behaviors such as required permissions, session duration, resource consumption, error handling, or what happens if a session is already active. This leaves significant gaps for safe and effective tool invocation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise—a single sentence that directly states the tool's purpose without any fluff or redundant information. It is front-loaded and efficiently communicates the core functionality, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of launching an AR session (likely involving real-time graphics, device compatibility, and user interaction), the description is incomplete. With no annotations, no output schema, and 0% schema description coverage, it lacks details on behavioral traits, return values, error conditions, and parameter meanings. This makes it inadequate for an agent to use the tool confidently without additional context or trial-and-error.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage, so the description must compensate, but it adds no parameter-specific information. It doesn't explain what 'model_id', 'scale', or 'session_name' mean in context, their formats, or how they affect the session. With three parameters (one required) and no schema descriptions, this is a minimal baseline score, as the description fails to clarify parameter roles beyond what the schema's structure implies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Launch AR passthrough session') and the resource/outcome ('overlay BIM on real jobsite via camera'), which is specific and informative. However, it doesn't explicitly differentiate from its sibling 'xr_launch_vr_session', which might be a related AR/VR tool, leaving some ambiguity about when to choose one over the other.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'xr_launch_vr_session' or other AR/VR-related tools. It lacks context about prerequisites, typical use cases, or exclusions, leaving the agent to infer usage based on the name and description alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

xr_launch_vr_session C

Launch immersive VR walkthrough on Meta Quest via ScanBIM XR. Share via QR code.

Parameters (JSON Schema)
Name | Required | Description | Default
model_id | Yes | — | —
session_name | No | — | —
max_participants | No | — | —
enable_measurements | No | — | —
enable_voice_annotations | No | — | —
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions launching and sharing via QR code, but lacks details on permissions required, whether the session is persistent or temporary, rate limits, error handling, or what happens after launch (e.g., session management). This is a significant gap for a tool that likely involves resource allocation and user interaction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with two brief sentences that are front-loaded with the core action. Every word earns its place by specifying the action, platform, and key feature without any fluff or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of launching a VR session with 5 parameters, no annotations, and no output schema, the description is incomplete. It lacks details on behavioral traits, parameter meanings, expected outputs, and usage context, making it inadequate for an agent to fully understand how to invoke this tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate for all 5 parameters. It only implicitly relates to 'model_id' (via 'immersive VR walkthrough') and 'session_name' (via 'Share via QR code'), but provides no meaning for 'max_participants', 'enable_measurements', or 'enable_voice_annotations'. The description adds minimal value beyond the schema's parameter names.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
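To make the gap concrete, here is one way the input schema itself could carry the missing parameter descriptions. The parameter names come from the review above; the types, defaults, and semantics are assumptions for illustration, not the server's actual schema:

```json
{
  "type": "object",
  "properties": {
    "model_id": {
      "type": "string",
      "description": "ID of the model to walk through, e.g. as returned by list_models (assumed)."
    },
    "session_name": {
      "type": "string",
      "description": "Human-readable session name shown to participants and embedded in the QR-code invite (assumed)."
    },
    "max_participants": {
      "type": "integer",
      "description": "Maximum number of concurrent headsets allowed in the session (assumed)."
    },
    "enable_measurements": {
      "type": "boolean",
      "description": "Whether participants can take in-VR measurements (assumed)."
    },
    "enable_voice_annotations": {
      "type": "boolean",
      "description": "Whether participants can record voice notes anchored to model locations (assumed)."
    }
  },
  "required": ["model_id"]
}
```

With per-parameter descriptions in place, the tool description no longer has to compensate for a 0% schema coverage.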

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Launch immersive VR walkthrough') and target platform ('on Meta Quest via ScanBIM XR'), with a specific feature mentioned ('Share via QR code'). It distinguishes from the sibling 'xr_launch_ar_session' by specifying VR rather than AR, but doesn't explicitly differentiate from other immersive tools like 'twinmotion_walkthrough' beyond the platform mention.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives is provided. The description implies usage for VR walkthroughs on Meta Quest, but doesn't specify prerequisites, compare to 'xr_launch_ar_session' for AR vs VR scenarios, or mention when not to use it (e.g., for non-VR platforms or without a model).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
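A description that addresses the guidance gap might look like the following sketch. The prerequisite, persistence, and alternative-tool claims are illustrative assumptions about the server's behavior, not documented facts:

```json
{
  "name": "xr_launch_vr_session",
  "description": "Launch an immersive VR walkthrough of a processed model on Meta Quest via ScanBIM XR and return a QR-code invite. Assumed behavior: requires a model that has already been converted, and creates a live multi-user session that persists until explicitly ended. Use xr_launch_ar_session instead for on-site AR overlay, or twinmotion_walkthrough for a pre-rendered, non-interactive walkthrough."
}
```

Even two added sentences of this kind would lift both the Behavior and Usage Guidelines scores without sacrificing the description's conciseness.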

xr_list_sessions: C

List active and past VR/AR sessions.

Parameters (JSON Schema)

Name         | Required | Description | Default
model_id     | No       |             |
session_type | No       |             |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It implies a read-only operation by using 'List', but doesn't specify if it requires authentication, has rate limits, returns paginated results, or what the output format looks like. For a tool with zero annotation coverage, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
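Rather than leaving read-only behavior implied by the verb "List", the server could declare it through MCP tool annotations, which the review notes are entirely absent. A minimal sketch using the standard annotation hints (the values shown are assumptions about this tool's behavior):

```json
{
  "name": "xr_list_sessions",
  "description": "List active and past VR/AR sessions.",
  "annotations": {
    "readOnlyHint": true,
    "openWorldHint": false
  }
}
```

`readOnlyHint: true` tells agents the call does not modify state, and `openWorldHint: false` signals it queries a closed set (the server's own session store, assumed) rather than the open web.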

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose without unnecessary words. It uses minimal text to convey the essential action and scope, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no annotations, no output schema, and low parameter schema coverage, the description is incomplete. It doesn't address behavioral aspects like authentication needs, return format, or error handling, leaving gaps that could hinder effective tool invocation by an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description mentions 'VR/AR sessions' but doesn't explain the parameters 'model_id' or 'session_type', which have 0% schema description coverage. It doesn't clarify what 'model_id' refers to (e.g., a specific VR model) or how 'session_type' with enum values ('vr', 'ar', 'all') affects the listing. This fails to compensate for the low schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
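The enum values cited above ('vr', 'ar', 'all') suggest a straightforward schema fix. A sketch of what documented parameters could look like — the filter semantics and default are assumptions for illustration:

```json
{
  "type": "object",
  "properties": {
    "model_id": {
      "type": "string",
      "description": "Optional filter: only return sessions for this model (assumed)."
    },
    "session_type": {
      "type": "string",
      "enum": ["vr", "ar", "all"],
      "default": "all",
      "description": "Filter by session kind; 'all' returns both VR and AR sessions (assumed)."
    }
  }
}
```

Descriptions at the schema level like these would lift the 0% coverage without touching the tool description itself.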

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('VR/AR sessions'), specifying both active and past sessions. It distinguishes from siblings like 'xr_launch_ar_session' and 'xr_launch_vr_session' by focusing on listing rather than launching. However, it doesn't explicitly differentiate from other list tools like 'acc_list_issues' or 'list_models', which keeps it from a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, such as needing specific permissions or setup, or compare it to other list tools like 'list_models' for context. This lack of usage context leaves the agent without clear direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
