Glama

Server Details

AI Hub for AEC — 50+ 3D formats, clash detection, ACC integration via Autodesk Platform Services.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: C

Average 2.9/5 across 19 of 19 tools scored.

Server Coherence: A
Disambiguation: 4/5

Most tools have distinct purposes targeting specific resources or actions (e.g., acc_create_issue vs. acc_create_rfi, twinmotion_render vs. twinmotion_walkthrough). However, some tools, such as lumion_render and twinmotion_render, could be confused because both generate architectural visualizations, though their descriptions differentiate the styles and features.

Naming Consistency: 4/5

The naming follows a mostly consistent snake_case pattern with clear verb_noun structures (e.g., acc_list_projects, get_model_metadata). Minor deviations exist, such as xr_launch_ar_session using 'launch' instead of a more uniform verb like 'start', but overall the conventions are readable and predictable.

Tool Count: 4/5

With 19 tools, the count is slightly high but reasonable for the server's broad scope covering BIM model management, rendering, XR, and ACC/Forma integration. Each tool appears to serve a specific function, though some overlap in visualization tools might indicate slight bloat.

Completeness: 5/5

The tool set provides comprehensive coverage for BIM workflows, including model upload and management (upload_model, list_models), metadata and clash detection (get_model_metadata, detect_clashes), rendering (lumion_render, twinmotion_render), XR sessions (xr_launch_ar_session, xr_list_sessions), and ACC/Forma integration (acc_create_issue, acc_search_documents). No obvious gaps are present, supporting full lifecycle operations from model handling to collaboration and visualization.

Available Tools

19 tools
acc_create_issue (C)

Create a real issue (punchlist/QC item) in ACC Build's Issues module via the APS Construction Issues v1 API. Returns the ACC-generated issue_id which can be linked back to a model URN or a detected clash. When to use: detect_clashes flagged a critical clash, or a field user reports a QC defect, and you want to track it in ACC for assignment and closeout. When NOT to use: you want to file a formal information request between trades — use acc_create_rfi instead. You want a note on a model element — that is a markup, not an issue. APS scopes: data:read data:write account:read Rate limits: APS default ~50 req/min per app per endpoint; Model Derivative translation jobs ~60 req/min; OSS uploads size-limited per file to 100MB for direct upload, larger via resumable. Errors: 401 APS token expired/invalid — refresh; 403 scope or resource permission denied (app not provisioned for the project's ACC account); 404 project_id not found — check the ID (strip any leading 'b.'); 429 rate limited — backoff and retry; 5xx APS upstream outage — retry with jitter. Side effects: NON-IDEMPOTENT. Creates a new ACC issue each call (repeated calls create duplicates). Inserts a row into D1 usage_log.
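The retry guidance in the description (backoff and retry on 429, retry with jitter on 5xx) can be sketched as a small wrapper. This is a sketch under assumptions: `call_tool` stands in for whatever MCP client invocation you use and is presumed to return an HTTP-like status plus a body; neither is part of this server's API.

```python
import random
import time

# Statuses the description marks as retryable (429 rate limit, 5xx outage).
RETRYABLE = {429, 500, 502, 503, 504}

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Exponential backoff with full jitter, capped at `cap` seconds."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))

def create_issue_with_retry(call_tool, args: dict, max_attempts: int = 5,
                            base: float = 1.0):
    """Retry acc_create_issue on 429/5xx with jittered backoff.

    Caution: the tool is NON-IDEMPOTENT. Only retry responses where the
    request was rejected before an issue was created (e.g. 429);
    blindly retrying an ambiguous 5xx can create duplicate issues.
    `call_tool` is a hypothetical MCP client helper, not part of the API.
    """
    status, body = None, None
    for attempt in range(max_attempts):
        status, body = call_tool("acc_create_issue", args)
        if status not in RETRYABLE:
            return status, body
        time.sleep(backoff_delay(attempt, base))
    return status, body  # exhausted retries; surface the last response
```

Because repeated calls create duplicate ACC issues, a production wrapper would want some idempotency guard (e.g. checking acc_list_issues for a matching title before retrying an ambiguous failure).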

Parameters (JSON Schema)

- title (required): Short human-readable issue title, 1-255 chars. Shows up as the headline in ACC Issues UI.
- due_date (optional): ISO 8601 calendar date (YYYY-MM-DD). Time component is ignored by ACC.
- priority (optional): ACC issue priority. Defaults to 'medium' if omitted.
- project_id (required): ACC project ID in either 'b.<uuid>' or plain '<uuid>' form (the worker strips the 'b.' prefix before calling the Issues endpoint). Obtainable via acc_list_projects.
- assigned_to (optional): ACC user ID (UUID) or email of the assignee. Pass null or omit to leave unassigned.
- description (required): Long-form issue body. Plain text; supports newlines. Include clash coordinates, trade impact, and suggested fix.
- linked_model_id (optional): Optional APS URN linking this issue back to the source model. Stored for ScanBIM cross-referencing; not forwarded to ACC's linkedDocuments field.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It states this creates a 'real issue' (implying a write/mutation operation) but doesn't disclose behavioral traits like required permissions, whether it's idempotent, rate limits, or what happens on success/failure. This is inadequate for a mutation tool with zero annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero wasted words. It's appropriately sized and front-loaded with the core action, making it easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given this is a mutation tool with 7 parameters, low schema coverage (43%), no annotations, and no output schema, the description is incomplete. It lacks crucial context like expected outcomes, error handling, or dependencies, leaving significant gaps for the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is low (43%), with only 3 of 7 parameters having descriptions. The description adds no parameter semantics beyond what's implied by the tool name (e.g., it doesn't explain what 'project_id' or 'linked_model_id' mean in context). This fails to compensate for the schema gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Create') and target resource ('real issue in ACC/Forma via APS Issues API'), making the purpose unambiguous. However, it doesn't explicitly differentiate from sibling tools like 'acc_create_rfi' or 'acc_list_issues', which would require a 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. There are no mentions of prerequisites, context, or exclusions, leaving the agent with no usage direction beyond the basic purpose.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

acc_create_rfi (C)

Create a Request For Information in ACC Build's RFIs module via the APS Construction RFIs v1 API, in 'draft' status. Returns the ACC rfi_id. When to use: a trade or subcontractor needs formal information from the design team (unclear detail, conflicting spec, missing dimension) and you want a tracked paper trail. When NOT to use: the item is just a punchlist fix — use acc_create_issue. The question is internal to one trade — handle inside that trade's toolchain. APS scopes: data:read data:write account:read Rate limits: APS default ~50 req/min per app per endpoint; Model Derivative translation jobs ~60 req/min; OSS uploads size-limited per file to 100MB for direct upload, larger via resumable. Errors: 401 APS token expired/invalid — refresh; 403 scope or resource permission denied (app not provisioned for the project's ACC account, or RFIs module not enabled); 404 project_id not found — check the ID; 429 rate limited — backoff and retry; 5xx APS upstream outage — retry with jitter. Side effects: NON-IDEMPOTENT. Creates a new draft RFI each call. Inserts a row into D1 usage_log.
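The use/don't-use split between acc_create_issue and acc_create_rfi lends itself to a small routing helper. Everything here is illustrative: the clash dict shape and the `needs_design_answer` flag are assumptions for the sketch, not part of detect_clashes' documented output.

```python
def route_clash(clash: dict) -> tuple[str, dict]:
    """Decide which ACC tool a detected clash should go to.

    Per the tool guidance: a QC/punchlist defect becomes an ACC issue;
    a question that needs a formal answer from the design team becomes
    a draft RFI. The clash dict keys here are assumed, not documented.
    """
    if clash.get("needs_design_answer"):
        return "acc_create_rfi", {
            "subject": f"Clash {clash['id']}: design clarification needed",
            "question": clash["description"],
            "project_id": clash["project_id"],
            # Stored for ScanBIM cross-referencing only; not sent to ACC.
            "linked_clash_id": clash["id"],
        }
    return "acc_create_issue", {
        "title": f"Clash {clash['id']}",
        "description": clash["description"],
        "project_id": clash["project_id"],
        "priority": "critical" if clash.get("severity") == "critical" else "medium",
    }
```

The returned (tool name, arguments) pair would then be passed to your MCP client's tool-call method.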

Parameters (JSON Schema)

- subject (required): Short RFI subject line, 1-255 chars. Appears as the RFI headline in ACC.
- priority (optional): RFI priority. Defaults to 'medium'.
- question (required): Full question body sent to the design team. Plain text with newlines allowed.
- project_id (required): ACC project ID in 'b.<uuid>' or '<uuid>' form (the 'b.' prefix is stripped automatically). Obtainable via acc_list_projects.
- assigned_to (optional): ACC user ID (UUID) or email of the responder. Pass null or omit to leave unassigned.
- linked_clash_id (optional): Optional clash ID from detect_clashes output used to link this RFI back to the triggering clash. Stored for ScanBIM cross-referencing; not forwarded to ACC.
- linked_model_id (optional): Optional APS URN of the model the RFI references. Stored for ScanBIM cross-referencing; not forwarded to ACC.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states this creates a 'real RFI', implying a write operation, but doesn't cover critical aspects like authentication requirements, rate limits, error handling, or what 'real' entails (e.g., immediate persistence). This leaves significant gaps for an agent to understand the tool's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's appropriately sized and front-loaded, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a creation tool with 7 parameters, 0% schema coverage, no annotations, and no output schema, the description is incomplete. It doesn't address parameter meanings, behavioral traits, or usage context, leaving the agent with insufficient information to effectively invoke the tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 0%, meaning none of the 7 parameters are documented in the schema. The description adds no information about parameters beyond implying 'project_id', 'subject', and 'question' are involved in creation. It doesn't explain what 'linked_clash_id' or 'assigned_to' mean, failing to compensate for the schema's lack of documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Create') and resource ('a real RFI in ACC/Forma via APS RFIs API'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'acc_create_issue' or 'acc_list_rfis', which would require mentioning what distinguishes RFI creation from issue creation or listing operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, such as needing a valid project_id, or compare it to sibling tools like 'acc_create_issue' for context on when RFIs versus issues are appropriate, leaving usage unclear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

acc_list_issues (C)

List up to 50 issues from an ACC project, optionally filtered by status and priority. Returns a normalized array of {id, title, status, priority, due_date}. When to use: you need a dashboard view of open issues, to find a specific issue by metadata, or to check the status of previously created issues. When NOT to use: you want the full audit trail of a single issue — the ACC Issues UI or the per-issue endpoint is better. This tool caps at 50 results and does no pagination. APS scopes: data:read account:read Rate limits: APS default ~50 req/min per app per endpoint; Model Derivative translation jobs ~60 req/min; OSS uploads size-limited per file to 100MB for direct upload, larger via resumable. Errors: 401 APS token expired/invalid — refresh; 403 scope or resource permission denied; 404 project_id not found — check the ID; 429 rate limited — backoff and retry; 5xx APS upstream outage — retry with jitter. Side effects: READ-ONLY. Inserts a row into D1 usage_log. Idempotent.
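Because the tool caps at 50 results with no pagination, a broad query that returns exactly 50 items has probably been truncated. One best-effort workaround is to re-query per status/priority slice (values taken from the parameter docs) and deduplicate; a sliced query can still hit the cap, so this is mitigation, not pagination. `call_tool` is a hypothetical MCP client invocation that returns the normalized issue array.

```python
def list_issues_narrowed(call_tool, project_id: str) -> list:
    """Best-effort workaround for acc_list_issues' 50-result cap.

    One broad query first; if it comes back at the cap, re-query per
    status/priority slice and dedupe by issue id. Slices can still be
    truncated at 50, so this narrows the problem rather than solving it.
    """
    issues = call_tool("acc_list_issues", {"project_id": project_id})
    if len(issues) < 50:
        return issues  # cap not hit; the single call was complete
    by_id = {}
    for status in ("open", "closed", "in_review", "draft"):
        for priority in ("critical", "high", "medium", "low"):
            for issue in call_tool("acc_list_issues", {
                "project_id": project_id,
                "status": status,
                "priority": priority,
            }):
                by_id[issue["id"]] = issue  # dedupe across slices
    return list(by_id.values())
```

Note the extra calls count against the ~50 req/min APS budget, so only fan out when the cap is actually hit.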

Parameters (JSON Schema)

- status (optional): Filter by ACC issue status. Accepted values: 'open', 'closed', 'in_review', 'draft'. Omit for all statuses.
- priority (optional): Filter by priority: 'critical' | 'high' | 'medium' | 'low'. Omit for all priorities.
- project_id (required): ACC project ID in 'b.<uuid>' or '<uuid>' form (the 'b.' prefix is stripped automatically). Obtainable via acc_list_projects.
- assigned_to (optional): Reserved for future filtering by assignee user ID or email. Currently not forwarded to the ACC API.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'list and filter' but does not specify if this is a read-only operation, what permissions are required, how results are returned (e.g., pagination), or any rate limits. This is inadequate for a tool with multiple parameters and no output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It is front-loaded and appropriately sized, making it easy to understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a filtering tool with 4 parameters, low schema coverage (25%), no annotations, and no output schema, the description is incomplete. It lacks details on behavioral traits, parameter usage, and return values, making it insufficient for effective tool invocation by an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is low at 25%, with only the 'status' parameter documented. The description adds value by implying filtering capabilities ('filter live issues'), which aligns with parameters like 'status', 'priority', and 'assigned_to', but it does not detail what these parameters mean or how to use them, partially compensating for the coverage gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('List and filter') and resource ('live issues from an ACC/Forma project'), providing a specific purpose. However, it does not explicitly differentiate from sibling tools like 'acc_list_projects' or 'acc_list_rfis', which list different resources, so it falls short of a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description offers no guidance on when to use this tool versus alternatives. It does not mention prerequisites, such as needing a project ID, or compare it to similar tools like 'acc_search_documents' for document-related queries, leaving usage context implied but not stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

acc_list_projects (B)

List every Autodesk Construction Cloud (ACC) / BIM 360 project the configured APS 2-legged app has access to, flattened across all hubs, with hub_id, hub_name, project_id, project_name, and project type. When to use: you need a project_id to pass into acc_create_issue, acc_list_issues, acc_create_rfi, acc_list_rfis, acc_search_documents, or acc_project_summary. When NOT to use: you already have the b.xxxx project_id. This tool makes N+1 API calls (one per hub) so avoid calling it in tight loops. APS scopes: data:read account:read Rate limits: APS default ~50 req/min per app per endpoint; Model Derivative translation jobs ~60 req/min; OSS uploads size-limited per file to 100MB for direct upload, larger via resumable. Errors: 401 APS token expired/invalid — refresh; 403 scope or resource permission denied (app not provisioned for any hub in ACC Account Admin → Custom Integrations); 404 no hubs found — check APS app provisioning; 429 rate limited — backoff and retry; 5xx APS upstream outage — retry with jitter. Side effects: READ-ONLY. Inserts a row into D1 usage_log. Idempotent.
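Since this tool fans out into N+1 API calls (one per hub), the description's advice to avoid tight loops suggests memoizing the result. A minimal sketch, assuming a `call_tool` MCP client helper and a 5-minute TTL (both are assumptions, not part of the server):

```python
import time

_cache = {"at": 0.0, "projects": None}
TTL_SECONDS = 300  # assumed; project lists change rarely, tune as needed

def cached_projects(call_tool) -> list:
    """Memoize acc_list_projects so agents don't repeatedly trigger its
    N+1 per-hub API calls. `call_tool` is a hypothetical MCP client
    invocation returning the flattened hub/project list."""
    now = time.monotonic()
    if _cache["projects"] is None or now - _cache["at"] > TTL_SECONDS:
        _cache["projects"] = call_tool("acc_list_projects", {})
        _cache["at"] = now
    return _cache["projects"]
```

Agents that already hold a 'b.xxxx' project_id should skip the call entirely, as the "When NOT to use" guidance says.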

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It states the tool lists projects but lacks behavioral details: it doesn't specify if this is a read-only operation, mention pagination or rate limits, describe the output format, or note any access restrictions beyond 'you have access to'. For a tool with zero annotation coverage, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's front-loaded with the core action. However, it could be slightly more structured by including key behavioral hints, but it earns high marks for brevity and clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (listing projects in a BIM system) and lack of annotations and output schema, the description is incomplete. It doesn't explain what information is returned (e.g., project names, IDs, status), how results are formatted, or any limitations. For a tool with no structured data support, this leaves significant gaps for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, meaning no parameters are documented in the schema. The description doesn't add parameter details, which is appropriate since there are none to explain. Baseline score is 4 for tools with 0 parameters, as no compensation is needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('List all') and resource ('ACC/BIM360 projects'), specifying the scope as projects accessible via APS. It distinguishes from siblings like acc_list_issues or acc_list_rfis by focusing on projects rather than issues or RFIs. However, it doesn't explicitly differentiate from acc_project_summary, which might overlap in purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites like authentication, compare to acc_project_summary for detailed project info, or indicate when listing projects is appropriate over searching or creating. This leaves the agent without context for tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

acc_list_rfis (C)

List up to 50 RFIs from an ACC project, optionally filtered by status. Returns a normalized array of {id, subject, status}. When to use: you need a quick rollup of outstanding or answered RFIs on a project, or to find a specific RFI id. When NOT to use: you want the full response thread of a single RFI — use the ACC UI or per-RFI endpoint. This tool caps at 50 results and does no pagination. APS scopes: data:read account:read Rate limits: APS default ~50 req/min per app per endpoint; Model Derivative translation jobs ~60 req/min; OSS uploads size-limited per file to 100MB for direct upload, larger via resumable. Errors: 401 APS token expired/invalid — refresh; 403 scope or resource permission denied (RFIs module may not be enabled on the project); 404 project_id not found — check the ID; 429 rate limited — backoff and retry; 5xx APS upstream outage — retry with jitter. Side effects: READ-ONLY. Inserts a row into D1 usage_log. Idempotent.

Parameters (JSON Schema)

- status (optional): Filter by RFI status. Common values: 'draft', 'open', 'answered', 'closed', 'void'. Omit for all.
- project_id (required): ACC project ID in 'b.<uuid>' or '<uuid>' form (the 'b.' prefix is stripped automatically).

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'list and filter live RFIs' but doesn't cover critical aspects like whether it's read-only, pagination behavior, rate limits, authentication needs, or what 'live' means operationally. This leaves significant gaps for an agent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's function without unnecessary words. It's appropriately sized and front-loaded with the core action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (2 parameters, no annotations, no output schema), the description is incomplete. It lacks details on behavior, parameter usage, output format, and how it fits with sibling tools, making it inadequate for full agent understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage, so the description must compensate. It implies filtering by status but doesn't explain the 'status' parameter's purpose, possible values, or how 'project_id' is used. This adds minimal semantic value beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('List and filter') and resource ('live RFIs from an ACC/Forma project'), making the purpose understandable. It doesn't explicitly differentiate from sibling tools like 'acc_list_issues' or 'acc_list_projects', which would require a 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description mentions filtering but doesn't specify when to use it over other list tools or how it relates to siblings like 'acc_search_documents' or 'acc_project_summary'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

acc_project_summary (C)

Fetch a single ACC/BIM 360 project's full attributes (name, type, dates, address, hub) from the APS Data Management project endpoint. If hub_id is omitted, the first hub the app can see is used. When to use: you need name, type, or scope details for a single project before acting on it, or to confirm the project still exists. When NOT to use: you want the list of all projects — call acc_list_projects. You want issues/RFIs counts — call the list tools. APS scopes: data:read account:read Rate limits: APS default ~50 req/min per app per endpoint; Model Derivative translation jobs ~60 req/min; OSS uploads size-limited per file to 100MB for direct upload, larger via resumable. Errors: 401 APS token expired/invalid — refresh; 403 scope or resource permission denied; 404 project_id or hub_id not found — check the IDs; 429 rate limited — backoff and retry; 5xx APS upstream outage — retry with jitter. Side effects: READ-ONLY. Inserts a row into D1 usage_log. Idempotent.
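Note the ID-handling asymmetry across the ACC tools: the Issues/RFIs tools strip a leading 'b.' themselves, while this tool passes project_id through unchanged and expects the full 'b.<uuid>' form from acc_list_projects. A small normalizer makes the convention explicit; the helper itself is illustrative, since each tool already tolerates the forms its own docs describe.

```python
def normalize_project_id(project_id: str, tool: str) -> str:
    """Reconcile the differing project_id forms across the ACC tools.

    Per the parameter docs: acc_project_summary wants the full
    'b.<uuid>' form, while the Issues/RFIs/search tools accept either
    form (they strip the 'b.' prefix server-side), so the bare UUID is
    the safe canonical form for them.
    """
    bare = project_id[2:] if project_id.startswith("b.") else project_id
    if tool == "acc_project_summary":
        return f"b.{bare}"  # Data Management endpoint needs the prefix
    return bare
```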

Parameters (JSON Schema)

- hub_id (optional): Optional ACC hub_id (format 'b.<account-uuid>'). If omitted, the worker picks the first hub returned by /project/v1/hubs.
- project_id (required): Full ACC project_id including the 'b.' prefix, exactly as returned by acc_list_projects. Unlike the Issues/RFIs tools, this tool passes the ID through unchanged to the Data Management project endpoint.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states 'Get' implies a read operation, but doesn't disclose behavioral traits such as whether it requires authentication, has rate limits, returns structured data, or handles errors. For a tool with no annotation coverage, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose and details. Every word earns its place, with no redundancy or wasted space. It's appropriately sized for a simple retrieval tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, 0% schema coverage, and no output schema, the description is incomplete. It doesn't explain what the summary includes beyond high-level categories, how parameters affect results, or the return format. For a tool with two parameters and complex output implied by 'full...summary', more context is needed to guide effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It mentions 'project summary' but doesn't explain the parameters 'project_id' (required) or 'hub_id' (optional), their formats, or how they relate to the output. The description adds no meaning beyond what the bare schema provides, failing to address the coverage gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Get') and resource ('ACC/Forma project summary'), and specifies what information is included ('hub, project metadata, and stats'). It distinguishes from siblings like 'acc_list_projects' by focusing on detailed summary rather than listing. However, it doesn't explicitly contrast with other tools that might retrieve project data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, when not to use it, or refer to sibling tools like 'acc_list_projects' for different use cases. The agent must infer usage from the tool name and description alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

acc_search_documents (C)

Full-text search the ACC Docs module on a project for drawings, specs, submittals, and other documents matching a query string. Calls the APS Data Management v1 search endpoint scoped to a project. When to use: an agent needs to locate a spec section, a sheet, or a submittal by keyword (e.g. 'fireproofing', 'A-101', 'RFI 23'). When NOT to use: you already have the document URN/lineage — fetch it directly. You want the file contents — this returns metadata; download separately via Data Management. APS scopes: data:read account:read Rate limits: APS default ~50 req/min per app per endpoint; Model Derivative translation jobs ~60 req/min; OSS uploads size-limited per file to 100MB for direct upload, larger via resumable. Errors: 401 APS token expired/invalid — refresh; 403 scope or resource permission denied (Docs module access required); 404 project_id not found — check the ID (note: this endpoint re-prepends 'b.' so pass the UUID form); 429 rate limited — backoff and retry; 5xx APS upstream outage — retry with jitter. Side effects: READ-ONLY. Inserts a row into D1 usage_log. Idempotent.
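A typical call shape for locating a sheet by number, following the examples in the description. The tool returns metadata only; file contents must be downloaded separately via Data Management. `call_tool` and the post-filter on document names are illustrative assumptions, not part of the server's API.

```python
def find_sheet(call_tool, project_id: str, sheet_number: str) -> list:
    """Locate a drawing sheet (e.g. 'A-101') by keyword.

    Queries acc_search_documents with the 'drawing' type filter
    (forwarded as filter[type]), then post-filters on the sheet number
    in the document name. Returns metadata records only -- downloading
    the file itself is a separate Data Management step.
    """
    results = call_tool("acc_search_documents", {
        "project_id": project_id,
        "query": sheet_number,      # URL-encoded automatically by the worker
        "document_type": "drawing",
    })
    return [doc for doc in results if sheet_number in doc.get("name", "")]
```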

Parameters (JSON Schema)

- query (required): Free-text search string. Matched against document names and attributes. URL-encoded automatically by the worker.
- project_id (required): ACC project ID in 'b.<uuid>' or '<uuid>' form (the 'b.' prefix is stripped and re-prepended automatically for the Data Management API).
- document_type (optional): Document type filter forwarded as filter[type]. Common values: 'drawing', 'spec', 'submittal', 'rfi', 'photo'.
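The error guidance above (429: back off and retry; 5xx: retry with jitter) and the project_id 'b.' prefix handling can be sketched as below. This is an illustrative sketch, not the worker's actual code; the `call` signature returning a `(status, body)` tuple is an assumption for the example.

```python
import random
import time

def normalize_project_id(project_id: str) -> str:
    """Strip an optional 'b.' prefix so a caller can pass either form
    (the worker is described as re-prepending it itself)."""
    return project_id[2:] if project_id.startswith("b.") else project_id

def call_with_backoff(call, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """Retry `call()` on 429/5xx-style statuses with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        status, body = call()
        if status == 429 or status >= 500:
            if attempt == max_attempts - 1:
                raise RuntimeError(
                    f"giving up after {max_attempts} attempts (last status {status})"
                )
            # full jitter: sleep a random amount up to the exponential cap
            sleep(random.uniform(0, base_delay * 2 ** attempt))
            continue
        return status, body
```

A wrapper like this keeps a single retry policy for all of the tools, since they quote the same 429/5xx advice.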
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states it's a search operation, implying read-only behavior, but doesn't cover critical aspects like authentication needs, rate limits, pagination, error handling, or what the search returns (e.g., list of documents with metadata). For a tool with 3 parameters and no output schema, this leaves significant gaps in understanding how it behaves.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose without unnecessary words. Every part ('Search drawings, specs, submittals and documents in ACC/Forma via APS') contributes directly to understanding the tool's function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (search tool with 3 parameters), lack of annotations, 0% schema description coverage, and no output schema, the description is incomplete. It covers the basic purpose but fails to provide necessary context on behavior, parameters, or results, making it inadequate for effective tool use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate but adds no parameter information. It doesn't explain what 'project_id', 'query', or 'document_type' mean, their formats, or how they affect the search. With 3 parameters (2 required) and no schema descriptions, this is a major shortfall.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Search') and the target resources ('drawings, specs, submittals and documents'), with the platform context ('in ACC/Forma via APS'). It distinguishes itself from siblings like 'acc_list_projects' or 'list_models' by focusing on document search rather than listing or other operations. However, it doesn't explicitly differentiate from hypothetical similar search tools (none exist among siblings), so it's not a perfect 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing a project_id), exclusions, or compare to siblings like 'acc_list_issues' for issue-related searches. Usage is implied by the name and purpose but not explicitly stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

detect_clashes (C)

Run a VDC-grade clash detection pass between two element categories in a translated model, returning each overlapping element pair with a severity (critical/warning), a trade-specific suggested fix, and an estimated rework hour count. Uses AABB bounding-box intersection on elements pulled from the APS Model Derivative properties endpoint, with a synthetic fallback if properties have not yet been computed.
When to use: you want a first-pass coordination report between two MEP or structural trades (e.g. Ducts vs Structural Framing) for a model that has finished translating.
When NOT to use: the model has not finished translating yet (call get_model_metadata first to confirm manifest.status=='success'), or you need clash detection between more than two categories — call this tool multiple times.
APS scopes: data:read viewables:read
Rate limits: APS default ~50 req/min per app per endpoint; Model Derivative translation jobs ~60 req/min; OSS uploads size-limited per file to 100MB for direct upload, larger via resumable.
Errors: 401 APS token expired/invalid — refresh; 403 scope or resource permission denied; 404 URN not found or has no derivatives yet — check the ID; 429 rate limited — backoff and retry; 5xx APS upstream outage — retry with jitter.
Side effects: READ-ONLY on APS. Inserts a row into D1 usage_log for analytics. Idempotent — repeated calls return the same clash set for a given model.

Parameters (JSON Schema)

- model_id (required): APS URN returned by upload_model. Base64url-encoded Autodesk object ID starting with 'dXJu' (which decodes to 'urn:adsk.objects:os.object:...'). Unpadded.
- category_a (required): Revit/IFC category name (case-sensitive, exactly as it appears in the model properties). Common values: 'Ducts', 'Pipes', 'Electrical', 'Structural Framing', 'Structural Columns', 'Mechanical Equipment', 'Walls'.
- category_b (required): Second Revit/IFC category to clash against category_a. Case-sensitive; must match a category present in the translated model's property set.
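The AABB bounding-box intersection the description names can be sketched as follows. The element dicts with "id"/"min"/"max" keys are an assumed layout for illustration, not the tool's actual wire format.

```python
from itertools import product

def aabb_intersects(a_min, a_max, b_min, b_max) -> bool:
    """Two axis-aligned bounding boxes overlap iff they overlap on every axis."""
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

def clash_pairs(elements_a, elements_b):
    """Naive O(n*m) pass returning (id_a, id_b) for every overlapping pair
    between two category element lists."""
    return [
        (ea["id"], eb["id"])
        for ea, eb in product(elements_a, elements_b)
        if aabb_intersects(ea["min"], ea["max"], eb["min"], eb["max"])
    ]
```

Note that boxes that merely touch count as overlapping under the `<=` comparison; a production pass would typically add a tolerance and a spatial index instead of the all-pairs loop.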
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It adds some context beyond basic functionality by mentioning 'Uses 20+ years of field-tested VDC intelligence' and outputs like 'assess severity, suggest fixes, and estimate rework hours.' However, it lacks critical details such as whether this is a read-only or destructive operation, performance characteristics (e.g., runtime, rate limits), authentication needs, or error handling. This is a significant gap for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and well-structured in two sentences. The first sentence clearly states the core functionality, and the second adds valuable context about intelligence and outputs. There's no wasted text, and it's front-loaded with the main purpose. However, it could be slightly more efficient by integrating the context into the first sentence without losing clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of clash detection (a potentially resource-intensive analysis tool), no annotations, and no output schema, the description is incomplete. It hints at outputs ('assess severity, suggest fixes, and estimate rework hours') but doesn't detail the return format, error conditions, or behavioral traits like side effects. For a tool with no structured data to rely on, this leaves significant gaps for an AI agent to understand how to invoke it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so the schema already documents all three parameters (model_id, category_a, category_b) with clear descriptions. The tool description doesn't add any parameter-specific information beyond what's in the schema, such as examples for categories or constraints on model_id. With high schema coverage, the baseline score of 3 is appropriate, as the description doesn't compensate but also doesn't need to given the schema's completeness.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Run VDC clash detection between two element categories in a BIM model.' It specifies the verb ('Run VDC clash detection'), resource ('BIM model'), and scope ('between two element categories'). However, it doesn't explicitly differentiate from sibling tools like 'acc_create_issue' or 'get_model_metadata', which might handle related BIM tasks but not clash detection specifically.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions the tool's capabilities (e.g., 'assess severity, suggest fixes, and estimate rework hours'), but doesn't specify prerequisites, exclusions, or recommend other tools for different tasks. For example, it doesn't clarify if this should be used before or after tools like 'acc_create_issue' for reporting clashes.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_model_metadata (C)

Fetch the APS Model Derivative manifest and metadata for a URN, including translation progress, derivative outputs, and a viewer URL. Use this to confirm a model has finished translating (manifest.status == 'success') before calling detect_clashes or opening the viewer.
When to use: right after upload_model to poll translation progress, or later to inspect which viewable derivatives (SVF2, thumbnail, OBJ) are available.
When NOT to use: you just want a link to share — call get_viewer_link. You want the actual element properties list — this tool returns the metadata index, not the full property collection.
APS scopes: data:read viewables:read
Rate limits: APS default ~50 req/min per app per endpoint; Model Derivative translation jobs ~60 req/min; OSS uploads size-limited per file to 100MB for direct upload, larger via resumable.
Errors: 401 APS token expired/invalid — refresh; 403 scope or resource permission denied; 404 URN not found or job not yet submitted — check the ID; 429 rate limited — backoff and retry; 5xx APS upstream outage — retry with jitter.
Side effects: READ-ONLY on APS. Inserts a row into D1 usage_log. Idempotent.

Parameters (JSON Schema)

- model_id (required): APS URN (base64url-encoded Autodesk object ID, starts with 'dXJu', unpadded) as returned by upload_model.
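The "poll until manifest.status == 'success'" pattern the description recommends can be sketched like this. `fetch_manifest` is a hypothetical callable standing in for a get_model_metadata call that returns the manifest dict.

```python
import time

def wait_for_translation(fetch_manifest, timeout_s=600, interval_s=5,
                         sleep=time.sleep, clock=time.monotonic):
    """Poll a manifest callable until status is 'success'; fail fast on 'failed'."""
    deadline = clock() + timeout_s
    while clock() < deadline:
        manifest = fetch_manifest()
        status = manifest.get("status")
        if status == "success":
            return manifest
        if status == "failed":
            raise RuntimeError("translation failed; check the manifest for details")
        sleep(interval_s)  # 'inprogress', 'pending', etc.: wait and re-poll
    raise TimeoutError("translation did not finish within the timeout")
```

Running this gate before detect_clashes matches the "When NOT to use" guidance in that tool's description.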
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states this is a read operation ('Get'), implying it's likely non-destructive, but doesn't clarify authentication needs, rate limits, error conditions, or what happens if the model_id is invalid. For a tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose and lists key metadata details. It avoids unnecessary words, but could be slightly more structured (e.g., by explicitly stating it's for retrieving metadata rather than listing models).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of retrieving model metadata, the lack of annotations, and no output schema, the description is incomplete. It doesn't explain the return format, potential errors, or how to interpret the metadata fields (e.g., what 'translation status' means). For a tool with these gaps, more context is needed to be fully helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% description coverage, with the single parameter 'model_id' documented as 'APS URN or model ID'. The description adds no additional parameter semantics beyond what the schema provides, such as format examples or where to obtain the model_id. Since schema coverage is high, the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and resource ('detailed metadata for a model'), specifying what information is retrieved (element count, format, translation status, properties) and the underlying service (APS Model Derivative). However, it doesn't explicitly differentiate from sibling tools like 'list_models' or 'get_supported_formats', which would be needed for a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing a model ID from 'list_models'), exclusions, or comparisons to siblings like 'list_models' (which might list models without metadata) or 'get_supported_formats' (which might provide format info without model-specific details).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_supported_formats (B)

Return the full matrix of supported input formats organized by subscription tier (free / pro / enterprise). Use to tell a user whether their file type is accepted before calling upload_model, or to surface pricing tier info.
When to use: you need to validate a file extension or show a customer the supported format list.
When NOT to use: you already know the extension is common (.rvt/.ifc/.nwd/.obj) — just call upload_model, which returns an 'Unsupported format' error for anything outside the matrix.
APS scopes: none (static data).
Rate limits: APS default ~50 req/min per app per endpoint; Model Derivative translation jobs ~60 req/min; OSS uploads size-limited per file to 100MB for direct upload, larger via resumable.
Errors: 401 APS token expired/invalid — refresh (not applicable: no APS call); 403 scope or resource permission denied (not applicable); 404 not applicable; 429 rate limited — backoff and retry (worker-level only); 5xx APS upstream outage — retry with jitter (not applicable).
Side effects: READ-ONLY and pure. Idempotent.

Parameters (JSON Schema)

No parameters
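The "validate a file extension before calling upload_model" flow can be sketched as below. The tier matrix here is a tiny hypothetical subset for illustration, not the server's actual 50+ format list.

```python
# Hypothetical subset of the tier matrix; the real list comes from get_supported_formats.
TIER_FORMATS = {
    "free": {".obj", ".ifc"},
    "pro": {".obj", ".ifc", ".rvt", ".nwd"},
    "enterprise": {".obj", ".ifc", ".rvt", ".nwd", ".dgn"},
}

def is_supported(filename: str, tier: str) -> bool:
    """Case-insensitive extension check against a tier's format set."""
    ext = ("." + filename.rsplit(".", 1)[-1].lower()) if "." in filename else ""
    return ext in TIER_FORMATS.get(tier, set())
```

An agent would build TIER_FORMATS from this tool's response once, then gate upload_model calls locally.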

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool lists formats but doesn't describe traits like whether it's read-only, if it requires authentication, rate limits, or how the data is structured (e.g., as a list or grouped by tier). This leaves significant gaps in understanding the tool's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the key information: 'List all 50+ supported 3D file formats by tier.' It wastes no words and clearly communicates the core purpose without redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (0 parameters, no output schema, no annotations), the description is minimally adequate. It states what the tool does but lacks context on behavior, usage, or output format. For a simple listing tool, this might suffice, but it doesn't fully compensate for the absence of annotations or output details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description doesn't add parameter details, which is appropriate. A baseline of 4 is given since no parameters exist, and the description doesn't contradict or add unnecessary information.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'List all 50+ supported 3D file formats by tier.' It specifies the verb ('List'), resource ('supported 3D file formats'), and scope ('by tier'), making it easy to understand. However, it doesn't explicitly differentiate from sibling tools like 'list_models' or 'get_model_metadata', which might also involve file formats, so it misses full sibling distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, context for usage, or exclusions. For example, it doesn't clarify if this is for checking compatibility before upload or for general reference, leaving the agent without usage direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_models (C)

List every object currently stored in the scanbim-models OSS bucket, with URN, size in MB, and a viewer URL for each. Returns the raw OSS inventory, not the D1 models table, so freshly uploaded items appear immediately.
When to use: you need to enumerate previously uploaded models to find a URN, show an inventory, or pick one for a follow-up tool call.
When NOT to use: you already know the exact URN — call get_model_metadata directly. This tool is not a search; it returns up to the OSS default page (typically first 10 objects unless OSS paginates).
APS scopes: bucket:read data:read
Rate limits: APS default ~50 req/min per app per endpoint; Model Derivative translation jobs ~60 req/min; OSS uploads size-limited per file to 100MB for direct upload, larger via resumable.
Errors: 401 APS token expired/invalid — refresh; 403 scope or resource permission denied; 404 bucket not found — no models have been uploaded yet (upload one first); 429 rate limited — backoff and retry; 5xx APS upstream outage — retry with jitter.
Side effects: READ-ONLY. Idempotent.

Parameters (JSON Schema)

- format (optional): Reserved for future filtering by file extension (e.g. 'rvt', 'ifc'). Currently informational only; the OSS listing is not filtered by this value.
- project_name (optional): Reserved for future filtering by the D1 project_name column. Currently informational only; the OSS listing is not filtered by this value.
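The unpadded base64url URN form these tools trade in (the 'dXJu' prefix, decoding to 'urn:adsk.objects:os.object:...') can be produced and reversed like this. The object key in the usage example is made up; only the 'scanbim-models' bucket name comes from the listing above.

```python
import base64

def encode_urn(object_urn: str) -> str:
    """Base64url-encode an objects URN and strip '=' padding,
    matching the unpadded form the tools describe."""
    return base64.urlsafe_b64encode(object_urn.encode()).decode().rstrip("=")

def decode_urn(model_id: str) -> str:
    """Re-pad to a multiple of 4 and decode back to the raw objects URN."""
    padded = model_id + "=" * (-len(model_id) % 4)
    return base64.urlsafe_b64decode(padded).decode()
```

Because 'urn:' base64-encodes to 'dXJu...', the prefix check the parameter docs mention is a quick sanity test before passing a model_id along.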
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states it's a list operation, implying read-only behavior, but doesn't mention any constraints like pagination, rate limits, authentication needs, or what 'all' entails (e.g., completeness, ordering). This leaves significant gaps for a tool with parameters.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that is front-loaded with the core purpose. There is no wasted verbiage, making it appropriately concise for a simple list tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (2 parameters, no annotations, no output schema), the description is incomplete. It doesn't explain the parameters, return values, or behavioral traits, leaving the agent with insufficient information to use the tool effectively beyond its basic purpose.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 2 parameters with 0% description coverage, and the tool description provides no information about what 'format' or 'project_name' mean, their expected values, or how they affect the listing. This fails to compensate for the low schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('List all uploaded models') and resource ('in APS OSS storage'), providing a specific verb+resource combination. However, it doesn't explicitly differentiate from sibling tools like 'get_model_metadata' or 'get_supported_formats', which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description lacks context about prerequisites, exclusions, or comparisons to sibling tools such as 'acc_list_projects' or 'get_model_metadata', leaving usage unclear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lumion_render (C)

Queue a Lumion-style architectural visualization still render with landscaping, people, vehicles, and atmospheric effects. Returns a render_id and preview_url; the render pipeline is a ScanBIM roadmap item so today this tool responds synchronously with a stub job descriptor.
When to use: you want a more 'Lumion-flavored' render (lush entourage, vehicles, people) vs. Twinmotion's cleaner look.
When NOT to use: you need real-time viewing — use get_viewer_link. You need video — use twinmotion_walkthrough.
APS scopes: none today (render pipeline is ScanBIM-internal); viewables:read data:read will apply when live.
Rate limits: APS default ~50 req/min per app per endpoint; Model Derivative translation jobs ~60 req/min; OSS uploads size-limited per file to 100MB for direct upload, larger via resumable.
Errors: 401 APS token expired/invalid — refresh (will apply when pipeline is live); 403 scope or resource permission denied; 404 URN not found — check the ID; 429 rate limited — backoff and retry; 5xx APS upstream outage — retry with jitter.
Side effects: NON-IDEMPOTENT. Each call mints a new render_id (lum_<epoch_ms>). Inserts a row into D1 usage_log.

Parameters (JSON Schema)

- style (optional): Overall visual preset. 'photorealistic' = full PBR, 'artistic' = painterly, 'sketch' = line-drawing overlay, 'aerial' = drone perspective.
- model_id (required): APS URN (base64url-encoded, starts with 'dXJu', unpadded) of the model to render.
- add_people (optional): Populate animated/static human entourage. Defaults to true.
- add_vehicles (optional): Populate cars, trucks, and other vehicles in parking/streets. Defaults to false.
- add_landscaping (optional): Populate trees, shrubs, and ground cover appropriate to region. Defaults to true.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions what visual elements are generated but doesn't cover critical aspects like whether this is a read/write operation, processing time, authentication needs, rate limits, or output format. For a tool that likely involves significant computation, this lack of transparency is a notable gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys the core functionality without unnecessary words. It's front-loaded with the main action and includes relevant details in a compact list format, making every word count.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a rendering tool with 5 parameters, 0% schema coverage, no annotations, and no output schema, the description is insufficient. It doesn't explain what the tool returns, how long processing takes, error conditions, or dependencies on other tools like 'list_models'. The agent lacks critical context to use this tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate for all 5 parameters. It only partially addresses parameters by mentioning 'landscaping, people, vehicles' (mapping to add_landscaping, add_people, add_vehicles) but omits model_id and style entirely. The atmospheric effects hint at style options but aren't specific. This leaves key parameters undocumented.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'generate' and the resource 'Lumion-style architectural visualization', specifying the visual elements included (landscaping, people, vehicles, atmospheric effects). It distinguishes itself from sibling tools like 'twinmotion_render' by specifying the Lumion style, though it doesn't explicitly contrast with other rendering tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'twinmotion_render' or 'get_viewer_link'. It mentions the visual elements but doesn't specify use cases, prerequisites, or exclusions, leaving the agent to infer usage from the tool name and description alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

twinmotion_render (C)

Queue a photorealistic Twinmotion-style still render of a translated model with time-of-day, weather, season, and resolution controls. Returns a render_id and preview_url; the actual render pipeline is a ScanBIM roadmap item (Week 5 buildout), so today this tool responds synchronously with a stub job descriptor.
When to use: you want a scripted way to request a hero still for a proposal or client deck.
When NOT to use: you need real-time interactive rendering — use get_viewer_link. You need a moving camera — use twinmotion_walkthrough. You expect the image file bytes back in the response — this tool returns a URL, not bytes.
APS scopes: none today (render pipeline is ScanBIM-internal); viewables:read data:read will apply when the pipeline goes live.
Rate limits: APS default ~50 req/min per app per endpoint; Model Derivative translation jobs ~60 req/min; OSS uploads size-limited per file to 100MB for direct upload, larger via resumable.
Errors: 401 APS token expired/invalid — refresh (will apply when pipeline is live); 403 scope or resource permission denied; 404 URN not found — check the ID; 429 rate limited — backoff and retry; 5xx APS upstream outage — retry with jitter.
Side effects: NON-IDEMPOTENT. Each call mints a new render_id (tm_<epoch_ms>). Inserts a row into D1 usage_log. When the pipeline is live it will create a rendering job on ScanBIM's compute backend.

Parameters (JSON Schema)

- season (optional): Vegetation and ground-cover preset. Defaults to 'summer'.
- weather (optional): Sky and atmospheric preset. Defaults to 'clear'.
- model_id (required): APS URN (base64url-encoded, starts with 'dXJu', unpadded) of the model to render.
- resolution (optional): Output image resolution. Defaults to '4k'.
- time_of_day (optional): Sun angle preset driving lighting, shadows, and sky. Defaults to 'noon'.
- camera_preset (optional): Named camera viewpoint (e.g. 'hero-exterior', 'lobby-entry'). Free-form string passed through to the render pipeline.
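The non-idempotent id minting described above (each call mints tm_<epoch_ms>, or lum_<epoch_ms> for lumion_render) can be sketched as a stub job descriptor. The render_id and preview_url fields follow the description; the status value and the URL host are assumptions for illustration.

```python
import time

def mint_render_job(model_id: str, prefix: str = "tm") -> dict:
    """Mint a fresh <prefix>_<epoch_ms> id on every call, so repeated
    calls are deliberately not idempotent."""
    render_id = f"{prefix}_{int(time.time() * 1000)}"
    return {
        "render_id": render_id,
        "model_id": model_id,
        "status": "queued",  # stub: the real pipeline is a roadmap item
        "preview_url": f"https://example.invalid/renders/{render_id}/preview",  # placeholder host
    }
```

Two back-to-back calls yield distinct render_ids, which is why the description flags the tool as NON-IDEMPOTENT rather than a cacheable read.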
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but provides minimal information. It mentions 'generate', which implies a creation operation, but doesn't disclose execution time, output format, file storage, permissions needed, rate limits, or whether it's destructive. The description doesn't contradict annotations (none exist), but it fails to provide essential behavioral context for a render-generation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence that efficiently communicates the core function. No wasted words, appropriately front-loaded with the main action. Every element earns its place in this compact description.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex render generation tool with 6 parameters, 0% schema coverage, no annotations, and no output schema, the description is inadequate. It doesn't explain what the tool returns (image file, URL, status), execution characteristics, or important constraints. The completeness gap is significant given the tool's complexity and lack of supporting structured data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate but only mentions parameter categories (time-of-day, weather, season, camera controls) without explaining their purpose or relationships. It doesn't clarify that 'model_id' is required or explain what camera_preset entails. For 6 parameters with 4 having enums, this minimal coverage is insufficient.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'generate' and resource 'photorealistic Twinmotion-style render' with specific controls (time-of-day, weather, season, camera). It distinguishes from sibling 'lumion_render' by specifying Twinmotion style, but doesn't explicitly differentiate from 'twinmotion_walkthrough' which suggests a different output type.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like 'lumion_render' or 'twinmotion_walkthrough'. The description implies it's for static renders but doesn't state this explicitly or provide any context about prerequisites, dependencies, or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

twinmotion_walkthrough (Grade C)

Queue a cinematic Twinmotion-style fly-through video of a translated model. Returns a video_id and download_url; the render pipeline is a ScanBIM roadmap item so today this tool responds synchronously with a stub job descriptor. When to use: you want a short marketing or pre-con video scripted from an agent workflow. When NOT to use: you want real-time interactivity — use get_viewer_link. You want a still image — use twinmotion_render. APS scopes: none today (render pipeline is ScanBIM-internal); viewables:read data:read will apply when live. Rate limits: APS default ~50 req/min per app per endpoint; Model Derivative translation jobs ~60 req/min; OSS uploads size-limited per file to 100MB for direct upload, larger via resumable. Errors: 401 APS token expired/invalid — refresh (will apply when pipeline is live); 403 scope or resource permission denied; 404 URN not found — check the ID; 429 rate limited — backoff and retry; 5xx APS upstream outage — retry with jitter. Side effects: NON-IDEMPOTENT. Each call mints a new video_id (tmv_<epoch_ms>). Inserts a row into D1 usage_log.

Parameters (JSON Schema):
- style (optional): Animation and color-grade preset. 'cinematic' = orbits + tilts, 'technical' = orthographic pans, 'presentation' = slow lobby-to-penthouse.
- model_id (required): APS URN (base64url-encoded, starts with 'dXJu', unpadded) of the model to animate.
- duration_seconds (optional): Video duration in seconds. Integer 10-600; defaults to 60 when omitted.
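The documented style enum and 10-600 second duration range can be enforced before the call. A sketch under the assumption that clamping out-of-range durations (rather than erroring) is acceptable for the caller; the function name is illustrative:

```python
def build_walkthrough_args(model_id, style="cinematic", duration_seconds=60):
    """Assemble arguments for twinmotion_walkthrough, applying the documented
    default (60s) and clamping duration to the stated 10-600 integer range."""
    if style not in ("cinematic", "technical", "presentation"):
        raise ValueError(f"unknown style preset: {style!r}")
    duration = max(10, min(600, int(duration_seconds)))
    return {"model_id": model_id, "style": style, "duration_seconds": duration}
```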
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool generates a video, implying a potentially resource-intensive or time-consuming operation, but fails to mention critical details like required permissions, whether it's asynchronous, expected runtime, output format, or any rate limits. This leaves significant gaps in understanding how the tool behaves in practice.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's function without unnecessary words. It's front-loaded with the core action ('generate'), making it easy to parse quickly, though this brevity contributes to gaps in other dimensions.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of video generation (a non-trivial operation), no annotations, no output schema, and 0% schema description coverage, the description is insufficient. It lacks details on behavioral traits, parameter meanings, expected outputs, and usage context, making it incomplete for effective agent decision-making.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate but adds no parameter information. It doesn't explain what 'model_id' refers to, the meaning of 'style' enum values (cinematic, technical, presentation), or how 'duration_seconds' affects the output. This leaves all three parameters semantically unclear beyond their schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('generate') and the resource ('animated cinematic walkthrough video of the model'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'twinmotion_render' or 'lumion_render', which might offer similar rendering capabilities, leaving some ambiguity about when to choose this specific tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing a pre-existing model), exclusions, or compare it to siblings like 'twinmotion_render' or 'xr_launch_vr_session', leaving the agent to infer usage context from the tool name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

upload_model (Grade A)

Ingest a 3D model from a public URL into APS OSS and kick off a Model Derivative translation job, returning the URN plus a browser viewer link and QR code. Supports 50+ formats: Revit (.rvt/.rfa), Navisworks (.nwd/.nwc), IFC, FBX, OBJ, SolidWorks, point clouds (E57/LAS/RCP), CAD (DWG/STEP/IGES), etc. When to use: you have a publicly downloadable 3D file (S3 presigned URL, GitHub raw, etc.) and need it translated to SVF2 so it can be viewed, measured, or clash-checked via other tools. When NOT to use: the file is only on a local disk or behind auth (fetch will fail) — first push it to a public URL. Do not call to re-translate a model already uploaded; call get_model_metadata instead. APS scopes: data:read data:write data:create bucket:read bucket:create viewables:read Rate limits: APS default ~50 req/min per app per endpoint; Model Derivative translation jobs ~60 req/min; OSS uploads size-limited per file to 100MB for direct upload, larger via resumable. Errors: 401 APS token expired/invalid — refresh; 403 scope or resource permission denied; 404 source file_url not reachable or bucket not found — check the ID; 409 bucket name conflict (bucket already owned by another app — pick a unique bucketKey); 429 rate limited — backoff and retry; 5xx APS upstream outage — retry with jitter. Side effects: NON-IDEMPOTENT. Creates the scanbim-models bucket if absent, uploads a new OSS object with a timestamped key (each call creates a distinct object even for the same input), submits a Model Derivative job (x-ads-force=true overwrites prior derivatives for the same URN), and inserts a row into D1 usage_log + models table.

Parameters (JSON Schema):
- file_url (required): Publicly fetchable HTTPS URL to the 3D model file. Must be directly downloadable (no login wall, no JS redirect); the worker does a plain fetch() and streams the bytes into APS OSS. Max 100MB for direct upload. Presigned S3/GCS URLs work well.
- file_name (required): Filename including the extension. The extension is used to determine the tier (free/pro/enterprise) and is preserved in the OSS object key (prefixed with a Unix-ms timestamp). Use only ASCII + dash/underscore/dot; no path separators.
- project_name (optional): Optional free-text label stored alongside the model row in D1 for grouping models by project. Does not affect APS storage or URN. Defaults to 'default' when omitted.
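The file_name rules above (ASCII plus dash/underscore/dot, no path separators, timestamp-prefixed object key) can be applied client-side before calling upload_model. A minimal sketch; the exact separator between timestamp and name is an assumption based on the description, not the server's verified key format:

```python
import re
import time

NON_ALLOWED = re.compile(r"[^A-Za-z0-9._-]")

def safe_object_key(file_name: str) -> str:
    """Sanitize file_name to the documented character set and mimic the
    described OSS object key shape (Unix-ms timestamp prefix)."""
    # strip any path components first, per "no path separators"
    base = file_name.rsplit("/", 1)[-1].rsplit("\\", 1)[-1]
    # replace everything outside ASCII + dash/underscore/dot
    base = NON_ALLOWED.sub("_", base)
    return f"{int(time.time() * 1000)}-{base}"
```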
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses key behaviors: translation via Autodesk Platform Services and return of a viewer link and QR code. However, it lacks details on permissions, rate limits, file size constraints, or error handling, which are important for a file upload operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by format examples and outcomes, all in two efficient sentences. There is no wasted text, and every element (e.g., format list, translation detail, return values) serves to clarify the tool's function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description is moderately complete: it covers the upload action, formats, translation process, and return values. However, for a tool that mutates state (uploading files), it lacks details on authentication, side effects, or error scenarios, leaving gaps in contextual understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents the three parameters. The description does not add any parameter-specific semantics beyond what the schema provides, such as format requirements for 'file_url' or naming conventions for 'file_name'. A baseline score of 3 is appropriate, as the schema handles the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Upload a 3D model file'), identifies the target system ('to APS/ScanBIM'), enumerates supported formats with examples, and distinguishes this tool from siblings like 'list_models' or 'get_viewer_link' by focusing on the upload and translation process.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when a 3D model needs uploading and translation, but does not explicitly state when to use this versus alternatives like 'get_supported_formats' for format checks or 'get_viewer_link' if a link already exists. No exclusions or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

xr_launch_ar_session (Grade C)

Create a shareable WebXR AR passthrough session URL and QR code. On phone or tablet with WebXR AR support, the model is overlaid on the camera feed at the requested scale. When to use: a field user needs to walk the jobsite with a phone and see the model overlaid in-place at 1:1 scale, or drop a tabletop mini-model on a desk. When NOT to use: the target device is a Meta Quest in VR mode — use xr_launch_vr_session. The device lacks WebXR AR (desktop browser) — use get_viewer_link. APS scopes: viewables:read data:read (enforced at viewer page load, not at tool call). Rate limits: APS default ~50 req/min per app per endpoint; Model Derivative translation jobs ~60 req/min; OSS uploads size-limited per file to 100MB for direct upload, larger via resumable. Errors: 401 APS token expired/invalid — refresh (only at viewer page load); 403 scope or resource permission denied; 404 URN not found — check the ID; 429 rate limited — backoff and retry; 5xx APS upstream outage — retry with jitter. Side effects: NON-IDEMPOTENT. Each call mints a new session_id (ar_<epoch_ms>). Inserts a row into D1 usage_log read by xr_list_sessions. No APS resources are created.

Parameters (JSON Schema):
- scale (optional): Model placement scale. '1:1' for in-situ real-world scale, 'tabletop' for ~1:50 desk-top display, 'custom' to allow pinch-to-scale. Defaults to '1:1'.
- model_id (required): APS URN (base64url-encoded, starts with 'dXJu', unpadded) of the model to load in AR.
- session_name (optional): Human-readable session label shown in the session list.
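The error guidance these tools repeat (429: back off and retry; 5xx: retry with jitter) can be sketched as an exponential-backoff wrapper with full jitter. The `status` attribute on the raised exception is an assumption for this sketch, not a property of any particular client library:

```python
import random
import time

def call_with_backoff(fn, max_attempts=5, base_delay=0.5):
    """Retry fn() on 429 and 5xx, per the error guidance above.
    fn is expected to raise an exception carrying a numeric `status`
    attribute (an assumption made for this sketch)."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception as err:
            status = getattr(err, "status", None)
            retryable = status == 429 or (status is not None and status >= 500)
            if not retryable or attempt == max_attempts - 1:
                raise
            # full jitter: sleep a random fraction of the exponential cap
            time.sleep(random.uniform(0, base_delay * (2 ** attempt)))
```

Note that because these launch tools are non-idempotent (each call mints a new session_id), retries should only wrap calls that actually failed, not calls whose response was merely lost.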
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions launching a session and overlaying BIM, implying a real-time, interactive operation, but fails to describe critical behaviors such as required permissions, session duration, resource consumption, error handling, or what happens if a session is already active. This leaves significant gaps for safe and effective tool invocation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise—a single sentence that directly states the tool's purpose without any fluff or redundant information. It is front-loaded and efficiently communicates the core functionality, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of launching an AR session (likely involving real-time graphics, device compatibility, and user interaction), the description is incomplete. With no annotations, no output schema, and 0% schema description coverage, it lacks details on behavioral traits, return values, error conditions, and parameter meanings. This makes it inadequate for an agent to use the tool confidently without additional context or trial-and-error.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage, so the description must compensate, but it adds no parameter-specific information. It doesn't explain what 'model_id', 'scale', or 'session_name' mean in context, their formats, or how they affect the session. With three parameters (one required) and no schema descriptions, this is a minimal baseline score, as the description fails to clarify parameter roles beyond what the schema's structure implies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Launch AR passthrough session') and the resource/outcome ('overlay BIM on real jobsite via camera'), which is specific and informative. However, it doesn't explicitly differentiate from its sibling 'xr_launch_vr_session', which might be a related AR/VR tool, leaving some ambiguity about when to choose one over the other.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'xr_launch_vr_session' or other AR/VR-related tools. It lacks context about prerequisites, typical use cases, or exclusions, leaving the agent to infer usage based on the name and description alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

xr_launch_vr_session (Grade C)

Create a shareable WebXR VR walkthrough session URL (and Meta Quest oculus:// deep link + QR code) for a translated model. The session_id is generated server-side; rendering happens in the user's Quest browser. When to use: you need to walk a client or field team through a model in immersive VR on Meta Quest 2/3/Pro. When NOT to use: the user is on a phone/tablet without a headset — use xr_launch_ar_session or get_viewer_link. The model has not finished translating — call get_model_metadata first. APS scopes: viewables:read data:read (enforced at viewer page load, not at tool call). Rate limits: APS default ~50 req/min per app per endpoint; Model Derivative translation jobs ~60 req/min; OSS uploads size-limited per file to 100MB for direct upload, larger via resumable. Errors: 401 APS token expired/invalid — refresh (only at viewer page load); 403 scope or resource permission denied; 404 URN not found — check the ID; 429 rate limited — backoff and retry; 5xx APS upstream outage — retry with jitter. Side effects: NON-IDEMPOTENT. Each call mints a new session_id (vr_<epoch_ms>). Inserts a row into D1 usage_log which is later read by xr_list_sessions. No APS resources are created.

Parameters (JSON Schema):
- model_id (required): APS URN (base64url-encoded, starts with 'dXJu', unpadded) of the model to load in VR.
- session_name (optional): Human-readable session label shown in the session list. Defaults to 'VR Session' if omitted.
- max_participants (optional): Maximum concurrent participants in multi-user mode. Integer 1-20. Defaults to 5.
- enable_measurements (optional): Enable in-VR tape-measure tool. Defaults to true.
- enable_voice_annotations (optional): Enable voice-note recording anchored to model elements. Defaults to false.
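The documented defaults and the 1-20 participant range can be resolved locally before calling xr_launch_vr_session, so the agent always sends a fully specified, in-range payload. A sketch; the defaults come straight from the parameter list above:

```python
VR_DEFAULTS = {
    "session_name": "VR Session",
    "max_participants": 5,
    "enable_measurements": True,
    "enable_voice_annotations": False,
}

def resolve_vr_args(model_id, **overrides):
    """Merge caller overrides onto the documented defaults, clamping
    max_participants to the stated 1-20 integer range."""
    args = {**VR_DEFAULTS, **overrides, "model_id": model_id}
    args["max_participants"] = max(1, min(20, int(args["max_participants"])))
    return args
```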
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions launching and sharing via QR code, but lacks details on permissions required, whether the session is persistent or temporary, rate limits, error handling, or what happens after launch (e.g., session management). This is a significant gap for a tool that likely involves resource allocation and user interaction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with two brief sentences that are front-loaded with the core action. Every word earns its place by specifying the action, platform, and key feature without any fluff or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of launching a VR session with 5 parameters, no annotations, and no output schema, the description is incomplete. It lacks details on behavioral traits, parameter meanings, expected outputs, and usage context, making it inadequate for an agent to fully understand how to invoke this tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate for all 5 parameters. It only implicitly relates to 'model_id' (via 'immersive VR walkthrough') and 'session_name' (via 'Share via QR code'), but provides no meaning for 'max_participants', 'enable_measurements', or 'enable_voice_annotations'. The description adds minimal value beyond the schema's parameter names.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Launch immersive VR walkthrough') and target platform ('on Meta Quest via ScanBIM XR'), with a specific feature mentioned ('Share via QR code'). It distinguishes from the sibling 'xr_launch_ar_session' by specifying VR rather than AR, but doesn't explicitly differentiate from other immersive tools like 'twinmotion_walkthrough' beyond the platform mention.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives is provided. The description implies usage for VR walkthroughs on Meta Quest, but doesn't specify prerequisites, compare to 'xr_launch_ar_session' for AR vs VR scenarios, or mention when not to use it (e.g., for non-VR platforms or without a model).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

xr_list_sessions (Grade C)

List the last 20 VR/AR sessions launched via xr_launch_vr_session and xr_launch_ar_session, sorted by creation time desc. Sourced from the D1 usage_log table; returns an empty array if D1 is unavailable or no sessions have been recorded. When to use: you want to audit who launched which XR session and when, or surface recent sessions to a user. When NOT to use: you want details (join URL, features) for a specific session — those details live inside the original launch response and are not stored beyond the log row. APS scopes: none (D1 read only). Rate limits: APS default ~50 req/min per app per endpoint; Model Derivative translation jobs ~60 req/min; OSS uploads size-limited per file to 100MB for direct upload, larger via resumable. Errors: 401 APS token expired/invalid — refresh (not applicable: no APS call); 403 scope or resource permission denied (not applicable); 404 not applicable; 429 rate limited — backoff and retry (worker-level only); 5xx APS upstream outage — retry with jitter (not applicable). Side effects: READ-ONLY. Idempotent.

Parameters (JSON Schema):
- model_id (optional): Reserved for future filtering by model URN. Currently not applied; all recent xr_* sessions are returned.
- session_type (optional): Reserved for future filtering by session type. Currently not applied; both VR and AR sessions are returned.
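Since both filter parameters are documented as not yet applied server-side, a caller that needs filtering must do it locally on the returned list. A sketch; the session_id prefixes ('vr_', 'ar_') come from the launch-tool descriptions, while the record field names here are assumptions about the log-row shape:

```python
def filter_sessions(sessions, model_id=None, session_type=None):
    """Client-side stand-in for the reserved model_id / session_type filters.
    session_type is matched against the session_id prefix ('vr' or 'ar')."""
    out = []
    for s in sessions:
        if model_id and s.get("model_id") != model_id:
            continue
        if session_type and not s.get("session_id", "").startswith(session_type + "_"):
            continue
        out.append(s)
    return out
```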
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It implies a read-only operation by using 'List', but doesn't specify if it requires authentication, has rate limits, returns paginated results, or what the output format looks like. For a tool with zero annotation coverage, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose without unnecessary words. It uses minimal text to convey the essential action and scope, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no annotations, no output schema, and low parameter schema coverage, the description is incomplete. It doesn't address behavioral aspects like authentication needs, return format, or error handling, leaving gaps that could hinder effective tool invocation by an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description mentions 'VR/AR sessions' but doesn't explain the parameters 'model_id' or 'session_type', which have 0% schema description coverage. It doesn't clarify what 'model_id' refers to (e.g., a specific VR model) or how 'session_type' with enum values ('vr', 'ar', 'all') affects the listing. This fails to compensate for the low schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('VR/AR sessions'), specifying both active and past sessions. It distinguishes from siblings like 'xr_launch_ar_session' and 'xr_launch_vr_session' by focusing on listing rather than launching. However, it doesn't explicitly differentiate from other list tools like 'acc_list_issues' or 'list_models', which keeps it from a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, such as needing specific permissions or setup, or compare it to other list tools like 'list_models' for context. This lack of usage context leaves the agent without clear direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
