Server Details

Autodesk Construction Cloud via APS — projects, issues, RFIs, documents, submittals.

Status: Healthy
Transport: Streamable HTTP
Tool Descriptions: C

Average 2.9/5 across 9 of 9 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose targeting specific ACC resources and actions, with no overlap in functionality. For example, create_issue vs. update_issue, list_issues vs. list_rfis, and project_summary vs. list_projects all serve unique roles.

Naming Consistency: 5/5

All tools follow a consistent 'acc_verb_noun' pattern with snake_case throughout, making them predictable and easy to parse. The naming convention is uniform across all nine tools, with no deviations in style or structure.

Tool Count: 5/5

With 9 tools, the count is well-scoped for managing ACC/BIM 360 projects, covering core operations like listing, creating, updating, and searching. Each tool earns its place without feeling excessive or insufficient for the domain.

Completeness: 4/5

The tool set provides strong coverage for ACC project management, including CRUD for issues and RFIs, project listing, summaries, document search, and file uploads. A minor gap exists in updating RFIs (only create and list are available), but agents can work around this with the existing tools.

Available Tools

9 tools
acc_create_issue: C

Create a new ACC issue (field observation, coordination clash, safety, quality, etc.) in the target project via the APS Construction Issues API.

When to use: The user wants to log a new issue — e.g. 'open a high-priority issue about the leaking valve on level 3' or a downstream agent detected a defect during a model review and needs to record it for the project team.

When NOT to use: Do not use to modify an existing issue (use acc_update_issue) and do not use for RFIs (use acc_create_rfi).

APS scopes: data:read data:write account:read.

Rate limits: ACC Issues API limited to ~100 req/min per app; APS default ~50 req/min per endpoint — batch creations with backoff.

Errors: 401 (APS token expired — refresh); 403 (user lacks 'Create Issues' permission on the project or scope insufficient — surface to user); 404 (project_id not found — verify the 'b.' prefix and that the project belongs to a hub the app can see via acc_list_projects); 422 (validation — required field like title/description missing or priority enum invalid); 429 (rate limit — retry after 60s); 5xx (ACC upstream — retry with jitter, do not double-create).

Side effects: Creates a persistent issue record visible to all project members. NOT idempotent — a retry on a 5xx may create duplicates; dedupe by title before retrying.
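The retry discipline described above (back off with jitter on 429/5xx, dedupe by title before re-creating) can be sketched as pure logic. This is a minimal illustration, not part of the server; the helper names are hypothetical and the dedupe compares titles case-insensitively as one reasonable interpretation.

```python
import random

def backoff_delays(attempts=4, base=1.0, cap=60.0):
    """Exponential backoff with full jitter for 429/5xx retries.
    Returns the delay schedule only; performs no I/O."""
    return [random.uniform(0, min(cap, base * (2 ** n))) for n in range(attempts)]

def should_retry_create(existing_titles, title):
    """Dedupe by title before retrying a failed create, so a retry after a
    5xx cannot double-create: skip the retry if the title already exists."""
    wanted = title.strip().lower()
    return all(t.strip().lower() != wanted for t in existing_titles)
```

Before each retry, an agent would call acc_list_issues and pass the returned titles to should_retry_create; only when it returns True is a second create attempt safe.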

Parameters (JSON Schema):
  title (required): Short issue title, 1–255 chars.
  due_date (optional): Due date in ISO 8601 date format (YYYY-MM-DD).
  priority (optional): Issue priority. Defaults to 'medium' if omitted.
  project_id (required): ACC project ID. MUST use the 'b.' prefix literal (e.g. 'b.a1b2c3d4-...'). The worker strips the prefix internally for the Issues endpoint. Obtain via acc_list_projects.
  assigned_to (optional): APS user ID (oxygen ID / ACC user UUID) of the assignee. Leave null for unassigned.
  description (required): Detailed issue description / body. Plain text, up to ~10,000 chars.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Create' implies a write/mutation operation, the description doesn't disclose permission requirements, rate limits, whether the operation is idempotent, what happens on failure, or what the response contains. This is inadequate for a mutation tool with zero annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that states the core purpose without unnecessary words. It's appropriately sized and front-loaded with the essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a mutation tool with 6 parameters, 17% schema coverage, no annotations, and no output schema, the description is insufficient. It doesn't compensate for the missing behavioral context, parameter documentation, or output expectations that would help an agent use this tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is only 17% (only project_id has a description), and the description adds no parameter information beyond what's in the schema. It doesn't explain what 'title', 'description', 'due_date', 'priority', or 'assigned_to' mean in this context, nor does it provide format guidance for the string parameters beyond the enum for priority.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Create a new issue') and the target resource ('in an ACC project via APS Issues API'), providing a specific verb+resource combination. However, it doesn't differentiate this tool from its sibling 'acc_update_issue', which would be needed for a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'acc_update_issue' or 'acc_list_issues'. There's no mention of prerequisites, constraints, or appropriate contexts for creation versus other operations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

acc_create_rfi: C

Create a new Request For Information (RFI) in an ACC project via the APS Construction RFIs API. RFI is created in 'draft' status — the project workflow owner typically transitions it to 'submitted'.

When to use: The user needs a formal question-of-record to the design or GC team — e.g. 'raise an RFI asking for clarification on the Level 2 beam schedule'. RFIs are the auditable channel for clarifications; issues are for field observations.

When NOT to use: Do not use for informal observations (use acc_create_issue) or to answer an existing RFI (not supported here).

APS scopes: data:read data:write account:read.

Rate limits: APS default ~50 req/min per endpoint per app. RFIs share the Construction API umbrella with issues (~100 req/min combined).

Errors: 401 (APS token expired — refresh); 403 (user lacks RFI create permission on project); 404 (project_id not found — verify 'b.' prefix and hub membership); 422 (validation — subject/question missing or priority enum invalid); 429 (rate limit — back off 60s); 5xx (ACC upstream — retry with jitter, check for duplicate before retrying).

Side effects: Creates a persistent RFI record. NOT idempotent — retry on 5xx risks duplicates; dedupe by subject before retrying.

Parameters (JSON Schema):
  subject (required): Short RFI subject/title, 1–255 chars.
  priority (optional): RFI priority. Defaults to 'medium' if omitted.
  question (required): Full RFI question body. Plain text, up to ~10,000 chars.
  project_id (required): ACC project ID. MUST use 'b.' prefix literal. Obtain via acc_list_projects.
  assigned_to (optional): APS user ID of the person the RFI is directed to.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It states this is a creation operation but doesn't mention what permissions are required, whether this is a write operation with side effects, what happens on success/failure, or any rate limits. The description is minimal and lacks important behavioral context for a mutation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise: a single sentence that directly states the tool's purpose without any unnecessary words. It's appropriately sized for what it communicates, though it communicates very little beyond the basic action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a mutation tool with 5 parameters (3 required), 0% schema coverage, no annotations, and no output schema, the description is severely inadequate. It doesn't explain what an RFI is, what the creation process entails, what data is returned, or provide any context about the parameters or their relationships.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage for 5 parameters, the description provides no information about any parameters. It doesn't explain what 'project_id', 'subject', 'question', 'priority', or 'assigned_to' mean or how they should be used. The description fails to compensate for the complete lack of schema documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Create a new RFI') and target resource ('in an ACC project via APS RFIs API'), providing a specific verb+resource combination. However, it doesn't differentiate this tool from its sibling 'acc_create_issue', which appears to be a similar creation operation for a different resource type.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance about when to use this tool versus alternatives like 'acc_create_issue' or 'acc_list_rfis'. There's no mention of prerequisites, appropriate contexts, or exclusions for RFI creation versus other operations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

acc_list_issues: C

List and filter issues from a single ACC project (limit 50 per call) via the APS Construction Issues API.

When to use: The user or upstream agent needs to review open issues, count issues by status/priority, or look up an issue_id before calling acc_update_issue. E.g. 'show me all critical open issues on the Tower project'.

When NOT to use: Do not use to fetch RFIs (use acc_list_rfis) or to search documents.

APS scopes: data:read account:read. No write scope required.

Rate limits: ACC Issues API ~100 req/min per app; results pageable (limit 50 here, max 200 upstream). For large projects, call once and filter client-side instead of looping.

Errors: 401 (APS token expired — refresh); 403 (user lacks 'View Issues' permission on project or scope insufficient); 404 (project_id not found — verify 'b.' prefix and hub membership via acc_list_projects); 422 (invalid filter value — check status/priority spelling); 429 (rate limit — back off 60s); 5xx (ACC upstream — retry with jitter).

Side effects: None. Read-only and idempotent.

Parameters (JSON Schema):
  status (optional): Status filter. Typical values: open, in_review, closed, draft.
  priority (optional): Priority filter. Typical values: critical, high, medium, low.
  project_id (required): ACC project ID. MUST use 'b.' prefix literal. Obtain via acc_list_projects.
  assigned_to (optional): Assignee APS user ID filter. Accepted but not currently forwarded as a URL filter by this server; filter client-side if needed.
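Since assigned_to is accepted but not forwarded upstream, the client-side filtering the note suggests might look like this. The 'assigned_to' key on each returned issue is an assumption for illustration; check the actual response shape before relying on it.

```python
def filter_by_assignee(issues, assigned_to=None):
    """Client-side assignee filter over one page of list results.
    Assumes each issue dict carries an 'assigned_to' field (illustrative)."""
    if assigned_to is None:
        return list(issues)
    return [i for i in issues if i.get("assigned_to") == assigned_to]
```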
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'List and filter' but does not describe key traits such as whether this is a read-only operation, if it requires authentication, pagination behavior, rate limits, or what the output format looks like. This leaves significant gaps for a tool with multiple parameters and no output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's function without unnecessary words. It is front-loaded with the core action ('List and filter issues'), making it easy to parse and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (4 parameters, 1 required, no output schema, and no annotations), the description is incomplete. It lacks details on behavioral traits, parameter usage, output expectations, and differentiation from siblings. For a filtering tool with multiple inputs, this minimal description does not provide enough context for effective agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 0%, so the description must compensate for undocumented parameters. It only vaguely implies filtering via 'filter issues' without explaining what 'status', 'priority', or 'assigned_to' mean, their expected values, or how they interact. This adds minimal value beyond the bare schema, failing to adequately clarify parameter semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List and filter') and resource ('issues from an ACC project'), making the purpose understandable. It distinguishes from siblings like 'acc_create_issue' (creation) and 'acc_list_projects' (different resource), but does not explicitly differentiate from 'acc_update_issue' or 'acc_search_documents' in terms of scope or filtering capabilities, which keeps it from a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not mention prerequisites (e.g., needing a valid project_id), exclusions, or comparisons to siblings like 'acc_search_documents' for document-related queries or 'acc_update_issue' for modifications, leaving the agent to infer usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

acc_list_projects: B

Enumerate every ACC and BIM 360 project the authenticated APS app can see by walking all accessible hubs and their project lists.

When to use: The agent needs to discover project IDs before calling any other tool (e.g. the user says 'show me my projects' or 'find issues in the Tower project' and no project_id is known yet). Also useful to confirm hub membership for a project.

When NOT to use: Do not call this repeatedly in a loop — cache the result; if the user already supplied a project_id starting with 'b.', skip discovery.

APS scopes: data:read account:read. No write scope needed.

Rate limits: APS default ~50 req/min per app per endpoint; BIM 360 hubs endpoints are pageable (limit 200). This tool fans out 1 hubs call + N project calls (one per hub) so call it sparingly on tenants with many hubs.

Errors: 401 (APS token expired — refresh and retry once); 403 (app not provisioned in the BIM 360/ACC account — ask user to have an account admin add the APS client_id); 404 (rare, indicates hub deleted mid-call); 429 (rate limit — back off 60s); 5xx (ACC upstream — retry with jitter).

Side effects: None. Read-only and idempotent.

Parameters (JSON Schema): none.
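The fan-out described above (1 hubs call plus one projects call per hub) can be sketched with injected fetchers, which keeps the shape clear without committing to a specific HTTP client. The fetcher signatures and the returned field names are assumptions for illustration; cache the combined result for the session rather than re-walking.

```python
def walk_projects(get_hubs, get_hub_projects):
    """Walk every accessible hub and collect its projects, tagging each
    project with the hub it came from. Fetchers are injected (no I/O here)."""
    projects = []
    for hub in get_hubs():
        for project in get_hub_projects(hub["id"]):
            projects.append({"hub_id": hub["id"], **project})
    return projects
```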

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden but provides minimal behavioral context. It implies a read-only operation by using 'List', but doesn't disclose details like pagination, rate limits, sorting, error conditions, or what 'access' entails (e.g., permissions). The description is too vague for a tool with potential complexity.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core action ('List all ACC/BIM 360 projects') and adds necessary context ('you have access to via APS Data Management'). There is zero waste or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, no output schema, and 0 parameters, the description is minimally adequate but lacks depth. It states what the tool does but omits behavioral details like response format, error handling, or usage context. For a list operation in a complex system like ACC/BIM 360, more completeness would be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters with 100% schema description coverage, so no parameter documentation is needed. The description appropriately doesn't discuss parameters, maintaining focus on the tool's purpose without unnecessary detail.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('ACC/BIM 360 projects'), specifying the scope ('you have access to via APS Data Management'). It distinguishes from siblings like acc_list_issues and acc_list_rfis by focusing on projects rather than issues or RFIs, but doesn't explicitly contrast them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. It doesn't mention prerequisites like authentication, compare with acc_project_summary for different project data, or indicate when listing projects is appropriate versus searching documents or creating issues.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

acc_list_rfis: C

List and filter RFIs from a single ACC project (limit 50 per call) via the APS Construction RFIs API.

When to use: The user wants to review open RFIs, count outstanding ones, or look up an RFI ID. E.g. 'how many RFIs are still open on the Tower project?'

When NOT to use: Do not use for issues (use acc_list_issues) or document search (use acc_search_documents).

APS scopes: data:read account:read. No write scope required.

Rate limits: APS default ~50 req/min per endpoint; ACC Construction API shared ~100 req/min cap. Pageable (limit 50 here; upstream max 200).

Errors: 401 (APS token expired — refresh); 403 (user lacks RFI view permission); 404 (project_id not found — verify 'b.' prefix and hub membership); 422 (invalid filter value); 429 (rate limit — back off 60s); 5xx (ACC upstream — retry).

Side effects: None. Read-only and idempotent.

Parameters (JSON Schema):
  status (optional): RFI status filter. Typical values: draft, submitted, open, answered, closed, void.
  project_id (required): ACC project ID. MUST use 'b.' prefix literal. Obtain via acc_list_projects.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool lists and filters RFIs, implying a read-only operation, but doesn't clarify if it's paginated, what the output format is, or if there are rate limits or authentication requirements. For a list/filter tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence: 'List and filter RFIs from an ACC project.' It is front-loaded with the core purpose, has zero wasted words, and is appropriately sized for a simple tool. Every part of the sentence contributes directly to understanding the tool's function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (a list/filter operation with 2 parameters), lack of annotations, and no output schema, the description is incomplete. It doesn't explain the return values, error handling, or how parameters interact. For a tool that likely returns a list of RFIs, more context on output structure or usage constraints would be necessary for effective agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 2 parameters with 0% description coverage, meaning no details are provided in the schema. The description mentions 'filter RFIs' but doesn't explain what 'status' or 'project_id' represent, their expected formats, or how filtering works. It adds minimal value beyond the schema, failing to compensate for the low coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'List and filter RFIs from an ACC project.' It specifies the verb ('List and filter'), resource ('RFIs'), and scope ('from an ACC project'), which is specific and actionable. However, it doesn't explicitly distinguish this tool from its sibling 'acc_list_issues' or 'acc_search_documents', which might also list project-related items, so it misses full differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when to prefer this over 'acc_list_issues' for RFIs specifically, or if there are prerequisites like project access. Without any context on exclusions or alternatives, users must infer usage from the tool name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

acc_project_summary: C

Fetch the full ACC project metadata record (name, type, status, dates, extension attributes) for a single project via APS Data Management. If hub_id is omitted the tool picks the first accessible hub, which may be wrong on multi-hub tenants.

When to use: The user asks 'tell me about project X' or an agent needs project metadata (start/end dates, type, Forma/BIM 360 flavor) before deciding which downstream tool to call.

When NOT to use: Do not use as a cheap existence check — prefer acc_list_projects which returns hub_id with every project and is one call regardless of tenant size.

APS scopes: data:read account:read. Forma / BIM 360 hubs endpoints only require data:read.

Rate limits: APS default ~50 req/min per endpoint; BIM 360 hubs endpoints pageable (limit 200). Cache results for the session.

Errors: 401 (APS token expired — refresh); 403 (user lacks project view or app not in account); 404 (project not in the chosen hub — supply the correct hub_id, or call acc_list_projects first); 422 (malformed project_id — confirm 'b.' prefix); 429 (rate limit — back off 60s); 5xx (ACC upstream — retry).

Side effects: None. Read-only and idempotent.

Parameters (JSON Schema):
  hub_id (optional): ACC/BIM 360 hub ID. Also uses the 'b.' prefix literal. If omitted, the first hub returned by APS is used; prefer supplying this explicitly on multi-hub tenants to avoid 404s.
  project_id (required): ACC project ID. MUST use 'b.' prefix literal (this endpoint, unlike Issues/RFIs, wants the prefixed form). Obtain via acc_list_projects.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states the tool 'Get[s]' data, implying a read-only operation, but doesn't disclose behavioral traits like authentication needs, rate limits, error handling, or response format. For a tool with no annotation coverage, this leaves significant gaps in understanding its operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose without unnecessary words. Every part ('Get full ACC project summary including hub, metadata, issue counts, and RFI counts') contributes directly to understanding the tool's function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (2 parameters, no output schema, no annotations), the description is incomplete. It covers the purpose but lacks usage guidelines, parameter explanations, and behavioral details. Without an output schema, it should ideally hint at return values, but it doesn't, leaving the agent with insufficient context for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the schema provides no parameter details. The description mentions 'project summary' but doesn't explain the parameters (project_id and hub_id), their purposes, or relationships. It fails to compensate for the lack of schema documentation, leaving parameters largely undefined.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get') and the resource ('ACC project summary'), specifying what information is included (hub, metadata, issue counts, RFI counts). It distinguishes itself from siblings like 'acc_list_projects' by focusing on detailed summary rather than listing. However, it doesn't explicitly contrast with all siblings, keeping it from a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, such as needing a project_id, or compare it to siblings like 'acc_list_projects' for high-level overviews or 'acc_list_issues'/'acc_list_rfis' for detailed lists. Without such context, an agent might misuse it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

acc_search_documents: C

Full-text search the ACC Docs repository of a project for drawings, specs, submittals, and other files via the APS Data Management search endpoint.

When to use: The user wants to find a document by keyword (filename, sheet number, or metadata match). E.g. 'find the latest A-201 sheet' or 'search for mechanical specs on Tower project'.

When NOT to use: Do not use to upload a file (use acc_upload_file); do not use to fetch issues/RFIs. If you already have a document URN, fetch it directly with an agent that has Data Management folder/item access.

APS scopes: data:read account:read. No write scope required.

Rate limits: APS Data Management ~50 req/min per app per endpoint; pageable (limit 200 upstream). Avoid tight query loops.

Errors: 401 (APS token expired — refresh); 403 (user lacks Docs view permission on the project); 404 (project_id not found — verify 'b.' prefix and hub membership); 422 (invalid filter syntax — simplify query text); 429 (rate limit — back off 60s); 5xx (ACC upstream — retry with jitter).

Side effects: None. Read-only and idempotent.

Parameters (JSON Schema)

- query (required): Free-text search string matched against filenames, titles, and indexed metadata. 1–500 chars.
- project_id (required): ACC project ID. MUST use the 'b.' prefix literal. The worker re-adds the prefix for Data Management URL formatting. Obtain via acc_list_projects.
- document_type (optional): APS document type filter (e.g. 'items:autodesk.bim360:File', 'items:autodesk.bim360:Document').
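The parameter notes above (the literal 'b.' prefix and the 60-second backoff on 429) can be sketched as a thin client-side wrapper. This is a minimal sketch, not part of the server contract: `call_tool` and the `status` field on its result are hypothetical stand-ins for whatever MCP client is in use.

```python
import time

def ensure_b_prefix(project_id: str) -> str:
    # ACC project IDs must carry the literal 'b.' prefix.
    return project_id if project_id.startswith("b.") else "b." + project_id

def search_documents(call_tool, query, project_id, document_type=None,
                     max_retries=3, backoff_s=60, sleep=time.sleep):
    # Build the arguments for the acc_search_documents tool call.
    args = {"query": query, "project_id": ensure_b_prefix(project_id)}
    if document_type is not None:
        args["document_type"] = document_type
    result = call_tool("acc_search_documents", args)
    for _ in range(max_retries):
        if result.get("status") != 429:  # not rate-limited: done
            break
        sleep(backoff_s)  # the error table suggests backing off ~60s on 429
        result = call_tool("acc_search_documents", args)
    return result
```

The injectable `sleep` keeps the wrapper testable; real callers would leave the default so a 429 actually pauses before the retry.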
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states the tool searches documents but doesn't disclose behavioral traits like whether it's read-only (implied by 'Search'), pagination, rate limits, authentication needs, or error handling. This leaves significant gaps for a tool with 3 parameters and no output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero waste. It front-loads the key action and resources, making it easy to scan and understand quickly without unnecessary details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (3 parameters, 0% schema coverage, no annotations, no output schema), the description is incomplete. It doesn't explain return values, error cases, or provide enough context for effective use. For a search tool with undocumented parameters, more detail is needed to compensate for the lack of structured data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate for all parameters. It mentions 'drawings, specs, submittals and documents' which loosely relates to 'document_type', but doesn't explain 'query' (search term), 'project_id' (context), or provide details on parameter usage, formats, or constraints. This adds minimal value beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Search') and the target resources ('drawings, specs, submittals and documents'), with the context 'in ACC via APS Data Management' providing the platform. It distinguishes from siblings like 'acc_list_projects' or 'acc_upload_file' by focusing on search functionality, though it doesn't explicitly contrast with them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. It doesn't mention prerequisites, such as needing a project_id from 'acc_list_projects', or differentiate from other search-related tools (none listed in siblings). The description implies usage for document search but lacks explicit when/when-not instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

acc_update_issue (C)

Patch an existing ACC issue — change status, priority, assignee, or description via the APS Construction Issues API.

When to use: The user asks to close/reopen/escalate an issue, reassign it, or edit its body. Typical agent flow: acc_list_issues → pick an id → acc_update_issue.

When NOT to use: Do not use to create issues (acc_create_issue) or to add comments (not supported by this server).

APS scopes: data:read data:write account:read.

Rate limits: ACC Issues API ~100 req/min per app; APS default ~50 req/min per endpoint.

Errors: 401 (APS token expired — refresh); 403 (user lacks edit permission or status transition not allowed by project workflow); 404 (project_id or issue_id not found — verify 'b.' prefix on project_id and that issue_id belongs to that project); 422 (validation — invalid status/priority enum or illegal state transition); 429 (rate limit — back off 60s); 5xx (ACC upstream — retry with jitter).

Side effects: Mutates the issue record. Idempotent when the same body is resent (PATCH semantics) — safe to retry.

Parameters (JSON Schema)

- issue_id (required): UUID of the issue to update, as returned by acc_list_issues or acc_create_issue.
- project_id (required): ACC project ID. MUST use the 'b.' prefix literal. Obtain via acc_list_projects.
- status (optional): New status. Project workflow may forbid certain transitions (e.g. draft → closed).
- priority (optional): New priority.
- assigned_to (optional): APS user ID of the new assignee. Omit to leave unchanged.
- description (optional): Replacement description body. Plain text.
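The typical agent flow described above (acc_list_issues → pick an id → acc_update_issue) might look like this in client code. A sketch under stated assumptions: `call_tool` and the listed-issue shape (`id`, `title` keys) are hypothetical, not guaranteed by the server.

```python
def close_issue_by_title(call_tool, project_id: str, title: str):
    # Step 1: list issues in the project.
    issues = call_tool("acc_list_issues", {"project_id": project_id})
    # Step 2: pick the id of the issue to change.
    match = next((i for i in issues if i.get("title") == title), None)
    if match is None:
        return None  # nothing to update
    # Step 3: patch only the fields that change; omitted fields stay as-is
    # (PATCH semantics, per the side-effects note).
    return call_tool("acc_update_issue", {
        "project_id": project_id,
        "issue_id": match["id"],
        "status": "closed",
    })
```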
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states this is an update operation, implying mutation, but doesn't disclose behavioral traits like required permissions, whether changes are reversible, rate limits, or what happens to unspecified fields. For a mutation tool with zero annotation coverage, this is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no wasted words. It's front-loaded with the core action and key details, making it easy to scan and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (mutation tool with 6 parameters, 0% schema coverage, no annotations, no output schema), the description is incomplete. It doesn't explain return values, error conditions, or provide enough context for safe and effective use. More details are needed for a tool of this nature.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It lists updatable fields ('status, priority, assignee, description'), which maps to 4 of the 6 parameters, but doesn't cover 'project_id' and 'issue_id' (the required ones). This adds some value but doesn't fully compensate for the coverage gap, especially for the required parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Update') and resource ('an existing ACC issue'), and specifies the fields that can be updated ('status, priority, assignee, description'). It distinguishes from sibling tools like 'acc_create_issue' by focusing on updates rather than creation, though it doesn't explicitly mention all siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an existing issue), exclusions, or comparisons to siblings like 'acc_list_issues' for viewing issues. Usage is implied but not explicitly stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

acc_upload_file (C)

Upload a file from a public source URL into an ACC project folder. Runs the full four-step APS Data Management flow: top-folder discovery → storage object creation → OSS PUT of bytes → first-version item creation.

When to use: The user wants to push a document/photo/model into ACC Docs — e.g. 'upload this site photo to the Tower project Photos folder' or an automation needs to archive an exported report into Project Files.

When NOT to use: Do not use for files already in ACC; do not use for files behind auth-gated URLs (fetch step is an unauthenticated GET). For very large files (>100MB), prefer the chunked/signed-S3 upload flow, not this single-PUT implementation.

APS scopes: data:read data:write data:create account:read.

Rate limits: APS Data Management ~50 req/min per endpoint; OSS upload bandwidth typically 100 MB/min per app. This tool issues 3–5 APS calls per upload, so budget accordingly.

Errors: 401 (APS token expired — refresh); 403 (user lacks folder write permission — ask account admin to grant 'Edit' on folder); 404 (project_id not found or folder_path does not match any top folder — verify 'b.' prefix, hub membership, and folder name); 422 (invalid file_name or conflicting version); 429 (rate limit — back off 60s); 5xx (ACC/OSS upstream — retry with jitter, BUT be cautious: the storage object may already have been created, so reuse it rather than re-creating it). Also: if the source file_url returns non-2xx, the tool throws before touching ACC.

Side effects: Creates a storage object, uploads bytes, and creates a versioned item in the target folder. NOT idempotent — a retry may create a duplicate item with a new version. Surface the returned item_id to the user to avoid re-uploads.

Parameters (JSON Schema)

- file_url (required): Publicly fetchable HTTPS URL of the source file. Must return 2xx to an unauthenticated GET. Max practical size ~100MB for this single-PUT implementation.
- file_name (required): Destination filename in ACC, including extension. 1–255 chars. Avoid path separators.
- project_id (required): ACC project ID. MUST use the 'b.' prefix literal. Obtain via acc_list_projects.
- folder_path (optional, default 'Project Files'): Case-insensitive substring of the target top-level folder's display name. Common values: 'Project Files', 'Plans', 'Photos', 'Submittals'.
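Because the tool is documented as NOT idempotent, a cautious caller can memoize the returned item_id so a retry path never re-uploads the same file. A minimal sketch, assuming a hypothetical `call_tool` helper and a result dict containing `item_id` (both are illustrative, not part of the server contract):

```python
def upload_once(call_tool, uploaded, project_id, file_url, file_name,
                folder_path="Project Files"):
    # Key the cache on everything that identifies the destination item.
    key = (project_id, folder_path, file_name)
    if key in uploaded:
        return uploaded[key]  # already uploaded: reuse item_id, do not re-PUT
    result = call_tool("acc_upload_file", {
        "project_id": project_id,
        "file_url": file_url,
        "file_name": file_name,
        "folder_path": folder_path,
    })
    uploaded[key] = result["item_id"]  # surface this to the user, per the docs
    return uploaded[key]
```

Passing the same `uploaded` dict across retries is what prevents the duplicate-version problem the side-effects note warns about.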
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. While 'upload' implies a write/mutation operation, the description doesn't address critical behavioral aspects like authentication requirements, rate limits, file size constraints, error conditions, or what happens on success/failure. It mentions the APS Data Management system but doesn't explain its implications.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise: a single sentence that directly states the tool's purpose without any fluff or unnecessary elaboration. It's appropriately sized and front-loaded with the core functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a file upload tool with 4 parameters, 0% schema coverage, no annotations, and no output schema, the description is severely incomplete. It doesn't address parameter meanings, behavioral expectations, error handling, or result format. While concise, it lacks the necessary detail for an agent to properly understand and invoke this tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage and 4 parameters (3 required), the description provides no parameter information whatsoever. It doesn't explain what 'project_id', 'file_url', 'file_name', or optional 'folder_path' mean, their formats, constraints, or relationships. The description fails to compensate for the complete lack of schema documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('upload a file') and target resource ('to an ACC project folder via APS Data Management'), providing a specific verb+resource combination. However, it doesn't distinguish this tool from potential sibling upload tools (none are listed among siblings, but the description doesn't explicitly address uniqueness).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, prerequisites, or contextual constraints. It simply states what the tool does without any usage context or comparison to sibling tools like document search or project listing tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
