ACC MCP
Server Details
Autodesk Construction Cloud via APS — projects, issues, RFIs, documents, submittals.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 2.9/5 across 9 of 9 tools scored.
Each tool has a clearly distinct purpose targeting specific ACC resources and actions, with no overlap in functionality. For example, create_issue vs. update_issue, list_issues vs. list_rfis, and project_summary vs. list_projects all serve unique roles.
All tools follow a consistent 'acc_verb_noun' pattern with snake_case throughout, making them predictable and easy to parse. The naming convention is uniform across all nine tools, with no deviations in style or structure.
With 9 tools, the count is well-scoped for managing ACC/BIM 360 projects, covering core operations like listing, creating, updating, and searching. Each tool earns its place without feeling excessive or insufficient for the domain.
The tool set provides strong coverage for ACC project management, including CRUD for issues and RFIs, project listing, summaries, document search, and file uploads. A minor gap exists in updating RFIs (only create and list are available), but agents can work around this with the existing tools.
Available Tools
9 tools

acc_create_issue (Grade: C)
Create a new issue in an ACC project via APS Issues API
| Name | Required | Description | Default |
|---|---|---|---|
| title | Yes | | |
| due_date | No | | |
| priority | No | | |
| project_id | Yes | ACC project ID (b.xxxx format) | |
| assigned_to | No | | |
| description | Yes | | |
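Since the schema documents only project_id's format, here is a minimal sketch of the argument payload an agent might assemble from the table above. Every value except the `b.` prefix on project_id is an illustrative assumption, not documented behavior:

```python
# Hypothetical arguments for acc_create_issue. Only project_id's
# "b.xxxx" format is documented; all other values are assumptions.
args = {
    "project_id": "b.4f1e9a2c",         # ACC project ID (b.xxxx format)
    "title": "Cracked slab at grid C4",
    "description": "Hairline crack observed during level 2 walkthrough.",
    "priority": "high",                  # assumed enum value
    "due_date": "2025-07-01",            # assumed ISO 8601 date
    "assigned_to": "user@example.com",   # assumed user identifier
}

# The three required fields from the table must be present.
assert {"project_id", "title", "description"} <= args.keys()
```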
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Create' implies a write/mutation operation, the description doesn't disclose permission requirements, rate limits, whether the operation is idempotent, what happens on failure, or what the response contains. This is inadequate for a mutation tool with zero annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that states the core purpose without unnecessary words. It's appropriately sized and front-loaded with the essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with 6 parameters, 17% schema coverage, no annotations, and no output schema, the description is insufficient. It doesn't compensate for the missing behavioral context, parameter documentation, or output expectations that would help an agent use this tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is only 17% (only project_id has a description), and the description adds no parameter information beyond what's in the schema. It doesn't explain what 'title', 'description', 'due_date', 'priority', or 'assigned_to' mean in this context, nor does it provide format guidance for the string parameters beyond the enum for priority.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Create a new issue') and the target resource ('in an ACC project via APS Issues API'), providing a specific verb+resource combination. However, it doesn't differentiate this tool from its sibling 'acc_update_issue', which would be needed for a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'acc_update_issue' or 'acc_list_issues'. There's no mention of prerequisites, constraints, or appropriate contexts for creation versus other operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
acc_create_rfi (Grade: C)
Create a new RFI in an ACC project via APS RFIs API
| Name | Required | Description | Default |
|---|---|---|---|
| subject | Yes | | |
| priority | No | | |
| question | Yes | | |
| project_id | Yes | | |
| assigned_to | No | | |
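Because the schema documents no parameter formats, a hypothetical call sketch is the best an agent can do. The values below are illustrative assumptions, including the guess that project_id uses the same `b.xxxx` format as the issue tools:

```python
# Hypothetical arguments for acc_create_rfi; the schema documents no
# formats, so every value here is an assumption.
args = {
    "project_id": "b.4f1e9a2c",   # assumed b.xxxx format, as for issues
    "subject": "Beam-to-column connection detail at grid B2",
    "question": "Drawing S-201 conflicts with S-305. Which governs?",
    "priority": "normal",          # assumed enum value
}

assert {"project_id", "subject", "question"} <= args.keys()
```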
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It states this is a creation operation but doesn't mention what permissions are required, whether this is a write operation with side effects, what happens on success/failure, or any rate limits. The description is minimal and lacks important behavioral context for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: a single sentence that directly states the tool's purpose without any unnecessary words. It's appropriately sized for what it communicates, though it communicates very little beyond the basic action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with 5 parameters (3 required), 0% schema coverage, no annotations, and no output schema, the description is severely inadequate. It doesn't explain what an RFI is, what the creation process entails, what data is returned, or provide any context about the parameters or their relationships.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for 5 parameters, the description provides no information about any parameters. It doesn't explain what 'project_id', 'subject', 'question', 'priority', or 'assigned_to' mean or how they should be used. The description fails to compensate for the complete lack of schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Create a new RFI') and target resource ('in an ACC project via APS RFIs API'), providing a specific verb+resource combination. However, it doesn't differentiate this tool from its sibling 'acc_create_issue', which appears to be a similar creation operation for a different resource type.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance about when to use this tool versus alternatives like 'acc_create_issue' or 'acc_list_rfis'. There's no mention of prerequisites, appropriate contexts, or exclusions for RFI creation versus other operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
acc_list_issues (Grade: C)
List and filter issues from an ACC project
| Name | Required | Description | Default |
|---|---|---|---|
| status | No | | |
| priority | No | | |
| project_id | Yes | | |
| assigned_to | No | | |
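As a sketch under stated assumptions, an agent filtering issues might pass the arguments below. The filter values are guesses, since the schema documents no enums:

```python
# Hypothetical filter arguments for acc_list_issues. Only project_id
# is required; "open" and "high" are assumed, undocumented values.
args = {
    "project_id": "b.4f1e9a2c",
    "status": "open",
    "priority": "high",
}

# Optional filters can presumably be dropped to list every issue.
minimal = {"project_id": args["project_id"]}
assert set(minimal) == {"project_id"}
```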
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'List and filter' but does not describe key traits such as whether this is a read-only operation, if it requires authentication, pagination behavior, rate limits, or what the output format looks like. This leaves significant gaps for a tool with multiple parameters and no output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's function without unnecessary words. It is front-loaded with the core action ('List and filter issues'), making it easy to parse and understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (4 parameters, 1 required, no output schema, and no annotations), the description is incomplete. It lacks details on behavioral traits, parameter usage, output expectations, and differentiation from siblings. For a filtering tool with multiple inputs, this minimal description does not provide enough context for effective agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 0%, so the description must compensate for undocumented parameters. It only vaguely implies filtering via 'filter issues' without explaining what 'status', 'priority', or 'assigned_to' mean, their expected values, or how they interact. This adds minimal value beyond the bare schema, failing to adequately clarify parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('List and filter') and resource ('issues from an ACC project'), making the purpose understandable. It distinguishes from siblings like 'acc_create_issue' (creation) and 'acc_list_projects' (different resource), but does not explicitly differentiate from 'acc_update_issue' or 'acc_search_documents' in terms of scope or filtering capabilities, which keeps it from a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It does not mention prerequisites (e.g., needing a valid project_id), exclusions, or comparisons to siblings like 'acc_search_documents' for document-related queries or 'acc_update_issue' for modifications, leaving the agent to infer usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
acc_list_projects (Grade: B)
List all ACC/BIM 360 projects you have access to via APS Data Management
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden but provides minimal behavioral context. It implies a read-only operation by using 'List', but doesn't disclose details like pagination, rate limits, sorting, error conditions, or what 'access' entails (e.g., permissions). The description is too vague for a tool with potential complexity.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core action ('List all ACC/BIM 360 projects') and adds necessary context ('you have access to via APS Data Management'). There is zero waste or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations, no output schema, and 0 parameters, the description is minimally adequate but lacks depth. It states what the tool does but omits behavioral details like response format, error handling, or usage context. For a list operation in a complex system like ACC/BIM 360, more completeness would be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so no parameter documentation is needed. The description appropriately doesn't discuss parameters, maintaining focus on the tool's purpose without unnecessary detail.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('List') and resource ('ACC/BIM 360 projects'), specifying the scope ('you have access to via APS Data Management'). It distinguishes from siblings like acc_list_issues and acc_list_rfis by focusing on projects rather than issues or RFIs, but doesn't explicitly contrast them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. It doesn't mention prerequisites like authentication, compare with acc_project_summary for different project data, or indicate when listing projects is appropriate versus searching documents or creating issues.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
acc_list_rfis (Grade: C)
List and filter RFIs from an ACC project
| Name | Required | Description | Default |
|---|---|---|---|
| status | No | | |
| project_id | Yes | | |
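A minimal hypothetical call, assuming "open" is a valid status value (the schema documents none):

```python
# Hypothetical arguments for acc_list_rfis; "open" is an assumed
# status value, since the schema provides no enum.
args = {"project_id": "b.4f1e9a2c", "status": "open"}

assert "project_id" in args  # the only required parameter
```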
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool lists and filters RFIs, implying a read-only operation, but doesn't clarify if it's paginated, what the output format is, or if there are rate limits or authentication requirements. For a list/filter tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence: 'List and filter RFIs from an ACC project.' It is front-loaded with the core purpose, has zero wasted words, and is appropriately sized for a simple tool. Every part of the sentence contributes directly to understanding the tool's function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (a list/filter operation with 2 parameters), lack of annotations, and no output schema, the description is incomplete. It doesn't explain the return values, error handling, or how parameters interact. For a tool that likely returns a list of RFIs, more context on output structure or usage constraints would be necessary for effective agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 2 parameters with 0% description coverage, meaning no details are provided in the schema. The description mentions 'filter RFIs' but doesn't explain what 'status' or 'project_id' represent, their expected formats, or how filtering works. It adds minimal value beyond the schema, failing to compensate for the low coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'List and filter RFIs from an ACC project.' It specifies the verb ('List and filter'), resource ('RFIs'), and scope ('from an ACC project'), which is specific and actionable. However, it doesn't explicitly distinguish this tool from its sibling 'acc_list_issues' or 'acc_search_documents', which might also list project-related items, so it misses full differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when to prefer this over 'acc_list_issues' for RFIs specifically, or if there are prerequisites like project access. Without any context on exclusions or alternatives, users must infer usage from the tool name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
acc_project_summary (Grade: C)
Get full ACC project summary including hub, metadata, issue counts, and RFI counts
| Name | Required | Description | Default |
|---|---|---|---|
| hub_id | No | | |
| project_id | Yes | | |
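A hypothetical call sketch: since hub_id is optional, presumably the server resolves the hub when it is omitted, though the description does not say so. Both ID values below are assumptions:

```python
# Hypothetical arguments for acc_project_summary. hub_id is optional;
# the b.xxxx format for both IDs is assumed, not documented here.
args = {"project_id": "b.4f1e9a2c"}
with_hub = {**args, "hub_id": "b.9c8d7e6f"}

assert "project_id" in args
assert set(with_hub) == {"project_id", "hub_id"}
```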
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states the tool 'Get[s]' data, implying a read-only operation, but doesn't disclose behavioral traits like authentication needs, rate limits, error handling, or response format. For a tool with no annotation coverage, this leaves significant gaps in understanding its operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose without unnecessary words. Every part ('Get full ACC project summary including hub, metadata, issue counts, and RFI counts') contributes directly to understanding the tool's function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (2 parameters, no output schema, no annotations), the description is incomplete. It covers the purpose but lacks usage guidelines, parameter explanations, and behavioral details. Without an output schema, it should ideally hint at return values, but it doesn't, leaving the agent with insufficient context for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter details. The description mentions 'project summary' but doesn't explain the parameters (project_id and hub_id), their purposes, or relationships. It fails to compensate for the lack of schema documentation, leaving parameters largely undefined.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get') and the resource ('ACC project summary'), specifying what information is included (hub, metadata, issue counts, RFI counts). It distinguishes itself from siblings like 'acc_list_projects' by focusing on detailed summary rather than listing. However, it doesn't explicitly contrast with all siblings, keeping it from a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, such as needing a project_id, or compare it to siblings like 'acc_list_projects' for high-level overviews or 'acc_list_issues'/'acc_list_rfis' for detailed lists. Without such context, an agent might misuse it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
acc_search_documents (Grade: C)
Search drawings, specs, submittals and documents in ACC via APS Data Management
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | | |
| project_id | Yes | | |
| document_type | No | | |
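A hypothetical search call; the document_type value is a guess inferred from the description's "drawings, specs, submittals and documents" phrasing, not a documented enum:

```python
# Hypothetical arguments for acc_search_documents. "drawing" is an
# assumed document_type value inferred from the tool description.
args = {
    "project_id": "b.4f1e9a2c",
    "query": "level 2 mechanical",
    "document_type": "drawing",
}

assert {"project_id", "query"} <= args.keys()  # the required fields
```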
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states the tool searches documents but doesn't disclose behavioral traits like whether it's read-only (implied by 'Search'), pagination, rate limits, authentication needs, or error handling. This leaves significant gaps for a tool with 3 parameters and no output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero waste. It front-loads the key action and resources, making it easy to scan and understand quickly without unnecessary details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (3 parameters, 0% schema coverage, no annotations, no output schema), the description is incomplete. It doesn't explain return values, error cases, or provide enough context for effective use. For a search tool with undocumented parameters, more detail is needed to compensate for the lack of structured data.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate for all parameters. It mentions 'drawings, specs, submittals and documents' which loosely relates to 'document_type', but doesn't explain 'query' (search term), 'project_id' (context), or provide details on parameter usage, formats, or constraints. This adds minimal value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Search') and the target resources ('drawings, specs, submittals and documents'), with the context 'in ACC via APS Data Management' providing the platform. It distinguishes from siblings like 'acc_list_projects' or 'acc_upload_file' by focusing on search functionality, though it doesn't explicitly contrast with them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. It doesn't mention prerequisites, such as needing a project_id from 'acc_list_projects', or differentiate from other search-related tools (none listed in siblings). The description implies usage for document search but lacks explicit when/when-not instructions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
acc_update_issue (Grade: C)
Update an existing ACC issue (status, priority, assignee, description)
| Name | Required | Description | Default |
|---|---|---|---|
| status | No | | |
| issue_id | Yes | | |
| priority | No | | |
| project_id | Yes | | |
| assigned_to | No | | |
| description | No | | |
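A hypothetical update call, assuming partial-update semantics (only supplied fields change); the description does not confirm this, and all values below are illustrative:

```python
# Hypothetical arguments for acc_update_issue. Partial update is an
# assumption; the ID formats and "closed" status are also guesses.
args = {
    "project_id": "b.4f1e9a2c",
    "issue_id": "8f2c1d0e",
    "status": "closed",
    "assigned_to": "user@example.com",
}

assert {"project_id", "issue_id"} <= args.keys()  # the required fields
```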
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states this is an update operation, implying mutation, but doesn't disclose behavioral traits like required permissions, whether changes are reversible, rate limits, or what happens to unspecified fields. For a mutation tool with zero annotation coverage, this is a significant gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with no wasted words. It's front-loaded with the core action and key details, making it easy to scan and understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (mutation tool with 6 parameters, 0% schema coverage, no annotations, no output schema), the description is incomplete. It doesn't explain return values, error conditions, or provide enough context for safe and effective use. More details are needed for a tool of this nature.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It lists updatable fields ('status, priority, assignee, description'), which maps to 4 of the 6 parameters, but doesn't cover 'project_id' and 'issue_id' (the required ones). This adds some value but doesn't fully compensate for the coverage gap, especially for the required parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Update') and resource ('an existing ACC issue'), and specifies the fields that can be updated ('status, priority, assignee, description'). It distinguishes from sibling tools like 'acc_create_issue' by focusing on updates rather than creation, though it doesn't explicitly mention all siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an existing issue), exclusions, or comparisons to siblings like 'acc_list_issues' for viewing issues. Usage is implied but not explicitly stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
acc_upload_file (Grade: C)
Upload a file to an ACC project folder via APS Data Management
| Name | Required | Description | Default |
|---|---|---|---|
| file_url | Yes | | |
| file_name | Yes | | |
| project_id | Yes | | |
| folder_path | No | | |
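A hypothetical upload call. The file_url parameter suggests the server fetches the file from a URL rather than accepting raw bytes, but that is an inference from the name, not documented behavior; the folder_path style is likewise a guess:

```python
# Hypothetical arguments for acc_upload_file. The URL-fetch behavior
# and the folder_path convention are assumptions, not documented.
args = {
    "project_id": "b.4f1e9a2c",
    "file_url": "https://example.com/specs/mech-spec.pdf",
    "file_name": "mech-spec.pdf",
    "folder_path": "Project Files/Specifications",
}

assert {"project_id", "file_url", "file_name"} <= args.keys()
```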
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. While 'upload' implies a write/mutation operation, the description doesn't address critical behavioral aspects like authentication requirements, rate limits, file size constraints, error conditions, or what happens on success/failure. It mentions the APS Data Management system but doesn't explain its implications.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: a single sentence that directly states the tool's purpose without any fluff or unnecessary elaboration. It's appropriately sized and front-loaded with the core functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a file upload tool with 4 parameters, 0% schema coverage, no annotations, and no output schema, the description is severely incomplete. It doesn't address parameter meanings, behavioral expectations, error handling, or result format. While concise, it lacks the necessary detail for an agent to properly understand and invoke this tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and 4 parameters (3 required), the description provides no parameter information whatsoever. It doesn't explain what 'project_id', 'file_url', 'file_name', or optional 'folder_path' mean, their formats, constraints, or relationships. The description fails to compensate for the complete lack of schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('upload a file') and target resource ('to an ACC project folder via APS Data Management'), providing a specific verb+resource combination. However, it doesn't distinguish this tool from potential sibling upload tools (none are listed among siblings, but the description doesn't explicitly address uniqueness).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, prerequisites, or contextual constraints. It simply states what the tool does without any usage context or comparison to sibling tools like document search or project listing tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.