Navisworks MCP
Server Details
Navisworks coordination and clash detection via APS — reports, viewpoints, model objects.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score of 3/5 across all 5 tools (5 of 5 scored).
Each tool has a clearly distinct purpose: export_report generates reports, get_clashes detects clashes, get_viewpoints retrieves viewpoints, list_objects lists objects, and upload handles file uploads. There is no overlap in functionality, making it easy for an agent to select the correct tool.
All tool names follow a consistent 'nwd_' prefix with a verb_noun pattern (e.g., nwd_export_report, nwd_get_clashes). This predictable naming convention enhances readability and usability across the tool set.
With 5 tools, the server is well-scoped for Navisworks coordination tasks, covering key operations like uploading models, listing objects, detecting clashes, retrieving viewpoints, and exporting reports. Each tool earns its place without being overwhelming or insufficient.
The tool set provides complete coverage for Navisworks coordination workflows: upload files, list objects, detect clashes, retrieve viewpoints, and export reports. This covers the core CRUD-like lifecycle from model ingestion to analysis and reporting, with no obvious gaps.
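For orientation, here is a minimal end-to-end sketch of that lifecycle in TypeScript. It assumes a hypothetical, transport-agnostic callTool(name, args) helper wired to whatever MCP client you use; the file URL, filename, and category values are illustrative and not part of the server.

```typescript
// Hypothetical helper: wire this to your MCP client of choice (name and signature are illustrative).
declare function callTool(name: string, args: Record<string, unknown>): Promise<any>;

async function coordinationRun() {
  // 1. Ingest the federated model. nwd_upload is NOT idempotent -- keep and reuse model_id.
  const upload = await callTool("nwd_upload", {
    file_url: "https://example.com/TowerA_MEPStruct_R07.nwd", // illustrative URL
    file_name: "TowerA_MEPStruct_R07.nwd",
  });
  const model_id = upload.model_id;

  // 2. Wait for translation to finish (polling/backoff is sketched under nwd_export_report below).
  // 3. Analyse: clash screen, per-element queries, saved viewpoints.
  const clashes = await callTool("nwd_get_clashes", {
    model_id,
    category_a: "Ducts",
    category_b: "Structural Framing",
    clash_type: "hard",
  });
  const objects = await callTool("nwd_list_objects", { model_id, filter: "Structural Columns" });
  const views = await callTool("nwd_get_viewpoints", { model_id });

  // 4. Deliverable: the report call doubles as the status/summary export.
  const report = await callTool("nwd_export_report", { model_id, format: "json" });
  return { report, clashes, objects, views };
}
```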
Available Tools
5 tools

nwd_export_report (rated C)
Build a coordination report for a translated Navisworks model: translation status/progress, derivative outputs, available views (2D sheets / 3D viewables), total element count, and a per-category element breakdown. Doubles as the canonical way to poll translation status after nwd_upload.
When to use: after nwd_upload to check whether translation has completed before calling clash/object tools; at the end of a coordination session to generate a status snapshot for the weekly BIM report; when auditing a model revision to confirm expected element counts per discipline.
When NOT to use: do not use for a per-element property dump — use nwd_list_objects; do not use for clash results — use nwd_get_clashes.
APS scopes required: viewables:read data:read bucket:read (read-only).
Rate limits: APS default ~50 req/min per endpoint; this tool issues up to 4 sequential APS calls (manifest, metadata, properties — two with retry). When polling for translation completion, backoff: 5s, 10s, 30s, 60s, 120s — Model Derivative NWD translation typically completes in 1-10 min but large federated models can take 20+ min.
Errors: 401 APS token expired (retry); 403 missing scope (report); 404 URN not found (model was never uploaded or bucket TTL expired); 409 N/A; 422 translation failed permanently (inspect report.translation_status == "failed" and report.derivatives[].status); 429 rate limit (backoff); 5xx APS upstream (retry once). Property extraction may legitimately return 202 "isProcessing"; the tool handles the retry and then silently swallows the error so it can still return manifest/metadata (element_count will be 0 until the properties index is built).
Side effects: none. Pure read. Idempotent — report reflects current APS state. Logs usage to D1 usage_log.
| Name | Required | Description | Default |
|---|---|---|---|
| format | No | Output shape. "json" (default) returns the full structured report object including derivatives[], views[], category_breakdown. "summary" still returns the same keys; the parameter is preserved for forward compatibility and currently echoes back in the response for caller templating. | "json" |
| model_id | Yes | Base64url-encoded URN of the translated Navisworks model as returned by nwd_upload. Same value used by the other nwd_* tools. | |
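A sketch of the translation polling described above, reusing the hypothetical callTool helper from the overview; the translation_status and derivatives fields follow the notes above, everything else is illustrative.

```typescript
// Poll nwd_export_report until translation succeeds, fails permanently, or the backoff schedule ends.
async function waitForTranslation(model_id: string) {
  for (const delayMs of [5_000, 10_000, 30_000, 60_000, 120_000]) {
    const report = await callTool("nwd_export_report", { model_id, format: "json" });
    if (report.translation_status === "success") return report;
    if (report.translation_status === "failed") {
      // 422-class permanent failure: surface the per-derivative statuses instead of retrying.
      throw new Error("Translation failed: " + JSON.stringify(report.derivatives));
    }
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error("Translation still in progress after the full backoff schedule");
}
```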
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool generates a report but doesn't describe what that entails—e.g., whether it's a read-only operation, if it requires specific permissions, potential performance impacts (like being resource-intensive), or the output format beyond the schema's enum. For a tool with no annotation coverage, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's front-loaded with the key action ('Generate a coordination report') and specifies the content clearly, making it easy to parse. Every part of the sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of generating a report with multiple data types (clash summary, element counts, model stats), no annotations, and no output schema, the description is insufficient. It doesn't explain what the report output looks like, how to interpret it, or any behavioral aspects like error handling. For a tool with rich potential output and no structured support, more detail is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters ('format' and 'model_id') with descriptions. The description adds no additional meaning beyond implying that 'model_id' relates to a model for coordination reporting, but it doesn't clarify parameter interactions (e.g., how format affects output) or usage details. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Generate a coordination report') and specifies the content ('clash summary, element counts, and model stats'), which distinguishes it from siblings like 'nwd_get_clashes' (which likely returns raw clash data) and 'nwd_list_objects' (which likely lists objects without report generation). However, it doesn't explicitly mention the resource (e.g., a BIM model) or differentiate from 'nwd_upload' in terms of output format, keeping it from a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing a valid model_id), exclusions (e.g., not for real-time data), or comparisons to siblings like 'nwd_get_clashes' for detailed clash analysis versus summary reports. This leaves the agent without context for tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
nwd_get_clashes (rated B)
Detect geometric/logical clashes between two element sets in an already-translated Navisworks model. Uses APS Model Derivative property extraction + same-level proximity heuristics, optionally augmented by VDC rules stored in D1 (table vdc_rules).
When to use: when coordinating federated MEP + structural + architectural models for clash review before issuing an RFI; e.g. "find duct vs. beam clashes on Level 3 before the Wed coordination meeting" or "sanity-check the latest MEP revision against structure before releasing for fabrication." Pair with nwd_export_report to produce a deliverable.
When NOT to use: do not call on a model whose translation is still "inprogress" — call nwd_export_report first and confirm translation_status == "success"; not a substitute for Navisworks Manage Clash Detective for final sign-off (this is a coordination-stage screen, not a regulatory clash report).
APS scopes required: viewables:read data:read (read-only — does not create anything in APS).
Rate limits: APS default ~50 req/min per endpoint; Model Derivative metadata/properties endpoints are the bottleneck. Properties response may return 202 "isProcessing" on first call — the worker retries once after 3s. For very large models (>50k elements) the worker caps analysis at 50x50 element cross-compare and 100 reported clashes; re-run with tighter category_a/category_b filters for exhaustive coverage.
Errors: 401 APS token expired (transient, retry); 403 missing viewables:read/data:read scope (report, do not retry); 404 URN not found or not translated (prompt user to re-run nwd_upload); 409 not applicable; 422 model translated but property index unavailable — typically means source NW version unsupported or translation partially failed (supported: Navisworks 2015+); 429 rate limit (backoff); 5xx APS upstream (retry once). If properties.data.collection is empty the tool returns clash_count: 0 with a note rather than erroring — the agent should treat that as "model not ready" and retry later.
Side effects: none in APS. Reads vdc_rules from D1 when both categories are supplied. Logs usage to D1 usage_log. Idempotent — same inputs on a stable model yield the same clash list.
| Name | Required | Description | Default |
|---|---|---|---|
| model_id | Yes | Base64url-encoded URN of the translated Navisworks model, exactly as returned by nwd_upload.model_id (or the urn field). Do NOT re-encode. Starts with "dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6" for OSS-derived URNs. | |
| category_a | No | First element-set filter, case-insensitive substring match against each element's Revit/Navisworks Category. Common values: "Mechanical", "Ducts", "Pipes", "Plumbing", "Electrical", "Structural Framing", "Structural Columns", "Walls", "Floors", "Ceilings". Omit (with category_b) to auto-split MEP vs. structural. | |
| category_b | No | Second element-set filter, same semantics as category_a. Must be supplied together with category_a to take effect — supplying only one is ignored in favor of auto-split. Provide both to also look up matching VDC rules from D1. | |
| clash_type | No | Clash severity class. "hard" = solid-solid interference (e.g. duct through beam), returns severity:critical. "soft" = clearance/tolerance violations (e.g. MEP within 50mm of structure), returns severity:warning. "all" = both. Defaults to "all" when omitted. | "all" |
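A sketch of the category-filtered clash screen and the "model not ready" retry described above, again assuming the hypothetical callTool helper; clash_count follows the error notes, while the note check, retry delay, and categories are illustrative.

```typescript
// Screen duct-vs-beam hard clashes. If the property index is not ready yet, the tool
// returns clash_count: 0 with a note instead of erroring, so retry after a delay.
async function screenClashes(model_id: string) {
  const args = {
    model_id,
    category_a: "Ducts",               // both category filters must be supplied together,
    category_b: "Structural Framing",  // otherwise the tool auto-splits MEP vs. structural
    clash_type: "hard",
  };
  let result = await callTool("nwd_get_clashes", args);
  if (result.clash_count === 0 && result.note) {
    await new Promise((resolve) => setTimeout(resolve, 30_000));
    result = await callTool("nwd_get_clashes", args);
  }
  return result;
}
```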
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. While it mentions the detection method ('bounding box overlap + D1 VDC rules'), it lacks critical details such as what the output looks like (e.g., clash list, severity levels), whether it's a read-only operation, performance considerations (e.g., processing time for large models), or error handling. For a tool with no annotation coverage, this leaves significant gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose without unnecessary words. Every element ('Detect clashes', 'object groups', 'translated NWD model', 'bounding box overlap + D1 VDC rules') earns its place by contributing essential context, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of clash detection (involving multiple parameters and no output schema), the description is incomplete. It lacks details on output format, error conditions, or performance implications, which are crucial for an agent to use the tool effectively. With no annotations and no output schema, the description should provide more behavioral context to compensate, but it does not.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly (e.g., model_id as 'Base64-encoded URN', clash_type enum values). The description adds no additional meaning beyond what the schema provides, such as explaining how category filters interact or what 'hard' vs. 'soft' clashes entail. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Detect clashes') on a specific resource ('object groups in a translated NWD model') using specific methods ('bounding box overlap + D1 VDC rules'). It distinguishes itself from siblings like nwd_export_report or nwd_list_objects by focusing on clash detection rather than reporting, viewpoint retrieval, or object listing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing a translated model), exclusions (e.g., when not to use it), or comparisons with sibling tools. The agent must infer usage solely from the purpose statement without explicit contextual direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
nwd_get_viewpoints (rated C)
List saved viewpoints / camera positions and top-level view containers for a translated Navisworks model. Pulls the metadata view list and enriches each 3D view with its first two levels of the object tree (viewpoint folders typically live there in NWD files).
When to use: when preparing a coordination meeting and you need a quick index of every saved viewpoint (e.g. "Level 3 Mech Room", "Clash - duct vs beam gridline C-4") to drive screenshots or BCF-style issues; when an agent needs to deep-link a 2D sheet or 3D camera into the APS Viewer.
When NOT to use: does not return camera matrices (position/target/up vectors) — APS Model Derivative does not expose those from the NWD viewpoint XML; for full camera data the source NWD must be opened in Navisworks Manage.
APS scopes required: viewables:read data:read.
Rate limits: APS default ~50 req/min; this tool fans out one object-tree call per 3D view (capped implicitly by metadata view count, usually <5). For federated models with many sheets this can approach the per-minute quota — cache the result.
Errors: 401 token (retry); 403 scope (report); 404 URN not found / translation incomplete; 409 N/A; 422 model returned empty metadata (returns viewpoint_count:0 rather than throwing — agent should verify translation via nwd_export_report); 429 rate limit (backoff); 5xx APS upstream (retry once). Per-view object-tree failures are swallowed so the overall call still returns the metadata-level view list.
Side effects: none. Pure read. Idempotent. Logs usage to D1 usage_log. Results are capped at 100 viewpoint entries.
| Name | Required | Description | Default |
|---|---|---|---|
| model_id | Yes | Base64url-encoded URN of the translated Navisworks model as returned by nwd_upload. | |
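Because the call fans out one object-tree request per 3D view and the notes above recommend caching, a minimal in-memory memoization sketch follows (same hypothetical callTool helper; the cache shape is illustrative).

```typescript
// Simple in-memory cache keyed by model_id so repeated viewpoint lookups don't re-hit APS.
const viewpointCache = new Map<string, any>();

async function getViewpointsCached(model_id: string) {
  if (!viewpointCache.has(model_id)) {
    viewpointCache.set(model_id, await callTool("nwd_get_viewpoints", { model_id }));
  }
  return viewpointCache.get(model_id);
}
```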
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states a read operation ('Retrieve'), implying it's likely non-destructive, but doesn't specify permissions, rate limits, or response format. For a tool with zero annotation coverage, this is a significant gap in transparency about how it behaves beyond the basic action.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence that efficiently conveys the core purpose without unnecessary words. It's front-loaded and every part earns its place, making it highly concise and well-structured for quick understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (retrieving metadata with no output schema) and lack of annotations, the description is incomplete. It doesn't cover what the return values look like (e.g., format of viewpoints), potential errors, or behavioral constraints. For a tool with no structured output information, this leaves significant gaps for an agent to use it effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the single parameter 'model_id' documented as a 'Base64-encoded URN'. The description doesn't add any extra meaning about parameters beyond what the schema provides, such as explaining what a 'saved viewpoint' entails. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Retrieve') and the resources ('saved viewpoints and camera positions from the model metadata'), making the purpose immediately understandable. However, it doesn't explicitly differentiate from sibling tools like 'nwd_list_objects' or 'nwd_get_clashes', which might also retrieve model-related data, so it doesn't reach the highest score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, context, or exclusions, such as whether it's for specific model types or when other tools like 'nwd_list_objects' might be more appropriate. This leaves the agent with minimal usage direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
nwd_list_objects (rated C)
List elements (objects) in the translated Navisworks model with their objectid, name, externalId, and full property bag, optionally filtered by a case-insensitive keyword matched against name and Category.
When to use: when answering "how many VAV boxes are on Level 3?", "list every steel column with mark C-*", or any per-element question; when dumping a quick takeoff of a discipline before handing off to an estimator; when an agent needs externalIds to cross-reference with a Revit or ACC issue.
When NOT to use: not for clash detection (use nwd_get_clashes); not for camera/viewpoint data (use nwd_get_viewpoints); not for full-model exports — results are capped at 100 objects per call, so use the filter argument to narrow.
APS scopes required: viewables:read data:read.
Rate limits: APS default ~50 req/min; two Model Derivative calls per invocation (metadata guid + properties). Properties endpoint may 202 "isProcessing" on first call after translation — the worker retries once after 3s. For very large models the properties payload can be tens of MB; expect higher latency.
Errors: 401 token (retry); 403 scope (report); 404 URN not found; 409 N/A; 422 property index not yet built — returns object_count:0 (poll via nwd_export_report); 429 rate limit (backoff); 5xx APS upstream (retry once). If property collection is legitimately empty the tool returns success with object_count:0 and an empty objects array.
Side effects: none. Pure read. Idempotent. Logs usage to D1 usage_log. Response includes a note field when the unfiltered collection exceeds the 100-object cap.
| Name | Required | Description | Default |
|---|---|---|---|
| filter | No | Optional case-insensitive substring. Matches if present in the element's name OR its Category property. Use Revit category names ("Ducts", "Pipes", "Structural Columns", "Walls") or mark/type fragments ("VAV", "W12x", "L3-"). Omit to return the first 100 elements of the model in property-collection order. | |
| model_id | Yes | Base64url-encoded URN of the translated Navisworks model as returned by nwd_upload. | |
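A sketch of the filter-narrowing strategy from the notes above: results are capped at 100 objects and no pagination is exposed, so a capped result (signalled by the note field) is handled by tightening the keyword rather than paging. The filter fragments and the name-matching regex are illustrative.

```typescript
// Answer "how many VAV boxes are on Level 3?": filter first, narrow if the 100-object cap is hit.
async function countVavBoxesOnLevel3(model_id: string) {
  let result = await callTool("nwd_list_objects", { model_id, filter: "VAV" });
  if (result.note) {
    // Capped at 100: re-run with a tighter fragment (here an assumed "L3-" naming prefix).
    result = await callTool("nwd_list_objects", { model_id, filter: "L3-" });
    return result.objects.filter((o: any) => /VAV/i.test(o.name)).length;
  }
  return result.objects.filter((o: any) => /L3-|Level 3/i.test(o.name)).length;
}
```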
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It states this is a list operation, implying read-only behavior, but doesn't address critical aspects like pagination, rate limits, authentication requirements, error conditions, or what properties are included in the output. The description is too minimal for a tool with potential complexity.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that states the core purpose upfront. There's no wasted verbiage, though it could benefit from slightly more detail given the lack of annotations and output schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations, no output schema, and a tool that likely returns complex object data, the description is insufficient. It doesn't explain what 'model objects' are, what properties are returned, the format of results, or any behavioral constraints. The description leaves too many open questions for effective tool use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description adds marginal value by mentioning that filtering is by 'name/category', which slightly expands on the schema's 'keyword to filter objects' description. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('List') and resource ('model objects and their properties'), and mentions optional filtering. It doesn't explicitly differentiate from sibling tools, but the action is distinct from export, clash detection, viewpoint retrieval, and upload operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions optional filtering but doesn't specify scenarios where filtering is appropriate or how this tool relates to sibling tools like nwd_get_viewpoints for similar data retrieval.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
nwd_upload (rated C)
Upload a Navisworks file (.nwd/.nwf/.nwc) to Autodesk Platform Services (APS) Object Storage and start an SVF2 translation job so the model becomes queryable by the other nwd_* tools.
When to use: at the start of a coordination workflow — e.g. the GC hands off a federated NWD combining MEP + structural + architectural models and the agent needs to stage it for clash review before issuing an RFI, or when a subcontractor publishes a new NWC model revision that must be ingested for weekly BIM coordination. Always the first call in a session for any new model.
When NOT to use: do not call for already-translated models (re-use the returned model_id/URN); do not use for raw Revit .rvt, IFC, or DWG — those go through a different MCP.
APS scopes required: data:read data:write data:create bucket:read bucket:create viewables:read. The worker acquires a 2-legged client-credentials token; the caller does not supply one.
Rate limits: APS default ~50 req/min per app per endpoint; Model Derivative translation job submission ~60 req/min. NWD bundles can be large (hundreds of MB); the upload PUT and translation can take minutes — translation is asynchronous, poll via nwd_export_report (manifest) with exponential backoff (e.g. 5s, 10s, 30s, 60s) before calling clash/properties tools.
Errors the agent should handle: 401 invalid/expired APS token (surface as auth failure — do not retry with same creds); 403 missing scope (report scope gap, do not retry); 404 source file_url unreachable (ask user for a fresh public URL); 409 bucket already exists (handled internally, safe to ignore); 413/422 unsupported Navisworks version — APS Model Derivative supports NWD/NWC from Navisworks 2015 and later (state the unsupported version to the user); 429 rate limited (exponential backoff, retry); 5xx APS upstream (retry once, then surface).
Side effects: creates a fresh transient OSS bucket (scanbim-nwd-, 24h TTL) and uploads the file as an object, then POSTs a Model Derivative translation job. NOT idempotent — each call creates a new bucket/URN even for the same file_url. Logs usage to the D1 usage_log table.
| Name | Required | Description | Default |
|---|---|---|---|
| file_url | Yes | Publicly reachable HTTPS URL from which the worker will GET the Navisworks file bytes. Must return the raw binary (not an HTML landing page). Pre-signed S3 URLs, ACC/BIM360 signed-resource URLs, and Cloudflare R2 public URLs all work. Max practical size ~4 GB (Cloudflare Workers fetch body limit applies). | |
| file_name | Yes | Logical filename for the OSS object. Must end in .nwd, .nwf, or .nwc (case-insensitive) so APS picks the correct translator. Avoid spaces and non-ASCII — the worker sanitizes to [A-Za-z0-9._-]. Follow ScanBIM convention: <project>_<discipline>_<rev>.nwd (e.g. TowerA_MEPStruct_R07.nwd). | |
| project_id | No | Optional free-form project label stored alongside the upload record for caller-side correlation. Not sent to APS. Typical values: ACC project GUID, internal job number, or short slug. | |
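Putting the upload together with the polling helper sketched under nwd_export_report above: the URL, filename, and project label are illustrative, and waitForTranslation is the assumed helper from that earlier sketch.

```typescript
// Stage a new federated NWD. NOT idempotent: keep the returned model_id and reuse it for
// every later nwd_* call instead of uploading the same file again.
async function stageModel(fileUrl: string) {
  const upload = await callTool("nwd_upload", {
    file_url: fileUrl,                      // must serve the raw binary over HTTPS
    file_name: "TowerA_MEPStruct_R07.nwd",  // <project>_<discipline>_<rev>.nwd convention
    project_id: "TowerA",                   // optional caller-side label, not sent to APS
  });
  // Translation is asynchronous: wait before calling clash / object / viewpoint tools.
  const report = await waitForTranslation(upload.model_id);
  return { model_id: upload.model_id, report };
}
```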
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions uploading and translating but doesn't describe what happens during translation, whether it's a long-running process, if it requires authentication, rate limits, error conditions, or what 'coordination viewing' entails. This leaves significant gaps for a tool that likely involves file processing and transformation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that gets straight to the point without unnecessary words. It's appropriately sized for a tool with three parameters and no complex behavioral nuances. However, it could be slightly more structured by separating the upload and translation aspects.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of file upload and translation operations with no annotations and no output schema, the description is insufficient. It doesn't explain what happens after upload, what 'translation' means, what format the output is in, or any error handling. For a tool that likely involves significant processing, this leaves too many unknowns.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description doesn't add any additional meaning about parameters beyond what's in the schema, such as file format requirements, URL accessibility constraints, or project_id usage. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('upload') and resource ('NWD/NWC file to APS') with the purpose of translation for coordination viewing. It distinguishes from sibling tools which focus on export, clash detection, viewpoint retrieval, and object listing rather than file upload. However, it doesn't specify what 'APS' stands for or the exact nature of 'coordination viewing'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description doesn't mention prerequisites, constraints, or relationships with sibling tools like nwd_export_report or nwd_list_objects. It simply states what the tool does without contextual usage information.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!