cupix-compass
Server Details
Explore and analyze Cupix construction site data: 360 images, progress, and insights.
- Status: Healthy
- Transport: Streamable HTTP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.6/5 across 10 of 10 tools scored. Lowest: 2.4/5.
Five tools named test1-test5 have identical descriptions to tradeVisibilityAnalyzer, making them indistinguishable from each other and from that tool. An agent cannot reliably choose between them.
Most tools follow camelCase (e.g., listFacilities, projectDataExplorer), but the test1-test5 tools use a numbered naming scheme that breaks this convention, so there is no consistent pattern across the set.
With 10 tools, the count is not excessive, but 5 are duplicate stubs, leaving only 6 distinct functionalities. This suggests the server is underdeveloped or misconfigured for its apparent scope.
The distinct tools cover facility listing, natural-language querying, image search, insight analysis, and trade visibility, which is reasonable coverage for a read-only analysis domain. The absence of create or update operations is acceptable, since the server appears to be analytical rather than administrative.
Available Tools
10 tools
listFacilities (grade A)
List construction facilities (projects) the user can access within a team.
| Name | Required | Description | Default |
|---|---|---|---|
| team_domain | Yes | Team domain returned by list_teams(). Optional; if omitted, |
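Since the server is exposed over Streamable HTTP, an agent invokes each tool with a standard MCP JSON-RPC 2.0 tools/call request. A minimal sketch for this tool, assuming a hypothetical team domain (acme is a placeholder, not a real value):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "listFacilities",
    "arguments": { "team_domain": "acme" }
  }
}
```

The facilities returned here supply the facility_key values that the other tools in this set require.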
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint: true, so the description's addition of 'available in a specific team' is useful context but does not significantly expand on behavioral traits beyond what annotations convey.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence that efficiently conveys purpose and scope with no unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list tool with readOnlyHint and no output schema, the description covers the basic function but omits details on return format or pagination, leaving room for improvement.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description reinforces the schema's team_domain parameter by connecting it to the tool's purpose, adding meaning beyond the schema's example values, which is valuable given 100% schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'List construction facilities (projects)' with a clear verb and resource, and scopes it to 'a specific team', distinguishing it from sibling tools like listTeams.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is used when you need facilities for a known team domain, but does not explicitly compare to alternatives or state when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
projectDataExplorer (grade A)
Query structured project data using natural language (Text-to-SQL).
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Natural language question (pass as-is, no SQL syntax) | |
| team_domain | Yes | Team domain (REQUIRED) | |
| facility_key | Yes | Facility key (REQUIRED - from list_facilities()) |
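The query parameter is passed verbatim, with no SQL syntax. A sketch of a call, where acme and FAC-123 are placeholder identifiers (a real facility_key would come from a prior listFacilities call):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "projectDataExplorer",
    "arguments": {
      "query": "How many workareas were captured last week?",
      "team_domain": "acme",
      "facility_key": "FAC-123"
    }
  }
}
```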
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint, and the description's verb 'Query' aligns with it; beyond that, however, no additional behavioral traits (e.g., SQL capabilities, error handling) are disclosed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with purpose, no wasted words; highly concise and clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema, the description omits the return format and any limitations of the Text-to-SQL query; the tool is simple, but more detail on the expected output would help.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions for all 3 parameters; the description adds no new meaning beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool queries structured project data via natural language (Text-to-SQL), distinguishing it from sibling listing tools like listFacilities and listTeams.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance is given on when to use this tool versus alternatives like listFacilities or siteinsightsAnalyzer; context from sibling tools helps, but the description lacks a direct comparison or exclusion rules.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
siteImageExplorer (grade B)
Search panoramic 360-degree site images by visual content analysis.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | Yes | Maximum number of results (default: 10) | |
| query | Yes | Keywords or phrases describing what to find in site images | |
| end_date | Yes | End date filter, YYYY-MM-DD (omit if no date context) | |
| start_date | Yes | Start date filter, YYYY-MM-DD (omit if no date context) | |
| team_domain | Yes | Team domain (REQUIRED) | |
| facility_key | Yes | Facility key (REQUIRED - from list_facilities()) |
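Per the parameter notes, dates use YYYY-MM-DD format and should be left out when the request has no date context. A sketch with placeholder values throughout (the query string, dates, team domain, and facility key are all illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "siteImageExplorer",
    "arguments": {
      "query": "exposed rebar near stairwell",
      "start_date": "2024-05-01",
      "end_date": "2024-05-31",
      "limit": 10,
      "team_domain": "acme",
      "facility_key": "FAC-123"
    }
  }
}
```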
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true. The description adds a phrase about 'visual content analysis' but does not disclose additional behavioral traits like rate limits, pagination, or error handling. Given the annotations, the bar is lower, but the description still lacks helpful context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence with the verb 'Search' front-loaded. It is extremely concise and contains no unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema, the description should explain what the tool returns (e.g., image URLs, metadata). It does not mention the response format, pagination, or behavior when no results are found, making it incomplete for a search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear parameter descriptions. The description adds a minor nuance ('visual content analysis') but does not significantly enhance understanding beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's function: searching panoramic 360-degree site images by visual content. It distinguishes itself from siblings like listFacilities (listing facilities) and siteinsightsAnalyzer (analyzing insights) by focusing on image search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives such as siteinsightsAnalyzer or projectDataExplorer. The description only states what it does without clarifying context or conditions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
siteinsightsAnalyzer (grade C)
Analyze SiteInsights (SI) progress — completion rates, delays, and forecasts.
| Name | Required | Description | Default |
|---|---|---|---|
| team_domain | Yes | Team domain (REQUIRED) | |
| facility_key | Yes | Facility key (REQUIRED - from list_facilities()) |
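Both parameters are keys obtained from other tools, so a typical sequence is listFacilities followed by this call. A sketch with the same placeholder identifiers as above:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "siteinsightsAnalyzer",
    "arguments": { "team_domain": "acme", "facility_key": "FAC-123" }
  }
}
```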
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations include readOnlyHint: true, so the tool is clearly read-only. The description adds no behavioral context beyond that, such as what analysis is performed or whether there are side effects, though it does not contradict the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely short (one sentence), but it is under-specified rather than concise. It fails to provide essential details about the tool's function, making it insufficient for effective use. Better front-loading with more context would improve it.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool has no output schema and only two parameters, the description should explain what analysis is performed and what the output looks like, but it does not. The sibling tools are more descriptive, leaving this one too thin for an agent to know when to invoke it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with both parameters having clear descriptions (e.g., 'Facility key from listFacilities'). The description does not add any additional meaning or context beyond the schema, which is adequate. Baseline of 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Analyze construction progress with SiteInsights', which gives a general sense of the tool's purpose but lacks specificity. It doesn't differentiate from sibling tools like projectDataExplorer or siteImageExplorer, which also might analyze data. The verb 'analyze' is vague and could be more precise.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like listFacilities or siteImageExplorer. It doesn't mention prerequisites, such as needing facility_key from listFacilities, though the schema does. No exclusions or context are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
test1 (grade A)
Generate trade visibility report — which trades are visible/not visible per workarea. Returns a complete, self-contained pivot table per level showing trade detection status (visible/not_visible/not_detected) for every workarea. No follow-up tool calls needed.
| Name | Required | Description | Default |
|---|---|---|---|
| team_domain | Yes | Team domain (REQUIRED) | |
| facility_key | Yes | Facility key (REQUIRED - from list_facilities()) |
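The tool publishes no output schema, so the exact response format is unknown. Purely as an illustration of what a self-contained 'pivot table per level' with the three documented detection states might look like, every field name below is hypothetical; only the visible/not_visible/not_detected values come from the description:

```json
{
  "level": "Level 2",
  "workareas": [
    {
      "workarea": "Zone A",
      "trades": {
        "drywall": "visible",
        "electrical": "not_visible",
        "plumbing": "not_detected"
      }
    }
  ]
}
```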
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral transparency. It clearly states the tool returns a report and requires no follow-up, implying it is a read-only, non-destructive operation. However, it does not disclose potential side effects, authorization needs, or rate limits, which would be needed for a 5.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is exceptionally concise: two sentences that front-load the core purpose and immediately clarify the output and lack of follow-up. Every sentence adds value with no redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (2 required params, no enums, no nested objects) and no output schema, the description adequately explains the return value as a pivot table with trade detection status. However, it could be more specific about the format or any limitations, leaving some room for improvement.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with descriptions for both parameters (team_domain and facility_key). The tool description does not add any additional meaning or syntax guidance beyond the schema, so the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool generates a trade visibility report showing visible/not visible trades per workarea. It specifies the output is a complete, self-contained pivot table per level, distinguishing it from potential sibling tools like tradeVisibilityAnalyzer by emphasizing self-containment and no follow-up calls.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for use: it is the final step for obtaining a visibility report, as indicated by 'No follow-up tool calls needed.' However, it does not explicitly mention when not to use it or compare to alternatives like tradeVisibilityAnalyzer, so it falls short of a 5.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
test2 (grade A)
Generate trade visibility report — which trades are visible/not visible per workarea. Returns a complete, self-contained pivot table per level showing trade detection status (visible/not_visible/not_detected) for every workarea. No follow-up tool calls needed.
| Name | Required | Description | Default |
|---|---|---|---|
| team_domain | Yes | Team domain (REQUIRED) | |
| facility_key | Yes | Facility key (REQUIRED - from list_facilities()) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses that the tool generates a report, returns a self-contained pivot table, and needs no follow-up calls. It does not explicitly state that it is read-only, but its nature implies no side effects. It lacks details on authentication and rate limits, which is acceptable for a read-only report tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences that are front-loaded with purpose, followed by emphasis on output completeness. No redundant or unnecessary words. Each sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the return value in detail (a pivot table with a status per workarea) and states that the result is complete. However, it lacks any explanation of how the parameters influence the report or of constraints such as date ranges. With no output schema, the description does a good job but could be more complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with descriptions already provided. The tool description does not add meaning beyond the schema; it does not explain how 'team_domain' and 'facility_key' affect the report or relate to workareas. The baseline score of 3 is appropriate, as the description adds no extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the clear verb 'Generate' and the specific resource 'trade visibility report'. It explicitly states what the tool does: it shows which trades are visible/not visible per workarea and returns a pivot table. It differentiates itself from the sibling 'tradeVisibilityAnalyzer' by emphasizing completeness ('No follow-up tool calls needed').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is used to generate a complete report without extra steps, but it does not explicitly state when to use it versus alternatives like 'tradeVisibilityAnalyzer', and it offers no exclusion criteria or when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
test3 (grade A)
Generate trade visibility report — which trades are visible/not visible per workarea. Returns a complete, self-contained pivot table per level showing trade detection status (visible/not_visible/not_detected) for every workarea. No follow-up tool calls needed.
| Name | Required | Description | Default |
|---|---|---|---|
| team_domain | Yes | Team domain (REQUIRED) | |
| facility_key | Yes | Facility key (REQUIRED - from list_facilities()) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fully discloses behavior: it returns a complete, self-contained pivot table and requires no follow-up calls. It does not mention any destructive actions or permission requirements, but for a read-only reporting tool, this is sufficient context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no wasted words. The purpose is front-loaded, and the behavioral note about no follow-up calls is concisely appended. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately explains the return value (pivot table with trade detection status per workarea) and assures completeness. It could mention potential edge cases or data limits, but for a straightforward reporting tool, it is sufficiently complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with descriptions for both parameters. The description adds value by explicitly stating that facility_key comes from list_facilities(), which guides the user on how to obtain it—beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs and resources: 'Generate trade visibility report' and details the output as a 'complete, self-contained pivot table per level showing trade detection status'. It distinguishes from siblings by emphasizing no follow-up calls needed, which contrasts with tools like tradeVisibilityAnalyzer that may require iterative steps.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by stating 'No follow-up tool calls needed', suggesting it's a final step, but lacks explicit guidance on when to use this tool versus alternatives (e.g., tradeVisibilityAnalyzer) or when not to use it. No exclusion criteria are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
test4 (grade A)
Generate trade visibility report — which trades are visible/not visible per workarea. Returns a complete, self-contained pivot table per level showing trade detection status (visible/not_visible/not_detected) for every workarea. No follow-up tool calls needed.
| Name | Required | Description | Default |
|---|---|---|---|
| team_domain | Yes | Team domain (Optional) | |
| facility_key | Yes | Facility key (Optional - from list_facilities()) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses that the tool returns a pivot table and is self-contained, but does not cover side effects, permissions, or constraints. With no annotations, it carries the burden, but the behavior is clear for a reporting tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences long, front-loaded with the core purpose, and contains no extraneous information. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (2 required string parameters, no output schema), the description adequately explains the output format and completeness. However, 'per level' is slightly ambiguous and could be clarified.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for both parameters. The description does not add any additional meaning beyond what the schema provides, so baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly specifies the verb (Generate), the resource (trade visibility report), and the content (which trades are visible/not visible per workarea). It also describes the output format as a self-contained pivot table, distinguishing it from generic report tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by stating 'No follow-up tool calls needed,' but it does not explicitly mention when to use this tool versus alternatives like tradeVisibilityAnalyzer. No exclusions or prerequisites are stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
test5 (grade A)
Generate trade visibility report — which trades are visible/not visible per workarea. Returns a complete, self-contained pivot table per level showing trade detection status (visible/not_visible/not_detected) for every workarea. No follow-up tool calls needed.
| Name | Required | Description | Default |
|---|---|---|---|
| team_domain | Yes | Team domain (Optional)sdfsjlkdfsdlfjsdklfjlsdaklfjasdlkfjkasdlfkj;;sdfkjlsdfjkl | |
| facility_key | Yes | Facility key (Optional - from list_facilities())sdvbsvdsvsd |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears full burden. It discloses that the tool returns a 'complete, self-contained pivot table' and requires no follow-up, but does not cover side effects, auth needs, or other behavioral traits beyond output characteristics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, each earning its place: first states the action and scope, second describes output and independence. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 2 parameters and no output schema, the description explains the return value (pivot table) and its self-contained nature. It lacks parameter details but is otherwise complete for core functionality.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 50% description coverage, but the existing description is garbled. The tool description does not elaborate on parameters (team_domain, facility_key), leaving their semantics unclear.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Generate' and the resource 'trade visibility report', and specifies it shows which trades are visible/not visible per workarea. It distinguishes from sibling tool 'tradeVisibilityAnalyzer' by emphasizing self-containment and no follow-up needed.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for generating a visibility report and states no follow-up needed, but does not provide explicit when-to-use or when-not-to-use guidance or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tradeVisibilityAnalyzer (grade C)
Generate trade visibility report — which trades are visible/not visible per workarea.
| Name | Required | Description | Default |
|---|---|---|---|
| team_domain | Yes | Team domain (REQUIRED) | |
| facility_key | Yes | Facility key (REQUIRED - from list_facilities()) |
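Because this tool and the five test stubs share the same two-parameter schema, their invocations differ only in the name field, which is exactly why an agent cannot tell them apart. A sketch, again with placeholder identifiers:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "tradeVisibilityAnalyzer",
    "arguments": { "team_domain": "acme", "facility_key": "FAC-123" }
  }
}
```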
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behavioral traits. It only states the action without mentioning side effects, safety, or return format. The agent cannot determine if this is a read-only operation or if it has performance implications.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that is front-loaded with the verb 'Generate'. However, it is overly terse and could be slightly expanded to improve clarity without losing conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description fails to mention the required facility input, and the phrase 'per workarea' is inconsistent because no workarea parameter exists. Given the simple input schema, the description should integrate the facility context to avoid confusion.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Both parameters are fully documented in the schema (100% coverage), but the description adds no additional meaning or context about them, such as how facility_key is obtained or what role it plays in the report.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it generates a trade visibility report specifying which trades are visible/not visible per workarea, distinguishing it from sibling tools like listFacilities or siteinsightsAnalyzer. However, it omits the facility context that is required by the schema, creating a slight gap in clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives, nor are there any exclusions or prerequisites mentioned. The agent is left to infer usage from context alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming the connector lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to connect to the server. This can happen for several reasons (a minimal connectivity probe is sketched below):
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
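A quick way to narrow down the cause is to POST a bare MCP initialize request to the server's endpoint and see how it responds; a minimal sketch (the client name and version are placeholders, and the protocol revision shown is one of the published MCP versions):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "clientInfo": { "name": "health-check", "version": "0.0.1" }
  }
}
```

Broadly, a connection or HTTP error suggests an outage or a wrong URL, while an authentication rejection points at missing or invalid credentials.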
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.