Glama
umzcio
by umzcio

Server Quality Checklist

Profile completion: 42%. A complete profile improves this server's visibility in search results.
  • This repository includes a README.md file.

  • Add a LICENSE file by following GitHub's guide.

    MCP servers without a LICENSE cannot be installed.

  • Latest release: v1.0.0

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • Add a glama.json file to provide metadata about your server.

  • This server provides 41 tools.
  • No known security issues or vulnerabilities reported.



  • Add related servers to improve discoverability.

Tool Scores

  • Behavior: 1/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries full burden for behavioral disclosure. It states 'Search TDX groups' but gives no information about permissions required, rate limits, pagination behavior (beyond the 'maxResults' parameter), return format, or whether it's read-only or has side effects. For a search tool with zero annotation coverage, this is inadequate.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is extremely concise at just three words, with zero wasted text. It's front-loaded with the core action and resource, though this brevity comes at the cost of completeness. Every word earns its place by stating the essential function.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given no annotations and no output schema, the description is incomplete for a tool with 4 parameters. It doesn't explain what the search returns (e.g., list of groups with specific fields), behavioral aspects like permissions or rate limits, or how it differs from sibling tools. For a search operation in a context with many similar tools, more detail is needed.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the input schema already documents all parameters (searchText, isActive, hasAppId, maxResults) with clear descriptions. The tool description adds no additional meaning about parameters beyond what's in the schema, such as search syntax or default behaviors. Baseline 3 is appropriate when schema does the heavy lifting.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 2/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description 'Search TDX groups' is essentially a tautology that restates the tool name 'tdx-group-search'. It specifies the verb 'search' and resource 'TDX groups', but lacks any detail about what kind of search this is or what distinguishes it from other search tools like 'tdx-group-get' (which presumably retrieves a specific group). It doesn't differentiate from siblings beyond the obvious resource focus.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance is provided on when to use this tool versus alternatives. With siblings like 'tdx-group-get' (likely for retrieving specific groups) and other search tools (e.g., 'tdx-people-search'), the description doesn't indicate whether this is for broad filtering, exact matching, or when to prefer it over other search methods. Usage is implied only by the name, not explained.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
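Several of the gaps cited above (no behavioral disclosure, no differentiation from tdx-group-get) are exactly what MCP tool annotations are for. A minimal sketch using plain Python dicts; the annotation field names follow the MCP ToolAnnotations spec, but the enriched description text is an illustrative assumption, not the server's actual wording:

```python
# Hypothetical enriched metadata for tdx-group-search. The annotation
# field names (readOnlyHint, destructiveHint, openWorldHint) follow the
# MCP ToolAnnotations spec; the description text is an illustrative
# rewrite, not taken from the server.
tool = {
    "name": "tdx-group-search",
    "description": (
        "Search TDX groups by name text, active status, or linked app ID. "
        "Read-only; returns up to maxResults matching groups. "
        "Use tdx-group-get instead when you already have a group ID."
    ),
    "annotations": {
        "readOnlyHint": True,     # no side effects: safe to call speculatively
        "destructiveHint": False,
        "openWorldHint": True,    # queries an external TDX instance
    },
}

# With annotations present, the description no longer carries the
# full burden of behavioral disclosure.
assert tool["annotations"]["readOnlyHint"] is True
```

With `readOnlyHint` set, an agent can call the tool speculatively without the description having to establish harmlessness in prose.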

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full burden but only states it's a search operation. It doesn't disclose behavioral traits like whether it's read-only (implied but not explicit), pagination behavior, rate limits, authentication needs, or what happens on no results. For a search tool with 9 parameters, this is a significant gap in transparency.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence with no wasted words. It's appropriately sized for a search tool, though it could be more front-loaded with critical information. The brevity is good but comes at the cost of completeness.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's complexity (9 parameters, no annotations, no output schema), the description is inadequate. It doesn't explain what 'assets' are in TDX context, what the search returns, how filters combine, or error conditions. For a search tool in a system with many sibling tools, more context is needed for effective use.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the schema fully documents all 9 parameters. The description adds no additional parameter semantics beyond 'with filters', which is already implied by the parameter names. Baseline 3 is appropriate since the schema does the heavy lifting, but the description doesn't enhance understanding.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 3/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description 'Search TDX assets with filters' clearly states the verb ('Search') and resource ('TDX assets'), but it's vague about what 'assets' are and doesn't differentiate from sibling tools like 'tdx-asset-get' or 'tdx-cmdb-search'. It provides basic purpose but lacks specificity about scope or distinction.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description offers no guidance on when to use this tool versus alternatives like 'tdx-asset-get' (for specific assets) or 'tdx-cmdb-search' (for broader CMDB searches). It mentions 'with filters' but doesn't specify typical use cases or prerequisites, leaving the agent to infer usage from context alone.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
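The "use X instead of Y when Z" guidance the rubric asks for can be produced mechanically for sibling search/get pairs. A sketch; the helper name and the guidance wording are hypothetical, and the tool names merely follow the server's naming scheme:

```python
# Hypothetical helper that produces the explicit usage guidance the
# rubric asks for. The wording is an assumption, not the server's docs.
def usage_guidance(search_tool: str, get_tool: str, resource: str) -> str:
    singular = resource.rstrip("s")  # naive singularization, fine for this sketch
    return (
        f"Use {search_tool} to find {resource} by filter when you do not "
        f"know the ID; use {get_tool} to fetch a single {singular} when "
        f"you already have its ID."
    )

print(usage_guidance("tdx-asset-search", "tdx-asset-get", "assets"))
# → Use tdx-asset-search to find assets by filter when you do not know
#   the ID; use tdx-asset-get to fetch a single asset when you already
#   have its ID.
```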

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full burden but offers minimal behavioral insight. It doesn't disclose whether this is a read-only operation, what authentication is needed, rate limits, pagination behavior (beyond maxResults parameter), or what format/search logic is used. 'Search TDX tickets with filters' implies a query operation but lacks operational details.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is extremely concise at just 5 words with zero wasted language. It's front-loaded with the core purpose. However, for a tool with 10 parameters and no annotations, this brevity borders on under-specification rather than optimal conciseness.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a search tool with 10 parameters, no annotations, and no output schema, the description is inadequate. It doesn't explain what kind of search this performs (full-text? filtered list?), what the results look like, how filters combine, or any behavioral constraints. The agent would need to rely heavily on the parameter schema alone.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the schema fully documents all 10 parameters. The description adds no parameter-specific information beyond mentioning 'filters' generally. This meets the baseline of 3 since the schema does the heavy lifting, but the description doesn't enhance understanding of parameter relationships or usage patterns.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 3/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description 'Search TDX tickets with filters' clearly states the verb (search) and resource (TDX tickets), but it's vague about scope and doesn't distinguish from sibling tools like 'tdx-ticket-get' or 'tdx-ticket-feed-get'. It mentions filters but doesn't specify what kind of search this performs.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives like 'tdx-ticket-get' (single ticket retrieval) or 'tdx-ticket-feed-get' (feed-based access). There's no mention of prerequisites, typical use cases, or limitations compared to other ticket-related tools in the sibling list.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries full burden. It mentions 'Search' but doesn't disclose behavioral traits like whether this is a read-only operation, potential rate limits, authentication needs, or what happens on no matches. The description is too vague to inform the agent adequately about how the tool behaves beyond its basic function.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence with zero waste. It's front-loaded with the core action and resource, making it easy to parse quickly without unnecessary details.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given no annotations and no output schema, the description is incomplete for a search tool with 3 parameters. It lacks details on return values, error handling, or behavioral context, which are crucial for an agent to use it effectively. The high schema coverage helps but doesn't compensate for missing operational insights.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the schema already documents all parameters (searchText, isActive, maxResults) with clear descriptions. The description adds no additional meaning beyond what the schema provides, such as examples or usage tips, but this meets the baseline for high schema coverage.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 3/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description states the verb ('Search') and resource ('TDX accounts/departments'), which clarifies the basic purpose. However, it doesn't differentiate this tool from sibling tools like 'tdx-account-get', 'tdx-group-search', or 'tdx-people-search', leaving ambiguity about what specific accounts/departments are targeted versus other search tools.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance is provided on when to use this tool versus alternatives. With multiple sibling search tools (e.g., tdx-asset-search, tdx-cmdb-search, tdx-group-search, tdx-people-search, tdx-project-search, tdx-ticket-search), the description lacks context about scope, prerequisites, or exclusions, offering minimal help for selection.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full burden for behavioral disclosure. It states 'Delete' which implies a destructive mutation, but doesn't clarify if deletion is permanent, reversible, requires specific permissions, or has side effects (e.g., cascading deletions). This is inadequate for a destructive tool with zero annotation coverage.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, direct sentence with zero wasted words. It's appropriately sized for a simple tool and front-loaded with the core action, making it easy to parse quickly.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a destructive mutation tool with no annotations and no output schema, the description is incomplete. It doesn't address critical context like what happens post-deletion (success/failure responses), permissions required, or how to verify deletion. The 100% schema coverage helps with inputs, but overall context is lacking for safe usage.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, with both parameters ('appId' and 'id') documented in the schema. The description adds no additional parameter information beyond what's in the schema, so it meets the baseline of 3 where the schema does the heavy lifting without compensating for gaps.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 3/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description 'Delete a TDX asset' clearly states the verb ('Delete') and resource ('TDX asset'), making the basic purpose understandable. However, it doesn't differentiate this tool from its sibling 'tdx-cmdb-delete' or explain what constitutes a 'TDX asset' versus other deletable entities in the system, leaving some ambiguity about scope.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives like 'tdx-cmdb-delete' or 'tdx-kb-delete', nor does it mention prerequisites (e.g., needing asset ID from a search) or warn about irreversible deletion. It merely restates the action without contextual usage information.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
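For a destructive tool like tdx-asset-delete, the Behavior and Usage gaps above map directly onto `destructiveHint` plus one warning sentence. A sketch; the irreversibility claim is an assumption the real documentation would need to confirm, and the annotation names follow the MCP ToolAnnotations spec:

```python
# Hypothetical metadata for tdx-asset-delete. Whether deletion is
# permanent is an assumption here, flagged as such; annotation names
# follow the MCP ToolAnnotations spec.
delete_tool = {
    "name": "tdx-asset-delete",
    "description": (
        "Delete a TDX asset by ID. Assumed permanent and not reversible; "
        "confirm the ID via tdx-asset-search first. For configuration "
        "items, use tdx-cmdb-delete instead."
    ),
    "annotations": {
        "readOnlyHint": False,
        "destructiveHint": True,   # tells agents to seek confirmation first
        "idempotentHint": True,    # repeating the call deletes nothing more
    },
}

assert delete_tool["annotations"]["destructiveHint"] is True
```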

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. 'Full update' suggests a mutation that replaces all data, but it doesn't clarify permissions needed, whether it's destructive, rate limits, or what happens on success/failure. This is inadequate for a mutation tool with zero annotation coverage.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence with zero waste. It's front-loaded and appropriately sized for the tool's complexity, though it could be more informative without sacrificing brevity.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given this is a mutation tool with no annotations, no output schema, and nested objects in parameters, the description is incomplete. It doesn't address behavioral risks, return values, or usage context, leaving significant gaps for an agent to operate safely and effectively.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the schema already documents all parameters (appId, id, data). The description adds no additional meaning beyond implying 'full update' relates to the 'data' parameter, but it doesn't explain syntax, format, or constraints beyond what the schema provides. Baseline 3 is appropriate when schema does the heavy lifting.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 3/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description 'Full update of a TDX asset' clearly states the action (update) and resource (TDX asset), but it's vague about what 'full update' entails compared to siblings like 'tdx-asset-patch' or 'tdx-asset-create'. It distinguishes the resource type (asset) from other TDX entities but doesn't specify scope or differentiate meaningfully from other update operations.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance is provided on when to use this tool versus alternatives like 'tdx-asset-patch' or 'tdx-asset-create'. The description implies it's for updates, but it doesn't specify prerequisites, context, or exclusions, leaving the agent to infer usage from the name alone.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
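The "full update" ambiguity flagged under Purpose is the classic PUT-versus-PATCH distinction, and a single clause in the description can resolve it. A sketch, assuming PUT-style replace semantics (the actual TDX behavior is not documented in this review):

```python
# Hypothetical description disambiguating a full (replace) update from
# the sibling tdx-asset-patch. Replace semantics are an assumption about
# typical REST behavior, not confirmed TDX behavior.
update_tool = {
    "name": "tdx-asset-update",
    "description": (
        "Full update (replace) of a TDX asset: the `data` object "
        "overwrites the stored record, and omitted fields may be cleared. "
        "Prefer tdx-asset-patch for single-field edits."
    ),
    "annotations": {
        "readOnlyHint": False,
        "destructiveHint": True,  # a full replace can silently drop fields
    },
}

assert "tdx-asset-patch" in update_tool["description"]
```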

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries full burden. While 'Delete' clearly indicates a destructive operation, the description provides no additional behavioral context: no information about permissions required, whether deletion is reversible, what happens to related data, confirmation requirements, or rate limits. For a destructive operation with zero annotation coverage, this is a significant gap.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence with zero wasted words. It's appropriately sized for a simple deletion operation and front-loads the essential information. Every word earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a destructive operation with no annotations and no output schema, the description is inadequate. It doesn't explain what a 'configuration item' is (versus assets, KB articles, etc.), doesn't warn about irreversible consequences, doesn't mention authentication or permission requirements, and provides no information about response format or error conditions. Given the tool's destructive nature and lack of structured metadata, more context is needed.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the schema already documents both parameters (appId defaults to environment variable, id is required CI ID). The description adds no parameter information beyond what's in the schema - it doesn't explain what a 'CI ID' represents, format expectations, or validation rules. Baseline 3 is appropriate when schema does the heavy lifting.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 3/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description states the action ('Delete') and resource ('TDX configuration item'), which provides basic purpose. However, it doesn't differentiate this from sibling tools like 'tdx-asset-delete' or 'tdx-kb-delete' - all are deletion operations on different resource types. The description is vague about what specifically distinguishes a 'configuration item' from other deletable entities.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance is provided about when to use this tool versus alternatives. With multiple deletion tools available (tdx-asset-delete, tdx-kb-delete, tdx-cmdb-delete), the description offers no context about which resource type this applies to, nor any prerequisites or warnings about using a destructive operation. The agent must infer usage from the tool name alone.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries full burden. It mentions 'Search' and 'filters' but doesn't disclose critical behavioral traits: whether this is read-only (likely, but not stated), what authentication is needed, pagination behavior (only mentions 'maxResults' default), rate limits, or what the output looks like (no output schema).

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence with zero waste. It's appropriately sized and front-loaded with the core purpose. Every word earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given 7 parameters, no annotations, and no output schema, the description is incomplete. It doesn't explain the search scope, result format, error conditions, or authentication requirements. For a search tool with multiple filters and no structured output documentation, more context is needed.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the schema already documents all 7 parameters thoroughly. The description adds no additional meaning about parameters beyond implying filtering capability ('with filters'), which is already covered in the schema. Baseline 3 is appropriate when schema does the heavy lifting.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 3/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description 'Search TDX projects with filters' clearly states the verb ('Search') and resource ('TDX projects'), but it's vague about scope and doesn't differentiate from sibling tools like 'tdx-project-get' or 'tdx-project-update'. It doesn't specify whether this searches all projects or has limitations.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance is provided about when to use this tool versus alternatives like 'tdx-project-get' (for retrieving a specific project) or 'tdx-project-create' (for creating new projects). The description mentions 'with filters' but doesn't explain when filtering is appropriate versus using other tools.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries full burden. It states 'update' implying a mutation, but doesn't disclose behavioral traits such as required permissions, whether changes are reversible, rate limits, or what happens on success/failure. The description adds minimal context beyond the basic action, leaving key operational details unspecified.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single sentence with zero waste—'Update a TDX project' is front-loaded and appropriately sized for the tool's complexity. It earns its place by stating the core action without redundancy or unnecessary elaboration.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's complexity (mutation with nested objects, no annotations, no output schema), the description is incomplete. It lacks details on behavioral aspects (e.g., auth needs, side effects), output expectations, and usage context. While the schema covers parameters well, the description doesn't compensate for missing annotations or output schema, leaving gaps for an AI agent.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, with parameters 'id' and 'data' clearly documented in the schema. The description doesn't add meaning beyond the schema (e.g., it doesn't explain 'PascalCase TDX field names' or provide examples). Baseline is 3 since the schema does the heavy lifting, but no extra semantic value is contributed.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 3/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description 'Update a TDX project' clearly states the action (update) and resource (TDX project), but it's vague about what specifically gets updated. It distinguishes from siblings like 'tdx-project-create' and 'tdx-project-get' by implying modification rather than creation or retrieval, but doesn't specify scope (e.g., fields, status) beyond what the schema indicates.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance on when to use this tool versus alternatives is provided. The description doesn't mention prerequisites (e.g., existing project ID), exclusions, or comparisons to siblings like 'tdx-project-patch' (not listed) or 'tdx-project-create'. Usage is implied only by the verb 'update', with no explicit context or alternatives stated.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden of behavioral disclosure. 'Create a new TDX ticket' implies a write operation, but it doesn't mention required permissions, whether the creation is idempotent, what happens on failure, or any rate limits. For a mutation tool with 15 parameters and no annotation coverage, this is a significant gap in behavioral context.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
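
    The behavioral disclosure asked for here can be supplied in prose, in MCP tool annotations, or both. A minimal sketch follows; the annotation keys come from the MCP specification's tool annotation hints, while the description text is an invented example:

    ```python
    # Sketch of a tool definition that discloses behavior up front.
    # The annotation keys (readOnlyHint, destructiveHint, idempotentHint,
    # openWorldHint) follow the MCP spec; the description text is invented.
    tool = {
        "name": "tdx-ticket-create",
        "description": (
            "Create a new TDX ticket. Requires ticket-creation permission in "
            "the target application; not idempotent (each call creates a new "
            "ticket); returns the created ticket's ID and fields."
        ),
        "annotations": {
            "readOnlyHint": False,     # this is a write operation
            "destructiveHint": False,  # creates data, does not destroy it
            "idempotentHint": False,   # repeat calls create duplicate tickets
            "openWorldHint": True,     # talks to an external TDX instance
        },
    }
    ```

    Even with annotations present, repeating the key behavioral facts in the description helps agents that weight description text heavily.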

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence with zero waste—'Create a new TDX ticket' is front-loaded and appropriately sized for its purpose. Every word earns its place without redundancy or fluff.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the complexity (15 parameters, no output schema, no annotations), the description is inadequate. It doesn't explain what a TDX ticket is, what the tool returns, error handling, or dependencies. For a mutation tool with many parameters, more context is needed to guide effective use beyond the basic schema.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the schema already documents all 15 parameters thoroughly. The description adds no additional meaning about parameters beyond implying ticket creation. A baseline of 3 is appropriate when the schema does the heavy lifting; the description adds nothing further, though in this case there are no schema gaps for it to compensate for.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 3/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description 'Create a new TDX ticket' clearly states the action (create) and resource (TDX ticket), but it's generic and doesn't differentiate from sibling tools like 'tdx-project-create' or 'tdx-kb-create' that also create different TDX entities. It lacks specificity about what makes a ticket distinct from other TDX objects.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance is provided on when to use this tool versus alternatives. With siblings like 'tdx-ticket-patch' and 'tdx-ticket-update' for modifying tickets, and 'tdx-ticket-search' for finding tickets, the description offers no context on prerequisites, appropriate scenarios, or distinctions between creation and update operations.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full burden but only states the basic action. It lacks behavioral details such as whether this is a read-only operation, error handling for invalid IDs, authentication requirements, rate limits, or what happens if the ID doesn't exist. This is inadequate for a tool with no annotation coverage.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence with no wasted words. It's appropriately sized for a simple tool and front-loads the essential information, making it easy to parse quickly.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the lack of annotations and output schema, the description is insufficiently complete. It doesn't explain what the tool returns (e.g., account details, error formats) or behavioral aspects like permissions or side effects, leaving significant gaps for an agent to use it correctly.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the schema already documents the 'id' parameter as a number representing an Account ID. The description adds no additional parameter semantics beyond what's in the schema, such as format examples or constraints, meeting the baseline for high schema coverage.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Get') and resource ('TDX account/department'), making the purpose understandable. However, it doesn't explicitly differentiate itself from sibling tools like 'tdx-account-search' or 'tdx-people-get'; doing so would require a more specific statement of scope or resource.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance is provided on when to use this tool versus alternatives. For example, it doesn't specify if this should be used for retrieving a single known account ID versus searching for accounts with other criteria using 'tdx-account-search', or how it differs from 'tdx-people-get' for similar lookups.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden for behavioral disclosure. It states this is a creation tool, implying it's a write operation, but doesn't mention permissions required, whether it's idempotent, rate limits, or what happens on success/failure. This leaves significant gaps for a mutation tool.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence with zero wasted words. It's perfectly front-loaded and appropriately sized for its purpose.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a complex mutation tool with 19 parameters, no annotations, and no output schema, the description is inadequate. It doesn't explain what happens after creation, error conditions, or behavioral aspects, leaving the agent with insufficient context to use it effectively.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the schema fully documents all 19 parameters. The description adds no parameter-specific information beyond what's in the schema, meeting the baseline but not providing additional value.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Create') and resource ('new TDX asset'), making the purpose immediately understandable. However, it doesn't distinguish this from sibling tools like 'tdx-asset-patch' or 'tdx-asset-update' that also modify assets, which prevents a perfect score.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives like 'tdx-asset-patch' or 'tdx-asset-update', nor does it mention prerequisites or context for asset creation. It's a bare statement without usage context.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden for behavioral disclosure but provides minimal information. It states the tool adds comments/feed entries but doesn't describe what happens after addition (e.g., whether notifications are sent, if the comment becomes part of the asset's history, or if there are rate limits). For a mutation tool with zero annotation coverage, this represents a significant gap in understanding the tool's behavior and effects.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words or elaboration. It's appropriately sized for a tool with clear parameters documented elsewhere and follows the principle of front-loading the essential information. Every word earns its place in conveying the core functionality.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a mutation tool with no annotations and no output schema, the description is insufficiently complete. It doesn't explain what happens after the comment is added, what the response looks like, whether there are permission requirements, or how this operation affects the asset. Given the complexity of adding feed entries (which may trigger notifications or workflow changes) and the lack of structured behavioral information, the description should provide more context about the tool's effects and limitations.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the schema already documents all 5 parameters thoroughly with clear descriptions of each field's purpose and defaults. The description adds no additional parameter information beyond what's in the schema. According to scoring rules, when schema coverage is high (>80%), the baseline is 3 even with no param info in the description.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Add') and target resource ('comment/feed entry to a TDX asset'), making the purpose understandable. It distinguishes from other asset tools like tdx-asset-create or tdx-asset-update by focusing specifically on adding comments/feed entries rather than creating or modifying assets themselves. However, it doesn't explicitly differentiate from tdx-cmdb-feed-add or tdx-ticket-feed-add which serve similar functions for different resource types.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives. There are no explicit instructions about when this tool is appropriate, prerequisites for use, or comparisons to sibling tools like tdx-cmdb-feed-add (for CMDB items) or tdx-ticket-feed-add (for tickets) that perform similar feed operations on different resource types. The agent must infer usage context from the tool name alone.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
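
    One way to add the missing routing guidance is a description that names the sibling tools directly. The wording below is an invented example, not the server's actual text:

    ```python
    # Invented example of a description with explicit "use X, not Y" routing
    # between the three feed-add siblings mentioned in the critique.
    description = (
        "Add a comment/feed entry to a TDX asset. Use this only for assets; "
        "use tdx-ticket-feed-add for tickets and tdx-cmdb-feed-add for "
        "configuration items. Requires an existing asset ID."
    )

    # Quick check that the routing cues are present:
    assert "tdx-ticket-feed-add" in description
    assert "tdx-cmdb-feed-add" in description
    print(len(description.split()))
    ```

    Two short clauses like these resolve the sibling ambiguity while keeping the description compact.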

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden of behavioral disclosure. The verb 'Get' implies a read operation, but the description lacks details on permissions, rate limits, error handling, or what constitutes an 'asset' (e.g., hardware, software). This is inadequate for a tool with no annotation coverage.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence with zero waste—it directly states the tool's purpose without unnecessary words. It's appropriately sized and front-loaded, making it easy to parse quickly.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given no annotations and no output schema, the description is incomplete. It doesn't explain what an 'asset' entails, the return format, error cases, or how it differs from sibling tools. For a retrieval tool in a complex environment with many siblings, this leaves significant gaps for an AI agent.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, with clear documentation for both parameters ('appId' and 'id'). The description adds no additional meaning beyond the schema, such as format examples or context for 'Asset ID'. Baseline 3 is appropriate since the schema does the heavy lifting.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the verb ('Get') and resource ('a TDX asset by ID'), making the purpose immediately understandable. However, it doesn't distinguish this tool from its siblings like 'tdx-asset-search' or 'tdx-cmdb-get', which might also retrieve assets or related data, leaving some ambiguity about when to choose this specific retrieval method.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives. With siblings like 'tdx-asset-search' (for broader queries) and 'tdx-cmdb-get' (potentially for related data), there's no indication that this is for direct ID-based lookup, nor any prerequisites or exclusions mentioned.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden of behavioral disclosure. It states 'partial update' which implies mutation, but doesn't cover permissions needed, whether changes are reversible, rate limits, or what happens to unspecified fields. This is inadequate for a mutation tool with zero annotation coverage.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence with zero waste. It's appropriately sized and front-loaded with the core purpose.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a mutation tool with no annotations and no output schema, the description is insufficient. It doesn't explain behavioral traits like side effects, error conditions, or response format. Given the complexity of partial updates and lack of structured data, more context is needed.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds no additional meaning beyond what's in the schema (e.g., it doesn't clarify 'partial asset data' beyond the schema's description of 'Partial asset data (PascalCase TDX field names)'). Baseline 3 is appropriate when schema does the heavy lifting.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('partial update') and resource ('TDX asset'), which is specific and distinguishes it from generic update operations. However, it doesn't explicitly differentiate from its sibling 'tdx-asset-update', which likely performs a full update.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance is provided on when to use this tool versus alternatives like 'tdx-asset-update' or 'tdx-asset-create'. The description mentions 'partial update' but doesn't explain scenarios where partial updates are preferred over full updates or other operations.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
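
    The patch-versus-update distinction the critiques above want spelled out can be illustrated with a toy merge. This is a sketch of typical PATCH vs. PUT semantics; whether TDX actually preserves unspecified fields on a partial update is an assumption here, not confirmed behavior:

    ```python
    # Toy illustration of partial-update (PATCH) vs full-update (PUT) semantics.
    # Whether TDX preserves unspecified fields is assumed, not confirmed.
    existing = {"Name": "Laptop-042", "StatusID": 1, "OwnerUID": "abc"}

    def patch(resource: dict, changes: dict) -> dict:
        """PATCH: merge changes over the existing resource."""
        return {**resource, **changes}

    def put(changes: dict) -> dict:
        """PUT: the payload replaces the resource wholesale."""
        return dict(changes)

    patched = patch(existing, {"StatusID": 2})
    replaced = put({"StatusID": 2})
    print(patched)   # unspecified fields survive
    print(replaced)  # unspecified fields are gone
    ```

    A sentence in the description stating which of these two behaviors applies would answer the "what happens to unspecified fields" question directly.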

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden. It states 'Add a relationship', which implies a write/mutation operation, but doesn't disclose behavioral traits like required permissions, whether the operation is idempotent, potential side effects, error conditions, or what happens if the relationship already exists. For a mutation tool with zero annotation coverage, this is a significant gap.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence that states the tool's purpose without unnecessary words. It's appropriately sized and front-loaded with the core functionality, making it easy for an agent to parse quickly.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a mutation tool with 5 parameters, no annotations, and no output schema, the description is insufficiently complete. It doesn't explain what constitutes a valid relationship, what happens after creation, error scenarios, or return values. The agent lacks crucial context to use this tool effectively beyond basic parameter passing.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the schema already documents all 5 parameters with clear descriptions. The description adds no additional meaning beyond what's in the schema - it doesn't explain relationship semantics, what 'typeId' values are valid, or how 'isInverse' affects the relationship direction. Baseline 3 is appropriate when schema does the heavy lifting.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
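
    How 'isInverse' affects relationship direction could be documented with a small example. The direction semantics below are an assumed interpretation for illustration, not confirmed TDX behavior:

    ```python
    # Assumed interpretation of the 'isInverse' flag on a CI relationship:
    # with is_inverse=False, the relationship reads source -> target;
    # with is_inverse=True, it reads target -> source.
    def describe_relationship(source: str, target: str,
                              type_name: str, is_inverse: bool) -> str:
        a, b = (target, source) if is_inverse else (source, target)
        return f"{a} {type_name} {b}"

    print(describe_relationship("WebServer01", "Database01", "depends on", False))
    print(describe_relationship("WebServer01", "Database01", "depends on", True))
    ```

    One such worked example in the description would pin down the flag's meaning far better than the bare boolean in the schema.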

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Add a relationship') and the resource ('between two TDX configuration items'), providing a specific verb+resource combination. However, it doesn't differentiate from sibling tools like 'tdx-cmdb-create' or 'tdx-cmdb-update' which might also involve CMDB operations, leaving room for ambiguity about when to use this specific relationship tool.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives. With many sibling tools for CMDB operations (e.g., tdx-cmdb-create, tdx-cmdb-update), there's no indication of prerequisites, context, or exclusions for adding relationships, leaving the agent to guess based on tool names alone.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Create' implies a write operation, the description doesn't address important behavioral aspects like required permissions, whether the operation is idempotent, what happens on duplicate names, error conditions, or what the response contains. This leaves significant gaps for an agent trying to use this tool effectively.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is extremely concise - a single sentence that directly states the tool's purpose with no wasted words. It's front-loaded with the essential information and doesn't include any unnecessary elaboration or redundant information.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a creation tool with 12 parameters, no annotations, and no output schema, the description is insufficient. It doesn't explain what a 'configuration item' is in the TDX context, doesn't provide usage context relative to sibling tools, and offers no behavioral guidance. The agent would struggle to use this tool correctly without additional context about TDX's data model and this specific operation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The schema has 100% description coverage, so all parameters are documented in the structured schema. The description adds no additional parameter information beyond what's already in the schema descriptions. This meets the baseline expectation when schema coverage is complete, but doesn't provide any extra value like explaining relationships between parameters or providing examples.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Create') and resource ('TDX configuration item (CI)'), making the purpose immediately understandable. However, it doesn't differentiate this tool from sibling creation tools like 'tdx-asset-create' or 'tdx-kb-create', which would require specifying what makes a CI distinct from assets or knowledge base entries.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives. With multiple sibling creation tools (tdx-asset-create, tdx-kb-create, tdx-project-create, tdx-ticket-create), there's no indication of what distinguishes a 'configuration item' from these other entities or when this specific creation tool is appropriate.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It states the action is 'Add,' implying a write operation, but doesn't cover permissions, side effects, error handling, or response format. This leaves significant gaps for a tool that modifies data.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's front-loaded and wastes no space, making it easy to parse quickly.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a write operation with no annotations and no output schema, the description is incomplete. It lacks details on behavioral aspects like authentication needs, rate limits, or what happens on success/failure. Given the complexity of adding comments in a system like TDX, more context is needed.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds no additional meaning beyond what's in the schema, such as examples or constraints. This meets the baseline for high schema coverage.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Add') and the resource ('a comment/feed entry to a TDX configuration item'), making the purpose specific and understandable. However, it doesn't explicitly differentiate from sibling tools like 'tdx-asset-feed-add' or 'tdx-ticket-feed-add', which appear to serve similar functions for different resource types.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, context for adding comments, or how it differs from other feed-related tools in the sibling list, such as 'tdx-ticket-feed-add' or 'tdx-asset-feed-add'.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full burden but only states it's a search operation. It doesn't disclose behavioral traits like whether this is read-only (implied but not explicit), rate limits, authentication needs, pagination behavior, or what happens on errors. For a search tool with 7 parameters and no annotations, this is inadequate.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence that gets straight to the point without unnecessary words. However, given the missing annotations and the many similarly named sibling tools, it could front-load more critical context.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the complexity (7 parameters, no output schema, no annotations, and many sibling tools), the description is incomplete. It doesn't explain return values, error handling, or how this differs from other search tools in the server. For a search operation in a crowded toolset, more contextual guidance is needed.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the schema fully documents all 7 parameters. The description adds no additional meaning beyond 'with filters', which is already implied by the parameter names and schema descriptions. This meets the baseline of 3 when schema coverage is high.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the verb ('Search') and resource ('TDX configuration items'), making the purpose understandable. However, it doesn't differentiate this from sibling tools like 'tdx-cmdb-get' or 'tdx-cmdb-update' beyond mentioning 'search' functionality, which is implied by the name.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives like 'tdx-cmdb-get' (for retrieving specific items) or 'tdx-cmdb-update' (for modifications). It mentions 'with filters' but doesn't explain when filtering is appropriate or what scenarios warrant this search tool over others.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
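    A sketch of the "use X instead of Y when Z" guidance the rubric asks for, applied to tdx-cmdb-search. The sibling tool names come from the server's tool list; the description wording, return shape, and routing advice are illustrative assumptions.

    ```python
    # Hypothetical rewrite: verb+resource first, then behavior, then routing.
    description = (
        "Search TDX configuration items with filters. Read-only. "
        "Returns up to maxResults matching items with summary fields. "
        "Use tdx-cmdb-get instead when you already have an item ID, and "
        "tdx-cmdb-update to modify an item found by this search."
    )

    assert description.startswith("Search")       # front-loaded purpose
    assert "tdx-cmdb-get" in description          # explicit alternative
    ```

    One added sentence of routing guidance is usually enough to lift a Usage Guidelines score without hurting Conciseness.
    
    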

  • Behavior2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full burden for behavioral disclosure. While 'Full update' implies a mutation operation, it doesn't specify whether this is a destructive replacement, what permissions are required, whether there are rate limits, or what happens on success/failure. The description provides minimal behavioral context beyond the basic operation type.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence that gets straight to the point with zero wasted words. It's appropriately sized for a tool with good schema documentation and is perfectly front-loaded.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a mutation tool with no annotations and no output schema, the description is insufficient. It doesn't explain what 'Full update' means operationally (replace vs merge), what the response looks like, error conditions, or how this differs from patch operations. Given the complexity of updating configuration items and the lack of structured behavioral information, the description should provide more complete context.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the schema already documents all three parameters. The description adds no additional parameter semantics beyond what's in the schema: it doesn't explain what constitutes 'Full CI data' or provide examples of PascalCase field names. The baseline 3 is appropriate when the schema does the heavy lifting.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Full update') and resource ('TDX configuration item'), providing a specific verb+resource combination. However, it doesn't distinguish this from sibling tools like 'tdx-cmdb-patch' or 'tdx-asset-update' that might also perform updates on related resources.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives. With multiple sibling update tools (tdx-cmdb-patch, tdx-asset-update, tdx-kb-update, tdx-people-update, tdx-project-update, tdx-ticket-update), there's no indication of which resource types this applies to or when to choose this over other update operations.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
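    The replace-vs-merge ambiguity flagged under Completeness can be resolved in one sentence. In the sketch below, whether TDX actually clears unspecified fields on a full update is an assumption; the real semantics would need to be confirmed against the TeamDynamix API before shipping this wording.

    ```python
    # Hypothetical "Full update" description that states replace semantics
    # explicitly and routes partial edits to the patch sibling.
    full_update_desc = (
        "Full update of a TDX configuration item: the supplied data REPLACES "
        "the stored record, and unspecified fields may be cleared. Fetch the "
        "current item with tdx-cmdb-get first and send it back with your "
        "changes. For modifying only a few fields, prefer tdx-cmdb-patch."
    )

    assert "REPLACES" in full_update_desc
    assert "tdx-cmdb-patch" in full_update_desc
    ```
    
    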

  • Behavior2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Create' implies a write operation, the description doesn't mention authentication requirements, permission levels, whether the operation is idempotent, what happens on failure, or what the response looks like. For a creation tool with 12 parameters, this leaves significant behavioral gaps.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence that communicates the core purpose without unnecessary words. It's appropriately sized for a tool with comprehensive schema documentation and gets straight to the point with zero wasted verbiage.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a creation tool with 12 parameters, no annotations, and no output schema, the description is inadequate. It doesn't explain what happens after creation, what the return value contains, error conditions, or how this tool relates to the broader knowledge base workflow. The agent would need to guess about many important operational aspects.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the schema already documents all 12 parameters thoroughly. The description adds no parameter-specific information beyond what's in the schema, so it meets the baseline expectation but doesn't provide additional value regarding parameter usage, relationships, or practical examples.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Create') and resource ('new TDX knowledge base article'), making the purpose immediately understandable. However, it doesn't differentiate this tool from its sibling 'tdx-kb-update', which could be important for an agent to distinguish between creation and modification operations.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives. With multiple sibling tools including 'tdx-kb-update' and 'tdx-kb-delete', there's no indication of prerequisites, appropriate contexts, or when other tools might be more suitable for knowledge base operations.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool deletes an article, implying a destructive mutation, but fails to mention critical details like whether deletion is permanent, requires specific permissions, or has side effects. This is inadequate for a mutation tool with zero annotation coverage.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, direct sentence with zero wasted words, making it highly efficient and front-loaded. It immediately conveys the core action without unnecessary elaboration.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a destructive mutation tool with no annotations and no output schema, the description is incomplete. It lacks essential context like behavioral traits (e.g., permanence, permissions), usage guidelines, and output expectations, leaving significant gaps for an AI agent to operate safely.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has 100% description coverage, fully documenting both parameters (appId and id). The description adds no additional meaning beyond the schema, such as explaining the relationship between appId and id or providing examples. Baseline 3 is appropriate when the schema does the heavy lifting.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Delete') and the resource ('a TDX knowledge base article'), making the purpose immediately understandable. However, it doesn't differentiate from sibling tools like 'tdx-cmdb-delete' or 'tdx-asset-delete' by specifying what type of entity is being deleted, though 'kb' in the name helps.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives like 'tdx-kb-update' or other deletion tools. It lacks context about prerequisites, such as needing the article ID, or warnings about irreversible deletion, which are critical for a destructive operation.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
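    For the destructive delete operation, both an annotation and an in-description warning help. The annotation field names below follow the MCP ToolAnnotations spec; the permanence claim and permission requirement are illustrative assumptions about TDX behavior, not verified facts.

    ```python
    # Minimal sketch: destructive-delete disclosure for tdx-kb-delete.
    delete_tool = {
        "name": "tdx-kb-delete",
        "description": (
            "Delete a TDX knowledge base article. DESTRUCTIVE: the article "
            "is removed and cannot be restored through this server. Requires "
            "the article id (see tdx-kb-search) and KB write permission."
        ),
        "annotations": {
            "readOnlyHint": False,
            "destructiveHint": True,   # irreversible removal
            "idempotentHint": True,    # deleting twice has no further effect
        },
    }

    assert delete_tool["annotations"]["destructiveHint"] is True
    ```
    
    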

  • Behavior2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It only states the basic action ('Search') without mentioning whether this is a read-only operation, if it requires authentication, what the output format looks like, or any rate limits. For a search tool with zero annotation coverage, this is a significant gap in transparency.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence with zero wasted words. It's front-loaded with the core action and resource, making it easy to parse quickly. This is an excellent example of conciseness.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's complexity (6 parameters, no output schema, no annotations), the description is incomplete. It doesn't explain what the search returns (e.g., article summaries, full content, pagination), how results are ordered, or error conditions. For a search tool with rich parameters but no output schema, more context is needed.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The description adds no parameter information beyond what the schema already provides. Since schema description coverage is 100%, the baseline score is 3. The description doesn't compensate with additional context like default behaviors or parameter interactions, but it also doesn't contradict the schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool's purpose as 'Search TDX knowledge base articles,' which is a specific verb+resource combination. However, it doesn't explicitly differentiate from sibling tools like 'tdx-kb-get' (which likely retrieves a specific article) or 'tdx-kb-create/update/delete' (which are write operations), so it falls short of a perfect score.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when to choose this over 'tdx-kb-get' (for specific articles) or 'tdx-ticket-search' (for tickets), nor does it specify prerequisites or exclusions. This leaves the agent with minimal context for tool selection.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
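    When a tool has no output schema, the description can carry the return shape in prose. The claimed shape (id, title, snippet, relevance ordering) in this sketch is an assumption for illustration, not TDX-confirmed behavior.

    ```python
    # Illustrative description for a search tool lacking an output schema.
    kb_search_desc = (
        "Search TDX knowledge base articles. Read-only. Returns a list of "
        "matches (id, title, snippet), ordered by relevance, capped at "
        "maxResults. Use tdx-kb-get with a result's id to fetch full content."
    )

    assert "Read-only" in kb_search_desc
    ```
    
    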

  • Behavior2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries full burden for behavioral disclosure. While 'Update' implies a mutation operation, the description doesn't specify required permissions, whether changes are reversible, rate limits, or what happens to unspecified fields. The schema mentions 'PascalCase TDX field names', but the description itself never explains this convention.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence that states the tool's purpose without unnecessary words. It's perfectly front-loaded and wastes no space on redundant information. Every word earns its place in conveying the core functionality.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a mutation tool with no annotations and no output schema, the description is insufficient. It doesn't explain what constitutes a successful update, what data format is expected beyond the schema's mention of PascalCase, error conditions, or response format. The combination of mutation operation with incomplete behavioral context creates significant gaps for an agent.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description adds no additional parameter information beyond what's in the schema: no examples, format details, or constraints. The baseline score of 3 reflects adequate parameter documentation through the schema alone.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Update') and resource ('TDX knowledge base article'), making the tool's purpose immediately understandable. It distinguishes itself from sibling tools like tdx-kb-create and tdx-kb-delete by specifying the update operation. However, it doesn't explicitly differentiate from tdx-kb-patch or explain what 'update' entails versus 'patch' operations.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives. With sibling tools like tdx-kb-patch and tdx-kb-create available, there's no indication of when an update is appropriate versus a patch or create operation. No prerequisites, constraints, or comparison to similar tools are mentioned.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries full burden. It states the tool searches with filters but doesn't disclose behavioral traits like whether it's read-only (implied by 'search'), pagination behavior (only mentions maxResults default), rate limits, authentication needs, or what the output looks like (no output schema). This leaves significant gaps for a search tool.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence with zero waste. It's appropriately sized and front-loaded, directly stating the tool's core function without unnecessary elaboration.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given no annotations, no output schema, and a search tool with 9 parameters, the description is incomplete. It lacks behavioral context (e.g., pagination, rate limits), output details, and usage guidance relative to siblings. While concise, it doesn't provide enough information for an agent to use the tool effectively beyond basic purpose.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the schema fully documents all 9 parameters. The description adds no parameter-specific semantics beyond implying filters are available, which the schema already details. Baseline 3 is appropriate as the schema does the heavy lifting.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool's purpose as 'Search TDX people with filters', specifying the verb (search), the resource (TDX people), and the scope (with filters). Sibling tools like tdx-people-get (retrieve a specific person) and tdx-people-lookup (likely a simpler lookup) cover adjacent ground, but the description never explicitly differentiates itself from them.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance is provided on when to use this tool versus alternatives like tdx-people-get (for retrieving a specific person by ID) or tdx-people-lookup (function unclear from name). The description implies usage for filtered searches but doesn't specify scenarios, prerequisites, or exclusions.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It states 'Update' which implies a mutation, but doesn't mention required permissions, whether changes are reversible, potential side effects (e.g., on related records), or error conditions. For a mutation tool with zero annotation coverage, this is a significant gap in transparency.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence with zero wasted words. It's front-loaded with the core action and resource, making it immediately scannable. Every word earns its place by conveying essential information without redundancy.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given this is a mutation tool with no annotations, no output schema, and complex nested parameters ('data' is an object with additionalProperties), the description is incomplete. It doesn't address behavioral aspects like authentication needs, rate limits, or what the tool returns upon success/failure. The agent lacks crucial context for safe and effective use.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the schema already documents both parameters ('uid' and 'data') with basic descriptions. The description adds no additional meaning about parameter usage, such as format examples for 'data' or how 'uid' relates to other tools. Baseline 3 is appropriate when the schema does the heavy lifting.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Update') and resource ('a TDX person'), making the purpose immediately understandable. However, it doesn't differentiate this tool from its sibling 'tdx-people-search' or other update tools like 'tdx-asset-update' or 'tdx-ticket-update', which would require more specificity about what constitutes a 'person' versus other entity types.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing a person UID from 'tdx-people-get' or 'tdx-people-search'), nor does it clarify use cases like modifying person attributes versus creating new records with 'tdx-account-get' tools. This leaves the agent guessing about appropriate contexts.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
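    The Parameters critique above notes that a description can add semantics the schema cannot: where 'uid' comes from and what shape 'data' takes. In this sketch the example field names (FirstName, Title) and the merge behavior are assumptions about TDX conventions, flagged as such in the wording itself.

    ```python
    # Hypothetical tdx-people-update description adding parameter provenance
    # and an inline example; field names are illustrative, not verified.
    people_update_desc = (
        "Update a TDX person. 'uid' is the person's identifier as returned "
        "by tdx-people-search or tdx-people-lookup. 'data' is an object of "
        "PascalCase TDX fields to change, e.g. {\"FirstName\": \"Ada\", "
        "\"Title\": \"Engineer\"}. Confirm against the TDX API whether "
        "omitted fields are preserved or cleared before relying on this."
    )

    assert "uid" in people_update_desc
    assert "PascalCase" in people_update_desc
    ```
    
    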

  • Behavior2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full burden for behavioral disclosure. 'Create a new TDX project' implies a write/mutation operation, but it doesn't disclose permissions required, whether the operation is idempotent, what happens on failure, rate limits, or what the response contains. For a creation tool with 11 parameters, this is insufficient behavioral context.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence that communicates the core purpose without any wasted words. It's appropriately sized for a tool with a straightforward name and doesn't bury important information.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a creation tool with 11 parameters, no annotations, and no output schema, the description is inadequate. It doesn't explain what happens after creation, what IDs might be returned, error conditions, or how this tool relates to other project tools. The agent has insufficient context to use this tool effectively beyond the basic schema.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The schema description coverage is 100%, meaning all parameters are documented in the schema itself. The description adds no additional parameter semantics beyond what's already in the schema descriptions. According to scoring rules, when schema coverage is high (>80%), the baseline is 3 even with no param info in the description.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Create') and resource ('new TDX project'), making the purpose immediately understandable. However, it doesn't differentiate this tool from other creation tools like 'tdx-asset-create', 'tdx-cmdb-create', or 'tdx-kb-create' that exist in the sibling list, which prevents a perfect score.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives. There's no mention of prerequisites, when to choose this over other project-related tools like 'tdx-project-update', or any contextual constraints. The agent must infer usage from the name alone.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden of behavioral disclosure. The verb 'Get' implies a read operation, but the description lacks details on permissions, error handling, rate limits, and response format. For a tool with zero annotation coverage, this is a significant gap in transparency.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence with no wasted words. It's front-loaded with the core action and resource, making it easy to parse quickly.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the lack of annotations and output schema, the description is incomplete. It doesn't cover behavioral aspects like safety, permissions, or return values, which are crucial for a tool that retrieves data. The high schema coverage helps with parameters, but overall context is lacking.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The description mentions 'by ID', which aligns with the single parameter 'id' in the schema. With 100% schema description coverage, the schema already documents the parameter adequately. The description adds minimal value beyond what the schema provides, meeting the baseline for high coverage.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Get') and resource ('a TDX project by ID'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'tdx-project-search' or 'tdx-project-update', which would require mentioning it's for retrieving a single project by its unique identifier rather than searching or modifying.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives. It doesn't mention scenarios like needing a specific project by ID, prerequisites, or comparisons to siblings such as 'tdx-project-search' for broader queries or 'tdx-project-update' for modifications.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden of behavioral disclosure. 'Link an asset' implies a mutation operation, but the description doesn't specify required permissions, whether this is reversible, potential side effects, or error conditions. For a mutation tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
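    The missing structured disclosure could be supplied through MCP tool annotations. A minimal sketch, assuming the hint names from the MCP specification (readOnlyHint, destructiveHint, idempotentHint, openWorldHint); the tool name comes from this review, and the hint values shown are illustrative assumptions, not the server's actual behavior:

    ```python
    # Hypothetical MCP tool definition with behavioral annotations.
    # Hint names follow the MCP spec; the values are assumptions made
    # for illustration, not verified behavior of the real server.
    tool = {
        "name": "tdx-ticket-add-asset",
        "description": "Link an asset to a TDX ticket.",
        "annotations": {
            "readOnlyHint": False,     # this call mutates the ticket
            "destructiveHint": False,  # linking is additive, not destructive
            "idempotentHint": True,    # assumed: re-linking the same asset is a no-op
            "openWorldHint": True,     # talks to an external TDX instance
        },
    }
    ```

    With hints like these in place, the prose description only needs to cover what annotations cannot express, such as permission requirements and error conditions.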

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence with zero waste—it directly states the tool's purpose without unnecessary words. It's appropriately sized and front-loaded, making it easy to parse quickly.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given that this is a mutation tool with no annotations and no output schema, the description is incomplete. It doesn't cover behavioral aspects like permissions, side effects, or return values, leaving the agent with insufficient context to use the tool safely and effectively. The high schema coverage doesn't compensate for these gaps.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the schema already documents all three parameters (appId, id, assetId) with their types and purposes. The description adds no additional parameter semantics beyond what's in the schema, such as format examples or constraints. Baseline 3 is appropriate when the schema does the heavy lifting.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Link') and the target resources ('an asset to a TDX ticket'), providing a specific verb+resource combination. However, it doesn't explicitly differentiate this tool from sibling tools like 'tdx-ticket-add-contact' or 'tdx-ticket-patch', which also modify tickets. The purpose is clear but lacks sibling differentiation.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., existing ticket and asset), when-not-to-use scenarios, or comparisons with sibling tools like 'tdx-ticket-patch' that might also handle asset linking. Usage is implied from the name but not explicitly stated.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
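    A description that follows the "use X instead of Y when Z" pattern could look like the sketch below. The sibling tool names are taken from this review; the wording itself is a hypothetical rewrite, not the server's actual description:

    ```python
    # Hypothetical description rewrite showing explicit usage guidance.
    # Sibling tool names come from the review; the text is an assumption.
    DESCRIPTION = (
        "Link an existing asset to a TDX ticket. "
        "Requires that both the ticket and the asset already exist. "
        "Use tdx-ticket-patch to change ticket fields instead; "
        "use this tool only to attach assets."
    )
    ```

    One or two sentences of this kind are usually enough for an agent to pick the right tool without inflating token cost.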

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden but offers minimal behavioral insight. The verb 'Add' implies a mutation, but the description doesn't disclose the permissions needed, whether the contact addition is reversible, rate limits, or what happens on success or failure. This leaves critical behavioral traits undocumented.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, direct sentence with zero wasted words. It's front-loaded with the core action and resource, making it immediately scannable and efficient.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a mutation tool with no annotations and no output schema, the description is inadequate. It doesn't explain what 'Add' entails operationally, what the result looks like, error conditions, or system behavior. Given the complexity of modifying ticket data, more context is needed.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so parameters are fully documented in the schema. The description adds no additional parameter semantics beyond implying 'id' and 'uid' are required (which the schema already states). Baseline 3 is appropriate when the schema does all the work.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Add') and target ('a contact to a TDX ticket'), making the purpose immediately understandable. However, it doesn't differentiate from sibling tools like 'tdx-ticket-add-asset' beyond the resource type (contact vs asset), missing explicit sibling distinction.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance is provided on when to use this tool versus alternatives. The description doesn't mention prerequisites, context for adding contacts, or when other tools (like 'tdx-ticket-update') might be more appropriate.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden but offers minimal behavioral insight. The verb 'Add' implies a mutation, but the description doesn't disclose the permissions required, whether comments are editable or deletable, rate limits, or what happens on success or failure. The mention of 'HTML supported' for comments is useful but insufficient for a mutation tool with zero annotation coverage.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence that front-loads the core purpose without unnecessary words. Every element ('Add a comment/feed entry to a TDX ticket') directly contributes to understanding the tool's function, with zero waste.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a mutation tool with no annotations and no output schema, the description is inadequate. It doesn't explain what happens after adding a comment (e.g., returns success/failure, new feed entry ID), error conditions, or behavioral nuances. Given the complexity of adding to a ticket feed, more context is needed to guide effective use.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the schema already documents all 5 parameters thoroughly. The description adds no parameter-specific information beyond what's in the schema (e.g., it doesn't clarify 'comments' HTML support limits or 'notify' UID format). Baseline 3 is appropriate when the schema does the heavy lifting.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
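    The kind of constraint detail the rubric asks for could live directly in the schema's parameter descriptions. A sketch, assuming JSON Schema conventions; the parameter names ('comments', 'notify') come from this review, while the constraint text (which HTML is allowed, the UID format) is invented for illustration and would need to match the real TDX API:

    ```python
    # Hypothetical input schema whose descriptions carry the constraints
    # the review flags as missing. Parameter names come from the review;
    # the specific constraint wording is an assumption.
    input_schema = {
        "type": "object",
        "properties": {
            "comments": {
                "type": "string",
                "description": "Comment body. Basic HTML (links, lists, "
                               "bold) is rendered; script tags are stripped.",
            },
            "notify": {
                "type": "array",
                "items": {"type": "string"},
                "description": "User UIDs (GUID strings) to notify "
                               "about this feed entry.",
            },
        },
        "required": ["comments"],
    }
    ```

    When the schema carries this level of detail, the prose description can stay short without losing the baseline Parameters score.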

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Add') and resource ('comment/feed entry to a TDX ticket'), making the purpose immediately understandable. It distinguishes this as a feed/comment addition tool rather than a general ticket update, though it doesn't explicitly differentiate from sibling tools like 'tdx-asset-feed-add' or 'tdx-cmdb-feed-add' which serve similar functions for different resources.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance is provided on when to use this tool versus alternatives. The description doesn't mention prerequisites (e.g., needing a valid ticket ID), contrast with other feed-related tools (like 'tdx-ticket-feed-get' for reading), or specify use cases (e.g., for customer updates vs internal notes). Usage is implied but not articulated.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden. The verb 'Get' implies a read operation, but the description doesn't disclose behavioral traits like authentication needs, rate limits, pagination, or what the feed includes (e.g., comments, history). This leaves gaps for an agent to understand how to interact with it effectively.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence that directly states the tool's function without unnecessary words. It's front-loaded and wastes no space, making it easy to parse quickly.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given no annotations and no output schema, the description is incomplete. It doesn't explain what the feed includes (e.g., structured comments, timestamps), potential errors, or response format. For a tool with 2 parameters and no structured safety hints, more context is needed to guide proper usage.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, with clear descriptions for both parameters (appId and id). The description adds no additional meaning beyond the schema, such as explaining the relationship between appId and id or typical use cases. Baseline 3 is appropriate since the schema does the heavy lifting.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Get') and resource ('feed/comments for a TDX ticket'), making the purpose understandable. However, it doesn't differentiate from sibling tools like 'tdx-ticket-get' or 'tdx-ticket-feed-add', which handle ticket data or adding to the feed respectively, so it's not fully specific.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance is provided on when to use this tool versus alternatives. For example, it doesn't explain if this is for retrieving comments only versus full ticket details, or how it differs from 'tdx-ticket-get' which might include feed data. The description lacks context for tool selection.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure but only states the action, with no details on permissions, rate limits, error handling, or response format. It doesn't contradict annotations (none exist), but it fails to add meaningful context beyond the basic operation, leaving the agent with insufficient information about how the tool behaves in practice.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence with zero wasted words—'Get a TDX ticket by ID' directly communicates the core functionality without unnecessary elaboration. It's appropriately sized for a simple retrieval tool and front-loads the essential information.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's simplicity (2 parameters, 100% schema coverage) but lack of annotations and output schema, the description is incomplete. It doesn't address what the tool returns (e.g., ticket fields, error cases) or behavioral aspects like authentication needs, leaving gaps that could hinder effective agent usage despite the straightforward operation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so parameters are fully documented in the schema. The description doesn't add any semantic information beyond what the schema provides (e.g., explaining ticket ID format or app ID usage), but meets the baseline since the schema handles parameter documentation adequately.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Get') and resource ('a TDX ticket by ID'), making the tool's purpose immediately understandable. It doesn't explicitly differentiate from sibling tools like 'tdx-ticket-search' or 'tdx-ticket-feed-get', but the specificity of retrieving by ID provides some implicit distinction.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives like 'tdx-ticket-search' for broader queries or 'tdx-ticket-feed-get' for feed-related data. It lacks context about prerequisites or typical use cases, offering only the basic operation without comparative information.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries full burden. It states it's a partial update, implying mutation, but doesn't disclose behavioral traits like required permissions, whether changes are reversible, rate limits, or what happens to unspecified fields. This is inadequate for a mutation tool with zero annotation coverage.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence with zero waste. It's front-loaded with the core purpose and appropriately sized for the tool's complexity.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a mutation tool with no annotations and no output schema, the description is incomplete. It lacks details on behavioral context, error handling, and output expectations, leaving significant gaps for an agent to use it correctly.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the schema documents all parameters. The description adds minimal value beyond the schema by implying 'data' contains partial fields, but doesn't explain syntax or format details like PascalCase usage beyond what's in the schema descriptions.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('partial update') and resource ('TDX ticket'), specifying that it modifies only the fields supplied. It distinguishes itself from the sibling 'tdx-ticket-update' by implying that tool performs a full update, though this is never stated explicitly.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No explicit guidance on when to use this tool versus alternatives like 'tdx-ticket-update' or 'tdx-ticket-create'. The description implies usage for partial updates but lacks context on prerequisites, error conditions, or comparisons to siblings.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden of behavioral disclosure. The verb 'Get' implies a read operation, but the description doesn't cover aspects like authentication requirements, rate limits, error handling, or what happens if the ID doesn't exist. This leaves significant gaps for an agent to understand operational behavior.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's front-loaded with the core action and resource, making it easy to parse quickly, with zero wasted information.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's low complexity (simple retrieval with 2 parameters) and high schema coverage, the description is minimally adequate. However, with no annotations and no output schema, it lacks details on behavioral traits and return values, which could hinder an agent's ability to use it correctly in all scenarios.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the schema already documents both parameters ('appId' and 'id') with clear descriptions. The description adds no additional meaning beyond implying 'id' is required for retrieval, which is already covered in the schema's required field. Baseline 3 is appropriate when the schema handles parameter documentation effectively.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Get') and resource ('TDX configuration item by ID'), making the purpose immediately understandable. However, it doesn't differentiate from sibling tools like 'tdx-cmdb-search' or 'tdx-cmdb-get' (if that existed), which would require more specificity about scope or filtering.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives like 'tdx-cmdb-search' or 'tdx-cmdb-get' (implied from siblings). It lacks context about prerequisites, such as needing a specific ID versus searching, and doesn't mention any exclusions or complementary tools.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. The verb 'Get' implies a read operation, but the description doesn't cover aspects like authentication requirements, rate limits, error handling, or what happens if the ID is invalid. This leaves significant gaps in understanding the tool's behavior.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence with zero waste—it directly states the tool's purpose without unnecessary words. It's appropriately sized and front-loaded, making it easy to parse quickly.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's low complexity (one parameter, no output schema, no annotations), the description is minimally adequate. However, it lacks details on return values or error cases, which could be crucial for an agent to handle responses correctly, making it incomplete for robust usage.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has 100% description coverage, with the 'id' parameter documented as 'Group ID'. The description adds no additional meaning beyond this, such as format examples or constraints. Since the schema does the heavy lifting, the baseline score of 3 is appropriate.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Get') and resource ('a TDX group by ID'), making the purpose understandable. However, it doesn't explicitly differentiate from its sibling 'tdx-group-search', which might be used for broader group queries, leaving room for ambiguity.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives like 'tdx-group-search' or other get tools (e.g., 'tdx-account-get'). It lacks context about prerequisites, such as needing a specific group ID, or exclusions, leaving the agent without usage direction.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden of behavioral disclosure. The verb 'Get' suggests the operation is likely read-only, but the description doesn't confirm whether it's safe, requires authentication, has rate limits, or what happens on errors (e.g., if the ID doesn't exist). For a tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's front-loaded with the core action and resource, making it easy to parse quickly. Every word earns its place, and there's no redundancy or fluff.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's low complexity (simple retrieval by ID), 100% schema coverage, and no output schema, the description is minimally adequate. However, without annotations or output details, it lacks context on authentication needs, error handling, or return format. For a basic read operation, it's passable but could be more informative to fully guide an agent.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The description mentions 'by ID', which aligns with the 'id' parameter in the schema, but doesn't add meaning beyond what the schema already provides (100% coverage with clear descriptions for both 'appId' and 'id'). It doesn't explain the relationship between parameters or provide usage examples. With high schema coverage, the baseline is 3, and the description doesn't significantly enhance parameter understanding.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Get') and resource ('TDX knowledge base article by ID'), making the purpose immediately understandable. It distinguishes itself from sibling tools like 'tdx-kb-search' by focusing on retrieval by specific ID rather than searching. However, it doesn't explicitly contrast with 'tdx-kb-create/delete/update' for full sibling differentiation.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when to choose 'tdx-kb-get' over 'tdx-kb-search' (e.g., when you have a specific article ID vs. need to find articles by criteria) or other sibling tools like 'tdx-kb-create' for different operations. Usage is implied but not explicitly stated.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden. It states it's a read operation ('Get'), implying non-destructive behavior, but doesn't disclose any behavioral traits such as authentication requirements, rate limits, error handling (e.g., what happens if the UID is invalid), or response format. This leaves significant gaps for an agent to understand how to use it effectively.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It is front-loaded with the core action and resource, making it easy to parse. Every part of the sentence contributes essential information.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's low complexity (single parameter, no output schema, no annotations), the description is minimally adequate. It covers the basic purpose but lacks behavioral details and usage guidelines. Without annotations or output schema, the agent must infer behavior from the description alone, which is insufficient for fully informed use.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has 100% description coverage, with the 'uid' parameter documented as 'Person UID'. The description adds no additional semantic context beyond this, such as UID format examples or constraints. With high schema coverage, the baseline score of 3 is appropriate, as the schema already provides adequate parameter information.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Get') and resource ('a TDX person'), and specifies the lookup method ('by UID'). It distinguishes from sibling tools like 'tdx-people-search' and 'tdx-people-lookup' by focusing on direct UID-based retrieval. However, it doesn't explicitly contrast with 'tdx-account-get' which might retrieve similar data through a different identifier.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives like 'tdx-people-search' (for broader queries) or 'tdx-people-lookup' (for other lookup methods). It lacks context about prerequisites (e.g., needing a valid UID) or exclusions (e.g., not for creating or updating people).

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden. It mentions 'quick lookup' and searchable fields, but lacks details on behavioral traits like rate limits, authentication needs, error handling, or what the output looks like (e.g., list of people objects). For a search tool with no annotations, this leaves significant gaps in understanding its operation.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence that front-loads the key information: it's a quick lookup tool for TDX people using search strings. There's no wasted verbiage, making it highly concise and well-structured for immediate understanding.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's low complexity (2 parameters, no nested objects) and 100% schema coverage, the description is somewhat complete but lacks output details (no output schema provided) and behavioral context. It adequately covers the basic purpose but falls short in explaining usage scenarios or result format, making it minimally viable but with clear gaps.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The schema description coverage is 100%, with clear descriptions for both parameters ('searchText' and 'maxResults'). The description adds minimal value beyond the schema, only reiterating the searchable fields. Since the schema does the heavy lifting, the baseline score of 3 is appropriate, as the description doesn't provide additional semantic context like examples or edge cases.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool's purpose: 'Quick lookup of TDX people by search string (name, email, or username).' It specifies the verb ('lookup'), resource ('TDX people'), and search criteria. However, it doesn't explicitly differentiate from sibling tools like 'tdx-people-get' or 'tdx-people-search,' which is why it doesn't reach a 5.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives. With sibling tools like 'tdx-people-get' (likely for retrieving a specific person by ID) and 'tdx-people-search' (possibly more advanced search), there's no indication of when this 'lookup' is preferred, such as for quick, simple searches versus comprehensive ones.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries full burden. It mentions 'replaces all fields,' indicating destructive behavior, but doesn't disclose other traits like authentication needs, rate limits, error handling, or what happens to unspecified fields. This is inadequate for a mutation tool with zero annotation coverage.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence with zero waste—it directly states the tool's purpose and key behavioral trait (full replacement). It's appropriately sized and front-loaded, making it easy to understand quickly.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given this is a mutation tool with no annotations and no output schema, the description is incomplete. It lacks details on behavioral aspects (e.g., permissions, side effects), return values, or error cases. The high schema coverage helps, but overall context is insufficient for safe and effective use.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the schema already documents all parameters (appId, id, data). The description adds no additional meaning beyond what's in the schema, such as examples or constraints for 'data' (e.g., PascalCase requirement). Baseline 3 is appropriate when schema does the heavy lifting.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Full update') and resource ('TDX ticket'), specifying it replaces all fields. It distinguishes itself from 'tdx-ticket-patch' by indicating a full replacement rather than a partial update. However, it stops short of explicitly contrasting itself with the other sibling tools, so it falls short of a perfect 5.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description implies usage by specifying 'replaces all fields,' which suggests it should be used when a complete ticket update is needed, as opposed to 'tdx-ticket-patch' for partial updates. However, it doesn't explicitly state when not to use it or mention alternatives like 'tdx-ticket-create' or 'tdx-ticket-get,' leaving some ambiguity.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries full burden. It discloses the return format ('attribute IDs, names, types, and choices') which is valuable, but doesn't mention authentication requirements, rate limits, error conditions, or whether this is a read-only operation (though 'Get' implies it). For a tool with no annotations, this leaves significant behavioral gaps.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is perfectly concise: two sentences that each earn their place. The first sentence states purpose and scope, the second explains the return value and practical application. No wasted words, well-structured, and front-loaded with the core functionality.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given no annotations and no output schema, the description provides adequate but incomplete context. It explains what the tool does and what it returns, but doesn't cover authentication, error handling, or operational constraints. For a metadata retrieval tool with 3 parameters, this is minimally viable but leaves important contextual gaps.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema. According to guidelines, when schema coverage is high (>80%), the baseline score is 3 even without parameter details in the description.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Get custom attribute definitions'), specifies the target resource ('for a TDX component'), and provides concrete examples of components ('e.g. tickets, assets, CIs'). It distinguishes itself from sibling tools by focusing on metadata retrieval rather than data manipulation, which is evident from the sibling list containing primarily CRUD operations.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description implies usage context by stating 'needed for creating/updating items with custom attributes', suggesting this tool should be used before those operations. However, it doesn't explicitly state when to use this versus alternatives or provide any exclusion criteria. The guidance is helpful but not comprehensive.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

TeamDynamix-MCP-Connector MCP server

Copy to your README.md:

Score Badge

TeamDynamix-MCP-Connector MCP server

Copy to your README.md:

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you must first add a glama.json file to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
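The weighting scheme above can be sketched in a few lines of Python. Only the weights, the 60/40 mean-vs-minimum blend, and the tier thresholds come from this page; the function names and the sample scores are made up for illustration.

```python
def tool_definition_quality(dimensions):
    """Weighted 1-5 score for one tool across the six published dimensions."""
    weights = {
        "purpose": 0.25,
        "usage": 0.20,
        "behavior": 0.20,
        "parameters": 0.15,
        "conciseness": 0.10,
        "completeness": 0.10,
    }
    return sum(weights[k] * dimensions[k] for k in weights)

def server_quality(per_tool_scores, coherence):
    """Overall score: 70% definition quality (60% mean + 40% min) + 30% coherence."""
    mean_tdqs = sum(per_tool_scores) / len(per_tool_scores)
    min_tdqs = min(per_tool_scores)  # a single weak tool drags this down
    tool_quality = 0.6 * mean_tdqs + 0.4 * min_tdqs
    return 0.7 * tool_quality + 0.3 * coherence

def tier(score):
    """Map an overall score to the letter tiers listed above."""
    for threshold, letter in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= threshold:
            return letter
    return "F"

# A tool like the ones reviewed above: clear purpose, weak behavior/usage.
tdqs = tool_definition_quality({
    "purpose": 4, "usage": 2, "behavior": 2,
    "parameters": 3, "conciseness": 5, "completeness": 3,
})
overall = server_quality([tdqs, 4.0, 3.2], coherence=3.0)
print(round(tdqs, 2), round(overall, 2), tier(overall))  # 3.05 3.19 B
```

Note how the 40% weight on the minimum TDQS means one poorly described tool lowers the server-level score even when the rest of the catalog is strong.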


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/umzcio/TeamDynamix-MCP-Connector'
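The same GET request can be issued from Python's standard library. Only the URL is taken from the curl example above; the Accept header is an assumption, and response handling is left commented out because the API's JSON shape isn't documented on this page.

```python
import urllib.request

# Endpoint from the curl example above.
url = "https://glama.ai/api/mcp/v1/servers/umzcio/TeamDynamix-MCP-Connector"
req = urllib.request.Request(url, headers={"Accept": "application/json"})

# Uncomment to actually send the request (requires network access):
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())

print(req.get_method(), req.full_url)
```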

If you have feedback or need assistance with the MCP directory API, please join our Discord server.