Glama

Server Quality Checklist

Profile completion: 83%

A complete profile improves this server's visibility in search results.
  • Disambiguation: 4/5

    Tools are semantically distinct with clear resource targeting (card vs board vs column vs label) and action specificity (create vs duplicate vs graduate, add vs toggle). While the large set creates cognitive load, descriptive prefixes prevent overlap between similar operations like list_cards (filtered) and search_cards (keyword across all boards).

    Naming Consistency: 5/5

    Exemplary consistency across all 32 tools. Every name follows snake_case with standardized verb_object structure (e.g., create_card, add_labels_to_card, set_active_board, toggle_checklist_item). Verbs are uniform throughout the CRUD and lifecycle operations.

    Tool Count: 2/5

    At 32 tools, the surface significantly exceeds the threshold for effective agent selection (25+). It is overly granular, exposing fine-grained operations (separate tools for color, checklist items, single vs. batch create) that could be consolidated into parameters; this granularity increases selection error rates.
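The consolidation suggested here can be sketched as a single parameterized tool definition. This is a hypothetical sketch only: the `create_cards` name and the `cards` array parameter are assumptions, not the server's actual API.

```python
# Hypothetical sketch: collapsing single-create and batch-create (plus the
# color setter) into one parameterized tool. Names and fields are invented
# for illustration.
consolidated_tool = {
    "name": "create_cards",
    "description": (
        "Create one or more cards on the active board. "
        "Replaces separate single- and batch-create tools."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "cards": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "title": {"type": "string", "description": "Card title"},
                        "color": {"type": "string", "description": "Optional hex color"},
                    },
                    "required": ["title"],
                },
                "description": "One object per card to create",
            }
        },
        "required": ["cards"],
    },
}

# One tool now covers what previously required choosing between three
# near-identical tools, shrinking the selection surface the agent faces.
```

The trade-off is a slightly richer input schema in exchange for fewer, more distinguishable tools.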

    Completeness: 4/5

    Strong coverage of the project management domain including full card lifecycle, board/column management, labeling, assignment, time tracking, and git integration. Minor gaps exist (no delete board, no update/delete comment), but core video production workflows (graduate_to_production, Idea Pool) are well supported.

  • Average 3.5/5 across 32 of 32 tools scored. Lowest: 2.7/5.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v0.1.0

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • This repository includes a glama.json configuration file.

  • This server provides 32 tools.
  • No known security issues or vulnerabilities reported.


  • This server has been verified by its author.

  • Add related servers to improve discoverability.

Tool Scores

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations indicate non-idempotent writes (idempotentHint=false, readOnlyHint=false), but the description adds no behavioral context such as 'each call creates a distinct comment' or whether notifications are triggered.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
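As an illustration of this principle, a description can carry the behavioral consequences that the structured MCP annotations (`readOnlyHint`, `destructiveHint`, `idempotentHint`) cannot express on their own. The wording below is invented for the sake of example, not the server's actual text.

```python
# Illustrative only: an add_comment definition whose prose states the
# consequences behind its annotations. The description wording is assumed.
add_comment = {
    "name": "add_comment",
    "description": (
        "Add a comment to a card. Each call creates a new, distinct comment; "
        "retries will produce duplicates. The server exposes no tools to "
        "edit or delete comments afterwards."
    ),
    "annotations": {
        "readOnlyHint": False,     # this is a write
        "destructiveHint": False,  # nothing is overwritten or removed
        "idempotentHint": False,   # repeat calls add repeat comments
    },
}

# The prose duplicates nothing structural: it adds the consequences
# (duplication on retry, no later edits) that boolean hints cannot convey.
```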

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Extremely brief single sentence with no redundancy. However, brevity borders on under-specification given the rich sibling tool context that could cause confusion.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Simple 2-parameter tool with complete schema coverage and annotations. Description lacks output semantics (no output schema exists) and domain-specific context, but meets minimum threshold for invocation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100% with both parameters documented. The description adds no syntax guidance beyond the schema (e.g., it does not clarify that partial title matching is supported for card_id, per the schema). A baseline score of 3 is therefore appropriate.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 3/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description states the core action (add comment) and target (card), but is minimal and borders on tautology ('add_comment' → 'Add a comment'). It does not distinguish from siblings like 'add_checklist_item' or clarify the domain context.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Description provides no guidance on when to use this versus alternatives like 'update_card' (for description changes) or 'log_work'. No prerequisites or exclusion criteria stated.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    The word 'existing' correctly implies the card must already be present (consistent with annotations) and suggests this is not a create operation. However, it does not disclose what happens to unspecified fields (partial vs full replacement), nor explain the partial matching behavior hinted at in the schema's 'id' parameter.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is extremely brief at only four words with no filler. While not wasting words, it is arguably too minimal—missing the opportunity to add value that structured fields cannot provide (like sibling differentiation).

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the well-documented schema (6 params, 100% coverage) and presence of safety annotations, the description meets minimum viability. However, it omits explanation of domain terminology ('frame') and doesn't address the tool's relationship to the 30+ sibling operations.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the baseline is 3. The description adds no parameter-specific context (e.g., it doesn't clarify that 'column' accepts stage names or that priority accepts empty strings to clear values), but it also doesn't need to compensate for schema gaps.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 3/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description states the specific action ('Update') and resource ('card'), confirming this is a mutation operation. However, the '/frame' terminology is unexplained jargon that adds confusion, and it fails to differentiate from siblings like 'move_card' (which also changes column) or 'set_card_color'.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 1/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance provided on when to use this tool versus alternatives like 'move_card' (which overlaps on the column/stage parameter) or 'archive_card'. No prerequisites or conditions are mentioned despite having many sibling tools with overlapping functionality.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
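A minimal sketch of the guidance this dimension asks for, written in the "use X instead of Y when Z" pattern. The partial-update claim is an assumption (the review notes the real description never specifies partial vs. full replacement), and all wording here is hypothetical.

```python
# Hypothetical rewrite of the update_card description. "Only the fields you
# pass are changed" is assumed partial-update semantics that the real server
# would need to confirm.
update_card_description = (
    "Update fields on an existing card/frame (title, description, priority, "
    "color, column). Only the fields you pass are changed. To change only a "
    "card's column/stage, prefer move_card; to archive it, use archive_card."
)
```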

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations indicate non-idempotent and open-world behavior, but description adds no context about side effects (e.g., duplicate creation on retries), authorization requirements, or what the tool returns. Relies entirely on structured annotations for safety profile disclosure.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Single sentence with zero redundancy, appropriately front-loaded. However, extreme brevity results in underspecification for a tool with openWorld and non-idempotent characteristics that agents should know about.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Minimally viable given comprehensive schema and present annotations covering read/write safety. However, lacks critical workflow context for board creation (e.g., relationship between 'project' and 'board' nomenclature, whether creation implies activation) and omits output behavior description.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage (all 4 parameters documented including enum explanations for mode), the schema carries full semantic weight. Description adds no parameter guidance, which meets baseline expectations when schema coverage is complete.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clear verb 'Create' and resource 'project/board', with platform context 'Framedeck'. Distinguishes from siblings creating cards/columns by specifying board-level resource. The slash notation hints at the dual mode types (classic vs creator) matching the schema enum, though this could be more explicit.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance on when to use versus alternatives, workflow prerequisites (e.g., whether to set as active after creation), or how it relates to sibling tools like set_active_board or graduate_to_production. Zero exclusion criteria or alternative suggestions provided.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    While annotations declare write/non-destructive status, the description adds no context about behavioral traits like the idempotentHint=false implication (duplicates allowed?) or openWorldHint=true side effects. It also fails to disclose the scope (board-level vs global) or return value structure.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Extremely concise at three words. Front-loaded with verb-first structure and no redundancy. However, extreme brevity sacrifices necessary behavioral context, preventing a score of 5.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for a simple 2-parameter creation tool with full schema coverage and safety annotations. Lacks depth regarding output format, error conditions (e.g., duplicate names), or scope, but sufficient for basic invocation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100% with clear descriptions ('Label name', 'Hex color'). The description 'Create a new label/tag' adds no additional parameter semantics (syntax, validation rules, or examples), which matches baseline expectations when schema is comprehensive.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Specific verb 'Create' + resource 'label/tag' clearly identifies the tool's function. However, it does not explicitly distinguish from sibling 'add_labels_to_card' (which applies existing labels), potentially causing confusion about whether this creates the label entity or an association.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance provided on when to use this vs. 'list_labels' (to check existence first) or prerequisites like active board selection. No mention of uniqueness constraints or idempotency behavior implied by the schema.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
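One way to resolve the entity-vs-association ambiguity flagged above is to have each description name its sibling explicitly. Both strings below are illustrative, not the server's actual text.

```python
# Illustrative description pair (assumed wording): each tool cross-references
# its sibling so agents can tell label creation apart from label association.
create_label_description = (
    "Create a new label/tag definition on the board. Does not attach it to "
    "any card; use add_labels_to_card for that."
)
add_labels_to_card_description = (
    "Attach one or more existing labels to a card. Does not create new "
    "labels; use create_label first if a label does not exist yet."
)
```

Mutual cross-references like these are cheap in tokens and directly implement the "use X instead of Y when Z" pattern the checklist recommends.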

  • Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations already declare readOnlyHint=true and openWorldHint=true, covering the safety profile. The description adds minimal value beyond the 'tags' synonym; it does not clarify pagination, caching, or the structure of returned label objects.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Extremely brief (four words) with no redundant language. However, the extreme brevity contributes to under-specification rather than earning a perfect score for 'appropriately sized' given the lack of output schema.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    No output schema exists, yet the description fails to document the return structure (e.g., label name, color, ID fields) or the scoping behavior (which board's labels), leaving significant gaps for an agent attempting to use the returned data.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Input schema contains zero parameters, triggering the baseline score of 4. The description appropriately does not invent parameter semantics where none exist.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 3/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States the basic action (List) and resource (labels/tags), and the 'tags' synonym helps clarify the domain. However, it fails to specify the scope of 'all' (e.g., active board, organization-wide), which is ambiguous given the Trello-like context of sibling tools.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides no guidance on when to invoke this tool versus alternatives like 'get_board_overview' or 'get_card_details' that might also return label information, nor does it mention prerequisite context (e.g., active board selection).

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    While annotations indicate this is a non-read-only, non-destructive, non-idempotent operation with open-world implications, the description adds no context about what gets created, side effects, or the meaning of 'openWorld' in this domain (e.g., external time tracking integration). The description merely restates the tool name without behavioral elaboration.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description consists of a single efficient sentence with no redundant words, immediately conveying the core action. However, appropriate conciseness for a write operation requires more content than five words to adequately inform agent behavior.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given this is a write operation with open-world implications and no output schema, the description lacks critical context about the created artifact, validation rules, or expected outcomes. It relies entirely on the schema and annotations without addressing the tool's transactional behavior.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the schema already documents all three parameters adequately. The description offers no additional semantic context, examples, or format constraints (e.g., positive hours only) beyond the structured schema fields, meeting the baseline expectation.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses a specific verb ('logs') and identifies the target resource (time spent) and scope (on a card). However, it does not explain what 'logging' entails (creating a record vs. updating a field) nor distinguishes this from similar descriptive actions like 'add_comment'.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no explicit guidance on when to use this tool versus alternatives, prerequisites such as card existence, or whether multiple logs aggregate or overwrite. Users must infer usage solely from the parameter names and schema.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    While annotations declare readOnly=false and idempotent=false, the description doesn't explain the non-idempotent toggle behavior (especially when 'checked' is omitted) or partial matching risks. It adds minimal behavioral context beyond what annotations and schema provide.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Extremely concise at seven words. Front-loaded with action verbs, but potentially too minimal for a mutation tool, leaving no room for behavioral nuances or warnings about partial matching.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Inadequate for a mutation tool with partial matching logic. Fails to address failure modes (item not found, multiple matches), doesn't clarify the idempotent=false implication, and omits output expectations despite having no output schema.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema coverage, the description serves baseline documentation. It maps 'Check or uncheck' to the checked parameter and 'subtask/checklist item' to item_text, but doesn't add syntax details, format constraints, or explain the partial matching semantics beyond the schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific action ('Check or uncheck') and target resource ('subtask/checklist item'). However, it fails to distinguish from sibling 'add_checklist_item'—both mention checklist items but one creates while this modifies state.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides no guidance on when to use this tool versus alternatives, nor does it mention the partial matching behavior implied by the schema. No mention of prerequisites like card existence or handling multiple matching items.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    The description adds valuable behavioral context by explaining the active board fallback logic, which is not covered by the readOnlyHint and openWorldHint annotations. This clarifies how the tool behaves when the optional parameter is omitted.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 3/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is appropriately brief at two sentences, but the terminology inconsistency between 'columns' (tool name/siblings) and 'stages' (description) wastes interpretive effort without adding clarity.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a single-parameter list operation with readOnly annotations, the description adequately covers the core functionality and default behavior. However, it lacks any indication of return structure or relationship to the column lifecycle defined by sibling tools.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100% and the description largely repeats the schema's explanation of the board parameter ('uses active board if omitted and set'). It adds minimal semantic meaning beyond what the schema already provides, meeting the baseline for high-coverage schemas.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 3/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description states 'List stages in a board' but the tool is named list_columns and siblings use 'column' terminology (create_column, delete_column). This terminology mismatch creates ambiguity about whether stages and columns are the same entity or different resources.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description explains the active board fallback behavior ('Uses active board if set and no board specified'), which provides implicit guidance on when the board parameter can be omitted. However, it lacks explicit guidance on when to use this versus list_boards or get_board_overview.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
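The columns-vs-stages mismatch noted above could be repaired by treating "stages" as a parenthetical synonym rather than a replacement term. This rewrite is an assumption about intent, not the server's actual description.

```python
# Assumed rewrite aligning the list_columns description with its
# create_column / delete_column siblings while keeping the domain term
# "stage" as an explicit synonym.
list_columns_description = (
    "List the columns (also called stages) in a board. Uses the active "
    "board when no board is specified."
)
```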

  • Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With annotations declaring idempotentHint=true and destructiveHint=false, the description's burden is reduced. It adds minimal context beyond the tool name, though it does clarify the scope is 'from a card' (not system-wide deletion). It fails to mention idempotency, error cases (e.g., label not present), or side effects, but does not contradict the safety profile indicated by annotations.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is extremely efficient at six words with no redundancy. While appropriately sized for a simple two-parameter operation, it borders on being overly terse—functioning as a sentence fragment rather than providing rich guidance—but achieves clarity without waste.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the simple operation, complete parameter schema, and rich annotations covering safety (idempotent, non-destructive), the description is minimally sufficient. However, it lacks output expectations, error handling details, or domain context that would help an agent understand failure modes.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema coverage, the input parameters are fully documented in the schema itself ('Card ID or title' and 'Label name or ID'). The description adds no additional semantic meaning, parameter relationships, or format examples beyond what the schema provides, warranting the baseline score.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description provides a clear verb ('Remove') and resource ('label') with contextual scope ('from a card'), clarifying this removes the association rather than deleting the label globally. However, it fails to explicitly distinguish from the sibling tool 'add_labels_to_card' or clarify when to use this versus editing labels.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description offers no guidance on when to use this tool versus alternatives (e.g., 'add_labels_to_card'), nor does it mention prerequisites such as whether the label must already exist on the card. Usage context is entirely absent.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations already declare readOnlyHint=false and destructiveHint=false. The description adds context that this creates a 'subtask', implying mutation, but does not elaborate on the non-idempotent behavior (creating duplicates) or openWorldHint implications beyond what annotations provide.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Single, efficient sentence with no redundancy. The action and target are front-loaded, and every word serves a purpose. Appropriate length for the tool's complexity.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the simple 2-parameter input with complete schema coverage and existing behavioral annotations, the description is minimally sufficient. However, it lacks mention of return values, error conditions (e.g., invalid card_id), or side effects, leaving gaps in contextual completeness.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

  • Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100% (both 'card_id' and 'text' are well-documented in the schema). The description reinforces these concepts ('card', 'subtask text') but does not add additional semantic detail or usage syntax beyond the schema definitions.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

  • Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses a specific verb ('Add') and clearly identifies the resource ('subtask/checklist item') and target ('card'). It adequately distinguishes from siblings like 'add_comment' or 'add_labels_to_card' by specifying 'checklist item', though it does not explicitly differentiate from 'toggle_checklist_item'.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

  • Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives (e.g., when to add a checklist item vs. a comment) or prerequisites (e.g., card must exist). It merely restates the action without contextual usage criteria.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
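    A low Usage Guidelines score like this is usually cheap to fix: one sentence naming the alternative tool and the deciding condition. A minimal before/after sketch in Python; the tool names appear on this server, but the revised wording is hypothetical, not taken from it:

```python
# Hypothetical before/after for the add_checklist_item description.
# The referenced tool names exist on this server; the rewrite is illustrative.
before = "Add a subtask/checklist item to a card."

after = (
    "Add a subtask/checklist item to a card. "
    "Use this for actionable to-dos tracked with toggle_checklist_item; "
    "use add_comment for free-form discussion instead. "
    "The card must already exist; repeated calls create duplicate items."
)

# Usage guidance names the alternative and the deciding condition.
assert "add_comment" in after and "instead" in after
assert "must already exist" in after  # prerequisite is now explicit
```

    One extra sentence covers the alternative, the prerequisite, and the non-idempotent retry behavior the Behavior dimension flagged.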

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Adds 'existing card' prerequisite and 'one or more' cardinality context. However, given openWorldHint=true, it fails to clarify whether non-existent labels are auto-created or cause errors. Annotations cover idempotency and non-destructive nature.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

  • Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Single 7-word sentence with zero redundancy. Front-loaded and efficient, though minimalism leaves gaps in behavioral disclosure.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

  • Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for a simple 2-parameter operation with full schema coverage and annotations, but lacks error handling context (e.g., card not found, invalid labels) and implications of openWorldHint for label resolution.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

  • Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema has 100% description coverage ('Card ID or title', 'Label names or IDs'). Description reinforces the array cardinality ('one or more') but adds no syntax details, format constraints, or examples beyond the schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

  • Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clear verb ('Add'), resource ('labels'), and target ('existing card'). However, it does not explicitly differentiate from sibling 'create_label' (which creates label definitions vs. attaching them to cards), potentially causing confusion.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

  • Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance on when to use this versus 'remove_label_from_card', 'set_card_color', or prerequisite steps like ensuring labels exist first. No mention of idempotent behavior (though covered by annotations).

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
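    The pattern behind these Behavior scores recurs across the server: annotations declare machine-readable hints, and the prose should cover what they cannot, such as how unknown labels are resolved. A sketch of that division of labor, assuming the MCP annotation field names used throughout this report; the behavior note about unknown labels is invented for illustration:

```python
# Hypothetical tool definition showing annotations and description
# complementing each other rather than repeating the same facts.
tool = {
    "name": "add_labels_to_card",
    "description": (
        "Add one or more labels to an existing card. "
        # The sentence below is a hypothetical behavior disclosure --
        # exactly the detail the annotations cannot express:
        "Labels that do not yet exist are rejected; create them first "
        "with create_label."
    ),
    "annotations": {
        "readOnlyHint": False,     # mutates the card
        "destructiveHint": False,  # never removes data
        "idempotentHint": True,    # re-adding a label is a no-op
        "openWorldHint": True,     # talks to an external board service
    },
}

# The prose covers label-resolution behavior; the hints cover safety.
assert tool["annotations"]["idempotentHint"] is True
assert "create_label" in tool["description"]
```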

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    The description adds that checklist items are included in the duplication, supplementing the annotations. However, it does not address other card attributes (labels, comments, assignees), the implications of `openWorldHint: true`, or whether the original card is modified. No contradiction with annotations.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

  • Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The single sentence is efficiently front-loaded with the primary action and object, and the qualifying phrase ('including its checklist items') adds necessary behavioral detail without verbosity. Zero wasted words.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

  • Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    While the description covers core duplication behavior and specifically mentions checklist items, it leaves significant ambiguity regarding other card properties (labels, comments, attachments). Given the mutation nature and lack of output schema, additional scope clarification would improve completeness.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

  • Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the schema fully documents both parameters including partial match support for IDs and default naming conventions. The description implies the semantic relationship (id = source, new_title = destination) but adds no syntax details beyond the schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

  • Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description states a specific verb ('Duplicate') and resource ('card'), and adds scope detail ('including its checklist items') that hints at behavioral distinctiveness from `create_card`. However, it does not explicitly differentiate from siblings or clarify if labels, comments, or attachments are also duplicated.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

  • Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to select this tool versus alternatives like `create_card` or `move_card`, nor does it mention prerequisites such as requiring an existing card ID to duplicate.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations declare readOnlyHint=true (safe read) and openWorldHint=true. The description adds valuable behavioral context that results are 'grouped by status' (organization pattern), but omits return format details, pagination behavior, or what happens when no active board is set.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

  • Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Single sentence of 9 words with front-loaded action verb. No redundancy or waste. Every word earns its place by conveying core purpose, filtering scope, and result organization.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

  • Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for a simple 2-parameter read operation with good schema coverage, but limited by the absence of an output schema. The description mentions grouping but does not describe the data structure returned, nor does it acknowledge the default board behavior mentioned in the schema.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

  • Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the baseline is appropriately 3. The description implies filtering (assigned to current user) but does not add syntax details, accepted formats, or usage notes for the optional board and include_done parameters beyond what the schema provides.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

  • Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly identifies the action (Get), resource (cards), and scope (assigned to current user), while distinguishing from siblings via the 'current user' filter and 'grouped by status' organization. However, it doesn't explicitly differentiate when to use this versus list_cards or search_cards.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

  • Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides no guidance on when to use this tool versus alternatives like list_cards, search_cards, or get_card_details. Does not clarify if this should be used before or after set_active_board, or when include_done should be true.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations declare readOnlyHint=true and openWorldHint=true. The description adds scope context ('across all boards') aligning with openWorldHint, but doesn't disclose search behavior details like case sensitivity, partial matching logic, or result ordering.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

  • Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Single sentence of seven words with zero redundancy. Information is front-loaded and every word earns its place describing what, how, and where the search operates.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

  • Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for a simple read-only search tool with a complete input schema; however, with no output schema, the description says nothing about return values or result structure. It could specify which card fields are returned.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

  • Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100% with complete descriptions for both parameters. The description adds minimal semantic value beyond the schema, briefly noting the keyword-based nature of the search without detailing syntax or query construction.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

  • Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clear verb ('Search') and resource ('cards') with explicit scope ('across all boards'). The scope distinguishes it from board-specific tools like list_cards, though it doesn't explicitly name sibling alternatives.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

  • Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No explicit guidance on when to use this versus list_cards, get_my_cards, or get_card_details. The phrase 'across all boards' implies global scope but doesn't constitute usage guidelines or prerequisites.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    The description aligns with annotations (readOnlyHint=false, idempotentHint=true) by implying a mutating but safe operation, and the explicit mention of 'unassign' clarifies scope. However, it adds no details about side effects, notification triggers, or failure modes beyond what the structured annotations provide.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

  • Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The single sentence efficiently conveys both supported operations without redundancy, placing the primary actions first and maintaining appropriate length for the tool's scope. Every word earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

  • Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the comprehensive input schema and helpful annotations (idempotent, non-destructive), the description adequately covers the tool's dual purpose without needing to specify return values. It appropriately relies on the schema's required field indicators to show that email is optional for unassignment.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

  • Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the schema fully documents all parameters including the partial match support for IDs and the toggle logic for unassign. The description adds no parameter-specific syntax or relationships beyond what the schema already provides, meeting the baseline for high-coverage schemas.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

  • Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description 'Assign or unassign a user to a card' uses specific verbs (assign/unassign) and identifies the resource (card). It distinguishes from siblings like add_comment or archive_card by specifying the assignment operation, though it does not explicitly differentiate from update_card which might also modify card properties.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

  • Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    While the description indicates the tool handles both assignment and unassignment, it provides no guidance on when to use this tool versus alternatives like update_card, nor does it explain prerequisites such as whether the user must already be a board member or the relationship between the email and unassign parameters.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations declare readOnlyHint=false and destructiveHint=false, confirming this is a safe write operation. The description adds the active-board default behavior but omits explanation of the openWorldHint=true implications (external side effects) or the idempotentHint=false consequences (duplicate creation on retries).

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

  • Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences, front-loaded with the primary action. Efficient length, though the second sentence repeats information already present in the schema parameter description, slightly wasting space.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

  • Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for a simple 3-parameter creation tool with good schema coverage and safety annotations. However, it omits behavioral details like uniqueness constraints, what constitutes an 'active board,' or the nature of the created resource's identifier return value (though no output schema exists to require this).

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

  • Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema has 100% coverage with all parameters documented. The description's second sentence ('Uses active board if not specified') is redundant with the schema's board parameter description ('uses active board if omitted'), adding no new semantic value.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

  • Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific action (create) and resource (stage), though uses 'stage' instead of 'column' which conflicts with the tool name and sibling tools like list_columns/delete_column without clarifying they are synonyms. Lacks explicit differentiation from related creation tools like create_board or create_card.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

  • Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides guidance on the optional 'board' parameter fallback ('Uses active board if not specified'), but lacks explicit when-to-use guidance, prerequisites, or alternatives compared to siblings like create_label or create_card.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
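    The redundancy flagged under Parameters (a description echoing the schema's own text) is mechanical enough to lint for. A rough sketch, assuming plain-dict tool definitions; the strings and the substring heuristic are illustrative, not the scoring method used in this report:

```python
def redundant_phrases(description: str, schema: dict) -> list[str]:
    """Flag schema parameter descriptions echoed verbatim in the tool description."""
    desc = description.lower()
    hits = []
    for name, prop in schema.get("properties", {}).items():
        phrase = prop.get("description", "").lower().rstrip(".")
        if phrase and phrase in desc:
            hits.append(name)
    return hits

# Hypothetical create_column-style definition with the overlap aligned
# for illustration (the real server's two phrasings differ slightly).
tool_desc = "Create a new stage. Uses active board if omitted."
schema = {"properties": {"board": {"description": "uses active board if omitted"}}}

assert redundant_phrases(tool_desc, schema) == ["board"]
```

    A hit means the sentence can be cut from the description, reclaiming tokens for behavior or usage guidance instead.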

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Adds 'Permanently' to reinforce the destructive nature (aligns with destructiveHint=true), but fails to explain idempotency (safe retries) or openWorldHint implications (behavior when card doesn't exist) that annotations declare. Minimal behavioral context beyond the single adverb.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

  • Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Extremely concise at 4 words with no filler. Front-loaded with the most critical qualifier ('Permanently') first. Slightly too minimal; it could accommodate one sentence on alternatives without sacrificing clarity.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

  • Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for a single-parameter destructive tool with complete annotations, but lacks usage context (delete vs archive), error handling details, or confirmation that the operation is irreversible beyond the single word 'Permanently'. No output schema exists, but description doesn't address return values.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

  • Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema coverage, the schema fully documents the 'id' parameter including partial match support. The description mentions no parameters, earning baseline score 3 for not duplicating well-documented schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

  • Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific action (delete) and target resource (card/frame), clearly distinguishing from sibling 'archive_card' by emphasizing permanence. The 'frame' synonym suggests coverage of related concepts without ambiguity.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

  • Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides no guidance on when to use permanent deletion versus the sibling 'archive_card' (archiving), nor mentions prerequisites like confirmation requirements or active board selection. Zero explicit when/not-when guidance.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
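    For destructive tools the when-to-use sentence matters most: the deciding condition between delete_card and archive_card belongs in the description itself. A hypothetical rewrite; only the tool names come from the server:

```python
# Hypothetical improved description for delete_card.
# The wording is illustrative; the tool names are real.
description = (
    "Permanently delete a card/frame. This cannot be undone. "
    "Prefer archive_card when the card may be needed later; "
    "use delete_card only for cards created in error."
)

# The description names the safer alternative and the deciding condition.
assert "archive_card" in description
assert "cannot be undone" in description
```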

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations declare readOnlyHint=true and openWorldHint=true, covering safety and external data access. The description adds the temporal scope ('past N days'), but fails to specify what constitutes 'activity' (card moves? comments? creation?) or the aggregation method. With annotations covering safety, the lack of behavioral detail is acceptable but not exemplary.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

  • Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence with the action verb front-loaded. While appropriately sized at 11 words, it sacrifices valuable detail (what activity is tracked, output format) for brevity. No redundancy or filler present.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

  • Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given one optional parameter with 100% schema coverage and read-only annotations, the description is minimally complete. However, for a reporting tool with no output schema, the vague term 'activity' and the missing output-structure description leave gaps in understanding what data is returned.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

  • Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100% with the 'days' parameter fully documented as 'How many days to look back (default 7)'. The description references 'past N days' which aligns with the parameter, but adds no additional semantic guidance (e.g., acceptable ranges, format details). Baseline 3 is appropriate given complete schema coverage.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

  • Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool retrieves ('Shows') a summary of activity across all boards for a time range. It distinguishes itself from siblings like get_board_overview (single board) and get_card_details (individual items) by specifying 'across all boards' and 'past N days', though it could more explicitly contrast with other reporting tools.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

  • Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description implies usage through 'across all boards' (suggesting multi-board reporting needs), but lacks explicit guidance on when to use this versus get_board_overview or other reporting tools. No prerequisites or alternatives are named.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations declare readOnlyHint and openWorldHint behavior; the description adds 'all' to indicate unfiltered scope. However, it omits pagination behavior, rate limits, or what constitutes a 'project/production' in this context. Minimal added value beyond annotations.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

  • Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Single efficient sentence with no redundancy. Given zero parameters and simple read-only behavior, this length is appropriate and front-loaded.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

  • Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Lacks output schema coverage; description should ideally characterize the returned project/production objects or structure. For a tool accessing an external system (openWorldHint), more context on the Framedeck domain would strengthen completeness.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

  • Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Zero input parameters (empty schema), meeting the baseline score of 4. No parameters require semantic elaboration.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

  • Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clear verb (List) and resource (projects/productions), with 'all' indicating broad scope. Implicitly distinguishes from siblings like get_active_board (specific) and create_board (mutation) through scope, though explicit contrast is absent.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

  • Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance on when to use versus alternatives (e.g., get_active_board for the current board, get_board_overview for details) or prerequisites. Simply states functionality without context.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations cover idempotentHint=true and destructiveHint=false, indicating safe repetition. The description adds minimal behavioral context beyond this, failing to mention that partial title matching is supported (per schema) or what happens when moving to the same stage.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

  • Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Single sentence with zero waste. Front-loaded with verb ('Move') and immediately conveys resource and destination. Appropriate length for the tool's complexity.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

  • Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Covers basic operation but misses opportunity to leverage the rich schema context (partial matching support, name-or-ID lookups) or explain idempotent behavior in domain terms. Adequate for a simple move operation but thin given the flexible identification scheme.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

  • Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Effectively bridges domain terminology ('stage' in description aligns with 'column' parameter name and description). Reinforces that 'card' is the primary resource for the 'id' parameter. With 100% schema coverage, this semantic mapping adds useful clarification.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

  • Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States a specific action (move) and target (card/frame to different stage) clearly. However, it does not explain what a 'frame' is relative to a 'card', nor does it distinguish from siblings like update_card that might also modify card state.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides no guidance on when to use this versus update_card or create_card, no mention of prerequisites (e.g., card must exist), and no warnings about partial name matching behavior that could accidentally match multiple cards.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
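The "use X instead of Y when Z" guidance the rubric asks for can live directly in the tool description. The sketch below uses plain dicts rather than any real MCP SDK, and the improved wording for move_card is invented for illustration; only the tool names come from this server.

```python
# Hedged sketch: a move_card description that embeds explicit usage guidance
# (when to prefer it over update_card) and the partial-match caveat the
# review flags. The wording is an assumption, not the server's actual text.
move_card_tool = {
    "name": "move_card",
    "description": (
        "Move a card/frame to a different stage (column). "
        "Use this only to change a card's workflow stage; "
        "use update_card to edit its title, description, or other fields. "
        "The card must already exist, and a partial title may match "
        "multiple cards, so prefer passing an exact ID."
    ),
    "annotations": {"destructiveHint": False, "idempotentHint": True},
}

def has_usage_guidance(tool: dict) -> bool:
    """Crude check: does the description name a sibling alternative?"""
    text = tool["description"].lower()
    return "use update_card" in text or " instead of " in text

print(has_usage_guidance(move_card_tool))  # True
```

A check like this could run in CI over every registered tool to keep descriptions from regressing to the bare one-liners scored 2/5 above.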

  • Behavior3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Aligns with annotations (non-destructive clear operation matches destructiveHint=false, idempotent nature implied by 'Set'). Adds 'visual categorization' context. However, it fails to mention the partial matching behavior disclosed in the schema parameter description, or open-world implications (what happens if multiple cards match the partial title).

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Single efficient sentence with zero waste. Front-loaded with action verb ('Set or clear'). Every phrase earns its place: operation modes, target, and purpose are all covered without redundancy.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Appropriate for a simple 2-parameter mutation tool. Annotations cover behavioral safety (idempotent, non-destructive, non-readonly) and schema covers parameter details fully. Lacks only error-handling context (e.g., what happens with ambiguous partial matches) to be a 5.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema coverage, baseline is 3. The description reinforces the 'clear' capability mentioned in the color parameter description but adds no significant semantic value beyond the schema. The 'id' parameter supports partial matching per schema, which the description doesn't contextualize.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clear verb ('Set or clear') and resource ('color of a card') with purpose context ('visual categorization'). However, it doesn't distinguish from label-based color coding available via add_labels_to_card, leaving ambiguity about when to use this native color property vs labels.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Implies dual operation modes (set vs clear) but provides no explicit when-to-use guidance, prerequisites, or comparison to sibling tools like add_labels_to_card which may also affect card appearance. No mention of when clearing is appropriate.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    The description adds valuable behavioral context not in annotations: the fallback logic to Idea Pool when no board is specified. However, with idempotentHint=false and no output schema, it fails to disclose that duplicate calls create multiple cards or what the return value contains (e.g., card ID). The openWorldHint=true annotation is also unexplained.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two efficient sentences with zero redundancy. The first establishes the core action; the second immediately clarifies the default board behavior. Every word serves a purpose and the critical information is front-loaded.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the absence of an output schema, the description should explain what the tool returns (e.g., the created card ID or object), but it omits this. While annotations cover safety (destructive/read-only), the openWorldHint implications and idempotency behavior remain undisclosed, leaving gaps for a tool with 7 parameters.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the baseline is 3. The description reinforces the board parameter's fallback behavior ('uses active board... if omitted'), but this largely restates the schema description rather than adding new semantic depth like validation rules or format examples beyond what the schema already provides.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states 'Create a new card/frame' with specific verb and resource, distinguishing from siblings like create_board or create_column. However, it does not explicitly differentiate from create_multiple_cards (batch creation), which is relevant given the sibling tool list.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The phrase 'quick idea capture' implies a use case for the fallback behavior, providing implicit context for when to use the Idea Pool default. However, it lacks explicit guidance on when to use this single-card tool versus create_multiple_cards, or prerequisites like requiring an active board vs. explicit board parameter.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
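Because create_card carries idempotentHint=false, repeated identical calls create duplicate cards. A calling agent can guard against that on its own side; the sketch below is a minimal illustration with assumed names and no real SDK dispatch.

```python
# Hedged sketch: an agent-side guard that skips a repeat of a
# non-idempotent tool call with identical arguments, using the
# idempotentHint annotation the review says the description leaves
# undisclosed. call_tool and its return strings are invented.
import hashlib
import json

seen_calls: set[str] = set()

def call_tool(name: str, args: dict, annotations: dict) -> str:
    """Dedupe non-idempotent calls by hashing (name, args)."""
    key = hashlib.sha256(
        json.dumps([name, args], sort_keys=True).encode()
    ).hexdigest()
    if not annotations.get("idempotentHint", False) and key in seen_calls:
        return "skipped: duplicate non-idempotent call"
    seen_calls.add(key)
    return f"invoked {name}"  # a real agent would dispatch to the server here

anns = {"idempotentHint": False, "openWorldHint": True}
print(call_tool("create_card", {"title": "Intro hook"}, anns))  # invoked create_card
print(call_tool("create_card", {"title": "Intro hook"}, anns))  # skipped: duplicate non-idempotent call
```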

  • Behavior4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations declare readOnlyHint=true (safe read) and openWorldHint=true. The description adds valuable behavioral context beyond annotations by explaining the active board fallback logic, which affects how the tool behaves when board is omitted.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences totaling 13 words. The first establishes scope (listing with filters), the second explains the active board fallback. Every word earns its place with zero redundancy or fluff.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a read-only listing tool with comprehensive schema coverage and safety annotations, the description provides adequate context. The absence of output schema is noted, but the description sufficiently covers the input behavior and filtering capabilities.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100% with all 4 parameters (board, column, priority, limit) fully documented. The description notes 'optional filters' which reinforces the schema, but does not add significant semantic value beyond what the schema already provides, warranting the baseline score of 3.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific action (List) and resource (cards/frames). The 'frames' synonym suggests domain context (likely creative/production). However, it does not explicitly distinguish when to use this versus sibling 'search_cards' or 'get_my_cards', which would be helpful given the overlapping functionality.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides specific guidance on the 'board' parameter fallback behavior ('Uses active board if set'), which helps agents understand when the parameter can be omitted. However, it lacks explicit comparison to sibling tools like 'search_cards' to clarify selection criteria.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Adds valuable context beyond annotations by specifying the voice input use case and the active board fallback behavior. However, it does not address the idempotentHint=false implication (repeated calls create duplicates) or the openWorldHint=true behavior, which are important for a batch creation tool.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three tightly constructed sentences with no waste. The first sentence establishes purpose, the second provides usage context, and the third specifies default behavior. Information is appropriately front-loaded.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    While the schema and annotations cover input parameters and basic safety (non-destructive), the description lacks information about return values or success/failure behavior. Without an output schema, this omission leaves a gap for an agent determining how to handle the creation results.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, providing detailed documentation for all parameters including the nested card objects. The description mentions the board default behavior ('uses active board if board is not specified'), which aligns with but does not significantly extend the schema documentation. Baseline 3 applies.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the specific action ('Create multiple cards/frames'), the resource type (cards/frames), and distinguishes from the singular 'create_card' sibling by emphasizing 'at once' and 'batch' processing. It uses specific verbs and resources without tautology.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides clear context for when to use ('Ideal for batch idea capture via voice'), implying the batch vs. single-use case. However, it does not explicitly name the sibling alternative (create_card) or state when NOT to use this tool (e.g., for single card creation).

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations declare destructiveHint=true and idempotentHint=true. Description adds crucial behavioral context beyond annotations: cards are not deleted but moved to the first remaining stage (cascading behavior). Does not contradict annotations.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences with zero waste. First sentence states the action; second sentence fronts the critical side effect (card movement). Every word earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a destructive operation with 2 parameters and no output schema, the description adequately covers the primary risk (card relocation). Annotations handle safety profile (destructive, idempotent). Missing advanced details like error conditions or auth requirements, but sufficient for the complexity level.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100% with clear descriptions ('Column/stage name or ID', 'Board name or ID'). Description reinforces the stage/column terminology equivalence and implies the board parameter's purpose, but does not add format details, examples, or constraints beyond the schema. Baseline 3 appropriate for high-coverage schemas.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Specific verb 'Delete' paired with resource 'stage/column' and scope 'from a board'. Explicitly distinguishes from siblings like create_column or delete_card by specifying the target entity and cascading effect on cards.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides implied guidance through the warning that 'Cards in this stage will be moved to the first remaining stage,' allowing users to infer when not to use it. However, lacks explicit 'when to use vs alternatives' guidance or prerequisites (e.g., permissions).

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations declare readOnlyHint=true and openWorldHint=true. The description adds valuable behavioral context by specifying exactly what data is retrieved (comments, checklists, attachments, labels) beyond generic 'details', though it could add information about external resource access implied by openWorldHint or cache behavior.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single efficient sentence where every word earns its place. It front-loads the action ('Get full details'), identifies the target ('of a card'), and clarifies scope ('including...') without redundant verbosity.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given this is a simple read operation with one parameter and annotations covering safety (readOnlyHint), the description adequately compensates for the missing output schema by enumerating the specific data fields returned. It is complete enough for an agent to understand the tool's value proposition.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage (the 'id' parameter is fully documented as accepting 'Card ID or title (partial match supported)'), the baseline is 3. The description does not add additional parameter semantics, but none are needed given the complete schema documentation.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses the specific verb 'Get' with the resource 'card' and explicitly scopes the operation as 'full details'. By listing specific included resources (comments, checklists, attachments, labels), it clearly distinguishes this from sibling list operations like list_cards or search_cards, which likely return summary data.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The term 'full details' implies usage when comprehensive card data is needed versus simple listing, but there is no explicit guidance on when to use this versus list_cards, search_cards, or get_my_cards, and no named alternatives or exclusions.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
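The reviews repeatedly flag the missing output schema. Here is a hedged sketch of what a get_card output schema might declare; the field names are guesses extrapolated from the description (comments, checklists, attachments, labels), and the validator is a deliberately tiny required-keys check rather than a full JSON Schema implementation.

```python
# Hedged sketch: a minimal output schema for get_card and a required-keys
# check an agent could apply to the tool result. Field names are assumed
# from the tool description, not taken from the server's actual payload.
GET_CARD_OUTPUT_SCHEMA = {
    "type": "object",
    "required": ["id", "title", "comments", "checklists", "attachments", "labels"],
}

def missing_required(payload: dict, schema: dict) -> list[str]:
    """Return the required keys absent from a tool result."""
    return [k for k in schema["required"] if k not in payload]

sample = {
    "id": "card_123", "title": "Edit pass",
    "comments": [], "checklists": [], "attachments": [], "labels": [],
}
print(missing_required(sample, GET_CARD_OUTPUT_SCHEMA))         # []
print(missing_required({"id": "card_123"}, GET_CARD_OUTPUT_SCHEMA))
```

Publishing even a shallow schema like this would let the rubric's Completeness and Behavior dimensions stop penalizing the description for having to compensate in prose.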

  • Behavior4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations declare openWorldHint=true and readOnlyHint=false (write operation to external system). The description adds valuable behavioral context not in annotations: the side effect that commit info is automatically added as a comment. Deducted one point for not explaining the idempotency implications (idempotentHint=false) or what happens on duplicate calls.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences with zero waste. Front-loaded with the primary action ('Link a git commit to a card'), followed by the secondary effect. Every sentence earns its place in conveying essential function and side effects.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given 100% schema coverage and comprehensive annotations (safety hints, openWorld), the description is appropriately complete. It covers the core function and automatic side effect. Minor gap: could mention error conditions or duplicate handling given idempotentHint=false, but not required for competent usage.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the schema already documents all parameters including partial match support for card_id and SHA format flexibility. The description implies how parameters are used ('adds commit info as a comment') but doesn't add syntax, format constraints, or examples beyond the schema definitions, warranting the baseline score of 3.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses specific verb 'Link' with clear resources ('git commit' to 'card'), and distinguishes itself from the sibling tool 'add_comment' by specifying it 'Automatically adds commit info as a comment'—indicating this is for git integration specifically rather than manual commenting.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description implies usage through the automatic comment behavior, suggesting when to use this tool (linking git metadata), but provides no explicit when/when-not guidance or alternative selection criteria versus manual comment addition or other card update tools.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations cover idempotency and safety (idempotentHint=true, destructiveHint=false). The description adds crucial behavioral context that this sets implicit state affecting subsequent operations, which is essential for an agent to understand side effects. No contradiction with annotations.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences with zero waste. First sentence states action, second sentence explains scope/side effects. Front-loaded with the essential verb and object. No redundant phrases.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Appropriate for a simple 1-parameter state-setting tool with good annotations. Explains the immediate action and side effects clearly. Could improve by mentioning persistence scope (session vs permanent) or validation behavior for invalid boards.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema has 100% description coverage for the single 'board' parameter. The description mentions 'production context' which semantically aligns with the parameter accepting production names, but mostly mirrors the schema's 'Board/production name or ID' description. Baseline 3 appropriate for high schema coverage.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Description uses specific verb 'Set' with clear resource 'active board/production context'. The mention of 'production context' distinguishes this from create_board and links to graduate_to_production sibling. However, it doesn't explicitly contrast with get_active_board or when to prefer explicit board parameters.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The second sentence explicitly states the tool's effect on other tools ('other tools will use this board by default'), which implicitly guides when to use it (to establish session context) and indicates the alternative (specifying board explicitly in other calls). Lacks explicit 'when not to use' guidance.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Adds significant value beyond annotations: while annotations declare destructiveHint=true and idempotentHint=true, the description clarifies that this is a 'soft delete' and explicitly states recoverability ('can be restored'), which moderates the 'destructive' annotation and informs the agent about the safety/reversibility of the operation.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Perfectly economical at one sentence with a high-value parenthetical. The action ('Archive a card') is front-loaded, and every word (especially 'soft delete' and 'restored') conveys essential behavioral information. No waste.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Appropriately complete given the tool's simplicity (single parameter) and the richness of structured metadata. The combination of annotations (idempotency, destructiveness) and description (recoverability) fully covers the behavioral contract, though it could optionally mention what happens if the ID is not found.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage (the 'id' parameter is fully documented as accepting 'Card ID or title (partial match supported)'), the schema carries the semantic burden. The description does not add additional parameter guidance, meeting the baseline expectation for high-coverage schemas.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Excellent specificity: 'Archive' is a precise verb, 'card' is the resource, and the parenthetical '(soft delete — can be restored in the UI)' crucially distinguishes this from the sibling 'delete_card' tool by clarifying the recoverable nature of the operation.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides clear behavioral context that the operation is reversible ('can be restored in the UI'), which implicitly guides usage toward temporary removal scenarios. However, it does not explicitly name 'delete_card' as the alternative for permanent removal or state 'when to use' vs 'when to avoid'.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true; the description appropriately adds context about what 'active' means ('currently set as the active context') without contradicting the safety hints. It could say more about the response format, given that no output schema exists.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Single 11-word sentence with zero waste. Front-loaded with action verb and immediately qualified with specific scope ('active context'). Every word earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriate for a low-complexity tool with 0 parameters and clear annotations. The description identifies what's returned (the active board/production) even without an output schema, though it could specify the format or its relationship to board-scoped operations.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Zero parameters present, triggering baseline score of 4. No parameters require semantic elaboration.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

  • Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clear specific verb ('Show') and resource ('board/production'), with explicit scope qualification ('currently set as the active context') that distinguishes it from siblings like list_boards, get_board_overview, and set_active_board.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

  • Usage Guidelines: 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Implies usage through 'active context' concept but lacks explicit when-to-use guidance or contrasts with alternatives like list_boards. No mention of relationship to set_active_board or operations requiring an active board context.

  • Behavior: 4/5

    Annotations indicate read-only and potential external access (openWorldHint). Description adds valuable content disclosure beyond annotations by specifying exactly what data structures are returned (stages, counts, overdue items) to compensate for missing output schema. No contradiction with readOnlyHint.

  • Conciseness: 5/5

    Two efficient clauses with zero waste. Front-loaded with the core action and return value, followed by the parameter's default behavior. Every word earns its place: 'quick overview,' 'stages,' and 'overdue items' each convey specific semantics.

  • Completeness: 4/5

    Given the lack of output schema, the description adequately discloses the return structure (stages, counts, overdue status). Tool is simple (1 optional param) with clear annotations; description sufficiently covers the behavioral contract without redundancy.

  • Parameters: 3/5

    Schema coverage is 100%, so baseline applies. The description's 'Uses active board if not specified' largely restates the schema's '(uses active board if omitted)' without adding new constraints, format details, or examples beyond the structured description.

  • Purpose: 5/5

    Description states specific action ('Get a quick overview'), resource ('board'), and detailed scope ('stages with card counts, overdue items'). Clearly distinguishes from sibling tools like get_card_details (specific card) or list_cards (flat list) by emphasizing summary/aggregated nature.

  • Usage Guidelines: 4/5

    Provides clear context that this returns a high-level snapshot ('quick overview') rather than detailed data. Explicitly documents the default behavior ('Uses active board if not specified'), guiding invocation when the optional parameter is omitted. Lacks explicit naming of alternatives for detailed queries.

  • Behavior: 4/5

    Annotations declare non-destructive write (readOnly:false, destructive:false, openWorld:true). Description adds valuable behavioral specifics: the side effect of moving the original card to 'In Production' stage and the auto-archive lifecycle detail ('auto-archived after the configured days'), which is critical operational context.

  • Conciseness: 5/5

    Two tightly constructed sentences with zero redundancy. Front-loaded with the main action (Promote), followed by critical side effects and defaults. Every clause earns its place.

  • Completeness: 4/5

    Appropriately complete for a workflow transition tool. Covers the conceptual model (Idea Pool vs Production), default stage values, and lifecycle side effects (archiving). No output schema exists, but the description sufficiently explains the transformation behavior.

  • Parameters: 4/5

    Schema has 100% coverage, but description adds domain context by mapping the 'stages' parameter to the concrete default production pipeline shown in parentheses: '(Idea → Scripting → Filming → Editing → Published)'. This enriches the semantic meaning beyond the schema's generic 'Custom stage names'.

  • Purpose: 5/5

    Specific verb 'Promote' targeting a clear resource ('idea from the Idea Pool') into a specific destination ('dedicated production'). Distinguishes from generic siblings like create_card or move_card by referencing the specific Idea Pool → Production workflow and naming the full stage pipeline.

  • Usage Guidelines: 3/5

    Describes the outcome (moving card to 'In Production', auto-archiving), which implies the use case (graduating approved ideas), but lacks explicit 'when-not-to-use' guidance or named alternatives (e.g., distinguishing from simple move_card or duplicate_card).

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you first need to add a glama.json file to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
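
The blended formula above can be sketched in a few lines of Python. This is an illustrative reconstruction from the weights stated in this section; the dimension keys and function names are assumptions, not Glama's actual implementation.

```python
# Per-tool TDQS dimension weights, as described above (sum to 1.0).
DIM_WEIGHTS = {
    "purpose": 0.25,       # Purpose Clarity
    "usage": 0.20,         # Usage Guidelines
    "behavior": 0.20,      # Behavioral Transparency
    "parameters": 0.15,    # Parameter Semantics
    "conciseness": 0.10,   # Conciseness & Structure
    "completeness": 0.10,  # Contextual Completeness
}

def tool_tdqs(dims: dict[str, float]) -> float:
    """Weighted 1-5 score for a single tool's description."""
    return sum(DIM_WEIGHTS[k] * v for k, v in dims.items())

def overall_score(tool_scores: list[float], coherence: float) -> float:
    """Combine per-tool TDQS values with the server coherence score."""
    mean_tdqs = sum(tool_scores) / len(tool_scores)
    # A single weak tool drags the definition quality down via min().
    definition_quality = 0.6 * mean_tdqs + 0.4 * min(tool_scores)
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score: float) -> str:
    """Map an overall score to a letter tier."""
    for grade, cutoff in [("A", 3.5), ("B", 3.0), ("C", 2.0), ("D", 1.0)]:
        if score >= cutoff:
            return grade
    return "F"
```

For example, two tools scoring 4.0 and 3.0 with a coherence of 4.0 yield a definition quality of 0.6 × 3.5 + 0.4 × 3.0 = 3.3 and an overall score of 0.7 × 3.3 + 0.3 × 4.0 = 3.51, which lands in tier A.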

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Lukaris/framedeck-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.