Glama

Server Details

Universal task protocol — manage projects, tasks, workers, QR codes, and reports.

Status: Healthy
Transport: Streamable HTTP
Repository: snowbikemike/tascan-mcp
GitHub Stars: 0

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

32 tools
tascan_add_tasks (Grade: A)

Add one or more tasks to an event (task list). Supports bulk creation. IMPORTANT: Set response_type correctly — use "text" for info collection (names, phones, emails, notes), "photo" for visual verification (inspections, serial numbers, damage checks), "checkbox" only for simple confirmations. NOTE: To dispatch tasks to the Claude Code agent running on Mike's PC, use tascan_dispatch_to_agent instead — it routes directly to the agent's inbox with zero configuration needed.

Parameters (JSON Schema)
- tasks (required): Array of tasks to create
- list_id (required): Task list (event) ID
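
A minimal sketch of a bulk tascan_add_tasks payload, assuming a per-task object shape. The tool name, top-level parameters (tasks, list_id), and the response_type values come from the schema and description above; the per-task "title" field and all ID values are hypothetical.

```python
# Hypothetical bulk payload for tascan_add_tasks. "list_id" and "tasks"
# are the documented parameters; "title" and the ID values are invented.
payload = {
    "list_id": "evt_123",  # hypothetical event (task list) ID
    "tasks": [
        # "text" for info collection (names, phones, emails, notes)
        {"title": "Collect contact phone", "response_type": "text"},
        # "photo" for visual verification (serial numbers, damage checks)
        {"title": "Photograph serial plate", "response_type": "photo"},
        # "checkbox" only for simple confirmations
        {"title": "Confirm dock door closed", "response_type": "checkbox"},
    ],
}

# Guard against the mistake the description warns about: picking the
# wrong response_type for the kind of answer the task needs.
VALID_RESPONSE_TYPES = {"text", "photo", "checkbox"}
assert all(t["response_type"] in VALID_RESPONSE_TYPES for t in payload["tasks"])
```
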
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds behavioral context beyond annotations: mentions bulk creation capability, maps response_type values to concrete use cases (names/phones vs inspections vs confirmations), and clarifies task creation semantics. Annotations cover basic safety (readOnly/destructive) so description doesn't need to carry that burden.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with purpose front-loaded, followed by critical constraints (IMPORTANT/NOTE sections). Every sentence delivers value: bulk capability, detailed response_type mapping, and explicit sibling routing instruction. No filler text despite length.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete for a task creation tool: covers what it creates, how to configure key parameters (response_type), volume capabilities (bulk), and critical routing distinction from sibling tascan_dispatch_to_agent. With 100% schema coverage and good annotations, no additional context needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% but description adds crucial usage semantics for response_type parameter—mapping abstract enum values to concrete scenarios ('text' for emails/notes, 'photo' for serial numbers/damage checks). This helps agents select correct enum values beyond raw schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Explicitly states 'Add one or more tasks to an event (task list)' with specific verb, resource, and scope clarification that event equals task list. Also notes 'Supports bulk creation' which distinguishes volume expectations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit alternative with 'NOTE: To dispatch tasks to the Claude Code agent running on Mike's PC, use tascan_dispatch_to_agent instead'. Also gives clear when-to-use guidance for response_type values ('text' for info collection, 'photo' for verification, etc.).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tascan_analyze_issue (Grade: A, Idempotent)

Step 1 of the Closed-Loop Autonomous Operations Protocol. Retrieves full issue context including worker info, message thread, project history, and recent similar issues. Use this data to reason about the root cause and generate a remediation plan. Also supports server-side AI analysis via POST (calls Anthropic API directly).

Parameters (JSON Schema)
- issue_id (required): Issue ID to analyze
- server_side_ai (optional): If true, the server calls Anthropic API directly for AI analysis (default: false — returns raw data for MCP client to analyze)
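
The two modes selected by the server_side_ai flag can be sketched as argument dicts. Parameter names come from the schema above; the issue ID value is invented.

```python
# Hypothetical argument sets for tascan_analyze_issue; "iss_42" is invented.
client_side = {"issue_id": "iss_42"}  # default: raw context for the MCP client to analyze
server_side = {"issue_id": "iss_42", "server_side_ai": True}  # server calls the Anthropic API

# server_side_ai defaults to False per the schema
assert client_side.get("server_side_ai", False) is False
```
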
Behavior: 4/5

Annotations cover idempotency, destructiveness, and open-world hints. The description adds valuable behavioral context: it discloses the external Anthropic API dependency for server-side mode and details the specific data sources retrieved (e.g., message threads, project history) that the agent needs to plan reasoning steps.

Conciseness: 4/5

Three well-structured sentences with no waste: establishing position (Step 1), detailing retrieval scope and usage intent, and explaining the AI execution modes. Appropriately sized for the tool's complexity.

Completeness: 4/5

Despite lacking a formal output schema, the description effectively documents the return value by enumerating the specific context types retrieved (worker info, threads, history, similar issues). Also covers the external API behavior. Sufficient for an agent to understand what data will be available for reasoning.

Parameters: 4/5

Schema coverage is 100%, establishing a baseline of 3. The description adds significant meaning beyond the schema by explaining the architectural choice embedded in server_side_ai: whether to return raw data for local MCP client analysis vs invoking server-side AI inference, including the specific provider (Anthropic).

Purpose: 4/5

The description clearly states the tool 'Retrieves full issue context' including specific data types (worker info, message thread, project history, similar issues). The 'Step 1' framing helps position it within a workflow, though it doesn't explicitly differentiate from siblings like tascan_list_issues or tascan_recommend_fix.

Usage Guidelines: 4/5

Provides clear context via the 'Step 1' protocol framing and explains what to do with the output ('reason about the root cause'). Explicitly documents the two operational modes (client-side analysis vs server-side AI via POST), guiding when to use each. Lacks explicit 'when not to use' or named sibling alternatives.

tascan_apply_template (Grade: B)

Apply a pre-built template to a task list, adding all template tasks

Parameters (JSON Schema)
- list_id (required): Task list (event) ID
- template_slug (required): Template slug (e.g. "conference-load-in", "warehouse-receiving")
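
A hedged sketch of tascan_apply_template arguments. The slug value is one of the examples from the schema; the list ID and the hyphenated-lowercase slug pattern are assumptions.

```python
import re

# Hypothetical arguments; "evt_123" is invented, the slug is a schema example.
args = {
    "list_id": "evt_123",
    "template_slug": "conference-load-in",  # another schema example: "warehouse-receiving"
}

# Assumed slug shape, inferred from the examples: lowercase words joined by hyphens.
assert re.fullmatch(r"[a-z0-9]+(?:-[a-z0-9]+)*", args["template_slug"])
```
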
Behavior: 3/5

The description adds that tasks will be 'added' (created), complementing the annotations which indicate this is a write operation (readOnlyHint: false). However, it fails to explain the implications of idempotentHint: false—that multiple invocations may create duplicate tasks—or whether the operation is atomic.

Conciseness: 5/5

A single efficient sentence with zero filler. It front-loads the action and captures the essence of the operation without redundancy.

Completeness: 3/5

While the core operation is described, the description lacks critical behavioral context for a non-idempotent mutation tool—specifically, whether duplicates are created on re-application. Given the lack of output schema, the description should carry more behavioral detail.

Parameters: 3/5

With 100% schema description coverage, the baseline is met. The description references 'task list' and 'template' which map to the parameters, but adds no additional semantic context, examples, or validation rules beyond what the schema already provides.

Purpose: 4/5

The description states a specific verb ('Apply'), resource ('pre-built template'), target ('task list'), and effect ('adding all template tasks'). It implicitly distinguishes from tascan_add_tasks (individual tasks) and tascan_list_templates (viewing only), though it doesn't explicitly name siblings.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives like tascan_add_tasks for individual task creation, nor does it mention prerequisites such as requiring an existing task list (list_id) or knowing the template slug.

tascan_auto_resolve (Grade: B)

FULL Closed-Loop Autonomous Operations Protocol in one call. Server-side AI analyzes the issue, generates remediation tasks, creates a task list, and dispatches to the worker — all without human intervention. This executes Patent Claim 7: autonomous operations from issue detection through physical-world instruction delivery.

Parameters (JSON Schema)
- issue_id (required): Issue ID to auto-resolve
Behavior: 4/5

Annotations indicate openWorld=true and readOnly=false; the description adds valuable behavioral context by disclosing the multi-step server-side process (analysis → task generation → dispatch) and 'physical-world instruction delivery,' elaborating on the external effects. However, it omits failure mode details (e.g., whether partial failures leave orphaned tasks) despite idempotentHint=false suggesting non-repeatability concerns.

Conciseness: 2/5

The patent claim reference ('This executes Patent Claim 7...') is a complete waste of space for an AI agent. The first sentence is buzzword-heavy ('FULL Closed-Loop...') when a simple 'Automatically resolves issues end-to-end' would suffice. Structure is poor: front-loaded with marketing language rather than the actionable description in sentence two.

Completeness: 3/5

Given the single simple parameter but complex multi-step operation (4+ logical stages), the description adequately explains the functional flow but lacks critical operational details: success/failure indicators, whether the operation is atomic, or what happens if dispatch fails after task creation. No output schema exists to compensate for these gaps.

Parameters: 3/5

Schema coverage is 100% with the single parameter 'issue_id' fully documented. The description provides no additional parameter constraints, accepted formats, or validation rules beyond the schema, warranting the baseline score of 3 for well-documented schemas.

Purpose: 4/5

The description clearly states the tool performs server-side AI analysis, generates remediation tasks, creates a task list, and dispatches to workers. However, it buries this clarity beneath buzzwords ('FULL Closed-Loop Autonomous Operations Protocol') and a patent reference that add no value for tool selection. It implies end-to-end automation but doesn't explicitly distinguish from siblings like tascan_analyze_issue that presumably perform partial steps.

Usage Guidelines: 3/5

The phrase 'all without human intervention' provides implicit guidance for autonomous scenarios, suggesting when to use this tool versus manual step-by-step alternatives. However, it lacks explicit 'when to use/when not to use' guidance and doesn't name sibling alternatives (e.g., tascan_analyze_issue, tascan_add_tasks) that require human oversight or multiple calls.

tascan_complete_task (Grade: A)

Complete a task on behalf of a worker. Inserts a completion record and timer event. Use this to simulate or record task completions via the API.

Parameters (JSON Schema)
- notes (optional): Optional completion notes
- task_id (required): Task ID to complete
- worker_id (required): Worker ID performing the completion
- response_value (optional): Response value (for text/number/choice tasks)
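
A minimal sketch of a tascan_complete_task call. Parameter names come from the schema above; every value is invented, and response_value applies only to text/number/choice tasks.

```python
# Hypothetical arguments for tascan_complete_task; all values are invented.
completion = {
    "task_id": "task_789",               # required
    "worker_id": "wkr_7",                # required: worker the completion is recorded for
    "response_value": "SN-00412",        # optional: e.g. the answer to a text task
    "notes": "Plate was hard to read",   # optional completion notes
}

# Both required fields must be present before calling the tool.
assert {"task_id", "worker_id"} <= completion.keys()
```
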
Behavior: 4/5

Annotations indicate non-readonly/non-destructive. Description adds valuable behavioral specifics: 'Inserts a completion record and timer event' discloses side effects, and 'simulate' indicates this may create synthetic/test data. Does not mention idempotency constraints (idempotentHint=false) but this is adequately covered by annotations.

Conciseness: 5/5

Three sentences, each with a distinct purpose: action definition, side effect disclosure, and usage context. No redundancy or filler. Front-loaded with the core action.

Completeness: 4/5

No output schema exists, but the description adequately covers the mutation intent and side effects (record insertion, timer event) for agent use. Could improve by noting error conditions (e.g., already completed tasks) or return value structure.

Parameters: 3/5

Schema coverage is 100% (task_id, worker_id, notes, response_value all documented), establishing baseline 3. Description adds semantic context for worker_id ('on behalf of a worker') implying a proxy/delegation pattern, but does not elaborate on response_value semantics beyond the schema.

Purpose: 5/5

The description uses a specific verb ('Complete') with a clear resource ('task') and scope ('on behalf of a worker'). It distinguishes from siblings like update_task or delete_task by emphasizing the completion semantics and proxy pattern.

Usage Guidelines: 4/5

Provides explicit usage context ('Use this to simulate or record task completions via the API') indicating when to prefer this over UI-based completion. However, it does not explicitly contrast with update_task or clarify when completion vs modification is appropriate.

tascan_create_event (Grade: A)

Create a new event (task list) within a project. Supports team_mode (shared completions) and multi_instance (each worker gets isolated copy — great for surveys, onboarding, info collection). team_mode and multi_instance cannot both be true.

Parameters (JSON Schema)
- name (required): Event name
- team_mode (optional): Team mode — shared completions
- project_id (required): Project ID
- description (optional): Event description
- multi_instance (optional): Multi-instance — each worker gets isolated copy
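
The mutual-exclusivity rule in the description can be sketched as a client-side guard. Field names come from the schema above; the guard function and the ID values are our own.

```python
# Client-side check for the documented constraint: team_mode and
# multi_instance cannot both be true on a tascan_create_event call.
def validate_event_args(args: dict) -> dict:
    if args.get("team_mode") and args.get("multi_instance"):
        raise ValueError("team_mode and multi_instance cannot both be true")
    return args

# A multi-instance event: each worker gets an isolated copy (surveys, onboarding).
survey = validate_event_args({
    "project_id": "proj_1",   # hypothetical project ID
    "name": "Onboarding survey",
    "multi_instance": True,
})
```

Passing both flags raises ValueError before the call ever reaches the server.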
Behavior: 4/5

Annotations establish a mutation/non-destructive profile; the description adds crucial business logic: the mutual exclusivity constraint and the semantic meaning of team_mode (shared completions) vs multi_instance (isolated copies) not fully captured in schema descriptions.

Conciseness: 5/5

Three efficient sentences: purpose statement, feature explanations with parenthetical clarification, and constraint warning. Zero redundancy, logically ordered with the critical constraint at the end.

Completeness: 4/5

Complete for a creation tool with behavioral complexity; covers the mutual exclusivity constraint essential for correct invocation. No output schema exists, but the description adequately covers input semantics given strong schema coverage.

Parameters: 4/5

With 100% schema coverage (baseline 3), the description adds significant value: expanded use-case context for multi_instance and an explicit mutual exclusivity constraint between the two boolean flags.

Purpose: 5/5

Clear specific verb ('Create') and resource ('event/task list') with parenthetical clarification distinguishing it from siblings like create_project (scope is 'within a project').

Usage Guidelines: 4/5

Documents specific use cases for multi_instance ('surveys, onboarding, info collection') and the critical constraint ('cannot both be true'), though it lacks explicit comparison to sibling tools like update_event.

tascan_create_project (Grade: B)

Create a new TaScan project (top-level container for events)

Parameters (JSON Schema)
- name (required): Project name
- location (optional): Project location / venue
Behavior: 3/5

Annotations already establish this is a non-destructive write operation (readOnlyHint=false, destructiveHint=false). The description adds useful domain context that projects are containers, but fails to disclose behavioral specifics like the non-idempotent nature (calling twice creates duplicate projects) or error conditions.

Conciseness: 5/5

Single sentence, front-loaded with the action verb 'Create', no redundant text, and appropriate length for a simple two-parameter creation tool. Every word earns its place.

Completeness: 4/5

Sufficient for a straightforward creation tool with complete parameter schemas and clear annotations. The description establishes the essential purpose and domain relationship; absence of return value documentation is acceptable given the lack of an output schema.

Parameters: 3/5

With 100% schema description coverage for both 'name' and 'location' parameters, the baseline score applies. The description adds no additional parameter semantics (e.g., explaining location format or name uniqueness requirements), relying entirely on the schema.

Purpose: 4/5

The description provides a specific verb ('Create') and resource ('TaScan project'), and the parenthetical '(top-level container for events)' effectively distinguishes this from siblings like 'tascan_create_event' by clarifying the hierarchical relationship. However, it does not explicitly name alternative tools.

Usage Guidelines: 2/5

While the description implies a domain model where projects contain events, it lacks explicit guidance on when to use this tool versus siblings (e.g., when to create vs. update a project) or any stated prerequisites like uniqueness constraints on the name.

tascan_create_worker (Grade: A)

Create a new worker (taskee) in the organization

Parameters (JSON Schema)
- name (required): Worker name
- email (optional): Email
- phone (optional): Phone number
Behavior: 3/5

Annotations cover the safety profile (readOnlyHint:false, destructiveHint:false). The description adds organizational scope and clarifies 'taskee' terminology, but fails to disclose idempotency behavior (idempotentHint:false implies duplicates possible), validation rules, or return value format despite no output schema.

Conciseness: 4/5

Single sentence, front-loaded with action and object, no redundancy. However, extreme brevity leaves room for one additional sentence covering return value or error behavior without violating conciseness principles.

Completeness: 3/5

Simple 3-parameter flat schema is well-covered by annotations and schema descriptions. However, for a creation tool lacking an output schema, the description should indicate what identifier is returned or how to reference the created worker; absence of this information leaves functional gaps.

Parameters: 3/5

Schema coverage is 100% with clear descriptions for all 3 parameters (name, email, phone). Description adds no parameter-specific guidance, but baseline 3 is appropriate since the schema already documents semantics comprehensively.

Purpose: 5/5

Specific verb 'Create' + resource 'worker' (with parenthetical synonym 'taskee') + scope 'in the organization'. Clearly distinguishes from sibling tascan_update_worker (implied by 'new') and tascan_list_workers.

Usage Guidelines: 3/5

Implies usage for new workers only via 'new worker', distinguishing from updates, but provides no explicit when-to-use guidance regarding sibling tascan_register_agent (domain distinction unclear) or prerequisites like email uniqueness.

tascan_delete_event (Grade: A, Destructive, Idempotent)

Delete an event (task list) and all its tasks and completions. This action is irreversible.

Parameters (JSON Schema)
- list_id (required): Task list (event) ID to delete
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare destructive/idempotent hints; description adds valuable cascading context ('all its tasks and completions') explaining the blast radius beyond the single resource. 'Irreversible' reinforces the destructive annotation with actionable narrative urgency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two high-density sentences with zero waste. First sentence front-loads the action and scope; second delivers critical safety warning. No redundancy with structured fields or annotations.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a simple destructive operation with 100% param coverage and no output schema. Description adequately covers intent, scope of destruction, and irreversibility without needing to explain return values.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. Description does not explicitly discuss the list_id parameter, but the opening clause 'Delete an event (task list)' implicitly aligns the parameter name with the resource type, offering minimal semantic bridge.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb (Delete) + resource (event/task list) with clear scope expansion (and all its tasks and completions). The parenthetical clarification '(task list)' effectively maps the domain term to the API resource, distinguishing from siblings like delete_task which handles individual items.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies scope through 'all its tasks and completions,' suggesting this is for container-level deletion, but provides no explicit when-to-use guidance or named alternatives (e.g., delete_task for individual task removal). Lacks prerequisites or warnings about dependencies.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tascan_delete_project (Grade: A)
Annotations: Destructive, Idempotent

Delete a project and all its events, tasks, and completions. This action is irreversible.

Parameters (JSON Schema)
project_id (required): Project ID to delete

Behavior: 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds critical cascade behavior (deletes events, tasks, completions) beyond the annotations. Confirms the destructive nature with the 'irreversible' narrative. Does not contradict the annotations (destructiveHint=true and idempotentHint=true are consistent with the description).

Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two tightly constructed sentences with zero waste. The first establishes action and scope; the second provides the critical warning. Every word earns its place.

Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Excellent coverage for a high-stakes destructive tool with simple parameters. Captures the cascade behavior essential for user safety. Minor gap: it does not mention the idempotent nature (though that is covered by the annotation) or the return behavior.

Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with the project_id parameter fully documented in the schema itself. The description does not add parameter-specific details, which is acceptable given the high schema coverage and single-parameter simplicity.

Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Specifies the exact action (Delete), resource (project), and scope (all its events, tasks, and completions), clearly distinguishing it from sibling tools like tascan_delete_event or tascan_delete_task, which handle individual items.

Usage Guidelines: 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context on consequences (irreversible) and scope (cascading deletion of children), which implicitly guides usage. Lacks explicit 'when-not' guidance comparing it to tascan_update_project or archiving alternatives.
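
The cascade these assessments praise ('all its events, tasks, and completions') can be pictured with a toy in-memory model. Every dict shape below is hypothetical, invented for illustration, not TaScan's actual data model.

```python
# Toy in-memory model of the delete cascade described above.
# The dict shapes are hypothetical, not TaScan's schema.
projects = {"p1": {"events": ["e1", "e2"]}}
events = {"e1": {"tasks": ["t1"]}, "e2": {"tasks": ["t2", "t3"]}}
tasks = {"t1": {}, "t2": {}, "t3": {}}
completions = {"c1": {"task": "t1"}, "c2": {"task": "t3"}}

def delete_project(project_id: str) -> None:
    """Remove a project and everything beneath it (irreversible)."""
    for event_id in projects.pop(project_id)["events"]:
        for task_id in events.pop(event_id)["tasks"]:
            tasks.pop(task_id)
            # completions hang off tasks, so they go too
            for cid in [c for c, v in completions.items() if v["task"] == task_id]:
                completions.pop(cid)

delete_project("p1")
print(projects, events, tasks, completions)  # all four stores end up empty
```

The nesting makes the blast radius concrete: one project_id can remove records three levels down, which is why the 'irreversible' warning carries real weight.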

tascan_delete_task (Grade: A)
Annotations: Destructive, Idempotent

Delete a specific task and its completions. This action is irreversible.

Parameters (JSON Schema)
task_id (required): Task ID to delete

Behavior: 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations confirm the destructive/idempotent nature; the description adds the critical behavioral detail that deletion includes 'its completions' (associated completion records) and emphasizes irreversibility. No contradictions with the annotations.

Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences. The first establishes scope (task + completions), the second delivers the irreversibility warning. Front-loaded action verb, zero wasted words. Appropriately terse for a high-stakes destructive operation.

Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Sufficiently complete for a single-parameter destructive operation, given strong annotations covering the safety profile. The absence of an output schema is an acceptable gap for an idempotent delete, though an explicit return description would improve it.

Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with task_id fully documented. The description's 'specific task' reinforces single-resource targeting but adds no syntax, format, or sourcing details beyond the schema. A baseline of 3 is appropriate for a high-coverage schema.

Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The specific verb 'Delete' with resource 'task' and scope 'its completions' clearly distinguishes this from siblings like delete_event/delete_project and from update_task or complete_task. Zero ambiguity.

Usage Guidelines: 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides the safety warning 'This action is irreversible', implying cautious use, but lacks explicit guidance on when to use this versus tascan_complete_task or alternatives. No when-not-to-use guidance or prerequisites stated.

tascan_dispatch_instruction (Grade: A)

Step 3 of the Closed-Loop Autonomous Operations Protocol. Dispatches remediation to the worker via MULTI-CHANNEL delivery: (1) issue thread message, (2) in-app notification, (3) progress feed update, (4) SMS if phone on file, (5) optional remediation task list creation. Closes the loop from digital AI analysis to physical worker execution.

Parameters (JSON Schema)
ai_agent (optional): Name of the AI agent dispatching (default: TaScan AI)
issue_id (required): Issue ID this instruction relates to
send_sms (optional): Send SMS to worker (default: true if phone on file)
worker_id (optional): Target worker ID (defaults to the worker who reported the issue)
instruction (required): Clear, actionable instruction for the worker to execute
remediation_tasks (optional): Array of tasks to create as a remediation task list. Each: { title, description, response_type, requires_photo, is_safety_checkpoint, sort_order }
recommendation_summary (optional): One-line summary for the task list description

Behavior: 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=false and openWorldHint=true; the description adds significant value by detailing the five specific delivery channels (thread message, in-app notification, progress feed, SMS, optional task list) and explaining the 'close the loop' concept from digital analysis to physical execution.

Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Three efficient sentences: protocol context, mechanism details, and purpose statement. The 'Step 3' protocol reference is slightly jargon-heavy but acceptable. Zero words wasted describing implementation internals.

Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
A complex tool with 7 parameters, including nested remediation_tasks objects. The description adequately covers the multi-channel behavior and optional task list creation. No output schema exists, but the description sufficiently prepares the agent for side effects given the rich input schema.

Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the parameters are well documented in the schema. The description adds semantic context by referencing 'MULTI-CHANNEL delivery', which ties together the send_sms, worker_id, and remediation_tasks parameters, justifying the baseline score.

Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The specific verb 'Dispatches' with resource 'remediation/instruction' and scope 'MULTI-CHANNEL delivery'. It explicitly targets the 'worker', which distinguishes it from the sibling tool tascan_dispatch_to_agent (likely for AI agents). The 'Step 3' framing clarifies its sequential position in the protocol.

Usage Guidelines: 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implicit context via 'Step 3 of the Closed-Loop Autonomous Operations Protocol' and 'closes the loop from digital AI analysis to physical worker execution', suggesting it follows analysis steps. However, it lacks explicit when-to-use guidance versus the sibling tascan_dispatch_to_agent, and states no prerequisites.
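
A call to this tool might look like the sketch below. The field names come from the parameter table above; the dict-payload framing and every example value are assumptions, not a documented wire format.

```python
# Sketch of a tascan_dispatch_instruction payload, built from the
# parameter table above. Only the field names come from the schema;
# the values and the JSON-dict framing are illustrative assumptions.
payload = {
    "issue_id": "ISS-1042",  # hypothetical issue ID
    "instruction": "Re-torque the anchor bolts on rack B and photograph each one.",
    "worker_id": None,       # leave unset to default to the reporting worker
    "send_sms": True,
    "ai_agent": "TaScan AI",
    "recommendation_summary": "Anchor bolt re-torque after failed inspection",
    "remediation_tasks": [
        {
            "title": "Re-torque anchor bolts",
            "description": "Torque to spec and confirm with a wrench check",
            "response_type": "photo",  # visual verification, per the tascan_add_tasks guidance
            "requires_photo": True,
            "is_safety_checkpoint": True,
            "sort_order": 1,
        },
    ],
}

# Drop optional fields left unset, then check the required ones.
payload = {k: v for k, v in payload.items() if v is not None}
missing = [k for k in ("issue_id", "instruction") if k not in payload]
print(missing)  # → []
```

Building the payload this way makes the required/optional split from the table explicit before anything is sent.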

tascan_dispatch_to_agent (Grade: A)

PREFERRED tool for sending work to an AI agent. Dispatches a task to the agent's inbox — picked up and executed automatically. No list ID needed. Supports prefixes: CODE: SHELL: RESEARCH: WRITE: PLAN: for routing. Use "agent" param to target a specific agent (default: claude-code-local). Use tascan_list_agents to discover available agents.

Parameters (JSON Schema)
task (required): The task description. Prefix with CODE: SHELL: RESEARCH: WRITE: PLAN: for routing, or just plain text.
agent (optional): Agent ID or name to dispatch to (default: claude-code-local). Use tascan_list_agents to see options.
priority (optional): Priority level (default: normal)

Behavior: 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover the safety profile (not read-only, open world, not idempotent). The description adds crucial behavioral context: 'picked up and executed automatically' clarifies the async/inbox mechanism. Missing: what the call returns (a task ID? a confirmation?) and error conditions.

Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Perfectly front-loaded with the 'PREFERRED' status. Zero waste: every sentence delivers unique value (mechanism, constraints, routing prefixes, defaults, sibling reference). Dense but readable.

Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 3-parameter dispatch tool with no output schema, it covers the essential behavioral context (async execution, routing logic). Minor gap: it doesn't describe the return value or dispatch confirmation mechanism, which would help the agent understand the tool's success criteria.

Parameters: 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description adds value by emphasizing the 'routing' semantics of the prefixes (CODE:, SHELL:, etc.) and explicitly stating the default agent (claude-code-local), which helps the agent make correct invocations without guessing.

Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
An excellent specific-verb ('Dispatches') and resource ('agent's inbox') combination. The 'PREFERRED tool' designation immediately establishes hierarchy, and 'No list ID needed' explicitly differentiates it from list-based siblings like tascan_add_tasks.

Usage Guidelines: 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Strong 'PREFERRED' guidance and an explicit reference to the sibling tascan_list_agents for discovery. 'No list ID needed' implies when to use it versus traditional task tools. Minor gap: it doesn't contrast with the sibling tascan_dispatch_instruction or give explicit when-not-to-use scenarios.

tascan_generate_qr (Grade: A)
Annotations: Idempotent

Generate a QR code for a task list (event) that workers can scan to access tasks

Parameters (JSON Schema)
list_id (required): Task list (event) ID

Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover the safety profile (idempotentHint=true, destructiveHint=false). The description adds functional context beyond the annotations by explaining the access mechanism (scanning). However, it fails to disclose the output format (image data, URL, or base64 string), which is critical given that no output schema exists.

Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
A single well-structured sentence with zero filler. The information hierarchy is optimal: action → object → target → purpose. Every clause earns its place by specifying either the resource, the domain entity, or the end-user workflow.

Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately scoped for a low-complexity tool (1 parameter, discrete action). The annotations handle the safety/operational characteristics. Minor gap: it lacks the return-value description expected given the missing output schema, but it is sufficient for tool selection.

Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with a clear parameter description ('Task list (event) ID'). The description uses consistent terminology ('task list (event)'), reinforcing the parameter semantics, but adds no format constraints, validation rules, or examples beyond the schema definition.

Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
States the specific action (Generate), output (QR code), and target resource (task list/event). The parenthetical '(event)' clarifies domain terminology, distinguishing it from other entities like projects or workers in the sibling set. Slightly held back from 5 by ambiguity about whether it creates/returns an image, URL, or token.

Usage Guidelines: 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies the usage context (workers scanning to access tasks), which hints at the mobile/worker-distribution use case. However, it lacks explicit guidance on when to use this versus alternatives like tascan_send_task_email or manual sharing, and states no prerequisites such as needing an existing event.

tascan_get_event (Grade: B)
Annotations: Read-only, Idempotent

Get details of a specific event (task list) including its tasks

Parameters (JSON Schema)
list_id (required): Task list (event) ID

Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true, covering the safety profile. The description adds value by disclosing the return-value scope ('including its tasks'), indicating that the response contains nested task data, a behavioral trait not covered by annotations.

Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence efficiently front-loads the action ('Get details'), target resource ('specific event'), and return scope ('including its tasks'). The parenthetical clarification '(task list)' earns its place by resolving domain ambiguity with zero waste.

Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple single-parameter input, rich annotations covering safety/idempotency, and the description's disclosure that tasks are included in the return, the definition is adequately complete for agent selection. The lack of an output schema is partially mitigated by the description's indication of nested data retrieval.

Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage (list_id fully documented) and only one self-explanatory parameter, the description appropriately relies on the schema. It provides baseline adequacy without redundant elaboration, meeting the threshold for high-coverage schemas.

Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('event'), with the parenthetical '(task list)' providing crucial domain context that distinguishes it from calendar events. The phrase 'including its tasks' clarifies the hierarchical relationship, differentiating it from the sibling get_task (single entity) and implying singular retrieval versus list_events.

Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance is provided on when to use this retrieval tool versus list_events for discovery, nor are prerequisites mentioned (e.g., needing to obtain list_id beforehand from a list operation). Usage is left implicit.
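
The gap noted here (list_id must be obtained beforehand) suggests a two-step pattern. The stub client below sketches it; the method names mirror the tool names, but the return shapes are invented, since the server publishes no output schema.

```python
# Discovery-then-retrieval sketch for tascan_list_events / tascan_get_event.
# The StubClient and all return shapes are hypothetical stand-ins.
class StubClient:
    def list_events(self):
        # discovery call: yields candidate list_ids
        return [{"list_id": "L-7", "name": "Morning checklist"}]

    def get_event(self, list_id):
        # retrieval call: returns the event with its nested tasks
        return {"list_id": list_id, "tasks": [{"task_id": "T-1", "title": "Unlock gate"}]}

client = StubClient()
first = client.list_events()[0]             # step 1: obtain a list_id
event = client.get_event(first["list_id"])  # step 2: fetch nested tasks
print(event["tasks"][0]["title"])  # → Unlock gate
```

This is exactly the prerequisite chain the review says the description leaves implicit.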

tascan_get_project (Grade: C)
Annotations: Read-only, Idempotent

Get details of a specific project

Parameters (JSON Schema)
project_id (required): Project ID

Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already disclose readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds no behavioral context beyond this, failing to mention what 'details' are returned, error handling (e.g., project not found), or caching behavior.

Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The single six-word sentence is efficiently structured and front-loaded, though its extreme brevity leaves significant informational gaps that could have been filled without harming conciseness.

Completeness: 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple read-by-ID operation given the rich annotations and complete input schema, but it lacks any discussion of error cases or output structure that would make it fully complete.

Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for the single project_id parameter ('Project ID'). The description adds no additional semantic information about the parameter's format, constraints, or relationship to other entities, warranting the baseline score.

Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
States a clear verb ('Get') and resource ('details of a specific project'), and the word 'specific' implies single-item retrieval. However, it does not explicitly distinguish itself from the sibling tascan_list_projects or clarify when to use this versus other project retrieval tools.

Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus tascan_list_projects (which returns multiple projects) or other project-related siblings. No prerequisites or conditions are mentioned.

tascan_get_report (Grade: A)
Annotations: Read-only, Idempotent

Get completion report for a task list (event) including task status, completions, workers, and photos

Parameters (JSON Schema)
list_id (required): Task list (event) ID

Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and idempotentHint=true, covering the safety characteristics. The description adds value by disclosing what data the report contains (task status, completions, workers, photos), providing behavioral context about the operation's scope and return payload that the annotations don't cover.

Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
A single efficient sentence front-loaded with the action ('Get completion report'), followed by a parenthetical clarification and content enumeration. No filler words; every phrase describes the resource or its contents.

Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read-only tool with one parameter and good annotations, the description is reasonably complete. It compensates for the missing output schema by enumerating the report contents (task status, workers, photos), though it could explicitly note pagination or formatting constraints.

Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage and only one parameter, the schema fully documents the input requirement. The description aligns with the schema by referencing 'task list (event)' but does not add semantic details beyond the schema's description of list_id.

Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and identifies the resource clearly ('completion report for a task list'). It distinguishes itself from siblings like tascan_get_event or tascan_get_task by emphasizing that it returns aggregated report data (status, completions, workers, photos) rather than individual entity details. However, it could contrast with these siblings explicitly.

Usage Guidelines: 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage through the specific data returned ('completions', 'workers', 'photos'), suggesting it is for reporting on completion status rather than general retrieval. However, it lacks explicit guidance on when to choose this over tascan_get_event or tascan_list_tasks, and on prerequisites like requiring a specific list_id.
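
Because the tool publishes no output schema, a consumer has to assume a shape for the report. The sketch below assumes one plausible JSON layout covering the four advertised fields; every key in it is hypothetical.

```python
# Hypothetical shape for a tascan_get_report payload, covering the
# four advertised fields: task status, completions, workers, photos.
# None of these keys are documented; they are assumptions.
report = {
    "list_id": "L-7",
    "tasks": [
        {"title": "Unlock gate", "status": "complete", "completions": 1},
        {"title": "Check extinguishers", "status": "pending", "completions": 0},
    ],
    "workers": ["W-3"],
    "photos": ["photo-91.jpg"],
}

# Summarize completion progress from the assumed payload.
done = sum(1 for t in report["tasks"] if t["status"] == "complete")
rate = done / len(report["tasks"])
print(f"{done}/{len(report['tasks'])} tasks complete ({rate:.0%})")  # → 1/2 tasks complete (50%)
```

Spelling the assumed shape out like this is the kind of return-value documentation the Completeness assessment says the description could still add.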

tascan_get_task (Grade: B)
Annotations: Read-only, Idempotent

Get details of a specific task including completions

Parameters (JSON Schema)
task_id (required): Task ID

Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare the read-only/idempotent/non-destructive status, so the description's burden is low. It adds value by specifying 'including completions' (indicating what data is returned) but lacks context on error handling (e.g., behavior when the task_id is not found).

Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
A single efficient nine-word sentence front-loaded with the action verb. 'Including completions' justifies the sentence's existence by specifying return content. It could benefit from a second sentence covering error cases or usage context.

Completeness: 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple one-parameter read operation. It mentions 'completions' to hint at the output structure despite the lacking output schema. Missing: error behavior for an invalid task_id and guidance on where a task_id comes from.

Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% description coverage (task_id is self-documenting), so the baseline of 3 applies per the rubric. The description appropriately delegates to the schema and does not attempt to redescribe the obvious parameter.

Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
A clear verb ('Get') and resource ('task') with the specific scope 'including completions' that hints at the depth of retrieval. However, it lacks explicit differentiation from the sibling tascan_list_tasks (when to use a specific lookup versus a search).

Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like tascan_list_tasks. No mention of prerequisites (e.g., needing a task_id from a prior list operation) or error conditions.

tascan_list_agentsA
Read-onlyIdempotent
Inspect

List all registered AI agents with their capabilities, inbox IDs, and status. Like reading input labels on a video matrix — discover which agents are available and what they can do before dispatching work.

ParametersJSON Schema
NameRequiredDescriptionDefault

No parameters

Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already confirm read-only/idempotent status, but description adds valuable context about return structure (specifically listing capabilities, inbox IDs, status) and uses the video matrix metaphor to clarify the discovery pattern. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two well-structured sentences. First delivers core function; second provides metaphor and workflow context without redundancy. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Absence of output schema is partially mitigated by explicit enumeration of return fields (capabilities, inbox IDs, status). Given strong annotations and zero parameters, this provides sufficient context for a simple listing operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has zero parameters, meeting the baseline expectation of 4. Description correctly implies no filtering capabilities by stating 'List all'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb (List) + resource (registered AI agents) + return fields (capabilities, inbox IDs, status). The phrase 'before dispatching work' effectively distinguishes this discovery tool from sibling dispatch/register operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear temporal context ('before dispatching work') establishing when to use it in a workflow. However, it does not explicitly name alternative tools like 'tascan_register_agent' or state exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tascan_list_events (A)
Read-only, Idempotent

List all events (task lists) within a project

Parameters (JSON Schema)
project_id (required): Project ID

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description doesn't contradict annotations (consistent with readOnly/destructive hints). It adds valuable domain context by defining 'events' as 'task lists', which annotations don't provide. However, it omits behavioral details like pagination behavior, what constitutes an 'event' beyond the parenthetical, or error handling for invalid project IDs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient single-sentence structure. Every word earns its place, particularly the parenthetical '(task lists)' which disambiguates the domain term 'events'. Front-loaded with the verb. Minor nit: 'all' could potentially be omitted, but acceptable here.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a simple 1-parameter list operation with good annotations covering safety profiles. The description successfully clarifies the domain concept (events as task lists) which compensates somewhat for the missing output schema. Could improve by mentioning pagination or result limits.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is 3. The description mentions 'within a project' which semantically maps to the 'project_id' parameter, but doesn't add format specifications, example values, or sourcing information beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a clear verb ('List'), resource ('events'), and scope ('within a project'). The parenthetical clarification '(task lists)' is crucial domain context that distinguishes this from 'list_tasks' (likely listing individual tasks) and 'list_projects', though it doesn't explicitly differentiate from 'get_event' for single-item retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through 'within a project' (suggesting a project_id is required), but lacks explicit guidance on when to use this versus 'get_event' (single retrieval) or 'list_tasks', and doesn't mention prerequisites like project existence.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tascan_list_issues (A)
Read-only, Idempotent

List all issues for a task list (event). Returns open, acknowledged, and resolved issues with severity, type, and category. Use this to discover issues that need AI analysis via tascan_analyze_issue.

Parameters (JSON Schema)
list_id (required): Task list (event) ID

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations confirm read-only/idempotent safety, while description adds return payload structure: 'Returns open, acknowledged, and resolved issues with severity, type, and category.' This compensates for missing output_schema by describing what data comes back. Does not mention pagination or rate limits, preventing a 5.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three well-structured sentences: action+scope, return values, usage guideline. No redundancy. Front-loaded with the core action. Every sentence earns its place by adding distinct information not present in structured fields.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter read operation, the description is complete. It compensates for lack of output_schema by detailing return values (status types, fields). Annotations cover safety profile. Relationship to sibling workflow is documented.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage with list_id described as 'Task list (event) ID'. Description uses the phrase 'task list (event)' which aligns with but does not significantly expand upon the schema definition. With high schema coverage, baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent clarity with specific verb 'List', resource 'issues', and scope 'for a task list (event)'. Clearly distinguishes from sibling tascan_analyze_issue by positioning this as the discovery step before analysis, and from tascan_list_tasks by targeting issues rather than tasks.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit workflow guidance: 'Use this to discover issues that need AI analysis via tascan_analyze_issue.' Names the specific sibling tool to use next, creating a clear when-to-use chain. No other tool in the description provides this specific handoff instruction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
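The discovery-then-analyze chain described for tascan_list_issues can be sketched in a few lines. This is a hypothetical illustration only: the issue records, their IDs, and the field values are invented, and only the fields the tool description actually names (status, severity, type, category) are assumed to exist in the response.

```python
# Hypothetical issue records, shaped after the fields the description
# names: open/acknowledged/resolved status plus severity, type, category.
issues = [
    {"id": "iss-1", "status": "open", "severity": "high",
     "type": "damage", "category": "equipment"},
    {"id": "iss-2", "status": "resolved", "severity": "low",
     "type": "note", "category": "general"},
    {"id": "iss-3", "status": "acknowledged", "severity": "medium",
     "type": "delay", "category": "logistics"},
]

def needs_analysis(issue):
    """An issue still needs AI analysis unless it has been resolved."""
    return issue["status"] in ("open", "acknowledged")

# IDs that would be handed to tascan_analyze_issue next.
to_analyze = [i["id"] for i in issues if needs_analysis(i)]
```

The filter mirrors the handoff the description prescribes: list first, then pass unresolved issue IDs to the analysis tool.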

tascan_list_projects (B)
Read-only, Idempotent

List all TaScan projects in the organization

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety profile (readOnly, non-destructive, idempotent). Description adds scope ('in the organization') but omits critical behavioral details like return format, pagination limits, and whether deleted/archived projects are included, which is necessary given no output schema exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with strong front-loaded verb ('List'). Zero redundancy; every word conveys necessary scope and action information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Minimally sufficient for a simple read-only list operation given good annotations, but lacks description of return values or structure despite absent output schema, and omits pagination details implied by the 'all' keyword.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Correctly implies no input parameters are required (schema has 0 params). Description does not need to elaborate further since baseline for zero-parameter tools is 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb ('List') and resource ('TaScan projects') with explicit scope ('in the organization'). However, it does not explicitly distinguish from sibling 'tascan_get_project' (singular retrieval) by stating this returns multiple/unfiltered items versus a specific project.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this versus 'get_project' or filtered alternatives, and does not warn about the implications of listing 'all' projects (e.g., performance with large datasets) or pagination behavior.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tascan_list_tasks (B)
Read-only, Idempotent

List all tasks in an event (task list)

Parameters (JSON Schema)
list_id (required): Task list (event) ID

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations comprehensively declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false. Description aligns by using 'List' but adds minimal behavioral context beyond the 'all' scope indicator. No mention of response format or limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Nine words with zero redundancy. Parenthetical clarification efficiently binds domain terms 'event' and 'task list' without additional sentences.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple read-only listing tool with strong annotations. Missing only optional elaboration on return structure and differentiation from single-task retrieval.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage for the single list_id parameter ('Task list (event) ID'). Description mirrors this by referencing 'event (task list)' but adds no formatting guidance or examples beyond the schema's definition.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb 'List' and resource 'tasks' with scope 'in an event'. Parenthetical '(task list)' helps clarify domain terminology and distinguishes from listing projects/workers via sibling names, though it doesn't explicitly contrast with tascan_get_task.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this versus tascan_get_task (single retrieval) or how to obtain the list_id prerequisite. No mention of pagination behavior or filtering capabilities.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tascan_list_templates (A)
Read-only, Idempotent

List available task templates (built-in and saved)

Parameters (JSON Schema)
category (optional): Filter by category (e.g. "live-events", "hospitality", "logistics")

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds scope context by specifying 'built-in and saved' templates, which annotations don't cover. However, with annotations already declaring readOnly/idempotent/destructive hints, the description carries limited additional burden and omits pagination, rate limits, or return structure details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Seven words with zero waste. Parenthetical '(built-in and saved)' efficiently clarifies scope without verbosity. Front-loaded with verb and resource.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a simple list operation with rich annotations and full schema coverage. Absence of output schema is partially mitigated by clear resource naming, though return value structure remains unspecified.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage (category parameter fully documented), establishing baseline 3. Description mentions neither the category filter nor its semantics, relying entirely on schema documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specifies exact action (List), resource (task templates), and scope (built-in and saved). Effectively distinguishes from siblings like tascan_list_tasks or tascan_list_projects by clearly identifying the template resource type.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage by describing available template types, but lacks explicit guidance on when to use versus alternatives (e.g., when to use this instead of tascan_apply_template directly) or prerequisites for filtering.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tascan_list_workers (A)
Read-only, Idempotent

List all workers (taskees) in the organization

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true, so the description doesn't need to cover safety. It adds valuable domain context by clarifying 'workers (taskees)' and specifying 'all' workers in the organization, but lacks details on pagination, rate limits, or return structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with no redundant words. The parenthetical '(taskees)' efficiently clarifies domain terminology without adding bulk. Information density is high relative to length.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a parameterless read operation with rich annotations covering safety properties, the description adequately covers scope (organization-wide) and entity type (workers/taskees). No output schema exists, so return value explanation is not expected.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema contains zero parameters. Per the evaluation rules, 0 params establishes a baseline of 4. The description does not need to compensate for missing schema documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb (List) and resource (workers/taskees) with organizational scope. However, it does not explicitly distinguish from the sibling tool tascan_list_agents, which is important given the similar naming pattern.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

There is no guidance on when to use this tool versus alternatives (e.g., tascan_list_agents), nor any mention of prerequisites or typical usage contexts. The description only states what the tool does, not when to invoke it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tascan_recommend_fix (A)

Step 2 of the Closed-Loop Autonomous Operations Protocol. Post an AI-generated recommendation to an issue thread. Accepts both a text recommendation and an optional structured_recommendation object with task definitions for auto-dispatch. The recommendation is persisted in the AI audit trail.

Parameters (JSON Schema)
ai_agent (optional): Name of the AI agent posting (default: TaScan AI)
issue_id (required): Issue ID to recommend a fix for
recommendation (required): The AI-generated recommendation text (clear, actionable instructions)
structured_recommendation (optional): Optional structured recommendation with tasks for auto-dispatch. Format: { recommendation_summary, confidence_score, tasks: [{ title, description, response_type, requires_photo, is_safety_checkpoint, sort_order }], estimated_duration_minutes, required_responder_role }

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate write operations (readOnlyHint=false, destructiveHint=false). The description adds valuable behavioral context beyond these flags: it explicitly mentions persistence in the 'AI audit trail' (explaining side effects) and clarifies that the structured_recommendation enables 'auto-dispatch' (explaining downstream automation). No contradictions with annotations exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The four-sentence structure is efficient and front-loaded with the protocol context. Each sentence delivers distinct value: protocol positioning, core action, parameter details, and persistence behavior. Minor redundancy exists between 'Accepts both...' and the schema descriptions, but overall prose is tight.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of annotations covering safety profiles and a well-documented input schema with nested objects, the description adequately covers the tool's scope. It explains the audit trail persistence (compensating for missing output schema) and clarifies the purpose of the complex structured_recommendation object. It could benefit from mentioning error conditions or return behavior, but is sufficiently complete for invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage, establishing a baseline of 3. The description reinforces the relationship between the text and structured_recommendation parameters and emphasizes the 'optional' nature of the structured object, but does not add syntax details, validation rules, or semantic meaning beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the core action (post AI-generated recommendation to issue thread) and identifies the resource. The 'Step 2 of the Closed-Loop Autonomous Operations Protocol' positions it within a workflow sequence, distinguishing it from analysis (likely Step 1) and dispatch tools (likely Step 3). However, it does not explicitly name sibling alternatives or contrast with tools like tascan_dispatch_instruction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The 'Step 2' protocol reference provides implicit sequencing guidance, suggesting use after analysis and before dispatch. However, it lacks explicit when-to-use criteria, exclusion conditions ('do not use if...'), or named alternative tools for different scenarios. The guidance remains implied rather than prescriptive.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
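An illustrative tascan_recommend_fix argument payload, following the structured_recommendation format quoted in the parameter table. All concrete values here (issue ID, task titles, durations, roles) are invented for the sketch; only the field names come from the documented format.

```python
# Hypothetical arguments for tascan_recommend_fix; field names follow the
# documented structured_recommendation format, values are made up.
args = {
    "issue_id": "iss-1",
    "recommendation": "Replace the damaged cable and re-test the feed.",
    "structured_recommendation": {
        "recommendation_summary": "Cable replacement",
        "confidence_score": 0.85,
        "tasks": [
            {
                "title": "Swap cable",
                "description": "Replace the run between stage box and switcher.",
                "response_type": "photo",  # visual verification
                "requires_photo": True,
                "is_safety_checkpoint": False,
                "sort_order": 1,
            }
        ],
        "estimated_duration_minutes": 20,
        "required_responder_role": "AV tech",
    },
}

REQUIRED = {"issue_id", "recommendation"}

def missing_required(payload):
    """Return the required argument names absent from a payload."""
    return sorted(REQUIRED - payload.keys())
```

A client-side check like missing_required catches the two required fields before invocation; structured_recommendation stays optional, matching the schema.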

tascan_register_agent (A)
Idempotent

Register a new AI agent in the agent registry. The agent will appear in tascan_list_agents and can receive dispatched tasks. Self-registration for AI agents joining the TaScan network.

Parameters (JSON Schema)
id (required): Unique agent ID (e.g. "my-agent-1")
name (required): Display name (e.g. "Research Bot")
type (required): Agent type
model (optional): Model powering this agent (e.g. "claude-sonnet-4-6")
inbox_id (required): Task list ID this agent monitors for new tasks
location (optional): Where the agent runs (e.g. "AWS us-east-1")
worker_id (optional): TaScan worker ID for this agent
description (optional): What this agent does
capabilities (required): Task type prefixes this agent handles (e.g. ["RESEARCH", "WRITE"])

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond the annotations (which indicate idempotent, non-destructive, non-read-only), the description adds crucial behavioral context: the agent 'will appear in tascan_list_agents and can receive dispatched tasks' (side effects), and specifies the 'Self-registration' pattern (auth context). No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, each with distinct value: action definition, side effects/consequences, and usage context. Front-loaded with the verb, no redundant phrases, and no information that merely restates the schema or annotations.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 9-parameter registration tool with complete schema annotations and behavioral hints, the description adequately covers purpose, side effects, and invocation pattern. It appropriately omits return value details (no output schema exists) while explaining the registration's functional consequences.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents all 9 parameters including examples for capabilities and model. The description adds no specific parameter syntax or selection guidance beyond the 'Self-registration' hint, meeting the baseline expectation for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the specific action ('Register'), the resource ('AI agent'), and the location ('agent registry'). It distinguishes from siblings like tascan_create_worker by specifying 'AI agent' and referencing tascan_list_agents, clarifying this creates a register entry rather than a worker infrastructure resource.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The phrase 'Self-registration for AI agents joining the TaScan network' provides clear context for when to invoke this tool (during agent onboarding). While it doesn't explicitly name alternatives to avoid, it implies the caller is the agent itself, distinguishing it from administrative registration patterns.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
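A self-registration payload for tascan_register_agent can be sketched from the parameter table. The specific values (IDs, the "autonomous" type string, capabilities) are invented; the schema itself only says "Agent type" without enumerating values.

```python
# Required fields per the tascan_register_agent parameter table.
REQUIRED = {"id", "name", "type", "inbox_id", "capabilities"}

# Hypothetical registration payload; all values are illustrative.
agent = {
    "id": "my-agent-1",
    "name": "Research Bot",
    "type": "autonomous",          # value invented; schema says only "Agent type"
    "inbox_id": "list-42",
    "capabilities": ["RESEARCH", "WRITE"],
    "model": "claude-sonnet-4-6",  # optional
}

def is_registrable(payload):
    """True when every required registration field is present."""
    return REQUIRED <= payload.keys()
```

After a successful call, the agent would appear in tascan_list_agents and its inbox_id list would start receiving dispatched tasks, per the description.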

tascan_send_task_email (A)

Send a branded TaScan task notification email via SendGrid. Can notify anyone about a specific task list or task. Includes QR code, task summary, and "Open in TaScan" button.

Parameters (JSON Schema)
list_id (required): Task list (event) ID
message (optional): Optional custom message to include in the email body
subject (optional): Custom email subject (defaults to auto-generated)
task_id (optional): Optional specific task ID to highlight
to_name (optional): Recipient display name
to_email (required): Recipient email address
include_qr (optional): Include QR code for the task list in the email (default: true)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish the safety profile (destructiveHint: false, openWorldHint: true). The description adds valuable behavioral context not in annotations: it identifies SendGrid as the external provider, and discloses email content features (QR code, branding, 'Open in TaScan' button) that explain what recipients experience.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficiently structured sentences. First sentence establishes core action and provider; second sentence enumerates capabilities and content features. No redundancy or filler—every phrase adds distinguishing information about the email's branded nature and interactive elements.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 7-parameter external service tool with no output schema, the description adequately covers the user-facing behavior (email composition, QR inclusion). It could be improved by mentioning SendGrid failure modes, delivery status returns, or idempotency concerns (idempotentHint: false), but sufficiently explains the primary function.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, baseline is 3. The description adds semantic meaning by mapping parameters to user-facing features: 'Includes QR code' explains include_qr, 'task list or task' clarifies list_id/task_id relationship, and 'notify anyone' contextualizes to_email/to_name. This helps agents understand parameter interplay.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Send' with clear resource 'branded TaScan task notification email' and mechanism 'via SendGrid'. It clearly distinguishes from sibling tools like tascan_dispatch_instruction (internal) or tascan_generate_qr (QR-only) by specifying email delivery with included components.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description implies usage context ('Can notify anyone about a specific task list or task'), suggesting external notifications. However, it lacks explicit guidance on when to use this versus internal dispatch tools like tascan_dispatch_instruction or tascan_dispatch_to_agent, and omits prerequisites like valid list_id requirements.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
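One way to supply the missing routing guidance is sketched below as an illustrative description string. The wording is an assumption, not the server's actual text; only the sibling tool name tascan_dispatch_to_agent comes from this listing.

```python
# Illustrative only: a description rewritten to include the explicit
# "use X instead of Y when Z" routing guidance the review asks for.
IMPROVED_DESCRIPTION = (
    "Send a branded TaScan task notification email via SendGrid. "
    "Use this for external, human recipients; to route work to the "
    "Claude Code agent, use tascan_dispatch_to_agent instead. "
    "Requires an existing list_id (or task_id) from a prior list or get call."
)
```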

tascan_update_event (Grade A)
Idempotent

Update an event / task list (name, description, team_mode, multi_instance, timer_mode). team_mode and multi_instance cannot both be true.

Parameters (JSON Schema)

Name | Required | Description | Default
name | No | New name |
list_id | Yes | Task list (event) ID |
team_mode | No | Team mode — shared completions |
timer_mode | No | Timer mode (auto or manual) |
description | No | New description |
multi_instance | No | Multi-instance — each worker gets isolated copy |
Behavior: 4/5

Annotations declare idempotentHint=true and destructiveHint=false. The description adds valuable behavioral context beyond these: the constraint that 'team_mode and multi_instance cannot both be true' discloses a validation rule that would cause invocation failure, aiding agent planning.
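A client-side pre-flight check for this rule might look like the following sketch. The parameter names come from the schema above; the validation function itself is an assumption about how a careful client could guard the call, not part of the server.

```python
# Sketch of a pre-flight check for the documented rule that team_mode
# and multi_instance cannot both be true (parameter names from the
# tascan_update_event schema; the server presumably rejects this too).
def validate_update_event(args: dict) -> dict:
    if args.get("team_mode") and args.get("multi_instance"):
        raise ValueError("team_mode and multi_instance cannot both be true")
    return args

validate_update_event({"list_id": "evt_1", "team_mode": True})  # passes
```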

Conciseness: 5/5

Exactly two sentences with zero waste. The first sentence front-loads the purpose and enumerates updatable fields; the second sentence delivers the critical constraint. No redundancy with schema or annotations.

Completeness: 4/5

Given the 100% schema coverage, presence of annotations covering safety/idempotency, and the description's inclusion of the key validation constraint, the definition is sufficiently complete for an update operation despite lacking an output schema.

Parameters: 4/5

With 100% schema description coverage, the baseline is 3. The description adds value by identifying the specific parameter relationships (the mutual exclusivity constraint between team_mode and multi_instance), which single-parameter schema descriptions cannot convey.

Purpose: 4/5

The description clearly identifies the action ('Update') and resource ('an event / task list'), clarifying that 'event' and 'task list' are synonymous via the slash notation. It distinguishes from siblings like tascan_create_event, tascan_delete_event (different actions) and tascan_update_project/task/worker (different resources).

Usage Guidelines: 3/5

The description provides explicit validation guidance ('team_mode and multi_instance cannot both be true'), which constrains how to use the parameters. However, it lacks explicit guidance on when to select this tool versus siblings like tascan_create_event or tascan_get_event (implied only by the verb choice).

tascan_update_project (Grade B)
Idempotent

Update a project (name, location, status, dates)

Parameters (JSON Schema)

Name | Required | Description | Default
name | No | New name |
status | No | Status |
end_date | No | End date (ISO) |
location | No | New location |
project_id | Yes | Project ID |
start_date | No | Start date (ISO) |
Behavior: 3/5

Annotations cover safety profile (idempotent, non-destructive) so description burden is reduced. Description adds field-level context about what can be modified. However, it fails to disclose critical update semantics: whether omitted fields are preserved (PATCH-like) or cleared, which is essential for mutation tools.
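The PATCH-like behavior the review asks to see disclosed can be sketched as follows. Field names come from the schema above; whether the real server actually preserves omitted fields is exactly the open question the review raises, so this is an assumption, not documented behavior.

```python
# Sketch of PATCH-like semantics: only explicitly supplied fields enter
# the payload, so omitted fields would stay untouched server-side.
_UPDATABLE = {"name", "location", "status", "start_date", "end_date"}

def build_update_project_payload(project_id: str, **fields) -> dict:
    payload = {"project_id": project_id}
    payload.update({k: v for k, v in fields.items()
                    if k in _UPDATABLE and v is not None})
    return payload

build_update_project_payload("proj_7", status="active")
# {'project_id': 'proj_7', 'status': 'active'} -- name, location, dates omitted
```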

Conciseness: 4/5

Single sentence with no verbosity. Front-loaded with action verb. However, extreme brevity leaves gaps in behavioral disclosure that could be addressed with minimal additional text without sacrificing structure.

Completeness: 3/5

With 100% schema coverage and comprehensive annotations, structured data carries significant weight. Description adequately identifies updatable fields but lacks explanation of partial update behavior. Adequate but minimal for a mutation tool with no output schema.

Parameters: 3/5

Schema has 100% description coverage with basic descriptions for all 6 parameters. Description provides minimal additive value, essentially grouping parameters into logical categories (dates covers start/end) without adding syntax details, validation rules, or explaining that all fields except project_id are optional.

Purpose: 4/5

Clear verb 'Update' and resource 'project' with specific field enumeration (name, location, status, dates). The named resource distinguishes it from sibling tools like tascan_update_task, though it relies on the operation name alone to set it apart from tascan_create_project.

Usage Guidelines: 2/5

No guidance provided on when to use this tool versus alternatives like tascan_create_project or tascan_get_project. No mention of prerequisites such as requiring an existing project or validation constraints.

tascan_update_task (Grade C)
Idempotent

Update a task (title, description, response_type, flags, sort_order)

Parameters (JSON Schema)

Name | Required | Description | Default
title | No | New title |
task_id | Yes | Task ID |
sort_order | No | Sort position |
description | No | New description |
response_type | No | See tascan_add_tasks for guidance. "text" for info collection, "photo" for visual proof, "checkbox" for yes/no only. |
requires_photo | No | Require photo |
is_safety_checkpoint | No | Safety-critical flag |
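The response_type guidance in the schema reads as a small decision rule, sketched below as a hypothetical helper. The function and argument names are invented for illustration; only the three enum values and their intended uses come from the listing.

```python
# Hypothetical helper encoding the response_type guidance above:
# "photo" for visual proof, "text" for info collection, "checkbox"
# for simple yes/no confirmation only.
def choose_response_type(collects_info: bool, needs_visual_proof: bool) -> str:
    if needs_visual_proof:
        return "photo"     # inspections, serial numbers, damage checks
    if collects_info:
        return "text"      # names, phones, emails, notes
    return "checkbox"      # bare confirmation only

choose_response_type(collects_info=False, needs_visual_proof=True)  # "photo"
```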
Behavior: 2/5

While annotations cover the safety profile (idempotent, non-destructive, write operation), the description adds no behavioral context. It fails to clarify whether this performs a partial update (PATCH-like, leaving omitted fields untouched) or full replacement, and does not mention error handling for invalid task_ids.

Conciseness: 4/5

The description is a single, efficient sentence with no redundancy. However, it is arguably too minimal—it front-loads the action but wastes the opportunity to include critical usage context within the same concise format.

Completeness: 3/5

Given the rich schema (100% coverage, clear enum descriptions) and comprehensive annotations, the description meets minimum completeness for an update operation. However, it lacks operational context (e.g., partial update semantics, relationship to tascan_add_tasks) that would elevate it to fully self-sufficient documentation.

Parameters: 3/5

With 100% schema description coverage, the parameter documentation is complete in the schema. The description parenthetically lists fields but adds no semantic depth beyond what the schema already provides (e.g., syntax examples, validation rules, or cross-references), warranting the baseline score.

Purpose: 4/5

The description clearly states the action (Update) and resource (a task), and lists specific updatable fields (title, description, response_type, flags, sort_order) providing concrete scope. However, it does not explicitly differentiate from sibling update tools like tascan_update_event or tascan_update_project.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives (e.g., tascan_add_tasks for creation or tascan_delete_task followed by add for replacement). It omits prerequisites like requiring a valid task_id from a preceding get or list call.

tascan_update_worker (Grade B)
Idempotent

Update a worker profile (name, phone, email)

Parameters (JSON Schema)

Name | Required | Description | Default
name | No | New name |
email | No | New email |
phone | No | New phone |
worker_id | Yes | Worker ID |
Behavior: 3/5

Annotations declare idempotentHint=true, destructiveHint=false, and readOnlyHint=false, covering the safety profile. Description adds field-specific context but omits behavioral details like whether omitted fields are preserved or what response to expect.

Conciseness: 4/5

Single efficient sentence with verb-fronted structure. No wasted words, though brevity comes at the cost of missing usage context.

Completeness: 3/5

Minimum viable for a 4-parameter mutation tool. Schema and annotations carry most of the weight; description suffices for basic selection but fails to clarify partial update semantics or return values.

Parameters: 3/5

Schema coverage is 100% with 'New name/email/phone' descriptions. Description mirrors this by listing the same fields, adding minimal semantic value beyond the schema baselines.

Purpose: 4/5

Clear verb 'Update' and resource 'worker profile' with specific field enumeration (name, phone, email). The named resource distinguishes it from siblings like tascan_update_task, though the text draws no explicit contrast with tascan_create_worker.

Usage Guidelines: 2/5

No explicit guidance on when to use versus tascan_create_worker, nor any mention of prerequisites (e.g., worker must exist). Simply lists updatable fields without context on partial vs full updates.
