backloghq / backlog

Complete Task

task_done

Mark pending or active tasks as completed, record end timestamps, and track finished work in your backlog. May trigger the next recurring instance automatically.

Instructions

Mark a task as completed, setting its status to 'completed' and recording the end timestamp. The task remains in the database and appears in completed task queries. If the task is a recurring instance, completing it may trigger generation of the next instance. To record work that was already done without creating a pending task first, use task_log instead.

Input Schema

Name  Required  Description
id    Yes       Task ID number (e.g. '1') or UUID. Task must be in pending or active status.
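For illustration, a call to this tool under the MCP tools/call convention might carry arguments like the following. The envelope shape depends on the client, and the id value here is hypothetical; the schema also accepts a UUID string in place of the numeric ID:

```json
{
  "name": "task_done",
  "arguments": {
    "id": "1"
  }
}
```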
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the status-change mechanics, end-timestamp recording, persistence behavior ('remains in the database'), query visibility, and the side effect for recurring instances.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences with no waste: front-loaded with the core action, followed by the persistence clarification, the side-effect warning, and the alternative-tool reference. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter mutation with no output schema, the description comprehensively covers state changes, side effects, and alternatives. Minor gap: it does not describe the return value or the error behavior for already-completed tasks, though an agent can infer success or failure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with a detailed parameter description (including the status constraint). The tool description implies operating on a task but adds no syntax or format details beyond the schema, warranting the baseline score for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Mark a task as completed', 'setting its status') and identifies the resource clearly. The mention of 'task_log' for alternative workflows distinguishes this tool from its siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly directs agents to 'task_log' when recording work that was done without first creating a pending task. Clear when/when-not guidance that helps agents select between sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
