backloghq / backlog: List Tasks (task_list)

Instructions

Query and return tasks matching a filter expression. Returns a JSON array of task objects with all fields (uuid, id, description, status, priority, due, tags, urgency, etc.). Use this for browsing, searching, and reading task data. For just a count, use task_count instead. Filter syntax supports: attribute matching (project:X, status:pending, priority:H), tags (+bug, -old), virtual tags (+OVERDUE, +ACTIVE, +BLOCKED, +READY), date comparisons (due.before:friday), and boolean operators (and, or, parentheses). An empty filter returns all pending tasks.
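The documented syntax categories can be exercised with a few illustrative expressions. The tiny combine() helper below is purely hypothetical (the tool itself accepts a single filter string); it is shown only to make the 'and'/'or' grouping and parenthesization concrete.

```python
# Hypothetical helper for composing filter strings per the documented grammar.
def combine(op: str, *terms: str) -> str:
    """Join filter terms with 'and'/'or', parenthesizing compound terms."""
    return f" {op} ".join(t if " " not in t else f"({t})" for t in terms)

# Attribute matching, a tag, and a virtual tag, all from the description:
overdue_bugs = combine("and", "project:backend", "+bug", "+OVERDUE")
# -> "project:backend and +bug and +OVERDUE"

# A date comparison grouped with a priority, then OR-ed with a virtual tag:
urgent = combine("or", combine("and", "priority:H", "due.before:friday"), "+BLOCKED")
# -> "(priority:H and due.before:friday) or +BLOCKED"
```

Either resulting string would be passed unchanged as the tool's single filter argument.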

Input Schema

Name: filter
Required: Yes
Description: Filter expression to match tasks. Examples: 'status:pending', 'project:backend +bug', 'due.before:tomorrow', '+OVERDUE', '+BLOCKED', 'priority:H', 'agent:explorer'. Combine with 'and'/'or' and parentheses. Leave empty for all pending tasks.
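Per the schema, a call carries only the single filter field. The dicts below sketch hypothetical argument payloads; the surrounding MCP client call is assumed and not shown.

```python
# Hypothetical task_list argument payloads; "filter" is the only schema-defined key.
all_pending = {"filter": ""}                      # empty filter -> all pending tasks
overdue = {"filter": "+OVERDUE"}                  # virtual-tag example from the schema
backend_bugs = {"filter": "project:backend +bug"} # attribute match plus tag
```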
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully specifies the return format ('JSON array of task objects with all fields'), lists the included fields, and details empty-filter behavior ('returns all pending tasks'). It could be improved by explicitly stating this is read-only/non-destructive, but 'Query' and 'reading task data' clearly imply safe read semantics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Appropriately sized at roughly 4 sentences, front-loaded with core purpose first, followed by return format, usage guidance, alternative reference, and detailed syntax specification. Every sentence earns its place. The filter syntax sentence is dense but necessary given the lack of separate documentation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and annotations, the description adequately compensates by detailing return values and field contents. It comprehensively documents the complex filter syntax required to use the single parameter effectively. For a 1-parameter read tool, this covers the essential context needed for successful invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is 3. The description adds significant value by categorizing the filter syntax ('attribute matching', 'tags', 'virtual tags', 'date comparisons', 'boolean operators') and providing additional examples (-old, +ACTIVE, +READY) beyond the schema's enumerated list. This semantic structure helps agents construct valid filter expressions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb phrase ('Query and return') followed by the resource ('tasks') and scoping mechanism ('matching a filter expression'). It effectively distinguishes itself from sibling tools by explicitly naming 'task_count' as the alternative when only a count is needed.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use guidance ('Use this for browsing, searching, and reading task data') and explicitly names the sibling alternative ('For just a count, use task_count instead'). This clear differentiation helps agents select the correct tool based on whether they need full data or just a count.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
