Glama

board_get_tasks

List and filter tasks within a project, sorted by priority. Excludes completed tasks by default but can include them when needed. Use status filters like 'in_progress' or 'todo' to focus on current work items.

Instructions

List tasks in a project with optional filters. Results are sorted client-side by priority (critical → low) — not by creation time. By default excludes done tasks (pass include_done=true or set status='done' to see them). Use this for mid-session checks: almost always pass a status filter (e.g., 'in_progress' or 'todo') to keep responses tight. For a single task by ID, use board_get_task instead. Returns an array of task objects with id, project_id, title, description, status, priority, assigned_agent, parent_task_id, depends_on, riper_mode, metadata, and ISO timestamps (created_at, updated_at, started_at, completed_at).
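Following the guidance above, a focused mid-session query passes a status filter alongside the required project ID. The argument values below are purely illustrative (real project IDs come from board_get_projects):

```python
# Hypothetical arguments for a board_get_tasks call. Per the guidance
# above, a status filter is almost always passed to keep responses tight.
arguments = {
    "project_id": "proj-123",   # illustrative ID; obtain real IDs via board_get_projects
    "status": "in_progress",    # focus on current work items
}
```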

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| project_id | Yes | Project ID (from board_get_projects) whose tasks to list | |
| status | No | Filter to a single status. Omit to return all non-done tasks (unless include_done=true). | |
| priority | No | Filter to a single priority. Omit to return all priorities. | |
| assigned_agent | No | Filter to tasks assigned to this agent name (exact match). Omit to return all assignments. | |
| include_done | No | Include tasks with status='done' (done tasks are hidden by default to keep responses small). Ignored if an explicit status filter is set. | false |
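The documented filter interactions and client-side sort can be sketched as follows. This is an illustrative model, not the server's actual implementation, and it assumes a conventional critical/high/medium/low priority ladder (the description only confirms the critical-to-low direction):

```python
# Sketch of the documented semantics: an explicit status filter wins over
# include_done; done tasks are hidden by default; results are sorted
# client-side by priority (critical first), not by creation time.
PRIORITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def filter_and_sort(tasks, status=None, priority=None,
                    assigned_agent=None, include_done=False):
    result = []
    for task in tasks:
        if status is not None:
            if task["status"] != status:
                continue  # explicit status filter; include_done is ignored
        elif not include_done and task["status"] == "done":
            continue      # done tasks hidden by default
        if priority is not None and task["priority"] != priority:
            continue
        if assigned_agent is not None and task["assigned_agent"] != assigned_agent:
            continue
        result.append(task)
    # Unknown priorities (if any) sort last.
    return sorted(result,
                  key=lambda t: PRIORITY_ORDER.get(t["priority"], len(PRIORITY_ORDER)))
```

For example, with include_done left at its default, a task with status='done' is dropped even if it has critical priority, while passing status='done' returns only completed tasks.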
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: sorting ('Results are sorted client-side by priority (critical → low) — not by creation time'), default exclusions ('By default excludes done tasks'), and response characteristics ('Returns an array of task objects...'). However, it says nothing about authentication requirements or rate limits, leaving some gaps in behavioral disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with the core purpose stated first. Every sentence adds value: it explains sorting, default exclusions, usage tips, sibling differentiation, and return values without redundancy. The structure flows logically from general to specific details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (5 parameters, no annotations, no output schema), the description does a good job of covering essential context. It explains the tool's behavior, usage guidelines, and return format. However, without an output schema, it could benefit from more detail on error handling or pagination, though the current level is largely sufficient for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so the schema already documents all parameters thoroughly. The description adds only modest parameter semantics beyond the schema, chiefly clarifying the default behavior of 'include_done' and how the filters interact. With schema coverage this high, that meets the baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('List tasks in a project with optional filters') and distinguishes it from its sibling board_get_task by specifying 'For a single task by ID, use board_get_task instead.' It also mentions the resource (tasks) and scope (project).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool vs. alternatives: 'For a single task by ID, use board_get_task instead.' It also offers practical advice: 'Use this for mid-session checks: almost always pass a status filter (e.g., 'in_progress' or 'todo') to keep responses tight.' This covers both alternative tools and optimal usage contexts.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/HuntsDesk/ve-vibe-board'

If you have feedback or need assistance with the MCP directory API, please join our Discord server