Glama

refine_backlog

Transform messy backlog items into structured work items with clear titles, acceptance criteria, size estimates, and priorities to improve project planning and execution.

Instructions

Refine messy backlog items into structured, actionable work items. Returns each item with a clean title, problem statement, acceptance criteria, T-shirt size estimate (XS/S/M/L/XL), priority with rationale, tags, and optional assumptions. Free tier: up to 5 items per request. Pro: 25. Team: 50.

BEFORE calling this tool, ask the user TWO quick questions if they haven't already answered them:

  1. Would you like titles formatted as user stories? ("As a [user], I want [goal], so that [benefit]")

  2. Would you like acceptance criteria in Gherkin format? (Given/When/Then)

Set useUserStories and useGherkin according to their answers. Both default to false.

LICENSE KEY: For unlimited requests and higher item limits, set REFINE_BACKLOG_KEY in your MCP server environment config (Claude Desktop → claude_desktop_config.json → env section). Get a key at https://refinebacklog.com/pricing
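As a sketch, the env section might look like the following. Only the REFINE_BACKLOG_KEY variable name comes from the documentation above; the server name, command, and package name are illustrative placeholders that depend on how the server is actually installed:

```json
{
  "mcpServers": {
    "refine-backlog": {
      "command": "npx",
      "args": ["-y", "refine-backlog-mcp"],
      "env": {
        "REFINE_BACKLOG_KEY": "your-license-key-here"
      }
    }
  }
}
```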

Input Schema

items (required)
  Array of raw backlog item strings to refine. Each string is a rough description of work to be done.

context (optional)
  Optional project context to improve relevance. Example: "B2B SaaS CRM for enterprise sales teams" or "Mobile fitness app for casual runners".

licenseKey (optional)
  Refine Backlog license key for Pro or Team tier. Preferred: set REFINE_BACKLOG_KEY in your MCP server env config instead of passing inline. Get a key at https://refinebacklog.com/pricing. Free tier (5 items, 3 req/day) works without a key.

useUserStories (optional, default: false)
  Format titles as user stories: "As a [user], I want [goal], so that [benefit]".

useGherkin (optional, default: false)
  Format acceptance criteria as Gherkin: Given/When/Then.
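A minimal request under the free tier might look like this. The field names follow the schema above; the item strings and context value are invented examples:

```json
{
  "items": [
    "login sometimes breaks on mobile",
    "customers keep asking for CSV export of reports"
  ],
  "context": "B2B SaaS CRM for enterprise sales teams",
  "useUserStories": true,
  "useGherkin": false
}
```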
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the transformation process (from messy to structured), output format details, tier-based rate limits (items per request), authentication/licensing requirements (license key for higher tiers), and default values for boolean parameters. It doesn't mention error handling or response time, but covers most critical aspects for a tool with no annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately front-loaded with the core purpose, but contains some redundancy (license key information appears twice) and could be more streamlined. The licensing details and URL reference, while important, add length. Most sentences earn their place, but the structure could be tighter with better grouping of related information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (5 parameters, no output schema, no annotations), the description provides substantial context: it explains the transformation process, output structure, tier limits, prerequisites (questions to ask), and licensing. However, without an output schema, it doesn't fully describe the return format (only lists fields without structure details), and some behavioral aspects like error conditions are missing. For a tool with this complexity, it's quite complete but has minor gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds significant value beyond the schema: it explains the purpose of the 'items' parameter ('raw backlog item strings to refine'), provides concrete examples for 'context', clarifies the relationship between 'licenseKey' and environment configuration, and gives formatting details for 'useUserStories' and 'useGherkin' that go beyond the schema's descriptions. However, it doesn't fully explain the semantics of all parameters (e.g., what 'T-shirt size estimate' means in practice).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Refine messy backlog items into structured, actionable work items' with specific outputs listed (clean title, problem statement, acceptance criteria, T-shirt size estimate, priority with rationale, tags, optional assumptions). It uses specific verbs ('refine', 'returns') and resources ('backlog items', 'work items'), and since there are no sibling tools, it doesn't need to differentiate from them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines: it instructs the agent to ask two specific questions before calling the tool if the user hasn't already specified them (about user stories and Gherkin format), and explains how to set parameters based on user answers. It also details tier limits (Free: 5 items, Pro: 25, Team: 50) and when to use the licenseKey parameter versus environment configuration.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/DavidNielsen1031/refine-backlog-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.