
list_labels

Retrieve all labels or tags from your FreshRSS instance to organize and categorize RSS feed content.

Instructions

List all labels/tags

Input Schema


No arguments

Implementation Reference

  • The handler implementation for the list_labels tool, which fetches labels using client.tags.list() and formats the output string.
    wrapTool('list_labels', async () => {
      const { labels } = await client.tags.list();
    
      if (labels.length === 0) {
        return textResult('No labels found.');
      }
    
      const formatted = labels
        .map((l) => {
          const unread = l.unreadCount !== undefined ? ` (${l.unreadCount.toString()} unread)` : '';
          return `- ${l.name}${unread}`;
        })
        .join('\n');
    
      return textResult(formatted);
    })
  • The MCP tool registration for 'list_labels' within registerTagTools.
    server.registerTool(
      'list_labels',
      {
        description: 'List all labels/tags',
        inputSchema: z.object({}).strict(),
      },
      wrapTool('list_labels', async () => {
        const { labels } = await client.tags.list();
    
        if (labels.length === 0) {
          return textResult('No labels found.');
        }
    
        const formatted = labels
          .map((l) => {
            const unread = l.unreadCount !== undefined ? ` (${l.unreadCount.toString()} unread)` : '';
            return `- ${l.name}${unread}`;
          })
          .join('\n');
    
        return textResult(formatted);
      })
    );
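The label-formatting step in the handler above can be exercised in isolation. A minimal sketch, assuming a hypothetical `Label` shape inferred from the fields the handler reads (`name`, optional `unreadCount`):

```typescript
// Hypothetical Label shape, inferred from the handler above.
interface Label {
  name: string;
  unreadCount?: number;
}

// Mirrors the handler's formatting: one "- name" line per label,
// with an "(N unread)" suffix when a count is present.
function formatLabels(labels: Label[]): string {
  if (labels.length === 0) {
    return 'No labels found.';
  }
  return labels
    .map((l) => {
      const unread = l.unreadCount !== undefined ? ` (${l.unreadCount.toString()} unread)` : '';
      return `- ${l.name}${unread}`;
    })
    .join('\n');
}

// Sample data for illustration only.
const sample: Label[] = [
  { name: 'Tech', unreadCount: 3 },
  { name: 'Archive' },
];

console.log(formatLabels(sample));
// - Tech (3 unread)
// - Archive
```

Extracting the pure formatting function this way also makes the output format unit-testable without a live FreshRSS instance.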
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure, but it fails to mention pagination, caching behavior, whether results are user-scoped, or the return format. 'List all' implies no filtering, but it does not clarify whether this includes system labels or only user-created ones.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
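One way to close this gap is to attach behavioral annotations at registration time. The hint names below follow the MCP tool-annotation vocabulary (`readOnlyHint`, `destructiveHint`, `idempotentHint`, `openWorldHint`); the object is plain data for illustration, and the values are assumptions about how list_labels behaves:

```typescript
// Illustrative annotations for a read-only listing tool.
// Values are assumptions about list_labels, not confirmed behavior.
const listLabelsAnnotations = {
  title: 'List labels',
  readOnlyHint: true,      // makes no changes to the FreshRSS instance
  destructiveHint: false,  // nothing is deleted or overwritten
  idempotentHint: true,    // repeated calls return the same data
  openWorldHint: true,     // talks to an external FreshRSS server
};

console.log(JSON.stringify(listLabelsAnnotations, null, 2));
```

With annotations like these in place, the description can focus on semantics while the structured hints carry the behavioral disclosure.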

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise at only four words, with no wasted sentences. However, with no annotations or output schema to lean on, an appropriately sized description would need a little more context about return values, which keeps this short of a perfect score.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema and no annotations, the description should compensate by describing the return structure (e.g., whether it returns IDs, names, colors, or counts). The current description provides none of this necessary context for a list operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
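Absent an output schema, the description itself can spell out the return shape. A sketch, assuming the plain-text format the handler actually emits (names plus optional unread counts); the exact wording is illustrative:

```typescript
// Hypothetical replacement description that documents the return format.
const improvedDescription =
  'List all labels/tags defined on the connected FreshRSS instance. ' +
  'Returns plain text: one line per label in the form "- <name>" or ' +
  '"- <name> (<n> unread)"; returns "No labels found." when none exist.';

console.log(improvedDescription);
```

Documenting the empty-result sentinel ("No labels found.") matters as much as the happy path, since an agent must distinguish it from an error.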

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters, so description coverage is vacuously complete. Per the baseline rules for zero-parameter tools, this scores a 4: there are no parameters requiring semantic clarification beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the basic operation ('List all labels/tags') with a clear verb and resource, but it is minimal and does not differentiate this tool from siblings like add_labels or delete_label beyond the obvious verb difference. The 'labels/tags' phrasing does usefully signal that the two terms are synonymous in this system.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives like list_folders, or prerequisites for invocation. The agent is given no context about whether this should be called before add_labels or when labels need refreshing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
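Usage guidance of the "use X instead of Y when Z" form can live directly in the description string. A hedged sketch, reusing the sibling tool names already mentioned in this review (add_labels, delete_label, list_folders); the wording is illustrative, not the server's actual text:

```typescript
// Hypothetical description with explicit when-to-use guidance.
const descriptionWithGuidance =
  'List all labels/tags on the FreshRSS instance. ' +
  'Use this before add_labels or delete_label to confirm which labels exist; ' +
  'use list_folders instead when you need feed folders rather than labels.';

console.log(descriptionWithGuidance);
```

Even one sentence of routing guidance like this gives an agent a decision rule between otherwise similar-sounding tools.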

