Backlog MCP Server (nulab)

get_watching_list_count

Retrieve the count of items a user is watching in Backlog by providing their user ID.

Instructions

Returns count of watching items for a user

Input Schema

Name          Required  Description
userId        Yes       User ID
organization  No        Optional organization name. Use list_organizations to inspect available organizations.

Implementation Reference

  • The handler function for the get_watching_list_count tool. It takes a userId and calls backlog.getWatchingListCount(userId) to return the count of watching items for a user.
    export const getWatchingListCountTool = (
      backlog: Backlog,
      { t }: TranslationHelper
    ): ToolDefinition<
      ReturnType<typeof getWatchingListCountSchema>,
      (typeof WatchingListCountSchema)['shape']
    > => {
      return {
        name: 'get_watching_list_count',
        description: t(
          'TOOL_GET_WATCHING_LIST_COUNT_DESCRIPTION',
          'Returns count of watching items for a user'
        ),
        schema: z.object(getWatchingListCountSchema(t)),
        outputSchema: WatchingListCountSchema,
        handler: async ({ userId }) => backlog.getWatchingListCount(userId),
      };
    };
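The handler above can be exercised in isolation with a stubbed client. The following is a minimal sketch, not the server's actual test setup; `BacklogLike` is a stand-in for the subset of the real backlog-js client this tool touches.

```typescript
// Stand-in for the slice of the Backlog client the tool uses (an assumption
// based on the excerpt; the real client exposes many more methods).
type BacklogLike = {
  getWatchingListCount(userId: number): Promise<{ count: number }>;
};

// Stub client that pretends user 42 watches 7 items.
const stub: BacklogLike = {
  getWatchingListCount: async (userId) => ({ count: userId === 42 ? 7 : 0 }),
};

// A handler shaped like the one in the excerpt above: pass userId through,
// return the count payload unchanged.
const handler = async ({ userId }: { userId: number }) =>
  stub.getWatchingListCount(userId);

handler({ userId: 42 }).then((result) => {
  console.log(result.count); // prints 7
});
```

Because the handler simply forwards to the client, a stub like this is enough to verify the wiring without network access.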
  • Input schema defining userId (number) as the required input parameter.
    const getWatchingListCountSchema = buildToolSchema((t) => ({
      userId: z
        .number()
        .describe(t('TOOL_GET_WATCHING_LIST_COUNT_USER_ID', 'User ID')),
    }));
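For readers without zod at hand, the runtime behavior of that schema amounts to the following check. This is a hand-rolled illustrative sketch, not how the server itself validates input.

```typescript
// Rough equivalent of z.object({ userId: z.number() }).parse(input):
// reject anything that is not an object carrying a numeric userId.
function parseWatchingListCountInput(input: unknown): { userId: number } {
  if (typeof input !== 'object' || input === null) {
    throw new Error('Invalid input: expected an object');
  }
  const userId = (input as { userId?: unknown }).userId;
  if (typeof userId !== 'number') {
    throw new Error('Invalid input: userId must be a number');
  }
  return { userId };
}

console.log(parseWatchingListCountInput({ userId: 5 }).userId); // prints 5
```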
  • Output schema (WatchingListCountSchema) defining the response shape: { count: number }
    export const WatchingListCountSchema = z.object({
      count: z.number(),
    });
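A caller can confirm a payload matches that `{ count: number }` shape with a plain type guard. Again a sketch for illustration, not project code:

```typescript
// Type guard mirroring WatchingListCountSchema: the value must be an object
// whose count property is a number.
function isWatchingListCount(value: unknown): value is { count: number } {
  return (
    typeof value === 'object' &&
    value !== null &&
    typeof (value as { count?: unknown }).count === 'number'
  );
}

console.log(isWatchingListCount({ count: 12 })); // prints true
console.log(isWatchingListCount({ total: 12 })); // prints false
```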
  • Import of the getWatchingListCountTool in the central tools registration file.
    import { getWatchingListCountTool } from './getWatchingListCount.js';
    import { getWatchingListItemsTool } from './getWatchingListItems.js';
  • Registration of getWatchingListCountTool in the 'issue' toolset within the allTools function.
    getWatchingListCountTool(backlog, helper),
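The registration pattern can be sketched as follows; the factory names and the simplified `ToolDefinition` shape here are hypothetical stand-ins for the imported modules, kept only to show how a registry collects definitions for dispatch by name.

```typescript
// Simplified stand-in for the tool-definition shape used in the excerpts.
type ToolDefinition = { name: string; handler: (args: unknown) => Promise<unknown> };

// Hypothetical factories standing in for the imported tool modules.
const countTool = (): ToolDefinition => ({
  name: 'get_watching_list_count',
  handler: async () => ({ count: 0 }),
});
const itemsTool = (): ToolDefinition => ({
  name: 'get_watching_list_items',
  handler: async () => [],
});

// A registry in the spirit of allTools: gather every definition so the
// server can route incoming tool calls by name.
const allTools = (): ToolDefinition[] => [countTool(), itemsTool()];

console.log(allTools().map((t) => t.name).join(', '));
// prints get_watching_list_count, get_watching_list_items
```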
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, so the description must carry the full burden. It states only that it returns a count; it does not disclose that this is a read-only operation, nor mention authentication requirements or performance implications.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single 7-word sentence. Extremely concise with no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description is minimal. It does not specify the output format (e.g., a single number) or that it returns a count of all watching items. Adequate for a simple tool but lacks completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the description adds no additional meaning beyond the schemas. The schema already describes userId and organization, with organization referring to list_organizations.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns a count of watching items for a user. It uses a specific verb ('returns') and resource ('count of watching items'), distinguishing it from sibling tools like get_watching_list_items.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description offers no usage guidance: it does not say when to prefer this tool over get_watching_list_items or other count tools, or what alternatives exist.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
