
# Backlog MCP Server

## update_watching

Update the note associated with a watch in Backlog, specifying the watch ID and the new note content.

### Instructions

Updates an existing watch note.

### Input Schema

| Name | Required | Description | Default |
|---|---|---|---|
| `watchId` | Yes | Watch ID | — |
| `note` | Yes | Updated note for the watch | — |
| `organization` | No | Optional organization name. Use `list_organizations` to inspect available organizations. | — |
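To make the schema concrete, here is a hypothetical invocation sketch. The argument shape follows the table above; the `isValidArgs` guard is an illustrative client-side check, not part of the server:

```typescript
// Hypothetical arguments object for an update_watching call,
// matching the input schema above.
type UpdateWatchingArgs = {
  watchId: number;       // required: numeric watch ID
  note: string;          // required: replacement note text
  organization?: string; // optional: Backlog organization name
};

// Minimal client-side shape check before sending the call (sketch only).
function isValidArgs(args: unknown): args is UpdateWatchingArgs {
  if (typeof args !== "object" || args === null) return false;
  const a = args as Record<string, unknown>;
  return (
    typeof a.watchId === "number" &&
    typeof a.note === "string" &&
    (a.organization === undefined || typeof a.organization === "string")
  );
}

const args = { watchId: 12345, note: "Waiting on upstream fix" };
console.log(isValidArgs(args)); // true
```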

### Implementation Reference

- The handler that executes the `update_watching` tool logic. It calls `backlog.patchWatchingListItem(watchId, note)` to update an existing watch note.

  ```typescript
  handler: async ({ watchId, note }) =>
    backlog.patchWatchingListItem(watchId, note),
  ```

- Input schema definition using zod, declaring `watchId` (number) and `note` (string) as the required parameters.

  ```typescript
  const updateWatchingSchema = buildToolSchema((t) => ({
    watchId: z.number().describe(t('TOOL_UPDATE_WATCHING_WATCH_ID', 'Watch ID')),
    note: z
      .string()
      .describe(t('TOOL_UPDATE_WATCHING_NOTE', 'Updated note for the watch')),
  }));
  ```

- Output schema import (`WatchingListItemSchema`) from `backlogOutputDefinition.ts`, defining the shape of the return value.

  ```typescript
  import { WatchingListItemSchema } from '../types/zod/backlogOutputDefinition.js';
  ```

- Import of `updateWatchingTool` from the `updateWatching` module.

  ```typescript
  import { updateWatchingTool } from './updateWatching.js';
  ```

- Registration of the `update_watching` tool within the `issue` toolset by calling `updateWatchingTool(backlog, helper)`.

  ```typescript
  updateWatchingTool(backlog, helper),
  ```
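Putting the pieces above together, a minimal self-contained sketch of the handler's call flow. The `backlog` client is mocked here, and the return shape of `patchWatchingListItem` is an assumption for illustration, not the library's documented type:

```typescript
// Mock of the Backlog client exposing only the one method the handler needs.
// The real client comes from the backlog-js library; this stub and its
// return shape are assumptions for illustration only.
interface BacklogClient {
  patchWatchingListItem(
    watchId: number,
    note: string
  ): Promise<{ id: number; note: string }>;
}

const backlog: BacklogClient = {
  patchWatchingListItem: async (watchId, note) => ({ id: watchId, note }),
};

// Handler mirroring the snippet above: forwards watchId and note to the client.
const handler = async ({ watchId, note }: { watchId: number; note: string }) =>
  backlog.patchWatchingListItem(watchId, note);

handler({ watchId: 42, note: "Updated note" }).then((item) => {
  console.log(item.id, item.note); // prints: 42 Updated note
});
```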
### Behavior — 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description bears the full burden but only states it updates a note, omitting behavioral traits like whether it overwrites existing notes, required permissions, or error handling. No contradiction with annotations (none exist).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

### Conciseness — 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single 5-word sentence, extremely concise and front-loaded. Every word earns its place, though additional structure (e.g., bullet list of effects) is not provided.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

### Completeness — 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description fails to explain return values (e.g., updated object or success status) and side effects. For an update operation with 3 parameters, more context is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

### Parameters — 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with descriptions for all parameters. However, the descriptions add no meaning beyond the schema itself (e.g., 'Watch ID' for watchId, 'Updated note' for note). With full coverage but no added value, a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
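As an illustration of the kind of added meaning this dimension asks for, a hypothetical enriched set of parameter descriptions. The specific constraints stated below (positive integer, non-empty, overwrite semantics) are invented for the sketch and are not documented behavior of this server:

```typescript
// Hypothetical enriched parameter docs: each entry pairs the bare schema
// description with constraints an agent would otherwise have to guess.
// The constraints below are illustrative assumptions, not documented
// behavior of the Backlog MCP server.
const enrichedParamDocs: Record<string, string> = {
  watchId:
    "Watch ID (positive integer). Obtain it from a prior watch-listing call; " +
    "an unknown ID results in an error from the Backlog API.",
  note:
    "Replacement note text (non-empty string). Overwrites the existing note " +
    "rather than appending to it.",
  organization:
    "Optional organization name. Use list_organizations to inspect " +
    "available organizations.",
};

console.log(Object.keys(enrichedParamDocs).length); // 3
```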

### Purpose — 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Updates an existing watch note' clearly states the verb (updates) and resource (existing watch note), distinguishing it from siblings like add_watching (create) and delete_watching (delete). It is specific and unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

### Usage Guidelines — 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives (e.g., add_watching for creation or delete_watching for removal). The optional organization parameter references list_organizations, but overall usage context is missing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
