
Add Label

confluence_add_label
Destructive

Add labels to Confluence pages, blog posts, or attachments to categorize content, track status, and enable filtering by topic or version.

Instructions

Add label to Confluence content (pages, blog posts, or attachments).

Useful for:

  • Categorizing attachments (e.g., 'screenshot', 'diagram', 'legal-doc')

  • Tracking status (e.g., 'approved', 'needs-review', 'archived')

  • Filtering content by topic or version

Args:

  • ctx: The FastMCP context.

  • page_id: Content ID (page or attachment).

  • name: Label name to add.

Returns: JSON string representing the updated list of label objects.

Raises: ValueError: If in read-only mode or Confluence client is unavailable.
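As an illustration, the tool described above could be invoked with a JSON-RPC 2.0 `tools/call` request. This is a hedged sketch of what an MCP client might send over its transport, not the server's documented client API; argument values are examples from this page.

```python
import json

# Hypothetical MCP "tools/call" request invoking confluence_add_label.
# Tool and parameter names are taken from this page; transport details vary.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "confluence_add_label",
        "arguments": {
            "page_id": "123456789",   # numeric ID for a page/blog post
            "name": "needs-review",   # lowercase, no spaces
        },
    },
}

print(json.dumps(request, indent=2))
```

On success, the `result` field of the response carries the JSON string of updated label objects noted under Returns; in read-only mode the call would instead surface the ValueError documented above.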

Input Schema

  • page_id (required): Confluence content ID to label. For pages/blogs: numeric ID (e.g., '123456789'). For attachments: ID with 'att' prefix (e.g., 'att123456789'). Use get_attachments to find attachment IDs.

  • name (required): Label name to add (lowercase, no spaces). Examples: 'draft', 'reviewed', 'confidential', 'v1.0'. Labels help organize and categorize content.
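The two constraints above can be checked client-side before calling the tool. The helper below is a hypothetical sketch based only on the schema descriptions on this page ('att'-prefixed or numeric IDs; lowercase, space-free label names), not part of the server:

```python
import re

def validate_label_args(page_id: str, name: str) -> list[str]:
    """Return a list of problems with the arguments (empty list means OK)."""
    problems = []
    # page_id: numeric for pages/blogs, or the same digits with an 'att' prefix
    if not re.fullmatch(r"(att)?\d+", page_id):
        problems.append("page_id must be numeric, optionally prefixed with 'att'")
    # name: lowercase with no spaces, per the schema description
    if name != name.lower() or " " in name:
        problems.append("name must be lowercase with no spaces")
    return problems

print(validate_label_args("att123456789", "legal-doc"))  # []
print(validate_label_args("page-42", "Needs Review"))    # two problems
```

Catching a malformed 'page-42' or 'Needs Review' locally avoids a round trip that would otherwise fail server-side.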

Output Schema

  • result (required)
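Because `result` arrives as a JSON string rather than structured data, a client typically parses it before use. The field names in this sketch (prefix, name, id) are assumptions modeled on Confluence's REST label objects and are not guaranteed by this output schema:

```python
import json

# Illustrative raw result string; field names are assumed from Confluence's
# REST label shape, not guaranteed by the tool's output schema.
raw_result = '[{"prefix": "global", "name": "needs-review", "id": "1001"}]'

labels = json.loads(raw_result)          # JSON string -> list of dicts
names = [label["name"] for label in labels]
print(names)  # ['needs-review']
```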
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond the annotations. While the annotations include 'destructiveHint: true', the description elaborates with specific error conditions ('Raises: ValueError: If in read-only mode or Confluence client is unavailable'), which informs the agent about prerequisites and failure modes. It also mentions the return format ('JSON string representing the updated list of label objects'), though this is partially covered by the output schema. No contradiction with annotations is present.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and concise, with clear sections (purpose, usage guidelines, args, returns, raises) and no wasted sentences. Each part adds value, such as the bullet-pointed 'Useful for' list that efficiently communicates use cases, making it easy for an agent to parse and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, destructive operation), the description is complete. It covers purpose, usage, parameters, returns, and error conditions; with the annotations providing destructiveHint and an output schema present, there are no significant gaps. The description also compensates for any thinness in the annotations by detailing behavioral aspects such as the read-only mode check.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description includes an 'Args' section that lists parameters ('page_id', 'name') and provides basic semantics, such as examples for 'name' (e.g., 'draft', 'reviewed'). However, the input schema already has 100% description coverage with detailed documentation for each parameter (e.g., format rules for 'page_id', examples for 'name'), so the description adds minimal extra value. The baseline score of 3 reflects adequate but not enhanced parameter clarification.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Add label') and resource ('Confluence content'), and distinguishes it from siblings like 'confluence_get_labels' (which retrieves labels) and 'confluence_update_page' (which modifies page content). It explicitly lists the types of content it applies to (pages, blog posts, or attachments), making the scope unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines with a 'Useful for' section that gives concrete examples of when to use the tool (e.g., categorizing attachments, tracking status, filtering content). It implicitly distinguishes from siblings by focusing on label addition rather than retrieval or other operations, though it does not explicitly name alternatives like 'confluence_get_labels' for checking existing labels.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
