
Search User

confluence_search_user
Read-only

Search Confluence users by name or query across Cloud, Server, or Data Center instances. Find team members using CQL queries or group-based searches.

Instructions

Search Confluence users using CQL (Cloud) or group member API (Server/DC).
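The deployment split above can be sketched as a small dispatch. The labels "cloud", "server", and "datacenter" are illustrative assumptions; the tool detects the deployment internally rather than taking it as a parameter:

```python
def pick_search_strategy(deployment: str) -> str:
    """Choose the user-search mechanism by deployment type.

    The deployment labels here are assumed names, not part of the
    tool's interface.
    """
    if deployment == "cloud":
        return "cql"                # Cloud: CQL user search
    if deployment in ("server", "datacenter"):
        return "group_member_api"   # Server/DC: group member API
    raise ValueError(f"unknown deployment: {deployment!r}")
```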

Args:
- ctx: The FastMCP context.
- query: Search query, a CQL query string for user search.
- limit: Maximum number of results (1-50).
- group_name: Group to search within on Server/DC.

Returns: JSON string representing a list of simplified Confluence user search result objects.
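A minimal sketch of assembling the argument payload and decoding the returned JSON string. The default limit of 10 and the sample result fields are assumptions for illustration, not documented behavior:

```python
import json

def build_search_user_args(query: str, limit: int = 10,
                           group_name: str = "confluence-users") -> dict:
    """Assemble the argument payload for confluence_search_user.

    limit must stay within the documented 1-50 range; the default
    of 10 used here is an assumption.
    """
    if not 1 <= limit <= 50:
        raise ValueError("limit must be between 1 and 50")
    return {"query": query, "limit": limit, "group_name": group_name}

# The tool returns a JSON *string*, so callers decode it before use.
# The field names below are illustrative, not the documented shape.
raw = '[{"displayName": "First Last", "accountId": "abc123"}]'
users = json.loads(raw)
```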

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| query | Yes | Search query: a CQL query string for user search. Example of a basic user lookup by full name: `user.fullname ~ "First Last"`. Note: special identifiers need proper quoting in CQL: personal space keys (e.g., `"~username"`), reserved words, numeric IDs, and identifiers with special characters. | — |
| limit | No | Maximum number of results (1-50) | — |
| group_name | No | Group to search within on Server/DC instances. Ignored on Cloud. | confluence-users |
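The quoting note for `query` can be made concrete with a small helper. The escaping rule shown (backslash-escaping embedded double quotes) is an assumption about CQL string literals, not taken from the tool's documentation:

```python
def cql_fullname(name: str) -> str:
    """Build a basic full-name lookup clause for the query parameter.

    Wrapping the value in double quotes also covers the identifiers
    that CQL requires quoting for: personal space keys ("~username"),
    reserved words, numeric IDs, and special characters.
    """
    escaped = name.replace('"', '\\"')  # assumed CQL escaping rule
    return f'user.fullname ~ "{escaped}"'
```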

Output Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| result | Yes | — | — |
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations include readOnlyHint=true, indicating a safe read operation. The description adds value by specifying the deployment-dependent behavior (CQL for Cloud, group member API for Server/DC) and the return format (JSON string of simplified user objects), which are not covered by annotations. However, it lacks details on rate limits, authentication needs, or error handling, leaving some behavioral aspects unclear.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose and deployment context, followed by a structured Args/Returns section. It is efficient with no wasted sentences, though the Args section slightly repeats schema information, keeping it from a perfect score.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 parameters, deployment variations), the description covers purpose, usage context, and return format. With annotations (readOnlyHint) and an output schema (implied by 'Returns' statement), it is mostly complete. However, it could improve by addressing authentication or error scenarios more explicitly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, providing detailed descriptions for all parameters (query, limit, group_name) including examples and defaults. The description adds minimal value beyond the schema, only reiterating parameter names without new semantics. Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Search') and resource ('Confluence users'), specifying the search mechanism (CQL for Cloud, group member API for Server/DC). It distinguishes from sibling 'confluence_search' which searches content, not users, making the purpose specific and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context by mentioning the two deployment types (Cloud vs. Server/DC) and their respective search methods, which helps guide usage. However, it does not explicitly state when to use this tool versus alternatives like 'jira_get_user_profile' or other user-related tools, nor does it provide exclusions or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/GeiserX/atlassian-browser-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.