
Server Details

Confluence MCP — wraps the Confluence Cloud REST API v2 (OAuth)

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-confluence
GitHub Stars: 0
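
For orientation, a minimal connection sketch using the official `mcp` Python SDK's Streamable HTTP client. The endpoint URL is a placeholder, since this listing does not show one:

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

# Placeholder endpoint; the listing does not publish the server URL.
SERVER_URL = "https://example.com/mcp"

async def main() -> None:
    # The Streamable HTTP transport yields a read stream, a write stream,
    # and a session-id getter (unused here).
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```

The tool snippets further down assume a `session` opened this way.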

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

[Diagram: MCP client → Glama gateway → MCP server]

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: B)

Average 3.8/5 across 10 of 10 tools scored. Lowest: 2.9/5.

Server Coherence (Grade: B)
Disambiguation: 4/5

Tools are generally distinct, but 'ask_pipeworx' overlaps with 'confluence_search' and 'discover_tools', as all can retrieve information. However, 'ask_pipeworx' is a natural language interface that abstracts over other tools, which could cause confusion.

Naming Consistency: 2/5

Tool names are inconsistent: some use 'confluence_*' prefix, while others like 'ask_pipeworx', 'discover_tools', 'forget', 'recall', 'remember' do not follow any prefix pattern, mixing verbs and nouns irregularly.

Tool Count: 3/5

10 tools is a reasonable number, but the server mixes Confluence-specific tools with general memory and tool discovery functions, making it feel slightly overloaded for a single server.

Completeness: 3/5

Confluence CRUD is missing update and delete operations. The memory and discovery tools are complete for their purpose, but the Confluence surface has notable gaps.

Available Tools

10 tools
ask_pipeworx (Grade: A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)
question (required): Your question or request in natural language
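
As a sketch, a call to this tool from inside the connection example near the top of the page, with the question taken from the tool's own examples:

```python
# Assumes an initialized ClientSession named `session`.
result = await session.call_tool(
    "ask_pipeworx",
    {"question": "What is the US trade deficit with China?"},
)
```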
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden for behavioral traits. It discloses that Pipeworx chooses the tool and fills arguments, indicating some delegation of decision-making, but it does not detail data sources, privacy implications, or limits on question complexity.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is three sentences plus examples. Front-loaded with core function, no redundancy, examples are concrete. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given a single required parameter, no output schema, and no annotations, the description is adequate. It explains the abstraction and gives examples, but could be more complete about what kinds of questions are out of scope or what happens if the best source fails.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds value by explaining that the parameter is a natural language question, with examples showing typical usage. This goes beyond the schema's 'Your question or request in natural language' by illustrating scope.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: answering questions in plain English by selecting the best data source. It distinguishes itself from other tools by acting as an abstraction layer over schemas and tool browsing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes examples of appropriate questions, but does not specify when not to use this tool or mention alternatives. Given that the sibling tools are mostly Confluence and memory operations, the tool seems broadly applicable, but no explicit usage boundaries are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

confluence_create_page (Grade: A)

Create a new Confluence page with title and content. Specify parent page ID or space key (e.g., "ENG"). Returns page ID and URL.

Parameters (JSON Schema)
body (required): Page body content in Confluence storage format (XHTML)
title (required): Page title
status (optional): Page status: "current" (published) or "draft". Default: "current"
spaceId (required): Space ID to create the page in
parentId (optional): Parent page ID, for nesting
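
A sketch of a call with placeholder values; `status` is passed explicitly even though it defaults to "current":

```python
# Placeholder title, body, and space ID; body uses Confluence storage format (XHTML).
result = await session.call_tool(
    "confluence_create_page",
    {
        "title": "Team Onboarding",
        "body": "<p>Welcome to the team.</p>",
        "spaceId": "123456",  # placeholder space ID
        "status": "draft",    # omit to publish as "current"
    },
)
```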
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description carries the full burden. It states the return values (ID, title, URL) but does not disclose side effects (e.g., notifications, permissions required) or behavioral traits like synchronous creation. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences: first states purpose, second states output. It is concise and front-loaded, but could be slightly more structured by including parameter hints. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the moderate complexity (5 parameters, 3 required, no output schema), the description is adequate but not complete. It lacks details on return format beyond ID/title/URL, and does not mention error conditions or rate limits. The schema covers parameter descriptions, so the description is not insufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds no extra meaning beyond the schema; it does not explain the 'body' format (storage format) or optional parameters like 'status' default. The return description hints at the output but not parameter details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Create') and resource ('Confluence page'), and states what the tool returns ('created page ID, title, and URL'). It clearly distinguishes from sibling tools like 'confluence_get_page' (retrieval) and 'confluence_search' (search).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide explicit guidance on when to use this tool versus alternatives. It does not mention prerequisites (e.g., space must exist) or when not to use it. However, the context of creating a page is clear, and sibling names imply distinct purposes.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

confluence_get_page (Grade: A)

Get full content of a Confluence page by ID. Returns title, body content, status, version, and space info.

Parameters (JSON Schema)
page_id (required): Page ID
body_format (optional): Body format to return: "storage" (HTML) or "atlas_doc_format" (ADF). Default: "storage"
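
An illustrative call with a placeholder page ID; `body_format` could be omitted, since "storage" is the default:

```python
# Placeholder page ID; returns title, body, status, version, and space info.
result = await session.call_tool(
    "confluence_get_page",
    {"page_id": "987654", "body_format": "storage"},
)
```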
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description must cover behavior. It states it returns specific fields and mentions body_format parameter, but does not disclose potential errors, rate limits, or authentication needs. Acceptable for a simple read operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences, no fluff. First sentence states purpose, second lists key return fields. Perfectly front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given low complexity (2 params, no output schema, no nested objects), the description is complete enough. It explains what the tool returns and mentions the optional body_format parameter. A brief note on possible error conditions would push to 5.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters. The description adds no extra meaning beyond the schema; it only repeats that body_format controls return format. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the verb 'get' and resource 'single Confluence page by ID', and lists the exact returned fields (title, body content, status, version, space info). Distinct from siblings like confluence_create_page and confluence_list_pages.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implied usage: use when you need details of a specific page by ID. No explicit guidance on when not to use or alternatives (e.g., for listing pages use confluence_list_pages, for search use confluence_search).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

confluence_list_pages (Grade: B)

List all pages in a Confluence space. Returns page ID, title, status, and version. Specify space key (e.g., "ENG", "SALES").

Parameters (JSON Schema)
sort (optional): Sort order: "created-date", "-created-date", "modified-date", "-modified-date", "title". Default: "-modified-date"
limit (optional): Number of pages to return (default 25, max 100)
space_id (required): Space ID to list pages from
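
An illustrative call with a placeholder space ID, spelling out the documented default sort:

```python
# "-modified-date" (most recently modified first) is the documented default.
result = await session.call_tool(
    "confluence_list_pages",
    {"space_id": "123456", "sort": "-modified-date", "limit": 25},
)
```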
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the burden. It mentions return fields (ID, title, status, version) which is helpful, but does not disclose pagination behavior, sorting details beyond what's in schema, or whether the tool is read-only. The description adds moderate value but lacks deeper behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three short sentences that state the core purpose, the return fields, and how to specify the space. It is front-loaded and efficient, with no superfluous text. Could be slightly improved by front-loading the return fields more explicitly, but it is overall well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 3 parameters with full schema coverage and no output schema, the description provides the basic purpose but lacks completeness. It doesn't mention pagination behavior, error conditions, or permission requirements. It is minimally adequate but leaves gaps for an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters (space_id, sort, limit). The description does not add new semantics beyond the schema. Baseline score of 3 is appropriate as the description does not compensate beyond what's already in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'list' and resource 'pages in a Confluence space', and lists the return fields (page ID, title, status, version). It distinguishes from sibling tools like 'confluence_get_page' (single page) and 'confluence_search' (search across content) but does not explicitly differentiate from 'confluence_list_spaces' (lists spaces, not pages).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use when you need to list pages in a specific space (via space_id). It does not provide explicit when-to-use vs alternatives, such as when to use 'confluence_search' instead for query-based retrieval. No exclusions or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

confluence_list_spaces (Grade: A)

List all Confluence spaces in your instance. Returns space ID, key, name, type, and status. Use to discover documentation areas.

Parameters (JSON Schema)
type (optional): Filter by space type: "global" or "personal"
limit (optional): Number of spaces to return (default 25, max 100)
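
An illustrative call that filters to shared ("global") spaces:

```python
# Both parameters are optional; omitting them returns the first 25 spaces.
result = await session.call_tool(
    "confluence_list_spaces",
    {"type": "global", "limit": 50},
)
```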
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description carries the burden. It states that it returns ID, key, name, type, and status, but does not disclose pagination behavior (only limit param), rate limits, or whether it returns all spaces or just accessible ones. It is minimally transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with the action and output, no redundant words. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given it is a simple list tool with no output schema, the description covers the purpose and return fields adequately. However, it lacks details on edge cases (e.g., empty result, error handling) and does not mention if type filtering is required or optional.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description does not add meaning beyond the schema; it repeats the return fields but does not elaborate on parameter usage or behavior beyond what schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (list), resource (Confluence spaces), and what is returned (space ID, key, name, type, and status). It is specific and distinguishes from siblings like 'confluence_list_pages' which lists pages, not spaces.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies basic usage but does not provide guidance on when to use this tool vs alternatives. For example, it doesn't mention that for more advanced search or filtering, one might use 'confluence_search' instead.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (Grade: A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)
limit (optional): Maximum number of tools to return (default 20, max 50)
query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
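
A sketch of a call whose query echoes one of the schema's own examples:

```python
# Returns the most relevant tools, with names and descriptions.
result = await session.call_tool(
    "discover_tools",
    {"query": "find trade data between countries", "limit": 10},
)
```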
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It clearly states that the tool returns the most relevant tools with names and descriptions, and that it searches by describing what you need. While it doesn't mention performance or side effects, it is transparent about the core behavior. A small deduction for not explaining what happens if the query is vague or matches nothing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, each valuable: first states the action, second describes the return, third gives usage guidance. No wasted words. Well structured and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that the tool is simple (2 parameters, no nested objects, no output schema), the description is largely complete. It covers purpose, input format, and usage context. However, it does not mention the output format or that it may return empty results, which would be helpful for an AI agent. Slight deduction for missing that detail.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the description adds value beyond schema: it explains that the query should be a natural language description and gives examples. The limit parameter is also described in schema, but the description reinforces its purpose. Could be improved by noting that limit defaults to 20.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it searches the Pipeworx tool catalog using natural language and returns relevant tools with names and descriptions. Differentiates from siblings as a discovery/search tool, distinct from query tools like ask_pipeworx and knowledge tools like recall.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly instructs the agent to call this tool first when 500+ tools are available and it needs to find the right ones. Provides strong usage context, though it does not mention when not to use it or name alternatives. The directive is clear and actionable.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget (Grade: C)

Delete a stored memory by key.

Parameters (JSON Schema)
key (required): Memory key to delete
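
A sketch of a call; the key name is illustrative (it reuses an example from the `remember` schema below):

```python
# Deletes the memory stored under this key.
result = await session.call_tool("forget", {"key": "target_ticker"})
```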
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. It states deletion but does not mention if it is irreversible, whether confirmation is needed, or what happens if the key does not exist. Lacks details on safety or side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, zero waste, front-loaded with verb and object. Every word serves a purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool is simple (one required parameter, no output schema), but the description omits behavioral details (e.g., idempotency, error handling) that would be useful for a deletion tool, and it does not contrast itself with the sibling memory tools (recall, remember).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (the schema fully describes the 'key' parameter). The description does not add new meaning beyond the schema, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Delete'), resource ('stored memory'), and scope ('by key'). It distinguishes from siblings like 'recall' and 'remember' by specifying deletion rather than retrieval or storage.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. It does not mention prerequisites (e.g., memory must exist), nor does it contrast with other memory operations like 'recall' or 'remember'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall (Grade: A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)
key (optional): Memory key to retrieve (omit to list all keys)
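
A sketch showing both invocation modes; the key name is illustrative:

```python
# Fetch one stored memory by key, or omit `key` to list all stored keys.
one_memory = await session.call_tool("recall", {"key": "target_ticker"})
all_keys = await session.call_tool("recall", {})
```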
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description indicates this is a retrieval operation (read-only), which is consistent with the absence of destructive annotations. It adds context about cross-session persistence ('saved earlier in the session or in previous sessions'). No contradictions with annotations (none provided).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with the core functionality. No redundant information. Every word adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple tool (1 optional parameter, no output schema), the description is close to complete. It explains both invocation modes (with/without key) and the cross-session persistence; little else is needed for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a single parameter 'key' well-described in the schema. The description adds value by explaining the behavior when key is omitted (list all memories), which is not in the schema. This clarifies the optional nature beyond the required array.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves a memory by key or lists all memories when key is omitted. It specifies the resource ('memory') and the action ('retrieve' or 'list'). This distinguishes it from siblings like 'remember' (store) and 'forget' (delete).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains when to use the tool: 'to retrieve context you saved earlier'. It implies that omitting key lists all memories, but does not explicitly state when not to use it or compare to alternatives. However, given sibling names, the context is clear enough.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember (Grade: A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)
key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
value (required): Value to store (any text: findings, addresses, preferences, notes)
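
A sketch of a call whose key and value follow the schema's own examples:

```python
# Stores a key-value pair in session memory; values are illustrative.
result = await session.call_tool(
    "remember",
    {"key": "target_ticker", "value": "AAPL"},
)
```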
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Since no annotations are provided, the description carries the full burden. It discloses that the tool stores data in session memory, notes persistence differences for authenticated vs. anonymous users, and implies the data can be retrieved later. No behavioral contradictions are present.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences long, front-loaded with the core action, and every sentence adds value: what it does, when to use it, and persistence behavior. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that there is no output schema and no nested objects, the description adequately covers what the tool does and its persistence behavior. However, it does not specify whether the tool overwrites existing keys or returns any confirmation, which would be helpful for completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema covers 100% of parameters with descriptions. The description adds context by explaining the purpose of key-value pairs (e.g., subject_property, target_ticker) and the nature of the value (any text). This goes beyond the schema's parameter descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'store' and resource 'key-value pair in session memory'. It explicitly mentions the use case: saving intermediate findings, user preferences, or context across tool calls, which distinguishes it from siblings like 'forget' and 'recall'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains when to use this tool (to save intermediate findings, preferences, context) and provides persistence context (authenticated users get persistent memory; anonymous sessions last 24 hours). However, it does not explicitly say when not to use it or mention alternatives among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
