
Server Details

ClickUp MCP — wraps the ClickUp REST API v2 (BYO API key)

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-clickup
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 3.7/5 across 10 of 10 tools scored.

Server Coherence: A
Disambiguation: 3/5

The ClickUp tools (clickup_create_task, clickup_get_task, etc.) are clearly distinct, but ask_pipeworx and discover_tools overlap in purpose (both help find tools/info). remember/recall/forget are memory tools distinct from the rest, but within ClickUp tools there's no confusion.

Naming Consistency: 3/5

ClickUp tools follow a consistent 'clickup_verb_noun' pattern, but the memory tools (remember, recall, forget) and discover_tools/ask_pipeworx break the pattern. Mixed conventions reduce consistency.

Tool Count: 4/5

10 tools is a reasonable count for a project management server. The set covers core operations without being overwhelming.

Completeness: 4/5

The ClickUp tools provide create, read, list, and some query capabilities. Missing update/delete for tasks and lists, but the addition of memory and discovery tools adds value beyond basic CRUD.

Available Tools

10 tools
ask_pipeworx: A

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)
question (required): Your question or request in natural language

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description reveals that the tool picks the right tool and fills arguments, providing insight into its internal behavior. Since no annotations are provided, this adds necessary transparency about its autonomous nature.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, using three short sentences plus examples. It is front-loaded with the purpose, and every sentence contributes meaningful information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the single parameter and lack of output schema, the description is complete enough. It explains the tool's autonomous selection behavior and provides examples. A minor gap is not specifying what happens if the question cannot be answered.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the description adds context by explaining the parameter as 'Your question or request in natural language' and providing examples. This adds value beyond the schema, but the schema already adequately describes the parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool answers natural language questions by selecting the best data source, with specific examples like 'What is the US trade deficit with China?'. This distinguishes it from sibling tools like clickup_create_task or discover_tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'No need to browse tools or learn schemas' and gives examples, implying when to use it. However, it does not explicitly state when not to use it or mention alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

clickup_create_task: A

Create a new task in a ClickUp list. Provide list ID, task name, and optionally priority and assignee. Returns task ID, name, status, and URL.

Parameters (JSON Schema)
name (required): Task name
_apiKey (required): ClickUp API token
list_id (required): List ID to create the task in
due_date (optional): Due date as Unix timestamp in milliseconds
priority (optional): 1 (urgent), 2 (high), 3 (normal), 4 (low)
description (optional): Task description (markdown supported)
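The parameters above can be exercised with a plain MCP tools/call request. A minimal sketch, assuming the standard MCP JSON-RPC envelope; the token, list ID, and field values are placeholders:

```python
import json

# Hypothetical MCP "tools/call" request for clickup_create_task.
# _apiKey, list_id, and the field values below are placeholders.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "clickup_create_task",
        "arguments": {
            "_apiKey": "pk_XXXX",       # ClickUp API token (BYO key)
            "list_id": "123456",        # target list
            "name": "Draft Q3 report",
            "priority": 2,              # 2 = high
            "due_date": 1767225600000,  # Unix timestamp in milliseconds
        },
    },
}
print(json.dumps(request, indent=2))
```

Only name, _apiKey, and list_id are required; the optional fields can be dropped without changing the envelope.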
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided; description clarifies it creates a task (mutation) and returns specific fields. However, lacks details on authentication requirements or side effects beyond creation.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, no redundancy: the first states the action, the second the required inputs, the third the return fields. Efficient.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and 6 parameters, description covers basic return fields but misses details like whether due_date is optional, expected response shape, or error handling. Adequate but not exhaustive.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers 100% of parameters with descriptions. Description adds no additional meaning beyond schema, e.g., does not explain date format or priority enum values beyond what schema already provides.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states verb 'Create', resource 'task in a ClickUp list', and explicitly lists return values (ID, name, status, URL). Distinguishes from sibling tools like 'clickup_get_task' which retrieves tasks.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implied usage from description but no explicit guidance on when to use vs. alternatives like clickup_list_tasks. No exclusions or prerequisites mentioned.

clickup_get_task: A

Fetch full task details including name, description, status, priority, assignees, tags, and time tracking. Provide task ID (e.g., "9hz6c").

Parameters (JSON Schema)
_apiKey (required): ClickUp API token
task_id (required): Task ID
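Since the server wraps the ClickUp REST API v2, the tool presumably forwards to ClickUp's GET task endpoint. A sketch of the equivalent direct request, built but not sent; the token and task ID are placeholders:

```python
from urllib.request import Request

# Equivalent direct ClickUp REST API v2 call (assumed; the server's
# actual forwarding is not documented here). Token is a placeholder.
task_id = "9hz6c"
req = Request(
    f"https://api.clickup.com/api/v2/task/{task_id}",
    headers={"Authorization": "pk_XXXX"},  # ClickUp personal token
    method="GET",
)
print(req.full_url)
```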
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses that the tool returns full task details and lists the included fields, which is useful behavioral context. Since no annotations are provided, the description carries the full burden, and it adequately describes the read-only nature and scope of data returned.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero waste. The first states the action and the returned fields, and the second gives the required identifier with an example. Every word is functional.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (two simple parameters, no nested objects, no output schema), the description is complete enough. It covers what the tool does and what it returns. No additional context is necessary for an agent to use it correctly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description does not add meaning beyond what the schema provides for '_apiKey' and 'task_id'. It mentions returning details but does not elaborate on parameter usage.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses the specific verb 'Fetch' applied to a single ClickUp task by ID. It distinguishes the tool from siblings like 'clickup_list_tasks' (which returns multiple tasks) and 'clickup_create_task' (which creates). The details listed (name, description, status, etc.) clarify the return content.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. For example, it doesn't mention that this tool is for retrieving a specific task by ID, while 'clickup_list_tasks' is for querying tasks in a list. The description implies usage but lacks explicit when-to-use or when-not-to-use guidance.

clickup_list_folders: B

List all folders in a ClickUp space. Provide space ID (e.g., "789"). Returns folder ID, name, and list count.

Parameters (JSON Schema)
_apiKey (required): ClickUp API token
space_id (required): Space ID

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It reveals that it returns folder ID, name, and list count, but does not disclose any side effects, error conditions, or pagination behavior. Adequate for a simple read-only list operation.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences with no wasted words. Information is front-loaded. Could potentially be more structured, but efficient.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool is a simple list operation with two well-documented parameters and no output schema, the description is mostly complete. However, it lacks details on output format or pagination, which would be helpful for agents.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so parameters are already documented. The description does not add additional meaning beyond what the schema provides, which is acceptable given high coverage.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it lists folders in a ClickUp space, and specifies the returned fields (ID, name, list count). It distinguishes from siblings like clickup_list_tasks and clickup_list_spaces by focusing on folders.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implied usage for listing folders given a space_id, but no explicit guidance on when to use this versus other list tools (e.g., when to list folders vs tasks vs spaces) or when not to use it.

clickup_list_spaces: B

List all spaces in your ClickUp workspace. Returns space ID, name, and status.

Parameters (JSON Schema)
_apiKey (required): ClickUp API token
team_id (required): Team/workspace ID

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description carries full burden. It states it is a read operation ('List') and provides output fields (space ID, name, status info). However, it does not mention pagination, rate limits, or any authorization requirements beyond the API key.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two concise sentences that efficiently convey purpose and output. It is front-loaded with the action and resource.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 params, no nested objects, no output schema), the description is mostly complete. However, it could mention that spaces are top-level containers and that team_id is required. The lack of output schema means description could clarify return format.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description adds no additional meaning beyond the schema. It does not explain what team_id represents or how to obtain it, but the schema's description is sufficient.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it lists spaces in a ClickUp team/workspace and returns space ID, name, and status info. However, it does not explicitly distinguish itself from sibling tools like clickup_list_folders or clickup_list_tasks, which might be useful for disambiguation.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. For instance, if the agent wants to list folders or tasks, it might need to use different tools, but no such distinction is made.

clickup_list_tasks: B

List all tasks in a ClickUp list. Returns task ID, name, status, priority, assignees, due date, and URL. Provide list ID (e.g., "123456").

Parameters (JSON Schema)
page (optional): Page number (0-indexed, default 0)
_apiKey (required): ClickUp API token
list_id (required): List ID to fetch tasks from
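Because the page parameter is 0-indexed and no page size is documented, a client typically loops until an empty page comes back. A sketch with a stand-in call_tool function (hypothetical; substitute your MCP client's actual call), using canned data so it is self-contained:

```python
def call_tool(name, arguments):
    # Stand-in for a real MCP client call; returns canned pages so
    # the paging loop below runs without a server.
    pages = {0: [{"id": "1"}, {"id": "2"}], 1: [{"id": "3"}], 2: []}
    return pages[arguments["page"]]

def all_tasks(list_id, api_key):
    tasks, page = [], 0
    while True:
        batch = call_tool(
            "clickup_list_tasks",
            {"_apiKey": api_key, "list_id": list_id, "page": page},
        )
        if not batch:  # empty page signals the end
            return tasks
        tasks.extend(batch)
        page += 1

print(len(all_tasks("123456", "pk_XXXX")))  # 3 with the canned data above
```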
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Description mentions returned fields but does not disclose pagination behavior (page parameter is described in schema), rate limits, or authentication requirements. No annotations provided, so description carries full burden; it is adequate but minimal.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences front-load purpose and output fields. No wasted words; could optionally include usage notes without sacrificing conciseness.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, description partially compensates by listing return fields. However, lacks pagination details, filtering options, or error handling info. Adequate for a straightforward list endpoint but could be more complete.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. Description adds no extra meaning beyond schema for page, _apiKey, and list_id; it correctly omits redundancy.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it lists tasks in a ClickUp list and enumerates returned fields (task ID, name, status, etc.). Distinguishes from sibling tools like clickup_create_task and clickup_get_task by focusing on listing, though not explicitly differentiating from other list tools like clickup_list_folders.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs alternatives. Does not mention that it returns tasks from a single list, nor when to use clickup_get_task for a specific task. Lacks usage context such as filtering or sorting capabilities.

discover_tools: A

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)
limit (optional): Maximum number of tools to return (default 20, max 50)
query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
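The discover-then-call flow the description implies can be sketched as a pair of tool-call argument payloads. The query, limit, and follow-up values are illustrative, not prescribed by the server:

```python
# Step 1: hypothetical discovery request; query and limit are illustrative.
discover = {
    "name": "discover_tools",
    "arguments": {"query": "create a task in my project tracker", "limit": 5},
}

# Step 2: once discovery surfaces e.g. clickup_create_task, call it directly.
follow_up = {
    "name": "clickup_create_task",
    "arguments": {"_apiKey": "pk_XXXX", "list_id": "123456", "name": "New task"},
}

print(discover["arguments"]["limit"] <= 50)  # stays within the documented max
```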
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so the description must fully disclose behavior. It states that the tool returns the 'most relevant tools with names and descriptions' but does not say whether it is read-only or has any side effects. With nothing to contradict and basic behavioral information present, a score of 3 is appropriate: adequate but not comprehensive.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences, each serving a purpose: what it does, what it returns, and when to call it. No extraneous words. Slightly more verbose than strictly necessary, but the instruction 'Call this FIRST' earns its place. Score 4 for being efficient but not maximally concise.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simple search functionality, 2 parameters, no output schema, and no annotations, the description covers the core purpose and usage context. It explains when to use it (first, with many tools). It does not describe the output format (e.g., whether it includes tool IDs or just names), but the tool is straightforward and the description is sufficient for an agent to decide when to call it. A score of 4 reflects good completeness for this complexity level.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%: both parameters ('query' and 'limit') have descriptions in the schema. The description adds no extra parameter-level meaning beyond the schema. Baseline 3 is appropriate because the schema already handles parameter documentation adequately.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb 'Search' and resource 'Pipeworx tool catalog', clearly stating the tool's function: to find relevant tools by natural language description. It distinguishes itself from sibling tools (which are about ClickUp, memory, or AI chat) by being the discovery/search tool for the tool catalog.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This provides clear guidance on when to use it (before other tools) and the context (large tool set). No sibling alternatives are mentioned, but the call-to-action is strong and unambiguous.

forget: A

Delete a stored memory by key.

Parameters (JSON Schema)
key (required): Memory key to delete

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description carries full burden. It correctly describes a destructive action (delete) but does not disclose consequences (e.g., irreversible? requires auth?). The description is accurate but lacks depth.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no waste. Front-loaded with verb and resource. Ideal length for a simple tool.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool is simple (1 param, no output schema). The description fully captures the purpose and parameter. Additional context like auth requirements or side effects would be nice but not essential given the tool's trivial nature.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with one parameter 'key'. The schema's description 'Memory key to delete' already explains it. The tool description adds no extra meaning, but given full coverage, the baseline is 3. However, the description clarifies the action context (delete), earning a 4.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a clear verb 'Delete' with a specific resource 'stored memory by key'. It uniquely identifies the action and distinguishes from siblings like 'recall' (read) and 'remember' (create).

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description states what it does but provides no guidance on when to use it vs alternatives. Sibling tools like 'recall' and 'remember' suggest a memory management context, but no explicit exclusions or prerequisites are given.

recall: A

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)
key (optional): Memory key to retrieve (omit to list all keys)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses the dual behavior (retrieve by key or list all) and mentions persistence across sessions, but does not detail what happens if key is missing, or any error conditions.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences that front-load the core purpose. Efficient and clear.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple retrieval tool with no required parameters and no output schema, the description sufficiently explains behavior. It mentions session persistence which adds valuable context.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description adds the detail that omitting key lists all memories, which is already implied by optionality in schema. No additional parameter info needed.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool retrieves a memory by key or lists all memories when key is omitted, which distinguishes it from 'remember' and 'forget' siblings.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It explicitly says when to use the tool ('retrieve context you saved earlier') and implies when not to (omit key to list all), but does not explicitly mention alternatives or when not to use it.

remember: A

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)
key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
value (required): Value to store (any text: findings, addresses, preferences, notes)
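Together with recall and forget, this tool behaves like a small key-value store. A local stand-in model of the round trip (a plain dict, not the server's actual storage or persistence rules):

```python
# Local stand-in for the server's memory store; the real backend persists
# per session (or per account for authenticated users).
memory = {}

def remember(key, value):
    memory[key] = value

def recall(key=None):
    # Omitting the key lists all stored keys, mirroring the tool's behavior.
    return list(memory) if key is None else memory.get(key)

def forget(key):
    memory.pop(key, None)

remember("target_ticker", "AAPL")
remember("user_preference", "metric units")
print(recall())                 # ['target_ticker', 'user_preference']
forget("target_ticker")
print(recall("target_ticker"))  # None
```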
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Description adds behavioral context beyond what annotations provide (none here): authenticated users get persistent memory, anonymous sessions last 24 hours. No contradictions with annotations.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, front-loaded with action and use cases. Efficient and informative, though could be slightly more concise.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and only 2 parameters, the description is complete enough: explains persistence behavior, use cases, and key-value nature. No missing context for typical use.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so description adds little beyond schema. It gives example keys like 'subject_property' and clarifies value as 'any text', but these are also present in schema descriptions.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states 'Store a key-value pair in your session memory', with specific verb 'store' and resource 'session memory'. It distinguishes itself from siblings (recall, forget) by explicitly mentioning memory persistence and use cases.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit use cases: 'save intermediate findings, user preferences, or context across tool calls'. However, it does not explicitly state when not to use or compare to siblings like recall or forget.
