Server Details

Monday.com MCP — wraps the Monday.com GraphQL API (BYO API key)

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-monday
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade A)

Average 3.9/5 across 10 of 10 tools scored. Lowest: 3.2/5.

Server Coherence (Grade A)
Disambiguation: 4/5

Most tools have clearly distinct purposes: Monday.com CRUD operations, memory storage, and a meta-tool for discovering other tools. However, `ask_pipeworx` overlaps conceptually with `discover_tools` as both involve querying a catalog, though they serve different use cases.

Naming Consistency: 3/5

Monday tools follow a consistent 'monday_verb_noun' pattern, but memory tools use plain verbs ('remember', 'recall', 'forget') and meta-tools use 'ask_pipeworx' and 'discover_tools', creating mixed conventions.

Tool Count: 4/5

10 tools is reasonable for a server combining Monday.com integration with memory and meta-tool discovery. The count is well-scoped, though the meta-tools suggest a larger underlying catalog that is not directly exposed.

Completeness: 3/5

Monday.com tools provide basic CRUD (list, get, create) and search for items and boards, but lack update and delete operations, which may cause dead ends. Memory tools are minimal but complete for key-value storage. The meta-tools are an interesting addition but not fully integrated with the Monday tools.

Available Tools

10 tools
ask_pipeworx (Grade A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema):
question (required): Your question or request in natural language
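As a concrete sketch, a client call to this tool might look like the following. The JSON-RPC envelope follows the standard MCP `tools/call` shape; the question text is an illustrative example taken from the tool description.

```python
import json

# Hypothetical MCP "tools/call" request for ask_pipeworx.
# Only one argument, "question", is required by the schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {"question": "What is the US trade deficit with China?"},
    },
}
print(json.dumps(request, indent=2))
```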
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so the description must disclose behavior. It states the tool chooses data sources and fills arguments, but does not mention limitations (e.g., latency, potential inaccuracies, or which specific data sources are used). Score 3 is baseline for adequate but incomplete disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is three sentences: purpose, how it works, and examples. Every sentence adds value, no fluff. Front-loaded with the key action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple input (one free-text parameter) and no output schema, the description sufficiently covers what the agent needs to know to use the tool. Could benefit from clarifying that the tool returns answers (not raw data) or that it may call other tools, but overall complete for the use case.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with only one parameter 'question' described as 'Your question or request in natural language'. The description adds context like 'plain English' and examples, which aligns with the schema. Baseline 3 since schema already covers meaning, but description adds value through examples.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: answering plain English questions by selecting the best data source and filling arguments. The verb 'ask' and resource 'Pipeworx' are specific, and examples illustrate typical use cases.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly tells when to use this tool: when you have a natural language question and want the system to handle tool selection. It contrasts with siblings by saying 'no need to browse tools or learn schemas', implying alternatives require manual tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (Grade A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema):
limit (optional): Maximum number of tools to return (default 20, max 50)
query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations were provided, so the description bears full responsibility. It clearly states that the tool searches and returns tools with names and descriptions, implying a read-only, non-destructive operation. The mention of '500+ tools' sets expectations for scale. However, it does not disclose if there are any side effects (unlikely) or performance considerations, but the clarity is sufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise (three short sentences), front-loads the key action, and every sentence provides value. No redundancy or filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simple nature (search), no output schema, and rich schema, the description is complete. It explains when to use it, what it does, and how to phrase queries. No gaps remain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema covers both parameters (query and limit) with descriptions. The description adds context by clarifying the purpose of the query ('natural language description') and the default and max limit. Since schema coverage is 100%, the description adds modest but useful value beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: searching a tool catalog by natural language description to find relevant tools. It specifies the action ('search'), the resource ('Pipeworx tool catalog'), and the method ('describing what you need'), effectively distinguishing it from sibling tools like 'ask_pipeworx' or the monday.com tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' It implies an ordering relative to other tools, which is highly actionable.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget (Grade A)

Delete a stored memory by key.

Parameters (JSON Schema):
key (required): Memory key to delete
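A minimal sketch of a call, assuming the standard MCP `tools/call` envelope; the key value is borrowed from the example keys documented on the sibling `remember` tool:

```python
# Hypothetical tools/call payload for forget. Deleting by key is all the
# schema requires; whether deletion is reversible is not documented.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "forget", "arguments": {"key": "target_ticker"}},
}
```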
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It correctly states the action (delete) and the parameter (key), but does not disclose behavioral traits like whether deletion is irreversible, requires specific permissions, or affects other data. For a simple key-based deletion, this is adequate but not exceptional.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that conveys the core purpose without any fluff. Every word is necessary, and it is immediately understandable.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one required parameter, no output schema), the description is complete enough. It specifies what the tool does and what the parameter represents. No additional context is needed for correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the description adds no new information about the parameter beyond what the schema already provides. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'delete' and the resource 'stored memory by key', making the tool's purpose evident. It distinguishes itself from siblings like 'remember' (store) and 'recall' (retrieve) by specifying deletion, but doesn't explicitly contrast them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when needing to delete a memory by its key, but provides no guidance on when not to use it or alternatives. Given that sibling tools include 'remember' and 'recall', a brief note on when to use those instead would improve clarity.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

monday_create_item (Grade B)

Create a new item in a board (e.g., board ID "12345", name "New Task"). Returns created item ID and name.

Parameters (JSON Schema):
_apiKey (required): Monday.com API token
board_id (required): Board ID
group_id (required): Group ID within the board
item_name (required): Name for the new item
column_values (optional): JSON string of column values to set (e.g., {"status":"Working on it"})
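One detail worth sketching: per the schema, column_values must be a JSON string, not a nested object. A hypothetical argument set (token, group ID, and item name are placeholders) might look like:

```python
import json

# Hypothetical arguments for monday_create_item. Note that column_values
# is serialized to a JSON *string*, matching the schema's example.
arguments = {
    "_apiKey": "YOUR_MONDAY_API_TOKEN",  # placeholder; never hard-code real tokens
    "board_id": "12345",
    "group_id": "topics",                # illustrative group ID
    "item_name": "New Task",
    "column_values": json.dumps({"status": "Working on it"}),
}
```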
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description carries full burden. It states that the tool creates an item and returns ID and name, which is appropriate. However, it does not disclose side effects (e.g., is creation idempotent? any rate limits?), and does not mention authentication or permission requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two short sentences, front-loaded with the main action. Every sentence adds value. Could be slightly more structured but overall concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a create tool with 5 parameters and no output schema, the description covers the basic purpose and return value. However, it lacks information about the format or content of the returned ID and name, and does not mention any error conditions or prerequisites.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description does not add any additional meaning beyond the schema; all parameters are already well-documented in the schema itself. No extra context provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Create') and the resource ('a new item in a board'). It also specifies the return value ('Returns created item ID and name'). However, it does not differentiate itself from siblings like 'monday_get_board' or 'monday_list_items', which have distinct purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives (e.g., when to create vs. update vs. list). The description implies creating a new item, but does not state prerequisites or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

monday_get_board (Grade B)

Get board structure and metadata (e.g., board ID "12345"). Returns name, columns, groups, and item count.

Parameters (JSON Schema):
_apiKey (required): Monday.com API token
board_id (required): Board ID
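As a sketch, only the API token and a board ID are needed for this call (envelope per the MCP `tools/call` convention; values are placeholders):

```python
# Hypothetical tools/call payload for monday_get_board.
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "monday_get_board",
        "arguments": {"_apiKey": "YOUR_MONDAY_API_TOKEN", "board_id": "12345"},
    },
}
```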
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so description carries full burden. It describes the operation (get) and return fields (name, columns, groups, item count) but does not mention potential errors, rate limits, or whether board_id must be numeric or string. Adequate but not thorough.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Concise single sentence with front-loaded purpose. No unnecessary words, though could be slightly expanded for usage guidance.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given simple tool with 2 params and no output schema, the description is adequate but lacks usage guidelines and behavioral details like whether the board must exist. Meets minimum viability.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. Description does not add extra meaning beyond schema. _apiKey and board_id are self-explanatory from schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves a single Monday.com board by ID and lists returned details. It distinguishes itself from siblings like monday_list_boards which likely return multiple boards.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like monday_list_boards or monday_list_items. No context on prerequisites or edge cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

monday_list_boards (Grade B)

List all boards in your account. Returns board ID, name, state, and item count to discover available boards.

Parameters (JSON Schema):
limit (optional): Number of boards to return (default 20, max 50)
_apiKey (required): Monday.com API token
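A quick sketch of the two argument shapes the schema allows: omitting limit to accept the documented default of 20, or passing it explicitly up to the maximum of 50 (the token is a placeholder):

```python
# Hypothetical argument sets for monday_list_boards.
args_default = {"_apiKey": "YOUR_MONDAY_API_TOKEN"}            # limit defaults to 20
args_capped = {"_apiKey": "YOUR_MONDAY_API_TOKEN", "limit": 50}  # schema max is 50
```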
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description carries the burden. It states that the tool lists all boards and returns specific fields, which is straightforward. However, it does not mention pagination or rate limits, which are common concerns.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences and front-loads the purpose. No unnecessary words, though the closing clause 'to discover available boards' could be trimmed.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity, the description is largely adequate. However, it lacks details on sorting, pagination, or how to handle large numbers of boards. The absence of output schema is not compensated by description.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% for both parameters (limit and _apiKey), so the baseline is 3. The description adds no additional meaning beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses the specific verb 'list' with the resource 'boards' and mentions the data returned (ID, name, state, item count). It distinguishes from siblings like monday_get_board (singular board) and monday_list_items (items).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies this is for listing boards but does not explicitly state when to use it versus alternatives like monday_get_board or when not to use it. No guidance on filtering or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

monday_list_items (Grade A)

List items in a board (e.g., board ID "12345"). Returns item ID, name, group, column values, and created date.

Parameters (JSON Schema):
limit (optional): Number of items to return (default 20, max 50)
_apiKey (required): Monday.com API token
board_id (required): Board ID
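A sketch of a call combining the required board_id with the optional limit (MCP `tools/call` envelope assumed; values are placeholders):

```python
# Hypothetical tools/call payload for monday_list_items.
request = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "tools/call",
    "params": {
        "name": "monday_list_items",
        "arguments": {
            "_apiKey": "YOUR_MONDAY_API_TOKEN",
            "board_id": "12345",
            "limit": 20,  # explicit, matching the schema's default
        },
    },
}
```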
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so description carries full burden. It states the return fields (item ID, name, group, column values, created date) and implies a read operation, which is appropriate. However, it doesn't disclose any side effects, pagination behavior (limit parameter is mentioned in schema but not described), or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two short sentences, front-loaded with the action. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool is a list operation with no output schema and 3 parameters. The description mentions return fields but omits details on pagination, sorting, or filtering. Given the absence of annotations and output schema, more context (e.g., default limit, pagination) would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description does not add meaning beyond schema; it lists return fields but no parameter details. Since schema already documents parameters adequately, this is acceptable but no extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states 'List items in a Monday.com board' and specifies the return fields, which clearly identifies the verb (list) and resource (items in a board). However, it doesn't explicitly distinguish from sibling tool 'monday_search_items', which might also list items, though the sibling name suggests search capabilities.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for listing all items with specific fields, but no explicit guidance on when to use this versus siblings like monday_search_items or monday_get_board. No exclusions or context for selection are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

monday_search_items (Grade A)

Search for items across all boards by keyword. Returns matching items with ID, name, board name, and column values.

Parameters (JSON Schema):
limit (optional): Number of results to return (default 20, max 50)
query (required): Search query text
_apiKey (required): Monday.com API token
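A sketch of a cross-board search call; query is the only required argument besides the API token (envelope per the MCP `tools/call` convention; values are placeholders):

```python
# Hypothetical tools/call payload for monday_search_items.
request = {
    "jsonrpc": "2.0",
    "id": 5,
    "method": "tools/call",
    "params": {
        "name": "monday_search_items",
        "arguments": {"_apiKey": "YOUR_MONDAY_API_TOKEN", "query": "New Task"},
    },
}
```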
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description carries the burden. It states the tool returns specific fields (ID, name, board name, column values), which is helpful. However, it does not disclose any behavioral traits like rate limiting, auth requirements (though _apiKey parameter hints at it), or whether results are paginated.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence that conveys purpose and return structure. Every word adds value, and no redundant information is present.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has only 3 parameters (all with schema descriptions) and no output schema, the description sufficiently covers the purpose and return fields. It could mention pagination or max results, but the limit parameter already defines defaults and max.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds value by listing return fields, which indirectly clarifies that query is used for text search. However, it does not explain the limit parameter beyond what the schema provides, nor does it clarify _apiKey usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Search for items across all boards by keyword') and specifies the resource ('items across all boards'). It distinguishes itself from siblings like monday_list_items by emphasizing cross-board search vs. board-specific listing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies this tool is for text-based search across all boards, but does not explicitly state when not to use it (e.g., for filtering by specific board, use monday_list_items). No alternative tools are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall (Grade A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema):
key (optional): Memory key to retrieve (omit to list all keys)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It explains the dual behavior (single key vs. list all) but does not disclose any side effects, permissions, or limits. Adequate for a simple read operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, clear and front-loaded. No extraneous words. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity (one optional parameter, no output schema), the description is complete enough. It explains both usage modes and the purpose. No obvious gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (one parameter with description). The description reiterates that omitting key lists all, which adds slight value beyond the schema. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool retrieves a stored memory by key or lists all memories if key is omitted. The verb 'retrieve' and resource 'memory' are specific and the description distinguishes the two modes of operation (by key vs. list all).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says to use when you need to retrieve context saved earlier. It does not provide when-not-to-use or mention alternatives, but the sibling tools include 'remember' and 'forget', which are related but different operations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember (Grade A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema):
key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
value (required): Value to store (any text — findings, addresses, preferences, notes)
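A sketch of a store call, using one of the example keys from the schema documentation (the value "AAPL" and the MCP `tools/call` envelope are illustrative):

```python
# Hypothetical tools/call payload for remember.
request = {
    "jsonrpc": "2.0",
    "id": 6,
    "method": "tools/call",
    "params": {
        "name": "remember",
        "arguments": {"key": "target_ticker", "value": "AAPL"},
    },
}
```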
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Since no annotations are provided, the description carries the full burden. It fully discloses the memory behavior: session-based, persistent for authenticated users, 24-hour expiry for anonymous. This goes well beyond what annotations could provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, each serving a distinct purpose: what it does, when to use it, and behavioral details. No fluff, front-loaded with the core action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 params, no output schema, no nested objects), the description is complete. It covers purpose, usage, and behavioral constraints (persistence, expiry) without needing output schema details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds value by providing example keys ('subject_property', 'target_ticker', 'user_preference') and clarifying that value is any text, which enhances understanding but does not fully exhaust the meaning of parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool stores a key-value pair in session memory, with specific verbs 'store' and 'save'. It distinguishes itself from siblings like 'recall' (retrieve) and 'forget' (delete) by explicitly focusing on persistence and memory across calls.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'Use this to save intermediate findings, user preferences, or context across tool calls', providing clear guidance on when to use it. It also differentiates behavior for authenticated vs anonymous users, though it does not explicitly exclude scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
