Server Details

Holidays MCP — wraps Nager.Date API (free, no auth)

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-holidays
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 3.9/5 across 8 of 8 tools scored. Lowest: 2.9/5.

Server Coherence: B
Disambiguation: 3/5

Most tools have distinct purposes (e.g., holiday-related vs. memory vs. discovery), but there is some overlap between 'ask_pipeworx' and 'discover_tools' as both help find tools or information, which could cause confusion. The holiday tools are clearly differentiated from each other.

Naming Consistency: 2/5

Naming is inconsistent across the set: holiday tools use verb_noun patterns (e.g., 'get_holidays'), memory tools use single verbs (e.g., 'remember', 'recall'), and other tools use mixed styles like 'ask_pipeworx' (verb_prefix) and 'discover_tools' (verb_noun). This lack of a uniform convention reduces predictability.

Tool Count: 4/5

With 8 tools, the count is reasonable for a server that combines holiday lookup with memory and discovery functions. It's slightly high for a pure holiday server but acceptable given the mixed scope, avoiding being overly thin or bloated.

Completeness: 3/5

For holiday operations, the surface is complete with get, check, and next functions, but there are notable gaps in memory management (e.g., no update or delete-all for memories) and no clear integration between holiday and other tools, leaving the overall domain coverage somewhat fragmented.

Available Tools (8 tools)
ask_pipeworx (A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)
question (required): Your question or request in natural language
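As a concrete illustration, here is a minimal sketch of invoking this tool with a raw MCP tools/call request over Streamable HTTP. The endpoint URL and question are placeholders, and a real client would first perform the MCP initialize handshake and reuse the resulting session:

```python
import requests

# Placeholder endpoint; substitute the server's actual Streamable HTTP URL.
MCP_URL = "https://example.com/mcp"

# Standard MCP JSON-RPC envelope for calling a tool by name.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {
            "question": "What public holidays does Germany have in 2025?"
        },
    },
}

# Streamable HTTP servers may respond with plain JSON or an SSE stream,
# so advertise both in the Accept header.
resp = requests.post(
    MCP_URL,
    json=payload,
    headers={"Accept": "application/json, text/event-stream"},
)
print(resp.status_code, resp.text[:500])
```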
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: Pipeworx 'picks the right tool, fills the arguments, and returns the result,' indicating automated tool selection and parameter filling. However, it doesn't mention potential limitations like rate limits, error handling, or authentication requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured: it starts with the core functionality, explains the automation benefit, and provides concrete examples. Every sentence adds value without redundancy, making it easy to understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (automated tool selection) and lack of annotations/output schema, the description does well by explaining the workflow and providing examples. However, it doesn't detail what types of answers or data sources to expect, which could be important for an AI agent to manage expectations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the baseline is 3. The description adds value by explaining that the 'question' parameter should be 'in plain English' and providing examples like 'Look up adverse events for ozempic,' which clarifies the expected format and scope beyond the schema's 'natural language' description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Ask a question in plain English and get an answer from the best available data source.' It specifies the verb ('ask'), resource ('answer'), and distinguishes itself from siblings by emphasizing natural language interaction without needing to browse tools or learn schemas.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'No need to browse tools or learn schemas — just describe what you need.' It implicitly positions itself against the alternative of browsing individual tools, and includes examples like 'What is the US trade deficit with China?' to illustrate appropriate use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)
limit (optional): Maximum number of tools to return (default 20, max 50)
query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
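For comparison, a discover_tools invocation reuses the same JSON-RPC envelope and only swaps the params block; a sketch with an illustrative query:

```python
# Hypothetical discover_tools call: only `query` is required;
# `limit` falls back to the documented default of 20 when omitted.
params = {
    "name": "discover_tools",
    "arguments": {
        "query": "find public holidays for a country",
        "limit": 5,  # optional; capped at 50 per the schema
    },
}
```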
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior: it performs a search based on natural language queries and returns ranked results. However, it doesn't mention potential limitations like rate limits, authentication requirements, or error conditions that would be helpful for complete transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and well-structured: it explains the core functionality, states what is returned, and closes with crucial usage guidance. Every word earns its place, with no redundancy or unnecessary information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a search tool with no annotations and no output schema, the description provides good context about the tool's purpose and usage. However, it doesn't describe the format of returned results (beyond mentioning 'names and descriptions') or potential limitations. Given the complexity is moderate and schema coverage is complete, the description is mostly adequate but could benefit from output format details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema. It mentions 'describing what you need' which aligns with the query parameter, but provides no additional semantic context about parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('search', 'returns') and resources ('Pipeworx tool catalog', 'most relevant tools with names and descriptions'). It distinguishes itself from sibling tools (get_holidays, is_today_holiday, next_holidays) by focusing on tool discovery rather than holiday-related operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This gives clear context about when to use this tool (large tool catalog scenarios) and implies it should be prioritized over other tools for discovery purposes.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget (C)

Delete a stored memory by key.

Parameters (JSON Schema)
key (required): Memory key to delete
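A sketch of the corresponding params block; the key is hypothetical (borrowed from the remember schema's examples) and must match one saved earlier:

```python
# Destructive: permanently removes whatever is stored under this key.
params = {
    "name": "forget",
    "arguments": {"key": "target_ticker"},  # hypothetical key saved via `remember`
}
```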
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It states 'Delete' implying a destructive mutation, but doesn't disclose behavioral traits such as whether deletion is permanent, requires specific permissions, has side effects, or what happens if the key doesn't exist. This leaves significant gaps for a mutation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero waste. It's front-loaded with the core action and resource, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a destructive mutation tool with no annotations and no output schema, the description is incomplete. It lacks details on behavior (e.g., error handling, permanence), usage context, and output expectations, which are critical for safe and effective tool invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'key' documented as 'Memory key to delete'. The description adds no additional meaning beyond this, such as format examples or constraints. Baseline 3 is appropriate since the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Delete') and resource ('a stored memory by key'), making the purpose unambiguous. It doesn't explicitly distinguish from siblings like 'recall' or 'remember', but the verb 'Delete' implies a destructive operation versus retrieval or storage.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. While 'Delete' suggests it's for removing memories, there's no mention of prerequisites (e.g., needing to know the key), when not to use it, or how it relates to siblings like 'recall' (which likely retrieves memories).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_holidays (A)

Get all public holidays for a country and year. Returns holiday names and dates. Provide country code (e.g., "US", "GB", "DE") and year.

Parameters (JSON Schema)
year (required): The year to retrieve holidays for (e.g., 2025)
country_code (required): ISO 3166-1 alpha-2 country code (e.g., US, GB, DE, FR)
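Since the server summary says it wraps the free Nager.Date API, a get_holidays call presumably maps onto Nager.Date's PublicHolidays endpoint; a sketch of both the tool arguments and the likely upstream request:

```python
import requests

# Tool-call arguments exactly as the schema defines them.
params = {
    "name": "get_holidays",
    "arguments": {"country_code": "US", "year": 2025},
}

# Presumed upstream equivalent (Nager.Date v3 is free and unauthenticated).
holidays = requests.get(
    "https://date.nager.at/api/v3/PublicHolidays/2025/US"
).json()
for holiday in holidays[:3]:
    print(holiday["date"], holiday["name"])  # e.g. 2025-01-01 New Year's Day
```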
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the use of ISO country codes, which is useful, but does not cover other important traits such as data source reliability, rate limits, error handling, or response format. For a tool with no annotations, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, consisting of a few concise sentences that directly state the purpose, the return shape, and the expected input format without any unnecessary information. Every sentence earns its place by contributing essential context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity, 2 parameters with full schema coverage, and no output schema, the description is adequate but incomplete. It covers the basic purpose and parameter format but lacks details on behavioral aspects like response structure or limitations, which are important for a tool with no annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents both parameters. The description adds minimal value by reiterating the ISO code format with examples, but does not provide additional semantics beyond what the schema specifies. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get all public holidays') with the target resource ('for a country and year'), distinguishing it from sibling tools like 'is_today_holiday' and 'next_holidays' which focus on different temporal queries. It uses precise terminology that immediately communicates the tool's function without redundancy.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool by specifying it retrieves holidays for a given country and year, but it does not explicitly mention when not to use it or name alternatives like the sibling tools. The context is sufficient for basic usage but lacks explicit differentiation from similar tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

is_today_holiday (A)

Check if today is a public holiday in a given country. Returns whether it's a holiday and the holiday name if applicable. Provide country code (e.g., "US", "GB").

Parameters (JSON Schema)
country_code (required): ISO 3166-1 alpha-2 country code (e.g., US, GB, DE, FR)
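A sketch of the arguments, plus the presumed Nager.Date check behind the tool, which reports its answer through the HTTP status code rather than a response body:

```python
import requests

# Tool-call arguments as the schema defines them.
params = {
    "name": "is_today_holiday",
    "arguments": {"country_code": "GB"},
}

# Presumed upstream check: Nager.Date signals the result via status code
# (200 = today is a public holiday, 204 = it is not).
resp = requests.get("https://date.nager.at/api/v3/IsTodayPublicHoliday/GB")
print("holiday today" if resp.status_code == 200 else "not a holiday today")
```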
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool's function but does not describe behavioral traits such as whether it requires authentication, has rate limits, returns a boolean or detailed response, or handles edge cases (e.g., invalid country codes). This is a significant gap for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is short and front-loaded with the core purpose, stating what is checked, what is returned, and the expected input format. It contains no wasted words, making it highly concise and well-structured for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no output schema, no annotations), the description is minimally adequate. It covers the basic purpose but lacks details on behavioral aspects and output format, which are needed for full contextual understanding despite the simple schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'country_code' fully documented in the schema (ISO 3166-1 alpha-2 code). The description adds no additional parameter semantics beyond implying the country context, so it meets the baseline of 3 where the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Check') and resource ('public holiday'), specifying the temporal scope ('today') and contextual parameter ('given country'). It distinguishes from sibling tools like 'get_holidays' (likely lists holidays) and 'next_holidays' (likely future holidays) by focusing on today's status.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context ('today', 'given country') but does not explicitly state when to use this tool versus alternatives like 'get_holidays' or 'next_holidays'. It provides basic guidance but lacks explicit exclusions or named alternatives, leaving room for ambiguity in tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

next_holidays (B)

Get upcoming public holidays from today onward for a country. Returns holiday names and dates. Provide country code (e.g., "US", "GB", "DE").

Parameters (JSON Schema)
country_code (required): ISO 3166-1 alpha-2 country code (e.g., US, GB, DE, FR)
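Only the country code is needed; the "today" anchor is computed server-side. A sketch, with the presumed Nager.Date equivalent noted in a comment:

```python
# Hypothetical next_holidays call.
params = {
    "name": "next_holidays",
    "arguments": {"country_code": "DE"},
}
# Presumed upstream equivalent:
# GET https://date.nager.at/api/v3/NextPublicHolidays/DE
```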
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the tool retrieves data 'from today onward,' which implies a read-only, non-destructive operation, but it does not cover other aspects such as rate limits, error handling, authentication needs, or the format of returned data. For a tool with no annotation coverage, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description efficiently conveys the tool's purpose without unnecessary words. It is front-loaded with the core functionality and states the scope ('from today onward') compactly. Every part earns its place, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (one required parameter, no output schema, no annotations), the description is minimally adequate. It covers the basic purpose and scope but lacks details on behavioral traits, usage guidelines compared to siblings, and output expectations. While it meets the minimum for a simple tool, it does not provide a complete picture for optimal agent use, especially without annotations or output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'country_code' parameter fully documented as an ISO 3166-1 alpha-2 code. Beyond repeating the ISO code examples already present in the schema, the description adds no further semantic detail, such as handling of invalid inputs. Given the high schema coverage, a baseline score of 3 is appropriate, as the description neither compensates nor detracts.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get upcoming public holidays from today onward for a country.' It specifies the verb ('Get'), resource ('public holidays'), and scope ('from today onward'), making it easy to understand. However, it does not explicitly differentiate itself from sibling tools like 'get_holidays' or 'is_today_holiday', which may have overlapping functionality, preventing a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides minimal usage guidance, stating only that it retrieves holidays 'from today onward.' It does not specify when to use this tool versus alternatives like 'get_holidays' or 'is_today_holiday,' nor does it mention any prerequisites or exclusions. This lack of comparative context limits its effectiveness in guiding the agent's selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall (A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)
key (optional): Memory key to retrieve (omit to list all keys)
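The optional key gives the tool two modes; a sketch of both call shapes (the key itself is illustrative):

```python
# Mode 1: fetch a single memory by key (hypothetical key from the schema examples).
params_one = {"name": "recall", "arguments": {"key": "user_preference"}}

# Mode 2: omit `key` entirely to list every stored key.
params_all = {"name": "recall", "arguments": {}}
```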
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It adequately describes the core behavior (retrieval vs listing based on key presence) but lacks details about error handling, session persistence specifics, or performance characteristics like rate limits or memory size constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with two sentences that each earn their place: the first explains the dual functionality, the second provides usage context. It's front-loaded with the core purpose and wastes no words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with 100% schema coverage but no annotations or output schema, the description provides good context about behavior and usage. It could be more complete by addressing potential edge cases (e.g., what happens with invalid keys) or describing the return format, but it covers the essentials well.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the single parameter. The description adds valuable semantic context by explaining the dual behavior (retrieve vs list) based on whether the key is provided, which goes beyond the schema's technical specification.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('retrieve', 'list') and resources ('previously stored memory', 'all stored memories'). It distinguishes from sibling tools like 'remember' (which stores) and 'forget' (which removes) by focusing on retrieval operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'to retrieve context you saved earlier in the session or in previous sessions.' It also specifies that omitting the key lists all stored memories, offering clear usage instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember (A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)
key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
value (required): Value to store (any text — findings, addresses, preferences, notes)
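Taken together with recall and forget, a plausible end-to-end memory lifecycle looks like the sketch below; the key and value are illustrative, not taken from the server's documentation:

```python
# Hypothetical memory lifecycle across three tool calls.
store = {
    "name": "remember",
    "arguments": {
        "key": "subject_property",  # example key from the schema docs
        "value": "123 Main St; appraised 2024-11; notes for later",
    },
}
fetch = {"name": "recall", "arguments": {"key": "subject_property"}}
purge = {"name": "forget", "arguments": {"key": "subject_property"}}
# For anonymous sessions this state expires after 24 hours;
# authenticated users get persistent storage.
```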
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the tool stores data in session memory, distinguishes between authenticated users (persistent memory) and anonymous sessions (24-hour duration), and implies it's a write operation. However, it doesn't cover potential limitations like storage limits, error conditions, or data format constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured: it states the core purpose, provides usage context, and adds important behavioral details about persistence. Every sentence adds value without redundancy, making it appropriately sized and front-loaded with essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (write operation with persistence behavior), no annotations, and no output schema, the description does a good job covering the essential context: purpose, usage, and key behavioral traits. However, it lacks details about return values, error handling, or storage limitations, which would be helpful for a tool with no output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with both parameters ('key' and 'value') well-documented in the schema itself. The description adds minimal semantic value beyond the schema, mentioning what can be stored but not providing additional syntax, format, or constraint details. This meets the baseline of 3 when schema coverage is high.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Store a key-value pair') and resource ('in your session memory'), distinguishing it from sibling tools like 'recall' (retrieve) and 'forget' (remove). It explicitly mentions what can be stored ('intermediate findings, user preferences, or context across tool calls'), making the purpose unambiguous and well-differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context on when to use this tool ('to save intermediate findings, user preferences, or context across tool calls'), but does not explicitly mention when not to use it or name alternatives. While it implies usage for persistence across calls, it lacks explicit exclusions or comparisons to siblings like 'recall' or 'forget'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
