Server Details

Google Calendar MCP Pack

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-google_calendar
GitHub Stars: 0

Glama MCP Gateway

Connect through the Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 3.9/5 across 10 of 10 tools scored. Lowest: 3.1/5.

Server Coherence: B

Disambiguation: 3/5

Tools like ask_pipeworx and discover_tools have overlapping purposes (both help find information), though their descriptions help differentiate them. The memory tools (remember, recall, forget) are distinct from the calendar tools, but ask_pipeworx could be confused with search/query tools.

Naming Consistency: 2/5

Tool names are inconsistent: some use the 'gcal_' prefix for calendar tools, while others use plain verbs (ask_pipeworx, discover_tools, remember, recall, forget). There is no consistent pattern across the set.

Tool Count: 4/5

10 tools is reasonable for a server that combines calendar operations with a general-purpose query system. Not too many or too few.

Completeness: 3/5

Calendar CRUD is mostly covered (create, get, list, search, list calendars), but update and delete operations for events are missing. The memory and query tools seem complete, but their integration with the calendar tools is unclear.

Available Tools

10 tools
ask_pipeworx (A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters

question (required): Your question or request in natural language
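
To make the call shape concrete, here is a minimal client sketch. It assumes the official MCP Python SDK and its Streamable HTTP client (matching the transport listed above); the endpoint URL is a placeholder, since the server's actual URL is not shown on this page.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

# Placeholder endpoint; substitute the server's real Streamable HTTP URL.
SERVER_URL = "https://example.com/mcp"

async def main() -> None:
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # ask_pipeworx takes a single required argument: question.
            result = await session.call_tool(
                "ask_pipeworx",
                {"question": "What is the US trade deficit with China?"},
            )
            print(result.content)

asyncio.run(main())
```

The later examples on this page reuse this session setup and show only the individual tool calls.
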
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses that the tool automatically selects data sources and fills arguments, which is key behavioral info. Since no annotations are provided, the description carries the full burden, and it adequately describes the tool's autonomous decision-making without contradicting any structured data.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise (three sentences) and front-loaded with the core purpose. Each sentence adds value: purpose, behavior, and examples. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (single parameter, no output schema, no nested objects), the description is complete. It covers what the tool does, how to use it, and what to expect, with examples for clarity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage, so the baseline is 3. The description adds value by explaining the 'question' parameter accepts natural language and provides examples, going beyond the schema's generic description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: answering plain English questions by selecting the best data source. It provides a specific verb ('ask') and resource ('Pipeworx'), and distinguishes itself from sibling tools like 'discover_tools' and 'forget' by focusing on natural language queries.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains when to use the tool ('when you need an answer from data') and provides example queries. However, it does not explicitly state when not to use it or mention alternatives, though the examples help clarify usage scope.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters

query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
limit (optional): Maximum number of tools to return (default 20, max 50)
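
Reusing the session from the sketch above, a discovery call might look like the following; the query is taken from the schema's own examples and the limit is illustrative.

```python
from mcp import ClientSession

async def find_tools(session: ClientSession) -> None:
    # query is required; limit is optional (default 20, max 50 per the schema).
    result = await session.call_tool(
        "discover_tools",
        {"query": "find trade data between countries", "limit": 5},
    )
    print(result.content)
```
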
Behavior: 3/5

No annotations are provided, so the description carries the burden. It states the tool returns 'the most relevant tools with names and descriptions', which is useful but does not disclose whether it modifies state, requires authentication, or has rate limits. It is safe to assume it is read-only, but not explicit. Score 3 as it covers basic behavior but lacks depth.

Conciseness: 5/5

The description is three sentences, each serving a distinct purpose: describing the function, the output, and when to use it. No wasted words, front-loaded with the key action.

Completeness: 4/5

Given the tool is a search/discovery tool with only two simple parameters and no output schema, the description is complete enough. It explains what it returns (names and descriptions) and when to use it. Missing details about pagination or sorting, but acceptable for this complexity.

Parameters: 3/5

Schema description coverage is 100%, so baseline is 3. The description does not add additional meaning beyond the schema; it only mentions the query parameter indirectly by saying 'describe what you need'. The description is sufficient given schema already documents both parameters well.

Purpose: 5/5

The description states the tool searches a tool catalog by natural language query and returns relevant tools. It specifies the resource (Pipeworx tool catalog) and the action (search/return), clearly distinguishing it from siblings like 'ask_pipeworx' which is for asking questions, not finding tools.

Usage Guidelines: 5/5

The description explicitly says 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This gives clear when-to-use guidance and implicitly contrasts with alternatives like 'ask_pipeworx' for other purposes.

forget (B)

Delete a stored memory by key.

Parameters

key (required): Memory key to delete

Behavior: 2/5

No annotations are provided, so the description must carry the full burden of behavioral disclosure. It states the operation (delete) but does not disclose side effects (e.g., whether deletion is permanent, if confirmation is needed, or if associated data is also removed). This is a significant gap for a mutation tool.

Conciseness: 5/5

The description is a single short sentence, conveying the essential purpose without any fluff. It is front-loaded and every word earns its place.

Completeness: 2/5

Given the tool's complexity (single parameter, no output schema, no annotations), the description is too brief. It fails to clarify whether the deletion is irreversible, what happens if the key doesn't exist, or any other behavioral aspects. For a mutation tool, more context is needed.

Parameters: 3/5

Schema description coverage is 100%, so the baseline is 3. The description adds no extra information beyond the schema: 'by key' reiterates the parameter purpose. The schema already describes 'Memory key to delete', so the description adds minimal value.

Purpose: 5/5

The description 'Delete a stored memory by key' clearly states the action (delete), the resource (stored memory), and the parameter (key). It succinctly distinguishes the tool from siblings like 'remember' (store) and 'recall' (retrieve).

Usage Guidelines: 3/5

The description implies when to use this tool (when you want to delete a memory), but does not explicitly state when not to use it or mention alternatives. Sibling tool names provide context, but the description itself lacks explicit usage guidance.

gcal_create_event (A)

Create a new calendar event with summary, start/end times, optional description, location, and attendee emails. Returns the created event ID.

Parameters

summary (required): Title of the event
start (required): Start time as RFC3339 timestamp (e.g., "2024-06-15T10:00:00-07:00") or date for all-day events ("2024-06-15")
end (required): End time as RFC3339 timestamp or date for all-day events
description (optional): Description or notes for the event
location (optional): Location of the event
attendees (optional): List of attendee email addresses
time_zone (optional): Time zone (e.g., "America/Los_Angeles"). Defaults to the calendar's time zone.
calendar_id (optional): Calendar ID (default: "primary")
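
A hedged example call, reusing the session pattern above; every argument value here is hypothetical, with timestamps in the RFC3339 format the schema documents.

```python
from mcp import ClientSession

async def create_event(session: ClientSession) -> None:
    # Hypothetical event. time_zone and calendar_id fall back to the
    # documented defaults when omitted.
    result = await session.call_tool(
        "gcal_create_event",
        {
            "summary": "Project kickoff",
            "start": "2024-06-15T10:00:00-07:00",
            "end": "2024-06-15T11:00:00-07:00",
            "location": "Conference room 4",
            "attendees": ["alice@example.com", "bob@example.com"],
            "time_zone": "America/Los_Angeles",
        },
    )
    print(result.content)  # per the description, the created event ID comes back
```
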
Behavior: 3/5

No annotations are provided, so the description must convey behavioral traits. It states the tool creates an event (mutating operation) but does not disclose potential side effects (e.g., overwriting existing events, sending invites to attendees, or rate limits). The description is straightforward but lacks depth for a mutation tool.

Conciseness: 4/5

The description is concise (two sentences) and front-loaded with the primary action. Every sentence adds value: first sentence states purpose and required fields, second enumerates optional fields. Could be slightly more structured with a clearer separation of required vs. optional.

Completeness: 3/5

Given the tool has 8 parameters and no output schema, the description adequately covers the creation action, key inputs, and the returned event ID. However, it does not explain failure modes or whether invites are sent to attendees. The tool is moderately complex, and the description meets minimum completeness but could be more thorough.

Parameters: 4/5

Schema description coverage is 100%, so parameters are well-documented in the schema. The description adds value by listing optional fields (description, location, attendees) beyond required ones, reinforcing their role. However, it does not clarify interactions between parameters (e.g., how time_zone affects start/end), so slightly above baseline.

Purpose: 4/5

The description clearly states the verb 'Create' and the resource (a new calendar event), and lists key properties (summary, start/end times, optional fields). It distinguishes the tool from siblings like gcal_get_event and gcal_list_events, but does not explicitly contrast with gcal_search_events or others, leaving minor ambiguity.

Usage Guidelines: 3/5

The description mentions what the tool does but provides no explicit guidance on when to use it vs. alternatives (e.g., gcal_search_events for finding events, or gcal_list_events for viewing). It implies creation use case but lacks exclusions or context about prerequisites like calendar selection.

gcal_get_event (B)

Get full details of a specific event by ID (e.g., "event_12345"). Returns summary, description, times, attendees, location, and video conferencing links.

Parameters

event_id (required): The ID of the event to retrieve
calendar_id (optional): Calendar ID (default: "primary")
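
With the same assumed session, a lookup by ID is a single call; "event_12345" is the placeholder ID format from the description above.

```python
from mcp import ClientSession

async def get_event(session: ClientSession) -> None:
    # calendar_id is omitted, so the documented default "primary" applies.
    result = await session.call_tool("gcal_get_event", {"event_id": "event_12345"})
    print(result.content)
```
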
Behavior: 3/5

Annotations are empty, so the description must carry the behavioral burden. It states that the tool returns full event details, which is informative, but does not disclose any side effects, rate limits, or error conditions.

Conciseness: 4/5

The description is concise (two sentences) and front-loads the core purpose. However, it could be slightly more structured, e.g., separating the output details into a bullet list.

Completeness: 3/5

Given no output schema and empty annotations, the description adequately explains what the tool does but lacks completeness on edge cases, error handling, or special behavior. It covers the basic return fields.

Parameters: 3/5

Schema description coverage is 100%, so parameters are already documented. The description adds no additional meaning beyond what the schema provides, warranting a baseline score of 3.

Purpose: 4/5

The description clearly states it retrieves a specific Google Calendar event by ID and lists the types of details returned. However, it does not differentiate from siblings like gcal_list_events or gcal_search_events, which also return event details.

Usage Guidelines: 3/5

The description implies use when you have an event ID and want full details, but provides no guidance on when not to use it (e.g., for listing events) or alternatives like gcal_search_events.

gcal_list_calendars (A)

List all accessible calendars. Returns calendar IDs, names, time zones, and your access level for each. Use to identify which calendar to query or modify.

Parameters

No parameters.
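
Because the tool takes no parameters, a call is just the tool name with empty arguments (same assumed session as before):

```python
from mcp import ClientSession

async def list_calendars(session: ClientSession) -> None:
    # Returns calendar IDs, names, time zones, and access levels.
    result = await session.call_tool("gcal_list_calendars", {})
    print(result.content)
```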

Behavior: 4/5

The description accurately discloses that the tool returns specific fields (calendar IDs, names, time zones, access levels) and lists all calendars accessible to the user. Since annotations are empty, the description carries the full burden of behavioral transparency. It effectively communicates the scope and output content, though it does not mention pagination or potential limitations.

Conciseness: 5/5

The description is concise and front-loaded with the core action. It efficiently lists what is returned, with no wasted words.

Completeness: 4/5

Given the tool has no parameters and no output schema, the description adequately covers what the tool does and what it returns. It is complete for a simple list-calendars operation, though it could mention that calendars are returned as a list or array. Overall, it is sufficient.

Parameters: 4/5

The input schema has no parameters, so there is no need for parameter descriptions. The description adds value by specifying the return fields (calendar IDs, names, time zones, access levels), which is not captured by the schema. With 100% schema description coverage, a baseline of 3 applies, but the description enhances clarity beyond the schema.

Purpose: 5/5

The description clearly states the tool lists all calendars accessible to the authenticated user, specifying return fields like calendar IDs, names, time zones, and access levels. This distinguishes it from siblings like gcal_list_events (which lists events) and gcal_search_events (which searches events), making its purpose unambiguous.

Usage Guidelines: 3/5

The description implies when to use this tool: to get a list of all accessible calendars, likely before performing operations on specific calendars. However, it does not explicitly state when not to use it or mention alternatives. For example, it could note that if the user needs events, they should use gcal_list_events or gcal_search_events instead.

gcal_list_events (B)

List calendar events with optional date filtering. Returns event summaries, start/end times, attendees, and locations. Use to view upcoming or past events.

Parameters

calendar_id (optional): Calendar ID (default: "primary" for the user's main calendar)
time_min (optional): Lower bound (inclusive) for event start time as RFC3339 timestamp (e.g., "2024-01-01T00:00:00Z")
time_max (optional): Upper bound (exclusive) for event end time as RFC3339 timestamp
max_results (optional): Maximum number of events to return (default 10, max 250)
order_by (optional): Sort order (default: startTime). startTime requires singleEvents=true.
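
For instance, a one-week window could be requested as follows (session assumed as before; the bounds are illustrative RFC3339 values):

```python
from mcp import ClientSession

async def list_week(session: ClientSession) -> None:
    # time_min is inclusive and time_max exclusive, per the schema above.
    result = await session.call_tool(
        "gcal_list_events",
        {
            "time_min": "2024-01-01T00:00:00Z",
            "time_max": "2024-01-08T00:00:00Z",
            "max_results": 25,
        },
    )
    print(result.content)
```
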
Behavior: 2/5

With no annotations provided, the description must fully disclose behavioral traits. It does not mention that the tool is read-only, whether it requires authentication, or any side effects. The description only hints at optional filtering but lacks detail on behavior like pagination or timezone handling.

Conciseness: 4/5

The description is concise (two sentences) and front-loaded with the main purpose. Each sentence contributes meaning. No unnecessary words.

Completeness: 3/5

Given that the tool has 5 parameters, no output schema, and no annotations, the description is minimally complete. It states the basic function and return value types but lacks details on default values, ordering behavior, or error conditions. It could be more helpful with additional context.

Parameters: 4/5

Schema description coverage is 100%, so baseline is 3. The description adds value by summarizing what the tool returns (event summaries, times, etc.), which is not in the schema. However, it does not explain any parameter details beyond what the schema provides.

Purpose: 4/5

The description clearly states the tool lists events from Google Calendar, with optional filtering by time range. It also lists what is returned (summaries, times, attendees, locations). However, it does not differentiate from sibling tools like gcal_search_events, which may also list events.

Usage Guidelines: 2/5

No guidance is provided on when to use this tool vs. alternatives like gcal_search_events. The description does not mention prerequisites, default behaviors, or when not to use this tool.

gcal_search_events (A)

Search events by keyword across summaries, descriptions, locations, and attendees. Returns matching event details and times. Use to find events by topic or participant.

Parameters

query (required): Free-text search query to match against event fields
calendar_id (optional): Calendar ID (default: "primary")
time_min (optional): Lower bound for event start time as RFC3339 timestamp
time_max (optional): Upper bound for event end time as RFC3339 timestamp
max_results (optional): Maximum number of events to return (default 10, max 250)
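
A keyword search with an optional lower time bound might look like this (session assumed as before; the values are illustrative):

```python
from mcp import ClientSession

async def search_events(session: ClientSession) -> None:
    # Only query is required; time_min narrows the search window.
    result = await session.call_tool(
        "gcal_search_events",
        {"query": "standup", "time_min": "2024-01-01T00:00:00Z"},
    )
    print(result.content)
```
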
Behavior: 3/5

No annotations are provided, so the description carries the burden. It describes the match fields, but lacks details on behavior such as partial matching, case sensitivity, or handling of empty results.

Conciseness: 4/5

The description is concise and front-loaded: it states the purpose, the match fields, and the returned content, with no wasted words.

Completeness: 4/5

For a search tool with a complete input schema and no output schema, the description is adequate. It covers what is searched and what is matched. It could mention the return format (a list of events), but that is not critical.

Parameters: 3/5

Schema coverage is 100%, so the baseline is 3. The description does not add meaning beyond the schema; it mentions the match fields but no parameter details.

Purpose: 5/5

The description clearly states that the tool searches events using a text query matched against summary, description, location, and attendees. It differentiates the tool from siblings like gcal_list_events, which lists events without text search.

Usage Guidelines: 3/5

The description implicitly suggests using the tool when searching by text, but gives no explicit guidance on when to prefer it over gcal_list_events or other siblings, and no mention of when not to use it.

recall (A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters

key (optional): Memory key to retrieve (omit to list all keys)

Behavior: 4/5

The description discloses a key behavioral trait: listing all memories when the key is omitted. No annotations are provided, so the description carries the full burden. It also mentions persistence across sessions ('earlier in the session or in previous sessions'), which goes beyond what the schema conveys.

Conciseness: 5/5

Two sentences, zero wasted words. The first covers the action and dual behavior; the second gives usage context. Perfectly front-loaded with essential information.

Completeness: 4/5

Given no output schema and a single optional parameter, the description sufficiently covers input behavior and purpose. It could optionally mention the return format, but that is not required for a simple retrieval tool.

Parameters: 4/5

Schema description coverage is 100%, so the baseline is 3. The description adds value by explaining the parameter's optional nature and the listing behavior when it is omitted, enriching the semantics beyond the schema's description.

Purpose: 5/5

The description clearly states the verb 'Retrieve' and the resource (a memory, by key), with specific behavior: retrieving a single memory or listing all if the key is omitted. It differentiates the tool from siblings 'remember' (store) and 'forget' (delete).

Usage Guidelines: 4/5

The description explicitly says when to use the tool ('to retrieve context you saved earlier') and explains that omitting the key lists all memories. There is no explicit when-not-to-use guidance or mention of alternatives beyond the implicit context.

remember (A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters

key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
value (required): Value to store (any text: findings, addresses, preferences, notes)
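
The three memory tools (remember, recall, forget) compose naturally. Below is a hedged round-trip sketch, with the same assumed session and a key drawn from the schema's own examples; the stored value is hypothetical.

```python
from mcp import ClientSession

async def memory_roundtrip(session: ClientSession) -> None:
    # Store, retrieve, list, then delete a memory entry.
    await session.call_tool("remember", {"key": "target_ticker", "value": "AAPL"})
    one = await session.call_tool("recall", {"key": "target_ticker"})
    everything = await session.call_tool("recall", {})  # omit key to list all keys
    await session.call_tool("forget", {"key": "target_ticker"})
    print(one.content, everything.content)
```
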
Behavior: 4/5

No annotations are provided, so the description carries the full burden. It clearly discloses persistence behavior ('Authenticated users get persistent memory; anonymous sessions last 24 hours'), which is critical for understanding the tool.

Conciseness: 5/5

Three sentences, each adding value: the first defines the action, the second gives usage context, and the third explains persistence. No fluff.

Completeness: 4/5

Given the simple schema (two string parameters, no output schema), the description adequately covers purpose, usage, and behavioral nuances. It could mention maximum key/value lengths if applicable, but that is not required for completeness.

Parameters: 3/5

Schema coverage is 100%, with good descriptions for both parameters. The tool description adds value by explaining purpose and storage behavior, but adds no parameter details beyond the schema.

Purpose: 5/5

The description clearly states the verb 'Store' and the resource (a key-value pair in session memory), with specific examples of use cases. It distinguishes the tool from siblings 'recall' (retrieval) and 'forget' (deletion).

Usage Guidelines: 4/5

The description explicitly says when to use the tool ('save intermediate findings, user preferences, or context across tool calls'), but does not say when not to use it or suggest alternatives.
