
Server Details

Google Analytics MCP Pack

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-google_analytics
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 3.8/5 across 9 of 9 tools scored. Lowest: 3.2/5.

Server Coherence (Grade: B)
Disambiguation: 3/5

The tools 'ask_pipeworx' and 'discover_tools' serve similar discovery purposes but descriptions differentiate them (ask vs. search). Memory tools (forget, recall, remember) are clearly distinct. The GA tools are well-differentiated by function. Some overlap in purpose between ask_pipeworx and the specific GA tools, but overall acceptable.

Naming Consistency: 2/5

Naming is inconsistent: multi-word snake_case names (ask_pipeworx, discover_tools, ga_get_metadata, ga_run_report) are mixed with bare single-word names (forget, recall, remember), and there is no consistent verb_noun pattern across the set. Two conventions coexist (unprefixed generic tools vs. the ga_ prefix), and even within the ga_ tools the verb and structure choices vary (ga_get_metadata vs. ga_run_report vs. ga_list_properties).

Tool Count: 4/5

9 tools is a reasonable count for a Google Analytics-focused server. The inclusion of general utility tools (ask_pipeworx, discover_tools, memory) expands scope but not excessively. Slightly high for pure analytics but appropriate for the integrated nature.

Completeness: 3/5

The GA tools cover listing properties, metadata, reports, and realtime, which covers basic analytics needs. Missing are administrative tools (e.g., create/update properties, manage accounts) and data manipulation (e.g., filtering, segmentation). The generic tools (ask_pipeworx, discover_tools) partially compensate but are not specific to GA.

Available Tools

9 tools
ask_pipeworx (Grade: A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)
Name | Required | Description
question | Yes | Your question or request in natural language
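As a concrete illustration, here is a minimal sketch of invoking this tool over the server's Streamable HTTP transport, assuming the official `mcp` Python SDK; the server URL is a placeholder (the listing above does not show one), and the question is lifted from the description's own examples.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

# Placeholder endpoint: the real URL is not shown in this listing.
SERVER_URL = "https://example.invalid/mcp"

async def main() -> None:
    # Open a streamable-HTTP connection and an MCP session over it.
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # One required argument: the natural-language question.
            result = await session.call_tool(
                "ask_pipeworx",
                {"question": "What is the US trade deficit with China?"},
            )
            print(result.content)

asyncio.run(main())
```

The later sketches in this listing show only the arguments dictionary, which would be passed to session.call_tool the same way.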
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses that Pipeworx picks the right tool, fills arguments, and returns the result, indicating automated delegation. It does not describe edge cases like unsupported questions or error handling, but with no annotations provided, the description covers the core behavioral promise well. No contradiction with annotations since none exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences, each adding value: first states purpose, second explains mechanics, third gives examples. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the single parameter and no output schema, the description is complete enough for a natural language query tool. It explains input format with examples and the expected behavior. Minor omission: does not specify if the tool can handle follow-ups or multi-turn context, but that is not critical for this single-query interface.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema already documents the 'question' parameter well. The description adds context by showing example questions, but doesn't add technical constraints or format details beyond what the schema provides. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to answer a question in plain English using the best available data source, eliminating the need for the user to browse tools or learn schemas. This distinguishes it from siblings like ga_run_report or discover_tools, which are more specialized.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly explains when to use this tool: when the user wants to ask a question in natural language without specifying which underlying tool to invoke. It provides clear examples and contrasts with the need to browse tools manually, implying that this is the go-to for open-ended queries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (Grade: A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)
Name | Required | Description
limit | No | Maximum number of tools to return (default 20, max 50)
query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
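A hypothetical arguments payload follows, reusing one of the schema's own example queries; the limit value is illustrative and stays under the documented max of 50.

```python
# Arguments for call_tool("discover_tools", ...); values are illustrative.
discover_args = {
    "query": "find trade data between countries",  # one of the schema's examples
    "limit": 10,  # default is 20, max 50
}
```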
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so the description carries full burden. It states the tool searches by description and returns names and descriptions, but does not disclose any side effects, authentication needs, or performance characteristics. For a search tool, this is adequate but minimal.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences front-loading the core action and use case, with no wasted words. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (search, no output schema), the description covers the key aspects: purpose, when to call, and example queries. It could mention that results are limited to names and descriptions, but that is implied. Completeness is high for this type of tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds value by suggesting example queries (e.g., 'analyze housing market trends') and specifying default/max for limit, improving clarity beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it searches a tool catalog by description and returns relevant tools with names and descriptions. The specific verb 'Search' and resource 'Pipeworx tool catalog' distinguish it from siblings like ask_pipeworx or ga_run_report.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task,' providing clear when-to-use guidance and implying it's a discovery step before using other tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget (Grade: A)

Delete a stored memory by key.

Parameters (JSON Schema)
Name | Required | Description
key | Yes | Memory key to delete
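A hypothetical payload; the key name is borrowed from remember's schema examples and must match a previously stored key.

```python
# Arguments for call_tool("forget", ...); the key is illustrative.
forget_args = {
    "key": "subject_property",  # must be an existing memory key
}
```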
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Description states the core behavior (delete) and identifies the required input (key). No annotations are provided, so the description carries the full burden. It does not mention side effects (e.g., is deletion permanent? are there confirmation prompts?). Could be improved by stating that the action is irreversible.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no unnecessary words. Perfectly front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple delete operation with one required parameter and no output schema, the description is adequate but minimal. It lacks behavioral details (e.g., whether the key must exist, error behavior). The sibling tools context suggests a memory system, but the description doesn't clarify constraints or outcomes.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a clear description for the 'key' parameter. The description adds no extra semantics beyond the schema. Since the schema already documents the parameter well, a score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a strong verb ('Delete') and a specific resource ('stored memory') with a clear parameter ('by key'). It clearly distinguishes from siblings like 'remember' (store) and 'recall' (retrieve).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. For example, no mention that deletion is irreversible or that it requires the exact key. With no sibling differentiation in description, the agent must infer from tool names alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ga_get_metadata (Grade: A)

Discover available dimensions and metrics for a GA4 property. Returns field names, descriptions, and data types to build accurate ga_run_report queries.

Parameters (JSON Schema)
Name | Required | Description
property_id | Yes | GA4 property ID (numeric)
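A minimal payload sketch; the property ID is the placeholder value used elsewhere in this listing, not a real property.

```python
# Arguments for call_tool("ga_get_metadata", ...).
metadata_args = {
    "property_id": "123456789",  # placeholder GA4 property ID
}
```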
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so description must carry burden. Discloses that it lists metadata (dimensions and metrics) but does not mention if it requires specific permissions or if the list is exhaustive. Adequate but lacks depth.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no filler. First sentence states action and object, second sentence adds usage context. Concise and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given low complexity (1 param, no output schema), description is complete enough. Explains purpose and use case. Could mention return format but not strictly necessary.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% for property_id (described as 'GA4 property ID (numeric)'). Description adds no further meaning beyond schema, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the verb (discover) and resource (dimensions and metrics) for a GA4 property, and ties the output to building ga_run_report queries. Distinguishes from siblings like ga_run_report, which executes reports rather than enumerating fields.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The second sentence ties the returned fields to building accurate ga_run_report queries, implying this tool should run before ga_run_report. However, there is no explicit when-not-to-use guidance and no mention of alternatives such as ga_list_properties for obtaining the property ID.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ga_get_realtime (Grade: B)

Check live user activity in a GA4 property right now. Returns current active user count and real-time engagement metrics. Specify property ID (e.g., "123456789").

Parameters (JSON Schema)
Name | Required | Description
limit | No | Maximum number of rows (default 100)
metrics | No | Realtime metrics (e.g., ["activeUsers", "screenPageViews"]). Defaults to ["activeUsers"].
dimensions | No | Realtime dimensions (e.g., ["city", "unifiedScreenName", "platform"])
property_id | Yes | GA4 property ID (numeric)
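A hypothetical payload assembled from the schema's example values; the property ID is a placeholder.

```python
# Arguments for call_tool("ga_get_realtime", ...); values are illustrative.
realtime_args = {
    "property_id": "123456789",  # placeholder GA4 property ID
    "metrics": ["activeUsers", "screenPageViews"],
    "dimensions": ["city", "platform"],
    "limit": 25,  # rows; defaults to 100 if omitted
}
```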
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It implies read-only behavior (realtime report) and mentions specific metrics like activeUsers, which adds some context. However, it does not disclose rate limits, data freshness, or whether the tool modifies any state. The description is adequate but not comprehensive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise (two sentences) and front-loaded with the purpose. However, it could be slightly more structured by separating the purpose from the details of what it shows.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 4 parameters, no output schema, and no annotations, the description is somewhat complete but lacks details on return format, pagination, and error handling. For a realtime report tool, users would benefit from knowing the maximum time window or how to interpret the response.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description does not add additional meaning beyond the schema for the parameters. It restates 'realtime metrics' and 'realtime dimensions' but does not clarify how they differ from standard metrics/dimensions or provide format constraints.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves a realtime report for a GA4 property and specifies it shows currently active users and realtime metrics. However, it does not differentiate itself from sibling tools like ga_run_report or ga_get_metadata, which could be used for non-realtime data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide any guidance on when to use this tool versus alternatives. It lacks context about prerequisites, limitations (e.g., only works for GA4 properties with realtime view), or when to choose ga_run_report instead.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ga_list_properties (Grade: B)

List all GA4 properties you can access. Returns property IDs, names, creation dates, and account info. Use to find the property ID for ga_run_report queries.

Parameters (JSON Schema)
Name | Required | Description
page_size | No | Maximum number of account summaries to return (default 50)
page_token | No | Token for fetching the next page of results
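Both parameters are optional, so a first-page request can be sketched like this; the commented page_token line marks where a token from a previous response would go.

```python
# Arguments for call_tool("ga_list_properties", ...).
list_args = {
    "page_size": 50,  # the documented default
    # "page_token": "...",  # supply the token from the previous page, if any
}
```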
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description must disclose behavior. It states what is returned (property IDs, names, creation dates, and account info), but it does not mention pagination behavior beyond what the page_token parameter implies, rate limits, or whether the tool is read-only.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with two sentences, front-loaded with the key action and resource. It earns its place without extraneous details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (no required params, no output schema), the description is adequate but could describe the response structure (the page_size schema entry suggests results arrive as account summaries) or explain how page_token pagination works.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so parameters are well-documented. The description adds no additional semantic context beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'List' and the resource, GA4 properties the caller can access. It is distinct from sibling tools like ga_run_report or ga_get_realtime, which focus on reporting rather than listing properties.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description names one concrete use, finding the property ID for ga_run_report queries, but it does not state when not to use the tool versus alternatives, nor does it mention prerequisites like authentication.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ga_run_report (Grade: B)

Query GA4 analytics data by dimensions (e.g., "city", "pagePath") and metrics (e.g., "activeUsers", "sessions") for a date range. Returns aggregated data rows with dimension and metric values.

Parameters (JSON Schema)
Name | Required | Description
limit | No | Maximum number of rows to return (default 100, max 10000)
metrics | Yes | List of metric names (e.g., ["activeUsers", "sessions", "screenPageViews"])
end_date | Yes | End date (YYYY-MM-DD or relative: "today", "yesterday")
order_bys | No | Optional ordering. Each item: { dimension: { dimensionName, orderType? } } or { metric: { metricName }, desc? }
dimensions | No | List of dimension names (e.g., ["city", "pagePath", "date"])
start_date | Yes | Start date (YYYY-MM-DD or relative: "today", "yesterday", "7daysAgo", "30daysAgo")
property_id | Yes | GA4 property ID (numeric, e.g., "123456789")
dimension_filter | No | Optional dimension filter object (GA4 FilterExpression format)
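Pulling the schema's example values together, a hypothetical report request might look like the sketch below; the property ID is a placeholder and the field combination is illustrative.

```python
# Arguments for call_tool("ga_run_report", ...); values are illustrative.
report_args = {
    "property_id": "123456789",  # placeholder GA4 property ID
    "start_date": "30daysAgo",   # relative dates are allowed per the schema
    "end_date": "today",
    "dimensions": ["city", "pagePath"],
    "metrics": ["activeUsers", "sessions"],
    "order_bys": [
        {"metric": {"metricName": "activeUsers"}, "desc": True},
    ],
    "limit": 100,  # default; hard max is 10000
}
```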
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description carries the full burden. It describes a read-only query operation but does not disclose the row limits (default 100, max 10000) or potential delays for large queries; the limit is documented in the schema but not in the description.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose. The inline examples for dimensions and metrics add length but earn their place; overall efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema, the note that aggregated data rows are returned helps, but the description still omits the row structure and error behavior, and the relative date forms (e.g., '7daysAgo') appear only in the schema. Adequate but not thorough.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. Description adds examples for dimensions and metrics but does not add meaning beyond schema for parameters like order_bys or dimension_filter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it runs a report on a GA4 property, specifying dimensions, metrics, and date ranges. However, does not distinguish from sibling tools like ga_get_metadata or ga_get_realtime, which also involve GA data retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. Does not mention prerequisites (e.g., property_id must be valid) or cases where ga_get_realtime or other tools might be more appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall (Grade: A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)
Name | Required | Description
key | No | Memory key to retrieve (omit to list all keys)
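Two hypothetical payloads illustrate the dual behavior: fetch one key, or omit the key entirely to list everything stored.

```python
# Arguments for call_tool("recall", ...).
recall_one = {"key": "subject_property"}  # fetch a single stored memory
recall_all = {}                           # omit key to list all stored keys
```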
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description bears the burden. It discloses that memories can be retrieved by key or listed, and that they persist across sessions. However, it does not mention whether retrieval is destructive, what happens if key doesn't exist, or performance implications. The basic behavior is clear.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the action and key functionality. The second sentence adds context about use case. Efficient, though the second sentence could be integrated.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple schema and no output schema, the description covers the main behavior. However, it lacks details on return format, error handling (e.g., key not found), and whether the memory list is ordered. Adequate but not complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds the context that omitting key lists all memories, which aligns with the schema's optional key. No additional detail beyond the schema's own description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves a stored memory by key or lists all memories. The verb 'retrieve' and resource 'memory' are specific, and the dual functionality (by key or list all) is clearly distinguished from siblings like 'remember' and 'forget'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says when to use it: 'to retrieve context you saved earlier.' It also implies when not to use key (to list all). However, it does not provide explicit alternatives or exclusions relative to other tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember (Grade: A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)
Name | Required | Description
key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference")
value | Yes | Value to store (any text — findings, addresses, preferences, notes)
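A hypothetical payload; the key comes from the schema's own examples and the value is invented for illustration.

```python
# Arguments for call_tool("remember", ...); values are illustrative.
remember_args = {
    "key": "target_ticker",
    "value": "AAPL (user is researching Apple's latest 10-K filing)",
}
```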
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses behavioral traits: stores key-value pairs, persists for authenticated users vs. 24-hour expiration for anonymous sessions. It doesn't mention any destructive behavior or limits like maximum key/value size.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, each adding value: purpose, use cases, and persistence details. No wasted words, front-loaded with the core function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and simple key-value nature, the description fully covers what an agent needs to know: what it stores, when to use it, and persistence behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the description adds little beyond the schema. The description mentions 'key-value pair' but does not provide additional meaning about parameter constraints beyond what the schema already describes.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool stores a key-value pair in session memory, with specific verb 'store' and resource 'key-value pair'. It distinguishes itself from siblings like 'recall' and 'forget' by explaining the memory type and persistence behavior.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly mentions when to use it (save intermediate findings, user preferences, context across calls) and provides context about persistence differences between authenticated and anonymous users. However, it doesn't explicitly state when not to use it or name alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

