
Server Details

Science MCP — free science data APIs

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-science
GitHub Stars: 0
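
Since the server is exposed over Streamable HTTP, any MCP client can connect to it directly. Below is a minimal sketch using the official MCP Python SDK; the endpoint URL is a placeholder, since the listing above does not show the real one.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

# Placeholder -- the listing omits the server's actual endpoint URL.
SERVER_URL = "https://example.com/mcp"

async def main() -> None:
    # streamablehttp_client yields a read stream, a write stream,
    # and a session-id callback (unused here).
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            # Should print the nine tool names reviewed below.
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```

The per-tool snippets below are fragments that reuse the initialized `session` from this sketch.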

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.6/5 across 9 of 9 tools scored. Lowest: 2.9/5.

Server Coherence: A
Disambiguation: 4/5

Most tools have distinct purposes, such as get_air_quality for environmental data and get_apod for astronomy. However, ask_pipeworx and discover_tools could cause confusion: both relate to finding or accessing tools, and their tool-discovery functionality overlaps. The other tools are clearly differentiated by their specific data sources or memory functions.

Naming Consistency: 4/5

The naming follows a consistent verb_noun pattern across most tools, such as get_air_quality and recall, with clear actions and targets. ask_pipeworx deviates slightly by using 'ask' instead of 'get' or similar, and forget is a bare verb without a noun, but overall the conventions are readable and mostly uniform.

Tool Count: 5/5

With 9 tools, the count is well-scoped for a science-oriented server, covering diverse data sources like air quality, earthquakes, and ISS location, along with utility functions for memory and tool discovery. Each tool earns its place without feeling excessive or insufficient for the domain.

Completeness: 3/5

The server provides good coverage for accessing various science-related data sources, but there are notable gaps in lifecycle operations, such as no update or delete functions for the data tools, and the memory tools (remember, recall, forget) are basic without advanced querying. The domain is broad, but the surface lacks comprehensive CRUD or analytical capabilities for deeper science workflows.

Available Tools

9 tools
ask_pipeworx (A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema):
- question (required): Your question or request in natural language
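
A hedged sketch of a call, reusing the `session` from the connection example and one of the description's own sample questions:

```python
# (inside the async context from the connection sketch)
result = await session.call_tool(
    "ask_pipeworx",
    arguments={"question": "What is the US trade deficit with China?"},
)
print(result.content)  # answer selected and filled in by Pipeworx
```
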
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It explains that Pipeworx 'picks the right tool, fills the arguments, and returns the result,' which covers the automation aspect. However, it lacks details on error handling, rate limits, authentication needs, or what happens if no data source is found. The description adds some context but is incomplete for behavioral transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core functionality in the first sentence, followed by explanatory details and examples. Every sentence earns its place by clarifying usage, benefits, or providing concrete examples. It is appropriately sized and structured for easy comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (natural language querying with automated tool selection) and no annotations or output schema, the description does a good job explaining the purpose and usage. However, it lacks details on output format, error cases, or limitations, which would be helpful for completeness. The examples partially compensate, but some gaps remain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'question' parameter documented as 'Your question or request in natural language.' The description reinforces this by stating 'Ask a question in plain English' and providing examples, adding practical meaning beyond the schema. Since there's only one parameter, the baseline is high, and the description effectively complements the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Ask a question in plain English and get an answer from the best available data source.' It specifies the verb ('ask'), resource ('answer'), and mechanism ('Pipeworx picks the right tool, fills the arguments'). It distinguishes from siblings by emphasizing natural language input versus specific tool selection.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'No need to browse tools or learn schemas — just describe what you need.' It provides clear alternatives by implication (use other tools for specific, schema-driven queries) and includes examples ('What is the US trade deficit with China?', etc.) to illustrate appropriate use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema):
- limit (optional): Maximum number of tools to return (default 20, max 50)
- query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
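
A sketch of narrowing a large catalog before choosing a specific tool, reusing the `session` from above:

```python
# (inside the async context from the connection sketch)
result = await session.call_tool(
    "discover_tools",
    arguments={"query": "find trade data between countries", "limit": 10},
)
# Returns up to 10 tool names with descriptions to pick from.
```
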
Behavior: 4/5

With no annotations provided, the description carries the full burden and does well by explaining key behavioral traits: it's a search operation ('Search the Pipeworx tool catalog'), returns results ('Returns the most relevant tools with names and descriptions'), and has a specific use case context ('500+ tools available'). It doesn't mention rate limits or authentication needs, but covers the core behavior adequately.

Conciseness: 5/5

The description is perfectly concise with two sentences that each earn their place: the first explains what the tool does, the second provides crucial usage guidance. No wasted words, front-loaded with core functionality.

Completeness: 4/5

For a search tool with no annotations and no output schema but 100% schema description coverage, the description is quite complete. It explains purpose, usage context, and behavior. The main gap is that it does not describe the return format in detail, but given the tool's relative simplicity, this is a minor omission.

Parameters: 3/5

Schema description coverage is 100%, so the schema already fully documents both parameters. The description doesn't add any parameter-specific information beyond what's in the schema, so it stays at the baseline score of 3: adequate, but nothing extra.

Purpose: 5/5

The description clearly states the tool's purpose with specific verbs ('Search the Pipeworx tool catalog') and resources ('tool catalog'), and explicitly distinguishes it from siblings by noting it's for discovering tools among '500+ tools available' while the other tools are specific data retrieval functions.

Usage Guidelines: 5/5

The description provides explicit usage guidance: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This gives clear when-to-use instructions and implies alternatives (use other tools once you've discovered the right ones).

forget (C)

Delete a stored memory by key.

Parameters (JSON Schema):
- key (required): Memory key to delete
Behavior: 2/5

No annotations are provided, so the description carries full burden. While 'Delete' implies a destructive operation, the description doesn't specify whether deletion is permanent, reversible, requires specific permissions, or has side effects. It also doesn't mention what happens if the key doesn't exist or describe the response format.

Conciseness: 5/5

The description is a single, efficient sentence that communicates the core functionality without any wasted words. It's appropriately sized for a simple tool with one parameter.

Completeness: 2/5

For a destructive operation with no annotations and no output schema, the description is insufficient. It doesn't explain what constitutes a 'stored memory', whether deletion has confirmation steps, what the response looks like, or how this tool relates to sibling memory operations. The context signals indicate this is a mutation tool that needs more behavioral disclosure.

Parameters: 3/5

Schema description coverage is 100%, with the single parameter 'key' documented as 'Memory key to delete'. The description adds minimal value beyond this, only restating that deletion is by key. With complete schema coverage, the baseline of 3 is appropriate.

Purpose: 4/5

The description clearly states the action ('Delete') and resource ('a stored memory by key'), making the purpose immediately understandable. However, it doesn't distinguish this tool from potential siblings like 'recall' or 'remember' that might also interact with memories, so it doesn't reach the highest score.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. With siblings like 'recall' and 'remember' that likely interact with the same memory system, there's no indication of when deletion is appropriate versus retrieval or storage operations.

get_air_quality (B)

Check air quality at a location (e.g., 'New York', 'London'). Returns AQI score, PM2.5, PM10, ozone, and NO2 levels.

Parameters (JSON Schema):
- latitude (required): Latitude
- longitude (required): Longitude
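
Note that the prose description mentions place names while the schema takes coordinates. A sketch passing approximate coordinates for New York, reusing the `session` from above:

```python
# (inside the async context from the connection sketch)
result = await session.call_tool(
    "get_air_quality",
    # The schema wants coordinates, not the city names the
    # description suggests; these approximate New York.
    arguments={"latitude": 40.7128, "longitude": -74.0060},
)
```
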
Behavior: 2/5

With no annotations provided, the description carries full burden for behavioral disclosure. It states what the tool does but doesn't describe important behavioral aspects: what 'near a location' means (radius, precision), what measurements are returned (PM2.5, AQI, etc.), whether there are rate limits, authentication requirements, or error conditions. The description is functional but lacks operational context.

Conciseness: 5/5

The description is a single, efficient sentence with zero wasted words. It's front-loaded with the core purpose and includes all essential elements: action, resource, location context, and data source. Every word earns its place.

Completeness: 3/5

For a simple 2-parameter tool with no annotations and no output schema, the description provides adequate basic context about what the tool does. However, it lacks important operational details that would be helpful for an AI agent: what format/units the measurements are in, what 'near' means, typical response structure, or any limitations. The description is minimally complete but could be more informative.

Parameters: 3/5

Schema description coverage is 100%, so the schema already documents both parameters (latitude, longitude) with basic descriptions. The description adds context that these parameters define 'a location' for air quality measurements, but doesn't provide additional semantic details beyond what the schema states. This meets the baseline for high schema coverage.

Purpose: 4/5

The description clearly states the verb ('Get') and resource ('air quality measurements') with source attribution ('from OpenAQ') and location context ('near a location'). It doesn't differentiate from siblings, which are unrelated (APOD, earthquakes, ISS location), but that's not needed here since they serve completely different domains.

Usage Guidelines: 2/5

No guidance is provided about when to use this tool versus alternatives. The description mentions 'from OpenAQ' which implies this is the data source, but there's no discussion of when to choose this over other air quality APIs or tools, nor any prerequisites or constraints for usage.

get_apod (B)

Get NASA's Astronomy Picture of the Day with image URL, title, and explanation. Optionally specify a date (e.g., '2024-01-15').

Parameters (JSON Schema):
- date (optional): Date in YYYY-MM-DD format (default: today)
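
A sketch of both call shapes, reusing the `session` from above:

```python
# (inside the async context from the connection sketch)
today = await session.call_tool("get_apod", arguments={})
specific = await session.call_tool("get_apod", arguments={"date": "2024-01-15"})
```
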
Behavior: 2/5

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states what the tool does but doesn't cover key traits like rate limits, authentication needs, error handling, or what happens if an invalid date is provided. This is a significant gap for a tool with no annotation coverage.

Conciseness: 5/5

The description is a single, efficient sentence that front-loads the core purpose without any wasted words. It's appropriately sized for a simple tool, earning a high score for conciseness.

Completeness: 3/5

Given the tool's low complexity (1 optional parameter, no output schema, no annotations), the description is minimally adequate but lacks completeness. It doesn't explain return values or potential errors, which would be helpful since there's no output schema, leaving gaps in understanding the tool's full behavior.

Parameters: 3/5

The input schema has 100% description coverage, with the 'date' parameter documented as 'Date in YYYY-MM-DD format (default: today)'. The description doesn't add any meaning beyond this, so it meets the baseline of 3 where the schema does the heavy lifting.

Purpose: 4/5

The description clearly states the tool's purpose with a specific verb ('Get') and resource ('NASA Astronomy Picture of the Day'), making it immediately understandable. However, it doesn't differentiate from sibling tools (e.g., get_air_quality, get_earthquakes), which are distinct but not directly comparable, so a 4 is appropriate rather than a 5.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives, such as whether it's for current or historical images, or if there are limitations like date ranges. It lacks explicit context or exclusions, leaving usage implied at best.

get_earthquakes (C)

Search recent earthquakes by location and magnitude threshold. Returns magnitude, depth, coordinates, and timestamp for each event.

Parameters (JSON Schema):
- days (optional): Look back N days (1-30, default 1)
- min_magnitude (optional): Minimum magnitude (default 4.0)
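
A sketch requesting a wider window than the defaults, reusing the `session` from above:

```python
# (inside the async context from the connection sketch)
result = await session.call_tool(
    "get_earthquakes",
    # Look back a full week and raise the magnitude floor.
    arguments={"days": 7, "min_magnitude": 5.0},
)
```
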
Behavior: 2/5

With no annotations provided, the description carries full burden for behavioral disclosure. 'Get recent earthquakes' implies a read-only operation, but the description doesn't explicitly state this. It also doesn't mention rate limits, authentication requirements, data freshness, or what format/scope the data returns. For a data retrieval tool with zero annotation coverage, this leaves significant behavioral questions unanswered.

Conciseness: 5/5

The description is a single, efficient sentence that communicates the core functionality without any wasted words. It's appropriately sized for a simple data retrieval tool and front-loads the essential information. Every word earns its place in this minimal description.

Completeness: 2/5

Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description is insufficiently complete. It doesn't explain what data is returned, in what format, or any limitations of the USGS data source. Without annotations or output schema, the agent has no information about the response structure or behavioral constraints beyond the basic purpose.

Parameters: 3/5

The input schema has 100% description coverage, with both parameters clearly documented in the schema itself. The description doesn't add any parameter information beyond what's already in the schema. According to the scoring rules, when schema_description_coverage is high (>80%), the baseline is 3 even with no param info in the description, which applies here.

Purpose: 4/5

The description clearly states the action ('Get') and resource ('recent earthquakes from USGS'), making the purpose immediately understandable. It doesn't distinguish from sibling tools (which are unrelated geospatial/astronomy APIs), but that's not necessary since they serve completely different domains. The description is specific enough for the agent to understand this is a data retrieval tool for earthquake information.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. While the sibling tools are unrelated (air quality, astronomy picture, ISS location), there's no mention of whether this is the primary earthquake data source, if there are other earthquake tools, or any prerequisites for use. The agent must infer usage purely from the tool name and description.

get_iss_location (B)

Get the current position of the International Space Station. Returns latitude, longitude, altitude, and velocity.

Parameters (JSON Schema): none
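
With no parameters, a sketch of the simplest possible invocation, reusing the `session` from above:

```python
# (inside the async context from the connection sketch)
result = await session.call_tool("get_iss_location", arguments={})
# Returns latitude, longitude, altitude, and velocity.
```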

Behavior: 2/5

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states what the tool does but doesn't describe how it behaves: Is it real-time or cached data? What's the update frequency? Are there rate limits? Does it require authentication? For a tool with zero annotation coverage, this is a significant gap in behavioral context.

Conciseness: 5/5

The description is a single, clear sentence that states exactly what the tool does with zero wasted words. It's appropriately sized for a simple zero-parameter tool and is perfectly front-loaded with the essential information.

Completeness: 3/5

For a simple zero-parameter tool with no output schema, the description adequately covers the basic purpose. However, without annotations or output schema, it lacks important behavioral context about data freshness, reliability, and format. The description is complete enough for basic usage but leaves gaps that could affect agent decision-making.

Parameters: 4/5

The tool has zero parameters with 100% schema description coverage, so the schema already fully documents the input requirements. The description appropriately doesn't waste space discussing parameters that don't exist. A baseline of 4 is appropriate for zero-parameter tools where the description focuses on purpose rather than input semantics.

Purpose: 4/5

The description clearly states the tool's purpose with a specific verb ('Get') and resource ('current location of the International Space Station'). It distinguishes from siblings by focusing on ISS location rather than air quality, astronomy pictures, or earthquakes. However, it doesn't explicitly differentiate from hypothetical similar tools like 'get_iss_crew' or 'get_iss_speed', so it's not a perfect 5.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention any prerequisites, timing considerations, or comparisons to sibling tools. The agent must infer usage from the name and description alone without explicit context.

recall (A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema):
- key (optional): Memory key to retrieve (omit to list all keys)
Behavior: 3/5

No annotations are provided, so the description carries full burden. It discloses key behavioral traits: retrieval vs. listing based on parameter presence, persistence across sessions, and that memories are 'previously stored' (implying 'remember' must be used first). However, it doesn't mention error handling (e.g., if key doesn't exist), return format, or any rate limits. The description adds useful context but lacks comprehensive behavioral details.

Conciseness: 5/5

The description is two sentences, front-loaded with the core functionality, and every phrase earns its place. The first sentence states the dual operations, and the second provides usage context without redundancy. It's efficiently structured and avoids unnecessary words.

Completeness: 4/5

Given the tool's moderate complexity (retrieval/listing with one optional parameter), no annotations, and no output schema, the description is largely complete. It covers purpose, usage, and parameter semantics well. However, it lacks details on return values (e.g., format of retrieved memory or list) and error cases, which would be helpful for an agent. It compensates somewhat with clear behavioral context but has minor gaps.

Parameters: 4/5

Schema description coverage is 100%, so the schema already documents the optional 'key' parameter. The description adds meaningful semantics by explaining the conditional behavior: 'omit to list all keys' clarifies the dual functionality. It also ties the parameter to 'memory key to retrieve' and 'context you saved earlier', providing context beyond the schema's technical description. With 0 required parameters, this exceeds the baseline of 3.

Purpose: 5/5

The description clearly states the tool's purpose with specific verbs ('retrieve', 'list') and resources ('previously stored memory by key', 'all stored memories'). It distinguishes from siblings like 'remember' (which stores) and 'forget' (which removes). The phrase 'context you saved earlier' reinforces the retrieval function.

Usage Guidelines: 5/5

The description explicitly states when to use this tool ('to retrieve context you saved earlier') and provides clear conditional usage ('omit key to list all keys'). It distinguishes from alternatives by referencing 'saved earlier in the session or in previous sessions', implying 'remember' is for saving. No misleading guidance is present.

remember (A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema):
- key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
- value (required): Value to store (any text: findings, addresses, preferences, notes)
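
A sketch of a full round trip across the three memory tools (remember, recall, forget), reusing the `session` from above:

```python
# (inside the async context from the connection sketch)
# Store a value under a key taken from the schema's own examples.
await session.call_tool(
    "remember", arguments={"key": "target_ticker", "value": "AAPL"}
)
# Read it back, then list every stored key by omitting "key".
stored = await session.call_tool("recall", arguments={"key": "target_ticker"})
all_keys = await session.call_tool("recall", arguments={})
# Clean up.
await session.call_tool("forget", arguments={"key": "target_ticker"})
```
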
Behavior: 4/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the persistence model (authenticated users get persistent memory, anonymous sessions last 24 hours) and the scope (session memory for cross-tool context). It does not cover aspects like rate limits or error conditions, but the disclosed traits are highly relevant for tool selection.

Conciseness: 5/5

The description is efficiently structured: it states the core purpose with examples, then adds critical behavioral context about persistence. Every sentence earns its place with no redundant or vague language, making it easy to parse.

Completeness: 4/5

Given the tool's moderate complexity (storage with persistence rules), no annotations, and no output schema, the description does well by covering purpose, usage, and key behavioral traits. It lacks details on return values or error handling, but for a storage tool, the provided information is largely complete for effective agent use.

Parameters: 3/5

Schema description coverage is 100%, so the schema already fully documents both parameters ('key' and 'value'). The description adds minimal value beyond the schema by implying the parameters are used for storage but does not provide additional syntax, format, or constraints. This meets the baseline for high schema coverage.

Purpose: 5/5

The description clearly states the specific action ('Store a key-value pair') and resource ('in your session memory'), distinguishing it from sibling tools like 'recall' (likely for retrieval) and 'forget' (likely for deletion). It provides concrete examples of what can be stored ('intermediate findings, user preferences, or context across tool calls'), making the purpose unambiguous.

Usage Guidelines: 4/5

The description explicitly states when to use this tool ('to save intermediate findings, user preferences, or context across tool calls'), providing clear context. However, it does not mention when NOT to use it or explicitly name alternatives (e.g., 'recall' for retrieval), which prevents a perfect score.
