Server Details

Econdata MCP wraps the BLS (Bureau of Labor Statistics) public API v2.

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-econdata
GitHub Stars: 0
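
For context, a minimal sketch of the kind of BLS public API v2 request this server wraps is shown below. The endpoint and payload shape follow the published BLS v2 API; CUUR0000SA0 is the CPI-U (all urban consumers) series cited elsewhere on this page.

```python
# Sketch of a direct BLS public API v2 request (the API this server wraps).
# A free BLS registration key can be added under "registrationkey" if needed.
import requests

resp = requests.post(
    "https://api.bls.gov/publicAPI/v2/timeseries/data/",
    json={"seriesid": ["CUUR0000SA0"], "startyear": "2020", "endyear": "2024"},
    timeout=30,
)
resp.raise_for_status()
# Each data point carries a year, a period code (e.g. "M01"), and a value.
for point in resp.json()["Results"]["series"][0]["data"]:
    print(point["year"], point["period"], point["value"])
```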

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: B)

Average 3.8/5 across 9 of 9 tools scored. Lowest: 2.9/5.

Server Coherence (Grade: A)
Disambiguation: 3/5

Most tools have distinct purposes, but there is some overlap that could cause confusion. For example, 'get_cpi' and 'get_unemployment' are specific economic indicators, while 'get_series' can fetch these same series (e.g., using IDs like CUUR0000SA0 or LNS14000000), creating redundancy. Additionally, 'ask_pipeworx' serves as a high-level query tool that might overlap with the functionality of other tools, though its description clarifies it as a meta-tool for accessing data sources.

Naming Consistency: 3/5

The naming conventions are mixed, with some tools using verb_noun patterns (e.g., 'get_cpi', 'get_unemployment', 'get_series', 'get_employment_by_industry') and others using single verbs or different structures (e.g., 'ask_pipeworx', 'discover_tools', 'forget', 'recall', 'remember'). This inconsistency makes the set less predictable, though the names are generally readable and descriptive.

Tool Count: 4/5

With 9 tools, the count is reasonable and well-scoped for an economic data server. It covers core economic indicators, data retrieval, and memory management, without being overly bloated. However, the inclusion of meta-tools like 'ask_pipeworx' and 'discover_tools' might slightly inflate the count relative to the core data functions.

Completeness: 4/5

The tool set provides good coverage for economic data retrieval, including key indicators like CPI, unemployment, and employment by industry, plus a general 'get_series' for broader BLS data. Memory tools ('remember', 'recall', 'forget') add useful context management. Minor gaps exist, such as lack of tools for international data or more advanced economic metrics, but the core U.S. economic domain is adequately covered for basic queries.

Available Tools

9 tools
ask_pipeworx (Grade: A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters:
- question (required): Your question or request in natural language
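
To make the calling convention concrete, here is a hedged sketch of invoking ask_pipeworx over the Streamable HTTP transport; the server URL is a placeholder, and the JSON-RPC framing follows the standard MCP tools/call convention.

```python
# Hypothetical sketch: invoking ask_pipeworx via MCP's Streamable HTTP
# transport. SERVER_URL is a placeholder, not the server's real endpoint.
import requests

SERVER_URL = "https://example.com/mcp"  # placeholder

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",  # standard MCP method for invoking a tool
    "params": {
        "name": "ask_pipeworx",
        "arguments": {"question": "What is the current US unemployment rate?"},
    },
}

resp = requests.post(
    SERVER_URL,
    json=payload,
    # Streamable HTTP servers may answer with plain JSON or an SSE stream.
    headers={"Accept": "application/json, text/event-stream"},
    timeout=30,
)
print(resp.text)
```
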
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses key behavioral traits: Pipeworx 'picks the right tool, fills the arguments, and returns the result,' which explains the automated tool selection and parameter filling process. However, it doesn't mention rate limits, authentication needs, or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core functionality, followed by supporting details and concrete examples. Every sentence earns its place by clarifying the tool's value proposition, usage context, and practical applications without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with no annotations and no output schema, the description provides good context about the automated backend process and example use cases. However, it doesn't explain the format or structure of returned answers, which would be helpful given the lack of output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the single 'question' parameter. The description adds minimal value beyond the schema by reinforcing that the question is 'in plain English' and providing examples, but doesn't elaborate on constraints or format details. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Ask a question in plain English and get an answer from the best available data source.' It specifies the verb ('ask'), resource ('answer from data source'), and distinguishes from siblings by emphasizing natural language interaction versus structured tool selection.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit guidance is provided: 'No need to browse tools or learn schemas — just describe what you need.' This clearly positions it as an alternative to sibling tools like discover_tools or get_series, indicating when to use this tool (natural language queries) versus others (structured tool invocation).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (Grade: A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters:
- limit (optional): Maximum number of tools to return (default 20, max 50)
- query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
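
Assuming the same JSON-RPC framing sketched under ask_pipeworx above, the arguments object for discover_tools might look like this; the values are illustrative only.

```python
# Hypothetical arguments for a tools/call request to discover_tools.
arguments = {
    "query": "find US inflation and employment data",  # natural language need
    "limit": 10,  # optional; default 20, max 50 per the schema
}
```
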
Behavior: 3/5

No annotations are provided, so the description carries full burden. It discloses the behavioral trait of returning 'most relevant tools with names and descriptions' and suggests calling it first in large tool environments. However, it doesn't mention rate limits, authentication needs, or what happens with no matches, leaving some behavioral aspects unclear.

Conciseness: 5/5

Perfectly concise with two sentences that each earn their place. The first sentence explains the core functionality, and the second provides crucial usage guidance. No wasted words, and information is front-loaded appropriately.

Completeness: 4/5

Given the tool's moderate complexity (search functionality with 2 parameters) and no output schema, the description provides good context about what the tool does and when to use it. However, without annotations or output schema, it could benefit from more detail about return format or error conditions to be fully complete.

Parameters: 3/5

Schema description coverage is 100%, so the schema already fully documents both parameters (query and limit). The description adds marginal value by emphasizing the natural language aspect of the query parameter ('by describing what you need'), but doesn't provide additional semantic context beyond what's in the schema.

Purpose: 5/5

The description clearly states the specific verb ('search') and resource ('Pipeworx tool catalog') with the purpose of finding relevant tools by describing needs. It distinguishes from sibling tools like get_cpi or get_employment_by_industry by focusing on tool discovery rather than data retrieval.

Usage Guidelines: 5/5

Explicitly provides when-to-use guidance: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This gives clear context about appropriate usage scenarios and distinguishes it from direct data access tools.

forget (Grade: C)

Delete a stored memory by key.

Parameters:
- key (required): Memory key to delete
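
For illustration, a minimal arguments object for forget under the same tools/call framing; the key name is hypothetical.

```python
# Hypothetical arguments for a tools/call request to forget. The description
# does not say whether deletion is reversible, so treat it as destructive.
arguments = {"key": "target_ticker"}  # illustrative key name
```
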
Behavior: 2/5

With no annotations provided, the description carries full burden for behavioral disclosure. While 'Delete' implies a destructive mutation, the description doesn't specify whether this operation is reversible, what permissions are required, what happens on success/failure, or any rate limits. This leaves significant behavioral gaps for a destructive operation.

Conciseness: 5/5

The description is a single, efficient sentence with zero wasted words. It's appropriately sized for a simple tool and front-loads the essential information. Every word earns its place.

Completeness: 2/5

For a destructive mutation tool with no annotations and no output schema, the description is insufficient. It doesn't explain what constitutes a 'stored memory', what happens after deletion, error conditions, or return values. Given the complexity of a delete operation and lack of structured documentation, more context is needed.

Parameters: 3/5

Schema description coverage is 100% (the 'key' parameter is fully documented in the schema), so the baseline is 3. The description adds no additional parameter semantics beyond what's already in the schema; it doesn't explain key format, constraints, or examples.

Purpose: 4/5

The description clearly states the action ('Delete') and the target resource ('a stored memory by key'), providing specific verb+resource information. However, it doesn't differentiate this tool from its sibling 'recall' (which likely retrieves memories) or 'remember' (which likely stores memories), missing explicit sibling differentiation.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives like 'recall' or 'remember'. There's no mention of prerequisites, conditions for use, or exclusions. The agent must infer usage from tool names alone.

get_cpi (Grade: B)

Get US Consumer Price Index for all urban consumers over time. Returns monthly index values by year and month to track inflation.

Parameters:
- end_year (optional): End year as 4-digit string (e.g. "2024")
- start_year (optional): Start year as 4-digit string (e.g. "2020")
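
A minimal sketch of the arguments for get_cpi under the same tools/call framing; both fields are optional 4-digit year strings per the schema.

```python
# Hypothetical arguments for a tools/call request to get_cpi.
arguments = {"start_year": "2020", "end_year": "2024"}
```
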
Behavior: 2/5

With no annotations provided, the description carries full burden for behavioral disclosure. It states the tool returns data but doesn't mention whether it's a read-only operation, potential rate limits, data freshness, error conditions, or authentication requirements. For a data retrieval tool with zero annotation coverage, this leaves significant behavioral gaps.

Conciseness: 5/5

The description is perfectly concise: two sentences that efficiently convey the tool's purpose and return format without any wasted words. It's front-loaded with the core functionality and follows with return details, making every sentence earn its place.

Completeness: 3/5

Given the tool's moderate complexity (economic data retrieval with optional date filtering), no annotations, no output schema, and 100% schema coverage, the description is minimally adequate. It explains what data is returned but doesn't cover behavioral aspects like data sources, update frequency, or error handling. The description meets basic requirements but leaves important contextual gaps.

Parameters: 3/5

The description doesn't mention any parameters, but schema description coverage is 100% with both parameters well-documented in the schema (start_year and end_year as optional 4-digit strings). Since the schema does the heavy lifting, the baseline score of 3 is appropriate; the description adds no parameter information beyond what's already in the structured schema.

Purpose: 5/5

The description clearly states the specific action ('Get'), identifies the exact resource ('US Consumer Price Index for All Urban Consumers (BLS series CUUR0000SA0)'), and distinguishes it from siblings by specifying the particular economic indicator. It also details what data is returned ('year, month, and index value for each period'), making the purpose unambiguous.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus the sibling tools (get_employment_by_industry, get_series, get_unemployment). It doesn't mention alternatives, prerequisites, or any context for choosing this specific CPI data tool over others. The agent must infer usage from the tool name alone.

get_employment_by_industry (Grade: A)

Get US non-farm payroll employment by industry (manufacturing, construction, retail, financial, government, etc.). Returns employment figures in thousands by period.

Parameters:
- end_year (optional): End year as 4-digit string (e.g. "2024")
- industry (optional): Industry to retrieve. One of: "total_nonfarm", "manufacturing", "construction", "retail", "financial", "government". Defaults to "total_nonfarm".
- start_year (optional): Start year as 4-digit string (e.g. "2020")
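
A sketch of the arguments for get_employment_by_industry, using one of the six industry values enumerated in the schema.

```python
# Hypothetical arguments for a tools/call request to get_employment_by_industry.
arguments = {
    "industry": "manufacturing",  # defaults to "total_nonfarm" if omitted
    "start_year": "2020",
    "end_year": "2024",
}
```
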
Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the return format ('employment in thousands') but lacks details on data freshness, source, rate limits, error handling, or whether it's a read-only operation. For a data retrieval tool with zero annotation coverage, this leaves significant gaps.

Conciseness: 5/5

The description is a single, efficient sentence that front-loads the purpose and includes essential details like industry options and return format. Every word earns its place without redundancy or unnecessary elaboration.

Completeness: 3/5

Given the tool's moderate complexity (3 parameters, no output schema, no annotations), the description is adequate but incomplete. It covers the core purpose and return units, but lacks behavioral context (e.g., data source, update frequency) and does not fully compensate for the absence of annotations or output schema.

Parameters: 3/5

Schema description coverage is 100%, so the input schema fully documents all parameters. The description adds value by listing industry options and specifying the default, but does not provide additional context beyond what the schema already covers, such as date range implications or data availability constraints.

Purpose: 5/5

The description clearly states the verb ('Get'), resource ('US non-farm payroll employment figures'), and scope ('by industry'), with specific industry options listed. It distinguishes from sibling tools like 'get_cpi' or 'get_unemployment' by focusing on employment data rather than inflation or unemployment rates.

Usage Guidelines: 3/5

The description implies usage for retrieving employment data by industry, but does not explicitly state when to use this tool versus alternatives like 'get_series' or 'get_unemployment'. No guidance on prerequisites, exclusions, or comparative contexts is provided.

get_series (Grade: A)

Fetch any economic time series by ID (e.g., "CUUR0000SA0" for CPI, "LNS14000000" for unemployment). Returns historical data points with dates and values.

Parameters:
- end_year (optional): End year as 4-digit string (e.g. "2024")
- series_id (required): BLS series ID (e.g. "CUUR0000SA0" for CPI)
- start_year (optional): Start year as 4-digit string (e.g. "2020")
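
A sketch of the arguments for get_series; LNS14000000 is the BLS civilian unemployment rate series, which illustrates the overlap with get_unemployment noted under Disambiguation above.

```python
# Hypothetical arguments for a tools/call request to get_series.
arguments = {
    "series_id": "LNS14000000",  # civilian unemployment rate
    "start_year": "2020",
    "end_year": "2024",
}
```
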
Behavior: 3/5

No annotations are provided, so the description carries the full burden. It discloses the return format ('data points with year, period, and value') and provides example series IDs, which adds useful context. However, it lacks details on behavioral traits like rate limits, error handling, or data freshness, which are important for a data-fetching tool.

Conciseness: 5/5

The description is front-loaded with the core purpose, followed by return details and examples, all in two efficient sentences with zero waste. Every sentence earns its place by clarifying functionality and usage.

Completeness: 3/5

Given no annotations and no output schema, the description is moderately complete for a simple data-fetching tool. It covers the purpose, return format, and examples, but lacks details on error cases, rate limits, or output structure beyond basic fields, which could be important for agent invocation.

Parameters: 3/5

Schema description coverage is 100%, so the schema already documents all parameters (series_id, start_year, end_year) with descriptions and optionality. The description adds value by providing example series IDs, but does not explain parameter semantics beyond what the schema provides, such as date format constraints or interactions between start_year and end_year.

Purpose: 5/5

The description clearly states the specific action ('Fetch a BLS time series by series ID') and resource ('BLS time series'), distinguishing it from siblings like get_cpi, get_employment_by_industry, and get_unemployment by being a general-purpose series fetcher rather than a specific metric tool. It provides concrete examples of series IDs to illustrate its scope.

Usage Guidelines: 4/5

The description implies usage by providing example series IDs for common metrics (CPI, unemployment rate, employment), suggesting this tool is for general time series retrieval rather than specialized sibling tools. However, it does not explicitly state when to use this tool versus the alternatives or any exclusions, leaving some ambiguity.

get_unemployment (Grade: A)

Get US civilian unemployment rate trends over time. Returns monthly rates by year and month for analysis and forecasting.

Parameters:
- end_year (optional): End year as 4-digit string (e.g. "2024")
- start_year (optional): Start year as 4-digit string (e.g. "2020")
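
A minimal sketch of the arguments for get_unemployment; the behavior when both years are omitted is not documented, so the range here is explicit.

```python
# Hypothetical arguments for a tools/call request to get_unemployment.
arguments = {"start_year": "2020", "end_year": "2024"}
```
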
Behavior: 3/5

With no annotations provided, the description carries full burden. It discloses the return format (year, month, rate for each period) which is valuable behavioral information, but doesn't mention data freshness, source reliability, rate limits, error conditions, or whether this is a read-only operation (though 'Get' implies reading).

Conciseness: 5/5

The description is perfectly concise with two sentences that each earn their place: first establishes purpose and data source, second specifies return format. No wasted words, well-structured, and front-loaded with essential information.

Completeness: 4/5

For a read-only data retrieval tool with 100% schema coverage and no output schema, the description provides good context: purpose, specific data series, and return format. However, it lacks information about data range defaults (what happens when no parameters provided), temporal granularity, or potential limitations that would be helpful for complete understanding.

Parameters: 3/5

Schema description coverage is 100%, so the schema already fully documents both optional parameters (start_year and end_year). The description adds no additional parameter information beyond what's in the schema, maintaining the baseline score for high schema coverage.

Purpose: 5/5

The description clearly states the specific verb ('Get') and resource ('US civilian unemployment rate over time'), identifies the exact data series (BLS series LNS14000000), and distinguishes from siblings by specifying the unemployment rate data rather than CPI, employment by industry, or generic series data.

Usage Guidelines: 3/5

The description implies usage for retrieving unemployment rate time series data, but provides no explicit guidance on when to use this tool versus alternatives like get_cpi or get_employment_by_industry. There's no mention of prerequisites, limitations, or comparative use cases.

recall (Grade: A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters:
- key (optional): Memory key to retrieve (omit to list all keys)
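
A sketch of the arguments for recall; per the schema, omitting the key lists all stored keys.

```python
# Hypothetical arguments for a tools/call request to recall.
arguments = {"key": "subject_property"}  # or {} to list all stored keys
```
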
Behavior: 3/5

With no annotations provided, the description carries the full burden. It discloses that the tool retrieves stored memories and works across sessions, which is useful behavioral context. However, it doesn't mention potential limitations like memory size, retrieval speed, or error conditions (e.g., if a key doesn't exist).

Conciseness: 5/5

The description is efficiently structured in two sentences. The first sentence explains the core functionality with parameter behavior, and the second provides usage context. Every word serves a purpose with no redundancy.

Completeness: 3/5

For a tool with no annotations and no output schema, the description provides adequate basic information about what the tool does and when to use it. However, it lacks details about return format (e.g., structure of retrieved memories), error handling, or session persistence specifics that would be helpful given the absence of structured metadata.

Parameters: 4/5

The schema description coverage is 100%, so the baseline is 3. The description adds meaningful context by explaining that omitting the key parameter results in listing all stored memories, which clarifies the optional parameter's semantic effect beyond the schema's technical description.

Purpose: 4/5

The description clearly states the tool's purpose: retrieving previously stored memories by key or listing all memories. It specifies the verb 'retrieve' and resource 'memory', but doesn't explicitly differentiate from sibling tools like 'remember' or 'forget' beyond mentioning context saved earlier.

Usage Guidelines: 4/5

The description provides clear context for when to use the tool: 'to retrieve context you saved earlier in the session or in previous sessions.' It also explains the key parameter behavior: 'omit key to list all keys.' However, it doesn't explicitly contrast with alternatives like 'discover_tools' or 'get_series'.

remember (Grade: A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters:
- key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
- value (required): Value to store (any text: findings, addresses, preferences, notes)
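
A sketch of the arguments for remember, pairing an illustrative key with a free-text value.

```python
# Hypothetical arguments for a tools/call request to remember.
arguments = {
    "key": "target_ticker",
    "value": "AAPL",  # any text: findings, addresses, preferences, notes
}
```
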
Behavior: 4/5

With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: it explains storage persistence differences (authenticated vs. anonymous sessions with 24-hour limit) and the tool's purpose for cross-call context. However, it doesn't cover potential limitations like size constraints or error conditions.

Conciseness: 5/5

The description is front-loaded with the core purpose in the first sentence, followed by usage context and behavioral details. Every sentence adds value without redundancy, making it efficient and well-structured for quick comprehension.

Completeness: 4/5

For a tool with no annotations and no output schema, the description does a good job covering purpose, usage, and key behavioral aspects like persistence. However, it lacks details on return values or error handling, which would be helpful given the mutation nature and absence of structured output information.

Parameters: 3/5

Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description adds no additional parameter-specific information beyond what's in the schema, but it contextualizes their use with examples like 'subject_property' and 'user_preference', aligning with the baseline for high schema coverage.

Purpose: 5/5

The description clearly states the verb ('Store') and resource ('key-value pair in your session memory'), and distinguishes from siblings like 'recall' (retrieve) and 'forget' (delete). It specifies the storage mechanism and purpose, making the tool's function unambiguous.

Usage Guidelines: 4/5

The description provides clear context for when to use it ('to save intermediate findings, user preferences, or context across tool calls'), but does not explicitly mention when not to use it or name alternatives like 'recall' for retrieval. It gives practical examples but lacks explicit exclusions.
