
Server Details

FAS MCP — USDA Foreign Agricultural Service (trade & global production data)

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-fas
GitHub Stars: 0

Tool Descriptions (Grade: A)

Average 3.8/5 across 9 of 9 tools scored. Lowest: 3.2/5.

Server Coherence (Grade: A)
Disambiguation: 4/5

Most tools have clearly distinct purposes (e.g., fas_exports vs fas_imports vs fas_production). However, ask_pipeworx and discover_tools overlap slightly in intent (both help find information), though their mechanisms differ enough to be distinguishable.

Naming Consistency: 4/5

Tools follow a consistent snake_case pattern, with verbs like 'ask', 'discover', 'forget', 'recall', 'remember', and 'fas_' prefix for agricultural tools. Minor inconsistency: 'fas_commodity_codes' is a noun phrase while others are verb_noun (e.g., 'fas_exports').

Tool Count: 5/5

9 tools is well-scoped for a server that combines memory management, tool discovery, and USDA FAS data access. Each tool earns its place without redundancy or excess.

Completeness: 3/5

The FAS tools cover exports, imports, production, and commodity codes but miss obvious operations like updating or deleting data (though those may not be applicable for read-only data). Memory tools provide basic CRUD. The ask_pipeworx tool acts as a catch-all, but there is no explicit tool for bulk operations or data export.

Available Tools

9 tools
ask_pipeworx (Grade: A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)
- question (required): Your question or request in natural language
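To make the call shape concrete, here is a minimal sketch of the JSON-RPC 2.0 `tools/call` payload an MCP client would send; the `build_tool_call` helper is illustrative, not part of this server.

```python
import json

def build_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request body for an MCP server."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# ask_pipeworx takes a single required `question` string.
body = build_tool_call("ask_pipeworx",
                       {"question": "What is the US trade deficit with China?"})
print(body)
```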
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description explains that the tool selects the right data source and fills arguments, which adds behavioral context beyond the schema. However, with no annotations, more details on limitations (e.g., data recency, failure modes) would improve transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded, with the first sentence stating the core functionality. It uses bullet-style examples without wasting words, earning its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple schema and lack of output schema, the description adequately covers the tool's behavior. It explains that the result is returned without needing to specify the source, which is sufficient for a straightforward natural language interface.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The only parameter 'question' is well-described in the schema, and the description adds value by explaining that the question should be in natural language and providing examples. Schema coverage is 100%, so the description complements it effectively.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to answer questions in plain English by selecting the best data source. It distinguishes itself from sibling tools by acting as an orchestrator rather than a direct data fetcher.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance by including examples and implying that users should describe what they need without browsing tools. However, it does not specify when not to use this tool or mention alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (Grade: A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)
- limit (optional): Maximum number of tools to return (default 20, max 50)
- query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
Behavior: 3/5

No annotations are provided, so the description must disclose behavior. It says the tool returns matching tools with names and descriptions, but does not detail how search ranking works (embeddings or keyword matching) or whether it has side effects. As a read-only search tool, however, the behavior is relatively straightforward.

Conciseness: 4/5

The description is short and front-loaded with the main purpose. Every sentence adds value, but the final sentence could be tighter (e.g., "Call this first when many tools are available"). No fluff, but slight improvement possible.

Completeness: 4/5

Given the tool is a search tool with no output schema, the description explains what it returns (relevant tools with names and descriptions) and when to use it. It lacks detail on result format or edge cases (e.g., an empty query), but is sufficient for the typical use case.

Parameters: 4/5

Schema coverage is 100%, and the description adds value by illustrating the query parameter with examples ("analyze housing market trends"). The limit parameter is straightforward and described in the schema; the description does not elaborate on it further, but the query examples add meaning.

Purpose: 5/5

The description clearly states that the tool searches a tool catalog by natural language query, returns relevant tools with names and descriptions, and should be called first when many tools are available. This differentiates it from sibling tools, which are domain-specific (e.g., fas_exports, fas_imports) rather than discovery tools.

Usage Guidelines: 5/5

It explicitly states "Call this FIRST when you have 500+ tools available and need to find the right ones for your task." This is clear when-to-use guidance and implies it is a preliminary step before other tools.

fas_commodity_codes (Grade: A)

Search agricultural commodity codes and names. Returns commodity IDs, descriptions, and categories. Use results with fas_production, fas_exports, and fas_imports.

Parameters (JSON Schema)
- search (optional): Search keyword (e.g., "soy", "wheat")
- category (optional): Filter by category: "Grains", "Oilseeds", "Meat", "Dairy", "Fiber", "Sugar", "Tropical"
Behavior: 3/5

Annotations are empty, so the description carries the full burden. It discloses that the tool searches codes and supports filtering, but does not say whether the list is complete or paginated, or whether rate limits apply. Given the simple nature of a lookup tool, the description is adequate.

Conciseness: 5/5

The description is three concise sentences that front-load the purpose, state the return values, and point to the related FAS tools. No wasted words.

Completeness: 4/5

With no output schema, the description doesn't need to explain return values in depth. The tool has two optional parameters with good schema coverage, and the description explains the purpose and usage context. It is complete enough for a simple lookup tool.

Parameters: 3/5

Schema description coverage is 100%, so the baseline is 3. The description notes that results can be filtered by category or keyword, but the schema already says as much; it adds no meaning beyond the schema.

Purpose: 5/5

The description clearly states that the tool searches agricultural commodity codes with names and categories, a specific verb-plus-resource combination. It distinguishes itself from siblings by explaining that these codes feed other FAS tools such as fas_production, and it is the only tool for looking up codes.

Usage Guidelines: 4/5

The description explains when to use this tool: to get commodity codes for use with other FAS tools. It mentions filtering options but does not state when not to use it or compare it to sibling alternatives.

fas_exports (Grade: A)

Check US agricultural exports by commodity and destination. Returns export volumes, values, and trade partner details. Use fas_commodity_codes to find commodity codes (e.g., "corn", "wheat").

Parameters (JSON Schema)
- country (optional): Destination country code (e.g., "CN" for China, "MX" for Mexico, "JP" for Japan); omit for all destinations
- end_year (optional): End year (e.g., "2024")
- commodity (required): Commodity name (e.g., "corn", "soybeans", "wheat", "beef", "pork", "cotton") or commodity code
- start_year (optional): Start year (e.g., "2020")
Behavior: 3/5

The description explains the data source (GATS) and outputs (volumes and values), but lacks detail on data freshness, update frequency, or constraints. With no annotations, it should provide more behavioral context, though it does not contradict the (empty) annotations.

Conciseness: 4/5

The description is concise at three sentences, front-loading the main purpose. Every sentence adds value, though it could be slightly more structured with usage hints.

Completeness: 3/5

Given the tool has four parameters and no output schema, the description partially compensates by noting the output types (volumes, values) and data source. However, it does not explain the time-range parameters or any default behavior, leaving some gaps.

Parameters: 4/5

The input schema already has 100% coverage with descriptions. The description adds context about the data system (GATS) and output types (volumes, values), which goes beyond the schema's field descriptions and helps clarify the purpose of parameters like commodity and country.

Purpose: 4/5

The description clearly states that it gets US agricultural export data by commodity and destination country, specifying the data source (USDA FAS GATS). This distinguishes it from sibling tools like fas_imports and fas_production, though it does not explicitly contrast with them.

Usage Guidelines: 3/5

The description implies use for agricultural export queries, but provides no guidance on when to use this versus fas_imports or fas_production, and no explicit when-not-to-use or alternatives.

fas_imports (Grade: B)

Check US agricultural imports by commodity and origin country. Returns import volumes, values, and source country details. Use fas_commodity_codes to find commodity codes (e.g., "coffee", "cocoa").

Parameters (JSON Schema)
- country (optional): Origin country code (e.g., "BR" for Brazil, "CO" for Colombia); omit for all origins
- end_year (optional): End year
- commodity (required): Commodity name (e.g., "coffee", "cocoa", "sugar", "beef") or commodity code
- start_year (optional): Start year
Behavior: 2/5

Annotations are empty, so the description carries the full burden. It mentions getting data by commodity and origin but does not disclose whether the tool is read-only or idempotent, and makes no mention of rate limits or authentication needs.

Conciseness: 4/5

The description is brief and efficiently states the purpose and data scope. No wasted words, though it could briefly note the optional parameters.

Completeness: 3/5

Given no output schema and simple parameters, the description adequately covers what the tool does. However, it lacks detail on the return structure (e.g., time series or single values) and on how the optional parameters affect results.

Parameters: 3/5

Schema description coverage is 100%, so the baseline is 3. The description does not add meaning beyond the schema; it reiterates commodity and origin with no extra context for start_year or end_year.

Purpose: 4/5

The description clearly states that it gets US agricultural import data by commodity and origin country, mentioning import volumes and values from USDA FAS trade data. It differentiates itself from siblings like fas_exports and fas_production by specifying 'imports' and the data source.

Usage Guidelines: 3/5

The description implies use for US agricultural import queries but does not state when to use this versus the other FAS tools, and gives no guidance on when not to use it or on prerequisites beyond the required commodity parameter.

fas_production (Grade: A)

Get global agricultural production, consumption, and inventory data by commodity and country. Returns production volumes, supply estimates, consumption figures, and trade flows by year.

Parameters (JSON Schema)
- country (optional): Country code (e.g., "US", "BR", "CN"); omit for world totals
- commodity (required): Commodity name (e.g., "corn", "soybeans", "wheat") or PSD commodity code (e.g., "0440000")
- market_year (optional): Market year (e.g., "2024")
Behavior: 3/5

No annotations are provided, so the description carries the burden. It indicates the tool retrieves estimates but does not disclose whether the data is read-only, whether it has rate limits, or whether it requires authentication. The description is accurate but lacks detail on potential delays or data freshness. No contradictions with annotations exist.

Conciseness: 5/5

The description is two sentences, front-loaded with the tool's purpose and scope. Every word adds value; no fluff. It efficiently communicates the core functionality without over-explaining.

Completeness: 3/5

Given the lack of an output schema, the description should hint at the return format. It does not, leaving the agent unsure what to expect, though the tool's name and siblings suggest structured data. The description covers the domain and data types adequately but omits details like pagination or response size limits. It is minimally complete for a data retrieval tool.

Parameters: 4/5

The input schema covers all parameters with descriptions, so the baseline is 3. The description adds context by listing the types of data returned (consumption, stocks, trade flows), which complements the schema. It does not need to restate that the commodity parameter accepts both names and codes, as that is already in the schema. Overall it adds marginal value beyond the schema.

Purpose: 4/5

The description clearly states that the tool retrieves USDA FAS PSD data, specifying the domain (agricultural commodities) and the types of data (production, supply, distribution, consumption, stocks, trade flows). This distinguishes it from sibling tools like fas_exports and fas_imports, which focus on specific trade flows. It could be more precise by noting that it covers estimates and historical data, not just current values.

Usage Guidelines: 3/5

The description implies when to use this tool (for global production and supply data) but does not explicitly contrast it with more specialized siblings like fas_exports or fas_imports, and offers no guidance on when not to use it. It only hints at the optionality of country and market year without fully explaining how they modify the query.

forget (Grade: B)

Delete a stored memory by key.

Parameters (JSON Schema)
- key (required): Memory key to delete
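Since the description does not say whether deletion is reversible, a cautious client might recall the key before forgetting it; a sketch with an in-memory stand-in for the server round-trip (all names here are illustrative):

```python
def safe_forget(call_tool, key: str) -> bool:
    """Recall the key first, then delete only if it exists, since the
    description does not say whether deletion is reversible."""
    existing = call_tool("recall", {"key": key})
    if existing is None:
        return False
    call_tool("forget", {"key": key})
    return True

# In-memory stand-in for the server's key-value store:
store = {"target_ticker": "AAPL"}

def fake_call(name, args):
    if name == "recall":
        return store.get(args["key"])
    if name == "forget":
        store.pop(args["key"], None)

print(safe_forget(fake_call, "target_ticker"), store)
# True {}
```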
Behavior: 3/5

No annotations are provided, so the description must convey behavioral traits. "Delete" implies a destructive action, but the description does not say whether deletion is immediate, requires confirmation, or affects other data. There is no output schema, so the agent does not know the response format.

Conciseness: 4/5

The description is one short sentence with no wasted words. It could be slightly improved by adding "permanently" or "immediately" to clarify behavior.

Completeness: 3/5

The tool is simple (one parameter, no output schema, no annotations). The description covers the basic purpose but lacks behavioral detail such as permanence, error cases, or what happens after deletion. For a simple tool this is adequate but not complete.

Parameters: 3/5

Schema description coverage is 100% for the single parameter 'key', and the description adds minimal value beyond the schema's 'Memory key to delete'. It implies the key identifies the memory, but gives no further detail (e.g., format, uniqueness, case sensitivity).

Purpose: 4/5

The description uses the clear verb 'Delete' and the resource 'stored memory by key', indicating a precise action. Sibling tools 'recall' and 'remember' serve retrieval and storage respectively, so this tool is well differentiated.

Usage Guidelines: 2/5

No guidance on when to use this versus alternatives like 'recall' or 'remember', and no mention of prerequisites or side effects (e.g., whether deletion is permanent or reversible).

recall (Grade: A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)
- key (optional): Memory key to retrieve (omit to list all keys)
Behavior: 4/5

No annotations are provided, so the description carries the full burden. It discloses that the tool retrieves stored memories and that omitting the key lists all of them, implying a read-only operation. No contradiction with the missing annotations.

Conciseness: 5/5

The description is two sentences, concise and front-loaded. Every word adds value; no fluff.

Completeness: 4/5

For a simple retrieval tool with one optional parameter and no output schema, the description is nearly complete. It explains the two modes and the usage context. It could mention that it retrieves from session memory, but that is implied.

Parameters: 4/5

Schema coverage is 100%, and the description adds meaning by explaining that omitting the key lists all memories. The schema describes 'key' as a string to retrieve; the description clarifies its optionality and dual behavior.

Purpose: 5/5

The description clearly states the verb 'retrieve' and the resource 'memory', and distinguishes two modes: retrieving a specific key or listing all. It differentiates itself from siblings like 'remember' and 'forget' by focusing on retrieval.

Usage Guidelines: 4/5

The description explains when to omit the key (to list all) and mentions using the tool for context from earlier sessions. However, it does not state when not to use it or mention alternatives like 'discover_tools'.

remember (Grade: A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)
- key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
- value (required): Value to store (any text — findings, addresses, preferences, notes)
Behavior: 4/5

Since no annotations are provided, the description carries the full burden. It clearly discloses that memory is session-based, with persistence differences for authenticated vs. anonymous users, and implies the operation is non-destructive. This goes beyond minimal disclosure, though it could mention whether overwriting is allowed.

Conciseness: 5/5

The description is concise, with three clear sentences: the first states the purpose, the second provides usage scenarios, and the third explains persistence behavior. No wasted words.

Completeness: 4/5

Given the tool's simplicity (two parameters, no output schema, no annotations), the description is nearly complete. It covers purpose, usage, and persistence behavior. It could mention behavior on duplicate keys or memory limits, but these are minor gaps for a simple store tool.

Parameters: 3/5

Schema description coverage is 100%, so the baseline is 3. The description adds example values (e.g., 'subject_property', 'target_ticker') but no meaning beyond what the schema already provides: useful examples, no additional parameter semantics.

Purpose: 5/5

The description clearly states the tool stores a key-value pair in session memory, with a specific verb ('store') and resource ('key-value pair in your session memory'). It distinguishes itself from siblings like 'recall' and 'forget' by focusing on saving data.

Usage Guidelines: 4/5

The description gives clear context on when to use the tool (saving intermediate findings, user preferences, and context across tool calls) and details persistence (authenticated vs. anonymous). However, it does not say when not to use it or mention alternatives.
