
CIPHER x402 — Paid Solana & Crypto Tools

Server Details

8 CIPHER tools — Solana scan, breach check, Jito, FRED, Drift, repo health, more. x402-paid on Base.

Status: Healthy
Transport: Streamable HTTP
Repository: cryptomotifs/cipher-x402-mcp
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.4/5 across 8 of 8 tools scored. Lowest: 3.5/5.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose targeting different domains: Solana finance (drift exposure, wallet scan, Jito tips), security (password breach, security audit), data retrieval (FRED series, GitHub health, premium chapters). No overlap exists—an agent can easily select the right tool based on the task.

Naming Consistency: 5/5

All tools follow a consistent verb_noun or verb_noun_noun pattern (e.g., check_drift_exposure, get_premium_cipher_chapter, solana_wallet_scan). The naming is uniform across the set with no mixing of conventions, making it predictable and readable.

Tool Count: 4/5

With 8 tools, the count is reasonable for a server offering paid crypto and utility tools. It covers multiple domains without being overwhelming, though the scope is broad (Solana, security, data, GitHub), which might feel slightly scattered but remains manageable.

Completeness: 3/5

The server lacks clear domain cohesion, making completeness hard to assess. For Solana tools, there's coverage for exposure, scanning, and tips, but no create/update/delete operations. For other areas like security or data, tools are isolated without lifecycle coverage, leaving gaps for agent workflows.

Available Tools

15 tools
audit_comp_live_listings (A)

Return a curated snapshot of currently-live audit competitions and bug-bounty programs across Code4rena, Cantina, Sherlock, and direct-protocol channels. Useful for solo wardens triaging which contests to enter. Snapshot updates with each cipher-x402-mcp release; treat the data as a hint, always cross-check the platform before submitting. Free, no payment required.

Parameters

No parameters

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description fully discloses data freshness (updates per release), reliability ('hint', 'cross-check'), and cost ('free, no payment required'). No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, front-loaded with core purpose, no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a parameterless tool with no output schema, description covers purpose, usage, data source, freshness, caveats, and cost completely.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist (schema coverage 100%), so the baseline is 4. The description adds no parameter information, but none is needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Describes specific action ('Return a curated snapshot') and resource ('currently-live audit competitions and bug-bounty programs') with explicit platform names. Distinguishes from diverse sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear use case ('solo wardens triaging which contests to enter') and advises cross-checking. Does not explicitly state when not to use or name alternatives, but context is sufficient.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_drift_exposure (A)

Check a Solana wallet's exposure to Drift Protocol — open perp positions, collateral, unrealized PnL, liquidation risk distance. Priced at $0.01 USDC on Base (x402). Pass a signed x402 v2 authorization as the '_payment' argument to unlock the paid response. Without it, the tool returns the 402 accept-list for your wallet to sign.

Parameters

- wallet (required): Base58 Solana wallet address.
- _payment (optional): Signed x402 v2 X-PAYMENT header value (base64-encoded EIP-3009 authorization). If present, forwarded upstream; if absent, the tool returns the 402 accept-list.
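The dual behavior around '_payment' can be illustrated with a small mock. This is not the real client; the accept-list field names below are assumptions loosely based on the x402 spec, and the paid-response shape is invented for illustration:

```python
def call_drift_exposure(wallet, _payment=None):
    """Illustrative mock of an x402-paid tool's dual behavior.

    Without `_payment`, the tool answers with a 402 accept-list describing
    what the wallet should sign; with a signed authorization attached, it
    returns the paid data. Field names here are assumptions, not this
    server's exact payload.
    """
    if _payment is None:
        # Phase 1: no payment attached -> 402 accept-list to sign.
        return {
            "status": 402,
            "accepts": [{
                "scheme": "exact",
                "network": "base",
                "price": "0.01 USDC",  # from the tool description
            }],
        }
    # Phase 2: signed authorization attached -> paid response (shape assumed).
    return {"status": 200, "wallet": wallet, "positions": []}
```

A client therefore always makes at most two calls: one to discover what to sign, one with the signed authorization attached.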
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the dual behavior based on payment presence, the pricing model ($0.01 USDC on Base), the payment mechanism (x402 v2 authorization), and the two possible return types. However, it doesn't mention rate limits, error conditions, or authentication requirements beyond the payment mechanism.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with zero wasted words. The first sentence establishes purpose, the second provides pricing and payment context, and the third explains the dual behavior. Every sentence earns its place by delivering essential information about the tool's operation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a paid API tool with dual behaviors and no output schema, the description does well to explain the core functionality, payment mechanism, and behavioral logic. However, it lacks information about response formats, error handling, or what specific data fields are returned in each scenario, which would be helpful for a tool with this complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description adds some value by explaining the behavioral implications of the '_payment' parameter (determines whether you get paid data vs. accept-list), but doesn't provide additional semantic context beyond what's already in the schema descriptions for either parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Check a Solana wallet's exposure to Drift Protocol') and enumerates the exact resources examined (open perp positions, collateral, unrealized PnL, liquidation risk distance). It distinguishes itself from sibling tools like 'solana_wallet_scan' or 'solana_wallet_security_audit_rules' by focusing specifically on Drift Protocol exposure rather than general scanning or security auditing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: when checking Drift Protocol exposure for a Solana wallet. It clearly distinguishes between two usage scenarios based on the presence of the '_payment' parameter: with payment it returns paid response data, without payment it returns the 402 accept-list for signing. This eliminates ambiguity about alternative approaches within the same tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_password_breach (A)

Check whether a password hash prefix (SHA-1, first 5 chars) appears in the HIBP breach corpus. k-anonymity, no plaintext passwords sent. Priced at $0.005 USDC on Base (x402). Pass a signed x402 v2 authorization as the '_payment' argument to unlock the paid response. Without it, the tool returns the 402 accept-list for your wallet to sign.

Parameters

- sha1_prefix (required): First 5 uppercase hex characters of the SHA-1 hash of the password (e.g. '21BD1').
- _payment (optional): Signed x402 v2 X-PAYMENT header value (base64-encoded EIP-3009 authorization). If present, forwarded upstream; if absent, the tool returns the 402 accept-list.
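The k-anonymity scheme takes only a few lines to implement client-side: only the first five hex characters of the SHA-1 digest ever leave the machine, and the remaining 35 are matched locally against the returned breach entries. `hashlib` is Python's standard library; the helper name is ours:

```python
import hashlib

def hibp_prefix_and_suffix(password: str):
    """Split a password's SHA-1 digest into the 5-char prefix sent to the
    tool and the 35-char suffix kept locally for matching results."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

# SHA-1("password") = 5BAA61E4C9B93F3F0682250B6CF8331B7EE68FD8,
# so the prefix sent over the wire is "5BAA6".
```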
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does so effectively. It discloses key behavioral traits: pricing ($0.005 USDC on Base), payment mechanism (x402 v2 authorization), dual response behavior (breach results vs. accept-list), and security approach (k-anonymity, no plaintext passwords). It doesn't mention rate limits or error handling, but covers the essential operational characteristics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with zero wasted words. The first sentence establishes core functionality, the second explains pricing and payment mechanism, and the third clarifies the dual behavior based on payment presence. Every sentence earns its place by providing essential operational information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no annotations and no output schema, the description provides excellent context about the dual-mode operation, payment requirements, and security approach. It doesn't describe the format of breach results or accept-list responses, which would be helpful given the lack of output schema. However, it covers the essential operational context thoroughly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description adds meaningful context beyond the schema: it explains that '_payment' unlocks paid responses and that its absence triggers accept-list return, and clarifies that sha1_prefix represents 'first 5 uppercase hex characters' for password hash checking. This provides valuable semantic understanding of how parameters affect tool behavior.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Check whether a password hash prefix appears in the HIBP breach corpus'), identifies the resource (HIBP breach corpus), and specifies the technical method (k-anonymity with SHA-1 first 5 chars). It distinguishes itself from sibling tools by focusing on password breach checking rather than drift exposure, financial data, or wallet security.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use the tool: for checking password breach status via hash prefix. It clearly distinguishes between paid and unpaid usage scenarios: with payment returns breach results, without payment returns the 402 accept-list for signing. No sibling tools offer similar functionality, making alternatives clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

coinalyze_funding_rates (A)

Fetch perpetual-futures funding-rate intelligence for a given base asset (e.g. 'BTC', 'ETH', 'SOL') aggregated across 17 major perp venues — Binance, Bybit, OKX, BitMEX, Deribit, dYdX, Hyperliquid, Bitfinex, Huobi, Kraken, Phemex, WOO X, Aster, Lighter, Coinbase, Gate.io, Vertex. Returns per-exchange rate + USD open interest, OI-weighted aggregate funding, divergence in bps, and max/min funding exchange — pre-computed for perp-arbitrage bots. Priced at $0.01 USDC on Base (x402). Pass a signed x402 v2 authorization as the '_payment' argument to unlock the paid response. Without it, the tool returns the 402 accept-list for your wallet to sign.

Parameters

- symbol (required): Base asset ticker (1-12 alphanumerics), e.g. 'BTC', 'ETH', 'SOL', 'DOGE', 'XRP', 'AVAX', 'LINK', 'APT', 'SUI'.
- _payment (optional): Signed x402 v2 X-PAYMENT header value (base64-encoded EIP-3009 authorization). If present, forwarded upstream; if absent, the tool returns the 402 accept-list.
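The derived fields the description promises (OI-weighted aggregate funding, divergence in bps, max/min funding exchange) can be reproduced from per-exchange data. The formulas below are our assumptions matching the description, not the tool's exact upstream math:

```python
def aggregate_funding(rates_by_exchange):
    """Compute OI-weighted aggregate funding and divergence in bps.

    `rates_by_exchange` maps exchange -> (funding_rate, open_interest_usd),
    with funding rates expressed as fractions (0.0001 = 1 bp).
    """
    total_oi = sum(oi for _, oi in rates_by_exchange.values())
    weighted = sum(r * oi for r, oi in rates_by_exchange.values()) / total_oi
    rates = {ex: r for ex, (r, _) in rates_by_exchange.items()}
    hi = max(rates, key=rates.get)
    lo = min(rates, key=rates.get)
    return {
        "oi_weighted_funding": weighted,
        "divergence_bps": (rates[hi] - rates[lo]) * 10_000,
        "max_funding_exchange": hi,
        "min_funding_exchange": lo,
    }
```

With two venues at 1 bp ($100M OI) and 3 bp ($50M OI), the aggregate works out to 1.67 bp and the divergence to 2 bp, which is the spread a perp-arb bot would trade against.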
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: the tool returns 'per-exchange rate + USD open interest, OI-weighted aggregate funding, divergence in bps, and max/min funding exchange,' discloses payment requirements ('Priced at $0.01 USDC on Base'), and explains conditional behavior based on the '_payment' parameter. It lacks details on rate limits or error handling, but covers core operational aspects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by details on returns, pricing, and parameter behavior. Each sentence adds value, such as listing venues, specifying use cases, and explaining payment logic. It could be slightly more concise by trimming the venue list or integrating details more tightly, but overall it's well-structured and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (financial data aggregation with payment logic), no annotations, and no output schema, the description does a solid job. It covers purpose, usage, behavioral traits, and parameter implications. However, it lacks explicit details on output format (e.g., structure of returned data) and error cases, which would enhance completeness for a paid API tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description adds marginal value by explaining the '_payment' parameter's effect ('to unlock the paid response' vs. 'returns the 402 accept-list'), but doesn't provide additional syntax or format details beyond the schema. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool's purpose: 'Fetch perpetual-futures funding-rate intelligence for a given base asset aggregated across 17 major perp venues.' It specifies the verb ('fetch'), resource ('funding-rate intelligence'), and scope ('perpetual-futures', 'base asset', '17 major perp venues'), clearly distinguishing it from sibling tools like 'check_drift_exposure' or 'solana_wallet_scan' which target different domains.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: for 'perp-arbitrage bots' needing aggregated funding rates. It also details when-not scenarios: 'Without [payment], the tool returns the 402 accept-list for your wallet to sign,' clarifying the alternative behavior. No sibling tools overlap in functionality, making this guidance comprehensive.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fred_macro_series (A)

Fetch a Federal Reserve Economic Data (FRED) series by ID — e.g. 'DGS10' (10Y yield), 'WALCL' (Fed balance sheet), 'T10Y2Y' (yield curve). Returns cleaned latest observations. Priced at $0.005 USDC on Base (x402). Pass a signed x402 v2 authorization as the '_payment' argument to unlock the paid response. Without it, the tool returns the 402 accept-list for your wallet to sign.

Parameters

- series_id (required): FRED series ID (e.g. 'DGS10', 'WALCL', 'T10Y2Y', 'DFF').
- _payment (optional): Signed x402 v2 X-PAYMENT header value (base64-encoded EIP-3009 authorization). If present, forwarded upstream; if absent, the tool returns the 402 accept-list.
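For reference, the public FRED observations endpoint this tool presumably wraps looks like the sketch below; the backend choice and parameter defaults are assumptions, and a real call needs a FRED api_key:

```python
from urllib.parse import urlencode

FRED_BASE = "https://api.stlouisfed.org/fred/series/observations"

def fred_observations_url(series_id: str, api_key: str, limit: int = 1) -> str:
    """Build a request URL for the public FRED observations API
    (assumed upstream; 'cleaned latest observations' suggests the tool
    sorts newest-first and trims, as mimicked here)."""
    query = urlencode({
        "series_id": series_id,
        "api_key": api_key,
        "file_type": "json",
        "sort_order": "desc",  # newest observations first
        "limit": limit,
    })
    return f"{FRED_BASE}?{query}"
```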
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure and does so comprehensively. It explains the payment mechanism, pricing details, the conditional behavior based on the '_payment' parameter, what data is returned ('cleaned latest observations'), and the 402 accept-list fallback behavior. This provides complete transparency about how the tool operates.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with zero wasted words. It starts with the core purpose, provides examples, explains the return value, mentions pricing, and clearly documents the conditional payment behavior - all in three well-organized sentences that each serve a distinct purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no annotations and no output schema, the description provides complete context. It explains what the tool does, how to use it, the payment mechanism, pricing, conditional behavior, and return values. Given the complexity of the payment system, this level of detail is appropriate and comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While schema description coverage is 100%, the description adds significant value beyond the schema by explaining the practical implications of the '_payment' parameter - specifically what happens when it's present vs. absent, and that it unlocks paid responses. It also provides real-world examples of series_id values that help users understand what to pass.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verb ('Fetch') and resource ('FRED series by ID'), provides concrete examples of series IDs, and distinguishes it from sibling tools by focusing on economic data retrieval. It goes beyond the name/title to explain what the tool actually does.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool (to fetch FRED economic data series) and includes clear conditional logic for the payment parameter - specifying exactly what happens with and without the '_payment' argument. It also mentions pricing details which helps users understand the cost context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_premium_cipher_chapter (A)

Fetch a CIPHER premium chapter (markdown). Four chapters available: 'mev-deep-dive', 'three-tier-wallet', 'canadian-compliance', 'oracle-cloud-free-tier'. Priced at $0.25 USDC on Base (x402) per chapter. Pass a signed x402 v2 authorization as the '_payment' argument to unlock the paid response. Without it, the tool returns the 402 accept-list for your wallet to sign.

Parameters

- slug (required): Chapter slug: 'mev-deep-dive' | 'three-tier-wallet' | 'canadian-compliance' | 'oracle-cloud-free-tier'.
- _payment (optional): Signed x402 v2 X-PAYMENT header value (base64-encoded EIP-3009 authorization). If present, forwarded upstream; if absent, the tool returns the 402 accept-list.
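Since each call costs $0.25, a client may want to reject unknown slugs before paying. A minimal local guard using the four slugs listed above (the helper name is ours):

```python
VALID_SLUGS = {
    "mev-deep-dive",
    "three-tier-wallet",
    "canadian-compliance",
    "oracle-cloud-free-tier",
}

def validate_slug(slug: str) -> str:
    """Fail fast on typos locally instead of spending $0.25 to find out."""
    if slug not in VALID_SLUGS:
        raise ValueError(
            f"unknown chapter {slug!r}; valid slugs: {sorted(VALID_SLUGS)}"
        )
    return slug
```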
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does so effectively. It discloses key behavioral traits: the payment mechanism (USDC on Base, x402), the conditional response based on '_payment', and the specific chapters available. It doesn't cover rate limits or error handling, but provides substantial context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by essential details (chapters, pricing, payment logic). Every sentence adds critical information with zero waste, making it highly efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (payment-based tool with conditional behavior), no annotations, and no output schema, the description is largely complete. It explains the tool's purpose, usage, parameters, and behavioral outcomes. A slight gap exists in not detailing the exact format of the 402 accept-list or error cases, but it's sufficient for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description adds minimal value beyond the schema by mentioning the payment context for '_payment' and listing the chapters, but doesn't provide additional syntax or format details. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Fetch a CIPHER premium chapter (markdown)') and resource ('premium chapter'), distinguishing it from sibling tools which focus on different domains like security audits, financial calculations, or data checks. It precisely identifies what the tool delivers.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It explicitly states when to use this tool (to fetch premium chapters) and provides clear alternatives based on the '_payment' argument: with payment it unlocks the paid response, without it returns the 402 accept-list. This covers both usage scenarios and exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

github_repo_health (A)

Score a GitHub repository's health — commit cadence, issue-close latency, contributor diversity, CI green rate, release frequency. Returns 0-100 health score with per-factor breakdown. Priced at $0.02 USDC on Base (x402). Pass a signed x402 v2 authorization as the '_payment' argument to unlock the paid response. Without it, the tool returns the 402 accept-list for your wallet to sign.

Parameters

- owner (required): GitHub org or user (e.g. 'solana-labs').
- repo (required): Repo name (e.g. 'solana').
- _payment (optional): Signed x402 v2 X-PAYMENT header value (base64-encoded EIP-3009 authorization). If present, forwarded upstream; if absent, the tool returns the 402 accept-list.
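A "0-100 health score with per-factor breakdown" is naturally modeled as a weighted mean over factor scores. The tool's actual weighting is undocumented, so the equal-weights default below is purely an illustrative assumption:

```python
def health_score(factors, weights=None):
    """Combine per-factor scores (each 0-100) into one 0-100 health score
    plus a per-factor breakdown. Equal weights by default; the real tool's
    weighting is not published, so this is a sketch, not its formula."""
    weights = weights or {name: 1.0 for name in factors}
    total_weight = sum(weights.values())
    score = sum(factors[name] * weights[name] for name in factors) / total_weight
    return {"score": round(score, 1), "breakdown": dict(factors)}
```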
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the tool returns a 0-100 health score with per-factor breakdown, requires payment via a signed authorization for full functionality, and defaults to returning an accept-list without payment. It mentions pricing ($0.02 USDC on Base) and the payment mechanism (x402 v2), though it could add more on error handling or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, starting with the core purpose and metrics, then detailing payment requirements. Every sentence adds value: the first explains what the tool does, the second specifies the return format and pricing, and the third clarifies usage based on payment. It could be slightly more concise by combining some details, but overall it's efficient with minimal waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a paid tool with conditional behavior and no output schema, the description is largely complete. It covers the purpose, metrics, return format, payment mechanism, and usage scenarios. However, it lacks details on error cases (e.g., invalid repository inputs) and does not fully describe the output structure beyond '0-100 health score with per-factor breakdown,' which could be more explicit without an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters (owner, repo, _payment). The description adds context for the '_payment' parameter by explaining its effect on tool behavior (unlocks paid response vs. returns accept-list), but does not provide additional meaning for 'owner' and 'repo' beyond what the schema states. This meets the baseline of 3 when schema coverage is high.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Score a GitHub repository's health') and resources ('GitHub repository'), listing the exact metrics analyzed (commit cadence, issue-close latency, contributor diversity, CI green rate, release frequency). It distinguishes itself from sibling tools by focusing on repository health scoring rather than security audits, wallet scans, or other functions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use the tool: to assess repository health with specific metrics. It clearly distinguishes between paid and unpaid usage scenarios, specifying that without the '_payment' argument, it returns an accept-list for signing, and with it, it returns the health score. This directly addresses when to use the tool versus not using it based on payment requirements.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

jito_tip_calculator (A)

Compute an expected-value-maximizing Jito tip for a Solana arbitrage bundle. Inputs: pool_depth (USD), expected_profit (USD), slot_probability [0..1]. Returns tip lamports + EV breakdown. Priced at $0.01 USDC on Base (x402). Pass a signed x402 v2 authorization as the '_payment' argument to unlock the paid response. Without it, the tool returns the 402 accept-list for your wallet to sign.

Parameters (JSON Schema)

Name | Required | Description
_payment | No | Optional. Signed x402 v2 X-PAYMENT header value (base64-encoded EIP-3009 authorization). If present, forwarded upstream; if absent, tool returns the 402 accept-list.
slot_prob | Yes | Probability the leader slot is a Jito validator [0..1].
pool_depth | Yes | Target pool depth in USD.
expected_profit | Yes | Gross expected profit of the bundle in USD.
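The tool's actual pricing model is not disclosed. As a rough illustration of what "expected-value-maximizing" could mean, here is a hypothetical grid search that assumes the bundle lands with probability tip / (tip + k), where k is a made-up competition constant scaled from pool depth — both assumptions for illustration, not the tool's formula:

```python
def best_tip(pool_depth_usd: float, expected_profit_usd: float,
             slot_prob: float, k_frac: float = 1e-4) -> tuple[float, float]:
    """Grid-search the tip (in USD) that maximizes expected value.

    Hypothetical model: EV = slot_prob * p_win(tip) * (profit - tip),
    with p_win(tip) = tip / (tip + k) and k = k_frac * pool_depth.
    """
    k = k_frac * pool_depth_usd          # assumed competition constant
    best_tip_usd, best_ev = 0.0, 0.0
    steps = 1000
    for i in range(1, steps + 1):
        tip = expected_profit_usd * i / steps   # never tip more than the profit
        p_win = tip / (tip + k)
        ev = slot_prob * p_win * (expected_profit_usd - tip)
        if ev > best_ev:
            best_tip_usd, best_ev = tip, ev
    return best_tip_usd, best_ev

tip_usd, ev_usd = best_tip(pool_depth_usd=500_000,
                           expected_profit_usd=100.0, slot_prob=0.4)
```

The search trades a higher win probability against a smaller captured profit; the real tool presumably also converts the result into lamports at the current SOL price.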
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It thoroughly explains the tool's behavior: it computes a tip based on inputs, returns tip lamports and EV breakdown, requires payment for full response, and returns a 402 accept-list without payment. It also specifies pricing details and authorization requirements, covering mutation, authentication, and response format aspects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by essential details in a logical flow: inputs, returns, payment mechanism, and alternative behavior. Every sentence adds critical information without redundancy, making it highly efficient and well-structured for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of the tool (involving financial calculations, payment, and Solana-specific context) and the absence of annotations and output schema, the description is complete. It covers purpose, inputs, outputs, behavioral nuances, payment requirements, and error/alternative responses, providing all necessary context for an agent to use the tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds value by explaining the purpose of the parameters ('inputs: pool_depth (USD), expected_profit (USD), slot_probability [0..1]') and clarifying the '_payment' argument's role in unlocking paid responses, which enhances understanding beyond the schema's technical definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific verb ('compute') and resource ('Jito tip for a Solana arbitrage bundle') with precise scope ('expected-value-maximizing'). It distinguishes this tool from all sibling tools, which are unrelated to Solana arbitrage or tip calculation, making the purpose immediately distinct and unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('to compute an expected-value-maximizing Jito tip') and provides clear context for when not to use it (e.g., for other Solana operations like wallet scans or unrelated tasks like password checks). It also details the payment requirement and alternative behavior if payment is absent, offering comprehensive usage guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

openfda_adverse_events (A)

openFDA drug adverse-event lookup for the last 12 months. Returns top reactions, report count, and seriousness breakdown (serious vs non-serious). Searches brand, generic, and medicinal-product names in parallel. Priced at $0.005 USDC on Base (x402). Pass a signed x402 v2 authorization as the '_payment' argument to unlock the paid response. Without it, the tool returns the 402 accept-list for your wallet to sign.

Parameters (JSON Schema)

Name | Required | Description
drug | Yes | Drug name (brand or generic), e.g. 'ozempic', 'metformin'.
limit | No | Top-reaction count to return (1-25). Default 10.
_payment | No | Optional. Signed x402 v2 X-PAYMENT header value (base64-encoded EIP-3009 authorization). If present, forwarded upstream; if absent, tool returns the 402 accept-list.
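The public openFDA drug/event endpoint this tool presumably wraps supports count queries, so the three parallel name-field searches it describes might be built like this (field names come from openFDA's drug adverse-event API; the date window and limit are placeholder assumptions):

```python
from urllib.parse import urlencode

OPENFDA = "https://api.fda.gov/drug/event.json"
# The three name fields the tool says it searches in parallel.
NAME_FIELDS = (
    "patient.drug.openfda.brand_name",
    "patient.drug.openfda.generic_name",
    "patient.drug.medicinalproduct",
)

def adverse_event_urls(drug: str, limit: int = 10,
                       since: str = "20240101",
                       until: str = "20241231") -> list[str]:
    """Build one count-query URL per name field; the counts are then
    merged into the top-reactions list the tool returns."""
    urls = []
    for field in NAME_FIELDS:
        params = {
            "search": f'{field}:"{drug}" AND receivedate:[{since} TO {until}]',
            "count": "patient.reaction.reactionmeddrapt.exact",
            "limit": limit,
        }
        urls.append(f"{OPENFDA}?{urlencode(params)}")
    return urls
```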
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the payment mechanism (x402 authorization), cost ($0.005 USDC), dual-mode operation (paid response vs. accept-list return), and search scope (brand, generic, and medicinal-product names in parallel). It doesn't mention rate limits or error handling, but covers most critical aspects for a paid API tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three sentences: first states core functionality, second explains payment mechanism and dual behavior, third clarifies search scope. Every sentence adds essential information with zero wasted words, making it easy to parse and understand the tool's operation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (payment integration, dual modes) and lack of annotations/output schema, the description does well to cover key aspects: purpose, usage conditions, behavioral traits, and cost. It could benefit from mentioning response format details or error cases, but for a tool with 100% schema coverage and clear operational logic, it provides sufficient context for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description adds minimal parameter semantics beyond the schema, mentioning that '_payment' unlocks paid responses and that searches run in parallel across name types, but doesn't provide additional syntax or format details. This meets the baseline expectation when schema coverage is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'openFDA drug adverse-event lookup for the last 12 months. Returns top reactions, report count, and seriousness breakdown.' It specifies the verb (lookup), resource (drug adverse events), time scope (last 12 months), and output details, distinguishing it from sibling tools like pubmed_medical_search or usda_food_nutrition.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines: 'Pass a signed x402 v2 authorization as the '_payment' argument to unlock the paid response. Without it, the tool returns the 402 accept-list for your wallet to sign.' It clearly states when to use the tool (for paid responses vs. accept-list retrieval) and the alternative behavior based on the presence of the payment argument.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

osm_geocode (A)

Forward geocoding via OpenStreetMap Nominatim. Address string → lat/lon + normalized address block (country_code, state, city, postcode, road) + match_quality label. Priced at $0.001 USDC on Base (x402). Pass a signed x402 v2 authorization as the '_payment' argument to unlock the paid response. Without it, the tool returns the 402 accept-list for your wallet to sign.

Parameters (JSON Schema)

Name | Required | Description
q | Yes | Address to geocode (e.g. '1600 Pennsylvania Ave NW, Washington DC').
_payment | No | Optional. Signed x402 v2 X-PAYMENT header value (base64-encoded EIP-3009 authorization). If present, forwarded upstream; if absent, tool returns the 402 accept-list.
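Nominatim's search endpoint (with addressdetails enabled) returns a nested address object; the "normalized address block" the tool describes could plausibly be derived from it like this — the exact field mapping is an assumption, including the city/town/village fallback Nominatim commonly requires:

```python
def normalize(nominatim_result: dict) -> dict:
    """Collapse one Nominatim search result into the flat block the
    tool description lists. Field mapping is an assumption."""
    addr = nominatim_result.get("address", {})
    return {
        "lat": float(nominatim_result["lat"]),
        "lon": float(nominatim_result["lon"]),
        "country_code": addr.get("country_code"),
        "state": addr.get("state"),
        # Nominatim uses town/village where no city key is present.
        "city": addr.get("city") or addr.get("town") or addr.get("village"),
        "postcode": addr.get("postcode"),
        "road": addr.get("road"),
    }

sample = {
    "lat": "38.8977", "lon": "-77.0365",
    "address": {"road": "Pennsylvania Avenue Northwest",
                "city": "Washington", "state": "District of Columbia",
                "postcode": "20500", "country_code": "us"},
}
block = normalize(sample)
```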
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does well by disclosing pricing ($0.001 USDC), the payment mechanism (x402 v2 authorization), and the conditional behavior based on '_payment' presence. However, it doesn't mention rate limits, error conditions, or response-format details beyond what's listed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: first states purpose and outputs, second explains pricing and payment, third explains behavior without payment. Every sentence earns its place by providing essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 2-parameter tool with no annotations and no output schema, the description does well by explaining the conditional behavior and payment system. However, it doesn't describe the actual return format (what the 'normalized address block' structure looks like) or error handling, leaving some gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents both parameters. The description adds context about the payment mechanism's purpose but doesn't provide additional semantic meaning beyond what's in the schema descriptions. Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs 'Forward geocoding via OpenStreetMap Nominatim' with specific outputs (lat/lon + normalized address block + match_quality label), distinguishing it from sibling 'osm_reverse_geocode' which presumably does the opposite operation. The verb 'geocode' and resource 'address string' are precisely defined.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use this tool (for forward geocoding) and provides clear alternatives: with '_payment' argument for paid response, without it to get the 402 accept-list. This gives complete guidance on usage scenarios and prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

osm_reverse_geocode (A)

Reverse geocoding via OpenStreetMap Nominatim. lat/lon → normalized address + place class/type. Priced at $0.001 USDC on Base (x402). Pass a signed x402 v2 authorization as the '_payment' argument to unlock the paid response. Without it, the tool returns the 402 accept-list for your wallet to sign.

Parameters (JSON Schema)

Name | Required | Description
lat | Yes | Latitude (WGS84).
lon | Yes | Longitude (WGS84).
_payment | No | Optional. Signed x402 v2 X-PAYMENT header value (base64-encoded EIP-3009 authorization). If present, forwarded upstream; if absent, tool returns the 402 accept-list.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the payment mechanism ('Priced at $0.001 USDC on Base'), the conditional response based on '_payment' parameter, and the specific return types (normalized address + place class/type with payment, 402 accept-list without). It doesn't cover rate limits or error handling, but provides substantial operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence covers the core purpose, followed by payment details and parameter guidance. Every sentence adds value—no wasted words—and the structure logically flows from what the tool does to how to use it effectively.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description does well to cover purpose, usage, payment behavior, and parameter implications. However, it lacks details on output format specifics (e.g., structure of normalized address) and error cases, which would be helpful for a tool with conditional responses. It's mostly complete but has minor gaps in full context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds meaningful context beyond the schema: it explains that '_payment' is a 'signed x402 v2 authorization' and clarifies the behavioral impact (unlocks paid response vs. returns accept-list). This enhances understanding of parameter purpose and interaction, though it doesn't detail all parameter semantics beyond what's in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verb ('reverse geocoding') and resource ('OpenStreetMap Nominatim'), plus the transformation ('lat/lon → normalized address + place class/type'). It distinguishes from its sibling 'osm_geocode' by specifying reverse geocoding (coordinates to address) versus forward geocoding (address to coordinates).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: for reverse geocoding with OpenStreetMap. It also covers the unpaid path: without payment, it returns an accept-list instead of the paid response, with behavior switching on the '_payment' parameter. This gives clear context for usage decisions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

solana_wallet_scan (A)

Scan a Solana wallet for portfolio value, dust accounts, stale stake accounts, and low-liquidity position warnings. Returns findings + referral CTAs. Priced at $0.01 USDC on Base (x402). Pass a signed x402 v2 authorization as the '_payment' argument to unlock the paid response. Without it, the tool returns the 402 accept-list for your wallet to sign.

Parameters (JSON Schema)

Name | Required | Description
address | Yes | Base58 Solana wallet address (32-44 chars).
_payment | No | Optional. Signed x402 v2 X-PAYMENT header value (base64-encoded EIP-3009 authorization). If present, forwarded upstream; if absent, tool returns the 402 accept-list.
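The schema's address constraint (Base58, 32-44 characters) can be pre-checked client-side before spending on a paid call. A minimal sketch, using only the constraint stated in the schema:

```python
import re

# Base58 alphabet excludes 0, O, I, and l; the schema allows 32-44 chars.
BASE58_RE = re.compile(r"^[1-9A-HJ-NP-Za-km-z]{32,44}$")

def looks_like_solana_address(s: str) -> bool:
    """Cheap pre-flight format check. It does NOT verify that the
    address decodes to 32 bytes, is on-curve, or exists on-chain."""
    return bool(BASE58_RE.fullmatch(s))
```

Running this locally first avoids paying $0.01 for a scan that would fail schema validation anyway.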
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does so well: it discloses payment requirements (priced at $0.01 USDC), authorization needs (signed x402 v2 authorization), and behavioral differences based on '_payment' argument (paid response vs. 402 accept-list). It doesn't mention rate limits or other constraints, but covers key operational traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with core functionality, followed by payment details and behavioral conditions. Every sentence adds value: first explains what the tool does, second covers pricing and authorization, third clarifies the payment argument's effect. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description does well by explaining the tool's purpose, usage, payment model, and behavioral outcomes. It could improve by detailing the exact structure of returned findings or error cases, but it's largely complete for a tool with 2 parameters and clear operational logic.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds some context by explaining the purpose of '_payment' (to unlock paid response or return 402 accept-list), but doesn't provide additional meaning beyond what the schema already documents for parameters like 'address' format.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('scan', 'returns') and resources ('Solana wallet'), detailing what it analyzes (portfolio value, dust accounts, stale stake accounts, low-liquidity position warnings) and what it returns (findings + referral CTAs). It distinguishes from siblings like 'solana_wallet_security_audit_rules' by focusing on scanning rather than auditing rules.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use the tool (to scan a wallet for specific issues) and provides clear alternatives based on payment: with '_payment' argument for paid response, without it for 402 accept-list. It also mentions pricing ($0.01 USDC on Base) and authorization requirements, giving comprehensive guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

solana_wallet_security_audit_rules (A)

Return metadata for the cipher-solana-wallet-audit v1.4.0 ruleset — the free MIT GitHub Action that catches Solana wallet anti-patterns in CI: plaintext private keys (base58 OR hex), seed phrases (in comments OR string literals), Anchor.toml wallet leaks, Token2022 transfer-hook abuse, Drift-hack-derived admin bundles, leaked .env files, hardcoded RPC URLs. Free, no payment required.

Parameters (JSON Schema)

No parameters.
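The ruleset's internals aren't shown here, but two of the anti-patterns it names (plaintext base58 private keys and seed phrases in comments or literals) are commonly caught with regex scanning. A hypothetical sketch of such rules — the patterns and thresholds are illustrative, not the Action's actual rules:

```python
import re

# A 64-byte Solana keypair encodes to 87-88 Base58 characters.
PLAINTEXT_KEY = re.compile(r"[1-9A-HJ-NP-Za-km-z]{87,88}")
# Naive seed-phrase heuristic: 12-24 consecutive short lowercase words.
SEED_PHRASE = re.compile(r"\b(?:[a-z]{3,8}\s+){11,23}[a-z]{3,8}\b")

def scan_line(line: str) -> list[str]:
    """Return the rule IDs triggered by one source line."""
    findings = []
    if PLAINTEXT_KEY.search(line):
        findings.append("plaintext-private-key")
    if SEED_PHRASE.search(line):
        findings.append("possible-seed-phrase")
    return findings
```

In CI, a rule like this would run over the diff and fail the build on any finding; the real Action adds more targeted rules (Anchor.toml leaks, .env files, hardcoded RPC URLs).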

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool returns metadata (a read operation) and mentions the ruleset is free and MIT-licensed, adding some behavioral context like cost and licensing. However, it lacks details on rate limits, error handling, or response format, which are important for a tool with no output schema. It doesn't contradict annotations (none exist).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first part states the core purpose, followed by additional context about the ruleset's function and intended use. It avoids unnecessary fluff, though the repetition of 'free' could be slightly trimmed. Every sentence adds value, such as clarifying the ruleset's purpose and usage context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, no output schema, and 0 parameters, the description provides basic completeness: it explains what the tool does and some context (free, MIT, educational use). However, for a tool returning metadata, it doesn't detail the structure or content of the returned data, which could be important for an AI agent. It's adequate but has clear gaps in output expectations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameters need documentation. The description doesn't add parameter info, but that's fine since there are none. Baseline is 4 for 0 parameters, as it doesn't need to compensate for any gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Return metadata for the cipher-solana-wallet-audit v1.4.0 ruleset.' It specifies the resource (the ruleset) and the action (return metadata), though it doesn't explicitly distinguish from sibling tools like 'solana_wallet_scan' which might have overlapping domains. The additional context about the ruleset's function (catching plaintext Solana private keys, etc.) helps clarify scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context: 'Intended for educational use and agent-driven repo hardening,' which suggests when to use it. However, it doesn't explicitly state when not to use it or name alternatives among siblings (e.g., vs. 'solana_wallet_scan' or 'github_repo_health'). The mention of 'free, no payment required' hints at a cost consideration but isn't a full guideline.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

usda_food_nutrition (A)

USDA FoodData Central nutrition lookup. Query any food (brand or generic) and receive the best-matching record with a clean per_100g macro block (calories_kcal, protein_g, carb_g, fat_g, fiber_g, sugar_g, potassium_mg), plus alternates. Priced at $0.002 USDC on Base (x402). Pass a signed x402 v2 authorization as the '_payment' argument to unlock the paid response. Without it, the tool returns the 402 accept-list for your wallet to sign.

Parameters (JSON Schema)

Name | Required | Description
query | Yes | Food name (e.g. 'banana', 'chicken breast raw').
limit | No | Alternates to return (1-10). Default 5.
_payment | No | Optional. Signed x402 v2 X-PAYMENT header value (base64-encoded EIP-3009 authorization). If present, forwarded upstream; if absent, tool returns the 402 accept-list.
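All of the paid tools above share the same two-step x402 handshake: call once without '_payment' to receive the 402 accept-list, sign one entry, then retry with the signed authorization. A minimal client-side sketch — `call_tool` and `sign_authorization` are hypothetical placeholders for your MCP client and wallet signer, and the accept-list shape is assumed:

```python
def paid_call(call_tool, sign_authorization, name: str, args: dict) -> dict:
    """Two-step x402 flow: probe, sign, retry."""
    # Step 1: call without _payment; a paid tool answers with an accept-list.
    first = call_tool(name, args)
    if "accepts" not in first:
        return first                       # free tool, or no payment required
    # Step 2: sign one accept-list entry (base64-encoded EIP-3009
    # authorization) and retry with it as the '_payment' argument.
    payment = sign_authorization(first["accepts"][0])
    return call_tool(name, {**args, "_payment": payment})
```

The same helper works for every paid tool on this server, since they all gate on the identical '_payment' convention.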
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does so effectively. It discloses critical behavioral traits: the payment requirement ($0.002 USDC on Base), the two distinct response modes (paid nutrition data vs. 402 accept-list), and the need for signed x402 v2 authorization. This goes well beyond what the input schema provides.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with three sentences: purpose statement, payment details, and behavioral consequence of missing payment. Each sentence earns its place, though the first sentence is somewhat dense with multiple clauses.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no annotations and no output schema, the description provides substantial context about behavior, payment requirements, and response formats. It could be more complete by specifying the exact format of the 'clean per_100g macro block' or what 'alternates' includes, but covers the essential operational aspects well.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, providing good documentation for all parameters. The description adds some value by explaining the purpose of '_payment' (unlocks paid response vs. returns 402 accept-list) and mentioning 'query' for food names, but doesn't significantly enhance parameter understanding beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs a 'nutrition lookup' on 'any food' using the USDA FoodData Central database, specifying it returns the best-matching record with detailed per_100g macro data and alternates. It distinguishes itself from siblings by focusing on food nutrition data rather than financial, security, or other domains.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use the tool (for USDA food nutrition queries) and provides clear guidance on payment requirements: with '_payment' argument for paid responses, without it for 402 accept-list. It distinguishes from siblings by its specific data source and payment mechanism.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
