Server Details

DeFi Yield Intelligence MCP — 8 tools: 19K+ pools, risk-adjusted APY, RWA yields.

Status: Healthy
Transport: Streamable HTTP
Repository: ToolOracle/yieldoracle
GitHub Stars: 0
Server Listing: YieldOracle
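
The server is exposed over Streamable HTTP, so a generic MCP client can talk to it directly. Below is a minimal connection sketch using the official MCP Python SDK; it is an illustration only, and since this listing does not display the endpoint URL, `SERVER_URL` is a stand-in to be replaced with the real address.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

# Stand-in only: the listing does not show the server's real endpoint.
SERVER_URL = "https://example.invalid/mcp"


async def main() -> None:
    # Open the Streamable HTTP transport, then perform the MCP handshake.
    async with streamablehttp_client(SERVER_URL) as (read_stream, write_stream, _):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([t.name for t in tools.tools])  # expect the 8 tools reviewed below


asyncio.run(main())
```

The short per-tool snippets later on this page assume a `session` opened this way.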

Tool Descriptions: C

Average 3.2/5 across 8 of 8 tools scored. Lowest: 2.3/5.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap: chain_yields focuses on chain-specific data, health_check provides server status, risk_adjusted offers risk-adjusted rankings, rwa_yield covers real-world assets, stablecoin_yield targets stablecoins, top_yields shows highest APYs, yield_compare enables protocol comparisons, and yield_scan provides deep pool analysis. The descriptions reinforce these unique roles, eliminating any ambiguity.

Naming Consistency: 5/5

All tool names follow a consistent snake_case pattern with clear, descriptive naming: chain_yields, health_check, risk_adjusted, rwa_yield, stablecoin_yield, top_yields, yield_compare, and yield_scan. The naming convention is uniform throughout, making it easy to predict and understand each tool's function.

Tool Count: 5/5

With 8 tools, the count is well-scoped for a yield analysis server, covering key areas like chain-specific yields, risk adjustment, stablecoins, real-world assets, comparisons, and deep scans. Each tool earns its place without redundancy, providing a comprehensive yet manageable surface for DeFi yield research.

Completeness: 5/5

The tool set offers complete coverage for yield analysis in DeFi, including chain-specific data, risk-adjusted rankings, stablecoin and RWA yields, top yield listings, protocol comparisons, and detailed pool scans. There are no obvious gaps; agents can perform comprehensive yield research and strategy development without dead ends.

Available Tools (8 tools)
chain_yields: B

Best yields on a specific chain. Shows top pools, total chain TVL, and average APY. Great for chain-specific farming strategies.

Parameters (JSON Schema):
  chain (optional): Chain: ethereum, solana, arbitrum, base, bsc, polygon, etc. (default: ethereum)
  limit (optional): Results (default: 15)
  min_tvl (optional): Min TVL (default: 50000)
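
As an illustration, a plausible argument object for this schema; the values are invented and every field is optional:

```python
# Illustrative chain_yields arguments (all optional).
# Call as: await session.call_tool("chain_yields", args)
args = {
    "chain": "arbitrum",  # default: "ethereum"
    "limit": 10,          # default: 15
    "min_tvl": 250000,    # default: 50000
}
```
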
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden but offers minimal behavioral disclosure. It mentions outputs (top pools, TVL, APY) but doesn't cover critical aspects like data freshness, rate limits, authentication requirements, error conditions, or whether results are sorted/filtered. For a data query tool with zero annotation coverage, this leaves significant gaps in understanding operational behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise with two sentences that efficiently convey purpose and usage context. It's front-loaded with the core function and avoids unnecessary elaboration. However, the second sentence could be slightly more precise about the strategic application.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 3 parameters, no annotations, and no output schema, the description is incomplete. It doesn't explain return format (e.g., structure of 'top pools'), data sources, update frequency, or error handling. For a yield analysis tool with multiple siblings, more context is needed to distinguish functionality and set proper expectations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so parameters are fully documented in the schema. The description adds no parameter-specific information beyond what the schema provides (chain selection, limit, min_tvl). It mentions 'top pools' which relates to results but doesn't clarify parameter interactions or default behaviors beyond schema defaults.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Best yields on a specific chain' with specific outputs (top pools, total chain TVL, average APY) and context (chain-specific farming strategies). It distinguishes from siblings like 'top_yields' (likely global) by specifying chain focus, but doesn't explicitly contrast with 'yield_scan' or 'yield_compare' which might have overlapping functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context ('Great for chain-specific farming strategies') but doesn't explicitly state when to use this tool versus alternatives like 'top_yields' (likely global yields) or 'yield_compare' (comparative analysis). No explicit exclusions or prerequisites are mentioned, leaving some ambiguity about tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

health_check: C

Server health, data stats (pools/chains/protocols), API status, tool list, pricing.

Parameters (JSON Schema):
  (no parameters)
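
Since the tool takes no input, the argument object is simply empty; a minimal sketch:

```python
# health_check takes no parameters: pass an empty object.
# Call as: await session.call_tool("health_check", args)
args = {}
```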

Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure. While it lists what information the tool provides, it doesn't describe how it behaves: whether it performs active checks or returns cached data, what format the output takes, whether it has rate limits, or what authentication is required. The description is purely informational without behavioral context.

Conciseness: 3/5

The description is concise (one phrase) but poorly structured as a comma-separated list rather than a coherent sentence. While it's brief, the list format makes it read like feature bullet points rather than a tool description. It's front-loaded but lacks grammatical structure that would improve clarity.

Completeness: 2/5

Given no annotations and no output schema, the description is incomplete for a diagnostic tool. It lists what information is returned but provides no context about format, structure, or interpretation of results. For a health check tool that presumably returns system status information, the description should at minimum indicate what constitutes 'healthy' versus 'unhealthy' states or how to interpret the various metrics mentioned.

Parameters: 4/5

The tool has zero parameters with 100% schema description coverage, so the baseline is 4. The description appropriately doesn't waste space discussing non-existent parameters. The schema already fully documents the empty parameter set, so no additional parameter semantics are needed or provided.

Purpose: 3/5

The description lists multiple aspects ('Server health, data stats, API status, tool list, pricing') but doesn't specify a clear action verb or primary purpose. It reads more like a feature list than a tool definition, making the purpose somewhat vague. However, it does indicate this is a diagnostic/monitoring tool rather than a data analysis tool like its siblings.

Usage Guidelines: 2/5

No explicit guidance on when to use this tool versus alternatives. The description doesn't mention when this health check should be performed, what triggers its use, or how it differs from the yield-focused sibling tools. The agent receives no usage context beyond the tool's name and description list.

risk_adjusted: B

APY adjusted for risk — pools ranked by real expected return after factoring liquidity, volatility, IL, and sustainability. The smart way to compare yields.

Parameters (JSON Schema):
  limit (optional): Results (default: 15)
  query (optional): Filter by token/protocol (optional)
  min_tvl (optional): Min TVL (default: 100000)
  pool_id (optional): Specific pool ID
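
A hypothetical invocation; the filter values below are made up, and all four fields are optional:

```python
# Illustrative risk_adjusted arguments. pool_id (omitted here) would
# target a specific pool instead of a fuzzy query filter.
# Call as: await session.call_tool("risk_adjusted", args)
args = {
    "query": "usdc",    # optional token/protocol filter
    "limit": 10,        # default: 15
    "min_tvl": 500000,  # default: 100000
}
```
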
Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure. While it mentions the risk factors considered (liquidity, volatility, IL, sustainability), it doesn't describe how the adjustment is calculated, what data sources are used, update frequency, rate limits, or error conditions. For a complex financial calculation tool with zero annotation coverage, this is insufficient.

Conciseness: 4/5

The description is appropriately concise with two sentences that each add value. The first sentence establishes purpose and scope, the second provides usage context. No wasted words, though it could be slightly more structured with clearer separation of function vs. guidance.

Completeness: 2/5

For a complex financial calculation tool with 4 parameters, no annotations, and no output schema, the description is incomplete. It doesn't explain the return format, what the ranking criteria are, how results are sorted, or what the expected output looks like. The description should provide more context about the calculation methodology and result format.

Parameters: 3/5

Schema description coverage is 100%, so the schema already documents all 4 parameters with their types and basic descriptions. The description doesn't add any parameter-specific information beyond what's in the schema. The baseline of 3 is appropriate when the schema does the heavy lifting, though the description could have explained how parameters interact with the risk adjustment.

Purpose: 4/5

The description clearly states the tool's purpose: to provide APY adjusted for risk, ranking pools by real expected return after factoring specific risk factors (liquidity, volatility, IL, sustainability). It distinguishes from siblings by focusing on risk-adjusted yields rather than raw yields or specific categories. However, it doesn't explicitly name which sibling tools it differs from.

Usage Guidelines: 3/5

The description implies usage context ('smart way to compare yields') suggesting this tool should be used when risk-adjusted comparisons are needed rather than raw yield metrics. However, it doesn't explicitly state when to use this vs. alternatives like 'yield_compare' or 'top_yields', nor does it provide exclusion criteria or prerequisites.

rwa_yield: C

Tokenized treasury and Real World Asset yields — BlackRock, Ondo, Maple, Centrifuge, Goldfinch and more. The institutional DeFi layer.

Parameters (JSON Schema):
  limit (optional): Results (default: 15)
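
The single optional parameter makes for the smallest possible example; the value is illustrative:

```python
# Illustrative rwa_yield arguments; limit caps the result count.
# Call as: await session.call_tool("rwa_yield", args)
args = {"limit": 5}  # default: 15
```
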
Behavior: 2/5

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'yields' but doesn't specify if this is a read-only operation, what data format is returned, potential rate limits, or authentication needs. The description lacks details on behavioral traits like whether it fetches real-time data, historical data, or aggregated statistics, leaving significant gaps for an agent to understand how to invoke it correctly.

Conciseness: 3/5

The description is brief but includes unnecessary marketing phrasing ('The institutional DeFi layer') that doesn't aid functionality. It's front-loaded with the core purpose but could be more structured to separate functional details from promotional content. While not overly verbose, it doesn't maximize efficiency in conveying essential information.

Completeness: 2/5

Given the complexity implied by multiple sibling tools and no annotations or output schema, the description is incomplete. It doesn't explain what the tool returns (e.g., a list of yields, detailed metrics), how it differs from similar tools, or any prerequisites for use. For a tool in a crowded namespace with no structured support, more context is needed to ensure proper agent selection and invocation.

Parameters: 3/5

The input schema has 100% coverage with one parameter ('limit'), clearly described as 'Results (default: 15)'. The description doesn't add any parameter-specific information beyond what the schema provides, such as explaining what 'limit' controls (e.g., number of items, pagination). Since schema coverage is high, the baseline score of 3 is appropriate, as the description doesn't compensate but also doesn't detract.

Purpose: 2/5

The description states the tool provides 'Tokenized treasury and Real World Asset yields' for various platforms, which gives some indication of purpose. However, it's vague about what specific action the tool performs (e.g., list, fetch, compare) and doesn't clearly differentiate from sibling tools like 'chain_yields', 'stablecoin_yield', or 'top_yields', which appear related. The phrase 'The institutional DeFi layer' adds marketing language rather than functional clarity.

Usage Guidelines: 2/5

No explicit guidance is provided on when to use this tool versus alternatives. The description mentions specific platforms (BlackRock, Ondo, etc.), but it doesn't clarify if this tool is for general yields, filtered results, or comparisons with siblings like 'yield_compare' or 'yield_scan'. Without such context, users must infer usage based on tool names alone.

stablecoin_yield: A

Safe stablecoin yields ranked by APY. Only pools with real TVL, filtered for sustainability (<200% APY). Perfect for treasury management.

Parameters (JSON Schema):
  chain (optional): Filter by chain
  limit (optional): Results (default: 15)
  min_tvl (optional): Min TVL (default: 500000)
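
An invented but schema-valid argument object:

```python
# Illustrative stablecoin_yield arguments (all optional).
# Call as: await session.call_tool("stablecoin_yield", args)
args = {
    "chain": "ethereum",  # optional chain filter
    "limit": 10,          # default: 15
    "min_tvl": 1000000,   # default: 500000
}
```
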
Behavior: 3/5

With no annotations provided, the description carries the full burden. It discloses key behavioral traits: safety focus, APY ranking, TVL filtering, and sustainability criteria (<200% APY). However, it doesn't mention rate limits, authentication needs, response format, or potential side effects.

Conciseness: 5/5

The description is extremely concise (two sentences) and front-loaded with the core purpose. Every word earns its place, with no redundant information or unnecessary elaboration.

Completeness: 3/5

For a tool with 3 parameters, 100% schema coverage, but no annotations and no output schema, the description provides adequate context about filtering logic and use case. However, it lacks details about response format, error conditions, or behavioral constraints that would be helpful for an AI agent.

Parameters: 3/5

Schema description coverage is 100%, so the schema already documents all three parameters. The description adds context about filtering criteria (real TVL, <200% APY) but doesn't provide additional parameter-specific semantics beyond what's in the schema descriptions.

Purpose: 4/5

The description clearly states the tool's purpose: to provide 'safe stablecoin yields ranked by APY' with specific filtering criteria (real TVL, sustainability <200% APY). It mentions a use case ('treasury management') but doesn't explicitly differentiate from sibling tools like 'top_yields' or 'yield_scan'.

Usage Guidelines: 3/5

The description implies usage context ('Perfect for treasury management') and mentions filtering criteria, but doesn't provide explicit guidance on when to use this tool versus alternatives like 'risk_adjusted' or 'yield_compare'. No exclusions or prerequisites are stated.

top_yields: A

Get the highest APY yield pools across all chains and protocols. Filter by min TVL, chain, stablecoin-only. Data from 19K+ DeFi pools via DeFiLlama.

Parameters (JSON Schema):
  chain (optional): Filter by chain: ethereum, solana, arbitrum, base, etc.
  limit (optional): Results (1-30, default: 15)
  min_tvl (optional): Minimum TVL in USD (default: 100000)
  stablecoin (optional): 'true' for stablecoin pools only
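
An illustrative argument object. Note that the schema documents `stablecoin` as the string 'true' rather than a boolean, so that convention is followed here:

```python
# Illustrative top_yields arguments (all optional).
# Call as: await session.call_tool("top_yields", args)
args = {
    "chain": "solana",     # optional chain filter
    "limit": 20,           # 1-30, default: 15
    "min_tvl": 100000,     # minimum TVL in USD, default: 100000
    "stablecoin": "true",  # string flag per the schema, not a boolean
}
```
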
Behavior: 3/5

No annotations are provided, so the description carries the full burden. It discloses the data source and scope ('across all chains and protocols'), but lacks details on behavioral traits such as rate limits, authentication needs, or how results are sorted (e.g., by APY). The description is adequate but misses key operational context.

Conciseness: 5/5

The description is front-loaded with the core purpose and efficiently lists filter options and data source in two sentences. Every sentence adds value without redundancy, making it appropriately sized and well-structured for quick understanding.

Completeness: 3/5

Given no annotations and no output schema, the description is moderately complete. It covers the tool's purpose and basic filters but lacks details on output format, error handling, or performance considerations. For a tool with 4 parameters and no structured output, it should provide more context to be fully helpful.

Parameters: 3/5

Schema description coverage is 100%, so the schema already documents all parameters. The description adds minimal value by mentioning filter options ('Filter by min TVL, chain, stablecoin-only'), but does not provide additional semantics beyond what the schema specifies. Baseline 3 is appropriate as the schema handles the heavy lifting.

Purpose: 5/5

The description clearly states the tool's purpose with specific verbs ('Get the highest APY yield pools') and resources ('across all chains and protocols'), distinguishing it from siblings like 'chain_yields' (chain-specific) or 'stablecoin_yield' (stablecoin-only). It specifies the data source ('Data from 19K+ DeFi pools via DeFiLlama'), adding further clarity.

Usage Guidelines: 3/5

The description implies usage by mentioning filter options ('Filter by min TVL, chain, stablecoin-only'), but does not explicitly state when to use this tool versus alternatives like 'stablecoin_yield' or 'chain_yields'. It lacks clear exclusions or direct comparisons to sibling tools, leaving some ambiguity.

yield_compare: A

Compare two protocols side by side — average APY, max APY, TVL, chains supported, top pools. E.g. 'aave-v3' vs 'compound-v3'.

Parameters (JSON Schema):
  protocol_a (optional): First protocol (e.g. 'aave-v3') (required)
  protocol_b (optional): Second protocol (e.g. 'compound-v3') (required)
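
An argument object built from the tool's own example slugs. The schema marks both fields as not required while their descriptions say '(required)'; supplying both, as below, sidesteps the ambiguity:

```python
# Illustrative yield_compare arguments, using the slugs from the
# tool description's own example.
# Call as: await session.call_tool("yield_compare", args)
args = {
    "protocol_a": "aave-v3",
    "protocol_b": "compound-v3",
}
```
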
Behavior: 2/5

No annotations are provided, so the description carries the full burden of behavioral disclosure. While it mentions what metrics are compared, it lacks details on how the comparison is performed (e.g., real-time data, historical averages), potential data sources, rate limits, or error handling. This leaves gaps in understanding the tool's operational behavior.

Conciseness: 5/5

The description is highly concise and well-structured, consisting of a single sentence that front-loads the core functionality followed by a practical example. Every word contributes to understanding the tool's purpose and usage without any redundancy or unnecessary details.

Completeness: 3/5

Given the tool's moderate complexity (comparing multiple metrics), lack of annotations, and no output schema, the description is somewhat complete but has gaps. It covers what the tool does and provides an example, but it does not address behavioral aspects like data freshness, error cases, or output format, which are important for effective use by an AI agent.

Parameters: 3/5

Schema description coverage is 100%, with both parameters ('protocol_a' and 'protocol_b') clearly documented in the schema. The description adds value by providing an example ('aave-v3' vs 'compound-v3') that illustrates parameter usage, but it does not explain parameter semantics beyond what the schema already states, such as protocol naming conventions or validation rules.

Purpose: 5/5

The description clearly states the tool's purpose with a specific verb ('compare') and resource ('two protocols'), listing the exact comparison metrics (average APY, max APY, TVL, chains supported, top pools). It distinguishes itself from sibling tools like 'top_yields' or 'yield_scan' by focusing on side-by-side protocol comparison rather than ranking or scanning.

Usage Guidelines: 4/5

The description provides clear context for when to use this tool (when comparing two specific protocols) and includes an example ('aave-v3' vs 'compound-v3') that illustrates proper usage. However, it does not explicitly state when not to use it or name alternatives among sibling tools, such as using 'top_yields' for ranking instead of direct comparison.

yield_scan: B

Deep scan a specific pool or token — APY breakdown (base + reward), TVL, risk score, IL exposure, 30d average. Search by name, symbol, or pool ID.

Parameters (JSON Schema):
  query (optional): Token or protocol name (e.g. 'USDC', 'aave')
  pool_id (optional): DeFiLlama pool ID for exact match
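
Either a fuzzy query or an exact pool ID can drive the scan; a minimal invented example of each form:

```python
# Illustrative yield_scan arguments: look up by token/protocol name,
# or pin an exact pool with pool_id instead.
# Call as: await session.call_tool("yield_scan", args)
args = {"query": "USDC"}
# args = {"pool_id": "..."}  # placeholder; a real DeFiLlama pool ID goes here
```
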
Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool performs a 'deep scan' and returns specific metrics, but doesn't clarify whether this is a read-only operation, if it requires authentication, rate limits, or what happens on errors. For a tool with no annotation coverage, this leaves significant behavioral gaps unaddressed.

Conciseness: 5/5

The description is efficiently structured in a single sentence that front-loads the core action ('Deep scan a specific pool or token') followed by the returned metrics and search methods. Every element earns its place with no wasted words, making it highly concise and well-organized.

Completeness: 3/5

Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description is minimally adequate. It covers the purpose and parameters but lacks details on behavioral aspects like safety, performance, or error handling. Without annotations or output schema, more context would be beneficial, but it meets basic requirements.

Parameters: 3/5

Schema description coverage is 100%, so the schema already fully documents both parameters ('query' and 'pool_id'). The description adds marginal value by mentioning 'Search by name, symbol, or pool ID', which aligns with the schema but doesn't provide additional syntax or format details. The baseline of 3 is appropriate when the schema does the heavy lifting.

Purpose: 4/5

The description clearly states the tool's purpose: 'Deep scan a specific pool or token' and lists the specific metrics returned (APY breakdown, TVL, risk score, IL exposure, 30d average). It distinguishes itself from siblings like 'top_yields' or 'yield_compare' by focusing on detailed analysis of a single entity rather than comparisons or rankings. However, it doesn't explicitly contrast with 'chain_yields' or 'risk_adjusted', leaving some sibling differentiation incomplete.

Usage Guidelines: 3/5

The description implies usage context by specifying 'Search by name, symbol, or pool ID' and mentioning 'specific pool or token', suggesting this is for detailed analysis rather than broad queries. However, it lacks explicit guidance on when to use this versus alternatives like 'yield_compare' for comparisons or 'top_yields' for rankings. No exclusions or prerequisites are mentioned, leaving usage somewhat ambiguous.
