PreFlyte — DeFi Financial Intelligence for AI Agents

Server Details

DeFi financial intelligence for AI agents — returns oracle & market verification.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.3/5 across 9 of 9 tools scored.

Server Coherence: A
Disambiguation: 4/5

Most tools have distinct purposes with clear boundaries: assess_opportunity provides initial recommendations, check_entry_viability and check_pool_viability validate specific positions, estimate_net_position calculates financial projections, gas_timing advises on transaction timing, get_market_snapshot and get_ranking provide current data, get_returns offers historical data, and verify_claim validates assertions. However, get_ranking and get_returns could potentially overlap in providing yield/return information, though their scopes differ (current ranking vs. historical returns).

Naming Consistency: 5/5

All tool names follow a consistent snake_case pattern with clear verb-noun combinations. The verbs are descriptive and appropriate: assess_, check_, estimate_, gas_, get_, verify_. This consistency makes the tool set predictable and easy to navigate for an agent.

Tool Count: 5/5

With 9 tools, this server is well-scoped for its DeFi financial intelligence purpose. Each tool serves a specific function in the decision-making workflow, from initial assessment to position validation and market verification. The count is neither too sparse nor overwhelming, covering the domain comprehensively without redundancy.

Completeness: 4/5

The tool set provides excellent coverage of the DeFi intelligence domain, including opportunity assessment, market validation, financial estimation, gas timing, market snapshots, rankings, historical returns, and claim verification. A minor gap exists in execution tools—while the server focuses on intelligence, agents might need separate tools to actually execute trades or positions after using these assessment tools, though this is arguably outside the server's stated scope.

Available Tools (9 tools)
assess_opportunity: A
Assess the best DeFi opportunity for a given capital amount and strategy.

This is the "cold start" tool — call it first to understand where your
capital is viable before making any moves. One call gives you chain
viability, ranked opportunities, gas impact, and an actionable recommendation.

Args:
    api_key: Your PreFlyte API key (required).
    asset: Token symbol, e.g. "USDC", "WETH".
    action: "supply" or "borrow".
    position_size_usd: Capital amount in USD.
    strategy: One of "yield_farming", "active_trading", "idle_capital".
    chain: "ethereum", "arbitrum", or "any" (default: "any").
    trades_per_day: For active_trading strategy only. Default 10.

Returns:
    JSON with chain viability, ranked opportunities, gas analysis,
    break-even calculations, and an actionable recommendation.
Parameters (JSON Schema)

Name               Required  Description  Default
asset              Yes       —            —
chain              No        —            any
action             Yes       —            —
api_key            Yes       —            —
strategy           Yes       —            —
trades_per_day     No        —            —
position_size_usd  Yes       —            —
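As a concrete illustration, the call documented above can be issued through the standard MCP tools/call envelope. This is a minimal sketch: the API key and argument values are hypothetical placeholders, and the helper name is not part of PreFlyte.

```python
import json

def build_assess_opportunity_call(api_key, asset, action, position_size_usd,
                                  strategy, chain="any", trades_per_day=10):
    """Build a JSON-RPC 2.0 payload for the MCP tools/call method."""
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "assess_opportunity",
            "arguments": {
                "api_key": api_key,
                "asset": asset,
                "action": action,
                "position_size_usd": position_size_usd,
                "strategy": strategy,
                "chain": chain,  # defaults to "any", per the docstring
                "trades_per_day": trades_per_day,
            },
        },
    }

payload = build_assess_opportunity_call(
    "pf_demo_key", "USDC", "supply", 10_000, "yield_farming")
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to the server's Streamable HTTP endpoint by whatever MCP client library is in use.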
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes what the tool does (assesses opportunities), its scope (chain viability, ranked opportunities, gas impact, recommendations), and output format (JSON with specific fields). However, it doesn't mention potential limitations like rate limits, authentication needs beyond the api_key parameter, or error conditions, leaving some gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and appropriately sized. It starts with a clear purpose statement, follows with usage guidelines, details parameters with explanations, and concludes with return value information. Every sentence adds value without redundancy, making it easy to parse and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (7 parameters, no annotations, no output schema), the description does an excellent job covering purpose, usage, parameters, and expected output. However, it doesn't fully address behavioral aspects like error handling or performance characteristics, and without an output schema, the return structure description could be more detailed, though it lists key components.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds significant meaning beyond the input schema, which has 0% description coverage. It explains each parameter's purpose: api_key (required for authentication), asset (token symbol with examples), action (specific values 'supply' or 'borrow'), position_size_usd (capital amount in USD), strategy (with enumerated options), chain (with defaults and options), and trades_per_day (strategy-specific default). This fully compensates for the schema's lack of descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Assess the best DeFi opportunity for a given capital amount and strategy.' It specifies the verb ('assess'), resource ('DeFi opportunity'), and scope ('for a given capital amount and strategy'), distinguishing it from siblings like 'check_entry_viability' or 'get_ranking' by focusing on comprehensive opportunity assessment rather than specific checks or data retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'This is the "cold start" tool — call it first to understand where your capital is viable before making any moves.' It positions this as the initial step in a workflow, implying alternatives (sibling tools) should be used after this assessment. Though it doesn't name specific alternatives, the context is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_entry_viability: A
Before entering a lending position, verify that the market is actually
open and has capacity.

Checks market status (active, frozen, borrowing enabled), supply/borrow
caps, and available liquidity. Returns a clear viable/not-viable verdict.

Args:
    api_key: Your PreFlyte API key (required).
    protocol: Protocol name — "aave-v3" or "compound-v3".
    chain: Chain name — "ethereum" or "arbitrum".
    asset: Asset symbol — "USDC", "WETH", etc.
    action: "supply" or "borrow".
    amount_usd: Intended position size in USD (optional, for context).

Returns:
    Dictionary with viable (true/false), market_status, capacity details,
    current rate, and confidence level.
Parameters (JSON Schema)

Name        Required  Description  Default
asset       Yes       —            —
chain       Yes       —            —
action      Yes       —            —
api_key     Yes       —            —
protocol    Yes       —            —
amount_usd  No        —            —
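An agent can fail fast by pre-validating arguments against the documented value sets before spending an API call. The sets below are copied from the Args list above; the helper name is hypothetical.

```python
# Documented value sets for check_entry_viability (from the Args list above).
VALID_PROTOCOLS = {"aave-v3", "compound-v3"}
VALID_CHAINS = {"ethereum", "arbitrum"}
VALID_ACTIONS = {"supply", "borrow"}

def validate_entry_args(protocol: str, chain: str, asset: str, action: str) -> dict:
    """Reject values the API documents as invalid before making a call."""
    if protocol not in VALID_PROTOCOLS:
        raise ValueError(f"unknown protocol: {protocol!r}")
    if chain not in VALID_CHAINS:
        raise ValueError(f"unknown chain: {chain!r}")
    if action not in VALID_ACTIONS:
        raise ValueError(f"unknown action: {action!r}")
    return {"protocol": protocol, "chain": chain, "asset": asset, "action": action}

args = validate_entry_args("aave-v3", "arbitrum", "USDC", "supply")
```

This kind of client-side guard is optional; the server presumably returns its own error for invalid values.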
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes what the tool checks (market status, caps, liquidity) and the return format, but doesn't mention authentication requirements (beyond the api_key parameter), rate limits, error conditions, or whether this is a read-only operation. It provides basic behavioral context but misses important operational details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and appropriately sized. It starts with the purpose, explains what gets checked, then provides parameter details and return format. Every sentence adds value with no redundancy or wasted words. The parameter documentation is efficiently formatted.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 parameters, no annotations, no output schema), the description does a good job explaining purpose, parameters, and returns. However, it lacks details about authentication requirements beyond the api_key parameter, doesn't mention rate limits or error handling, and could benefit from more context about what 'viable' means in practice. For a tool with no structured output schema, the return description is helpful but could be more detailed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by providing clear semantics for all 6 parameters. It explains what each parameter represents (api_key as 'Your PreFlyte API key', protocol with specific values, chain options, asset examples, action types, amount_usd purpose) and indicates which are required vs optional.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('verify', 'checks') and resources ('market status', 'supply/borrow caps', 'available liquidity'). It distinguishes from siblings by focusing on pre-entry viability verification rather than opportunity assessment, pool viability, or other functions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Before entering a lending position'), but doesn't explicitly state when not to use it or name specific alternatives among the sibling tools. The context is well-defined but lacks explicit exclusions or comparisons.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_pool_viability: A
Before swapping on Uniswap V3, check if pools have sufficient liquidity
for the intended trade size.

Finds all matching pools for a token pair, assesses trade impact relative
to pool TVL, and recommends the best pool.

Args:
    api_key: Your PreFlyte API key (required).
    chain: Chain name — "ethereum" or "arbitrum".
    token_pair: Token pair separated by "/" — e.g. "WETH/USDC", "WBTC/WETH".
    trade_size_usd: Intended swap size in USD.

Returns:
    Dictionary with per-pool viability (TVL, trade impact, fee cost),
    a recommendation, and data freshness.
Parameters (JSON Schema)

Name            Required  Description  Default
chain           Yes       —            —
api_key         Yes       —            —
token_pair      Yes       —            —
trade_size_usd  Yes       —            —
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes what the tool does (finds pools, assesses impact, recommends best) and mentions data freshness in returns, but doesn't cover important behavioral aspects like rate limits, authentication requirements beyond the api_key parameter, error conditions, or whether this is a read-only operation. The description adds value but leaves gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and appropriately sized: purpose statement first, then what it does, followed by parameter explanations and return value description. Every sentence earns its place with no redundant information. The formatting with clear sections enhances readability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description does a good job covering the tool's purpose, parameters, and return structure. However, for a tool with 4 parameters and no structured safety/behavior annotations, it could benefit from more explicit behavioral context like whether this is a read-only operation, potential rate limits, or error scenarios. The return description is helpful but not as complete as an output schema would be.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by providing clear semantic explanations for all 4 parameters: api_key is explained as 'Your PreFlyte API key (required)', chain specifies allowed values 'ethereum' or 'arbitrum', token_pair shows format with examples 'WETH/USDC', and trade_size_usd clarifies 'Intended swap size in USD'. This adds substantial meaning beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('check', 'finds', 'assesses', 'recommends') and resources ('pools', 'liquidity', 'token pair', 'trade size'). It distinguishes from siblings by focusing on pool viability assessment rather than opportunity analysis, returns calculation, or market snapshots.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Before swapping on Uniswap V3') and what it does relative to the intended action. However, it doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools, though the context implies it's for pre-swap assessment.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

estimate_net_position: A
Detailed financial estimate for a specific lending position — what will
I actually earn (or pay) after gas costs?

Combines current live rates, gas costs, and historical computed returns
to project net yield over a holding period.

Args:
    api_key: Your PreFlyte API key (required).
    protocol: Protocol name — "aave-v3" or "compound-v3".
    chain: Chain name — "ethereum" or "arbitrum".
    asset: Asset symbol — "USDC", "WETH", etc.
    action: "supply" or "borrow".
    position_size_usd: Amount in USD.
    duration_days: Intended holding period in days. Default 30.

Returns:
    Dictionary with position details, current snapshot, cost breakdown,
    projected return, historical context, and confidence level.
Parameters (JSON Schema)

Name               Required  Description  Default
asset              Yes       —            —
chain              Yes       —            —
action             Yes       —            —
api_key            Yes       —            —
protocol           Yes       —            —
duration_days      No        —            —
position_size_usd  Yes       —            —
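The core of such a projection can be approximated with simple-interest arithmetic: gross yield over the holding period minus gas. This sketch is illustrative only — PreFlyte's actual model combines live rates with historical computed returns, which this does not attempt.

```python
def project_net_return(position_size_usd: float, apy_pct: float,
                       duration_days: int, gas_cost_usd: float) -> dict:
    """Simple-interest projection of yield over the holding period, net of gas."""
    gross = position_size_usd * (apy_pct / 100.0) * (duration_days / 365.0)
    return {
        "gross_return_usd": round(gross, 2),
        "gas_cost_usd": round(gas_cost_usd, 2),
        "net_return_usd": round(gross - gas_cost_usd, 2),
    }

# Hypothetical numbers: $10k supplied at 5% APY for 30 days, $8 entry gas.
est = project_net_return(10_000, apy_pct=5.0, duration_days=30, gas_cost_usd=8.0)
```

With these example inputs the gross return is about $41, so gas consumes roughly a fifth of the yield — exactly the effect the tool is designed to surface.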
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses key behavioral traits: it's a projection tool that combines live rates, gas costs, and historical returns. However, it doesn't mention rate limits, authentication requirements beyond the api_key parameter, error conditions, or whether this is a read-only vs. write operation. The description adds meaningful context but leaves gaps in operational behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly structured and front-loaded: purpose statement first, then implementation details, followed by parameter explanations and return format. Every sentence earns its place by adding essential information. The formatting with clear sections makes it easy to parse while maintaining conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (7 parameters, financial projections) and lack of both annotations and output schema, the description does an excellent job covering purpose, parameters, and return structure. The main gap is that without an output schema, the description could provide more detail about the return dictionary's structure. However, it lists the key components (position details, current snapshot, etc.), which is substantial for a tool with no structured output documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by providing detailed parameter semantics in the 'Args' section. Each parameter is clearly explained with examples and constraints (e.g., protocol options, asset examples, default values). This adds substantial value beyond the bare schema, making parameter meanings and usage clear.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('estimate', 'combines', 'project') and resources ('financial estimate', 'lending position', 'net yield'). It distinguishes from siblings by focusing on net position calculations after gas costs, unlike tools like 'get_returns' or 'get_market_snapshot' that likely provide raw data without cost analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('what will I actually earn (or pay) after gas costs?'), but doesn't explicitly state when not to use it or name specific alternatives among siblings. The context implies it's for detailed financial projections rather than quick checks, but lacks explicit exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

gas_timing: A
Tell agents whether now is a good or bad time to transact, based on
historical gas patterns.

Compares current gas to 24-hour and 7-day averages, identifies the
cheapest hours of the day, and estimates reference transaction costs.

Args:
    api_key: Your PreFlyte API key (required).
    chain: Chain name — "ethereum" or "arbitrum".

Returns:
    Dictionary with current gas, 24h/7d context, timing assessment
    with cheapest hours, and reference transaction costs in USD.
Parameters (JSON Schema)

Name     Required  Description  Default
chain    Yes       —            —
api_key  Yes       —            —
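The good/bad verdict can be sketched as a comparison of current gas against recent averages. The 10% tolerance band here is a hypothetical threshold, not PreFlyte's.

```python
def timing_assessment(current_gwei: float, avg_24h_gwei: float,
                      avg_7d_gwei: float, tolerance: float = 0.10) -> str:
    """Label 'now' good/bad/neutral relative to the cheaper recent baseline."""
    baseline = min(avg_24h_gwei, avg_7d_gwei)
    if current_gwei <= baseline * (1 - tolerance):
        return "good"    # meaningfully below recent norms
    if current_gwei >= baseline * (1 + tolerance):
        return "bad"     # meaningfully above recent norms
    return "neutral"

verdict = timing_assessment(current_gwei=20, avg_24h_gwei=30, avg_7d_gwei=28)
```

Using the lower of the two averages as the baseline biases the verdict toward patience, which suits a tool whose purpose is to avoid overpaying for gas.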
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes what the tool does (compares current gas to averages, identifies cheapest hours, estimates costs) and the return format (dictionary with specific fields). However, it does not mention potential limitations like rate limits, data freshness, or error conditions, leaving some behavioral aspects uncovered.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with the first sentence stating the core purpose. Each subsequent sentence adds value by detailing functionality and parameters without redundancy. The structure is efficient with zero wasted text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description is mostly complete. It covers purpose, parameters, and return format adequately. However, it could benefit from more detail on behavioral aspects like error handling or data sources to fully compensate for the lack of annotations and output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds significant meaning beyond the input schema, which has 0% description coverage. It explains that 'api_key' is a required PreFlyte API key and 'chain' accepts 'ethereum' or 'arbitrum', providing essential context not present in the schema. This fully compensates for the schema's lack of descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('tell agents whether now is a good or bad time to transact') and resources ('based on historical gas patterns'), distinguishing it from sibling tools focused on market analysis, viability checks, or returns calculations. It precisely communicates the tool's unique function of gas timing assessment.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('to transact' and 'based on historical gas patterns'), but it does not explicitly state when not to use it or name specific alternatives among the sibling tools. The context is sufficient for general usage but lacks explicit exclusions or comparative guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_market_snapshot: A
Get current market snapshot for a specific lending market.

Returns the latest rates, risk parameters, token price, and gas price.
This is the foundation of RealityCheck — giving agents ground truth
about current market conditions before they act.

Args:
    api_key: Your PreFlyte API key (required).
    protocol: Protocol name — "aave-v3" or "compound-v3".
    chain: Chain name — "ethereum" or "arbitrum".
    asset: Asset symbol — "USDC", "USDT", "DAI", "WETH", "WBTC", "wstETH".

Returns:
    Dictionary with latest lending rates, risk parameters (if available),
    current token price, current gas price, data freshness timestamps,
    and rate_context (7-day averages, standard deviations, anomaly flags).
Parameters (JSON Schema)

Name      Required  Description  Default
asset     Yes       —            —
chain     Yes       —            —
api_key   Yes       —            —
protocol  Yes       —            —
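The rate_context fields (7-day averages, standard deviations, anomaly flags) suggest a z-score style check that an agent could also reproduce client-side. The 2-sigma threshold below is an assumption for illustration; the server's actual flagging logic is not documented here.

```python
def rate_is_anomalous(current_apy_pct: float, avg_7d_pct: float,
                      stddev_7d_pct: float, z_threshold: float = 2.0) -> bool:
    """Flag the current rate when it sits far outside its recent distribution."""
    if stddev_7d_pct == 0:
        # No recent variance: any deviation at all is unusual.
        return current_apy_pct != avg_7d_pct
    z = abs(current_apy_pct - avg_7d_pct) / stddev_7d_pct
    return z > z_threshold

spike = rate_is_anomalous(current_apy_pct=8.0, avg_7d_pct=3.0, stddev_7d_pct=1.0)
```

A rate five standard deviations above its weekly average would be flagged; a rate half a deviation away would not.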
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool returns current data with freshness timestamps and contextual information like 7-day averages, which adds useful behavioral context. However, it does not mention rate limits, authentication needs beyond the api_key parameter, error handling, or data availability constraints, leaving gaps for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded, starting with the core purpose, followed by return details, context, and then parameter and return explanations. Every sentence earns its place by adding value—no redundant or vague statements. It efficiently conveys necessary information without waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (4 required parameters, no annotations, no output schema), the description is largely complete: it covers purpose, parameters, and return values in detail. However, it lacks explicit error cases or data limitations, which could be important for a data-fetching tool. The absence of an output schema is mitigated by the detailed return description, but some behavioral aspects remain uncovered.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by providing detailed semantics for all 4 parameters: it explains that 'api_key' is required and for PreFlyte, lists specific enums for 'protocol' (aave-v3, compound-v3) and 'chain' (ethereum, arbitrum), and specifies allowed 'asset' symbols. This adds significant meaning beyond the bare schema, effectively documenting parameter usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get current market snapshot') and resource ('for a specific lending market'), distinguishing it from siblings like 'get_ranking' or 'get_returns' by focusing on foundational market data rather than rankings or return calculations. It explicitly mentions what data is returned (rates, risk parameters, token price, gas price), making the purpose distinct and comprehensive.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by stating this is 'the foundation of RealityCheck — giving agents ground truth about current market conditions before they act,' suggesting it should be used as a preliminary step. However, it does not explicitly state when to use this tool versus alternatives like 'assess_opportunity' or 'check_entry_viability,' nor does it provide exclusions or prerequisites beyond the required arguments.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_ranking: A
Get ranked list of best DeFi lending returns, sorted by net APY.

Args:
    api_key: Your PreFlyte API key (required).
    window_days: Return window — 7, 14, 30, or 90 days. Default 7.
    strategy: "supply" or "borrow". Default "supply".
    top_n: Number of top results. Default 10, max 30.

Returns:
    Ranked list with position number, net_apy_pct, gross_apy_pct,
    gas details, and data completeness scores.
Parameters (JSON Schema)

Name         Required  Description  Default
top_n        No        —            —
api_key      Yes       —            —
strategy     No        —            supply
window_days  No        —            —
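The documented constraints on get_ranking's arguments can be enforced client-side before calling. This is a sketch; clamping top_n to the documented maximum rather than rejecting it is a design choice of this example, not server behavior.

```python
VALID_WINDOWS = (7, 14, 30, 90)          # documented window_days options
VALID_STRATEGIES = ("supply", "borrow")  # documented strategy options
MAX_TOP_N = 30                           # documented cap on top_n

def ranking_arguments(api_key: str, window_days: int = 7,
                      strategy: str = "supply", top_n: int = 10) -> dict:
    """Build get_ranking arguments, enforcing the documented constraints."""
    if window_days not in VALID_WINDOWS:
        raise ValueError(f"window_days must be one of {VALID_WINDOWS}")
    if strategy not in VALID_STRATEGIES:
        raise ValueError(f"strategy must be one of {VALID_STRATEGIES}")
    top_n = max(1, min(top_n, MAX_TOP_N))  # clamp into [1, 30]
    return {"api_key": api_key, "window_days": window_days,
            "strategy": strategy, "top_n": top_n}

args = ranking_arguments("pf_demo_key", window_days=30, top_n=50)
```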
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses that it returns a ranked list with specific fields (net_apy_pct, gas details, etc.), but lacks details on permissions, rate limits, or data freshness. It adds some behavioral context but is incomplete for a tool with multiple parameters and no output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose, followed by clear sections for arguments and returns. Every sentence adds value without redundancy, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (4 parameters, no annotations, no output schema), the description is largely complete—it explains the tool's purpose, parameters, and return structure. However, it lacks usage guidelines and some behavioral details (e.g., error handling), leaving minor gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds significant meaning beyond the input schema, which has 0% description coverage. It explains each parameter's purpose, required status, default values, and constraints (e.g., 'window_days' options, 'top_n' max), fully compensating for the schema's lack of documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get ranked list'), resource ('best DeFi lending returns'), and sorting criteria ('sorted by net APY'), distinguishing it from siblings like 'get_returns' or 'get_market_snapshot' by focusing on ranked returns rather than raw or snapshot data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives is provided. While the purpose implies ranking returns, it doesn't specify scenarios or differentiate from siblings like 'assess_opportunity' or 'get_returns', leaving usage context unclear.

get_returns (Grade: A)
Query historical DeFi lending returns from the ProfitLens engine.

Returns real, empirically measured after-fee returns — not theoretical
APY projections. Data is computed from on-chain index ratios every 30 min.

Args:
    api_key: Your PreFlyte API key (required).
    chain: Filter by chain — "ethereum" or "arbitrum". Empty = all chains.
    protocol: Filter by protocol — "aave-v3" or "compound-v3". Empty = all.
    asset: Filter by asset symbol — "USDC", "WETH", etc. Empty = all.
    strategy: Filter by strategy — "supply" or "borrow". Empty = both.
    window_days: Return window — 7, 14, 30, or 90 days. Default 7.
    limit: Max results to return. Default 20, max 50.

Returns:
    Dictionary with 'results' array and metadata. Each result includes:
    gross_apy_pct, net_apy_pct, gas_cost_pct, data_completeness_pct,
    chain, protocol, asset, strategy, and more.
Parameters (JSON Schema)

Name         Required  Description  Default
asset        No
chain        No
limit        No
api_key      Yes
protocol     No
strategy     No
window_days  No
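Because every filter follows the "Empty = all" convention, a caller can omit unset filters rather than send empty strings. The helper below is a hypothetical sketch assuming only what the docstring states: the filter names, the default window of 7 days, and the limit cap of 50.

```python
# Hypothetical query builder for get_returns. Per the docstring,
# an empty filter means "all", so empty strings are dropped, and
# the result limit is capped at 50.
def build_returns_query(api_key, chain="", protocol="", asset="",
                        strategy="", window_days=7, limit=20):
    query = {"api_key": api_key, "window_days": window_days,
             "limit": min(limit, 50)}  # docstring: max 50
    for name, value in (("chain", chain), ("protocol", protocol),
                        ("asset", asset), ("strategy", strategy)):
        if value:  # skip "Empty = all" filters entirely
            query[name] = value
    return query

q = build_returns_query("pf_demo_key", chain="arbitrum", asset="USDC", limit=80)
```

Omitting empty filters keeps the request payload minimal and makes it obvious which constraints the agent actually applied.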
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: data source ('on-chain index ratios every 30 min'), data type ('real, empirically measured after-fee returns'), and output structure. However, it misses details like rate limits, error handling, or authentication depth beyond the required api_key.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose, followed by parameter details and return format. Every sentence adds value: the first paragraph explains data nature, the second details computation, and the bulleted lists efficiently document inputs and outputs without redundancy.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 7 parameters, no annotations, and no output schema, the description does an excellent job covering inputs and return structure. It falls slightly short by not mentioning potential errors, rate limits, or data freshness, which would help an agent use it more robustly in a DeFi context.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate fully. It provides detailed semantics for all 7 parameters, including purpose, allowed values, defaults, and constraints (e.g., 'Empty = all chains', 'Default 7', 'max 50'). This adds significant meaning beyond the basic schema titles.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Query historical DeFi lending returns from the ProfitLens engine.' It specifies the verb 'query' and resource 'historical DeFi lending returns,' distinguishing it from siblings like 'get_market_snapshot' or 'get_ranking' by focusing on returns data rather than market states or rankings.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for querying historical returns data, but does not explicitly state when to use this tool versus alternatives like 'assess_opportunity' or 'get_market_snapshot.' It provides context on data type (real returns vs. projections) but lacks direct guidance on tool selection among siblings.

verify_claim (Grade: A)
Verify factual claims about current DeFi market conditions.

Supports two modes:
- Single claim: provide claim_type, value, operator (and protocol/chain/asset
  as needed). Returns one verification result.
- Batch mode: provide a JSON-encoded array in 'claims'. Each element has
  the same fields (claim_type, value, operator, protocol, chain, asset).
  Returns all results in one response. If 'claims' is provided, single-claim
  parameters are ignored.

Args:
    api_key: Your PreFlyte API key (required).
    claim_type: What you're checking. One of:
        "supply_rate" — current supply APY (%)
        "borrow_rate" — current borrow APY (%)
        "price" — current token price (USD)
        "gas" — current base fee (gwei)
        "utilization" — current pool utilization (%)
    value: The numeric value you believe to be true.
    operator: Comparison operator. One of:
        "above" — actual must be >= value
        "below" — actual must be <= value
        "around" — actual must be within 10% of value
    protocol: Required for supply_rate, borrow_rate, utilization.
              Use "aave-v3" or "compound-v3".
    chain: Chain name — "ethereum" or "arbitrum". Default "ethereum".
    asset: Required for supply_rate, borrow_rate, price, utilization.
           Use token symbol like "USDC", "WETH", etc.
    claims: JSON-encoded array of claim objects for batch verification.
            Each object contains: claim_type, value, operator, and optionally
            protocol, chain, asset. When provided, single-claim params are ignored.

Returns:
    Single mode: Dictionary with status (TRUE/FALSE), actual_value,
    claimed_value, delta, delta_pct, data_timestamp, and summary.

    Batch mode: Dictionary with 'mode', 'results' array, 'summary' counts,
    and 'verified_at' timestamp.

Examples:
    Single:
        verify_claim(api_key="...", claim_type="supply_rate", value=5.0,
                     operator="above", protocol="aave-v3", asset="USDC")

    Batch:
        verify_claim(api_key="...", claims='[{"claim_type": "supply_rate", ...}, ...]')
Parameters (JSON Schema)

Name         Required  Description  Default
asset        No
chain        No                     ethereum
value        No
claims       No
api_key      Yes
operator     No                     above
protocol     No
claim_type   No
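Batch mode takes a JSON-encoded array string, which is easy to get wrong by hand. The encoder below is a hypothetical sketch; the field names and the rule that supply_rate, borrow_rate, and utilization claims require a protocol come from the docstring above.

```python
import json

# Hypothetical encoder for the 'claims' parameter of verify_claim.
# Field names (claim_type, value, operator, protocol, chain, asset)
# and the protocol requirement come from the docstring.
NEEDS_PROTOCOL = {"supply_rate", "borrow_rate", "utilization"}

def encode_claims(claims):
    """Validate claim objects and return the JSON-encoded array string."""
    for c in claims:
        if c["claim_type"] in NEEDS_PROTOCOL and "protocol" not in c:
            raise ValueError(f"'{c['claim_type']}' claims require 'protocol'")
    return json.dumps(claims)

claims_param = encode_claims([
    {"claim_type": "supply_rate", "value": 5.0, "operator": "above",
     "protocol": "aave-v3", "asset": "USDC"},
    {"claim_type": "gas", "value": 20, "operator": "below"},
])
```

Using json.dumps guarantees valid JSON escaping, and the pre-check surfaces a missing protocol before the batch call is made.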
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: the tool supports two distinct modes (single and batch), explains how parameters interact (e.g., 'claims' overrides single-claim params), and outlines the return structure for both modes. However, it lacks details on error handling, rate limits, or authentication requirements beyond the api_key parameter.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded, starting with the core purpose and modes. Each section (Args, Returns, Examples) is clearly labeled, and sentences are informative without redundancy. However, it is moderately lengthy due to the complexity of the tool, but every part earns its place by explaining necessary details.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (8 parameters, two modes) and lack of annotations or output schema, the description is largely complete. It covers purpose, usage, parameters, and return values with examples. The main gap is the absence of error handling or performance details, but it provides enough context for an agent to use the tool effectively in most scenarios.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds significant semantic value beyond the input schema, which has 0% description coverage. It explains each parameter's purpose, enumerates valid values for 'claim_type' and 'operator', specifies dependencies (e.g., 'protocol' required for certain claim types), provides defaults (e.g., 'chain' defaults to 'ethereum'), and clarifies the JSON structure for 'claims'. This compensates fully for the schema's lack of descriptions.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Verify factual claims about current DeFi market conditions.' It specifies the verb ('verify'), resource ('factual claims'), and domain ('current DeFi market conditions'), distinguishing it from siblings like 'get_market_snapshot' (which retrieves data) or 'assess_opportunity' (which evaluates opportunities).

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance by detailing two modes (single vs. batch) and specifying when to use each. It states that in batch mode, 'single-claim parameters are ignored,' and includes examples for both scenarios. This helps the agent choose the appropriate mode based on the verification needs.
