Philidor DeFi Vault Risk Analytics

Server Details

Search 700+ DeFi vaults, compare risk scores, analyze protocols. No API key needed.

Status: Healthy
Transport: Streamable HTTP
Repository: Philidor-Labs/philidor-mcp
GitHub Stars: 4
Server Listing: Philidor MCP Server

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

10 tools
compare_vaults (Grade A)

Compare 2-3 DeFi vaults side-by-side on TVL, APR, risk score, risk tier, audited status, and more.

Parameters (JSON Schema):
- vaults (required): Array of 2-3 vaults to compare

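As a concrete illustration, the snippet below sketches the JSON-RPC `tools/call` payload an MCP client would send for this tool. The vault identifiers are hypothetical placeholders; the schema only specifies an array of 2-3 vaults, so the exact element format is an assumption.

```python
import json

# Hypothetical MCP "tools/call" request for compare_vaults.
# The vault identifiers are placeholders, not real vault IDs.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "compare_vaults",
        "arguments": {
            # Schema constraint: 2-3 entries.
            "vaults": ["example-vault-1", "example-vault-2"],
        },
    },
}

payload = json.dumps(request)
```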
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full disclosure burden. It compensates partially by listing comparison dimensions (TVL, APR, risk score, audited status), but omits mutation status, rate limits, auth requirements, and response format details. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single efficient sentence with action front-loaded. The phrase 'and more' introduces minor vagueness after an otherwise specific list, but overall zero waste. Appropriate length for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema, the description effectively compensates by enumerating the comparison dimensions returned (TVL, APR, risk metrics). With 100% input schema coverage and complete parameter descriptions, the definition provides sufficient context for invocation decisions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description adds domain context (DeFi vaults) and reinforces the 2-3 vault constraint, but does not elaborate on parameter syntax (e.g., address format requirements, valid network slugs) beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb (Compare) with a clear resource (DeFi vaults) and explicitly limits scope to 2-3 vaults. The 'side-by-side' phrasing and comparison metrics (TVL, APR, risk score) clearly distinguish this from sibling tools like get_vault (single vault retrieval) and search_vaults (discovery/filtering).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies a comparison use case but provides no explicit guidance on when to use this tool versus alternatives (e.g., when to use compare_vaults vs. calling get_vault multiple times, or prerequisites like needing specific vault addresses beforehand). No 'when not to use' exclusions are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

explain_risk_score (Grade A)

Explain what a Philidor risk score means, including the tier (Prime/Core/Edge), how it is calculated, and what the thresholds are.

Parameters (JSON Schema):
- score (required): Risk score (0-10) to explain

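The evaluation below notes that the description names the tiers but not the thresholds. A minimal sketch of what such a score-to-tier mapping might look like, with made-up cutoffs (the real thresholds are exactly what this tool would return), assuming higher scores are safer as implied by find_safest_vaults:

```python
def risk_tier(score: float) -> str:
    """Map a 0-10 Philidor risk score to a tier.

    The tier names (Prime/Core/Edge) come from the tool description;
    the numeric cutoffs here are illustrative guesses, not the real
    thresholds. Higher scores are treated as safer, as implied by
    find_safest_vaults ("safest (highest risk-scored)").
    """
    if not 0 <= score <= 10:
        raise ValueError("score must be between 0 and 10")
    if score >= 8:
        return "Prime"
    if score >= 5:
        return "Core"
    return "Edge"
```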
Behavior: 3/5

No annotations provided, so description carries full burden. Discloses content scope well (tiers, calculation method, thresholds) but omits operational traits: doesn't indicate if this is static cached data, expensive computation, or if it returns formatted text vs structured data.

Conciseness: 5/5

Single front-loaded sentence with zero waste. Each element earns its place: identifies subject (Philidor risk score) and specific aspects covered (tier, calculation, thresholds). No redundant phrases or filler content.

Completeness: 4/5

Adequate for a single-parameter reference tool. Describes what knowledge is returned despite lacking output schema. Could improve by indicating return format (educational text) or that Philidor is a specific proprietary risk framework, but sufficient for agent selection.

Parameters: 3/5

Schema coverage is 100% with complete parameter documentation ('Risk score (0-10) to explain'). Description provides baseline value by mentioning 'risk score' in context but adds no additional parameter guidance, syntax examples, or usage clarification beyond the schema.

Purpose: 5/5

Clear specific verb 'Explain' combined with specific resource 'Philidor risk score'. Distinguishes from siblings like get_vault_risk_breakdown (which analyzes specific vaults) by focusing on the scoring methodology and meaning rather than vault-specific analysis.

Usage Guidelines: 3/5

Implies educational use case versus operational vault queries, but lacks explicit 'when to use' guidance. Does not clarify relationship to get_vault_risk_breakdown (use this to interpret scores returned by that tool) or when user needs this context versus raw data.

find_safest_vaults (Grade A)

Find the safest (highest risk-scored) DeFi vaults, optionally filtered by asset, chain, or minimum TVL. Returns top 10 audited, high-confidence vaults sorted by risk score.

Parameters (JSON Schema):
- asset (optional): Filter by asset symbol (e.g. USDC, WETH)
- chain (optional): Filter by chain name (e.g. Ethereum, Base)
- minTvl (optional): Minimum TVL in USD

Behavior: 4/5

No annotations provided, so description carries full burden. Discloses key behavioral traits: returns exactly top 10 results, pre-filters for audited/high-confidence vaults, sorts by risk score. Missing: error behavior, auth requirements, or pagination details.

Conciseness: 5/5

Two sentences, zero waste. Front-loaded with primary action ('Find the safest...'), followed by filtering options, then return specification. Every clause earns its place.

Completeness: 4/5

Compensates for missing output schema by describing return behavior (top 10, sorted, audited). Addresses all 3 optional parameters. Minor gap: no mention of error cases or response structure details, but acceptable for a discovery tool with simple parameter structure.

Parameters: 3/5

Schema description coverage is 100% (all 3 params documented with examples), establishing baseline 3. Description confirms parameters are optional filtering criteria but does not add significant semantic depth beyond parameter names (e.g., validation rules, case sensitivity).

Purpose: 5/5

Specific verb 'Find' plus resource 'DeFi vaults' and clear qualifier 'safest (highest risk-scored)'. Distinguishes from sibling search_vaults by emphasizing curation ('top 10 audited, high-confidence') and safety-specific ranking.

Usage Guidelines: 3/5

Implies usage through filter description ('optionally filtered') and return characteristics ('top 10'), but lacks explicit when-to-use guidance or contrast with siblings like search_vaults or list_vaults_with_incidents.

get_curator_info (Grade B)

Get detailed information about a vault curator including their managed vaults, TVL, chain distribution, and performance.

Parameters (JSON Schema):
- curatorId (required): Curator ID

Behavior: 3/5

With no annotations provided, the description carries full burden. It compensates for missing output schema by disclosing returned data fields (managed vaults, TVL, chain distribution, performance), but lacks behavioral details like error handling for invalid curator IDs, rate limits, or caching behavior.

Conciseness: 5/5

Single well-structured sentence that is front-loaded with the action verb and efficiently lists specific return data categories. No wasted words or redundant phrasing.

Completeness: 4/5

For a simple 1-parameter lookup tool with no output schema, the description adequately compensates by outlining what detailed information is returned. Sufficient for agent selection despite missing error behavior documentation.

Parameters: 3/5

Schema has 100% description coverage (curatorId: 'Curator ID'), establishing baseline 3. The description adds no additional semantics about parameter format, valid ID patterns, or how to obtain curator IDs, relying entirely on schema documentation.

Purpose: 4/5

The description uses specific verb 'Get' and resource 'vault curator', and distinguishes from vault-focused siblings by detailing curator-specific metrics (managed vaults, TVL, chain distribution, performance). However, it doesn't explicitly clarify when to use this vs. get_protocol_info or get_vault.

Usage Guidelines: 2/5

No guidance provided on when to prefer this tool over alternatives like get_vault or get_protocol_info, nor any prerequisites mentioned. The description only states what the tool does, not when to invoke it.

get_market_overview (Grade A)

Get a high-level overview of the DeFi vault market: total TVL, vault count, risk distribution, and TVL by protocol.

Parameters (JSON Schema): no parameters

Behavior: 3/5

No annotations provided, so description carries full burden. It adequately discloses the data domains covered (TVL, risk distribution, etc.) acting as proxy for return value documentation, but lacks details on data freshness, caching, or whether market data is real-time versus aggregated.

Conciseness: 5/5

Single efficient sentence with action-oriented front-loading. Lists four specific data points without redundancy or waste. Appropriate length for the tool's complexity.

Completeness: 4/5

For a zero-parameter tool without output schema, the description provides sufficient context by enumerating the specific market metrics retrieved. Absence of authentication or rate limit details prevents a 5, but adequate for tool selection.

Parameters: 4/5

Input schema contains zero parameters. Per evaluation rules, zero parameters establishes a baseline score of 4, as there are no parameter semantics to clarify beyond the schema.

Purpose: 5/5

Clear specific verb ('Get') + resource ('high-level overview of the DeFi vault market') and explicitly enumerates the four aggregate metrics returned (TVL, vault count, risk distribution, TVL by protocol). Effectively distinguishes from sibling vault-specific tools by positioning as broad market-level data.

Usage Guidelines: 2/5

Provides no explicit guidance on when to use this aggregate tool versus specific alternatives like get_protocol_info (which may also return TVL data) or search_vaults. No 'when not to use' or prerequisite conditions specified.

get_protocol_info (Grade A)

Get detailed information about a DeFi protocol including TVL, vault count, versions, auditors, and security incidents.

Parameters (JSON Schema):
- protocolId (required): Protocol ID (e.g. morpho, aave-v3, yearn-v3, beefy)

Behavior: 3/5

No annotations are provided, so the description carries the full burden. It compensates partially by disclosing the specific data fields returned (TVL, vault count, versions, auditors, security incidents), which helps set expectations. However, it lacks operational details like read-only status, error behavior for invalid protocol IDs, or data freshness.

Conciseness: 5/5

Single efficient sentence with zero waste. The structure front-loads the action ('Get detailed information') and immediately qualifies the scope with specific examples of returned data, ensuring every word earns its place.

Completeness: 4/5

Given the tool's low complexity (single required parameter, no nested objects, no output schema), the description is appropriately complete. It compensates for the missing output schema by enumerating the specific fields returned (TVL, auditors, etc.), providing sufficient context for an agent to select and invoke the tool correctly.

Parameters: 3/5

Schema description coverage is 100% with the protocolId parameter well-documented in the schema including examples (morpho, aave-v3). The description adds no additional parameter semantics, but with high schema coverage, the baseline 3 is appropriate.

Purpose: 4/5

The description uses a specific verb ('Get') and resource ('DeFi protocol'), and clearly identifies this as a protocol-level operation by listing protocol-specific attributes (TVL, auditors, security incidents, versions). This implicitly distinguishes it from vault-centric siblings like get_vault and search_vaults, though it could be more explicit about the protocol/vault distinction.

Usage Guidelines: 3/5

The description implies usage through the enumeration of returned data fields (protocol metadata), suggesting when to use it (for protocol analysis). However, it lacks explicit when-to-use guidance or named alternatives, such as clarifying to use get_vault for specific vault details rather than protocol overview.

get_vault (Grade A)

Get detailed information about a specific DeFi vault including risk breakdown, recent events, and historical snapshots. Lookup by ID or by network + address.

Parameters (JSON Schema):
- id (optional): Vault ID (e.g. morpho-ethereum-0x...)
- address (optional): Vault contract address (0x...)
- network (optional): Network slug (e.g. ethereum, base, arbitrum)

Behavior: 3/5

No annotations provided, so description carries full burden. It discloses data content returned (risk breakdown, recent events, historical snapshots) but omits safety profile (read-only status, auth requirements) and error behavior (e.g., invalid vault handling) critical for DeFi operations.

Conciseness: 5/5

Two efficiently structured sentences: first identifies resource and detailed content returned, second specifies lookup methods. Zero redundancy; every clause earns its place.

Completeness: 3/5

Compensates for missing output schema by detailing return content (risk, events, snapshots). However, with zero annotations and no safety disclosure for a DeFi context, significant behavioral gaps remain.

Parameters: 4/5

Schema coverage is 100% (baseline 3). The description adds the crucial semantic constraint that lookup requires either ID or the combination of network + address, which is essential for correct parameter selection despite all fields being technically optional.

Purpose: 5/5

Uses specific verb pattern 'Get detailed information about a specific DeFi vault' and explicitly signals singleton retrieval via 'Lookup by ID', distinguishing it from sibling list/search tools like search_vaults and list_vaults_with_incidents.

Usage Guidelines: 3/5

Provides clear parameter usage guidance explaining the lookup pattern ('by ID or by network + address'), but lacks explicit guidance on when to select this over siblings such as get_vault_risk_breakdown or find_safest_vaults.

get_vault_risk_breakdown (Grade B)

Get a detailed breakdown of a vault's risk vectors: Asset Composition, Platform Code, and Governance scores with sub-metrics.

Parameters (JSON Schema):
- address (required): Vault contract address (0x...)
- network (required): Network slug (e.g. ethereum, base, arbitrum)

Behavior: 3/5

With no annotations provided, the description carries the full disclosure burden. It adequately specifies the output structure (three risk categories with scores and sub-metrics) but omits safety information, error handling (e.g., invalid vault address), authentication requirements, and side effects. The mention of sub-metrics adds value beyond the tool name.

Conciseness: 5/5

The description is a single, efficiently structured sentence that front-loads the action ('Get a detailed breakdown') and uses a colon to list specific output components without verbosity. Every word earns its place; there is no repetition of the tool name or tautology.

Completeness: 4/5

Given the simple 2-parameter input schema and lack of output schema, the description provides sufficient context by enumerating the three specific risk vectors and mentioning sub-metrics, adequately hinting at the return structure. It does not cover error scenarios or data freshness, but is complete enough for this complexity level.

Parameters: 3/5

Schema description coverage is 100% with clear descriptions for both address ('Vault contract address') and network ('Network slug'). The description mentions 'vault's risk vectors', which implicitly connects to the address parameter, but does not need to replicate the well-documented schema details. Baseline 3 is appropriate given the schema does the heavy lifting.

Purpose: 4/5

The description uses a specific verb ('Get') and resource ('vault's risk vectors'), and distinguishes this from sibling tools like get_vault by specifying the exact risk categories returned (Asset Composition, Platform Code, Governance scores with sub-metrics). However, it does not explicitly differentiate when to use this versus explain_risk_score or compare_vaults.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives like explain_risk_score (which likely explains methodology) or compare_vaults (which compares multiple vaults). There are no prerequisites, exclusions, or conditional usage patterns mentioned.

list_vaults_with_incidents (Grade A)

List all vaults that had a recent critical incident (last 365 days). Critical = severity Critical or incident_severity major. Results are sorted by TVL descending, then by time since event ascending (most recent first).

Parameters (JSON Schema): no parameters
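The documented ordering (TVL descending, then time since event ascending) corresponds to a compound sort key. A sketch with illustrative field names (the server's actual response schema is not documented):

```python
# Illustrative records; the field names are assumptions, not the
# server's actual response schema.
vaults = [
    {"name": "A", "tvl_usd": 5_000_000, "days_since_incident": 30},
    {"name": "B", "tvl_usd": 5_000_000, "days_since_incident": 10},
    {"name": "C", "tvl_usd": 9_000_000, "days_since_incident": 200},
]

# TVL descending, then days-since-incident ascending (most recent first).
ordered = sorted(vaults, key=lambda v: (-v["tvl_usd"], v["days_since_incident"]))
```

Here C leads on TVL alone, while B outranks A because their TVL ties and B's incident is more recent.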

Behavior: 4/5

With no annotations, description carries full burden and succeeds well: defines the 365-day time window, clarifies severity logic ('Critical = severity Critical or incident_severity major'), and discloses sorting behavior (TVL desc, then time since event asc). Missing return structure details.

Conciseness: 5/5

Two information-dense sentences with zero waste. Front-loaded with the core action, followed immediately by scope constraints and result ordering. Every clause earns its place.

Completeness: 4/5

For a parameterless tool, description comprehensively documents the query logic (severity criteria, time window, sorting). Minor gap: lacks description of return values, though 'List vaults' provides some implicit guidance.

Parameters: 4/5

Zero input parameters per schema, which establishes baseline score of 4. Description correctly implies no user-configurable filters are available (all filtering is preset by the tool's business logic).

Purpose: 5/5

Excellent specific purpose: verb 'List' + resource 'vaults' + specific scope 'that had a recent critical incident (last 365 days)'. The incident-focused criteria clearly distinguish this from siblings like search_vaults, get_vault, or find_safest_vaults.

Usage Guidelines: 3/5

Defines the specific domain (critical incidents within 365 days), which implicitly suggests when to use it (incident investigation), but lacks explicit contrasts with alternatives like search_vaults or guidance on when this vs. get_vault_risk_breakdown is more appropriate.

search_vaults (Grade A)

Search and filter DeFi vaults by chain, protocol, asset, risk tier, TVL, and more. Returns a paginated list with risk scores and APR.

Parameters (JSON Schema):
- asset (optional): Filter by asset symbol (e.g. USDC, WETH)
- chain (optional): Filter by chain name (e.g. Ethereum, Base, Arbitrum)
- limit (optional): Max results (default 10, max 50)
- query (optional): Search by vault name, symbol, asset, protocol, or curator
- minTvl (optional): Minimum TVL in USD
- sortBy (optional): Sort field: tvl_usd, apr_net, name, last_synced_at
- protocol (optional): Filter by protocol ID (e.g. morpho, aave-v3, yearn-v3)
- riskTier (optional): Filter by risk tier: Prime, Core, or Edge
- sortOrder (optional): Sort order: asc or desc

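A hypothetical arguments object combining several of the documented filters (all fields are optional; the values are examples, not live data):

```python
# Hypothetical search_vaults arguments; every field is optional.
arguments = {
    "chain": "Ethereum",
    "asset": "USDC",
    "riskTier": "Prime",      # Prime, Core, or Edge
    "minTvl": 1_000_000,      # USD
    "sortBy": "tvl_usd",      # tvl_usd, apr_net, name, last_synced_at
    "sortOrder": "desc",      # asc or desc
    "limit": 20,              # default 10, max 50
}
```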
Behavior: 3/5

With no annotations provided, description carries full burden. Discloses pagination behavior and return contents ('paginated list with risk scores and APR'), which helps agents understand response structure. However, lacks rate limits, real-time vs cached data indicators, or pagination mechanism details (cursor vs offset).

Conciseness: 5/5

Two sentences, zero waste. First sentence establishes action and filterable fields; second discloses return format. Front-loaded with the most critical information. Every word earns its place.

Completeness: 4/5

Adequate for a search tool with 9 optional parameters. Compensates for missing output schema by noting return structure (paginated list with risk scores/APR). Minor gap: doesn't note that all parameters are optional (unusual for search tools) or describe the free-text 'query' parameter's behavior beyond schema.

Parameters: 3/5

Schema has 100% description coverage with detailed parameter docs (e.g., 'Filter by chain name (e.g. Ethereum, Base, Arbitrum)'). Description lists filter categories but doesn't add semantic relationships or constraints (e.g., that all parameters are optional) beyond the schema. Baseline 3 appropriate for high-coverage schemas.

Purpose: 4/5

Clear specific action ('Search and filter') and resource ('DeFi vaults'). Lists key filterable dimensions (chain, protocol, asset, risk tier, TVL) that hint at capabilities. Missing explicit differentiation from siblings like 'get_vault' (direct lookup) or 'find_safest_vaults' (safety-optimized), though the general filtering scope is distinct.

Usage Guidelines: 3/5

Implies usage context through 'filter' and available parameters, suggesting it's for discovery and filtering scenarios. However, lacks explicit guidance on when to prefer this over alternatives like 'find_safest_vaults' for safety-first queries or 'get_vault' for specific known vaults.
