
Tenzro Li.Fi MCP

Server Details

Li.Fi MCP: cross-chain aggregation — quotes, routes, status, chains, tokens, execution.

Status: Unhealthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: tenzro/tenzro-network
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: Grade A

Average 3.8/5 across 9 of 9 tools scored.

Server Coherence: Grade A
Disambiguation: 5/5

All tools have distinct purposes targeting different aspects of cross-chain swaps: chains, connections, gas prices, quotes, routes, status, tokens, and tools. No overlap in functionality.

Naming Consistency: 5/5

All tool names follow a consistent 'lifi_<verb>_<noun>' pattern using snake_case, with the prefix 'lifi_' and verbs like 'get' throughout, ensuring predictability.

Tool Count: 5/5

With 9 tools, the set is well-scoped for a cross-chain swap service, covering discovery, quotes, routes, and status without excessive or insufficient tools.

Completeness: 4/5

The tool surface covers key operations for cross-chain swaps: chain/token queries, quote/route retrieval, gas prices, and status checking. Missing a tool for initiating a swap (though quote provides transaction data) and possibly a cancel tool, but these are minor gaps.

Available Tools

9 tools
lifi_get_chains: Grade A

Get all blockchain networks supported by LI.FI for cross-chain transfers and swaps. Returns chain IDs, names, native tokens, and supported bridge/exchange protocols.

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes the output but fails to disclose behavioral traits such as idempotency, side effects, authentication requirements, or rate limits. The verb 'get' suggests a read operation, but this is not explicitly stated.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence that front-loads the purpose and then details the output. Every word earns its place, with no unnecessary information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (no parameters, no output schema, no annotations), the description covers the purpose and return data adequately. However, it could be more complete by mentioning that the list is exhaustive or if any filtering is possible.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters, and schema description coverage is 100%. The description adds context about the return values, but since there are no parameters, no additional meaning is needed. Baseline 4 applies due to zero parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Get all blockchain networks supported by LI.FI', specifying the resource (blockchain networks), the action (get), and the scope (all supported). It also lists return values (chain IDs, names, native tokens, protocols), which distinguishes it from siblings like lifi_get_tokens or lifi_get_connections.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool is used to retrieve available chains, but it does not provide explicit guidance on when to use it versus alternatives (e.g., lifi_get_connections). No exclusion or prerequisite information is given, relying on implied context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lifi_get_connections: Grade A

Get available cross-chain connections between two chains. Optionally filter by source and destination tokens. Shows which bridges and exchanges can transfer between the specified chains.

Parameters (JSON Schema)

to_chain (required): Destination chain ID (e.g. 137 for Polygon)
to_token (optional): Destination token address; filters connections to a specific token
from_chain (required): Source chain ID (e.g. 1 for Ethereum)
from_token (optional): Source token address; filters connections to a specific token
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must convey behavioral traits. It indicates a read operation and outlines the result type (bridges/exchanges), but does not disclose side effects, authentication needs, or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no fluff, front-loaded with the primary action and resource.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has 4 parameters all well-documented, but no output schema. The description hints at the output (bridges and exchanges), providing adequate completeness for a connection listing tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the description mentions filtering by tokens but adds no significant meaning beyond the schema's parameter descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves available cross-chain connections between two chains, with optional token filters. This distinguishes it from sibling tools like lifi_get_quote and lifi_get_routes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for discovering connections, but lacks explicit guidance on when to use this tool versus alternatives, and no conditions or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lifi_get_gas_prices: Grade A

Get current gas prices for LI.FI-supported chains. Optionally filter by chain IDs. Returns gas prices in native units for each chain.

Parameters (JSON Schema)

chains (optional): Comma-separated chain IDs (e.g. '1,137,42161'). Omit for all chains.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description specifies that the tool returns gas prices in native units, which is transparent. However, it does not confirm that the call is non-destructive or mention potential rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences: first states purpose, second adds optional filter. No redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given low complexity (1 optional param, no output schema, no annotations), the description sufficiently covers functionality and optional filtering.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% for the single parameter, and the description matches the schema's 'Comma-separated chain IDs' wording without adding meaning beyond optionality.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states verb 'Get' and resource 'gas prices' with scope 'for LI.FI-supported chains'. Distinguishes from sibling tools like lifi_get_chains which retrieve chain data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides optional filtering by chain IDs, but does not explicitly state when to use this tool versus alternatives or any prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lifi_get_quote: Grade A

Get a single best quote for a cross-chain swap or bridge transfer. Returns the optimal route with estimated output, fees, execution time, and transaction data ready to sign.

Parameters (JSON Schema)

slippage (optional): Slippage tolerance as a decimal (default 0.03 = 3%)
to_chain (required): Destination chain ID (e.g. 137 for Polygon)
to_token (required): Destination token address
from_chain (required): Source chain ID (e.g. 1 for Ethereum)
from_token (required): Source token address (e.g. '0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48' for USDC)
from_amount (required): Amount in smallest unit (wei for ETH, base units for ERC-20)
from_address (required): Sender wallet address (0x...)
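The trickiest field here is from_amount, which must be in the token's smallest unit. A sketch of assembling a quote request, assuming USDC has 6 decimals; the Polygon USDC address and the sender are placeholders for illustration, and slippage uses the documented default of 0.03:

```python
from decimal import Decimal

def to_base_units(amount: str, decimals: int) -> str:
    """Convert a human-readable amount to the token's smallest unit."""
    scaled = Decimal(amount) * (Decimal(10) ** decimals)
    return str(int(scaled))

# Hypothetical Ethereum -> Polygon quote for 250 USDC.
quote_args = {
    "from_chain": 1,
    "to_chain": 137,
    "from_token": "0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48",       # USDC on Ethereum
    "to_token": "0x2791Bca1f2de4661ED88A30C99A7a9449Aa84174",          # assumed USDC on Polygon
    "from_amount": to_base_units("250", 6),                            # '250000000'
    "from_address": "0x0000000000000000000000000000000000000001",      # placeholder sender
    "slippage": 0.03,                                                  # documented default: 3%
}
```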
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided. The description discloses that the tool returns optimal route, estimated output, fees, execution time, and transaction data ready to sign. However, it does not mention any behavioral traits like time sensitivity, prerequisites, rate limits, or authentication needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two concise sentences. The first states the core purpose, and the second elaborates on return content. Zero fluff, well-structured and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description adequately describes the return content (optimal route, estimated output, fees, execution time, transaction data). It lacks mention of error cases or limitations, but for a tool with 7 well-documented parameters, this is sufficiently complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description does not add additional meaning to parameters beyond what the schema already provides. It provides general context but no extra semantic detail.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get' and the resource 'single best quote' for a cross-chain swap or bridge transfer. It distinguishes from sibling 'lifi_get_routes' by specifying 'single best quote' versus presumably multiple routes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the name and description imply this is for getting one best quote, it does not explicitly state when to use this tool versus alternatives like lifi_get_routes for multiple quotes. No when-not or exclusionary guidance is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lifi_get_routes: Grade A

Get multiple route options for a cross-chain swap or bridge transfer, ranked by output amount. Use this for comparing routes across different bridges and DEXes. Returns routes with fees, estimated time, and step-by-step breakdown.

Parameters (JSON Schema)

slippage (optional): Slippage tolerance as a decimal (default 0.03 = 3%)
from_amount (required): Amount in smallest unit (wei for ETH, base units for ERC-20)
to_chain_id (required): Destination chain ID
from_address (required): Sender wallet address (0x...)
from_chain_id (required): Source chain ID
to_token_address (required): Destination token address
from_token_address (required): Source token address
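Note that this tool names its parameters differently from lifi_get_quote (to_chain_id rather than to_chain, from_token_address rather than from_token). A small illustrative adapter, based only on the two parameter tables, keeps an agent from sending quote-style keys by mistake:

```python
# quote-style key -> routes-style key, per the two parameter tables
QUOTE_TO_ROUTES = {
    "from_chain": "from_chain_id",
    "to_chain": "to_chain_id",
    "from_token": "from_token_address",
    "to_token": "to_token_address",
    # from_amount, from_address, and slippage are named the same in both tools
}

def as_routes_args(quote_args: dict) -> dict:
    """Rename lifi_get_quote-style keys to lifi_get_routes parameter names."""
    return {QUOTE_TO_ROUTES.get(k, k): v for k, v in quote_args.items()}

as_routes_args({"from_chain": 1, "to_chain": 137, "from_amount": "1000"})
# {'from_chain_id': 1, 'to_chain_id': 137, 'from_amount': '1000'}
```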
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavior. It mentions returns (routes, fees, time, breakdown) but does not discuss side effects, auth requirements, or rate limits. The name implies read-only, but more detail would improve transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three short sentences, front-loaded with purpose, no redundant information. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description partially explains returns (routes, fees, time, breakdown). It doesn't cover constraints like max routes or error conditions, but is sufficient for a straightforward list tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

All 7 parameters have schema descriptions (100% coverage). The tool description adds no extra parameter-specific meaning beyond what the schema provides, so baseline score 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get' and resource 'multiple route options for a cross-chain swap or bridge transfer', and distinguishes from sibling tools like lifi_get_quote by emphasizing multiple routes ranked by output.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description says 'Use this for comparing routes across different bridges and DEXes', providing explicit context for when to use. It doesn't list exclusions or alternatives, but the purpose is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lifi_get_status: Grade A

Check the status of a cross-chain transfer by transaction hash. Returns status (PENDING, DONE, FAILED, NOT_FOUND), bridge used, source/destination chain info, and amounts.

Parameters (JSON Schema)

bridge (optional): Bridge name (e.g. 'stargate', 'hop', 'across')
tx_hash (required): Transaction hash to check status for
to_chain (optional): Destination chain ID; helps disambiguate
from_chain (optional): Source chain ID; helps disambiguate
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description partially covers return values (status types, bridge info). It lacks disclosure of polling behavior, rate limits, or whether it is a one-time check. Since it is a read operation, the missing details are less critical, but still a gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences, no redundancy. Efficiently conveys core purpose and return information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

There is no output schema, so the description compensates by listing return fields. It could mention the role of the optional parameters (disambiguation), but the schema covers that. Generally complete for a simple status-check tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (all parameters have descriptions). The description adds 'by transaction hash' but does not add meaning beyond the schema for the optional parameters. A baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description specifies verb 'check status' and resource 'cross-chain transfer', and lists return values. Clearly distinguishes from sibling tools that handle different functionalities like chains, connections, or quotes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like lifi_get_routes or lifi_get_quote. Does not mention prerequisites or scenarios for optional parameters.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lifi_get_token: Grade A

Get detailed information about a specific token on a specific chain, including address, symbol, decimals, name, and logo URI.

Parameters (JSON Schema)

chain_id (required): Chain ID (e.g. 1 for Ethereum, 137 for Polygon, 42161 for Arbitrum)
token_address (required): Token contract address (e.g. '0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48' for USDC on Ethereum). Use '0x0000000000000000000000000000000000000000' for native tokens.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It states 'Get detailed information', which implies a read-only operation, but does not explicitly confirm idempotency, safety, or absence of side effects. With no annotations, a 3 is appropriate as it gives a basic indication but lacks explicit behavioral disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no wasted words. Front-loaded with the main purpose. Efficient and clear.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple token lookup with two well-documented parameters and no output schema, the description is mostly complete. It lists the key return fields. However, it could mention the return format (e.g., JSON object) or that the data is fetched from an external source, but given the tool's simplicity, a 4 is reasonable.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema covers both parameters with descriptions (chain_id and token_address), achieving 100% coverage. The description lists output fields (address, symbol, decimals, name, logo URI) but does not add new meaning beyond the schema. Since schema already fully describes parameters, baseline 3 is correct.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool gets detailed information about a specific token on a specific chain, listing the included fields (address, symbol, decimals, name, logo URI). This distinguishes it from siblings like lifi_get_tokens (which likely lists all tokens) and lifi_get_quote (which deals with quotes). The verb 'Get' and resource 'token' are precise.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use when you need details of a specific token, but it does not explicitly state when to use this tool versus alternatives. There is no mention of when not to use it or comparisons to siblings like lifi_get_tokens for listing. Guidelines are inferred rather than explicitly provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lifi_get_tokens: Grade A

Get tokens available on LI.FI-supported chains. Optionally filter by chain IDs. Returns token addresses, symbols, decimals, and logos.

Parameters (JSON Schema)

chains (optional): Comma-separated chain IDs to filter tokens (e.g. '1,137,42161'). Omit for all chains.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully disclose behavior. It only states the return fields (addresses, symbols, decimals, logos) and filtering option, but does not mention any behavioral traits like read-only nature, rate limits, authentication requirements, or result limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no wasted words. The description is front-loaded with the primary purpose and then adds optional behavior and return information. Every sentence serves a purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given low complexity (1 optional param, no output schema), the description covers what the tool returns and how to filter. It could mention that the result is a list or any pagination details, but it is largely sufficient for an agent to understand the tool's function.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema covers the only parameter ('chains') with a complete description, including an example. Schema description coverage is 100%, so baseline is 3. The tool description adds no extra meaning beyond 'Optionally filter by chain IDs', which is redundant with the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Get tokens available on LI.FI-supported chains. Optionally filter by chain IDs. Returns token addresses, symbols, decimals, and logos.' It uses a specific verb ('Get') and resource ('tokens'), and distinguishes from sibling tools like lifi_get_token (singular) and lifi_get_chains.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions optional filtering but does not explicitly guide when to use this tool over siblings like lifi_get_token (for a single token) or lifi_get_chains. Usage is implied rather than stated, and no alternatives or exclusions are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lifi_get_tools: Grade A

Get available bridge and DEX exchange tools integrated into LI.FI. Returns protocol names, supported chains, and tool types (bridge vs exchange).

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses what the tool returns (protocol names, supported chains, tool types) but does not mention any behavioral traits like idempotency, rate limits, or side effects. For a simple getter, this is acceptable but not comprehensive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that immediately conveys the action and output. Every word is informative, and there is no redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity (no parameters, no output schema), the description is largely complete. It explains the tool's function and return content. However, it could mention if the return is a list or any prerequisites, but overall it is sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters, so the description adds value by explaining the return format. The schema coverage is 100%, and the description clarifies the output without needing to detail parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get available bridge and DEX exchange tools integrated into LI.FI'. It specifies the returns (protocol names, supported chains, tool types), which distinguishes it from sibling tools that retrieve specific data like chains or quotes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide guidance on when to use this tool versus alternatives like lifi_get_chains or lifi_get_quote. There is no explicit context for when it should be invoked, such as before obtaining quotes.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
