Tenzro Chainlink MCP
Server Details
Chainlink MCP: CCIP, CCT pools, Data Feeds, Data Streams, VRF v2.5, PoR, Automation, Functions.
- Status: Unhealthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: tenzro/tenzro-network
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.7/5 across 20 of 20 tools scored.
Each tool has a clear prefix (ccip_, chainlink_, ds_, por_, vrf_) indicating the product, and within each group, verbs and nouns are distinct (e.g., get_fee vs get_lanes vs send_message). No two tools appear to do the same thing.
All tools follow snake_case with a product prefix and verb_noun pattern (e.g., ccip_get_fee, chainlink_check_upkeep, ds_list_feeds). No deviations or mixed conventions.
With 20 tools covering multiple Chainlink products, the count is slightly high but still reasonable for a comprehensive integration. Each tool serves a distinct function without feeling excessive.
The tool set covers key operations for CCIP, Data Feeds, Automation, Functions, Data Streams, Proof of Reserve, and VRF. While some minor operations (e.g., token management) are missing, the core workflows are well-supported.
Available Tools
20 tools

ccip_get_fee (grade A)
Estimate CCIP cross-chain messaging fee via Router.getFee() eth_call. Returns the native fee required to send a CCIP message from the source chain to the destination chain. Supports Ethereum, Base, and Arbitrum as source chains.
| Name | Required | Description | Default |
|---|---|---|---|
| data_hex | No | Hex-encoded data payload to send (with or without 0x prefix). Use '0x' or '' for empty | |
| receiver | Yes | Hex-encoded receiver address on the destination chain (with or without 0x prefix) | |
| fee_token | No | Fee token address. Use zero address (0x0000...0000) for native gas token payment | |
| src_chain_id | Yes | Source chain identifier: 'ethereum', 'base', 'arbitrum', or a chain ID number | |
| token_amounts | No | Token amounts to transfer as JSON array of {token, amount} objects. Empty array for message-only | |
| dst_chain_selector | Yes | Destination CCIP chain selector (uint64). E.g. 4949039107694359620 for Arbitrum |
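A minimal sketch of the arguments an agent might pass to this tool, using the parameter names from the table above. The `dst_chain_selector` value is the Arbitrum example given in the schema; the receiver address is a placeholder, not a real contract.

```python
import json

# Hypothetical arguments for a ccip_get_fee call, following the table above.
arguments = {
    "src_chain_id": "ethereum",                 # required: source chain
    "dst_chain_selector": 4949039107694359620,  # required: uint64 selector (Arbitrum example)
    "receiver": "0x" + "ab" * 20,               # required: placeholder destination address
    "data_hex": "0x",                           # optional: empty payload
    "token_amounts": [],                        # optional: empty array for message-only
}

payload = {"name": "ccip_get_fee", "arguments": arguments}
print(json.dumps(payload, indent=2))
```

Omitting `fee_token` leaves fee payment in the native gas token, per the schema's zero-address convention.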
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully disclose behavior. It correctly identifies the operation as an eth_call (read-only) and specifies the return as native fee required. However, it does not disclose potential error conditions, rate limits, or the exact return format, which are important for an agent invoking the tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two concise sentences with no extraneous details. Every sentence adds value: first states purpose and method, second specifies return and supported chains. Well-structured and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 6 parameters (3 required) and no output schema, the description covers high-level purpose but does not specify the return format (e.g., decimal string, hex). It also does not explain how parameters like fee_token affect the estimate. The sibling tools are not referenced for differentiation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds high-level context (supported chains, native fee) but does not add meaning beyond what the schema already provides for individual parameters. It repeats the schema's intent without deeper clarification.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool estimates a CCIP cross-chain messaging fee, specifying the method (Router.getFee() eth_call) and the resource (native fee for sending a CCIP message). It also explicitly lists supported source chains (Ethereum, Base, Arbitrum), distinguishing it from sibling tools like ccip_send_message.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for fee estimation before sending a message but provides no explicit guidance on when to use this tool versus siblings (e.g., when to call this versus ccip_send_message directly). No exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ccip_get_lanes (grade A)
Get available CCIP lanes (source-destination chain pairs) from the Chainlink REST API. Optionally filter by source or destination chain selector.
| Name | Required | Description | Default |
|---|---|---|---|
| environment | No | Environment: 'mainnet' or 'testnet'. Defaults to 'mainnet' | |
| dest_chain_selector | No | Optional destination chain selector to filter lanes | |
| source_chain_selector | No | Optional source chain selector to filter lanes |
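Since all three parameters are optional, a client can omit unset values and let the server's documented defaults apply ('mainnet', no filtering). A small argument-builder sketch (a hypothetical helper, not part of the server):

```python
# Build ccip_get_lanes arguments, dropping unset optional parameters so the
# server applies its documented defaults.
def lanes_arguments(environment=None, source_chain_selector=None,
                    dest_chain_selector=None):
    candidates = {
        "environment": environment,
        "source_chain_selector": source_chain_selector,
        "dest_chain_selector": dest_chain_selector,
    }
    return {k: v for k, v in candidates.items() if v is not None}

print(lanes_arguments(environment="testnet"))  # {'environment': 'testnet'}
```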
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It implies a read-only GET operation but does not disclose response size, pagination, rate limits, or error handling. The behavioral disclosure is minimal beyond the obvious read action.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences convey the core purpose and optionality efficiently. No redundant words; front-loaded with the main action. Every sentence contributes necessary information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool is simple with 3 optional params and no output schema. While the description covers the basic purpose, it omits what the response contains (e.g., list of lane objects) and whether the tool is purely read-only. Given the low complexity, some additional context would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (all three parameters described). The tool description adds no additional meaning beyond repeating that filters are optional; it does not explain chain selectors or provide examples. A baseline of 3 is appropriate, as the schema does the work.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'available CCIP lanes' and mentions optional source/destination chain selector filtering. The verb 'Get' and resource 'CCIP lanes' are specific and differentiate it from sibling tools like ccip_get_fee or ccip_get_supported_chains.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like ccip_get_supported_chains or ccip_get_fee. The description does not specify prerequisites, typical use cases, or when filtering is beneficial, leaving the agent to infer usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ccip_get_rate_limits (grade A)
Get CCIP Token Pool rate limiter configuration for a specific remote chain. Returns inbound and outbound rate limits (tokens per second, capacity) that control the maximum cross-chain transfer throughput. Part of CCIP v1.6+ security model.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain the pool is deployed on: 'ethereum', 'base', 'arbitrum'. Defaults to 'ethereum' | |
| pool_address | Yes | Token pool contract address (hex with 0x prefix) | |
| remote_chain_selector | Yes | Remote chain selector (uint64) to query rate limits for |
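The rate limiter this tool reports is a token bucket: a capacity plus a refill rate in tokens per second. One derived quantity worth computing client-side is the time to refill an empty bucket, which bounds how often a maximum-size transfer can cross the lane. The numbers below are illustrative, not real pool values.

```python
# Time for a token-bucket rate limiter to refill from empty, given the
# capacity and tokens-per-second rate returned by ccip_get_rate_limits.
def refill_seconds(capacity: int, rate_per_second: int) -> float:
    if rate_per_second <= 0:
        raise ValueError("rate_per_second must be positive")
    return capacity / rate_per_second

print(refill_seconds(100_000, 200))  # 500.0 seconds from empty to full
```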
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description clearly indicates a read-only operation ('Get'), and specifies the returned data (inbound/outbound rate limits). It does not mention side effects, which is appropriate for a read. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences: the first states the core purpose, the second adds output details and context. Every sentence adds value, no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately explains the return values (rate limits with units and capacity). It lacks error handling or default behavior for the optional chain parameter, but overall is sufficient for a simple read tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with detailed parameter descriptions. The description adds little beyond the schema, only noting the remote chain context. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and the specific resource 'CCIP Token Pool rate limiter configuration for a specific remote chain'. It distinguishes from sibling tools by focusing on rate limits and mentioning the security model context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning it is part of CCIP v1.6+ security model, but does not explicitly state when to use this tool over siblings or provide any exclusion guidelines.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ccip_get_supported_chains (grade A)
Get supported chains for Chainlink CCIP from the Chainlink REST API. Returns chain names, selectors, and network details.
| Name | Required | Description | Default |
|---|---|---|---|
| environment | No | Environment: 'mainnet' or 'testnet'. Defaults to 'mainnet' |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are present, so the description must fully disclose behavior. It states the tool retrieves data (non-destructive) and returns specific fields, but lacks details on authentication, rate limits, or any side effects. This is adequate for a simple read tool but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise at two sentences, front-loading the primary purpose and output. Every word serves a purpose with no redundancy or unnecessary detail.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one optional parameter, no output schema), the description covers the essential behavior and return values. It lacks output structure details but is sufficient for an agent to understand what the tool does and what to expect.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a well-described parameter. The description does not add any additional meaning or context beyond the schema's parameter description, so it meets the baseline but does not exceed it.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves supported chains for Chainlink CCIP from the REST API, specifying the output includes chain names, selectors, and network details. This verb+resource+result combination is distinct from sibling tools like ccip_get_lanes or ccip_get_supported_tokens, which focus on other aspects.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is for fetching chain metadata from an external API but offers no explicit guidance on when to use it over alternatives or when to avoid it. No exclusions, prerequisites, or contextual hints are provided, leaving the agent to infer usage from the tool's name and purpose.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ccip_get_supported_tokens (grade A)
Get supported tokens for Chainlink CCIP from the Chainlink REST API. Returns token addresses, symbols, and supported lanes.
| Name | Required | Description | Default |
|---|---|---|---|
| environment | No | Environment: 'mainnet' or 'testnet'. Defaults to 'mainnet' |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It mentions the return content but does not indicate whether the operation is read-only, requires authentication, or has rate limits. The tool likely has no side effects, but this is not stated.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two concise sentences with no extraneous words. It efficiently conveys the tool's purpose and output without unnecessary detail.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (single optional parameter, no output schema), the description is fairly complete. It specifies what is returned, but could include additional context such as typical use cases (e.g., before sending a CCIP message) or whether the token list is dynamic.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the single parameter 'environment', which has a clear description. The tool description adds no additional meaning beyond what the schema already provides, so a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves supported tokens for Chainlink CCIP and specifies the returned data (addresses, symbols, lanes). It uses a specific verb 'Get' and resource 'supported tokens', and the tool name itself is descriptive, distinguishing it from sibling CCIP tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for obtaining token information but lacks explicit guidance on when to use this tool versus alternatives like ccip_get_fee or ccip_get_lanes. There are no usage scenarios or prerequisites mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ccip_get_token_pool (grade B)
Get information about a CCIP Token Pool contract. Returns the pool type (Lock/Release or Burn/Mint), the token address, supported remote chains, and rate limiter config. Token Pools are part of the Cross-Chain Token (CCT) standard in CCIP v1.6+.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain: 'ethereum', 'base', 'arbitrum'. Defaults to 'ethereum' | |
| pool_address | Yes | Token pool contract address (hex with 0x prefix) |
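The schema requires `pool_address` to be a 0x-prefixed hex contract address. A minimal client-side validity check, useful before spending a call on a malformed input (a hypothetical helper, not part of the server):

```python
# Check that a value looks like a 0x-prefixed, 20-byte hex address, as
# required by the pool_address parameter.
def is_valid_pool_address(value: str) -> bool:
    if not value.startswith("0x") or len(value) != 42:
        return False
    try:
        int(value[2:], 16)  # every character after the prefix must be hex
        return True
    except ValueError:
        return False

print(is_valid_pool_address("0x" + "00" * 20))  # True
print(is_valid_pool_address("not-an-address"))  # False
```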
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description indicates a read-only operation ('Get information') without side effects, but with no annotations provided, it lacks disclosure of potential errors (e.g., invalid address) or authorization requirements. It does not contradict any annotations, as none exist, but fails to add significant behavioral context beyond the return fields.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences: first sentence states purpose and lists outputs, second adds version context. It is front-loaded, efficient, and contains no redundant information. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has no output schema, so the description appropriately lists the returned fields. It covers the essential information for a query tool. However, it does not mention error handling (e.g., what happens for a non-existent pool) or any constraints, which would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with clear definitions for 'chain' (including defaults) and 'pool_address' (hex format). The description adds context about the return values, which helps infer parameter purpose, but does not enhance parameter semantics beyond the schema. Baseline of 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves information about a CCIP Token Pool contract and lists the specific data returned (pool type, token address, supported chains, rate limiter). It is specific about the resource and verb, but does not explicitly differentiate from sibling tools like ccip_get_supported_tokens or ccip_get_rate_limits, which have overlapping domains.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives (e.g., when to use ccip_get_token_pool vs ccip_get_supported_tokens). The description only states what it does, not the context or prerequisites. An agent would have to infer usage from the tool name and siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ccip_send_message (grade B)
Send a CCIP cross-chain message via Router.ccipSend(). Submits a signed transaction to the source chain's CCIP Router to send a message and/or tokens to the destination chain. Returns the transaction hash.
| Name | Required | Description | Default |
|---|---|---|---|
| data_hex | No | Hex-encoded data payload | |
| receiver | Yes | Hex-encoded receiver address on the destination chain | |
| fee_token | No | Fee token address (zero address for native). Defaults to native | |
| gas_limit | No | Gas limit for execution on destination chain (default: 200000) | |
| sender_key | Yes | Hex-encoded sender private key for signing the transaction | |
| src_chain_id | Yes | Source chain: 'ethereum', 'base', 'arbitrum', or chain ID | |
| token_amounts | No | Token amounts to transfer as JSON array of {token, amount} objects | |
| dst_chain_selector | Yes | Destination CCIP chain selector (uint64) |
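Because this tool signs and broadcasts a real transaction, a client-side pre-flight check before invoking it is worthwhile. The field names follow the table above; the helper itself is a hypothetical sketch, not part of the server.

```python
# Pre-flight validation for ccip_send_message arguments: catch missing
# required fields before submitting a transaction.
REQUIRED_FIELDS = {"receiver", "sender_key", "src_chain_id", "dst_chain_selector"}

def preflight(arguments: dict) -> list[str]:
    problems = [f"missing required field: {name}"
                for name in sorted(REQUIRED_FIELDS - arguments.keys())]
    receiver = arguments.get("receiver", "")
    if receiver and not receiver.removeprefix("0x"):
        problems.append("receiver must not be empty")
    return problems

print(preflight({"receiver": "0xabc"}))  # reports the three missing fields
```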
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. Description indicates a write operation (submits a signed transaction) but lacks details on side effects, required permissions, gas costs, or potential reverts. With no annotations, the description should provide more behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, no wasted words. Front-loaded with the core action, followed by mechanism and return value. Appropriate length for the complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema. Description lacks context on prerequisites (e.g., funding), transaction confirmation, error handling, or how to interpret the hash. For a cross-chain message tool with 8 parameters, more completeness would be beneficial.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage with descriptions for all parameters. The description adds no extra meaning beyond the schema, which is adequate given high coverage, but does not enhance understanding of complex parameters like sender_key.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool sends a CCIP cross-chain message via a specific function (Router.ccipSend()), with a verb ('Send') and resource ('CCIP cross-chain message'), and distinguishes it from read-only sibling tools like ccip_get_fee.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives, no preconditions or exclusions mentioned. Siblings include other ccip_ and chainlink_ tools, but the description does not differentiate usage contexts.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ccip_track_message (grade A)
Track the execution status of a CCIP cross-chain message on the destination chain. Calls OffRamp.getExecutionState() to check message delivery status. States: 0=UNTOUCHED (not yet processed), 1=IN_PROGRESS (being executed), 2=SUCCESS (delivered), 3=FAILURE (execution failed).
| Name | Required | Description | Default |
|---|---|---|---|
| message_id | Yes | CCIP message ID (64-byte hex, with or without 0x prefix) | |
| dst_chain_id | Yes | Destination chain: 'ethereum', 'base', 'arbitrum', or chain ID | |
| offramp_address | Yes | OffRamp contract address on the destination chain (hex with 0x prefix) |
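The numeric states in the description map directly to readable names, which a client can translate like this (the mapping is taken from the description above; the helper is a sketch):

```python
# Execution states returned by OffRamp.getExecutionState(), per the tool
# description.
EXECUTION_STATES = {
    0: "UNTOUCHED",    # not yet processed on the destination chain
    1: "IN_PROGRESS",  # currently being executed
    2: "SUCCESS",      # message delivered
    3: "FAILURE",      # execution failed
}

def describe_state(code: int) -> str:
    return EXECUTION_STATES.get(code, f"UNKNOWN({code})")

print(describe_state(2))  # SUCCESS
```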
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the underlying contract call (OffRamp.getExecutionState()) and lists possible states, which is adequate. However, it does not explicitly state that the operation is read-only or mention any required permissions or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise, with three sentences covering purpose, method, and state explanation. Every sentence adds value, and the most important information (what the tool does) is first.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple status-checking tool with three parameters and no output schema, the description provides essential context: the underlying call and state codes. It could mention error handling or network-specific nuances, but overall it is sufficiently complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so all parameters are described in the schema. The description adds no additional parameter-level information. It does explain the state codes, which are output-related, not parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to track the execution status of a CCIP cross-chain message. It specifies the action (track), resource (execution status), and context (destination chain). This distinguishes it from sibling tools like ccip_get_fee or ccip_send_message.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use this tool (after sending a message to check its status) but lacks explicit guidance on when not to use it or alternatives. It doesn't mention other tracking methods, though none exist among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
chainlink_check_upkeep (grade B)
Check if a Chainlink Automation upkeep needs to be performed by dry-running checkUpkeep(bytes) on the target contract. Returns whether upkeep is needed and the perform data.
| Name | Required | Description | Default |
|---|---|---|---|
| chain_id | No | Chain to query: 'ethereum', 'arbitrum', 'base', or chain ID. Defaults to 'ethereum' | |
| check_data | No | Hex-encoded check data to pass to checkUpkeep (with or without 0x prefix). Defaults to empty | |
| contract_address | Yes | Address of the Automation-compatible contract (hex with 0x prefix) |
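`checkUpkeep(bytes)` returns `(bool upkeepNeeded, bytes performData)`. If the tool surfaces the raw eth_call result, a client must ABI-decode it; the following stdlib-only sketch shows the standard layout (static bool word, offset word, then length-prefixed bytes). A real integration would use a proper ABI library.

```python
# Minimal ABI decode of checkUpkeep's (bool, bytes) return value from a
# hex-encoded eth_call result.
def decode_check_upkeep(result_hex: str) -> tuple[bool, bytes]:
    raw = bytes.fromhex(result_hex.removeprefix("0x"))
    upkeep_needed = int.from_bytes(raw[0:32], "big") == 1
    offset = int.from_bytes(raw[32:64], "big")          # start of the bytes payload
    length = int.from_bytes(raw[offset:offset + 32], "big")
    return upkeep_needed, raw[offset + 32:offset + 32 + length]
```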
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must fully disclose behavior. It mentions 'dry-running' which implies a read-only simulation, but does not state if it costs gas, requires authentication, or what happens if the contract is not Automation-compatible. No side effects or error cases are mentioned.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose and result. Every word provides value. No fluff or repetition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description covers the high-level purpose and return type (boolean + perform data) but lacks details on interpreting the perform data, error handling, or network requirements. Adequate but not comprehensive for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema provides 100% descriptive coverage for all parameters (chain_id, check_data, contract_address). The description does not add additional meaning beyond what is in the schema, so baseline 3 applies. No parameter-level details are enhanced.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool checks if a Chainlink Automation upkeep is needed via dry-running checkUpkeep on the target contract, and returns whether upkeep is needed and perform data. This distinguishes it from sibling tools like chainlink_get_upkeep_info which retrieve existing upkeep details.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. Does not mention prerequisites, such as needing a contract address or that this tool is typically called before performing upkeep. No when-not-to-use or error conditions.
chainlink_estimate_functions_cost (A)
Estimate the cost of a Chainlink Functions request. Calculates the approximate LINK cost based on callback gas limit, gas price, and the Functions premium. Returns the estimated total cost in LINK.
| Name | Required | Description | Default |
|---|---|---|---|
| chain_id | No | Chain to query. Defaults to 'ethereum' | |
| gas_price_wei | No | Gas price in wei for cost estimation | |
| router_address | Yes | Functions Router address (hex with 0x prefix) | |
| subscription_id | Yes | Functions subscription ID (uint64) | |
| callback_gas_limit | Yes | Callback gas limit for the fulfillment | |
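The estimate the description refers to can be sketched with placeholder arithmetic. GAS_OVERHEAD, PREMIUM_JUELS, and the juels_per_wei conversion rate below are illustrative assumptions, not values from Chainlink's billing configuration.

```python
# Illustrative Functions cost model; real values come from the router config.
GAS_OVERHEAD = 185_000                   # assumed fulfillment gas overhead
PREMIUM_JUELS = 200_000_000_000_000_000  # assumed flat 0.2 LINK premium, in juels

def estimate_functions_cost_juels(callback_gas_limit: int,
                                  gas_price_wei: int,
                                  juels_per_wei: int) -> int:
    """Estimated total request cost in juels (1 LINK = 10**18 juels)."""
    gas_cost_wei = (callback_gas_limit + GAS_OVERHEAD) * gas_price_wei
    return gas_cost_wei * juels_per_wei + PREMIUM_JUELS
```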
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description reveals the tool calculates an estimated cost without making on-chain modifications, but it does not clarify whether it queries the chain or computes locally. With no annotations, the description carries the full burden, yet it could be more transparent about side effects, rate limits, or authentication needs.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two focused sentences. The first sentence immediately states the core purpose, and the second provides essential detail. Every word adds value, with no redundancy or filler.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool and full schema coverage, the description is largely complete. It explains inputs, calculation basis, and output. Minor missing context: how the premium factor is determined and what happens if gas_price_wei is omitted.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. The description mentions callback_gas_limit and gas_price, but adds no significant meaning beyond parameter names. It does not explain defaults, units, or relationships between parameters.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's action (estimate) and resource (cost of Chainlink Functions request), and specifies it calculates LINK cost based on callback gas limit, gas price, and premium. This distinguishes it from sibling tools like ccip_get_fee or vrf_get_subscription, which address different Chainlink services.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives, nor any prerequisites or context. It merely states what it does without explaining typical usage scenarios or conditions that might trigger its invocation.
chainlink_get_price (A)
Get the latest price from a Chainlink data feed by calling AggregatorV3Interface.latestRoundData(). Returns the price, round ID, timestamps, and decimal precision. Common feeds on Ethereum: ETH/USD = 0x5f4eC3Df9cbd43714FE2740f5E3616155c5b8419, BTC/USD = 0xF4030086522a5bEEa4988F8cA5B36dbC97BeE88c.
| Name | Required | Description | Default |
|---|---|---|---|
| chain_id | No | Chain to query: 'ethereum', 'arbitrum', 'base', or a chain ID. Defaults to 'ethereum' | |
| feed_address | Yes | Chainlink data feed contract address (hex with 0x prefix). E.g. 0x5f4eC3Df9cbd43714FE2740f5E3616155c5b8419 for ETH/USD | |
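A caller still needs the feed's decimals() value to interpret the raw answer; a minimal scaling and staleness sketch follows (the helper names and the one-hour threshold are assumptions, and the raw answer in the test is made up):

```python
from decimal import Decimal

def scale_answer(raw_answer: int, decimals: int) -> Decimal:
    """Convert a feed's fixed-point integer answer to a human-readable price."""
    return Decimal(raw_answer) / Decimal(10) ** decimals

def is_stale(updated_at: int, now: int, max_age_seconds: int = 3600) -> bool:
    """Heuristic: treat a round as stale if its updatedAt timestamp is too old."""
    return now - updated_at > max_age_seconds
```

ETH/USD and BTC/USD feeds report 8 decimals, so a raw answer of 345012345678 scales to 3450.12345678.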
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fully discloses behavior: it calls latestRoundData() and returns price, round ID, timestamps, and decimal precision. There is no hint of destructive actions, though it could mention potential reverts or gas costs.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, no redundancy. First sentence states purpose, second lists return data, third gives examples. Concise and well-structured.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately explains return fields. It covers both parameters implicitly. Missing error scenarios or network-specific details, but overall sufficient for agent invocation.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. The description adds value by listing example addresses for feed_address, aiding the agent in understanding common inputs. chain_id is not elaborated on in the description, but the schema already covers it.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get the latest price from a Chainlink data feed' and specifies the exact interface method (AggregatorV3Interface.latestRoundData()). It also provides concrete examples of common feed addresses, distinguishing this tool from siblings like chainlink_list_feeds which list feeds, not fetch prices.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when a feed address is known and a price is needed, but it does not explicitly state when to use this tool over alternatives (e.g., chainlink_list_feeds for retrieving feed metadata). No when-not-to-use or exclusion criteria are provided.
chainlink_get_subscription (A)
Get Chainlink Functions subscription details including balance, owner, authorized consumers, and request counts. Queries the Functions Router contract on-chain.
| Name | Required | Description | Default |
|---|---|---|---|
| chain_id | No | Chain to query. Defaults to 'ethereum' | |
| router_address | Yes | Functions Router address (hex with 0x prefix) | |
| subscription_id | Yes | Subscription ID (uint64) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the burden of behavioral disclosure. It mentions querying the on-chain contract, implying a read operation, but does not explicitly confirm idempotency, side effects, or prerequisites. The description is adequate but not rich.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences with zero extraneous content. It front-loads the core purpose in the first sentence and provides context in the second. Every word adds value.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description hints at return fields (balance, owner, etc.) but does not specify the exact return format or error conditions. It lacks details about prerequisites like network access or chain_id validation, but is minimally sufficient for a simple get tool.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema coverage is 100%, so the schema already documents all parameters. The description adds no additional parameter semantics beyond the schema, meeting baseline expectations.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get', the resource 'Chainlink Functions subscription details', and specifies key fields like balance, owner, authorized consumers, and request counts. It distinguishes from siblings such as vrf_get_subscription by specifying 'Functions' and mentioning on-chain query.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a clear use case (get subscription details) but does not explicitly state when to use this tool versus alternatives like vrf_get_subscription or chainlink_get_price. The context of Chainlink Functions vs VRF is implied by naming, but no direct guidance is given.
chainlink_get_upkeep_info (A)
Get information about a Chainlink Automation upkeep from the registry. Returns the upkeep target, balance, gas limit, and execution status.
| Name | Required | Description | Default |
|---|---|---|---|
| chain_id | No | Chain to query: 'ethereum', 'arbitrum', 'base', or chain ID. Defaults to 'ethereum' | |
| upkeep_id | Yes | Upkeep ID (uint256 as decimal string) | |
| registry_address | Yes | Automation Registry address (hex with 0x prefix) | |
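Because upkeep_id is a uint256 passed as a decimal string, a client might validate and ABI-encode it before querying the registry. The helper below is a hypothetical sketch, not this server's code.

```python
def upkeep_id_to_word(upkeep_id: str) -> str:
    """Parse a decimal-string upkeep ID and encode it as a 32-byte ABI word."""
    value = int(upkeep_id, 10)  # raises ValueError on non-decimal input
    if not 0 <= value < 2**256:
        raise ValueError("upkeep_id out of uint256 range")
    return value.to_bytes(32, "big").hex()
```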
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It indicates a read operation but does not disclose potential failure modes, permission requirements, or on-chain interaction specifics. Adequate but lacking depth.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences that efficiently convey purpose and output. It is front-loaded and contains no extraneous information.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read tool with 3 parameters and no output schema, the description covers the return data adequately. However, it omits error handling or default chain behavior, slightly reducing completeness.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
All three parameters are well-described in the input schema (100% coverage). The description adds no additional parameter-level insight, so a baseline score of 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves information about a Chainlink Automation upkeep, specifying exact return fields (target, balance, gas limit, execution status). It distinguishes from siblings like chainlink_check_upkeep which likely checks execution eligibility.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives such as chainlink_check_upkeep. The description implies usage for general info retrieval but does not provide exclusions or context for decision-making.
chainlink_list_feeds (A)
List popular Chainlink data feed addresses for a given chain. Returns feed pairs, addresses, and decimal precision.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Optional chain to list feeds for: 'ethereum', 'arbitrum', 'base'. Defaults to 'ethereum' | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states the tool returns addresses, pairs, and precision but does not disclose if the operation is read-only, whether authentication is needed, or any rate limits. The term 'popular' is vague and not explained. More behavioral context is needed.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two short sentences, efficiently conveying the purpose and output. No extraneous information is included, and the key points are front-loaded.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (one optional parameter, no output schema), the description provides adequate information about return values. However, it lacks details on what 'popular' means and how the list is curated. The presence of sibling list tools (ds, por) suggests more context could help, but overall it is minimally sufficient.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage for the single optional 'chain' parameter, including an explicit list of allowed values. The description does not add additional parameter semantics beyond what the schema provides, so a baseline score of 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'list', the resource 'popular Chainlink data feed addresses', and the scope 'for a given chain'. It also specifies the return fields: feed pairs, addresses, and decimal precision. This is specific and distinguishes it from sibling tools that perform different functions like sending messages or checking upkeep.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for listing feeds on a chain but does not explicitly provide when-to-use vs alternatives like ds_list_feeds or por_list_feeds. No exclusions or prerequisites are mentioned. The guidance is reasonable but lacks clear differentiation from sibling list tools.
ds_get_report (A)
Get a Data Streams report for a specific feed ID. Data Streams provide sub-second, low-latency market data for crypto, forex, equities, and commodities. Returns benchmarkPrice, bid, ask, timestamps, and fee info. Common feed IDs: ETH/USD = 0x000359843a543ee2fe414dc14c7e7920ef10f4372990b79d6361cdc0dd1ba782, BTC/USD = 0x00037da06d56d083fe599397a4769a042d63aa73dc4ef57709d31e9971a5b439.
| Name | Required | Description | Default |
|---|---|---|---|
| feed_id | Yes | Data Streams feed ID (hex string, e.g. '0x000359843a543ee2fe414dc14c7e7920ef10f4372990b79d6361cdc0dd1ba782' for ETH/USD) | |
| timestamp | No | Unix timestamp to query (optional — latest if omitted) | |
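Both example feed IDs begin with 0x0003, which appears to encode the report schema version in the leading bytes (an inference from the examples, not something this tool documents). A small parser sketch:

```python
def feed_id_schema_version(feed_id: str) -> int:
    """Read the schema version from the first two bytes of a 32-byte feed ID."""
    raw = feed_id[2:] if feed_id.startswith("0x") else feed_id
    if len(raw) != 64:
        raise ValueError("feed ID must be 32 bytes of hex")
    return int(raw[:4], 16)
```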
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully disclose behavioral traits. It mentions the tool returns data but does not discuss side effects, authentication requirements, rate limits, or potential failure modes. The read-only nature is implied but not explicit.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, with four sentences each serving a distinct purpose: tool action, context, return values, and examples. It is front-loaded with the key verb and resource.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately describes the return fields. It provides example feed IDs for common use cases. However, it does not cover error handling or authentication, which are less critical for a read-only tool.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already has 100% description coverage, but the description adds value by listing common feed IDs and explaining the return fields (e.g., benchmarkPrice, bid, ask), which goes beyond the schema's parameter-level documentation.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool gets a Data Streams report for a specific feed ID, specifying the returned fields (benchmarkPrice, bid, ask, timestamps, fee info) and providing example feed IDs. This differentiates it from the sibling ds_list_feeds tool, which lists feeds.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage requires a known feed ID and offers common feed IDs as examples, but it does not explicitly state when to use this tool versus alternatives like ds_list_feeds, nor does it provide prerequisites or exclusions.
ds_list_feeds (A)
List available Chainlink Data Streams feeds. Returns feed IDs, pairs, and asset classes (crypto, forex, equities, commodities). Data Streams provide sub-second latency market data — distinct from the slower on-chain Data Feeds.
| Name | Required | Description | Default |
|---|---|---|---|
| asset_class | No | Filter by asset class: 'crypto', 'forex', 'equities', 'commodities' (optional) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must convey behavioral traits. It states the tool returns feed IDs, pairs, and asset classes, and is a list operation. However, it does not disclose any side effects, authentication needs, rate limits, or potential size limits of the response. For a read-only list tool, this is adequate but not comprehensive.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the purpose and return values, followed by a concise distinction from the sibling tool. Every word earns its place with no redundancy.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has only one optional parameter and no output schema, the description adequately explains the functionality, return content, and differentiation from a similar tool. It could mention potential response size or pagination, but for its simplicity, it is sufficiently complete.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for the single optional parameter 'asset_class'. The tool description reinforces the meaning by listing the allowed asset classes (crypto, forex, equities, commodities), but adds no new information beyond what the schema already provides. Baseline 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool lists available Chainlink Data Streams feeds, returns feed IDs, pairs, and asset classes, and distinguishes itself from the slower on-chain Data Feeds. This satisfies a specific verb+resource and differentiates it from the sibling 'chainlink_list_feeds'.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description clearly contrasts Data Streams (sub-second latency) with Data Feeds (slower on-chain), providing guidance on when to use this tool versus the sibling. It also mentions the optional asset class filter. However, it does not explicitly state when not to use it.
por_get_reserve (A)
Read a Chainlink Proof of Reserve feed to verify asset reserves onchain. Uses the same AggregatorV3Interface as price feeds but returns reserve amounts instead of prices. Well-known PoR feeds on Ethereum: WBTC = 0xa81FE04086865e63E12dD3776978E49DEEa2ea4e, USDC = 0x9a177Bb065A0636C7972C6D27Abcd4B1e5EDb65c, TUSD = 0x478f4c42b877c697C4b19E396865D5437Ef4E08B.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain: 'ethereum'. Defaults to 'ethereum' | |
| feed_address | Yes | Proof of Reserve feed contract address (hex). Well-known: WBTC=0xa81FE04086865e63E12dD3776978E49DEEa2ea4e, USDC=0x9a177Bb065A0636C7972C6D27Abcd4B1e5EDb65c | |
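A typical follow-up to reading a PoR feed is comparing the reserve amount against the token's supply. The backing-ratio helper below is hypothetical, and the decimals and raw values used in the test are made up.

```python
from decimal import Decimal

def backing_ratio(reserve_raw: int, reserve_decimals: int,
                  supply_raw: int, supply_decimals: int) -> Decimal:
    """Reserves divided by supply; a ratio >= 1 suggests full backing."""
    reserve = Decimal(reserve_raw) / Decimal(10) ** reserve_decimals
    supply = Decimal(supply_raw) / Decimal(10) ** supply_decimals
    return reserve / supply
```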
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It reveals the tool is read-only and returns reserve amounts, but does not mention error handling, permissions, or rate limits. For a simple read operation, this is adequate but not comprehensive.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with three sentences: purpose, differentiation from price feeds, and specific addresses. No unnecessary words, and the key information is front-loaded. Every sentence earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (two parameters, no output schema), the description covers purpose, usage, and examples. It could mention the return format or typical units, but the sibling tools (including chainlink_get_price) provide context, making it sufficiently complete for selection and invocation.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage with clear descriptions for both parameters. The description adds context about well-known feed addresses and the AggregatorV3Interface, but does not substantially enhance parameter understanding beyond the schema. Baseline 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool reads a Chainlink Proof of Reserve feed to verify asset reserves. It distinguishes from sibling price feed tools by explicitly noting it returns reserve amounts instead of prices, and provides specific well-known contract addresses.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains the tool uses the same interface as price feeds but returns reserves, guiding when to use this over price feed alternatives. It provides well-known feed addresses for common cases, but could be more explicit about when not to use it or other alternatives.
por_list_feeds (A)
List well-known Chainlink Proof of Reserve feeds. Returns feed addresses, asset names, and descriptions for verifying reserve backing of wrapped/synthetic assets.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description outlines what the tool returns (feed addresses, names, descriptions) but lacks information on side effects, authentication requirements, rate limits, or read-only nature. With no annotations, the agent has limited behavioral context.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences that front-load the main action ('List well-known Chainlink Proof of Reserve feeds'). Every word serves a purpose with no redundancy.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list tool with no parameters and no output schema, the description covers purpose and return values adequately. However, it could include details like whether the list is exhaustive or if special permissions are needed.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are no parameters, and schema description coverage is 100% (trivially). The description adds meaning by specifying the return fields and the purpose (reserve backing verification), providing value beyond the empty schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it lists Chainlink Proof of Reserve feeds with specific return fields (addresses, asset names, descriptions). It distinguishes from siblings like chainlink_list_feeds by specifying 'Proof of Reserve', indicating a different domain.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for verifying reserve backing but does not explicitly state when to use this tool over siblings (e.g., chainlink_list_feeds or ds_list_feeds). No when-not-to-use or alternative guidance is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
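Since the tool takes no parameters, invoking it over MCP reduces to a bare tools/call request. A minimal sketch, assuming the tool is named por_list_feeds (inferred from the server's por_ prefix convention; the exact name is not confirmed by the listing):

```python
import json

# Hypothetical MCP tools/call request for a parameterless list tool.
# The tool name "por_list_feeds" is an assumption based on the server's
# por_ prefix; check the actual tool listing before use.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "por_list_feeds",
        "arguments": {},  # no parameters, per the review above
    },
}
payload = json.dumps(request)
```

Because the arguments object is empty, an agent cannot misuse the tool; the only selection risk is picking it over a sibling list tool, which is exactly the gap the review notes.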
vrf_get_subscription (Grade: B)
Get VRF v2.5 subscription details from the VRFCoordinatorV2_5 contract. Returns balance, owner, authorized consumers, and pending requests. Supports Ethereum, Arbitrum, and Base.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain: 'ethereum', 'arbitrum', 'base'. Defaults to 'ethereum' | |
| subscription_id | Yes | VRF subscription ID (uint256 as decimal string) | |
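The 'uint256 as decimal string' convention suggests the client passes a decimal string that the server ABI-encodes into a 32-byte calldata word for the underlying eth_call. A minimal sketch of that encoding, assuming standard Solidity ABI rules; the helper name is illustrative and not part of the server:

```python
def encode_uint256(decimal_str: str) -> str:
    """ABI-encode a decimal-string uint256 as a 64-hex-char (32-byte) word."""
    n = int(decimal_str, 10)
    if not 0 <= n < 2**256:
        raise ValueError("value out of uint256 range")
    # Left-pad to 32 bytes, per the Solidity ABI encoding for uint256.
    return format(n, "064x")

# A subscription ID supplied as a decimal string becomes one calldata word.
word = encode_uint256("12345")
```

This also explains why the schema insists on a decimal string rather than a number: VRF v2.5 subscription IDs are full uint256 values that can exceed a JSON number's safe integer range.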
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, and the description does not disclose behavioral traits beyond the read operation. It lacks details on side effects, authentication needs, rate limits, or prerequisites.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no wasted words. Front-loaded with the action and resource, immediately stating purpose and return fields.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
A simple get operation for which the description covers purpose, return fields, and supported chains. It does not specify the contract address or network requirements, but for a read tool with a clear name, the completeness is adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already documents both parameters. The description restates the supported chains, which are already implicit in the chain parameter's schema description, and therefore adds little meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it gets VRF v2.5 subscription details and lists specific return fields. It is differentiated from siblings such as chainlink_get_subscription and vrf_request_random by version and purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs alternatives such as chainlink_get_subscription (for other VRF versions) or vrf_request_random (for requesting randomness). The description provides neither when-not-to-use guidance nor explicit alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
vrf_request_random (Grade: A)
Build transaction calldata for a VRF v2.5 random words request. Returns the hex-encoded calldata for VRFCoordinatorV2_5.requestRandomWords(). The caller must sign and submit the transaction from a consumer contract. VRF v2.5 supports payment in LINK or native token.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain: 'ethereum', 'arbitrum', 'base'. Defaults to 'ethereum' | |
| key_hash | Yes | VRF key hash for the gas lane (hex, 32 bytes) | |
| num_words | No | Number of random words to request (default: 1, max: 500) | |
| native_payment | No | Pay in native token instead of LINK (default: false) | |
| subscription_id | Yes | VRF subscription ID (uint256 as decimal string) | |
| callback_gas_limit | No | Callback gas limit (default: 100000) | |
| request_confirmations | No | Number of block confirmations before fulfillment (default: 3) | |
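The defaults and bounds in the table above can be enforced client-side before calling the tool. A hedged sketch with an illustrative helper name; the validation rules are taken only from the parameter table, not from the server's actual schema:

```python
def build_vrf_request_args(subscription_id: str, key_hash: str, **opts) -> dict:
    """Merge caller options with the documented defaults and sanity-check
    them before invoking vrf_request_random. Illustrative helper only."""
    args = {
        "chain": opts.get("chain", "ethereum"),
        "subscription_id": subscription_id,
        "key_hash": key_hash,
        "num_words": opts.get("num_words", 1),
        "native_payment": opts.get("native_payment", False),
        "callback_gas_limit": opts.get("callback_gas_limit", 100_000),
        "request_confirmations": opts.get("request_confirmations", 3),
    }
    if args["chain"] not in ("ethereum", "arbitrum", "base"):
        raise ValueError("unsupported chain")
    if not 1 <= args["num_words"] <= 500:
        raise ValueError("num_words must be between 1 and 500")
    if not (key_hash.startswith("0x") and len(key_hash) == 66):
        raise ValueError("key_hash must be 32 bytes of hex")
    return args

# Minimal call: only the two required parameters, defaults for the rest.
args = build_vrf_request_args("1", "0x" + "ab" * 32)
```

Because the tool only returns calldata, a mistake here surfaces later, when the consumer contract submits the transaction; validating before the tool call keeps the failure close to its cause.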
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses that it only builds calldata rather than sending the transaction, and outlines the payment options. Without annotations, it covers the key behavioral traits but omits error conditions and success indicators.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Concise, with the purpose front-loaded, followed by return format and payment details. Efficient, though it could benefit from clearer sectioning.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers return value and caller responsibility adequately. No output schema, so it appropriately describes the hex-encoded calldata. Could reference external docs for completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so parameters are well-described in the schema. The description adds minor context (e.g., VRF v2.5, payment options) but does not substantially deepen understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it builds transaction calldata for VRF v2.5 random words request, specifying the exact contract function and output format. Distinct from sibling tools which handle CCIP, Chainlink upkeep, etc.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides essential usage context: caller must sign and submit transaction from a consumer contract, and supports LINK or native payment. Lacks explicit comparison to alternative approaches or when not to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.