
Server Details

ArbitrumOracle - 12 Arbitrum tools: ERC-20, Camelot, GMX, sequencer, gas, bridge txs.

Status: Healthy
Transport: Streamable HTTP
Repository: ToolOracle/arbitrumoracle
GitHub Stars: 0
Server Listing: ArbitrumOracle

Tool Descriptions: B

Average 3.4/5 across 12 of 12 tools scored. Lowest: 2.5/5.

Server Coherence: A

Disambiguation: 4/5

Most tools target distinct aspects of the Arbitrum ecosystem (bridge, contracts, yields, gas, liquidity, protocols, tokens, stables, whales). Minor overlap exists between arb_token_risk and arb_token_screening (both analyze tokens) and between arb_contract_check and token risk (both check verification), but descriptions clarify the different focuses.

Naming Consistency: 5/5

All tools follow a consistent 'arb_' prefix with descriptive snake_case names. The pattern is uniform across all 12 tools, making it easy for an agent to predict and understand the tool purpose from the name alone.

Tool Count: 5/5

12 tools is a well-scoped set for an Arbitrum oracle. Each tool covers a distinct area of the ecosystem without being overly granular or too sparse. The count feels appropriate for the server's stated purpose.

Completeness: 5/5

The tool surface covers all major aspects of Arbitrum DeFi monitoring: overview, gas, yields, liquidity, bridge flows, protocols, tokens, stablecoins, whales, and compliance. There are no obvious gaps given the domain focus.

Available Tools

12 tools
arb_bridge_flows (B)

Bridge deposit/withdrawal flow monitoring for Arbitrum

Parameters: none

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description carries full burden for behavioral disclosure. It only says 'monitoring' without detailing whether it returns live data, historical data, or any side effects. The agent learns nothing beyond the name.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single short sentence with no wasted words. It is front-loaded and efficient for a simple tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description should hint at return value (e.g., list of transactions, aggregated data). It does not. Missing behavioral details like update frequency or scope reduce completeness for a monitoring tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are no parameters, so schema coverage is trivially 100%. The description does not need to add parameter meaning. It is adequate for a zero-parameter tool.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it monitors bridge deposit/withdrawal flows for Arbitrum. It uses a specific verb (monitoring) and resource (bridge flows), which distinguishes it from sibling tools like arb_contract_check and arb_gas_tracker.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No usage guidelines are provided. The description does not indicate when to use this tool versus alternatives, nor does it mention any prerequisites or context for invocation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

arb_contract_check (C)

Smart contract risk analysis: verification, proxy, compiler, license

Parameters:
  address (required): Contract address
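As an illustration of what a call to this tool looks like on the wire, the sketch below builds an MCP `tools/call` request for arb_contract_check. The JSON-RPC shape follows the MCP specification; the endpoint URL is not shown in this listing, and the example address is an arbitrary illustration, not taken from the server.

```python
import json

# Hypothetical MCP "tools/call" request for arb_contract_check.
# Only the tool name and its required "address" parameter come from the
# listing above; the id and the example address are illustrative.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "arb_contract_check",
        "arguments": {
            # Any Arbitrum contract address (0x followed by 40 hex chars).
            "address": "0x82aF49447D8a07e3bd95BD0d56f35241523fBab1",
        },
    },
}

print(json.dumps(payload, indent=2))
```

The same envelope works for every tool on this server; only `name` and `arguments` change.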
Behavior: 2/5

No annotations are provided, so the description must carry the full burden. It only lists aspects analyzed (verification, proxy, compiler, license) without disclosing whether it is read-only, requires authentication, has rate limits, or any side effects.

Conciseness: 3/5

The description is extremely concise, consisting of a single sentence. However, conciseness comes at the cost of clarity and completeness; it could be expanded without becoming verbose.

Completeness: 2/5

Given the tool performs multi-faceted risk analysis (verification, proxy, compiler, license) and has no output schema or annotations, the description is insufficient. It fails to describe the output format, depth of analysis, or interpretation of results.

Parameters: 3/5

Schema coverage is 100% with a well-documented 'address' parameter. The description adds 'contract address', but this is redundant with the schema's description. The baseline score of 3 applies since the description provides no additional meaning.

Purpose: 3/5

The description states it performs smart contract risk analysis related to verification, proxy, compiler, and license. It suggests a distinct function from sibling tools which cover financial flows, yields, and gas, but lacks a specific verb (e.g., 'analyze' or 'check') and the exact scope is ambiguous.

Usage Guidelines: 2/5

No guidance on when to use this tool versus siblings like 'arb_token_risk' or 'arb_stablecoin_risk'. The description does not specify prerequisites, contexts, or alternatives.

arb_defi_yields (B)

Compare DeFi yields across all Arbitrum protocols. Filter by TVL and category.

Parameters:
  category (optional): Filter by keyword (e.g., stablecoin, eth, gmx)
  min_tvl_usd (optional): Minimum pool TVL in USD (default: 100000)
Behavior: 2/5

With no annotations, the description carries full burden for behavioral disclosure, but it only states the tool compares yields and filters. It does not disclose read-only nature, destructive potential, rate limits, or result characteristics.

Conciseness: 5/5

The description is a single, front-loaded sentence that efficiently conveys purpose and key filters with no wasted words.

Completeness: 2/5

Given no output schema or annotations, the description lacks details about return format, available fields like APY, or pagination, which are important for a yield comparison tool.

Parameters: 3/5

Schema coverage is 100% with clear parameter descriptions. The tool description does not add meaningful information beyond what the schema already provides.

Purpose: 5/5

The description clearly states the tool compares DeFi yields across all Arbitrum protocols, with filters for TVL and category. This is specific and distinct from sibling tools like arb_bridge_flows or arb_gas_tracker.

Usage Guidelines: 3/5

The description implies usage for yield comparison and filtering, but provides no explicit guidance on when to use this tool versus siblings or what conditions make it appropriate.

arb_gas_tracker (A)

Current Arbitrum gas prices with USD cost estimates for transfers and swaps

Parameters: none

Behavior: 2/5

No annotations are provided, and the description lacks details on update frequency, computation method for USD estimates, or any limitations. As a simple read tool, it partially fulfills transparency but leaves key behavioral traits undisclosed.

Conciseness: 5/5

The description is a single, front-loaded sentence with no extraneous words. Every part adds value, achieving maximum conciseness.

Completeness: 3/5

While the tool is simple (no parameters), the description omits details such as units, time coverage, and any caveats about the USD estimates. Without an output schema, more explanation of the return format would improve completeness.

Parameters: 4/5

The input schema has zero parameters and is fully described (100% coverage). With no parameters, the description adds no additional parameter info, earning a baseline of 4.

Purpose: 5/5

The description clearly states the tool retrieves current Arbitrum gas prices with USD cost estimates for transfers and swaps, which is a specific verb+resource scope. It distinguishes well from siblings like arb_bridge_flows and arb_contract_check.

Usage Guidelines: 4/5

The description implies usage for obtaining gas price estimates but does not provide explicit when-to-use or alternatives. Given the tool's specific function, context is clear without exclusions.

arb_liquidity_scan (B)

DEX liquidity and volume analysis across Arbitrum exchanges

Parameters: none

Behavior: 2/5

No annotations exist, so the description must fully disclose behavior. However, it lacks any behavioral traits: no mention of read-only nature, authentication, rate limits, or what the analysis outputs. The agent is left uninformed about key characteristics.

Conciseness: 4/5

The description is a single concise sentence that front-loads the key information. It earns its place by conveying purpose without extraneous words, and it could be slightly more detailed without losing conciseness.

Completeness: 3/5

Given no parameters, no output schema, and no annotations, the description is the sole source of context. It adequately covers the domain but lacks specifics on output format, time range, or whether it returns current or historical data, leaving gaps for a comprehensive understanding.

Parameters: 4/5

With no parameters, schema coverage is trivially 100%. The description adds context by stating the domain of analysis, which is meaningful beyond the empty schema. Baseline 4 is appropriate.

Purpose: 4/5

The description specifies 'DEX liquidity and volume analysis across Arbitrum exchanges', clearly identifying the resource (DEX liquidity/volume) and action (analysis). It distinguishes from sibling tools which cover bridge flows, gas, yields, etc., providing domain-specific purpose.

Usage Guidelines: 2/5

No guidance is provided on when to use this tool versus alternatives. There is no indication of context, prerequisites, or when to avoid using it.

arb_overview (A)

Comprehensive Arbitrum ecosystem overview: ARB price, TVL, protocol count, chain status

Parameters: none

Behavior: 3/5

No annotations exist, so the description carries full burden. It states the tool provides an overview of several metrics, implying a read-only, non-destructive behavior. However, it does not disclose potential side effects, authentication needs, or rate limits, leaving some ambiguity.

Conciseness: 5/5

The description is a single short sentence with no redundancy, and the key purpose is front-loaded. Every word earns its place.

Completeness: 3/5

Given no output schema, the description partially compensates by listing the categories of data returned. However, it does not detail the output format or structure, which is needed for complete understanding of the tool's response.

Parameters: 4/5

The input schema has zero parameters, and schema coverage is 100%. The description does not need to add parameter details, but explicitly noting the tool takes no arguments would be helpful for clarity. Baseline for zero parameters is 4.

Purpose: 5/5

The description clearly states it provides a 'Comprehensive Arbitrum ecosystem overview' and lists specific data points (price, TVL, protocol count, chain status). This distinguishes it from sibling tools that focus on specific aspects like bridges, gas, or token risk.

Usage Guidelines: 3/5

The description implies usage for a broad overview, but lacks explicit guidance on when to use this tool versus the specialized siblings. No 'when not to use' or alternative recommendations are provided.

arb_protocol_health (A)

DeFi protocol health check: TVL, audit status, risk score. Supports GMX, Aave, Pendle, Camelot, etc.

Parameters:
  protocol (required): Protocol name (e.g., gmx, aave, pendle, camelot)
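The schema's examples are all lowercase ("gmx", "aave"), so a cautious client might normalize the protocol name before calling. Whether the server itself is case-insensitive is not documented, so treat the helper below as a defensive assumption rather than documented behavior.

```python
def protocol_arg(name: str) -> dict:
    # Normalize to the lowercase form used in the schema examples
    # (gmx, aave, pendle, camelot); server-side case handling is undocumented.
    return {"protocol": name.strip().lower()}

print(protocol_arg("  GMX "))
```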
Behavior: 3/5

With no annotations provided, the description carries the burden of behavioral disclosure. It mentions the checks performed (TVL, audit, risk) but does not explicitly state that the operation is read-only and free of side effects, or clarify authorization or rate limits. The behavioral transparency is adequate for a simple health check but lacks thoroughness.

Conciseness: 5/5

The description is extremely concise with two sentences: the first states the core function and metrics, the second lists supported protocols. No filler or redundancy, and the key information is front-loaded.

Completeness: 3/5

For a simple tool with one parameter and no output schema, the description provides the key input and three output metrics. However, it does not specify the exact output format (e.g., numeric scores, audit status string) or confirm that all listed metrics are always returned. This leaves some ambiguity for an agent.

Parameters: 3/5

Schema description coverage is 100% (the protocol parameter is well-documented with examples). The tool description adds some context by repeating those examples and linking them to the health check purpose, but does not add significant semantic value beyond the schema. Baseline 3 is appropriate.

Purpose: 5/5

The description clearly states the tool's purpose as a 'DeFi protocol health check' and specifies key metrics (TVL, audit status, risk score) and supported protocols. It effectively distinguishes from sibling tools like arb_gas_tracker or arb_token_risk.

Usage Guidelines: 3/5

The description implies usage for checking protocol health via listed metrics and protocols, but does not provide explicit guidance on when to use this tool versus alternatives (e.g., when to prefer arb_protocol_list or arb_token_risk). No exclusion criteria or when-not-to-use hints are given.

arb_protocol_list (A)

List all major DeFi protocols on Arbitrum ranked by TVL

Parameters: none

Behavior: 2/5

No annotations are provided, so the description must carry the full burden of behavioral transparency. It states that it lists 'all major ... protocols' but does not disclose potential mutability, pagination, data freshness, or any side effects. For a simple read operation, more clarity on scope is needed.

Conciseness: 5/5

The description is a single, concise sentence that conveys the essential information without any waste. Every word is necessary.

Completeness: 3/5

Given no output schema and simple inputs, the description is mostly complete but uses the ambiguous term 'major'. It does not specify if the list is exhaustive or if there are any implicit constraints. It is adequate, but with room for specificity.

Parameters: 4/5

The input schema has zero parameters with 100% schema description coverage. The description adds no parameter details because none are needed. Baseline for zero parameters is 4, and the description fulfills this.

Purpose: 5/5

The description clearly states the action (list), the resource (major DeFi protocols on Arbitrum), and the ordering (ranked by TVL). It is specific and distinguishes from sibling tools like arb_defi_yields (yields) and arb_protocol_health (health).

Usage Guidelines: 3/5

The description implies usage for getting a TVL-ranked list of protocols, but it does not provide explicit when-to-use or when-not-to-use guidance compared to siblings. No alternatives or exclusions are mentioned.

arb_stablecoin_risk (A)

Stablecoin supply and risk analysis on Arbitrum (USDC, USDT, DAI, etc.)

Parameters: none

Behavior: 2/5

No annotations are provided, and the description lacks any behavioral details such as data freshness, rate limits, or expected output structure. For an analysis tool with zero annotation coverage, the description should compensate but does not.

Conciseness: 4/5

The description is a single concise sentence that front-loads the purpose. It could be slightly more detailed without being verbose, but it is efficient.

Completeness: 2/5

Given the absence of an output schema, the description should elaborate on what 'risk analysis' includes (e.g., peg status, supply metrics). It only mentions stablecoin names, leaving the output format and depth unclear.

Parameters: 4/5

The input schema has zero parameters, so the description carries the full meaning. The description lists the stablecoins covered, which adds value beyond the empty schema. Baseline for 0 parameters is 4.

Purpose: 5/5

The description clearly specifies the tool's function: stablecoin supply and risk analysis on Arbitrum, listing specific stablecoins (USDC, USDT, DAI). This distinguishes it from sibling tools like arb_token_risk (broader token risk) and arb_overview (general metrics).

Usage Guidelines: 3/5

Usage context is implied by the name and description (stablecoin analysis), but no explicit guidance on when or when not to use it versus alternatives is provided.

arb_token_risk (A)

Risk score for any Arbitrum token by contract address. Checks verification, proxy status, activity.

Parameters:
  address (required): Token contract address on Arbitrum
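Because the description does not say how malformed input is handled, a client may want a cheap sanity check on the address before calling. The helper below is a hypothetical client-side guard, not part of the server; it checks basic shape only and omits EIP-55 checksum validation.

```python
import re

def is_plausible_evm_address(addr: str) -> bool:
    # 0x prefix followed by exactly 40 hex characters; this catches typos
    # before the request is sent, not invalid-but-well-formed addresses.
    return re.fullmatch(r"0x[0-9a-fA-F]{40}", addr) is not None

print(is_plausible_evm_address("0x82aF49447D8a07e3bd95BD0d56f35241523fBab1"))
print(is_plausible_evm_address("0x123"))
```

The same guard applies to the address parameters of arb_contract_check and arb_token_screening.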
Behavior: 3/5

No annotations are provided, and the description does not disclose whether the tool is read-only or has side effects. It lists the factors checked (verification, proxy status, activity), which gives some insight into behavior but is insufficient for full transparency.

Conciseness: 5/5

The description is concise (two sentences), front-loaded with the core purpose, and contains no fluff. Every sentence adds meaningful information.

Completeness: 4/5

Given the simple tool (one parameter, no output schema, no annotations), the description provides a clear understanding of what the tool does and what factors it considers. It could mention the return format or score range, but it is largely complete for practical use.

Parameters: 3/5

The input schema already describes the 'address' parameter adequately. The description adds context about what the risk score is based on but does not elaborate on the parameter format or constraints beyond the schema. With 100% schema coverage, the description provides marginal added value.

Purpose: 4/5

The description clearly states the tool's purpose: computing a risk score for an Arbitrum token given a contract address. It mentions specific checks (verification, proxy status, activity), which adds clarity. However, it does not explicitly differentiate from the sibling tool 'arb_token_screening', which may have overlapping functionality.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use when a risk assessment of an Arbitrum token is needed, but it does not provide explicit when-to-use or when-not-to-use guidance, nor does it mention alternative tools. The context is clear but lacks exclusions or prioritization.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

arb_token_screening (grade: C)

Compliance screening for Arbitrum tokens: verification, risk flags, basic AML check

Parameters (JSON Schema)

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| address | Yes | Token contract address | — |
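For context, an MCP client invokes a tool like this via a JSON-RPC 2.0 `tools/call` request. A minimal sketch, assuming the standard MCP request shape; the request `id` and the zero address below are illustrative placeholders, not values from this listing:

```python
import json

# Hypothetical tools/call request for arb_token_screening.
# The envelope follows the MCP specification (JSON-RPC 2.0);
# the address argument is a placeholder for a real token contract.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "arb_token_screening",
        "arguments": {
            # The schema's only (required) parameter.
            "address": "0x0000000000000000000000000000000000000000",
        },
    },
}

# Serialize for transport (e.g. Streamable HTTP).
payload = json.dumps(request)
```

Because the tool publishes no output schema, the shape of the screening result returned in the response is up to the server.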
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, so the description must carry the full behavioral disclosure. It mentions verification and an AML check but does not specify whether the tool is read-only, whether it calls external services, or what the output format is.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence of roughly ten words, highly concise. However, it sacrifices completeness for brevity, omitting important details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and annotations, the description should explain what the screening results look like or what risk flags mean. It is vague and leaves the agent uncertain about the tool's output.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% for the only parameter, 'address', which has a clear description. The tool description adds compliance context but no meaning beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool performs compliance screening for Arbitrum tokens, covering verification, risk flags, and a basic AML check. It is specific but does not fully distinguish the tool from its sibling arb_token_risk, which may overlap.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like arb_token_risk. No when-not-to-use advice or surrounding context is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

arb_whale_watch (grade: B)

Monitor large transactions for any Arbitrum address. Tracks whale movements.

Parameters (JSON Schema)

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| address | Yes | Wallet or contract address | — |
| min_value_eth | No | Minimum ETH value threshold | 10 |
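The optional threshold parameter can be seen in a sketch of a `tools/call` request for this tool. As before, this assumes the standard MCP JSON-RPC 2.0 envelope; the `id`, the zero address, and the 25 ETH threshold are illustrative:

```python
import json

# Hypothetical tools/call request for arb_whale_watch.
# min_value_eth is optional; per the schema it defaults to 10 when omitted.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "arb_whale_watch",
        "arguments": {
            "address": "0x0000000000000000000000000000000000000000",
            # Raise the threshold to only surface larger transfers.
            "min_value_eth": 25,
        },
    },
}

payload = json.dumps(request)
```

Omitting `"min_value_eth"` entirely would exercise the schema default instead.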
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full responsibility for behavioral disclosure. It only says 'monitor' and 'tracks' but does not explain whether results are real-time, historical, or subscription-based. The threshold default is in the schema, but behavioral specifics are missing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, concise and front-loaded with the main action; it is appropriately short and wastes no words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and minimal annotations, the description should provide more context about what the tool returns (e.g., list of transactions, summary, alert behavior). It is insufficient for an agent to fully understand the tool's capabilities.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description adds no extra meaning beyond the schema: 'large transactions' and 'whale movements' align with the threshold parameter but do not enhance understanding of parameter usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool monitors large transactions for any Arbitrum address, with a specific focus on whale movements. This uniquely distinguishes it from sibling tools like arb_bridge_flows or arb_contract_check.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. Sibling tools exist for various Arbitrum monitoring tasks, but the description does not clarify when whale_watch is appropriate or when to use others.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

