
Server Details

Whale & Institutional Flow MCP — 8 tools: TVL flows, alpha signals, stablecoin supply.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: ToolOracle/smartmoneyoracle
GitHub Stars: 0
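
The listing reports a Streamable HTTP transport but does not show the endpoint URL. As a minimal connection sketch, assuming the official MCP TypeScript SDK (@modelcontextprotocol/sdk) and a placeholder URL, a client might connect and enumerate the tools like this:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: the listing does not publish the server URL.
const transport = new StreamableHTTPClientTransport(
  new URL("https://example.com/mcp")
);

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Should enumerate the eight tools reviewed below.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```

The per-tool sketches below reuse this client instance.
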
Server Listing
SmartMoneyOracle

Tool Descriptions (Grade: C)

Average 3/5 across 8 of 8 tools scored.

Server Coherence (Grade: A)
Disambiguation: 3/5

The tools have overlapping purposes that could cause confusion, particularly between 'chain_flows', 'flow_scan', 'protocol_flows', and 'stablecoin_flows', which all involve tracking capital or TVL movements. However, their descriptions provide some differentiation, such as focusing on cross-chain allocation, protocol-level analysis, top gainers/losers, and stablecoin-specific metrics, which helps agents distinguish them with careful reading.

Naming Consistency: 4/5

Most tool names follow a consistent snake_case pattern with descriptive terms (e.g., 'alpha_signal', 'chain_flows', 'health_check'), but there are minor deviations like 'institutional' (a single noun) and 'top_whales' (adjective_noun instead of verb_noun). Overall, the naming is readable and mostly predictable, with only slight inconsistencies.

Tool Count: 5/5

With 8 tools, the count is well-scoped for a DeFi analytics server, covering key areas like signals, capital flows, protocol tracking, and health checks. Each tool appears to serve a distinct analytical purpose, and the number is neither too sparse nor overwhelming for agents to manage effectively.

Completeness: 4/5

The toolset provides broad coverage for DeFi analytics, including market signals, TVL flows, protocol monitoring, and system health. Minor gaps exist, such as no direct tools for historical data analysis or customizable alerts, but agents can likely work around these with the available tools to perform core tasks like tracking capital movements and assessing market health.

Available Tools

8 tools
alpha_signal (Grade: C)

Combined alpha scoring: price momentum + volume surge + buy pressure + whale boost activity. Rating: COLD to EXPLOSIVE.

Parameters (JSON Schema)
query (Required: No): Token name or symbol (required)
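
A hedged example call, reusing the client from the connection sketch above; the token symbol is illustrative, and the server publishes no output schema:

```typescript
// Sketch only: "ETH" is an illustrative token symbol, not a documented example.
const result = await client.callTool({
  name: "alpha_signal",
  arguments: { query: "ETH" },
});
// The COLD-to-EXPLOSIVE rating presumably arrives as text content.
console.log(result.content);
```
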
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions the scoring factors and output rating scale, but lacks critical behavioral details: it doesn't specify if this is a read-only operation, how data is sourced, latency, rate limits, or error handling. The phrase 'combined alpha scoring' suggests computation, but the exact behavior is unclear.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise (one sentence) and front-loaded with the core purpose. However, it could be more structured by separating the purpose from the rating scale for clarity. There's no wasted text, but it borders on being too brief for a tool with no annotations.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of financial scoring and lack of annotations or output schema, the description is incomplete. It doesn't explain the output format, how the rating is derived, data freshness, or limitations. For a tool that likely returns nuanced data, this leaves significant gaps for an AI agent to use it effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with one parameter 'query' documented as 'Token name or symbol (required)'. The description adds no parameter-specific information beyond this, but with only one parameter and high schema coverage, the baseline is high. The description implies the parameter is used for token lookup in scoring, which aligns with the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool calculates a 'combined alpha scoring' based on multiple factors (price momentum, volume surge, buy pressure, whale boost activity), which provides a general purpose. However, it doesn't specify the exact action (e.g., 'retrieve', 'calculate', 'analyze') or clearly distinguish it from sibling tools like 'top_whales' or 'protocol_flows' that might also involve whale or flow analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention any context, prerequisites, or exclusions, nor does it reference sibling tools. The rating scale ('COLD to EXPLOSIVE') implies a use case for evaluating token signals, but this is too vague to guide selection among similar tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

chain_flows (Grade: B)

Cross-chain capital allocation: which chains hold how much TVL. See where the ecosystem money sits.

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes what the tool does (shows TVL allocation) but lacks behavioral details like whether it's read-only, requires authentication, has rate limits, or what format the output takes. For a tool with zero annotation coverage, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and front-loaded, using two brief sentences that directly state the tool's purpose and value. Every word earns its place, with no redundant or vague phrasing. It efficiently communicates the core functionality without unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (likely moderate, as it involves cross-chain data) and lack of annotations/output schema, the description is minimally complete. It states what the tool does but doesn't cover behavioral aspects or output details. For a tool with no structured metadata, it meets basic requirements but leaves gaps in transparency and usage context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description adds context about what the tool reveals (TVL across chains), which provides semantic meaning beyond the empty schema. Baseline is 4 for zero parameters, as the description adequately explains the tool's function without parameter clutter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to show cross-chain capital allocation by revealing which chains hold how much TVL. It uses concrete phrasing ('Cross-chain capital allocation', 'See where the ecosystem money sits') and identifies the resource (TVL across chains). However, it doesn't explicitly differentiate from sibling tools like 'protocol_flows' or 'stablecoin_flows', which may also relate to capital flows.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools or contexts where this tool is preferred, such as for high-level ecosystem analysis rather than detailed protocol-level insights. Usage is implied by the purpose but lacks explicit when/when-not instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

flow_scan (Grade: C)

Protocol TVL flow analysis: current TVL, 7-day inflow/outflow, chain breakdown. Track where capital is moving.

Parameters (JSON Schema)
protocol (Required: No): Protocol slug (e.g. 'aave', 'lido', 'uniswap') (required)
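
A sketch of a call using one of the slugs the schema itself gives as examples, again reusing the client from the connection sketch:

```typescript
// "aave" is one of the example slugs from the parameter description.
const flows = await client.callTool({
  name: "flow_scan",
  arguments: { protocol: "aave" },
});
console.log(flows.content); // current TVL, 7-day inflow/outflow, chain breakdown
```
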
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. While it describes what data the tool provides (TVL metrics, capital movement tracking), it doesn't mention important behavioral aspects like whether this is a read-only operation, if it requires authentication, rate limits, data freshness, or error conditions. The description is purely functional without operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two concise sentences that directly state the tool's purpose and function. The first sentence lists the specific metrics analyzed, and the second explains the tracking capability. There's no wasted language, though it could be slightly more front-loaded with the core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of financial data analysis and the absence of both annotations and output schema, the description is insufficiently complete. It doesn't explain what format the results will be in, what timeframes are available beyond '7-day', how chain breakdown is presented, or what 'track where capital is moving' means in practical terms. For a data analysis tool with no structured output documentation, more detail is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100% with the single 'protocol' parameter fully documented in the schema. The description doesn't add any parameter-specific information beyond what the schema already provides about the protocol slug requirement. This meets the baseline score of 3 when schema coverage is high.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool analyzes protocol TVL flow with specific metrics (current TVL, 7-day inflow/outflow, chain breakdown) and tracks capital movement. It uses specific verbs like 'analysis' and 'track' with the resource 'protocol TVL flow', but doesn't explicitly differentiate from sibling tools like 'protocol_flows' or 'chain_flows'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With sibling tools like 'protocol_flows' and 'chain_flows' that likely handle similar data, there's no indication of when this specific TVL flow analysis is preferred or what distinguishes it from those other tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

health_check (Grade: C)

Server health, API connectivity, tool list, pricing.

Parameters (JSON Schema)

No parameters
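
Because the tool declares no parameters, a call passes an empty arguments object; the same pattern should apply to the other zero-parameter tools (chain_flows above, institutional and stablecoin_flows below). A sketch, reusing the client from the connection example:

```typescript
const health = await client.callTool({
  name: "health_check",
  arguments: {}, // no parameters defined by the schema
});
console.log(health.content);
```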

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden but offers minimal behavioral insight. It hints at returning diagnostic data but doesn't specify if this performs active checks, requires permissions, has side effects, or details response format. More context on what 'health' entails or how 'pricing' is retrieved would improve transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise—a single phrase listing key components—which is efficient for a simple tool. However, it could be more structured by clarifying the action (e.g., 'Check' or 'Retrieve') to improve front-loading of purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's apparent complexity (providing multiple types of system info) and lack of annotations or output schema, the description is incomplete. It lists topics but doesn't explain what data is returned, how it's formatted, or any limitations, leaving significant gaps for an AI agent to understand usage fully.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters, and schema description coverage is 100%, so no parameter documentation is needed. The description doesn't add parameter semantics, but that's acceptable here, warranting a baseline score above minimum due to the lack of parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description lists four components (server health, API connectivity, tool list, pricing) which gives a general idea of what information the tool provides, but it lacks a specific verb and doesn't clearly distinguish this diagnostic tool from potential siblings like 'flow_scan' or 'protocol_flows'. It's vague about whether this performs checks or just returns status information.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description doesn't mention prerequisites, frequency, or context for invocation, nor does it differentiate from sibling tools that might also provide system or flow-related information.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

institutional (Grade: B)

Track institutional-grade protocols: Lido, Aave, Maker, Ondo, BlackRock, Ethena etc. TVL, 1d/7d changes, categories.

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions what data is tracked (TVL, changes, categories) but doesn't describe how the tool behaves: e.g., whether it's a read-only operation, requires authentication, has rate limits, returns real-time or historical data, or handles errors. For a tool with zero annotation coverage, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded, using a single sentence to state the tool's purpose with specific examples and metrics. Every word contributes to understanding, with no redundant or vague phrasing. However, it could be slightly more structured by separating protocol examples from metrics for clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (tracking multiple protocols and metrics) and lack of annotations and output schema, the description is minimally adequate. It covers what data is tracked but omits behavioral details, output format, and usage context. For a tool with no structured fields to rely on, it should provide more completeness, such as explaining return values or data freshness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description adds value by specifying the types of protocols tracked (e.g., Lido, Aave) and metrics (TVL, 1d/7d changes, categories), which clarifies the tool's scope beyond the empty schema. This compensates adequately, though it doesn't detail output format or data sources.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: tracking institutional-grade protocols with specific examples (Lido, Aave, Maker, etc.) and metrics (TVL, 1d/7d changes, categories). It uses the verb 'track' with the resource 'institutional-grade protocols,' making it specific and actionable. However, it doesn't explicitly differentiate from sibling tools like 'protocol_flows' or 'stablecoin_flows,' which might overlap in domain.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It lists examples of protocols and metrics but doesn't specify contexts, prerequisites, or exclusions. With sibling tools like 'protocol_flows' and 'stablecoin_flows' potentially related, the lack of differentiation leaves usage ambiguous for an AI agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

protocol_flows (Grade: C)

Top protocols gaining and losing TVL right now. The smart money flow radar. Filter by category (DEX, Lending, etc.).

Parameters (JSON Schema)
limit (Required: No): Results per direction (default: 15)
category (Required: No): Filter: dexes, lending, liquid staking, bridge, etc.
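
A sketch combining both optional filters, with values drawn from the schema's documented default and category examples:

```typescript
const movers = await client.callTool({
  name: "protocol_flows",
  arguments: {
    limit: 10,           // results per direction (schema default: 15)
    category: "lending", // one of the category filters listed in the schema
  },
});
console.log(movers.content);
```
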
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It implies real-time data ('right now') and filtering capabilities, but doesn't describe critical behaviors such as data freshness, rate limits, authentication needs, error handling, or what 'smart money flow radar' entails. The description adds some context but leaves significant gaps for a tool that likely involves dynamic financial data.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded, with three short sentences that efficiently convey the core functionality. However, the phrase 'The smart money flow radar' is somewhat vague and could be considered fluff, slightly reducing efficiency. Overall, it's well-structured with minimal waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of real-time financial data and the lack of annotations and output schema, the description is incomplete. It doesn't explain the return format (e.g., list of protocols with TVL changes), data sources, update frequency, or handling of large datasets. For a tool with no structured output and behavioral gaps, this leaves the agent under-informed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the input schema already fully documents the two parameters ('limit' and 'category'). The description mentions filtering by category, which aligns with the schema but adds no additional semantic details beyond what's provided in the structured fields. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to show protocols gaining and losing TVL (Total Value Locked) right now, with filtering by category. It uses specific verbs ('gaining and losing') and identifies the resource ('protocols'), but doesn't explicitly differentiate from sibling tools like 'chain_flows' or 'stablecoin_flows' which might also involve flow-related data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions filtering by category but doesn't specify use cases, prerequisites, or exclusions. With sibling tools like 'alpha_signal' and 'flow_scan' available, there's no indication of how this tool differs in context or application.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

stablecoin_flows (Grade: B)

Stablecoin supply tracking — a leading market indicator. Total supply, per-coin breakdown, chain distribution.

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It describes what data the tool provides (supply tracking, breakdowns) but doesn't cover behavioral traits such as data freshness, rate limits, authentication needs, or potential side effects. For a tool with zero annotation coverage, this leaves significant gaps in understanding how it operates.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and front-loaded, using a single sentence that efficiently conveys the tool's core functionality. Every phrase ('Stablecoin supply tracking,' 'per-coin breakdown,' 'chain distribution') earns its place by adding specific information without redundancy or waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 0 parameters, no annotations, and no output schema, the description is adequate as a basic overview but incomplete for full contextual understanding. It covers what data is provided but lacks details on data sources, update frequency, or error handling. For a data-fetching tool with minimal structured support, it meets a minimum viable standard but has clear gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, meaning no parameters are documented in the schema. The description adds value by implying the tool returns aggregated data (total supply, per-coin breakdown, chain distribution), which compensates for the lack of parameter documentation. However, it doesn't detail output formats or optional filters, keeping it from a perfect score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: tracking stablecoin supply with breakdowns by coin and chain distribution. It uses specific terms like 'supply tracking,' 'per-coin breakdown,' and 'chain distribution,' which indicate what the tool does. However, it doesn't explicitly differentiate from sibling tools like 'chain_flows' or 'protocol_flows,' which might have overlapping domains, so it doesn't reach a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions it's a 'leading market indicator,' which implies a context of market analysis, but doesn't specify scenarios, prerequisites, or exclusions. Without explicit when-to-use or when-not-to-use statements, it offers minimal usage direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

top_whales (Grade: C)

Current whale activity across DeFi: top boosted tokens, community takeovers, CoinGecko trending. Filter by chain.

Parameters (JSON Schema)
chain (Required: No): Filter: solana, ethereum, base, bsc, etc.
limit (Required: No): Results (1-20, default: 10)
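
A sketch using both optional parameters, with values taken from the schema's examples and documented range:

```typescript
const whales = await client.callTool({
  name: "top_whales",
  arguments: {
    chain: "solana", // chain filter from the schema's examples
    limit: 5,        // must be 1-20 per the schema (default: 10)
  },
});
console.log(whales.content);
```
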
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. While it mentions what data is returned (whale activity with specific categories), it doesn't describe important behavioral aspects like rate limits, data freshness, authentication requirements, or what constitutes 'whale activity'. The description provides some context but leaves significant gaps for a tool that presumably queries live market data.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise with two sentences that efficiently convey the core functionality and filtering capability. The first sentence front-loads the main purpose with specific categories, and the second adds the filtering context. No wasted words, though it could be slightly more structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no annotations and no output schema that presumably returns complex market data about whale activity, the description is insufficient. It doesn't explain what format the results come in, what time period the 'current' activity covers, how 'whale' is defined, or what the different categories (boosted tokens, takeovers, trending) actually mean. The description leaves too many open questions for effective tool use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already fully documents both parameters (chain filter with examples, limit with range and default). The description adds only the phrase 'Filter by chain' which provides minimal additional semantic value beyond what's already in the schema. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides 'current whale activity across DeFi' with specific categories (top boosted tokens, community takeovers, CoinGecko trending), which is a specific verb+resource combination. However, it doesn't explicitly differentiate from sibling tools like 'alpha_signal' or 'institutional' that might also provide market insights.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions 'Filter by chain' which provides some context about when to use the chain parameter, but offers no guidance on when to use this tool versus alternatives like 'alpha_signal' or 'protocol_flows'. There's no mention of when-not-to-use scenarios or explicit alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
