Glama

Server Details

10 crypto tools for AI agents: BTC/ETH/SOL/XRP momentum signals, Polymarket odds, sentiment, x402.

Status
Healthy
Last Tested
Transport
Streamable HTTP
URL

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client
Glama
MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 3.7/5 across 16 of 16 tools scored. Lowest: 3.1/5.

Server Coherence: B
Disambiguation: 2/5

Multiple tools have overlapping or ambiguous purposes, causing potential confusion. For example, get_market_intel, get_market_mood, get_market_snapshot, get_sentiment, and get_trading_signals all provide market-related insights with unclear boundaries—agents may struggle to choose between them for specific needs like sentiment analysis or directional signals. While some tools like get_weather or scrape_url are distinct, the core trading/market tools lack clear differentiation.

Naming Consistency: 5/5

Tool names follow a highly consistent verb_noun pattern throughout, with all tools using the 'get_' or 'scrape_' prefix followed by a descriptive noun phrase (e.g., get_exchange_rates, scrape_url). There are no deviations in naming conventions, making the set predictable and easy to parse for agents.

Tool Count: 3/5

With 16 tools, the count is borderline high for a server focused on trading intelligence, as it includes unrelated utilities like get_weather and scrape_url that dilute the core purpose. While not excessive, the mix of trading-specific tools and general-purpose utilities makes the scope feel broad and somewhat unfocused, reducing appropriateness.

Completeness: 3/5

For trading intelligence, there are notable gaps in coverage, such as missing tools for placing trades, managing portfolios, or accessing historical data, which limits agent workflows. However, the server does provide a range of market data, signals, and analytics, along with unrelated utilities like web search and weather, creating an incomplete but partially functional surface for the domain.

Available Tools

17 tools
get_agent_pulse: A

Full crypto agent context in ONE call — replaces 5+ separate tool calls. Returns: BTC/ETH/SOL/XRP signals, market mood (bias/regime/key levels), whale flows (24h directional bias + notable moves), top 5 Polymarket markets by 24h volume, DeepBlue trading track record, and a 1-2 sentence agent_summary ready to drop into agent context. Best value for trading agents needing situational awareness — designed for periodic polling (e.g. every 5-15 min).
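Since the tool takes no arguments, a call over the Streamable HTTP transport reduces to a standard MCP tools/call JSON-RPC request. The sketch below is illustrative only; build_agent_pulse_request is a hypothetical helper, and a real client would also perform the initialize handshake and manage session headers.

```python
import json

# Illustrative only: builds the JSON-RPC 2.0 "tools/call" payload for
# get_agent_pulse, which takes no arguments. A real MCP client performs
# an initialize handshake before issuing tool calls.
def build_agent_pulse_request(request_id: int) -> dict:
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "get_agent_pulse", "arguments": {}},
    }

print(json.dumps(build_agent_pulse_request(1)))
```

For the periodic polling the description recommends, an agent would issue this request every 5-15 minutes and refresh its context from the returned agent_summary.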

Parameters (JSON Schema)
Name | Required | Description | Default

No parameters

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | Yes | -
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes what is returned but does not disclose any potential behavioral traits such as destructive actions (likely none), rate limits, authentication needs, or data freshness. The description is adequate but lacks details on error states or limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise: two sentences with no extraneous words. The key value proposition is front-loaded, and the return list is clearly structured for quick parsing by an AI agent.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (aggregating multiple data types) and the existence of an output schema, the description covers the essential return values and use case. It mentions polling frequency but could be enhanced with notes on prerequisites or error handling. Overall, it is sufficiently complete for an agent to decide when to invoke this tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters, so schema coverage is 100%. The description adds significant meaning by explaining the comprehensive output and use case, which is beyond the empty schema. It effectively communicates what the tool returns without needing parameter details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it provides 'Full crypto agent context in ONE call — replaces 5+ separate tool calls,' listing specific data types (BTC/ETH/SOL/XRP signals, market mood, whale flows, Polymarket markets, DeepBlue track record, agent_summary). This distinguishes it from sibling tools that offer individual pieces of this data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description says 'Best value for trading agents needing situational awareness — designed for periodic polling (e.g. every 5-15 min).' This indicates when to use it and suggests it's for broad, periodic updates rather than specific queries. It does not explicitly state when not to use it or name specific alternatives, but the replacement claim implies it consolidates other calls.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_exchange_rates: A

Live FX and crypto exchange rates. No API key needed.
Args:
base: Base currency (default: USD)
symbols: Comma-separated currencies/crypto to convert to (default: EUR,GBP,JPY,BTC,ETH)
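The two documented arguments and their defaults can be captured in a small builder. This is a sketch only; exchange_rate_args is a hypothetical helper name, and the defaults are taken from the description above, not from the server's schema (which leaves descriptions empty).

```python
# Hypothetical helper mirroring the documented defaults of
# get_exchange_rates: base defaults to USD, symbols to a comma-separated
# list of target currencies/crypto.
def exchange_rate_args(base: str = "USD",
                       symbols: str = "EUR,GBP,JPY,BTC,ETH") -> dict:
    return {"base": base, "symbols": symbols}

print(exchange_rate_args())                  # documented defaults
print(exchange_rate_args("EUR", "USD,BTC"))  # EUR base, two targets
```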

Parameters (JSON Schema)
Name | Required | Description | Default
base | No | - | USD
symbols | No | - | EUR,GBP,JPY,BTC,ETH

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | Yes | -
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses that no API key is needed (important access context) and mentions 'live' rates (timeliness). However, it doesn't cover rate limits, data sources, update frequency, error conditions, or authentication requirements beyond the API key note. Some behavioral aspects remain unspecified.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized with two sentences: one stating purpose and key feature, another detailing parameters. The parameter section is clearly formatted. It's front-loaded with the core functionality. Minor improvement could be integrating parameter details more seamlessly rather than separate 'Args:' section.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 2 parameters with good description coverage, no annotations, and an output schema exists (so return values needn't be explained), the description is reasonably complete. It covers what the tool does, key access feature, and parameter meanings. Could be more complete by mentioning data sources or limitations, but adequate for this complexity level.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by explaining both parameters: 'base' as 'Base currency' with default USD, and 'symbols' as 'Comma-separated currencies/crypto to convert to' with default values. This adds crucial meaning beyond the bare schema, though it doesn't specify format constraints or valid currency codes.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides 'Live FX and crypto exchange rates' which is a specific verb (get) and resource (exchange rates). It distinguishes itself from most siblings by focusing on currency/crypto rates rather than market analysis, news, or other data types. However, it doesn't explicitly differentiate from 'get_token_price' which might overlap with crypto rates.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through 'No API key needed' which suggests ease of access. It doesn't provide explicit guidance on when to use this vs alternatives like 'get_token_price' for crypto or other market tools. The default values suggest typical use cases but no explicit when/when-not instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_market_intel: B

BTC macro intelligence: funding rates, open interest trends, liquidation risk levels, and regime classification (trending/ranging/volatile).

Parameters (JSON Schema)
Name | Required | Description | Default

No parameters

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | Yes | -
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. While it lists the types of intelligence provided, it doesn't disclose important behavioral traits: whether this is real-time or historical data, update frequency, data sources, authentication requirements, rate limits, or error conditions. For a market intelligence tool with zero annotation coverage, this is a significant gap in behavioral transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and well-structured in a single sentence that efficiently lists all key intelligence components. Every element (BTC macro intelligence, funding rates, open interest trends, liquidation risk levels, regime classification) earns its place by specifying the tool's scope and outputs. No wasted words or redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has an output schema (which should document return values), the description doesn't need to explain return format details. However, for a market intelligence tool with no annotations and many similar siblings, the description should provide more context about data freshness, scope limitations, or when to prefer this over alternatives. The current description is adequate but leaves important contextual gaps unanswered.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters (schema coverage 100%), so the description doesn't need to explain parameters. The baseline for zero parameters is 4. The description appropriately focuses on what the tool returns rather than parameter semantics, which aligns with the parameterless design.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides 'BTC macro intelligence' with specific data types listed (funding rates, open interest trends, liquidation risk levels, regime classification). It distinguishes from siblings like get_token_price or get_sentiment by focusing on comprehensive market intelligence rather than specific metrics. However, it doesn't explicitly contrast with get_market_snapshot or get_trading_signals which might have overlapping functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With many sibling tools (get_market_snapshot, get_trading_signals, get_realtime_signal, get_market_mood) that might provide related market data, there's no indication of when this specific BTC intelligence tool is preferred. The description only states what it provides, not when it's most appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_market_mood: A

Quick market mood check: fear/greed level, regime classification, per-asset directional bias. Lightweight alternative to full sentiment analysis.

Parameters (JSON Schema)
Name | Required | Description | Default

No parameters

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | Yes | -
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions being 'lightweight' and 'quick,' which hints at performance characteristics, but lacks details on rate limits, authentication needs, data sources, or potential side effects. For a tool with no annotation coverage, this leaves significant gaps in understanding its operational behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and well-structured in two sentences: the first states the purpose and outputs, and the second provides usage context. Every sentence adds value without redundancy, making it efficient and front-loaded with essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (market analysis with no parameters) and the presence of an output schema, the description is moderately complete. It outlines the purpose and usage but lacks behavioral details like data freshness or limitations. With an output schema handling return values, the description doesn't need to explain outputs, but it could benefit from more context on reliability or scope.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters with 100% schema description coverage, so the schema fully documents the inputs. The description adds no parameter-specific information, which is acceptable given the lack of parameters. A baseline score of 4 is appropriate as it doesn't need to compensate for any schema gaps, and the description focuses on output semantics instead.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: checking market mood with specific outputs (fear/greed level, regime classification, per-asset directional bias). It distinguishes itself from 'get_sentiment' by positioning as a 'lightweight alternative to full sentiment analysis,' though it doesn't explicitly differentiate from other market-related siblings like 'get_market_intel' or 'get_trading_signals.'

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: for a 'quick market mood check' as opposed to a 'full sentiment analysis,' implying it's faster or simpler. However, it doesn't explicitly state when not to use it or name specific alternatives beyond the general 'full sentiment analysis' reference, missing detailed exclusions or comparisons with other siblings like 'get_market_snapshot.'

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_market_snapshot: B

All-coins snapshot: price, momentum, signals in one call. Quick overview of BTC, ETH, SOL, XRP with current price, 1m/5m momentum, and bias.

Parameters (JSON Schema)
Name | Required | Description | Default

No parameters

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | Yes | -
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool returns 'current price, 1m/5m momentum, and bias' but doesn't specify data freshness, rate limits, authentication requirements, or what 'bias' means operationally. For a market data tool with zero annotation coverage, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise with two sentences that efficiently convey scope and content. The first sentence establishes the comprehensive nature ('All-coins snapshot'), and the second provides specific examples and metrics. No wasted words, though it could be slightly more structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no parameters, an output schema exists, and annotations are absent, the description adequately covers the purpose but lacks important context about when to use it versus siblings and behavioral details like data sources or limitations. The existence of an output schema reduces the need to describe return values, but other gaps remain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters with 100% schema description coverage, so the schema fully documents the input requirements. The description appropriately doesn't discuss parameters since none exist, maintaining focus on what the tool returns rather than what it accepts.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides a 'snapshot' of multiple cryptocurrencies with specific metrics (price, momentum, signals, bias), which is a specific verb+resource combination. However, it doesn't explicitly differentiate from sibling tools like 'get_token_price' or 'get_trading_signals' beyond listing the specific coins covered (BTC, ETH, SOL, XRP).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'get_token_price' for single coins or 'get_trading_signals' for more detailed signals. It only describes what the tool does, not when it's appropriate compared to other market-related tools in the sibling list.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_news: A

Get Hacker News top stories or search by keyword. No API key needed.
Args:
query: Search term (leave empty for top stories)
limit: Number of stories (1-20, default 5)
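The argument rules in the description (an empty query means top stories; limit ranges 1-20 with a default of 5) can be sketched as a small builder. news_args is a hypothetical helper name; the clamping behavior is an assumption based on the documented range, not confirmed server behavior.

```python
# Hypothetical builder for get_news arguments. Per the description:
# an empty query returns top stories, and limit must be 1-20 (default 5).
def news_args(query: str = "", limit: int = 5) -> dict:
    args = {"limit": max(1, min(20, limit))}  # clamp into documented range
    if query:  # omit query entirely to request top stories
        args["query"] = query
    return args

print(news_args())            # top stories, default limit
print(news_args("rust", 50))  # limit clamped to 20
```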

Parameters (JSON Schema)
Name | Required | Description | Default
limit | No | - | -
query | No | - | -

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | Yes | -
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses key behavioral traits: no API key needed and the limit range (1-20, default 5). However, it lacks details on rate limits, error handling, or response format, which are important for a tool with an output schema but no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with the first sentence stating the core purpose and key constraint. The Args section is structured clearly without redundancy. Every sentence adds value, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no annotations, but an output schema exists), the description is mostly complete. It covers purpose, usage, and parameters well. However, it could benefit from mentioning the output structure or any prerequisites, though the output schema reduces the need for return value details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds significant meaning beyond the input schema, which has 0% description coverage. It explains that 'query' is a search term and leaving it empty fetches top stories, and 'limit' specifies the number of stories with a range and default. This compensates well for the low schema coverage, though it could detail data types or constraints more explicitly.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get Hacker News top stories or search by keyword') and distinguishes it from siblings by focusing on Hacker News content. It explicitly mentions 'No API key needed,' which differentiates it from tools that might require authentication.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context on when to use it: for top stories or keyword searches on Hacker News. However, it does not explicitly state when not to use it or name alternatives among siblings, such as 'web_search' or 'get_github_trending,' which might serve similar purposes in different contexts.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_performance: B

DeepBlue bot trading performance: win rate, total P&L, recent trades. Verified track record from live Polymarket 5-min crypto trading.

Parameters (JSON Schema)
Name | Required | Description | Default

No parameters

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | Yes | -
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the data source (live Polymarket 5-min crypto trading) and verification status, but doesn't describe response format, data freshness, rate limits, authentication needs, or potential side effects. For a tool with no annotation coverage, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states what metrics are retrieved, the second provides source context. Both sentences add value; the second establishes credibility through verification and specificity. It could be slightly more front-loaded with the core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has zero parameters and an output schema exists (so return values are documented elsewhere), the description provides adequate context about what data is retrieved and its source. However, for a tool with no annotations, it should ideally mention more about the behavioral characteristics like data freshness or response format to be fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters with 100% schema description coverage, so the baseline is 4. The description appropriately doesn't discuss parameters since none exist, and it doesn't need to compensate for any schema gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves DeepBlue bot trading performance metrics including win rate, total P&L, and recent trades, with a specific source (live Polymarket 5-min crypto trading). It distinguishes from siblings by focusing on trading bot performance rather than market data, news, or other analytics. However, it doesn't explicitly contrast with the most similar sibling 'get_trading_signals'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides minimal usage guidance - it implies this tool should be used when seeking verified trading bot performance data. However, it offers no explicit when-to-use vs. when-not-to-use criteria, no prerequisites, and no comparison to alternatives like 'get_trading_signals' or 'get_polymarket_analytics' that might overlap in domain.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_polymarket_analytics: B

Polymarket prediction market analytics: active positions, per-coin win rates, edge analysis, best/worst trading hours from DeepBlue's live trading bot.

Parameters (JSON Schema)
Name | Required | Description | Default

No parameters

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | Yes | -
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions the data source ('DeepBlue's live trading bot') which hints at real-time or live data, but doesn't disclose critical behavioral traits like whether this is a read-only operation, potential rate limits, authentication needs, or data freshness. For a tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently lists the key analytics components and data source. Every element ('active positions, per-coin win rates, edge analysis, best/worst trading hours') serves a clear purpose, and there's no redundant information. It's appropriately sized for a no-parameter tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 0 parameters, an output schema exists, and no annotations, the description provides a reasonable overview of what analytics are returned. However, for a tool with no annotations, it should ideally mention behavioral aspects like data latency or access constraints. The existence of an output schema reduces the need to describe return values, but more context on usage scenarios would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters with 100% schema description coverage, so the schema fully documents the lack of inputs. The description adds value by specifying the analytics scope (active positions, win rates, edge analysis, trading hours) and data source, which helps the agent understand what data will be retrieved without needing parameters. This compensates adequately for the parameter-less design.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: retrieving Polymarket prediction market analytics including active positions, win rates, edge analysis, and trading hour insights. It specifies the data source ('from DeepBlue's live trading bot'), which helps distinguish it from generic market tools. However, it doesn't explicitly differentiate from sibling tools like get_market_intel or get_trading_signals that might overlap in domain.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions 'Polymarket prediction market analytics' but doesn't clarify if this is for real-time data, historical analysis, or specific use cases. With siblings like get_market_intel and get_trading_signals, there's no indication of when this tool is preferred or what makes it unique.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_realtime_signal (Grade B)

Live directional signal for a single coin from Binance websocket feed. Args: coin: Coin to analyze — btc, eth, sol, or xrp (default: btc) Returns momentum, orderbook imbalance, aggressor ratio, and directional bias.
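The argument contract stated in the description can be sketched client-side. This is a hypothetical wrapper, not the server's actual API: the function name and request shape are assumptions; only the valid coin values and the btc default come from the description above.

```python
# Hypothetical client-side sketch of the documented argument contract:
# coin must be one of btc/eth/sol/xrp and defaults to btc.
VALID_COINS = {"btc", "eth", "sol", "xrp"}

def build_realtime_signal_request(coin: str = "btc") -> dict:
    coin = coin.lower()
    if coin not in VALID_COINS:
        raise ValueError(f"unsupported coin: {coin!r}")
    return {"name": "get_realtime_signal", "arguments": {"coin": coin}}

print(build_realtime_signal_request("ETH")["arguments"]["coin"])  # eth
```

Validating the enum before the call lets an agent fail fast instead of discovering an unsupported coin from a server error.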

Parameters (JSON Schema)
Name | Required | Description | Default
coin | No | | btc

Output Schema (JSON Schema)
Name | Required | Description
result | Yes
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool is 'live' and from a 'websocket feed', implying real-time data, but lacks details on rate limits, authentication needs, data freshness, or potential errors. This leaves significant gaps in understanding operational behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with the core purpose stated first, followed by parameter details in a structured format. Every sentence adds value, though the 'Args:' and 'Returns:' sections could be integrated more smoothly for better flow.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (real-time data source, one parameter), no annotations, and the presence of an output schema (which handles return values), the description is partially complete. It covers the purpose and parameters well but lacks behavioral details like rate limits or error handling, making it adequate but with clear gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaningful context beyond the input schema, which has 0% description coverage. It explains the 'coin' parameter's purpose ('Coin to analyze'), lists valid values ('btc, eth, sol, or xrp'), and specifies the default ('btc'). This compensates well for the schema's lack of documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: retrieving a live directional signal for a single coin from Binance websocket feed. It specifies the verb ('get'), resource ('directional signal'), and source ('Binance websocket feed'), but does not explicitly differentiate it from sibling tools like 'get_trading_signals' or 'get_market_snapshot', which might offer overlapping functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not mention sibling tools like 'get_trading_signals' or 'get_market_snapshot', nor does it specify use cases, prerequisites, or exclusions, leaving the agent without context for tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_sentiment (Grade A)

Crypto sentiment composite for BTC/ETH/SOL/XRP. Blends Fear & Greed Index (30%), whale/institutional exchange flow (20%), and 6 real-time technical indicators (50%). Per-coin scores from -1 (max bearish) to +1 (max bullish).
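The 30/20/50 blend the description states can be written out directly. This is a minimal sketch assuming each component is already normalized to [-1, +1]; the function name and the clamping step are assumptions, and only the weights and output range come from the description.

```python
# Sketch of the documented composite: Fear & Greed 30%, whale/exchange
# flow 20%, technical indicators 50%. Inputs assumed pre-normalized.
def composite_sentiment(fear_greed: float, whale_flow: float,
                        technicals: float) -> float:
    score = 0.30 * fear_greed + 0.20 * whale_flow + 0.50 * technicals
    # Clamp to the documented -1 (max bearish) .. +1 (max bullish) range.
    return max(-1.0, min(1.0, score))

print(round(composite_sentiment(0.4, -0.2, 0.6), 2))  # 0.38
```

Because the weights sum to 1.0 and each input is bounded, the clamp only matters if an upstream component drifts outside its range.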

Parameters (JSON Schema)
Name | Required | Description | Default

No parameters

Output Schema (JSON Schema)
Name | Required | Description
result | Yes
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: it's a read-only operation (implied by 'get'), describes the composite methodology (Fear & Greed Index, whale flow, technical indicators), and specifies the output range (-1 to +1). It doesn't mention rate limits or data freshness, leaving some gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise and front-loaded: it immediately states the purpose in the first sentence, then provides methodology details and output format in subsequent sentences. Every sentence adds value with zero wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (blending multiple data sources), no annotations, and the existence of an output schema, the description is complete enough. It explains what the tool does, how it works, and what it returns, covering the essential context without needing to detail return values since an output schema exists.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters with 100% schema description coverage, so the baseline is 4. The description appropriately doesn't discuss parameters since none exist, focusing instead on what the tool provides without requiring input.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states what the tool does: it provides a 'crypto sentiment composite' for specific cryptocurrencies (BTC/ETH/SOL/XRP) by blending multiple data sources. It clearly distinguishes from siblings like get_market_mood or get_trading_signals by specifying the exact methodology and output format.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying the cryptocurrencies covered and the sentiment scoring methodology, which helps differentiate it from general market tools. However, it doesn't explicitly state when to use this tool versus alternatives like get_market_mood or get_trading_signals, though the specificity provides some guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_token_price (Grade B)

Real-time token price. Supports: BTC, ETH, SOL, XRP, and other major tokens. Args: token: Token symbol (e.g. 'BTC', 'ETH', 'SOL')
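Since the description's examples are upper-case symbols ('BTC', 'ETH', 'SOL'), a caller might normalize input before the call. This is a hypothetical pre-call helper; upper-casing is an assumption about what the server expects, not documented behavior.

```python
# Hypothetical normalization for the required 'token' argument.
def build_token_price_request(token: str) -> dict:
    symbol = token.strip().upper()
    if not symbol.isalpha():
        raise ValueError(f"not a token symbol: {token!r}")
    return {"name": "get_token_price", "arguments": {"token": symbol}}

print(build_token_price_request(" sol ")["arguments"]["token"])  # SOL
```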

Parameters (JSON Schema)
Name | Required | Description | Default
token | Yes

Output Schema (JSON Schema)
Name | Required | Description
result | Yes
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It states 'real-time' which implies current data, but doesn't mention rate limits, data sources, accuracy guarantees, error conditions, or authentication requirements. For a financial data tool with zero annotation coverage, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise with two sentences that each serve distinct purposes: the first states the core functionality and scope, the second documents the parameter. The structure is clear and front-loaded with the main purpose. There's minimal waste, though the formatting with 'Args:' could be integrated more smoothly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (single parameter, real-time financial data), no annotations, but with an output schema present, the description is minimally adequate. It covers the basic purpose and parameter semantics, but lacks behavioral context about data freshness, limitations, or error handling that would be important for reliable use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaningful parameter context beyond the schema. While the schema only shows 'token' as a required string, the description clarifies it expects token symbols like 'BTC', 'ETH', 'SOL', lists supported tokens, and mentions 'other major tokens' to indicate scope. With 0% schema description coverage, this significantly compensates for the schema's lack of documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: retrieving real-time token prices for specific cryptocurrencies. It specifies the verb 'get' and resource 'token price', and lists example tokens. However, it doesn't explicitly differentiate from sibling tools like get_market_snapshot or get_exchange_rates, which might offer overlapping functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With multiple market-related sibling tools (get_market_snapshot, get_exchange_rates, get_trading_signals, etc.), there's no indication of what distinguishes this tool's scope or when it's the appropriate choice over other market data tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_trading_signals (Grade A)

Live BTC, ETH, SOL, XRP 5-minute directional signals with confidence scores. Generated by DeepBlue's MomentumSignalGenerator from real-time Binance websocket data. Includes: direction (UP/DOWN), confidence (0.50-0.78), regime, and indicator breakdown.
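The documented 0.50-0.78 confidence band lends itself to simple post-processing on the agent side. The field names below mirror the description (direction, confidence) but are assumptions about the exact response shape.

```python
# Hypothetical filter over the documented output: keep only signals whose
# confidence lies in the stated 0.50-0.78 band and clears a caller-chosen
# threshold.
def strong_signals(signals, min_conf=0.65):
    return [s for s in signals
            if 0.50 <= s["confidence"] <= 0.78 and s["confidence"] >= min_conf]

sample = [
    {"coin": "btc", "direction": "UP", "confidence": 0.72},
    {"coin": "eth", "direction": "DOWN", "confidence": 0.55},
]
print([s["coin"] for s in strong_signals(sample)])  # ['btc']
```

Filtering on the band as well as the threshold guards against malformed values that fall outside what the description promises.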

Parameters (JSON Schema)
Name | Required | Description | Default

No parameters

Output Schema (JSON Schema)
Name | Required | Description
result | Yes
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: real-time/live nature, specific cryptocurrencies covered, 5-minute intervals, confidence score ranges (0.50-0.78), and output structure. It doesn't mention rate limits, authentication needs, or data freshness guarantees, but provides substantial operational context for a read-only data retrieval tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three sentences with zero wasted words. The first sentence establishes core functionality, the second explains the generation method and data source, and the third details output components. Every sentence adds essential information, making it appropriately concise and well-organized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has zero parameters, 100% schema coverage, and an output schema exists, the description provides excellent context for what the tool does. It explains the signal generation process, data sources, and output structure. For a parameterless data retrieval tool with output schema, this is nearly complete; it could potentially mention data latency or update frequency for full completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters with 100% schema description coverage, so the baseline is 4. The description appropriately doesn't discuss parameters since none exist, which is correct for this tool configuration. No additional parameter information is needed or provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: generating live directional trading signals for specific cryptocurrencies (BTC, ETH, SOL, XRP) with 5-minute intervals. It specifies the data source (Binance websocket), generation method (DeepBlue's MomentumSignalGenerator), and output components (direction, confidence, regime, indicator breakdown). This distinguishes it from sibling tools like get_market_snapshot or get_token_price which provide different types of market data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through its content - it's for obtaining real-time trading signals for specific cryptocurrencies. However, it doesn't explicitly state when to use this tool versus alternatives like get_realtime_signal or get_market_intel, nor does it provide exclusion criteria or prerequisites. The guidance is implicit rather than explicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_weather (Grade A)

Current weather and 6-hour forecast for any city. No API key needed. Args: city: City name (e.g. 'London', 'New York', 'Tokyo')

Parameters (JSON Schema)
Name | Required | Description | Default
city | Yes

Output Schema (JSON Schema)
Name | Required | Description
result | Yes
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It usefully adds that 'No API key needed' (authentication context) and specifies the forecast duration ('6-hour'), but doesn't mention rate limits, error conditions, or what happens with invalid city names. It adequately describes the core behavior without rich operational details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly front-loaded with the core purpose in the first sentence, followed by a concise parameter section. Every sentence earns its place: the first explains what the tool does, the second adds authentication context, and the parameter documentation is essential given the schema's lack of descriptions.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (single parameter, weather data), no annotations, but with an output schema present, the description is reasonably complete. It covers purpose, authentication, parameter details, and forecast scope. The output schema presumably handles return values, so the description doesn't need to explain those. Minor gaps include lack of error handling or geographical limitations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage, so the description must fully compensate. It explicitly documents the single parameter 'city' with clear semantics ('City name'), provides examples ('London', 'New York', 'Tokyo'), and explains it's required. This adds substantial value beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('get current weather and 6-hour forecast') and resource ('for any city'). It distinguishes itself from sibling tools like get_exchange_rates or get_news by focusing exclusively on weather data, making its scope unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('for any city') and mentions 'No API key needed' as a practical usage consideration. However, it doesn't explicitly state when NOT to use it or name alternatives among sibling tools, though the weather focus naturally differentiates it from financial or news tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_whale_moves (Grade B)

On-chain whale movements and exchange flows. Tracks large BTC/ETH transfers, exchange inflows/outflows, and accumulation patterns.

Parameters (JSON Schema)
Name | Required | Description | Default

No parameters

Output Schema (JSON Schema)
Name | Required | Description
result | Yes
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. While it describes what data the tool provides (whale movements, exchange flows), it doesn't disclose important behavioral traits like whether this is real-time or historical data, update frequency, rate limits, authentication requirements, or what format the output takes. For a data retrieval tool with zero annotation coverage, this is inadequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise with two sentences that directly state the tool's purpose. The first sentence establishes the core function, and the second elaborates on specific data types. There's no wasted text, though the structure could be slightly improved by front-loading the most critical information more explicitly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that the tool has no parameters, has an output schema (which handles return value documentation), and performs a relatively straightforward data retrieval function, the description is minimally adequate. However, it lacks important context about data freshness, scope limitations, and how it differs from sibling tools. The presence of an output schema means the description doesn't need to explain return values, but other contextual gaps remain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters, and schema description coverage is 100% (since there are no parameters to describe). The description appropriately doesn't discuss parameters since none exist. It focuses on what the tool provides rather than how to configure it, which is correct for a parameterless tool.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: tracking large cryptocurrency transfers, exchange flows, and accumulation patterns. It specifies the resources involved (BTC/ETH transfers, exchange inflows/outflows) and the action (tracks). However, it doesn't explicitly differentiate from sibling tools like get_market_snapshot or get_trading_signals, which might also involve cryptocurrency data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when this tool is appropriate, when other tools should be used instead, or what specific scenarios it addresses. With multiple sibling tools related to cryptocurrency/market data, this lack of differentiation is a significant gap.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrape_url (Grade A)

Extract clean readable text from any URL. No API key needed. Returns title, author, publish date, and full body text. Args: url: Full URL to scrape (must start with https://)
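The one documented constraint (the url must start with https://) is easy to enforce before calling the tool. This is a hypothetical client-side check; the helper name is an assumption.

```python
from urllib.parse import urlparse

# Minimal pre-call validation of the documented https:// constraint.
def validate_scrape_url(url: str) -> str:
    if urlparse(url).scheme != "https" or not url.startswith("https://"):
        raise ValueError("url must start with https://")
    return url

print(validate_scrape_url("https://example.com/article"))  # echoes the url
```

Rejecting plain-http URLs locally avoids a round trip that the server would refuse anyway.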

Parameters (JSON Schema)
Name | Required | Description | Default
url | Yes

Output Schema (JSON Schema)
Name | Required | Description
result | Yes
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: it extracts 'clean readable text' (implying processing), returns specific structured data (title, author, date, body), and notes 'No API key needed' (simplifying auth). It lacks details on rate limits, error handling, or content types, but covers core functionality adequately.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the core purpose, the second details the return values, and the third explains the parameter with a constraint. Every sentence earns its place with no wasted words, making it easy to scan and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter), no annotations, and the presence of an output schema (which handles return values), the description is complete enough. It covers purpose, usage context, key behavior, and parameter details, providing all necessary information for an agent to invoke the tool correctly without overloading.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 0%, so the description must compensate. It adds meaningful semantics: the 'url' parameter is described as 'Full URL to scrape' with the constraint 'must start with https://', which clarifies format and security requirement beyond the schema's basic string type. Since there's only one parameter, this is sufficient for high utility.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Extract clean readable text') and resource ('from any URL'), distinguishing it from all sibling tools which focus on financial, market, news, or search data rather than general web scraping. It explicitly mentions what gets extracted (title, author, publish date, body text), making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Extract clean readable text from any URL') and mentions 'No API key needed,' which is helpful guidance. However, it does not explicitly state when not to use it or name alternatives among the sibling tools (e.g., vs. 'get_news' or 'web_search'), missing full differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
