
Server Details

Live crypto trading signals, sentiment, Polymarket analytics. Free demo + x402 micropayments.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: ClawHub-core/deepblue-mcp
GitHub Stars: 0
Server Listing
Deep Blue Trading

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4/5 across 17 of 17 tools scored.

Server Coherence: C
Disambiguation: 3/5

Several crypto analytics tools (get_market_snapshot, get_market_mood, get_market_intel, get_agent_pulse) have overlapping purposes despite detailed descriptions, while the general-purpose tools (weather, news, web search) are clearly distinct. An agent may pick the wrong one among the overlapping crypto tools.

Naming Consistency: 3/5

Most tools follow a get_ prefix pattern with descriptive snake_case names, but scrape_url and web_search break this convention, introducing inconsistency.

Tool Count: 3/5

At 17 tools, the count is moderate, but the inclusion of unrelated tools (weather, GitHub trending, news, web search, scraping) makes the set feel bloated for a trading intelligence server.

Completeness: 2/5

For a trading intelligence server, the set covers many data points but lacks important capabilities such as order placement and historical data retrieval, and the presence of irrelevant tools indicates a lack of focus.

Available Tools

17 tools
get_agent_pulse (A)

Full crypto agent context in ONE call — replaces 5+ separate tool calls. Returns: BTC/ETH/SOL/XRP signals, market mood (bias/regime/key levels), whale flows (24h directional bias + notable moves), top 5 Polymarket markets by 24h volume, DeepBlue trading track record, and a 1-2 sentence agent_summary ready to drop into agent context. Best value for trading agents needing situational awareness — designed for periodic polling (e.g. every 5-15 min).

Parameters (JSON Schema): no parameters

Output Schema: result (required)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It transparently describes the output as read-only data (returns signals, mood, flows, etc.) and suggests usage for polling. However, it does not explicitly confirm read-only behavior or discuss any side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a front-loaded value proposition followed by a bullet list of returned items. It is informative but could be slightly more concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 0 parameters and an existing output schema, the description is fully complete. It lists all returned data categories, provides usage context (periodic polling), and explains the tool's aggregated value.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters, so baseline is 4. The description does not need to add parameter meaning as there are none.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides 'Full crypto agent context in ONE call' and lists all specific data elements (BTC/ETH/SOL/XRP signals, market mood, whale flows, etc.). It distinguishes itself from siblings by claiming to replace 5+ separate tool calls.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear guidance on when to use the tool ('Best value for trading agents needing situational awareness — designed for periodic polling') but does not explicitly state when not to use it or mention alternatives for specific sub-queries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
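Since get_agent_pulse takes no parameters, an MCP client invokes it with an empty arguments object. A minimal sketch of the JSON-RPC 2.0 tools/call envelope (the method and params shape follow the MCP spec; the request id is illustrative):

```python
import json

# Minimal MCP "tools/call" request for the parameterless get_agent_pulse.
# "arguments" is an empty object because the tool declares no parameters;
# the request id is arbitrary.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_agent_pulse",
        "arguments": {},
    },
}

payload = json.dumps(request)
print(payload)
```

Because the tool is designed for periodic polling, a client would typically re-send this request every 5-15 minutes rather than on every agent turn.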

get_exchange_rates (A)

Live FX and crypto exchange rates. No API key needed. Args: base: Base currency (default: USD) symbols: Comma-separated currencies/crypto to convert to (default: EUR,GBP,JPY,BTC,ETH)

Parameters (JSON Schema):
base (optional, default: USD)
symbols (optional, default: EUR,GBP,JPY,BTC,ETH)

Output Schema: result (required)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description must disclose behavioral traits. It states the rates are 'live' and no API key is needed, but lacks details on update frequency, caching, rate limits, or any side effects. The minimal coverage leaves significant behavioral unknowns.


Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise, using two sentences plus an argument list. Key information (live rates, no API key) is front-loaded. Every sentence adds value, and there is no verbosity.


Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity and the existence of an output schema (presumably documenting return structure), the description covers the essentials. However, it lacks behavioral details like update frequency or accuracy, leaving some contextual gaps for an agent to fully understand the tool.


Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must add meaning. It explains `base` as 'Base currency (default: USD)' and `symbols` as 'Comma-separated currencies/crypto to convert to', partially compensating for the schema gap. However, it does not specify valid currencies or format expectations.


Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it provides 'Live FX and crypto exchange rates', which directly matches the tool name 'get_exchange_rates'. It distinguishes this tool from siblings like 'get_market_intel' and 'get_weather' by specifying its unique focus on exchange rates.


Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions 'No API key needed', which suggests simplicity, but it does not provide explicit guidance on when to use this tool versus alternatives. Usage is implied by the tool's purpose, but no exclusions or comparisons are offered.

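The defaults called out in the get_exchange_rates description (base USD, symbols EUR,GBP,JPY,BTC,ETH) suggest a simple client-side pattern for building the call arguments. A sketch, with a helper name of our own invention (not part of the server):

```python
# Hypothetical helper that applies the documented defaults for
# get_exchange_rates when the caller omits an argument. Both values are
# plain strings; symbols is a comma-separated list per the description.
def build_exchange_rate_args(base=None, symbols=None):
    return {
        "base": base or "USD",
        "symbols": symbols or "EUR,GBP,JPY,BTC,ETH",
    }

print(build_exchange_rate_args())
print(build_exchange_rate_args(base="EUR", symbols="USD,BTC"))
```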

get_market_intel (A)

BTC macro intelligence: funding rates, open interest trends, liquidation risk levels, and regime classification (trending/ranging/volatile).

Parameters (JSON Schema): no parameters

Output Schema: result (required)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description carries full burden; it lists output data types, indicating a read-only operation. It omits potential rate limits or data freshness, but the core behavior is clear.


Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence that efficiently conveys the tool's purpose and outputs, with no excess verbiage.


Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a parameterless tool with an output schema, the description adequately summarizes contents. It could mention data freshness or source, but is otherwise complete.


Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With zero parameters, the description adds value by specifying the exact data returned (funding rates, open interest, etc.), beyond the empty input schema.


Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves BTC macro intelligence, listing specific metrics like funding rates and open interest trends, which distinguishes it from sibling tools focused on other data types.


Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for obtaining BTC macro data but does not explicitly state when to use this tool over alternatives or provide exclusion criteria. Given the simple nature, it is adequate but lacks explicit guidance.


get_market_mood (A)

Quick market mood check: fear/greed level, regime classification, per-asset directional bias. Lightweight alternative to full sentiment analysis.

Parameters (JSON Schema): no parameters

Output Schema: result (required)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully disclose behavioral traits. It mentions the tool is 'quick' and a 'lightweight alternative,' but does not elaborate on side effects, data source, refresh frequency, or whether it is read-only. This leaves significant gaps.


Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is only two sentences, front-loaded with key terms and no wasted words. It efficiently conveys the purpose and usage context.


Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given zero parameters and an existing output schema, the description covers the main function adequately. However, it could mention data freshness or update frequency for completeness, but overall it is sufficient for a simple tool.


Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are zero parameters, so schema coverage is trivial. The description adds value by explaining the output concepts (fear/greed, regime, per-asset bias), which provides context beyond the output schema.


Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it checks market mood, providing fear/greed level, regime classification, and per-asset directional bias. It also distinguishes itself as a lightweight alternative to full sentiment analysis, differentiating it from sibling tools like get_sentiment.


Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly frames this as a lightweight alternative to full sentiment analysis, indicating when to use this tool. However, it does not specify when not to use it compared to other sibling tools like get_market_intel or get_market_snapshot.


get_market_snapshot (A)

All-coins snapshot: price, momentum, signals in one call. Quick overview of BTC, ETH, SOL, XRP with current price, 1m/5m momentum, and bias.

Parameters (JSON Schema): no parameters

Output Schema: result (required)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided. The description indicates a read-only snapshot, but it lacks details on rate limits, authentication, or behavior if data is missing. It is adequate but not comprehensive.


Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with two sentences, no wasted words, and front-loaded with the key purpose. It efficiently conveys the tool's value.


Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no parameters and an output schema (presumably structured), the description covers the essential information. It could mention if only these four coins are returned, but overall it is sufficiently complete for a simple snapshot tool.


Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are zero parameters, so the schema provides no meaning. The description adds value by specifying the coins and data types included, which helps the agent understand the output.


Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it provides an all-coins snapshot with specific coins (BTC, ETH, SOL, XRP) and data fields (price, momentum, bias). This distinguishes it from sibling tools that focus on single coins or different indicators.


Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies this is for a quick overview but does not explicitly state when to use it versus alternatives like get_token_price or get_trading_signals. No when-not-to-use guidance is provided.


get_news (A)

Get Hacker News top stories or search by keyword. No API key needed. Args: query: Search term (leave empty for top stories) limit: Number of stories (1-20, default 5)

Parameters (JSON Schema):
query (optional)
limit (optional)

Output Schema: result (required)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits. It states 'No API key needed' and limits (1-20 for limit), which are useful. However, it does not mention whether it's read-only, response format, or any error handling. Given the tool's simplicity, this is adequate but could be improved.


Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is extremely concise: two sentences plus a structured arg list. Every sentence adds value, with the main purpose front-loaded. No redundant information.


Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity and the presence of an output schema (not shown), the description covers purpose, parameters, and key constraints. It is complete enough for an agent to understand its function and use it correctly.


Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has only types and defaults (0% coverage). Description adds critical meaning: query as search term with empty for top stories, limit range 1-20, and default 5. This goes beyond the schema, making parameter usage clear.


Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states verb ('Get'), resource ('Hacker News top stories or search by keyword'), and key condition ('No API key needed'). It distinguishes itself from sibling tools like get_github_trending and web_search by specifying Hacker News specifically.


Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description implies usage for Hacker News but does not explicitly state when to avoid this tool or mention alternatives (e.g., web_search for general news). The phrase 'No API key needed' hints at ease of use, but it lacks explicit when-to-use/when-not-to-use guidance.

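The get_news constraints above (limit 1-20, default 5, empty query for top stories) map naturally onto a small client-side argument builder. A sketch with a hypothetical helper name; since the server's behavior on out-of-range limits is undocumented, the limit is clamped before sending:

```python
# Hypothetical argument builder for get_news. limit is clamped to the
# documented 1-20 range; an empty query is omitted entirely, which the
# description says yields top stories.
def build_news_args(query="", limit=5):
    args = {"limit": max(1, min(20, int(limit)))}
    if query:
        args["query"] = query
    return args

print(build_news_args())                  # top stories with default limit
print(build_news_args("rust", limit=50))  # limit clamped to 20
```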

get_performance (A)

DeepBlue bot trading performance: win rate, total P&L, recent trades. Verified track record from live Polymarket 5-min crypto trading.

Parameters (JSON Schema): no parameters

Output Schema: result (required)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description must carry the burden. It discloses the output includes win rate, P&L, and recent trades, but does not mention auth needs, rate limits, or any side effects. With an output schema present, the agent can infer structure, but behavioral traits are minimal.


Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no filler, front-loaded with key details. Every sentence adds value.


Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no parameters and the presence of an output schema, the description covers the tool's purpose and output. It could mention if data is real-time or historical, but overall is fairly complete for a simple retrieval tool.


Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are zero parameters, so schema coverage is 100% by default. The description adds no parameter-specific info, which is acceptable since none exist. Baseline 4 is appropriate.


Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides DeepBlue bot trading performance (win rate, total P&L, recent trades) and specifies it's a verified track record from live Polymarket 5-min crypto trading. This distinguishes it from sibling tools like get_exchange_rates or get_market_intel.


Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for retrieving bot performance but does not explicitly state when to use it vs alternatives or when not to use it. No exclusions or context are provided.


get_polymarket_analytics (A)

Polymarket prediction market analytics: active positions, per-coin win rates, edge analysis, best/worst trading hours from DeepBlue's live trading bot.

Parameters (JSON Schema): no parameters

Output Schema: result (required)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description bears full responsibility for behavioral disclosure. It mentions data sourced from 'DeepBlue's live trading bot,' but lacks details on update frequency, latency, or whether the tool is read-only. No mention of authentication needs, rate limits, or response format beyond a short list of analytics types.


Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence that efficiently conveys the tool's core function. No redundant or extraneous information is present.


Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no parameters and an output schema exists (though not shown), the description adequately outlines the analytics domains. However, it lacks information about prerequisites, data freshness, or any side effects (e.g., network calls). With no annotations, an agent might need additional context to fully understand usage boundaries.


Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters, so per scoring rules the baseline is 4. The description adds value by explaining the scope of returned data (positions, win rates, etc.), which is not required but enriches understanding beyond the empty schema.


Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides Polymarket prediction market analytics, listing specific data like active positions, win rates, edge analysis, and trading hours. It distinguishes itself from sibling tools (e.g., get_exchange_rates, get_market_mood) by focusing exclusively on Polymarket, making the purpose unambiguous.


Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not explicitly state when to use this tool versus alternatives. It implies use for Polymarket analytics, but fails to contrast with siblings like get_market_snapshot or get_trading_signals, which might have overlapping capabilities. No guidance on exclusions or contexts where this tool is inappropriate.


get_realtime_signal (A)

Live directional signal for a single coin from Binance websocket feed. Args: coin: Coin to analyze — btc, eth, sol, or xrp (default: btc) Returns momentum, orderbook imbalance, aggressor ratio, and directional bias.

Parameters (JSON Schema):
coin (optional, default: btc)

Output Schema: result (required)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully cover behavior. It mentions 'live' from websocket feed but does not disclose connection behavior, latency, rate limits, data staleness, or error handling (e.g., invalid coin). The description is insufficient for a real-time data tool.


Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with two sentences plus a structured list of args and returns. It is front-loaded with the core purpose. Minor redundancy in listing returns twice (in prose and list) but overall efficient.


Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema (not shown), the description adequately covers return fields. However, it omits prerequisites (e.g., API key?), lifecycle (does it keep streaming?), and single-coin limitation. For a real-time tool, more context on how long to wait or connection management is needed.


Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Although schema description coverage is 0%, the description adds significant value by providing specific coin values (btc, eth, sol, xrp) and the default. This clarifies the parameter's domain beyond the schema's type-only definition.


Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides a 'live directional signal for a single coin from Binance websocket feed.' It lists specific return metrics (momentum, orderbook imbalance, aggressor ratio, directional bias) and distinctively positions itself among siblings by specifying the source and real-time nature.


Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for real-time single-coin analysis but does not explicitly guide when to use this tool versus alternatives like get_trading_signals or conditions to avoid. No exclusions or context-specific guidance is given.

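Since the get_realtime_signal description enumerates the only accepted coin values (btc, eth, sol, xrp, default btc) but says nothing about error handling, a client can validate before calling. A sketch with a hypothetical helper name:

```python
# Hypothetical pre-flight validation for get_realtime_signal. The valid
# set and default come from the tool description; behavior on an invalid
# coin is undocumented, so we fail fast client-side.
VALID_COINS = {"btc", "eth", "sol", "xrp"}

def build_signal_args(coin="btc"):
    coin = coin.lower()
    if coin not in VALID_COINS:
        raise ValueError(f"coin must be one of {sorted(VALID_COINS)}, got {coin!r}")
    return {"coin": coin}

print(build_signal_args("ETH"))  # normalized to lowercase before the call
```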

get_sentiment (A)

Crypto sentiment composite for BTC/ETH/SOL/XRP. Blends Fear & Greed Index (30%), whale/institutional exchange flow (20%), and 6 real-time technical indicators (50%). Per-coin scores from -1 (max bearish) to +1 (max bullish).

Parameters (JSON Schema): no parameters

Output Schema: result (required)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears the full burden. It details the composition of the sentiment score (Fear & Greed Index, exchange flow, technical indicators) and the output range (-1 to +1). Although it does not mention update frequency or caching, it sufficiently discloses the tool's behavior for a read-like operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three short sentences, front-loaded with the core output. Every sentence adds essential information (coins covered, weightings, output scale). No extraneous words or repetition.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (blended sentiment for four coins), the description covers the main aspects. It explains the input (none), output scale, and weightings. With an output schema present, return value details are handled there. However, it could mention whether data is real-time or delayed to provide fuller context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters, and the description does not need to elaborate on them. With no parameters and 100% schema coverage, the baseline score of 4 applies. The description adds context on what the tool returns, but no parameter details are necessary.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the tool's purpose: providing a crypto sentiment composite for BTC/ETH/SOL/XRP. It specifies the blend of indicators and the output scale, making it distinct from sibling tools like get_market_mood or get_trading_signals.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for obtaining aggregated sentiment scores for specific cryptocurrencies but does not explicitly guide when to use this tool over alternatives like get_market_mood or get_trading_signals. No when-not or exclusion criteria are given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
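Since get_sentiment takes no parameters, an agent's call reduces to the standard MCP `tools/call` framing with an empty arguments object. The sketch below is a hypothetical request payload using the JSON-RPC 2.0 shape that MCP servers accept; the server's transport URL and any x402 payment details shown in the listing are intentionally omitted.

```python
import json

# Hypothetical MCP "tools/call" request for get_sentiment. The tool takes
# no parameters, so "arguments" is an empty object. Field names follow the
# JSON-RPC 2.0 framing used by MCP; transport details are omitted here.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_sentiment", "arguments": {}},
}

payload = json.dumps(request)
print(payload)
```

The response's per-coin scores would then be interpreted on the documented -1 (max bearish) to +1 (max bullish) scale.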

get_token_price (A)

Real-time token price. Supports: BTC, ETH, SOL, XRP, and other major tokens. Args: token: Token symbol (e.g. 'BTC', 'ETH', 'SOL')

Parameters (JSON Schema): token (required; no description or default given).

Output Schema (JSON Schema): result (required).
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully disclose behavioral traits. It only states 'real-time' and lists supported tokens. It does not mention whether authentication is required, rate limits, error behavior for unsupported tokens, or data freshness guarantees.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise: two sentences covering purpose, supported tokens, and parameter explanation. No unnecessary words; every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter) and the presence of an output schema, the description is minimally adequate. However, it could be more complete by mentioning data source, error handling, or that it returns the current price in a standard currency.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has one parameter 'token' with no description (0% coverage). The description adds 'token: Token symbol (e.g. 'BTC', 'ETH', 'SOL')', which clarifies the parameter's meaning and provides examples. However, it does not specify case sensitivity or whether full names are accepted.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's function: 'Real-time token price.' It lists supported tokens (BTC, ETH, SOL, XRP, and other major tokens), which distinguishes it from sibling tools like get_exchange_rates or get_market_intel that have different scopes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides basic context: use this tool to get the price of a specific token. However, it does not explicitly say when to choose this over alternatives (e.g., get_market_snapshot), nor does it mention when not to use it or provide exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
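For a single-parameter tool like get_token_price, a hypothetical MCP `tools/call` request might look like the sketch below. The JSON-RPC 2.0 field names are standard for MCP; the token symbol examples come from the tool's own description, while the request id and omitted transport/payment details are assumptions.

```python
import json

# Hypothetical MCP "tools/call" request for get_token_price.
# The 'token' argument is the symbol string the description documents,
# e.g. 'BTC', 'ETH', 'SOL'. Case sensitivity is not specified by the tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_token_price",
        "arguments": {"token": "BTC"},
    },
}

payload = json.dumps(request)
print(payload)
```

Because the description does not say whether full names ("Bitcoin") are accepted, sticking to the documented symbol form is the safer choice.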

get_trading_signals (A)

Live BTC, ETH, SOL, XRP 5-minute directional signals with confidence scores. Generated by DeepBlue's MomentumSignalGenerator from real-time Binance websocket data. Includes: direction (UP/DOWN), confidence (0.50-0.78), regime, and indicator breakdown.

Parameters (JSON Schema): no parameters.

Output Schema (JSON Schema): result (required).
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses the signal generator (DeepBlue's MomentumSignalGenerator) and data source (Binance websocket), but does not mention rate limits, costs, or any side effects. Since no annotations are provided, the description carries full responsibility.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with two sentences and a bullet list, front-loading the core information. Every sentence is purposeful with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no parameters and an output schema exists, the description fully explains what the tool returns, including specific coins, timeframe, and included data fields. No gaps are apparent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has no parameters (0 params, 100% coverage). The description adds value by explaining the output structure (direction, confidence range, regime, indicator breakdown), which compensates for the absence of parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it provides live 5-minute directional signals for BTC, ETH, SOL, XRP with confidence scores, specifying the generation source. However, it does not explicitly distinguish itself from the sibling tool 'get_realtime_signal', which may have overlapping functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like 'get_market_snapshot' or 'get_realtime_signal'. The description lacks context for selective usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_weather (A)

Current weather and 6-hour forecast for any city. No API key needed. Args: city: City name (e.g. 'London', 'New York', 'Tokyo')

Parameters (JSON Schema): city (required; no description or default given).

Output Schema (JSON Schema): result (required).
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided. The description discloses that the tool is read-only ('Current weather... forecast') and requires no API key, which are useful behavioral traits. However, it does not mention potential rate limits, data freshness, or any limitations, leaving room for improvement.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences plus a brief parameter note. It is front-loaded with the core purpose and key advantage (no API key). Every sentence adds value, and there is no redundancy or unnecessary detail.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (one parameter, no nested objects) and the presence of an output schema, the description adequately covers the tool's behavior. A small gap is the lack of detail on what the forecast data includes, but that is not essential for basic use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has a single required parameter 'city' with 0% schema description coverage. The description adds examples ('e.g. 'London', 'New York', 'Tokyo''), clarifying the expected format. This provides meaningful context beyond the bare type definition.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states it provides 'Current weather and 6-hour forecast for any city.' This is a specific verb+resource combination, and it distinguishes itself from sibling tools (e.g., get_exchange_rates) as the only weather-related tool on the server.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description notes 'No API key needed,' which provides contextual guidance. While there is no explicit when-to-use or alternative exclusion, the tool's unique purpose among siblings makes such guidance less critical. The statement is clear and helpful.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
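A call to get_weather follows the same pattern. The helper below is a hypothetical client-side sketch: it builds the MCP `tools/call` payload (standard JSON-RPC 2.0 framing) and rejects an empty city string before sending, since the tool documents no behavior for blank input. The function name and validation are illustrative, not part of the server's API.

```python
import json

def build_weather_call(city: str, request_id: int = 1) -> str:
    """Build a hypothetical MCP tools/call payload for get_weather.

    'city' is free-form text per the tool description,
    e.g. 'London', 'New York', 'Tokyo'.
    """
    if not city.strip():
        raise ValueError("city must be a non-empty string")
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "get_weather", "arguments": {"city": city}},
    })

print(build_weather_call("Tokyo"))
```

The empty-string guard is a client-side convenience only; the server's actual handling of invalid cities is not documented on this page.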

get_whale_moves (A)

On-chain whale movements and exchange flows. Tracks large BTC/ETH transfers, exchange inflows/outflows, and accumulation patterns.

Parameters (JSON Schema): no parameters.

Output Schema (JSON Schema): result (required).
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose behavioral details such as update frequency, time range, real-time vs. historical nature, or any required permissions. This leaves ambiguity about the tool's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the core purpose, and provides specific details without any unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a no-parameter tool with an output schema, the description is largely adequate but could be improved by briefly noting output format or update behavior. Overall, sufficient for selecting the tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With zero parameters, schema coverage is trivially 100%. The description adds no parameter information, but none is needed; the baseline score of 4 applies per the guidelines.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool tracks on-chain whale movements and exchange flows, specifically large BTC/ETH transfers and exchange patterns. It distinguishes itself from sibling tools that cover different domains like exchange rates, market intel, etc.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly indicates when to use (for whale movement data), but lacks explicit guidance on when not to use or alternatives. Since no sibling tool overlaps in functionality, this is a minor gap.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrape_url (A)

Extract clean readable text from any URL. No API key needed. Returns title, author, publish date, and full body text. Args: url: Full URL to scrape (must start with https://)

Parameters (JSON Schema): url (required; no description or default given).

Output Schema (JSON Schema): result (required).
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It discloses that no API key is needed and lists return fields, but lacks details on handling dynamic content, rate limits, or anti-bot measures, which are relevant for a scraper.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with two sentences and a bulleted arg line. It front-loads the main action and immediately provides key details, earning its place without excess.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple one-parameter tool and the presence of an output schema, the description adequately covers expected behavior. It could mention error handling or limitations, but it is complete enough for the use case.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds critical meaning to the 'url' parameter beyond the schema by specifying 'Full URL to scrape (must start with https://).' This compensates fully for the 0% schema description coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Extract', the resource 'URL', and the output 'clean readable text' with specific fields. It distinguishes from siblings like web_search and get_news which are for searching, not direct URL extraction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description specifies 'No API key needed' and requires the URL to start with 'https://', giving clear context for use. However, it does not explicitly say when not to use the tool or name alternatives, though the sibling tools serve clearly different purposes.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
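The one constraint scrape_url does document, that the URL "must start with https://", can be enforced client-side before the call is ever made. The sketch below is hypothetical: the builder function is illustrative, while the payload follows the standard MCP JSON-RPC 2.0 `tools/call` framing.

```python
import json

def build_scrape_call(url: str, request_id: int = 1) -> dict:
    """Build a hypothetical MCP tools/call payload for scrape_url.

    Enforces the documented constraint that the URL must start
    with https:// before constructing the request.
    """
    if not url.startswith("https://"):
        raise ValueError("scrape_url requires a full https:// URL")
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "scrape_url", "arguments": {"url": url}},
    }

print(json.dumps(build_scrape_call("https://example.com/article")))
```

Rejecting plain http:// URLs locally saves a round trip, since the tool description gives no indication the server would accept them.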
