Glama

Server Details

Crypto signal intelligence: 20 assets, 6 dimensions, regime detection, portfolio optimizer

Status: Healthy
Last Tested
Transport: Streamable HTTP
URL
Repository: manavaga/web3-signals-mcp
GitHub Stars: 0
Server Listing
Web3 Signals — Crypto Signal Intelligence

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

10 tools
compare_assets (A)
Read-only

Which crypto should I buy — BTC, ETH, or SOL? Compare 2-5 cryptocurrencies ranked by signal strength. Input: comma-separated tickers (e.g. 'BTC,ETH,SOL'). Returns ranked comparison with scores, direction, and verdict. Use for portfolio allocation decisions.

Parameters (JSON Schema)
- assets (required)

Output Schema (JSON Schema)
- result (required)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With readOnlyHint=true in annotations, the description adds value by disclosing output structure ('scores, direction, and verdict') and the ranking methodology ('signal strength'). However, it omits what 'signal strength' or 'verdict' specifically entail, and doesn't mention data freshness or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four well-structured sentences with zero redundancy. Front-loads the use-case example, followed by core functionality, input specification, and usage context. The conversational opening earns its place by immediately clarifying the comparative nature of the tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a single-parameter tool with an output schema. Covers purpose, input syntax, output preview, and use-case context. Could be improved by defining 'signal strength' or mentioning data sources, but sufficient given the structured output schema handles detailed return values.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Excellent compensation for 0% schema description coverage on the single 'assets' parameter. The description specifies the format ('comma-separated tickers'), provides a concrete example ('BTC,ETH,SOL'), and notes quantity constraints ('2-5'). Minor gap: doesn't specify case sensitivity or spacing rules.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool compares and ranks 2-5 cryptocurrencies by signal strength, distinguishing it from single-asset siblings like get_asset_signal. The opening question format ('Which crypto should I buy') is slightly conversational but quickly resolves into a specific functional statement ('Compare 2-5 cryptocurrencies...').

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit usage context ('Use for portfolio allocation decisions') and input constraints ('2-5 cryptocurrencies'), but lacks explicit when-not-to-use guidance or named alternatives from the sibling list (e.g., when to use get_asset_signal vs this tool).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
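
The input convention the description specifies (comma-separated tickers, 2-5 assets) is easy to get wrong when starting from a list of symbols. Below is a minimal client-side sketch that builds the single `assets` string shown in the schema above; the helper name and the upper-casing are illustrative assumptions, since the description does not state case or spacing rules:

```python
def build_compare_assets_args(tickers):
    """Normalize a list of tickers into the comma-separated 'assets'
    string that compare_assets expects, enforcing the 2-5 limit."""
    symbols = [t.strip().upper() for t in tickers if t.strip()]
    if not 2 <= len(symbols) <= 5:
        raise ValueError("compare_assets accepts 2-5 tickers")
    return {"assets": ",".join(symbols)}
```

For example, `build_compare_assets_args(["btc", "eth", "sol"])` yields the `'BTC,ETH,SOL'` form used in the description's own example.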

get_all_signals (A)
Read-only

Get buy/sell signals for all 20 major cryptocurrencies including Bitcoin, Ethereum, Solana, and more. Returns a 0-100 composite score and direction (bullish/bearish/neutral) for each asset, plus portfolio summary with market regime and risk level. Updated every 15 minutes. For full dimension breakdown and AI insights, use the paid REST API.

Parameters (JSON Schema)
- none

Output Schema (JSON Schema)
- result (required)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds valuable behavioral context beyond the readOnlyHint annotation: discloses 15-minute update frequency (critical for trading data), and details return structure (0-100 score, direction enum, portfolio summary). Does not mention rate limits or caching behavior.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four well-structured sentences with zero waste: action/scope, return format, update frequency, and usage limitations. Front-loaded with verb. Examples (Bitcoin, Ethereum) effectively illustrate scope without verbosity.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive for a zero-parameter, read-only tool with existing output schema. Covers purpose, return value summary, data freshness, and upgrade path. No gaps given the tool's simplicity.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has zero parameters, establishing baseline 4. The description implicitly confirms the bulk nature of the endpoint ('all 20 major'), clarifying that no filtering parameters exist, which adds necessary semantic context.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb ('Get') and resource ('buy/sell signals for all 20 major cryptocurrencies'). Scope is well-defined ('all 20 major'), distinguishing it from sibling 'get_asset_signal' which likely retrieves individual asset data.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit limitation guidance ('For full dimension breakdown... use the paid REST API'), indicating when not to use the tool. Could be improved by explicitly contrasting with sibling 'get_asset_signal' for single-asset lookups.

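
A caller usually post-filters this bulk result. The sketch below assumes, for illustration only, that each entry exposes `asset`, `score` (0-100), and `direction` fields as the description summarizes; the actual field names come from the tool's output schema. The default cutoff of 62 reuses the buy threshold documented by get_market_briefing:

```python
def bullish_assets(signals, min_score=62):
    """Return the tickers from a get_all_signals result that are both
    marked bullish and at or above the score cutoff.
    Field names ('asset', 'score', 'direction') are assumed."""
    return [s["asset"] for s in signals
            if s["direction"] == "bullish" and s["score"] >= min_score]
```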
get_analytics (A)
Read-only

Who is using AgentMarketSignal? See API usage statistics including total requests, unique clients, response times, breakdowns by endpoint and client type (AI agents, browsers, scripts). Useful for understanding adoption.

Parameters (JSON Schema)
- days (optional)

Output Schema (JSON Schema)
- result (required)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true, so description appropriately focuses on data content rather than safety. It adds valuable behavioral context by specifying breakdown dimensions (by endpoint and client type: AI agents, browsers, scripts) that annotations don't cover.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: rhetorical hook establishes scope, second sentence lists specific metrics with parenthetical examples, third states business purpose. Front-loaded with concrete details.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given simple schema (1 param, no nesting), existing output schema, and readOnly annotation, description adequately covers return content. Minor gap: doesn't explain temporal scope controlled by the undocumented 'days' parameter.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage for the 'days' parameter. Description fails to compensate by not mentioning the time window, date range, or what the 'days' parameter controls at all. Relies entirely on parameter name for meaning.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'See' with resource 'API usage statistics' and enumerates specific metrics (total requests, unique clients, response times, endpoint/client breakdowns). It clearly distinguishes from asset-focused siblings by focusing on 'Who is using AgentMarketSignal' (usage analytics vs market data).

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

States explicit use case: 'Useful for understanding adoption.' While it doesn't explicitly name alternatives, the domain (API usage stats) clearly differentiates it from siblings like get_asset_performance or get_crypto_price. Lacks explicit 'when-not-to-use' guidance.

get_asset_performance (A)
Read-only

How accurate are the signals for BTC specifically? Get per-asset accuracy metrics for any cryptocurrency. Returns 30-day rolling accuracy, total signals evaluated, and comparison to overall accuracy.

Parameters (JSON Schema)
- asset (required)

Output Schema (JSON Schema)
- result (required)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true, confirming safe read operations. The description adds valuable behavioral context by disclosing what the tool returns (30-day rolling accuracy, total signals evaluated, comparison to overall accuracy), helping the agent understand the data structure even though an output schema exists.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences efficiently convey purpose and output. The opening rhetorical question ('How accurate are the signals for BTC specifically?') is slightly informal but effectively illustrates usage. Core action ('Get per-asset accuracy metrics') is front-loaded in the second sentence.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the single string parameter and presence of output schema, the description is appropriately complete. It covers the tool's function, example input (BTC), and return value summary. No significant gaps remain for this level of complexity, though explicit parameter documentation would improve it.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage for the 'asset' parameter, the description partially compensates by providing 'BTC' as an example and mentioning 'any cryptocurrency,' implying ticker symbols. However, it lacks explicit parameter documentation (e.g., expected format, valid values) needed to fully offset the empty schema.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Get per-asset accuracy metrics for any cryptocurrency' with specific scope (30-day rolling accuracy). It distinguishes from siblings like get_performance (general) and get_asset_signal (current signals) by emphasizing 'per-asset accuracy metrics,' though it doesn't explicitly name these alternatives.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage through the 'BTC specifically' example and 'per-asset' phrasing, suggesting use when analyzing individual cryptocurrency accuracy. However, it lacks explicit when-to-use guidance or named alternatives (e.g., when to use compare_assets or get_performance instead).

get_asset_signal (A)
Read-only

Is BTC bullish or bearish right now? Get a 0-100 buy/sell score for any cryptocurrency. Returns composite score, direction (bullish/bearish/neutral), signal label (STRONG BUY to STRONG SELL), and momentum. Supports: BTC, ETH, SOL, BNB, XRP, ADA, AVAX, DOT, MATIC, LINK, UNI, ATOM, LTC, FIL, NEAR, APT, ARB, OP, INJ, SUI. For the full 6-dimension breakdown with whale, technical, and derivatives analysis, use the paid REST API.

Parameters (JSON Schema)
- asset (required)

Output Schema (JSON Schema)
- result (required)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true, confirming the safe read nature. The description adds valuable behavioral context: it details the return structure (composite score, direction labels from STRONG BUY to STRONG SELL), lists 20 supported asset tickers, and clarifies the scope (momentum signal vs. deep analytics). No contradictions with annotations.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with a hook question, functional statement, return value specification, supported asset list, and scope limitation. Every sentence conveys distinct information without redundancy, appropriate for the tool's complexity.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema, the description appropriately focuses on high-level signal methodology rather than field-by-field documentation. It comprehensively covers inputs (asset list), outputs (score types), and operational constraints (read-only, limited dimensions) sufficient for agent selection.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage (the 'asset' property lacks a description field), the description compensates effectively by enumerating valid values: 'Supports: BTC, ETH, SOL...' This gives the agent concrete examples of what the asset parameter accepts, though it could explicitly state these are the valid ticker values for the parameter.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides a '0-100 buy/sell score' and identifies the resource as cryptocurrency signals. It distinguishes itself from siblings by specifying it returns a 'composite score' with direction and momentum, contrasting with the 'full 6-dimension breakdown' mentioned as an external alternative.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It provides explicit guidance on limitations: 'For the full 6-dimension breakdown with whale, technical, and derivatives analysis, use the paid REST API.' This clarifies when NOT to use this tool. However, it misses explicit differentiation from server siblings like get_crypto_price or get_asset_performance.

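
Because the 'asset' property carries no schema description, the 20 tickers enumerated in the tool description are the only ground truth for valid input. A sketch that validates against that list before calling; the upper-casing is an assumption, since the description does not specify case sensitivity:

```python
# The 20 tickers listed in the get_asset_signal description.
SUPPORTED_ASSETS = {
    "BTC", "ETH", "SOL", "BNB", "XRP", "ADA", "AVAX", "DOT", "MATIC", "LINK",
    "UNI", "ATOM", "LTC", "FIL", "NEAR", "APT", "ARB", "OP", "INJ", "SUI",
}

def asset_signal_args(ticker):
    """Validate a ticker against the documented asset list and build
    the get_asset_signal arguments object."""
    symbol = ticker.strip().upper()
    if symbol not in SUPPORTED_ASSETS:
        raise ValueError(f"unsupported asset: {ticker!r}")
    return {"asset": symbol}
```

The same check applies to get_asset_performance and get_crypto_price, which take the same single-ticker parameter.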
get_crypto_price (A)
Read-only

What is the current price of Bitcoin, Ethereum, or any major crypto? Returns the latest USD price, 24-hour price change percentage, trading volume, and market cap. Updated every 15 minutes from CoinGecko and Binance. Supports 20 assets: BTC, ETH, SOL, BNB, XRP, ADA, AVAX, DOT, MATIC, LINK, UNI, ATOM, LTC, FIL, NEAR, APT, ARB, OP, INJ, SUI. Example: get_crypto_price('BTC')

Parameters (JSON Schema)
- asset (required)

Output Schema (JSON Schema)
- result (required)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations declare readOnlyHint=true, the description adds valuable behavioral context not present in structured fields: data sources (CoinGecko and Binance), update frequency (every 15 minutes), and a preview of return values. No contradictions with annotations.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Every sentence earns its place: opening question establishes purpose, second sentence details return payload, third provides data provenance/freshness, fourth lists valid inputs, fifth gives usage example. Well front-loaded and appropriately sized for the complexity.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter read operation with output schema present, the description is comprehensive. It covers purpose, return structure, data sources, valid parameter values, and usage example. No gaps remain given the tool's simplicity.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by enumerating all 20 valid asset values and providing syntax example ('BTC'). This gives the agent complete semantic understanding of the 'asset' parameter despite the schema lacking descriptions or enums.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool retrieves current cryptocurrency prices with specific verbs ('get', 'returns') and identifies the exact resources (Bitcoin, Ethereum, major crypto). It distinguishes from siblings by specifying it returns USD price, 24-hour change, volume, and market cap—differentiating it from comparison, signal, or analytics tools.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context by listing the 20 supported assets (BTC, ETH, SOL, etc.) and giving a concrete example ('get_crypto_price('BTC')'), which implicitly defines scope. However, it lacks explicit guidance on when to use siblings like compare_assets or get_asset_performance instead.

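
The description's example, get_crypto_price('BTC'), corresponds on the wire to a standard MCP tools/call request sent over the server's Streamable HTTP transport. A sketch of that JSON-RPC 2.0 payload (the request id is arbitrary):

```python
import json

def make_tool_call(name, arguments, request_id=1):
    """Build the JSON-RPC 2.0 'tools/call' request an MCP client
    sends to invoke a named tool with an arguments object."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# The description's example as a request body:
payload = make_tool_call("get_crypto_price", {"asset": "BTC"})
```

The same helper serves every tool on this server; only `name` and `arguments` change.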
get_health (A)
Read-only

Is AgentMarketSignal working? Check the real-time status of all 5 AI data pipelines (whale tracking, technical analysis, derivatives, narrative sentiment, market data) and the signal fusion engine. Returns last run times, durations, and any errors.

Parameters (JSON Schema)
- none

Output Schema (JSON Schema)
- result (required)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true, confirming safe read access. The description adds valuable behavioral context by specifying exactly which 5 pipelines are monitored and disclosing the return payload structure ('last run times, durations, and any errors'), which helps the agent understand what data to expect despite the output schema being present.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is optimally structured with two efficient sentences. The opening question immediately establishes purpose, followed by specific pipeline enumeration and return value disclosure. Zero redundancy—every clause provides actionable information.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (zero parameters), presence of an output schema, and readOnly annotation, the description is fully complete. It specifies the 5 monitored subsystems and the error reporting behavior, providing sufficient context for correct invocation without over-specifying.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema contains 0 parameters, establishing a baseline of 4. With no parameters to describe, the description appropriately focuses on operational semantics rather than parameter details, satisfying the requirements for this dimension.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Check') and resource ('real-time status of all 5 AI data pipelines') and explicitly names the components monitored (whale tracking, technical analysis, derivatives, narrative sentiment, market data, signal fusion engine). It clearly distinguishes from siblings like get_all_signals or get_crypto_price by focusing on operational health rather than data retrieval.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The opening question 'Is AgentMarketSignal working?' provides clear contextual guidance on when to invoke this tool (to verify system operational status). While it doesn't explicitly name alternatives, the framing makes the use case unambiguous compared to sibling data-fetching tools.

get_market_briefing (A)
Read-only

What should I buy or sell in crypto right now? Returns the top 3 buy and top 3 sell recommendations from 20 cryptocurrencies, plus market regime (trending/ranging), risk level, and momentum. Best starting point for portfolio decisions. Scores range 0-100: above 62 is a buy signal, below 38 is a sell signal.

Parameters (JSON Schema)
- none

Output Schema (JSON Schema)
- result (required)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Excellent addition beyond the readOnlyHint annotation: explains the scoring logic ('Scores range 0-100: above 62 is a buy signal, below 38 is a sell signal') and scope (20 cryptocurrencies), which is essential for interpreting results correctly. Does not contradict annotations.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four dense sentences with no waste. Opens with user-centric question, follows with return value specifics, usage positioning, and critical score interpretation thresholds. Slightly unconventional opening but efficiently structured.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the existence of an output schema, the description appropriately focuses on semantic interpretation (score thresholds, regime meanings) rather than structural return values. Adequate for a zero-parameter, read-only aggregation tool.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Zero parameters present, warranting baseline score of 4 per rubric. Schema coverage is trivially 100% with no properties to describe.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it returns 'top 3 buy and top 3 sell recommendations from 20 cryptocurrencies' plus market regime, risk, and momentum. It distinguishes itself from siblings via 'Best starting point for portfolio decisions,' positioning it as an overview tool vs. single-asset analysis (get_asset_signal) or comparisons (compare_assets).

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implied usage guidance ('Best starting point for portfolio decisions') indicating when to use it, but lacks explicit exclusions or named alternatives (e.g., does not specify when to use get_all_signals vs. this filtered briefing).

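
The score thresholds in the description (above 62 is a buy signal, below 38 is a sell signal) are the one piece of interpretation logic a caller must apply to raw 0-100 scores. A direct sketch of that mapping:

```python
def classify_score(score):
    """Map a 0-100 composite score to a signal using the thresholds
    the get_market_briefing description documents:
    >62 buy, <38 sell, otherwise neutral."""
    if not 0 <= score <= 100:
        raise ValueError("score must be in 0-100")
    if score > 62:
        return "buy"
    if score < 38:
        return "sell"
    return "neutral"
```

Note that the boundary values 38 and 62 themselves fall in the neutral band under this reading of "above" and "below".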
get_performance (A)
Read-only

How accurate are these crypto signals? Returns 30-day rolling accuracy metrics showing how often buy/sell predictions were correct. Includes overall accuracy percentage, reputation score (0-100), and breakdowns by asset and timeframe (24h/48h).

Parameters (JSON Schema)
- none

Output Schema (JSON Schema)
- result (required)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable context beyond the readOnlyHint annotation by specifying the rolling window (30-day), the reputation score scale (0-100), and available breakdown dimensions (by asset, 24h/48h). This helps the agent understand the data characteristics and temporal scope of the returned metrics.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences efficiently convey the tool's purpose and output structure. The opening question 'How accurate are these crypto signals?' is slightly unconventional but effectively frames the context. The subsequent sentences front-load the core metric type before detailing specific components.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no parameters, is read-only, and has an output schema, the description provides appropriate completeness by summarizing key output characteristics (rolling window duration, metric types, breakdown options) without needing to fully document the return structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With zero input parameters, the baseline score applies: the description correctly needs no parameter explanation, as confirmed by the trivially satisfied 100% schema description coverage and the empty properties object.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it returns '30-day rolling accuracy metrics' for crypto signals, specifying the resource (accuracy data) and scope (buy/sell predictions). However, it opens with a question format rather than a direct action verb, and does not explicitly differentiate from sibling 'get_asset_performance' despite mentioning 'breakdowns by asset'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by detailing the content (accuracy percentages, reputation scores, timeframe breakdowns), suggesting it should be used when evaluating signal reliability. However, it lacks explicit guidance on when to choose this over 'get_asset_performance' or 'get_analytics', or any prerequisites for use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_x402_stats (B)
Read-only

How much revenue has AgentMarketSignal generated? View x402 micropayment analytics including total paid calls, revenue in USDC, payment conversion rate, and daily payment timeline.

Parameters (JSON Schema)

Name | Required | Description | Default
days | No | — | —

Output Schema (JSON Schema)

Name | Required | Description
result | Yes | —
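The only input, `days`, is optional and undocumented in the schema; the review below notes it appears to control the lookback window and to default to 30 on the server. A minimal sketch of building the request, under those assumptions (the helper name is hypothetical):

```python
import json
from typing import Optional

def build_x402_stats_call(days: Optional[int] = None) -> dict:
    """Build an MCP-style tools/call request for get_x402_stats.

    'days' is assumed to be the lookback window; the schema marks it
    optional, and the server reportedly defaults it to 30 when omitted.
    """
    arguments = {} if days is None else {"days": days}
    return {
        "jsonrpc": "2.0",
        "id": 2,
        "method": "tools/call",
        "params": {"name": "get_x402_stats", "arguments": arguments},
    }

# Omit 'days' to accept the server default; pass it to narrow the window.
print(json.dumps(build_x402_stats_call(7)))
```

Omitting the key entirely, rather than sending `"days": null`, is the safer choice when a parameter's null-handling is undocumented.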
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true, and the description aligns with this by using 'View'. It adds value by enumerating specific metrics returned (total paid calls, revenue in USDC, conversion rate, daily timeline) beyond what annotations provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with minimal waste. The opening question format is slightly inefficient but quickly transitions to the concrete value proposition. Specific metrics are listed clearly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a single-parameter tool with an output schema, but incomplete because the 'days' parameter is undocumented. The description appropriately lists output metrics, since an output schema exists, but does not differentiate the tool from its siblings.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0% (no description for 'days' parameter), and the description fails to compensate by explaining the parameter. While it mentions 'daily payment timeline', it doesn't clarify that 'days' controls the lookback period or that it defaults to 30.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the resource (x402 micropayment analytics) and action (View), distinguishing it from the generic get_analytics sibling by specifying x402 micropayment data and AgentMarketSignal context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives like get_analytics or get_performance. The rhetorical question implies revenue-checking use cases but lacks explicit when/when-not instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
