Glama

Server Details

Real-time data feeds for AI agents with USDC micropayments on Base for premium tools.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: RipperMercs/terminalfeed
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.5/5 across 27 of 27 tools scored.

Server Coherence: A
Disambiguation: 4/5

Most tools have clearly distinct purposes (e.g., tf_btc_price vs tf_fear_greed), but there is some overlap between free and premium aggregated tools (e.g., tf_briefing, tf_premium_briefing, tf_premium_agent_context). However, descriptions explicitly differentiate them by content and cost.

Naming Consistency: 5/5

All tools follow a consistent pattern: 'tf_' prefix (with 'tf_premium_' for premium ones) and snake_case. Names are descriptive and predictable, e.g., tf_btc_price, tf_earthquakes, tf_premium_macro.

Tool Count: 3/5

27 tools is on the high side, but the server covers a broad domain (crypto, finance, earthquakes, AI trends, payment system, etc.). Each tool serves a specific purpose, so the count is borderline acceptable but feels slightly heavy.

Completeness: 4/5

The tool surface covers a wide range of data feeds: crypto, forex, macro indicators, earthquakes, HN, HuggingFace, prediction markets, payment system, and service status. Minor gaps exist (e.g., no dedicated stock prices tool beyond premium macro, no weather), but overall it's comprehensive for a terminal feed.

Available Tools

28 tools
tf_briefing: A

Fetches a real-time world-state snapshot composed from BTC ticker, Fear and Greed Index, recent earthquakes (USGS), top Hacker News story count, and ISS crew. Returns JSON. No auth required. Cache TTL 60s. Use when the agent needs a quick global pulse before deciding what to investigate.

Parameters (JSON Schema)
Name | Required | Description | Default

No parameters
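
As a quick orientation, here is a minimal sketch of calling a parameterless tool such as tf_briefing from an agent. The call_tool helper is hypothetical: it stands in for whatever your MCP client exposes over the server's Streamable HTTP transport, and the field names in the illustrative payload are assumptions based on the sources listed in the description.

```python
import json
from typing import Any


def call_tool(name: str, arguments: dict[str, Any] | None = None) -> dict:
    """Hypothetical stand-in for an MCP client's tool-call method.

    A real agent would route this through its MCP client over the
    server's Streamable HTTP transport; here it returns an illustrative
    payload shaped after the sources the tf_briefing description lists.
    """
    return {
        "btc": {"price_usd": 0.0},             # BTC ticker
        "fear_greed": {"value": 0},            # Fear and Greed Index
        "earthquakes": {"count": 0},           # recent USGS quakes
        "hackernews": {"top_story_count": 0},  # Hacker News front page
        "humans_in_space": {"count": 0},       # ISS crew
    }


snapshot = call_tool("tf_briefing")            # no arguments required
print(json.dumps(snapshot, indent=2))
```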

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description discloses key behaviors: no authentication needed, cache TTL, output format (JSON), and data composition. Minor missing details like error handling or exact data freshness, but strong overall.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences, front-loaded with purpose. No wasted words; every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and zero parameters, the description covers composition, output format, auth, and caching. Lacks details on JSON structure or potential delays but is sufficient for a lightweight briefing tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist, so schema coverage is 100% trivially. The description adds meaningful context about the tool's output (the list of data sources), which goes beyond the empty schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it fetches a composite real-time snapshot of multiple data sources (BTC, Fear & Greed, earthquakes, HN top story count, ISS crew), distinguishing it from sibling tools that focus on individual data streams.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit usage guidance: 'Use when the agent needs a quick global pulse before deciding what to investigate.' Also notes 'No auth required' and 60s cache TTL, helping the agent decide when to use this tool over alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tf_btc_price: A

Fetches the current Bitcoin price in USD with 24h change, high, low, and volume. Source: Binance with CoinCap fallback. Cache TTL 15s. No auth required. Use for crypto trading decisions or when the agent needs a fresh BTC quote.

Parameters (JSON Schema)
Name | Required | Description | Default

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses source (Binance with CoinCap fallback), cache TTL (15s), and no auth required. No annotations, so description carries full burden; lacks mention of rate limits but otherwise transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no wasted words. Front-loaded with action and key details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers essential aspects for a simple price fetcher: what it returns, source, cache, auth. No output schema, but description lists fields adequately.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist, so schema coverage is 100%. Description adds value by listing returned fields (price, change, high, low, volume) beyond schema, which is empty.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it fetches current Bitcoin price in USD with 24h change, high, low, and volume. Explicitly differentiates from siblings by specifying use case: crypto trading decisions or fresh BTC quote.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context for when to use: 'crypto trading decisions' or 'needs a fresh BTC quote.' Does not explicitly exclude or mention alternatives, but context is sufficient.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tf_crypto_movers: A

Fetches the top 15 cryptocurrencies sorted by 24h price change. Includes price, market cap, and percentage move. Source: CoinGecko/CoinLore. Cache TTL 30s. Use when the agent needs to surface notable crypto market moves.

Parameters (JSON Schema)
Name | Required | Description | Default

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Despite no annotations, the description discloses source (CoinGecko/CoinLore), cache TTL (30s), and data included. Implies read-only behavior, though could mention no side effects explicitly.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences. The action verb 'Fetches' is front-loaded. Every sentence adds value with no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a parameterless tool. Covers purpose, data, source, cache, and usage. Could optionally mention output format (e.g., JSON array), but not necessary.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema is empty, so parameters are not an issue. Description adds value by explaining the output fields and context, meeting the baseline for zero parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it fetches the top 15 cryptocurrencies sorted by 24h price change, including specific data fields. Distinguishes from siblings like tf_btc_price (single BTC price) and tf_fear_greed (sentiment index).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Use when the agent needs to surface notable crypto market moves.' Provides clear context but does not mention when not to use or list alternative tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tf_earthquakes: A

Fetches recent earthquakes magnitude 2.5 or greater from USGS. Returns count and most recent quake details (magnitude, place, time). Cache TTL 2min. Use for disaster monitoring or when the agent needs current seismic activity.

Parameters (JSON Schema)
Name | Required | Description | Default

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the behavioral burden. It discloses cache TTL (2min) which is valuable. However, it doesn't mention rate limits, error conditions, or data freshness guarantees beyond caching. For a simple read-only tool, this is adequate but not exhaustive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences, each serving a distinct purpose: purpose, return details, and usage/caching. It is front-loaded and contains no extraneous words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no parameters and no output schema, the description is remarkably complete. It specifies the data source (USGS), magnitude threshold (2.5+), return fields (count, most recent quake details with magnitude, place, time), caching behavior (2min TTL), and use cases. All essential context is provided.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are 0 parameters, and schema coverage is 100%. According to guidelines, baseline is 4. The description adds no parameter semantics because there are none, but it correctly indicates no input needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool fetches recent earthquakes from USGS with magnitude >=2.5. It specifies return details (count, most recent quake details). This is distinct from sibling data tools (e.g., tf_btc_price, tf_fear_greed) which focus on different domains.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'Use for disaster monitoring or when the agent needs current seismic activity,' providing clear context. It does not explicitly exclude alternatives, but given the specific domain, the usage is well-defined.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tf_economic_data: A

Fetches latest FRED economic indicators: Fed funds rate, CPI, unemployment rate, GDP growth. Cache TTL 1h. Use when the agent needs current US macro indicators. For deeper macro (treasury yields, forex, commodities, indices) use tf_premium_macro.

Parameters (JSON Schema)
Name | Required | Description | Default

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided; description covers caching behavior (TTL 1h) and implies read-only nature. Lacks details on return format but adequate for a zero-param fetch tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose, efficient use of words, no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Sufficient for a simple tool: covers indicators, caching, and usage context. No output schema needed; minor omission of return structure but not critical.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Zero parameters; schema coverage is 100% (none). Baseline of 4 applies; description implicitly reinforces no arguments needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool fetches specific FRED economic indicators (Fed funds rate, CPI, unemployment, GDP growth) and distinguishes from sibling tf_premium_macro by specifying scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says when to use ('needs current US macro indicators') and when to use alternative ('for deeper macro... use tf_premium_macro').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tf_fear_greed: A

Fetches the current Crypto Fear and Greed Index value (0-100) with classification label (Extreme Fear, Fear, Neutral, Greed, Extreme Greed). Source: Alternative.me. Cache TTL 5min. Use as a sentiment signal for crypto trading decisions.

Parameters (JSON Schema)
Name | Required | Description | Default

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It discloses the source (Alternative.me), cache TTL (5min), and output range/classification. This is sufficient for a read-only fetch with no side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no fluff: first describes what it fetches and returns, second gives source, cache TTL, and usage. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with no parameters and no output schema, the description covers the return value (0-100, classification), source, cache TTL, and intended use. No gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has no parameters, and schema description coverage is 100% (trivial). With 0 parameters, the description need not add parameter info. Baseline 4 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it fetches the current Crypto Fear and Greed Index with a classification label (0-100). It distinguishes itself from siblings like tf_btc_price or tf_premium_sentiment by specifying the exact index and its use as a sentiment signal.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'Use as a sentiment signal for crypto trading decisions.' This provides clear guidance on when to use, though it does not mention alternatives or when not to use, which is acceptable given the narrow focus.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tf_forex: A

Fetches current foreign exchange rates with USD as base for 16 major currencies (EUR, GBP, JPY, CAD, AUD, CHF, CNY, INR, MXN, BRL, KRW, SGD, HKD, SEK, NOK, NZD). Source: Frankfurter (ECB-based). Cache TTL 5min. Use for currency conversion or FX-aware decisions.

Parameters (JSON Schema)
Name | Required | Description | Default

No parameters
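
To make the "currency conversion" use case concrete, the sketch below converts a USD amount using USD-base rates. The rates dict holds made-up placeholder values, and the assumption that the response maps currency codes to rates is inferred from the description, not documented here.

```python
# Placeholder USD-base rates; the real tf_forex response shape is assumed.
rates = {"EUR": 0.92, "GBP": 0.79, "JPY": 150.3}


def convert_from_usd(amount_usd: float, currency: str) -> float:
    """Convert a USD amount into `currency` using USD-base rates."""
    return amount_usd * rates[currency]


print(round(convert_from_usd(250.0, "EUR"), 2))  # 230.0 with these placeholder rates
```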

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full behavioral burden. It discloses the read-only nature ('fetches'), cache behavior ('Cache TTL 5min'), and data source ('Frankfurter (ECB-based)'). This is adequate for a zero-parameter tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose, and every part adds value: action, scope, currencies, source, cache, usage. No redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema, the description fully explains what is returned (rates for 16 currencies), with source and cache details. For a parameterless tool, this is complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters, so baseline is 4. The description adds meaning by listing the 16 currencies and source, which compensates for lack of param info. No further detail needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description specifies the verb 'fetches', the resource 'foreign exchange rates', and details the scope ('USD as base for 16 major currencies') with an explicit list. It clearly distinguishes from siblings like tf_btc_price (crypto) and tf_economic_data (indicators).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description states 'Use for currency conversion or FX-aware decisions,' providing clear use cases. While no explicit when-not-to-use or alternatives are mentioned, the context is sufficient for a simple fetch tool with no parameters.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tf_harnesses: A

Returns a snapshot of public agentic-coding benchmark scores across SWE-bench Verified, Terminal-Bench, Aider Polyglot, and METR HCAST. Each row pairs a harness with a model. Same model can score very differently on different harnesses; that gap is the value-add. Pass ?view=summary for top 10 combined leaderboard plus biggest harness gaps; ?view=gaps for full per-model harness deltas; ?view=combined for normalized cross-benchmark ranking; ?view=raw (default) for the full benchmark/result graph. Source: hand-curated from upstream leaderboards (swebench.com, terminal-bench.org, aider.chat, metr.org). Cache TTL 12h. Use when the agent needs to recommend a harness/model combo or explain why two agents using the same model perform differently.

Parameters (JSON Schema)
Name | Required | Description | Default
view | No | Output shape; default raw
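
A small sketch of selecting a tf_harnesses view. Only the parameter name and its four values come from this page; call_tool is the same hypothetical client stand-in used in the tf_briefing example.

```python
VALID_VIEWS = {"raw", "summary", "gaps", "combined"}


def harness_snapshot(call_tool, view: str = "raw") -> dict:
    """Fetch the benchmark snapshot in one of the documented views."""
    if view not in VALID_VIEWS:
        raise ValueError(f"view must be one of {sorted(VALID_VIEWS)}")
    return call_tool("tf_harnesses", {"view": view})


# e.g. harness_snapshot(call_tool, "summary") returns the top-10 combined
# leaderboard plus the biggest harness gaps, per the description above.
```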
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Despite no annotations, the description discloses data source (hand-curated from upstream leaderboards), cache TTL (12h), and the nuance that same models score differently across harnesses. No side effects or auth needs, but these are implicitly read-only.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with purpose, then explains views, source, and cache. Every sentence adds value, though the middle section listing view options is slightly verbose but justified.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (1 parameter, no output schema), the description adequately covers what it returns, the data sources, and usage context. No significant gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% for the single parameter 'view', but the description adds meaningful context for each enum value (e.g., '?view=summary for top 10 combined leaderboard plus biggest harness gaps'), going beyond the bare enum names in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Returns' and the resource 'snapshot of public agentic-coding benchmark scores' with specific benchmarks listed, distinguishing it from sibling tools which cover different domains.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use: 'Use when the agent needs to recommend a harness/model combo or explain why two agents using the same model perform differently.' No exclusions are needed given the tool's specificity.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tf_payment_balance: A

GET endpoint that returns remaining credits for the bearer token in the Authorization header. Requires Authorization: Bearer tf_live_<64-char-hex>. Costs 0 credits. Use to monitor agent budget.

Parameters (JSON Schema)
Name | Required | Description | Default

No parameters
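
Since the balance check costs 0 credits, an agent can use it to gate premium calls. A hedged sketch follows, assuming the response exposes a "credits" field (not documented here); the Authorization: Bearer tf_live_<64-char-hex> header is supplied by whatever client layer carries the call.

```python
def has_budget_for(call_tool, needed_credits: int) -> bool:
    """Check remaining credits before a premium call (the check itself is free).

    `call_tool` is the hypothetical client stand-in from earlier; the
    "credits" response field name is an assumption.
    """
    balance = call_tool("tf_payment_balance")
    return int(balance.get("credits", 0)) >= needed_credits


# e.g. gate a 2-credit premium call:
# if has_budget_for(call_tool, 2): ... else: top up via tf_payment_buy_credits
```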

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Describes the HTTP method, auth requirement, cost, and that it returns remaining credits, providing key behavioral info beyond the absent annotations. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences with no fluff, front-loading the purpose and key details. Every sentence adds necessary context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple balance check: explains auth format, cost, and purpose. Lacks return format details, but 'remaining credits' is sufficient given no output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist in the schema; the description compensates by explaining the authorization token semantics and what the endpoint returns, adding value beyond the empty schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it is a GET endpoint returning remaining credits for the bearer token, specifying the exact resource and action. It distinguishes itself from sibling payment tools like buying or confirming credits.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states to use for monitoring agent budget and includes required authorization header and zero cost. Lacks explicit when-not-to-use guidance but is clear given sibling context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tf_payment_buy_credits: A

POST endpoint that returns the published USDC wallet address (0x549c82e6bfc54bdae9a2073744cbc2af5d1fc6d1 on Base mainnet), a unique memo, and a quote tying the dollar amount to credits at $1 USDC = 50 credits. Use as the first step when the agent needs to buy credits to access /api/pro/* endpoints.

Parameters (JSON Schema)
Name | Required | Description | Default
amount_usd | No | USDC amount to convert. Minimum $1 = 50 credits.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description bears full burden. Discloses it's a POST endpoint and output contents but does not mention side effects, expiration, or required next steps (e.g., confirmation).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose, no extraneous text. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple first-step tool but lacks details on return format, error handling, and workflow continuation. With no output schema, more context on output structure would help.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Adds value beyond schema by stating minimum amount ($1) and conversion rate (50 credits per USDC). Schema already documented parameter, but description provides essential context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool returns a USDC wallet address, memo, and quote for buying credits, and positions it as the first step. Distinguishes from sibling payment tools (balance, confirm, history).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'use as the first step when the agent needs to buy credits to access /api/pro/* endpoints.' Provides clear context but does not specify when not to use or mention alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tf_payment_confirm: A

POST endpoint that verifies an on-chain Base mainnet USDC transfer to the published wallet and returns a bearer token (tf_live_<64-char-hex>) plus credit count. Use after the agent has sent USDC, with the tx hash and the memo from tf_payment_buy_credits. The returned token is cross-redeemable on tensorfeed.ai.

Parameters (JSON Schema)
Name | Required | Description | Default
nonce | No | The memo string returned from tf_payment_buy_credits (optional but recommended).
tx_hash | No | On-chain Base mainnet USDC transaction hash.
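
The two payment tools form a flow: request a quote, send USDC on Base mainnet, then confirm. Below is a hedged sketch of that flow; the response field names ("address", "memo", "token") are assumptions, while the nonce/tx_hash arguments and the $1 USDC = 50 credits rate come from the descriptions and schema above. The on-chain transfer itself (send_usdc) is whatever wallet integration the agent has and is only a placeholder here.

```python
from typing import Callable


def buy_and_confirm(call_tool, send_usdc: Callable[..., str], amount_usd: float) -> str:
    """Sketch of the credit purchase flow; response field names are assumptions."""
    # Step 1: get the deposit address, unique memo, and credit quote.
    quote = call_tool("tf_payment_buy_credits", {"amount_usd": amount_usd})

    # Step 2: send USDC on Base mainnet to the quoted address (off-server,
    # via the agent's own wallet integration).
    tx_hash = send_usdc(quote["address"], amount_usd, memo=quote["memo"])

    # Step 3: confirm the transfer and receive the bearer token plus credits.
    receipt = call_tool("tf_payment_confirm", {
        "tx_hash": tx_hash,
        "nonce": quote["memo"],  # optional but recommended, per the schema
    })
    return receipt["token"]      # tf_live_<64-char-hex> bearer token
```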
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses that this is a POST endpoint, specifies the return format (bearer token and credit count), and mentions cross-redeemability. Missing details on failure modes or input validation, but overall adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences with no wasted words. First sentence states purpose and action, second provides usage context, third specifies return value and property.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 2 parameters and no output schema, the description covers purpose, usage, and returns sufficiently. It does not explain error handling or rate limits, but for a simple verification endpoint, this is reasonable.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema coverage is 100% with descriptions for both parameters. The description adds value by linking nonce to the memo from tf_payment_buy_credits and clarifying tx_hash as the On-chain Base mainnet USDC transaction hash, which goes beyond the schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: verifying an on-chain Base mainnet USDC transfer and returning a bearer token plus credit count. It also implicitly distinguishes from sibling tools like tf_payment_buy_credits.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly instructs when to use the tool: 'after the agent has sent USDC, with the tx hash and the memo from tf_payment_buy_credits.' This provides clear context and ties directly to a sibling tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tf_payment_history: A

GET endpoint that returns confirmed USDC purchases (tx_hash, amount_usd, credits_added, block_number, confirmed_at) plus current balance and totals for the bearer token. Requires Authorization: Bearer tf_live_<64-char-hex>. Costs 0 credits. Tokens minted before the ledger existed return current_balance with purchases: [].

Parameters (JSON Schema)
Name | Required | Description | Default

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so the description carries full burden. It discloses the need for specific auth, that it costs 0 credits, and the edge case for tokens minted before the ledger. It describes the response fields clearly, but does not mention rate limits or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with 'GET endpoint', and conveys all necessary information without fluff. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description covers response fields, auth, cost, and an edge case. It is adequate for a simple list endpoint, though it could mention idempotency or error handling.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are no parameters, so the description doesn't need to add parameter info. Baseline for 0 parameters is 4. The description clarifies what data is returned, which adds value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it's a GET endpoint returning confirmed USDC purchases (with specific fields) plus current balance and totals for the bearer token. It distinguishes itself from other tools like tf_payment_balance by focusing on purchase history.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description specifies authorization requirements (Bearer token) and that it costs 0 credits, but does not explicitly guide when to use this tool over sibling tools like tf_payment_balance or tf_payment_buy_credits. It only implies its purpose.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tf_predictions: A

Fetches active Polymarket prediction markets sorted by 24h volume. Each market includes question, outcomes, and volume. Cache TTL 60s. Use when the agent needs market-implied probabilities on world events (elections, sports, macro).

Parameters (JSON Schema)
Name | Required | Description | Default

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description discloses cache TTL of 60s and sorting by volume, which are key behavioral traits. It does not cover error handling or empty states, but for a simple read tool this is adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences, front-loaded with main action, no unnecessary words. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no parameters or output schema, the description fully explains the tool's output (question, outcomes, volume), caching, and ordering. Sufficient for an agent to understand and use the tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist, so description adds no parameter details. According to guidelines, zero parameters yields baseline score of 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool fetches active Polymarket prediction markets sorted by 24h volume, and mentions key fields like question, outcomes, and volume. It is distinct from siblings like tf_btc_price or tf_earthquakes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states to use when the agent needs market-implied probabilities on world events (elections, sports, macro), providing clear context and distinguishing from other data tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tf_premium_agent_context: A

The "always start here" premium call for autonomous agents. Composes 13 upstream sources into a curated world-state snapshot: BTC ticker, Fear and Greed, VIX, Fed funds rate, USD-base forex (EUR/JPY/GBP/CHF), HN front page top 5, significant earthquakes 24h, upcoming space launches, top Polymarket markets, and infrastructure status (GitHub, Cloudflare, OpenAI, Anthropic). Returns BOTH a structured JSON context object for parsers AND a pre-formatted system_prompt string (~350 tokens) the agent pastes verbatim into its LLM context. Saves the agent from making 13 separate calls and writing a formatter. Curation choice (which signals matter, how to compress them) is the moat. Costs 2 credits ($0.04 USDC). 5-min cache. Bearer auth required.

Parameters (JSON Schema)
Name | Required | Description | Default

No parameters
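
A short sketch of consuming the tf_premium_agent_context response. The description says the call returns both a structured JSON context object and a pre-formatted system_prompt string; the exact field name used below ("system_prompt") is an assumption drawn from that wording.

```python
def build_llm_messages(call_tool, user_task: str) -> list[dict]:
    """Start an agent turn with the curated world-state system prompt."""
    snapshot = call_tool("tf_premium_agent_context")  # costs 2 credits, 5-min cache
    return [
        {"role": "system", "content": snapshot["system_prompt"]},  # ~350 tokens
        {"role": "user", "content": user_task},
    ]
```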

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description fully covers behavioral traits. It discloses cost (2 credits, $0.04 USDC), 5-minute cache, bearer auth requirement, and that it returns both structured JSON and pre-formatted system_prompt. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is thorough but slightly verbose. It front-loads the key phrase 'always start here' and each sentence adds value (sources, output, benefit, cost, cache, auth). Could be tightened slightly but remains effective.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (13 sources), no output schema, and no annotations, the description is complete. It explains what is returned (JSON context and system_prompt), how the tool works, and all behavioral aspects (cost, cache, auth).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has zero parameters and 100% coverage. Description adds meaning by explaining the tool takes no input and produces both a JSON context and a system prompt string. Since there are no parameters, baseline is 4 and description meets it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it's the 'always start here' premium call that composes 13 upstream sources into a curated world-state snapshot. It lists specific sources (BTC ticker, Fear and Greed, VIX, etc.) and distinguishes from siblings by noting it saves agents from making 13 separate calls and writing a formatter.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'always start here' for autonomous agents, indicating primary use. It explains the benefit: avoids 13 separate calls and formatting. Also mentions cost (2 credits), cache (5-min), and auth requirement, providing clear guidance on when and how to use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tf_premium_briefing: A

Premium version of tf_briefing. Adds Polymarket prediction markets to the standard briefing payload, supports section filtering via ?include=, and supports ?history=24h for hourly BTC chart. Costs 1 credit ($0.02 USDC). Requires Authorization: Bearer tf_live_<64-char-hex>. Use when the agent needs prediction-market context or recent BTC trajectory in addition to the basic snapshot.

Parameters (JSON Schema)
Name | Required | Description | Default
history | No | When set to 24h, response includes a series.btc_24h hourly array (24 data points).
include | No | Comma-separated subset of sections: btc, fear-greed, earthquakes, hackernews, humans-in-space, predictions. Omit for all sections.
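
A sketch of combining the two parameters. The section names and the series.btc_24h behavior come from the schema above; call_tool is the hypothetical client stand-in used earlier.

```python
def premium_briefing(call_tool, sections: list[str], with_history: bool = False) -> dict:
    """Request selected sections, optionally with the 24h hourly BTC series."""
    args: dict[str, str] = {"include": ",".join(sections)}  # e.g. "btc,predictions"
    if with_history:
        args["history"] = "24h"  # adds series.btc_24h (24 hourly data points)
    return call_tool("tf_premium_briefing", args)            # costs 1 credit


# e.g. premium_briefing(call_tool, ["btc", "predictions"], with_history=True)
```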
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses cost of 1 credit, authorization requirement, and specific effects of parameters (e.g., ?history=24h adds hourly BTC array). No annotations provided, so description carries full burden.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no wasted words. First sentence introduces purpose and features, second gives usage guidance. Front-loaded with key information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, but description hints at augmented content (predictions and BTC chart). Could be more explicit about what changes from the basic briefing, but sufficient for a premium variant.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage, but description adds clarifying context: lists the exact sections for include and specifies that history adds a series.btc_24h array. Adds value beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it's a premium version of tf_briefing that adds Polymarket prediction markets, supports filtering via ?include=, and history via ?history=24h. Distinct from the sibling tf_briefing tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says when to use: when prediction-market context or recent BTC trajectory is needed. Also mentions cost and authorization, but does not explicitly state when not to use (though sibling tf_briefing is implied).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tf_premium_correlation_matrix: A

Pre-computed cross-asset correlation matrix for AI trading and portfolio agents. Returns 30-day Pearson correlations on daily simple returns for 6 assets: BTC, ETH, SOL (Coinbase candles), and SPY, QQQ, GLD (Stooq.com CSVs). Output includes both a pairs array (sorted by absolute r descending) and an NxN matrix object for easy lookup. Each pair tagged with relationship strength (negligible / weak / moderate / strong) and direction (positive / negative). Saves the agent from fetching 6 historical price series and running the covariance math. Costs 2 credits ($0.04 USDC). 30-min cache. Bearer auth required.

Parameters (JSON Schema)
Name | Required | Description | Default

No parameters
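
A sketch of reading the result, assuming the response carries a "pairs" array and a "matrix" object as the description states; the nested key names and sample values below are illustrative only.

```python
sample = {
    "matrix": {"BTC": {"ETH": 0.85, "SPY": 0.31}},  # illustrative values
    "pairs": [
        {"a": "BTC", "b": "ETH", "r": 0.85,
         "strength": "strong", "direction": "positive"},
    ],
}


def correlation(result: dict, a: str, b: str) -> float:
    """Look up the 30-day Pearson correlation between two assets."""
    return result["matrix"][a][b]


print(correlation(sample, "BTC", "ETH"))  # 0.85
```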

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It comprehensively covers output format (pairs array and matrix), relationship strength tags, cost (2 credits), cache duration (30-min), and authentication (bearer). All behavioral traits are transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured paragraph that front-loads the purpose and method, then details outputs, cost, cache, and auth. Every sentence adds value, and there is no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given there is no output schema, the description adequately describes the return format (pairs array and matrix with tags) and key metadata. The sibling tools are numerous but distinct, and the description provides complete context for the agent to invoke the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters with 100% coverage, so the description adds all necessary context. It explains what the tool returns without needing parameters, and the description compensates fully for the absence of parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool computes a cross-asset correlation matrix for AI trading and portfolio agents. It lists the specific assets (BTC, ETH, SOL, SPY, QQQ, GLD) and the method (30-day Pearson correlations on daily simple returns), distinguishing it from sibling tools like tf_premium_agent_context or tf_premium_briefing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains that the tool saves the agent from fetching 6 historical price series and computing covariance, implying its use when correlation data is needed. While it does not explicitly state when not to use or list alternatives, the context is clear enough for the agent to decide.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tf_premium_crypto_deep: A

Premium deep crypto snapshot. Includes top 50 coins by market cap with 1h/24h/7d change, Binance live ticker for top 20 USDT pairs by volume, Bitcoin network statistics from mempool.space (block height, fee tiers, hashrate, mempool size), and Ethereum gas oracle from Etherscan. Costs 2 credits ($0.04 USDC). Requires Authorization: Bearer tf_live_<64-char-hex>. Optional ?coins= filter and ?history=30d for daily BTC OHLCV. Use this instead of calling CoinGecko + Binance + mempool.space + Etherscan separately.

Parameters (JSON Schema)
Name | Required | Description | Default
coins | No | Comma-separated symbol filter, e.g. btc,eth,sol. Omit to get all top 50.
history | No | When set to 30d, response includes series.btc_30d with daily OHLCV candles.
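
A sketch of narrowing the snapshot to a few symbols and adding the 30-day BTC candles. Parameter names and values come from the schema above; call_tool is the hypothetical client stand-in used earlier.

```python
def crypto_snapshot(call_tool, symbols: list[str], with_candles: bool = False) -> dict:
    """Fetch the deep crypto snapshot for selected coins (costs 2 credits)."""
    args = {"coins": ",".join(s.lower() for s in symbols)}  # e.g. "btc,eth,sol"
    if with_candles:
        args["history"] = "30d"  # adds series.btc_30d daily OHLCV candles
    return call_tool("tf_premium_crypto_deep", args)
```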
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description adequately discloses behavioral traits: it costs 2 credits, requires Bearer token authorization, and has optional filtering. It does not mention rate limits or idempotency, but for a read-only snapshot tool, this level of disclosure is sufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured: first sentence covers purpose and content, then cost, authorization, optional parameters, and usage advice. Every sentence earns its place without redundancy or unnecessary detail.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While no output schema is provided, the description lists the data components (top 50 coins, Binance ticker, Bitcoin stats, Ethereum gas) and mentions the optional history parameter yields daily BTC OHLCV. This gives a reasonable expectation of the output, though the exact JSON structure is not described.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already provides full descriptions for both parameters (100% coverage). The description merely reiterates their existence and optionality without adding substantial new semantics beyond what the schema offers, meeting the baseline for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides a comprehensive crypto snapshot combining multiple data sources (top 50 coins, Binance ticker, Bitcoin stats, Ethereum gas). It distinguishes itself from siblings like tf_btc_price or tf_crypto_movers by explicitly noting it replaces separate calls to several APIs.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly advises when to use this tool: 'Use this instead of calling CoinGecko + Binance + mempool.space + Etherscan separately.' It also covers authorization requirements and cost, giving clear guidance on usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tf_premium_defi_tvlAInspect

Composed DeFi total-value-locked snapshot for crypto research and trading agents. Returns top 50 protocols by TVL (each with 1h/24h/7d change, category, chain, market cap, FDV), top 15 chains by TVL, by-category aggregate, and biggest gainers/losers over 24h and 7d windows. Source: DefiLlama free public API. Costs 2 credits ($0.04 USDC). 30-min cache. Bearer auth required.
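
For orientation, the kind of aggregation this tool performs could be approximated client-side against DefiLlama's free API. The endpoint and field names below (a protocols list with tvl / change_1d / category) are assumptions based on the description, not the tool's actual implementation.

```python
import requests

# Assumed DefiLlama endpoint; the premium tool composes data like this for you.
protocols = requests.get("https://api.llama.fi/protocols", timeout=15).json()

# Top 50 protocols by TVL (field names are assumptions).
top50 = sorted(protocols, key=lambda p: p.get("tvl") or 0, reverse=True)[:50]

# Biggest 24h losers and gainers among the top 50.
movers = sorted(top50, key=lambda p: p.get("change_1d") or 0)
losers, gainers = movers[:5], movers[-5:]

# By-category TVL aggregate.
by_category: dict[str, float] = {}
for p in top50:
    cat = p.get("category") or "Unknown"
    by_category[cat] = by_category.get(cat, 0.0) + (p.get("tvl") or 0.0)
```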

ParametersJSON Schema
NameRequiredDescriptionDefault

No parameters

Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description carries the full burden. It discloses the cost (2 credits), cache duration (30 min), source (DefiLlama), and auth requirement (Bearer). It lacks discussion of error handling or rate limits, but the provided details are sufficient for agent decision-making.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is a single paragraph that efficiently lists the output components and key behavioral traits. It is front-loaded with purpose and data descriptions, but slightly dense.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema and 0 parameters, the description covers return data, cost, cache, and auth. It is complete enough for an agent to invoke the tool, though it could mention response format (e.g., JSON).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters, with 100% coverage by definition. The description adds no parameter information, but since there are no parameters, the baseline score of 4 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns a 'DeFi total-value-locked snapshot' with specific data components (top 50 protocols, chains, aggregates). It distinguishes itself from siblings like tf_btc_price and tf_crypto_movers by focusing exclusively on DeFi TVL.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for DeFi TVL queries but does not explicitly state when not to use or provide alternatives. However, with zero parameters and a clear data scope, the intended use case is unambiguous.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tf_premium_exchange_flowsAInspect

Tracks ETH transfers in/out of major centralized exchange hot wallets (Binance, Coinbase, OKX, Kraken, Bybit, Crypto.com, KuCoin) in the last 3 blocks. Each transfer tagged as inflow (user -> exchange, often precedes selling), outflow (exchange -> user, often HODL withdrawal), or inter_exchange. Aggregated per-exchange and globally with net_eth, net_usd, and bias label (inflow_dominant / outflow_dominant / balanced). Threshold 5 ETH minimum (~$11K at $2300/ETH) to filter retail noise. Useful for trading bots detecting regime shifts: sustained large net inflow signals selling pressure ahead, sustained outflow signals accumulation. Pair with tf_premium_whales for context. Costs 2 credits ($0.04 USDC). 5-min cache. Bearer auth required. v1 covers ETH only; BTC requires a labeled-address dataset.
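
The tagging logic described above can be pictured with a small sketch. The address set and the tolerance band here are illustrative stand-ins, not the tool's actual wallet list or internals; only the 5 ETH threshold and the label names come from the description.

```python
# Illustrative only: classify a transfer relative to known exchange hot wallets.
EXCHANGE_WALLETS = {"0xbinance_placeholder", "0xcoinbase_placeholder"}  # stand-in addresses
MIN_ETH = 5.0  # threshold from the description, to filter retail noise

def classify(sender: str, receiver: str, value_eth: float) -> str | None:
    if value_eth < MIN_ETH:
        return None  # below threshold, ignored
    sender_is_cex = sender in EXCHANGE_WALLETS
    receiver_is_cex = receiver in EXCHANGE_WALLETS
    if sender_is_cex and receiver_is_cex:
        return "inter_exchange"
    if receiver_is_cex:
        return "inflow"   # user -> exchange, often precedes selling
    if sender_is_cex:
        return "outflow"  # exchange -> user, often a HODL withdrawal
    return None

def bias(net_eth: float, tolerance: float = 50.0) -> str:
    # tolerance is an invented example value, not the tool's cutoff
    if net_eth > tolerance:
        return "inflow_dominant"
    if net_eth < -tolerance:
        return "outflow_dominant"
    return "balanced"
```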

ParametersJSON Schema
NameRequiredDescriptionDefault

No parameters

Behavior5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description covers behavior in detail: it tracks the last 3 blocks, tags transfers as inflow/outflow/inter_exchange, aggregates per-exchange and globally, and outputs net_eth, net_usd, and a bias label. It also mentions threshold filtering, cache, and cost. No annotations are provided, but the description covers all behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is dense but well-structured and front-loaded with the main purpose. Every sentence earns its place, though it could benefit from being broken into shorter sentences or bullet points for readability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description fully explains the return values (net_eth, net_usd, bias label, per-exchange and global aggregates). It covers the threshold, cost, cache, auth, limitations (ETH only), and pairing with a sibling tool. Complete for a data retrieval tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has no parameters (an empty object), and the baseline for zero parameters is 4. The description adds value by explaining what data is returned without parameters, so no parameter details are needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it tracks ETH transfers in/out of major exchange hot wallets, tagging them as inflow/outflow/inter_exchange, aggregating with net values and bias labels. It distinguishes itself from sibling tools like tf_premium_whales (BTC whale tracking) by specifying ETH focus and pairing suggestion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says when to use the tool (trading bots detecting regime shifts) and suggests pairing with tf_premium_whales for context. It also mentions the threshold (5 ETH), cost (2 credits), cache (5-min), auth requirement (Bearer), and that v1 covers only ETH. No alternative usage guidance is needed.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tf_premium_github_velocityAInspect

Composed GitHub developer-attention snapshot. Returns top 30 repos created in the last 7 days sorted by stars (with stars-per-day, language, topics, license, owner type, AI/ML focus flag), top 15 AI/ML-focused active repos (topic:llm with commits in the last 30 days), language and topic aggregates, and the AI/ML share of trending. Source: GitHub Search API. Costs 2 credits ($0.04 USDC). 30-min cache. Bearer auth required.
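
The underlying GitHub Search API query is roughly reproducible. Below is a hedged sketch of the "new repos in the last 7 days, sorted by stars" half of the snapshot; the premium tool adds the AI/ML breakdown and aggregates on top.

```python
from datetime import datetime, timedelta, timezone
import requests

cutoff = (datetime.now(timezone.utc) - timedelta(days=7)).strftime("%Y-%m-%d")
resp = requests.get(
    "https://api.github.com/search/repositories",
    params={"q": f"created:>{cutoff}", "sort": "stars", "order": "desc", "per_page": 30},
    headers={"Accept": "application/vnd.github+json"},
    timeout=15,
)
resp.raise_for_status()

for repo in resp.json()["items"]:
    created = datetime.fromisoformat(repo["created_at"].replace("Z", "+00:00"))
    age_days = max((datetime.now(timezone.utc) - created).days, 1)
    stars_per_day = repo["stargazers_count"] / age_days  # attention-velocity metric
    print(repo["full_name"], repo["language"], round(stars_per_day, 1))
```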

ParametersJSON Schema
NameRequiredDescriptionDefault

No parameters

Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses cost (2 credits/$0.04), cache (30-min), auth (Bearer), and source (GitHub Search API). For a read-only snapshot tool without annotations, this provides good transparency, though error behavior or data staleness beyond cache are not mentioned.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a concise paragraph with no redundant sentences. Every sentence adds value: purpose, outputs, source, cost, cache, auth. It is optimally front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no parameters, no output schema, and no annotations, the description provides all necessary context: what is returned, the data source, cost, caching, and authentication. It is fully complete for an API call.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters, so the description fully compensates by detailing the exact output contents (repos, attributes, aggregates). It adds significant value beyond the empty schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it's a 'GitHub developer-attention snapshot' listing specific outputs: top 30 repos by stars with attributes, top 15 AI/ML repos, aggregates, and AI/ML share. It uniquely identifies the tool among siblings with no overlapping GitHub tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use or when-not-to-use guidance is provided. However, the description implicitly suggests it's for trending GitHub analysis, and no sibling alternative exists, so the lack of explicit guidelines is less critical but still a gap.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tf_premium_macroAInspect

Premium composed macroeconomic snapshot in one HTTP call. Includes 7 FRED economic series (Fed rate, CPI, unemployment, GDP growth, 10-year treasury), 4 USD-base forex pairs (EUR, JPY, GBP, CHF), gold via PAXG/Kraken, US market context (SPY, DIA, QQQ, VIX), and oil/natural gas via FRED. Costs 2 credits ($0.04 USDC). Requires Authorization: Bearer tf_live_<64-char-hex>. Optional ?history=30d adds 30-day historical series. Use this instead of calling 14 different upstream APIs separately.

ParametersJSON Schema
Name | Required | Description | Default
history | No | When set to 30d, FRED entries include a series array of 30 daily observations and forex.series is populated. | (none)
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description bears full burden. It discloses the cost (2 credits), authorization requirement (Bearer token), and the effect of the optional history parameter. It does not mention rate limits or response format, but the behavioral traits disclosed are sufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is four sentences, front-loaded with the tool's purpose, followed by concise details on included data, cost, auth, and optional parameter. Every sentence adds value with no redundancy or wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (many data series) and lack of output schema, the description is fairly complete. It lists all included data, cost, auth, and the optional history parameter. It does not describe the return format, but the level of detail is high considering the scope.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema covers 100% of the single parameter (history) with a brief description. The tool description adds meaningful context: 'Optional ?history=30d adds 30-day historical series' and explains the effect on FRED entries and forex.series. This exceeds the baseline of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states it is a 'Premium composed macroeconomic snapshot in one HTTP call' and lists all included data sources (FRED series, forex, gold, US market, oil/gas). It distinguishes from siblings by advising 'Use this instead of calling 14 different upstream APIs separately.'

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description clearly indicates when to use the tool (comprehensive macro snapshot) and provides important context like cost (2 credits) and authorization. It does not explicitly list exclusions or named alternatives, but the guidance to 'use this instead of calling 14 different APIs' implies its role.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tf_premium_sentimentAInspect

Premium composite sentiment snapshot. Aggregates Crypto Fear and Greed Index (alternative.me), VIX volatility index (Finnhub), trending ticker mentions across Hacker News top 30 + Reddit r/CryptoCurrency / r/wallstreetbets / r/stocks hot posts with per-headline keyword-based sentiment scoring, and top Polymarket prediction-market signals. Output includes per-ticker mention_count_24h, sentiment_score (-1 to +1), sentiment_label, and sample headlines. Use to gauge market mood before a trading or research decision. Costs 2 credits ($0.04 USDC). Requires Authorization: Bearer tf_live_<64-char-hex>. The notes field documents that scoring is keyword-based (crude but signal-bearing), not LLM-derived; treat as one input to a broader analysis.
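
Since the notes field stresses that scoring is keyword-based rather than LLM-derived, a tiny illustration of what such scoring typically looks like may help calibrate expectations. The word lists and the label cutoffs are invented for the example and are not the tool's actual lexicon.

```python
# Illustrative keyword scorer mapping a headline to a score in [-1, +1].
BULLISH = {"surge", "rally", "record", "breakout", "adoption"}
BEARISH = {"crash", "dump", "hack", "lawsuit", "selloff"}

def score_headline(headline: str) -> float:
    words = set(headline.lower().split())
    pos = len(words & BULLISH)
    neg = len(words & BEARISH)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def label(score: float) -> str:
    if score > 0.2:
        return "bullish"
    if score < -0.2:
        return "bearish"
    return "neutral"
```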

ParametersJSON Schema
NameRequiredDescriptionDefault

No parameters

Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses cost (2 credits/$0.04 USDC), required authorization (Bearer token), and notes that scoring is keyword-based and not LLM-derived, which is important behavioral context. However, it does not mention potential latency, rate limits, or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is well-structured with clear purpose upfront, followed by details on sources, output, usage, cost, auth, and notes. Each sentence adds value, though it could be slightly more concise without losing information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description thoroughly explains return values: per-ticker mention_count_24h, sentiment_score, sentiment_label, and sample headlines. It also covers cost, authorization, and the nature of the scoring. It is complete for the agent to understand what the tool does and what to expect.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters (empty properties, not required), so baseline is 4. The description adds value by explaining what the tool outputs (per-ticker fields, sentiment score, sample headlines) without needing to explain parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool provides a 'Premium composite sentiment snapshot' aggregating multiple sources (Crypto Fear and Greed Index, VIX, social media, Polymarket). It lists specific output fields and distinguishes itself from sibling tools in the same server, which are focused on individual metrics like price or economic data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description explicitly says 'Use to gauge market mood before a trading or research decision,' providing clear context for when to use. However, it does not mention when not to use the tool or compare with alternative sibling tools, which would improve the score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tf_premium_stablecoin_flowsAInspect

Composed stablecoin flow snapshot for crypto traders. Returns top 20 stablecoins by circulating supply, each with 24h/7d/30d net change in USD and percent, top chains breakdown, peg type, and current price. Aggregate includes total circulating, net inflow/outflow over 24h and 7d, growing-vs-shrinking count, and a bias label (growing / shrinking / balanced). Source: DefiLlama stablecoins API. Trading agents use stablecoin growth as a leading indicator for crypto buying power. Costs 2 credits ($0.04 USDC). 1-hour cache. Bearer auth required.
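
The growing-vs-shrinking count and bias label described above amount to a simple aggregation over per-coin supply changes. The sketch below is illustrative; the field name change_24h_usd is an assumption, not the tool's actual schema.

```python
def stablecoin_bias(coins: list[dict]) -> tuple[str, float]:
    """coins: list of dicts with a 'change_24h_usd' field (field name is an assumption)."""
    growing = sum(1 for c in coins if c.get("change_24h_usd", 0) > 0)
    shrinking = sum(1 for c in coins if c.get("change_24h_usd", 0) < 0)
    net = sum(c.get("change_24h_usd", 0) for c in coins)
    if growing > shrinking and net > 0:
        return "growing", net
    if shrinking > growing and net < 0:
        return "shrinking", net
    return "balanced", net
```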

ParametersJSON Schema
NameRequiredDescriptionDefault

No parameters

Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, but the description discloses the source (DefiLlama), cost (2 credits/$0.04), caching (1-hour), and auth (Bearer). It implies a read-only operation without stating so explicitly, but the absence of destructive-behavior warnings is acceptable for a data tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a clear first sentence overview, followed by details, use case, and meta-info. It is slightly verbose but every sentence adds value (e.g., cost, cache, auth). Could be trimmed slightly, but still effective.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema, the description fully enumerates what the snapshot contains: top 20 stablecoins with various metrics, aggregate totals, growing/shrinking count, bias label, and source. It covers all necessary details for an agent to use correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters, and the schema coverage is 100%. Baseline for no parameters is 4, and the description adds no parameter-specific information, which is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it provides a 'Composed stablecoin flow snapshot' with specific data elements like top 20 stablecoins, net changes, top chains, peg type, and price. It distinguishes itself from siblings (e.g., exchange flows, TVL) by focusing on stablecoins and listing unique metrics.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly mentions usage by trading agents, who treat stablecoin growth as a leading indicator for crypto buying power, along with cost, cache, and auth requirements. However, it does not provide explicit when-not-to-use guidance or alternative tool comparisons, though the context makes the intended use clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tf_premium_whalesAInspect

Tracks whale-sized on-chain transactions on Bitcoin and Ethereum. BTC: scans the last 30 unconfirmed mempool transactions via mempool.space and surfaces any with output >=50 BTC; tagged with USD-equivalent at current BTC spot. ETH: scans the latest confirmed block via Etherscan eth_getBlockByNumber and surfaces transfers >=100 ETH; tagged with USD-equivalent at current ETH spot. Each whale entry includes tx_hash, value in native and USD units, from/to addresses (ETH only), and explorer URL. Useful for trading bots watching for institutional flow signals (large exchange in/outflows, treasury moves, OTC settlements). Costs 2 credits ($0.04 USDC). 5-min cache. Bearer auth required.
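
The thresholds above translate into a simple filter. The sketch below uses invented transaction dicts to show the shape of the check; only the 50 BTC / 100 ETH cutoffs and the USD tagging at current spot come from the description.

```python
BTC_MIN, ETH_MIN = 50.0, 100.0  # thresholds from the description

def whale_entries(txs, chain: str, spot_usd: float):
    """txs: iterable of dicts with 'hash' and 'value' in native units (field names are illustrative)."""
    minimum = BTC_MIN if chain == "btc" else ETH_MIN
    for tx in txs:
        value = tx.get("value", 0.0)
        if value >= minimum:
            yield {
                "tx_hash": tx["hash"],
                "value_native": value,
                "value_usd": value * spot_usd,  # tagged at current spot, as described
                "chain": chain,
            }
```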

ParametersJSON Schema
NameRequiredDescriptionDefault

No parameters

Behavior5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Despite no annotations, the description fully details the behavior: BTC mempool scanning, ETH latest block scanning, thresholds, output fields, cost (2 credits), cache duration (5-min), and authentication requirements. This is comprehensive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections per chain and details on output, use case, cost, cache, and auth. It is slightly long but every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description explains the return fields (tx_hash, value, addresses, URL) and includes cost, cache, and auth details. It is fully complete for an agent to invoke the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters, so parameter semantics are not applicable. The description adds value by explaining the data sources and output, which compensates for the lack of parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool tracks whale-sized transactions on Bitcoin and Ethereum with specific thresholds and data sources. It distinguishes itself from sibling tools like tf_premium_exchange_flows by focusing on on-chain whale movements.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear use cases such as institutional flow signals, exchange in/outflows, and treasury moves. It does not explicitly exclude scenarios, but the context is sufficient for an agent to decide when to use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tf_premium_world_deltasAInspect

Premium event-stream endpoint for monitor agents. Aggregates time-stamped events from 4 sources into one time-sorted feed: USGS earthquakes M4.0+, Hacker News new stories via Algolia, recently updated Polymarket markets, and space launches in [-1h, +12h] window. Accepts ?since= (defaults 1h ago, clamped to 1h cache horizon). Each event has type, timestamp, severity, and structured data. Saves an agent from polling 5 separate upstream feeds and merging client-side. Costs 2 credits ($0.04 USDC). Bearer auth required. 1-hour rolling cache; sub-second when warm.
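
A hedged sketch of supplying the since parameter follows. Only the Bearer header, the ISO 8601 format, and the one-hour clamp come from the description; the endpoint path is a placeholder.

```python
from datetime import datetime, timedelta, timezone
import requests

TOKEN = "tf_live_" + "0" * 64  # placeholder key
# Anything older than one hour ago is clamped to the cache horizon anyway.
since = (datetime.now(timezone.utc) - timedelta(minutes=30)).isoformat()

resp = requests.get(
    "https://example-terminalfeed.invalid/premium/world-deltas",  # hypothetical path
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"since": since},
    timeout=10,
)
resp.raise_for_status()
events = resp.json()  # time-sorted events with type, timestamp, severity, and structured data
```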

ParametersJSON Schema
Name | Required | Description | Default
since | No | ISO 8601 timestamp. Returns events newer than this. Clamped to 1 hour ago if older. | 1 hour ago
Behavior5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description fully discloses behavioral details: aggregation of events, time-sorted feed, parameter defaults and clamping, caching behavior, cost (2 credits), and authentication requirement. This is comprehensive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single dense paragraph with no redundancy. It front-loads the purpose and efficiently covers all key aspects without wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given there is no output schema, the description adequately describes the event structure (type, timestamp, severity, structured data). It also covers caching, cost, and parameter behavior, making it complete for an event feed tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The single parameter 'since' is fully described in the schema. The description adds valuable context: default is 1 hour ago, and it is clamped to a 1-hour cache horizon. This extra meaning justifies a score above baseline 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it is a 'premium event-stream endpoint for monitor agents' and enumerates the four specific sources (USGS earthquakes, Hacker News, Polymarket, space launches). This specificity distinguishes it from sibling tools like tf_earthquakes or tf_premium_briefing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains that the tool 'saves an agent from polling 5 separate upstream feeds and merging client-side,' giving clear context for when to use it. It does not explicitly state when not to use it, but the context is sufficient.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tf_service_statusAInspect

Fetches operational status of major dev infrastructure (GitHub, Cloudflare, Discord, OpenAI, Vercel, npm, Reddit, Atlassian, Anthropic). Cache TTL 60s. Use when the agent needs to know if a dependency is up or to explain a recent outage.

ParametersJSON Schema
NameRequiredDescriptionDefault

No parameters

Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description carries the full burden. It discloses a behavioral trait: 'Cache TTL 60s,' which informs the agent that results are cached for 60 seconds. This adds value beyond what annotations would provide, indicating it's a read-only operation with temporary caching.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences: first lists the action and services, second provides cache TTL and usage guidance. Every sentence earns its place with no redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no parameters and no output schema, the description covers the core purpose, cache behavior, and usage context. It is adequate for the tool's simplicity, though it could optionally detail the status format (e.g., up/down with latency).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has no parameters (empty properties), so baseline is 4. The description does not need to add parameter info, and it doesn't, which is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it fetches operational status of major dev infrastructure, listing specific services (GitHub, Cloudflare, Discord, etc.). The verb 'Fetches' and resource 'operational status' are specific, and the tool is distinct from siblings which cover other topics like BTC price or economic data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'Use when the agent needs to know if a dependency is up or to explain a recent outage.' This provides clear context for when to use the tool, though it doesn't mention when not to use it or alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tf_solana_networkAInspect

Fetches live Solana mainnet network metrics: current transactions-per-second (most recent 60s sample), 3-sample average TPS, current slot, average slot time in ms, and epoch progress percentage. Source: solana-rpc.publicnode.com (getRecentPerformanceSamples + getSlot + getEpochInfo). Cache TTL 30s. Use when the agent needs to assess Solana network throughput or congestion.
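
The three RPC methods named in the description can be reproduced directly against the public node. A minimal sketch follows; the field names match the standard Solana JSON-RPC responses, but treat the details as an approximation of what the tool computes rather than its actual implementation.

```python
import requests

RPC = "https://solana-rpc.publicnode.com"  # endpoint named in the description

def rpc(method: str, params: list | None = None):
    payload = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params or []}
    return requests.post(RPC, json=payload, timeout=10).json()["result"]

# Most recent sample first; TPS = transactions / sample period.
samples = rpc("getRecentPerformanceSamples", [3])
tps_now = samples[0]["numTransactions"] / samples[0]["samplePeriodSecs"]
tps_avg = sum(s["numTransactions"] / s["samplePeriodSecs"] for s in samples) / len(samples)

epoch = rpc("getEpochInfo")
progress_pct = 100 * epoch["slotIndex"] / epoch["slotsInEpoch"]
current_slot = rpc("getSlot")
```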

ParametersJSON Schema
NameRequiredDescriptionDefault

No parameters

Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description discloses the data source (solana-rpc.publicnode.com), cache TTL (30s), and the returned metrics. However, it does not cover potential errors or rate limits on the upstream RPC. This is adequate but not exhaustive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences, front-loaded with the core purpose, followed by source and cache details, and ends with a usage guideline. Every sentence is necessary and no word is wasted.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no parameters and no output schema, the description sufficiently explains the return values (metrics listed) and the source. It lacks detail on output format but is largely complete for a simple fetch tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters, so the description does not need to elaborate on parameters. It adds value by explaining what metrics are returned and the data source, meeting the baseline for a parameterless tool.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description specifies the tool fetches live Solana mainnet network metrics, listing specific metrics (TPS, slot, epoch progress). It clearly distinguishes from other sibling tools which cover different domains like Bitcoin price, forex, earthquakes, etc.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use the tool: 'when the agent needs to assess Solana network throughput or congestion.' It does not mention when not to use it or alternatives, but the uniqueness among siblings makes the usage context clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
