terminalfeed
Server Details
Real-time data feeds for AI agents, with USDC micropayments on Base for premium tools.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: RipperMercs/terminalfeed
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.5/5 across 27 of 27 tools scored.
Most tools have clearly distinct purposes (e.g., tf_btc_price vs tf_fear_greed), but there is some overlap between free and premium aggregated tools (e.g., tf_briefing, tf_premium_briefing, tf_premium_agent_context). However, descriptions explicitly differentiate them by content and cost.
All tools follow a consistent pattern: 'tf_' prefix (with 'tf_premium_' for premium ones) and snake_case. Names are descriptive and predictable, e.g., tf_btc_price, tf_earthquakes, tf_premium_macro.
27 tools is on the high side, but the server covers a broad domain (crypto, finance, earthquakes, AI trends, payment system, etc.). Each tool serves a specific purpose, so the count is borderline acceptable but feels slightly heavy.
The tool surface covers a wide range of data feeds: crypto, forex, macro indicators, earthquakes, HN, HuggingFace, prediction markets, payment system, and service status. Minor gaps exist (e.g., no dedicated stock prices tool beyond premium macro, no weather), but overall it's comprehensive for a terminal feed.
Available Tools
28 tools
tf_briefing
Fetches a real-time world-state snapshot composed from BTC ticker, Fear and Greed Index, recent earthquakes (USGS), top Hacker News story count, and ISS crew. Returns JSON. No auth required. Cache TTL 60s. Use when the agent needs a quick global pulse before deciding what to investigate.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
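Since the transport is Streamable HTTP, any MCP client can exercise these tools directly. Below is a minimal sketch using the official MCP Python SDK; the server URL is a placeholder (the listing does not publish one), and the same pattern applies to every parameterless tf_* tool on this server.

```python
# Hedged sketch: call tf_briefing over Streamable HTTP with the MCP Python SDK.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://terminalfeed.example/mcp"  # placeholder endpoint

async def main() -> None:
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # tf_briefing takes no arguments; the result is JSON text composed
            # from BTC, Fear & Greed, USGS, Hacker News, and ISS crew data.
            result = await session.call_tool("tf_briefing", {})
            print(result.content)

asyncio.run(main())
```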
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description discloses key behaviors: no authentication needed, cache TTL, output format (JSON), and data composition. Minor missing details like error handling or exact data freshness, but strong overall.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences, front-loaded with purpose. No wasted words; every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and zero parameters, the description covers composition, output format, auth, and caching. Lacks details on JSON structure or potential delays but is sufficient for a lightweight briefing tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so schema coverage is 100% trivially. The description adds meaningful context about the tool's output (the list of data sources), which goes beyond the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it fetches a composite real-time snapshot of multiple data sources (BTC, Fear & Greed, earthquakes, HN top story count, ISS crew), distinguishing it from sibling tools that focus on individual data streams.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit usage guidance: 'Use when the agent needs a quick global pulse before deciding what to investigate.' Also notes 'No auth required' and 60s cache TTL, helping the agent decide when to use this tool over alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tf_btc_price
Fetches the current Bitcoin price in USD with 24h change, high, low, and volume. Source: Binance with CoinCap fallback. Cache TTL 15s. No auth required. Use for crypto trading decisions or when the agent needs a fresh BTC quote.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses source (Binance with CoinCap fallback), cache TTL (15s), and no auth required. No annotations, so description carries full burden; lacks mention of rate limits but otherwise transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no wasted words. Front-loaded with action and key details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers essential aspects for a simple price fetcher: what it returns, source, cache, auth. No output schema, but description lists fields adequately.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so schema coverage is 100%. Description adds value by listing returned fields (price, change, high, low, volume) beyond schema, which is empty.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it fetches current Bitcoin price in USD with 24h change, high, low, and volume. Explicitly differentiates from siblings by specifying use case: crypto trading decisions or fresh BTC quote.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context for when to use: 'crypto trading decisions' or 'needs a fresh BTC quote.' Does not explicitly exclude or mention alternatives, but context is sufficient.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tf_crypto_movers
Fetches the top 15 cryptocurrencies sorted by 24h price change. Includes price, market cap, and percentage move. Source: CoinGecko/CoinLore. Cache TTL 30s. Use when the agent needs to surface notable crypto market moves.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Despite no annotations, the description discloses the source (CoinGecko/CoinLore), cache TTL (30s), and the data included. It implies read-only behavior, though it could state explicitly that there are no side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences. The action verb 'Fetches' is front-loaded. Every sentence adds value with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a parameterless tool. Covers purpose, data, source, cache, and usage. Could optionally mention output format (e.g., JSON array), but not necessary.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema is empty, so parameters are not an issue. Description adds value by explaining the output fields and context, meeting the baseline for zero parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it fetches the top 15 cryptocurrencies sorted by 24h price change, including specific data fields. Distinguishes from siblings like tf_btc_price (single BTC price) and tf_fear_greed (sentiment index).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use when the agent needs to surface notable crypto market moves.' Provides clear context but does not mention when not to use or list alternative tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tf_earthquakes
Fetches recent earthquakes magnitude 2.5 or greater from USGS. Returns count and most recent quake details (magnitude, place, time). Cache TTL 2min. Use for disaster monitoring or when the agent needs current seismic activity.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the behavioral burden. It discloses cache TTL (2min) which is valuable. However, it doesn't mention rate limits, error conditions, or data freshness guarantees beyond caching. For a simple read-only tool, this is adequate but not exhaustive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, each serving a distinct purpose: purpose, return details, and usage/caching. It is front-loaded and contains no extraneous words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and no output schema, the description is remarkably complete. It specifies the data source (USGS), magnitude threshold (2.5+), return fields (count, most recent quake details with magnitude, place, time), caching behavior (2min TTL), and use cases. All essential context is provided.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are 0 parameters, and schema coverage is 100%. According to guidelines, baseline is 4. The description adds no parameter semantics because there are none, but it correctly indicates no input needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool fetches recent earthquakes from USGS with magnitude >=2.5. It specifies return details (count, most recent quake details). This is distinct from sibling data tools (e.g., tf_btc_price, tf_fear_greed) which focus on different domains.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Use for disaster monitoring or when the agent needs current seismic activity,' providing clear context. It does not explicitly exclude alternatives, but given the specific domain, the usage is well-defined.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tf_economic_data
Fetches latest FRED economic indicators: Fed funds rate, CPI, unemployment rate, GDP growth. Cache TTL 1h. Use when the agent needs current US macro indicators. For deeper macro (treasury yields, forex, commodities, indices) use tf_premium_macro.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided; description covers caching behavior (TTL 1h) and implies read-only nature. Lacks details on return format but adequate for a zero-param fetch tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose, efficient use of words, no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Sufficient for a simple tool: covers indicators, caching, and usage context. No output schema needed; minor omission of return structure but not critical.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters, so schema coverage is trivially 100%. The baseline of 4 applies; the description implicitly reinforces that no arguments are needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool fetches specific FRED economic indicators (Fed funds rate, CPI, unemployment, GDP growth) and distinguishes from sibling tf_premium_macro by specifying scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says when to use ('needs current US macro indicators') and when to use alternative ('for deeper macro... use tf_premium_macro').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tf_fear_greed
Fetches the current Crypto Fear and Greed Index value (0-100) with classification label (Extreme Fear, Fear, Neutral, Greed, Extreme Greed). Source: Alternative.me. Cache TTL 5min. Use as a sentiment signal for crypto trading decisions.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It discloses the source (Alternative.me), cache TTL (5min), and output range/classification. This is sufficient for a read-only fetch with no side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no fluff: first describes what it fetches and returns, second gives source, cache TTL, and usage. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with no parameters and no output schema, the description covers the return value (0-100, classification), source, cache TTL, and intended use. No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters, and schema description coverage is 100% (trivial). With 0 parameters, the description need not add parameter info. Baseline 4 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it fetches the current Crypto Fear and Greed Index with a classification label (0-100). It distinguishes itself from siblings like tf_btc_price or tf_premium_sentiment by specifying the exact index and its use as a sentiment signal.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Use as a sentiment signal for crypto trading decisions.' This provides clear guidance on when to use, though it does not mention alternatives or when not to use, which is acceptable given the narrow focus.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tf_forex
Fetches current foreign exchange rates with USD as base for 16 major currencies (EUR, GBP, JPY, CAD, AUD, CHF, CNY, INR, MXN, BRL, KRW, SGD, HKD, SEK, NOK, NZD). Source: Frankfurter (ECB-based). Cache TTL 5min. Use for currency conversion or FX-aware decisions.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full behavioral burden. It discloses the read-only nature ('fetches'), cache behavior ('Cache TTL 5min'), and data source ('Frankfurter (ECB-based)'). This is adequate for a zero-parameter tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose, and every part adds value: action, scope, currencies, source, cache, usage. No redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, the description fully explains what is returned (rates for 16 currencies), with source and cache details. For a parameterless tool, this is complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, so baseline is 4. The description adds meaning by listing the 16 currencies and source, which compensates for lack of param info. No further detail needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description specifies the verb 'fetches', the resource 'foreign exchange rates', and details the scope ('USD as base for 16 major currencies') with an explicit list. It clearly distinguishes from siblings like tf_btc_price (crypto) and tf_economic_data (indicators).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description states 'Use for currency conversion or FX-aware decisions,' providing clear use cases. While no explicit when-not-to-use or alternatives are mentioned, the context is sufficient for a simple fetch tool with no parameters.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tf_harnesses
Returns a snapshot of public agentic-coding benchmark scores across SWE-bench Verified, Terminal-Bench, Aider Polyglot, and METR HCAST. Each row pairs a harness with a model. Same model can score very differently on different harnesses; that gap is the value-add. Pass ?view=summary for top 10 combined leaderboard plus biggest harness gaps; ?view=gaps for full per-model harness deltas; ?view=combined for normalized cross-benchmark ranking; ?view=raw (default) for the full benchmark/result graph. Source: hand-curated from upstream leaderboards (swebench.com, terminal-bench.org, aider.chat, metr.org). Cache TTL 12h. Use when the agent needs to recommend a harness/model combo or explain why two agents using the same model perform differently.
| Name | Required | Description | Default |
|---|---|---|---|
| view | No | Output shape | raw |
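For illustration, a hedged continuation of the earlier session sketch, selecting the summary view:

```python
# Continuing the session from the tf_briefing sketch above. "view" is the
# tool's only argument; "summary" returns the top-10 combined leaderboard
# plus the biggest harness gaps, while omitting it yields the raw graph.
result = await session.call_tool("tf_harnesses", {"view": "summary"})
print(result.content)
```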
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Despite no annotations, the description discloses the data source (hand-curated from upstream leaderboards), the cache TTL (12h), and the nuance that the same model can score differently across harnesses. Side effects and auth requirements are not mentioned, but the operation is implicitly read-only.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with purpose, then explains views, source, and cache. Every sentence adds value, though the middle section listing view options is slightly verbose but justified.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 parameter, no output schema), the description adequately covers what it returns, the data sources, and usage context. No significant gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the single parameter 'view', but the description adds meaningful context for each enum value (e.g., '?view=summary for top 10 combined leaderboard plus biggest harness gaps'), going beyond the bare enum names in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Returns' and the resource 'snapshot of public agentic-coding benchmark scores' with specific benchmarks listed, distinguishing it from sibling tools which cover different domains.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'Use when the agent needs to recommend a harness/model combo or explain why two agents using the same model perform differently.' No exclusions are needed given the tool's specificity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tf_hf_trending
Fetches the top 15 trending HuggingFace models sorted by likes in the last 7 days. Each item includes id (author/name), likes, downloads, pipeline tag, and url. Source: huggingface.co/api/models. Cache TTL 10min. Use when the agent needs to surface what the open-source AI community is paying attention to right now.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description does not disclose behavioral traits such as read-only status or side effects. It mentions only the cache TTL and data source, which is insufficient for a tool without annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences, front-loaded with action. Every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter tool with no output schema, the description covers purpose, source, cache, and usage. Sufficient for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has no parameters, so description adds value by explaining what the tool returns. Schema coverage is 100% trivially, but description provides context beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it fetches top 15 trending HuggingFace models sorted by likes in last 7 days, with specific fields. No sibling tools overlap, making it unique.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use when the agent needs to surface what the open-source AI community is paying attention to right now.' Provides clear context though no explicit exclusions or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tf_payment_balance
GET endpoint that returns remaining credits for the bearer token in the Authorization header. Requires Authorization: Bearer tf_live_<64-char-hex>. Costs 0 credits. Use to monitor agent budget.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Describes the HTTP method, auth requirement, cost, and that it returns remaining credits, providing key behavioral info beyond the absent annotations. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with no fluff, front-loading the purpose and key details. Every sentence adds necessary context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple balance check: explains auth format, cost, and purpose. Lacks return format details, but 'remaining credits' is sufficient given no output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist in the schema; the description compensates by explaining the authorization token semantics and what the endpoint returns, adding value beyond the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it is a GET endpoint returning remaining credits for the bearer token, specifying the exact resource and action. It distinguishes itself from sibling payment tools like buying or confirming credits.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states to use for monitoring agent budget and includes required authorization header and zero cost. Lacks explicit when-not-to-use guidance but is clear given sibling context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tf_payment_buy_credits
POST endpoint that returns the published USDC wallet address (0x549c82e6bfc54bdae9a2073744cbc2af5d1fc6d1 on Base mainnet), a unique memo, and a quote tying the dollar amount to credits at $1 USDC = 50 credits. Use as the first step when the agent needs to buy credits to access /api/pro/* endpoints.
| Name | Required | Description | Default |
|---|---|---|---|
| amount_usd | No | USDC amount to convert. Minimum $1 = 50 credits. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description bears full burden. Discloses it's a POST endpoint and output contents but does not mention side effects, expiration, or required next steps (e.g., confirmation).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose, no extraneous text. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple first-step tool but lacks details on return format, error handling, and workflow continuation. With no output schema, more context on output structure would help.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Adds value beyond schema by stating minimum amount ($1) and conversion rate (50 credits per USDC). Schema already documented parameter, but description provides essential context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool returns a USDC wallet address, memo, and quote for buying credits, and positions it as the first step. Distinguishes from sibling payment tools (balance, confirm, history).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'use as the first step when the agent needs to buy credits to access /api/pro/* endpoints.' Provides clear context but does not specify when not to use or mention alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tf_payment_confirm
POST endpoint that verifies an on-chain Base mainnet USDC transfer to the published wallet and returns a bearer token (tf_live_<64-char-hex>) plus credit count. Use after the agent has sent USDC, with the tx hash and the memo from tf_payment_buy_credits. The returned token is cross-redeemable on tensorfeed.ai.
| Name | Required | Description | Default |
|---|---|---|---|
| nonce | No | The memo string returned from tf_payment_buy_credits (optional but recommended). | |
| tx_hash | No | On-chain Base mainnet USDC transaction hash. | |
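Taken together, tf_payment_buy_credits and tf_payment_confirm form a two-step purchase flow. Below is a hedged sketch reusing the session from the earlier examples; the USDC transfer itself happens out of band, and the tx hash and memo shown are placeholders:

```python
# Step 1: request a quote. The result carries the published wallet address,
# a unique memo, and the $1 USDC = 50 credits quote.
quote = await session.call_tool("tf_payment_buy_credits", {"amount_usd": 1})

# Step 2 (out of band): send the USDC on Base mainnet to the quoted address,
# then confirm with the resulting transaction hash.
confirmed = await session.call_tool(
    "tf_payment_confirm",
    {
        "tx_hash": "0x...",           # placeholder: your Base mainnet tx hash
        "nonce": "memo-from-step-1",  # optional but recommended
    },
)
# The result includes a bearer token (tf_live_<64-char-hex>) and a credit
# count; tf_payment_balance costs 0 credits and can verify it at any time.
print(confirmed.content)
```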
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses that this is a POST endpoint, specifies the return format (bearer token and credit count), and mentions cross-redeemability. Missing details on failure modes or input validation, but overall adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences with no wasted words. First sentence states purpose and action, second provides usage context, third specifies return value and property.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 2 parameters and no output schema, the description covers purpose, usage, and returns sufficiently. It does not explain error handling or rate limits, but for a simple verification endpoint, this is reasonable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema coverage is 100% with descriptions for both parameters. The description adds value by linking nonce to the memo from tf_payment_buy_credits and clarifying tx_hash as the On-chain Base mainnet USDC transaction hash, which goes beyond the schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: verifying an on-chain Base mainnet USDC transfer and returning a bearer token plus credit count. It also implicitly distinguishes from sibling tools like tf_payment_buy_credits.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly instructs when to use the tool: 'after the agent has sent USDC, with the tx hash and the memo from tf_payment_buy_credits.' This provides clear context and ties directly to a sibling tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tf_payment_history
GET endpoint that returns confirmed USDC purchases (tx_hash, amount_usd, credits_added, block_number, confirmed_at) plus current balance and totals for the bearer token. Requires Authorization: Bearer tf_live_<64-char-hex>. Costs 0 credits. Tokens minted before the ledger existed return current_balance with purchases: [].
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description carries full burden. It discloses the need for specific auth, that it costs 0 credits, and the edge case for tokens minted before the ledger. It describes the response fields clearly, but does not mention rate limits or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with 'GET endpoint', and conveys all necessary information without fluff. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description covers response fields, auth, cost, and an edge case. It is adequate for a simple list endpoint, though it could mention idempotency or error handling.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are no parameters, so the description doesn't need to add parameter info. Baseline for 0 parameters is 4. The description clarifies what data is returned, which adds value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it's a GET endpoint returning confirmed USDC purchases (with specific fields) plus current balance and totals for the bearer token. It distinguishes itself from other tools like tf_payment_balance by focusing on purchase history.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description specifies authorization requirements (Bearer token) and that it costs 0 credits, but does not explicitly guide when to use this tool over sibling tools like tf_payment_balance or tf_payment_buy_credits. It only implies its purpose.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tf_predictions
Fetches active Polymarket prediction markets sorted by 24h volume. Each market includes question, outcomes, and volume. Cache TTL 60s. Use when the agent needs market-implied probabilities on world events (elections, sports, macro).
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description discloses cache TTL of 60s and sorting by volume, which are key behavioral traits. It does not cover error handling or empty states, but for a simple read tool this is adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences, front-loaded with main action, no unnecessary words. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters or output schema, the description fully explains the tool's output (question, outcomes, volume), caching, and ordering. Sufficient for an agent to understand and use the tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so the description adds no parameter details. According to the guidelines, zero parameters yields a baseline score of 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool fetches active Polymarket prediction markets sorted by 24h volume, and mentions key fields like question, outcomes, and volume. It is distinct from siblings like tf_btc_price or tf_earthquakes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states to use when the agent needs market-implied probabilities on world events (elections, sports, macro), providing clear context and distinguishing from other data tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tf_service_status
Fetches operational status of major dev infrastructure (GitHub, Cloudflare, Discord, OpenAI, Vercel, npm, Reddit, Atlassian, Anthropic). Cache TTL 60s. Use when the agent needs to know if a dependency is up or to explain a recent outage.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden. It discloses a behavioral trait: 'Cache TTL 60s,' which informs the agent that results are cached for 60 seconds. This adds value beyond what annotations would provide, indicating it's a read-only operation with temporary caching.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences: first lists the action and services, second provides cache TTL and usage guidance. Every sentence earns its place with no redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and no output schema, the description covers the core purpose, cache behavior, and usage context. It is adequate for the tool's simplicity, though it could optionally detail the status format (e.g., up/down with latency).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters (empty properties), so baseline is 4. The description does not need to add parameter info, and it doesn't, which is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it fetches operational status of major dev infrastructure, listing specific services (GitHub, Cloudflare, Discord, etc.). The verb 'Fetches' and resource 'operational status' are specific, and the tool is distinct from siblings which cover other topics like BTC price or economic data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Use when the agent needs to know if a dependency is up or to explain a recent outage.' This provides clear context for when to use the tool, though it doesn't mention when not to use it or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tf_solana_network
Fetches live Solana mainnet network metrics: current transactions-per-second (most recent 60s sample), 3-sample average TPS, current slot, average slot time in ms, and epoch progress percentage. Source: solana-rpc.publicnode.com (getRecentPerformanceSamples + getSlot + getEpochInfo). Cache TTL 30s. Use when the agent needs to assess Solana network throughput or congestion.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description discloses the data source (solana-rpc.publicnode.com), cache TTL (30s), and lists returned metrics. However, it does not cover potential errors, rate limits, or whether the metrics are real-time or sampled. This is adequate but not exhaustive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, front-loaded with the core purpose, followed by source and cache details, and ends with a usage guideline. Every sentence is necessary and no word is wasted.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and no output schema, the description sufficiently explains the return values (metrics listed) and the source. It lacks detail on output format but is largely complete for a simple fetch tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, so the description does not need to elaborate on parameters. It adds value by explaining what metrics are returned and the data source, meeting the baseline for a parameterless tool.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description specifies the tool fetches live Solana mainnet network metrics, listing specific metrics (TPS, slot, epoch progress). It clearly distinguishes from other sibling tools which cover different domains like Bitcoin price, forex, earthquakes, etc.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use the tool: 'when the agent needs to assess Solana network throughput or congestion.' It does not mention when not to use it or alternatives, but the uniqueness among siblings makes the usage context clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
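Before relying on automatic detection, you can sanity-check that the file is reachable and well-formed; a minimal sketch, assuming your own domain in place of the placeholder:

```python
import json
from urllib.request import urlopen

# Placeholder domain: substitute the domain that serves your MCP server.
with urlopen("https://your-domain.example/.well-known/glama.json") as resp:
    doc = json.load(resp)

assert doc["$schema"] == "https://glama.ai/mcp/schemas/connector.json"
assert doc["maintainers"][0]["email"]  # must match your Glama account email
print("glama.json looks well-formed")
```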
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.