
Octodamus Market Intelligence

Server Details

AI market oracle for agents. Crypto signals, Polymarket edges, x402 USDC payments on Base.

Status: Healthy
Transport: Streamable HTTP
Repository: Octodamus/octodamus-site
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.5/5 across 10 of 10 tools scored. Lowest: 2.6/5.

Server Coherence: A
Disambiguation: 4/5

Most tools have distinct purposes, but get_agent_signal and get_oracle_signals both provide trading signal data at different granularities, which could cause confusion. However, descriptions clarify the use cases (consolidated vs raw votes). Overall, the boundaries are clear enough.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern with 'get_' prefix, making them predictable and easy to understand. The naming is uniform across all 10 tools.

Tool Count: 5/5

With 10 tools, the server is well-scoped for a market intelligence service covering pricing, sentiment, signals, prediction markets, and administrative actions like subscription. Each tool serves a clear purpose, and the count is appropriate.

Completeness: 4/5

The tool surface covers the core domain of market intelligence: price data, sentiment, trading signals (both consolidated and raw), prediction markets, and data source listing. Minor gaps exist, such as missing historical data or backtesting capabilities, but these are not essential for the stated purpose.

Available Tools

10 tools
get_agent_signal (Trading Signal): A
Read-only, Idempotent

Consolidated trading signal from the 9-of-11 oracle consensus system.

Use this as the primary decision endpoint; poll every 15 minutes (next_poll_seconds = 900 in the response). For first-call context initialisation, prefer get_all_data() instead.

Response fields:
action — "BUY" | "SELL" | "HOLD" | "WATCH"
confidence — float 0.0–1.0 (higher = stronger oracle consensus)
signal — "BULLISH" | "BEARISH" | "NEUTRAL"
fear_greed — int 0–100 (0 = Extreme Fear, 100 = Extreme Greed)
btc_trend — "UP" | "DOWN" | "SIDEWAYS"
polymarket_edge — {market, ev, side} top expected-value play
reasoning — plain-text explanation of the consensus
next_poll_seconds — seconds until the signal refreshes (typically 900)

Parameters (JSON Schema)
api_key (optional): OctoData API key (format: octo_...). Free key (500 req/day): POST https://api.octodamus.com/v1/signup?email=YOUR_EMAIL. Premium key (10k req/day): call get_premium_api() or GET /v1/subscribe?plan=trial.

Output Schema
No output parameters
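The response fields above can be consumed with a few lines of client code. A minimal sketch, assuming the documented field names; the sample payload is hypothetical and real values come from the tool call:

```python
import time

# Hypothetical sample payload matching the documented response fields.
sample = {
    "action": "BUY",
    "confidence": 0.78,
    "signal": "BULLISH",
    "fear_greed": 62,
    "btc_trend": "UP",
    "polymarket_edge": {"market": "BTC above 100k?", "ev": 0.12, "side": "YES"},
    "reasoning": "Majority of oracles voted bullish.",
    "next_poll_seconds": 900,
}

def should_act(resp, min_confidence=0.7):
    """Act only on BUY/SELL with confidence at or above the threshold."""
    return resp["action"] in ("BUY", "SELL") and resp["confidence"] >= min_confidence

def next_poll_at(resp, now=None):
    """Wall-clock time of the next refresh (defaults to 900 s out)."""
    now = time.time() if now is None else now
    return now + resp.get("next_poll_seconds", 900)

print(should_act(sample))           # True
print(next_poll_at(sample, now=0))  # 900
```

The `min_confidence` cutoff is an assumption, not a documented threshold; tune it to your own risk tolerance.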

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the polling behavior ('Poll every 15 minutes'), which is a key operational trait. However, it doesn't mention authentication requirements (implied by the api_key parameter but not stated), rate limits beyond polling, error handling, or what 'primary signal endpoint' means in context. The description adds some behavioral context but leaves gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded: it starts with the core purpose and return values, then adds the polling instruction. Both sentences earn their place by providing essential information. It could be slightly more structured (e.g., separating return fields from usage), but it's efficient with no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (a signal endpoint with multiple return fields), no annotations, no output schema, and low parameter coverage, the description is moderately complete. It details the return structure and polling frequency, which are critical, but lacks parameter explanations, authentication context, and error handling. For a tool with rich output but minimal structured support, it does an adequate but incomplete job.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 1 parameter (api_key) with 0% description coverage in the schema. The tool description does not mention the api_key parameter at all, providing no semantic information beyond what the bare schema indicates. With low schema coverage, the description fails to compensate, leaving the parameter's purpose and format unexplained.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Primary signal endpoint. Returns action (BUY/SELL/HOLD), confidence (0-1), signal (BULLISH/BEARISH/NEUTRAL), fear_greed (0-100), btc_trend, polymarket_edge {market, ev}, and reasoning.' It specifies the verb 'returns' and lists the comprehensive data structure returned, though it doesn't explicitly differentiate from sibling tools like 'get_oracle_signals' or 'get_polymarket_edge'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'Poll every 15 minutes.' This tells the agent when and how frequently to use this tool, which is crucial for avoiding excessive API calls. It doesn't mention alternatives, but the polling instruction is clear and actionable.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_all_data (All Signals Combined): B
Read-only, Idempotent

All signal data in a single call: signal + sentiment + prices + Polymarket edges.

Use this on session initialisation instead of calling each tool separately. Equivalent to get_agent_signal() + get_sentiment() + get_prices() + get_polymarket_edge() combined. After initialisation, use get_agent_signal() on its 15-minute polling cycle for updates.

Parameters (JSON Schema)
api_key (optional): OctoData API key (format: octo_...). Free key (500 req/day): POST https://api.octodamus.com/v1/signup?email=YOUR_EMAIL. Premium key (10k req/day): call get_premium_api() or GET /v1/subscribe?plan=trial.

Output Schema
No output parameters
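The init-then-poll pattern the description prescribes can be sketched as below. The fetchers are injected stand-ins for real MCP tool calls, and the snapshot shape is an assumption:

```python
# Sketch of the documented call pattern: one get_all_data() at session
# start, then get_agent_signal() on its 15-minute cycle.
class SignalSession:
    def __init__(self, fetch_all, fetch_signal):
        self.fetch_all = fetch_all        # stand-in for get_all_data()
        self.fetch_signal = fetch_signal  # stand-in for get_agent_signal()
        self.state = None

    def start(self):
        # First call: full snapshot (signal + sentiment + prices + edges).
        self.state = self.fetch_all()
        return self.state

    def refresh(self):
        # Subsequent polls: only the consolidated signal is re-fetched.
        self.state["signal"] = self.fetch_signal()
        return self.state

session = SignalSession(
    fetch_all=lambda: {"signal": {"action": "HOLD"}, "prices": {"BTC": 97000}},
    fetch_signal=lambda: {"action": "BUY"},
)
session.start()
session.refresh()
print(session.state["signal"]["action"])  # BUY
```

In a real agent the lambdas would be replaced by actual MCP tool invocations, with `refresh()` scheduled on the 900-second cadence the signal tool reports.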

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions the tool returns a 'combined snapshot' but doesn't disclose behavioral traits such as rate limits, authentication needs (beyond the api_key parameter), data freshness, or error handling. This leaves significant gaps for a tool that likely involves data fetching.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads key information ('Combined snapshot') and lists the data types included. There's no wasted text, making it appropriately sized and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of combining multiple data sources, no annotations, no output schema, and low schema coverage, the description is incomplete. It lacks details on what the snapshot includes (e.g., format, structure), how data is aggregated, or any limitations, making it inadequate for informed tool selection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 1 parameter (api_key) with 0% description coverage, and the tool description doesn't add any parameter-specific details. Since there are 0 parameters with semantic info, the baseline is 4, but the description doesn't compensate for the lack of schema coverage by explaining the api_key's role or format, so it's scored lower.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as providing a 'combined snapshot' of multiple data types (signal, sentiment, prices, Polymarket) in one call. It specifies the verb 'get' and resources, though it doesn't explicitly distinguish from siblings like get_agent_signal or get_prices, which offer similar data separately.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by mentioning it combines multiple data sources, suggesting it's for efficiency when needing all data types at once. However, it doesn't explicitly state when to use this vs. alternatives like get_agent_signal for just signals or get_prices for just prices, nor does it mention prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_data_sources (Data Sources): A
Read-only, Idempotent

List all 27 live data feeds powering the Octodamus oracle system.

No API key required. Use for transparency or discovery — shows each source name, data type, and refresh interval. Useful when explaining signal provenance to end users or auditing data coverage.

Parameters (JSON Schema)
No parameters

Output Schema
No output parameters
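Assuming per-source records carry the promised name, data type, and refresh interval (the field names here are guesses, not a documented schema), a provenance summary for end users might look like:

```python
# Hypothetical sample of the per-source records the description promises.
sources = [
    {"name": "funding_rate", "type": "derivatives", "refresh_seconds": 300},
    {"name": "fear_greed", "type": "sentiment", "refresh_seconds": 3600},
    {"name": "tsa_travel", "type": "macro", "refresh_seconds": 86400},
]

def provenance_summary(feeds):
    """One human-readable line per feed, for explaining signal provenance."""
    return [
        f"{f['name']} ({f['type']}): refreshes every {f['refresh_seconds']}s"
        for f in feeds
    ]

for line in provenance_summary(sources):
    print(line)
```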

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively communicates that this is a read-only operation ('List') and mentions the authentication requirement ('No API key required'), but doesn't describe response format, rate limits, or other behavioral traits like pagination or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with two sentences that each earn their place. The first sentence states the core purpose with specific details, and the second provides important usage context about authentication requirements. There's zero wasted language.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no parameters, no annotations, and no output schema, the description provides adequate basic information about what the tool does and its accessibility. However, for a tool that presumably returns a list of data sources, the description doesn't explain what information will be returned about each data source or the response format, which would be helpful context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters with 100% schema description coverage, so the schema already fully documents the parameter situation. The description appropriately doesn't add parameter information beyond what's in the schema, which is correct for a parameterless tool. A baseline of 4 is appropriate since no parameter documentation is needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('List all 27 live data feeds'), identifies the resource ('powering Octodamus'), and distinguishes this from sibling tools by specifying it's about data sources rather than signals, prices, or market briefs. The mention of '27 live data feeds' provides concrete scope information.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use this tool ('No API key required') which implies it's accessible without authentication. However, it doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools for different data needs.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_guide (Get Trading Guide): A

Purchase the Build the House trading system guide via x402 on Base.

Returns step-by-step x402 payment instructions. After completing the EIP-3009 payment ($29 USDC on Base), the API returns a download_url valid for 30 days. No API key required to purchase.

Parameters (JSON Schema)
No parameters

Output Schema
No output parameters
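One detail worth encoding client-side is the 30-day validity of the returned download_url. A minimal sketch; the TTL is taken from the description, everything else (issue timestamp, helper name) is an assumption:

```python
from datetime import datetime, timedelta, timezone

GUIDE_URL_TTL = timedelta(days=30)  # per the description: valid for 30 days

def url_still_valid(issued_at, now=None):
    """True while the purchased download_url is inside its 30-day window."""
    now = now or datetime.now(timezone.utc)
    return now - issued_at < GUIDE_URL_TTL

issued = datetime(2025, 1, 1, tzinfo=timezone.utc)
print(url_still_valid(issued, now=datetime(2025, 1, 20, tzinfo=timezone.utc)))  # True
print(url_still_valid(issued, now=datetime(2025, 2, 15, tzinfo=timezone.utc)))  # False
```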

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond annotations, describes payment process (x402, EIP-3009, $29 USDC on Base), step-by-step instructions, and download URL validity of 30 days. Discloses no API key needed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, no unnecessary words. Front-loaded with action verb 'Purchase'.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Explains return values (payment instructions, download_url with 30-day validity) and process. No gaps given no parameters and presence of output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters; baseline score of 4 applies. Description adds context about payment process but no param details needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool purchases a trading guide via x402 on Base and returns payment instructions. Distinct from sibling tools which focus on data retrieval (e.g., get_market_brief, get_prices).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Specifies when to use (to purchase the guide) and notes no API key required. Does not explicitly state alternatives, but the unique purpose makes usage clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_market_brief (Market Brief): B
Read-only, Idempotent

Full AI market briefing as a concise narrative paragraph.

Ideal for injecting into an agent system prompt at session start to ground all subsequent reasoning in current market conditions. Covers macro regime, crypto momentum, key levels, and notable catalysts. Refreshes every 30 minutes; call once per session rather than polling.

Response: {brief: "...narrative text..."}

Parameters (JSON Schema)
api_key (optional): OctoData API key (format: octo_...). Free key (500 req/day): POST https://api.octodamus.com/v1/signup?email=YOUR_EMAIL. Premium key (10k req/day): call get_premium_api() or GET /v1/subscribe?plan=trial.

Output Schema
No output parameters
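Injecting the brief into a system prompt at session start, as the card suggests, can be sketched as follows. The {brief: ...} response shape is from the card; the prompt template itself is an assumption:

```python
def build_system_prompt(base_prompt, brief_response):
    """Append the market brief to a base system prompt."""
    brief = brief_response.get("brief", "").strip()
    if not brief:
        return base_prompt  # fall back gracefully if the brief is empty
    return f"{base_prompt}\n\nCurrent market conditions:\n{brief}"

prompt = build_system_prompt(
    "You are a trading agent.",
    {"brief": "Risk-on regime; BTC holding above key support."},
)
print(prompt)
```

Since the brief refreshes every 30 minutes and the card says to call once per session, there is no need to rebuild the prompt mid-session.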

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the output format (narrative) and context (agent reasoning), but lacks critical details such as whether this is a read-only operation, if it requires authentication (implied by the api_key parameter but not stated), rate limits, or any side effects. This is a significant gap for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded, consisting of a single sentence that efficiently conveys the core purpose and ideal usage. There is no wasted language, and every word serves to add value, making it highly effective in its brevity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (a market briefing tool with no output schema and no annotations), the description is incomplete. It doesn't cover parameter meanings, behavioral traits like authentication needs, or output details beyond format. While it states the purpose clearly, it lacks the depth needed for a tool that likely involves external data fetching and agent integration.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 1 parameter (api_key) with 0% description coverage, meaning the schema provides no details about this parameter. The description does not mention the api_key or explain its purpose, failing to compensate for the lack of schema documentation. This leaves the parameter's role unclear, which is a notable deficiency.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to generate a 'Full AI market briefing in narrative format.' It specifies both the resource (market briefing) and the output format (narrative), which is helpful. However, it doesn't explicitly differentiate this tool from siblings like 'get_all_data' or 'get_prices,' which might also provide market-related information, so it doesn't reach the highest score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides some implied usage guidance by stating the output is 'ideal for agent reasoning context,' suggesting it's meant for AI agents to use in decision-making. However, it doesn't offer explicit when-to-use or when-not-to-use advice, nor does it mention alternatives among the sibling tools, leaving gaps in practical application.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_oracle_signals (Oracle Signal Breakdown): C
Read-only, Idempotent

Raw votes from all 12 oracle signals with per-signal confidence and consensus score.

Each oracle is a separate real-world data source: funding rate, open interest, long/short ratio, Fear & Greed index, macro regime (FRED), aviation volume, TSA travel demand, Polymarket crowd, options flow, congressional trading, CLOB order book depth, and Binance 24h cumulative delta. Each votes BUY (+1), SELL (-1), or NEUTRAL (0) independently.

Use this for deep analysis, signal attribution, or debugging a BUY/SELL/HOLD decision. For a consolidated action, call get_agent_signal() instead.

Response example:
{
  "consensus_score": 8,
  "max_score": 12,
  "action": "BUY",
  "win_rate": 0.62,
  "oracles": [
    {"name": "funding_rate", "vote": 1, "confidence": 0.85},
    {"name": "long_short_ratio", "vote": 1, "confidence": 0.70},
    {"name": "fear_greed", "vote": 0, "confidence": 0.50},
    {"name": "macro_regime", "vote": 1, "confidence": 0.80},
    {"name": "binance_delta", "vote": 1, "confidence": 0.75}
  ]
}

Parameters (JSON Schema)
api_key (optional): OctoData API key (format: octo_...). Free key (500 req/day): POST https://api.octodamus.com/v1/signup?email=YOUR_EMAIL. Premium key (10k req/day): call get_premium_api() or GET /v1/subscribe?plan=trial.

Output Schema
No output parameters
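The consensus arithmetic implied by the vote encoding can be recomputed client-side as a sanity check. The +1/-1/0 vote encoding is from the card; the BUY/SELL thresholds below are illustrative assumptions, not the server's actual cutoffs:

```python
def consensus(oracles, buy_at=6, sell_at=-6):
    """Sum raw oracle votes and map the score to an action.
    Thresholds are illustrative, not the server's real cutoffs."""
    score = sum(o["vote"] for o in oracles)
    if score >= buy_at:
        action = "BUY"
    elif score <= sell_at:
        action = "SELL"
    else:
        action = "HOLD"
    return {"consensus_score": score, "action": action}

votes = [
    {"name": "funding_rate", "vote": 1, "confidence": 0.85},
    {"name": "long_short_ratio", "vote": 1, "confidence": 0.70},
    {"name": "fear_greed", "vote": 0, "confidence": 0.50},
    {"name": "macro_regime", "vote": 1, "confidence": 0.80},
    {"name": "binance_delta", "vote": 1, "confidence": 0.75},
]
print(consensus(votes))  # {'consensus_score': 4, 'action': 'HOLD'}
```

Recomputing the score from the `oracles` array is mainly useful for signal attribution: diff your local sum against the reported `consensus_score` to spot truncated or missing votes.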

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions data elements but fails to describe critical traits like whether this is a read-only operation, authentication needs (implied by 'api_key' but not stated), rate limits, or response format. This leaves significant gaps in understanding the tool's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that lists key data elements without unnecessary words. It is appropriately sized and front-loaded, though it could be slightly more structured for clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of retrieving oracle data, no annotations, no output schema, and low schema coverage, the description is incomplete. It lacks details on authentication, response structure, error handling, and how it differs from siblings, making it inadequate for effective tool use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 1 parameter with 0% description coverage, so the description must compensate. It adds no parameter-specific information beyond what the schema provides (e.g., no explanation of 'api_key' usage or format). Since there's only one parameter, the baseline is 4, but the lack of any semantic detail reduces it to 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool retrieves 'Raw Oracle pack' with specific data elements (individual votes, consensus strength, win rate), which gives a vague purpose. However, it doesn't specify the exact action (e.g., fetch, retrieve) or distinguish it from sibling tools like 'get_agent_signal' or 'get_all_data', leaving ambiguity about scope and differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives such as 'get_agent_signal' or 'get_all_data'. The description lists data elements but offers no context on appropriate scenarios, prerequisites, or exclusions, leaving the agent without usage direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_polymarket_edge (Polymarket Edge Plays): B
Read-only, Idempotent

Ranked Polymarket prediction markets by expected value (EV).

Use when you want to position on prediction markets. Returns a list ordered by EV descending; each entry includes question, recommended_side ("YES" or "NO"), expected_value (float), and confidence.

Complement with get_agent_signal() to confirm directional alignment before acting on any Polymarket position.

Parameters (JSON Schema)
api_key (optional): OctoData API key (format: octo_...). Free key (500 req/day): POST https://api.octodamus.com/v1/signup?email=YOUR_EMAIL. Premium key (10k req/day): call get_premium_api() or GET /v1/subscribe?plan=trial.

Output Schema
No output parameters
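Combining the ranked edges with the consolidated signal, as the card recommends, might look like the sketch below. The entry field names are from the card; the filter thresholds and the rule that a NEUTRAL signal means standing aside are assumptions:

```python
# Hypothetical sample entries in the documented shape (EV-descending).
edges = [
    {"question": "BTC up this month?", "recommended_side": "YES",
     "expected_value": 0.15, "confidence": 0.8},
    {"question": "ETH flips BTC?", "recommended_side": "NO",
     "expected_value": 0.04, "confidence": 0.6},
]

def actionable_edges(plays, signal, min_ev=0.05, min_conf=0.7):
    """Keep high-EV, high-confidence plays; skip everything when the
    consolidated signal is NEUTRAL (assumed stand-aside rule)."""
    if signal.get("signal") == "NEUTRAL":
        return []
    return [p for p in plays
            if p["expected_value"] >= min_ev and p["confidence"] >= min_conf]

picks = actionable_edges(edges, {"signal": "BULLISH"})
print([p["question"] for p in picks])  # ['BTC up this month?']
```

In practice the `signal` argument would come from get_agent_signal(), giving the directional-alignment check the card asks for.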

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the tool returns a ranked list with specific fields, but doesn't cover critical aspects like whether it's read-only, requires authentication (implied by api_key but not stated), rate limits, error handling, or data freshness. The description adds some value but leaves significant gaps for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded, consisting of a single sentence that efficiently conveys the core functionality. Every word earns its place, with no wasted text or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (returns ranked markets with scoring), no annotations, no output schema, and minimal parameter documentation, the description is incomplete. It covers the basic purpose but lacks details on authentication needs, return format beyond field names, and behavioral constraints. It's minimally viable but has clear gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds no parameter-specific information beyond what the input schema provides. With 0% schema description coverage and 1 parameter (api_key), the description doesn't explain the purpose or format of the api_key. However, the baseline is 3 since the parameter count is low (1) and the tool's overall purpose is clear, though it doesn't compensate for the lack of schema documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states what the tool does: returns top Polymarket prediction markets with EV scoring, providing a ranked list with recommended_side, expected value, and confidence per market. It specifies the verb ('returns') and resource ('top Polymarket prediction markets'), though it doesn't explicitly differentiate from sibling tools like get_prices or get_oracle_signals.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, context, or exclusions, leaving the agent to infer usage from the purpose alone. No explicit alternatives or when-not scenarios are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_premium_api (Get Premium API Access): A

Subscribe to OctoData Premium API via x402 on Base.

Returns step-by-step x402 payment instructions for any plan. After completing the EIP-3009 payment, the API returns an api_key immediately — no human in the loop. Free option also available.

Plans:
Micro — $0.01 USDC per call, no key needed, pay-per-request via x402
Trial — $5 USDC, 7 days, 10k req/day
Annual — $29 USDC/year early bird (first 100 seats), $149/year after

Parameters (JSON Schema)
No parameters

Output Schema (JSON Schema)

No output parameters

Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description fully discloses behavioral traits: it returns step-by-step payment instructions, notes that after completing EIP-3009 payment the API returns an api_key immediately with no human in the loop, and mentions a free option. Annotations (e.g., readOnlyHint=false) are consistent; no contradiction. The description adds value beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is reasonably concise, with clear sections for payment summary, plans list, and post-payment behavior. However, it could be slightly more structured (e.g., bullet points for plans) to improve scannability. No excess verbiage.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given zero parameters, the presence of an output schema, and comprehensive annotations, the description is complete. It explains the subscription flow, payment methods, plans, and the outcome (api_key). Sibling tools are all data retrieval, so this stands alone.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist in the input schema, so the description does not need to add parameter semantics. With 100% schema description coverage, the tool is self-explanatory. The description provides rich context about plans and payment flow, which is sufficient.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses the specific verb 'Subscribe' and clearly states the resource: 'OctoData Premium API via x402 on Base'. It distinctly covers the tool's function of returning payment instructions and API key issuance, differentiating it from siblings which are data retrieval tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly tells when to use this tool: to subscribe to premium API via x402 payments. It states the process and mentions a free option, helping the agent understand the context. No sibling alternatives are provided, but the description is self-contained and clear about its exclusive purpose.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_prices: Live Prices (grade C)
Annotations: Read-only, Idempotent

Live spot prices with 24-hour percentage change for BTC, ETH, SOL, NVDA, TSLA, AAPL.

Use before position sizing, level checks, or any calculation that requires a current reference price. Refreshes every 60 seconds via Kraken (crypto) and Finnhub (equities). Free tier included.

Response keyed by symbol — example: { "BTC": {"price_usd": 84200.50, "change_24h_pct": 2.3}, "ETH": {"price_usd": 1820.10, "change_24h_pct": -0.8}, "SOL": {"price_usd": 148.40, "change_24h_pct": 1.1}, "NVDA": {"price_usd": 875.00, "change_24h_pct": 0.4}, "TSLA": {"price_usd": 250.20, "change_24h_pct": -1.2}, "AAPL": {"price_usd": 190.50, "change_24h_pct": 0.6} }

For directional signal on these prices, call get_agent_signal() instead.
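The keyed-by-symbol response shape can be consumed directly. A minimal sketch, using the example payload copied from the description above (the field names price_usd and change_24h_pct come from there; the helper functions are illustrative):

```python
# Example payload from the tool description, keyed by symbol.
prices = {
    "BTC":  {"price_usd": 84200.50, "change_24h_pct": 2.3},
    "ETH":  {"price_usd": 1820.10,  "change_24h_pct": -0.8},
    "SOL":  {"price_usd": 148.40,   "change_24h_pct": 1.1},
    "NVDA": {"price_usd": 875.00,   "change_24h_pct": 0.4},
    "TSLA": {"price_usd": 250.20,   "change_24h_pct": -1.2},
    "AAPL": {"price_usd": 190.50,   "change_24h_pct": 0.6},
}

def biggest_mover(quotes: dict) -> str:
    """Symbol with the largest absolute 24h percentage move."""
    return max(quotes, key=lambda s: abs(quotes[s]["change_24h_pct"]))

def reference_price(quotes: dict, symbol: str) -> float:
    """Current reference price for position sizing or level checks."""
    return quotes[symbol]["price_usd"]

print(biggest_mover(prices))           # BTC
print(reference_price(prices, "SOL"))  # 148.4
```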

Parameters (JSON Schema)
- api_key (optional): OctoData API key (format: octo_...). Free key (500 req/day): POST https://api.octodamus.com/v1/signup?email=YOUR_EMAIL. Premium key (10k req/day): call get_premium_api() or GET /v1/subscribe?plan=trial.

Output Schema (JSON Schema)

No output parameters

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool fetches current data with 24h changes, implying a read-only operation, but doesn't cover critical aspects like rate limits, authentication needs (beyond the api_key parameter), error handling, or data freshness. This leaves significant gaps for a tool that likely interacts with external APIs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficient and front-loads the core functionality. Every word earns its place by specifying the data type (crypto prices), key metric (24h % change), and scope (major assets), with no redundant or vague phrasing.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of fetching real-time crypto data, no annotations, no output schema, and minimal schema coverage, the description is incomplete. It doesn't address authentication details, rate limits, error cases, or the format of returned data (e.g., structure, units), which are essential for effective tool use in this context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds no parameter-specific information beyond what the schema provides. With 0% schema description coverage and one parameter (api_key), the score stays at the baseline of 3 because the schema documents the parameter only minimally. Still, the description itself never explains what the api_key is for or how to obtain one, missing an opportunity to add value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: retrieving current crypto prices with 24-hour percentage changes for major assets. It specifies the verb ('get') and resource ('prices'), though it doesn't explicitly differentiate from sibling tools like 'get_all_data' or 'get_market_brief', which might offer overlapping functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description mentions 'major assets' but doesn't clarify scope compared to siblings like 'get_all_data' (which might include more assets) or 'get_market_brief' (which might offer additional market context). There's no mention of prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_sentiment: Market Sentiment (grade B)
Annotations: Read-only, Idempotent

AI-derived sentiment scores for major crypto assets and macro themes.

Scores range from -1.0 (maximum bearish) to +1.0 (maximum bullish). Use to add conviction context to a signal: a BUY action with a high positive sentiment score is a stronger setup than one with neutral sentiment.

Response: dict keyed by asset symbol, each with score, label ("Very Bearish" … "Very Bullish"), and source_count.
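The description fixes the score range (-1.0 to +1.0) and the label endpoints ("Very Bearish" to "Very Bullish"), but not the exact label cutoffs. The sketch below assumes symmetric, evenly spaced thresholds purely for illustration; the service's real bucketing may differ.

```python
def sentiment_label(score: float) -> str:
    """Map a sentiment score to a label.

    The -0.6/-0.2/0.2/0.6 cutoffs are an assumption for illustration;
    only the range and the endpoint labels are documented by the tool.
    """
    if not -1.0 <= score <= 1.0:
        raise ValueError("score must be within [-1.0, 1.0]")
    if score <= -0.6:
        return "Very Bearish"
    if score <= -0.2:
        return "Bearish"
    if score < 0.2:
        return "Neutral"
    if score < 0.6:
        return "Bullish"
    return "Very Bullish"

def stronger_setup(action: str, score: float) -> bool:
    """Conviction check from the description: a BUY with strongly positive
    sentiment is a stronger setup than one with neutral sentiment."""
    return action == "BUY" and sentiment_label(score) in ("Bullish", "Very Bullish")

print(sentiment_label(0.75))        # Very Bullish
print(stronger_setup("BUY", 0.75))  # True
```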

Parameters (JSON Schema)
- symbol (optional): Asset to filter by: "BTC", "ETH", or "SOL". Leave empty ("") to get scores for all assets.
- api_key (optional): OctoData API key (format: octo_...). Free key (500 req/day): POST https://api.octodamus.com/v1/signup?email=YOUR_EMAIL. Premium key (10k req/day): call get_premium_api() or GET /v1/subscribe?plan=trial.

Output Schema (JSON Schema)

No output parameters

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the scoring range (-1.0 to +1.0) and optional filtering, but fails to disclose critical behavioral traits such as authentication requirements (implied by the 'api_key' parameter), rate limits, data freshness, or what happens when no symbol is provided. For a tool with no annotation coverage, this leaves significant gaps in understanding its operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and front-loaded, consisting of just two sentences that efficiently convey the core functionality and optional filtering. Every sentence earns its place by providing essential information without redundancy or fluff, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (retrieving AI sentiment scores with authentication), lack of annotations, no output schema, and incomplete parameter documentation, the description is insufficient. It doesn't explain the return format, error handling, or how the sentiment is calculated, leaving the agent with incomplete context to use the tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage, so the description must compensate. It adds meaning for the 'symbol' parameter by specifying allowed values (BTC, ETH, SOL) and indicating it's optional. However, it doesn't explain the 'api_key' parameter at all, leaving its purpose and format undocumented. With 2 parameters and only partial coverage in the description, this is inadequate compensation for the schema gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'AI sentiment scores for BTC, ETH, SOL and macro themes' with a specific scoring range (-1.0 to +1.0). It uses a specific verb ('get') and identifies the resource (sentiment scores for specific cryptocurrencies). However, it doesn't explicitly distinguish this tool from its siblings like 'get_agent_signal' or 'get_oracle_signals', which may also provide related financial signals.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage guidance by mentioning 'Optionally filter by symbol (BTC, ETH, SOL)', suggesting this tool can be used to retrieve sentiment scores broadly or filtered by specific symbols. However, it doesn't explicitly state when to use this tool versus alternatives like 'get_agent_signal' or 'get_oracle_signals', nor does it provide exclusions or prerequisites beyond the optional filtering.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
