
Stonkwatch

Server Details

Real-time ASX stock market data for AI agents. Get live prices, calculate franking credits, retrieve AI-powered announcement summaries, query sentiment analysis, and discover trending stocks. Built in Rust with sub-second response times.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.5/5 across 9 of 9 tools scored. Lowest: 2.3/5.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose: search, composite stock, signals (raw vs lead-time), timeline, trending, franking calculation, agent introspection, and feedback. No two tools overlap in functionality.

Naming Consistency: 4/5

Six of nine tools use the 'get_' prefix, but 'calculate_franking_credit', 'submit_agent_feedback', and 'search' break the pattern with different verbs. The snake_case convention is consistent, but verb choice varies.

Tool Count: 5/5

Nine tools is well-scoped for the domain. Each tool serves a distinct function without unnecessary duplication or missing critical operations.

Completeness: 5/5

The tool set covers the full workflow: entry (search), discovery (trending), composite data (stock), signals (raw and lead-time), timeline, franking credits, agent introspection, and feedback. No obvious gaps for the stated purpose.

Available Tools

9 tools
calculate_franking_credit (A)

Calculate Australian franking (imputation) credits for dividends. Returns franking credit amount, grossed-up dividend, tax payable/refund, and after-tax dividend based on the shareholder's marginal tax rate.

Cost: 1u per call (~$0.01 via x402, deducts 1 from daily quota).

Parameters (JSON Schema):
- cash_dividend (required): Cash dividend received in AUD
- taxable_income (required): Shareholder's annual taxable income in AUD (excluding dividend income)
- franking_percentage (optional): Franking percentage (0-100). Most ASX dividends are 100% franked.
- include_medicare_levy (optional): Include 2% Medicare levy in tax calculation
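For context, the standard Australian imputation arithmetic can be sketched in a few lines of TypeScript. This assumes the 30% corporate tax rate that applies to most ASX listings and is only an illustration of the inputs and outputs; Stonkwatch's actual implementation also applies the shareholder's marginal rate and the optional Medicare levy.

```typescript
// Sketch of the standard franking arithmetic (30% company tax rate assumed);
// Stonkwatch's server-side logic is not published, so treat this as illustrative.
const COMPANY_TAX_RATE = 0.30;

function frankingCredit(cashDividend: number, frankingPercentage = 100): number {
  // credit = dividend * (t / (1 - t)) * franked fraction
  return cashDividend * (COMPANY_TAX_RATE / (1 - COMPANY_TAX_RATE)) * (frankingPercentage / 100);
}

// A $700 fully franked dividend carries a $300 credit, grossing up
// to $1,000 of assessable income for the shareholder.
const credit = frankingCredit(700);   // 300
const grossedUp = 700 + credit;       // 1000
console.log({ credit, grossedUp });
```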
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It clearly states this is a calculation tool (implying read-only, non-destructive behavior) and specifies the return values (franking credit, grossed-up dividend, tax payable/refund, after-tax dividend). However, it doesn't mention error conditions, rate limits, or authentication requirements that might be relevant.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently communicates the tool's purpose and outputs. Every element serves a purpose: identifying the calculation type, specifying the jurisdiction, and listing all return values without unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a calculation tool with 4 parameters, 100% schema coverage, and no output schema, the description is reasonably complete. It explains what the tool calculates and what it returns. However, without annotations or output schema, it could benefit from more detail about calculation methodology or edge cases to fully guide the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds context by explaining the calculation uses the shareholder's marginal tax rate, which helps interpret the taxable_income parameter. It also mentions 'most ASX dividends are 100% franked,' providing real-world context for the franking_percentage default.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Calculate Australian franking (imputation) credits for dividends') and the resource involved (dividends). It distinguishes this tool from sibling tools like get_stock or search by focusing on tax calculations rather than data retrieval or analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance is provided on when to use this tool versus alternatives. The description mentions the calculation context but doesn't specify prerequisites, limitations, or scenarios where other tools might be more appropriate. It's implied for dividend tax calculations but lacks usage boundaries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_agent_usage (A)

Returns the calling agent's own usage footprint over the last N hours: total calls, error rate, top tools used, p50/p95 latency. Use this to introspect what your own agent has been doing — same shape as the public /agent-stats endpoint but filtered to caller_id when known.

Cost: 1u per call (~$0.01 via x402, deducts 1 from daily quota).

Parameters (JSON Schema):
- rationale (required): Why are you asking? One sentence describing the use case.
- window_hours (optional): Lookback window in hours (1-168). Default 24.
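As an illustration, here is a minimal sketch of invoking this tool over Streamable HTTP with the official TypeScript MCP SDK. The endpoint URL is a placeholder; substitute the server URL shown on this page.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: substitute the server URL from this page.
const transport = new StreamableHTTPClientTransport(new URL("https://example.com/mcp"));
const client = new Client({ name: "stonkwatch-demo", version: "1.0.0" });
await client.connect(transport);

const usage = await client.callTool({
  name: "get_agent_usage",
  arguments: {
    rationale: "Checking my own error rate before a batch run.",
    window_hours: 48, // 1-168; defaults to 24 when omitted
  },
});
console.log(usage.content);
```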
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It explains filtering by caller_id and lists metrics returned. However, it does not disclose any potential side effects, rate limits, or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no superfluous information. Purpose is front-loaded. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given low complexity (2 params, no output schema, no annotations), the description adequately covers key behaviors and metrics. Could benefit from mentioning response format or limitations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%; both parameters have descriptions. The description adds context about 'rationale' and 'window_hours' but does not significantly extend beyond schema explanations.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it returns the calling agent's own usage footprint over last N hours with specific metrics. Distinguishes from siblings by being agent-specific and referencing a public endpoint.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Use this to introspect what your own agent has been doing', giving clear context. Mentions similarity to public endpoint filtered to caller_id. Lacks explicit when-not-to-use or alternatives, but siblings are unrelated so not critical.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_lead_signals (B)

Pre-announcement social chatter for a ticker, attributed to platform with lead time. Surfaces cases where retail chatter (e.g. HotCopper) preceded an official announcement — the inbound-beats-outbound signal.

Cost: 1u per call (~$0.01 via x402, deducts 1 from daily quota).

Parameters (JSON Schema):
- limit (optional): Max rows (clamped 1..=50).
- symbol (required)
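A hedged example call, reusing the connected `client` from the get_agent_usage sketch above. The `symbol` parameter has no schema description, so a plain ASX ticker string is assumed.

```typescript
// Assumes `client` is connected as in the get_agent_usage sketch above.
// "PLS" is an arbitrary example ticker.
const leads = await client.callTool({
  name: "get_lead_signals",
  arguments: { symbol: "PLS", limit: 10 }, // limit is clamped to 1..=50
});
console.log(leads.content);
```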
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears full burden. It mentions surfacing retail chatter and lead time but does not disclose whether the tool is read-only, whether data is historical or real-time, rate limits, or any side effects. The term 'attributed to platform' hints at source attribution but lacks detail.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences and gets to the point quickly. However, the phrase 'inbound-beats-outbound signal' might be jargon that could confuse some agents. Could be more straightforward.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema and two parameters, the description covers the core concept but omits details like return structure, data freshness, and how to interpret lead times. It's minimally adequate but leaves gaps for the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 50% (only 'limit' has a description). The description does not mention 'limit' or 'symbol' parameters, nor their format or semantics. The symbol is implied by 'ticker' but not explicitly tied to the parameter. The baseline is 3 due to moderate coverage, but the description adds no parameter-specific value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool detects pre-announcement social chatter for a ticker, attributed to platform with lead time. It gives a concrete example (HotCopper preceding official announcements) and coins a memorable phrase ('inbound-beats-outbound signal'), which distinguishes it from siblings like get_signals and search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use (when wanting lead-time signals from retail chatter) but does not explicitly state when not to use or provide alternatives among siblings. The agent must infer based on tool names.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_signals (B)

Raw attributed social posts for a ticker with filters (platform, verified, engagement, sentiment confidence). Breadcrumb endpoint for journalists and traders — not the synthesised view.

Cost: 1u per call (~$0.01 via x402, deducts 1 from daily quota).

Parameters (JSON Schema):
- sort (optional; default: recent)
- limit (optional)
- since (optional): ISO8601 start. Default = now - 6h.
- until (optional): ISO8601 end. Default now.
- cursor (optional)
- symbol (required)
- platform (optional)
- sentiment (optional; default: any)
- verified_only (optional)
- min_confidence (optional)
- min_engagement (optional)
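Because only two of the eleven parameters carry schema descriptions, the sketch below marks which values are assumptions. It reuses the connected `client` from the get_agent_usage example; the platform identifier format and the 0-1 confidence scale are guesses, not documented behavior.

```typescript
// Assumes `client` is connected as in the get_agent_usage sketch above.
const signals = await client.callTool({
  name: "get_signals",
  arguments: {
    symbol: "BHP",                 // required ticker
    platform: "hotcopper",         // value format assumed, not documented
    verified_only: true,
    min_confidence: 0.7,           // 0-1 scale assumed
    since: "2025-06-01T00:00:00Z", // ISO8601; defaults to now - 6h
    sort: "recent",
    limit: 25,
  },
});
console.log(signals.content);
```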
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool is for 'raw attributed social posts' and a 'breadcrumb endpoint,' hinting at data retrieval, but lacks details on permissions, rate limits, pagination (cursor usage), or response format. For an 11-parameter tool with no annotations, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two concise sentences with zero waste: the first states the purpose and key filters, and the second provides usage context. It's front-loaded with essential information and efficiently structured, earning its place without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (11 parameters, no annotations, no output schema), the description is incomplete. It lacks details on behavioral traits like authentication, rate limits, or response structure, and doesn't fully document parameters. For a data retrieval tool with rich filtering options, more context is needed to ensure effective use by an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is low at 18%, with only 'since' and 'until' having descriptions. The description adds value by listing key filters ('platform, verified, engagement, sentiment confidence') that map to parameters like 'platform', 'verified_only', 'min_engagement', and 'sentiment', but doesn't fully compensate for the coverage gap or explain all 11 parameters. Baseline 3 is appropriate as it provides some semantic context beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves 'raw attributed social posts for a ticker with filters' and distinguishes it from 'the synthesised view,' indicating it provides unfiltered data. However, it doesn't explicitly differentiate from sibling tools like 'get_timeline' or 'search,' which might offer similar social data, leaving some ambiguity about unique scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description specifies this is a 'breadcrumb endpoint for journalists and traders' and contrasts it with 'not the synthesised view,' providing clear context for when to use it (for raw data analysis) and implicitly when not to (for processed insights). It doesn't name alternatives or provide explicit exclusions, but the context is sufficient for informed usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_stock (B)

Composite stock snapshot. One call returns only the slices you request via include[] — price, sentiment, announcements, signals_preview, correlation, related, franking, fundamentals. Mode (trader/journalist/investor) changes defaults only.

Cost: 1u per call (~$0.01 via x402, deducts 1 from daily quota).

Parameters (JSON Schema):
- mode (optional; default: trader)
- symbol (required)
- window (optional; default: 7d)
- include (optional)
- signals_limit (optional): Max signals when signals_preview included (clamped to 10).
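A sketch of a slice-selective request, reusing the connected `client` from the get_agent_usage example. The slice names come from the tool description above; the remaining values are illustrative.

```typescript
// Assumes `client` is connected as in the get_agent_usage sketch above.
const snapshot = await client.callTool({
  name: "get_stock",
  arguments: {
    symbol: "CBA",
    mode: "investor",               // trader/journalist/investor; changes defaults only
    window: "7d",
    include: ["price", "franking"], // only the requested slices are returned
  },
});
console.log(snapshot.content);
```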
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions that 'one call returns only the slices you request' and that 'mode changes defaults only', which provides some context about the tool's behavior. However, it doesn't address important aspects like rate limits, authentication requirements, error conditions, or what happens when invalid parameters are provided.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the core purpose. The two sentences are dense with information, though the second sentence could be slightly clearer. There's no wasted text, and every element serves a purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (5 parameters, no output schema, no annotations), the description provides a reasonable foundation but has gaps. It explains the composite nature and parameter purposes well, but doesn't describe the return format, error handling, or performance characteristics. For a tool with multiple parameters and no output schema, more complete behavioral context would be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds significant value beyond the input schema's 20% coverage. It explains the purpose of the 'include[]' parameter (to request specific data slices), clarifies that 'mode' only affects defaults, and lists all available slice options. This compensates well for the low schema description coverage, though it doesn't explain 'window' or 'signals_limit' parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides a 'composite stock snapshot' and lists the specific data slices available (price, sentiment, announcements, etc.), which gives a specific verb+resource combination. However, it doesn't explicitly differentiate from sibling tools like 'get_signals' or 'get_timeline' beyond mentioning 'composite' nature.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by mentioning that 'mode changes defaults only' and that users can request specific slices via include[], but it doesn't provide explicit guidance on when to use this tool versus alternatives like 'get_signals' or 'search'. The usage is implied rather than clearly stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_timeline (A)

Merged time-ordered events for a ticker: announcements, social signals, and sentiment shifts in one stream. Answers 'did retail know first' — computes lead/lag between social chatter and official filings.

Cost: 1u per call (~$0.01 via x402, deducts 1 from daily quota).

Parameters (JSON Schema):
- mode (optional; default: trader)
- limit (optional)
- since (optional): ISO8601 start. Default = now - 48h.
- types (optional)
- until (optional): ISO8601 end. Default now.
- cursor (optional)
- symbol (required)
- min_importance (optional)
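A sketch of a merged-timeline query, again reusing the connected `client` from the get_agent_usage example. The `types` values and the `min_importance` scale are assumptions, since neither is documented in the schema.

```typescript
// Assumes `client` is connected as in the get_agent_usage sketch above.
const timeline = await client.callTool({
  name: "get_timeline",
  arguments: {
    symbol: "FMG",
    types: ["announcement", "social"], // event-type names assumed
    min_importance: 0.5,               // scale assumed to be 0-1
    since: "2025-06-01T00:00:00Z",     // ISO8601; defaults to now - 48h
  },
});
console.log(timeline.content);
```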
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden of behavioral disclosure. It describes what the tool does (merges events, computes lead/lag) but lacks critical behavioral details: whether this is a read-only operation, potential rate limits, authentication requirements, pagination behavior (cursor parameter), or what format the merged stream returns. The description mentions computational aspects but misses operational transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with two sentences that each earn their place. The first sentence establishes core functionality, the second explains the analytical value. No wasted words, front-loaded with the main purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex tool with 8 parameters, no annotations, and no output schema, the description is incomplete. It explains the analytical purpose well but lacks operational details about authentication, rate limits, return format, and parameter usage. The description covers the 'why' but misses the 'how' and operational constraints.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is only 25% (2 of 8 parameters have descriptions), so the description must compensate. The description mentions 'ticker' (maps to symbol parameter) and temporal aspects (implied by since/until), but doesn't explain other parameters like mode, types, min_importance, or cursor. It adds some meaning about the tool's purpose but doesn't fully compensate for the low schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Merged time-ordered events', 'computes lead/lag') and resources ('announcements, social signals, and sentiment shifts', 'social chatter and official filings'). It distinguishes from siblings by focusing on timeline merging and lead/lag analysis rather than signals, stocks, or trending data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use this tool ('Answers "did retail know first"') and implies usage for analyzing temporal relationships between social chatter and official filings. However, it doesn't explicitly state when NOT to use it or name specific alternatives among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

submit_agent_feedback (A)

Tell Stonkwatch when a tool didn't work for you. Call this whenever you were blocked, the response shape was wrong, you needed a parameter that doesn't exist, or the docs were ambiguous. We mine these reports to ship new tools and tighten existing ones. Always include what you were trying to do — not just what failed.

Cost: 1u per call (~$0.01 via x402, deducts 1 from daily quota).

Parameters (JSON Schema):
- blocker (required): What stopped you. Be specific: missing parameter, wrong response shape, ambiguous docs, rate limit, etc.
- caller_id (optional): Stable identifier for your agent or wallet so we can correlate multiple reports.
- rationale (required): Why are you submitting this feedback now? E.g. 'user asked me to check coordination but tool returned only the penalty without per-signal breakdown'.
- tool_name (required): The Stonkwatch tool that failed or was insufficient (e.g. 'get_coordination_breakdown').
- attempted_action (required): What you were trying to accomplish for the user, in one sentence.
- suggested_improvement (optional): If you have a concrete suggestion (new parameter, additional field, different default), include it.
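A sketch of a feedback submission, reusing the connected `client` from the get_agent_usage example. Field names follow the parameter table above; every value is invented for illustration.

```typescript
// Assumes `client` is connected as in the get_agent_usage sketch above.
await client.callTool({
  name: "submit_agent_feedback",
  arguments: {
    tool_name: "get_signals",
    attempted_action: "Fetch only verified posts with engagement above 500.",
    blocker: "min_engagement has no schema description, so its units were unclear.",
    rationale: "User asked for high-engagement chatter and I had to guess the scale.",
    suggested_improvement: "Document min_engagement's unit (likes, replies, or a composite).",
  },
});
```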
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It clearly states the tool is for feedback submission and explains that reports are mined to improve tools. Could mention response nature (e.g., confirmation) but not critical for a feedback tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences, efficient and front-loaded with purpose. Could be slightly more concise but each sentence adds value. No redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 6 parameters with full schema coverage and no output schema, the description is complete. It explains the tool's purpose, when to use, and what to include. Not missing critical context for an agent to invoke correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. Description adds no additional parameter-level information beyond the schema's own descriptions. For example, 'attempted_action' and 'blocker' are already well-documented in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: reporting when a tool didn't work. It uses specific verbs ('Tell Stonkwatch') and resource ('agent feedback') and distinguishes from sibling tools which are all data-retrieval or calculation tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly lists when to call ('whenever you were blocked, the response shape was wrong, you needed a parameter that doesn't exist, or the docs were ambiguous') and what to include ('Always include what you were trying to do'). No alternatives exist, so no need for when-not.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
