Stonkwatch
Server Details
Real-time ASX stock market data for AI agents. Get live prices, calculate franking credits, retrieve AI-powered announcement summaries, query sentiment analysis, and discover trending stocks. Built in Rust with sub-second response times.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.5/5 across 9 of 9 tools scored. Lowest: 2.3/5.
Each tool has a clearly distinct purpose: search, composite stock, signals (raw vs lead-time), timeline, trending, franking calculation, agent introspection, and feedback. No two tools overlap in functionality.
Six of nine tools use the 'get_' prefix, but 'calculate_franking_credit', 'submit_agent_feedback', and 'search' break the pattern with different verbs. The snake_case convention is consistent, but verb choice varies.
Nine tools is well-scoped for the domain. Each tool serves a distinct function without unnecessary duplication or missing critical operations.
The tool set covers the full workflow: entry (search), discovery (trending), composite data (stock), signals (raw and lead-time), timeline, franking credits, agent introspection, and feedback. No obvious gaps for the stated purpose.
Available Tools
9 tools
calculate_franking_credit (Grade: A)
Calculate Australian franking (imputation) credits for dividends. Returns franking credit amount, grossed-up dividend, tax payable/refund, and after-tax dividend based on the shareholder's marginal tax rate.
Cost: 1u per call (~$0.01 via x402, deducts 1 from daily quota).
| Name | Required | Description | Default |
|---|---|---|---|
| cash_dividend | Yes | Cash dividend received in AUD | |
| taxable_income | Yes | Shareholder's annual taxable income in AUD (excluding dividend income) | |
| franking_percentage | No | Franking percentage (0-100). Most ASX dividends are 100% franked. | |
| include_medicare_levy | No | Include 2% Medicare levy in tax calculation | |
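For orientation, here is a minimal sketch of the arithmetic such a tool presumably performs, assuming the standard 30% ASX corporate tax rate. The server's exact bracket handling and Medicare levy treatment are undocumented, so the flat marginal rate below is illustrative only.

```python
# Minimal sketch of franking credit arithmetic, assuming the standard 30%
# ASX corporate tax rate. The flat marginal rate is illustrative only; the
# server's bracket handling and Medicare levy treatment are undocumented.
COMPANY_TAX_RATE = 0.30

def franking_credit(cash_dividend: float, franking_percentage: float = 100.0) -> float:
    """credit = dividend * (t / (1 - t)) * franked fraction"""
    return cash_dividend * (COMPANY_TAX_RATE / (1 - COMPANY_TAX_RATE)) * (franking_percentage / 100)

cash = 700.00
credit = franking_credit(cash)          # 300.00
grossed_up = cash + credit              # 1000.00 is added to taxable income
marginal_rate = 0.19                    # illustrative; actual rate depends on taxable_income
net_tax = grossed_up * marginal_rate - credit  # -110.00, i.e. a 110 AUD refund
```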
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It clearly states this is a calculation tool (implying read-only, non-destructive behavior) and specifies the return values (franking credit, grossed-up dividend, tax payable/refund, after-tax dividend). However, it doesn't mention error conditions, rate limits, or authentication requirements that might be relevant.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently communicates the tool's purpose and outputs. Every element serves a purpose: identifying the calculation type, specifying the jurisdiction, and listing all return values without unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a calculation tool with 4 parameters, 100% schema coverage, and no output schema, the description is reasonably complete. It explains what the tool calculates and what it returns. However, without annotations or output schema, it could benefit from more detail about calculation methodology or edge cases to fully guide the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds context by explaining the calculation uses the shareholder's marginal tax rate, which helps interpret the taxable_income parameter. It also mentions 'most ASX dividends are 100% franked,' providing real-world context for the franking_percentage default.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Calculate Australian franking (imputation) credits for dividends') and the resource involved (dividends). It distinguishes this tool from sibling tools like get_stock or search by focusing on tax calculations rather than data retrieval or analysis.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance is provided on when to use this tool versus alternatives. The description mentions the calculation context but doesn't specify prerequisites, limitations, or scenarios where other tools might be more appropriate. It's implied for dividend tax calculations but lacks usage boundaries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_agent_usage (Grade: A)
Returns the calling agent's own usage footprint over the last N hours: total calls, error rate, top tools used, p50/p95 latency. Use this to introspect what your own agent has been doing — same shape as the public /agent-stats endpoint but filtered to caller_id when known.
Cost: 1u per call (~$0.01 via x402, deducts 1 from daily quota).
| Name | Required | Description | Default |
|---|---|---|---|
| rationale | Yes | Why are you asking? One sentence describing the use case. | |
| window_hours | No | Lookback window in hours (1-168). Default 24. | |
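A minimal client-side sketch using the official MCP Python SDK over streamable HTTP; the endpoint URL is a placeholder, since the listing does not publish one.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def main() -> None:
    # Placeholder URL; substitute the server's real streamable HTTP endpoint.
    async with streamablehttp_client("https://example.com/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "get_agent_usage",
                {
                    "rationale": "Auditing my own call volume before a batch job.",
                    "window_hours": 24,
                },
            )
            print(result.content)

asyncio.run(main())
```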
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It explains filtering by caller_id and lists metrics returned. However, it does not disclose any potential side effects, rate limits, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no superfluous information. Purpose is front-loaded. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given low complexity (2 params, no output schema, no annotations), the description adequately covers key behaviors and metrics. Could benefit from mentioning response format or limitations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%; both parameters have descriptions. The description adds context about 'rationale' and 'window_hours' but does not significantly extend beyond schema explanations.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it returns the calling agent's own usage footprint over last N hours with specific metrics. Distinguishes from siblings by being agent-specific and referencing a public endpoint.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use this to introspect what your own agent has been doing', giving clear context. Mentions similarity to public endpoint filtered to caller_id. Lacks explicit when-not-to-use or alternatives, but siblings are unrelated so not critical.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_lead_signals (Grade: B)
Pre-announcement social chatter for a ticker, attributed to platform with lead time. Surfaces cases where retail chatter (e.g. HotCopper) preceded an official announcement — the inbound-beats-outbound signal.
Cost: 1u per call (~$0.01 via x402, deducts 1 from daily quota).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max rows (clamped 1..=50). | |
| symbol | Yes | | |
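The symbol parameter is undocumented; by analogy with the sibling tools it is presumably an ASX ticker, as in this guessed call:

```python
# Guessed get_lead_signals arguments: 'symbol' is undocumented, presumably
# an ASX ticker by analogy with the sibling tools.
args = {"symbol": "BHP", "limit": 20}  # limit is clamped to 1..=50
```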
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears full burden. It mentions surfacing retail chatter and lead time but does not disclose whether the tool is read-only, whether data is historical or real-time, rate limits, or any side effects. The term 'attributed to platform' hints at source attribution but lacks detail.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences and gets to the point quickly. However, 'inbound-beats-outbound signal' is jargon that may confuse some agents; the phrasing could be more straightforward.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema and two parameters, the description covers the core concept but omits details like return structure, data freshness, and how to interpret lead times. It's minimally adequate but leaves gaps for the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 50% (only 'limit' has a description). The description does not mention 'limit' or 'symbol' parameters, nor their format or semantics. The symbol is implied by 'ticker' but not explicitly tied to the parameter. The baseline is 3 due to moderate coverage, but the description adds no parameter-specific value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool detects pre-announcement social chatter for a ticker, attributed to platform with lead time. It gives a concrete example (HotCopper preceding official announcements) and coins a memorable phrase ('inbound-beats-outbound signal'), which distinguishes it from siblings like get_signals and search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use (when wanting lead-time signals from retail chatter) but does not explicitly state when not to use or provide alternatives among siblings. The agent must infer based on tool names.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_signals (Grade: B)
Raw attributed social posts for a ticker with filters (platform, verified, engagement, sentiment confidence). Breadcrumb endpoint for journalists and traders — not the synthesised view.
Cost: 1u per call (~$0.01 via x402, deducts 1 from daily quota).
| Name | Required | Description | Default |
|---|---|---|---|
| sort | No | | recent |
| limit | No | | |
| since | No | ISO8601 start. Default = now - 6h. | |
| until | No | ISO8601 end. Default now. | |
| cursor | No | | |
| symbol | Yes | | |
| platform | No | | |
| sentiment | No | | any |
| verified_only | No | | |
| min_confidence | No | | |
| min_engagement | No | | |
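Only since and until are documented, so most of the example below is guesswork: the platform value and the engagement and confidence scales are assumptions, not documented enums.

```python
# Hypothetical get_signals arguments. Only since/until formats are
# documented; the other values are assumptions inferred from the prose.
args = {
    "symbol": "BHP",                   # ASX ticker, inferred from "for a ticker"
    "platform": "hotcopper",           # assumed value, based on the HotCopper example
    "verified_only": True,
    "min_engagement": 10,              # assumed integer threshold
    "min_confidence": 0.7,             # assumed 0-1 confidence scale
    "sentiment": "any",                # schema default
    "sort": "recent",                  # schema default
    "since": "2025-01-07T00:00:00Z",   # ISO8601 start (default: now - 6h)
    "until": "2025-01-07T06:00:00Z",   # ISO8601 end (default: now)
    "limit": 25,
}
```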
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool is for 'raw attributed social posts' and a 'breadcrumb endpoint,' hinting at data retrieval, but lacks details on permissions, rate limits, pagination (cursor usage), or response format. For an 11-parameter tool with no annotations, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences with zero waste: the first states the purpose and key filters, and the second provides usage context. It's front-loaded with essential information and efficiently structured, earning its place without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (11 parameters, no annotations, no output schema), the description is incomplete. It lacks details on behavioral traits like authentication, rate limits, or response structure, and doesn't fully document parameters. For a data retrieval tool with rich filtering options, more context is needed to ensure effective use by an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is low at 18%, with only 'since' and 'until' having descriptions. The description adds value by listing key filters ('platform, verified, engagement, sentiment confidence') that map to parameters like 'platform', 'verified_only', 'min_engagement', and 'sentiment', but doesn't fully compensate for the coverage gap or explain all 11 parameters. Baseline 3 is appropriate as it provides some semantic context beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'raw attributed social posts for a ticker with filters' and distinguishes it from 'the synthesised view,' indicating it provides unfiltered data. However, it doesn't explicitly differentiate from sibling tools like 'get_timeline' or 'search,' which might offer similar social data, leaving some ambiguity about unique scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description specifies this is a 'breadcrumb endpoint for journalists and traders' and contrasts it with 'not the synthesised view,' providing clear context for when to use it (for raw data analysis) and implicitly when not to (for processed insights). It doesn't name alternatives or provide explicit exclusions, but the context is sufficient for informed usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_stock (Grade: B)
Composite stock snapshot. One call returns only the slices you request via include[] — price, sentiment, announcements, signals_preview, correlation, related, franking, fundamentals. Mode (trader/journalist/investor) changes defaults only.
Cost: 1u per call (~$0.01 via x402, deducts 1 from daily quota).
| Name | Required | Description | Default |
|---|---|---|---|
| mode | No | | trader |
| symbol | Yes | | |
| window | No | | 7d |
| include | No | | |
| signals_limit | No | Max signals when signals_preview included (clamped to 10). | |
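The include[] mechanism is the best-documented part of this tool; a plausible call, with slice names taken verbatim from the description:

```python
# Plausible get_stock arguments; slice names come verbatim from the tool
# description, and mode/window echo the schema defaults.
args = {
    "symbol": "BHP",
    "mode": "investor",       # trader / journalist / investor; changes defaults only
    "window": "7d",           # schema default
    "include": ["price", "sentiment", "franking", "signals_preview"],
    "signals_limit": 5,       # only meaningful when signals_preview is included
}
```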
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions that 'one call returns only the slices you request' and that 'mode changes defaults only', which provides some context about the tool's behavior. However, it doesn't address important aspects like rate limits, authentication requirements, error conditions, or what happens when invalid parameters are provided.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the core purpose. The two sentences are dense with information, though the second sentence could be slightly clearer. There's no wasted text, and every element serves a purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (5 parameters, no output schema, no annotations), the description provides a reasonable foundation but has gaps. It explains the composite nature and parameter purposes well, but doesn't describe the return format, error handling, or performance characteristics. For a tool with multiple parameters and no output schema, more complete behavioral context would be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds significant value beyond the input schema's 20% coverage. It explains the purpose of the 'include[]' parameter (to request specific data slices), clarifies that 'mode' only affects defaults, and lists all available slice options. This compensates well for the low schema description coverage, though it doesn't explain 'window' or 'signals_limit' parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool provides a 'composite stock snapshot' and lists the specific data slices available (price, sentiment, announcements, etc.), which gives a specific verb+resource combination. However, it doesn't explicitly differentiate from sibling tools like 'get_signals' or 'get_timeline' beyond mentioning 'composite' nature.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning that 'mode changes defaults only' and that users can request specific slices via include[], but it doesn't provide explicit guidance on when to use this tool versus alternatives like 'get_signals' or 'search'. The usage is implied rather than clearly stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_timeline (Grade: A)
Merged time-ordered events for a ticker: announcements, social signals, and sentiment shifts in one stream. Answers 'did retail know first' — computes lead/lag between social chatter and official filings.
Cost: 1u per call (~$0.01 via x402, deducts 1 from daily quota).
| Name | Required | Description | Default |
|---|---|---|---|
| mode | No | | trader |
| limit | No | | |
| since | No | ISO8601 start. Default = now - 48h. | |
| types | No | | |
| until | No | ISO8601 end. Default now. | |
| cursor | No | | |
| symbol | Yes | | |
| min_importance | No | | |
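With only since and until documented (2 of 8 parameters), most values in this sketch are labelled guesses:

```python
# Hypothetical get_timeline arguments. The types names and min_importance
# scale are undocumented; the values shown are guesses, not documented enums.
args = {
    "symbol": "BHP",
    "since": "2025-01-05T00:00:00Z",      # ISO8601 start (default: now - 48h)
    "until": "2025-01-07T00:00:00Z",      # ISO8601 end (default: now)
    "types": ["announcement", "social"],  # guessed event-type names
    "min_importance": 0.5,                # guessed 0-1 importance scale
    "limit": 50,
}
```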
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden of behavioral disclosure. It describes what the tool does (merges events, computes lead/lag) but lacks critical behavioral details: whether this is a read-only operation, potential rate limits, authentication requirements, pagination behavior (cursor parameter), or what format the merged stream returns. The description mentions computational aspects but misses operational transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that each earn their place. The first sentence establishes core functionality, the second explains the analytical value. No wasted words, front-loaded with the main purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex tool with 8 parameters, no annotations, and no output schema, the description is incomplete. It explains the analytical purpose well but lacks operational details about authentication, rate limits, return format, and parameter usage. The description covers the 'why' but misses the 'how' and operational constraints.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is only 25% (2 of 8 parameters have descriptions), so the description must compensate. The description mentions 'ticker' (maps to symbol parameter) and temporal aspects (implied by since/until), but doesn't explain other parameters like mode, types, min_importance, or cursor. It adds some meaning about the tool's purpose but doesn't fully compensate for the low schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Merged time-ordered events', 'computes lead/lag') and resources ('announcements, social signals, and sentiment shifts', 'social chatter and official filings'). It distinguishes from siblings by focusing on timeline merging and lead/lag analysis rather than signals, stocks, or trending data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool ('Answers "did retail know first"') and implies usage for analyzing temporal relationships between social chatter and official filings. However, it doesn't explicitly state when NOT to use it or name specific alternatives among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_trending (Grade: A)
Discovery entry point: rank ASX stocks by membrane-potential activity score.
Cost: 1u per call (~$0.01 via x402, deducts 1 from daily quota).
| Name | Required | Description | Default |
|---|---|---|---|
| mode | No | Presentation mode (ranking is persona-agnostic) | trader |
| limit | No | Number of results (default 20, max 100) | |
| window | No | Ranking window | 1d |
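All three parameters are documented with defaults, so a call is trivial; the values below are illustrative overrides:

```python
# Illustrative get_trending arguments; every field is optional, and the
# documented defaults are trader / 20 / 1d.
args = {"mode": "journalist", "window": "1d", "limit": 10}
```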
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden but only states it's a ranking tool without disclosing behavioral traits like whether it's read-only, requires authentication, has rate limits, or what the output format looks like. The 'discovery entry point' phrasing adds some context but lacks operational details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the key information ('discovery entry point' and ranking function) with zero wasted words. Every part of the sentence contributes to understanding the tool's purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a ranking tool with 3 parameters, 100% schema coverage, and no output schema, the description is minimally adequate. It states what the tool does but lacks details about output format, ranking methodology, or error conditions that would be helpful given the absence of annotations and output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all parameters. The description doesn't add any parameter-specific meaning beyond what's in the schema (e.g., it doesn't explain what 'membrane-potential activity score' means or how ranking works). Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('rank'), resource ('ASX stocks'), and metric ('membrane-potential activity score'), distinguishing it from siblings like get_stock (individual stock data) or search (general search). It also frames it as a 'discovery entry point' which helps contextualize its role.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for discovery/ranking scenarios but doesn't explicitly state when to use this tool versus alternatives like get_signals or search. It mentions 'persona-agnostic' ranking in the schema, but this isn't a clear usage guideline in the description itself.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search (Grade: C)
Resolve a query to a canonical stonkwatch ticker entity. Entry point for all session workflows.
Cost: 1u per call (~$0.01 via x402, deducts 1 from daily quota).
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | | |
| limit | No | | |
| query | Yes | | |
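With 0% schema coverage, any example is guesswork; presumably query accepts a ticker or company name, and type is omitted here because its values are unknown:

```python
# Guesswork: 'query' presumably accepts a ticker or company name. The
# 'type' parameter is omitted because its enum values are undocumented.
args = {"query": "BHP Group", "limit": 5}
```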
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool resolves queries to ticker entities, but doesn't describe how it behaves—e.g., whether it performs fuzzy matching, returns multiple results, handles errors, or requires authentication. This is inadequate for a tool with 3 parameters and no output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two sentences that are front-loaded: the first states the purpose, and the second adds contextual role. There's no wasted text, but it could be more structured by explicitly separating purpose from usage guidance.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (3 parameters, 0% schema coverage, no output schema, no annotations), the description is incomplete. It doesn't explain return values, error handling, or how parameters interact, leaving significant gaps for an agent to use the tool correctly in a session workflow context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 0%, meaning none of the 3 parameters (query, type, limit) are documented in the schema. The description doesn't add any meaning beyond the schema—it doesn't explain what 'query' should contain, what 'type' enum values mean, or how 'limit' affects results. This fails to compensate for the lack of schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool 'Resolve a query to a canonical stonkwatch ticker entity', which provides a clear verb ('Resolve') and resource ('ticker entity'), but it's vague about what 'resolve' entails (e.g., search, lookup, match) and doesn't distinguish from siblings like 'get_stock' or 'get_signals'. The phrase 'Entry point for all session workflows' adds context but doesn't clarify the specific action.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions this is an 'Entry point for all session workflows', implying it's used to start sessions, but it doesn't provide explicit guidance on when to use this tool versus alternatives like 'get_stock' or 'get_trending'. There's no mention of prerequisites, exclusions, or specific scenarios, leaving usage unclear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
submit_agent_feedback (Grade: A)
Tell Stonkwatch when a tool didn't work for you. Call this whenever you were blocked, the response shape was wrong, you needed a parameter that doesn't exist, or the docs were ambiguous. We mine these reports to ship new tools and tighten existing ones. Always include what you were trying to do — not just what failed.
Cost: 1u per call (~$0.01 via x402, deducts 1 from daily quota).
| Name | Required | Description | Default |
|---|---|---|---|
| blocker | Yes | What stopped you. Be specific: missing parameter, wrong response shape, ambiguous docs, rate limit, etc. | |
| caller_id | No | Optional. Stable identifier for your agent or wallet so we can correlate multiple reports. | |
| rationale | Yes | Why are you submitting this feedback now? E.g. 'user asked me to check coordination but tool returned only the penalty without per-signal breakdown'. | |
| tool_name | Yes | The Stonkwatch tool that failed or was insufficient (e.g. 'get_coordination_breakdown'). | |
| attempted_action | Yes | What you were trying to accomplish for the user, in one sentence. | |
| suggested_improvement | No | Optional. If you have a concrete suggestion (new parameter, additional field, different default), include it. | |
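Because the schema is fully documented, a complete payload can be assembled from the parameter descriptions alone. The failing tool name below is the schema's own example, not a tool that exists on this server:

```python
# Example submit_agent_feedback payload assembled from the documented
# parameter descriptions. 'get_coordination_breakdown' is the schema's own
# example of a failing tool, not a tool on this server.
args = {
    "tool_name": "get_coordination_breakdown",
    "attempted_action": "Check coordinated posting around a ticker for the user.",
    "blocker": "Tool returned only the penalty without a per-signal breakdown.",
    "rationale": "User asked me to check coordination but the output was insufficient.",
    "caller_id": "agent-wallet-0xABC",  # optional stable id for correlating reports
    "suggested_improvement": "Return per-signal scores alongside the penalty.",
}
```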
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It clearly states the tool is for feedback submission and explains that reports are mined to improve tools. Could mention response nature (e.g., confirmation) but not critical for a feedback tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, efficient and front-loaded with purpose. Could be slightly more concise but each sentence adds value. No redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 6 parameters with full schema coverage and no output schema, the description is complete. It explains the tool's purpose, when to use, and what to include. Not missing critical context for an agent to invoke correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. Description adds no additional parameter-level information beyond the schema's own descriptions. For example, 'attempted_action' and 'blocker' are already well-documented in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: reporting when a tool didn't work. It uses specific verbs ('Tell Stonkwatch') and resource ('agent feedback') and distinguishes from sibling tools which are all data-retrieval or calculation tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly lists when to call ('whenever you were blocked, the response shape was wrong, you needed a parameter that doesn't exist, or the docs were ambiguous') and what to include ('Always include what you were trying to do'). No alternatives exist, so no need for when-not.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.