TroyStack
Server Details
Live precious metals prices, COMEX vault data, market intelligence, and calculators.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.3/5 across 12 of 12 tools scored. Lowest: 2.6/5.
Every tool has a clearly distinct purpose with no ambiguity. For example, add_holding is for adding purchases, get_portfolio returns portfolio data, get_spot_prices provides current prices, and scan_receipt extracts data from images. The descriptions clearly differentiate each tool's function within the precious metals domain.
Tool names follow a highly consistent verb_noun pattern throughout, with all tools using snake_case. Most start with 'get_' for retrieval operations (e.g., get_analytics, get_portfolio), while others use specific verbs like 'add_holding', 'chat_with_troy', and 'scan_receipt', maintaining readability and predictability.
With 12 tools, the count is well-scoped for a precious metals portfolio and market analysis server. Each tool earns its place by covering distinct aspects such as portfolio management, market data, analytics, and AI features, without being overwhelming or insufficient for the domain.
The tool surface provides complete coverage for the precious metals domain, including CRUD-like operations (add_holding, get_portfolio), market intelligence (get_spot_prices, get_price_history, get_stack_signal), analytics (get_analytics, get_speculation), and specialized utilities (scan_receipt, get_junk_silver). No obvious gaps exist for core workflows.
Available Tools
12 tools

add_holding: Add Holding (grade C)
Add a precious metals purchase to your portfolio
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | Product name (e.g. "2024 American Silver Eagle") | |
| metal | Yes | Metal type | |
| api_key | Yes | TroyStack API key (required) | |
| quantity | Yes | Number of pieces | |
| weight_oz | Yes | Weight per piece in troy ounces | |
| purchase_date | No | Purchase date YYYY-MM-DD (default: today) | |
| purchase_price | Yes | Price per piece in USD | |
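To make the parameter table concrete, here is a minimal sketch of assembling an add_holding arguments object. The field names come from the table above; the helper function name and the api_key value are hypothetical placeholders, not part of the TroyStack API.

```python
def build_add_holding_args(api_key, metal, quantity, weight_oz, purchase_price,
                           type=None, purchase_date=None):
    """Assemble add_holding arguments; optional fields are omitted when unset."""
    args = {
        "api_key": api_key,
        "metal": metal,
        "quantity": quantity,
        "weight_oz": weight_oz,
        "purchase_price": purchase_price,
    }
    if type is not None:
        args["type"] = type
    if purchase_date is not None:
        args["purchase_date"] = purchase_date  # defaults server-side to today
    return args

args = build_add_holding_args(
    api_key="ts_live_xxx",          # placeholder, not a real credential
    metal="silver",
    quantity=20,
    weight_oz=1.0,                  # troy ounces per piece
    purchase_price=32.50,           # USD per piece
    type="2024 American Silver Eagle",
)
```

Omitting `purchase_date` leaves it out of the payload entirely, letting the server apply its documented default of today's date.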
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states this is an 'Add' operation, which implies a write/mutation, but doesn't mention authentication requirements (though 'api_key' is in schema), whether this is idempotent, what happens on duplicate entries, or what the response looks like. Significant behavioral details are missing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that gets straight to the point with zero wasted words. It's appropriately sized for a tool with this complexity and is perfectly front-loaded with the essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a write/mutation tool with 7 parameters, no annotations, and no output schema, the description is insufficient. It doesn't explain what constitutes a successful operation, what the return value might be, error conditions, or how this tool fits into the broader portfolio management workflow. The agent lacks critical context for proper usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description doesn't add any additional parameter semantics beyond what's in the schema; it doesn't explain relationships between parameters (e.g., how 'quantity' and 'weight_oz' interact) or provide usage examples. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Add') and resource ('precious metals purchase to your portfolio'), making the purpose immediately understandable. However, it doesn't explicitly differentiate from sibling tools like 'scan_receipt' or 'get_portfolio' which might have overlapping functionality, preventing a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. For example, it doesn't mention whether this is for manual entry versus automated imports, or how it relates to tools like 'scan_receipt' or 'get_portfolio'. The description only states what it does, not when it's appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
chat_with_troy: Chat with Troy (grade A)
Ask Troy anything about precious metals, markets, your stack. Provide api_key for personalized portfolio-aware answers, or omit for general market analysis.
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | No | TroyStack API key for portfolio-aware responses (optional) | |
| message | Yes | Your question or message for Troy |
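The two usage modes the description names (authenticated vs. general) can be sketched as a small argument builder. The function name and api_key value are illustrative assumptions; only the `message` and `api_key` field names come from the schema above.

```python
def build_chat_args(message, api_key=None):
    """chat_with_troy arguments: api_key is optional per the schema."""
    args = {"message": message}
    if api_key is not None:
        args["api_key"] = api_key   # enables portfolio-aware answers
    return args

# Omit api_key for general market analysis:
general = build_chat_args("What is driving silver premiums right now?")
# Provide api_key (placeholder here) for personalized answers:
personal = build_chat_args("How is my stack performing?", api_key="ts_live_xxx")
```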
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the api_key requirement for personalized responses and the optional nature of authentication, but doesn't describe response format, rate limits, error conditions, or other operational behaviors. It provides basic context but lacks comprehensive behavioral details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that each earn their place: the first establishes the tool's purpose and scope, the second provides clear usage guidance. No wasted words, front-loaded with essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a conversational tool with no annotations and no output schema, the description provides adequate basic context about purpose and authentication modes. However, it doesn't explain what kind of responses to expect, whether there are conversation history considerations, or other contextual details that would help the agent use it effectively in complex scenarios.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description adds marginal value by reinforcing the api_key's purpose ('for portfolio-aware responses') and the message parameter's role ('Your question or message'), but doesn't provide additional syntax, format, or constraint details beyond what the schema specifies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Ask Troy anything') and resources ('precious metals, markets, your stack'), distinguishing it from sibling tools like get_spot_prices or get_portfolio by focusing on conversational Q&A rather than data retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly provides when-to-use guidance: use with api_key for 'personalized portfolio-aware answers' or omit it for 'general market analysis'. This clearly distinguishes between authenticated and unauthenticated use cases, helping the agent choose appropriately.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_analytics: Get Portfolio Analytics (grade C)
Cost basis, average cost per oz, break-even price, unrealized P/L per metal
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | TroyStack API key (required) | |
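The metrics this tool lists (cost basis, average cost per oz, break-even price, unrealized P/L) follow from standard portfolio arithmetic. The sketch below shows one plausible reading of those definitions; the holding field names and the assumption that break-even equals average cost per ounce are ours, not confirmed by the server.

```python
def analytics_per_metal(holdings, spot_price):
    """Compute cost basis, average cost/oz, break-even, and unrealized P/L
    for a list of holdings of a single metal. Field names are assumed."""
    total_oz = sum(h["quantity"] * h["weight_oz"] for h in holdings)
    cost_basis = sum(h["quantity"] * h["purchase_price"] for h in holdings)
    avg_cost_per_oz = cost_basis / total_oz
    break_even = avg_cost_per_oz          # spot at which value equals cost
    unrealized_pl = total_oz * spot_price - cost_basis
    return {
        "cost_basis": cost_basis,
        "avg_cost_per_oz": avg_cost_per_oz,
        "break_even": break_even,
        "unrealized_pl": unrealized_pl,
    }

silver = [
    {"quantity": 20, "weight_oz": 1.0, "purchase_price": 28.00},
    {"quantity": 5,  "weight_oz": 1.0, "purchase_price": 32.00},
]
stats = analytics_per_metal(silver, spot_price=31.00)
# 25 oz at a $720 cost basis gives a $28.80/oz average cost.
```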
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. The description lists output metrics but doesn't describe how the tool behaves: it doesn't mention whether this is a read-only operation, requires authentication via api_key, involves calculations, or has any rate limits or side effects. For a tool with no annotations, this leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient phrase listing key metrics without unnecessary words. It's appropriately sized for its purpose, though adding a leading verb would clarify the action. Every term earns its place by specifying what analytics are provided.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of financial analytics, no annotations, and no output schema, the description is incomplete. It lists metrics but doesn't explain how they are calculated, returned, or formatted. For a tool that likely involves data processing and authentication, more context on behavior and output is needed to be fully helpful to an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the single parameter 'api_key' documented as 'TroyStack API key (required)'. The description adds no additional parameter information beyond what the schema provides. With high schema coverage, the baseline score is 3, as the description doesn't compensate but also doesn't detract from the schema's documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description lists specific metrics (cost basis, average cost per oz, break-even price, unrealized P/L per metal) which gives some indication of what the tool returns, but it lacks a clear verb-action statement like 'retrieve' or 'calculate'. It doesn't explicitly state that this tool provides analytics for a portfolio, though the title suggests this. The description distinguishes from siblings by focusing on specific financial metrics rather than holdings, prices, or other functions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'get_portfolio' or 'get_spot_prices'. The description implies usage for portfolio analytics but doesn't specify prerequisites, context, or exclusions. Without explicit when/when-not statements or named alternatives, the agent must infer usage from the metrics listed.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_daily_brief: Get Daily Brief (grade A)
Troy's daily market brief. Provide api_key for personalized brief, or omit for the latest Stack Signal market synthesis.
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | No | TroyStack API key for personalized brief (optional) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool can return either personalized or general content based on the api_key parameter, which is useful behavioral context. However, it doesn't mention authentication requirements, rate limits, data freshness, or what format the brief returns (text, structured data, etc.), leaving significant gaps for a tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two clear sentences that each serve a distinct purpose: first establishes the resource, second explains parameter usage. There's no wasted verbiage, and the most important information (what the tool returns and how to control it) is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description adequately covers the basic functionality and parameter usage. However, for a tool that presumably returns market analysis content, it should ideally mention the format of the return value (e.g., text summary, structured data) and any authentication or rate limit considerations. The description is minimally complete but lacks depth for confident agent usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds meaningful context about the api_key parameter's purpose: it enables 'personalized brief' versus 'latest Stack Signal market synthesis' when omitted. Since schema description coverage is 100% (the schema already documents the parameter as optional), the description provides valuable semantic differentiation beyond the schema's basic documentation, though it doesn't explain format or validation details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: retrieving 'Troy's daily market brief' with the specific resource identified. It distinguishes between two usage modes (personalized vs. latest synthesis), though it doesn't explicitly differentiate from sibling tools like 'get_stack_signal' or 'get_spot_prices' which might also provide market information.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use different modes: include api_key for a personalized brief, omit it for the latest market synthesis. However, it doesn't specify when to choose this tool over alternatives like 'get_stack_signal' or 'get_spot_prices' from the sibling list, nor does it mention any prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_junk_silver: Calculate Junk Silver Melt Value (grade B)
Calculates silver melt value for pre-1965 US coinage
| Name | Required | Description | Default |
|---|---|---|---|
| dimes | No | Roosevelt/Mercury dime count (0.07234 oz Ag each) | |
| dollars | No | Morgan/Peace dollar count (0.77344 oz Ag each) | |
| quarters | No | Washington quarter count (0.18084 oz Ag each) | |
| kennedy_40 | No | Kennedy 1965-1970 40% silver halves (0.14792 oz Ag each) | |
| war_nickels | No | Jefferson 1942-1945 war nickel count (0.05626 oz Ag each) | |
| half_dollars | No | Walking Liberty/Franklin/1964 Kennedy half count (0.36169 oz Ag each) | |
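The per-coin silver contents in the table make the melt-value arithmetic easy to reproduce. This sketch uses exactly those figures; the function name and the caller-supplied spot price are our assumptions (the server presumably uses live spot).

```python
# Troy ounces of silver per coin, taken from the parameter table above.
SILVER_OZ = {
    "dimes": 0.07234,
    "quarters": 0.18084,
    "half_dollars": 0.36169,
    "dollars": 0.77344,
    "war_nickels": 0.05626,
    "kennedy_40": 0.14792,
}

def junk_silver_melt(counts, spot_usd_per_oz):
    """Return (total silver oz, melt value in USD) for the given coin counts."""
    total_oz = sum(SILVER_OZ[kind] * n for kind, n in counts.items())
    return total_oz, total_oz * spot_usd_per_oz

# A roll of dimes (50) plus a roll of quarters (40) at an assumed $30/oz spot:
oz, value = junk_silver_melt({"dimes": 50, "quarters": 40}, spot_usd_per_oz=30.0)
```

All parameters being optional matches this shape: any coin type the caller omits simply contributes nothing to the total.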
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states what the tool does but doesn't describe how it behaves: no information about required permissions, rate limits, error handling, or what the output looks like (e.g., currency, format). For a calculation tool with zero annotation coverage, this leaves significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that gets straight to the point. It's appropriately sized for a calculation tool and wastes no words while clearly stating the core functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a calculation tool with rich schema documentation (100% coverage) but no annotations and no output schema, the description is minimally adequate. It states what the tool calculates but doesn't provide context about the calculation method, output format, or how it integrates with other tools. The schema compensates for parameter documentation, but behavioral aspects remain undocumented.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description doesn't mention any parameters, but the input schema has 100% description coverage with detailed explanations of each coin type and silver content. Since the schema does the heavy lifting, the baseline score of 3 is appropriate; the description adds no parameter semantics beyond what's already in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Calculates silver melt value for pre-1965 US coinage'. It specifies both the action ('calculates') and the resource ('silver melt value'), though it doesn't explicitly differentiate from siblings like 'get_spot_prices' or 'get_portfolio' which might provide related financial data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'get_spot_prices' (which might provide current silver prices) or 'get_portfolio' (which might track holdings), nor does it specify prerequisites or exclusions for its use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_portfolio: Get Portfolio (grade B)
Returns your precious metals portfolio: total value, cost basis, gain/loss, per-metal breakdown, and individual holdings
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | TroyStack API key (required) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden but offers minimal behavioral context. It implies a read-only operation ('Returns') but doesn't disclose authentication requirements (though the schema shows api_key), rate limits, error conditions, or whether data is real-time vs. cached. The description adds some value by listing return fields but lacks crucial operational details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose and immediately lists all return data. Every word earns its place with zero redundancy or unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only portfolio retrieval tool with 1 parameter (fully documented in schema) but no annotations or output schema, the description is minimally adequate. It clearly states what data is returned, but lacks context about data freshness, authentication flow, error handling, or relationship to sibling tools. The absence of output schema means the description should ideally explain return structure more thoroughly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (api_key parameter fully documented), so baseline is 3. The description adds no parameter-specific information beyond what the schema already provides about the api_key requirement. No additional syntax, format, or usage context is given for parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Returns') and resource ('your precious metals portfolio'), listing the exact data returned (total value, cost basis, gain/loss, per-metal breakdown, individual holdings). It distinguishes from siblings like get_spot_prices (spot prices only) or get_analytics (analytics data).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when to choose get_portfolio over get_analytics for portfolio insights, or when to use get_spot_prices instead for price-only data. No prerequisites or exclusions are stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_price_history: Get Price History (grade B)
Historical spot price data for a precious metal over a time range
| Name | Required | Description | Default |
|---|---|---|---|
| metal | Yes | The metal to fetch history for | |
| range | No | Time range (default 1Y) | 1Y |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It implies a read-only operation but doesn't specify authentication needs, rate limits, error conditions, or what the return data looks like (e.g., format, granularity). For a data retrieval tool with zero annotation coverage, this leaves significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose. Every word earns its place without redundancy or unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read operation with two parameters and 100% schema coverage, the description is minimally adequate. However, with no annotations and no output schema, it should ideally provide more context about return format, data structure, or usage constraints to be fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with both parameters well-documented in the schema (metal with enum values, range with enum and default). The description adds no additional parameter semantics beyond what's in the schema, so it meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: retrieving historical spot price data for precious metals over a time range. It specifies the verb ('get'), resource ('price history'), and scope ('precious metal', 'time range'), but doesn't explicitly differentiate from sibling tools like 'get_spot_prices' or 'get_daily_brief'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'get_spot_prices' (likely for current prices) or 'get_daily_brief', nor does it specify prerequisites or exclusions for usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_speculation: Run What-If Price Scenario (grade C)
Calculates price multipliers comparing current spot to hypothetical target prices
| Name | Required | Description | Default |
|---|---|---|---|
| gold | No | Target gold price per oz | |
| silver | No | Target silver price per oz | |
| platinum | No | Target platinum price per oz | |
| palladium | No | Target palladium price per oz | |
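The review below notes that 'price multipliers' is never defined. One natural reading, sketched here as an assumption rather than the server's confirmed behavior, is target price divided by current spot, computed per metal for whichever targets the caller supplies. The current spot figures are illustrative, not live.

```python
def speculation_multipliers(targets, current_spots):
    """Multiplier per metal: hypothetical target price / current spot price.
    Only metals present in `targets` are computed (all params are optional)."""
    return {
        metal: targets[metal] / current_spots[metal]
        for metal in targets
    }

mult = speculation_multipliers(
    targets={"gold": 5000.0, "silver": 100.0},   # hypothetical targets
    current_spots={"gold": 2500.0, "silver": 31.25},  # illustrative spots
)
# Gold at $5000 against a $2500 spot is a 2x multiplier.
```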
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions calculation but doesn't specify whether this is a read-only operation, what data sources it uses (e.g., real-time spot prices), whether it requires authentication, or what the output format looks like. For a calculation tool with zero annotation coverage, this leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that communicates the core purpose without any wasted words. It's appropriately sized and front-loaded with the essential information. Every word earns its place in this concise formulation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (financial calculation with 4 parameters), lack of annotations, and no output schema, the description is insufficiently complete. It doesn't explain what 'price multipliers' means mathematically, what time frame 'current spot' refers to, or what the return values represent. The user must guess at the tool's behavior and output format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all four parameters with clear descriptions (e.g., 'Target gold price per oz'). The description adds no additional parameter information beyond what's in the schema. According to the scoring rules, when schema coverage is high (>80%), the baseline is 3 even with no param info in the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Calculates price multipliers comparing current spot to hypothetical target prices'. It specifies the verb ('calculates'), resource ('price multipliers'), and scope ('comparing current spot to hypothetical target prices'). However, it doesn't explicitly distinguish this from sibling tools like get_spot_prices or get_price_history, which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like get_spot_prices (for current prices) or get_price_history (for historical data), nor does it specify scenarios where this what-if analysis is appropriate versus other portfolio tools. The user must infer usage from the purpose alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_spot_prices: Get Spot Prices (Grade: A)
Returns current spot prices for gold, silver, platinum, and palladium with daily change percentages
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
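Because get_spot_prices publishes no output schema, an agent must guess the response shape. A purely hypothetical payload for illustration (field names and values are assumptions, not the server's documented schema) and a defensive way to consume it:

```python
# HYPOTHETICAL response shape for get_spot_prices. The server publishes no
# output schema, so every field name and value below is an illustrative
# assumption, not real market data.
sample_response = {
    "gold": {"price_usd_oz": 2350.10, "daily_change_pct": 0.42},
    "silver": {"price_usd_oz": 28.55, "daily_change_pct": -0.13},
    "platinum": {"price_usd_oz": 985.00, "daily_change_pct": 0.08},
    "palladium": {"price_usd_oz": 1020.75, "daily_change_pct": 1.05},
}

# Without a schema, a consumer should use .get() rather than assume keys exist.
for metal, quote in sample_response.items():
    price = quote.get("price_usd_oz")
    change = quote.get("daily_change_pct", 0.0)
    print(f"{metal}: ${price}/oz ({change:+.2f}%)")
```

Publishing an output schema (or even an example payload in the description) would let agents parse the response without this defensive guessing.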
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It states the tool returns current spot prices with daily changes, but doesn't disclose behavioral traits like data source, update frequency, rate limits, authentication requirements, or error handling. The description is minimal and lacks operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core functionality ('Returns current spot prices') and specifies the exact metals and additional data (daily change percentages). Every word earns its place with zero waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations, no output schema, and a read operation with potential complexity (financial data), the description is incomplete. It doesn't explain return format, data freshness, source reliability, or error conditions. For a financial data tool, more context about data characteristics would be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so the baseline is 4. The description appropriately doesn't discuss parameters since none exist, focusing instead on what the tool returns.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Returns' and specifies the exact resources: 'current spot prices for gold, silver, platinum, and palladium with daily change percentages'. It distinguishes from siblings like get_price_history (historical data) and get_portfolio (personal holdings).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for obtaining current precious metals spot prices with daily changes, but doesn't explicitly state when to use this tool versus alternatives like get_price_history (historical data) or get_daily_brief (broader market summary). No explicit exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_stack_signal: Get Stack Signal Articles (Grade: B)
Returns Stack Signal — TroyStack's curated precious metals market intelligence with Troy's commentary
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max articles to return (default 20) | |
| offset | No | Pagination offset (default 0) | |
| category | No | Optional category filter | |
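The limit/offset parameters above imply standard pagination. A sketch of how an agent might page through all articles, assuming the tool returns a plain list per page (the `call_tool` function is a stand-in for whatever MCP client call you use, and the short-page stop condition is an assumption):

```python
# Sketch of limit/offset pagination against get_stack_signal.
# ASSUMPTIONS: call_tool is a stand-in for an MCP client invocation, the
# response is a list of article dicts, and a short page signals the end.

def fetch_all_articles(call_tool, page_size: int = 20):
    """Page through get_stack_signal until a short page is returned."""
    articles, offset = [], 0
    while True:
        page = call_tool("get_stack_signal", {"limit": page_size, "offset": offset})
        articles.extend(page)
        if len(page) < page_size:  # last page reached
            return articles
        offset += page_size

# Toy backend with 45 fake articles to exercise the pagination loop.
data = [{"id": i} for i in range(45)]
fake = lambda name, args: data[args["offset"]: args["offset"] + args["limit"]]
print(len(fetch_all_articles(fake)))  # 45
```

Whether an empty or short page actually terminates the real feed is exactly the kind of behavior the review says the description should state.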
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It mentions 'returns' which implies a read operation, but doesn't address authentication needs, rate limits, error conditions, or whether this is a real-time or cached data source. The description adds minimal behavioral context beyond the basic operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that clearly states the tool's purpose without unnecessary words. It's appropriately sized for a simple retrieval tool and front-loads the essential information about what the tool returns.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only tool with 3 documented parameters but no output schema, the description provides basic purpose but lacks important context. Without annotations or output schema, it should ideally mention the return format (e.g., list of articles with fields), authentication requirements, or data freshness to help the agent understand what to expect.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all three parameters (limit, offset, category) with their types, constraints, and default values. The description adds no additional parameter information beyond what's in the schema, meeting the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns Stack Signal articles with Troy's commentary, specifying both the resource (Stack Signal articles) and the source (TroyStack's curated market intelligence). However, it doesn't differentiate from sibling tools like get_daily_brief or get_analytics that might also provide market information, so it falls short of a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like get_daily_brief or get_analytics. It doesn't mention prerequisites, timing considerations, or any explicit when-not-to-use scenarios, leaving the agent to infer usage context from the tool name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_vault_watch: Get COMEX Vault Watch (Grade: B)
Returns COMEX warehouse inventory: registered, eligible, combined ounces, daily changes, open interest, oversubscribed ratio
| Name | Required | Description | Default |
|---|---|---|---|
| metal | No | Optional metal filter; returns all 4 if omitted | |
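The description lists an "oversubscribed ratio" without defining it. One common reading of that COMEX metric divides open-interest ounces by registered ounces; this is an assumption for illustration, not the server's confirmed formula:

```python
# ASSUMPTION: "oversubscribed ratio" = open-interest ounces / registered
# ounces. This is one common interpretation of the COMEX metric, not the
# formula the server documents.

def oversubscribed_ratio(open_interest_oz: float, registered_oz: float) -> float:
    """How many paper claims exist per registered (deliverable) ounce."""
    if registered_oz <= 0:
        raise ValueError("registered ounces must be positive")
    return open_interest_oz / registered_oz

# e.g. 40M oz of open interest against 10M oz registered -> ratio of 4.0
print(oversubscribed_ratio(40_000_000, 10_000_000))  # 4.0
```

Stating the actual formula in the tool description would resolve the ambiguity the review flags.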
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It states 'returns' data, implying a read-only operation, but doesn't disclose behavioral traits like whether this is real-time or cached data, rate limits, authentication needs, error conditions, or response format. For a data retrieval tool with zero annotation coverage, this leaves significant gaps in understanding how it behaves.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose ('Returns COMEX warehouse inventory') followed by specific data points. Every element earns its place by clarifying what data is returned, with zero redundant or verbose language.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (data retrieval with optional filtering), no annotations, and no output schema, the description is partially complete. It specifies what data is returned but lacks details on behavioral aspects, error handling, or output structure. It's adequate for basic understanding but has clear gaps for effective agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the parameter 'metal' fully documented in the schema (optional filter with enum values). The tool description doesn't mention parameters at all, so it adds no semantic value beyond what the schema provides. With high schema coverage and only one optional parameter, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'returns' and specifies the exact resource 'COMEX warehouse inventory' with detailed data points (registered, eligible, combined ounces, daily changes, open interest, oversubscribed ratio). It distinguishes from siblings like get_spot_prices or get_price_history by focusing on inventory metrics rather than pricing or historical data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. While it implicitly suggests use for inventory data, it doesn't mention when not to use it or refer to sibling tools like get_analytics or get_daily_brief that might overlap in financial context. The parameter description in the schema hints at optional filtering, but the tool description itself offers no usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
scan_receipt: Scan Receipt (Grade: B)
Extract precious metals purchase data from a receipt image using AI vision
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | TroyStack API key (required) | |
| image_base64 | Yes | Base64-encoded receipt image (JPEG) | |
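The schema requires a base64-encoded JPEG but says nothing about how to produce one. A minimal sketch of preparing the image_base64 argument (the payload dict shape is an assumption; only api_key and image_base64 appear in the published schema):

```python
# Prepare the image_base64 argument for scan_receipt: read a JPEG file and
# base64-encode it. Placeholder bytes stand in for a real receipt image.
import base64
import os
import tempfile

def encode_receipt(path: str) -> str:
    """Base64-encode an image file for the image_base64 parameter."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

# Demo: fake JPEG bytes written to a temp file, then encoded and cleaned up.
fake_jpeg = b"\xff\xd8\xff\xe0" + b"fake-receipt-bytes"
with tempfile.NamedTemporaryFile(suffix=".jpg", delete=False) as tmp:
    tmp.write(fake_jpeg)
    path = tmp.name
encoded = encode_receipt(path)
os.unlink(path)

assert base64.b64decode(encoded) == fake_jpeg  # encoding round-trips losslessly
# ASSUMED payload shape, built from the two schema parameters:
payload = {"api_key": "YOUR_TROYSTACK_API_KEY", "image_base64": encoded}
```

Whether the server accepts formats other than JPEG, or enforces a size limit on the encoded image, is undocumented and would be worth adding to the description.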
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden but only states the extraction function. It lacks details on behavioral traits such as error handling, rate limits, authentication needs (implied by api_key but not explained), or output format, which are critical for an AI vision tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose without any wasted words, making it easy to understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of an AI vision tool with no annotations and no output schema, the description is incomplete. It fails to explain what the extracted data includes, how it's structured, or potential limitations, leaving significant gaps for agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents the two parameters. The description adds no additional meaning beyond implying the image is for receipts, which is already clear from the tool name and context, meeting the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('extract'), resource ('precious metals purchase data'), and method ('from a receipt image using AI vision'), distinguishing it from sibling tools like add_holding or get_portfolio which handle different operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. It does not mention prerequisites like needing a receipt image or compare to other tools for data extraction, leaving usage context unclear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
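Before publishing, you can sanity-check the file locally. A minimal sketch that validates only the two fields shown above (Glama's server-side verification may check more than this):

```python
# Local sanity check for a glama.json file before publishing it at
# /.well-known/glama.json. Validates only the fields shown in the docs;
# Glama's own verification may be stricter.
import json

doc = json.loads("""
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
""")

assert doc["$schema"].startswith("https://glama.ai/"), "unexpected $schema URL"
assert doc["maintainers"], "maintainers list must not be empty"
assert "@" in doc["maintainers"][0]["email"], "maintainer email looks invalid"
print("glama.json structure looks valid")
```

Remember to replace the placeholder email with the address tied to your Glama account before serving the file.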
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.