troystack
Server Details
Precious metals AI analyst with 12 tools — prices, portfolio, chat, receipts, and more.
- Status: Healthy
- Transport: Streamable HTTP
- Repository: kingleosgold/troystack-api
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.2/5 across 10 of 12 tools scored.
Most tools have distinct purposes, such as get_spot_prices for current prices and get_price_history for historical data. Some overlap exists between chat_with_troy and get_stack_signal, both of which offer market insights, which could cause minor confusion.
All tools follow a consistent verb_noun naming pattern, primarily using 'get_' or 'add_' prefixes, with no deviations in style, making them predictable and easy to understand.
With 12 tools, the count is well-scoped for the precious metals portfolio management domain, covering portfolio tracking, market data, analytics, and utilities without being overwhelming.
The toolset provides comprehensive coverage for portfolio management, market analysis, and utilities like receipt scanning, but lacks update or delete operations for holdings, which are minor gaps agents can work around.
Available Tools
12 tools

add_holding (Add Holding), grade C
Add a precious metals purchase to your portfolio
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | Product name (e.g. "2024 American Silver Eagle") | |
| metal | Yes | Metal type | |
| api_key | Yes | TroyStack API key (required) | |
| quantity | Yes | Number of pieces | |
| weight_oz | Yes | Weight per piece in troy ounces | |
| purchase_date | No | Purchase date YYYY-MM-DD (default: today) | |
| purchase_price | Yes | Price per piece in USD | |
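As a sketch only, the parameters above map onto a generic MCP `tools/call` request. The JSON-RPC envelope and the placeholder API key are assumptions for illustration, not TroyStack documentation:

```python
import json

# Hypothetical MCP tools/call payload for add_holding. The arguments
# mirror the parameter table; exact client wiring depends on your MCP SDK.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "add_holding",
        "arguments": {
            "api_key": "YOUR_TROYSTACK_API_KEY",  # placeholder, required
            "metal": "silver",
            "type": "2024 American Silver Eagle",  # optional product name
            "quantity": 10,
            "weight_oz": 1.0,
            "purchase_price": 34.50,
            "purchase_date": "2024-06-01",  # optional, defaults to today
        },
    },
}
print(json.dumps(payload, indent=2))
```

Note that `purchase_price` is per piece, not per ounce, so total cost basis here would be 10 × $34.50.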
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It states this is an 'Add' operation (implying a write/mutation) but provides no information about permissions needed, whether the operation is idempotent, what happens on failure, or what the response looks like. For a mutation tool with zero annotation coverage, this is a significant gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that clearly states the tool's purpose without any wasted words. It's appropriately sized and front-loaded with the essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with 7 parameters, no annotations, and no output schema, the description is insufficient. It doesn't address behavioral aspects like error conditions, response format, or system impacts. The 100% schema coverage helps with parameters, but the overall context for using this write operation remains incomplete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 7 parameters thoroughly. The description adds no additional parameter information beyond what's in the schema, maintaining the baseline score of 3 for adequate coverage through structured data alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Add') and resource ('precious metals purchase to your portfolio'), making the purpose immediately understandable. It doesn't explicitly differentiate from sibling tools like 'scan_receipt' which might also add holdings, but the specificity of 'precious metals purchase' provides good context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided about when to use this tool versus alternatives like 'scan_receipt' for automated addition or other portfolio management tools. The description states what it does but offers no context about appropriate use cases or prerequisites beyond what's implied by the parameters.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
chat_with_troy (Chat with Troy), grade A
Ask Troy anything about precious metals, markets, your stack. Provide api_key for personalized portfolio-aware answers, or omit for general market analysis.
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | No | TroyStack API key for portfolio-aware responses (optional) | |
| message | Yes | Your question or message for Troy |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions that api_key enables portfolio-aware responses, which adds some context about personalization. However, it lacks details on traits like rate limits, authentication needs beyond the api_key, response format, or potential side effects, leaving significant gaps for a tool with no structured annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded and concise, consisting of two sentences that efficiently convey the tool's purpose and key usage guidance. Every sentence earns its place by adding essential information without redundancy or unnecessary details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (interactive chat with optional personalization), lack of annotations, and no output schema, the description is moderately complete. It covers the basic purpose and parameter usage but misses behavioral details like response format or error handling. It's adequate as a minimum viable description but has clear gaps in context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters (api_key and message). The description adds value by explaining the semantic difference between providing or omitting api_key (personalized vs. general responses), but it doesn't provide additional syntax or format details beyond what the schema offers. Baseline 3 is appropriate as the schema handles most of the parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to ask questions about precious metals, markets, and personal stacks to Troy. It specifies the verb 'ask' and the resource 'Troy' with domain context. However, it doesn't explicitly differentiate from siblings like 'get_analytics' or 'get_speculation', which might also provide market insights.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use this tool: for asking questions to Troy. It includes guidance on optional api_key usage for personalized vs. general responses, which helps in decision-making. However, it doesn't explicitly state when not to use it or name alternatives among siblings, such as using 'get_spot_prices' for price data instead.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_analytics (Get Portfolio Analytics), grade C
Cost basis, average cost per oz, break-even price, unrealized P/L per metal
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | TroyStack API key (required) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. The description lists output metrics but doesn't explain how they're calculated, whether they're real-time or historical, what permissions are required beyond the API key, or what format the response takes. For a financial analytics tool with zero annotation coverage, this leaves significant behavioral questions unanswered.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise - just four metrics listed without unnecessary words. Every element (each metric) earns its place by specifying what analytics are provided. The structure is front-loaded with the core information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a financial analytics tool with no annotations and no output schema, the description is insufficiently complete. It lists output metrics but doesn't explain their calculation methodology, timeframes, or response format. Given the complexity of financial analytics and the lack of structured documentation, the description should provide more context about what users can expect from this tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100% with the single parameter 'api_key' well-documented in the schema. The description doesn't mention any parameters at all, which is acceptable since the schema handles the parameter documentation completely. The baseline score of 3 reflects adequate parameter coverage through the schema alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states what metrics the tool provides (cost basis, average cost per oz, break-even price, unrealized P/L per metal), which gives a specific understanding of what analytics are returned. However, it doesn't explicitly mention the verb 'retrieve' or 'calculate' and doesn't distinguish this from sibling tools like get_portfolio or get_daily_brief that might provide overlapping information.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With sibling tools like get_portfolio and get_daily_brief that might provide related financial data, there's no indication of when this specific analytics tool is appropriate versus those other options.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_daily_brief (Get Daily Brief), grade B
Troy's daily market brief. Provide api_key for personalized brief, or omit for the latest Stack Signal market synthesis.
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | No | TroyStack API key for personalized brief (optional) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden but offers minimal behavioral disclosure. It mentions the optional api_key parameter but doesn't describe what the brief contains, format, update frequency, authentication requirements, rate limits, or error handling. For a tool with zero annotation coverage, this leaves significant gaps in understanding how it behaves.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise (one sentence) and front-loaded with the core purpose. Every word earns its place by distinguishing between the two usage modes. No wasted verbiage or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description is incomplete. It doesn't explain what the brief contains, its structure, or how the 'personalized' version differs from the 'latest synthesis'. For a tool that presumably returns market insights, more context on output format and content would be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the optional api_key parameter. The description adds marginal value by clarifying that the api_key enables 'personalized brief' versus 'latest Stack Signal market synthesis' when omitted, but doesn't provide additional syntax or format details beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: retrieving 'Troy's daily market brief' with options for personalized vs. latest synthesis. It specifies the resource ('market brief') and the action ('get'), though it doesn't explicitly differentiate from siblings like get_stack_signal or get_spot_prices beyond the 'daily' aspect.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use each option: provide api_key for personalized brief, omit for latest Stack Signal synthesis. However, it doesn't explicitly state when to use this tool versus alternatives like get_stack_signal or get_analytics, which might offer overlapping market insights.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_junk_silver (Calculate Junk Silver Melt Value), grade B
Calculates silver melt value for pre-1965 US coinage
| Name | Required | Description | Default |
|---|---|---|---|
| dimes | No | Roosevelt/Mercury dime count (0.07234 oz Ag each) | |
| dollars | No | Morgan/Peace dollar count (0.77344 oz Ag each) | |
| quarters | No | Washington quarter count (0.18084 oz Ag each) | |
| kennedy_40 | No | Kennedy 1965-1970 40% silver halves (0.14792 oz Ag each) | |
| war_nickels | No | Jefferson 1942-1945 war nickel count (0.05626 oz Ag each) | |
| half_dollars | No | Walking Liberty/Franklin/1964 Kennedy half count (0.36169 oz Ag each) | |
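The melt-value arithmetic implied by the parameter table can be sketched locally: total silver content is each coin count times its per-coin silver weight, priced at spot. The $30/oz spot price below is an assumed example; the tool itself presumably uses a live silver price:

```python
# Per-coin silver contents (troy oz), taken from the parameter table above.
SILVER_OZ = {
    "war_nickels": 0.05626,
    "dimes": 0.07234,
    "quarters": 0.18084,
    "kennedy_40": 0.14792,
    "half_dollars": 0.36169,
    "dollars": 0.77344,
}

def junk_silver_melt_value(counts: dict, spot_price: float) -> float:
    """Total silver ounces across all coins, times spot price, in USD."""
    total_oz = sum(SILVER_OZ[coin] * n for coin, n in counts.items())
    return round(total_oz * spot_price, 2)

# Example: a roll of 50 silver quarters at an assumed $30/oz spot price.
print(junk_silver_melt_value({"quarters": 50}, 30.00))  # → 271.26
```

This also makes the tool's behavior with omitted parameters concrete: any coin type not supplied simply contributes zero ounces.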
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It states it 'calculates' which implies a read-only operation, but doesn't mention whether it requires authentication, uses real-time prices, has rate limits, or what format the output takes. For a calculation tool with zero annotation coverage, this is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that communicates the core purpose without any wasted words. It's appropriately sized for a calculation tool and front-loads the essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a calculation tool with 6 parameters and no output schema, the description is too minimal. It doesn't explain what the calculation returns (dollar amount? weight?), what price source it uses, or any limitations. With no annotations and no output schema, the description should provide more complete context about the tool's behavior and results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with each parameter clearly documented in the schema itself. The description adds no additional parameter information beyond what's already in the schema descriptions. According to guidelines, when schema coverage is high (>80%), the baseline is 3 even with no param info in the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('calculates silver melt value') and the specific resource ('pre-1965 US coinage'). It distinguishes itself from siblings like get_spot_prices or get_portfolio by focusing exclusively on melt value calculation for historical coins.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when to use get_spot_prices for current prices, get_portfolio for broader holdings, or other siblings. There's no context about prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_portfolio (Get Portfolio), grade B
Returns your precious metals portfolio: total value, cost basis, gain/loss, per-metal breakdown, and individual holdings
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | TroyStack API key (required) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It mentions the return data but doesn't disclose behavioral traits like authentication needs (implied by api_key but not explained), rate limits, data freshness, or error handling. 'Returns your... portfolio' is basic functional disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence efficiently lists all key output components without redundancy. It's front-loaded with the core purpose and avoids unnecessary words, making it highly scannable.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only tool with one parameter and no output schema, the description covers the purpose and output structure adequately. However, without annotations or usage guidelines, it lacks context on authentication, data scope, or sibling differentiation, leaving gaps for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the single required api_key parameter. The description adds no parameter-specific information beyond what the schema provides, meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Returns') and resource ('your precious metals portfolio'), with detailed output components (total value, cost basis, etc.). It distinguishes from siblings like get_spot_prices (spot prices only) or get_analytics (analytics-focused).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like get_analytics or get_vault_watch. The description lists output components but doesn't specify use cases (e.g., portfolio review vs. detailed analysis).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_price_history (Get Price History), grade C
Historical spot price data for a precious metal over a time range
| Name | Required | Description | Default |
|---|---|---|---|
| metal | Yes | The metal to fetch history for | |
| range | No | Time range | 1Y |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. While it implies a read-only operation by stating 'get' historical data, it doesn't address important behavioral aspects like rate limits, authentication requirements, data freshness, response format, or whether this is a real-time or cached data source.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise - a single sentence that efficiently communicates the core functionality. Every word earns its place, with no redundant information. It's appropriately sized for a simple data retrieval tool with well-documented parameters.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a data retrieval tool with no annotations and no output schema, the description is insufficient. It doesn't explain what format the historical data returns (time series, aggregated statistics, etc.), data granularity, or any limitations. Given the lack of structured metadata, the description should provide more context about the tool's behavior and output.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description mentions both parameters (metal and time range) at a high level, but adds minimal semantic value beyond what's already in the schema. With 100% schema description coverage and both parameters well-documented with enums and descriptions, the baseline is 3. The description doesn't provide additional context about parameter interactions or usage nuances.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: retrieving historical spot price data for precious metals over a time range. It specifies both the resource (precious metal spot prices) and the action (get historical data), but doesn't explicitly differentiate from sibling tools like get_spot_prices or get_daily_brief that might also provide price-related data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like get_spot_prices (which might provide current prices) or get_daily_brief (which might provide summarized data), nor does it specify use cases, prerequisites, or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_speculation (Run What-If Price Scenario), grade B
Calculates price multipliers comparing current spot to hypothetical target prices
| Name | Required | Description | Default |
|---|---|---|---|
| gold | No | Target gold price per oz | |
| silver | No | Target silver price per oz | |
| platinum | No | Target platinum price per oz | |
| palladium | No | Target palladium price per oz | |
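The "price multiplier" this tool describes is presumably the hypothetical target price divided by the current spot price. A local sketch, with assumed placeholder spot prices rather than live data:

```python
# Assumed example spot prices (USD per troy oz), not live data.
current_spot = {"gold": 2300.0, "silver": 27.0}

def price_multipliers(targets: dict, spot: dict) -> dict:
    """Multiplier = hypothetical target price / current spot price."""
    return {metal: round(target / spot[metal], 2)
            for metal, target in targets.items()}

# Example: targets at exactly double the assumed spot prices.
print(price_multipliers({"gold": 4600.0, "silver": 54.0}, current_spot))
# → {'gold': 2.0, 'silver': 2.0}
```

A multiplier of 2.0 means the target scenario would double the metal's current value; metals with no target supplied are simply omitted from the result.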
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It states the calculation function but doesn't reveal whether this is a read-only operation, if it requires authentication, what happens with missing parameters, rate limits, or error conditions. For a calculation tool with zero annotation coverage, this leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's appropriately sized and front-loaded with the core functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (price calculation with 4 parameters), no annotations, and no output schema, the description is minimally adequate but incomplete. It explains what the tool does but lacks behavioral context, usage guidance, and output information that would be helpful for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all four parameters with clear descriptions. The description adds no additional parameter semantics beyond what's in the schema, but doesn't need to compensate for coverage gaps. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('calculates') and resource ('price multipliers'), explaining it compares current spot prices to hypothetical target prices. However, it doesn't explicitly distinguish this tool from siblings like 'get_spot_prices' or 'get_price_history' which might also involve price calculations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, appropriate contexts, or exclusions, leaving the agent to infer usage from the purpose alone without distinguishing it from sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_spot_prices (Get Spot Prices), grade A
Returns current spot prices for gold, silver, platinum, and palladium with daily change percentages
No parameters
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden but only states what data is returned without disclosing behavioral traits like rate limits, authentication needs, data freshness (e.g., real-time vs. delayed), or error handling. It mentions no side effects, which is adequate but minimal for a read operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose ('Returns current spot prices') and includes all necessary details without waste. Every word earns its place, and no extraneous information is present.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter tool with no annotations and no output schema, the description is complete in stating what data is returned. However, it lacks context on data format (e.g., structured vs. raw), units (e.g., USD per ounce), or source reliability, which could be helpful given the financial nature and sibling tools like get_analytics.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so no parameter documentation is needed. The description appropriately focuses on output semantics ('current spot prices... with daily change percentages'), adding value beyond the empty input schema. Baseline 4 applies for zero-parameter tools.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Returns current spot prices') and resources ('gold, silver, platinum, and palladium'), including the additional data point 'daily change percentages'. It distinguishes itself from siblings like get_price_history (historical data) and get_portfolio (user holdings).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for current spot prices with daily changes, but provides no explicit guidance on when to use this tool versus alternatives like get_price_history for historical data or get_daily_brief for broader market updates. No exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_stack_signal (Get Stack Signal Articles) — Grade C
Returns Stack Signal — TroyStack's curated precious metals market intelligence with Troy's commentary
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max articles to return (default 20) | |
| offset | No | Pagination offset (default 0) | |
| category | No | Optional category filter | |
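For reference, a hypothetical `tools/call` request exercising all three optional parameters. The category value "silver" is an assumption for illustration; valid category values are not documented in the schema excerpt above:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get_stack_signal",
    "arguments": {
      "limit": 5,
      "offset": 0,
      "category": "silver"
    }
  }
}
```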
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool returns articles but doesn't describe key behaviors like whether it's a read-only operation, if it requires authentication, rate limits, or what the return format looks like (e.g., list of articles with fields). This is a significant gap for a tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's appropriately sized and front-loaded, though it could be slightly more structured by explicitly mentioning it's a retrieval tool for articles.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a data retrieval tool with no annotations and no output schema, the description is incomplete. It doesn't explain the return values (e.g., article structure, fields like title or date), behavioral aspects, or how it differs from siblings, leaving the agent with insufficient context for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, clearly documenting all three parameters (limit, offset, category) with their types, constraints, and defaults. The description adds no additional parameter information beyond what the schema provides, so it meets the baseline score for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'returns Stack Signal — TroyStack's curated precious metals market intelligence with Troy's commentary', which specifies the verb ('returns'), resource ('Stack Signal articles'), and content type. However, it doesn't explicitly differentiate from sibling tools like 'get_daily_brief' or 'get_analytics' that might also provide market information, so it doesn't reach the highest score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools or contexts where this specific curated intelligence is preferred over other data sources like 'get_spot_prices' or 'get_daily_brief', leaving the agent without usage direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_vault_watch (Get COMEX Vault Watch) — Grade A
Returns COMEX warehouse inventory: registered, eligible, combined ounces, daily changes, open interest, oversubscribed ratio
| Name | Required | Description | Default |
|---|---|---|---|
| metal | No | Optional metal filter; returns all 4 if omitted | |
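A hypothetical filtered call is sketched below; the value "silver" is an assumption, since the schema's enum values are not reproduced here. Omitting `metal` entirely returns all four metals:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "get_vault_watch",
    "arguments": {
      "metal": "silver"
    }
  }
}
```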
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the tool's behavior as a read operation ('Returns') and specifies the inventory metrics returned. However, it lacks details on rate limits, authentication needs, data freshness, or error handling, which are important for a financial data tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, dense sentence that efficiently lists all key data points without wasted words. It is front-loaded with the core purpose and follows with specific metrics, making it easy to scan and understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no annotations and no output schema, the description adequately covers the purpose and data returned. However, it lacks context on output format (e.g., structured data vs. raw text), potential limitations, or integration with sibling tools, leaving gaps in completeness for informed agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, documenting the optional metal filter with enum values. The description adds value by clarifying the default behavior ('returns all 4 if omitted'), which isn't in the schema. With only one parameter, this extra context compensates well, though it doesn't explain semantic nuances beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Returns') and resource ('COMEX warehouse inventory'), listing the exact data points provided (registered, eligible ounces, etc.). It distinguishes this tool from siblings like get_spot_prices or get_price_history by focusing on inventory metrics rather than pricing or portfolio data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage through the listed data points (e.g., for inventory analysis), but provides no explicit guidance on when to use this tool versus alternatives like get_analytics or get_daily_brief. It mentions the optional metal filter, which hints at context but doesn't define specific scenarios or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
scan_receipt (Scan Receipt) — Grade C
Extract precious metals purchase data from a receipt image using AI vision
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | TroyStack API key (required) | |
| image_base64 | Yes | Base64-encoded receipt image (JPEG) | |
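A sketch of a call with both required parameters; the values below are placeholders, not real credentials or image data:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "scan_receipt",
    "arguments": {
      "api_key": "YOUR_TROYSTACK_API_KEY",
      "image_base64": "<base64-encoded JPEG bytes>"
    }
  }
}
```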
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'using AI vision' but lacks details on accuracy, limitations, error handling, or output format. For a tool with no annotations and complex AI processing, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose ('Extract precious metals purchase data from a receipt image') and adds method context ('using AI vision') without unnecessary details. Every word earns its place, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of AI-based data extraction, no annotations, and no output schema, the description is incomplete. It does not explain what data is extracted (e.g., item names, prices, dates), potential errors, or how results are returned, leaving significant gaps for an agent to use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents the two parameters (api_key and image_base64). The description adds no additional meaning beyond the schema, such as explaining the expected image quality or API key permissions. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Extract') and resource ('precious metals purchase data from a receipt image'), and mentions the method ('using AI vision'). However, it does not explicitly distinguish this tool from sibling tools like 'add_holding' or 'get_junk_silver', which might also involve precious metals data but serve different functions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It does not mention prerequisites (e.g., needing a receipt image), exclusions, or compare it to sibling tools like 'add_holding' for manual data entry or 'get_analytics' for processed data analysis, leaving usage context unclear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.