True Value Rankings — Cryptocurrency Fundamental Analysis
Server Details
Cryptocurrency fundamental analysis scores, fair value estimates, and rankings from TVR.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- Full call logging: Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control: Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials: Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics: See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
11 tools

compare_coins
Compares 2 to 5 cryptocurrencies on TVR Score, valuation, and fair value gap. Identifies the largest scoring difference. For a full metric-by-metric comparison, use get_comparison_report. Data from True Value Rankings (truevaluerankings.com).
| Name | Required | Description | Default |
|---|---|---|---|
| coins | Yes | Array of 2-5 cryptocurrency ticker symbols to compare (e.g., ["BTC", "ETH", "SOL"]) | |
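As an illustration, the schema's 2-to-5 constraint (minItems/maxItems) can be enforced client-side before invoking the tool. The helper below is hypothetical, not part of the server:

```python
def build_compare_coins_args(coins):
    """Build the arguments payload for compare_coins, enforcing the
    schema's 2-to-5 ticker constraint client-side."""
    if not 2 <= len(coins) <= 5:
        raise ValueError("compare_coins requires 2 to 5 ticker symbols")
    return {"coins": [c.upper() for c in coins]}

args = build_compare_coins_args(["btc", "eth", "sol"])
# args == {"coins": ["BTC", "ETH", "SOL"]}
```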
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It adds value by disclosing the data source ('True Value Rankings') and the specific analytical behavior ('Identifies the largest scoring difference'). However, it lacks disclosure of rate limits, caching, auth requirements, or whether the operation is read-only (though this is implied by 'Compares').
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four efficient sentences with zero waste: purpose statement, behavioral detail, sibling differentiation, and data attribution. Front-loaded with the core action. Every sentence earns its place without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter comparison tool without an output schema, the description adequately covers the comparison dimensions (TVR Score, valuation, fair value gap) and the output focus (largest difference). There is a slight gap in not explicitly declaring the read-only nature given the lack of annotations, but it is sufficient for agent selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% description coverage with clear typing and constraints (minItems: 2, maxItems: 5). The description's '2 to 5 cryptocurrencies' aligns with those constraints but adds no semantic meaning beyond what the schema already provides. A baseline score of 3 is appropriate given the high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Compares' with clear resource 'cryptocurrencies' and scope 'TVR Score, valuation, and fair value gap'. Explicitly distinguishes from sibling 'get_comparison_report' by contrasting summary comparison vs full metric-by-metric analysis.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit alternative with 'For a full metric-by-metric comparison, use get_comparison_report', clearly signaling when NOT to use this tool and directing to the appropriate sibling. This establishes clear selection criteria against the other comparison tools available.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_comparison_report
PREMIUM: Detailed metric-by-metric comparison of 2 to 5 coins with all 8 scores and analysis. Requires micropayment via MPP. Data from True Value Rankings (truevaluerankings.com).
| Name | Required | Description | Default |
|---|---|---|---|
| _meta | No | | |
| coins | Yes | Array of 2-5 cryptocurrency ticker symbols to compare | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses the micropayment requirement (critical behavioral trait), data source ('True Value Rankings'), and return scope ('all 8 scores'). It lacks details on failure modes (e.g., payment failure) or caching behavior, preventing a 5.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, dense sentence that front-loads the critical 'PREMIUM' classification and efficiently packs in the purpose, scope, payment requirement, and data source with zero waste.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description adequately covers the critical constraints (micropayment) and data provenance. However, it could improve by briefly describing the return format or analysis structure since the output schema is absent.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 50% (the 'coins' parameter is well-described in the schema, but '_meta' is undocumented). The description mentions the '2 to 5 coins' constraint which aligns with the schema, but fails to explain the undocumented '_meta' parameter, thus not compensating for the schema gap.
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a specific verb ('comparison'), resource ('coins'/'metrics'), and scope ('2 to 5 coins', 'all 8 scores'). The 'PREMIUM' label and 'Requires micropayment' requirement effectively distinguish this from sibling tool 'compare_coins' by indicating this is the paid, detailed tier.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description clearly states a critical usage constraint ('Requires micropayment via MPP') which signals when to use this tool (when detailed analysis is worth payment). However, it does not explicitly name free alternatives like 'compare_coins' for cases when users don't want to pay.
get_custom_ranking
PREMIUM: Recalculates the full ranking table using custom metric weights. Shows how different priorities change the rankings. Requires micropayment via MPP. Data from True Value Rankings (truevaluerankings.com).
| Name | Required | Description | Default |
|---|---|---|---|
| _meta | No | | |
| privacy | Yes | Weight for Privacy | 5 |
| adoption | Yes | Weight for Adoption | 15 |
| liquidity | Yes | Weight for Market Liquidity | 10 |
| longevity | Yes | Weight for Longevity | 8 |
| legal_clarity | Yes | Weight for Legal Clarity | 5 |
| technical_dev | Yes | Weight for Technical Development | 5 |
| economic_model | Yes | Weight for Economic Model | 47 |
| network_security | Yes | Weight for Network Security | 20 |
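For context, the default weights in the table sum to 115, not 100, and the tool does not document whether custom weights must sum to a fixed total. A minimal sketch of one plausible aggregation, normalizing by the weight total (an assumption, not the server's documented formula):

```python
# Defaults from the parameter table above; note they sum to 115.
DEFAULT_WEIGHTS = {
    "privacy": 5, "adoption": 15, "liquidity": 10, "longevity": 8,
    "legal_clarity": 5, "technical_dev": 5, "economic_model": 47,
    "network_security": 20,
}

def weighted_score(metric_scores, weights):
    """Combine the 8 per-metric scores under custom weights,
    normalizing by the weight total (an assumption, since the server
    does not document whether weights must sum to a fixed value)."""
    total = sum(weights.values())
    return sum(metric_scores[m] * w for m, w in weights.items()) / total
```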
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully communicates the payment/auth requirement ('Requires micropayment via MPP'), data provenance ('Data from True Value Rankings'), and access tier ('PREMIUM'). It does not mention rate limits, caching behavior, or whether results persist, but covers the essential cost and source constraints.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, dense sentence where every clause earns its place: the 'PREMIUM' prefix immediately signals access constraints, followed by action, value proposition, cost requirement, and data attribution. No words are wasted.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 8 required parameters, no annotations, and no output schema, the description adequately covers the essential context: it explains the transformation (recalculation), the cost barrier (micropayment), and the data source. It could be improved by explicitly stating the return format (e.g., 'returns a ranked list'), but 'Recalculates the full ranking table' provides sufficient implicit context.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 89% schema description coverage, the schema already documents each weight parameter with defaults. The description adds the conceptual framing ('custom metric weights') that unifies the 8 parameters, but does not add syntax details, validation rules (e.g., whether weights must sum to 100), or semantic interpretation beyond what the schema provides. Baseline 3 is appropriate when schema coverage is high.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Recalculates') and resource ('full ranking table') combined with the mechanism ('custom metric weights') to clearly define the tool's function. The 'PREMIUM' prefix and emphasis on 'custom' effectively distinguish this from sibling tools like get_top_ranked, which likely return standard rankings.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides the critical constraint 'Requires micropayment via MPP' which informs when the tool can be invoked, but lacks explicit comparison to siblings (e.g., when to use get_top_ranked vs this custom version). The phrase 'Shows how different priorities change the rankings' implies a sensitivity analysis use case but doesn't explicitly state 'use this when you want to test custom weight scenarios'.
get_full_breakdown
PREMIUM: Returns a complete fundamental analysis for a cryptocurrency, including all 8 metric scores with explanations, special scoring rules applied, valuation reasoning, and ranking context. Requires micropayment via MPP (Machine Payments Protocol). Data from True Value Rankings (truevaluerankings.com).
| Name | Required | Description | Default |
|---|---|---|---|
| coin | Yes | Cryptocurrency ticker symbol (e.g., BTC, ETH, LTC, XMR, SOL, ADA, AVAX, DOT, POL, LINK) | |
| _meta | No | Payment metadata (handled automatically by MPP-enabled clients) | |
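A hedged example of what a call to this tool might look like over MCP's standard tools/call envelope; the argument values are illustrative, and the _meta payment fields are left as a comment, since MPP-enabled clients populate them automatically:

```python
import json

# Standard MCP tools/call request; the argument values are illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_full_breakdown",
        "arguments": {"coin": "BTC"},
        # _meta is populated automatically by MPP-enabled clients;
        # its exact payment fields are not documented here.
    },
}
payload = json.dumps(request)
```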
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and successfully discloses the critical 'Requires micropayment via MPP' constraint and data source (True Value Rankings). However, it lacks disclosure of other behavioral traits like caching policies, rate limits, error handling for insufficient payments, or whether this is a read-only operation.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two well-structured sentences with zero waste. The first sentence front-loads the value proposition (complete analysis with specific components), while the second covers the payment constraint and attribution. The 'PREMIUM' tag efficiently signals the tier.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of both annotations and output schema, the description compensates well by enumerating the specific output components (8 metric scores, explanations, valuation reasoning). It covers the payment prerequisite and data provenance. Minor gap: no mention of error states or data freshness.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, providing detailed descriptions for both 'coin' (with examples) and '_meta'. The description does not contradict or significantly enhance parameter semantics beyond the schema, meeting the baseline expectation when schema coverage is high.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns a 'complete fundamental analysis' with specific outputs listed: 'all 8 metric scores with explanations, special scoring rules applied, valuation reasoning, and ranking context.' The word 'complete' and specific mention of 'all 8 metrics' effectively distinguishes this from siblings like get_tvr_score (likely single score) or compare_coins (comparative view).
Does the description explain when to use this tool, when not to, or what alternatives exist?
The 'PREMIUM' prefix and micropayment requirement provide implicit context that this is for comprehensive analysis needs, but there is no explicit guidance on when to choose this over siblings like get_tvr_score (for quick checks) or get_comparison_report. The description implies usage through 'complete' but never explicitly states selection criteria.
get_historical_scores
PREMIUM: Returns how a coin's scores have changed over time, with dates and reasons for each change. Requires micropayment via MPP. Data from True Value Rankings (truevaluerankings.com).
| Name | Required | Description | Default |
|---|---|---|---|
| coin | Yes | Cryptocurrency ticker symbol | |
| _meta | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It successfully communicates the micropayment requirement, the data provenance (truevaluerankings.com), and the return structure (scores, dates, reasons). It could be improved by mentioning rate limits or error conditions, but covers the primary behavioral constraints.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with the critical 'PREMIUM' flag front-loaded, followed by functionality, payment requirement, and data source in a single dense sentence. Every clause provides distinct value (cost, behavior, provenance) with no redundant or filler text.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite the absence of an output schema, the description adequately explains the return values (historical scores with dates and reasons). It covers the essential micropayment prerequisite. The only significant omission is the undocumented _meta parameter; otherwise, it provides sufficient context for a historical data retrieval tool.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 50% (coin is documented, _meta is not). The description references 'a coin' which aligns with the schema but adds no semantic depth beyond 'Cryptocurrency ticker symbol.' It fails to explain the purpose or content of the _meta parameter, leaving a significant gap given the partial schema coverage.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns historical score changes 'over time, with dates and reasons for each change,' which specifically distinguishes it from siblings like get_tvr_score (current scores) or get_top_ranked (lists). The 'PREMIUM' prefix and mention of 'True Value Rankings' further clarify the scope and data source.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides critical usage constraints by stating 'Requires micropayment via MPP,' which is essential for agent selection. However, it lacks explicit guidance on when to use this versus alternatives like get_tvr_score (current values) or get_full_breakdown, forcing the agent to infer from the 'historical' keyword alone.
get_methodology
Explains how True Value Rankings scores cryptocurrencies: the 8 metrics, default weights, weight presets, and general scoring approach. Default scores are one possible configuration that premium users can fully customize. Data from True Value Rankings (truevaluerankings.com).
No parameters
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It adds valuable context about the data source (truevaluerankings.com) and premium customization capabilities, but omits operational details like response format, caching behavior, or whether the methodology content is static versus dynamic.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with no redundancy. It front-loads the core action ('Explains how...') and follows with supporting context about customization and data attribution, with every sentence earning its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (no parameters) and lack of output schema, the description adequately covers what content will be returned (8 metrics, weights, presets). However, it could improve by indicating whether the response is structured data or narrative text.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, establishing a baseline score of 4. The description correctly does not invent parameters, and the empty schema is appropriate for a documentation retrieval tool that requires no filtering inputs.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool 'Explains how True Value Rankings scores cryptocurrencies' with specific content details (8 metrics, default weights, weight presets). It clearly distinguishes itself from data-retrieval siblings like get_tvr_score or get_top_ranked by focusing on methodology explanation rather than fetching specific coin data.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by mentioning it explains the 'default' configuration that premium users can customize, hinting at when users might need this context. However, it lacks explicit guidance such as 'use this before interpreting scores from get_tvr_score' or contrasts with specific alternatives.
get_portfolio_analysis
PREMIUM: Analyzes a multi-coin portfolio for fundamental strength, returning a weighted TVR Score and metric profile. Requires micropayment via MPP. Data from True Value Rankings (truevaluerankings.com).
| Name | Required | Description | Default |
|---|---|---|---|
| _meta | No | | |
| portfolio | Yes | Portfolio coins with allocation percentages (should sum to 100) | |
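One plausible reading of the returned 'weighted TVR Score' is an allocation-weighted average of per-coin scores. The sketch below is an assumption, not the server's documented aggregation; it only mirrors the parameter's sum-to-100 constraint:

```python
def portfolio_tvr_score(portfolio, coin_scores):
    """Allocation-weighted average of per-coin TVR Scores.

    Assumes allocations are percentages summing to 100, mirroring the
    'portfolio' parameter's documented constraint.
    """
    if abs(sum(portfolio.values()) - 100) > 1e-9:
        raise ValueError("allocation percentages should sum to 100")
    return sum(coin_scores[c] * pct / 100 for c, pct in portfolio.items())
```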
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and successfully discloses the micropayment requirement, data provenance ('True Value Rankings'), and return structure ('weighted TVR Score and metric profile'). It does not mention side effects or idempotency, but covers the critical cost behavior missing from structured fields.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two dense, front-loaded sentences with zero waste. The 'PREMIUM:' tag immediately signals cost, followed by the action, return values, payment method, and data source. Every clause earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description adequately covers the return values and operational constraints (micropayment). It could mention the 10-coin limit or _meta parameter purpose, but provides sufficient context for an agent to understand the tool's function and cost structure.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 50% (portfolio described, _meta not). The description mentions 'multi-coin portfolio' which aligns with the portfolio parameter, but does not add semantics for the undocumented _meta parameter or elaborate on parameter constraints (like the 10-coin max) beyond what the schema already provides. Baseline 3 is appropriate for partial schema coverage without description compensation.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Analyzes') and resources ('multi-coin portfolio') and clearly distinguishes from siblings like get_tvr_score by emphasizing portfolio-level analysis rather than single-coin evaluation. The mention of 'weighted TVR Score' further clarifies the specific output type.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear usage constraints via 'PREMIUM' tagging and explicit micropayment requirement ('Requires micropayment via MPP'), which helps agents understand prerequisites. However, it lacks explicit guidance on when to use single-coin alternatives like get_tvr_score versus this portfolio tool.
get_top_ranked
Returns the top N cryptocurrencies by a chosen ranking method: tvr_score (fundamental strength), most_undervalued (largest positive fair value gap), or most_overvalued (largest negative fair value gap). Data from True Value Rankings (truevaluerankings.com).
| Name | Required | Description | Default |
|---|---|---|---|
| count | No | Number of coins to return (1-10) | 5 |
| rank_by | No | Ranking method: tvr_score, most_undervalued, or most_overvalued | tvr_score |
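The three rank_by modes can be illustrated locally; the real tool performs this ranking server-side, and the record field names below are assumed for illustration:

```python
def top_ranked(coins, rank_by="tvr_score", count=5):
    """Illustrate the three rank_by modes on locally held records.

    The real tool ranks server-side; the field names here are assumed.
    """
    keys = {
        "tvr_score": lambda c: c["tvr_score"],
        "most_undervalued": lambda c: c["fair_value_gap"],   # largest positive gap first
        "most_overvalued": lambda c: -c["fair_value_gap"],   # largest negative gap first
    }
    count = max(1, min(count, 10))  # schema allows 1-10, default 5
    return sorted(coins, key=keys[rank_by], reverse=True)[:count]
```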
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully adds context about the data source (truevaluerankings.com) and explains the semantic meaning of each ranking method (e.g., 'fundamental strength', 'fair value gap'). However, it omits other behavioral traits like rate limits, caching behavior, or error conditions.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, dense sentence that efficiently conveys the tool's purpose, the available ranking methods with their definitions, and the data source. Every clause earns its place; there is no redundancy or filler text.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 parameters, no nested objects) and lack of output schema, the description adequately covers the ranking methodology. However, since there is no output schema, the description should ideally describe what data fields are returned for each cryptocurrency (e.g., symbol, score, price), which is missing.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the input schema has 100% coverage and documents the parameters adequately, the description adds valuable semantic context beyond the schema. It explains what 'tvr_score' actually measures (fundamental strength) and defines the valuation gaps for the undervalued/overvalued options, helping the agent understand the business logic behind the enum values.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Returns the top N cryptocurrencies' and specifies the three distinct ranking methods available. However, it does not explicitly differentiate from siblings like 'get_custom_ranking' or 'list_coins' to clarify when this specific tool should be chosen over alternatives.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains the three ranking methods (tvr_score, most_undervalued, most_overvalued) with helpful parenthetical definitions that imply when to use each. However, it lacks explicit guidance on when to use this tool versus siblings like 'compare_coins' or 'get_full_breakdown', or prerequisites for use.
get_tvr_estimated_value
Returns TVR's fair value estimate (TVR Price) for a cryptocurrency, including the fair value gap and valuation category. TVR Price compares fundamental quality to current market price. Data from True Value Rankings (truevaluerankings.com).
| Name | Required | Description | Default |
|---|---|---|---|
| coin | Yes | Cryptocurrency ticker symbol (e.g., BTC, ETH, LTC, XMR, SOL, ADA, AVAX, DOT, POL, LINK) | |
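The description names a fair value gap and a valuation category but gives no formula. A sketch under the assumption that the gap is the relative difference between TVR Price and market price, with illustrative category labels:

```python
def fair_value_assessment(tvr_price, market_price):
    """Compute a fair value gap as a fraction of market price.

    Both the formula and the category labels are illustrative
    assumptions; TVR's exact thresholds are not documented here.
    """
    gap = (tvr_price - market_price) / market_price
    if gap > 0:
        category = "undervalued"
    elif gap < 0:
        category = "overvalued"
    else:
        category = "fairly valued"
    return gap, category
```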
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully explains the conceptual methodology ('compares fundamental quality to current market price'), discloses the data source ('truevaluerankings.com'), and clarifies what components are returned. However, it omits operational details like rate limits or caching behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with zero waste. The first sentence front-loads the core return values (TVR Price, fair value gap, valuation category), while the second provides methodological context and attribution. Every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description adequately compensates by explicitly listing the three key return components (fair value estimate, gap, and category). It also provides methodology context. For a single-parameter read tool, this is sufficient, though mentioning authentication requirements or rate limits would improve it further.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for the 'coin' parameter (including examples like BTC, ETH). The description mentions the tool is for a 'cryptocurrency' but adds no additional parameter semantics, validation rules, or format constraints beyond what the schema already provides, warranting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Returns TVR's fair value estimate' with specific components (fair value gap, valuation category). It distinguishes itself from sibling 'get_tvr_score' by emphasizing 'TVR Price' and explaining that it 'compares fundamental quality to current market price,' providing clear functional scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description lacks explicit guidance on when to use this tool versus siblings like 'get_tvr_score' or 'get_full_breakdown'. It describes what the tool returns but does not advise on selection criteria, prerequisites, or when alternative tools might be more appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_tvr_score
Returns the TVR Score, rank, valuation category, TVR Price, and fair value gap for a cryptocurrency. Shows the strongest and weakest metric areas (2 of 8). TVR evaluates fundamental strength using Sound Value principles. Default scores are one possible configuration; premium users can customize all inputs. Data from True Value Rankings (truevaluerankings.com).
| Name | Required | Description | Default |
|---|---|---|---|
| coin | Yes | Cryptocurrency ticker symbol (e.g., BTC, ETH, LTC, XMR, SOL, ADA, AVAX, DOT, POL, LINK) | |
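Since the server speaks MCP over Streamable HTTP, tools like this one are invoked via JSON-RPC's `tools/call` method. A minimal sketch of the request payload for `get_tvr_score` (the request `id` and coin ticker are illustrative; transport and session details are omitted):

```python
import json

# JSON-RPC 2.0 request shape used by MCP's tools/call method.
# The id and argument values are illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_tvr_score",
        "arguments": {"coin": "BTC"},
    },
}
print(json.dumps(request, indent=2))
```

The same shape applies to the other tools on this page; only `params.name` and `params.arguments` change (for example, `list_coins` takes an empty `arguments` object).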
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully adds context about the data source (truevaluerankings.com), evaluation methodology (Sound Value principles), and scope limitations (2 of 8 metrics shown). However, it omits error handling behavior, caching policies, or explicit confirmation that this is a read-only operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with the core purpose front-loaded in the first sentence. Five sentences cover return values, scope limitations, methodology, usage constraints, and data source attribution. Each sentence adds distinct value, though the 'Sound Value principles' sentence adds marginal utility compared to the others.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (single parameter) and lack of output schema, the description adequately compensates by enumerating the specific fields returned (TVR Score, rank, etc.) and explaining the partial nature of the data (2 of 8 metrics). It provides sufficient context for an agent to understand what data will be retrieved.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for the single 'coin' parameter, including example ticker symbols. The main description does not add additional semantics about the parameter format or validation rules, meeting the baseline expectation for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states what the tool returns (TVR Score, rank, valuation category, TVR Price, fair value gap) using specific verbs. The '2 of 8' metric detail effectively distinguishes this summary view from the sibling 'get_full_breakdown' tool without explicitly naming it, though explicit sibling comparison would strengthen this further.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implied usage guidance by noting that 'Default scores are one possible configuration; premium users can customize all inputs,' which hints that 'get_custom_ranking' exists for customization needs. However, it lacks explicit 'when to use' guidance comparing it to similar siblings like 'get_full_breakdown' or 'get_tvr_estimated_value.'
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_coins
Lists all cryptocurrencies currently evaluated by True Value Rankings with their symbol, name, and category. New coins are added regularly. Data from True Value Rankings (truevaluerankings.com).
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses data volatility ('New coins are added regularly') and data source attribution ('truevaluerankings.com'). However, it lacks disclosure of safety characteristics (read-only implied but not stated), pagination behavior for potentially large lists, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of three front-loaded sentences with minimal waste. The first sentence delivers the core purpose immediately. The third sentence ('Data from True Value Rankings...') is slightly redundant with the first sentence's mention of the same source but adds the URL specificity, keeping it from being truly wasteful.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description appropriately compensates by specifying the returned fields ('symbol, name, and category'). For a zero-parameter listing tool, this level of detail—covering scope, return fields, and data freshness—is sufficient for agent selection, though it could benefit from mentioning the return structure type (array vs object).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, which establishes a baseline score of 4 per the evaluation rules. The description correctly offers no parameter-related content since none are required.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb 'Lists' with the specific resource 'cryptocurrencies currently evaluated by True Value Rankings' and clarifies the scope includes 'symbol, name, and category.' The phrase 'all cryptocurrencies' effectively distinguishes this from siblings like get_top_ranked (filtered) or get_full_breakdown (detailed single-coin analysis).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description implies this is the enumeration tool (listing 'all' coins) versus specialized analysis siblings, it provides no explicit guidance on when to use this versus get_top_ranked or compare_coins. The note about 'New coins are added regularly' hints at freshness concerns but doesn't specify usage patterns.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
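Before publishing, it can help to sanity-check the file locally. A small sketch that validates only the structure shown above (the `$schema` URL is taken from the snippet; the email is a placeholder):

```python
import json

def check_glama_json(text: str) -> list[str]:
    """Flag obvious problems in a /.well-known/glama.json payload.
    Checks only the structure shown in the snippet above; Glama's
    actual verification may check more."""
    try:
        doc = json.loads(text)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    problems = []
    if doc.get("$schema") != "https://glama.ai/mcp/schemas/connector.json":
        problems.append("$schema missing or wrong")
    maintainers = doc.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        problems.append("maintainers must be a non-empty array")
    else:
        for m in maintainers:
            if not isinstance(m, dict) or "@" not in str(m.get("email", "")):
                problems.append("each maintainer needs an email")
    return problems

sample = (
    '{"$schema": "https://glama.ai/mcp/schemas/connector.json",'
    ' "maintainers": [{"email": "your-email@example.com"}]}'
)
print(check_glama_json(sample))  # → []
```

Remember that the email in the published file must match your Glama account email, which this local check cannot verify.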
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.