True Markets Market Data

Server Details

Real-time crypto market data

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.5/5 across 8 of 8 tools scored. Lowest: 2.6/5.

Server Coherence: A

Disambiguation: 5/5

Every tool has a clearly distinct purpose. Single-asset lookups (get_asset, get_asset_summary) are distinguished from bulk lists (list_assets, get_trending_assets), raw price data (get_price_history) is distinguished from AI summaries (get_asset_summary), and market-wide views (get_market_summary) are distinguished from individual asset analysis. Surging (price change) vs trending (volume) metrics are also clearly delineated.

Naming Consistency: 5/5

Excellent verb-noun consistency throughout. All tools use snake_case with leading verbs (get_ or list_). The pattern consistently conveys scope: singular nouns for specific records (get_asset), plural nouns for collections (list_assets, get_trending_assets), and compound nouns for specific data types (get_price_history, get_asset_summary).

Tool Count: 5/5

Eight tools is ideal for this domain. The set includes asset discovery (list, trending, surging), specific asset inspection (get_asset, get_asset_summary), historical data (get_price_history, get_asset_summary_history), and macro market views (get_market_summary) without bloat or redundancy.

Completeness: 4/5

Covers the core market data lifecycle well: discovery, detailed lookup, price history, and AI-powered analysis. Minor gaps include lack of symbol-based search (requires chain/address for get_asset) and no dedicated real-time price endpoint (latest price likely requires querying get_price_history), but the surface supports complete analytical workflows.

Available Tools

8 tools
get_asset: B
Read-only · Idempotent

Get details for a specific token by chain and contract address.

Parameters (JSON Schema):
- chain (required): Blockchain network (solana, base)
- address (required): Token contract address

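Since MCP tool calls travel as JSON-RPC 2.0 messages, a minimal sketch of what a `tools/call` request for this tool might look like is below. The envelope follows the MCP specification; the chain and address values are hypothetical placeholders, not a real lookup.

```python
import json

def build_tool_call(name: str, arguments: dict, request_id: int = 1) -> str:
    """Serialize an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# Hypothetical Solana token lookup; the address value is a placeholder.
request = build_tool_call("get_asset", {
    "chain": "solana",                      # required: "solana" or "base"
    "address": "<token-contract-address>",  # required: exact contract address
})
```

In practice a gateway such as Glama would frame and route this message for you; the sketch only shows the payload an agent ultimately produces.
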
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations fully disclose the safety profile (readOnlyHint=true, destructiveHint=false, idempotentHint=true) and external data access (openWorldHint=true). The description adds no behavioral context beyond what the annotations provide, missing details on not-found behavior, rate limits, and data freshness.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, zero waste. Front-loaded with the verb 'Get', followed by the object 'details for a specific token' and the qualifying constraint 'by chain and contract address'. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a 2-parameter tool with rich annotations, but gaps remain: no output schema is provided, yet the description doesn't specify what 'details' are returned (metadata, decimals, supply?) or the error behavior for invalid addresses. It should do more to specify the return value structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage ('Blockchain network (solana, base)' and 'Token contract address'), so the baseline is 3. The description merely references these parameters by name ('by chain and contract address') without adding syntax constraints, format examples, or validation rules beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb-resource combination: 'Get details for a specific token' identifies the action and scope. The phrase 'specific token' implicitly distinguishes this from sibling list operations (list_assets, get_surging_assets) by emphasizing singular lookup, though it lacks explicit differentiation from get_asset_summary.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this versus siblings like get_asset_summary (which likely returns condensed data) or list_assets (for bulk operations). No prerequisites or conditions are mentioned despite the tool requiring exact chain/address identifiers.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_asset_summary: A
Read-only · Idempotent

Get a daily AI-generated summary for a specific asset including news analysis and sentiment.

Parameters (JSON Schema):
- symbol (required): Asset symbol (e.g., BTC, ETH, SOL)

Behavior: 4/5

Annotations indicate read-only/idempotent/safe behavior, but the description adds crucial behavioral context: the content is AI-generated (not rule-based), includes news analysis and sentiment components, and represents daily data (frequency). This disclosure about content nature and generation method goes beyond what structured annotations provide.

Conciseness: 5/5

Single sentence with zero waste. Front-loaded with the verb 'Get', immediately followed by the data type 'daily AI-generated summary', then the scope 'for a specific asset', and finally the content details 'including news analysis and sentiment'. Every phrase earns its place.

Completeness: 4/5

Given the low complexity (one parameter, 100% schema coverage), the presence of good annotations, and the absence of an output schema, the description adequately explains what the tool returns (an AI-generated summary with news/sentiment). It appropriately compensates for the missing output schema by describing the return payload content.

Parameters: 3/5

With 100% schema description coverage, the baseline is 3. The description mentions 'for a specific asset', which loosely maps to the symbol parameter but adds no syntax details, validation rules, or format guidance beyond the schema's examples (BTC, ETH, SOL). It neither adds significant value nor contradicts the schema.

Purpose: 4/5

The description clearly states the action (Get), resource (daily AI-generated summary), and scope (specific asset including news analysis and sentiment). It implicitly distinguishes itself from siblings like get_asset_summary_history via 'daily' (current vs. historical) and from get_asset via 'AI-generated' (processed analysis vs. raw data), though it doesn't explicitly name alternatives.

Usage Guidelines: 2/5

The description provides no explicit guidance on when to use this tool versus siblings like get_asset_summary_history (historical data) or get_market_summary (aggregate vs. specific asset). While 'daily' suggests current data, there are no when-not-to-use clauses or alternative recommendations.

get_asset_summary_history: B
Read-only · Idempotent

Get historical daily summaries for a specific asset.

Parameters (JSON Schema):
- symbol (required): Asset symbol (e.g., BTC, ETH, SOL)
- limit (optional): Number of historical summaries (default: 7, max: 30)

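The documented default (7) and cap (30) for the limit parameter suggest a simple client-side normalization before calling the tool. A sketch, with the caveat that clamping is an assumption on our part; the server may instead reject out-of-range values:

```python
def resolve_limit(limit=None, default=7, maximum=30):
    """Apply the documented default (7) and cap (30) for limit.
    Clamping client-side is an assumption; the server may reject
    out-of-range values rather than silently capping them."""
    if limit is None:
        return default
    return max(1, min(int(limit), maximum))

print(resolve_limit())    # default: a week of daily summaries
print(resolve_limit(90))  # capped at the documented maximum of 30
```
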
Behavior: 3/5

Annotations already disclose readOnly/idempotent status. The description adds valuable behavioral context by specifying 'daily' granularity, which explains the temporal distribution of returned records. However, it omits error-handling behavior (e.g., invalid symbols) and the content details of the summaries.

Conciseness: 5/5

Extremely efficient with no redundancy. The verb-object structure front-loads the action, and every word serves a purpose (especially 'daily' and 'historical', which constrain scope).

Completeness: 3/5

Despite 100% schema coverage and strong annotations, the absence of an output schema creates a gap the description fails to fill: it does not specify what fields constitute a 'summary' (e.g., OHLC prices, volume). For a read tool with no return documentation, this leaves critical gaps.

Parameters: 4/5

Schema coverage is 100%, establishing a baseline of 3. The description adds the critical 'daily' qualifier, which semantically links the limit parameter to days of history (the default of 7 implying a week), enriching the agent's understanding of the data window.

Purpose: 4/5

States a specific verb (Get) and resource (historical daily summaries for an asset) clearly. However, it does not explicitly differentiate itself from the siblings get_asset_summary (likely current data) or get_price_history, leaving ambiguity for the agent.

Usage Guidelines: 2/5

Provides no explicit guidance on when to use this tool versus siblings like get_asset_summary or get_price_history. While 'historical' implies temporal use, there are no when/when-not directives or alternative recommendations.

get_market_summary: A
Read-only · Idempotent

Get a market digest with key bullets, trending assets, sentiment analysis, and sources. Returns an AI-generated summary of current market conditions.

Parameters: none

Behavior: 4/5

The description adds crucial behavioral context beyond the annotations by explicitly stating the output is 'AI-generated' and detailing its composite structure (key bullets, trending assets, sentiment, sources). This distinguishes it from raw data retrieval tools (like get_price_history). Annotations cover safety/permission traits (readOnly, idempotent), while the description explains content provenance and format.

Conciseness: 5/5

Two well-structured sentences with zero redundancy. Front-loaded with the active verb 'Get', immediately specifying the resource and its contents. The second sentence clarifies the generation method. Every word earns its place.

Completeness: 4/5

Despite lacking an output schema, the description adequately describes the return value's nature (AI-generated summary) and constituent parts. For a zero-parameter read-only tool, this is sufficient; it doesn't need to explain error states or input validation, and the annotations already clarify the operation's safety profile.

Parameters: 4/5

With zero parameters present, the baseline score is 4. The description correctly omits parameter discussion, as there are none to document and no compensatory explanation is required.

Purpose: 5/5

Excellent specificity with the verb 'Get' plus the resource 'market digest'. Clearly distinguishes itself from sibling tools like get_asset or get_trending_assets by describing a broad, composite market overview (including sentiment and sources) rather than specific asset data or simple lists. The 'AI-generated' qualifier further clarifies the output type.

Usage Guidelines: 2/5

No explicit guidance on when to use this versus siblings like get_trending_assets or get_asset_summary, despite overlapping concepts (it mentions 'trending assets', which is also a sibling tool name). No prerequisites or exclusion criteria are provided.

get_price_history: A
Read-only · Idempotent

Get historical price data for one or more tokens. Returns timestamped price points. No authentication required.

Parameters (JSON Schema):
- symbols (required): Comma-separated asset symbols (e.g., SOL,BTC,ETH)
- window (optional): Lookback period: 1h, 4h, 1d, 7d, 1M (default: 1d)
- resolution (optional): Interval between points: 5s, 1m, 5m, 1h, 1d (default: 5m)

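Because window and resolution are constrained to small enumerations and symbols must be a comma-separated string, an agent can validate arguments before calling the tool. A minimal sketch under those documented constraints (the helper name is ours, not part of the server):

```python
# Documented enums from the parameter table.
WINDOWS = {"1h", "4h", "1d", "7d", "1M"}
RESOLUTIONS = {"5s", "1m", "5m", "1h", "1d"}

def build_price_history_args(symbols, window="1d", resolution="5m"):
    """Assemble a get_price_history arguments dict, validating the documented
    enums and joining symbols into the comma-separated form the schema expects."""
    if window not in WINDOWS:
        raise ValueError(f"window must be one of {sorted(WINDOWS)}")
    if resolution not in RESOLUTIONS:
        raise ValueError(f"resolution must be one of {sorted(RESOLUTIONS)}")
    return {"symbols": ",".join(symbols), "window": window, "resolution": resolution}

args = build_price_history_args(["SOL", "BTC"], window="7d", resolution="1h")
# args == {"symbols": "SOL,BTC", "window": "7d", "resolution": "1h"}
```

Note that whether the server accepts every window/resolution combination (e.g., 5s points over 1M) is not documented, so a caller should be prepared for server-side rejections too.
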
Behavior: 4/5

Annotations declare read-only, idempotent, non-destructive, and open-world hints. The description valuably adds 'No authentication required' (auth context not in annotations) and 'Returns timestamped price points' (compensating for the missing output schema by hinting at return structure). No contradictions with the annotations.

Conciseness: 5/5

Three sentences with zero waste: purpose declaration, return value hint (critical given no output schema), and auth requirement. Every word earns its place. Front-loaded with the core action and resource.

Completeness: 4/5

Given 100% schema coverage, good annotations (safety hints, idempotency), and three straightforward scalar parameters, the description appropriately compensates for the missing output schema by noting the return format. It lacks only rate limit and data freshness details, but is adequately complete for a read-only data retrieval tool.

Parameters: 3/5

With 100% schema description coverage, the schema adequately documents all three parameters (symbols, window, resolution), including formats and defaults. The description mentions 'one or more tokens', which aligns with the symbols parameter but adds no semantic detail beyond the comprehensive schema. The baseline of 3 is appropriate for high schema coverage.

Purpose: 4/5

Clear verb 'Get' with the specific resource 'historical price data' and the scope 'one or more tokens' indicating batch capability. Distinguishes itself from siblings like get_asset (current data) and get_trending_assets by emphasizing the historical/time-series nature, though it could explicitly contrast with get_asset_summary_history.

Usage Guidelines: 3/5

Provides implied usage context through 'Returns timestamped price points' (suggesting granular time-series use cases) and the operational constraint 'No authentication required'. However, it lacks explicit when-to-use guidance versus the siblings get_asset_summary_history and get_market_summary for aggregated versus point-in-time data.

get_surging_assets: B
Read-only · Idempotent

Get assets with the highest recent price change percentage.

Parameters (JSON Schema):
- sort (optional): Sort order: desc (default) or asc
- limit (optional): Max results to return (default: 10, max: 50)

Behavior: 3/5

Annotations already establish read-only, idempotent, non-destructive behavior. The description adds context that 'surging' refers to 'recent price change percentage', but fails to define the timeframe (e.g., 24h vs. 1h) or disclose what fields are returned, given the lack of an output schema.

Conciseness: 4/5

A single nine-word sentence is efficient and front-loaded. However, the extreme brevity leaves gaps in behavioral disclosure and sibling differentiation; slightly more substance would improve utility without sacrificing clarity.

Completeness: 3/5

For a simple 2-parameter query tool with rich annotations and full schema coverage, the description is minimally adequate. However, lacking an output schema, it should ideally describe the return structure, or at minimum define the 'recent' timeframe, to be complete.

Parameters: 3/5

With 100% schema description coverage, the schema adequately documents 'sort' and 'limit'. The description adds no parameter-specific guidance (e.g., that the limit max is 50, or the valid sort values), meeting the baseline for high-coverage schemas.

Purpose: 4/5

Provides the specific verb 'Get' and resource 'assets', clearly defining the scope as 'highest recent price change percentage'. However, it does not explicitly differentiate itself from the sibling get_trending_assets, which could confuse agents about when to use surging vs. trending.

Usage Guidelines: 2/5

No guidance is provided on when to use this tool versus list_assets (which could accept filters) or get_trending_assets. Missing prerequisites, expected latency, and rate limit warnings despite being an open-world data fetch.

list_assets: A
Read-only · Idempotent

List all available tradeable tokens. Returns symbol, name, chain, address, and decimals for each asset.

Parameters (JSON Schema):
- chain (optional): Filter by blockchain (solana, base)

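The optional chain parameter is a server-side filter; the same semantics can be sketched client-side against the fields the description promises. The rows below are hypothetical examples shaped after those fields (symbol, name, chain, address, decimals), and the address values are placeholders:

```python
# Hypothetical rows shaped after the described return fields; the real
# payload may differ in structure and field names.
assets = [
    {"symbol": "SOL", "name": "Solana", "chain": "solana",
     "address": "<addr>", "decimals": 9},
    {"symbol": "USDC", "name": "USD Coin", "chain": "base",
     "address": "<addr>", "decimals": 6},
]

def filter_by_chain(rows, chain=None):
    """Mirror the optional chain parameter: omitting it returns every asset."""
    if chain is None:
        return rows
    return [r for r in rows if r["chain"] == chain]

print([r["symbol"] for r in filter_by_chain(assets, "base")])  # ['USDC']
```
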
Behavior: 4/5

Annotations cover the safety profile (readOnly, non-destructive, idempotent). The description adds valuable return payload disclosure (symbol, name, chain, address, decimals), compensating for the missing output schema. It does not mention pagination behavior or result limits, if any exist.

Conciseness: 5/5

Two sentences with zero waste. The first establishes the operation's scope; the second documents the return structure. Front-loaded action verb and no redundancy.

Completeness: 4/5

Strong coverage for a simple enumeration tool: annotations disclose behavioral safety, the description discloses the returned fields (compensating for the missing output schema), and the schema documents the sole parameter. A minor gap remains on pagination and result set size behavior.

Parameters: 3/5

Schema coverage is 100% with complete parameter documentation. The description implies optional filtering via 'all available' but does not explicitly discuss the chain parameter's semantics or valid values (solana, base). The baseline of 3 is appropriate when the schema carries the full load.

Purpose: 5/5

Clear verb 'List' with the specific resource 'tradeable tokens' and the scope 'all available', which distinguishes it from sibling tools like get_asset (singular lookup) and get_trending_assets (filtered view). The 'all' signals bulk enumeration versus targeted retrieval.

Usage Guidelines: 3/5

The phrase 'List all available' implicitly suggests use for comprehensive discovery versus specific lookups, but lacks explicit guidance on when to prefer siblings like get_trending_assets or get_surging_assets over filtering this list client-side. No explicit when-not-to-use guidance.
