x402 Crypto Market Structure
Server Details
Real-time crypto market intelligence for AI agents. Live price, funding rate, open interest, buy/sell ratio, fear/greed, orderflow, and OHLCV history across 20 exchanges and 24 tokens. Address risk scoring included.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across 9 of 9 tools scored. Lowest: 3.7/5.
Most tools have distinct purposes: address risk analysis, historical data at different timeframes, and various market analysis functions. However, there is some overlap between market.analyze, market.orderflow, and market.full, which all provide market analysis with varying levels of detail and scope, potentially causing confusion about which to use for specific scenarios.
The naming follows a consistent dot-separated pattern (e.g., address.risk, history.1h, market.snapshot) with clear categories (address, history, market, api). Minor deviations exist, such as api.info not fitting the resource.action pattern as neatly as others, but overall the structure is predictable and readable.
With 9 tools, the count is well-scoped for a crypto market analysis server. It covers key areas like address analysis, historical data access at multiple resolutions, and comprehensive market insights, without being overwhelming or too sparse for the domain.
The toolset provides strong coverage for crypto market analysis, including address risk, historical data, and real-time market insights. A minor gap is the lack of tools for executing trades or managing portfolios, but given the server's focus on market structure and analysis rather than trading operations, this is reasonable and agents can work with the provided data tools.
Available Tools
9 tools

address.risk (A, Read-only, Idempotent)
Before you transact, know who you're dealing with. Paste any wallet address — EVM or Solana, auto-detected — and get entity label (exchange, protocol, flagged mixer), risk level, account age, transaction history, and top holdings. Flags: new_account, unverified_contract, dormant, high_throughput. REST equivalent: POST /analyze/address (0.25 USDC).
Args:
address: EVM address (0x...) or Solana address (base58)

| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | EVM address (0x...) or Solana address (base58) | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
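The "auto-detected" claim above can be approximated client-side before spending a call. A minimal sketch, assuming the server distinguishes the two formats roughly as below; the regexes and helper name are hypothetical, not part of the server API:

```python
import re

# Client-side approximation of the server's address auto-detection.
# EVM: 0x-prefixed, 40 hex chars. Solana: 32-44 base58 chars (no 0, O, I, l).
EVM_RE = re.compile(r"^0x[0-9a-fA-F]{40}$")
BASE58_RE = re.compile(r"^[1-9A-HJ-NP-Za-km-z]{32,44}$")

def detect_address_type(address: str) -> str:
    """Return 'evm' or 'solana', or raise for unrecognized formats."""
    if EVM_RE.match(address):
        return "evm"
    if BASE58_RE.match(address):
        return "solana"
    raise ValueError(f"unrecognized address format: {address}")
```

Rejecting malformed addresses locally avoids paying the 0.25 USDC REST fee for an input the server would refuse anyway.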
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it discloses the cost (0.25 USDC), supported address types (EVM or Solana with auto-detection), and specific risk flags (new_account, unverified_contract, dormant, high_throughput). While annotations cover safety aspects (readOnly, non-destructive, idempotent), the description enriches this with practical implementation details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured: it opens with the use case, lists key outputs and flags, notes cost and REST equivalent, and clearly documents the single parameter. Every sentence serves a purpose without redundancy, making it easy to scan and understand.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (risk analysis with multiple outputs), the description provides comprehensive context: it covers purpose, usage, parameters, cost, and behavioral details. With annotations handling safety aspects and an output schema presumably defining return values, no critical information is missing for effective tool selection and invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by explaining the 'address' parameter's semantics: it specifies acceptable formats (EVM 0x... or Solana base58) and clarifies it's a wallet address for analysis. This adds essential meaning beyond the bare schema, though it doesn't detail validation rules or examples.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: analyze wallet addresses to provide risk assessment and entity information. It specifies the action ('get entity label, risk level, account age, transaction history, and top holdings') and distinguishes itself from sibling tools by focusing on address risk analysis rather than market data or historical queries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for usage ('Before you transact, know who you're dealing with') and mentions the REST equivalent and cost. However, it doesn't explicitly state when NOT to use this tool or compare it to specific alternatives among the sibling tools, which prevents a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
api.info (A, Read-only, Idempotent)
REST API access for autonomous agents — pricing, quick start, and migration guide.
Call this when: building a trading bot, deploying an autonomous agent, hitting
rate limits, or running 24/7 without a human in the loop. The MCP tier (what
you're using now) is free via Smithery and good for testing. The REST API is
for production: pay per call in USDC, no rate limits, no API key required.

| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, openWorldHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable context beyond annotations by explaining pricing (pay per call in USDC), rate limits (no rate limits for REST API), and authentication (no API key required). It doesn't contradict annotations and provides meaningful behavioral details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and appropriately sized at two paragraphs. The first sentence establishes purpose, and the second provides detailed usage guidelines. While efficient, the first paragraph could be slightly more focused on the tool's specific function rather than general API context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 0 parameters, rich annotations covering safety/idempotency, and an output schema exists (so return values are documented elsewhere), the description provides good contextual completeness. It explains when to use the tool, behavioral traits like pricing and rate limits, and distinguishes between MCP and REST API tiers. The main gap is a more precise purpose statement.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so the schema fully documents the absence of parameters. The description appropriately doesn't discuss parameters since none exist. This meets the baseline expectation for a zero-parameter tool where the schema handles all documentation needs.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool provides 'REST API access for autonomous agents' with information about 'pricing, quick start, and migration guide', which gives a general purpose. However, it doesn't specify a clear verb+resource combination like 'retrieve API documentation' or 'get API configuration details', and doesn't distinguish this informational tool from its sibling data/analysis tools. The purpose is somewhat vague rather than specific.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit when-to-use guidance with 'Call this when: building a trading bot, deploying an autonomous agent, hitting rate limits, or running 24/7 without a human in the loop.' It also distinguishes between the current MCP tier and REST API for production use, offering clear context about alternatives and appropriate usage scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
history.1d (A, Read-only, Idempotent)
Daily bars with orderflow included — same fields as history.1h (CVD, buy/sell ratio, liquidations per bar), but daily resolution. Up to 7 years of history. Regime modeling, long-horizon backtesting, macro-cycle analysis. 24 tokens supported. REST equivalent: POST /data/history/1d (5.00 USDC).
Args:
token: Token symbol (BTC, ETH, SOL, XRP, ADA, DOGE, AVAX, LINK, BNB, ATOM,
DOT, ARB, SUI, OP, LTC, NEAR, TRX, BCH, SHIB, HBAR, TON, XLM, UNI, AAVE)
limit: Max bars to return (1-5000, default 500)
start: Start date ISO 8601 (e.g. '2020-01-01')
end: End date ISO 8601 (e.g. '2024-12-31')

| Name | Required | Description | Default |
|---|---|---|---|
| end | No | End date ISO 8601 (e.g. '2024-12-31') | |
| limit | No | Max bars to return (1-5000) | 500 |
| start | No | Start date ISO 8601 (e.g. '2020-01-01') | |
| token | Yes | Token symbol (one of the 24 listed above) | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
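The documented constraints above (a fixed token list, limit 1-5000 with default 500, optional ISO 8601 dates) can be enforced before paying for a REST call. A sketch under those constraints; the helper name and client-side validation are illustrative, not part of the API:

```python
# The 24 supported tokens, as listed in the tool's Args.
SUPPORTED_TOKENS = {
    "BTC", "ETH", "SOL", "XRP", "ADA", "DOGE", "AVAX", "LINK", "BNB", "ATOM",
    "DOT", "ARB", "SUI", "OP", "LTC", "NEAR", "TRX", "BCH", "SHIB", "HBAR",
    "TON", "XLM", "UNI", "AAVE",
}

def build_history_1d_request(token, limit=500, start=None, end=None):
    """Build the JSON body for POST /data/history/1d, enforcing the
    documented constraints (token enum, limit 1-5000, default 500)."""
    if token not in SUPPORTED_TOKENS:
        raise ValueError(f"unsupported token: {token}")
    if not 1 <= limit <= 5000:
        raise ValueError("limit must be between 1 and 5000")
    body = {"token": token, "limit": limit}
    if start is not None:
        body["start"] = start  # ISO 8601, e.g. '2020-01-01'
    if end is not None:
        body["end"] = end      # ISO 8601, e.g. '2024-12-31'
    return body
```

At 5.00 USDC per call, catching a typo'd token symbol locally is cheaper than letting the server reject it.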
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds valuable context beyond annotations: it specifies the data range (up to 7 years of history), supported tokens (24 tokens), and cost information (5.00 USDC via REST). It doesn't contradict annotations, and the added context is helpful for understanding limitations and scope.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and appropriately sized. It front-loads key information (purpose, resolution, use cases) and follows with parameter details. Every sentence adds value, though the parameter section could be slightly tighter.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (data retrieval with multiple parameters), rich annotations (readOnly, openWorld, idempotent), and the presence of an output schema (implied by context signals), the description is complete. It covers purpose, usage guidelines, behavioral context (range, tokens, cost), and parameter semantics adequately. The output schema likely handles return values, so no need to explain them in the description.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It provides a clear list of parameters (token, limit, start, end) with examples and constraints (e.g., token symbols, limit range 1-5000, ISO 8601 format). However, it doesn't explain parameter interactions (e.g., how limit interacts with start/end) or default behaviors beyond what's implied. Given 0% schema coverage, the description adds meaningful semantics but could be more comprehensive.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: retrieving daily bars with orderflow data, specifying the same fields as history.1h (CVD, buy/sell ratio, liquidations per bar) but at daily resolution. It distinguishes from sibling tools like history.1h and history.5m by explicitly mentioning daily resolution and different use cases like regime modeling and long-horizon backtesting.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: for regime modeling, long-horizon backtesting, and macro-cycle analysis. It distinguishes from sibling tools by specifying daily resolution (vs. 1h or 5m) and mentioning the REST equivalent cost (5.00 USDC), which helps in decision-making. No explicit exclusions are stated, but the context implies alternatives like history.1h for higher resolution.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
history.1h (A, Read-only, Idempotent)
Historical hourly bars with orderflow baked in — not just OHLC, but CVD, buy/sell ratio, and liquidations per bar. Build your own models or backtest strategies without scraping exchanges. Up to 7 years of history, 5,000 bars per call. Without start/end: newest-first. With start/end: oldest-first. 24 tokens supported. REST equivalent: POST /data/history/1h (5.00 USDC).
Args:
token: Token symbol (BTC, ETH, SOL, XRP, ADA, DOGE, AVAX, LINK, BNB, ATOM,
DOT, ARB, SUI, OP, LTC, NEAR, TRX, BCH, SHIB, HBAR, TON, XLM, UNI, AAVE)
limit: Max bars to return (1-5000, default 500)
start: Start date ISO 8601 (e.g. '2021-01-01'). Returns oldest-first when set.
end: End date ISO 8601 (e.g. '2021-12-31'). Defaults to now.

| Name | Required | Description | Default |
|---|---|---|---|
| end | No | End date ISO 8601 (e.g. '2021-12-31') | now |
| limit | No | Max bars to return (1-5000) | 500 |
| start | No | Start date ISO 8601; returns oldest-first when set | |
| token | Yes | Token symbol (one of the 24 listed above) | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
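The ordering rule above (newest-first without start/end, oldest-first with) means response order depends on how the tool was called. A defensive sketch that normalizes bars to oldest-first either way; the `timestamp` field name is an assumption about the bar format, which the output schema here does not spell out:

```python
def normalize_bars(bars):
    """Return bars sorted oldest-first, regardless of whether the API
    returned them newest-first (no start/end given) or oldest-first
    (start/end given). Assumes each bar carries a sortable 'timestamp'."""
    return sorted(bars, key=lambda bar: bar["timestamp"])
```

Normalizing once at the client boundary keeps downstream backtesting code from silently iterating 7 years of history in the wrong direction.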
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, open-world, idempotent, and non-destructive behavior. The description adds valuable context beyond this: it specifies the data ordering logic (newest-first without start/end, oldest-first with start/end), supported tokens, cost (5.00 USDC), and REST endpoint. This enriches the agent's understanding without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded with key purpose and features. It uses bullet-like formatting for parameters, which aids readability. However, the mention of REST equivalent and cost, while informative, could be considered slightly extraneous for an AI agent's primary needs, making it not perfectly concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (historical data retrieval with multiple metrics), the description is complete. It covers purpose, usage, parameters, data characteristics, and behavioral nuances. With annotations providing safety info and an output schema presumably detailing the returned bars, no critical gaps remain for the agent to understand and invoke the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by detailing all parameters: 'token' with a comprehensive list of 24 symbols, 'limit' with range and default, 'start' and 'end' with format examples and behavioral effects on ordering. It adds meaning beyond the bare schema, clarifying usage and constraints effectively.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: retrieving historical hourly bars with orderflow data (CVD, buy/sell ratio, liquidations) for modeling and backtesting. It distinguishes from basic OHLC data and specifies the scope (up to 7 years, 5,000 bars per call), differentiating it from siblings like history.1d or history.5m which have different timeframes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool (for historical data with orderflow metrics) and implies alternatives by mentioning 'not just OHLC,' suggesting other tools might provide simpler data. However, it doesn't explicitly name when-not-to-use scenarios or direct alternatives among the sibling tools, such as market.snapshot for current data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
history.5m (A, Read-only, Idempotent)
5-minute granularity for the last 17 days. Intraday pattern work, entry/exit timing, real-time flow analysis. Same fields as hourly (CVD, buy/sell ratio, liquidations per bar). Note: 5-minute data delay on live data. 24 tokens supported. REST equivalent: POST /data/history/5m (1.00 USDC).
Args:
token: Token symbol (BTC, ETH, SOL, XRP, ADA, DOGE, AVAX, LINK, BNB, ATOM,
DOT, ARB, SUI, OP, LTC, NEAR, TRX, BCH, SHIB, HBAR, TON, XLM, UNI, AAVE)
limit: Max bars to return (1-5000, default 500)
start: Start date ISO 8601 (optional, within 17-day window)
end: End date ISO 8601 (optional)

| Name | Required | Description | Default |
|---|---|---|---|
| end | No | End date ISO 8601 | |
| limit | No | Max bars to return (1-5000) | 500 |
| start | No | Start date ISO 8601 (within the 17-day window) | |
| token | Yes | Token symbol (one of the 24 listed above) | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
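The 17-day retention window above is the one constraint worth guarding against locally, since a start date outside it cannot return data. A hypothetical client-side check, assuming the window is measured from the current time:

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=17)  # 5m bars cover only the last 17 days

def check_5m_window(start_iso, now=None):
    """Reject start dates older than the 17-day retention window before
    spending a REST call. Helper name and check are illustrative."""
    now = now or datetime.now(timezone.utc)
    start = datetime.fromisoformat(start_iso).replace(tzinfo=timezone.utc)
    if now - start > RETENTION:
        raise ValueError("start is outside the 17-day window for 5m data")
    return start
```

For anything older than 17 days, fall back to history.1h or history.1d, which per the descriptions reach back up to 7 years.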
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, openWorldHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable context: it discloses the 5-minute data delay on live data, mentions the 24 tokens supported (which isn't in annotations), and notes the REST equivalent and cost (1.00 USDC). This enhances transparency beyond annotations without contradicting them.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and appropriately sized, with key information front-loaded (granularity, time window, use cases). It uses bullet points for parameters, making it easy to scan. However, the first sentence is a bit dense with multiple clauses, and the cost mention could be more integrated, but overall it's efficient with minimal waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has annotations covering safety and idempotency, an output schema (not provided in details but indicated as present), and the description adds context like data delay, token support, and cost, it is fairly complete. The description explains parameters well despite 0% schema coverage. It could improve by explicitly mentioning output structure or linking to sibling tools, but for a read-only data retrieval tool, it provides sufficient context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter descriptions. The description compensates by listing all parameters (token, limit, start, end) with details: token includes an enum-like list of 24 symbols, limit specifies range and default, and start/end note ISO 8601 format and constraints (e.g., start within 17-day window). This adds significant meaning beyond the bare schema, but some details like exact format examples are missing.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 5-minute granularity historical data for cryptocurrency tokens, specifying the time window (last 17 days) and data fields (CVD, buy/sell ratio, liquidations per bar). It distinguishes from hourly data by noting it has the same fields but finer granularity. However, it doesn't explicitly differentiate from sibling tools like history.1d or history.1h beyond granularity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for intraday pattern work, entry/exit timing, and real-time flow analysis, suggesting when to use it. It notes a 5-minute data delay on live data, which provides some context. However, it doesn't explicitly state when to use this tool versus alternatives like history.1h or market.snapshot, nor does it mention exclusions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
market.analyze (A, Read-only, Idempotent)
Is macro with you or against you? Get the current regime (bull/bear/risk_on/risk_off/choppy), directional signal and confidence, and macro context (DXY, VIX, fear/greed) before entering a position. Data-only, no LLM latency. 15 tokens supported. REST equivalent: POST /analyze/market (0.25 USDC).
Args:
token: Token symbol (BTC, ETH, SOL, XRP, ADA, DOGE, AVAX, LINK, BNB, ATOM,
DOT, ARB, SUI, OP, LTC)
context: Optional historical context window ('7d' or '30d'). Adds percentile rankings.

| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | Token symbol (one of the 15 listed above) | |
| context | No | Historical context window: '7d' or '30d'; adds percentile rankings | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
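Since the schema itself carries no parameter descriptions, the '7d'/'30d' constraint on `context` above is easy to get wrong. A sketch of a request builder that encodes it; the helper name is hypothetical and only the documented values are assumed:

```python
VALID_CONTEXT = {"7d", "30d"}  # documented context windows

def analyze_request(token, context=None):
    """Body for POST /analyze/market. 'context' is optional and, per the
    docs, adds percentile rankings when set to '7d' or '30d'."""
    body = {"token": token}
    if context is not None:
        if context not in VALID_CONTEXT:
            raise ValueError("context must be '7d' or '30d'")
        body["context"] = context
    return body
```

Omitting `context` keeps the call to its base behavior; the docs do not state a default window, so none is assumed here.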
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint, openWorldHint, idempotentHint, and destructiveHint, covering safety and behavior. The description adds valuable context beyond annotations: it discloses cost ('0.25 USDC'), token support limit ('15 tokens'), and latency characteristics ('no LLM latency'), which are not captured in annotations. There is no contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded, starting with a purpose question, followed by key outputs, technical details, and parameter explanations. Every sentence adds value, such as cost and latency info, with no wasted words, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (macro analysis with multiple outputs), rich annotations, and the presence of an output schema, the description is complete. It covers purpose, usage context, behavioral traits, and parameter semantics adequately, without needing to explain return values since an output schema exists.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It provides semantics for both parameters: 'token' is explained with a list of supported symbols and its role in analysis, and 'context' is described as optional with details on values and effects ('Adds percentile rankings'). This adds meaning beyond the schema, but does not fully detail all aspects like default behavior or constraints, keeping it at baseline level.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: analyzing market regimes, directional signals, confidence, and macro context for cryptocurrency positions. It specifies the verb 'Get' with resources like regime, signal, confidence, and context, distinguishing it from sibling tools like market.snapshot or market.orderflow by focusing on macro analysis rather than raw data or order flow.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for usage: 'before entering a position' and mentions it's 'Data-only, no LLM latency,' which helps differentiate from tools that might involve AI processing. However, it does not explicitly state when not to use this tool or name specific alternatives among siblings, such as market.full or market.snapshot, for different needs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
market.full (A, Read-only, Idempotent)
Full pre-trade diligence in one call. Gets all market and orderflow data, then makes a grounded judgment: BULLISH/BEARISH/NEUTRAL stance, ACCUMULATION/DISTRIBUTION signal, LOW/MODERATE/HIGH/CRITICAL risk level, and a verdict that cites actual data values — not vibes. Use when you want a single answer rather than assembling the pieces yourself. 15 tokens supported. REST equivalent: POST /analyze/full (0.75 USDC).
Args:
token: Token symbol (BTC, ETH, SOL, XRP, ADA, DOGE, AVAX, LINK, BNB, ATOM,
DOT, ARB, SUI, OP, LTC)
context: Optional context window ('7d' or '30d').

| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | Token symbol (one of the 15 listed above) | |
| context | No | Context window: '7d' or '30d' | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
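The quoted REST prices make the "single answer vs. assembling the pieces" trade-off concrete: per the listed costs, market.full (0.75 USDC) prices out the same as market.analyze (0.25) plus market.orderflow (0.50), with the verdict included. A quick check using only the prices quoted in the tool descriptions:

```python
# USDC per REST call, as quoted in each tool's description.
COSTS = {
    "market.analyze": 0.25,
    "market.orderflow": 0.50,
    "market.full": 0.75,
}

def route_cost(tools):
    """Total cost of a sequence of REST calls."""
    return sum(COSTS[tool] for tool in tools)
```

If both macro context and orderflow are needed anyway, the bundled call costs nothing extra; the separate tools only win when one of the two suffices.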
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, open-world, idempotent, and non-destructive behavior. The description adds valuable context: it specifies the tool makes 'grounded judgments' based on data, cites actual values, and mentions cost (0.75 USDC) and token support limits (15 tokens). This enhances transparency beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with purpose, followed by outputs, usage guidance, token/cost details, and parameter explanations in a logical flow. Every sentence adds value with no redundancy, making it efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (comprehensive analysis), rich annotations, and presence of an output schema, the description is complete. It covers purpose, usage, behavioral context, parameter meanings, and cost, leaving output details to the schema. No gaps for an agent to invoke it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description carries full burden. It clearly explains 'token' as a symbol with enumerated examples and 'context' as an optional window with values ('7d' or '30d'), adding meaningful semantics beyond the bare schema. However, it doesn't detail default behavior if context is omitted.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs 'full pre-trade diligence' by getting market/orderflow data and making a judgment with specific outputs (stance, signal, risk level, verdict). It distinguishes from siblings like market.snapshot or market.analyze by emphasizing comprehensive analysis in one call rather than piecemeal assembly.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Use when you want a single answer rather than assembling the pieces yourself,' providing clear when-to-use guidance. It also mentions token support and cost (0.75 USDC), which helps differentiate from alternatives that might be partial or free.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
market.orderflow — Read-only · Idempotent
Is this move backed by real buyers, or is it two venues painting tape? See CVD direction, buy/sell ratio, whale clustering, liquidation pressure, and how many of 20 exchanges are actually accumulating vs distributing. Volume concentration (HHI > 0.6) flags wash trading or thin manipulation. Data-only. 24 tokens supported. REST equivalent: POST /analyze/orderflow (0.50 USDC).
Args:
token: Token symbol (BTC, ETH, SOL, XRP, ADA, DOGE, AVAX, LINK, BNB, ATOM,
DOT, ARB, SUI, OP, LTC, NEAR, TRX, BCH, SHIB, HBAR, TON, XLM, UNI, AAVE)
context: Optional context window ('7d' or '30d'). Adds percentile rankings.

| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | Token symbol (e.g., BTC, ETH, SOL) | |
| context | No | Optional context window ('7d' or '30d'); adds percentile rankings | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnly, openWorld, idempotent, and non-destructive hints. The description adds valuable behavioral context: it's a data-only analysis tool (reinforcing readOnly), mentions 24 supported tokens (scope limitation), includes cost information (0.50 USDC), and describes what kind of analysis it performs (manipulation detection, accumulation/distribution metrics). This goes beyond what annotations provide without contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with a purpose statement, key metrics, limitations (data-only, 24 tokens), cost information, and parameter details. While somewhat dense, each sentence adds value. The parameter section is clearly separated but could be more integrated with the main description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (order flow analysis with multiple metrics), the description provides good context about what it analyzes, supported tokens, optional context windows, and cost. With annotations covering safety aspects and an output schema presumably detailing results, the description is reasonably complete. It could benefit from more explicit differentiation from sibling tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by providing detailed parameter information. It lists all 24 supported token symbols for the 'token' parameter and explains that 'context' adds percentile rankings with specific window options ('7d' or '30d'). This adds significant meaning beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool analyzes order flow data to detect market manipulation and real buying pressure, with specific metrics mentioned (CVD direction, buy/sell ratio, etc.). It distinguishes from siblings by focusing on order flow analysis rather than general market data or historical trends. However, it doesn't explicitly contrast with 'market.analyze' or 'market.full' which might have overlapping functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when needing to assess market manipulation or genuine buying pressure, but doesn't explicitly state when to use this tool versus alternatives like 'market.analyze' or 'market.snapshot'. It mentions data-only analysis and 24 supported tokens, which provides some context but lacks explicit guidance on tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
market.snapshot — Read-only · Idempotent
What's the market doing right now? Price, funding rate, CVD, whale activity, and liquidation pressure in one call — 16 fields, no LLM overhead. Feed directly into your own models or decision logic. 24 tokens supported. REST equivalent: POST /data (0.20 USDC).
Args:
token: Token symbol (BTC, ETH, SOL, XRP, ADA, DOGE, AVAX, LINK, BNB, ATOM,
DOT, ARB, SUI, OP, LTC, NEAR, TRX, BCH, SHIB, HBAR, TON, XLM, UNI, AAVE)

| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | Token symbol (e.g., BTC, ETH, SOL) | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
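The three market tools differ mainly in scope and price: snapshot returns raw fields at 0.20 USDC, orderflow adds manipulation metrics at 0.50 USDC, and full renders a complete verdict at 0.75 USDC (per the REST costs quoted in each description). A sketch of the routing logic an agent might use — the function and its flags are illustrative, not part of the server:

```python
# Sketch: pick the cheapest market tool that covers what the agent needs,
# using the per-call USDC costs quoted in the tool descriptions.
TOOL_COSTS = {
    "market.snapshot": 0.20,   # 16 raw fields, no LLM overhead
    "market.orderflow": 0.50,  # CVD, buy/sell ratio, HHI wash-trade flag
    "market.full": 0.75,       # stance, signal, risk level, verdict
}

def choose_market_tool(need_judgment: bool, need_manipulation_checks: bool) -> str:
    """Route to one tool rather than assembling pieces from several."""
    if need_judgment:
        return "market.full"
    if need_manipulation_checks:
        return "market.orderflow"
    return "market.snapshot"
```

Cheapest-sufficient routing like this is one way to resolve the overlap between market.snapshot, market.orderflow, and market.full noted in the quality summary above.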
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it discloses the cost (0.20 USDC for REST equivalent), lists the 24 supported tokens, and mentions the 16 fields returned. While annotations already indicate read-only, open-world, idempotent, and non-destructive behavior, the description provides practical implementation details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with a compelling opening question, key features bulleted in the first sentence, implementation details in the second, and clear parameter documentation. Every sentence adds value with zero wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (market data aggregation), the description is complete: it explains purpose, usage, behavioral details, and parameter semantics. With an output schema present, it doesn't need to describe return values, and the annotations cover safety aspects adequately.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by providing complete parameter semantics. It explains that 'token' is a token symbol and lists all 24 supported values (BTC, ETH, SOL, etc.), which is essential information not present in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to provide a comprehensive market snapshot including price, funding rate, CVD, whale activity, and liquidation pressure. It specifies the scope (16 fields, 24 tokens supported) and distinguishes it from siblings by emphasizing it's a single-call overview rather than historical or analytical tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: when you need current market data for decision logic or feeding into models. It implies an alternative to LLM processing by stating 'no LLM overhead.' However, it doesn't explicitly contrast with specific sibling tools like market.analyze or market.full.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
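Before publishing, you can sanity-check the file locally. A minimal sketch, assuming only the field names shown in the structure above — `check_glama_manifest` is a hypothetical helper, not a Glama tool:

```python
import json

def check_glama_manifest(raw: str, account_email: str) -> bool:
    """Sketch: verify a /.well-known/glama.json payload parses and that
    some maintainer email matches the Glama account email, since
    verification requires the two to match."""
    doc = json.loads(raw)
    maintainers = doc.get("maintainers", [])
    return any(m.get("email") == account_email for m in maintainers)
```

Running this against the file you intend to serve catches a typo'd email before waiting on Glama's automatic verification pass.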
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.