x402 Crypto Market Structure
Server Details
Real-time crypto market intelligence for AI agents. Live price, funding rate, open interest, buy/sell ratio, fear/greed, orderflow, and OHLCV history across 20 exchanges and 24 tokens. Address risk scoring included.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.5/5 across 7 of 7 tools scored. Lowest: 3.8/5.
Multiple market analysis tools (market_snapshot, market_orderflow, market_full, market_light) have overlapping functionality and descriptions, making it difficult for an agent to distinguish between them. For example, both market_snapshot and market_orderflow include CVD and liquidation pressure data.
Tool names lack a consistent pattern: 'market_analyze', 'market_full', 'market_light', 'market_orderflow', and 'market_snapshot' use different suffixes and are mixed with 'address_risk' and 'api_info' which do not follow the market_ prefix.
With 7 tools, the count is appropriate for the scope of a crypto market analysis server. It provides a reasonable variety without being overwhelming.
The server covers address risk, macro analysis, orderflow, and snapshots, but lacks historical data beyond limited context windows, on-chain metrics, order book depth, and any execution capabilities. Gaps exist for a comprehensive market structure server.
Available Tools
7 tools

address_risk (Read-only, Idempotent)
Before you transact, know who you're dealing with. Paste any wallet address — EVM or Solana, auto-detected — and get entity label (exchange, protocol, flagged mixer), risk level, account age, transaction history, and top holdings. Flags: new_account, unverified_contract, dormant, high_throughput. REST equivalent: POST /analyze/address (0.25 USDC).
Args:
address: EVM address (0x...) or Solana address (base58)

| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | EVM address (0x...) or Solana address (base58) | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
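The auto-detection of address type described above can be approximated with two pattern checks. This is an illustrative sketch, not the server's actual logic: EVM addresses are `0x` plus 40 hex characters, while Solana addresses are base58 strings of roughly 32 to 44 characters.

```python
import re

def detect_address_type(address: str) -> str:
    """Classify a wallet address as 'evm' or 'solana' (illustrative only)."""
    # EVM: 0x followed by exactly 40 hexadecimal characters.
    if re.fullmatch(r"0x[0-9a-fA-F]{40}", address):
        return "evm"
    # Solana: base58 alphabet (no 0, O, I, or l), typically 32-44 chars.
    if re.fullmatch(r"[1-9A-HJ-NP-Za-km-z]{32,44}", address):
        return "solana"
    return "unknown"
```

A caller could run this check before paying for the tool call, rejecting obviously malformed input locally.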
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already cover read-only, open-world, idempotent, and non-destructive behavior. The description adds valuable context beyond annotations: it discloses cost ('0.25 USDC'), specific risk flags (e.g., 'new_account', 'unverified_contract'), and auto-detection of address types. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded with key information: purpose, usage context, and key features. Every sentence adds value, such as listing specific flags and cost, with no redundant or verbose content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (risk analysis with multiple outputs), the description is complete: it covers purpose, usage, parameters, behavioral traits (cost, flags), and mentions an output schema exists. With annotations and output schema provided, no additional details are needed for effective agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description carries full burden. It clearly explains the 'address' parameter as 'EVM address (0x...) or Solana address (base58)', adding essential format details not in the schema. With only one parameter, this is sufficient for effective use.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: analyze a wallet address (EVM or Solana) to provide entity label, risk level, account age, transaction history, and top holdings. It specifies the verb 'get' and resource 'entity label, risk level, etc.', distinguishing it from sibling tools like market analysis or historical data tools by focusing on address risk assessment.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'Before you transact, know who you're dealing with.' This provides clear context for usage (pre-transaction risk assessment) and distinguishes it from alternatives by focusing on address analysis rather than market or historical data tools listed as siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
api_info (Read-only, Idempotent)
REST API access for autonomous agents — pricing, quick start, and migration guide.
Call this when: building a trading bot, deploying an autonomous agent, hitting
the MCP rate limit, or running 24/7 without a human in the loop. The MCP tier
(what you're using now) is free via Smithery, rate-limited to 60 calls/minute
per IP, and good for testing. The REST API is for production: pay per call in
USDC; paid endpoints are rate-limited to 60 calls/minute and 200 calls/hour
per wallet. No API key required.

| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
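An agent on the free MCP tier can stay under the documented 60 calls/minute per-IP limit with a client-side sliding-window limiter. A minimal sketch, assuming the limit is enforced over a rolling 60-second window:

```python
import time
from collections import deque

class RateLimiter:
    """Client-side sliding-window limiter for a calls-per-window budget."""

    def __init__(self, max_calls: int = 60, window_s: float = 60.0):
        self.max_calls = max_calls
        self.window_s = window_s
        self.calls: deque = deque()  # timestamps of recent calls

    def wait_time(self, now: float = None) -> float:
        """Seconds to wait before the next call stays within the limit."""
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] >= self.window_s:
            self.calls.popleft()
        if len(self.calls) < self.max_calls:
            return 0.0
        return self.window_s - (now - self.calls[0])

    def record(self, now: float = None) -> None:
        """Register a call that was just made."""
        self.calls.append(time.monotonic() if now is None else now)
```

Before each tool call, sleep for `wait_time()` seconds, make the call, then `record()` it.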
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and behavior. The description adds valuable context beyond this by disclosing rate limits (60 calls/minute per IP on the free MCP tier; 60 calls/minute and 200 calls/hour per wallet on paid REST endpoints), authentication needs (no API key required), and pricing details (pay per call in USDC), which are not captured in annotations. There is no contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the main purpose and usage guidelines, followed by additional details. It is appropriately sized with two paragraphs, but some sentences could be more streamlined (e.g., the second paragraph is a bit verbose). Overall, it earns its place without significant waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 0 parameters, rich annotations, and an output schema exists, the description provides sufficient context by explaining when to use it, behavioral traits, and comparisons between MCP and REST API. It covers key aspects like pricing and rate limits, making it complete enough for an informational tool, though it could briefly hint at output content.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description does not discuss parameters, which is appropriate. A baseline of 4 is applied as it compensates adequately for the lack of parameters by focusing on usage and behavioral context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool provides 'REST API access for autonomous agents' with information about 'pricing, quick start, and migration guide,' which gives a general purpose but lacks a specific verb+resource combination. It doesn't clearly distinguish itself from data-focused siblings like 'market_analyze' or 'market_snapshot', making the purpose somewhat vague rather than precise.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'Call this when: building a trading bot, deploying an autonomous agent, hitting rate limits, or running 24/7 without a human in the loop.' It also provides clear alternatives by comparing the MCP tier (free for testing) vs. the REST API (for production), offering explicit guidance on usage context and exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
market_analyze (Read-only, Idempotent)
Is macro with you or against you? Get the current regime (bull/bear/risk_on/risk_off/choppy), directional signal and confidence, and macro context (DXY, VIX, fear/greed) before entering a position. Data-only, no LLM latency. Coverage disclosed per token. REST equivalent: POST /analyze/market (0.25 USDC).
Args:
token: Token symbol (BTC, ETH, SOL, XRP, ADA, DOGE, AVAX, LINK, BNB, ATOM,
DOT, ARB, SUI, OP, LTC, AMP, ZEC)
context: Optional historical context window ('7d' or '30d'). Adds percentile rankings.

| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | Token symbol | |
| context | No | '7d' or '30d' | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it enumerates the 17 supported tokens (a limitation), notes 'Data-only, no LLM latency' (a performance characteristic), and discloses cost. While annotations cover safety (readOnly, non-destructive, idempotent), the description provides practical constraints that help the agent understand operational boundaries.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with a purpose statement, key features, limitations, cost, and parameter details in logical order. Every sentence adds value: the opening question establishes context, technical details are grouped, and parameter explanations are clear without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (market analysis with multiple outputs), the description provides complete context: purpose, usage timing, limitations, cost, parameter details, and distinguishes from siblings. With annotations covering safety and an output schema presumably detailing return values, no additional explanation is needed for behavioral or output aspects.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by explaining both parameters: 'token' gets a comprehensive list of supported symbols, and 'context' explains its optional nature, valid values ('7d' or '30d'), and effect ('Adds percentile rankings'). This provides complete semantic understanding beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Get the current regime, directional signal and confidence, and macro context') and resources (regime, signal, and macro data for a token). It distinguishes itself from siblings by focusing on regime analysis rather than raw data (market_snapshot), order flow (market_orderflow), or risk assessment (address_risk).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('before entering a position') and provides clear alternatives through sibling tool names. It also specifies what it doesn't do ('Data-only, no LLM latency') and includes cost information ('0.25 USDC'), giving comprehensive usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
market_full (Read-only, Idempotent)
Full pre-trade diligence in one call. Gets all market and orderflow data, then makes a grounded judgment: BULLISH/BEARISH/NEUTRAL stance, ACCUMULATION/DISTRIBUTION signal, LOW/MODERATE/HIGH/CRITICAL risk level, and a verdict that cites actual data values — not vibes. Use when you want a single answer rather than assembling the pieces yourself. Coverage disclosed per token. REST equivalent: POST /analyze/full (0.75 USDC).
Args:
token: Token symbol (BTC, ETH, SOL, XRP, ADA, DOGE, AVAX, LINK, BNB, ATOM,
DOT, ARB, SUI, OP, LTC, AMP, ZEC)
context: Optional context window ('7d' or '30d').

| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | Token symbol | |
| context | No | '7d' or '30d' | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
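An agent consuming the enumerated outputs above (stance, signal, risk level) might gate a trade on them. The field names below are hypothetical — the output schema does not document them here — but the enumerations are taken from the description:

```python
def should_enter(verdict: dict) -> bool:
    """Gate a trade on market_full's enumerated outputs.

    Field names ('stance', 'signal', 'risk_level') are assumed, not documented.
    Only proceed on a bullish, accumulating read with acceptable risk.
    """
    return (
        verdict.get("stance") == "BULLISH"
        and verdict.get("signal") == "ACCUMULATION"
        and verdict.get("risk_level") in ("LOW", "MODERATE")
    )
```

This keeps the decision logic on the agent side while the tool supplies the grounded judgment.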
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it specifies the tool's output format (BULLISH/BEARISH/NEUTRAL stance, etc.), emphasizes data-driven verdicts ('cites actual data values — not vibes'), enumerates the 17 supported tokens, and discloses cost ('0.75 USDC'). While annotations cover safety (readOnly, non-destructive), the description enriches understanding of what the tool actually produces.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured: first sentence states the core purpose, subsequent sentences detail outputs and usage guidance, then practical constraints (tokens, cost), and finally parameter explanations. Every sentence adds value with zero wasted words, and key information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (comprehensive market analysis), rich annotations (readOnly, openWorld, idempotent), and the presence of an output schema, the description provides excellent context. It explains what the tool does, when to use it, what outputs to expect, practical constraints, and parameter meanings - covering all essential aspects without needing to duplicate structured data.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description compensates well by explaining both parameters: 'token' is described with a list of 15 supported symbols, and 'context' is explained as 'Optional context window' with specific values ('7d' or '30d'). This adds meaningful semantics beyond the bare schema, though it doesn't fully document all parameter constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Full pre-trade diligence in one call' that 'Gets all market and orderflow data, then makes a grounded judgment' with specific outputs (stance, signal, risk level, verdict). It distinguishes itself from siblings like market_snapshot or market_orderflow by emphasizing comprehensive analysis rather than piecemeal data gathering.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'Use when you want a single answer rather than assembling the pieces yourself.' This provides clear guidance to choose this comprehensive tool over more granular siblings like market_orderflow or market_snapshot that provide individual data components.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
market_light (Read-only, Idempotent)
Structured coverage for ANY listed token — price, momentum, extremes, signals, project info, exchange listings, and optional LLM brief. Broader coverage than market_snapshot (which is limited to 26 supported tokens), shallower data (no orderflow, whale, liquidation, or signal fields). Use for tokens outside the BTC/ETH/SOL/etc. core set. Symbol disambiguation is automatic (top-by-volume match). REST equivalent: POST /data/light (0.05 USDC).
Rate limit: 10 calls / minute per IP (lower than other tools — this fans out to
a paid upstream provider). Cached responses up to 24h are served without
refetching; agents needing fresher data should use the paid REST endpoint.
Args:
symbol: Token ticker — 1 to 10 alphanumeric characters (e.g. PEPE, WIF,
FLOKI, BONK). Case-insensitive.

| Name | Required | Description | Default |
|---|---|---|---|
| brief | No | | full |
| symbol | Yes | Token ticker, 1-10 alphanumeric characters, case-insensitive | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
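The documented symbol constraint (1 to 10 alphanumeric characters, case-insensitive) can be validated locally before spending a rate-limited call. A sketch of that check — this mirrors the stated constraint, not the server's own validation:

```python
import re

def normalize_symbol(symbol: str) -> str:
    """Validate and normalize a ticker: 1-10 alphanumeric chars, case-insensitive."""
    if not re.fullmatch(r"[A-Za-z0-9]{1,10}", symbol):
        raise ValueError(f"invalid symbol: {symbol!r}")
    return symbol.upper()
```

With only 10 calls/minute available, rejecting malformed tickers client-side avoids wasting quota.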
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations (readOnlyHint, idempotentHint, destructiveHint) already indicate safety. Description adds caching behavior, rate limiting rationale (paid upstream), and automatic symbol disambiguation. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Concise yet thorough: opening sentence defines purpose, then comparison, usage guidance, rate/cache details, and parameter specs. Every sentence adds value with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema (covering return data), description covers all necessary context: tool purpose, sibling differentiation, usage caveats, rate/cache behavior, and parameter constraints. No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description provides full semantics: character length (1-10), alphanumeric constraint, case-insensitivity, and examples (PEPE, WIF). This compensates entirely for the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool provides structured coverage for ANY listed token, listing specific data categories (price, momentum, etc.). It explicitly differentiates from market_snapshot by noting broader token coverage but shallower data depth.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly advises use for tokens outside the core set (e.g., BTC/ETH/SOL), compares with market_snapshot, and provides rate limits (10 calls/min) and caching policy (24h) with alternative paid endpoint for fresher data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
market_orderflow (Read-only, Idempotent)
Is this move backed by real buyers, or is it two venues painting tape? See CVD direction, buy/sell ratio, whale clustering, liquidation pressure, and how many of 20 live exchanges are accumulating vs distributing. Volume concentration (HHI > 0.6) flags wash trading or thin manipulation. Data-only. Orderflow coverage disclosed per token. REST equivalent: POST /analyze/orderflow (0.50 USDC).
Args:
token: Token symbol (BTC, ETH, SOL, XRP, ADA, DOGE, AVAX, LINK, BNB, ATOM,
DOT, ARB, SUI, OP, LTC, NEAR, TRX, BCH, SHIB, HBAR, TON, XLM, UNI, AAVE,
AMP, ZEC)
context: Optional context window ('7d' or '30d'). Adds percentile rankings.

| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | Token symbol | |
| context | No | '7d' or '30d' | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
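The volume-concentration check described above uses the standard Herfindahl-Hirschman Index: the sum of squared volume shares across venues. It equals 1/N when volume is spread evenly over N exchanges and 1.0 when a single venue carries everything, which is why the HHI > 0.6 threshold flags thin or manipulated flow:

```python
def herfindahl_index(volumes: list) -> float:
    """HHI of volume concentration across venues (sum of squared shares)."""
    total = sum(volumes)
    if total <= 0:
        return 0.0
    return sum((v / total) ** 2 for v in volumes)
```

For example, one venue carrying 95% of reported volume yields an HHI near 0.9, well past the 0.6 flag.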
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover read-only, open-world, idempotent, and non-destructive traits. The description adds valuable context: it's data-only (no trading), supports 26 tokens, has a REST equivalent with cost (0.50 USDC), and mentions specific analysis metrics like volume concentration (HHI > 0.6) for wash trading detection. This goes beyond annotations by detailing scope and constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with purpose and key metrics, followed by token support and REST details, then parameter explanations. It's appropriately sized with no redundant sentences, though the opening rhetorical question could be replaced with a more direct statement. Each sentence adds value, such as cost and data-only nature.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given annotations cover safety traits, an output schema exists (so return values needn't be explained), and the description adds context on tokens, cost, and analysis metrics, it's fairly complete. However, it could better differentiate from siblings and detail parameter defaults more explicitly, keeping it from a perfect score.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It provides a list of 24 supported token symbols and explains that the optional 'context' parameter adds percentile rankings with window options ('7d' or '30d'). This adds meaningful semantics beyond the bare schema, though it doesn't detail parameter interactions or defaults beyond the schema's default for 'context'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool analyzes order flow to determine if market moves are backed by real buyers or manipulation, with specific metrics listed (CVD direction, buy/sell ratio, etc.). It distinguishes from siblings by focusing on order flow analysis rather than risk assessment, historical data, or broader market analysis. However, it doesn't explicitly name how it differs from 'market_analyze' or 'market_full'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for analyzing market manipulation vs. genuine buying, with context window options for percentile rankings. It doesn't explicitly state when to use this tool versus alternatives like 'market_analyze' or 'market_full', nor does it provide exclusions or prerequisites. The guidance is contextual but lacks sibling differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
market_snapshot (Read-only, Idempotent)
What's the market doing right now? Price, funding rate, CVD, whale activity, and liquidation pressure in one call — 16 fields, no LLM overhead. Feed directly into your own models or decision logic. Orderflow coverage disclosed per token. REST equivalent: POST /data (0.20 USDC).
Args:
token: Token symbol (BTC, ETH, SOL, XRP, ADA, DOGE, AVAX, LINK, BNB, ATOM,
DOT, ARB, SUI, OP, LTC, NEAR, TRX, BCH, SHIB, HBAR, TON, XLM, UNI, AAVE,
AMP, ZEC)

| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | Token symbol | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
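Feeding the snapshot into your own decision logic, as the description suggests, might look like the crude positioning read below. The field name `funding_rate` and the thresholds are hypothetical — consult the output schema for the real field names:

```python
def funding_skew(snapshot: dict) -> str:
    """Turn a snapshot's funding rate into a crude positioning signal.

    Positive funding means longs pay shorts (crowded longs); negative means
    the reverse. Field name and thresholds are illustrative assumptions.
    """
    rate = snapshot.get("funding_rate", 0.0)
    if rate > 0.0005:
        return "crowded_long"
    if rate < -0.0005:
        return "crowded_short"
    return "balanced"
```

The point of the 16-field format is exactly this: raw values the agent can threshold itself, with no LLM in the loop.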
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, open-world, idempotent, and non-destructive behavior. The description adds valuable context beyond this: it discloses cost ('0.20 USDC'), output format (16 fields), supported tokens (26 symbols), and that a REST equivalent exists. This enriches the agent's understanding of practical usage without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured: it starts with a purpose-driven question, lists key data points, specifies technical details (fields, tokens, REST equivalent, cost), and ends with clear parameter documentation. Every sentence adds value without redundancy, and it's appropriately front-loaded with the core functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (aggregating multiple market metrics), rich annotations, and the presence of an output schema, the description is complete. It covers purpose, usage, behavioral context (cost, format), and parameter details, leaving output specifics to the schema. No gaps remain for effective agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage, providing no details about the 'token' parameter. The description fully compensates by listing all 26 supported token symbols, clearly defining the parameter's semantics and valid values, which is essential for correct tool invocation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to provide a comprehensive market snapshot including price, funding rate, CVD, whale activity, and liquidation pressure. It specifies the scope (16 fields, 26 tokens supported) and distinguishes it from siblings by emphasizing 'no LLM overhead' and direct model feeding, unlike analysis-focused siblings like market_analyze.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: for real-time market data aggregation without analysis overhead. It implies alternatives by mentioning 'no LLM overhead' (suggesting market_analyze might include that) and 'feed directly into your own models' (differentiating from tools that process data internally). However, it doesn't explicitly name when-not-to-use cases or list all sibling alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!