kr-crypto-intelligence
Server Details
Korean crypto data + AI sentiment + divergence. 13 tools. x402 Base+Polygon+Solana.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: bakyang2/kr-crypto-intelligence
- GitHub Stars: 0
- Server Listing: kr-crypto-intelligence
Tool Definition Quality
Average 4.1/5 across 13 of 13 tools scored. Lowest: 3.2/5.
- Tools are mostly distinct, with some potential overlap between get_arbitrage_scanner, get_kimchi_premium, and get_global_vs_korea_divergence, but descriptions clarify different scopes (all tokens vs single symbol, raw data vs AI analysis).
- All tool names follow a consistent verb_noun pattern in snake_case (e.g., get_arbitrage_scanner, get_kr_prices), with only check_health deviating slightly but still clear.
- The 13 tools cover the domain of Korean crypto market intelligence without being excessive or inadequate; each serves a specific function in the workflow.
- Coverage spans the core areas: prices, premiums, arbitrage, alerts, sentiment, and market analysis. Minor gaps include the lack of historical data or direct order execution, but these are likely out of scope.
Available Tools
13 tools

check_health (B, Read-only)
Check service health and exchange connectivity status. Returns status of Upbit, Bithumb, and Binance API connections.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. While it states what the tool does (check health/connectivity) and which exchanges are covered, it doesn't describe important behavioral aspects: what 'health' means, what specific connectivity metrics are checked, whether this performs active API calls or checks cached status, what authentication might be required, or potential rate limits. The description is minimal and lacks operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and well-structured: two sentences that directly state the tool's purpose and what it returns. Every word earns its place, with no redundant information. The first sentence establishes the core function, and the second provides specific exchange details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that this is a health/status checking tool with zero parameters and an output schema exists, the description is minimally adequate. It tells the agent what the tool does and which exchanges are covered. However, with no annotations and a potentially complex health checking operation, the description could benefit from more context about what constitutes 'health' or 'connectivity status.' The existence of an output schema means return values are documented elsewhere, but the description itself is quite sparse.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, and schema description coverage is 100% (since there are no parameters to describe). The description appropriately doesn't attempt to explain nonexistent parameters. With no parameters, the baseline score is 4, as there's nothing for the description to add beyond what the empty schema already indicates.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Check service health and exchange connectivity status.' It specifies the verb ('check') and resource ('service health and exchange connectivity status'), and identifies which exchanges are checked (Upbit, Bithumb, Binance). However, it doesn't explicitly differentiate from sibling tools like 'get_available_symbols' or 'get_kr_prices' which might also involve exchange connectivity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, timing considerations, or suggest when this health check is appropriate compared to other tools that fetch data from these exchanges. With sibling tools like 'get_kr_prices' that presumably also require exchange connectivity, there's no differentiation provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_arbitrage_scanner (A, Read-only)
Scan Kimchi Premium for ALL tokens (180+) traded on both Upbit and Binance. Returns: token-by-token premium %, reverse premiums (negative = Korean discount), Upbit vs Bithumb price gaps, market share between exchanges. Each token includes: warning flags, volume soaring alerts, deposit soaring alerts. Updated every 60 seconds. Essential for cross-exchange arbitrage analysis.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
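The per-token premium this scanner reports can be illustrated with a short sketch. The formula is the standard kimchi-premium calculation; the function name and sample prices are illustrative assumptions, not taken from the server's implementation:

```python
def kimchi_premium_pct(upbit_krw: float, binance_usd: float, fx_rate: float) -> float:
    """Premium of the Upbit KRW price over the Binance USD price, in percent.

    A negative result is a reverse premium (a Korean discount).
    """
    upbit_usd = upbit_krw / fx_rate  # convert the KRW quote to USD
    return (upbit_usd - binance_usd) / binance_usd * 100.0

# Illustrative prices: BTC at 100,000,000 KRW on Upbit, $68,000 on Binance,
# and a USD/KRW rate of 1,400 -> roughly a 5% premium.
premium = kimchi_premium_pct(100_000_000, 68_000, 1_400)
```

Running this per symbol over the 180+ dual-listed tokens, once per 60-second refresh, matches the scan behavior the description outlines.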
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: the tool returns multiple data points (premium %, reverse premiums, price gaps, market share), includes alerts (warning flags, volume soaring, deposit soaring), and specifies update frequency ('Updated every 60 seconds'). This covers operational behavior without contradictions.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by specific return details and operational context. Each sentence adds value: the first defines the scan, the second lists returns, the third details included alerts, and the fourth provides update frequency and usage context. There is no wasted text.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (scanning 180+ tokens with multiple data points), the description is complete. It explains what the tool does, what it returns, and its update frequency. With an output schema present, it does not need to detail return values, and with no parameters, it adequately covers all necessary context for an AI agent to use it correctly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so no parameter details are needed. The description does not add parameter semantics, which is acceptable given the baseline. However, it does not explicitly state 'no parameters required,' so it misses a minor opportunity for clarity, warranting a score of 4 instead of 5.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool's purpose: 'Scan Kimchi Premium for ALL tokens (180+) traded on both Upbit and Binance.' It specifies the verb ('Scan'), resource ('Kimchi Premium'), and scope ('ALL tokens (180+)'), distinguishing it from siblings like 'get_kimchi_premium' which likely has a narrower scope.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for usage: 'Essential for cross-exchange arbitrage analysis.' It implies when to use this tool (for comprehensive arbitrage scanning) but does not explicitly state when not to use it or name alternatives among siblings, such as 'get_kimchi_premium' for a more focused analysis.
get_available_symbols (A, Read-only)
Get all available trading symbols on Korean exchanges. Returns symbols available on Upbit, Bithumb, and those common to both. Use this to check which symbols you can query before calling other tools.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
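The grouping the description promises (symbols on Upbit, on Bithumb, and common to both) is simple set arithmetic. A minimal sketch follows; the key names are assumptions, not the server's documented response fields:

```python
def partition_symbols(upbit: set[str], bithumb: set[str]) -> dict[str, list[str]]:
    """Split symbols into per-exchange and common groups (key names assumed)."""
    return {
        "upbit_only": sorted(upbit - bithumb),    # listed only on Upbit
        "bithumb_only": sorted(bithumb - upbit),  # listed only on Bithumb
        "common": sorted(upbit & bithumb),        # tradable on both exchanges
    }

groups = partition_symbols({"BTC", "ETH", "XRP"}, {"BTC", "ETH", "SOL"})
```

The "common" group is the set an agent would consult before calling tools that compare prices across both exchanges.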
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool returns data from multiple exchanges (Upbit and Bithumb) and indicates it's a read operation ('Get'), but doesn't mention behavioral aspects like rate limits, authentication requirements, response format, or whether the data is cached/real-time. The description adds some context but lacks comprehensive behavioral disclosure.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that each serve a distinct purpose: the first states what the tool does, and the second provides usage guidance. There is zero wasted text, and the information is front-loaded with the core functionality stated immediately.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool has zero parameters, an output schema exists (so return values don't need explanation in the description), and it's a relatively simple read operation, the description is mostly complete. It covers purpose and usage context well. The main gap is the lack of behavioral details (like rate limits or response structure), but with an output schema handling return values, this is less critical.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters (schema description coverage is 100%), so there are no parameters to document. The description appropriately doesn't discuss parameters, which is correct for a parameterless tool. It earns a baseline 4 since no parameter information is needed.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get all available trading symbols'), resource ('on Korean exchanges'), and scope ('Upbit, Bithumb, and those common to both'). It distinguishes this tool from siblings like get_kr_prices (which likely returns price data rather than symbol lists) and get_fx_rate (which handles exchange rates).
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'Use this to check which symbols you can query before calling other tools.' This provides clear guidance about its role as a prerequisite check before invoking other trading-related tools, establishing a specific usage context.
get_exchange_alerts (A, Read-only)
Get Korean exchange alerts: new listings, delistings, investment warnings, and caution flags. Detects: INVESTMENT_WARNING, PRICE_FLUCTUATIONS, VOLUME_SOARING, DEPOSIT_SOARING, GLOBAL_PRICE_DIFF, SMALL_ACCOUNTS_CONCENTRATION. New listings/delistings detected by comparing market list changes every 60 seconds. Critical for risk management and early listing detection.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
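The detection method the description names (comparing market-list changes every 60 seconds) amounts to diffing two snapshots. A minimal sketch, with assumed key names:

```python
def detect_listing_changes(previous: set[str], current: set[str]) -> dict[str, list[str]]:
    """Diff two market-list snapshots (taken ~60 seconds apart)."""
    return {
        "new_listings": sorted(current - previous),  # symbols that appeared
        "delistings": sorted(previous - current),    # symbols that vanished
    }

changes = detect_listing_changes({"BTC", "ETH", "XRP"}, {"BTC", "ETH", "SOL"})
```

The warning-flag categories (INVESTMENT_WARNING, VOLUME_SOARING, and so on) would come from the exchanges' own market metadata rather than from this diff.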
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and adds valuable behavioral context: it specifies the types of alerts detected (e.g., INVESTMENT_WARNING), mentions the detection method ('comparing market list changes every 60 seconds'), and highlights the tool's purpose ('risk management and early listing detection'), though it lacks details on rate limits or output format.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, with the first sentence stating the core purpose and subsequent sentences adding necessary details without waste. Every sentence contributes to understanding the tool's function and context.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (alerts with specific types and detection methods), no annotations, and an output schema present, the description is mostly complete: it covers purpose, alert types, detection method, and usage context, but could benefit from more details on output structure or behavioral constraints, though the output schema mitigates this gap.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so the baseline is 4. The description does not add parameter information, but this is acceptable since no parameters exist, and it does not contradict the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Get Korean exchange alerts') and resources ('new listings, delistings, investment warnings, and caution flags'), and it distinguishes itself from siblings by focusing on exchange alerts rather than arbitrage, prices, premiums, or market movers.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context ('Critical for risk management and early listing detection'), but it does not explicitly state when to use this tool versus alternatives like 'get_market_movers' or 'get_available_symbols', nor does it provide exclusions or prerequisites.
get_fx_rate (A, Read-only)
Get current USD/KRW exchange rate. Essential for converting between Korean Won and US Dollar prices.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden but only states what the tool does, not behavioral traits like whether it's real-time or cached data, rate limits, error conditions, or authentication requirements. It's a basic functional description without operational context.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that each earn their place - the first states the core function, the second provides usage context. No wasted words or redundant information.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, has output schema), the description is reasonably complete for understanding when to use it. However, with no annotations on a financial data tool, it could benefit from more behavioral context about data freshness, reliability, or limitations.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so the description doesn't need to compensate for parameter documentation. The baseline for zero parameters is 4, and the description appropriately focuses on the tool's purpose rather than parameters.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verb ('Get') and resource ('current USD/KRW exchange rate'), and distinguishes it from siblings by focusing on this specific currency pair rather than other financial data like kimchi premium or stablecoin premium.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('Essential for converting between Korean Won and US Dollar prices'), but doesn't explicitly state when not to use it or name specific alternatives among the sibling tools.
get_global_vs_korea_divergence (A, Read-only)
Light tier — premium between CoinGecko global price and Upbit Korean price + 1-2 sentence AI interpretation. 25 supported symbols. 60s cache. Returns: prices (global_usd, korea_krw, fx_rate), divergence (premium_pct, direction, magnitude), context_signals (investment_warning, volume_spike_24h), and ai_interpretation (1-2 sentence English summary). $0.05 per call via x402.
Args: symbol: Crypto symbol — supported: BTC, ETH, XRP, SOL, ADA, DOGE, DOT, MATIC, LINK, AVAX, ATOM, UNI, LTC, NEAR, OP, ARB, APT, ALGO, FTM, SUI, TRX, BCH, ETC, HBAR, SHIB
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | No | Crypto symbol (e.g., BTC, ETH, XRP, SOL, ADA, DOGE, DOT, MATIC, LINK, AVAX, ATOM, UNI, LTC, NEAR, OP, ARB, APT, ALGO, FTM, SUI, TRX, BCH, ETC, HBAR, SHIB) | BTC |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
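The divergence fields the description names (premium_pct, direction, magnitude) can be sketched locally. The formula mirrors the stated comparison of a global USD price against an Upbit KRW price; the direction labels and magnitude thresholds below are assumptions for illustration, not the server's actual values:

```python
def interpret_divergence(global_usd: float, korea_krw: float, fx_rate: float) -> dict:
    """Premium of the Korean KRW price over the global USD price.

    Direction labels and magnitude thresholds are illustrative assumptions.
    """
    korea_usd = korea_krw / fx_rate
    premium_pct = (korea_usd - global_usd) / global_usd * 100.0
    magnitude = (
        "large" if abs(premium_pct) >= 3.0
        else "moderate" if abs(premium_pct) >= 1.0
        else "small"
    )
    return {
        "premium_pct": round(premium_pct, 2),
        "direction": "korea_premium" if premium_pct >= 0 else "korea_discount",
        "magnitude": magnitude,
    }

result = interpret_divergence(68_000, 100_000_000, 1_400)
```

The paid tool additionally wraps such numbers in a 1-2 sentence AI interpretation, which this sketch does not attempt to reproduce.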
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond annotations (readOnlyHint, openWorldHint), the description adds cost ($0.05 per call), cache (60s), and detailed return fields. This provides useful behavioral context for invocation decisions.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with purpose, then return fields, cost, and args. It is efficient but slightly redundant with symbol list repetition. Still well-structured.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the output schema exists, the description adequately summarizes return values and adds cost and cache details. It is complete for a tool with one parameter and clear output structure.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the description adds no new parameter meaning beyond the schema. Listing supported symbols is redundant but helpful, so a baseline 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it calculates the premium between CoinGecko global price and Upbit Korean price with AI interpretation. It specifies 25 supported symbols and cache duration, effectively distinguishing it from siblings like get_kimchi_premium.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for divergence analysis but does not explicitly guide when to use this tool versus alternatives like get_kimchi_premium or get_global_vs_korea_divergence_deep. No when-not guidance is provided.
get_global_vs_korea_divergence_deep (A, Read-only)
Deep tier — light data + Korean news signals (Coinness Telegram, 24h window) + structured AI breakdown (drivers, global context, action suggestion, confidence). 5-min cache. Returns light response fields plus: recent_news_signal (korean_news_count_24h, sentiment_score, top_keywords) and ai_deep_analysis (summary, korean_market_drivers, global_context, implied_action_suggestion, confidence). $0.10 per call via x402.
Args: symbol: Crypto symbol — supported: BTC, ETH, XRP, SOL, ADA, DOGE, DOT, MATIC, LINK, AVAX, ATOM, UNI, LTC, NEAR, OP, ARB, APT, ALGO, FTM, SUI, TRX, BCH, ETC, HBAR, SHIB
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | No | Crypto symbol (e.g., BTC, ETH, XRP, SOL, ADA, DOGE, DOT, MATIC, LINK, AVAX, ATOM, UNI, LTC, NEAR, OP, ARB, APT, ALGO, FTM, SUI, TRX, BCH, ETC, HBAR, SHIB) | BTC |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint and openWorldHint, so the description does not need to repeat. It adds valuable behavioral context: '5-min cache', cost ($0.10 per call via x402), and details on returned fields (news_signal, ai_deep_analysis). No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with a front-loaded summary and bullet-style list of returned fields. It is informative without being excessively verbose, though could be slightly more concise.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the existence of an output schema, the description does not need to detail return values. It sufficiently covers purpose, parameters, caching, cost, and the unique deep analysis features. Minor gaps in usage guidance prevent a 5.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The only parameter 'symbol' is fully described in the schema (100% coverage). The description merely lists the same supported symbols, adding no extra meaning beyond the schema's description. Baseline 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Deep tier — light data + Korean news signals (Coinness Telegram, 24h window) + structured AI breakdown'. It specifies the resource (global vs Korea divergence) and distinguishes from the sibling 'get_global_vs_korea_divergence' by adding news and AI analysis.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies a more detailed analysis than its sibling, but does not explicitly state when to use this tool versus alternatives. It provides context like '5-min cache' and '$0.10 per call' but lacks explicit when-not guidance or exclusions.
get_kr_prices (A, Read-only)
Get cryptocurrency prices from Korean exchanges (Upbit, Bithumb). Returns KRW-denominated prices, 24h volume, and change rate.
Args:
- symbol: Crypto symbol (e.g., BTC, ETH, XRP, SOL, DOGE)
- exchange: Exchange to query — 'upbit', 'bithumb', or 'all' for both
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | No | Crypto symbol to query (e.g., BTC, ETH, XRP) | BTC |
| exchange | No | Exchange to query: upbit, bithumb, or all | all |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
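An agent assembling arguments for this tool would apply the documented defaults (symbol BTC, exchange "all") and restrict exchange to its three valid values. A minimal sketch; the validation and upper-casing are assumptions, and the real server may be more lenient:

```python
def build_kr_prices_args(symbol: str = "BTC", exchange: str = "all") -> dict:
    """Build the argument object for get_kr_prices with its documented defaults.

    Validation and upper-casing are illustrative, not server-mandated behavior.
    """
    valid_exchanges = {"upbit", "bithumb", "all"}
    if exchange not in valid_exchanges:
        raise ValueError(f"exchange must be one of {sorted(valid_exchanges)}")
    return {"symbol": symbol.upper(), "exchange": exchange}

args = build_kr_prices_args()          # defaults: BTC across both exchanges
eth_upbit = build_kr_prices_args("eth", "upbit")
```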
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It describes what data is returned (prices, volume, change rate) and the exchange options, but doesn't mention rate limits, authentication requirements, error conditions, or whether this is a read-only operation. It provides basic behavioral context but lacks important operational details.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with a clear purpose statement followed by a well-organized Args section. Every sentence earns its place, providing essential information without redundancy. The two-sentence format is front-loaded with the most important information first.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which handles return values), 2 parameters with good description coverage, and moderate complexity, the description is mostly complete. It covers purpose, parameters, and basic return data. However, for a financial data tool with no annotations, it could benefit from mentioning data freshness, rate limits, or error handling.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by providing complete parameter semantics. It clearly explains both parameters: 'symbol' with specific examples (BTC, ETH, XRP, SOL, DOGE) and 'exchange' with valid values ('upbit', 'bithumb', 'all'). This adds significant value beyond the bare schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get cryptocurrency prices'), identifies the resources ('Korean exchanges: Upbit, Bithumb'), and distinguishes this tool from siblings by specifying it returns KRW-denominated data with volume and change metrics. It doesn't just restate the tool name but provides meaningful differentiation.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool (for Korean exchange crypto prices in KRW), but doesn't explicitly mention when NOT to use it or name specific alternatives among sibling tools. It implies usage for price data rather than other sibling functions like health checks or premium calculations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
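For illustration, a call to get_kr_prices over the MCP Streamable HTTP transport would carry a JSON-RPC `tools/call` payload like the sketch below. The tool name and the `symbol`/`exchange` arguments come from the listing above; the envelope follows the standard MCP request shape, and the specific values are only examples.

```python
import json

# Hypothetical MCP "tools/call" request for get_kr_prices.
# 'symbol' and 'exchange' are the two parameters described above;
# 'exchange' accepts 'upbit', 'bithumb', or 'all' per the listing.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_kr_prices",
        "arguments": {"symbol": "BTC", "exchange": "upbit"},
    },
}

body = json.dumps(request)
print(body)
```

An MCP client library would normally build this envelope for you; the sketch only shows what crosses the wire.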
get_kr_sentimentARead-onlyInspect
Korean crypto market sentiment analysis in English. Combines exchange intelligence (189+ tokens premium, warnings, volume spikes) with Korean news context (Coinness Telegram) for AI-powered real-time insights. First-in-world Korean-to-English crypto sentiment API. Returns: sentiment label, score (-1 to +1), English report, exchange signals, news context. 1-hour cache. $0.05 per call via x402.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and openWorldHint=true, and the description adds valuable behavioral context beyond this: it discloses a '1-hour cache' for data freshness, a cost of '$0.05 per call via x402', and specifics about the return format (sentiment label, score, report, signals, news context). No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose but includes some marketing language (e.g., 'First-in-world') and cost details that, while informative, could be streamlined. Sentences are mostly efficient, but phrases like '189+ tokens premium' are somewhat dense and could be clarified for better conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of sentiment analysis, the description is complete: it covers purpose, data sources, output format, caching, and cost. With annotations providing safety hints and an output schema existing (implied by context signals), the description adequately supplements structured fields without needing to detail return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters and 100% schema description coverage, the baseline is 4. The description compensates by explaining the input implicitly (no parameters needed for this analysis) and adds context about the data sources (exchange intelligence, Korean news) and output details, enhancing understanding beyond the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs 'Korean crypto market sentiment analysis in English' with specific components (exchange intelligence, Korean news context) and distinguishes it from siblings like get_market_read or get_exchange_alerts by focusing on sentiment rather than raw data or alerts. It specifies the unique value proposition as 'First-in-world Korean-to-English crypto sentiment API'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for real-time crypto sentiment insights in the Korean market, but does not explicitly state when to use this tool versus alternatives like get_market_read or get_exchange_alerts. It provides context (e.g., 'AI-powered real-time insights') but lacks direct comparisons or exclusions for sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
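Since get_kr_sentiment documents a score range of -1 to +1 and a sentiment label, a client can sanity-check responses before acting on them. The field names `sentiment` and `score` below are assumptions inferred from the listing's description, not confirmed response keys.

```python
def looks_like_sentiment(payload: dict) -> bool:
    """Sanity-check a get_kr_sentiment-style response.

    Field names ('sentiment', 'score') are assumptions based on the
    listing's description; the server's actual keys may differ.
    """
    score = payload.get("score")
    return (
        isinstance(score, (int, float))
        and -1.0 <= score <= 1.0  # documented score range
        and isinstance(payload.get("sentiment"), str)
    )

sample = {"sentiment": "bullish", "score": 0.42}
print(looks_like_sentiment(sample))
```

A check like this catches schema drift early, which matters for a paid ($0.05 per call) endpoint with no published output schema.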
get_market_moversARead-onlyInspect
Get Korean market movers: 1-minute price surges/crashes (>1%), volume spikes, and top 20 tokens by trading volume on Upbit. Detects rapid price movements and unusual volume activity in Korean crypto markets. Korean retail activity often leads global price movements — early signal for traders.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It describes what the tool detects (price surges/crashes, volume spikes, top tokens) and its relevance as an early signal for traders, but lacks details on rate limits, authentication needs, data freshness, or error handling. It doesn't contradict annotations since none exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with core functionality in the first sentence, followed by additional context. It's efficient but could be slightly tighter by combining the second and third sentences without losing clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (market analysis with multiple criteria), no annotations, and an output schema (which handles return values), the description is reasonably complete. It covers purpose, scope, and relevance, though it could benefit from more behavioral details like update frequency or data sources.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so the schema fully documents the inputs. The description adds no parameter-specific information, which is appropriate here. Baseline is 4 for zero parameters, as no compensation is needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: retrieving Korean market movers with specific criteria (1-minute price surges/crashes >1%, volume spikes, top 20 tokens by trading volume on Upbit). It distinguishes from siblings by focusing on rapid market movements rather than health checks, arbitrage, symbols, alerts, FX rates, premiums, or general prices.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context: detecting rapid price movements and unusual volume activity in Korean crypto markets, with a note that Korean retail activity often leads global price movements. However, it doesn't explicitly state when to use this tool versus alternatives like get_exchange_alerts or get_kr_prices, nor does it provide exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
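The ">1% in one minute" surge/crash rule that get_market_movers describes can be sketched locally. The real tool computes this server-side from Upbit data; the function below only mirrors the stated rule, and the sample prices are made up.

```python
def detect_movers(snapshots: dict, threshold_pct: float = 1.0) -> list:
    """Flag tokens whose price moved more than threshold_pct in one minute.

    snapshots maps token -> (price_one_minute_ago, price_now), both in KRW.
    Mirrors the >1% one-minute surge/crash rule described in the listing.
    """
    movers = []
    for token, (prev, now) in snapshots.items():
        pct = (now - prev) / prev * 100.0
        if abs(pct) > threshold_pct:  # surge or crash past the threshold
            movers.append((token, round(pct, 2)))
    return movers

# Illustrative KRW prices: BTC moved 0.5%, XRP moved 1.5%.
sample = {"BTC": (100_000_000, 100_500_000), "XRP": (800, 812)}
print(detect_movers(sample))
```

Only XRP clears the 1% bar here, so it alone would be reported as a mover.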
get_market_readARead-onlyInspect
AI-powered Korean crypto market analysis. Combines Kimchi Premium, stablecoin premium, FX rate, Upbit/Bithumb volume rankings, Binance funding rate, open interest, BTC dominance, and Fear & Greed index. Returns AI-generated signal (BULLISH/BEARISH/NEUTRAL), confidence score, actionable summary, and all raw data. Price: $0.10 via x402.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=true and openWorldHint=true, indicating safe, flexible operations. The description adds valuable context beyond annotations: it discloses the cost ('Price: $0.10 via x402'), specifies the AI-generated nature of outputs, and lists all integrated data sources, which helps agents understand behavioral scope and constraints not covered by annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by specific data sources and outputs, and ends with pricing. Every sentence adds value: the first defines the tool, the second lists components, the third details returns, and the fourth notes cost. No wasted words, efficiently structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (integrating multiple data sources), rich annotations (readOnlyHint, openWorldHint), and the presence of an output schema, the description is complete. It covers purpose, data inputs, outputs (signal, confidence, summary, raw data), and cost, providing sufficient context without needing to explain return values due to the output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters and 100% schema description coverage, the baseline is 4. The description compensates by explaining that no inputs are needed for this analysis, implicitly covering the empty parameter set, and adds context about the data sources and outputs, enhancing understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'AI-powered Korean crypto market analysis' with a specific verb ('analysis') and resource ('Korean crypto market'). It distinguishes from siblings by listing unique data sources (Kimchi Premium, stablecoin premium, etc.) and outputs (AI-generated signal, confidence score, actionable summary, raw data), unlike simpler sibling tools like get_kimchi_premium or get_fx_rate.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context through its comprehensive data sources and outputs, suggesting it's for holistic market analysis rather than specific metrics. However, it doesn't explicitly state when to use this tool versus alternatives like get_arbitrage_scanner or get_market_movers, nor does it mention exclusions or prerequisites beyond the price note.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
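One of the inputs get_market_read combines, the Kimchi Premium, is simply the percentage gap between a token's KRW price on a Korean exchange and its global USD price converted at the current USD/KRW rate. A minimal sketch of that formula, with illustrative (not live) numbers:

```python
def kimchi_premium(krw_price: float, usd_price: float, usd_krw_rate: float) -> float:
    """Percentage gap between the Korean KRW price and the global USD
    price converted at the USD/KRW rate. Positive means Korean
    exchanges trade at a premium to the global market."""
    global_krw = usd_price * usd_krw_rate
    return (krw_price / global_krw - 1.0) * 100.0

# Illustrative: BTC at 100M KRW locally vs $70,000 globally at 1,400 KRW/USD.
print(round(kimchi_premium(100_000_000, 70_000, 1_400), 2))
```

With these example numbers the global price converts to 98M KRW, so the Korean price carries roughly a 2% premium.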
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.