
kr-crypto-intelligence

Server Details

Korean crypto market data + AI analysis: Kimchi Premium, stablecoin premium, market read.

Status: Healthy
Transport: Streamable HTTP
Repository: bakyang2/kr-crypto-intelligence
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 4.1/5 across 6 of 6 tools scored. Lowest: 3.2/5.

Server Coherence (Grade: A)
Disambiguation: 4/5

Most tools have distinct purposes, but get_kimchi_premium and get_stablecoin_premium could be confused as both measure premiums between Korean and global markets. The descriptions help differentiate them, with kimchi_premium focusing on crypto price differences and stablecoin_premium on fund flow indicators.

Naming Consistency: 5/5

All tools follow a consistent verb_noun pattern with snake_case (e.g., check_health, get_available_symbols, get_fx_rate). The naming is predictable and readable throughout the set.

Tool Count: 5/5

Six tools is well-scoped for a crypto intelligence server focused on Korean markets. Each tool serves a clear purpose, from health checks to specific premium calculations, without feeling excessive or insufficient.

Completeness: 4/5

The toolset covers core intelligence needs for Korean crypto markets, including health, symbols, FX rates, prices, and key premium indicators. A minor gap is the lack of historical data tools, but agents can work with real-time data for typical queries.

Available Tools

6 tools
check_health (Grade: B)

Check service health and exchange connectivity status. Returns status of Upbit, Bithumb, and Binance API connections.

Parameters (JSON Schema)

No parameters

Output Schema

No output parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. While it states what the tool does (check health/connectivity) and which exchanges are covered, it doesn't describe important behavioral aspects: what 'health' means, what specific connectivity metrics are checked, whether this performs active API calls or checks cached status, what authentication might be required, or potential rate limits. The description is minimal and lacks operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and well-structured: two sentences that directly state the tool's purpose and what it returns. Every word earns its place, with no redundant information. The first sentence establishes the core function, and the second provides specific exchange details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that this is a health/status checking tool with zero parameters and an output schema exists, the description is minimally adequate. It tells the agent what the tool does and which exchanges are covered. However, with no annotations and a potentially complex health checking operation, the description could benefit from more context about what constitutes 'health' or 'connectivity status.' The existence of an output schema means return values are documented elsewhere, but the description itself is quite sparse.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters, and schema description coverage is 100% (since there are no parameters to describe). The description appropriately doesn't attempt to explain nonexistent parameters. With no parameters, the baseline score is 4, as there's nothing for the description to add beyond what the empty schema already indicates.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Check service health and exchange connectivity status.' It specifies the verb ('check') and resource ('service health and exchange connectivity status'), and identifies which exchanges are checked (Upbit, Bithumb, Binance). However, it doesn't explicitly differentiate from sibling tools like 'get_available_symbols' or 'get_kr_prices' which might also involve exchange connectivity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, timing considerations, or suggest when this health check is appropriate compared to other tools that fetch data from these exchanges. With sibling tools like 'get_kr_prices' that presumably also require exchange connectivity, there's no differentiation provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_available_symbols (Grade: A)

Get all available trading symbols on Korean exchanges. Returns symbols available on Upbit, Bithumb, and those common to both. Use this to check which symbols you can query before calling other tools.
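The "check symbols before calling other tools" guidance can be sketched as a small client-side guard. This is a hypothetical sketch: `call` stands in for whatever MCP tool-invocation function your client exposes, and the `common` response key is an assumed shape, since the output schema is undocumented.

```python
# Hypothetical guard that validates a symbol against get_available_symbols
# before querying prices. `call(name, args)` is a stand-in for an MCP client
# invocation; the "common" response key is an assumed shape, not documented.

def get_prices_checked(call, symbol: str) -> dict:
    symbols = call("get_available_symbols", {})
    if symbol not in set(symbols.get("common", [])):
        raise ValueError(f"{symbol} is not listed on both Upbit and Bithumb")
    return call("get_kr_prices", {"symbol": symbol, "exchange": "all"})
```

The guard fails fast on unlisted symbols instead of letting a downstream tool return an opaque error.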

Parameters (JSON Schema)

No parameters

Output Schema

No output parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool returns data from multiple exchanges (Upbit and Bithumb) and indicates it's a read operation ('Get'), but doesn't mention behavioral aspects like rate limits, authentication requirements, response format, or whether the data is cached/real-time. The description adds some context but lacks comprehensive behavioral disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with two sentences that each serve a distinct purpose: the first states what the tool does, and the second provides usage guidance. There is zero wasted text, and the information is front-loaded with the core functionality stated immediately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that the tool has zero parameters, an output schema exists (so return values don't need explanation in the description), and it's a relatively simple read operation, the description is mostly complete. It covers purpose and usage context well. The main gap is the lack of behavioral details (like rate limits or response structure), but with an output schema handling return values, this is less critical.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters (schema description coverage is 100%), so there are no parameters to document. The description appropriately doesn't discuss parameters, which is correct for a parameterless tool. It earns a baseline 4 since no parameter information is needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get all available trading symbols'), resource ('on Korean exchanges'), and scope ('Upbit, Bithumb, and those common to both'). It distinguishes this tool from siblings like get_kr_prices (which likely returns price data rather than symbol lists) and get_fx_rate (which handles exchange rates).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'Use this to check which symbols you can query before calling other tools.' This provides clear guidance about its role as a prerequisite check before invoking other trading-related tools, establishing a specific usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_fx_rate (Grade: A)

Get current USD/KRW exchange rate. Essential for converting between Korean Won and US Dollar prices.
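The conversion this rate enables is a single multiply or divide. A minimal sketch, assuming a placeholder rate of 1,400 KRW/USD (a real caller would use the rate returned by this tool):

```python
# Hypothetical conversion helpers; the hard-coded rate is an assumed
# placeholder, not data from get_fx_rate.

def krw_to_usd(amount_krw: float, usd_krw_rate: float) -> float:
    return amount_krw / usd_krw_rate

def usd_to_krw(amount_usd: float, usd_krw_rate: float) -> float:
    return amount_usd * usd_krw_rate

rate = 1_400.0  # assumed USD/KRW rate for illustration
print(krw_to_usd(140_000_000, rate))  # 100000.0
```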

Parameters (JSON Schema)

No parameters

Output Schema

No output parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden but only states what the tool does, not behavioral traits like whether it's real-time or cached data, rate limits, error conditions, or authentication requirements. It's a basic functional description without operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with two sentences that each earn their place - the first states the core function, the second provides usage context. No wasted words or redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, has output schema), the description is reasonably complete for understanding when to use it. However, with no annotations on a financial-data tool, it could benefit from more behavioral context about data freshness, reliability, or limitations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters with 100% schema description coverage, so the description doesn't need to compensate for parameter documentation. The baseline for zero parameters is 4, and the description appropriately focuses on the tool's purpose rather than parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verb ('Get') and resource ('current USD/KRW exchange rate'), and distinguishes it from siblings by focusing on this specific currency pair rather than other financial data like kimchi premium or stablecoin premium.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Essential for converting between Korean Won and US Dollar prices'), but doesn't explicitly state when not to use it or name specific alternatives among the sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_kimchi_premium (Grade: A)

Get real-time Kimchi Premium — the price difference between Korean exchanges (Upbit) and global exchanges (Binance). South Korea ranks top 3 globally in crypto trading volume. A positive premium means Korean traders are paying more than the global market price.

Args: symbol: Crypto symbol (e.g., BTC, ETH, XRP, SOL, DOGE)
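For context, the premium reported here is conventionally the percentage gap between the KRW price and the FX-converted global price. A hypothetical sketch; the formula and all numbers are illustrative assumptions, not this server's code:

```python
# Hypothetical illustration of how a Kimchi Premium is conventionally
# computed; formula and numbers are assumptions, not server internals.

def kimchi_premium_pct(krw_price: float, usd_price: float, usd_krw: float) -> float:
    """Percent premium of the Korean price over the FX-converted global price."""
    global_in_krw = usd_price * usd_krw
    return (krw_price - global_in_krw) / global_in_krw * 100

# Example: BTC at 100,000,000 KRW on Upbit, 70,000 USD on Binance, FX at 1,400
print(round(kimchi_premium_pct(100_000_000, 70_000, 1_400), 2))  # 2.04
```

A positive result matches the description's reading: Korean traders are paying above the global market price.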

Parameters (JSON Schema)

Name      Required  Description  Default
symbol    No        (none)       BTC

Output Schema

No output parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses that this is a read operation ('Get'), provides real-time data, explains what a positive premium means, and gives market context. However, it doesn't mention rate limits, error conditions, data freshness, or authentication requirements that would be helpful for a financial data tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose. Every sentence earns its place: first defines the tool, second provides market context, third explains premium interpretation, fourth documents the parameter. Zero wasted words while maintaining clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has an output schema (so return values are documented elsewhere), 1 parameter with good semantic coverage in the description, and no annotations, the description provides adequate context. It explains what the tool does, when to use it, parameter meaning, and premium interpretation. The main gap is lack of behavioral details like rate limits or error handling.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 0% description coverage, so the description must compensate. It provides clear semantics for the single parameter: 'symbol: Crypto symbol (e.g., BTC, ETH, XRP, SOL, DOGE)' with helpful examples. This adds significant value beyond the bare schema, though it doesn't specify format constraints or validation rules.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get real-time Kimchi Premium') and defines the resource ('price difference between Korean exchanges and global exchanges'). It distinguishes from siblings by focusing on this specific premium calculation rather than general prices, FX rates, or other premium types.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use this tool (to get the Kimchi Premium for crypto symbols) and implies differentiation from siblings like get_kr_prices (Korean prices only) and get_stablecoin_premium (different premium type). However, it doesn't explicitly state when NOT to use it or name specific alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_kr_prices (Grade: A)

Get cryptocurrency prices from Korean exchanges (Upbit, Bithumb). Returns KRW-denominated prices, 24h volume, and change rate.

Args:
  symbol: Crypto symbol (e.g., BTC, ETH, XRP, SOL, DOGE)
  exchange: Exchange to query — 'upbit', 'bithumb', or 'all' for both

Parameters (JSON Schema)

Name      Required  Description  Default
symbol    No        (none)       BTC
exchange  No        (none)       all

Output Schema

No output parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes what data is returned (prices, volume, change rate) and the exchange options, but doesn't mention rate limits, authentication requirements, error conditions, or whether this is a read-only operation. It provides basic behavioral context but lacks important operational details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with a clear purpose statement followed by a well-organized Args section. Every sentence earns its place, providing essential information without redundancy. The two-sentence format is front-loaded with the most important information first.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has an output schema (which handles return values), 2 parameters with good description coverage, and moderate complexity, the description is mostly complete. It covers purpose, parameters, and basic return data. However, for a financial data tool with no annotations, it could benefit from mentioning data freshness, rate limits, or error handling.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by providing complete parameter semantics. It clearly explains both parameters: 'symbol' with specific examples (BTC, ETH, XRP, SOL, DOGE) and 'exchange' with valid values ('upbit', 'bithumb', 'all'). This adds significant value beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get cryptocurrency prices'), identifies the resources ('Korean exchanges: Upbit, Bithumb'), and distinguishes this tool from siblings by specifying it returns KRW-denominated data with volume and change metrics. It doesn't just restate the tool name but provides meaningful differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use this tool (for Korean exchange crypto prices in KRW), but doesn't explicitly mention when NOT to use it or name specific alternatives among sibling tools. It implies usage for price data rather than other sibling functions like health checks or premium calculations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_stablecoin_premium (Grade: A)

Get USDT and USDC premium on Korean exchanges vs official USD/KRW rate. Positive premium = capital flowing INTO Korean crypto market. Negative premium = capital flowing OUT. This is a key indicator of Korean market fund flow direction, separate from Kimchi Premium.
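The sign convention in the description can be illustrated with the conventional stablecoin-premium formula. A hypothetical sketch; the formula and numbers are illustrative assumptions, not this server's implementation:

```python
# Hypothetical stablecoin premium calculation; formula and numbers are
# illustrative, not this server's code.

def stablecoin_premium_pct(stable_krw_price: float, official_usd_krw: float) -> float:
    """Percent gap between a KRW-quoted stablecoin price and the official FX rate."""
    return (stable_krw_price - official_usd_krw) / official_usd_krw * 100

# USDT trading at 1,428 KRW while the official USD/KRW rate is 1,400
print(round(stablecoin_premium_pct(1_428, 1_400), 2))  # 2.0 -> inflow signal
```

A positive value means buyers pay above par for dollar-pegged coins in KRW, consistent with the capital-inflow reading above.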

Parameters (JSON Schema)

No parameters

Output Schema

No output parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds context about what the premium values signify (capital flow direction) and clarifies it's separate from Kimchi Premium, but does not cover aspects like rate limits, error handling, or data freshness. It provides some useful behavioral insight but lacks comprehensive details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by explanatory context in a few efficient sentences. Each sentence adds value by defining premium implications and distinguishing from related concepts, with zero wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 0 parameters, an output schema exists, and no annotations, the description is reasonably complete. It explains what the tool does and the meaning of its results, but could benefit from more behavioral details like data sources or update frequency to fully compensate for the lack of annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately does not discuss parameters, focusing instead on the tool's purpose and output interpretation. This meets the baseline for tools with no parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: retrieving USDT and USDC premium data on Korean exchanges compared to the official USD/KRW rate. It distinguishes itself from sibling tools like 'get_kimchi_premium' by explicitly noting this is a separate indicator, providing specific verb+resource differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use this tool by explaining its role as a key indicator of Korean market fund flow direction, but it does not explicitly state when to choose it over alternatives like 'get_kimchi_premium' or 'get_fx_rate'. The context is clear, but no exclusions or direct comparisons are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
