kr-crypto-intelligence
Server Details
Korean crypto market data + AI analysis: Kimchi Premium, stablecoin premium, market read.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: bakyang2/kr-crypto-intelligence
- GitHub Stars: 0
- Server Listing: kr-crypto-intelligence
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across 6 of 6 tools scored. Lowest: 3.2/5.
Most tools have distinct purposes, but get_kimchi_premium and get_stablecoin_premium could be confused, since both measure premiums between Korean and global markets. The descriptions help differentiate them: kimchi_premium focuses on crypto price differences, while stablecoin_premium focuses on fund-flow indicators.
All tools follow a consistent verb_noun pattern with snake_case (e.g., check_health, get_available_symbols, get_fx_rate). The naming is predictable and readable throughout the set.
Six tools is well-scoped for a crypto intelligence server focused on Korean markets. Each tool serves a clear purpose, from health checks to specific premium calculations, without feeling excessive or insufficient.
The toolset covers core intelligence needs for Korean crypto markets, including health, symbols, FX rates, prices, and key premium indicators. A minor gap is the lack of historical data tools, but agents can work with real-time data for typical queries.
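To make the distinction between the two premium indicators concrete, a kimchi premium is conventionally the percentage gap between a Korean exchange's KRW price (converted through the USD/KRW rate) and the global USD price. The sketch below is an illustration of that formula only; the function name and arguments are assumptions, not the server's actual schema.

```python
def kimchi_premium(kr_price_krw: float, global_price_usd: float, usdkrw: float) -> float:
    """Percentage premium of the Korean price over the global price.

    kr_price_krw: KRW price on a Korean exchange (e.g., Upbit)
    global_price_usd: USD price on a global exchange (e.g., Binance)
    usdkrw: current USD/KRW FX rate
    """
    kr_price_usd = kr_price_krw / usdkrw
    return (kr_price_usd - global_price_usd) / global_price_usd * 100

# Example: BTC at 100,000,000 KRW vs 68,000 USD, at 1,400 KRW/USD
# -> (71,428.57 - 68,000) / 68,000 * 100 ≈ 5.04%
```

A positive value means Korean buyers are paying above the global price; the stablecoin premium applies the same idea to USDT/USDC pairs as a proxy for fund flows.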
Available Tools
6 tools
check_health (Quality: B)
Check service health and exchange connectivity status. Returns status of Upbit, Bithumb, and Binance API connections.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. While it states what the tool does (check health/connectivity) and which exchanges are covered, it doesn't describe important behavioral aspects: what 'health' means, what specific connectivity metrics are checked, whether this performs active API calls or checks cached status, what authentication might be required, or potential rate limits. The description is minimal and lacks operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and well-structured: two sentences that directly state the tool's purpose and what it returns. Every word earns its place, with no redundant information. The first sentence establishes the core function, and the second provides specific exchange details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that this is a health/status checking tool with zero parameters and an output schema exists, the description is minimally adequate. It tells the agent what the tool does and which exchanges are covered. However, with no annotations and a potentially complex health checking operation, the description could benefit from more context about what constitutes 'health' or 'connectivity status.' The existence of an output schema means return values are documented elsewhere, but the description itself is quite sparse.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, and schema description coverage is 100% (since there are no parameters to describe). The description appropriately doesn't attempt to explain nonexistent parameters. With no parameters, the baseline score is 4, as there's nothing for the description to add beyond what the empty schema already indicates.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Check service health and exchange connectivity status.' It specifies the verb ('check') and resource ('service health and exchange connectivity status'), and identifies which exchanges are checked (Upbit, Bithumb, Binance). However, it doesn't explicitly differentiate from sibling tools like 'get_available_symbols' or 'get_kr_prices' which might also involve exchange connectivity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, timing considerations, or suggest when this health check is appropriate compared to other tools that fetch data from these exchanges. With sibling tools like 'get_kr_prices' that presumably also require exchange connectivity, there's no differentiation provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_available_symbols (Quality: A)
Get all available trading symbols on Korean exchanges. Returns symbols available on Upbit, Bithumb, and those common to both. Use this to check which symbols you can query before calling other tools.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool returns data from multiple exchanges (Upbit and Bithumb) and indicates it's a read operation ('Get'), but doesn't mention behavioral aspects like rate limits, authentication requirements, response format, or whether the data is cached/real-time. The description adds some context but lacks comprehensive behavioral disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that each serve a distinct purpose: the first states what the tool does, and the second provides usage guidance. There is zero wasted text, and the information is front-loaded with the core functionality stated immediately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool has zero parameters, an output schema exists (so return values don't need explanation in the description), and it's a relatively simple read operation, the description is mostly complete. It covers purpose and usage context well. The main gap is the lack of behavioral details (like rate limits or response structure), but with an output schema handling return values, this is less critical.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters (schema description coverage is 100%), so there are no parameters to document. The description appropriately doesn't discuss parameters, which is correct for a parameterless tool. It earns a baseline 4 since no parameter information is needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get all available trading symbols'), resource ('on Korean exchanges'), and scope ('Upbit, Bithumb, and those common to both'). It distinguishes this tool from siblings like get_kr_prices (which likely returns price data rather than symbol lists) and get_fx_rate (which handles exchange rates).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'Use this to check which symbols you can query before calling other tools.' This provides clear guidance about its role as a prerequisite check before invoking other trading-related tools, establishing a specific usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
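The "check symbols before calling other tools" guidance can be sketched as a simple client-side gate. The response shape below (`upbit`, `bithumb`, and `common` lists) is an assumption inferred from the description, not the server's documented schema.

```python
def can_query(symbol: str, exchange: str, symbols: dict[str, list[str]]) -> bool:
    """Return True if `symbol` is listed on the requested exchange.

    `symbols` mimics a get_available_symbols response with
    'upbit', 'bithumb', and 'common' lists (assumed shape).
    'all' requires the symbol to be listed on both exchanges.
    """
    key = "common" if exchange == "all" else exchange
    return symbol.upper() in symbols.get(key, [])

# Example with a mocked response: XRP is Upbit-only here,
# so can_query("XRP", "all", symbols) -> False
symbols = {"upbit": ["BTC", "ETH", "XRP"], "bithumb": ["BTC", "ETH"], "common": ["BTC", "ETH"]}
```

An agent following the tool's own guidance would run this check before calling get_kr_prices, avoiding a failed call for an unlisted symbol.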
get_fx_rate (Quality: A)
Get current USD/KRW exchange rate. Essential for converting between Korean Won and US Dollar prices.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden but only states what the tool does, not behavioral traits like whether it's real-time or cached data, rate limits, error conditions, or authentication requirements. It's a basic functional description without operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that each earn their place - the first states the core function, the second provides usage context. No wasted words or redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, has output schema), the description is reasonably complete for understanding when to use it. However, with no annotations and a mutation-adjacent financial tool, it could benefit from more behavioral context about data freshness, reliability, or limitations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so the description doesn't need to compensate for parameter documentation. The baseline for zero parameters is 4, and the description appropriately focuses on the tool's purpose rather than parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verb ('Get') and resource ('current USD/KRW exchange rate'), and distinguishes it from siblings by focusing on this specific currency pair rather than other financial data like kimchi premium or stablecoin premium.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('Essential for converting between Korean Won and US Dollar prices'), but doesn't explicitly state when not to use it or name specific alternatives among the sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
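The conversion this tool supports is plain arithmetic: divide a KRW price by the USD/KRW rate to get USD, multiply to go back. A minimal sketch, with function names chosen here for illustration:

```python
def krw_to_usd(price_krw: float, usdkrw: float) -> float:
    """Convert a KRW-denominated price to USD using the USD/KRW rate."""
    return price_krw / usdkrw

def usd_to_krw(price_usd: float, usdkrw: float) -> float:
    """Convert a USD-denominated price to KRW using the USD/KRW rate."""
    return price_usd * usdkrw

# 1,400,000 KRW at 1,400 KRW/USD -> 1,000 USD
```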
get_kr_prices (Quality: A)
Get cryptocurrency prices from Korean exchanges (Upbit, Bithumb). Returns KRW-denominated prices, 24h volume, and change rate.
Args:
- symbol: Crypto symbol (e.g., BTC, ETH, XRP, SOL, DOGE)
- exchange: Exchange to query: 'upbit', 'bithumb', or 'all' for both
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | No | Crypto symbol (e.g., BTC, ETH, XRP, SOL, DOGE) | BTC |
| exchange | No | Exchange to query: 'upbit', 'bithumb', or 'all' for both | all |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It describes what data is returned (prices, volume, change rate) and the exchange options, but doesn't mention rate limits, authentication requirements, error conditions, or whether this is a read-only operation. It provides basic behavioral context but lacks important operational details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with a clear purpose statement followed by a well-organized Args section. Every sentence earns its place, providing essential information without redundancy. The two-sentence format is front-loaded with the most important information first.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which handles return values), 2 parameters with good description coverage, and moderate complexity, the description is mostly complete. It covers purpose, parameters, and basic return data. However, for a financial data tool with no annotations, it could benefit from mentioning data freshness, rate limits, or error handling.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by providing complete parameter semantics. It clearly explains both parameters: 'symbol' with specific examples (BTC, ETH, XRP, SOL, DOGE) and 'exchange' with valid values ('upbit', 'bithumb', 'all'). This adds significant value beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get cryptocurrency prices'), identifies the resources ('Korean exchanges: Upbit, Bithumb'), and distinguishes this tool from siblings by specifying it returns KRW-denominated data with volume and change metrics. It doesn't just restate the tool name but provides meaningful differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool (for Korean exchange crypto prices in KRW), but doesn't explicitly mention when NOT to use it or name specific alternatives among sibling tools. It implies usage for price data rather than other sibling functions like health checks or premium calculations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
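Based on the Args section above, a caller could validate the documented values before issuing the get_kr_prices call. This is a client-side convention sketch, not server code; the helper name is hypothetical.

```python
VALID_EXCHANGES = {"upbit", "bithumb", "all"}

def build_kr_prices_args(symbol: str = "BTC", exchange: str = "all") -> dict:
    """Build an arguments dict for get_kr_prices, enforcing the documented
    exchange values ('upbit', 'bithumb', 'all') and normalizing the symbol.
    Defaults mirror the parameter table above (symbol=BTC, exchange=all)."""
    exchange = exchange.lower()
    if exchange not in VALID_EXCHANGES:
        raise ValueError(f"exchange must be one of {sorted(VALID_EXCHANGES)}")
    return {"symbol": symbol.upper(), "exchange": exchange}
```

Rejecting unknown exchange values up front gives the agent a clear local error instead of an opaque server-side failure.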
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.