kr-crypto-intelligence
Server Quality Checklist
- Disambiguation 4/5
Most tools have distinct purposes: health checks, symbol lists, FX rates, Kimchi premium, KR prices, market analysis, and stablecoin premium. However, get_kimchi_premium and get_stablecoin_premium both measure price differentials (crypto vs. global and stablecoin vs. FX), which could cause minor confusion without careful reading of descriptions.
- Naming Consistency 5/5
All tools follow a consistent verb_noun naming pattern (e.g., check_health, get_available_symbols, get_fx_rate). The pattern is uniform across all seven tools, making them predictable and easy to parse.
- Tool Count 5/5
A set of seven tools is well-scoped for a crypto intelligence server focused on Korean markets. Each tool serves a clear, non-redundant function, from basic data retrieval to advanced AI analysis, covering the domain effectively without bloat.
- Completeness 4/5
The toolset covers key aspects: health, symbols, FX rates, price data, premium indicators, and market analysis. A minor gap is the lack of historical data tools (e.g., for trend analysis), but agents can work around this using the provided real-time and analysis tools.
Average 4.2/5 across 7 of 7 tools scored.
See the tool scores section below for per-tool breakdowns.
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v0.1.0
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
Add a glama.json file to provide metadata about your server.
- This server provides 7 tools.
No known security issues or vulnerabilities reported.
This server has been verified by its author.
Add related servers to improve discoverability.
Tool Scores
check_health
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. While it states what the tool returns ('status of Upbit, Bithumb, and Binance API connections'), it doesn't disclose important behavioral traits such as whether it performs active API calls (which could be rate-limited), which specific health metrics are checked, authentication requirements, or error handling behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two focused sentences. The first sentence establishes the core purpose, and the second specifies exactly what gets returned. Every word earns its place with zero redundancy or unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (zero parameters, health check purpose) and the presence of an output schema (which should document return values), the description provides adequate context. It clearly states what the tool does and which services it monitors. However, for a health monitoring tool with no annotations, it could benefit from mentioning whether this performs active API calls or checks cached status.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, so schema description coverage is vacuously 100%. The description appropriately doesn't waste space discussing non-existent parameters. A baseline of 4 is appropriate for a zero-parameter tool where the schema fully documents the empty parameter set.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Check service health and exchange connectivity status') and identifies the exact resources involved ('Upbit, Bithumb, and Binance API connections'). It distinguishes itself from sibling tools like get_fx_rate or get_kr_prices by focusing on health monitoring rather than market data retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context through the mention of specific exchange APIs, suggesting this tool should be used when monitoring connectivity to those particular services. However, it provides no explicit guidance on when to use this versus alternatives, nor does it mention prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_fx_rate
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It describes the tool as fetching a 'current' rate, implying real-time or latest data, but doesn't disclose behavioral traits like rate limits, data sources, update frequency, or error handling. The description adds some context (e.g., it's for conversion purposes) but lacks details on operational constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: the first sentence states the core purpose, and the second adds contextual value without redundancy. Both sentences earn their place by clarifying usage, making it efficient and well-structured with zero waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (0 parameters, no annotations, but has an output schema), the description is reasonably complete. It explains what the tool does and its primary use case. Since an output schema exists, the description doesn't need to detail return values. However, it could benefit from more behavioral context, such as data freshness or limitations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% description coverage, so no parameters need documentation. The description doesn't add parameter details beyond the schema, but this is acceptable given the lack of parameters. It implies the tool returns a rate for a fixed currency pair (USD/KRW), which aligns with the schema's simplicity.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get current USD/KRW exchange rate.' It specifies the verb ('Get') and resource ('USD/KRW exchange rate'), making it unambiguous. However, it doesn't explicitly differentiate from sibling tools like 'get_kimchi_premium' or 'get_stablecoin_premium', which might also involve exchange rates but focus on specific metrics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: 'Essential for converting between Korean Won and US Dollar prices.' This implies it's the go-to tool for USD/KRW conversions. However, it doesn't explicitly state when not to use it or name alternatives among siblings, such as using 'get_kimchi_premium' for premium calculations instead.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_kr_prices
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses what data is returned (prices, volume, change rate) and the exchange options, but doesn't mention rate limits, authentication requirements, error conditions, or whether this is a real-time or delayed feed. Some behavioral context is provided, but it is not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Perfectly structured with a clear purpose statement followed by a concise Args section. Every sentence earns its place: the first establishes scope and return values, the second explains parameters with examples. Zero wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which handles return values), 2 parameters with good description coverage, and moderate complexity, the description is mostly complete. It could benefit from mentioning any limitations or edge cases, but covers the essential purpose and parameters well.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 5/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by explaining both parameters thoroughly. It defines 'symbol' with concrete examples (BTC, ETH, XRP, SOL, DOGE) and 'exchange' with valid values ('upbit', 'bithumb', 'all'), adding crucial meaning beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get cryptocurrency prices'), resources ('from Korean exchanges'), and scope ('KRW-denominated prices, 24h volume, and change rate'). It distinguishes itself from siblings like get_fx_rate or get_kimchi_premium by focusing on direct price data rather than derived metrics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when Korean exchange prices are needed, but doesn't explicitly state when to use this tool versus alternatives like get_market_read or get_available_symbols. No guidance on prerequisites or exclusions is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_kimchi_premium
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It explains what the tool calculates (real-time price difference) and provides useful context about what a positive premium means. However, it doesn't disclose important behavioral traits like rate limits, data freshness guarantees, error conditions, or authentication requirements. The description adds value but leaves significant behavioral aspects unspecified.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and appropriately sized. The first sentence states the core purpose, followed by contextual information about South Korea's trading volume and premium interpretation. The Args section is clearly separated. Most sentences earn their place, though the South Korea ranking sentence, while informative, isn't strictly necessary for tool selection and could be considered slightly extraneous.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (single parameter, calculation-based), no annotations, but with an output schema present, the description is reasonably complete. It explains what the tool does, provides parameter semantics, and establishes context. The output schema will handle return values, so the description doesn't need to explain those. However, it could benefit from more behavioral transparency details given the lack of annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 5/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds substantial meaning beyond the input schema, which has 0% description coverage. It explains that 'symbol' represents a 'Crypto symbol' and provides concrete examples (BTC, ETH, XRP, SOL, DOGE). This clarifies the parameter's purpose and format, which the schema alone (with just type and default) doesn't provide. For a single parameter with no schema descriptions, this is excellent compensation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get real-time Kimchi Premium — the price difference between Korean exchanges (Upbit) and global exchanges (Binance).' It specifies the exact calculation (price difference), identifies the specific exchanges involved (Upbit vs. Binance), and distinguishes it from siblings like get_fx_rate or get_stablecoin_premium by focusing on this particular crypto arbitrage metric.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool by explaining what Kimchi Premium represents and when it's positive. However, it doesn't explicitly state when NOT to use it or name specific alternatives among sibling tools (e.g., when to use get_kr_prices instead). The context about South Korea's trading volume ranking helps establish relevance but doesn't provide explicit usage boundaries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
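The review above describes the metric in words. The server does not publish its exact formula, but a minimal sketch, assuming the premium is the Korean KRW price measured against the FX-converted global USD price, looks like this:

```python
def kimchi_premium(krw_price: float, usd_price: float, usd_krw_rate: float) -> float:
    """Percent premium of the Korean exchange price over the global price.

    Converts the global USD price into KRW at the official FX rate, then
    measures how far the Korean KRW price sits above (or below) it.
    (Illustrative only; the server's actual computation may differ.)
    """
    global_price_krw = usd_price * usd_krw_rate
    return (krw_price - global_price_krw) / global_price_krw * 100

# BTC at 100,000,000 KRW on Upbit, 70,000 USD on Binance, 1,400 KRW/USD:
# the FX-converted global price is 98,000,000 KRW, so the premium is ~2.04%.
print(round(kimchi_premium(100_000_000, 70_000, 1_400), 2))
```

This also shows why get_fx_rate matters as a sibling tool: the premium cannot be computed without a trusted USD/KRW rate.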
get_market_read
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool is 'AI-powered' and returns an 'AI-generated signal,' which adds useful context about its analytical nature. However, it lacks details on rate limits, error handling, or authentication requirements, and the price mention hints at a transactional aspect but doesn't fully clarify operational constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in a single sentence that front-loads the core purpose ('AI-powered Korean crypto market analysis') and then lists all integrated data sources and outputs. Every element (e.g., data sources, return values, price) adds value without redundancy, making it highly concise and well-organized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (aggregating multiple data sources with AI analysis) and the presence of an output schema, the description is mostly complete. It details all input data sources and output components (signal, confidence score, etc.), and the output schema likely covers return values. However, it could better address behavioral aspects like cost implications or usage limits to be fully comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so the schema fully documents the input requirements. The description adds no parameter-specific information, which is acceptable given the absence of parameters. A baseline of 4 is appropriate as it doesn't need to compensate for any gaps, but it doesn't enhance beyond the schema's completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool performs 'AI-powered Korean crypto market analysis' and lists all specific data sources it combines (Kimchi Premium, stablecoin premium, etc.), making the purpose highly specific and comprehensive. It clearly distinguishes from sibling tools like get_kimchi_premium or get_fx_rate by aggregating multiple metrics into a unified analysis rather than returning individual data points.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying it's for 'Korean crypto market analysis' and includes a price, suggesting it's a paid service for comprehensive insights. However, it doesn't explicitly state when to use this tool versus alternatives (e.g., vs. get_kr_prices for just prices), nor does it mention any prerequisites or exclusions, leaving some ambiguity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_available_symbols
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions the tool returns symbols from specific exchanges (Upbit, Bithumb, and common ones), which adds useful context beyond a basic 'get' operation. However, it lacks details on rate limits, error handling, or response format, leaving behavioral gaps for a tool with no annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the core purpose and followed by usage guidance. Every sentence adds value without redundancy, making it efficient and well-structured for quick understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 0 parameters, 100% schema coverage, and an output schema exists, the description is reasonably complete. It explains what the tool does and when to use it, though it could benefit from more behavioral details (e.g., response structure or limitations) to fully compensate for the lack of annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters, and schema description coverage is 100%, so no parameter documentation is needed. The description does not add parameter semantics, but this is acceptable given the lack of parameters, warranting a baseline score above 3 for simplicity.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Get') and resource ('all available trading symbols on Korean exchanges'), and distinguishes it from siblings by specifying it returns symbols from Upbit, Bithumb, and common ones. This helps the agent understand it's for listing symbols, not fetching prices or other data like sibling tools do.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 5/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('to check which symbols you can query before calling other tools'), clearly positioning it as a prerequisite step for the other query tools, which is helpful for the agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_stablecoin_premium
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses key behavioral traits: it's a read operation (implied by 'Get'), provides financial data, and explains the economic interpretation of results (positive/negative premium meaning). However, it doesn't mention rate limits, error handling, or data freshness, leaving some gaps in operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence. Subsequent sentences add valuable interpretation without waste. Each sentence earns its place: the second explains premium direction, the third ties it to market flows, and the fourth distinguishes from siblings. It's efficiently structured and appropriately sized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (financial indicator), no annotations, but an output schema exists, the description is largely complete. It explains what the tool does, when to use it, and how to interpret results. However, it could mention data sources or update frequency for full context. The output schema likely covers return values, so this is adequate but not exhaustive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so the description doesn't need to add parameter details and the baseline is high. It implicitly confirms no inputs are required by focusing solely on output interpretation. No compensation is needed, and it avoids redundancy, earning a 4 for appropriate handling.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get USDT and USDC premium on Korean exchanges vs official USD/KRW rate.' It specifies the exact resources (USDT, USDC), the comparison (Korean exchanges vs official rate), and distinguishes it from sibling 'get_kimchi_premium' by noting it's 'separate from Kimchi Premium.' This is specific, avoids tautology, and differentiates from alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 5/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: as 'a key indicator of Korean market fund flow direction.' It distinguishes from sibling 'get_kimchi_premium' by noting it's 'separate from Kimchi Premium,' providing clear alternatives. The context of positive/negative premium interpretation further guides usage in analyzing capital flow direction, making it highly actionable.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
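As with the Kimchi Premium, the server does not publish its formula. A minimal sketch, assuming the premium is simply the stablecoin's KRW market price measured against the official USD/KRW rate:

```python
def stablecoin_premium(stable_krw_price: float, usd_krw_rate: float) -> float:
    """Percent deviation of a stablecoin's KRW market price from the official
    USD/KRW rate. (Illustrative only; the server's computation may differ.)
    """
    return (stable_krw_price - usd_krw_rate) / usd_krw_rate * 100

# USDT trading at 1,435 KRW against an official rate of 1,400 KRW/USD: a 2.5% premium.
print(round(stablecoin_premium(1_435, 1_400), 2))
```

Note the difference from the Kimchi Premium sketch: this compares a single KRW quote against the FX rate directly, with no global crypto price involved, which is why the two tools measure distinct things.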
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
```json
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}
```
Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six weighted dimensions, yielding a per-tool Tool Definition Quality Score (TDQS): Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
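The weighting scheme above can be sketched directly. The dimension weights, the 60/40 aggregation, the 70/30 split, and the tier cutoffs are taken from the text; the exact rounding used in production may differ:

```python
# Dimension weights for the per-tool Tool Definition Quality Score (TDQS).
TDQS_WEIGHTS = {
    "purpose": 0.25,
    "usage_guidelines": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

def tool_tdqs(scores: dict[str, float]) -> float:
    """Weighted 1-5 score for a single tool across the six dimensions."""
    return sum(scores[dim] * w for dim, w in TDQS_WEIGHTS.items())

def overall_quality(tool_scores: list[dict[str, float]], coherence: float) -> float:
    """Combine definition quality (70%) with server coherence (30%).

    Definition quality is 60% mean TDQS + 40% minimum TDQS, so one poorly
    described tool drags the whole server down.
    """
    tdqs = [tool_tdqs(s) for s in tool_scores]
    definition_quality = 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score: float) -> str:
    """Letter tier from the overall score: A >= 3.5, B >= 3.0, C >= 2.0, D >= 1.0."""
    for grade, cutoff in (("A", 3.5), ("B", 3.0), ("C", 2.0), ("D", 1.0)):
        if score >= cutoff:
            return grade
    return "F"
```

For example, a tool scored Purpose 5, Usage 3, Behavior 2, Parameters 4, Conciseness 5, Completeness 4 has a TDQS of 5(.25) + 3(.20) + 2(.20) + 4(.15) + 5(.10) + 4(.10) = 3.75.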
Latest Blog Posts
MCP directory API
We provide all the information about MCP servers via our MCP API.
```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/bakyang2/kr-crypto-intelligence'
```
If you have feedback or need assistance with the MCP directory API, please join our Discord server.