Korean Agriculture Market Data
Korean wholesale agriculture market data - auction prices, seasonal produce, market trends

Server Details

| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
| Repository | SongT-50/korean-agriculture-mcp |
| GitHub Stars | 0 |
| Server Listing | korean-agriculture-mcp |
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.8/5 across all 6 scored tools. Lowest: 2.9/5.
Each tool addresses a distinct aspect: market price comparison, auction summary statistics, market list, price trends, realtime auction data, and product price search. The descriptions clearly differentiate their purposes, minimizing confusion.
All tool names follow a consistent verb_noun pattern using snake_case (e.g., compare_market_prices, get_auction_summary). The verbs are distinct and the nouns accurately describe the returned data.
With 6 tools, the set is well-scoped for the agricultural market data domain. Each tool covers a necessary operation without redundancy or excessive granularity.
The tool surface provides comprehensive coverage of market data operations: listing markets, comparing prices, viewing trends, accessing real-time auctions, and searching by product. Only minor enhancements, such as longer historical trend windows, are missing; core workflows are fully supported.
Available Tools
6 tools

compare_market_prices (Grade: C)

Compares nationwide wholesale market prices for a specific product.

Args:
product_keyword: Product keyword (e.g., "사과" (apple), "딸기" (strawberry), "배추" (napa cabbage))
date: Settlement date (YYYY-MM-DD). An empty string means today.

Returns:
A nationwide per-market price comparison (average, highest, and lowest price, plus trading volume)

Input Schema

| Name | Required | Description | Default |
|---|---|---|---|
| date | No | | |
| product_keyword | Yes | | |

Output Schema

| Name | Required | Description |
|---|---|---|
| result | Yes | |
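For illustration, here is a minimal sketch of the `tools/call` request an MCP client could send for this tool. The payload shape follows the standard MCP JSON-RPC format; the variable name and argument values are illustrative, with the keyword taken from the documented examples.

```typescript
// Hypothetical tools/call payload for compare_market_prices.
const compareRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "compare_market_prices",
    arguments: {
      product_keyword: "사과", // "apple", one of the documented example keywords
      date: "",                // empty string = today's settlement date
    },
  },
};
```

The response would carry a single `result` field, matching the output schema above.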
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are absent, and the description does not disclose behavioral traits such as data freshness, rate limits, error handling, or side effects. It only lists basic parameters and return fields.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (4 lines after the main sentence) and front-loaded with the purpose. It efficiently uses space but could benefit from a more structured format with clear sections.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low parameter count, 0% schema coverage, and no annotations, the description covers the core functionality but omits details such as the output structure (only briefly summarized) and error handling. It is adequate for simple usage but not fully self-contained.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description adds meaning by explaining product_keyword as a product keyword with examples and date as a settlement date with its format and default (an empty string means today). However, it does not delimit the allowable values for product_keyword.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's function: comparing wholesale market prices across the country for a specific item. It mentions parameters and return fields, but lacks explicit differentiation from sibling tools like get_price_trend or search_product_price.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. The description does not provide context, prerequisites, or exclusion criteria. Sibling tools are not referenced.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_auction_summary (Grade: A)

Summarizes wholesale market auction data by category and by market.

Args:
market_code: Wholesale market code. An empty string means nationwide.
date: Settlement date (YYYY-MM-DD). An empty string means today.

Returns:
Average price and transaction count per category, plus a per-market trading summary

Input Schema

| Name | Required | Description | Default |
|---|---|---|---|
| date | No | | |
| market_code | No | | |

Output Schema

| Name | Required | Description |
|---|---|---|
| result | Yes | |
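A sketch of a matching request, again assuming the standard `tools/call` shape; both arguments are left empty here to exercise the documented defaults (nationwide, today).

```typescript
// Hypothetical payload: empty strings trigger the documented defaults.
const summaryRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: {
    name: "get_auction_summary",
    arguments: {
      market_code: "", // empty string = all markets nationwide
      date: "",        // empty string = today
    },
  },
};
```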
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must convey behavioral traits. It reads as a read-only operation (summarizing data) and specifies input defaults (empty codes mean nationwide), but it does not mention authentication, rate limits, or its non-destructive nature. The behavior is adequately described for a query tool, though not exhaustively.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact and well-structured with a one-line overview followed by Args and Returns sections. No extraneous words; every part contributes to understanding the tool's purpose and usage.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool has only two optional parameters and an output schema (not shown), the description provides sufficient context to use it effectively. It covers the return summary (average price, transaction count, market status). Minor gaps include lack of error handling or date range constraints, but overall it is complete enough.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 0% description coverage, but the description's Args section explains both parameters, including defaults (empty strings imply 'nationwide' or 'today'). This adds significant value beyond the raw schema, effectively compensating for the lack of schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool as summarizing auction data by category and by market. It distinguishes itself from siblings like get_realtime_auction (live data) and get_price_trend (trends over time), making its purpose specific and unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains parameter defaults but does not provide explicit guidance on when to use this tool versus its siblings. For example, it could mention that this is for aggregate statistics while get_realtime_auction is for live data. The usage is implied but not clearly delineated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_market_list (Grade: A)

Retrieves the list of nationwide public wholesale markets and their top-level category codes.

Args:
region: Region filter (e.g., "대전" (Daejeon), "서울" (Seoul), "부산" (Busan)). An empty string means nationwide.

Returns:
A list of wholesale market codes and a list of category codes

Input Schema

| Name | Required | Description | Default |
|---|---|---|---|
| region | No | | |

Output Schema

| Name | Required | Description |
|---|---|---|
| result | Yes | |
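Since the market and category codes returned here feed the other tools' `market_code` and `category_code` parameters, a typical first call might look like this sketch (the region value is illustrative):

```typescript
// Hypothetical payload: fetch market and category codes for one region.
const marketListRequest = {
  jsonrpc: "2.0",
  id: 3,
  method: "tools/call",
  params: {
    name: "get_market_list",
    arguments: {
      region: "서울", // "Seoul"; use "" to list every market nationwide
    },
  },
};
```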
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the output (market codes and category codes) but does not explicitly state that it is read-only or mention behavioral traits such as rate limits. The 'get' prefix implies safety, but explicit confirmation is lacking.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is concise with three sentences: main purpose, argument details, return summary. Every sentence adds value with no redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool is simple with one parameter and an output schema. Description adequately covers input/output. Minor improvement would be explicitly noting read-only nature, but overall complete for its complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, but the description compensates by explaining the only parameter 'region' with examples and default behavior. This adds substantial meaning beyond the schema's empty structure.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves national public wholesale market lists and category codes, with a region filter. It distinguishes from sibling tools like compare_market_prices and get_auction_summary, which focus on prices or summaries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains the region parameter with examples (e.g., 'Seoul', 'Busan') and notes empty string for nationwide. However, it does not specify when to use this tool versus alternatives or provide exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_price_trend (Grade: A)

Retrieves a product's price trend over the last N days.

Args:
product_keyword: Product keyword (e.g., "사과" (apple), "딸기" (strawberry))
market_code: Wholesale market code (an empty string means nationwide)
days: Lookup period (default 7 days, maximum 30)

Returns:
The daily average price trend

Input Schema

| Name | Required | Description | Default |
|---|---|---|---|
| days | No | | |
| market_code | No | | |
| product_keyword | Yes | | |

Output Schema

| Name | Required | Description |
|---|---|---|
| result | Yes | |
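A sketch of a two-week trend query; note that the `days` value must stay within the documented 30-day ceiling.

```typescript
// Hypothetical payload: 14-day nationwide trend for strawberries.
const trendRequest = {
  jsonrpc: "2.0",
  id: 4,
  method: "tools/call",
  params: {
    name: "get_price_trend",
    arguments: {
      product_keyword: "딸기", // "strawberry"
      market_code: "",         // empty string = nationwide
      days: 14,                // default 7, documented maximum 30
    },
  },
};
```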
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states the output (daily average price trend) and a constraint (max 30 days) but lacks disclosure of read-only nature, authentication needs, or potential side effects. The behavioral detail is minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise and well-structured: a one-sentence purpose followed by clear Args and Returns sections. Every sentence serves a purpose, no fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (3 parameters, no nested objects) and presence of an output schema, the description adequately covers inputs and output. It could mention return value format or error conditions, but the existing coverage is sufficient for a straightforward tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, meaning the schema provides no parameter descriptions. The description compensates fully by explaining each parameter with examples (e.g., '사과' (apple) for product_keyword), defaults, and constraints (a maximum of 30 days for days), adding significant value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that it retrieves the price trend over the last N days for a product, with a specific verb and resource. It differentiates itself from sibling tools like compare_market_prices by focusing on trends over time, though it does not explicitly name alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides parameter usage details (e.g., examples for product_keyword, defaults for days and market_code) but does not offer explicit guidance on when to use this tool versus siblings or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_realtime_auction (Grade: A)

Retrieves the real-time auction status of wholesale markets nationwide. Serves 63,000+ nationwide auction records in real time.

Args:
market_code: Wholesale market code (e.g., "250003" = Daejeon Noeun, "110001" = Seoul Garak, "220001" = Daegu Bukbu). An empty string means nationwide.
category_code: Top-level category code (e.g., "06" = fruits, "10" = leafy vegetables, "12" = seasoning vegetables). An empty string means all categories.
date: Settlement date (YYYY-MM-DD). An empty string means today.
num_results: Number of records to return (default 50, maximum 1,000)

Returns:
Real-time auction data (product, price, quantity, specification, market, corporation, and origin)

Input Schema

| Name | Required | Description | Default |
|---|---|---|---|
| date | No | | |
| market_code | No | | |
| num_results | No | | |
| category_code | No | | |

Output Schema

| Name | Required | Description |
|---|---|---|
| result | Yes | |
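A sketch of a filtered real-time query, reusing the code examples given in the description (Seoul Garak market, fruit category); the request shape is the standard `tools/call` form.

```typescript
// Hypothetical payload: today's fruit auctions at the Seoul Garak market.
const realtimeRequest = {
  jsonrpc: "2.0",
  id: 5,
  method: "tools/call",
  params: {
    name: "get_realtime_auction",
    arguments: {
      market_code: "110001",  // Seoul Garak, per the documented examples
      category_code: "06",    // fruits
      date: "",               // empty string = today
      num_results: 100,       // default 50, documented maximum 1000
    },
  },
};
```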
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions the tool provides real-time data nationwide, the number of results limit (default 50, max 1000), and that empty parameters mean 'all'. However, it does not disclose data freshness, ordering, rate limits, or authorization requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and well-structured: a brief opening sentence, a bulleted Args list, and a Returns summary. Every sentence serves a purpose, and the essential information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 4 parameters, no annotations, and an existing output schema, the description adequately covers the input parameters. The Returns section gives a general idea of output fields. It could mention pagination or data ordering, but overall it is complete enough for an agent to use correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 0% description coverage, but the description thoroughly explains all four parameters: market_code (with examples), category_code (with examples), date (with format), and num_results (with default and max). This adds significant value beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool retrieves the real-time auction status of wholesale markets nationwide. It provides specific examples of market and category codes, which helps distinguish it from sibling tools like compare_market_prices or get_auction_summary.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains each parameter with examples and defaults, but it does not explicitly state when to use this tool versus alternatives or when not to use it. The usage context is clear given the parameter explanations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_product_price (Grade: A)

Searches nationwide wholesale market auction prices by product keyword.

Args:
product_keyword: Product keyword (e.g., "사과" (apple), "딸기" (strawberry), "배추" (napa cabbage), "고추" (chili pepper), "포도" (grape))
market_code: Wholesale market code (an empty string searches nationwide)
date: Settlement date (YYYY-MM-DD). An empty string means today.
num_results: Number of records to return (default 100, maximum 1,000)

Returns:
Per-product price information plus a per-market average/highest/lowest price summary

Input Schema

| Name | Required | Description | Default |
|---|---|---|---|
| date | No | | |
| market_code | No | | |
| num_results | No | | |
| product_keyword | Yes | | |

Output Schema

| Name | Required | Description |
|---|---|---|
| result | Yes | |
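A final sketch, searching a single market by keyword; the market code reuses a documented example (Daejeon Noeun), and all values are illustrative.

```typescript
// Hypothetical payload: napa cabbage prices at the Daejeon Noeun market.
const searchRequest = {
  jsonrpc: "2.0",
  id: 6,
  method: "tools/call",
  params: {
    name: "search_product_price",
    arguments: {
      product_keyword: "배추", // "napa cabbage"
      market_code: "250003",   // Daejeon Noeun, per the documented examples
      date: "",                // empty string = today
      num_results: 50,         // default 100, documented maximum 1000
    },
  },
};
```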
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description explains default behaviors (an empty market_code means nationwide, an empty date means today) and the return format (price info plus summary). However, it lacks details on rate limits, authentication, and error handling, and no annotations compensate for that.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with a clear purpose, followed by a structured bullet list of arguments and return format. Every sentence adds value without excess.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (4 params, 1 required, output schema present), the description covers purpose, parameter defaults, and return structure. It could include error handling or market code limitations, but is largely complete for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description compensates well by detailing each parameter, including examples for product_keyword and default behaviors for market_code, date, and num_results. This adds significant meaning beyond the schema's default values.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches national wholesale market auction prices by product keyword. It provides specific verb and resource, and distinguishes from sibling tools like compare_market_prices and get_auction_summary by focusing on keyword-based search with market-level summaries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. It does not specify when not to use it or mention scenarios where other sibling tools would be more appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming lets you:

- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:

- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:

- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not otherwise have
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:

- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.