Korean Public Data
Server Details
Korean government open data - weather, population, law search via data.go.kr
- Status: Unhealthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: SongT-50/korean-public-data-mcp
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across 6 of 6 tools scored.
Each tool targets a distinct domain: business registration, air quality, economic statistics, real estate trades, weather forecast, and supported options. No two tools overlap in purpose.
All tools follow a clear verb_noun pattern (check_, get_, list_) in snake_case. Names are predictable and uniform.
Six tools is a well-scoped count for a public data server spanning multiple sectors: not so few as to feel thin, nor so many as to overwhelm.
The server covers key public data areas (business, air quality, economy, real estate, weather) but is missing some common categories such as transportation or population data. The inclusion of a list_supported_options tool helps mitigate these gaps.
Available Tools
6 tools

check_business_registration
Checks business status by business registration number.
Args:
business_numbers: list of business registration numbers (e.g., ["1234567890", "0987654321"]). 10 digits without hyphens. Up to 100 numbers.
Returns:
Registration status for each business (계속사업자 active, 휴업자 suspended, 폐업자 closed, etc.)

| Name | Required | Description | Default |
|---|---|---|---|
| business_numbers | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
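As a concrete illustration of the documented constraints (10-digit numbers without hyphens, at most 100 per call), here is a minimal Python sketch that validates the input and builds an MCP-style `tools/call` payload. The helper name is hypothetical; only the constraints come from the tool description.

```python
import json
import re

def build_business_check_request(business_numbers, request_id=1):
    """Build a JSON-RPC tools/call payload for check_business_registration.

    Enforces the documented rules: each number is exactly 10 digits
    with no hyphens, and at most 100 numbers per call.
    """
    if len(business_numbers) > 100:
        raise ValueError("at most 100 business numbers per call")
    for number in business_numbers:
        if not re.fullmatch(r"\d{10}", number):
            raise ValueError(f"invalid business number: {number!r}")
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "check_business_registration",
            "arguments": {"business_numbers": business_numbers},
        },
    }

payload = build_business_check_request(["1234567890", "0987654321"])
print(json.dumps(payload, ensure_ascii=False))
```

Validating before the call avoids burning a round trip on input the server would reject anyway.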
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions the query operation and possible return values (e.g., '계속사업자', an active business), but does not disclose rate limits, authentication needs, or error behavior. The tool is read-only by nature, but this is not stated explicitly.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with a clear structure: a one-line summary followed by Args and Returns. Every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity and the existence of an output schema, the description is largely complete. It lists possible statuses and parameter details. However, typical use case context (e.g., business verification) and error handling are missing.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds essential semantics beyond the schema: specifies format (10 digits without hyphens), maximum 100 items, and provides an example. Since schema coverage is 0%, this is invaluable for correct invocation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb '조회' (query) and resource '사업 상태' (business status) using business registration numbers. It is distinct from sibling tools which cover unrelated domains like air quality or weather.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains how to use the tool with Args and Returns but does not specify when to use it versus alternatives or any prerequisites. Usage is implied by the function, but no explicit guidance is given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_air_quality
Retrieves real-time air quality (fine dust, ultrafine dust, ozone, etc.).
Args:
location: region name (e.g., "서울", "강남", "부산", "제주"). 15 major regions supported.
Returns:
PM10, PM2.5, ozone, nitrogen dioxide, carbon monoxide, and sulfur dioxide levels with their grades

| Name | Required | Description | Default |
|---|---|---|---|
| location | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
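The single `location` parameter can be guarded client-side before calling. A minimal sketch, assuming a hypothetical subset of the 15 supported regions (the authoritative list would come from `list_supported_options`):

```python
# Illustrative subset only; the server supports 15 regions in total.
SUPPORTED_LOCATIONS = {"서울", "강남", "부산", "제주"}

def air_quality_arguments(location):
    """Return tools/call arguments for get_air_quality, rejecting unknown regions."""
    location = location.strip()
    if location not in SUPPORTED_LOCATIONS:
        raise ValueError(f"unsupported location: {location!r}")
    return {"name": "get_air_quality", "arguments": {"location": location}}

print(air_quality_arguments("서울"))
```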
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It states the tool returns real-time data and lists output pollutants, but it does not disclose behavioral traits such as data freshness, rate limits, or potential side effects. The read-only nature is implied but not explicit.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with a clear header, Args, and Returns sections. Every sentence adds value, and it is front-loaded with the primary action and resource.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (one parameter, no nested objects) and the presence of an output schema, the description is complete. It covers the parameter, return values (pollutants and grades), and regional scope.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% coverage for the 'location' parameter, but the description adds significant meaning by providing examples ('서울', '강남', '부산', '제주') and noting coverage of 15 major regions. This compensates well for the schema gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb '조회' (query) and the resource '실시간 대기질' (real-time air quality), listing specific pollutants. It distinguishes itself from sibling tools like weather forecast or economic stats by focusing on air quality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context through the Args section, explaining the 'location' parameter with examples and mentioning support for 15 major regions. However, it does not explicitly state when not to use this tool or provide alternative suggestions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_economic_stats
Retrieves Bank of Korea economic statistics.
Args:
indicator: name of the economic indicator. Supported: 기준금리 (base rate), 소비자물가지수 (CPI), 실업률 (unemployment rate), GDP성장률 (GDP growth), 수출액 (exports), 수입액 (imports), 원달러환율 (KRW/USD exchange rate), 코스피 (KOSPI)
period: query period. "latest" (last 12 months), "2025" (a specific year), "202501-202602" (an explicit range)
Returns:
Time-series data for the given indicator

| Name | Required | Description | Default |
|---|---|---|---|
| period | No | | latest |
| indicator | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
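The three documented `period` formats can be distinguished with a small parser. A sketch under the stated formats only; the function name and return shape are illustrative:

```python
import re

def parse_period(period="latest"):
    """Classify the 'period' argument of get_economic_stats.

    Returns a (kind, value) tuple for the three documented formats:
    "latest" (last 12 months), "YYYY" (a specific year), and
    "YYYYMM-YYYYMM" (an explicit range).
    """
    if period == "latest":
        return ("latest", None)
    if re.fullmatch(r"\d{4}", period):
        return ("year", period)
    match = re.fullmatch(r"(\d{6})-(\d{6})", period)
    if match:
        return ("range", (match.group(1), match.group(2)))
    raise ValueError(f"unrecognized period format: {period!r}")

print(parse_period("202501-202602"))
```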
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It describes a query operation ('조회') and notes it returns time series data, implying read-only behavior. However, it does not explicitly state safety, permissions, or side effects, which is a minor gap for a data retrieval tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with a single-sentence purpose followed by well-structured Args and Returns sections. Every word adds value, and the front-loaded purpose ensures quick understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema (so return values are documented elsewhere), the description covers input parameters adequately. It may lack explicit mention of error handling or authentication, but for a straightforward query tool, the completeness is high.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning the schema alone provides minimal info (just strings). The description compensates fully by listing eight specific indicator options and three period format examples, adding meaning far beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it queries Bank of Korea economic statistics and lists specific indicators, making the purpose unmistakable. It distinguishes itself from sibling tools which are unrelated domains like business registration or weather.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides parameter usage (indicator and period formats) but does not explicitly state when to use this tool versus alternatives. The sibling tools are distinct, so the context is clear, but no exclusions or when-not-to guidance is given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_real_estate_trades
Retrieves actual apartment transaction prices.
Args:
district: district name (e.g., "강남구", "서초구", "성남시분당구"). Seoul's 25 districts plus major Gyeonggi/metropolitan areas supported.
year_month: year and month to query (e.g., "202602"). YYYYMM format.
Returns:
Apartment transaction records for the given region and period (complex name, area, price, floor, transaction date)

| Name | Required | Description | Default |
|---|---|---|---|
| district | Yes | | |
| year_month | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
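The YYYYMM format for `year_month` is easy to get wrong, so a client-side check is worthwhile. A hypothetical sketch; only the format rule comes from the description:

```python
import re

def real_estate_arguments(district, year_month):
    """Build arguments for get_real_estate_trades, enforcing the YYYYMM format."""
    if not re.fullmatch(r"\d{4}(0[1-9]|1[0-2])", year_month):
        raise ValueError(f"year_month must be YYYYMM, got {year_month!r}")
    return {"district": district, "year_month": year_month}

print(real_estate_arguments("강남구", "202602"))
```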
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden. It describes the tool as a read operation ('조회합니다') and lists return fields. However, it does not disclose any potential side effects, rate limits, authentication needs, or behavior on no results. This is adequate but lacks detail.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with a clear structure: a one-sentence purpose statement followed by structured Args and Returns sections. Every sentence adds value without unnecessary detail.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has an output schema, so the description's mention of return fields is a bonus. It covers the core functionality and parameter constraints. However, it does not address edge cases or error handling, which would improve completeness for a tool with no annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage, so the description must compensate. It does so by explaining the district parameter with examples and scope, and the year_month parameter with format and example. This adds significant meaning beyond the schema's type 'string'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it queries apartment real transaction prices. It specifies the required parameters (district, year_month) and what it returns (complex name, area, price, floor, transaction date). The sibling tools are all unrelated, so this tool is easily distinguished.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides example values for both parameters and mentions the supported regions (Seoul 25 districts and major Gyeonggi/metropolitan cities). It does not explicitly state when not to use this tool or alternative tools, but given the sibling tools are very different, the context is clear enough.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_weather_forecast
Retrieves short-term weather forecasts by city.
Args:
city: city name (e.g., "서울", "부산", "제주", "수원"). 25 major cities supported.
hours_ahead: how many hours ahead to forecast (default 24, maximum 72)
Returns:
Hourly temperature, precipitation probability, sky condition, and other weather data

| Name | Required | Description | Default |
|---|---|---|---|
| city | Yes | | |
| hours_ahead | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
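The documented default (24) and maximum (72) for `hours_ahead` can be applied before the call. A minimal sketch; the lower bound of 1 is an assumption not stated in the description:

```python
def weather_arguments(city, hours_ahead=None):
    """Arguments for get_weather_forecast using the documented bounds."""
    if hours_ahead is None:
        hours_ahead = 24  # documented default
    if not 1 <= hours_ahead <= 72:  # 72 is the documented max; min of 1 is assumed
        raise ValueError("hours_ahead must be between 1 and 72")
    return {"city": city, "hours_ahead": hours_ahead}

print(weather_arguments("서울"))
```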
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must cover behavioral traits. It describes return fields (temperature, precipitation, sky condition) but omits data freshness, rate limits, or side effects. Being read-only is implied but not explicit.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is concise with two clear sections (Args, Returns), no redundant information, and front-loaded purpose. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple 2-parameter tool with an output schema, the description provides sufficient context: supported cities, parameter constraints, and return fields. Could list the 25 cities or data update frequency for full completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0% (no descriptions), but the description compensates by explaining city as '도시 이름 (예: 서울)' and hours_ahead with default and max values, adding essential meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb '조회합니다' (retrieves) and the resource '단기 날씨 예보' (short-term weather forecast), distinguishing it from sibling tools like check_business_registration or get_air_quality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides parameter-specific guidance (city examples, hours_ahead defaults and max) but does not explicitly state when to use this tool versus others, though no alternative weather tool exists among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_supported_options
Lists the cities, regions, and economic indicators supported by this MCP server.
Returns:
Supported options for each tool

| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
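A natural pattern is to call this tool first and validate parameters for the other tools against its response. A hypothetical sketch; the response shape shown here is invented for illustration:

```python
def pick_supported(requested, supported):
    """Return the requested value if the server lists it as supported, else raise."""
    if requested not in supported:
        raise ValueError(f"{requested!r} not supported; choose from {supported}")
    return requested

# Invented example of what a list_supported_options response might contain:
options = {"cities": ["서울", "부산", "제주"], "indicators": ["기준금리", "코스피"]}
city = pick_supported("부산", options["cities"])
print(city)
```

Discovering valid values up front prevents wasted calls to the data tools with inputs the server would reject.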
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It mentions the return value but does not disclose any behavioral traits (e.g., read-only nature, caching, rate limits, or authentication needs). The simple nature of the tool (no parameters) mitigates the lack of detail, but the description is minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: one sentence plus a return statement. No unnecessary words or repetition. It is front-loaded with the purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 0 parameters and an output schema exists, the description is sufficiently complete. It explains what the tool returns and its purpose. No additional context is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters, and schema description coverage is 100% (vacuously). As per guidelines, 0 params defaults to baseline 4. The description does not add parameter meaning as there are none.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool lists supported cities, regions, and economic indicators for this MCP server. It uses a specific verb ('확인합니다' - check) and resource ('도시, 지역, 경제지표 목록' - list of cities, regions, economic indicators), and distinguishes itself from sibling tools which are all data retrieval tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide any guidance on when or when not to use this tool. It does not mention alternatives or indicate that this tool should be called before other tools to determine valid parameters. The usage context is only implied, not explicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.