
korean-privacy-law-mcp

by scvcoder

get_law_history

Retrieve all enforcement-date versions of a Korean law (current, historical, and pending) with metadata. Track the amendment timeline of frequently amended laws such as PIPA by obtaining the version history, then query the text of a specific version.

Instructions

Law version history list (Ministry of Government Legislation lawSearch, target=eflaw). Returns every enforcement-date version of one law, distinguished as current, historical, or pending. Each version exposes mst, promulgation date, enforcement date, and amendment type; pass mst to get_historical_law(mst) to retrieve the text in force at that point. Essential for tracking point-in-time branches of frequently amended laws such as PIPA. Next steps: get_historical_law(mst) for the text at a specific point in time, compare_old_new for an old-vs-new comparison.
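The tool wraps the National Law Information Center's lawSearch endpoint. A minimal sketch of building such a request URL, assuming the conventional OC (caller ID), type, query, and display parameters of that public Open API; the exact parameter set used by this server is an assumption:

```python
from urllib.parse import urlencode

# Base endpoint of the public lawSearch Open API (assumed; the server's
# actual upstream URL and parameter names are not documented here).
BASE = "http://www.law.go.kr/DRF/lawSearch.do"

def law_history_url(law_name: str, display: int = 100, oc: str = "YOUR_OC") -> str:
    """Build a version-history query URL for one law."""
    params = {
        "OC": oc,            # caller ID issued when registering for the API
        "target": "eflaw",   # enforcement-date law list, i.e. version history
        "type": "JSON",      # response format
        "query": law_name,   # exact law name (abbreviations accepted)
        "display": display,  # number of results; default 100
    }
    return f"{BASE}?{urlencode(params)}"

print(law_history_url("개인정보 보호법"))
```

`urlencode` percent-encodes the Korean law name, so the resulting URL is safe to pass to any HTTP client.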

Input Schema

Name    | Required | Description                                                                                              | Default
lawName | Yes      | Law name (exact match; abbreviations accepted). Returns all enforcement-date versions (current, historical, pending) with the same name. |
display | No       | Number of results; sufficient to retrieve a law's full version history.                                  | 100
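Once the history list is returned, "point-in-time branch tracking" reduces to picking the version whose enforcement date is the latest one not after the date of interest. A sketch under hypothetical field names (`mst`, `enforcement_date`); the actual response schema of get_law_history is not documented here:

```python
from datetime import date

# Illustrative version entries as get_law_history might return them;
# field names and values are assumptions, not the real API schema.
versions = [
    {"mst": "a", "enforcement_date": date(2020, 8, 5)},
    {"mst": "b", "enforcement_date": date(2023, 9, 15)},
    {"mst": "c", "enforcement_date": date(2025, 3, 13)},  # pending version
]

def version_in_force(versions, on):
    """Return the version in force on `on`: the latest enforcement date <= `on`."""
    in_force = [v for v in versions if v["enforcement_date"] <= on]
    return max(in_force, key=lambda v: v["enforcement_date"]) if in_force else None

print(version_in_force(versions, date(2024, 1, 1))["mst"])  # → b
```

The selected entry's `mst` is what you would then hand to get_historical_law to fetch the text as of that date.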
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided. The description mentions output fields but does not explicitly state read-only nature, rate limits, or auth requirements. It conveys retrieval behavior but leaves gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Five sentences with no redundancy. Purpose is at the start, output details follow, and next steps are clear. Slightly more compact than necessary but still efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and two simple parameters, the description explains the tool's role and output fields. It covers usage context and sibling relations, though error handling is omitted.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage. The description adds output context but does not enhance parameter meaning beyond the schema. Baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns all versions of a law by enforcement date, including current, historical, and pending. It differentiates from siblings like get_historical_law and compare_old_new by explaining the workflow.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides a specific use case (tracking the point-in-time branches of frequently amended laws such as PIPA) and directs the agent to the next steps. It lacks explicit when-not-to-use scenarios but offers clear context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/scvcoder/korean-privacy-law-mcp'
