
compare_regulation_timeline

Compare amendment timelines of regulations on a specified topic across institutions. Identify enactment dates, amendment frequency, and the institution with the most recent revision.

Instructions

[ALIO] Compare enactment/amendment timelines of regulations on the same topic (e.g., personnel regulations, leave of absence, recruitment) across institutions. Identify enactment dates, amendment frequency, and the institution with the most recent amendment.

Input Schema

topic (required): Regulation topic to compare (e.g., '인사규정' (personnel regulations), '휴직' (leave of absence), '블라인드 채용' (blind recruitment)).
institutions (optional): List of institution codes/names to compare. If omitted, all collected institutions are used automatically. If the user names specific institutions, pass those names/codes as an array.
maxPerInstitution (required, default 1): Maximum number of matching regulations per institution; by default, the single most relevant match.
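As a sketch of how an agent might fill these parameters, the snippet below builds an illustrative argument set and a minimal client-side check mirroring the schema. The topic value and the institution code "A110100" are placeholders, not real directory entries:

```python
# Illustrative argument set for compare_regulation_timeline.
# Parameter names come from the input schema above; the values are examples only.
args = {
    "topic": "인사규정",          # personnel regulations (required)
    "institutions": ["A110100"],  # optional; omit to use all collected institutions
    "maxPerInstitution": 1,       # cap on matched regulations per institution
}

def validate_args(a: dict) -> list:
    """Minimal client-side check mirroring the required fields above."""
    errors = []
    if not a.get("topic"):
        errors.append("topic is required")
    if "institutions" in a and not isinstance(a["institutions"], list):
        errors.append("institutions must be a list")
    mpi = a.get("maxPerInstitution", 1)
    if not isinstance(mpi, int) or mpi < 1:
        errors.append("maxPerInstitution must be a positive integer")
    return errors
```

Such a pre-flight check is optional; the server will apply its own schema validation, but catching a missing `topic` locally saves a round trip.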
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the burden of behavioral disclosure. It states that the tool compares timelines and identifies enactment timing, revision frequency, and the institution with the most recent revision, indicating a read-only operation. However, it does not explicitly state that it is idempotent or free of side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief and front-loaded with the core purpose. Two sentences efficiently convey the tool's function without unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite the lack of an output schema, the description adequately outlines what the tool produces (a comparison of timelines, enactment timing, revision frequency, and the most recently revising institution). It provides enough context for an agent to understand the output, though finer details of the result format are not specified.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema provides 100% description coverage for all three parameters. The tool description adds context by explaining the overall purpose but does not add significant parameter-level details beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly specifies the verb 'compare' and the resource: timelines of regulations on the same topic across institutions. It distinguishes itself from sibling comparison tools like compare_alio_articles and compare_old_new by focusing on timeline comparison rather than article text or version differences.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool should be used when comparing revision timelines across institutions on the same topic, but it does not explicitly state when to prefer it over alternatives or list exclusions. No alternative tools are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/workbookbulb863/korean-law-alio-mcp'
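The same GET can be issued from Python with only the standard library. A minimal sketch, using the URL shown above and assuming nothing about the response shape:

```python
import urllib.request

SERVER_URL = "https://glama.ai/api/mcp/v1/servers/workbookbulb863/korean-law-alio-mcp"

def build_request(url: str = SERVER_URL) -> urllib.request.Request:
    """Build the GET request equivalent to the curl command above."""
    return urllib.request.Request(url, method="GET")

# To actually fetch the directory entry (network access required):
#     with urllib.request.urlopen(build_request()) as resp:
#         body = resp.read().decode("utf-8")
```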

If you have feedback or need assistance with the MCP directory API, please join our Discord server.