find_similar_regulations

Search for regulations in other institutions with similar titles. Compare and benchmark your institution's rules against similar ones elsewhere.

Instructions

[ALIO] Finds N regulations from other institutions whose titles closely match a given base regulation (1:N matching). Answers "which other institutions have something like our regulation?" for direct benchmarking.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| institution | Yes | Base institution code (apbaId) or part of the institution name | |
| regId | No | Base regulation ID; provide either this or title | |
| title | No | Partial match on the base regulation's title; usable instead of regId | |
| threshold | No | Similarity lower bound (0-1) | 0.4 |
| excludeBase | No | Exclude other regulations of the base institution (when true, only other institutions are searched) | true |
| max | No | Maximum number of results (1-50) | 10 |
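The defaulting and cross-field behavior in the table can be sketched in plain TypeScript. This is an illustrative stand-in, not server code: the function name `normalizeInput` is hypothetical, and it mirrors the defaults the server's Zod schema applies plus the "one of regId or title" check the handler performs.

```typescript
interface FindSimilarRegulationsInput {
  institution: string
  regId?: string
  title?: string
  threshold: number
  excludeBase: boolean
  max: number
}

// Hypothetical normalization mirroring the documented defaults and the
// handler's requirement that regId or title be present.
function normalizeInput(raw: {
  institution: string
  regId?: string
  title?: string
  threshold?: number
  excludeBase?: boolean
  max?: number
}): FindSimilarRegulationsInput {
  if (!raw.regId && !raw.title) {
    throw new Error("Either regId or title is required")
  }
  return {
    institution: raw.institution,
    regId: raw.regId,
    title: raw.title,
    threshold: raw.threshold ?? 0.4,
    excludeBase: raw.excludeBase ?? true,
    max: raw.max ?? 10,
  }
}
```

With only `institution` and `title` supplied, the normalized input searches other institutions at threshold 0.4 and returns at most 10 matches.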

Implementation Reference

  • The main handler function for find_similar_regulations. Takes a base regulation and an institution, finds regulations from other institutions with similar titles using titleSimilarity, and returns matches sorted by similarity score.
    export async function findSimilarRegulations(
      _api: LawApiClient,
      input: FindSimilarRegulationsInput
    ): Promise<ToolResponse> {
      try {
        const idx = await loadIndex()
        const baseInst = findInstitution(idx, input.institution)
        if (!baseInst) {
          return {
            content: [{ type: "text", text: `기준 기관을 찾을 수 없습니다: ${input.institution}` }],
            isError: true,
          }
        }
    
        const manifest = idx.manifests.get(baseInst.apbaId)
        if (!manifest) {
          return {
            content: [{ type: "text", text: `기준 기관의 manifest 없음: ${baseInst.apbaId}` }],
            isError: true,
          }
        }
    
        if (!input.regId && !input.title) {
          return {
            content: [{ type: "text", text: "regId 또는 title 중 하나는 필수입니다." }],
            isError: true,
          }
        }
    
        // Find the base regulation
        let baseReg = input.regId
          ? manifest.regulations.find((r) => r.regId === input.regId)
          : undefined
        if (!baseReg && input.title) {
          const t = input.title.toLowerCase()
          baseReg = manifest.regulations.find((r) => r.title.toLowerCase().includes(t))
        }
        if (!baseReg) {
          return {
            content: [
              {
                type: "text",
                text: `기준 규정을 찾을 수 없습니다. list_alio_regulations(institution="${baseInst.apbaId}") 로 확인하세요.`,
              },
            ],
            isError: true,
          }
        }
    
        // Compute the similarity of every other regulation against the base
        interface Hit {
          score: number
          apbaId: string
          apbaNa: string
          regId: string
          title: string
          category?: string
        }
    
        const hits: Hit[] = []
        for (const { inst, entry } of idx.flatRegulations) {
          if (input.excludeBase && inst.apbaId === baseInst.apbaId) continue
          // Always exclude the base regulation itself
          if (inst.apbaId === baseInst.apbaId && entry.regId === baseReg.regId) continue
    
          const score = titleSimilarity(baseReg.title, entry.title)
          if (score < input.threshold) continue
    
          hits.push({
            score,
            apbaId: inst.apbaId,
            apbaNa: inst.apbaNa,
            regId: entry.regId,
            title: entry.title,
            category: entry.category,
          })
        }
    
        hits.sort((a, b) => b.score - a.score)
        const sliced = hits.slice(0, input.max)
    
        const lines: string[] = []
        lines.push(`# 유사 규정 검색`)
        lines.push("")
        lines.push(`## 기준`)
        lines.push(`- 기관: [${baseInst.apbaId}] ${baseInst.apbaNa}`)
        lines.push(`- 규정: ${baseReg.title} (regId=${baseReg.regId}${baseReg.category ? `, category=${baseReg.category}` : ""})`)
        lines.push(`- 유사도 하한: ${input.threshold} | excludeBase: ${input.excludeBase}`)
        lines.push("")
        lines.push(`## 매칭 (${hits.length}건 / 표시 ${sliced.length}건)`)
    
        if (sliced.length === 0) {
          lines.push("- 조건을 만족하는 유사 규정이 없습니다. threshold 를 낮춰보세요.")
        } else {
          for (const h of sliced) {
            const cat = h.category ? ` [${h.category}]` : ""
            lines.push(`- [유사도 ${h.score.toFixed(2)}] [${h.apbaId}] ${h.apbaNa} — ${h.title}${cat} (regId=${h.regId})`)
          }
        }
    
        return { content: [{ type: "text", text: truncateResponse(lines.join("\n")) }] }
      } catch (err) {
        return formatToolError(err, "find_similar_regulations")
      }
    }
  • Zod schema defining inputs: institution (required), regId/title (one required), threshold (default 0.4), excludeBase (default true), max (default 10).
    export const FindSimilarRegulationsSchema = z.object({
      institution: z.string().describe("기준 기관 코드(apbaId) 또는 기관명 일부"),
      regId: z.string().optional().describe("기준 규정 ID. title 과 둘 중 하나"),
      title: z.string().optional().describe("기준 규정 제목 부분일치. regId 대신 사용 가능"),
      threshold: z.number().min(0).max(1).default(0.4).describe("유사도 하한 (0~1, 기본:0.4)"),
      excludeBase: z.boolean().default(true).describe("기준 기관의 다른 규정 제외 (기본:true — 다른 기관만 검색)"),
      max: z.number().min(1).max(50).default(10).describe("최대 결과 수 (기본:10)"),
    })
  • Registration entry in the tool registry mapping the name 'find_similar_regulations' to its schema and handler.
    {
      name: "find_similar_regulations",
      description: "[ALIO] 한 기준 규정과 제목 유사도가 높은 다른 기관 규정 N개 검색 (1:N 매칭). '우리 규정이랑 비슷한 거 다른 기관에선?' — 직접 벤치마킹용.",
      schema: FindSimilarRegulationsSchema,
      handler: findSimilarRegulations
    },
  • Import statement importing findSimilarRegulations and FindSimilarRegulationsSchema from the find-similar module.
    import { findSimilarRegulations, FindSimilarRegulationsSchema } from "./tools/alio/find-similar.js"
  • Helper imports: findInstitution/loadIndex for loading institution index data, titleSimilarity for computing title similarity scores.
    import { findInstitution, loadIndex } from "../../lib/alio/index-loader.js"
    import { titleSimilarity } from "../../lib/alio/compare.js"
    import { truncateResponse } from "../../lib/schemas.js"
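`titleSimilarity` is imported from `lib/alio/compare.js` but its body is not shown on this page. As a rough illustration of what such a score could look like, here is a hypothetical character-bigram Dice coefficient; the actual implementation may differ.

```typescript
// Hypothetical stand-in for titleSimilarity: Dice coefficient over
// character bigrams, ignoring case and whitespace. Returns a score in [0, 1].
function bigrams(s: string): Map<string, number> {
  const text = s.toLowerCase().replace(/\s+/g, "")
  const grams = new Map<string, number>()
  for (let i = 0; i < text.length - 1; i++) {
    const g = text.slice(i, i + 2)
    grams.set(g, (grams.get(g) ?? 0) + 1)
  }
  return grams
}

function titleSimilarity(a: string, b: string): number {
  const ga = bigrams(a)
  const gb = bigrams(b)
  // Degenerate case: titles too short to form bigrams
  if (ga.size === 0 || gb.size === 0) return a === b ? 1 : 0
  let overlap = 0
  let totalA = 0
  let totalB = 0
  for (const n of ga.values()) totalA += n
  for (const n of gb.values()) totalB += n
  for (const [g, n] of ga) overlap += Math.min(n, gb.get(g) ?? 0)
  return (2 * overlap) / (totalA + totalB)
}
```

Note that the handler skips candidates with `score < input.threshold`, so a score exactly equal to the threshold is included in the results.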
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears full responsibility for behavioral disclosure. It mentions the matching is 1:N, excludes base institution by default, and uses a similarity threshold. However, it omits details such as what happens when no similar regulations are found, potential rate limits, authentication requirements, or the exact nature of the return values. This is adequate but not comprehensive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise: two sentences that front-load the core action and use case. It is efficient and avoids fluff, but could be slightly more structured with bullet points or separated sections to improve scanability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description lacks detail about the output format (e.g., does it return regulation IDs, titles, similarity scores?) and does not mention pagination, sorting, or error handling. Given the absence of an output schema and the tool's moderate complexity (6 parameters, 4 required), the description is insufficient for an agent to fully understand the tool's capabilities and expected results.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so the schema alone documents all parameters. The description provides a high-level framing (benchmarking, 1:N matching) but does not add significant detail beyond the schema's parameter descriptions. It repeats that 'threshold' is a similarity lower bound and 'excludeBase' excludes the same institution, which is already in the schema. Thus, baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action: searching for N regulations from other institutions with high title similarity to a given base regulation, explicitly for benchmarking. It uses a specific verb ('검색') and resource ('유사도가 높은 다른 기관 규정'), and the 1:N matching concept distinguishes it from siblings like 'find_similar_precedents' or 'suggest_alio_benchmark'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description hints at the use case ('직접 벤치마킹용') but does not explicitly specify when to use this tool over alternatives like 'suggest_alio_benchmark' or 'chain_alio_benchmark'. It provides implied context but lacks clear guidance on when-not-to-use or explicit alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
