
chain_alio_benchmark

Benchmark a public institution in Korea by combining its profile, topic-matched regulations, and peer gap analysis into one comprehensive report.

Instructions

[ALIO chain] Comprehensive single-institution benchmarking: profile, topic-matched regulations, and peer gap analysis in one call. The starting point for questions like "How does our institution stand on ○○?"

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| institution | Yes | Base institution (apbaId or name), i.e. "our institution" | (none) |
| topic | No | Topic keyword of interest (e.g. '인사' (HR), '징계' (discipline)); if omitted, the category distribution is used instead | (none) |
| max | No | Maximum number of entries shown per section | 8 |
| similarityThreshold | No | Lower bound on title similarity when matching peer regulations | 0.4 |
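
For orientation, a hypothetical set of arguments might look like the sketch below; the apbaId "C0011" is made up, and any real identifier or institution name from the ALIO index would be used instead.

    // Hypothetical example arguments for chain_alio_benchmark; "C0011" is a made-up apbaId.
    const exampleArgs = {
      institution: "C0011",     // apbaId or institution name
      topic: "인사",             // optional; omit to fall back to the category distribution
      max: 8,                   // defaults to 8 when omitted
      similarityThreshold: 0.4, // defaults to 0.4 when omitted
    }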

Implementation Reference

  • The main handler function `chainAlioBenchmark` that executes the benchmark tool logic. It takes a LawApiClient and the parsed input, loads the regulation index, resolves the base institution, and produces the profile, topic-matching, and peer gap analysis sections.
    export async function chainAlioBenchmark(
      _api: LawApiClient,
      input: ChainAlioBenchmarkInput
    ): Promise<ToolResponse> {
      try {
        const idx = await loadIndex()
        const baseInst = findInstitution(idx, input.institution)
        if (!baseInst) {
          return {
            content: [{ type: "text", text: `기관을 찾을 수 없습니다: ${input.institution}` }],
            isError: true,
          }
        }
        const baseManifest = idx.manifests.get(baseInst.apbaId)
        if (!baseManifest) {
          return {
            content: [{ type: "text", text: `${baseInst.apbaNa} — manifest 없음` }],
            isError: true,
          }
        }
    
        const baseRegs = baseManifest.regulations
        const topic = input.topic?.trim().toLowerCase()
    
        // [Section 1] Profile
        const catCount = new Map<string, number>()
        for (const r of baseRegs) {
          const c = r.category || "(미분류)"
          catCount.set(c, (catCount.get(c) || 0) + 1)
        }
        const topCats = [...catCount.entries()].sort((a, b) => b[1] - a[1]).slice(0, input.max)
    
        const profile: string[] = []
        profile.push(`## 1. 기관 프로파일 — [${baseInst.apbaId}] ${baseInst.apbaNa}`)
        profile.push(`- 유형: ${baseInst.typeNa || "(미상)"} | 주무부처: ${baseInst.jidtNa || "(미상)"}`)
        profile.push(`- 보유 규정: ${baseRegs.length}건`)
        profile.push("- 분류 분포 top:")
        topCats.forEach(([c, n]) => profile.push(`  - ${c}: ${n}건`))
    
        // [Section 2] Topic-matched regulations (only if a topic was given)
        const topicSection: string[] = []
        if (topic) {
          const matched = baseRegs.filter((r) => r.title.toLowerCase().includes(topic))
          topicSection.push(`## 2. 토픽 "${input.topic}" 매칭 규정 (${matched.length}건)`)
          const sliced = matched.slice(0, input.max)
          if (sliced.length === 0) {
            topicSection.push("- 우리 기관에는 해당 토픽 규정이 없음 — 동종 기관 벤치마킹 권장")
          } else {
            for (const r of sliced) {
              topicSection.push(`- ${r.title} (regId=${r.regId}${r.category ? `, ${r.category}` : ""})`)
            }
          }
        } else {
          topicSection.push(`## 2. 토픽 미지정 — 토픽 매칭 섹션 생략 (입력의 topic 인자로 활성화)`)
        }
    
        // [Section 3] Peer gap analysis: regulations our institution does not have
        const gapSection: string[] = []
        gapSection.push(`## 3. 동종 기관에는 있으나 우리에게 없는 규정 후보 (top ${input.max})`)
        const peers = getCollectedInstitutions(idx).filter((p) => p.apbaId !== baseInst.apbaId)
        const baseTitles = baseRegs.map((r) => r.title)
    
        interface Gap {
          title: string
          score: number // low similarity to every regulation of the base institution
          examples: Array<{ apbaId: string; apbaNa: string; regId: string }>
        }
        const gaps = new Map<string, Gap>()
        for (const peer of peers) {
          const peerManifest = idx.manifests.get(peer.apbaId)
          if (!peerManifest) continue
          // Topic filter
          const peerRegs = topic
            ? peerManifest.regulations.filter((r) => r.title.toLowerCase().includes(topic))
            : peerManifest.regulations
          for (const peerReg of peerRegs) {
            // Similarity to the most similar regulation of the base institution
            const maxSim = baseTitles.length
              ? Math.max(...baseTitles.map((t) => titleSimilarity(peerReg.title, t)))
              : 0
            if (maxSim >= input.similarityThreshold) continue // a similar regulation exists, so not a gap
            const key = peerReg.title
            const existing = gaps.get(key)
            if (existing) {
              if (existing.examples.length < 3) {
                existing.examples.push({ apbaId: peer.apbaId, apbaNa: peer.apbaNa, regId: peerReg.regId })
              }
            } else {
              gaps.set(key, {
                title: peerReg.title,
                score: 1 - maxSim,
                examples: [{ apbaId: peer.apbaId, apbaNa: peer.apbaNa, regId: peerReg.regId }],
              })
            }
          }
        }
        const topGaps = [...gaps.values()]
          .sort((a, b) => b.examples.length - a.examples.length)
          .slice(0, input.max)
        if (topGaps.length === 0) {
          gapSection.push("- 갭 후보 없음 (또는 유사도 threshold 가 너무 낮음)")
        } else {
          for (const g of topGaps) {
            const ex = g.examples
              .map((e) => `[${e.apbaId}] ${e.apbaNa}`)
              .join(", ")
            gapSection.push(`- ${g.title} — 보유 기관 예시: ${ex} (외 ${Math.max(0, g.examples.length - 3)}개)`)
          }
        }
    
        const header = `# 벤치마크 종합 — [${baseInst.apbaId}] ${baseInst.apbaNa}`
        const merged =
          header +
          "\n\n" +
          [profile.join("\n"), topicSection.join("\n"), gapSection.join("\n")].join("\n\n")
        return { content: [{ type: "text", text: truncateResponse(merged) }] }
      } catch (err) {
        return formatToolError(err, "chain_alio_benchmark")
      }
    }
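  • A minimal usage sketch of the handler, assuming a LawApiClient instance named `api` is available and using a hypothetical apbaId; on success the single content entry holds the Markdown report (header plus sections 1 to 3).
    // Minimal usage sketch; `api` and the apbaId "C0011" are assumptions, not part of this page's source.
    const response = await chainAlioBenchmark(api, {
      institution: "C0011",
      topic: "징계",
      max: 8,
      similarityThreshold: 0.4,
    })
    if (!response.isError) {
      console.log(response.content[0].text) // Markdown report produced by the handler
    }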
  • Zod schema `ChainAlioBenchmarkSchema` defining input: institution (string), topic (optional string), max (number, default 8), similarityThreshold (number, default 0.4). Also exports the inferred type `ChainAlioBenchmarkInput`.
    export const ChainAlioBenchmarkSchema = z.object({
      institution: z.string().describe("기준 기관 (apbaId 또는 이름) — '우리 기관'"),
      topic: z.string().optional().describe("관심 토픽 키워드 (예: '인사', '징계'). 생략 시 분류 분포 기준"),
      max: z.number().min(1).max(20).default(8).describe("각 섹션 최대 표시 (기본:8)"),
      similarityThreshold: z.number().min(0).max(1).default(0.4).describe("동종 규정 매칭 유사도 하한"),
    })
    
    export type ChainAlioBenchmarkInput = z.infer<typeof ChainAlioBenchmarkSchema>
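  • A quick illustration of how the schema defaults behave (a sketch, not part of the source): parsing an input with only institution fills in max and similarityThreshold, while topic stays undefined.
    // Zod fills in the declared defaults for omitted fields; "C0011" is a hypothetical apbaId.
    const parsed = ChainAlioBenchmarkSchema.parse({ institution: "C0011" })
    // parsed => { institution: "C0011", max: 8, similarityThreshold: 0.4 }
    // topic is undefined, so the handler falls back to the category distribution.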
  • Registration of the `chain_alio_benchmark` tool in the tool registry with name, description, schema, and handler.
    {
      name: "chain_alio_benchmark",
      description: "[ALIO 체인] 한 기관 벤치마킹 종합 — 프로파일 + 토픽 매칭 규정 + 동종 기관 갭 분석을 한 번에. '우리 기관 ○○ 측면 어떤가?' 시작점.",
      schema: ChainAlioBenchmarkSchema,
      handler: chainAlioBenchmark
    },
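  • A hedged sketch of how such a registry entry is typically consumed; the ToolDefinition shape and dispatchTool helper below are illustrative names, not the server's actual code.
    // Hypothetical dispatch sketch; assumes `z`, `LawApiClient`, and `ToolResponse` are imported as elsewhere on this page.
    interface ToolDefinition {
      name: string
      description: string
      schema: z.ZodTypeAny
      handler: (api: LawApiClient, input: any) => Promise<ToolResponse>
    }

    async function dispatchTool(
      tools: ToolDefinition[],
      api: LawApiClient,
      name: string,
      rawArgs: unknown
    ): Promise<ToolResponse> {
      const tool = tools.find((t) => t.name === name)
      if (!tool) {
        return { content: [{ type: "text", text: `Unknown tool: ${name}` }], isError: true }
      }
      const input = tool.schema.parse(rawArgs) // throws if the arguments do not match the schema
      return tool.handler(api, input)
    }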
  • Error handling using `formatToolError(err, 'chain_alio_benchmark')` in the catch block, and upstream helper calls to `loadIndex`, `findInstitution`, `getCollectedInstitutions`, `titleSimilarity`, `truncateResponse`.
    return formatToolError(err, "chain_alio_benchmark")
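  • The helper implementations are not shown on this page; the sketches below are assumptions about plausible shapes. `titleSimilarity` is read here as a token-overlap (Jaccard) score in [0, 1], and `formatToolError` as a thin wrapper that turns an exception into an isError response.
    // Assumed sketch of titleSimilarity: Jaccard overlap of lowercased, whitespace-separated tokens.
    function titleSimilarity(a: string, b: string): number {
      const ta = new Set(a.toLowerCase().split(/\s+/).filter(Boolean))
      const tb = new Set(b.toLowerCase().split(/\s+/).filter(Boolean))
      if (ta.size === 0 || tb.size === 0) return 0
      let common = 0
      for (const t of ta) if (tb.has(t)) common++
      return common / (ta.size + tb.size - common)
    }

    // Assumed sketch of formatToolError, consistent with how the handler's catch block uses it.
    function formatToolError(err: unknown, toolName: string): ToolResponse {
      const message = err instanceof Error ? err.message : String(err)
      return { content: [{ type: "text", text: `[${toolName}] error: ${message}` }], isError: true }
    }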
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It says the tool is a chain combining profile, topic matching, and gap analysis, but discloses no behavioral traits (e.g., whether it is read-only or destructive, or whether rate limits apply). It conveys high-level functionality only.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that efficiently conveys the tool's purpose, with the essential elements front-loaded. It could be slightly more structured, but there are no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite clear inputs, the description lacks details about the output format or structure. Because this is a chain tool with no output schema, that omission leaves a significant gap in what the agent can expect to receive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema descriptions already cover 100% of the parameters. The tool description does not add meaning beyond what the schema provides; it only restates the parameters' roles. The baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that this is a comprehensive benchmarking tool for an institution, combining a profile, topic-matched regulations, and peer gap analysis. It distinguishes itself from siblings such as 'suggest_alio_benchmark' by being a 'chain' that does all three at once.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies it is a starting point for benchmarking, but it does not explicitly state when to use it versus alternatives such as 'suggest_alio_benchmark' or other chain tools, and it offers no guidance on when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
