Glama

read_hwp_tables

Extract all tables from Korean HWP or HWPX files and convert them to GitHub-flavored markdown. Provide the file path to get structured table data.

Instructions

Extract every table from an HWP/HWPX file as GitHub-flavored markdown. Args: file_path.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| file_path | Yes |  |  |
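For illustration, a call needs only the required `file_path` property (the path below is hypothetical):

```typescript
// Hypothetical example arguments; file_path is the only property and it is required.
const args: { file_path: string } = { file_path: "./sample.hwpx" };
```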

Implementation Reference

  • src/server.ts:59-68 (registration)
    Tool 'read_hwp_tables' is registered in the TOOLS array with name, description, and inputSchema requiring file_path.
    {
      name: "read_hwp_tables",
      description:
        "Extract every table from an HWP/HWPX file as GitHub-flavored markdown. Args: file_path.",
      inputSchema: {
        type: "object",
        properties: { file_path: { type: "string" } },
        required: ["file_path"],
      },
    },
  • src/server.ts:512-512 (registration)
    Handler mapping: tool name 'read_hwp_tables' is mapped to the readHwpTables handler function.
    read_hwp_tables: readHwpTables,
  • The readHwpTables handler function: opens a document, walks tables via walkTables(), formats each as markdown with tableToMarkdown(), and returns the result.
    export async function readHwpTables(args: ReadHwpArgs): Promise<string> {
      let doc;
      try {
        doc = await openDocument(args.file_path);
      } catch (e) {
        return (e as Error).message;
      }
      try {
        const tables = walkTables(doc);
        if (tables.length === 0) return "(표가 없습니다 / no tables)";
        const out: string[] = [];
        tables.forEach((t, i) => {
          out.push(`### 표 ${i + 1} (${t.rows}행 x ${t.cols}열)`);
          out.push(tableToMarkdown(t));
          out.push("");
        });
        return out.join("\n");
      } catch (e) {
        return `표 추출 오류 (table extraction error): ${(e as Error).message}`;
      } finally {
        closeDocument(doc);
      }
    }
  • ReadHwpArgs interface defining the input schema with a file_path string property.
    export interface ReadHwpArgs {
      file_path: string;
    }
  • walkTables() helper function that iterates sections/paragraphs/controls, reads table dimensions and cell text, and returns an array of TableData objects.
    export function walkTables(doc: HwpDocument): TableData[] {
      const out: TableData[] = [];
      const sectionCount = doc.getSectionCount();
      for (let s = 0; s < sectionCount; s++) {
        const paraCount = doc.getParagraphCount(s);
        for (let p = 0; p < paraCount; p++) {
          const n = controlCount(doc, s, p);
          for (let ci = 0; ci < n; ci++) {
            let dimsJson: string;
            try {
              dimsJson = doc.getTableDimensions(s, p, ci);
            } catch {
              continue;
            }
            if (!dimsJson || dimsJson === "null") continue;
            let dims: TableDims;
            try {
              dims = JSON.parse(dimsJson);
            } catch {
              continue;
            }
            const rows = Number(dims.rowCount ?? dims.rows ?? dims.row_count ?? 0);
            const cols = Number(dims.colCount ?? dims.cols ?? dims.col_count ?? 0);
            const cellCount = Number(dims.cellCount ?? dims.cell_count ?? rows * cols);
            if (rows === 0 || cols === 0) continue;
            // Tables with merged cells report cellCount < rows*cols. Walk by
            // cellCount instead of grid; place by getCellInfo (row, col, span).
            const cells: string[][] = Array.from({ length: rows }, () => Array(cols).fill(""));
            for (let cellIdx = 0; cellIdx < cellCount; cellIdx++) {
              let row = 0,
                col = 0;
              try {
                const info = JSON.parse(doc.getCellInfo(s, p, ci, cellIdx));
                row = Number(info.row ?? info.r ?? 0);
                col = Number(info.col ?? info.c ?? 0);
              } catch {
                row = Math.floor(cellIdx / cols);
                col = cellIdx % cols;
              }
              if (row >= rows || col >= cols) continue;
              cells[row][col] = readCellText(doc, s, p, ci, cellIdx);
            }
            out.push({ rows, cols, cells });
          }
        }
      }
      return out;
    }
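The handler relies on two helpers not shown above, `readCellText()` and `tableToMarkdown()`. A minimal sketch of what they might look like, assuming the document binding exposes a `getCellText` method analogous to the `getCellInfo` and `getTableDimensions` calls shown in `walkTables()` (the method name and formatting details are assumptions, not the actual implementation):

```typescript
interface TableData {
  rows: number;
  cols: number;
  cells: string[][];
}

// Assumed binding surface: getCellText mirrors getCellInfo's signature.
interface CellSource {
  getCellText(s: number, p: number, ci: number, cellIdx: number): string;
}

// Read one cell's text; an unreadable cell becomes an empty markdown cell.
function readCellText(
  doc: CellSource, s: number, p: number, ci: number, cellIdx: number
): string {
  try {
    // Collapse newlines so multi-line cell text fits on a single table row.
    return doc.getCellText(s, p, ci, cellIdx).replace(/\r?\n/g, " ").trim();
  } catch {
    return "";
  }
}

// Render a TableData grid as a GitHub-flavored markdown table.
// GFM requires a header row, so the first grid row is promoted to one.
function tableToMarkdown(t: TableData): string {
  const esc = (s: string) => s.replace(/\|/g, "\\|");
  const row = (cells: string[]) => `| ${cells.map(esc).join(" | ")} |`;
  const lines = [row(t.cells[0]), `|${" --- |".repeat(t.cols)}`];
  for (let r = 1; r < t.rows; r++) lines.push(row(t.cells[r]));
  return lines.join("\n");
}
```

Note the pipe escaping: literal `|` characters in cell text would otherwise be parsed as column separators by GFM renderers.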
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description bears full responsibility. It discloses the read-only extraction behavior but does not elaborate on limitations (e.g., large files), error states, or side effects. Adequate for a simple read tool, but it offers minimal extra context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single short sentence and extremely concise. It front-loads the key action, and while efficient, it could be slightly expanded with additional context without losing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, no output schema), the description covers the core functionality: extracting tables as markdown. It does not address edge cases (e.g., no tables) but is sufficient for typical usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema coverage, the description only mentions 'file_path' as an argument, adding no meaning beyond the schema. The schema already defines it as a string, so the description provides no parameter-specific details like examples or constraints.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool extracts every table from HWP/HWPX files and outputs GitHub-flavored markdown. This specific verb+resource combination distinguishes it from siblings like read_hwp (whole file) and read_hwp_text (text only).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. It lacks context on when table extraction is preferable to other read operations, or how it fits into a workflow.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
