Glama

validate_structure

Check heading hierarchy and section balance in markdown manuscripts to identify structural issues and improve document organization.

Instructions

Check heading hierarchy and section balance

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| project_path | No | Path to manuscript directory | Current directory |
| file_path | No | Specific file to validate | All files in project |
| checks | No | Checks to run (heading-levels, duplicates, balance, deep-nesting) | All checks |
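Under this schema, a call might supply any subset of the fields. The sketch below is a hypothetical invocation (the file path is illustrative) showing how the handler narrows the untyped arguments:

```typescript
// Hypothetical invocation matching the input schema; every field is optional.
const args: Record<string, unknown> = {
  file_path: "chapters/intro.md",             // validate a single file
  checks: ["heading-levels", "deep-nesting"], // run only a subset of checks
};

// The MCP handler narrows the untyped argument bag the same way:
const filePath = args.file_path as string | undefined;
const checks = args.checks as string[] | undefined;
console.log(filePath, checks?.length); // chapters/intro.md 2
```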

Implementation Reference

  • MCP tool handler for 'validate_structure' that extracts arguments and delegates to WritersAid.validateStructure
    private async validateStructure(args: Record<string, unknown>) {
      const filePath = args.file_path as string | undefined;
      const checks = args.checks as string[] | undefined;
    
      return this.writersAid.validateStructure({ filePath, checks });
    }
  • Tool schema definition including input schema for validate_structure
    {
      name: "validate_structure",
      description: "Check heading hierarchy and section balance",
      inputSchema: {
        type: "object",
        properties: {
          project_path: { type: "string", description: "Path to manuscript directory (defaults to current directory)" },
          file_path: { type: "string", description: "Specific file to validate" },
          checks: {
            type: "array",
            items: { type: "string" },
            description: "Checks to run (heading-levels, duplicates, balance, deep-nesting)",
          },
        },
      },
    },
  • Core implementation of structure validation with checks for heading levels, duplicates, balance, and nesting
    async validateStructure(options: {
      filePath?: string;
      checks?: string[];
    }): Promise<StructureReport> {
      const { filePath, checks } = options;
    
      const enabledChecks = checks || [
        "heading-levels",
        "duplicate-headings",
        "section-balance",
        "deep-nesting",
      ];
    
      const issues: StructureIssue[] = [];
      const files = filePath
        ? [await this.storage.getFile(filePath)]
        : await this.storage.getAllFiles();
    
      const validFiles = files.filter((f) => f !== null);
    
      for (const file of validFiles) {
        if (enabledChecks.includes("heading-levels")) {
          issues.push(...(await this.checkHeadingLevels(file.file_path)));
        }
    
        if (enabledChecks.includes("duplicate-headings")) {
          issues.push(...(await this.checkDuplicateHeadings(file.file_path)));
        }
    
        if (enabledChecks.includes("section-balance")) {
          issues.push(...(await this.checkSectionBalance(file.file_path)));
        }
    
        if (enabledChecks.includes("deep-nesting")) {
          issues.push(...(await this.checkDeepNesting(file.file_path)));
        }
      }
    
      const errors = issues.filter((i) => i.severity === "error").length;
      const warnings = issues.filter((i) => i.severity === "warning").length;
      const info = issues.filter((i) => i.severity === "info").length;
    
      return {
        issues,
        filesChecked: validFiles.length,
        errors,
        warnings,
        info,
      };
    }
  • Delegation method in WritersAid that calls StructureValidator
    async validateStructure(options?: { filePath?: string; checks?: string[] }) {
      return this.structureValidator.validateStructure(options || {});
    }
  • Registration of validate_structure in the tool dispatcher switch statement
    case "validate_structure":
      return this.validateStructure(args);
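Taken together, the excerpts above imply the following report shapes. This is a sketch with names inferred from usage (`StructureIssue`, `StructureReport`, and their fields may differ in the real codebase), showing how the severity tallies fall out of simple filters:

```typescript
// Inferred shapes; the actual interfaces in the project may differ.
interface StructureIssue {
  severity: "error" | "warning" | "info";
  message: string;
  file_path: string;
}

interface StructureReport {
  issues: StructureIssue[];
  filesChecked: number;
  errors: number;
  warnings: number;
  info: number;
}

// Hypothetical issues collected across two files:
const issues: StructureIssue[] = [
  { severity: "error", message: "h1 followed directly by h3", file_path: "ch01.md" },
  { severity: "warning", message: "duplicate heading 'Notes'", file_path: "ch01.md" },
  { severity: "warning", message: "section far longer than siblings", file_path: "ch02.md" },
];

// The counts are plain filters over the collected issues, as in validateStructure:
const report: StructureReport = {
  issues,
  filesChecked: 2,
  errors: issues.filter((i) => i.severity === "error").length,
  warnings: issues.filter((i) => i.severity === "warning").length,
  info: issues.filter((i) => i.severity === "info").length,
};

console.log(report.errors, report.warnings, report.info); // 1 2 0
```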

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states what the tool does ('Check heading hierarchy and section balance') but doesn't reveal critical traits like whether it's read-only or mutative, what permissions are needed, how results are returned, or any rate limits. For a tool with zero annotation coverage, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
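The MCP specification's tool annotations (readOnlyHint, destructiveHint, idempotentHint, openWorldHint) exist precisely for this kind of disclosure. A plausible block for a pure validator might look like this (values are my assumption, not taken from the project):

```typescript
// Hypothetical annotations for validate_structure, assuming it only reads files.
const annotations = {
  readOnlyHint: true,     // validation reports issues, never mutates manuscripts
  destructiveHint: false, // no destructive side effects
  idempotentHint: true,   // repeated calls on unchanged files yield the same report
  openWorldHint: false,   // operates only on the local project directory
};
console.log(annotations.readOnlyHint); // true
```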

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise at just five words ('Check heading hierarchy and section balance'), with zero wasted language. It's front-loaded with the core action and resources, making it easy to parse quickly without unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (3 parameters, no annotations, no output schema), the description is insufficiently complete. It lacks information on behavioral traits, output format, error handling, and how it integrates with sibling tools. For a validation tool with multiple parameters and no structured output, more context is needed to guide effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters ('project_path', 'file_path', 'checks') with clear descriptions. The tool description adds no parameter semantics beyond the schema; for example, it never explains the 'checks' array values in more detail. This meets the baseline for high schema coverage but doesn't enhance understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
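One concrete improvement: the schema's prose lists 'duplicates' and 'balance', while the implementation's defaults use 'duplicate-headings' and 'section-balance'. Encoding the accepted values as an enum would remove that ambiguity. A sketch, using the implementation's default values:

```typescript
// Sketch: an enum would pin down the accepted check names machine-checkably.
// Values are taken from the implementation's defaults, which differ from the
// shorthand ("duplicates", "balance") used in the current schema description.
const checksSchema = {
  type: "array",
  items: {
    type: "string",
    enum: ["heading-levels", "duplicate-headings", "section-balance", "deep-nesting"],
  },
  description: "Checks to run; omit to run all four",
};
console.log(checksSchema.items.enum.length); // 4
```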

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Check heading hierarchy and section balance' clearly states the tool's function with specific verbs ('Check') and resources ('heading hierarchy', 'section balance'), making the purpose immediately understandable. However, it doesn't explicitly differentiate this validation tool from similar siblings like 'check_readability' or 'find_duplicates', which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With many sibling tools performing various checks (e.g., 'check_readability', 'find_duplicates', 'find_gaps'), the agent receives no indication of whether this is for structural validation specifically or how it complements other tools, leaving usage context entirely implicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
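A revised description could fold behavioral disclosure and usage guidance into a few sentences. The wording below is illustrative only, not taken from the project (the sibling tool names come from the review above):

```typescript
// Hypothetical rewrite of the tool description with behavior and usage guidance.
const description =
  "Check heading hierarchy and section balance in markdown manuscripts. " +
  "Read-only: reports issues without modifying files. " +
  "Use for structural problems (skipped heading levels, duplicate headings, " +
  "uneven sections, deep nesting); use check_readability for prose quality " +
  "and find_duplicates for repeated content.";
console.log(description.includes("Read-only")); // true
```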


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/xiaolai/claude-writers-aid-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.