get_tree_validation_report

Validate tree structure to identify antipatterns and calculate quality score for research organization hierarchies.

Instructions

Get validation report for all splits in a tree. Identifies antipatterns and provides an overall quality score. Use this after building a tree to check for common failure modes.

Input Schema

Name      Required   Description                    Default
treeId    Yes        ID of the tree to validate     —
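A call to this tool supplies only the tree identifier. A minimal sketch of a conforming arguments payload, with a hypothetical tree ID:

```typescript
// Illustrative tool-call payload; the treeId value "tree-123" is made up.
const request = {
  name: "get_tree_validation_report",
  arguments: { treeId: "tree-123" },
};
console.log(JSON.stringify(request.arguments));
```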

Implementation Reference

  • Core handler function that executes the tool logic: generates a comprehensive validation report for all splits in a given tree by calling validateSplitQuality on each split tile and aggregating scores and issues.
    getTreeValidationReport(treeId: string): {
      treeId: string;
      splitReports: SplitQualityReport[];
      overallScore: number;
      summary: string;
    } {
      const tree = this.trees.get(treeId);
      if (!tree) {
        throw new Error(`Tree ${treeId} not found`);
      }
    
      const allTiles = this.getTilesInTree(tree.rootTileId);
      const tilesWithSplits = allTiles.filter((t) => t.childrenIds.length > 0);
    
      const splitReports = tilesWithSplits.map((tile) => this.validateSplitQuality(tile.id));
    
      const overallScore =
        splitReports.length > 0
          ? Math.round(splitReports.reduce((sum, r) => sum + r.score, 0) / splitReports.length)
          : 100;
    
      const totalIssues = splitReports.reduce((sum, r) => sum + r.issues.length, 0);
      const totalErrors = splitReports.reduce(
        (sum, r) => sum + r.issues.filter((i) => i.severity === "error").length,
        0
      );
    
      let summary = `Tree has ${splitReports.length} splits with ${totalIssues} total issues (${totalErrors} errors). Overall score: ${overallScore}/100.`;
    
      if (overallScore >= 80) {
        summary += " Tree structure is good quality.";
      } else if (overallScore >= 60) {
        summary += " Some improvements recommended.";
      } else {
        summary += " Significant issues detected - review recommendations.";
      }
    
      return {
        treeId,
        splitReports,
        overallScore,
        summary,
      };
    }
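The aggregation step above can be exercised in isolation. A minimal sketch, assuming split reports carry only a numeric `score` and an `issues` array (types simplified from the handler's `SplitQualityReport`):

```typescript
// Simplified report shape for demonstration (assumed, not the real type).
interface MiniReport {
  score: number;
  issues: { severity: "error" | "warning" }[];
}

function aggregate(reports: MiniReport[]): {
  overallScore: number;
  totalIssues: number;
  totalErrors: number;
} {
  // Average the per-split scores; a tree with no splits scores a perfect 100.
  const overallScore =
    reports.length > 0
      ? Math.round(reports.reduce((sum, r) => sum + r.score, 0) / reports.length)
      : 100;
  // Count all issues, and separately the error-severity ones, across reports.
  const totalIssues = reports.reduce((sum, r) => sum + r.issues.length, 0);
  const totalErrors = reports.reduce(
    (sum, r) => sum + r.issues.filter((i) => i.severity === "error").length,
    0
  );
  return { overallScore, totalIssues, totalErrors };
}

console.log(
  aggregate([
    { score: 80, issues: [{ severity: "error" }] },
    { score: 90, issues: [{ severity: "warning" }] },
  ])
);
// → { overallScore: 85, totalIssues: 2, totalErrors: 1 }
```

Note the empty-tree default of 100: a tree with no splits reports no issues, which matches the handler's ternary.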
  • Input schema definition for the tool, specifying the required 'treeId' parameter.
      name: "get_tree_validation_report",
      description: "Get validation report for all splits in a tree. Identifies antipatterns and provides an overall quality score. Use this after building a tree to check for common failure modes.",
      inputSchema: {
        type: "object",
        properties: {
          treeId: {
            type: "string",
            description: "ID of the tree to validate",
          },
        },
        required: ["treeId"],
      },
    },
  • src/index.ts:654-664 (registration)
    Tool handler registration in the MCP CallToolRequestSchema switch statement, delegating execution to treeManager.getTreeValidationReport.
    case "get_tree_validation_report": {
      const result = treeManager.getTreeValidationReport(args.treeId as string);
      return {
        content: [
          {
            type: "text",
            text: JSON.stringify(result, null, 2),
          },
        ],
      };
    }
  • Key helper method called by the handler to validate individual split quality, detecting antipatterns such as vague language, catch-all buckets, mixed dimensions, retroactive splitting, and incomplete coverage.
    validateSplitQuality(tileId: string): SplitQualityReport {
      const tile = this.tiles.get(tileId);
      if (!tile) {
        throw new Error(`Tile ${tileId} not found`);
      }
    
      const issues: ValidationIssue[] = [];
      const recommendations: string[] = [];
    
      // Get children for analysis
      const children = tile.childrenIds.map((id) => this.tiles.get(id)).filter((t) => t !== undefined) as Tile[];
    
      if (children.length === 0) {
        return {
          tileId: tile.id,
          tileTitle: tile.title,
          issues: [],
          score: 100,
          recommendations: ["Tile has no children - nothing to validate"],
        };
      }
    
      // 1. Check for vague language
      const vagueIssues = this.detectVagueLanguage(tile, children);
      issues.push(...vagueIssues);
    
      // 2. Check for catch-all buckets
      const catchAllIssues = this.detectCatchAllBuckets(tile, children);
      issues.push(...catchAllIssues);
    
      // 3. Check for mixed dimensions
      const mixedDimIssues = this.detectMixedDimensions(tile, children);
      issues.push(...mixedDimIssues);
    
      // 4. Check for retroactive splitting
      const retroactiveIssues = this.detectRetroactiveSplitting(tile, children);
      issues.push(...retroactiveIssues);
    
      // 5. Check for incomplete coverage
      if (!tile.isMECE) {
        issues.push({
          type: "incomplete_coverage",
          severity: "warning",
          message: "Split has not been validated for MECE completeness",
          tileId: tile.id,
          suggestion: "Use mark_mece to validate that the split is Mutually Exclusive and Collectively Exhaustive",
        });
      }
    
      // Generate recommendations
      if (issues.length === 0) {
        recommendations.push("Split appears well-structured");
        if (tile.isMECE) {
          recommendations.push("MECE validation completed");
        }
      } else {
        const errorCount = issues.filter((i) => i.severity === "error").length;
        const warningCount = issues.filter((i) => i.severity === "warning").length;
    
        if (errorCount > 0) {
          recommendations.push(`Address ${errorCount} critical issue(s) before proceeding`);
        }
        if (warningCount > 0) {
          recommendations.push(`Review ${warningCount} warning(s) to improve split quality`);
        }
    
        // Specific recommendations
        if (issues.some((i) => i.type === "vague_language")) {
          recommendations.push("Replace vague terms with measurable physical properties or precise definitions");
        }
        if (issues.some((i) => i.type === "catch_all_bucket")) {
          recommendations.push("Replace catch-all categories with specific, splittable subsets");
        }
        if (issues.some((i) => i.type === "mixed_dimensions")) {
          recommendations.push("Use a single consistent dimension/attribute for this split level");
        }
        if (issues.some((i) => i.type === "retroactive_splitting")) {
          recommendations.push("Consider physics/math-based splits instead of known solution types");
        }
      }
    
      // Calculate score (100 - 20 per error - 10 per warning)
      const errorPenalty = issues.filter((i) => i.severity === "error").length * 20;
      const warningPenalty = issues.filter((i) => i.severity === "warning").length * 10;
      const score = Math.max(0, 100 - errorPenalty - warningPenalty);
    
      return {
        tileId: tile.id,
        tileTitle: tile.title,
        issues,
        score,
        recommendations,
      };
    }
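The scoring rule at the end of the helper (100, minus 20 per error and 10 per warning, floored at 0) can be sketched on its own. A worked example under the same simplified issue shape:

```typescript
// Severity levels assumed from the validator's issue objects.
type Severity = "error" | "warning";

// Score = max(0, 100 - 20*errors - 10*warnings), as in validateSplitQuality.
function splitScore(issues: { severity: Severity }[]): number {
  const errorPenalty = issues.filter((i) => i.severity === "error").length * 20;
  const warningPenalty =
    issues.filter((i) => i.severity === "warning").length * 10;
  return Math.max(0, 100 - errorPenalty - warningPenalty);
}

// Two errors and three warnings: 100 - 40 - 30 = 30.
console.log(
  splitScore([
    { severity: "error" },
    { severity: "error" },
    { severity: "warning" },
    { severity: "warning" },
    { severity: "warning" },
  ])
);
// → 30
```

The floor at 0 means a split with six or more errors bottoms out rather than going negative.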
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool "identifies antipatterns and provides an overall quality score," which adds behavioral context beyond the basic "get" action. However, it does not state whether the operation is read-only, what its performance implications are, or how errors are handled, leaving gaps for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the core purpose, the second adds key behavioral details, and the third provides usage guidance. Every sentence earns its place without redundancy or fluff, making it highly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no annotations, no output schema, and a simple input schema, the description is moderately complete. It covers purpose, key behaviors (antipatterns, quality score), and usage timing, but lacks details on output format, error cases, or integration with sibling tools, which could be helpful for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the single parameter 'treeId' documented as 'ID of the tree to validate.' The description doesn't add any additional meaning or context about this parameter beyond what the schema provides, so it meets the baseline of 3 for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get validation report for all splits in a tree' specifies the action (get) and resource (validation report for splits in a tree). It distinguishes from siblings by focusing on validation rather than creation, evaluation, or analysis, though it doesn't explicitly name alternatives like 'validate_split_quality'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use the tool: 'Use this after building a tree to check for common failure modes.' This gives a specific timing and purpose, but it doesn't explicitly state when not to use it or name alternative tools like 'validate_split_quality' for comparison.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/k-chrispens/tiling-trees-mcp'