
get_context_tree

Analyze project structure to extract file headers, functions, classes, and enums with dynamic pruning based on project size for efficient codebase navigation.

Instructions

Get the structural tree of the project with file headers, function names, classes, enums, and line ranges. Automatically reads 2-line headers for file purpose. Dynamic token-aware pruning: Level 2 (deep symbols) -> Level 1 (headers only) -> Level 0 (file names only) based on project size.
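The three-level degradation described above can be sketched as a small self-contained function. The names and shape here are illustrative assumptions for clarity, not the server's actual code:

```typescript
// Sketch of the token-aware pruning strategy described above.
// Level meanings follow the description; chooseLevel itself is an
// illustrative assumption, not the server's API.
type PruneLevel = 2 | 1 | 0; // 2 = deep symbols, 1 = headers only, 0 = file names only

function chooseLevel(tokensAtLevel: Record<PruneLevel, number>, maxTokens: number): PruneLevel {
  if (tokensAtLevel[2] <= maxTokens) return 2; // full tree with symbols fits
  if (tokensAtLevel[1] <= maxTokens) return 1; // drop symbols, keep headers
  return 0;                                    // file names only
}

// A project whose full tree costs 30k tokens but whose header-only tree
// costs 15k lands at Level 1 under the default 20000-token budget.
console.log(chooseLevel({ 2: 30000, 1: 15000, 0: 2000 }, 20000));
```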

Input Schema

Name             Required  Default  Description
target_path      No        (root)   Specific directory or file to analyze, relative to project root
depth_limit      No                 How many folder levels deep to scan; use 1-2 for large projects
include_symbols  No        true     Include function/class/enum names in the tree
max_tokens       No        20000    Maximum tokens for output; auto-prunes if exceeded
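A call supplying every schema field might look like this (values are illustrative; all fields are optional and the defaults shown are the ones documented above):

```typescript
// Illustrative arguments for a get_context_tree call; every field is
// optional, and the values shown match the documented defaults where
// the schema states one.
const args = {
  target_path: "src",     // defaults to the project root when omitted
  depth_limit: 2,         // 1-2 recommended for large projects
  include_symbols: true,  // default: true
  max_tokens: 20000,      // default: 20000; output auto-prunes beyond this
};

console.log(JSON.stringify(args));
```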

Implementation Reference

  • The main handler function for the `get_context_tree` tool, which orchestrates directory walking, building the tree structure, and pruning content to fit within token limits.
    export async function getContextTree(options: ContextTreeOptions): Promise<string> {
      const entries = await walkDirectory({
        rootDir: options.rootDir,
        targetPath: options.targetPath,
        depthLimit: options.depthLimit,
      });
    
      const includeSymbols = options.includeSymbols !== false;
      const tree = await buildTree(entries, options.rootDir, includeSymbols);
      const maxTokens = options.maxTokens ?? 20000;
    
      let rendered = renderTree(tree);
      if (estimateTokens(rendered) <= maxTokens) return rendered;
    
      pruneSymbols(tree);
      rendered = renderTree(tree);
      if (estimateTokens(rendered) <= maxTokens) return `[Level 1: Headers only, symbols pruned to fit ${maxTokens} tokens]\n\n${rendered}`;
    
      pruneHeaders(tree);
      rendered = renderTree(tree);
      return `[Level 0: File names only, project too large for ${maxTokens} tokens]\n\n${rendered}`;
    }
  • Input options type definition for `getContextTree`.
    export interface ContextTreeOptions {
      rootDir: string;
      targetPath?: string;
      depthLimit?: number;
      includeSymbols?: boolean;
      maxTokens?: number;
    }
  • src/index.ts:169-169 (registration)
    Tool registration for `get_context_tree` within the MCP tool definitions.
    "get_context_tree",
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full behavioral-disclosure burden, and does so effectively. It documents non-obvious behaviors: the automatic 2-line header reading for file purpose and the three-level degradation strategy (Level 2→1→0) driven by token limits. It could improve by stating the tool's read-only nature and output format specifics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two dense sentences with zero waste. The first establishes capability and scope; the second elaborates on the intelligent behaviors (header extraction, dynamic pruning). Every clause provides distinct semantic value (content types, automation, fallback strategy).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 4 optional parameters (100% schema coverage) and no output schema, the description adequately compensates by detailing what the tree structure contains (headers, functions, classes, enums, line ranges) and how the output scales. Minor gap: it doesn't specify the output format (JSON vs. text tree) or its structure, despite the complexity of the hierarchical data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (baseline score: 3). The description adds significant value by explaining how the parameters interact: 'Level 2 (deep symbols)' maps to include_symbols=true, while the pruning explanation clarifies max_tokens behavior. It also elaborates on what 'file headers' means (2-line extraction) beyond the schema's literal wording.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses the specific verb 'Get' with a specific resource ('structural tree of the project') and enumerates the exact contents (file headers, function names, classes, enums, line ranges). It clearly distinguishes the tool from siblings like semantic_code_search (semantic vs. structural) and get_file_skeleton (single file vs. project tree).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage guidance through the 'Dynamic token-aware pruning' explanation (indicating automatic scaling for large projects), but it lacks explicit when-to-use statements versus alternatives like semantic_code_search or get_file_skeleton, and it states no explicit exclusions or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ForLoopCodes/contextplus'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.