
Tree-Hugger-JS MCP Server

by qckfx

get_functions

Extract JavaScript/TypeScript functions with metadata for code review, API analysis, test coverage, and refactoring preparation.

Instructions

Get all functions with metadata including name, type, location, and async status. Includes class methods, arrow functions, and declarations.

Examples:
• Code review: get_functions() to see all functions in a file
• Find async operations: get_functions({asyncOnly: true})
• API analysis: get_functions() then look for functions with 'fetch' or 'api' in names
• Test coverage: get_functions() to identify functions needing tests
• Refactoring prep: get_functions({includeAnonymous: false}) to focus on named functions
• Performance audit: get_functions() to find large/complex functions by line count

Input Schema

| Name | Required | Description | Default |
|---|---|---|---|
| includeAnonymous | No | Include anonymous functions. Set false to focus on named functions only. | true |
| asyncOnly | No | Only return async functions. Use for async/await pattern analysis. | false |
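Over the wire, an MCP client invokes this tool with a JSON-RPC `tools/call` request. A sketch of the payload, combining both optional filters (the `id` value is arbitrary):

```typescript
// Illustrative tools/call payload for get_functions; field names follow
// the MCP specification, but the id and argument values are examples only.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "get_functions",
    // Only named async functions: combine both optional filters.
    arguments: { asyncOnly: true, includeAnonymous: false },
  },
};

console.log(JSON.stringify(request.params.arguments));
```

Since both parameters default sensibly, `arguments` can also be omitted entirely to list every function.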

Implementation Reference

  • The primary handler function for the 'get_functions' tool. It checks whether an AST is loaded, retrieves detailed function information via tree-hugger's getFunctionDetails(), applies the asyncOnly and includeAnonymous filters from the input args, truncates each function's text preview to 150 characters, stores the results in lastAnalysis state, and returns a formatted JSON response listing all matching functions with metadata such as name, type, location, and async status.
    private async getFunctions(args: { includeAnonymous?: boolean; asyncOnly?: boolean }) {
      if (!this.currentAST) {
        return {
          content: [{
            type: "text",
            text: "No AST loaded. Please use parse_code first.",
          }],
          isError: true,
        };
      }
    
      try {
        // Use enhanced library methods for detailed function analysis
        const functionData: FunctionInfo[] = this.currentAST.tree.getFunctionDetails()
          .filter(fn => {
            if (args.asyncOnly && !fn.async) return false;
            if (args.includeAnonymous === false && !fn.name) return false;
            return true;
          })
          .map(fn => ({
            ...fn,
            text: fn.text.length > 150 ? fn.text.slice(0, 150) + '...' : fn.text,
          }));
        
        this.lastAnalysis = {
          ...this.lastAnalysis,
          functions: functionData,
          timestamp: new Date(),
        } as AnalysisResult;
    
        return {
          content: [{
            type: "text",
            text: `Found ${functionData.length} functions:\n${JSON.stringify(functionData, null, 2)}`,
          }],
        };
      } catch (error) {
        return {
          content: [{
            type: "text",
            text: `Error getting functions: ${error instanceof Error ? error.message : String(error)}`,
          }],
          isError: true,
        };
      }
    }
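The handler's filter and preview-truncation logic can be exercised in isolation. `FnLike` and the sample data below are illustrative stand-ins for tree-hugger's `FunctionInfo`, not part of the server code:

```typescript
// Minimal stand-in for the fields the filter actually touches.
interface FnLike {
  name?: string;
  async: boolean;
  text: string;
}

// Mirrors the handler: asyncOnly drops sync functions, includeAnonymous:false
// drops unnamed ones, and long text is cut to a 150-character preview.
function selectFunctions(
  fns: FnLike[],
  args: { includeAnonymous?: boolean; asyncOnly?: boolean },
): FnLike[] {
  return fns
    .filter(fn => {
      if (args.asyncOnly && !fn.async) return false;
      if (args.includeAnonymous === false && !fn.name) return false;
      return true;
    })
    .map(fn => ({
      ...fn,
      text: fn.text.length > 150 ? fn.text.slice(0, 150) + "..." : fn.text,
    }));
}

const sample: FnLike[] = [
  { name: "loadUser", async: true, text: "async function loadUser() {}" },
  { name: undefined, async: true, text: "async () => {}" },
  { name: "render", async: false, text: "function render() {}" },
];

// asyncOnly drops render; includeAnonymous:false drops the arrow function.
const picked = selectFunctions(sample, { asyncOnly: true, includeAnonymous: false });
console.log(picked.map(f => f.name).join(","));
```

Note that `includeAnonymous` is only honored when explicitly `false`; an omitted value behaves like `true`, matching the documented default.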
  • src/index.ts:422-423 (registration)
    The switch case in the MCP CallToolRequestHandler that detects 'get_functions' tool calls and dispatches to the private getFunctions implementation.
    case "get_functions":
      return await this.getFunctions(args as { includeAnonymous?: boolean; asyncOnly?: boolean });
  • Tool registration object defining the 'get_functions' name, detailed description with usage examples, and input schema specifying optional boolean parameters for filtering anonymous and async-only functions.
    {
      name: "get_functions",
      description: "Get all functions with metadata including name, type, location, and async status. Includes class methods, arrow functions, and declarations.\n\nExamples:\n• Code review: get_functions() to see all functions in a file\n• Find async operations: get_functions({asyncOnly: true})\n• API analysis: get_functions() then look for functions with 'fetch' or 'api' in names\n• Test coverage: get_functions() to identify functions needing tests\n• Refactoring prep: get_functions({includeAnonymous: false}) to focus on named functions\n• Performance audit: get_functions() to find large/complex functions by line count",
      inputSchema: {
        type: "object",
        properties: {
          includeAnonymous: {
            type: "boolean",
            description: "Include anonymous functions (default: true). Set false to focus on named functions only."
          },
          asyncOnly: {
            type: "boolean", 
            description: "Only return async functions (default: false). Use for async/await pattern analysis."
          }
        },
      },
    },
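Because the schema marks no property as required, a dispatching client may receive arbitrary untyped arguments. A hypothetical guard (coerceArgs is not part of the server) that narrows them to the schema's two optional booleans, dropping anything else:

```typescript
// Hypothetical client-side helper matching the inputSchema above:
// both properties are optional booleans and no key is required.
function coerceArgs(raw: unknown): { includeAnonymous?: boolean; asyncOnly?: boolean } {
  const out: { includeAnonymous?: boolean; asyncOnly?: boolean } = {};
  if (raw && typeof raw === "object") {
    const r = raw as Record<string, unknown>;
    if (typeof r.includeAnonymous === "boolean") out.includeAnonymous = r.includeAnonymous;
    if (typeof r.asyncOnly === "boolean") out.asyncOnly = r.asyncOnly;
  }
  return out;
}

console.log(JSON.stringify(coerceArgs({ asyncOnly: true, extra: 1 })));
```

Unknown keys such as `extra` are silently discarded, so the result is always a valid argument object for the tool.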
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool returns metadata (name, type, location, async status) and includes various function types, which adds behavioral context beyond the input schema. However, it doesn't mention potential limitations like performance impacts, output format details, or error handling. The description is helpful but lacks completeness for a tool with no annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with a clear purpose statement, followed by practical examples. Each example sentence earns its place by demonstrating a specific use case. The bulleted list is slightly more verbose than condensed prose would be, but it remains efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, no output schema, and 2 parameters with full schema coverage, the description is moderately complete. It explains the tool's purpose, usage, and parameter implications through examples, but lacks details on output structure, error conditions, or performance considerations. For a read-only tool with simple parameters, this is adequate but has clear gaps in behavioral context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already fully documents the two parameters (includeAnonymous and asyncOnly). The description adds value by providing usage examples that illustrate parameter effects (e.g., using asyncOnly for async analysis, includeAnonymous for focusing on named functions), but doesn't introduce new semantic details beyond what the schema descriptions state. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool retrieves functions with specific metadata (name, type, location, async status) and includes various function types (class methods, arrow functions, declarations). It distinguishes itself from siblings like 'get_classes' or 'get_imports' by focusing on functions, but doesn't explicitly contrast with 'find_all_pattern' or 'find_pattern', which might also locate functions. The purpose is specific, but sibling differentiation could be more explicit.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage scenarios with examples for code review, async operation analysis, API analysis, test coverage, refactoring prep, and performance audits. It implicitly guides when to use this tool (e.g., for function-level analysis) versus alternatives like 'get_classes' for classes or 'get_imports' for imports, though it doesn't name specific exclusions. The examples effectively illustrate practical contexts.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

