find_unused_files

Identify unused TypeScript/JavaScript files in complex projects with dynamic loading patterns to optimize codebases and reduce technical debt.

Instructions

Identify genuinely unused TypeScript/JavaScript files in complex projects with dynamic loading patterns

WORKFLOW: System diagnostics and function discovery
TIP: Start with health_check, use list_functions to explore capabilities
SAVES: Claude context for strategic decisions

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| analysisDepth | No | Level of analysis detail | comprehensive |
| analysisType | No | Type of analysis to perform | comprehensive |
| analyzeComments | No | Check for commented-out imports | true |
| code | No | The code to analyze (for single-file analysis) | — |
| entryPoints | No | Entry point files to start dependency traversal | index.ts, main.ts, app.ts |
| excludePatterns | No | File patterns to exclude from analysis | *.test.ts, *.spec.ts, *.d.ts |
| filePath | No | Path to single file to analyze | — |
| files | No | Array of specific file paths (for multi-file analysis) | — |
| includeDevArtifacts | No | Whether to flag potential dev artifacts | false |
| language | No | Programming language | typescript |
| maxDepth | No | Maximum directory depth for discovery (1-5) | 4 |
| projectPath | No | Absolute path to project root | — |
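
As an illustration, a minimal multi-file request might look like the following. The parameter names and constraints come from the schema above; the surrounding call shape is an assumption, not part of the documented API.

```typescript
// Hypothetical invocation payload for find_unused_files (multi-file mode).
// Parameter names are taken from the input schema; the wrapper object is assumed.
const request = {
  projectPath: "/home/user/my-app",          // absolute project root
  maxDepth: 3,                               // stays within the documented 1-5 range
  entryPoints: ["src/index.ts"],             // override the default entry points
  excludePatterns: ["*.test.ts", "*.d.ts"],  // skip tests and type declarations
  analysisType: "static",                    // static import graph only
  analysisDepth: "detailed",
};

console.log(JSON.stringify(request, null, 2));
```

Because `code` and `filePath` are omitted, a request like this would be treated as multi-file analysis rooted at `projectPath`.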

Implementation Reference

  • Tool registration via the name property in the FindUnusedFilesAnalyzer class, which extends BasePlugin.
    export class FindUnusedFilesAnalyzer extends BasePlugin implements IPromptPlugin {
      name = 'find_unused_files';
  • Input parameter schema defining options for single-file and multi-file unused files analysis.
    parameters = {
      // Single-file parameters
      code: {
        type: 'string' as const,
        description: 'The code to analyze (for single-file analysis)',
        required: false
      },
      filePath: {
        type: 'string' as const,
        description: 'Path to single file to analyze',
        required: false
      },
      
      // Multi-file parameters  
      projectPath: {
        type: 'string' as const,
        description: 'Absolute path to project root',
        required: false
      },
      files: {
        type: 'array' as const,
        description: 'Array of specific file paths (for multi-file analysis)',
        required: false,
        items: { type: 'string' as const }
      },
      maxDepth: {
        type: 'number' as const,
        description: 'Maximum directory depth for discovery (1-5)',
        required: false,
        default: 4
      },
      
      // Analysis options
      language: {
        type: 'string' as const,
        description: 'Programming language',
        required: false,
        default: 'typescript'
      },
      analysisDepth: {
        type: 'string' as const,
        description: 'Level of analysis detail',
        enum: ['basic', 'detailed', 'comprehensive'],
        default: 'comprehensive',
        required: false
      },
      analysisType: {
        type: 'string' as const,
        description: 'Type of analysis to perform',
        enum: ['static', 'dynamic', 'comprehensive'],
        default: 'comprehensive',
        required: false
      },
      entryPoints: {
        type: 'array' as const,
        description: 'Entry point files to start dependency traversal',
        required: false,
        default: ['index.ts', 'main.ts', 'app.ts'],
        items: { type: 'string' as const }
      },
      excludePatterns: {
        type: 'array' as const,
        description: 'File patterns to exclude from analysis',
        required: false,
        default: ['*.test.ts', '*.spec.ts', '*.d.ts'],
        items: { type: 'string' as const }
      },
      analyzeComments: {
        type: 'boolean' as const,
        description: 'Check for commented-out imports',
        required: false,
        default: true
      },
      includeDevArtifacts: {
        type: 'boolean' as const,
        description: 'Whether to flag potential dev artifacts',
        required: false,
        default: false
      }
    };
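
The excerpt references a validateParameters step but does not show it. A plausible sketch, assuming it enforces the constraints documented in the schema (maxDepth range, enum values, at least one input source per mode):

```typescript
// Hypothetical sketch of parameter validation consistent with the schema.
// The real validateParameters implementation is not shown in the excerpt.
function validateParameters(
  params: {
    code?: string;
    filePath?: string;
    projectPath?: string;
    files?: string[];
    maxDepth?: number;
    analysisDepth?: string;
  },
  mode: 'single-file' | 'multi-file'
): void {
  if (mode === 'single-file' && !params.code && !params.filePath) {
    throw new Error('single-file analysis requires code or filePath');
  }
  if (mode === 'multi-file' && !params.projectPath && !params.files?.length) {
    throw new Error('multi-file analysis requires projectPath or files');
  }
  if (params.maxDepth !== undefined && (params.maxDepth < 1 || params.maxDepth > 5)) {
    throw new Error('maxDepth must be between 1 and 5');
  }
  const depths = ['basic', 'detailed', 'comprehensive'];
  if (params.analysisDepth && !depths.includes(params.analysisDepth)) {
    throw new Error(`invalid analysisDepth: ${params.analysisDepth}`);
  }
}

validateParameters({ filePath: 'src/app.ts' }, 'single-file'); // passes
```

Failing fast here keeps malformed requests from reaching the model-setup and analysis stages.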
  • Primary handler function that executes the tool logic, handling security, mode detection (single/multi-file), validation, model setup, and delegation to specific analysis methods.
    async execute(params: any, llmClient: any) {
      return await withSecurity(this, params, llmClient, async (secureParams) => {
        try {
          const analysisMode = this.detectAnalysisMode(secureParams);
          this.validateParameters(secureParams, analysisMode);
          const { model, contextLength } = await ModelSetup.getReadyModel(llmClient);
          
          if (analysisMode === 'single-file') {
            return await this.executeSingleFileAnalysis(secureParams, model, contextLength);
          } else {
            return await this.executeMultiFileAnalysis(secureParams, model, contextLength);
          }
          
        } catch (error: any) {
          return ErrorHandler.createExecutionError('find_unused_files', error);
        }
      });
    }
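
The handler branches on detectAnalysisMode, whose implementation is not shown in the excerpt. A minimal sketch, assuming the rule implied by the parameter schema (single-file inputs take precedence; everything else is multi-file):

```typescript
// Hypothetical sketch of mode detection; the real detectAnalysisMode
// is not shown in the excerpt. Presence of code or filePath selects
// single-file analysis, otherwise multi-file analysis is assumed.
type AnalysisMode = 'single-file' | 'multi-file';

function detectAnalysisMode(params: { code?: string; filePath?: string }): AnalysisMode {
  return params.code || params.filePath ? 'single-file' : 'multi-file';
}

console.log(detectAnalysisMode({ filePath: 'src/app.ts' })); // 'single-file'
console.log(detectAnalysisMode({}));                         // 'multi-file'
```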
  • Helper method handling multi-file analysis execution: file discovery, performing the analysis, prompt generation, and chunked LLM execution.
    private async executeMultiFileAnalysis(params: any, model: any, contextLength: number) {
      let filesToAnalyze: string[] = params.files || 
        await this.discoverRelevantFiles(
          params.projectPath, 
          params.maxDepth,
          params.analysisType
        );
      
      const analysisResult = await this.performMultiFileAnalysis(
        filesToAnalyze,
        params,
        model,
        contextLength
      );
      
      const promptStages = this.getMultiFilePromptStages({
        ...params,
        analysisResult,
        fileCount: filesToAnalyze.length
      });
      
      const promptManager = new ThreeStagePromptManager();
      const chunkSize = TokenCalculator.calculateOptimalChunkSize(promptStages, contextLength);
      const dataChunks = promptManager.chunkDataPayload(promptStages.dataPayload, chunkSize);
      const conversation = promptManager.createChunkedConversation(promptStages, dataChunks);
      const messages = [
        conversation.systemMessage,
        ...conversation.dataMessages,
        conversation.analysisMessage
      ];
      
      return await ResponseProcessor.executeChunked(
        messages,
        model,
        contextLength,
        'find_unused_files',
        'multifile'
      );
    }
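
The ThreeStagePromptManager and TokenCalculator internals are not shown in the excerpt. A simplified sketch of the payload-chunking idea, using character counts as a stand-in for token counts:

```typescript
// Simplified illustration of chunkDataPayload: pack per-file findings
// into chunks that fit a context budget. The real implementation is
// not shown in the excerpt and is assumed to be token-aware.
function chunkDataPayload(entries: string[], maxChunkChars: number): string[] {
  const chunks: string[] = [];
  let current = '';
  for (const entry of entries) {
    // Start a new chunk when adding this entry would exceed the budget.
    if (current && current.length + entry.length + 1 > maxChunkChars) {
      chunks.push(current);
      current = '';
    }
    current = current ? current + '\n' + entry : entry;
  }
  if (current) chunks.push(current);
  return chunks;
}

const chunks = chunkDataPayload(['fileA: unused', 'fileB: used', 'fileC: unused'], 30);
console.log(chunks.length); // → 2
```

Each chunk then becomes one data message in the chunked conversation, keeping every LLM call within the model's context length.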
  • Key helper for multi-file analysis logic, managing caching, batch file analysis, and result aggregation.
    private async performMultiFileAnalysis(
      files: string[],
      params: any,
      model: any,
      contextLength: number
    ): Promise<any> {
      const cacheKey = this.analysisCache.generateKey(
        'find_unused_files', 
        params, 
        files
      );
      
      const cached = await this.analysisCache.get(cacheKey);
      if (cached) return cached;
      
      const fileAnalysisResults = await this.multiFileAnalysis.analyzeBatch(
        files,
        (file: string) => this.analyzeIndividualFile(file, params, model),
        contextLength
      );
      
      const aggregatedResult = {
        summary: `Unused files analysis of ${files.length} files`,
        findings: fileAnalysisResults,
        data: {
          fileCount: files.length,
          totalSize: fileAnalysisResults.reduce((sum: number, result: any) => sum + (result.size || 0), 0),
          entryPoints: params.entryPoints || ['index.ts', 'main.ts', 'app.ts'],
          excludePatterns: params.excludePatterns || ['*.test.ts', '*.spec.ts', '*.d.ts'],
          analysisTimestamp: new Date().toISOString()
        }
      };
      
      await this.analysisCache.cacheAnalysis(cacheKey, aggregatedResult, {
        modelUsed: model.identifier || 'unknown',
        executionTime: Date.now(),
        timestamp: new Date().toISOString()
      });
      
      return aggregatedResult;
    }
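
The analysisCache.generateKey call is not shown in the excerpt. The essential property is determinism: the same tool, parameters, and file set must map to the same key regardless of ordering. A sketch under that assumption:

```typescript
// Hypothetical sketch of deterministic cache-key generation; the real
// analysisCache.generateKey implementation is not shown in the excerpt.
import { createHash } from 'node:crypto';

function generateCacheKey(
  tool: string,
  params: Record<string, unknown>,
  files: string[]
): string {
  // Sort parameter keys and file paths so ordering differences
  // don't produce different keys for the same logical request.
  const normalized = JSON.stringify({
    tool,
    params: Object.keys(params).sort().map((k) => [k, params[k]]),
    files: [...files].sort(),
  });
  return createHash('sha256').update(normalized).digest('hex');
}

const k1 = generateCacheKey('find_unused_files', { maxDepth: 4 }, ['a.ts', 'b.ts']);
const k2 = generateCacheKey('find_unused_files', { maxDepth: 4 }, ['b.ts', 'a.ts']);
console.log(k1 === k2); // → true
```

With stable keys, repeated runs over an unchanged file set hit the cache instead of re-invoking the batch analysis.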
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'System diagnostics and function discovery' and 'SAVES: Claude context for strategic decisions', which hints at analysis behavior and context preservation, but doesn't detail what 'genuinely unused' means, how results are returned, whether it's read-only or has side effects, or performance characteristics. For a complex analysis tool with 12 parameters, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description uses a fragmented structure with separate lines for 'WORKFLOW', 'TIP', and 'SAVES', which is somewhat organized but not optimally front-loaded. The first line clearly states the purpose, but the additional lines could be more integrated. It's reasonably concise but could be more cohesive in presentation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (12 parameters, no annotations, no output schema), the description is insufficient. It lacks details on what the tool returns, how 'unused' is determined, error handling, or performance implications. The workflow tips add some context, but for a sophisticated analysis tool, this leaves too many unknowns for effective agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so every parameter carries a description. The tool description adds no parameter information beyond the schema (e.g., it doesn't explain how 'analysisDepth' differs from 'analysisType' or clarify parameter interactions). With high schema coverage the baseline is 3, and the description doesn't compensate with additional semantic context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Identify genuinely unused TypeScript/JavaScript files in complex projects with dynamic loading patterns.' This specifies the verb ('identify'), resource ('unused TypeScript/JavaScript files'), and context ('complex projects with dynamic loading patterns'). However, it doesn't explicitly differentiate from sibling tools like 'find_unused_css' or 'analyze_dependencies' beyond the language focus.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes workflow tips ('Start with health_check, use list_functions to explore capabilities') which imply a recommended sequence, but it doesn't explicitly state when to use this tool versus alternatives like 'analyze_dependencies' or 'find_unused_css'. The guidance is helpful but lacks clear boundaries or exclusions for sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
