find_unused_files

Identify unused TypeScript/JavaScript files in complex projects with dynamic loading patterns to optimize codebases and reduce technical debt.

Instructions

Identify genuinely unused TypeScript/JavaScript files in complex projects with dynamic loading patterns

WORKFLOW: System diagnostics and function discovery.
TIP: Start with health_check, then use list_functions to explore capabilities.
SAVES: Claude context for strategic decisions.

Input Schema

| Name | Required | Description | Default |
|---|---|---|---|
| analysisDepth | No | Level of analysis detail | comprehensive |
| analysisType | No | Type of analysis to perform | comprehensive |
| analyzeComments | No | Check for commented-out imports | true |
| code | No | The code to analyze (for single-file analysis) | |
| entryPoints | No | Entry point files to start dependency traversal | index.ts, main.ts, app.ts |
| excludePatterns | No | File patterns to exclude from analysis | *.test.ts, *.spec.ts, *.d.ts |
| filePath | No | Path to single file to analyze | |
| files | No | Array of specific file paths (for multi-file analysis) | |
| includeDevArtifacts | No | Whether to flag potential dev artifacts | false |
| language | No | Programming language | typescript |
| maxDepth | No | Maximum directory depth for discovery (1-5) | 4 |
| projectPath | No | Absolute path to project root | |
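Since every parameter is optional, the caller chooses the mode by which fields it supplies. The payloads below are illustrative examples built from the schema above (the paths are placeholders; the exact invocation depends on your MCP client):

```typescript
// Single-file mode: supply `code` or `filePath`.
const singleFileParams = {
  filePath: '/home/user/project/src/utils/legacy.ts',
  language: 'typescript',
  analysisDepth: 'basic',
};

// Multi-file mode: supply `projectPath` (or an explicit `files` array)
// plus traversal options.
const multiFileParams = {
  projectPath: '/home/user/project',
  maxDepth: 4,
  analysisType: 'comprehensive',
  entryPoints: ['index.ts', 'main.ts', 'app.ts'],
  excludePatterns: ['*.test.ts', '*.spec.ts', '*.d.ts'],
  analyzeComments: true,
  includeDevArtifacts: false,
};
```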

Implementation Reference

  • Tool registration: the tool name is assigned via the name property of the FindUnusedFilesAnalyzer class, which extends BasePlugin and implements IPromptPlugin.
    export class FindUnusedFilesAnalyzer extends BasePlugin implements IPromptPlugin {
      name = 'find_unused_files';
  • Input parameter schema defining options for single-file and multi-file unused files analysis.
    parameters = {
      // Single-file parameters
      code: {
        type: 'string' as const,
        description: 'The code to analyze (for single-file analysis)',
        required: false
      },
      filePath: {
        type: 'string' as const,
        description: 'Path to single file to analyze',
        required: false
      },
      
      // Multi-file parameters  
      projectPath: {
        type: 'string' as const,
        description: 'Absolute path to project root',
        required: false
      },
      files: {
        type: 'array' as const,
        description: 'Array of specific file paths (for multi-file analysis)',
        required: false,
        items: { type: 'string' as const }
      },
      maxDepth: {
        type: 'number' as const,
        description: 'Maximum directory depth for discovery (1-5)',
        required: false,
        default: 4
      },
      
      // Analysis options
      language: {
        type: 'string' as const,
        description: 'Programming language',
        required: false,
        default: 'typescript'
      },
      analysisDepth: {
        type: 'string' as const,
        description: 'Level of analysis detail',
        enum: ['basic', 'detailed', 'comprehensive'],
        default: 'comprehensive',
        required: false
      },
      analysisType: {
        type: 'string' as const,
        description: 'Type of analysis to perform',
        enum: ['static', 'dynamic', 'comprehensive'],
        default: 'comprehensive',
        required: false
      },
      entryPoints: {
        type: 'array' as const,
        description: 'Entry point files to start dependency traversal',
        required: false,
        default: ['index.ts', 'main.ts', 'app.ts'],
        items: { type: 'string' as const }
      },
      excludePatterns: {
        type: 'array' as const,
        description: 'File patterns to exclude from analysis',
        required: false,
        default: ['*.test.ts', '*.spec.ts', '*.d.ts'],
        items: { type: 'string' as const }
      },
      analyzeComments: {
        type: 'boolean' as const,
        description: 'Check for commented-out imports',
        required: false,
        default: true
      },
      includeDevArtifacts: {
        type: 'boolean' as const,
        description: 'Whether to flag potential dev artifacts',
        required: false,
        default: false
      }
    };
  • Primary handler function that executes the tool logic, handling security, mode detection (single/multi-file), validation, model setup, and delegation to specific analysis methods.
    async execute(params: any, llmClient: any) {
      return await withSecurity(this, params, llmClient, async (secureParams) => {
        try {
          const analysisMode = this.detectAnalysisMode(secureParams);
          this.validateParameters(secureParams, analysisMode);
          const { model, contextLength } = await ModelSetup.getReadyModel(llmClient);
          
          if (analysisMode === 'single-file') {
            return await this.executeSingleFileAnalysis(secureParams, model, contextLength);
          } else {
            return await this.executeMultiFileAnalysis(secureParams, model, contextLength);
          }
          
        } catch (error: any) {
          return ErrorHandler.createExecutionError('find_unused_files', error);
        }
      });
    }
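    detectAnalysisMode is not reproduced on this page. A plausible sketch, assuming the mode is inferred from which inputs the caller supplied (this is an assumption, not the plugin's actual source):

    ```typescript
    type AnalysisMode = 'single-file' | 'multi-file';

    // Inline code or a single file path implies single-file analysis;
    // anything else (projectPath, files, or nothing) falls through to multi-file.
    function detectAnalysisMode(params: {
      code?: string;
      filePath?: string;
      projectPath?: string;
      files?: string[];
    }): AnalysisMode {
      if (params.code || params.filePath) return 'single-file';
      return 'multi-file';
    }
    ```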
  • Helper method that executes multi-file analysis: it discovers files, performs the analysis, builds the three-stage prompt, and runs chunked LLM execution.
    private async executeMultiFileAnalysis(params: any, model: any, contextLength: number) {
      let filesToAnalyze: string[] = params.files || 
        await this.discoverRelevantFiles(
          params.projectPath, 
          params.maxDepth,
          params.analysisType
        );
      
      const analysisResult = await this.performMultiFileAnalysis(
        filesToAnalyze,
        params,
        model,
        contextLength
      );
      
      const promptStages = this.getMultiFilePromptStages({
        ...params,
        analysisResult,
        fileCount: filesToAnalyze.length
      });
      
      const promptManager = new ThreeStagePromptManager();
      const chunkSize = TokenCalculator.calculateOptimalChunkSize(promptStages, contextLength);
      const dataChunks = promptManager.chunkDataPayload(promptStages.dataPayload, chunkSize);
      const conversation = promptManager.createChunkedConversation(promptStages, dataChunks);
      const messages = [
        conversation.systemMessage,
        ...conversation.dataMessages,
        conversation.analysisMessage
      ];
      
      return await ResponseProcessor.executeChunked(
        messages,
        model,
        contextLength,
        'find_unused_files',
        'multifile'
      );
    }
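    calculateOptimalChunkSize and chunkDataPayload belong to the plugin's prompt utilities and are not shown here. Conceptually, the chunking step splits the serialized file data so each piece fits within the model's context window; a simplified sketch (measuring size in characters rather than tokens) might look like:

    ```typescript
    // Split a large payload into pieces no longer than chunkSize characters,
    // so each piece can be sent as its own data message in the conversation.
    function chunkDataPayload(payload: string, chunkSize: number): string[] {
      const chunks: string[] = [];
      for (let i = 0; i < payload.length; i += chunkSize) {
        chunks.push(payload.slice(i, i + chunkSize));
      }
      return chunks;
    }
    ```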
  • Key helper for multi-file analysis logic, managing caching, batch file analysis, and result aggregation.
    private async performMultiFileAnalysis(
      files: string[],
      params: any,
      model: any,
      contextLength: number
    ): Promise<any> {
      const cacheKey = this.analysisCache.generateKey(
        'find_unused_files', 
        params, 
        files
      );
      
      const cached = await this.analysisCache.get(cacheKey);
      if (cached) return cached;
      
      const fileAnalysisResults = await this.multiFileAnalysis.analyzeBatch(
        files,
        (file: string) => this.analyzeIndividualFile(file, params, model),
        contextLength
      );
      
      const aggregatedResult = {
        summary: `Unused files analysis of ${files.length} files`,
        findings: fileAnalysisResults,
        data: {
          fileCount: files.length,
          totalSize: fileAnalysisResults.reduce((sum: number, result: any) => sum + (result.size || 0), 0),
          entryPoints: params.entryPoints || ['index.ts', 'main.ts', 'app.ts'],
          excludePatterns: params.excludePatterns || ['*.test.ts', '*.spec.ts', '*.d.ts'],
          analysisTimestamp: new Date().toISOString()
        }
      };
      
      await this.analysisCache.cacheAnalysis(cacheKey, aggregatedResult, {
        modelUsed: model.identifier || 'unknown',
        executionTime: Date.now(),
        timestamp: new Date().toISOString()
      });
      
      return aggregatedResult;
    }
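    generateKey is likewise internal to the analysis cache. One common approach, sketched below under the assumption that the key is derived deterministically from the tool name, parameters, and file list, is to hash their serialized form:

    ```typescript
    import { createHash } from 'node:crypto';

    // Derive a stable cache key by hashing the tool name, parameters, and
    // file list. JSON.stringify is key-order-sensitive, so production code
    // would normalize the params object first; this sketch assumes callers
    // pass params in a consistent shape.
    function generateCacheKey(
      tool: string,
      params: Record<string, unknown>,
      files: string[]
    ): string {
      const material = JSON.stringify({ tool, params, files: [...files].sort() });
      return createHash('sha256').update(material).digest('hex');
    }
    ```

    Sorting the file list means the key is stable regardless of discovery order, so re-running the same analysis hits the cache.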
