
analyze_code

Analyze frontend source code to identify accessibility issues, CSS problems, component complexity, design inconsistencies, and performance concerns for quality improvement.

Instructions

Analyze frontend source code for quality issues: accessibility anti-patterns, CSS problems, component complexity, design inconsistencies, and performance concerns.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| directory | Yes | Absolute path to the frontend source directory (e.g., /Users/me/project/src) | — |

Implementation Reference

  • The primary handler function `analyzeCode` that executes code analysis on a given directory. It scans files for various patterns, applies rules, and aggregates findings.
    export async function analyzeCode(
      directory: string
    ): Promise<CodeAnalysisResult> {
      const { config, loaded, path: configPath } = await loadConfig(directory);
    
      const framework = await detectFramework(directory);
    
      // Convert config ignore patterns to glob-compatible patterns
      const configIgnoreGlobs = config.ignore.map((pattern) => {
        // If pattern has no glob chars and no extension, treat as a directory
        if (!pattern.includes("*") && !pattern.includes(".")) {
          return `${pattern}/**`;
        }
        // If it's a glob pattern like "*.test.*", ensure it matches in all directories
        if (pattern.startsWith("*.")) {
          return `**/${pattern}`;
        }
        return pattern;
      });
    
      const files = await collectFrontendFiles(directory, 200, configIgnoreGlobs);
    
      // Filter RULES based on config (immutable — create a new filtered array)
      const activeRules = RULES.filter((rule) => config.rules[rule.id] !== "off");
    
      // Build a list of disabled rule IDs for reporting
      const rulesDisabled = Object.entries(config.rules)
        .filter(([, status]) => status === "off")
        .map(([id]) => id);
    
      // Build a list of severity-overridden rule IDs for reporting
      const severityOverrides = Object.keys(config.severity);
    
      const findings: CodeFinding[] = [];
      let totalLines = 0;
      let componentCount = 0;
      let stylesheetCount = 0;
    
      for (const file of files) {
        totalLines += file.lineCount;
    
        // Count components and stylesheets
        if ([".tsx", ".jsx", ".vue", ".svelte"].includes(file.extension)) {
          componentCount++;
        }
        if ([".css", ".scss", ".sass", ".less"].includes(file.extension)) {
          stylesheetCount++;
        }
    
        // Run pattern-based rules (using filtered activeRules)
        for (const rule of activeRules) {
          if (!rule.fileTypes.includes(file.extension)) continue;
    
          const matches = file.content.matchAll(rule.pattern);
          let matchCount = 0;
    
          // Determine the effective severity: config override takes precedence
          const effectiveSeverity = config.severity[rule.id] ?? rule.severity;
    
          for (const match of matches) {
            matchCount++;
            if (matchCount > 5) break; // Limit findings per rule per file
    
            // Find line number
            const beforeMatch = file.content.slice(0, match.index);
            const lineNumber = beforeMatch.split("\n").length;
    
            findings.push({
              file: file.relativePath,
              line: lineNumber,
              severity: effectiveSeverity,
              category: rule.category,
              rule: rule.id,
              message: rule.message,
              suggestion: rule.suggestion,
            });
          }
        }
    
        // Run file-level checks
        const sizeCheck = checkFileSize(file);
        if (sizeCheck) findings.push(sizeCheck);
    
        const nestingCheck = checkDeepNesting(file);
        if (nestingCheck) findings.push(nestingCheck);
      }
    
      // Run project-level checks
      const errorBoundaryCheck = checkMissingErrorBoundary(files);
      if (errorBoundaryCheck) findings.push(errorBoundaryCheck);
    
      // Sort findings by severity
      const severityOrder: Record<Severity, number> = {
        critical: 0,
        high: 1,
        medium: 2,
        low: 3,
      };
      const sortedFindings = [...findings].sort(
        (a, b) => severityOrder[a.severity] - severityOrder[b.severity]
      );
    
      // Find largest files
      const sortedFiles = [...files].sort((a, b) => b.lineCount - a.lineCount);
      const largestFiles = sortedFiles.slice(0, 5).map((f) => ({
        file: f.relativePath,
        lines: f.lineCount,
      }));
    
      const avgFileSize =
        files.length > 0
          ? Math.round(totalLines / files.length)
          : 0;
    
      return {
        directory,
        timestamp: new Date().toISOString(),
        framework,
        totalFiles: files.length,
        totalLines,
        findings: sortedFindings,
        summary: {
          components: componentCount,
          stylesheets: stylesheetCount,
          avgFileSize,
          largestFiles,
        },
        configStatus: {
          loaded,
          path: configPath,
          rulesDisabled,
          severityOverrides,
        },
      };
    }
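The ignore-pattern normalization near the top of the handler is the most branch-heavy part and is easy to check in isolation. A minimal sketch of that logic, extracted into a standalone helper (the name `toIgnoreGlob` is mine, not from the source):

```typescript
// Mirror of the pattern-normalization logic in analyzeCode: bare directory
// names become "dir/**", extension globs like "*.test.*" are widened to match
// at any depth, and everything else passes through unchanged.
function toIgnoreGlob(pattern: string): string {
  // No glob chars and no extension: treat as a directory name.
  if (!pattern.includes("*") && !pattern.includes(".")) {
    return `${pattern}/**`;
  }
  // Extension-style glob: make it match in all directories.
  if (pattern.startsWith("*.")) {
    return `**/${pattern}`;
  }
  return pattern;
}

const normalized = ["node_modules", "*.test.*", "dist/**"].map(toIgnoreGlob);
console.log(normalized);
```

Note that a pattern like `build.config` would fall through the first branch (it contains a dot), so only truly bare names get the directory treatment.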
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the full burden of behavioral disclosure. While it explains which categories are analyzed, it does not confirm whether the operation is read-only, what format results are returned in, how long analysis takes, or whether any files are modified.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
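One way to close this gap is MCP's optional tool annotations. A sketch of what this tool might declare — the field values are my inferences from the handler (which only reads files and returns a report), not annotations the server actually ships:

```typescript
// Hypothetical MCP ToolAnnotations for analyze_code, inferred from the
// handler's behavior: it scans a directory and aggregates findings without
// writing anything.
const analyzeCodeAnnotations = {
  title: "Analyze Frontend Code",
  readOnlyHint: true,      // scans files, never modifies them
  destructiveHint: false,  // no deletions or irreversible effects
  idempotentHint: true,    // same directory in, same findings out
  openWorldHint: false,    // operates only on the local filesystem
};

console.log(analyzeCodeAnnotations);
```

Even with annotations present, the spec treats them as hints, so the description should still state the read-only behavior in prose.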

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that front-loads the core action ('Analyze frontend source code') and follows with a colon-delimited list of specific concerns. Every word earns its place with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple single-parameter input and lack of output schema, the description adequately covers the tool's functional scope. However, it lacks any indication of return value structure or reporting format, which would be helpful for a tool returning complex analysis results.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
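The return structure the critique asks for is in fact visible in the handler's final `return` statement. A hedged reconstruction of the result type an agent could expect — field names are copied from the code, but the exact union and nullability choices are my guesses:

```typescript
// Result shape reconstructed from the return statement in analyzeCode.
// Severity values come from the severityOrder map used for sorting.
type Severity = "critical" | "high" | "medium" | "low";

interface CodeFinding {
  file: string;        // path relative to the analyzed directory
  line: number;
  severity: Severity;
  category: string;
  rule: string;
  message: string;
  suggestion: string;
}

interface CodeAnalysisResult {
  directory: string;
  timestamp: string;   // ISO 8601, from new Date().toISOString()
  framework: string;   // from detectFramework
  totalFiles: number;
  totalLines: number;
  findings: CodeFinding[];  // sorted critical → low
  summary: {
    components: number;
    stylesheets: number;
    avgFileSize: number;    // mean lines per file, rounded
    largestFiles: Array<{ file: string; lines: number }>;
  };
  configStatus: {
    loaded: boolean;
    path: string | null;    // assumed nullable when no config file is found
    rulesDisabled: string[];
    severityOverrides: string[];
  };
}
```

Summarizing even this much in the description ("returns a JSON report of severity-ranked findings plus summary statistics") would address the gap.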

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage with a clear example path, establishing a baseline of 3. The description adds no parameter-specific context, but none is needed given the schema's completeness.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Analyze') and resource ('frontend source code') and enumerates five distinct quality categories it checks. It implicitly distinguishes from specialized siblings like 'accessibility_audit' and 'performance_audit' by demonstrating broader scope, though it doesn't explicitly name those tools for comparison.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use the tool by listing the specific quality issues it detects (accessibility, CSS, complexity, etc.), suggesting it for comprehensive static analysis. However, it lacks explicit guidance on when to prefer this over the specialized audit siblings or prerequisites like requiring a local directory.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
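A description that addressed this guidance might read as follows — illustrative wording only, not the tool's actual text, with the sibling tool names taken from the Purpose section above:

```
Analyze frontend source code (read-only; no files are modified) for quality
issues: accessibility anti-patterns, CSS problems, component complexity,
design inconsistencies, and performance concerns. Returns a JSON report of
severity-ranked findings with file and line locations. Requires a local
source directory. Prefer accessibility_audit or performance_audit when only
one concern matters; use this tool for a broad first pass.
```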

