
Pre-commit Validation

precommit

Validate code changes before committing to detect security risks, performance issues, breaking changes, and quality problems using AI analysis.

Instructions

Pre-commit validation for code changes

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| task | Yes | What to validate for pre-commit (e.g., 'review changes before commit', 'validate security implications', 'check for breaking changes') | |
| files | No | Specific files to validate (optional - will analyze git changes if not provided) | |
| focus | No | Validation focus area | all |
| includeStaged | No | Include staged changes in validation | true |
| includeUnstaged | No | Include unstaged changes in validation | false |
| compareTo | No | Git ref to compare against (e.g., 'main', 'HEAD~1'). If not provided, analyzes current changes | |
| severity | No | Minimum severity level to report | medium |
| provider | No | AI provider to use | gemini |

Implementation Reference

  • The main execution logic for the precommit tool. It uses AI providers to analyze code changes, validates them against the specified focus area (security, performance, etc.), and returns a structured report with approval status and recommendations.
      async handlePrecommit(params: z.infer<typeof PrecommitSchema>) {
        const providerName = params.provider || (await this.providerManager.getPreferredProvider(['openai', 'gemini', 'azure', 'grok']));
        const provider = await this.providerManager.getProvider(providerName);
        
        // Build focus-specific system prompt
        const focusPrompts = {
          security: "Focus on security implications, potential vulnerabilities, authentication issues, data exposure risks, and security best practices.",
          performance: "Focus on performance impact, efficiency concerns, resource usage, scalability implications, and optimization opportunities.",
          quality: "Focus on code quality, maintainability, readability, design patterns, architecture concerns, and technical debt.",
          tests: "Focus on test coverage, test quality, edge cases, testing strategies, and verification completeness.",
          "breaking-changes": "Focus on breaking changes, API compatibility, backward compatibility, and impact on existing functionality.",
          all: "Provide comprehensive validation covering security, performance, quality, tests, and breaking changes.",
        };
    
        const systemPrompt = `You are an expert code reviewer specializing in pre-commit validation and change analysis. Your role is to thoroughly examine code changes and provide comprehensive feedback before commits.
    
    VALIDATION FOCUS: ${focusPrompts[params.focus]}
    
    ANALYSIS AREAS:
    1. **Change Impact**: Understand what's being modified and why
    2. **Risk Assessment**: Identify potential issues and their severity
    3. **Quality Validation**: Check code quality, patterns, and maintainability
    4. **Security Review**: Look for security implications and vulnerabilities
    5. **Performance Considerations**: Evaluate performance impact
    6. **Test Coverage**: Assess testing adequacy for changes
    7. **Breaking Changes**: Identify potential compatibility issues
    
    RESPONSE FORMAT:
    Provide a structured pre-commit validation report that includes:
    - **Summary**: Brief overview of changes and overall assessment
    - **Critical Issues**: High-severity problems that must be addressed
    - **Concerns**: Medium-severity issues worth reviewing
    - **Recommendations**: Suggestions for improvement
    - **Approval Status**: Ready to commit, needs fixes, or requires further review
    
    Severity levels: Critical (blocks commit), High (should fix), Medium (consider fixing), Low (minor improvements)`;
    
        // Build the validation prompt
        let prompt = `Pre-commit Validation Task: ${params.task}`;
        
        // Add file context if provided
        if (params.files && params.files.length > 0) {
          prompt += `\n\nFiles to validate: ${params.files.join(", ")}`;
        } else {
          prompt += `\n\nPlease analyze git changes (staged: ${params.includeStaged}, unstaged: ${params.includeUnstaged})`;
          if (params.compareTo) {
            prompt += ` compared to: ${params.compareTo}`;
          }
        }
    
        prompt += `\n\nValidation focus: ${params.focus}
    Minimum severity to report: ${params.severity}
    
    Please provide a comprehensive pre-commit validation analysis with specific findings and recommendations.`;
    
        try {
          const response = await provider.generateText({
            prompt,
            systemPrompt,
            temperature: 0.3, // Lower temperature for consistent validation
            reasoningEffort: (providerName === "openai" || providerName === "azure" || providerName === "grok") ? "high" : undefined,
            useSearchGrounding: false, // Pre-commit validation doesn't need web search
            toolName: 'precommit',
          });
    
          // Build structured response
          const validation = {
            task: params.task,
            focus: params.focus,
            severity: params.severity,
            files_analyzed: params.files || "git changes",
            validation_config: {
              include_staged: params.includeStaged,
              include_unstaged: params.includeUnstaged,
              compare_to: params.compareTo,
            },
            validation_report: response.text,
            provider_used: providerName,
            model_used: response.model,
          };
    
          // Parse validation report to extract key sections (simplified extraction)
          const reportText = response.text.toLowerCase();
          const hasCriticalIssues = reportText.includes("critical") || reportText.includes("blocks commit") || reportText.includes("must fix");
          const hasHighIssues = reportText.includes("high") || reportText.includes("should fix");
          const hasMediumIssues = reportText.includes("medium") || reportText.includes("consider fixing");
          
          let approvalStatus = "approved";
          if (hasCriticalIssues) {
            approvalStatus = "blocked";
          } else if (hasHighIssues) {
            approvalStatus = "needs_review";
          } else if (hasMediumIssues) {
            approvalStatus = "approved_with_suggestions";
          }
    
          const result = {
            validation,
            approval_status: approvalStatus,
            has_critical_issues: hasCriticalIssues,
            has_high_issues: hasHighIssues,
            has_medium_issues: hasMediumIssues,
            commit_recommendation: approvalStatus === "approved" 
              ? "Changes look good and are ready to commit."
              : approvalStatus === "blocked"
              ? "Critical issues found. Do not commit until resolved."
              : approvalStatus === "needs_review"
              ? "High-priority issues found. Consider fixing before commit."
              : "Changes are acceptable but have suggestions for improvement.",
          };
    
          return {
            content: [
              {
                type: "text",
                text: JSON.stringify(result, null, 2),
              },
            ],
            metadata: {
              toolName: "precommit",
              focus: params.focus,
              approvalStatus: approvalStatus,
              provider: providerName,
              model: response.model,
              severity: params.severity,
              usage: response.usage,
              ...response.metadata,
            },
          };
        } catch (error) {
          return {
            content: [
              {
                type: "text",
                text: JSON.stringify({
                  error: "Pre-commit validation failed",
                  message: error instanceof Error ? error.message : "Unknown error",
                  task: params.task,
                  focus: params.focus,
                }, null, 2),
              },
            ],
            isError: true,
          };
        }
      }
  • src/server.ts:364-372 (registration)
    Tool registration in the MCP server, linking to the handler in aiHandlers.handlePrecommit and using PrecommitSchema for input validation.
    // Register precommit tool
    server.registerTool("precommit", {
      title: "Pre-commit Validation",
      description: "Pre-commit validation for code changes",
      inputSchema: PrecommitSchema.shape,
    }, async (args) => {
      const aiHandlers = await getHandlers();
      return await aiHandlers.handlePrecommit(args);
    });
  • Input schema definition using Zod for the precommit tool, defining parameters like task, files, focus, git options, and provider.
    const PrecommitSchema = z.object({
      task: z.string().describe("What to validate for pre-commit (e.g., 'review changes before commit', 'validate security implications', 'check for breaking changes')"),
      files: z.array(z.string()).optional().describe("Specific files to validate (optional - will analyze git changes if not provided)"),
      focus: z.enum(["security", "performance", "quality", "tests", "breaking-changes", "all"]).default("all").describe("Validation focus area"),
      includeStaged: z.boolean().optional().default(true).describe("Include staged changes in validation"),
      includeUnstaged: z.boolean().optional().default(false).describe("Include unstaged changes in validation"),
      compareTo: z.string().optional().describe("Git ref to compare against (e.g., 'main', 'HEAD~1'). If not provided, analyzes current changes"),
      severity: z.enum(["critical", "high", "medium", "low", "all"]).default("medium").describe("Minimum severity level to report"),
      provider: z.enum(["openai", "gemini", "azure", "grok"]).optional().default("gemini").describe("AI provider to use"),
    });
  • Duplicate schema definition in handler file for TypeScript inference (z.infer<typeof PrecommitSchema>) in the handlePrecommit method.
    const PrecommitSchema = z.object({
      task: z.string().describe("What to validate for pre-commit (e.g., 'review changes before commit', 'validate security implications', 'check for breaking changes')"),
      files: z.array(z.string()).optional().describe("Specific files to validate (optional - will analyze git changes if not provided)"),
      focus: z.enum(["security", "performance", "quality", "tests", "breaking-changes", "all"]).default("all").describe("Validation focus area"),
      includeStaged: z.boolean().optional().default(true).describe("Include staged changes in validation"),
      includeUnstaged: z.boolean().optional().default(false).describe("Include unstaged changes in validation"),
      compareTo: z.string().optional().describe("Git ref to compare against (e.g., 'main', 'HEAD~1'). If not provided, analyzes current changes"),
      severity: z.enum(["critical", "high", "medium", "low", "all"]).default("medium").describe("Minimum severity level to report"),
      provider: z.enum(["openai", "gemini", "azure", "grok"]).optional().default("gemini").describe("AI provider to use"),
    });
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. The description mentions 'validation' but doesn't explain what the tool actually does behaviorally—whether it runs automated checks, provides recommendations, blocks commits, or returns analysis results. It lacks details on permissions, side effects, rate limits, or output format, leaving significant gaps for an 8-parameter tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no wasted words: front-loaded and clear. Given the tool's complexity, however, it may be too concise for adequate understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (8 parameters, no annotations, no output schema), the description is incomplete. It doesn't explain what validation entails, how results are returned, or behavioral traits. For a tool with rich parameters but no structured behavioral hints, the description fails to provide sufficient context for effective use, leaving too much undefined.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all 8 parameters. The description adds no additional meaning beyond what the schema provides—it doesn't explain parameter interactions, default behaviors, or practical examples. With high schema coverage, the baseline score of 3 is appropriate, as the description doesn't compensate but also doesn't detract.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Pre-commit validation for code changes' states the general purpose but is vague. It specifies the action ('validation') and target ('code changes') but lacks specificity about what validation entails or how it differs from sibling tools like 'review-code' or 'secaudit'. The title 'Pre-commit Validation' is essentially restated, making it somewhat tautological.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. With sibling tools like 'review-code', 'secaudit', and 'analyze-code' available, the description offers no context about when pre-commit validation is appropriate versus other code analysis tools. Usage is implied only by the tool's name, not explained.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
