analyze_commits_impact

Analyze git commits and code changes to assess their impact on projects, helping AI understand actual work performed for resume building.

Instructions

Get commits with their code changes for impact analysis. Returns commit messages + diffs to help AI understand the actual work done.

Input Schema

Name        Required  Description                                                        Default
repo_name   No        Name of the repository (optional, uses default if not specified)  -
since       No        Time range for commits                                             1 month ago
limit       No        Maximum number of commits to analyze                               10
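
As a quick illustration, an MCP client call to this tool might pass arguments like the following. The repository name is a placeholder, and 'since' accepts anything 'git log --since' understands.

    # Illustrative call arguments for analyze_commits_impact; "my-project" is a
    # placeholder repo name, and every field is optional per the schema above.
    arguments = {
        "repo_name": "my-project",   # omit to fall back to the server's default repo
        "since": "2 weeks ago",      # passed straight to `git log --since`
        "limit": 5,                  # caps the number of commits analyzed
    }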

Implementation Reference

  • The handler function that retrieves recent git commits by the configured author (up to the specified limit), fetches their stats and summaries using 'git show --stat', formats them for impact analysis, and returns the result as TextContent. A usage sketch follows this reference list.
    async def analyze_commits_impact(repo_name: Optional[str], since: str, limit: int) -> list[TextContent]:
        """Get commits with their code changes for impact analysis."""
        # Resolve repo
        if not repo_name:
            repo_name = list(REPO_DICT.keys())[0] if REPO_DICT else "default"
        
        if repo_name not in REPO_DICT:
            available = ", ".join(REPO_DICT.keys())
            return [TextContent(
                type="text",
                text=f"Repository '{repo_name}' not found.\n\nAvailable repositories: {available}"
            )]
        
        repo_path = REPO_DICT[repo_name]
        
        try:
            # Get commit hashes
            cmd_log = [
                "git", "log",
                f"--author={AUTHOR_NAME}",
                "--no-merges",
                f"--since={since}",
                f"-{limit}",
                "--pretty=format:%H"
            ]
            
            result = subprocess.run(
                cmd_log,
                cwd=repo_path,
                capture_output=True,
                text=True,
                check=True
            )
            
            commit_hashes = result.stdout.strip().split('\n')
            if not commit_hashes or commit_hashes == ['']:
                return [TextContent(
                    type="text",
                    text=f"No commits found in '{repo_name}' for {since}"
                )]
            
            # Get details for each commit
            all_output = f"Commit Impact Analysis for '{repo_name}' ({since}):\n"
            all_output += f"Analyzing {len(commit_hashes)} commits\n\n"
            all_output += "="*60 + "\n\n"
            
            for i, commit_hash in enumerate(commit_hashes, 1):
                try:
                    # Get commit with stats (no full diff, just summary)
                    cmd_show = [
                        "git", "show",
                        "--stat",
                        "--format=## Commit %h - %s%n%nAuthor: %an%nDate: %ar%n",
                        commit_hash
                    ]
                    
                    result = subprocess.run(
                        cmd_show,
                        cwd=repo_path,
                        capture_output=True,
                        text=True,
                        check=True
                    )
                    
                    all_output += result.stdout + "\n"
                    all_output += "-"*60 + "\n\n"
                    
                except subprocess.CalledProcessError:
                    all_output += f"## Commit {commit_hash[:7]}\nError retrieving details\n\n"
            
            all_output += f"\n\nTotal commits analyzed: {len(commit_hashes)}\n"
            all_output += f"\nUse 'get_commit_details' with a specific commit hash to see full code changes."
            
            return [TextContent(type="text", text=all_output)]
        
        except subprocess.CalledProcessError as e:
            return [TextContent(type="text", text=f"Git error: {e.stderr}")]
  • The input schema definition for the tool, specifying parameters repo_name (optional string), since (string, default '1 month ago'), limit (number, default 10). Part of the list_tools() response.
    Tool(
        name="analyze_commits_impact",
        description="Get commits with their code changes for impact analysis. Returns commit messages + diffs to help AI understand the actual work done.",
        inputSchema={
            "type": "object",
            "properties": {
                "repo_name": {
                    "type": "string",
                    "description": "Name of the repository (optional, uses default if not specified)"
                },
                "since": {
                    "type": "string",
                    "description": "Time range for commits",
                    "default": "1 month ago"
                },
                "limit": {
                    "type": "number",
                    "description": "Maximum number of commits to analyze (default: 10)",
                    "default": 10
                }
            }
        }
    ),
  • The dispatch logic in the main @app.call_tool() handler that routes calls to this tool name to the analyze_commits_impact function with parsed arguments.
    elif name == "analyze_commits_impact":
        return await analyze_commits_impact(
            arguments.get("repo_name"),
            arguments.get("since", "1 month ago"),
            arguments.get("limit", 10)
        )
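
For reference, a minimal sketch of exercising the handler directly, assuming the module-level REPO_DICT and AUTHOR_NAME globals referenced above are already configured:

    import asyncio

    async def _demo():
        # Analyze the three most recent non-merge commits from the last two
        # weeks in the default repository (repo_name=None triggers the fallback).
        results = await analyze_commits_impact(repo_name=None, since="2 weeks ago", limit=3)
        print(results[0].text)

    asyncio.run(_demo())
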
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool returns 'commit messages + diffs' but doesn't cover critical aspects like whether this is a read-only operation, potential rate limits, authentication requirements, error handling, or the format of the output (e.g., structured data vs. raw text). For a tool with no annotations, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
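
One way to close that gap, sketched here and not part of the reviewed server, would be MCP tool annotations that spell out the read-only, non-destructive behavior implied by the handler's 'git log' / 'git show' calls:

    # Hypothetical ToolAnnotations values for analyze_commits_impact; the field
    # names follow the MCP spec, and none of them appear in the original definition.
    annotations = {
        "readOnlyHint": True,      # the handler only reads history via git log/show
        "destructiveHint": False,  # nothing in the repository is modified
        "idempotentHint": True,    # the same arguments return the same data
    }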

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded, consisting of two sentences that directly state the tool's purpose and output. There's no wasted text, and it efficiently communicates key information. However, it could be slightly improved by integrating usage hints or behavioral details without adding unnecessary length.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 parameters, no output schema, no annotations), the description is minimally adequate. It covers the purpose and output but lacks details on behavioral traits, usage context, and error handling. Without an output schema, it doesn't fully explain return values beyond 'commit messages + diffs'. This leaves gaps for an AI agent to operate effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with clear documentation for each parameter ('repo_name', 'since', 'limit'), including defaults. The description adds no parameter semantics beyond the schema; for instance, it does not explain the accepted formats for 'since' or how 'limit' affects the depth of the analysis. With high schema coverage the baseline score of 3 is appropriate: the description neither compensates for gaps nor detracts.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
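
For example, a small schema tweak (an illustrative sketch, not the project's actual schema) could name the accepted formats for 'since':

    # Sketch of a more explicit 'since' property; the wording is hypothetical.
    since_property = {
        "type": "string",
        "description": (
            "Time range for commits, in any form accepted by 'git log --since', "
            "e.g. '2 weeks ago', 'yesterday', or '2024-01-01'"
        ),
        "default": "1 month ago",
    }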

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get commits with their code changes for impact analysis.' It specifies the verb ('Get'), resource ('commits'), and additional context ('with their code changes for impact analysis'). However, it doesn't explicitly distinguish this tool from sibling tools like 'get_commit_details' or 'get_git_log', which appear related to commit retrieval, so it lacks sibling differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions the purpose but doesn't specify prerequisites, exclusions, or compare it to sibling tools such as 'get_commit_details' or 'get_git_log'. This leaves the agent without clear usage context, relying solely on the tool name and description.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
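
As an illustration (hedged wording, not the project's actual text), one extra sentence in the description would cover that guidance:

    # Hypothetical rewrite of the tool description with explicit usage guidance;
    # get_commit_details is the sibling tool the original text already points to.
    description = (
        "Read-only: list recent commits with per-file change stats for impact analysis. "
        "Use this for a cross-commit summary; use get_commit_details when you need the "
        "full diff of a single commit."
    )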
