get_coupling_trend

Analyze file coupling trends over git history to determine if a module is stabilizing or destabilizing, using Ca/Ce/instability metrics at past commits.

Instructions

File coupling over git history: Ca/Ce/instability at past commits. Shows if a module is stabilizing or destabilizing.

Input Schema

Name        Required  Description                     Default
----------  --------  ------------------------------  -------
file_path   Yes       File path to analyze            —
since_days  No        Analyze last N days             90
snapshots   No        Number of historical snapshots  6
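For illustration, a call built from the schema above might look like the following sketch. The file path and the `validate` helper are hypothetical; only the parameter names, requirement flags, and defaults come from the schema:

```python
# Hypothetical arguments for get_coupling_trend, based on the input schema above.
# Only "file_path" is required; the other values shown are the documented defaults.
arguments = {
    "file_path": "src/core/module.py",  # file to analyze (hypothetical path)
    "since_days": 90,                    # analyze last N days (default: 90)
    "snapshots": 6,                      # number of historical snapshots (default: 6)
}

def validate(args: dict) -> bool:
    """Minimal check mirroring the Required column: only file_path is mandatory."""
    return isinstance(args.get("file_path"), str)
```

An agent omitting `since_days` or `snapshots` would, per the schema, get the 90-day / 6-snapshot defaults.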
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions analyzing 'git history' and 'past commits,' which implies read-only behavior, but doesn't specify whether it requires specific permissions, how it handles large repositories, or what the output format looks like. For a tool with no annotation coverage, this leaves significant gaps in understanding its operational traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief and front-loaded, consisting of two concise sentences that directly state the tool's purpose and outcome. There's no wasted verbiage, and it efficiently communicates the core functionality. However, it could be slightly more structured by explicitly separating the action from the result.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of analyzing git history and coupling metrics, the description is incomplete. No annotations are provided to clarify behavioral aspects, and there's no output schema to describe the return values (e.g., what 'Ca/Ce/instability' data looks like). The description alone doesn't provide enough context for an agent to fully understand how to interpret or use the tool's results effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
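The "Ca/Ce/instability" metrics the description leans on are Robert Martin's package coupling metrics. A short sketch of how one historical snapshot might be derived follows; the formula is standard, but the snapshot record shape is an assumption, not the tool's documented output:

```python
def instability(ca: int, ce: int) -> float:
    """Martin's instability metric: I = Ce / (Ca + Ce).

    Ca (afferent coupling): number of modules that depend on this one.
    Ce (efferent coupling): number of modules this one depends on.
    I ranges from 0.0 (maximally stable) to 1.0 (maximally unstable).
    """
    if ca + ce == 0:
        return 0.0  # isolated module; a convention, not part of Martin's definition
    return ce / (ca + ce)

# Hypothetical per-snapshot record, assuming the tool reports one entry
# per historical point in the trend.
snapshot = {"commit": "abc1234", "ca": 3, "ce": 9,
            "instability": instability(3, 9)}  # 9 / 12 = 0.75
```

Rising instability across snapshots would indicate a destabilizing module; falling instability, a stabilizing one.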

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the input schema already documents all three parameters (file_path, since_days, snapshots) with their types, constraints, and defaults. The description adds no parameter semantics beyond what the schema provides; for example, it does not explain how the 'Ca/Ce' metrics relate to the chosen parameters. This meets the baseline expected when schema coverage is high.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool analyzes 'File coupling over git history' and provides specific metrics (Ca/Ce/instability) to show 'if a module is stabilizing or destabilizing.' It uses specific verbs ('analyze', 'shows') and identifies the resource (file/module). However, it doesn't explicitly differentiate from sibling tools like 'get_coupling' or 'get_complexity_trend' which might have overlapping analysis domains.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions no prerequisites or exclusions, and does not point to related tools (e.g., 'get_coupling' for current coupling versus this tool for historical trends). The historical context is implied by the mention of 'git history' and 'past commits,' but explicit usage guidelines are absent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/nikolai-vysotskyi/trace-mcp'
