get_drift_signature

Scan recent snapshot history to identify specs that repeatedly drift, have vague descriptions, or lack hash records. Flags chronic problems: unstable specs (drift every cycle), chronically low-quality specs (vague every cycle), and chronically unhashed specs (never record a hash).

Instructions

Scan the recent snapshot history for chronic problems: the same spec_id repeatedly appearing in drifted / unknown / low-quality buckets. Specs are flagged as 'unstable' (drifts every cycle), 'chronic_low_quality' (vague every cycle), or 'chronic_unhashed' (never gets a hash recorded). Use when a user asks 'which specs keep causing trouble' / 'what's the long-running pain'. Args: window (snapshots to scan, default 5), threshold (min recurrence to flag, default 3). Returns {ready, snapshots_scanned, chronic[], markdown}.

Input Schema

Name       Required  Description                             Default
window     No        Number of recent snapshots to scan      5
threshold  No        Minimum recurrence to flag as chronic   3
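
Both parameters are optional integers. As a minimal sketch, assuming the tool receives a plain JSON arguments object (as the handler below does), a call that widens the window and raises the flagging bar might pass:

    # Hypothetical arguments payload for get_drift_signature.
    # Omitted keys fall back to the handler defaults (window=5, threshold=3).
    arguments = {
        "window": 10,    # scan the last 10 snapshots
        "threshold": 4,  # flag specs appearing in >= 4 of them
    }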

Implementation Reference

  • The handler function get_drift_signature_tool that implements the tool logic. It reads snapshot history, counts chronic appearances of spec_ids across drifted/unknown/quality buckets, and returns a drift signature report with markdown output.
    def get_drift_signature_tool(arguments: dict) -> dict[str, Any]:
        """Scan history for chronic problems: same spec_id repeatedly showing
        up in drifted / unknown / low-quality buckets.
    
        Threshold defaults: appears in >= 3 of the last 5 snapshots → chronic.
    
        Args:
            window: int, default 5 — how many recent snapshots to scan.
            threshold: int, default 3 — minimum recurrence to flag as chronic.
        """
        window = int(arguments.get("window", 5))
        threshold = int(arguments.get("threshold", 3))
    
        snapshots = _read_snapshots(limit=window)
        if len(snapshots) < threshold:
            return {
                "ready": False,
                "snapshots_available": len(snapshots),
                "snapshots_needed": threshold,
                "chronic": [],
                "markdown": (
                    "# Drift signature\n\n"
                    f"_Need at least {threshold} snapshots; have {len(snapshots)}. "
                    f"Run `get_optimization_plan` over time to build history._"
                ),
            }
    
        drift_counts: dict[str, int] = {}
        unknown_counts: dict[str, int] = {}
        quality_counts: dict[str, int] = {}
    
        for snap in snapshots:
            for d in snap.get("drifted", []) or []:
                sid = d.get("spec_id")
                if sid:
                    drift_counts[sid] = drift_counts.get(sid, 0) + 1
            for u in snap.get("unknown", []) or []:
                sid = u.get("spec_id")
                if sid:
                    unknown_counts[sid] = unknown_counts.get(sid, 0) + 1
            for q in snap.get("quality", []) or []:
                sid = q.get("spec_id")
                if sid:
                    quality_counts[sid] = quality_counts.get(sid, 0) + 1
    
        chronic = []
        for sid, count in drift_counts.items():
            if count >= threshold:
                chronic.append({"spec_id": sid, "kind": "unstable", "appearances": count, "window": len(snapshots),
                                "reason": "Spec keeps changing after being linked — PM may be iterating without resetting downstream tests."})
        for sid, count in quality_counts.items():
            if count >= threshold:
                chronic.append({"spec_id": sid, "kind": "chronic_low_quality", "appearances": count, "window": len(snapshots),
                                "reason": "Spec is repeatedly flagged for vague language / implementation-leak / unclear roles."})
        for sid, count in unknown_counts.items():
            if count >= threshold:
                chronic.append({"spec_id": sid, "kind": "chronic_unhashed", "appearances": count, "window": len(snapshots),
                                "reason": "Linked tests never recorded an ac_hash — re-link with parse_spec._meta.ac_hash to enable drift detection."})
    
        # Group by kind first so each markdown section below is emitted exactly
        # once, then order by recurrence (then spec_id) within a kind.
        chronic.sort(key=lambda r: (r["kind"], -r["appearances"], r["spec_id"]))
    
        md = ["# Drift signature", ""]
        md.append(f"- Window: last {len(snapshots)} snapshots")
        md.append(f"- Threshold: ≥ {threshold} appearances = chronic")
        md.append(f"- Chronic specs flagged: {len(chronic)}")
        md.append("")
        if not chronic:
            md.append("🟢 No chronic patterns detected. Keep an eye on the next few snapshots.")
        else:
            kind_label = {
                "unstable": "🔴 Unstable (repeatedly drifting)",
                "chronic_low_quality": "🟡 Chronically low quality",
                "chronic_unhashed": "⚪ Chronically without ac_hash",
            }
            last_kind = None
            for c in chronic:
                if c["kind"] != last_kind:
                    md.append("")
                    md.append(f"## {kind_label.get(c['kind'], c['kind'])}")
                    last_kind = c["kind"]
                md.append(f"- `{c['spec_id']}` — appeared {c['appearances']}/{c['window']} snapshots · {c['reason']}")
    
        return {
            "ready": True,
            "snapshots_scanned": len(snapshots),
            "threshold": threshold,
            "chronic": chronic,
            "markdown": "\n".join(md),
        }
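
    A direct invocation sketch (hypothetical call site; it assumes snapshot history has already been built up by repeated get_optimization_plan runs):

    # Ask for a longer window and a stricter recurrence bar.
    result = get_drift_signature_tool({"window": 10, "threshold": 4})
    if result["ready"]:
        print(result["markdown"])          # human-readable report
        for item in result["chronic"]:     # structured findings
            print(item["spec_id"], item["kind"], item["appearances"])
    else:
        print(f"Need {result['snapshots_needed']} snapshots, "
              f"have {result['snapshots_available']}")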
  • The dispatch table _DISPATCH that maps the string 'get_drift_signature' to history_tools.get_drift_signature_tool for tool routing.
    _DISPATCH: dict[str, Callable[[dict], dict]] = {
        "get_spec_source_info": _meta_info,
        "list_specs": specs_tools.list_specs_tool,
        "fetch_spec": specs_tools.fetch_spec_tool,
        "parse_spec": specs_tools.parse_spec_tool,
        "extract_scenarios": scenarios_tools.extract_scenarios_tool,
        "generate_test_plan": scenarios_tools.generate_test_plan_tool,
        "link_test_to_spec": coverage_tools.link_test_to_spec_tool,
        "get_coverage_matrix": coverage_tools.get_coverage_matrix_tool,
        "get_drift_report": coverage_tools.get_drift_report_tool,
        "analyze_spec_quality": quality_tools.analyze_spec_quality_tool,
        "propose_spec_improvements": quality_tools.propose_spec_improvements_tool,
        "auto_link_tests": auto_link_tools.auto_link_tests_tool,
        "get_optimization_plan": optimization_tools.get_optimization_plan_tool,
        "init_spec_knowledge": spec_knowledge_tools.init_spec_knowledge_tool,
        "get_spec_context": spec_knowledge_tools.get_spec_context_tool,
        "get_spec_history": history_tools.get_spec_history_tool,
        "get_drift_signature": history_tools.get_drift_signature_tool,
        "get_telemetry": telemetry_tools.get_telemetry_tool,
    }
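
    A hedged sketch of how such a table is typically consumed when a tool call arrives (the function name and error shape here are assumptions, not code from this repo):

    def dispatch_tool_call(name: str, arguments: dict | None) -> dict:
        # Route the MCP tool name to its registered handler.
        handler = _DISPATCH.get(name)
        if handler is None:
            return {"error": f"Unknown tool: {name}"}
        return handler(arguments or {})

    # e.g. dispatch_tool_call("get_drift_signature", {"window": 10})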
  • The Tool registration via @app.list_tools() that defines the name, description, and inputSchema (window, threshold) for the 'get_drift_signature' MCP tool.
    Tool(
        name="get_drift_signature",
        description=(
            "Scan the recent snapshot history for chronic problems: "
            "same spec_id repeatedly appearing in drifted / unknown / "
            "low-quality buckets. Specs flagged as 'unstable' (drifts "
            "every cycle), 'chronic_low_quality' (vague every cycle), "
            "or 'chronic_unhashed' (never gets a hash recorded). "
            "Use when a user asks 'which specs keep causing trouble' / "
            "'what's the long-running pain'. "
            "Args: window (snapshots to scan, default 5), threshold "
            "(min recurrence to flag, default 3). "
            "Returns {ready, snapshots_scanned, chronic[], markdown}."
        ),
        inputSchema={
            "type": "object",
            "properties": {
                "window": {"type": "integer", "default": 5},
                "threshold": {"type": "integer", "default": 3},
            },
        },
    ),
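
    A sketch of the surrounding registration, assuming the standard mcp Python SDK pattern (the server name and variable names are assumptions; only the Tool entry above is taken from this page):

    from mcp.server import Server
    from mcp.types import Tool

    app = Server("mk-spec-master")  # server name assumed

    @app.list_tools()
    async def list_tools() -> list[Tool]:
        drift_signature = Tool(
            name="get_drift_signature",
            description="...",  # full description string as shown above
            inputSchema={
                "type": "object",
                "properties": {
                    "window": {"type": "integer", "default": 5},
                    "threshold": {"type": "integer", "default": 3},
                },
            },
        )
        # The real server returns every registered tool here.
        return [drift_signature]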
  • The helper _read_snapshots that reads snapshot JSON files from HISTORY_DIR, used by get_drift_signature_tool to load history data.
    def _read_snapshots(limit: int | None = None) -> list[dict]:
        """Return snapshots in chronological order (oldest first). Reads the
        directory listing; tolerates missing dir + bad files."""
        if not config.HISTORY_DIR.exists():
            return []
        files = sorted(config.HISTORY_DIR.glob("*.json"))
        if limit:
            files = files[-limit:]
        out: list[dict] = []
        for f in files:
            try:
                out.append(json.loads(f.read_text(encoding="utf-8")))
            except (OSError, json.JSONDecodeError):
                continue
        return out
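
    The snapshot schema itself is not shown on this page. Based only on the fields the handler reads, a hedged sketch of the shape of one snapshot file in HISTORY_DIR:

    # Hypothetical snapshot contents; only the keys that
    # get_drift_signature_tool actually reads are sketched here.
    snapshot = {
        "drifted": [{"spec_id": "SPEC-101"}],  # specs whose ac_hash changed
        "unknown": [{"spec_id": "SPEC-202"}],  # linked tests missing an ac_hash
        "quality": [{"spec_id": "SPEC-101"}],  # specs flagged by quality checks
    }
    # Each entry may carry more fields; the drift-signature scan only needs
    # spec_id. Snapshots are assumed to be written as *.json files (one per
    # get_optimization_plan run) under config.HISTORY_DIR.
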
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Given that no annotations are provided, the description carries the full burden. It explains the scanning logic, the three flagging categories, and the return structure. It does not explicitly state that the tool is read-only or discuss side effects, but the action 'scan' and the return of a report imply no side effects. The description is transparent enough for safe use.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise, front-loading the purpose and then providing usage guidance, parameter details, and return format in a few sentences. Every sentence adds value, with no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 parameters, no output schema, no nested objects), the description is complete. It covers the purpose, when to use, parameter explanations, and the structure of the return value. No additional information is needed for an agent to correctly select and invoke the tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, so the description must compensate. It fully explains both parameters: window (number of snapshots to scan, default 5) and threshold (minimum recurrence to flag, default 3). This adds crucial meaning beyond the schema's type and defaults.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it scans recent snapshot history for chronic problems like unstable, chronic_low_quality, and chronic_unhashed. Provides example queries ('which specs keep causing trouble'). However, it does not explicitly distinguish this from its sibling tools such as get_drift_report or analyze_spec_quality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly advises when to use the tool: when a user asks about specs that repeatedly cause trouble or long-running pain. This provides clear context. It does not mention when not to use the tool or suggest alternatives, but the guidance is sufficient.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
