# propose_spec_improvements
Generates a markdown coach plan with concrete rewrite suggestions to improve specifications. Groups findings by spec and issue type for PM review.
## Instructions
Takes analyze_spec_quality output and produces a PM-facing markdown coach plan that groups findings by spec and issue type, with a concrete rewrite suggestion per finding. If `analysis` is not provided, the tool runs analyze_spec_quality inline with the remaining arguments. Use this when a user asks "how do I improve this spec" or "review my PRD". Returns {markdown, actions[]}.
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| analysis | No | Output of analyze_spec_quality. If omitted, this tool runs the analysis itself. | |
| spec_id | No | ID of a spec to analyze; forwarded to analyze_spec_quality when `analysis` is omitted. | |
| raw_text | No | Raw spec text to analyze; forwarded to analyze_spec_quality when `analysis` is omitted. | |
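Since all three arguments are optional, there are two invocation modes: pass a prior analysis, or let the tool analyze a spec inline. A minimal sketch of the inline mode follows; the import path is an assumption based on the dispatch table below, and the spec text is invented for illustration:

```python
# Invocation sketch; only the argument names are taken from the schema above.
from mk_spec_master.quality_tools import propose_spec_improvements_tool  # assumed path

# Omitting `analysis` makes the tool run analyze_spec_quality inline on the
# remaining arguments (here, raw spec text).
plan = propose_spec_improvements_tool(
    {"raw_text": "The search page should be fast and user-friendly."}
)
print(plan["markdown"])  # PM-facing coach plan
print(plan["actions"])   # one action per (spec, issue-type) group
```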
## Implementation Reference
- The main handler function `propose_spec_improvements_tool` takes analysis results (or raw text) and produces a markdown coach plan with findings grouped by spec and issue type, including severity badges, suggestions, and actionable items.
```python
from typing import Any


def propose_spec_improvements_tool(arguments: dict) -> dict[str, Any]:
    """Take the output of analyze_spec_quality (or run it inline) and
    produce a markdown coach plan that PMs / spec authors can act on.
    """
    analysis = arguments.get("analysis")
    if not analysis:
        analysis = analyze_spec_quality_tool(
            {k: v for k, v in arguments.items() if k != "analysis"}
        )
    md_lines = ["# Spec quality coach", ""]
    total = analysis.get("total_findings", 0)
    md_lines.append(
        f"**{total} finding(s) across {analysis.get('specs_analyzed', 0)} spec(s).**"
    )
    if total == 0:
        md_lines += [
            "",
            "🟢 No issues caught by current heuristics. Note this checks for "
            "**vague language**, **untestable implementation refs**, and "
            "**unclear role refs** — semantic correctness still needs human review.",
        ]
        return {"markdown": "\n".join(md_lines), "actions": []}
    actions: list[dict] = []
    for spec_result in analysis.get("results", []):
        if spec_result["finding_count"] == 0:
            continue
        md_lines.append("")
        md_lines.append(
            f"## `{spec_result['spec_id']}` — {spec_result.get('title') or ''}"
        )
        md_lines.append(
            f"_score: {spec_result['score']}/100 · findings: {spec_result['finding_count']}_"
        )
        md_lines.append("")
        grouped: dict[str, list[dict]] = {}
        for f in spec_result["findings"]:
            grouped.setdefault(f["issue"], []).append(f)
        for issue, items in grouped.items():
            severity = items[0].get("severity", "warn")
            badge = {"error": "🔴", "warn": "🟡", "info": "🔵"}.get(severity, "•")
            human_name = {
                "vague_language": "Vague language",
                "untestable_implementation_ref": "Untestable / implementation-detail AC",
                "unclear_role_refs": "Unclear role references",
            }.get(issue, issue)
            md_lines.append(f"### {badge} {human_name} *(×{len(items)})*")
            md_lines.append("")
            for f in items:
                ac_label = f"`{f['ac_id']}`" if f.get("ac_id") else "_(spec-level)_"
                md_lines.append(f"- {ac_label}: evidence — `{f['evidence']}`")
                md_lines.append(f"  - **Suggestion:** {f['suggestion']}")
                if f.get("ac_text"):
                    md_lines.append(f"  - **Source AC:** > {f['ac_text']}")
            md_lines.append("")
            actions.append(
                {
                    "spec_id": spec_result["spec_id"],
                    "issue": issue,
                    "count": len(items),
                    "severity": severity,
                }
            )
    return {
        "markdown": "\n".join(md_lines),
        "actions": actions,
    }
```
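To make the expected input concrete, here is a minimal sketch of an `analysis` payload and the resulting action list. The field names are inferred from the keys the handler reads above; the real analyze_spec_quality output may carry additional fields, and the spec content is invented for illustration:

```python
# Minimal `analysis` payload sketch; field names are inferred from the keys
# the handler reads, not from the authoritative analyze_spec_quality schema.
analysis = {
    "total_findings": 1,
    "specs_analyzed": 1,
    "results": [
        {
            "spec_id": "SPEC-42",        # illustrative spec ID
            "title": "Search latency",   # illustrative title
            "score": 80,
            "finding_count": 1,
            "findings": [
                {
                    "issue": "vague_language",
                    "severity": "warn",
                    "ac_id": "AC-1",
                    "evidence": "should be fast",
                    "suggestion": "Replace 'fast' with a measurable target, e.g. p95 < 200 ms.",
                    "ac_text": "Search should be fast.",
                }
            ],
        }
    ],
}

result = propose_spec_improvements_tool({"analysis": analysis})
print(result["markdown"])
# result["actions"] == [{"spec_id": "SPEC-42", "issue": "vague_language",
#                        "count": 1, "severity": "warn"}]
```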
- src/mk_spec_master/server.py:292-313 (schema): Tool registration with input schema. Accepts an optional `analysis` (object) or `spec_id`/`raw_text` (strings) and returns {markdown, actions[]}.

```python
Tool(
    name="propose_spec_improvements",
    description=(
        "Take analyze_spec_quality output and produce a PM-facing "
        "markdown coach plan grouping findings by spec and issue type, "
        "with concrete rewrite suggestions per finding. If `analysis` "
        "is not provided, runs analyze_spec_quality inline with the "
        "remaining arguments. Use this when a user says 'how do I "
        "improve this spec' or 'review my PRD'. "
        "Returns {markdown, actions[]}."
    ),
    inputSchema={
        "type": "object",
        "properties": {
            "analysis": {
                "type": "object",
                "description": "Output of analyze_spec_quality. If omitted, this tool runs the analysis itself.",
            },
            "spec_id": {"type": "string"},
            "raw_text": {"type": "string"},
        },
    },
)
```
"propose_spec_improvements": quality_tools.propose_spec_improvements_tool, - src/mk_spec_master/server.py:293-293 (registration)Tool object registration with name 'propose_spec_improvements' and description explaining it produces a PM-facing markdown coach plan.
name="propose_spec_improvements", - Reference to `propose_spec_improvements` in the spec knowledge starter content, documenting the tool as one of the coach tools.
"(`analyze_spec_quality`, `propose_spec_improvements`, " "`get_optimization_plan`) lean on indirectly. The AI client should "