Documentation Generator MCP Server

by srwlli
planning-workflow-system-meta-plan.json (68.1 kB)
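The plan below centers on an iterative review loop: the AI validates a draft plan, revises it against the reported issues, and repeats until the validation score reaches 85 or five iterations elapse, after which a mandatory user approval gate decides whether execution proceeds. A minimal sketch of that control flow, assuming hypothetical `validate` and `revise` callables standing in for the `validate_implementation_plan` tool call and the AI's revision step:

```python
# Sketch of the plan's review loop: iterate until the score reaches the
# threshold or the iteration cap is hit, then hand off to the user approval
# gate. `validate` and `revise` are stand-ins for the real tool call and the
# AI revision step described in the plan; they are not part of docs-mcp.
SCORE_THRESHOLD = 85
MAX_ITERATIONS = 5

def review_loop(plan, validate, revise):
    result = {"score": 0, "issues": []}
    for iteration in range(1, MAX_ITERATIONS + 1):
        result = validate(plan)  # -> dict like ValidationResultDict
        if result["score"] >= SCORE_THRESHOLD:
            # Ready for presentation at the user approval gate.
            return plan, result, iteration
        plan = revise(plan, result["issues"])
    # Threshold never reached: escalate to the user instead of executing.
    return plan, result, MAX_ITERATIONS
```

With mock scores of 60, 75, 90 the loop stops at iteration 3, matching the `test_review_loop_until_threshold` scenario in the plan's testing strategy.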
{ "$schema": "./tool-implementation-template-schema.json", "template_info": { "name": "Tool Implementation Plan Template", "version": "1.0.0", "created_date": "2025-10-10", "description": "META PLAN for MCP Planning Workflow System - orchestrates 4 tools for AI-assisted implementation planning", "usage": "This is a meta plan that describes the overall system. Individual tool plans will reference this.", "compliance": "Follows docs-mcp architecture patterns: ARCH-001, QUA-001, QUA-002, REF-002, REF-003, ARCH-003" }, "document_info": { "title": "MCP Planning Workflow System - Meta Implementation Plan", "tool_id": "SYSTEM", "version": "1.0.0", "created_date": "2025-10-10", "status": "completed", "estimated_effort": "17-23 hours (across 4 tools)", "description": "Suite of 4 MCP tools enabling AI-assisted implementation planning with automated preparation, iterative review loops, and user approval gates" }, "executive_summary": { "purpose": "Enable AI assistants to create high-quality implementation plans using the feature-implementation-planning-standard.json template (v1.1.0) with automated project analysis, iterative validation, quality scoring, and mandatory user approval before execution", "value_proposition": "Reduces AI planning time from 6-9 hours to 2-3 hours through automation; ensures 85+ quality scores through iterative review; prevents execution of flawed plans through validation gates; gives users final approval authority", "real_world_analogy": "Like an automated quality control system for architectural blueprints - measures compliance, checks structural integrity, identifies gaps, and requires chief architect approval before construction begins", "use_case": "User requests feature implementation → AI calls analyze_project_for_planning → AI generates plan draft → AI calls validate_implementation_plan → AI self-reviews and refines (loop until 85+) → AI presents plan → USER APPROVES → AI executes implementation", "output": "4 new MCP tools 
(get_planning_template, analyze_project_for_planning, validate_implementation_plan, generate_plan_review_report) + integrated workflow + comprehensive documentation" }, "risk_assessment": { "overall_risk": "Medium", "complexity": "High", "scope": "Large - 15+ files affected across system", "risk_factors": { "file_system": "Medium - Reads project files, foundation docs, standards docs; creates validation reports; no destructive operations", "dependencies": "Low - Uses only existing dependencies (pathlib, json, re, typing); no new external libraries", "performance": "Medium - analyze_project_for_planning may be slow on large codebases (1000+ files); needs optimization for scalability; validation is fast (JSON processing)", "security": "Medium - Must validate project_path inputs to prevent path traversal; must sanitize file paths; must handle permission errors gracefully; read-only operations minimize risk", "breaking_changes": "None - Purely additive; existing tools unaffected; new tools integrate seamlessly" } }, "current_state_analysis": { "affected_files": [ "server.py - Add 4 new tool definitions to list_tools() (lines ~50-150)", "tool_handlers.py - Add 4 new handlers + register in TOOL_HANDLERS dict (lines ~800+)", "constants.py - Add planning-specific paths and enums (PlanningPaths, ValidationSeverity, etc.)", "type_defs.py - Add TypedDicts: PreparationSummaryDict, ValidationResultDict, PlanReviewDict, TemplateInfoDict", "validation.py - Add validation functions: validate_plan_file_path, validate_section_name, validate_plan_json_structure", "generators/ - NEW: planning_analyzer.py (analyze projects), plan_validator.py (validate plans), review_formatter.py (format reviews)", "context/feature-implementation-planning-standard.json - REFERENCE ONLY (v1.1.0 template)", "README.md - Document 4 new tools in Available Tools section", "API.md - Add 4 tool endpoint specifications with examples", "ARCHITECTURE.md - Add Planning Workflow System section to module 
architecture", "CLAUDE.md - Add comprehensive AI usage guidance for planning workflow" ], "dependencies": [ "Existing: feature-implementation-planning-standard.json (v1.1.0) - Template structure defines what to analyze/validate", "Existing: ErrorResponse factory (ARCH-001) - Error handling pattern", "Existing: Structured logging (ARCH-003) - Operation logging", "Existing: BaseGenerator pattern - May be extended for new generators", "New: PlanningAnalyzer class (generators/planning_analyzer.py) - Project analysis logic", "New: PlanValidator class (generators/plan_validator.py) - Validation rules and scoring", "New: ReviewFormatter class (generators/review_formatter.py) - Review report generation" ], "architecture_context": "Operates at MCP tool layer (server.py + tool_handlers.py) with new generator modules. Integrates with existing foundation docs (ARCHITECTURE.md, API.md, COMPONENTS.md) and standards (BEHAVIOR-STANDARDS.md, COMPONENT-PATTERN.md). Creates new workflow pattern: analyze → plan → validate → review → approve → execute. Follows existing patterns: handler registry (QUA-002), TypedDict returns (QUA-001), input validation (REF-003), error factory (ARCH-001), structured logging (ARCH-003)." 
}, "key_features": [ "Automated project preparation - Tool #2 discovers foundation docs, coding standards, reference components in 30-60 seconds", "Template access - Tool #1 provides template sections for AI reference during planning", "Comprehensive validation - Tool #3 validates plans against 25+ quality checklist items with 0-100 scoring", "Iterative review loop - AI refines plans until validation score ≥ 85 (max 5 iterations)", "Structured review reports - Tool #4 formats validation results into actionable markdown reports", "User approval gate - Mandatory user approval required before execution (cannot be bypassed)", "Pattern discovery - Analyzes existing code to identify reusable patterns (error handling, component structure, naming conventions)", "Gap identification - Flags missing documentation, standards, or components as risks", "Dependency detection - Identifies circular dependencies, missing imports, version conflicts", "Edge case coverage - Validates that plans include 5-10 edge case scenarios" ], "tool_specification": { "system_architecture": "4 interconnected MCP tools forming a planning workflow pipeline", "tools": [ { "name": "get_planning_template", "description": "Returns feature-implementation-planning-standard.json template content or specific sections for AI reference", "input_schema": { "type": "object", "properties": { "section": { "type": "string", "enum": ["all", "0_preparation", "1_executive_summary", "2_risk_assessment", "3_current_state_analysis", "4_key_features", "5_task_id_system", "6_implementation_phases", "7_testing_strategy", "8_success_criteria", "9_implementation_checklist"], "description": "Which section to return (default: 'all')", "required": false, "default": "all" } }, "required": [] }, "output": "Template JSON content for specified section(s)" }, { "name": "analyze_project_for_planning", "description": "Analyzes project to discover foundation docs, coding standards, reference components, and patterns - automates section 0 
(Preparation) of planning template", "input_schema": { "type": "object", "properties": { "project_path": { "type": "string", "description": "Absolute path to project directory to analyze", "required": true } }, "required": ["project_path"] }, "output": "PreparationSummaryDict with foundation_docs, coding_standards, reference_components, key_patterns_identified, technology_stack, gaps_and_risks" }, { "name": "validate_implementation_plan", "description": "Validates implementation plan JSON against feature-implementation-planning-standard.json quality checklist; scores 0-100 and identifies issues by severity", "input_schema": { "type": "object", "properties": { "project_path": { "type": "string", "description": "Absolute path to project directory", "required": true }, "plan_file_path": { "type": "string", "description": "Relative path to plan JSON file (e.g., 'feature-auth-plan.json')", "required": true } }, "required": ["project_path", "plan_file_path"] }, "output": "ValidationResultDict with score, validation_result (PASS/NEEDS_REVISION/FAIL), issues array, checklist_results breakdown" }, { "name": "generate_plan_review_report", "description": "Formats validation results into structured markdown review report with issues, recommendations, and approval status", "input_schema": { "type": "object", "properties": { "project_path": { "type": "string", "description": "Absolute path to project directory", "required": true }, "plan_file_path": { "type": "string", "description": "Relative path to plan JSON file", "required": true }, "validation_result": { "type": "object", "description": "Result from validate_implementation_plan tool", "required": true } }, "required": ["project_path", "plan_file_path", "validation_result"] }, "output": "Markdown-formatted review report with sections: Critical Issues, Major Issues, Minor Issues, Recommendations, Approval Status" } ] }, "architecture_design": { "data_flow_diagram": [ 
"┌─────────────────────────────────────────────────────────────────┐", "│ PLANNING WORKFLOW SYSTEM │", "└─────────────────────────────────────────────────────────────────┘", "", "User: \"Create implementation plan for feature X\"", " │", " ├─► TOOL #1: get_planning_template(section='all')", " │ └─► Returns: Template JSON for AI reference", " │", " ├─► TOOL #2: analyze_project_for_planning(project_path)", " │ ├─► Scans: docs/, coderef/, README.md, ARCHITECTURE.md", " │ ├─► Discovers: foundation_docs, coding_standards, patterns", " │ └─► Returns: PreparationSummaryDict (section 0 complete!)", " │", " ├─► AI: Generates plan draft using template + preparation summary", " │ └─► Saves: feature-X-plan-DRAFT.json", " │", " ├─► REVIEW LOOP (max 5 iterations):", " │ │", " │ ├─► TOOL #3: validate_implementation_plan(project_path, plan)", " │ │ ├─► Checks: 25+ quality checklist items", " │ │ ├─► Scores: 0-100 (critical: -10, major: -5, minor: -1)", " │ │ └─► Returns: ValidationResultDict with issues", " │ │", " │ ├─► TOOL #4: generate_plan_review_report(validation_result)", " │ │ └─► Returns: Markdown review report", " │ │", " │ ├─► AI: Self-reviews using validation results", " │ │ ├─► Identifies: Critical/major/minor issues", " │ │ ├─► Revises: Plan to fix issues", " │ │ └─► Saves: feature-X-plan-DRAFT-v2.json", " │ │", " │ └─► Loop until: score ≥ 85 OR iterations = 5", " │ ├─► score ≥ 85: Continue to approval", " │ └─► iterations = 5: Escalate to user", " │", " ├─► AI: Presents final plan to user", " │ ├─► Shows: Validation score, iterations, refinements made", " │ └─► Asks: \"Ready to execute?\"", " │", " ├─► USER APPROVAL GATE ◄── REQUIRED", " │ ├─► User approves: Continue to execution", " │ ├─► User requests changes: AI revises + re-validates", " │ └─► User rejects: Abort execution", " │", " └─► AI: Executes implementation following approved plan", " └─► Updates: Checklist items as tasks complete", "", "┌─────────────────────────────────────────────────────────────────┐", 
"│ KEY PRINCIPLES: │", "│ • Automation reduces planning time 60-70% (6-9h → 2-3h) │", "│ • Validation ensures quality ≥ 85 before user sees plan │", "│ • Review loop prevents flawed plans from reaching execution │", "│ • User approval gate gives final authority │", "└─────────────────────────────────────────────────────────────────┘" ], "module_interactions": [ "server.py (MCP layer)", " ├─► tool_handlers.py", " │ ├─► handle_get_planning_template()", " │ │ └─► Reads: context/feature-implementation-planning-standard.json", " │ │", " │ ├─► handle_analyze_project_for_planning()", " │ │ ├─► PlanningAnalyzer class (new)", " │ │ │ ├─► scan_foundation_docs()", " │ │ │ ├─► scan_coding_standards()", " │ │ │ ├─► find_reference_components()", " │ │ │ └─► identify_patterns()", " │ │ └─► Returns: PreparationSummaryDict", " │ │", " │ ├─► handle_validate_implementation_plan()", " │ │ ├─► PlanValidator class (new)", " │ │ │ ├─► load_plan_json()", " │ │ │ ├─► validate_structure()", " │ │ │ ├─► validate_completeness()", " │ │ │ ├─► validate_quality()", " │ │ │ ├─► validate_autonomy()", " │ │ │ └─► calculate_score()", " │ │ └─► Returns: ValidationResultDict", " │ │", " │ └─► handle_generate_plan_review_report()", " │ ├─► ReviewFormatter class (new)", " │ │ ├─► format_critical_issues()", " │ │ ├─► format_major_issues()", " │ │ ├─► format_minor_issues()", " │ │ ├─► format_recommendations()", " │ │ └─► format_approval_status()", " │ └─► Returns: Markdown report", " │", " ├─► validation.py", " │ ├─► validate_project_path_input()", " │ ├─► validate_plan_file_path() (new)", " │ ├─► validate_section_name() (new)", " │ └─► validate_plan_json_structure() (new)", " │", " ├─► type_defs.py", " │ ├─► PreparationSummaryDict (new)", " │ ├─► ValidationResultDict (new)", " │ ├─► PlanReviewDict (new)", " │ └─► TemplateInfoDict (new)", " │", " ├─► constants.py", " │ ├─► PlanningPaths (new)", " │ ├─► ValidationSeverity enum (new)", " │ └─► PlanStatus enum (new)", " │", " └─► error_responses.py 
(ARCH-001)", " └─► ErrorResponse factory methods" ], "file_structure_changes": [ "New files:", " - generators/planning_analyzer.py (~400-500 lines)", " - generators/plan_validator.py (~300-400 lines)", " - generators/review_formatter.py (~150-200 lines)", "", "Modified files:", " - server.py (add 4 tool definitions)", " - tool_handlers.py (add 4 handlers + registrations)", " - constants.py (add planning constants)", " - type_defs.py (add 4 TypedDicts)", " - validation.py (add 3 validation functions)", " - README.md (document 4 tools)", " - API.md (4 tool specifications)", " - ARCHITECTURE.md (planning workflow section)", " - CLAUDE.md (AI usage guidance)" ] }, "implementation_phases": { "phase_1_foundation_quick_win": { "title": "Phase 1: Foundation & Quick Win (Tool #1)", "duration": "2-3 hours", "description": "Implement get_planning_template tool - simplest tool that establishes infrastructure pattern for subsequent tools", "tasks": [ { "id": "META-001", "task": "Create planning-specific constants", "location": "constants.py", "details": "Add PlanningPaths class with TEMPLATE_PATH, PLANS_DIR, etc.; add ValidationSeverity enum; add PlanStatus enum", "effort": "15 minutes" }, { "id": "META-002", "task": "Create planning-specific TypedDicts", "location": "type_defs.py", "details": "Add TemplateInfoDict, PreparationSummaryDict, ValidationResultDict, PlanReviewDict", "effort": "20 minutes" }, { "id": "META-003", "task": "Create planning-specific validation functions", "location": "validation.py", "details": "Add validate_section_name(), validate_plan_file_path(), validate_plan_json_structure()", "effort": "30 minutes" }, { "id": "TOOL1-001", "task": "Implement get_planning_template tool", "location": "server.py + tool_handlers.py", "details": "Tool #1: Returns template content; simplest implementation to establish pattern", "effort": "45 minutes", "note": "See individual tool plan: tool-1-get-planning-template-plan.json" }, { "id": "META-004", "task": "Test Tool #1 
in isolation", "location": "test_get_planning_template.py", "details": "Verify tool returns correct template sections", "effort": "30 minutes" } ] }, "phase_2_core_automation": { "title": "Phase 2: Core Automation (Tool #2)", "duration": "6-8 hours", "description": "Implement analyze_project_for_planning tool - most complex, highest value; automates section 0 preparation", "tasks": [ { "id": "TOOL2-001", "task": "Implement analyze_project_for_planning tool", "location": "server.py + tool_handlers.py + generators/planning_analyzer.py", "details": "Tool #2: Analyzes projects to discover docs, standards, patterns; most complex logic; ~400-500 lines", "effort": "6-8 hours", "note": "See individual tool plan: tool-2-analyze-project-for-planning-plan.json" }, { "id": "META-005", "task": "Test Tool #2 on sample projects", "location": "test_analyze_project.py", "details": "Test on: Python project, TypeScript project, project with no docs, project with full docs", "effort": "1 hour" } ] }, "phase_3_quality_system": { "title": "Phase 3: Quality & Validation (Tool #3)", "duration": "4-5 hours", "description": "Implement validate_implementation_plan tool - validates plans against quality checklist; enables review loop", "tasks": [ { "id": "TOOL3-001", "task": "Implement validate_implementation_plan tool", "location": "server.py + tool_handlers.py + generators/plan_validator.py", "details": "Tool #3: Validates plans; checks 25+ quality items; scores 0-100; identifies issues by severity; ~300-400 lines", "effort": "4-5 hours", "note": "See individual tool plan: tool-3-validate-implementation-plan-plan.json" }, { "id": "META-006", "task": "Test Tool #3 with various plan qualities", "location": "test_validate_plan.py", "details": "Test with: perfect plan (score 100), flawed plan (score 60), minimal plan (score 40)", "effort": "45 minutes" } ] }, "phase_4_polish": { "title": "Phase 4: Polish & Reporting (Tool #4)", "duration": "2-3 hours", "description": "Implement 
generate_plan_review_report tool - formats validation results into readable markdown reports", "tasks": [ { "id": "TOOL4-001", "task": "Implement generate_plan_review_report tool", "location": "server.py + tool_handlers.py + generators/review_formatter.py", "details": "Tool #4: Formats validation results; creates structured markdown; ~150-200 lines", "effort": "2-3 hours", "note": "See individual tool plan: tool-4-generate-plan-review-report-plan.json" }, { "id": "META-007", "task": "Test Tool #4 report formatting", "location": "test_review_formatter.py", "details": "Verify markdown formatting, issue grouping, recommendations clarity", "effort": "30 minutes" } ] }, "phase_5_integration": { "title": "Phase 5: Integration & End-to-End Testing", "duration": "3-4 hours", "description": "Test complete workflow: analyze → plan → validate → review → approve → execute", "tasks": [ { "id": "META-008", "task": "Create end-to-end workflow test", "location": "test_planning_workflow_e2e.py", "details": "Simulate full planning workflow: analyze sample project → generate mock plan → validate → review → verify user approval gate", "effort": "1.5 hours" }, { "id": "META-009", "task": "Test review loop iterations", "location": "test_review_loop.py", "details": "Verify loop continues until score ≥ 85; verify max 5 iterations enforced; verify escalation on failure", "effort": "1 hour" }, { "id": "META-010", "task": "Test user approval gate", "location": "test_user_approval_gate.py", "details": "Verify execution cannot proceed without user approval; verify approval flow is clear to AI", "effort": "30 minutes" }, { "id": "META-011", "task": "Performance testing", "location": "test_performance.py", "details": "Test analyze_project_for_planning on large codebase (1000+ files); ensure < 5 min runtime; identify optimization opportunities", "effort": "45 minutes" } ] }, "phase_6_documentation": { "title": "Phase 6: Documentation & Finalization", "duration": "2-3 hours", "description": 
"Update all documentation with planning workflow system guidance", "tasks": [ { "id": "DOC-001", "task": "Update README.md", "location": "README.md - Available Tools section", "details": "Add 4 new tools with brief descriptions", "effort": "15 minutes" }, { "id": "DOC-002", "task": "Update API.md", "location": "API.md", "details": "Document all 4 tool endpoints: parameters, returns, examples, error handling", "effort": "1 hour" }, { "id": "DOC-003", "task": "Update ARCHITECTURE.md", "location": "ARCHITECTURE.md - Module Architecture section", "details": "Add Planning Workflow System section with data flow diagram, module interactions", "effort": "45 minutes" }, { "id": "DOC-004", "task": "Update CLAUDE.md - Tool Catalog", "location": "CLAUDE.md", "details": "Add comprehensive AI usage guidance: when to use each tool, workflow patterns, review loop examples, user approval gate requirements", "effort": "1 hour" }, { "id": "DOC-005", "task": "Create planning workflow guide", "location": "docs/planning-workflow-guide.md (new)", "details": "User-facing guide explaining the planning workflow: how it works, benefits, examples", "effort": "30 minutes" } ] } }, "code_structure": { "handler_implementation": { "file": "tool_handlers.py", "functions": [ "handle_get_planning_template(arguments) -> list[TextContent]", "handle_analyze_project_for_planning(arguments) -> list[TextContent]", "handle_validate_implementation_plan(arguments) -> list[TextContent]", "handle_generate_plan_review_report(arguments) -> list[TextContent]" ], "pattern": "Standard handler pattern with validation, logging, error handling", "imports_required": [ "from mcp.types import TextContent", "from pathlib import Path", "import json", "from constants import PlanningPaths, ValidationSeverity, PlanStatus", "from validation import validate_project_path_input, validate_plan_file_path, validate_section_name", "from error_responses import ErrorResponse", "from logger_config import logger, log_tool_call, 
log_error", "from type_defs import PreparationSummaryDict, ValidationResultDict, PlanReviewDict, TemplateInfoDict", "from generators.planning_analyzer import PlanningAnalyzer", "from generators.plan_validator import PlanValidator", "from generators.review_formatter import ReviewFormatter" ], "error_handling": { "ValueError": "ErrorResponse.invalid_input() - Invalid section names, malformed paths", "FileNotFoundError": "ErrorResponse.not_found() - Template file missing, plan file missing", "PermissionError": "ErrorResponse.permission_denied() - Cannot read project files", "json.JSONDecodeError": "ErrorResponse.malformed_json() - Invalid plan JSON structure", "Exception": "ErrorResponse.generic_error() - Unexpected errors" } }, "generator_classes": [ { "file": "generators/planning_analyzer.py", "class": "PlanningAnalyzer", "inherits_from": "BaseGenerator", "methods": [ { "name": "__init__", "signature": "def __init__(self, project_path: Path)", "description": "Initialize with project path to analyze" }, { "name": "analyze", "signature": "def analyze(self) -> PreparationSummaryDict", "description": "Main analysis method - orchestrates all scans", "returns": "Complete preparation summary with docs, standards, patterns, gaps" }, { "name": "scan_foundation_docs", "signature": "def scan_foundation_docs(self) -> dict", "description": "Scans for API.md, ARCHITECTURE.md, COMPONENTS.md, SCHEMA.md", "returns": "Dict with available/missing foundation docs" }, { "name": "scan_coding_standards", "signature": "def scan_coding_standards(self) -> dict", "description": "Scans for BEHAVIOR-STANDARDS.md, COMPONENT-PATTERN.md, etc.", "returns": "Dict with available/missing standards docs" }, { "name": "find_reference_components", "signature": "def find_reference_components(self) -> dict", "description": "Searches for similar components based on file names and patterns", "returns": "Dict with primary and secondary reference components" }, { "name": "identify_patterns", "signature": "def 
identify_patterns(self) -> list[str]", "description": "Analyzes code to identify reusable patterns (error handling, naming, structure)", "returns": "List of pattern descriptions" }, { "name": "detect_technology_stack", "signature": "def detect_technology_stack(self) -> dict", "description": "Identifies language, framework, database, testing tools", "returns": "Dict with technology stack details" }, { "name": "identify_gaps_and_risks", "signature": "def identify_gaps_and_risks(self) -> list[str]", "description": "Identifies missing docs, standards, or potential risks", "returns": "List of gap/risk descriptions" } ] }, { "file": "generators/plan_validator.py", "class": "PlanValidator", "inherits_from": "None", "methods": [ { "name": "__init__", "signature": "def __init__(self, plan_path: Path, template_path: Path)", "description": "Initialize with paths to plan and template" }, { "name": "validate", "signature": "def validate(self) -> ValidationResultDict", "description": "Main validation method - runs all checks", "returns": "Complete validation result with score, issues, checklist results" }, { "name": "validate_structure", "signature": "def validate_structure(self) -> list[dict]", "description": "Validates plan has all required sections (0-9)", "returns": "List of structural issues" }, { "name": "validate_completeness", "signature": "def validate_completeness(self) -> list[dict]", "description": "Validates all fields filled, no placeholders, task IDs present", "returns": "List of completeness issues" }, { "name": "validate_quality", "signature": "def validate_quality(self) -> list[dict]", "description": "Validates task descriptions, success criteria measurability, edge cases", "returns": "List of quality issues" }, { "name": "validate_autonomy", "signature": "def validate_autonomy(self) -> list[dict]", "description": "Validates plan is implementable without clarification, zero ambiguity", "returns": "List of autonomy issues" }, { "name": "calculate_score", 
"signature": "def calculate_score(self, issues: list[dict]) -> int", "description": "Calculates 0-100 score based on issue severity (critical: -10, major: -5, minor: -1)", "returns": "Score from 0-100" } ] }, { "file": "generators/review_formatter.py", "class": "ReviewFormatter", "inherits_from": "None", "methods": [ { "name": "__init__", "signature": "def __init__(self, validation_result: ValidationResultDict)", "description": "Initialize with validation result to format" }, { "name": "format_report", "signature": "def format_report(self) -> str", "description": "Main formatting method - creates full markdown report", "returns": "Markdown-formatted review report" }, { "name": "format_issues_section", "signature": "def format_issues_section(self, severity: str, issues: list[dict]) -> str", "description": "Formats issues of a specific severity into markdown section", "returns": "Markdown section for issues" }, { "name": "format_recommendations", "signature": "def format_recommendations(self, issues: list[dict]) -> str", "description": "Generates actionable recommendations based on issues", "returns": "Markdown recommendations section" }, { "name": "format_approval_status", "signature": "def format_approval_status(self, score: int, result: str) -> str", "description": "Formats approval status with emoji indicator", "returns": "Approval status string" } ] } ] }, "integration_with_existing_system": { "follows_patterns": [ "QUA-002: Handler registry pattern - all 4 handlers registered in TOOL_HANDLERS dict", "ARCH-001: ErrorResponse factory for all error types", "REF-003: Input validation at MCP boundaries using validation.py functions", "ARCH-003: Structured logging for all operations (tool calls, errors, analysis progress)", "QUA-001: TypedDict for all complex return types (4 new TypedDicts)", "REF-002: Constants/enums instead of magic strings (PlanningPaths, ValidationSeverity, PlanStatus)" ], "constants_additions": { "file": "constants.py", "code": [ "class 
PlanningPaths:", " TEMPLATE_PATH = Path('context') / 'feature-implementation-planning-standard.json'", " PLANS_DIR = Path('plans') # Where plans are saved", " REVIEW_REPORTS_DIR = Path('coderef') / 'planning-reviews'", "", "class ValidationSeverity(Enum):", " CRITICAL = 'critical' # -10 points", " MAJOR = 'major' # -5 points", " MINOR = 'minor' # -1 point", "", "class PlanStatus(Enum):", " DRAFT = 'draft'", " REVIEWING = 'reviewing'", " APPROVED = 'approved'", " REJECTED = 'rejected'", " IMPLEMENTED = 'implemented'" ] }, "type_defs_additions": { "file": "type_defs.py", "code": [ "class TemplateInfoDict(TypedDict):", " section: str", " content: dict | str", "", "class PreparationSummaryDict(TypedDict):", " foundation_docs: dict # {available: [...], missing: [...]}", " coding_standards: dict # {available: [...], missing: [...]}", " reference_components: dict # {primary: str, secondary: [...]}", " key_patterns_identified: list[str]", " technology_stack: dict", " project_structure: dict", " gaps_and_risks: list[str]", "", "class ValidationIssueDict(TypedDict):", " severity: str # 'critical' | 'major' | 'minor'", " section: str", " issue: str", " suggestion: str", "", "class ValidationResultDict(TypedDict):", " validation_result: str # 'PASS' | 'PASS_WITH_WARNINGS' | 'NEEDS_REVISION' | 'FAIL'", " score: int # 0-100", " issues: list[ValidationIssueDict]", " checklist_results: dict", " approved: bool", "", "class PlanReviewDict(TypedDict):", " report_markdown: str", " summary: str", " approval_status: str" ] }, "validation_additions": { "file": "validation.py", "code": [ "def validate_section_name(section: str) -> str:", " valid_sections = ['all', '0_preparation', '1_executive_summary', ...]", " if section not in valid_sections:", " raise ValueError(f'Invalid section: {section}')", " return section", "", "def validate_plan_file_path(project_path: Path, plan_file: str) -> Path:", " # Prevent path traversal", " plan_path = (project_path / plan_file).resolve()", " if not 
plan_path.is_relative_to(project_path):", " raise ValueError('Plan file must be within project directory')", " return plan_path", "", "def validate_plan_json_structure(plan_data: dict) -> dict:", " # Validate plan has required top-level keys", " required_keys = ['META_DOCUMENTATION', 'UNIVERSAL_PLANNING_STRUCTURE']", " for key in required_keys:", " if key not in plan_data:", " raise ValueError(f'Plan missing required key: {key}')", " return plan_data" ] } }, "testing_strategy": { "unit_tests": [ { "test": "test_get_planning_template_all_sections", "verifies": "Returns complete template when section='all'", "task_id": "META-004" }, { "test": "test_get_planning_template_specific_section", "verifies": "Returns only requested section", "task_id": "META-004" }, { "test": "test_analyze_project_discovers_foundation_docs", "verifies": "Correctly identifies API.md, ARCHITECTURE.md in sample project", "task_id": "META-005" }, { "test": "test_analyze_project_discovers_standards", "verifies": "Correctly identifies BEHAVIOR-STANDARDS.md, COMPONENT-PATTERN.md", "task_id": "META-005" }, { "test": "test_validate_plan_perfect_score", "verifies": "Perfect plan scores 100", "task_id": "META-006" }, { "test": "test_validate_plan_critical_issues", "verifies": "Critical issues reduce score by 10 points each", "task_id": "META-006" }, { "test": "test_review_formatter_markdown_structure", "verifies": "Generated markdown has correct sections and formatting", "task_id": "META-007" } ], "integration_tests": [ { "test": "test_full_planning_workflow_e2e", "project": "Sample Python project with partial docs", "expected": "analyze → returns preparation summary → validate mock plan → score 75 → review report generated → loop iteration suggested", "task_id": "META-008" }, { "test": "test_review_loop_until_threshold", "project": "Mock plans with scores: 60, 75, 85", "expected": "Loop continues until score ≥ 85; stops at iteration 3", "task_id": "META-009" }, { "test": 
"test_max_iterations_enforcement", "project": "Mock plans that never reach 85", "expected": "Loop stops at iteration 5; escalates to user", "task_id": "META-009" }, { "test": "test_user_approval_gate_required", "project": "Approved plan (score 90)", "expected": "AI presents plan to user; execution blocked until user approves", "task_id": "META-010" } ], "manual_validation": [ { "step": "Run analyze_project_for_planning on docs-mcp project itself", "verify": "Discovers: API.md, ARCHITECTURE.md, COMPONENTS.md, SCHEMA.md, BEHAVIOR-STANDARDS.md; identifies Python patterns", "task_id": "META-005" }, { "step": "Create intentionally flawed plan and validate it", "verify": "Validator catches: missing sections, placeholder text, circular dependencies, insufficient edge cases", "task_id": "META-006" }, { "step": "Simulate full planning workflow with real feature request", "verify": "Workflow completes successfully; user approval gate is clear; plan quality improves through iterations", "task_id": "META-008" } ], "edge_cases": { "description": "Comprehensive edge case testing for robustness", "test_scenarios": [ { "scenario": "Project with no documentation", "setup": "Empty project directory with only code files", "expected_behavior": "analyze_project_for_planning returns gaps_and_risks: ['No foundation docs found', 'No coding standards found']; suggests creating docs as first phase", "verify": [ "foundation_docs.available = []", "foundation_docs.missing = ['API.md', 'ARCHITECTURE.md', ...]", "gaps_and_risks contains missing docs warning" ], "error_handling": "No errors - gracefully handles missing docs" }, { "scenario": "Plan with circular task dependencies", "setup": "Plan JSON where SETUP-001 depends on API-002 which depends on SETUP-001", "expected_behavior": "validate_implementation_plan detects cycle; returns critical issue: 'Circular dependency detected between SETUP-001 and API-002'", "verify": [ "validation_result = 'FAIL'", "score ≤ 50 (critical issue: -10 points)", 
"issues contains circular dependency error" ], "error_handling": "ValidationIssueDict with severity='critical'" }, { "scenario": "Invalid section name in get_planning_template", "setup": "Call get_planning_template(section='invalid_section')", "expected_behavior": "Returns ErrorResponse.invalid_input with suggestion of valid sections", "verify": [ "Error message contains 'Invalid section: invalid_section'", "Suggestion lists all valid sections" ], "error_handling": "ValueError caught → ErrorResponse.invalid_input()" }, { "scenario": "Plan file outside project directory (path traversal attempt)", "setup": "Call validate_implementation_plan with plan_file_path='../../../etc/passwd'", "expected_behavior": "validate_plan_file_path() raises ValueError; ErrorResponse.invalid_input() returned", "verify": [ "Path traversal detected", "Error message contains 'must be within project directory'" ], "error_handling": "ValueError → ErrorResponse.invalid_input()" }, { "scenario": "Malformed plan JSON", "setup": "Plan file with invalid JSON syntax", "expected_behavior": "Returns ErrorResponse.malformed_json() with helpful message", "verify": [ "json.JSONDecodeError caught", "Error indicates line/column of syntax error" ], "error_handling": "json.JSONDecodeError → ErrorResponse.malformed_json()" }, { "scenario": "Very large project (5000+ files)", "setup": "Run analyze_project_for_planning on large monorepo", "expected_behavior": "Completes within 5 minutes; logs progress; uses sampling for pattern detection if needed", "verify": [ "Duration < 300 seconds", "Progress logged every 1000 files", "Results still accurate despite size" ], "error_handling": "No errors - handles large projects gracefully" }, { "scenario": "Plan at max iterations without reaching threshold", "setup": "Mock 5 plan revisions that score: 60, 65, 70, 72, 74 (never reach 85)", "expected_behavior": "Loop stops at iteration 5; validation_result = 'FAIL'; suggests escalation to user", "verify": [ "iterations = 5", 
"score = 74 (best attempt)", "issues still present", "recommendation: 'Max iterations reached - escalate to user for guidance'" ], "error_handling": "Not an error - graceful fallback to user escalation" } ] } }, "performance_monitoring": { "description": "Performance targets and optimization strategies", "metrics_to_track": [ { "metric": "analyze_project_for_planning duration", "how_to_measure": "Log timestamp at start and end; calculate duration", "target": "< 60 seconds for projects with < 500 files; < 300 seconds for projects with < 5000 files", "logging": "logger.info(f'Analysis completed in {duration:.2f}s', extra={'files_scanned': file_count, 'duration': duration})" }, { "metric": "validate_implementation_plan duration", "how_to_measure": "Log timestamp at start and end", "target": "< 2 seconds (JSON processing should be fast)", "logging": "logger.info(f'Validation completed in {duration:.2f}s', extra={'score': score, 'issues_count': len(issues)})" }, { "metric": "Pattern discovery accuracy", "how_to_measure": "Manual review of discovered patterns", "target": "> 80% of discovered patterns are actually reusable", "logging": "logger.debug(f'Discovered {len(patterns)} patterns', extra={'patterns': patterns})" } ], "optimization_opportunities": [ { "optimization": "Parallel file scanning in analyze_project_for_planning", "rationale": "Scanning 5000+ files sequentially is slow; can parallelize file reads", "implementation": "Use concurrent.futures.ThreadPoolExecutor for file I/O operations", "expected_improvement": "50-70% faster on large projects" }, { "optimization": "Caching of analysis results", "rationale": "Re-analyzing same project multiple times wastes time", "implementation": "Cache PreparationSummaryDict in coderef/planning-cache/ with project hash; invalidate on file changes", "expected_improvement": "Near-instant results for cached projects" }, { "optimization": "Early exit from validation if critical issues found", "rationale": "No point checking all 
25 items if plan already has 3 critical issues (score ≤ 70)", "implementation": "Check critical items first; exit if score drops below 70", "expected_improvement": "30-40% faster validation for obviously flawed plans" } ], "performance_targets": { "small_project": "< 100 files: analyze in < 10 seconds", "medium_project": "100-500 files: analyze in < 60 seconds", "large_project": "500-5000 files: analyze in < 300 seconds (5 minutes)" } }, "documentation_updates": { "files_to_update": [ { "file": "README.md", "section": "Available Tools", "addition": "## Planning Workflow Tools\n- `get_planning_template` - Get template sections for planning\n- `analyze_project_for_planning` - Automated project analysis for section 0\n- `validate_implementation_plan` - Validate plans with quality scoring\n- `generate_plan_review_report` - Format validation results into markdown" }, { "file": "API.md", "section": "Tool Endpoints", "addition": "Complete specifications for all 4 tools: parameters, return types, examples, error codes" }, { "file": "ARCHITECTURE.md", "section": "Module Architecture", "addition": "## Planning Workflow System\n\nNew module group for AI-assisted planning:\n- PlanningAnalyzer: Project analysis and pattern discovery\n- PlanValidator: Plan quality validation and scoring\n- ReviewFormatter: Review report generation\n\nData flow: [ASCII diagram from this meta plan]" }, { "file": "CLAUDE.md", "section": "Tool Catalog", "addition": "Complete AI usage guidance for planning workflow", "detailed_content": { "purpose": "Enable AI to create high-quality implementation plans with automated preparation and validation", "when_to_use": [ "User requests feature implementation or refactoring", "Before starting any non-trivial implementation work", "When creating plans for tools, features, or architecture changes" ], "workflow_pattern": [ "Step 1: Call analyze_project_for_planning(project_path)", "Step 2: Use analysis results to fill section 0 (Preparation)", "Step 3: Generate 
plan draft using template + analysis", "Step 4: Call validate_implementation_plan(plan_file)", "Step 5: Review validation results; refine plan if score < 85", "Step 6: Repeat steps 4-5 until score ≥ 85 (max 5 iterations)", "Step 7: Present plan to user with validation score", "Step 8: WAIT FOR USER APPROVAL before execution", "Step 9: Execute approved plan" ], "critical_notes": [ "User approval is MANDATORY before execution - cannot be bypassed", "Review loop should iterate until score ≥ 85 or max 5 iterations", "If max iterations reached without 85+ score, escalate to user", "Always show validation score and iteration count to user", "analyze_project_for_planning may take 1-5 minutes on large projects" ] } } ] }, "success_criteria": { "description": "Quantifiable success metrics for the planning workflow system", "functional_requirements": [ { "requirement": "All 4 tools work independently", "metric": "Tool invocation success rate", "target": "100% of valid inputs return correct results", "validation": "Run unit tests for each tool; verify all pass" }, { "requirement": "Tools integrate in workflow sequence", "metric": "End-to-end workflow completion", "target": "analyze → plan → validate → review → approve sequence completes successfully", "validation": "Run test_planning_workflow_e2e.py; verify workflow completes" }, { "requirement": "Planning time reduction", "metric": "Time to create 85+ score plan", "target": "< 3 hours (down from 6-9 hours manual)", "validation": "Measure AI time from analyze_project_for_planning to final plan approval" }, { "requirement": "Validation catches issues", "metric": "Issue detection accuracy", "target": "> 95% of known plan flaws detected", "validation": "Test with 20 intentionally flawed plans; verify 19+ have issues detected" }, { "requirement": "User approval gate cannot be bypassed", "metric": "Execution prevention before approval", "target": "100% of attempts to execute without approval are blocked", "validation": "Attempt 
execution without approval; verify it fails" } ], "quality_requirements": [ { "requirement": "Architecture compliance", "metric": "Pattern adherence", "target": "100% of code follows existing patterns (ARCH-001, QUA-001, QUA-002, REF-002, REF-003, ARCH-003)", "validation": [ "ARCH-001: All errors use ErrorResponse factory", "QUA-001: All complex returns use TypedDict (4 new TypedDicts)", "QUA-002: All 4 handlers registered in TOOL_HANDLERS dict", "REF-002: No magic strings (use PlanningPaths, ValidationSeverity, PlanStatus enums)", "REF-003: All inputs validated (validate_project_path_input, validate_plan_file_path, validate_section_name)", "ARCH-003: All operations logged (log_tool_call, log_error, logger.info)" ] }, { "requirement": "Plans score 85+ before user approval", "metric": "Validation score of plans reaching user", "target": "100% of plans presented to user have score ≥ 85", "validation": "Review loop continues until score ≥ 85 or max iterations; no plan < 85 reaches user except on iteration limit" }, { "requirement": "Zero critical issues reach execution", "metric": "Critical issues in executed plans", "target": "0 critical issues in any plan that reaches execution phase", "validation": "Review loop must fix all critical issues; critical issues reduce score by 10 points each (max 10 critical = score 0)" } ], "performance_requirements": [ { "requirement": "analyze_project_for_planning performance", "metric": "Analysis duration by project size", "target": "< 60s for < 500 files; < 300s for < 5000 files", "validation": "Run performance tests on sample projects of varying sizes" }, { "requirement": "validate_implementation_plan performance", "metric": "Validation duration", "target": "< 2 seconds per validation", "validation": "Time validation on sample plans; verify < 2s" } ], "security_requirements": [ { "requirement": "Path traversal prevention", "metric": "Path traversal attempts blocked", "target": "100% of path traversal attempts rejected", 
"validation": "Test with: '../../../etc/passwd', 'C:/Windows/System32/config/sam'; verify all rejected" }, { "requirement": "Project path validation", "metric": "Invalid paths rejected", "target": "100% of invalid paths return ErrorResponse.invalid_input()", "validation": "Test with: relative paths, non-existent paths, non-directory paths" } ] }, "changelog_entry": { "tool": "add_changelog_entry", "parameters": { "project_path": "C:/Users/willh/.mcp-servers/docs-mcp", "version": "1.4.0", "change_type": "feature", "severity": "major", "title": "Add MCP Planning Workflow System - 4 tools for AI-assisted planning", "description": "Implemented comprehensive planning workflow system with 4 new MCP tools: get_planning_template (template access), analyze_project_for_planning (automates section 0 preparation), validate_implementation_plan (quality validation and scoring), generate_plan_review_report (review formatting). System enables AI to create high-quality plans with automated analysis, iterative review loops, and mandatory user approval gates. 
Reduces planning time from 6-9 hours to 2-3 hours through automation.", "files": [ "server.py", "tool_handlers.py", "constants.py", "type_defs.py", "validation.py", "generators/planning_analyzer.py", "generators/plan_validator.py", "generators/review_formatter.py", "README.md", "API.md", "ARCHITECTURE.md", "CLAUDE.md", "coderef/planning-workflow-system-meta-plan.json" ], "reason": "Users needed AI-assisted implementation planning with automated preparation, quality validation, and review loops to create better plans faster while maintaining quality control", "impact": "AI can now create 85+ quality score plans in 2-3 hours (down from 6-9 hours manual); mandatory user approval gate ensures users maintain control; review loops prevent flawed plans from reaching execution" } }, "troubleshooting_guide": { "common_issues": [ { "issue": "analyze_project_for_planning takes too long", "symptom": "Tool runs for > 5 minutes on medium-sized project", "causes": [ "Project has > 5000 files", "Network-mounted filesystem (slow I/O)", "Pattern analysis scanning too many files" ], "resolution": "Implement parallel file scanning (optimization); add progress logging; consider file count limit for pattern analysis (sample 1000 files instead of all files)" }, { "issue": "Validation score stuck below 85", "symptom": "AI iterates 5 times but never reaches 85+ score", "causes": [ "Plan has fundamental structural issues", "AI not understanding validation feedback", "Validation rules too strict" ], "resolution": "Review validation issues; escalate to user for guidance; user may need to adjust requirements or provide more context" }, { "issue": "Review loop doesn't iterate", "symptom": "AI presents plan with score 70 to user without refinement", "causes": [ "AI workflow logic error", "Missing review loop implementation", "AI not checking score threshold" ], "resolution": "Verify workflow documentation in CLAUDE.md clearly states review loop requirements; add examples of review loop patterns" 
}, { "issue": "User approval gate bypassed", "symptom": "Execution starts without user approval", "causes": [ "AI misunderstands approval requirement", "Workflow documentation unclear", "No technical enforcement of approval gate" ], "resolution": "Clarify in CLAUDE.md that user approval is MANDATORY; add examples showing 'User: yes please proceed' pattern; consider adding approval tracking mechanism" } ] }, "review_gates": { "pre_implementation": { "reviewer": "user", "question": "Does this meta plan correctly describe the planning workflow system? Are all 4 tools well-defined?", "checkpoint": "Before creating individual tool plans" }, "post_tool_plans": { "reviewer": "user", "question": "Are all 4 individual tool plans approved and ready for implementation?", "checkpoint": "Before implementing any tools" }, "post_foundation": { "reviewer": "user", "question": "Is Tool #1 working correctly? Is infrastructure pattern established?", "checkpoint": "After Phase 1 (Tool #1), before Phase 2 (Tool #2)" }, "post_core_automation": { "reviewer": "user", "question": "Is Tool #2 analyzing projects correctly? Are discovered patterns accurate?", "checkpoint": "After Phase 2 (Tool #2), before Phase 3 (Tool #3)" }, "post_validation": { "reviewer": "user", "question": "Is Tool #3 validating plans correctly? Is scoring algorithm fair?", "checkpoint": "After Phase 3 (Tool #3), before Phase 4 (Tool #4)" }, "post_integration": { "reviewer": "user", "question": "Does the complete workflow work end-to-end? Is user approval gate clear?", "checkpoint": "After Phase 5 (Integration), before Phase 6 (Documentation)" }, "final_approval": { "reviewer": "user", "question": "Is the system ready for production use? 
Documentation complete?", "checkpoint": "Before marking status as 'implemented' and creating changelog entry" } }, "implementation_checklist": { "pre_implementation": [ "☐ Review meta plan for completeness", "☐ Get user approval on overall approach", "☐ Create 4 individual tool implementation plans", "☐ Get user approval on all 4 tool plans" ], "phase_1_foundation": [ "☐ META-001: Planning constants (constants.py)", "☐ META-002: Planning TypedDicts (type_defs.py)", "☐ META-003: Planning validation functions (validation.py)", "☐ TOOL1-001: Implement get_planning_template tool", "☐ META-004: Test Tool #1 in isolation" ], "phase_2_core_automation": [ "☐ TOOL2-001: Implement analyze_project_for_planning tool", "☐ META-005: Test Tool #2 on sample projects" ], "phase_3_quality_system": [ "☐ TOOL3-001: Implement validate_implementation_plan tool", "☐ META-006: Test Tool #3 with various plan qualities" ], "phase_4_polish": [ "☐ TOOL4-001: Implement generate_plan_review_report tool", "☐ META-007: Test Tool #4 report formatting" ], "phase_5_integration": [ "☐ META-008: End-to-end workflow test", "☐ META-009: Review loop iteration tests", "☐ META-010: User approval gate test", "☐ META-011: Performance testing" ], "phase_6_documentation": [ "☐ DOC-001: Update README.md", "☐ DOC-002: Update API.md", "☐ DOC-003: Update ARCHITECTURE.md", "☐ DOC-004: Update CLAUDE.md", "☐ DOC-005: Create planning workflow guide" ], "finalization": [ "☐ Add changelog entry via add_changelog_entry", "☐ Update meta plan status to 'implemented'", "☐ Update individual tool plan statuses to 'implemented'", "☐ Commit all changes", "☐ Create release notes" ] }, "task_id_reference": { "description": "Task ID prefixes for planning workflow system", "prefixes": { "META": "Meta-level tasks - testing, integration, infrastructure shared across all 4 tools", "TOOL1": "Tool #1 (get_planning_template) specific tasks", "TOOL2": "Tool #2 (analyze_project_for_planning) specific tasks", "TOOL3": "Tool #3 
(validate_implementation_plan) specific tasks", "TOOL4": "Tool #4 (generate_plan_review_report) specific tasks", "DOC": "Documentation updates for all tools" }, "note": "Individual tool plans will have their own task IDs (INFRA-NNN, SCAN-NNN, VALID-NNN, FORMAT-NNN)" }, "notes_and_considerations": { "design_decisions": [ { "decision": "User approval gate is procedural, not technical", "rationale": "AI assistants work through natural language; technical enforcement would require state management and complex workflow tracking; procedural approach relies on clear documentation in CLAUDE.md", "trade_off": "Easier to implement and maintain, but relies on AI following instructions correctly; user must trust AI workflow adherence" }, { "decision": "Review loop has max 5 iterations", "rationale": "Prevents infinite loops; forces escalation to user if plan can't reach 85+ score after 5 attempts", "trade_off": "Some plans may need > 5 iterations to reach 85+, but better to escalate than loop indefinitely" }, { "decision": "Validation scoring: critical -10, major -5, minor -1", "rationale": "Critical issues should heavily impact score (each critical = 10% reduction); minor issues should accumulate but not dominate", "trade_off": "Somewhat arbitrary weights; may need tuning based on real-world usage" }, { "decision": "4 separate tools instead of 1 monolithic tool", "rationale": "Modularity allows using tools independently; composability for different workflows; clearer separation of concerns", "trade_off": "More tools to maintain and document; slightly more complex workflow for users" } ], "potential_challenges": [ { "challenge": "analyze_project_for_planning may be slow on very large codebases (10,000+ files)", "mitigation": "Implement parallel file scanning; add progress logging; consider sampling for pattern detection", "fallback": "Allow users to specify file/directory filters; cache analysis results" }, { "challenge": "Validation rules may be too strict or too lenient", 
"mitigation": "Start conservative (stricter rules); gather user feedback; adjust scoring weights based on real usage", "fallback": "Allow users to configure validation strictness level (strict/moderate/lenient)" }, { "challenge": "AI may not understand how to use review loop correctly", "mitigation": "Provide extensive examples in CLAUDE.md; include step-by-step workflow guide; show complete review loop examples", "fallback": "Create meta-tool that orchestrates the entire workflow (like update_changelog pattern)" }, { "challenge": "Pattern discovery accuracy may vary by language/framework", "mitigation": "Focus on universal patterns (naming conventions, error handling, file organization); avoid language-specific heuristics initially", "fallback": "Allow manual pattern specification in project config file" } ] }, "future_enhancements": { "v1_1_improvements": [ { "feature": "Configurable validation strictness", "description": "Allow projects to configure validation rules and scoring weights in .planning-config.json", "benefit": "Different projects have different quality requirements; some may need stricter validation, others more lenient", "effort": "2-3 hours" }, { "feature": "Analysis result caching", "description": "Cache PreparationSummaryDict with project hash; invalidate on file changes", "benefit": "Near-instant re-analysis for unchanged projects; speeds up iterative planning", "effort": "3-4 hours" }, { "feature": "Meta-tool for complete workflow orchestration", "description": "Single tool that guides AI through entire workflow: analyze → plan → validate → review → approve", "benefit": "Simplifies workflow for AI; reduces chance of workflow errors; similar to update_changelog pattern", "effort": "4-5 hours" }, { "feature": "Language-specific pattern analyzers", "description": "Specialized analyzers for Python, TypeScript, Go, Rust that understand language-specific patterns", "benefit": "More accurate pattern discovery; better recommendations; language-aware 
validation", "effort": "8-10 hours per language" }, { "feature": "Plan diff and comparison", "description": "Tool to compare plan iterations and show what changed between versions", "benefit": "Helps users understand plan evolution; tracks improvement through review iterations", "effort": "3-4 hours" }, { "feature": "Plan templates for common feature types", "description": "Pre-built plan templates for common features (authentication, CRUD API, data migration, etc.)", "benefit": "Faster planning for common patterns; consistency across similar features", "effort": "6-8 hours to create template system + 2-3 hours per template" } ] }, "next_steps": { "immediate": [ "1. User reviews and approves this meta plan", "2. Create 4 individual tool implementation plans:", " - tool-1-get-planning-template-plan.json", " - tool-2-analyze-project-for-planning-plan.json", " - tool-3-validate-implementation-plan-plan.json", " - tool-4-generate-plan-review-report-plan.json", "3. User reviews and approves all 4 individual plans", "4. Begin implementation starting with Phase 1 (Tool #1)" ], "post_implementation": [ "5. Test complete workflow end-to-end", "6. Gather user feedback on validation strictness", "7. Tune scoring weights if needed", "8. Document workflow patterns in CLAUDE.md", "9. Create planning workflow guide for users", "10. Consider v1.1 enhancements based on usage" ] } }
