# tailtest_scenario_plan
Generates structured scaffolding that an agent uses to write a SCENARIO PLAN, detailing language, framework, depth, adversarial count, baseline scenarios, and test file path for automated test creation.
## Instructions
Return structured scaffolding the agent uses to write its SCENARIO PLAN: language, framework, depth, R15 adversarial count requirement, language and framework baseline scenarios, test file path, and prose instructions. The agent uses this scaffolding to compose the actual SCENARIO PLAN scenario lines.
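The scaffolding-to-plan step can be sketched as follows. This is a hedged illustration of how an agent might turn the returned baseline lists and adversarial requirements into SCENARIO PLAN lines; the baseline entries and category names below are sample values, not the tool's real R2 templates or its 8-category list.

```python
# Hedged sketch: composing SCENARIO PLAN lines from the returned scaffolding.
# All baseline and adversarial entries here are illustrative sample values.
language_baseline = ["Function returns expected value on valid input"]
framework_baseline = ["Route returns 200 on valid path", "404 on unknown route"]
adversarial = [
    ("boundary", "Empty input"),
    ("invalid-input", "Wrong field type"),
]

# Baseline scenarios first, then adversarial scenarios with their R15 labels.
lines = [f"- {s}" for s in language_baseline + framework_baseline]
lines += [f"- {s} [adversarial: {c}]" for c, s in adversarial]
```

The `[adversarial: <category>]` suffix matches the labeling convention the tool's instructions ask for.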
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| file_path | Yes | Relative or absolute path to the source file under test. | |
| project_root | No | Project root directory. | Current working directory |
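For orientation, a call at the default depth might return a payload shaped like the following. The field names follow the documented return schema; every value here is hypothetical (including the depth counts and the truncated category list), not real tool output.

```python
# Hypothetical return payload at depth "standard"; values are illustrative only.
example_result = {
    "file_path": "src/app/routes.py",
    "language": "python",
    "framework": "flask",
    "depth": "standard",
    "scenario_count_target": [8, 12],          # [count_min, count_max]
    "adversarial_count_required": 3,
    "adversarial_categories": ["boundary", "invalid-input"],  # truncated sample
    "language_baseline": ["Function returns expected value on valid input"],
    "framework_baseline": ["Route returns 200 on valid path"],
    "test_file_path": "tests/app/test_routes.py",
    "instructions": "Generate a SCENARIO PLAN for src/app/routes.py. ...",
}
```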
## Implementation Reference
- The main handler function `scenario_plan()` that executes the tool logic. It reads depth from config, detects language/framework, computes scenario counts and adversarial requirements, and returns structured scaffolding for the agent's SCENARIO PLAN.
```python
def scenario_plan(file_path: str, project_root: str | None = None) -> dict[str, Any]:
    """Return structured scaffolding the agent uses to write its SCENARIO PLAN.

    Args:
        file_path: relative or absolute path to the source file.
        project_root: project root for reading config.json. Defaults to cwd.

    Returns:
        Dict with: file_path, language, framework, depth, scenario_count_target,
        adversarial_count_required, adversarial_categories, language_baseline,
        framework_baseline, test_file_path, instructions.
    """
    project_root = project_root or os.getcwd()
    language = detect_language(file_path) or "unknown"
    framework = _detect_framework(language, file_path, project_root)
    depth = _read_depth(project_root)
    count_min, count_max = SCENARIO_COUNT_BY_DEPTH[depth]
    adv_required = ADVERSARIAL_BY_DEPTH[depth]
    test_path = _test_file_path(file_path, language, framework)

    instructions = (
        f"Generate a SCENARIO PLAN for {file_path}. "
        f"Depth is {depth}: produce {count_min} to {count_max} scenarios total. "
        f"R15 requires at least {adv_required} adversarial scenarios labeled "
        f"[adversarial: <category>]. Pick categories from the 8-category list "
        f"that genuinely apply to this file; document any skipped category with "
        f"a reason. Include the language baseline scenarios. "
    )
    if framework:
        instructions += (
            f"Include the {framework} framework baseline scenarios on top of the "
            f"language baseline. "
        )
    if depth == "simple":
        instructions += (
            "Note: at depth: simple, R15 does not apply -- generate happy-path "
            "scenarios only. "
        )

    return {
        "file_path": file_path,
        "language": language,
        "framework": framework,
        "depth": depth,
        "scenario_count_target": [count_min, count_max],
        "adversarial_count_required": adv_required,
        "adversarial_categories": ADVERSARIAL_CATEGORIES,
        "language_baseline": LANGUAGE_BASELINE.get(language, []),
        "framework_baseline": _framework_baseline(framework),
        "test_file_path": test_path,
        "instructions": instructions,
    }
```

- `mcp_server/src/tailtest_mcp/server.py:37-60` (registration): registration of `tailtest_scenario_plan` as a `Tool` in the MCP server's `list_tools()`, with input schema (`file_path` required, `project_root` optional).
```python
Tool(
    name="tailtest_scenario_plan",
    description=(
        "Return structured scaffolding the agent uses to write its SCENARIO PLAN: "
        "language, framework, depth, R15 adversarial count requirement, language and "
        "framework baseline scenarios, test file path, and prose instructions. The agent "
        "uses this scaffolding to compose the actual SCENARIO PLAN scenario lines."
    ),
    inputSchema={
        "type": "object",
        "properties": {
            "file_path": {
                "type": "string",
                "description": "Relative or absolute path to the source file under test.",
            },
            "project_root": {
                "type": "string",
                "description": "Project root directory. Defaults to the current working directory.",
            },
        },
        "required": ["file_path"],
        "additionalProperties": False,
    },
),
```

- `mcp_server/src/tailtest_mcp/server.py:157-165` (dispatch): call handler in `server.py` that imports and invokes `scenario_plan()` when the tool name is `tailtest_scenario_plan`.
```python
if name == "tailtest_scenario_plan":
    from .tools.scenario_plan import scenario_plan
    import json as _json

    result = scenario_plan(
        file_path=arguments["file_path"],
        project_root=arguments.get("project_root"),
    )
    return [TextContent(type="text", text=_json.dumps(result, indent=2))]
```

- Helper `_read_depth()` reads the depth setting from `.tailtest/config.json`.
```python
def _read_depth(project_root: str) -> str:
    """Read depth from .tailtest/config.json. Defaults to 'standard'."""
    config_path = os.path.join(project_root, ".tailtest", "config.json")
    if os.path.exists(config_path):
        try:
            with open(config_path) as f:
                cfg = json.load(f)
            value = cfg.get("depth")
            if value in ADVERSARIAL_BY_DEPTH:
                return value
        except (json.JSONDecodeError, OSError):
            pass
    return "standard"
```

- Helper `_framework_baseline()` returns framework-specific baseline scenarios (flask, fastapi, django).
```python
def _framework_baseline(framework: str | None) -> list[str]:
    """Framework baseline scenarios from R2 templates."""
    if framework is None:
        return []
    if framework == "flask":
        return [
            "Route returns 200 on valid path",
            "404 on unknown route",
            "Blueprint registration binds the correct prefix",
            "test_client fixture used within app context",
            "Validation rejects bad input",
        ]
    if framework == "fastapi":
        return [
            "Valid request body returns expected response",
            "Missing required field returns 422",
            "Wrong field type returns 422",
            "Dependency override works in test (app.dependency_overrides)",
        ]
    if framework == "django":
        return [
            "Request with valid auth",
            "Request without auth (expect 403/redirect)",
            "Model field validation rejects invalid data",
            "URL routes to the correct view",
        ]
    return []
```
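The depth-reading behaviour can be exercised end to end against a temporary project root. The sketch below is a standalone mirror of `_read_depth()`, not an import of the real module; the `VALID_DEPTHS` set is an assumption standing in for the keys of `ADVERSARIAL_BY_DEPTH`, which are not shown in this section.

```python
import json
import os
import tempfile

VALID_DEPTHS = {"simple", "standard"}  # assumed; stands in for ADVERSARIAL_BY_DEPTH keys

def read_depth(project_root: str) -> str:
    """Standalone mirror of _read_depth(): a valid config value wins, else 'standard'."""
    config_path = os.path.join(project_root, ".tailtest", "config.json")
    if os.path.exists(config_path):
        try:
            with open(config_path) as f:
                cfg = json.load(f)
            value = cfg.get("depth")
            if value in VALID_DEPTHS:
                return value
        except (json.JSONDecodeError, OSError):
            pass
    return "standard"

with tempfile.TemporaryDirectory() as root:
    assert read_depth(root) == "standard"      # no config file -> default
    os.makedirs(os.path.join(root, ".tailtest"))
    cfg_path = os.path.join(root, ".tailtest", "config.json")
    with open(cfg_path, "w") as f:
        json.dump({"depth": "simple"}, f)
    assert read_depth(root) == "simple"        # valid value is honoured
    with open(cfg_path, "w") as f:
        f.write("not json")
    assert read_depth(root) == "standard"      # malformed config falls back
```

Note the fallback shape: an unknown depth value, a missing file, and unparseable JSON all silently resolve to `"standard"` rather than raising.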