# validate_debug_completion
Validates that post-run debug artifacts (JSON and log file) exist and contain required content. Checks artifact JSON keys and log markers, returning any errors found.
## Instructions
Validate post-run debug artifacts landed on disk with the right shape.
Three deterministic checks:
- STR-001: ``selected_robust_trial.json`` + log file both exist.
- VAL-003: artifact JSON parses and carries every required top-level key.
- BT-010: log contains each required marker substring (order irrelevant).
Args:

- artifact_path: Absolute path to the JSON artifact (typically ``<workspace>/backtest/selected_robust_trial.json``).
- log_path: Absolute path to the debug log file.
- required_json_keys: Top-level keys expected on the artifact. Defaults to ``["trial_number", "params", "metrics"]``.
- required_log_markers: Substrings expected in the log. Defaults to ``["STAGE 4 COMPLETE", "STAGE 5 COMPLETE", "FINAL SUCCESS"]``.
Returns ``{"any_errors": bool, "findings": [{"code", "message", "context"}, ...]}``. Never raises: parse and file-access failures surface as findings with the appropriate error code.
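As a minimal, self-contained sketch (stdlib only), the three checks can be restated against a throwaway workspace. The file names, keys, and markers below mirror the documented defaults, but the check logic is a simplified illustration, not the actual implementation:

```python
import json
import tempfile
from pathlib import Path

REQUIRED_KEYS = ["trial_number", "params", "metrics"]
REQUIRED_MARKERS = ["STAGE 4 COMPLETE", "STAGE 5 COMPLETE", "FINAL SUCCESS"]

findings = []
with tempfile.TemporaryDirectory() as tmp:
    artifact = Path(tmp) / "backtest" / "selected_robust_trial.json"
    log = Path(tmp) / "debug.log"
    artifact.parent.mkdir(parents=True)
    artifact.write_text(json.dumps({"trial_number": 7, "params": {}, "metrics": {}}))
    log.write_text("STAGE 4 COMPLETE\nSTAGE 5 COMPLETE\nFINAL SUCCESS\n")

    # STR-001: both files must exist.
    for path in (artifact, log):
        if not path.exists():
            findings.append({"code": "STR-001", "message": f"missing: {path}"})

    # VAL-003: the artifact must parse and carry every required top-level key.
    if artifact.exists():
        try:
            payload = json.loads(artifact.read_text())
        except json.JSONDecodeError as e:
            findings.append({"code": "VAL-003", "message": f"not valid JSON: {e}"})
        else:
            missing = [k for k in REQUIRED_KEYS if k not in payload]
            if missing:
                findings.append({"code": "VAL-003", "message": f"missing keys: {missing}"})

    # BT-010: every marker substring must appear somewhere in the log.
    if log.exists():
        text = log.read_text()
        absent = [m for m in REQUIRED_MARKERS if m not in text]
        if absent:
            findings.append({"code": "BT-010", "message": f"markers absent: {absent}"})

result = {"any_errors": bool(findings), "findings": findings}
print(result)
```

Here every check passes, so the result has no findings; deleting a key from the artifact or a line from the log would surface the corresponding finding instead.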
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| artifact_path | Yes | Absolute path to the JSON artifact. | |
| log_path | Yes | Absolute path to the debug log file. | |
| required_json_keys | No | Top-level keys expected on the artifact. | ``["trial_number", "params", "metrics"]`` |
| required_log_markers | No | Substrings expected in the log. | ``["STAGE 4 COMPLETE", "STAGE 5 COMPLETE", "FINAL SUCCESS"]`` |
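For instance, an argument payload supplying only the required fields (the paths here are hypothetical) leaves the optional fields to fall back to the documented defaults:

```python
# Hypothetical tool-call arguments; omitting required_json_keys and
# required_log_markers means the documented defaults apply.
arguments = {
    "artifact_path": "/workspace/backtest/selected_robust_trial.json",
    "log_path": "/workspace/backtest/debug.log",
}
```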
## Implementation Reference
- Core implementation of validate_debug_completion. Performs three deterministic checks: STR-001 (artifact/log existence), VAL-003 (JSON parse and required keys), BT-010 (log marker presence). Returns a Report object.
```python
def validate_debug_completion(
    artifact_path: "Path | str",
    log_path: "Path | str",
    required_json_keys: Iterable[str] = _DEFAULT_REQUIRED_JSON_KEYS,
    required_log_markers: Iterable[str] = _DEFAULT_REQUIRED_LOG_MARKERS,
) -> Report:
    """Return a ``Report`` with findings for every problem detected.

    Parameters
    ----------
    artifact_path
        Path to the JSON artifact (typically
        ``<workspace>/backtest/selected_robust_trial.json``).
    log_path
        Path to the debug log file.
    required_json_keys
        Top-level keys expected on the artifact JSON.
    required_log_markers
        Substrings expected somewhere in the log (order irrelevant).
    """
    report = Report()
    artifact_path = Path(artifact_path)
    log_path = Path(log_path)
    required_json_keys = tuple(required_json_keys)
    required_log_markers = tuple(required_log_markers)

    # --- Artifact existence + JSON shape ------------------------------------
    if not artifact_path.exists():
        report.add(Finding(
            code="STR-001",
            message=f"Required artifact missing: {artifact_path}",
            context={"file": str(artifact_path)},
        ))
    else:
        try:
            payload = json.loads(artifact_path.read_text(encoding="utf-8"))
        except json.JSONDecodeError as e:
            report.add(Finding(
                code="VAL-003",
                message=f"Artifact is not valid JSON: {e}",
                context={
                    "file": str(artifact_path),
                    "missing_keys": list(required_json_keys),
                    "present_keys": [],
                },
            ))
        else:
            missing = [k for k in required_json_keys if k not in payload]
            if missing:
                report.add(Finding(
                    code="VAL-003",
                    message=f"Artifact missing required keys: {missing}",
                    context={
                        "file": str(artifact_path),
                        "missing_keys": missing,
                        "present_keys": sorted(payload.keys()),
                    },
                ))

    # --- Log file existence + marker presence -------------------------------
    if not log_path.exists():
        report.add(Finding(
            code="STR-001",
            message=f"Log file missing: {log_path}",
            context={"file": str(log_path)},
        ))
    else:
        log_text = log_path.read_text(encoding="utf-8")
        missing_markers = [m for m in required_log_markers if m not in log_text]
        if missing_markers:
            last_seen: str | None = None
            for marker in required_log_markers:
                if marker in log_text:
                    last_seen = marker
            report.add(Finding(
                code="BT-010",
                message=f"Required log markers absent: {missing_markers}",
                context={
                    "log_path": str(log_path),
                    "missing_marker": missing_markers,
                    "last_marker_seen": last_seen,
                },
            ))

    return report
```

- Finding and Report dataclasses used by the validator. Finding holds code/message/context. Report aggregates findings and provides to_dict() for MCP serialization.
```python
@dataclass
class Finding:
    """One issue surfaced by a validator.

    ``code`` is an error-catalog code (STR-*, VAL-*, PRM-*, IND-*, BT-*).
    ``message`` is a short human-readable summary. ``context`` carries the
    structured key/value fix-template fields from the catalog entry so
    downstream consumers can format remediation guidance deterministically.
    """

    code: str
    message: str
    context: Dict[str, Any] = field(default_factory=dict)


@dataclass
class Report:
    """Aggregate of findings from one validator call.

    A validator returns one ``Report``. The caller can inspect
    ``any_errors`` as a one-shot gate, or iterate ``findings`` to surface
    every issue at once. ``to_dict()`` produces the JSON-serializable form
    the MCP tool wrappers return to agents.
    """

    findings: List[Finding] = field(default_factory=list)

    def add(self, finding: Finding) -> None:
        self.findings.append(finding)

    @property
    def any_errors(self) -> bool:
        return len(self.findings) > 0

    def to_dict(self) -> Dict[str, Any]:
        return {
            "any_errors": self.any_errors,
            "findings": [asdict(f) for f in self.findings],
        }
```

- echolon/mcp/server.py:242-278 (registration): MCP tool registration via the @server.tool() decorator. Defines the tool signature (artifact_path, log_path, required_json_keys, required_log_markers) and delegates to the core implementation. Converts the Report to a dict.
```python
@server.tool()
def validate_debug_completion(
    artifact_path: str,
    log_path: str,
    required_json_keys: list[str] | None = None,
    required_log_markers: list[str] | None = None,
) -> dict:
    """Validate post-run debug artifacts landed on disk with the right shape.

    Three deterministic checks:

    - STR-001: ``selected_robust_trial.json`` + log file both exist.
    - VAL-003: artifact JSON parses and carries every required top-level key.
    - BT-010: log contains each required marker substring (order irrelevant).

    Args:
        artifact_path: Absolute path to the JSON artifact (typically
            ``<workspace>/backtest/selected_robust_trial.json``).
        log_path: Absolute path to the debug log file.
        required_json_keys: Top-level keys expected on the artifact.
            Defaults to ``["trial_number", "params", "metrics"]``.
        required_log_markers: Substrings expected in the log. Defaults
            to ``["STAGE 4 COMPLETE", "STAGE 5 COMPLETE", "FINAL SUCCESS"]``.

    Returns ``{"any_errors": bool, "findings": [{"code", "message",
    "context"}, ...]}``. Never raises — parse / file-access failures
    surface as findings with the appropriate error code.
    """
    from echolon.strategy.validators.debug_completion import (
        validate_debug_completion as _impl,
    )

    kwargs = {}
    if required_json_keys is not None:
        kwargs["required_json_keys"] = required_json_keys
    if required_log_markers is not None:
        kwargs["required_log_markers"] = required_log_markers

    report = _impl(artifact_path=artifact_path, log_path=log_path, **kwargs)
    return report.to_dict()
```

- Re-exports Finding and Report from the validators package so debug_completion.py can import them via ``from echolon.strategy.validators import Finding, Report``.
```python
from echolon.strategy.validators._report import Finding, Report

__all__ = ["Finding", "Report"]
```
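To illustrate the wire shape the tool returns, here is a trimmed, runnable restatement of the two dataclasses above producing the ``to_dict()`` payload for a single BT-010 finding. The log path and marker values are illustrative only:

```python
from dataclasses import asdict, dataclass, field
from typing import Any, Dict, List


@dataclass
class Finding:
    code: str
    message: str
    context: Dict[str, Any] = field(default_factory=dict)


@dataclass
class Report:
    findings: List[Finding] = field(default_factory=list)

    def add(self, finding: Finding) -> None:
        self.findings.append(finding)

    @property
    def any_errors(self) -> bool:
        return len(self.findings) > 0

    def to_dict(self) -> Dict[str, Any]:
        return {
            "any_errors": self.any_errors,
            "findings": [asdict(f) for f in self.findings],
        }


report = Report()
report.add(Finding(
    code="BT-010",
    message="Required log markers absent: ['FINAL SUCCESS']",
    context={
        "log_path": "/tmp/debug.log",          # illustrative path
        "missing_marker": ["FINAL SUCCESS"],
        "last_marker_seen": "STAGE 5 COMPLETE",
    },
))
payload = report.to_dict()
print(payload)
```

Because ``any_errors`` is derived from the findings list, a caller can gate on that single boolean or iterate the findings for per-issue remediation.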