submit_audit_result
Submits audit results (pass/fail status, identified issues, and a numeric score) to the Blind-Auditor-MCP server, which drives code self-correction through prompt injection.
Instructions
Submit the audit result.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| passed | Yes | Whether the audited code passed review (`bool`). | |
| issues | Yes | Identified issues, one string per issue (`list[str]`). | |
| score | No | Quality score out of 100 (`int`). Scores below 80 force a failure. | `0` |
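By way of example, an arguments payload matching this schema might look like the following sketch; the issue strings and score are illustrative, not taken from this page:

```python
# Illustrative arguments for submit_audit_result; values are hypothetical.
arguments = {
    "passed": False,  # required: overall pass/fail verdict
    "issues": [       # required: identified issues, one string each
        "SQL query built via string concatenation (injection risk)",
        "no error handling around file I/O",
    ],
    "score": 45,      # optional quality score out of 100; defaults to 0
}
```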
Implementation Reference
- src/main.py:188-214 (handler): The complete handler function for the `submit_audit_result` tool. It is registered via the `@mcp.tool()` decorator. The function signature provides the input schema (`passed: bool`, `issues: list[str]`, `score: int = 0`). It validates the score against a minimum threshold of 80, updates the session audit history, and returns a success or failure message based on the `passed` parameter.

````python
@mcp.tool()
def submit_audit_result(passed: bool, issues: list[str], score: int = 0) -> str:
    """Submit the audit result."""
    print(f"DEBUG: submit_audit_result called: passed={passed}, score={score}", file=sys.stderr)

    # Hardcoded score validation
    MIN_SCORE = 80
    if passed and score < MIN_SCORE:
        passed = False
        issues.append(f"[SYSTEM ENFORCEMENT] Score ({score}) is below minimum threshold ({MIN_SCORE}). You cannot pass code with such a low score.")

    session.audit_history.append({
        "passed": passed,
        "issues": issues,
        "score": score,
        "retry_count": session.retry_count
    })

    if passed:
        session.status = "APPROVED"
        return f"✅ AUDIT PASSED (Score: {score}/100)\n\n```\n{session.current_code}\n```"
    else:
        session.retry_count += 1
        session.status = "IDLE"
        issues_formatted = "\n".join([f"- {issue}" for issue in issues])
        return f"❌ AUDIT FAILED (Score: {score}/100)\n\n**Issues:**\n{issues_formatted}\n\nRetry count: {session.retry_count}/{rules_loader.get_max_retries()}"
````
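For context, here is a minimal sketch of exercising this handler end to end with the official `mcp` Python SDK's stdio client. The launch command (`python src/main.py`) is an assumption about how this server starts, not something this page confirms, and the argument values are hypothetical. It demonstrates the enforcement path: `passed=True` with a score below 80 is downgraded to a failure.

```python
# Hypothetical client sketch, assuming the official `mcp` Python SDK and
# that the server can be launched via `python src/main.py` over stdio.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    server = StdioServerParameters(command="python", args=["src/main.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # passed=True with score=72 trips the MIN_SCORE=80 check, so the
            # server flips passed to False and appends a [SYSTEM ENFORCEMENT] issue.
            result = await session.call_tool(
                "submit_audit_result",
                arguments={"passed": True, "issues": [], "score": 72},
            )
            print(result.content[0].text)  # expected: "❌ AUDIT FAILED (Score: 72/100) ..."


asyncio.run(main())
```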