Pro/Teams: second-pass adversarial certification of an architect.validate run that scored production_ready (A or B first-pass tier). Mints the certified production_ready badge when both reviewers sign off; caps the run to C/emerging when the second pass surfaces a missed production_blocker.

ATOMIC ONE-SHOT, RECOVERABLE: a single LLM call typically runs 60-150s server-side (empirical, on real third-party code at high reasoning effort; small payloads finish faster). This exceeds the standard MCP-client tool-call idle budget (~60s in Claude Code), so the FIRST `notifications/progress` event fires at t=0 and carries the same run_id you passed in. If your client closes the tool call early, recover the cert verdict via `me.validation_history(run_id=<that-id>)` once the server-side LLM call lands (same pattern as architect.validate). The run is atomic by contract: no in_progress lifecycle, no cancellation, no resume. If the cert call fails outright (provider error, persistence error), a fresh `architect.certify` is the recovery path; the eligibility gate enforces the retry budget. For long-running cert workflows the answer is to re-validate, not to make this tool stateful.

ELIGIBILITY GATE (typed rejection enum on failure): the caller must own the run; the run must be tier=production_ready, less than 24h old, not already certified, and within the cert retry budget (max 3 attempts per run). The tool reads first-pass findings from the persisted run; the caller must re-send the code (the architect persists findings + recommendations, never code, by design: privacy-preserving). The cert outcome updates the persisted run's result_json so the public review URL and me.validation_history(run_id=...) reflect it.

ENTERPRISE-SAFE: code is processed transiently by the LLM provider (OpenAI, no-training-on-API-data) and dropped; payloads are JSON-escaped and delimited as inert untrusted data, so prompt injection inside them is ignored. UK/EU data residency (Cloud Run europe-west2). Auth: Bearer <token>.
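The early-close recovery pattern above can be sketched client-side. This is a minimal illustration, not the real MCP client API: `StubClient`, `certify_with_recovery`, and `ToolCallTimeout` are hypothetical names standing in for your client wrapper, and the stub simulates the idle budget expiring before the server-side call lands.

```python
import time

class ToolCallTimeout(Exception):
    """Raised when the client closes the tool call before the server responds."""

class StubClient:
    """Hypothetical stand-in for an MCP client wrapper (names illustrative)."""
    def __init__(self):
        # The server-side LLM call "lands" shortly after the tool call is closed.
        self._done_at = time.monotonic() + 0.2

    def certify(self, run_id, code):
        # Simulate the client-side idle budget (~60s in Claude Code) expiring
        # while the 60-150s server-side call is still running.
        raise ToolCallTimeout(f"tool call for run {run_id} closed early")

    def validation_history(self, run_id):
        # Stands in for me.validation_history(run_id=...), which reflects the
        # cert outcome once the run's result_json is updated.
        certified = time.monotonic() >= self._done_at
        return {"run_id": run_id, "tier": "production_ready", "certified": certified}

def certify_with_recovery(client, run_id, code, poll_interval=0.05, max_wait=5.0):
    """Run the cert; if the tool call is closed early, recover the verdict by
    polling validation history until the server-side call lands."""
    try:
        return client.certify(run_id, code)
    except ToolCallTimeout:
        deadline = time.monotonic() + max_wait
        while time.monotonic() < deadline:
            record = client.validation_history(run_id)
            if record["certified"]:
                return record
            time.sleep(poll_interval)
        # Budget exhausted with no persisted verdict: per the contract, a fresh
        # architect.certify (subject to the eligibility gate) is the retry path.
        raise

verdict = certify_with_recovery(StubClient(), "run-123", "<code payload>")
```

Because the run is atomic and the verdict is persisted to the run record, the poll is read-only and safe to repeat; it never restarts or mutates the cert itself.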