# tool_assess_submission_readiness

Determine whether a Gradescope submission is ready for auto-grading by analyzing its content and returning a confidence score together with a recommended reading strategy.
## Instructions

Assess whether an agent should auto-grade a specific submission. Returns a crop-first read plan, fallback rules for whole-page and adjacent-page reads, and a coarse confidence score that can be used to skip or escalate.

Args:
- `course_id`: The Gradescope course ID.
- `assignment_id`: The assignment ID.
- `question_id`: The question ID.
- `submission_id`: The question submission ID.

## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| course_id | Yes | The Gradescope course ID. | |
| assignment_id | Yes | The assignment ID. | |
| question_id | Yes | The question ID. | |
| submission_id | Yes | The question submission ID. | |
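All four fields are required strings. A minimal sketch of an arguments payload and the required-field check the tool performs; the ID values are placeholders, not real Gradescope IDs:

```python
# Example arguments for tool_assess_submission_readiness.
# The ID values below are placeholders for illustration only.
arguments = {
    "course_id": "123456",
    "assignment_id": "7891011",
    "question_id": "42",
    "submission_id": "987654321",
}

# The tool returns an error string if any required field is missing or empty.
required = ("course_id", "assignment_id", "question_id", "submission_id")
missing = [name for name in required if not arguments.get(name)]
```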
## Implementation Reference

The core implementation of the `assess_submission_readiness` tool:
```python
def assess_submission_readiness(
    course_id: str,
    assignment_id: str,
    question_id: str,
    submission_id: str,
) -> str:
    """Assess how safely an agent can auto-grade a specific submission.

    Returns the preferred read order, page/crop hints, and a confidence
    score that can be used to skip or escalate uncertain submissions.
    """
    if not course_id or not assignment_id or not question_id or not submission_id:
        return (
            "Error: course_id, assignment_id, question_id, and submission_id "
            "are required."
        )

    try:
        questions = _fetch_assignment_questions(course_id, assignment_id)
        ctx = _get_grading_context(course_id, question_id, submission_id)
        prompt_text, explanation = _extract_outline_prompt_and_reference(
            course_id, assignment_id, question_id
        )
    except AuthError as e:
        return f"Authentication error: {e}"
    except ValueError as e:
        return f"Error: {e}"
    except Exception as e:
        return f"Error assessing submission readiness: {e}"

    props = ctx["props"]
    question = props.get("question", {})
    parameters = question.get("parameters") or {}
    crop_rects = parameters.get("crop_rect_list", [])
    pages = [
        page
        for page in props.get("pages", [])
        if isinstance(page, dict) and page.get("url")
    ]
    relevant_pages = _select_relevant_pages(pages, crop_rects)
    page_count = len(relevant_pages)
    reference_answer = explanation or None

    readiness, reasons, action = _compute_readiness(
        prompt_text, reference_answer, crop_rects, relevant_pages
    )
    question_label = _build_question_label(question_id, questions)

    strategy = [
        "1. Read the crop region only.",
        "2. If the crop looks truncated or handwriting crosses the border, "
        "read the whole page.",
        "3. If the reasoning still looks incomplete, inspect the previous "
        "and next page.",
    ]
    lines = [
        f"## Readiness Assessment — {question_label}",
        f"- submission_id: `{submission_id}`",
        f"- readiness: `{readiness:.2f}`",
        f"- status: `{action}`",
        "",
        "### Read Order",
    ]
    lines.extend(f"- {step}" for step in strategy)
```

- `src/gradescope_mcp/server.py:643-663` (registration): the registration of the `tool_assess_submission_readiness` tool in the MCP server.
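The helper `_compute_readiness` is not shown in the excerpt above. A minimal sketch of one plausible heuristic, assuming (hypothetically) that each available signal adds a fixed amount of confidence and that thresholds map the total to an action; the function name, weights, and cutoffs here are illustrative, not the library's actual implementation:

```python
def compute_readiness(prompt_text, reference_answer, crop_rects, relevant_pages):
    """Hypothetical scoring: each available grading signal adds confidence.

    Returns (readiness, reasons, action), mirroring the tuple unpacked
    by assess_submission_readiness above.
    """
    readiness = 0.0
    reasons = []
    if prompt_text:
        readiness += 0.3
    else:
        reasons.append("no question prompt found in the outline")
    if reference_answer:
        readiness += 0.3
    else:
        reasons.append("no reference answer to compare against")
    if crop_rects:
        readiness += 0.2
    else:
        reasons.append("no crop rectangles; whole-page reads required")
    if 0 < len(relevant_pages) <= 2:
        readiness += 0.2
    else:
        reasons.append("zero or many relevant pages")

    # Illustrative cutoffs for the skip/escalate decision.
    if readiness >= 0.7:
        action = "auto-grade"
    elif readiness >= 0.4:
        action = "escalate"
    else:
        action = "skip"
    return readiness, reasons, action
```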
```python
@mcp.tool()
def tool_assess_submission_readiness(
    course_id: str,
    assignment_id: str,
    question_id: str,
    submission_id: str,
) -> str:
    """Assess whether an agent should auto-grade a specific submission.

    Returns a crop-first read plan, fallback rules for whole-page/adjacent-page
    reads, and a coarse confidence score that can be used to skip or escalate.

    Args:
        course_id: The Gradescope course ID.
        assignment_id: The assignment ID.
        question_id: The question ID.
        submission_id: The question submission ID.
    """
    return assess_submission_readiness(
        course_id, assignment_id, question_id, submission_id
    )
```