tool_assess_submission_readiness

Determine if a Gradescope submission is ready for auto-grading by analyzing content and providing a confidence score with reading strategies.

Instructions

Assess whether an agent should auto-grade a specific submission.

Returns a crop-first read plan, fallback rules for whole-page/adjacent-page
reads, and a coarse confidence score that can be used to skip or escalate.

Args:
    course_id: The Gradescope course ID.
    assignment_id: The assignment ID.
    question_id: The question ID.
    submission_id: The question submission ID.
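
For illustration, here is a minimal sketch of calling the tool directly; the ID values are placeholders, not real Gradescope objects. Over MCP, the same four arguments would be sent in a tools/call request for tool_assess_submission_readiness.

    # Placeholder IDs for illustration only; substitute real Gradescope IDs.
    report = assess_submission_readiness(
        course_id="123456",
        assignment_id="7890123",
        question_id="45678901",
        submission_id="234567890",
    )
    print(report)  # Markdown report: readiness score, status, and read order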

Input Schema

Name            Required    Description    Default
course_id       Yes         -              -
assignment_id   Yes         -              -
question_id     Yes         -              -
submission_id   Yes         -              -

Output Schema

Name      Required    Description    Default
result    Yes         -              -

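The result is a single Markdown string rather than structured data. Judging from the implementation excerpt below, a returned report plausibly looks like the following; the question label, score, and status value are illustrative, and the exact status strings come from a _compute_readiness helper that is not shown:

    ## Readiness Assessment — Question 2 (a)
    - submission_id: `234567890`
    - readiness: `0.85`
    - status: `proceed`

    ### Read Order
    - 1. Read the crop region only.
    - 2. If the crop looks truncated or handwriting crosses the border, read the whole page.
    - 3. If the reasoning still looks incomplete, inspect the previous and next page.
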
Implementation Reference

  • The core implementation of the assess_submission_readiness tool.
    def assess_submission_readiness(
        course_id: str,
        assignment_id: str,
        question_id: str,
        submission_id: str,
    ) -> str:
        """Assess how safely an agent can auto-grade a specific submission.
    
        Returns the preferred read order, page/crop hints, and a confidence score
        that can be used to skip or escalate uncertain submissions.
        """
        if not course_id or not assignment_id or not question_id or not submission_id:
            return (
                "Error: course_id, assignment_id, question_id, and submission_id "
                "are required."
            )
    
        try:
            questions = _fetch_assignment_questions(course_id, assignment_id)
            ctx = _get_grading_context(course_id, question_id, submission_id)
            prompt_text, explanation = _extract_outline_prompt_and_reference(
                course_id, assignment_id, question_id
            )
        except AuthError as e:
            return f"Authentication error: {e}"
        except ValueError as e:
            return f"Error: {e}"
        except Exception as e:
            return f"Error assessing submission readiness: {e}"
    
        props = ctx["props"]
        question = props.get("question", {})
        parameters = question.get("parameters") or {}
        crop_rects = parameters.get("crop_rect_list", [])
        pages = [
            page for page in props.get("pages", [])
            if isinstance(page, dict) and page.get("url")
        ]
        relevant_pages = _select_relevant_pages(pages, crop_rects)
        page_count = len(relevant_pages)
        reference_answer = explanation or None
        readiness, reasons, action = _compute_readiness(
            prompt_text, reference_answer, crop_rects, relevant_pages
        )
        question_label = _build_question_label(question_id, questions)
    
        strategy = [
            "1. Read the crop region only.",
            "2. If the crop looks truncated or handwriting crosses the border, read the whole page.",
            "3. If the reasoning still looks incomplete, inspect the previous and next page.",
        ]
    
        lines = [
            f"## Readiness Assessment — {question_label}",
            f"- submission_id: `{submission_id}`",
            f"- readiness: `{readiness:.2f}`",
            f"- status: `{action}`",
            "",
            "### Read Order",
        ]
        lines.extend(f"- {step}" for step in strategy)
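        # ... (excerpt truncated: the full function goes on to append the page/crop
        # hints mentioned in the docstring and returns the assembled report)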
  • The registration of tool_assess_submission_readiness with the MCP server.
    @mcp.tool()
    def tool_assess_submission_readiness(
        course_id: str,
        assignment_id: str,
        question_id: str,
        submission_id: str,
    ) -> str:
        """Assess whether an agent should auto-grade a specific submission.
    
        Returns a crop-first read plan, fallback rules for whole-page/adjacent-page
        reads, and a coarse confidence score that can be used to skip or escalate.
    
        Args:
            course_id: The Gradescope course ID.
            assignment_id: The assignment ID.
            question_id: The question ID.
            submission_id: The question submission ID.
        """
        return assess_submission_readiness(
            course_id, assignment_id, question_id, submission_id
        )
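Because the tool returns Markdown rather than structured JSON, an agent that wants to branch on the confidence score has to pull it out of the report text. A rough sketch, assuming the "- readiness: `0.85`"-style line shown in the excerpt above; the 0.6 cutoff is an arbitrary illustration, not something the server prescribes:

    import re

    report = tool_assess_submission_readiness(
        course_id="123456",        # placeholder IDs, as above
        assignment_id="7890123",
        question_id="45678901",
        submission_id="234567890",
    )

    # The report contains a line like "- readiness: `0.85`".
    match = re.search(r"readiness: `([0-9.]+)`", report)
    readiness = float(match.group(1)) if match else 0.0

    if readiness >= 0.6:   # illustrative threshold
        pass  # confident enough to auto-grade: follow the read order, then apply a grade
    else:
        pass  # skip or escalate this submission to a human grader
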
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses behavioral traits such as returning a 'crop-first read plan', 'fallback rules', and a 'coarse confidence score', which adds context about outputs. However, it lacks details on permissions, rate limits, or side effects, which are important for a tool that might influence grading decisions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the purpose, followed by details on returns and parameters. Every sentence adds value without redundancy, making it efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (assessing submission readiness) and the presence of an output schema (which likely covers return values), the description is mostly complete. It explains the purpose, parameters, and key outputs like confidence scores. However, without annotations, it could benefit from more behavioral context, such as error handling or performance implications.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It lists the four parameters (course_id, assignment_id, question_id, submission_id) and briefly explains they are IDs for Gradescope entities, adding meaning beyond the schema. However, it does not provide format details or examples, leaving some gaps in parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
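
One hypothetical way to close that gap, assuming the server keeps the FastMCP-style decorator shown above (which derives the input schema from Python type hints), would be to attach pydantic Field descriptions to each parameter. The descriptions and example formats below are illustrative, not taken from the actual server:

    from typing import Annotated
    from pydantic import Field

    @mcp.tool()
    def tool_assess_submission_readiness(
        course_id: Annotated[str, Field(description="Numeric Gradescope course ID, passed as a string, e.g. '123456'.")],
        assignment_id: Annotated[str, Field(description="Assignment ID within the course.")],
        question_id: Annotated[str, Field(description="Outline question ID for that assignment.")],
        submission_id: Annotated[str, Field(description="Question submission ID to assess.")],
    ) -> str:
        ...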

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to assess whether an agent should auto-grade a specific submission. It specifies the verb 'assess' and the resource 'submission readiness', distinguishing it from siblings like tool_apply_grade (which applies grades) or tool_get_submission_grading_context (which retrieves context).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for auto-grading decisions but does not explicitly state when to use this tool versus alternatives. It mentions returns like 'crop-first read plan' and 'coarse confidence score', suggesting it's for pre-grading assessment, but lacks explicit guidance on prerequisites or comparisons to siblings like tool_smart_read_submission or tool_get_next_ungraded.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
