
tool_smart_read_submission

Generate a prioritized reading plan for a student submission by organizing pages into tiers: crop regions first, then full pages, then adjacent pages if needed, along with a confidence score and recommended action.

Instructions

Get a smart, tiered reading plan for a student's submission.

Returns page URLs in priority order:
- Tier 1: Crop region only (read FIRST)
- Tier 2: Full page (if answer overflows crop)
- Tier 3: Adjacent pages (if still incomplete)

Also returns confidence score and recommended action.

Args:
    course_id: The Gradescope course ID.
    assignment_id: The assignment ID.
    question_id: The question ID.
    submission_id: The question submission ID.

Input Schema

Name            Required    Description    Default
course_id       Yes
assignment_id   Yes
question_id     Yes
submission_id   Yes
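
The schema gives no per-parameter descriptions, so a call simply passes the four Gradescope IDs as strings. Below is a minimal sketch of invoking the tool from the official MCP Python client; the server launch command (`python -m gradescope_mcp`) and all ID values are hypothetical placeholders, not taken from this server's documentation.

    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def main() -> None:
        # Hypothetical launch command; adjust to however the gradescope-mcp server is started.
        server = StdioServerParameters(command="python", args=["-m", "gradescope_mcp"])

        async with stdio_client(server) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()

                # All four IDs are required; the values below are placeholders.
                result = await session.call_tool(
                    "tool_smart_read_submission",
                    arguments={
                        "course_id": "123456",
                        "assignment_id": "7890123",
                        "question_id": "45678901",
                        "submission_id": "234567890",
                    },
                )
                print(result.content)

    asyncio.run(main())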

Output Schema

Name     Required    Description    Default
result   Yes

Implementation Reference

  • The core implementation of the smart_read_submission tool logic.
    def smart_read_submission(
        course_id: str,
        assignment_id: str,
        question_id: str,
        submission_id: str,
    ) -> str:
        """Get a smart, tiered reading plan for a student's submission.
    
        Returns page image URLs in priority order:
        1. **Tier 1 (Crop Only):** The crop region URLs for the question's designated area.
           Agent should read ONLY this first. If the answer is fully contained, grade it.
        2. **Tier 2 (Full Page):** If handwriting exits the crop boundary or reasoning
           appears truncated, read the full page(s) containing the crop.
        3. **Tier 3 (Adjacent Pages):** If the answer still appears incomplete, read the
           previous and next pages.
    
        Also returns the confidence score to decide whether to auto-grade or skip.
    
        Args:
            course_id: The Gradescope course ID.
            assignment_id: The assignment ID.
            question_id: The question ID.
            submission_id: The question submission ID.
        """
        if not course_id or not assignment_id or not question_id or not submission_id:
            return "Error: all four IDs are required."
    
        try:
            questions = _fetch_assignment_questions(course_id, assignment_id)
            ctx = _get_grading_context(course_id, question_id, submission_id)
            prompt_text, explanation = _extract_outline_prompt_and_reference(
                course_id, assignment_id, question_id,
            )
        except AuthError as e:
            return f"Authentication error: {e}"
        except (ValueError, Exception) as e:
            ...  # remaining error handling and plan construction omitted from this excerpt
  • Tool registration for tool_smart_read_submission, which wraps the smart_read_submission helper function.
    @mcp.tool()
    def tool_smart_read_submission(
        course_id: str,
        assignment_id: str,
        question_id: str,
        submission_id: str,
    ) -> str:
        """Get a smart, tiered reading plan for a student's submission.
    
        Returns page URLs in priority order:
        - Tier 1: Crop region only (read FIRST)
        - Tier 2: Full page (if answer overflows crop)
        - Tier 3: Adjacent pages (if still incomplete)
    
        Also returns confidence score and recommended action.
    
        Args:
            course_id: The Gradescope course ID.
            assignment_id: The assignment ID.
            question_id: The question ID.
            submission_id: The question submission ID.
        """
        return smart_read_submission(
            course_id, assignment_id, question_id, submission_id
        )
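
The tool returns its plan as a single string, so acting on it is left to the calling agent. The sketch below illustrates the escalation policy the docstring describes: read the Tier 1 crop first, fall back to the full page and then adjacent pages only if the answer looks incomplete, and gate auto-grading on the reported confidence. The `ReadingPlan` dataclass, the exact `recommended_action` strings, and the 0.7 threshold are assumptions for illustration, not part of the server.

    from dataclasses import dataclass

    @dataclass
    class ReadingPlan:
        """Assumed, simplified shape of a parsed plan; the real tool returns a
        formatted string that the agent reads or parses itself."""
        tier1_crop_urls: list[str]      # crop-region images (read first)
        tier2_page_urls: list[str]      # full pages containing the crop
        tier3_adjacent_urls: list[str]  # previous/next pages
        confidence: float               # 0.0 - 1.0 score reported by the tool
        recommended_action: str         # e.g. "auto_grade" or "skip"

    def collect_pages(plan: ReadingPlan, answer_contained_in_tier: int) -> list[str]:
        """Gather image URLs tier by tier, stopping once the tier that contains
        the full answer has been included (Tier 1 crop -> Tier 2 full page ->
        Tier 3 adjacent pages)."""
        tiers = [plan.tier1_crop_urls, plan.tier2_page_urls, plan.tier3_adjacent_urls]
        urls: list[str] = []
        for tier_number, tier_urls in enumerate(tiers, start=1):
            urls.extend(tier_urls)
            if tier_number >= answer_contained_in_tier:
                break
        return urls

    def should_auto_grade(plan: ReadingPlan, threshold: float = 0.7) -> bool:
        """Illustrative gate: the tool reports a score and a recommendation but
        does not enforce a cutoff; 0.7 is an arbitrary example threshold."""
        return plan.recommended_action == "auto_grade" and plan.confidence >= threshold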
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses key behavioral traits: it returns a structured reading plan with prioritized tiers (crop region, full page, adjacent pages), a confidence score, and a recommended action. However, it lacks details on permissions, rate limits, error handling, or whether it's read-only or mutative, leaving gaps for an AI agent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the core purpose, followed by a bulleted list detailing the output structure, and ends with parameter explanations. Every sentence adds value without redundancy, making it efficient and well-structured for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (involving tiered analysis), no annotations, and an output schema that covers return values, the description is mostly complete. It explains the purpose, output format, and parameters well. However, it does not address behavioral aspects such as permissions or error cases, which slightly reduces its completeness for an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 0%, so the description must compensate. It explicitly lists all four parameters (course_id, assignment_id, question_id, submission_id) and clarifies they are IDs for Gradescope entities, adding crucial meaning beyond the schema's generic titles like 'Course Id'. This fully addresses the parameter semantics gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('get a smart, tiered reading plan') and identifies the resource ('student's submission'). It distinguishes itself from siblings by focusing on reading plan generation rather than grading, assessment, or data retrieval functions like tool_get_student_submission or tool_assess_submission_readiness.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through the parameters (course_id, assignment_id, etc.), suggesting it's for analyzing specific submissions within Gradescope. However, it doesn't explicitly state when to use this tool versus alternatives like tool_get_student_submission or tool_assess_submission_readiness, nor does it mention prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
