Glama

tool_get_next_ungraded

Navigate to the next ungraded submission in Gradescope to continue grading workflows, returning full context or confirmation when all submissions are graded.

Instructions

Navigate to the next ungraded submission.

Returns the full grading context for the next ungraded submission,
or a message that all submissions are graded.

Args:
    course_id: The current course ID.
    question_id: The current question ID.
    submission_id: The current Question Submission ID (optional).
        If omitted or invalid (e.g. a Global Submission ID), the tool
        will auto-discover a valid submission to navigate from.
    output_format: "markdown" (default) or "json".
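
As a sketch of how an agent-side client might invoke this tool, the following builds a `tools/call` request payload per the MCP JSON-RPC convention. The tool name and argument names come from this page; the ID values are placeholders, and `submission_id` is omitted to exercise the auto-discovery path.

```python
import json

# Example tools/call payload an MCP client might send for this tool.
# course_id and question_id values are placeholders, not real IDs.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "tool_get_next_ungraded",
        "arguments": {
            "course_id": "123456",
            "question_id": "7891011",
            # submission_id omitted: the tool auto-discovers a valid one
            "output_format": "markdown",
        },
    },
}
print(json.dumps(payload, indent=2))
```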

Input Schema

Name           Required  Description  Default
course_id      Yes
question_id    Yes
submission_id  No
output_format  No                     markdown
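
Since the schema table above carries no per-field descriptions, here is a plausible reconstruction of the input JSON Schema, inferred from the function signature (all four parameters are strings, two required); it is an assumption, not the server's published schema.

```python
# Reconstructed input schema, inferred from the Python signature below.
INPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "course_id": {"type": "string"},
        "question_id": {"type": "string"},
        "submission_id": {"type": "string", "default": ""},
        "output_format": {"type": "string", "default": "markdown"},
    },
    "required": ["course_id", "question_id"],
}

def validate(args: dict) -> bool:
    """Check required keys and string types without external libraries."""
    if not all(k in args for k in INPUT_SCHEMA["required"]):
        return False
    return all(isinstance(v, str) for v in args.values())

print(validate({"course_id": "123", "question_id": "456"}))  # True
```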

Output Schema

Name    Required  Description  Default
result  Yes

Implementation Reference

  • The core implementation of the get_next_ungraded logic.
    def get_next_ungraded(
        course_id: str, question_id: str, submission_id: str = "",
        output_format: str = "markdown",
    ) -> str:
        """Navigate to the next ungraded submission for the same question.
    
        Returns the grading context for the next ungraded submission,
        or a message if all submissions are graded.
    
        Args:
            course_id: The Gradescope course ID.
            question_id: The current question ID.
            submission_id: The current Question Submission ID (optional).
                If omitted or invalid, auto-discovers a valid submission.
                NOTE: This must be a Question Submission ID, not a Global
                Submission ID from get_assignment_submissions.
            output_format: "markdown" (default) or "json".
        """
        if not course_id or not question_id:
            return "Error: course_id and question_id are required."
    
        # Try the provided submission_id first; fall back to auto-discovery
        ctx = None
        if submission_id:
            try:
                ctx = _get_grading_context(course_id, question_id, submission_id)
            except ValueError as e:
                if "404" in str(e):
                    # Likely a Global Submission ID — fall back to auto-discovery
                    ctx = None
                else:
                    return f"Error: {e}"
            except AuthError as e:
                return f"Authentication error: {e}"
            except Exception as e:
                return f"Error: {e}"
    
        if ctx is None:
            # Auto-discover a valid question submission ID
            try:
                auto_sid = _find_question_submission_id(course_id, question_id)
                ctx = _get_grading_context(course_id, question_id, auto_sid)
            except AuthError as e:
                return f"Authentication error: {e}"
            except ValueError as e:
                return f"Error: {e}"
            except Exception as e:
                return f"Error: {e}"
  • The MCP tool registration and wrapper function for get_next_ungraded.
    @mcp.tool()
    def tool_get_next_ungraded(
        course_id: str, question_id: str, submission_id: str = "",
        output_format: str = "markdown",
    ) -> str:
        """Navigate to the next ungraded submission.
    
        Returns the full grading context for the next ungraded submission,
        or a message that all submissions are graded.
    
        Args:
            course_id: The current course ID.
            question_id: The current question ID.
            submission_id: The current Question Submission ID (optional).
                If omitted or invalid (e.g. a Global Submission ID), the tool
                will auto-discover a valid submission to navigate from.
            output_format: "markdown" (default) or "json".
        """
        return get_next_ungraded(course_id, question_id, submission_id, output_format)
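
The 404 fallback in the core implementation can be sketched in isolation. This is a minimal, self-contained illustration of the pattern: a `ValueError` containing "404" (likely a Global Submission ID) triggers auto-discovery. The helpers here are stubs standing in for the server's real `_get_grading_context` and `_find_question_submission_id`, whose bodies are not shown on this page.

```python
# Stub: raises 404 for an invalid (global) ID, else returns a context string.
def _get_grading_context(course_id, question_id, submission_id):
    if submission_id == "global-999":
        raise ValueError("404: submission not found")
    return f"context for {submission_id}"

# Stub: pretends to discover a valid Question Submission ID.
def _find_question_submission_id(course_id, question_id):
    return "qsub-42"

def resolve_context(course_id, question_id, submission_id=""):
    """Try the given submission_id first; fall back to auto-discovery on 404."""
    ctx = None
    if submission_id:
        try:
            ctx = _get_grading_context(course_id, question_id, submission_id)
        except ValueError as e:
            if "404" not in str(e):
                return f"Error: {e}"
    if ctx is None:
        sid = _find_question_submission_id(course_id, question_id)
        ctx = _get_grading_context(course_id, question_id, sid)
    return ctx

print(resolve_context("c1", "q1", "global-999"))  # context for qsub-42
```

Note the design choice this mirrors: an invalid `submission_id` is treated as recoverable rather than fatal, so agents holding the wrong kind of ID still make progress.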
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses key behaviors: it returns grading context or a completion message, handles optional submission_id with auto-discovery, and supports output formats. However, it lacks details on permissions, rate limits, or side effects (e.g., whether navigation affects state). It adds useful context but is incomplete for a tool with mutation-like navigation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by return behavior and parameter details in a structured format. Every sentence adds value—no fluff or repetition—making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (navigation with auto-discovery), no annotations, and an output schema (which handles return values), the description is largely complete. It covers purpose, usage, parameters, and output behavior. However, it lacks some behavioral context like error handling or prerequisites, leaving minor gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It explains all four parameters: course_id and question_id as required IDs, submission_id as optional with auto-discovery behavior, and output_format with default and options. This adds significant meaning beyond the bare schema, though it could detail ID formats or validation rules.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Navigate to the next ungraded submission') and resource (grading context), distinguishing it from siblings like tool_get_submission_grading_context (which gets context for a specific submission) or tool_get_grading_progress (which tracks overall progress). It precisely defines the verb and scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context (grading workflow) and mentions an alternative outcome ('all submissions are graded'), but does not explicitly state when to use this tool versus alternatives like tool_get_submission_grading_context (for a known submission) or tool_get_grading_progress (for progress overview). It provides clear context but lacks explicit sibling differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

