tool_get_grading_progress

Monitor grading progress for an assignment by viewing question-level status, graded submissions count, assigned graders, and completion percentages.

Instructions

Get the grading progress dashboard for an assignment.

Shows each question's grading status: how many submissions have been graded,
assigned graders, and completion percentage. Requires instructor/TA access.

Args:
    course_id: The Gradescope course ID.
    assignment_id: The assignment ID.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| `course_id` | Yes | | |
| `assignment_id` | Yes | | |
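Since the schema entries carry no descriptions, a minimal example payload may help. This is a sketch under the assumption that both IDs are passed as strings, as the implementation's signature suggests; the ID values are placeholders, not real Gradescope IDs.

```python
# Hypothetical arguments for tool_get_grading_progress.
# Both fields are required; the values here are placeholders.
example_arguments = {
    "course_id": "123456",
    "assignment_id": "7891011",
}
```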

Output Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| `result` | Yes | | |
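The `result` field is a markdown string. Based on the table the implementation below builds, a successful call might return something roughly like the following; the question titles, types, counts, and grader names are invented for illustration.

```python
# Illustrative shape of `result`; all values are invented, and the
# question type string is an assumption about what Gradescope returns.
sample_result = """## Grading Progress — Assignment 7891011

| Question | Type | Graded | Total | Progress | Graders |
|----------|------|--------|-------|----------|---------|
| Q1.1 Part A (`111`) | FreeResponseQuestion | 40 | 50 | 80% | Alice Grader |
| Q2 Warm-up (`222`) | FreeResponseQuestion | 0 | 50 | 0% | Unassigned |

**Overall progress:** 40/100 (40%)"""
```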

Implementation Reference

  • Implementation of the `get_grading_progress` tool, which fetches the grading dashboard for an assignment and returns a markdown summary of grading progress per question.
    def get_grading_progress(course_id: str, assignment_id: str) -> str:
        """Get the grading progress dashboard for an assignment.
    
        Shows each question's grading status: how many submissions have been graded,
        assigned graders, and completion percentage. Requires instructor/TA access.
    
        Args:
            course_id: The Gradescope course ID.
            assignment_id: The assignment ID.
        """
        if not course_id or not assignment_id:
            return "Error: both course_id and assignment_id are required."
    
        try:
            conn = get_connection()
            url = f"{conn.gradescope_base_url}/courses/{course_id}/assignments/{assignment_id}/grade.json"
            resp = conn.session.get(url)
        except AuthError as e:
            return f"Authentication error: {e}"
        except Exception as e:
            return f"Error fetching grading progress: {e}"
    
        if resp.status_code != 200:
            return f"Error: Cannot access grading dashboard (status {resp.status_code})."
    
        try:
            data = resp.json()
        except Exception:
            return "Error: Failed to parse grading dashboard response."
    
        # Extract assignment data
        assignments = data.get("assignments", {})
        if not assignments:
            return f"No grading data found for assignment `{assignment_id}`."
    
        # assignments can be dict or list
        if isinstance(assignments, dict):
            assignment_data = assignments.get(assignment_id, {})
            if not assignment_data:
                # Try first value
                assignment_data = next(iter(assignments.values()), {})
        elif isinstance(assignments, list):
            assignment_data = assignments[0] if assignments else {}
        else:
            assignment_data = {}
    
        questions = assignment_data.get("questions", {})
        if not questions:
            return f"No questions found in grading dashboard for assignment `{assignment_id}`."
    
        lines = [f"## Grading Progress — Assignment {assignment_id}\n"]
    
        # Build question tree for nice formatting
        total_graded = 0
        total_count = 0
    
        lines.append("| Question | Type | Graded | Total | Progress | Graders |")
        lines.append("|----------|------|--------|-------|----------|---------|")
    
        # Group questions — two passes to handle any ordering
        question_groups = {}  # parent_id -> group data with children
        child_map = {}  # parent_id -> list of child questions
        standalone = []
    
        # First pass: identify groups and collect children
        for qid, q in questions.items():
            if q.get("question_group"):
                question_groups[q["id"]] = {**q, "children": []}
            elif q.get("parent_id"):
                child_map.setdefault(q["parent_id"], []).append(q)
            else:
                standalone.append(q)
    
        # Second pass: attach children to their groups
        for parent_id, children in child_map.items():
            if parent_id in question_groups:
                question_groups[parent_id]["children"].extend(children)
            else:
                # Parent not found as a group — treat as standalone
                standalone.extend(children)
    
        group_num = 0
        for gid, group in sorted(question_groups.items(), key=lambda x: x[1].get("index", 0)):
            group_num += 1
            group_title = group.get("title", "")
            children = sorted(group.get("children", []), key=lambda x: x.get("index", 0))
    
            for i, child in enumerate(children, 1):
                graded = child.get("total_graded_count", 0)
                count = child.get("total_count", 0)
                total_graded += graded
                total_count += count
                pct = f"{graded / count * 100:.0f}%" if count > 0 else "N/A"
                graders = ", ".join(g.get("name", "?") for g in child.get("graders", []))
                child_title = child.get("title", "")
                label = f"Q{group_num}.{i}"
                if child_title:
                    label += f" {child_title}"
                qtype = child.get("type", "")
                lines.append(
                    f"| {label} (`{child['id']}`) | {qtype} | {graded} | {count} | {pct} | {graders or 'Unassigned'} |"
                )
    
        for idx, sq in enumerate(sorted(standalone, key=lambda x: x.get("index", 0)), 1):
            graded_count = sq.get("total_graded_count", 0)
            count = sq.get("total_count", 0)
            total_graded += graded_count
            total_count += count
            pct = f"{graded_count / count * 100:.0f}%" if count > 0 else "N/A"
            graders = ", ".join(g.get("name", "?") for g in sq.get("graders", []))
            sq_title = sq.get("title", "")
            label = f"Q{group_num + idx}"
            if sq_title:
                label += f" {sq_title}"
            lines.append(
                f"| {label} (`{sq['id']}`) | {sq.get('type', '')} | {graded_count} | {count} | {pct} | {graders or 'Unassigned'} |"
            )
    
        lines.append("")
    
        # Summary
        if total_count > 0:
            overall_pct = total_graded / total_count * 100
            lines.append(f"**Overall progress:** {total_graded}/{total_count} ({overall_pct:.0f}%)")
    
        # Action button
        action = data.get("action_button", {})
        if action:
            lines.append(f"\n**Next step:** [{action.get('text', 'Continue')}]({action.get('link', '')})")
    
        return "\n".join(lines)
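The two-pass grouping above is the subtle part of the implementation: pass one sorts questions into groups, children, and standalone questions, and pass two attaches children to their groups, demoting any orphaned children (whose parent never appears as a group) to standalone. A self-contained sketch of that logic, with invented sample data, can be checked in isolation:

```python
# Standalone sketch of the two-pass grouping; sample data is invented.
def group_questions(questions):
    question_groups, child_map, standalone = {}, {}, []
    # First pass: identify groups, collect children by parent, keep the rest.
    for q in questions.values():
        if q.get("question_group"):
            question_groups[q["id"]] = {**q, "children": []}
        elif q.get("parent_id"):
            child_map.setdefault(q["parent_id"], []).append(q)
        else:
            standalone.append(q)
    # Second pass: attach children; orphans fall back to standalone.
    for parent_id, children in child_map.items():
        if parent_id in question_groups:
            question_groups[parent_id]["children"].extend(children)
        else:
            standalone.extend(children)
    return question_groups, standalone

questions = {
    "1": {"id": "1", "question_group": True, "index": 0, "title": "Q1"},
    "2": {"id": "2", "parent_id": "1", "index": 0, "title": "Q1.1"},
    "3": {"id": "3", "parent_id": "99", "index": 1, "title": "Orphan"},
    "4": {"id": "4", "index": 2, "title": "Standalone"},
}
groups, standalone = group_questions(questions)
```

The two-pass design means the output is correct regardless of the order in which the API returns questions, since a child can appear before its parent group.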
  • Registration of the `tool_get_grading_progress` MCP tool in the server definition.
    @mcp.tool()
    def tool_get_grading_progress(course_id: str, assignment_id: str) -> str:
        """Get the grading progress dashboard for an assignment.
    
        Shows each question's grading status: how many submissions have been graded,
        assigned graders, and completion percentage. Requires instructor/TA access.
    
        Args:
            course_id: The Gradescope course ID.
            assignment_id: The assignment ID.
        """
        return get_grading_progress(course_id, assignment_id)
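For reference, an MCP client would invoke the registered tool with a JSON-RPC `tools/call` request. The request below is a sketch of that message shape; the ID values are placeholders.

```python
# Hedged sketch of a JSON-RPC tools/call request for this tool,
# per the Model Context Protocol; argument values are placeholders.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "tool_get_grading_progress",
        "arguments": {"course_id": "123456", "assignment_id": "7891011"},
    },
}
payload = json.dumps(request)
```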
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It does well by stating access requirements ('Requires instructor/TA access') and describing what the dashboard shows. However, it doesn't mention potential limitations like rate limits, whether the data is real-time or cached, error conditions, or response format details. For a tool with no annotations, this leaves some behavioral aspects unclear.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly structured and concise: a clear purpose statement, followed by what the dashboard shows, then access requirements, and finally parameter explanations. Every sentence adds value with zero wasted words. It's front-loaded with the core functionality and efficiently covers necessary details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has an output schema (which handles return values), no annotations, and simple parameters, the description is reasonably complete. It covers purpose, usage context, access requirements, and parameter basics. The main gap is lack of behavioral details like rate limits or error handling, but the output schema reduces the need to describe return format. For a read-only progress dashboard tool, this is fairly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the schema provides no parameter documentation. The description compensates by listing both parameters (course_id and assignment_id) and explaining they are Gradescope IDs. However, it doesn't provide format examples, validation rules, or where to find these IDs. Given the 0% schema coverage, the description adds meaningful but incomplete parameter semantics.
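One practical gap the description leaves open is where to find these IDs. The implementation's endpoint path (`/courses/{course_id}/assignments/{assignment_id}/grade.json`) suggests both can be read off a Gradescope assignment URL; the sketch below assumes that URL structure, with a placeholder URL.

```python
# Illustrative only: extract both IDs from a Gradescope-style URL.
# The URL below is a placeholder following the path structure used
# by the implementation's grade.json endpoint.
import re

url = "https://www.gradescope.com/courses/123456/assignments/7891011/grade"
match = re.search(r"/courses/(\d+)/assignments/(\d+)", url)
course_id, assignment_id = match.groups()
```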

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get the grading progress dashboard for an assignment', with specific details about what it shows (grading status, submissions graded, assigned graders, completion percentage). It is distinguishable from siblings like tool_get_assignment_statistics or tool_get_assignment_submissions by its focus on grading progress rather than general statistics or raw submissions. However, it doesn't explicitly name those alternatives, keeping it at 4 instead of 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: when needing grading progress information for an assignment. It specifies access requirements ('Requires instructor/TA access'), which helps determine appropriateness. It doesn't explicitly state when not to use it or name specific alternatives among the many sibling tools, but the context is sufficiently clear for effective usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
