
tool_get_assignment_outline

Retrieve the hierarchical question structure, rubric outline, IDs, types, and weights for a Gradescope assignment to understand its organization and requirements.

Instructions

Get the question/rubric outline for an assignment.

Returns the hierarchical question structure with IDs, types, weights, and
question text. Essential for understanding how an assignment is structured.
Requires instructor/TA access.

Args:
    course_id: The Gradescope course ID.
    assignment_id: The assignment ID.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| course_id | Yes | | |
| assignment_id | Yes | | |

Output Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| result | Yes | | |

Implementation Reference

  • The handler implementation for `tool_get_assignment_outline` which fetches assignment structure and formats it into a markdown outline.
    def get_assignment_outline(course_id: str, assignment_id: str) -> str:
        """Get the question/rubric outline for an assignment.
    
        Returns the hierarchical question structure with IDs, types, weights,
        and question text. This is the foundation for rubric creation and grading.
    
        Args:
            course_id: The Gradescope course ID.
            assignment_id: The assignment ID.
        """
        if not course_id or not assignment_id:
            return "Error: both course_id and assignment_id are required."
    
        try:
            props = _get_outline_data(course_id, assignment_id)
        except AuthError as e:
            return f"Authentication error: {e}"
        except ValueError as e:
            return f"Error: {e}"
        except Exception as e:
            return f"Error fetching outline: {e}"
    
        questions = props.get("questions", {})
        if not questions:
            return f"No questions found for assignment `{assignment_id}`."
    
        tree = _build_question_tree(questions)
    
        # Format output
        assignment_info = props.get("assignment", {})
        lines = ["## Assignment Outline\n"]
    
        if assignment_info:
            atype = assignment_info.get("type", "Unknown")
            lines.append(f"**Type:** {atype}")
    
        lines.append(f"**Total questions:** {len(questions)}\n")
    
        for group_num, group in enumerate(tree, 1):
            group_title = group["title"] or f"Question Group {group_num}"
            group_weight = group["weight"]
            lines.append(f"### {group_title} ({group_weight} pts)\n")
    
            if group["children"]:
                lines.append("| # | Question ID | Weight | Type | Question Text |")
                lines.append("|---|-------------|--------|------|---------------|")
                for i, child in enumerate(group["children"], 1):
                    text = _extract_text_content(child["content"])
                    # Truncate for table display
                    if len(text) > 120:
                        text = text[:117] + "..."
                    # Escape pipes in text
                    text = text.replace("|", "\\|")
                    lines.append(
                        f"| {group_num}.{i} | `{child['id']}` | {child['weight']} | "
                        f"{child['type']} | {text} |"
                    )
                lines.append("")
            else:
                # It's a standalone question (no children)
                if group["id"]:
                    lines.append(f"**Question ID:** `{group['id']}`")
                text = _extract_text_content(group["content"])
                if text:
                    lines.append(f"_{text[:200]}_\n")
                else:
                    lines.append("")
    
        return "\n".join(lines)
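The helper `_build_question_tree` is not shown on this page. A minimal sketch of what such a helper might do, assuming each question's properties dict carries `title`, `weight`, `type`, `content`, and an optional `parent_id` linking subquestions to their group (all field names here are assumptions, not the server's actual implementation):

```python
def build_question_tree(questions: dict) -> list:
    """Group a flat {question_id: props} mapping into top-level nodes.

    Hypothetical sketch: assumes each props dict may carry a "parent_id"
    pointing at its enclosing question group. Standalone questions and
    groups become roots; subquestions land in their parent's "children".
    """
    nodes = {
        qid: {
            "id": qid,
            "title": props.get("title", ""),
            "weight": props.get("weight", 0),
            "type": props.get("type", "unknown"),
            "content": props.get("content", ""),
            "children": [],
        }
        for qid, props in questions.items()
    }
    roots = []
    for qid, props in questions.items():
        parent = props.get("parent_id")
        if parent in nodes:
            nodes[parent]["children"].append(nodes[qid])
        else:
            roots.append(nodes[qid])
    return roots
```

This shape matches what the handler above consumes: each node exposes `title`, `weight`, `children`, and, for standalone questions, `id` and `content`.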
  • Tool registration for `tool_get_assignment_outline` in the MCP server.
    def tool_get_assignment_outline(course_id: str, assignment_id: str) -> str:
        """Get the question/rubric outline for an assignment.
    
        Returns the hierarchical question structure with IDs, types, weights, and
        question text. Essential for understanding how an assignment is structured.
        Requires instructor/TA access.
    
        Args:
            course_id: The Gradescope course ID.
            assignment_id: The assignment ID.
        """
        return get_assignment_outline(course_id, assignment_id)
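The handler returns a markdown string. An illustrative rendering for an exam with one question group (all IDs, weights, and text below are hypothetical, not taken from a real assignment):

```markdown
## Assignment Outline

**Type:** exam
**Total questions:** 3

### Question 1 (10.0 pts)

| # | Question ID | Weight | Type | Question Text |
|---|-------------|--------|------|---------------|
| 1.1 | `111111` | 4.0 | FreeResponseQuestion | Define big-O notation. |
| 1.2 | `111112` | 6.0 | FileUploadQuestion | Upload your proof. |
```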
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses access requirements ('Requires instructor/TA access') and hints at the return content ('hierarchical question structure with IDs, types, weights, and question text'), but lacks details on rate limits, error handling, or whether it's read-only (implied by 'Get' but not explicit).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
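The missing disclosures could also be made machine-readable through MCP tool annotations. A sketch using the standard annotation field names from the MCP specification; the values are inferences about this tool's behavior, not anything this server actually declares:

```python
# Hypothetical MCP ToolAnnotations for tool_get_assignment_outline.
# Field names follow the MCP spec; the values are inferred from the
# tool's description, not declared by the server itself.
tool_annotations = {
    "title": "Get assignment outline",
    "readOnlyHint": True,      # only fetches data; no Gradescope state changes
    "destructiveHint": False,  # nothing is deleted or overwritten
    "idempotentHint": True,    # repeated calls return the same outline
    "openWorldHint": True,     # reaches out to the external Gradescope service
}
```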

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the purpose, followed by key details on returns and access, with a brief parameter section. Every sentence adds value without redundancy, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (2 parameters, no annotations, but an output schema), the description is mostly complete. It covers purpose, usage context, and parameter semantics; since an output schema exists, the description need not restate return values. The main gap is the absence of behavioral details such as error cases or performance.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It adds meaning by explaining that course_id is 'The Gradescope course ID' and assignment_id is 'The assignment ID', clarifying their roles beyond the schema's generic titles. However, it does not specify format or constraints (e.g., numeric vs. string IDs).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
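Since neither the schema nor the description pins down the ID format, a caller could guard against obviously malformed values before invoking the tool. A minimal sketch assuming Gradescope IDs are non-empty strings of ASCII digits, as they appear in Gradescope URLs; this is an inference, not a documented contract:

```python
def looks_like_gradescope_id(value) -> bool:
    """Heuristic pre-check for course/assignment IDs.

    Assumption: IDs are non-empty strings of ASCII digits, as seen in
    Gradescope URLs. str.isdigit() alone also accepts non-ASCII digit
    characters, hence the extra isascii() check.
    """
    return isinstance(value, str) and value.isdigit() and value.isascii()
```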

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get the question/rubric outline') and resource ('for an assignment'), distinguishing it from siblings like tool_get_assignment_details or tool_get_question_rubric by focusing on hierarchical structure rather than general details or specific rubric items.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It provides clear context by stating this is 'Essential for understanding how an assignment is structured' and 'Requires instructor/TA access', which helps guide usage. However, it does not explicitly mention when not to use it or name alternatives among siblings, such as tool_get_assignment_details for broader information.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

