get_assignment_rubric_details

Retrieve detailed rubric criteria and rating descriptions for Canvas assignments to understand grading expectations and requirements.

Instructions

Get detailed rubric criteria and rating descriptions for an assignment.

    Args:
        course_identifier: The Canvas course code (e.g., badm_554_120251_246794) or ID
        assignment_id: The Canvas assignment ID
    

Input Schema

Name                Required    Description    Default
course_identifier   Yes
assignment_id       Yes
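
For example, an agent might invoke the tool with arguments like these (values are illustrative, mirroring the docstring's example course code):

    arguments = {
        "course_identifier": "badm_554_120251_246794",  # Canvas course code or numeric ID
        "assignment_id": "1234567",                     # Canvas assignment ID (placeholder)
    }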

Output Schema

Name      Required    Description    Default
result    Yes
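
The single result field is a plain-text report. Based on the formatting logic in the implementation below, its opening lines look roughly like this (names and numbers are placeholders):

    Detailed Rubric for Assignment 'Example Essay' in Course badm_554:

    Assignment ID: 1234567
    Used for Grading: Yes
    Total Points Possible: 100
    Number of Criteria: 4

    Detailed Criteria and Rating Scales:
    ============================================================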

Implementation Reference

  • The handler function implementing the get_assignment_rubric_details tool. It fetches assignment details, including the rubric, from the Canvas API and formats a detailed text response with criteria, ratings, and descriptions.
    async def get_assignment_rubric_details(course_identifier: str | int,
                                          assignment_id: str | int) -> str:
        """Get detailed rubric criteria and rating descriptions for an assignment.
    
        Args:
            course_identifier: The Canvas course code (e.g., badm_554_120251_246794) or ID
            assignment_id: The Canvas assignment ID
        """
        course_id = await get_course_id(course_identifier)
        assignment_id_str = str(assignment_id)
    
        # Get assignment details with full rubric information
        response = await make_canvas_request(
            "get",
            f"/courses/{course_id}/assignments/{assignment_id_str}",
            params={"include[]": ["rubric", "rubric_settings"]}
        )
    
        if "error" in response:
            return f"Error fetching assignment rubric details: {response['error']}"
    
        # Check if assignment has rubric
        rubric = response.get("rubric")
        if not rubric:
            assignment_name = response.get("name", "Unknown Assignment")
            course_display = await get_course_code(course_id) or course_identifier
            return f"No rubric found for assignment '{assignment_name}' in course {course_display}."
    
        # Format detailed rubric information
        assignment_name = response.get("name", "Unknown Assignment")
        course_display = await get_course_code(course_id) or course_identifier
        rubric_settings = response.get("rubric_settings", {})
        use_rubric_for_grading = response.get("use_rubric_for_grading", False)
    
        result = f"Detailed Rubric for Assignment '{assignment_name}' in Course {course_display}:\n\n"
    
        # Rubric metadata
        result += f"Assignment ID: {assignment_id}\n"
        result += f"Used for Grading: {'Yes' if use_rubric_for_grading else 'No'}\n"
        if rubric_settings:
            result += f"Total Points Possible: {rubric_settings.get('points_possible', 'N/A')}\n"
        result += f"Number of Criteria: {len(rubric)}\n\n"
    
        # Detailed criteria and ratings
        result += "Detailed Criteria and Rating Scales:\n"
        result += "=" * 60 + "\n"
    
        total_points = 0
        for i, criterion in enumerate(rubric, 1):
            criterion_id = criterion.get("id", "N/A")
            description = criterion.get("description", "No description")
            long_description = criterion.get("long_description", "")
            points = criterion.get("points", 0)
            ratings = criterion.get("ratings", [])
    
            result += f"\nCriterion #{i}: {description}\n"
            result += f"Criterion ID: {criterion_id}\n"
            result += f"Maximum Points: {points}\n"
    
            if long_description and long_description != description:
                result += f"Full Description: {long_description}\n"
    
            if ratings:
                result += f"\nRating Scale ({len(ratings)} levels):\n"
                # Sort ratings by points (highest to lowest)
                sorted_ratings = sorted(ratings, key=lambda x: x.get("points", 0), reverse=True)
    
                for rating in sorted_ratings:
                    rating_description = rating.get("description", "No description")
                    rating_points = rating.get("points", 0)
                    rating_id = rating.get("id", "N/A")
                    long_desc = rating.get("long_description", "")
    
                    result += f"  {rating_points} pts: {rating_description}"
                    if rating_id != "N/A":
                        result += f" [ID: {rating_id}]"
                    result += "\n"
    
                    if long_desc and long_desc != rating_description:
                        # Format long description nicely
                        formatted_desc = long_desc.replace("\\n", "\n    ")
                        result += f"    Details: {formatted_desc}\n"
            else:
                result += "No rating scale defined for this criterion.\n"
    
            total_points += points
            result += "\n" + "-" * 40 + "\n"
    
        result += f"\nTotal Rubric Points: {total_points}"
    
        return result
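
The handler relies on project-internal helpers (get_course_id, get_course_code, make_canvas_request) defined elsewhere in the server. As a rough illustration of the contract the handler expects, here is a minimal sketch of make_canvas_request using httpx; the base URL, token handling, and error convention are assumptions, not the server's actual code:

    import os

    import httpx

    CANVAS_BASE_URL = os.environ.get("CANVAS_BASE_URL", "https://canvas.example.edu/api/v1")
    CANVAS_API_TOKEN = os.environ.get("CANVAS_API_TOKEN", "")

    async def make_canvas_request(method: str, endpoint: str,
                                  params: dict | None = None) -> dict:
        """Call the Canvas REST API and return parsed JSON.

        Sketch only: returns an {'error': ...} dict on failure, which is the
        shape the handler above checks for.
        """
        async with httpx.AsyncClient() as client:
            try:
                resp = await client.request(
                    method,
                    f"{CANVAS_BASE_URL}{endpoint}",
                    params=params,
                    headers={"Authorization": f"Bearer {CANVAS_API_TOKEN}"},
                )
                resp.raise_for_status()
                return resp.json()
            except httpx.HTTPError as exc:
                return {"error": str(exc)}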
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It describes a read operation ('Get'), implying it's likely non-destructive, but doesn't specify authentication needs, rate limits, error conditions, or the structure of the returned data. This leaves significant gaps for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
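
For reference, the MCP specification defines behavioral annotations that could carry this disclosure directly. A sketch of plausible values for this tool, inferred from the read-only handler above (assumptions, not the server's published metadata):

    annotations = {
        "readOnlyHint": True,       # the handler only issues a GET against Canvas
        "destructiveHint": False,   # no Canvas data is created or modified
        "idempotentHint": True,     # repeated calls return the same report
        "openWorldHint": True,      # the tool talks to an external system (Canvas)
    }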

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by parameter details. It's efficient with minimal waste, though the parameter explanations could be slightly more integrated into the flow rather than listed as 'Args:'.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has an output schema (which handles return values), no annotations, and low complexity, the description is moderately complete. It covers the purpose and parameters well but lacks behavioral context like error handling or usage prerequisites, which is a notable gap for a tool with no annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaningful context for both parameters: it clarifies that 'course_identifier' can be a Canvas course code or ID with an example, and 'assignment_id' is a Canvas assignment ID. Since schema description coverage is 0%, this compensates well by providing practical usage details beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
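
One way to close that gap would be to move the docstring's parameter details into the schema itself. A hypothetical enriched input schema (not the server's actual schema):

    input_schema = {
        "type": "object",
        "properties": {
            "course_identifier": {
                "type": ["string", "integer"],
                "description": "Canvas course code (e.g., badm_554_120251_246794) or numeric course ID",
            },
            "assignment_id": {
                "type": ["string", "integer"],
                "description": "Canvas assignment ID",
            },
        },
        "required": ["course_identifier", "assignment_id"],
    }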

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and resource ('detailed rubric criteria and rating descriptions for an assignment'), making the purpose specific and understandable. However, it doesn't explicitly distinguish this tool from sibling tools like 'get_rubric_details' or 'list_assignment_rubrics', which might have overlapping functionality, so it doesn't reach the highest score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'get_rubric_details' or 'list_assignment_rubrics'. It lacks context about prerequisites, such as whether the assignment must have a rubric attached, or any exclusions for its use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
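
A description that added such guidance might read like the following sketch, which names the sibling tools mentioned above (the routing advice is illustrative, not the author's documented intent):

    DESCRIPTION = (
        "Get detailed rubric criteria and rating descriptions for one assignment. "
        "Read-only. Use list_assignment_rubrics to discover which assignments in a "
        "course have rubrics; use get_rubric_details when you already have a rubric "
        "ID rather than an assignment ID. Returns a message, not an error, when the "
        "assignment has no rubric attached."
    )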

