get_rubric

Read-only

Retrieve rubric criteria, ratings, and points for a Canvas course using rubric ID or assignment ID.

Instructions

Get detailed rubric criteria, ratings, and points.

    Accepts either rubric_id or assignment_id (at least one required).
    If both provided, uses rubric_id (more specific).

    Args:
        course_identifier: Course code or Canvas ID
        rubric_id: Canvas rubric ID (direct lookup)
        assignment_id: Canvas assignment ID (get rubric attached to assignment)
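The parameter precedence described here (rubric_id wins when both are given; an error when neither is) can be sketched as a small synchronous helper. `choose_lookup_path` is a hypothetical name for illustration only; the real tool is async and goes on to call the Canvas API:

```python
def choose_lookup_path(rubric_id=None, assignment_id=None):
    """Mirror the tool's parameter precedence: rubric_id wins when both are given.

    Returns ("rubric", id_str) or ("assignment", id_str), and raises ValueError
    when neither identifier is supplied. Hypothetical helper, not the real tool.
    """
    if rubric_id is None and assignment_id is None:
        raise ValueError("You must provide either rubric_id or assignment_id.")
    if rubric_id is not None:
        # Path 1: direct rubric lookup (preferred when both IDs are provided)
        return ("rubric", str(rubric_id))
    # Path 2: resolve the rubric through the assignment
    return ("assignment", str(assignment_id))
```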
    

Input Schema

Name               Required  Description  Default
course_identifier  Yes
rubric_id          No
assignment_id      No

Output Schema

Name    Required  Description  Default
result  Yes

Implementation Reference

  • The main handler for the get_rubric tool. It takes course_identifier plus either rubric_id or assignment_id. When rubric_id is provided (Path 1), it fetches rubric details directly from /courses/{id}/rubrics/{rubric_id}. When assignment_id is provided (Path 2), it fetches the assignment with rubric and rubric_settings includes. Returns formatted rubric criteria, ratings, IDs, and points.
    async def get_rubric(course_identifier: str | int,
                         rubric_id: str | int | None = None,
                         assignment_id: str | int | None = None) -> str:
        """Get detailed rubric criteria, ratings, and points.
    
        Accepts either rubric_id or assignment_id (at least one required).
        If both provided, uses rubric_id (more specific).
    
        Args:
            course_identifier: Course code or Canvas ID
            rubric_id: Canvas rubric ID (direct lookup)
            assignment_id: Canvas assignment ID (get rubric attached to assignment)
        """
        if rubric_id is None and assignment_id is None:
            return (
                "Error: You must provide either rubric_id or assignment_id.\n\n"
                "Usage:\n"
                "  - get_rubric(course, rubric_id=123) — look up rubric directly\n"
                "  - get_rubric(course, assignment_id=456) — get rubric attached to an assignment\n"
                "\nUse list_rubrics to find rubric IDs for a course."
            )
    
        course_id = await get_course_id(course_identifier)
        course_display = await get_course_code(course_id) or course_identifier
    
        # Path 1: Look up by rubric_id (preferred when both provided)
        if rubric_id is not None:
            rubric_id_str = str(rubric_id)
    
            response = await make_canvas_request(
                "get",
                f"/courses/{course_id}/rubrics/{rubric_id_str}",
                params={"include[]": ["assessments", "associations"]}
            )
    
            if "error" in response:
                return f"Error fetching rubric: {response['error']}"
    
            title = response.get("title", "Untitled Rubric")
            points_possible = response.get("points_possible", 0)
            reusable = response.get("reusable", False)
            read_only = response.get("read_only", False)
            data = response.get("data", [])
    
            result = f"Rubric '{title}' in Course {course_display}:\n\n"
            result += f"Rubric ID: {rubric_id}\n"
            result += f"Total Points: {points_possible}\n"
            result += f"Reusable: {'Yes' if reusable else 'No'}\n"
            result += f"Read Only: {'Yes' if read_only else 'No'}\n"
    
            if data:
                result += f"Number of Criteria: {len(data)}\n\n"
                result += "Criteria and Ratings:\n"
                result += "=" * 50 + "\n"
    
                for i, criterion in enumerate(data, 1):
                    criterion_id = criterion.get("id", "N/A")
                    description = criterion.get("description", "No description")
                    long_description = criterion.get("long_description", "")
                    points = criterion.get("points", 0)
                    ratings = criterion.get("ratings", [])
    
                    result += f"\nCriterion #{i}: {description}\n"
                    result += f"  ID: {criterion_id}\n"
                    result += f"  Points: {points}\n"
    
                    if long_description and long_description != description:
                        result += f"  Description: {truncate_text(long_description, 200)}\n"
    
                    if ratings:
                        sorted_ratings = sorted(ratings, key=lambda x: x.get("points", 0), reverse=True)
                        for rating in sorted_ratings:
                            rating_desc = rating.get("description", "No description")
                            rating_points = rating.get("points", 0)
                            rating_id = rating.get("id", "N/A")
                            result += f"  - {rating_points} pts: {rating_desc} [ID: {rating_id}]\n"
    
                            rating_long_desc = rating.get("long_description", "")
                            if rating_long_desc and rating_long_desc != rating_desc:
                                result += f"    {truncate_text(rating_long_desc, 100)}\n"
    
                    result += "\n"
            else:
                result += "\nNo criteria defined for this rubric.\n"
    
            return result
    
        # Path 2: Look up via assignment_id
        assignment_id_str = str(assignment_id)
    
        response = await make_canvas_request(
            "get",
            f"/courses/{course_id}/assignments/{assignment_id_str}",
            params={"include[]": ["rubric", "rubric_settings"]}
        )
    
        if "error" in response:
            return f"Error fetching rubric: {response['error']}"
    
        rubric = response.get("rubric")
        assignment_name = response.get("name", "Unknown Assignment")
        if not rubric:
            return f"No rubric found for assignment '{assignment_name}' in course {course_display}."
        rubric_settings = response.get("rubric_settings", {})
        use_rubric_for_grading = response.get("use_rubric_for_grading", False)
    
        result = f"Rubric for Assignment '{assignment_name}' in Course {course_display}:\n\n"
    
        # Grading config (only available via assignment path)
        result += "Grading Config:\n"
        result += f"  Used for Grading: {'Yes' if use_rubric_for_grading else 'No'}\n"
        if rubric_settings:
            result += f"  Points Possible: {rubric_settings.get('points_possible', 'N/A')}\n"
        result += f"Number of Criteria: {len(rubric)}\n\n"
    
        # Criteria and ratings
        result += "Criteria and Ratings:\n"
        result += "=" * 50 + "\n"
    
        total_points = 0
        for i, criterion in enumerate(rubric, 1):
            criterion_id = criterion.get("id", "N/A")
            description = criterion.get("description", "No description")
            long_description = criterion.get("long_description", "")
            points = criterion.get("points", 0)
            ratings = criterion.get("ratings", [])
    
            result += f"\nCriterion #{i}: {description}\n"
            result += f"  ID: {criterion_id}\n"
            result += f"  Points: {points}\n"
    
            if long_description and long_description != description:
                result += f"  Description: {truncate_text(long_description, 200)}\n"
    
            if ratings:
                sorted_ratings = sorted(ratings, key=lambda x: x.get("points", 0), reverse=True)
                for rating in sorted_ratings:
                    rating_desc = rating.get("description", "No description")
                    rating_points = rating.get("points", 0)
                    rating_id = rating.get("id", "N/A")
                    result += f"  - {rating_points} pts: {rating_desc} [ID: {rating_id}]\n"
    
                    rating_long_desc = rating.get("long_description", "")
                    if rating_long_desc and rating_long_desc != rating_desc:
                        result += f"    {truncate_text(rating_long_desc, 100)}\n"
    
            total_points += points
            result += "\n"
    
        result += f"Total Rubric Points: {total_points}"
    
        return result
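The handler above relies on helpers that are not shown on this page (`make_canvas_request`, `get_course_id`, `get_course_code`, `truncate_text`). A plausible stdlib-only sketch of `truncate_text`, whose actual implementation is an assumption here, would clip long rubric descriptions and append an ellipsis:

```python
def truncate_text(text: str, max_length: int) -> str:
    """Plausible sketch of the truncate_text helper used by get_rubric
    (the real implementation is not shown on this page): clip strings
    longer than max_length and append an ellipsis marker."""
    if len(text) <= max_length:
        return text
    # Strip trailing whitespace so the ellipsis doesn't follow a space
    return text[:max_length].rstrip() + "..."
```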
  • register_rubric_tools(mcp) is the registration function that applies the @mcp.tool() decorator to get_rubric, registering it as an MCP tool with FastMCP.
    def register_rubric_tools(mcp: FastMCP) -> None:
        """Register all rubric-related MCP tools."""
  • register_rubric_tools(mcp) is called in server.py for educator roles, which causes get_rubric to be registered as an MCP tool.
    register_rubric_tools(mcp)
  • Re-exports register_rubric_tools from the rubrics module through __init__.py for clean imports.
    from .rubrics import register_rubric_tools
  • The register_rubric_tools function serves as both the registration entry point and the container for the tool definition; the @mcp.tool decorator inside it (line 385 of the source) is what actually registers get_rubric with the MCP server.
    def register_rubric_tools(mcp: FastMCP) -> None:
        """Register all rubric-related MCP tools."""
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true, and the description consistently describes a read operation. It adds behavioral context by detailing parameter precedence and lookup methods, which goes beyond what annotations provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise, using three sentences plus a labeled Args list. Every sentence adds value, with the main purpose front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema, the description needs only to explain input and behavior. It covers purpose, parameter selection logic, and parameter meanings completely.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 0% schema description coverage, the description fully explains each parameter's purpose and format: course_identifier as course code or ID, rubric_id for direct lookup, assignment_id for attached rubric. This adds significant value over the raw schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves detailed rubric criteria, ratings, and points. It distinguishes itself from sibling tools like list_rubrics (listing) and create_rubric (creation) by focusing on retrieval with specific identifiers.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains when to use rubric_id vs assignment_id, including precedence when both are provided. It does not explicitly mention when not to use the tool, but the guidance is sufficient for correct invocation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
