get_rubric_details

Retrieve detailed rubric criteria and scoring information from Canvas courses to understand assessment requirements and grading standards.

Instructions

Get detailed rubric criteria and scoring information.

    Args:
        course_identifier: The Canvas course code (e.g., badm_554_120251_246794) or ID
        rubric_id: The Canvas rubric ID
    

Input Schema

Name               Required  Description  Default
course_identifier  Yes       —            —
rubric_id          Yes       —            —
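
For illustration, a valid arguments object might look like this (the course code comes from the example in the description; the rubric ID is made up):

    {
      "course_identifier": "badm_554_120251_246794",
      "rubric_id": 614159
    }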

Output Schema

Name    Required  Description  Default
result  Yes       —            —

Implementation Reference

  • The core handler function for the 'get_rubric_details' MCP tool. It fetches the rubric via the Canvas API endpoint /courses/{course_id}/rubrics/{rubric_id}, parses the response, and formats a detailed textual summary of the rubric, including its criteria, ratings, points, and metadata.
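    # Relies on module-level helpers not shown in this excerpt (names taken
    # from the calls below): get_course_id, make_canvas_request,
    # get_course_code, truncate_text, and the validate_params decorator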
    @validate_params
    async def get_rubric_details(course_identifier: str | int,
                               rubric_id: str | int) -> str:
        """Get detailed rubric criteria and scoring information.
    
        Args:
            course_identifier: The Canvas course code (e.g., badm_554_120251_246794) or ID
            rubric_id: The Canvas rubric ID
        """
        course_id = await get_course_id(course_identifier)
        rubric_id_str = str(rubric_id)
    
        # Get detailed rubric information
        response = await make_canvas_request(
            "get",
            f"/courses/{course_id}/rubrics/{rubric_id_str}",
            params={"include[]": ["assessments", "associations"]}
        )
    
        if "error" in response:
            return f"Error fetching rubric details: {response['error']}"
    
        # Extract rubric details
        title = response.get("title", "Untitled Rubric")
        context_code = response.get("context_code", "")
        context_type = response.get("context_type", "")
        points_possible = response.get("points_possible", 0)
        reusable = response.get("reusable", False)
        read_only = response.get("read_only", False)
        data = response.get("data", [])
    
        course_display = await get_course_code(course_id) or course_identifier
    
        result = f"Detailed Rubric Information for Course {course_display}:\n\n"
        result += f"Title: {title}\n"
        result += f"Rubric ID: {rubric_id}\n"
        result += f"Context: {context_type} ({context_code})\n"
        result += f"Total Points: {points_possible}\n"
        result += f"Reusable: {'Yes' if reusable else 'No'}\n"
        result += f"Read Only: {'Yes' if read_only else 'No'}\n\n"
    
        # Detailed criteria and ratings
        if data:
            result += "Detailed Criteria and Ratings:\n"
            result += "=" * 50 + "\n"
    
            for i, criterion in enumerate(data, 1):
                criterion_id = criterion.get("id", "N/A")
                description = criterion.get("description", "No description")
                long_description = criterion.get("long_description", "")
                points = criterion.get("points", 0)
                ratings = criterion.get("ratings", [])
    
                result += f"\nCriterion #{i}: {description}\n"
                result += f"ID: {criterion_id}\n"
                result += f"Points: {points}\n"
    
                if long_description:
                    result += f"Description: {truncate_text(long_description, 200)}\n"
    
                if ratings:
                    result += f"Rating Levels ({len(ratings)}):\n"
                    for j, rating in enumerate(ratings):
                        rating_description = rating.get("description", "No description")
                        rating_points = rating.get("points", 0)
                        rating_id = rating.get("id", "N/A")
    
                        result += f"  {j+1}. {rating_description} ({rating_points} pts) [ID: {rating_id}]\n"
    
                        if rating.get("long_description"):
                            result += f"     {truncate_text(rating.get('long_description'), 100)}\n"
    
                result += "\n"
    
        return result
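    Given the format strings in the handler above, the returned text looks roughly like this (all values are illustrative, not real Canvas data):

    Detailed Rubric Information for Course badm_554_120251_246794:

    Title: Term Paper Rubric
    Rubric ID: 614159
    Context: Course (course_246794)
    Total Points: 20
    Reusable: No
    Read Only: No

    Detailed Criteria and Ratings:
    ==================================================

    Criterion #1: Thesis and Argument
    ID: _1001
    Points: 10
    Rating Levels (2):
      1. Full Marks (10 pts) [ID: _2001]
      2. No Marks (0 pts) [ID: _2002]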
  • Top-level registration call in register_all_tools that invokes register_rubric_tools(mcp), which defines and registers the get_rubric_details tool using the @mcp.tool() decorator.
    register_rubric_tools(mcp)
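    As a rough sketch (not taken from the repository), a FastMCP-style registration function could wrap the handler like this; only the @mcp.tool() decorator is confirmed by the bullet above, and everything else is an assumption:

    def register_rubric_tools(mcp) -> None:
        """Attach rubric tools to the given MCP server (hypothetical sketch)."""
        @mcp.tool()
        async def get_rubric_details(course_identifier: str | int,
                                     rubric_id: str | int) -> str:
            """Get detailed rubric criteria and scoring information."""
            ...  # full handler body as shown in the first excerpt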
  • Helper function used by rubric tools to validate and parse rubric criteria JSON structure.
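    # Assumes module-level imports not shown in this excerpt:
    # json (stdlib) and Any from typing (inferred from the code below)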
    def validate_rubric_criteria(criteria_json: str) -> dict[str, Any]:
        """Validate and parse rubric criteria JSON structure.
    
        Args:
            criteria_json: JSON string containing rubric criteria
    
        Returns:
            Parsed criteria dictionary
    
        Raises:
            ValueError: If JSON is invalid or structure is incorrect
        """
        # Preprocess the string to handle common issues
        cleaned_json = preprocess_criteria_string(criteria_json)
    
        try:
            criteria = json.loads(cleaned_json)
        except json.JSONDecodeError as e:
            # Try alternative parsing methods if JSON fails
            try:
                # Maybe it's a Python literal string representation
                import ast
                criteria = ast.literal_eval(cleaned_json)
                if not isinstance(criteria, dict):
                    # A non-dict literal is treated as a parse failure
                    raise ValueError("Parsed result is not a dictionary")
            except (ValueError, SyntaxError):
                # Both JSON and literal_eval failed, provide detailed error
                error_msg = f"Invalid JSON format: {str(e)}\n"
                error_msg += f"Original string length: {len(criteria_json)}\n"
                error_msg += f"Cleaned string length: {len(cleaned_json)}\n"
                error_msg += f"First 200 characters of original: {repr(criteria_json[:200])}\n"
                error_msg += f"First 200 characters of cleaned: {repr(cleaned_json[:200])}\n"
                if len(cleaned_json) > 200:
                    error_msg += f"Last 100 characters of cleaned: {repr(cleaned_json[-100:])}"
                error_msg += "\nAlso failed to parse as Python literal. Please ensure the criteria is valid JSON."
                raise ValueError(error_msg) from e
    
        if not isinstance(criteria, dict):
            raise ValueError("Criteria must be a JSON object (dictionary)")
    
        # Validate each criterion
        for criterion_key, criterion_data in criteria.items():
            if not isinstance(criterion_data, dict):
                raise ValueError(f"Criterion {criterion_key} must be an object")
    
            if "description" not in criterion_data:
                raise ValueError(f"Criterion {criterion_key} must have a 'description' field")
    
            if "points" not in criterion_data:
                raise ValueError(f"Criterion {criterion_key} must have a 'points' field")
    
            try:
                points = float(criterion_data["points"])
            except (ValueError, TypeError) as err:
                raise ValueError(f"Criterion {criterion_key} points must be a valid number") from err
            # Range-check outside the try block so this ValueError is not
            # swallowed and re-raised as "must be a valid number"
            if points < 0:
                raise ValueError(f"Criterion {criterion_key} points must be non-negative")
    
            # Validate ratings if present - handle both object and array formats
            if "ratings" in criterion_data:
                ratings = criterion_data["ratings"]
    
                # Handle both object and array formats
                if isinstance(ratings, dict):
                    # Object format: {"1": {...}, "2": {...}}
                    for rating_key, rating_data in ratings.items():
                        if not isinstance(rating_data, dict):
                            raise ValueError(f"Rating {rating_key} in criterion {criterion_key} must be an object")
    
                        if "description" not in rating_data:
                            raise ValueError(f"Rating {rating_key} in criterion {criterion_key} must have a 'description' field")
    
                        if "points" not in rating_data:
                            raise ValueError(f"Rating {rating_key} in criterion {criterion_key} must have a 'points' field")
    
                        try:
                            rating_points = float(rating_data["points"])
                        except (ValueError, TypeError) as err:
                            raise ValueError(f"Rating {rating_key} points must be a valid number") from err
                        # Checked outside the try to avoid masking the message
                        if rating_points < 0:
                            raise ValueError(f"Rating {rating_key} points must be non-negative")
    
                elif isinstance(ratings, list):
                    # Array format: [{"description": ..., "points": ...}, ...]
                    for i, rating_data in enumerate(ratings):
                        if not isinstance(rating_data, dict):
                            raise ValueError(f"Rating {i} in criterion {criterion_key} must be an object")
    
                        if "description" not in rating_data:
                            raise ValueError(f"Rating {i} in criterion {criterion_key} must have a 'description' field")
    
                        if "points" not in rating_data:
                            raise ValueError(f"Rating {i} in criterion {criterion_key} must have a 'points' field")
    
                        try:
                            rating_points = float(rating_data["points"])
                        except (ValueError, TypeError) as err:
                            raise ValueError(f"Rating {i} points must be a valid number") from err
                        if rating_points < 0:
                            raise ValueError(f"Rating {i} points must be non-negative")
    
                else:
                    raise ValueError(f"Criterion {criterion_key} ratings must be an object or array")
    
        return criteria
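    The preprocess_criteria_string helper called above is not shown in this excerpt. A hypothetical version might strip markdown code fences and trailing commas before json.loads; the sketch below is an assumption, not the project's actual implementation:

    import re

    def preprocess_criteria_string(criteria_json: str) -> str:
        """Normalize common formatting issues before JSON parsing (hypothetical sketch)."""
        cleaned = criteria_json.strip()
        # Strip a surrounding markdown fence such as ```json ... ```
        if cleaned.startswith("```"):
            cleaned = cleaned.strip("`").strip()
            if cleaned.lower().startswith("json"):
                cleaned = cleaned[len("json"):].lstrip()
        # Remove trailing commas before a closing brace or bracket
        cleaned = re.sub(r",\s*([}\]])", r"\1", cleaned)
        return cleaned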
  • Imports the register_rubric_tools function, enabling its use in server.py for tool registration.
    from .rubrics import register_rubric_tools
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. While 'Get' implies a read operation, it doesn't specify whether this requires authentication, what permissions are needed, whether it returns paginated results, or what happens with invalid inputs. The description lacks crucial behavioral context for a tool that fetches data from an external system.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise and well-structured. The purpose is stated clearly in the first sentence, followed by parameter documentation in a standard format. Every sentence earns its place with no redundant information or unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that there's an output schema (which handles return values) and only 2 parameters, the description is reasonably complete for a simple retrieval tool. However, the lack of annotations means important behavioral aspects (authentication, error handling, rate limits) are undocumented. The description covers the basics but misses operational context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description explicitly documents both parameters with an example (e.g., 'badm_554_120251_246794'), adding meaningful context beyond the schema, which has 0% description coverage. However, it doesn't explain the relationship between the two parameters or what happens if the rubric doesn't belong to the specified course. The parameter documentation is helpful but incomplete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Get') and resource ('detailed rubric criteria and scoring information'). It distinguishes itself from siblings like 'get_assignment_rubric_details' by focusing on rubric-specific details rather than assignment context. However, it doesn't explicitly differentiate itself from 'list_all_rubrics', which might list rubrics without details.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when to choose this over 'get_assignment_rubric_details' (for rubric details within an assignment) or 'list_all_rubrics' (for a list without details). There are no prerequisites, exclusions, or context about when this tool is appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
