update_rubric

Modify an existing rubric in a Canvas course by updating its title, criteria structure, or comment settings to adjust grading standards.

Instructions

Update an existing rubric in the specified course.

    Args:
        course_identifier: The Canvas course code (e.g., badm_554_120251_246794) or ID
        rubric_id: The ID of the rubric to update
        title: Optional new title for the rubric
        criteria: Optional JSON string or dictionary containing updated rubric criteria structure
        free_form_criterion_comments: Optional boolean to allow free-form comments
        skip_updating_points_possible: Skip updating points possible calculation (default: False)
    

Input Schema

Name                            Required  Description  Default
course_identifier               Yes
rubric_id                       Yes
title                           No
criteria                        No
free_form_criterion_comments    No
skip_updating_points_possible   No

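For illustration only, here is a hedged sketch of the arguments an agent might pass. The criteria shape follows the Canvas rubrics API convention (criteria and ratings keyed by stringified index); the rubric ID, title, and every field name below are assumptions, not a documented contract of this tool:

    # Hypothetical update_rubric arguments. The criteria shape mirrors the
    # Canvas rubrics API convention; the tool documents none of this, so
    # treat each field name and value as an assumption.
    arguments = {
        "course_identifier": "badm_554_120251_246794",
        "rubric_id": 987,
        "title": "Final Project Rubric (v2)",
        "criteria": {
            "0": {
                "description": "Thesis clarity",
                "points": 10,
                "ratings": {
                    "0": {"description": "Clear and specific", "points": 10},
                    "1": {"description": "Vague or missing", "points": 0},
                },
            }
        },
        "free_form_criterion_comments": True,
        "skip_updating_points_possible": False,
    }
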
Output Schema

Name    Required  Description  Default
result  Yes

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states that this is an update operation (implying mutation) but doesn't cover critical behaviors: whether specific permissions are required, whether changes are reversible, what happens to existing rubric data not mentioned in the parameters, rate limits, or error conditions. Beyond the basic action, the description offers no behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is reasonably concise, with a clear opening sentence followed by parameter details. However, the parameter list is formatted as a code block (Args:), which disrupts flow, and some explanations are overly terse (e.g., 'Optional boolean to allow free-form comments'). It could be better integrated and front-loaded with critical usage information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given six parameters, 0% schema description coverage, and no annotations (though an output schema does exist), the description is moderately complete. It identifies all parameters and their basic purposes but lacks depth on behavioral aspects, error handling, and parameter constraints. The output schema reduces the need to describe return values, but the overall context for safe and effective use is insufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It lists all six parameters with brief explanations, but these add minimal semantic value beyond the parameter names (e.g., 'Optional new title for the rubric' doesn't clarify format constraints or length limits). For complex parameters like criteria (a JSON string or dictionary), it doesn't explain the expected structure or validation rules, leaving significant gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
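The dual 'JSON string or dictionary' typing of criteria is a good example: it is mentioned but never specified, so an agent must guess how the server normalizes it. A minimal illustrative sketch of such handling, assuming the server simply parses strings as JSON (an assumption, not the server's actual code):

    import json

    def normalize_criteria(criteria):
        # Illustrative only: accept the documented "JSON string or
        # dictionary" forms of criteria and return a dict. This is an
        # assumption about plausible server behavior, not its real code.
        if isinstance(criteria, str):
            criteria = json.loads(criteria)  # JSONDecodeError on bad JSON
        if not isinstance(criteria, dict):
            raise TypeError("criteria must be a dict or a JSON object string")
        return criteria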

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Update an existing rubric') and the resource ('in the specified course'), making the purpose immediately understandable. It doesn't explicitly differentiate this tool from its siblings, though the verb 'update' distinguishes it from 'create_rubric' and 'delete_rubric' by implication.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing edit permissions on the course), compare itself with related tools like 'associate_rubric_with_assignment', or specify when updating an existing rubric is appropriate versus creating a new one. The agent must infer usage from context alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
