get_rubric_assessment

Read-only

Retrieve rubric assessment scores for a student submission by providing course, assignment, and user IDs. Get a detailed breakdown of criteria scores.

Instructions

Get rubric assessment scores for a specific submission.

    Args:
        course_identifier: Course code or Canvas ID
        assignment_id: Canvas assignment ID
        user_id: Canvas user ID of the student
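
For illustration, a minimal sketch of invoking this tool through the MCP Python SDK over stdio is shown below. The server launch command and the example IDs are assumptions, not taken from this page:

    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    # Hypothetical launch command for the canvas-mcp server; adjust to your install.
    server = StdioServerParameters(command="uvx", args=["canvas-mcp"])

    async def main():
        async with stdio_client(server) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                # All three arguments are required; the IDs here are placeholders.
                result = await session.call_tool(
                    "get_rubric_assessment",
                    arguments={
                        "course_identifier": "12345",  # course code or Canvas ID
                        "assignment_id": "67890",
                        "user_id": "24680",
                    },
                )
                print(result.content)

    asyncio.run(main())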
    

Input Schema

Name               Required   Description   Default
course_identifier  Yes        -             -
assignment_id      Yes        -             -
user_id            Yes        -             -

Output Schema

Name     Required   Description   Default
result   Yes        -             -
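
The result field's shape is not documented here. As a hedged sketch modeled on the Canvas API's rubric_assessment object (keyed by criterion ID), it might look like the Python literal below; the exact keys this server returns are an assumption:

    # Hypothetical shape, modeled on Canvas's rubric_assessment format;
    # the actual result returned by this tool may differ.
    result = {
        "criterion_123": {"points": 4.0, "rating_id": "rat_1", "comments": "Clear thesis"},
        "criterion_456": {"points": 3.5, "rating_id": "rat_7", "comments": ""},
    }
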
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotations already declare readOnlyHint=true, indicating a safe read operation. Beyond that, the description adds only minimal behavioral context: it retrieves scores for a specific submission. It does not disclose additional traits such as the return format (though an output schema exists) or potential error conditions. Since the annotations already cover the safety profile, the description's added value is moderate, earning a 3.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise: one sentence plus a list of parameter descriptions. Every word is purposeful, and the structure is front-loaded with the action. No unnecessary information is included, making it efficient for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (3 parameters, read-only, with an output schema), the description is mostly complete. It identifies the parameters and the core action. It could mention that a rubric must be associated with the assignment, but this is not critical. The presence of an output schema reduces the need to describe return values. Overall, it is sufficiently complete for its context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema provides no parameter descriptions (0% coverage), so the descriptions in the tool's docstring are crucial. It gives a brief gloss for each parameter (e.g., 'Canvas assignment ID' for assignment_id), but these are very basic and say nothing about expected formats, constraints, or how to obtain the IDs. The description partially compensates for the schema gap but remains minimal, resulting in a score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
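
To make the coverage gap concrete, an input schema that documented its parameters might look like the sketch below; the description strings are invented for illustration:

    # Illustrative only: the same schema with per-parameter descriptions added.
    input_schema = {
        "type": "object",
        "properties": {
            "course_identifier": {
                "type": "string",
                "description": "Course code (e.g., 'BIO-101') or numeric Canvas course ID.",
            },
            "assignment_id": {
                "type": "string",
                "description": "Numeric Canvas assignment ID.",
            },
            "user_id": {
                "type": "string",
                "description": "Numeric Canvas user ID of the student whose submission to read.",
            },
        },
        "required": ["course_identifier", "assignment_id", "user_id"],
    }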

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get rubric assessment scores') and the resource ('for a specific submission'). This implicitly distinguishes the tool from siblings such as 'get_rubric' (which fetches rubric details) and 'grade_with_rubric' (which writes grades), since it is explicitly for reading assessment scores. However, it does not explicitly differentiate itself from other similar read tools, so it loses a point.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description offers no guidance on when to use this tool versus alternatives; it merely states the action and parameters. There is no mention of prerequisites (e.g., the assignment must have an associated rubric), nor any when-not-to-use instructions. This lack of usage context makes it hard for an agent to decide when the tool is appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/vishalsachdev/canvas-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.