get_submission_rubric_assessment

Retrieve rubric assessment scores for a student's submission in Canvas by providing course, assignment, and user identifiers.

Instructions

Get rubric assessment scores for a specific submission.

Args:
  course_identifier: The Canvas course code (e.g., badm_554_120251_246794) or ID
  assignment_id: The Canvas assignment ID
  user_id: The Canvas user ID of the student

Input Schema

Name               Required  Description                                                   Default
course_identifier  Yes       The Canvas course code (e.g., badm_554_120251_246794) or ID  —
assignment_id      Yes       The Canvas assignment ID                                      —
user_id            Yes       The Canvas user ID of the student                             —
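
For illustration, a client invoking this tool might pass arguments shaped like the snippet below. The course code is the example from the description above; the assignment and user IDs are placeholders, not real Canvas IDs.

    {
      "course_identifier": "badm_554_120251_246794",
      "assignment_id": "12345",
      "user_id": "67890"
    }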

Implementation Reference

  • The handler function implementing the get_submission_rubric_assessment tool. It fetches the submission via the Canvas API, anonymizes the data, retrieves the rubric assessment, matches it against the rubric criteria, and formats a detailed report. (A sketch of the rubric_assessment payload it parses follows this list.)
    async def get_submission_rubric_assessment(course_identifier: str | int, assignment_id: str | int, user_id: str | int) -> str:
        """Get rubric assessment scores for a specific submission.

        Args:
            course_identifier: The Canvas course code (e.g., badm_554_120251_246794) or ID
            assignment_id: The Canvas assignment ID
            user_id: The Canvas user ID of the student
        """
        course_id = await get_course_id(course_identifier)
        assignment_id_str = str(assignment_id)
        user_id_str = str(user_id)

        # Get submission with rubric assessment
        response = await make_canvas_request(
            "get",
            f"/courses/{course_id}/assignments/{assignment_id_str}/submissions/{user_id_str}",
            params={"include[]": ["rubric_assessment", "full_rubric_assessment"]}
        )

        if "error" in response:
            return f"Error fetching submission rubric assessment: {response['error']}"

        # Anonymize submission data to protect student privacy
        try:
            response = anonymize_response_data(response, data_type="submissions")
        except Exception as e:
            log_error(
                "Failed to anonymize rubric assessment data",
                exc=e,
                course_id=course_id,
                assignment_id=assignment_id,
                user_id=user_id
            )
            # Continue with original data for functionality

        # Check if submission has rubric assessment
        rubric_assessment = response.get("rubric_assessment")
        if not rubric_assessment:
            # Get user and assignment names for better error message
            assignment_response = await make_canvas_request(
                "get", f"/courses/{course_id}/assignments/{assignment_id_str}"
            )
            assignment_name = assignment_response.get("name", "Unknown Assignment") if "error" not in assignment_response else "Unknown Assignment"
            course_display = await get_course_code(course_id) or course_identifier
            return f"No rubric assessment found for user {user_id} on assignment '{assignment_name}' in course {course_display}."

        # Get assignment details for context
        assignment_response = await make_canvas_request(
            "get",
            f"/courses/{course_id}/assignments/{assignment_id_str}",
            params={"include[]": ["rubric"]}
        )
        assignment_name = assignment_response.get("name", "Unknown Assignment") if "error" not in assignment_response else "Unknown Assignment"
        rubric_data = assignment_response.get("rubric", []) if "error" not in assignment_response else []

        # Format rubric assessment
        course_display = await get_course_code(course_id) or course_identifier
        result = f"Rubric Assessment for User {user_id} on '{assignment_name}' in Course {course_display}:\n\n"

        # Submission details
        submitted_at = format_date(response.get("submitted_at"))
        graded_at = format_date(response.get("graded_at"))
        score = response.get("score", "Not graded")

        result += "Submission Details:\n"
        result += f"  Submitted: {submitted_at}\n"
        result += f"  Graded: {graded_at}\n"
        result += f"  Score: {score}\n\n"

        # Rubric assessment details
        result += "Rubric Assessment:\n"
        result += "=" * 30 + "\n"

        total_rubric_points = 0
        for criterion_id, assessment in rubric_assessment.items():
            # Find criterion details from rubric data
            criterion_info = None
            for criterion in rubric_data:
                if str(criterion.get("id")) == str(criterion_id):
                    criterion_info = criterion
                    break

            criterion_description = criterion_info.get("description", f"Criterion {criterion_id}") if criterion_info else f"Criterion {criterion_id}"
            points = assessment.get("points", 0)
            comments = assessment.get("comments", "")
            rating_id = assessment.get("rating_id")

            result += f"\n{criterion_description}:\n"
            result += f"  Points Awarded: {points}\n"

            if rating_id and criterion_info:
                # Find the rating description
                for rating in criterion_info.get("ratings", []):
                    if str(rating.get("id")) == str(rating_id):
                        result += f"  Rating: {rating.get('description', 'N/A')} ({rating.get('points', 0)} pts)\n"
                        break

            if comments:
                result += f"  Comments: {comments}\n"

            total_rubric_points += points

        result += f"\nTotal Rubric Points: {total_rubric_points}"
        return result
  • The call to register_rubric_tools, which registers the get_submission_rubric_assessment tool among others. (A hypothetical sketch of this registration pattern follows the list.)
    register_rubric_tools(mcp)
    register_peer_review_tools(mcp)
  • Import of register_rubric_tools function used for tool registration.
    from .rubrics import register_rubric_tools
  • Helper function that builds form data for rubric assessments; it is used by the grading tools but underpins the rubric functionality overall. (A usage example follows the list.)
    from typing import Any  # required for the Any annotation below

    def build_rubric_assessment_form_data(
        rubric_assessment: dict[str, Any],
        comment: str | None = None
    ) -> dict[str, str]:
        """Convert rubric assessment dict to Canvas form-encoded format.

        Canvas API expects rubric assessment data as form-encoded parameters
        with bracket notation: rubric_assessment[criterion_id][field]=value

        Args:
            rubric_assessment: Dict mapping criterion IDs to assessment data
                Format: {"criterion_id": {"points": X, "rating_id": Y, "comments": Z}}
            comment: Optional overall comment for the submission

        Returns:
            Flattened dict with Canvas bracket notation keys

        Example:
            Input: {"_8027": {"points": 2, "rating_id": "blank", "comments": "Great work"}}
            Output: {
                "rubric_assessment[_8027][points]": "2",
                "rubric_assessment[_8027][rating_id]": "blank",
                "rubric_assessment[_8027][comments]": "Great work"
            }
        """
        form_data: dict[str, str] = {}

        # Transform rubric_assessment object into Canvas's form-encoded format
        for criterion_id, assessment in rubric_assessment.items():
            # Points are required
            if "points" in assessment:
                form_data[f"rubric_assessment[{criterion_id}][points]"] = str(assessment["points"])

            # Rating ID is optional but recommended
            if "rating_id" in assessment:
                form_data[f"rubric_assessment[{criterion_id}][rating_id]"] = str(assessment["rating_id"])

            # Comments are optional
            if "comments" in assessment:
                form_data[f"rubric_assessment[{criterion_id}][comments]"] = str(assessment["comments"])

        # Add optional overall comment
        if comment:
            form_data["comment[text_comment]"] = comment

        return form_data
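
For reference, the rubric_assessment object that the handler iterates over maps criterion IDs to per-criterion results. The sketch below shows that shape with made-up criterion and rating IDs; the points, rating_id, and comments fields are the ones the handler actually reads (the authoritative structure is defined by the Canvas Submissions API).

    # Illustrative only: criterion IDs ("_8027", "_9151") and all values are invented.
    rubric_assessment = {
        "_8027": {"points": 2, "rating_id": "_5432", "comments": "Clear thesis"},
        "_9151": {"points": 4, "rating_id": "_7788", "comments": ""},
    }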
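
The registration call shown earlier implies a pattern roughly like the following. This is a hypothetical sketch that assumes a FastMCP-style server object exposing a tool() decorator; the actual body of register_rubric_tools in this repository may differ.

    # Hypothetical sketch, not the repository's actual code: assumes `mcp` is a
    # FastMCP-style server whose tool() method returns a registering decorator.
    def register_rubric_tools(mcp) -> None:
        mcp.tool()(get_submission_rubric_assessment)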
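
As a usage example for the helper, the call below reproduces the transformation from its docstring; the overall comment text is illustrative.

    form_data = build_rubric_assessment_form_data(
        {"_8027": {"points": 2, "rating_id": "blank", "comments": "Great work"}},
        comment="Strong submission overall",  # illustrative overall comment
    )
    # form_data now maps:
    #   "rubric_assessment[_8027][points]"    -> "2"
    #   "rubric_assessment[_8027][rating_id]" -> "blank"
    #   "rubric_assessment[_8027][comments]"  -> "Great work"
    #   "comment[text_comment]"               -> "Strong submission overall"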