# grade_with_rubric
Submit grades for Canvas assignments using rubric criteria by mapping criterion IDs to points, ratings, and feedback comments.
## Instructions
Submit grades using rubric criteria.
This tool submits grades for individual rubric criteria. The rubric must already be
associated with the assignment and configured for grading (use_for_grading=true).
IMPORTANT NOTES:
- Criterion IDs often start with underscore (e.g., "_8027")
- Use list_assignment_rubrics or get_rubric_details to find criterion IDs and rating IDs
- Points must be within the range defined by the rubric criterion
- The rubric must be attached to the assignment before grading
Args:

- `course_identifier`: The Canvas course code (e.g., `badm_554_120251_246794`) or ID
- `assignment_id`: The Canvas assignment ID
- `user_id`: The Canvas user ID of the student
- `rubric_assessment`: Dict mapping criterion IDs to assessment data, in this format:

  ```python
  {
      "criterion_id": {
          "points": <number>,       # Required: points awarded
          "rating_id": "<string>",  # Optional: specific rating ID
          "comments": "<string>"    # Optional: feedback comments
      }
  }
  ```

- `comment`: Optional overall comment for the submission
Example Usage:

```json
{
  "course_identifier": "60366",
  "assignment_id": "1440586",
  "user_id": "9824",
  "rubric_assessment": {
    "_8027": {
      "points": 2,
      "rating_id": "blank",
      "comments": "Great work!"
    }
  },
  "comment": "Nice job on this assignment"
}
```
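Real criterion and rating IDs come from the rubric itself, so a grading call is usually preceded by a lookup. Below is a minimal sketch of building the `rubric_assessment` payload from rubric data; the field names in the `rubric` dict are assumptions about what `get_rubric_details` returns, so treat them as illustrative:

```python
# Hypothetical rubric data, shaped like typical Canvas rubric JSON
# (the actual response shape of get_rubric_details may differ).
rubric = {
    "criteria": [
        {
            "id": "_8027",  # note the leading underscore
            "points": 4.0,
            "ratings": [
                {"id": "blank", "description": "Full marks", "points": 4.0},
                {"id": "blank_2", "description": "No marks", "points": 0.0},
            ],
        }
    ]
}

# Build the rubric_assessment payload expected by grade_with_rubric.
rubric_assessment = {}
for criterion in rubric["criteria"]:
    chosen = criterion["ratings"][0]  # pick whichever rating actually applies
    rubric_assessment[criterion["id"]] = {
        "points": chosen["points"],  # required; must stay within the criterion's range
        "rating_id": chosen["id"],   # optional but recommended
        "comments": "Meets all expectations",
    }
```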
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| course_identifier | Yes | The Canvas course code (e.g., `badm_554_120251_246794`) or ID | |
| assignment_id | Yes | The Canvas assignment ID | |
| user_id | Yes | The Canvas user ID of the student | |
| rubric_assessment | Yes | Dict mapping criterion IDs to assessment data | |
| comment | No | Optional overall comment for the submission | `None` |
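As a rough sketch only, the input schema implied by the table and the handler's Python signature would look something like the following (property types are inferred from the annotations, not taken from the tool's published schema):

```python
# Sketch of the implied input schema; not the tool's actual published schema.
INPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "course_identifier": {"type": ["string", "integer"]},
        "assignment_id": {"type": ["string", "integer"]},
        "user_id": {"type": ["string", "integer"]},
        "rubric_assessment": {
            "type": "object",
            "additionalProperties": {
                "type": "object",
                "properties": {
                    "points": {"type": "number"},
                    "rating_id": {"type": "string"},
                    "comments": {"type": "string"},
                },
                "required": ["points"],
            },
        },
        "comment": {"type": "string"},
    },
    "required": ["course_identifier", "assignment_id", "user_id", "rubric_assessment"],
}
```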
## Implementation Reference
- **src/canvas_mcp/tools/rubrics.py:663-786 (handler)**: Primary MCP tool handler for `grade_with_rubric`. Validates the rubric configuration, builds form data, submits the grade to the Canvas API submissions endpoint, and returns a confirmation.

  ```python
  @mcp.tool()
  @validate_params
  async def grade_with_rubric(
      course_identifier: str | int,
      assignment_id: str | int,
      user_id: str | int,
      rubric_assessment: dict[str, Any],
      comment: str | None = None,
  ) -> str:
      """Submit grades using rubric criteria."""  # full docstring reproduced under Instructions above
      course_id = await get_course_id(course_identifier)
      assignment_id_str = str(assignment_id)
      user_id_str = str(user_id)

      # CRITICAL: Verify rubric is configured for grading BEFORE submitting
      assignment_check = await make_canvas_request(
          "get",
          f"/courses/{course_id}/assignments/{assignment_id_str}",
          params={"include[]": ["rubric_settings"]},
      )

      if "error" not in assignment_check:
          use_rubric_for_grading = assignment_check.get("use_rubric_for_grading", False)
          if not use_rubric_for_grading:
              return (
                  "⚠️ ERROR: Rubric is not configured for grading!\n\n"
                  "The rubric exists but 'use_for_grading' is set to FALSE.\n"
                  "Grades will NOT be saved to the gradebook.\n\n"
                  "To fix this:\n"
                  "1. Use list_assignment_rubrics to verify rubric settings\n"
                  "2. Use associate_rubric_with_assignment with use_for_grading=True\n"
                  "3. Or configure the rubric in Canvas UI: Assignment Settings → Rubric → Use for Grading\n\n"
                  f"Assignment: {assignment_check.get('name', 'Unknown')}\n"
                  f"Course ID: {course_id}\n"
                  f"Assignment ID: {assignment_id}\n"
              )

      # Build form data in Canvas's expected format
      form_data = build_rubric_assessment_form_data(rubric_assessment, comment)

      # Submit the grade with rubric assessment using form encoding
      response = await make_canvas_request(
          "put",
          f"/courses/{course_id}/assignments/{assignment_id_str}/submissions/{user_id_str}",
          data=form_data,
          use_form_data=True,
      )

      if "error" in response:
          return f"Error submitting rubric grade: {response['error']}"

      # Get assignment details for confirmation
      assignment_response = await make_canvas_request(
          "get", f"/courses/{course_id}/assignments/{assignment_id_str}"
      )
      assignment_name = (
          assignment_response.get("name", "Unknown Assignment")
          if "error" not in assignment_response
          else "Unknown Assignment"
      )

      # Calculate total points from rubric assessment
      total_points = sum(criterion.get("points", 0) for criterion in rubric_assessment.values())

      course_display = await get_course_code(course_id) or course_identifier

      result = "Rubric Grade Submitted Successfully!\n\n"
      result += f"Course: {course_display}\n"
      result += f"Assignment: {assignment_name}\n"
      result += f"Student ID: {user_id}\n"
      result += f"Total Rubric Points: {total_points}\n"
      result += f"Grade: {response.get('grade', 'N/A')}\n"
      result += f"Score: {response.get('score', 'N/A')}\n"
      result += f"Graded At: {format_date(response.get('graded_at'))}\n"

      if comment:
          result += f"Overall Comment: {comment}\n"

      result += "\nRubric Assessment Summary:\n"
      for criterion_id, assessment in rubric_assessment.items():
          points = assessment.get("points", 0)
          rating_id = assessment.get("rating_id", "")
          comments = assessment.get("comments", "")
          result += f"  Criterion {criterion_id}: {points} points"
          if rating_id:
              result += f" (Rating: {rating_id})"
          if comments:
              result += f"\n    Comment: {truncate_text(comments, 100)}"
          result += "\n"

      return result
  ```
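  For reference, the handler's final step boils down to a form-encoded PUT against the Canvas submissions endpoint. A standalone sketch using `httpx`; the library choice, environment variables, and token handling here are assumptions (inside the server this is all wrapped by `make_canvas_request`):

  ```python
  import os
  import httpx

  # Hypothetical standalone call; base URL and token handling are assumptions.
  BASE = os.environ["CANVAS_BASE_URL"]    # e.g. https://canvas.example.edu/api/v1
  TOKEN = os.environ["CANVAS_API_TOKEN"]

  form_data = {
      "rubric_assessment[_8027][points]": "2",
      "rubric_assessment[_8027][rating_id]": "blank",
      "rubric_assessment[_8027][comments]": "Great work!",
      "comment[text_comment]": "Nice job on this assignment",
  }

  resp = httpx.put(
      f"{BASE}/courses/60366/assignments/1440586/submissions/9824",
      headers={"Authorization": f"Bearer {TOKEN}"},
      data=form_data,  # form-encoded, matching use_form_data=True in the handler
  )
  resp.raise_for_status()
  print(resp.json().get("score"))
  ```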
- Helper function used by `grade_with_rubric` to format rubric assessment data into the Canvas API's required form-encoded structure.

  ```python
  def build_rubric_assessment_form_data(
      rubric_assessment: dict[str, Any],
      comment: str | None = None,
  ) -> dict[str, str]:
      """Convert rubric assessment dict to Canvas form-encoded format.

      Canvas API expects rubric assessment data as form-encoded parameters
      with bracket notation:
          rubric_assessment[criterion_id][field]=value

      Args:
          rubric_assessment: Dict mapping criterion IDs to assessment data
              Format: {"criterion_id": {"points": X, "rating_id": Y, "comments": Z}}
          comment: Optional overall comment for the submission

      Returns:
          Flattened dict with Canvas bracket notation keys

      Example:
          Input: {"_8027": {"points": 2, "rating_id": "blank", "comments": "Great work"}}
          Output: {
              "rubric_assessment[_8027][points]": "2",
              "rubric_assessment[_8027][rating_id]": "blank",
              "rubric_assessment[_8027][comments]": "Great work"
          }
      """
      form_data: dict[str, str] = {}

      # Transform rubric_assessment object into Canvas's form-encoded format
      for criterion_id, assessment in rubric_assessment.items():
          # Points are required
          if "points" in assessment:
              form_data[f"rubric_assessment[{criterion_id}][points]"] = str(assessment["points"])

          # Rating ID is optional but recommended
          if "rating_id" in assessment:
              form_data[f"rubric_assessment[{criterion_id}][rating_id]"] = str(assessment["rating_id"])

          # Comments are optional
          if "comments" in assessment:
              form_data[f"rubric_assessment[{criterion_id}][comments]"] = str(assessment["comments"])

      # Add optional overall comment
      if comment:
          form_data["comment[text_comment]"] = comment

      return form_data
  ```
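  A quick usage check of this helper; `urlencode` is only used here to show what the flattened keys look like on the wire (Canvas, being Rails-based, parses the bracket notation back into nested params):

  ```python
  from urllib.parse import urlencode

  form = build_rubric_assessment_form_data(
      {"_8027": {"points": 2, "rating_id": "blank", "comments": "Great work!"}},
      comment="Nice job on this assignment",
  )
  print(urlencode(form))
  # rubric_assessment%5B_8027%5D%5Bpoints%5D=2&rubric_assessment%5B_8027%5D%5Brating_id%5D=blank&...
  ```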
- Helper for validating rubric criteria structure, used in rubric creation/updating tools.

  ```python
  def validate_rubric_criteria(criteria_json: str) -> dict[str, Any]:
      """Validate and parse rubric criteria JSON structure.

      Args:
          criteria_json: JSON string containing rubric criteria

      Returns:
          Parsed criteria dictionary

      Raises:
          ValueError: If JSON is invalid or structure is incorrect
      """
      # Preprocess the string to handle common issues
      cleaned_json = preprocess_criteria_string(criteria_json)

      try:
          criteria = json.loads(cleaned_json)
      except json.JSONDecodeError as e:
          # Try alternative parsing methods if JSON fails
          try:
              # Maybe it's a Python literal string representation
              import ast
              criteria = ast.literal_eval(cleaned_json)
              if isinstance(criteria, dict):
                  # Successfully parsed as Python literal, continue with validation
                  pass
              else:
                  raise ValueError("Parsed result is not a dictionary")
          except (ValueError, SyntaxError):
              # Both JSON and literal_eval failed, provide detailed error
              error_msg = f"Invalid JSON format: {str(e)}\n"
              error_msg += f"Original string length: {len(criteria_json)}\n"
              error_msg += f"Cleaned string length: {len(cleaned_json)}\n"
              error_msg += f"First 200 characters of original: {repr(criteria_json[:200])}\n"
              error_msg += f"First 200 characters of cleaned: {repr(cleaned_json[:200])}\n"
              if len(cleaned_json) > 200:
                  error_msg += f"Last 100 characters of cleaned: {repr(cleaned_json[-100:])}"
              error_msg += "\nAlso failed to parse as Python literal. Please ensure the criteria is valid JSON."
              raise ValueError(error_msg) from e

      if not isinstance(criteria, dict):
          raise ValueError("Criteria must be a JSON object (dictionary)")

      # Validate each criterion
      for criterion_key, criterion_data in criteria.items():
          if not isinstance(criterion_data, dict):
              raise ValueError(f"Criterion {criterion_key} must be an object")

          if "description" not in criterion_data:
              raise ValueError(f"Criterion {criterion_key} must have a 'description' field")

          if "points" not in criterion_data:
              raise ValueError(f"Criterion {criterion_key} must have a 'points' field")

          try:
              points = float(criterion_data["points"])
              if points < 0:
                  raise ValueError(f"Criterion {criterion_key} points must be non-negative")
          except (ValueError, TypeError) as err:
              raise ValueError(f"Criterion {criterion_key} points must be a valid number") from err

          # Validate ratings if present - handle both object and array formats
          if "ratings" in criterion_data:
              ratings = criterion_data["ratings"]

              if isinstance(ratings, dict):
                  # Object format: {"1": {...}, "2": {...}}
                  for rating_key, rating_data in ratings.items():
                      if not isinstance(rating_data, dict):
                          raise ValueError(f"Rating {rating_key} in criterion {criterion_key} must be an object")
                      if "description" not in rating_data:
                          raise ValueError(f"Rating {rating_key} in criterion {criterion_key} must have a 'description' field")
                      if "points" not in rating_data:
                          raise ValueError(f"Rating {rating_key} in criterion {criterion_key} must have a 'points' field")
                      try:
                          rating_points = float(rating_data["points"])
                          if rating_points < 0:
                              raise ValueError(f"Rating {rating_key} points must be non-negative")
                      except (ValueError, TypeError) as err:
                          raise ValueError(f"Rating {rating_key} points must be a valid number") from err
              elif isinstance(ratings, list):
                  # Array format: [{"description": ..., "points": ...}, ...]
                  for i, rating_data in enumerate(ratings):
                      if not isinstance(rating_data, dict):
                          raise ValueError(f"Rating {i} in criterion {criterion_key} must be an object")
                      if "description" not in rating_data:
                          raise ValueError(f"Rating {i} in criterion {criterion_key} must have a 'description' field")
                      if "points" not in rating_data:
                          raise ValueError(f"Rating {i} in criterion {criterion_key} must have a 'points' field")
                      try:
                          rating_points = float(rating_data["points"])
                          if rating_points < 0:
                              raise ValueError(f"Rating {i} points must be non-negative")
                      except (ValueError, TypeError) as err:
                          raise ValueError(f"Rating {i} points must be a valid number") from err
              else:
                  raise ValueError(f"Criterion {criterion_key} ratings must be an object or array")

      return criteria
  ```
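  A hedged example of input this validator accepts (the criterion keys, descriptions, and point values are made up; per the code above, `ratings` may be either an object or an array of the same dicts):

  ```python
  import json

  criteria = {
      "1": {
          "description": "Thesis clarity",
          "points": 4,
          # Object-format ratings; a list of the same dicts is accepted too.
          "ratings": {
              "1": {"description": "Clear and specific", "points": 4},
              "2": {"description": "Vague or missing", "points": 0},
          },
      }
  }

  parsed = validate_rubric_criteria(json.dumps(criteria))
  assert parsed["1"]["points"] == 4
  ```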
- **src/canvas_mcp/tools/__init__.py:1-29 (registration)**: Imports `register_rubric_tools`, which defines and registers the `grade_with_rubric` MCP tool among others.

  ```python
  """Tool modules for Canvas MCP server."""

  from .courses import register_course_tools
  from .assignments import register_assignment_tools
  from .discussions import register_discussion_tools
  from .other_tools import register_other_tools
  from .rubrics import register_rubric_tools
  from .peer_reviews import register_peer_review_tools
  from .peer_review_comments import register_peer_review_comment_tools
  from .messaging import register_messaging_tools
  from .student_tools import register_student_tools
  from .accessibility import register_accessibility_tools
  from .discovery import register_discovery_tools
  from .code_execution import register_code_execution_tools

  __all__ = [
      'register_course_tools',
      'register_assignment_tools',
      'register_discussion_tools',
      'register_other_tools',
      'register_rubric_tools',
      'register_peer_review_tools',
      'register_peer_review_comment_tools',
      'register_messaging_tools',
      'register_student_tools',
      'register_accessibility_tools',
      'register_discovery_tools',
      'register_code_execution_tools',
  ]
  ```
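  The `@mcp.tool()` decorator in the handler suggests a FastMCP-style server. A sketch of how these registration helpers are presumably wired together at startup (the server name, entry point, and `register_rubric_tools` signature are assumptions, not confirmed by the source):

  ```python
  from mcp.server.fastmcp import FastMCP

  from canvas_mcp.tools import register_rubric_tools

  mcp = FastMCP("canvas-mcp")  # placeholder server name
  register_rubric_tools(mcp)   # assumed signature; defines grade_with_rubric via @mcp.tool()
  ```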
- TypeScript implementation of `gradeWithRubric`, used via the code_execution tool for similar functionality.

  ```typescript
  export async function gradeWithRubric(
    input: GradeWithRubricInput
  ): Promise<GradeResponse> {
    const {
      courseIdentifier,
      assignmentId,
      userId,
      rubricAssessment,
      grade,
      comment
    } = input;

    // Validate: Must have either rubricAssessment OR grade
    if (!rubricAssessment && !grade && grade !== 0) {
      throw new Error('Must provide either rubricAssessment or grade');
    }

    let formData: Record<string, string> = {};

    // Handle rubric-based grading
    if (rubricAssessment && Object.keys(rubricAssessment).length > 0) {
      validateRubricAssessment(rubricAssessment);
      formData = buildRubricAssessmentFormData(rubricAssessment, comment);
    }
    // Handle simple grading
    else if (grade !== undefined) {
      formData['submission[posted_grade]'] = String(grade);
      if (comment) {
        formData['comment[text_comment]'] = comment;
      }
    }

    // Canvas API endpoint for updating submission
    const endpoint = `/courses/${courseIdentifier}/assignments/${assignmentId}/submissions/${userId}`;

    try {
      // Submit the grade with rubric assessment using form encoding
      const response = await canvasPutForm<GradeResponse>(endpoint, formData);
      return response;
    } catch (error: any) {
      throw new Error(
        `Failed to grade submission: ${error.message}\n` +
        `Check that rubric is configured for grading and criterion IDs are correct.`
      );
    }
  }
  ```