# generate_peer_review_report
Generates comprehensive peer review completion reports for Canvas assignments, combining an executive summary, detailed analytics, and actionable recommendations so instructors can track student participation and identify students who need follow-up.
## Instructions
Generate comprehensive peer review completion report with executive summary, detailed analytics, and actionable follow-up recommendations.
**Args:**

- `course_identifier`: Canvas course code (e.g., `badm_554_120251_246794`) or numeric course ID
- `assignment_id`: Canvas assignment ID
- `report_format`: report format: `markdown`, `csv`, or `json` (default: `markdown`)
- `include_executive_summary`: include the executive summary section (default: `true`)
- `include_student_details`: include per-student details (default: `true`)
- `include_action_items`: include action items and follow-up recommendations (default: `true`)
- `include_timeline_analysis`: include timeline analysis (default: `true`)
- `save_to_file`: save the report to a local file (default: `false`)
- `filename`: custom filename for the saved report; auto-generated from the assignment ID and a timestamp if omitted
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| assignment_id | Yes | Canvas assignment ID | |
| course_identifier | Yes | Canvas course code (e.g., badm_554_120251_246794) or ID | |
| filename | No | Custom filename for saved report | |
| include_action_items | No | Include action items | true |
| include_executive_summary | No | Include executive summary | true |
| include_student_details | No | Include student details | true |
| include_timeline_analysis | No | Include timeline analysis | true |
| report_format | No | Report format (markdown, csv, json) | markdown |
| save_to_file | No | Save report to local file | false |
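For orientation, a typical argument payload looks like the sketch below (shown as a Python dict; the course and assignment identifiers are hypothetical, and only the two required fields must be supplied):

```python
# Illustrative arguments for generate_peer_review_report; identifiers are
# hypothetical placeholders, and omitted fields fall back to their defaults.
arguments = {
    "course_identifier": "badm_554_120251_246794",  # or a numeric course ID
    "assignment_id": "123456",
    "report_format": "markdown",   # one of: markdown, csv, json
    "save_to_file": True,          # also write the report to disk
    "filename": "peer_review_week3.markdown",
}
```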
## Implementation Reference
- **Tool handler**: the MCP tool handler function `generate_peer_review_report` that implements the core tool logic, including parameter handling, Canvas API integration via `PeerReviewAnalyzer`, report generation, and optional file saving.

```python
@mcp.tool()
@validate_params
async def generate_peer_review_report(
    course_identifier: str | int,
    assignment_id: str | int,
    report_format: str = "markdown",
    include_executive_summary: bool = True,
    include_student_details: bool = True,
    include_action_items: bool = True,
    include_timeline_analysis: bool = True,
    save_to_file: bool = False,
    filename: str = None
) -> str:
    """Generate comprehensive peer review completion report with executive summary,
    detailed analytics, and actionable follow-up recommendations.

    Args:
        course_identifier: Canvas course code (e.g., badm_554_120251_246794) or ID
        assignment_id: Canvas assignment ID
        report_format: Report format (markdown, csv, json)
        include_executive_summary: Include executive summary
        include_student_details: Include student details
        include_action_items: Include action items
        include_timeline_analysis: Include timeline analysis
        save_to_file: Save report to local file
        filename: Custom filename for saved report
    """
    try:
        course_id = await get_course_id(course_identifier)

        analyzer = PeerReviewAnalyzer()
        result = await analyzer.generate_report(
            course_id=course_id,
            assignment_id=int(assignment_id),
            report_format=report_format,
            include_executive_summary=include_executive_summary,
            include_student_details=include_student_details,
            include_action_items=include_action_items,
            include_timeline_analysis=include_timeline_analysis
        )

        if "error" in result:
            return f"Error generating peer review report: {result['error']}"

        # Handle file saving if requested
        if save_to_file and "report" in result:
            import os
            from datetime import datetime

            if not filename:
                timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
                filename = f"peer_review_report_{assignment_id}_{timestamp}.{report_format}"

            try:
                # Save to current working directory
                with open(filename, 'w', encoding='utf-8') as f:
                    f.write(result["report"])
                result["saved_to"] = os.path.abspath(filename)
            except Exception as save_error:
                result["save_error"] = f"Failed to save file: {str(save_error)}"

        if report_format in ["csv", "markdown"]:
            return result.get("report", json.dumps(result, indent=2))
        else:
            return json.dumps(result, indent=2)

    except Exception as e:
        return f"Error in generate_peer_review_report: {str(e)}"
```
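After a successful run, the `result` dict carries the report plus any file-saving metadata. A sketch of its shape for a markdown run with `save_to_file=True`, with keys taken from the handler above and all values illustrative:

```python
# Shape implied by the handler above; every value here is illustrative.
result = {
    "report": "# Peer Review Completion Report\n...",
    # Added after a successful write; note the auto-generated filename uses
    # the raw report_format string as the extension (".markdown", not ".md").
    "saved_to": "/abs/path/peer_review_report_123456_20250115_103000.markdown",
}
# On a failed write, "saved_to" is absent and the handler records instead:
# result["save_error"] = "Failed to save file: <reason>"
```

Note the return-type split: for `markdown` and `csv` the handler returns the raw report text so it can be rendered directly, while `json` returns the entire structure serialized with `json.dumps`.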
- **Report generator**: the core helper method on the `PeerReviewAnalyzer` class that performs the actual report generation by aggregating analytics data and formatting the output as markdown, CSV, or JSON.

```python
async def generate_report(
    self,
    course_id: int,
    assignment_id: int,
    report_format: str = "markdown",
    include_executive_summary: bool = True,
    include_student_details: bool = True,
    include_action_items: bool = True,
    include_timeline_analysis: bool = True
) -> dict[str, Any]:
    """Generate comprehensive peer review completion report."""
    try:
        # Get analytics data
        analytics = await self.get_completion_analytics(
            course_id, assignment_id, include_student_details=True
        )
        if "error" in analytics:
            return analytics

        # Get assignment info
        assignments_data = await self.get_assignments(course_id, assignment_id)
        if "error" in assignments_data:
            return assignments_data

        assignment_info = assignments_data["assignment_info"]

        if report_format == "markdown":
            return self._generate_markdown_report(
                analytics, assignment_info,
                include_executive_summary, include_student_details,
                include_action_items, include_timeline_analysis
            )
        elif report_format == "csv":
            return self._generate_csv_report(analytics, assignment_info)
        elif report_format == "json":
            return {
                "assignment_info": assignment_info,
                "analytics": analytics,
                "generated_at": datetime.datetime.now().isoformat()
            }
        else:
            return {"error": f"Unsupported report format: {report_format}"}

    except Exception as e:
        return {"error": f"Exception in generate_report: {str(e)}"}
```
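For the `json` branch, the three top-level keys come straight from the dict literal above; the contents of `analytics` depend on `get_completion_analytics`, which is not shown here, so the nested values in this sketch are hypothetical:

```python
# Illustrative json-format return value. Only the three top-level keys and
# the summary/completion_groups key names appear in the code in this
# reference; all concrete values are made up.
json_result = {
    "assignment_info": {"id": 123456, "name": "Essay 1 Peer Review"},
    "analytics": {
        "summary": {"total_students_enrolled": 40, "reviews_completed": 58},
        "completion_groups": {"all_complete": [], "partial_complete": [], "none_complete": []},
    },
    "generated_at": "2025-01-15T10:30:00.123456",
}
```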
- **Server registration** (src/canvas_mcp/server.py:52): the call to `register_peer_review_tools(mcp)` within `register_all_tools`, which defines and registers the `generate_peer_review_report` tool using FastMCP decorators.

```python
register_peer_review_tools(mcp)
```
- **Markdown formatter**: the supporting helper method that generates the detailed markdown report content, including summaries, action items, and recommendations.

```python
def _generate_markdown_report(
    self,
    analytics: dict[str, Any],
    assignment_info: dict[str, Any],
    include_executive_summary: bool,
    include_student_details: bool,
    include_action_items: bool,
    include_timeline_analysis: bool
) -> dict[str, str]:
    """Generate a markdown-formatted report."""
    summary = analytics["summary"]
    completion_groups = analytics.get("completion_groups", {})

    report_lines = []

    # Header
    report_lines.extend([
        "# Peer Review Completion Report",
        f"**Assignment:** {assignment_info['name']} (ID: {assignment_info['id']})",
        f"**Generated:** {datetime.datetime.now().strftime('%B %d, %Y')}",
        "",
        "---",
        ""
    ])

    # Executive Summary
    if include_executive_summary:
        report_lines.extend([
            "## Executive Summary",
            "",
            "| Metric | Count | Percentage |",
            "|--------|-------|------------|",
            f"| **Total Students Enrolled** | {summary['total_students_enrolled']} | 100% |",
            f"| **Students with Submissions** | {summary['students_with_submissions']} | {round(summary['students_with_submissions']/summary['total_students_enrolled']*100, 1)}% |",
            f"| **Total Peer Reviews Assigned** | {summary['total_reviews_assigned']} | - |",
            f"| **Peer Reviews Completed** | {summary['reviews_completed']} | {summary['completion_rate_percent']}% |",
            f"| **Students with All Reviews Complete** | {summary['students_all_complete']} | {round(summary['students_all_complete']/summary['total_students_enrolled']*100, 1)}% |",
            "",
            "---",
            ""
        ])

    # Action items
    if include_action_items:
        urgent_students = completion_groups.get("none_complete", [])
        partial_students = completion_groups.get("partial_complete", [])

        if urgent_students:
            report_lines.extend([
                "## 🚨 Immediate Action Required",
                "",
                f"**Students with NO peer reviews completed ({len(urgent_students)} students):**",
            ])
            for student in urgent_students[:5]:  # Show first 5
                pending_reviews = student.get("pending_reviews", [])
                reviewee_names = [pr["reviewee_name"] for pr in pending_reviews[:2]]
                report_lines.append(
                    f"- {student['student_name']} (ID: {student['student_id']}): "
                    f"Assigned to review {' and '.join(reviewee_names)} "
                    f"({student['completed_count']}/{student['assigned_count']} complete)"
                )
            if len(urgent_students) > 5:
                report_lines.append(f"- [{len(urgent_students) - 5} more students...]")
            report_lines.extend([
                "",
                "**Contact Information:**",
                "- Send urgent reminder emails",
                "- Consider deadline extensions",
                "- Follow up within 24 hours",
                "",
                "---",
                ""
            ])

        if partial_students:
            report_lines.extend([
                "## ⚠️ Partial Completion Follow-up",
                "",
                f"**Students with partial reviews completed ({len(partial_students)} students):**",
            ])
            for student in partial_students[:5]:  # Show first 5
                pending_reviews = student.get("pending_reviews", [])
                if pending_reviews:
                    pending_name = pending_reviews[0]["reviewee_name"]
                    report_lines.append(
                        f"- {student['student_name']}: "
                        f"{student['completed_count']}/{student['assigned_count']} complete, "
                        f"pending review of {pending_name}"
                    )
            if len(partial_students) > 5:
                report_lines.append(f"- [{len(partial_students) - 5} more students...]")
            report_lines.extend([
                "",
                "---",
                ""
            ])

    # Fully engaged students
    complete_students = completion_groups.get("all_complete", [])
    if complete_students:
        report_lines.extend([
            f"## ✅ Fully Engaged Students ({len(complete_students)} students)",
            "",
            "**Students with all peer reviews completed:**",
            "- High participation rate indicates good course engagement",
            "- Consider highlighting exemplary completion in class",
            "",
            "---",
            ""
        ])

    # Recommendations
    if include_action_items:
        report_lines.extend([
            "## Recommendations",
            "",
            "### Immediate (Next 24 hours)",
            f"1. Contact {len(urgent_students)} students with zero completions",
            f"2. Send automated reminder to {len(partial_students)} partial completions",
            "",
            "### Short-term (Next week)",
            "1. Review peer review assignment timing",
            "2. Consider automated reminders for future assignments",
            "",
            "### Process Improvements",
            "1. Set peer review assignments 24-48 hours after due date",
            "2. Implement interim completion checkpoints",
            "3. Add peer review completion to participation grade",
            "",
            "---",
            "",
            "*Report generated using Canvas Peer Review Analytics Tool*"
        ])

    return {"report": "\n".join(report_lines)}
```
- **Tool module registration** (src/canvas_mcp/tools/peer_reviews.py:12): the registration function that contains the `@mcp.tool()`-decorated definitions for the peer review tools, including `generate_peer_review_report`.

```python
def register_peer_review_tools(mcp: FastMCP):
```
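Taken together, the two registration references imply the pattern sketched below. This is an abridged sketch: only the function and file names shown above come from the source, and the `register_all_tools` signature is assumed.

```python
# src/canvas_mcp/tools/peer_reviews.py (abridged sketch)
def register_peer_review_tools(mcp: FastMCP):
    # Tool definitions are nested inside the registrar so that the
    # @mcp.tool() decorator binds to the server instance passed in.
    @mcp.tool()
    @validate_params
    async def generate_peer_review_report(
        course_identifier: str | int,
        assignment_id: str | int,
        # ... remaining parameters as in the handler shown earlier ...
    ) -> str:
        ...

# src/canvas_mcp/server.py (abridged sketch; signature assumed)
def register_all_tools(mcp: FastMCP):
    # ... other tool groups ...
    register_peer_review_tools(mcp)  # line 52 per the reference above
```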