generate_peer_review_feedback_report
Generate instructor-ready reports on peer review quality for Canvas assignments, with configurable report types and output formats for assessing the effectiveness of student feedback (the current handler renders Markdown only).
Instructions
Create instructor-ready reports on peer review quality.
Args:
course_identifier: Canvas course code or ID
assignment_id: Canvas assignment ID
report_type: Report type (comprehensive, summary, individual)
include_student_names: Whether to include student names
format_type: Output format (markdown, html, text)
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| assignment_id | Yes | Canvas assignment ID | |
| course_identifier | Yes | Canvas course code or ID | |
| format_type | No | Output format (markdown, html, text) | markdown |
| include_student_names | No | Whether to include student names | false |
| report_type | No | Report type (comprehensive, summary, individual) | comprehensive |
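
The sketch below shows one way to call the tool through the official MCP Python client over stdio. The server launch command (`python -m canvas_mcp.server`) and every argument value are assumptions for illustration only.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Assumed launch command for the Canvas MCP server; adjust to your installation.
server_params = StdioServerParameters(command="python", args=["-m", "canvas_mcp.server"])


async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Hypothetical course and assignment identifiers.
            result = await session.call_tool(
                "generate_peer_review_feedback_report",
                arguments={
                    "course_identifier": "badm_350_120251_246794",
                    "assignment_id": "98765",
                    "report_type": "summary",
                    "include_student_names": False,
                    "format_type": "markdown",
                },
            )
            # The tool returns the rendered Markdown report as text content.
            for item in result.content:
                print(getattr(item, "text", ""))


asyncio.run(main())
```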
Implementation Reference
- The core handler function for the `generate_peer_review_feedback_report` tool. It orchestrates data collection from `PeerReviewCommentAnalyzer` and generates a Markdown report for instructors.

```python
@mcp.tool()
@validate_params
async def generate_peer_review_feedback_report(
    course_identifier: str | int,
    assignment_id: str | int,
    report_type: str = "comprehensive",
    include_student_names: bool = False,
    format_type: str = "markdown"
) -> str:
    """
    Create instructor-ready reports on peer review quality.

    Args:
        course_identifier: Canvas course code or ID
        assignment_id: Canvas assignment ID
        report_type: Report type (comprehensive, summary, individual)
        include_student_names: Whether to include student names
        format_type: Output format (markdown, html, text)
    """
    try:
        course_id = await get_course_id(course_identifier)
        analyzer = PeerReviewCommentAnalyzer()

        # Get analytics data
        analytics_data = await analyzer.analyze_peer_review_quality(
            course_id=course_id,
            assignment_id=int(assignment_id)
        )

        if "error" in analytics_data:
            return f"Error getting analytics data: {analytics_data['error']}"

        # Get problematic reviews
        problematic_data = await analyzer.identify_problematic_peer_reviews(
            course_id=course_id,
            assignment_id=int(assignment_id)
        )

        # Get assignment info
        assignment_response = await make_canvas_request(
            "get", f"/courses/{course_id}/assignments/{assignment_id}"
        )
        assignment_name = (
            assignment_response.get("name", "Unknown Assignment")
            if "error" not in assignment_response
            else "Unknown Assignment"
        )

        # Generate report based on type
        if format_type.lower() == "markdown":
            return _generate_markdown_report(
                analytics_data, problematic_data, assignment_name, report_type
            )
        else:
            return f"Error: Unsupported format '{format_type}'. Currently only 'markdown' is supported."

    except Exception as e:
        return f"Error in generate_peer_review_feedback_report: {str(e)}"
```
- Supporting helper function that formats the peer review analytics into a detailed Markdown report used by the main handler.

```python
def _generate_markdown_report(
    analytics_data: dict[str, Any],
    problematic_data: dict[str, Any],
    assignment_name: str,
    report_type: str
) -> str:
    """Generate a markdown report from analytics data."""
    overall = analytics_data.get("overall_analysis", {})
    metrics = analytics_data.get("detailed_metrics", {})
    flagged = analytics_data.get("flagged_reviews", [])
    recommendations = analytics_data.get("recommendations", [])

    word_stats = metrics.get("word_count_stats", {})
    constructiveness = metrics.get("constructiveness_analysis", {})
    sentiment = metrics.get("sentiment_analysis", {})
    problematic_summary = problematic_data.get("flag_summary", {})

    report_lines = [
        f"# Peer Review Quality Report: {assignment_name}",
        "",
        f"**Generated on:** {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
        f"**Report Type:** {report_type.title()}",
        "",
        "## Executive Summary",
        "",
        f"- **Total Reviews Analyzed:** {overall.get('total_reviews_analyzed', 0)}",
        f"- **Average Quality Score:** {overall.get('average_quality_score', 0)}/5.0",
        f"- **High Quality Reviews:** {overall.get('quality_distribution', {}).get('high_quality', 0)}",
        f"- **Medium Quality Reviews:** {overall.get('quality_distribution', {}).get('medium_quality', 0)}",
        f"- **Low Quality Reviews:** {overall.get('quality_distribution', {}).get('low_quality', 0)}",
        "",
        "## Word Count Statistics",
        "",
        f"- **Average Words per Comment:** {word_stats.get('mean', 0)}",
        f"- **Median Words:** {word_stats.get('median', 0)}",
        f"- **Range:** {word_stats.get('min', 0)} - {word_stats.get('max', 0)} words",
        f"- **Standard Deviation:** {word_stats.get('std_dev', 0)}",
        "",
        "## Comment Quality Analysis",
        "",
        f"- **Constructive Feedback:** {constructiveness.get('constructive_feedback_count', 0)} reviews",
        f"- **Generic Comments:** {constructiveness.get('generic_comments', 0)} reviews",
        f"- **Specific Suggestions:** {constructiveness.get('specific_suggestions', 0)} reviews",
        "",
        "## Sentiment Distribution",
        "",
        f"- **Positive Sentiment:** {sentiment.get('positive_sentiment', 0)*100:.1f}%",
        f"- **Neutral Sentiment:** {sentiment.get('neutral_sentiment', 0)*100:.1f}%",
        f"- **Negative Sentiment:** {sentiment.get('negative_sentiment', 0)*100:.1f}%",
        ""
    ]

    if problematic_summary:
        report_lines.extend([
            "## Flagged Issues",
            "",
        ])
        for flag_type, count in problematic_summary.items():
            flag_name = flag_type.replace("_", " ").title()
            report_lines.append(f"- **{flag_name}:** {count} reviews")
        report_lines.append("")

    if flagged and report_type == "comprehensive":
        report_lines.extend([
            "## Sample Low-Quality Reviews",
            "",
        ])
        for i, review in enumerate(flagged[:5]):  # Show top 5
            report_lines.extend([
                f"### Review {i+1}",
                f"- **Quality Score:** {review.get('quality_score', 0)}/5.0",
                f"- **Word Count:** {review.get('word_count', 0)}",
                f"- **Flag Reason:** {review.get('flag_reason', 'Unknown')}",
                f"- **Comment Preview:** \"{review.get('comment', 'No comment')}\"",
                ""
            ])

    if recommendations:
        report_lines.extend([
            "## Recommendations",
            "",
        ])
        for i, rec in enumerate(recommendations, 1):
            report_lines.append(f"{i}. {rec}")
        report_lines.append("")

    report_lines.extend([
        "---",
        "*Generated by Canvas MCP Peer Review Comment Analyzer*"
    ])

    return "\n".join(report_lines)
```
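The nested dictionary shape the helper expects can be inferred from the keys it reads. Below is a minimal local exercise of the function with invented sample values; the import path, and whether the helper is importable at module level, are assumptions.

```python
# Hypothetical sample data exercising _generate_markdown_report; all values are invented.
sample_analytics = {
    "overall_analysis": {
        "total_reviews_analyzed": 42,
        "average_quality_score": 3.4,
        "quality_distribution": {"high_quality": 12, "medium_quality": 22, "low_quality": 8},
    },
    "detailed_metrics": {
        "word_count_stats": {"mean": 38.5, "median": 31, "min": 4, "max": 142, "std_dev": 19.7},
        "constructiveness_analysis": {
            "constructive_feedback_count": 25,
            "generic_comments": 10,
            "specific_suggestions": 7,
        },
        "sentiment_analysis": {"positive_sentiment": 0.55, "neutral_sentiment": 0.35, "negative_sentiment": 0.10},
    },
    "flagged_reviews": [
        {"quality_score": 1.5, "word_count": 4, "flag_reason": "Too short", "comment": "Looks good."}
    ],
    "recommendations": ["Share a rubric with minimum word-count expectations."],
}
sample_problematic = {"flag_summary": {"too_short": 6, "generic_praise": 4}}

# Assumed import path based on the file referenced below; adjust if the helper is nested.
from canvas_mcp.tools.peer_review_comments import _generate_markdown_report

print(_generate_markdown_report(sample_analytics, sample_problematic, "Essay 1 Draft", "comprehensive"))
```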
- `src/canvas_mcp/server.py:53` (registration): Top-level registration call in the main server setup that invokes the module-specific tool registration function, which defines and registers the tool via the `@mcp.tool()` decorator.

```python
register_peer_review_comment_tools(mcp)
```
- `src/canvas_mcp/tools/peer_review_comments.py:17` (registration): Module-level registration function that defines and registers the peer review comment tools using `@mcp.tool()` decorators, including this tool.

```python
def register_peer_review_comment_tools(mcp: FastMCP):
```
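
Taken together, the registration chain looks roughly like the sketch below. This is a simplified reconstruction, not the actual server code: the server name string is invented, and the project's `@validate_params` decorator and helper imports are omitted.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("canvas-mcp")  # hypothetical server name


def register_peer_review_comment_tools(mcp: FastMCP):
    """Attach the peer review comment tools to the shared FastMCP instance."""

    @mcp.tool()
    async def generate_peer_review_feedback_report(
        course_identifier: str | int,
        assignment_id: str | int,
        report_type: str = "comprehensive",
        include_student_names: bool = False,
        format_type: str = "markdown",
    ) -> str:
        ...  # body shown in the handler reference above


# server.py invokes the registration function once at startup.
register_peer_review_comment_tools(mcp)
```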