generate_peer_review_report

Generate peer review completion reports with executive summaries, analytics, and actionable recommendations for Canvas assignments.

Instructions

Generate comprehensive peer review completion report with executive summary, detailed analytics, and actionable follow-up recommendations.

    Args:
        course_identifier: Canvas course code (e.g., badm_554_120251_246794) or ID
        assignment_id: Canvas assignment ID
        report_format: Report format (markdown, csv, json)
        include_executive_summary: Include executive summary
        include_student_details: Include student details
        include_action_items: Include action items
        include_timeline_analysis: Include timeline analysis
        save_to_file: Save report to local file
        filename: Custom filename for saved report
    

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| course_identifier | Yes | Canvas course code (e.g., badm_554_120251_246794) or ID | |
| assignment_id | Yes | Canvas assignment ID | |
| report_format | No | Report format (markdown, csv, json) | markdown |
| include_executive_summary | No | Include executive summary | true |
| include_student_details | No | Include student details | true |
| include_action_items | No | Include action items | true |
| include_timeline_analysis | No | Include timeline analysis | true |
| save_to_file | No | Save report to local file | false |
| filename | No | Custom filename for saved report | |
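As a sketch of how a client might invoke this tool over MCP, the request below follows the standard `tools/call` method shape; the argument values are illustrative, not real course data:

```python
# Illustrative MCP tools/call request for generate_peer_review_report.
# Argument values are examples only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "generate_peer_review_report",
        "arguments": {
            "course_identifier": "badm_554_120251_246794",
            "assignment_id": 12345,
            "report_format": "json",  # one of: markdown, csv, json
            "save_to_file": True,     # defaults to False
        },
    },
}
```

Omitted arguments fall back to the defaults in the table above.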

Implementation Reference

  • Primary MCP tool handler for generate_peer_review_report. Decorated with @mcp.tool(), validates params, delegates to PeerReviewAnalyzer.generate_report, handles output formatting and optional file saving.
    @mcp.tool()
    @validate_params
    async def generate_peer_review_report(
        course_identifier: str | int,
        assignment_id: str | int,
        report_format: str = "markdown",
        include_executive_summary: bool = True,
        include_student_details: bool = True,
        include_action_items: bool = True,
        include_timeline_analysis: bool = True,
        save_to_file: bool = False,
        filename: str | None = None
    ) -> str:
        """Generate comprehensive peer review completion report with executive summary, detailed analytics, and actionable follow-up recommendations.
    
        Args:
            course_identifier: Canvas course code (e.g., badm_554_120251_246794) or ID
            assignment_id: Canvas assignment ID
            report_format: Report format (markdown, csv, json)
            include_executive_summary: Include executive summary
            include_student_details: Include student details
            include_action_items: Include action items
            include_timeline_analysis: Include timeline analysis
            save_to_file: Save report to local file
            filename: Custom filename for saved report
        """
        try:
            course_id = await get_course_id(course_identifier)
            analyzer = PeerReviewAnalyzer()
    
            result = await analyzer.generate_report(
                course_id=course_id,
                assignment_id=int(assignment_id),
                report_format=report_format,
                include_executive_summary=include_executive_summary,
                include_student_details=include_student_details,
                include_action_items=include_action_items,
                include_timeline_analysis=include_timeline_analysis
            )
    
            if "error" in result:
                return f"Error generating peer review report: {result['error']}"
    
            # Handle file saving if requested
            if save_to_file and "report" in result:
                import os
                from datetime import datetime
    
                if not filename:
                    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
                    filename = f"peer_review_report_{assignment_id}_{timestamp}.{report_format}"
    
                try:
                    # Save to current working directory
                    with open(filename, 'w', encoding='utf-8') as f:
                        f.write(result["report"])
                    result["saved_to"] = os.path.abspath(filename)
                except Exception as save_error:
                    result["save_error"] = f"Failed to save file: {str(save_error)}"
    
            if report_format in ["csv", "markdown"]:
                return result.get("report", json.dumps(result, indent=2))
            else:
                return json.dumps(result, indent=2)
    
        except Exception as e:
            return f"Error in generate_peer_review_report: {str(e)}"
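When `save_to_file` is set without a `filename`, the handler falls back to a timestamped name. That fallback can be isolated as follows (the helper name is ours; the logic mirrors the handler above):

```python
from datetime import datetime

def default_report_filename(assignment_id, report_format):
    # Mirrors the handler's fallback naming:
    # peer_review_report_<assignment_id>_<timestamp>.<extension>
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    return f"peer_review_report_{assignment_id}_{timestamp}.{report_format}"
```

Note that the extension is taken verbatim from `report_format`, so a `markdown` report is saved as `.markdown` rather than `.md`.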
  • Core helper method in PeerReviewAnalyzer class that implements the report generation logic by aggregating completion analytics and assignment data, then dispatching to format-specific generators.
    async def generate_report(
        self,
        course_id: int,
        assignment_id: int,
        report_format: str = "markdown",
        include_executive_summary: bool = True,
        include_student_details: bool = True,
        include_action_items: bool = True,
        include_timeline_analysis: bool = True
    ) -> dict[str, Any]:
        """Generate comprehensive peer review completion report."""
    
        try:
            # Get analytics data
            analytics = await self.get_completion_analytics(
                course_id, assignment_id, include_student_details=True
            )
    
            if "error" in analytics:
                return analytics
    
            # Get assignment info
            assignments_data = await self.get_assignments(course_id, assignment_id)
            if "error" in assignments_data:
                return assignments_data
    
            assignment_info = assignments_data["assignment_info"]
    
            if report_format == "markdown":
                return self._generate_markdown_report(
                    analytics, assignment_info, include_executive_summary,
                    include_student_details, include_action_items, include_timeline_analysis
                )
            elif report_format == "csv":
                return self._generate_csv_report(analytics, assignment_info)
            elif report_format == "json":
                return {
                    "assignment_info": assignment_info,
                    "analytics": analytics,
                    "generated_at": datetime.datetime.now().isoformat()
                }
            else:
                return {"error": f"Unsupported report format: {report_format}"}
    
        except Exception as e:
            return {"error": f"Exception in generate_report: {str(e)}"}
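`_generate_csv_report` is not shown on this page; a minimal sketch of what such a format-specific generator could look like is below. The field names and the `student_details` key are assumptions for illustration, not the actual implementation:

```python
import csv
import io

def generate_csv_report_sketch(analytics, assignment_info):
    # Hypothetical: flatten per-student analytics rows into CSV text,
    # returning the same {"report": ...} shape the handler expects.
    fieldnames = ["student", "completed", "assigned"]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    for row in analytics.get("student_details", []):
        writer.writerow({k: row.get(k, "") for k in fieldnames})
    return {"report": buf.getvalue(), "report_format": "csv"}
```

Returning the CSV text under a `"report"` key lets the handler's `result.get("report", ...)` branch emit it directly.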
  • Top-level registration function; it calls register_peer_review_tools(mcp) at line 52, and that registrar in turn defines and registers the generate_peer_review_report tool.
    def register_all_tools(mcp: FastMCP) -> None:
        """Register all MCP tools, resources, and prompts."""
        log_info("Registering Canvas MCP tools...")
    
        # Register tools by category
        register_course_tools(mcp)
        register_assignment_tools(mcp)
        register_discussion_tools(mcp)
        register_other_tools(mcp)
        register_rubric_tools(mcp)
        register_peer_review_tools(mcp)
        register_peer_review_comment_tools(mcp)
        register_messaging_tools(mcp)
        register_student_tools(mcp)
        register_accessibility_tools(mcp)
        register_discovery_tools(mcp)
        register_code_execution_tools(mcp)
    
        # Register resources and prompts
        register_resources_and_prompts(mcp)
    
        log_info("All Canvas MCP tools registered successfully!")
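Each category registrar follows the decorator pattern shown in the handler above: the registrar receives the server instance and defines its tools inside the function body, so `@mcp.tool()` registers them on that instance. A toy illustration of the pattern (`FakeMCP` is a stand-in for FastMCP, not the real class):

```python
class FakeMCP:
    """Stand-in for FastMCP: records tools registered via the @tool() decorator."""

    def __init__(self):
        self.tools = {}

    def tool(self):
        def decorator(fn):
            self.tools[fn.__name__] = fn
            return fn
        return decorator

def register_peer_review_tools(mcp):
    # Defining the coroutine inside the registrar binds it to this server instance.
    @mcp.tool()
    async def generate_peer_review_report(course_identifier, assignment_id):
        return "report"

mcp = FakeMCP()
register_peer_review_tools(mcp)
```

After registration, the tool is discoverable by name on the server instance, which is what lets MCP clients list and call it.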