extract_peer_review_dataset

Export peer review data from Canvas assignments for analysis. Choose output formats, include analytics, anonymize student information, and save files locally.

Instructions

Export all peer review data in various formats for analysis.

Args:
    course_identifier: Canvas course code or ID
    assignment_id: Canvas assignment ID
    output_format: Output format (csv, json, xlsx)
    include_analytics: Include quality analytics in output
    anonymize_data: Anonymize student data
    save_locally: Save file locally
    filename: Custom filename (optional)

Input Schema

Name               Required  Description                          Default
course_identifier  Yes       Canvas course code or ID
assignment_id      Yes       Canvas assignment ID
output_format      No        Output format (csv, json, xlsx)      csv
include_analytics  No        Include quality analytics in output  true
anonymize_data     No        Anonymize student data               true
save_locally       No        Save file locally                    true
filename           No        Custom filename (optional)           None
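
For illustration, the sketch below shows how a Python MCP client could invoke this tool over stdio with these parameters. Only the tool name, argument names, and defaults come from the schema above; the launch command and the Canvas course and assignment values are hypothetical placeholders.

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Hypothetical launch command for the Canvas MCP server.
    server = StdioServerParameters(command="canvas-mcp-server")
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "extract_peer_review_dataset",
                arguments={
                    "course_identifier": "BADM_350",  # placeholder course code
                    "assignment_id": 12345,           # placeholder assignment ID
                    "output_format": "csv",           # csv (default), json, or xlsx
                    "include_analytics": True,
                    "anonymize_data": True,
                    "save_locally": True,
                },
            )
            print(result.content)

asyncio.run(main())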

Implementation Reference

  • The handler function that implements the core logic of the 'extract_peer_review_dataset' tool. It fetches peer review comments, optionally adds quality analytics, and exports the data as CSV or JSON, either saving the file locally or returning it as a string (a downstream-analysis sketch follows this list).
    @mcp.tool()
    @validate_params
    async def extract_peer_review_dataset(
        course_identifier: str | int,
        assignment_id: str | int,
        output_format: str = "csv",
        include_analytics: bool = True,
        anonymize_data: bool = True,
        save_locally: bool = True,
        filename: str | None = None
    ) -> str:
        """
        Export all peer review data in various formats for analysis.

        Args:
            course_identifier: Canvas course code or ID
            assignment_id: Canvas assignment ID
            output_format: Output format (csv, json, xlsx)
            include_analytics: Include quality analytics in output
            anonymize_data: Anonymize student data
            save_locally: Save file locally
            filename: Custom filename (optional)
        """
        try:
            course_id = await get_course_id(course_identifier)
            analyzer = PeerReviewCommentAnalyzer()

            # Get the comment data
            comments_data = await analyzer.get_peer_review_comments(
                course_id=course_id,
                assignment_id=int(assignment_id),
                include_reviewer_info=True,
                include_reviewee_info=True,
                include_submission_context=True,
                anonymize_students=anonymize_data
            )

            if "error" in comments_data:
                return f"Error getting comments data: {comments_data['error']}"

            # Generate filename if not provided
            if not filename:
                assignment_name = comments_data.get("assignment_info", {}).get("assignment_name", "assignment")
                safe_name = "".join(c for c in assignment_name if c.isalnum() or c in (' ', '-', '_')).rstrip()
                filename = f"peer_reviews_{safe_name}_{assignment_id}"

            # Include analytics if requested
            if include_analytics:
                analytics_data = await analyzer.analyze_peer_review_quality(
                    course_id=course_id,
                    assignment_id=int(assignment_id)
                )
                if "error" not in analytics_data:
                    comments_data["quality_analytics"] = analytics_data

            # Export based on format
            if output_format.lower() == "json":
                output_filename = f"{filename}.json"
                if save_locally:
                    with open(output_filename, 'w', encoding='utf-8') as f:
                        json.dump(comments_data, f, indent=2, ensure_ascii=False)
                    return f"Data exported to {output_filename}"
                else:
                    return json.dumps(comments_data, indent=2)

            elif output_format.lower() == "csv":
                output_filename = f"{filename}.csv"
                if save_locally:
                    with open(output_filename, 'w', newline='', encoding='utf-8') as f:
                        writer = csv.writer(f)
                        # Write header
                        writer.writerow([
                            'review_id', 'reviewer_id', 'reviewer_name',
                            'reviewee_id', 'reviewee_name', 'comment_text',
                            'word_count', 'character_count', 'timestamp'
                        ])
                        # Write data
                        for review in comments_data.get("peer_reviews", []):
                            reviewer = review.get("reviewer", {})
                            reviewee = review.get("reviewee", {})
                            content = review.get("review_content", {})
                            writer.writerow([
                                review.get("review_id", ""),
                                reviewer.get("student_id", ""),
                                reviewer.get("student_name", ""),
                                reviewee.get("student_id", ""),
                                reviewee.get("student_name", ""),
                                content.get("comment_text", ""),
                                content.get("word_count", 0),
                                content.get("character_count", 0),
                                content.get("timestamp", "")
                            ])
                    return f"Data exported to {output_filename}"
                else:
                    # Return CSV as string
                    csv_lines = []
                    csv_lines.append("review_id,reviewer_id,reviewer_name,reviewee_id,reviewee_name,comment_text,word_count,character_count,timestamp")
                    for review in comments_data.get("peer_reviews", []):
                        reviewer = review.get("reviewer", {})
                        reviewee = review.get("reviewee", {})
                        content = review.get("review_content", {})
                        # Escape quotes in comment text
                        comment_text = content.get("comment_text", "").replace('"', '""')
                        csv_lines.append(
                            f'"{review.get("review_id", "")}",'
                            f'"{reviewer.get("student_id", "")}",'
                            f'"{reviewer.get("student_name", "")}",'
                            f'"{reviewee.get("student_id", "")}",'
                            f'"{reviewee.get("student_name", "")}",'
                            f'"{comment_text}",'
                            f'{content.get("word_count", 0)},'
                            f'{content.get("character_count", 0)},'
                            f'"{content.get("timestamp", "")}"'
                        )
                    return "\n".join(csv_lines)

            else:
                return f"Error: Unsupported output format '{output_format}'. Supported formats: csv, json"

        except Exception as e:
            return f"Error in extract_peer_review_dataset: {str(e)}"
  • The registration call for the peer review comment tools module in the main server setup function, which registers the extract_peer_review_dataset tool among others.
    register_peer_review_comment_tools(mcp)
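
For orientation, the sketch below shows how a registration module of this shape is commonly structured with FastMCP from the MCP Python SDK. It is an assumption about the layout, not this server's actual module; only the names register_peer_review_comment_tools and extract_peer_review_dataset come from the reference above.

from mcp.server.fastmcp import FastMCP

def register_peer_review_comment_tools(mcp: FastMCP) -> None:
    """Attach the peer review comment tools to a shared FastMCP instance."""

    @mcp.tool()
    async def extract_peer_review_dataset(course_identifier: str | int, assignment_id: str | int) -> str:
        # Full handler body as shown in the implementation reference above.
        return ""

mcp = FastMCP("canvas-mcp")  # hypothetical server name
register_peer_review_comment_tools(mcp)

Once a CSV export has been written locally, it can be analyzed with standard tooling. The sketch below assumes only the column headers written by the handler above; the filename is a made-up example of the default peer_reviews_<name>_<id>.csv pattern.

import csv
from collections import defaultdict

def average_words_per_reviewer(path: str) -> dict[str, float]:
    """Mean comment word count per reviewer, computed from an exported CSV."""
    counts: dict[str, list[int]] = defaultdict(list)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[row["reviewer_name"]].append(int(row["word_count"] or 0))
    return {name: sum(values) / len(values) for name, values in counts.items()}

# Hypothetical filename following the tool's default naming pattern.
print(average_words_per_reviewer("peer_reviews_Final_Essay_12345.csv"))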

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/vishalsachdev/canvas-mcp'
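
The same lookup can be done from Python. A minimal sketch using only the standard library; it simply prints whatever JSON the directory API returns, since the response schema is not documented here.

import json
from urllib.request import urlopen

# Fetch this server's directory entry and pretty-print the raw JSON response.
with urlopen("https://glama.ai/api/mcp/v1/servers/vishalsachdev/canvas-mcp") as resp:
    print(json.dumps(json.load(resp), indent=2))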

If you have feedback or need assistance with the MCP directory API, please join our Discord server.