Glama

identify_problematic_peer_reviews

Flags peer reviews in a Canvas assignment that may require instructor attention, based on configurable criteria, to help maintain review quality.

Instructions

Flag reviews that may need instructor attention.

Args:
    course_identifier: Canvas course code or ID
    assignment_id: Canvas assignment ID
    criteria: JSON string of custom flagging criteria (optional)

Input Schema

Name               Required  Description                               Default
assignment_id      Yes       Canvas assignment ID                      —
course_identifier  Yes       Canvas course code or ID                  —
criteria           No        JSON string of custom flagging criteria   None
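The optional criteria argument is a JSON string whose keys override the analyzer's defaults (min_word_count, generic_phrases, max_quality_score, as seen in the implementation below); keys you omit keep their default values. A minimal sketch of how such overrides merge:

```python
import json

# Defaults mirrored from PeerReviewCommentAnalyzer.identify_problematic_peer_reviews
default_criteria = {
    "min_word_count": 10,
    "generic_phrases": ["good job", "nice work", "looks good"],
    "max_quality_score": 2.0,
}

# A caller-supplied criteria string; omitted keys keep their defaults
criteria = '{"min_word_count": 25, "max_quality_score": 3.0}'

parsed = json.loads(criteria)
default_criteria.update(parsed)

print(default_criteria["min_word_count"])   # 25
print(default_criteria["generic_phrases"])  # unchanged default list
```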

Implementation Reference

  • MCP tool handler function for 'identify_problematic_peer_reviews', decorated with @mcp.tool() and @validate_params. Handles input validation, Canvas API calls via the analyzer, and returns JSON results or error strings.

    async def identify_problematic_peer_reviews(
        course_identifier: str | int,
        assignment_id: str | int,
        criteria: str | None = None
    ) -> str:
        """
        Flag reviews that may need instructor attention.

        Args:
            course_identifier: Canvas course code or ID
            assignment_id: Canvas assignment ID
            criteria: JSON string of custom flagging criteria (optional)
        """
        try:
            course_id = await get_course_id(course_identifier)
            analyzer = PeerReviewCommentAnalyzer()

            # Parse criteria if provided
            parsed_criteria = None
            if criteria:
                try:
                    parsed_criteria = json.loads(criteria)
                except json.JSONDecodeError:
                    return "Error: criteria must be valid JSON"

            result = await analyzer.identify_problematic_peer_reviews(
                course_id=course_id,
                assignment_id=int(assignment_id),
                criteria=parsed_criteria
            )

            if "error" in result:
                return f"Error identifying problematic reviews: {result['error']}"

            return json.dumps(result, indent=2)
        except Exception as e:
            return f"Error in identify_problematic_peer_reviews: {str(e)}"
  • Core implementation of the analysis logic in PeerReviewCommentAnalyzer.identify_problematic_peer_reviews. Fetches peer reviews, applies flagging criteria (word count, generic phrases, quality score, harsh language), summarizes the flags, and returns detailed results.

    async def identify_problematic_peer_reviews(
        self,
        course_id: int,
        assignment_id: int,
        criteria: dict[str, Any] | None = None
    ) -> dict[str, Any]:
        """
        Flag reviews that may need instructor attention.

        Args:
            course_id: Canvas course ID
            assignment_id: Canvas assignment ID
            criteria: Custom flagging criteria (optional)

        Returns:
            Dict containing flagged reviews and reasons
        """
        try:
            # Default criteria
            default_criteria = {
                "min_word_count": 10,
                "generic_phrases": ["good job", "nice work", "looks good"],
                "max_quality_score": 2.0
            }
            if criteria:
                default_criteria.update(criteria)

            # Get comments for analysis
            comments_data = await self.get_peer_review_comments(
                course_id, assignment_id, anonymize_students=True
            )
            if "error" in comments_data:
                return comments_data

            reviews = comments_data.get("peer_reviews", [])
            flagged_reviews = []

            for review in reviews:
                content = review.get("review_content", {})
                comment_text = content.get("comment_text", "")
                word_count = content.get("word_count", 0)
                flags = []

                # Check word count
                if word_count < default_criteria["min_word_count"]:
                    flags.append("too_short")

                # Check for generic phrases
                text_lower = comment_text.lower()
                for phrase in default_criteria["generic_phrases"]:
                    if phrase in text_lower:
                        flags.append("generic_language")
                        break

                # Check quality score
                quality_score = self._calculate_quality_score(comment_text)
                if quality_score <= default_criteria["max_quality_score"]:
                    flags.append("low_quality")

                # Check for copy-paste patterns (identical comments)
                # This would require comparing against all other comments

                # Check for potentially inappropriate content
                if any(word in text_lower for word in self.quality_keywords['harsh']):
                    flags.append("potentially_harsh")

                if flags:
                    flagged_reviews.append({
                        "review_id": review.get("review_id"),
                        "reviewer_id": review.get("reviewer", {}).get("anonymous_id", "Unknown"),
                        "reviewee_id": review.get("reviewee", {}).get("anonymous_id", "Unknown"),
                        "flags": flags,
                        "comment_preview": comment_text[:100] + "..." if len(comment_text) > 100 else comment_text,
                        "word_count": word_count,
                        "quality_score": round(quality_score, 1)
                    })

            # Categorize flags
            flag_summary = Counter()
            for review in flagged_reviews:
                for flag in review["flags"]:
                    flag_summary[flag] += 1

            result = {
                "total_reviews_analyzed": len(reviews),
                "total_flagged": len(flagged_reviews),
                "flag_summary": dict(flag_summary),
                "flagged_reviews": flagged_reviews,
                "criteria_used": default_criteria
            }

            return result

        except Exception as e:
            return {"error": f"Failed to identify problematic reviews: {str(e)}"}
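The _calculate_quality_score helper and the quality_keywords table are internal to the analyzer and not shown on this page. A self-contained sketch of the per-review flagging checks, using a hypothetical stand-in scorer and an assumed example list of harsh words:

```python
# Sketch of the flagging checks above; calculate_quality_score and HARSH_WORDS
# are hypothetical stand-ins for the analyzer's internals.
GENERIC_PHRASES = ["good job", "nice work", "looks good"]
HARSH_WORDS = ["terrible", "awful", "stupid"]  # assumed example list
MIN_WORD_COUNT = 10
MAX_QUALITY_SCORE = 2.0


def calculate_quality_score(text: str) -> float:
    # Stand-in heuristic: reward longer, sentence-structured comments
    return min(5.0, len(text.split()) / 10 + text.count(".") * 0.5)


def flag_review(comment_text: str) -> list[str]:
    flags = []
    text_lower = comment_text.lower()
    if len(comment_text.split()) < MIN_WORD_COUNT:
        flags.append("too_short")
    if any(phrase in text_lower for phrase in GENERIC_PHRASES):
        flags.append("generic_language")
    if calculate_quality_score(comment_text) <= MAX_QUALITY_SCORE:
        flags.append("low_quality")
    if any(word in text_lower for word in HARSH_WORDS):
        flags.append("potentially_harsh")
    return flags


print(flag_review("Good job!"))  # ['too_short', 'generic_language', 'low_quality']
```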
  • Tool registration via the @mcp.tool() decorator (with @validate_params) applied to the handler function; the function body is identical to the handler shown above.

    @mcp.tool()
    @validate_params
    async def identify_problematic_peer_reviews(
        course_identifier: str | int,
        assignment_id: str | int,
        criteria: str | None = None
    ) -> str:
        ...

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/vishalsachdev/canvas-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.