identify_problematic_peer_reviews

Flag peer reviews in Canvas assignments that may require instructor attention based on customizable criteria.

Instructions

    Flag reviews that may need instructor attention.

    Args:
        course_identifier: Canvas course code or ID
        assignment_id: Canvas assignment ID
        criteria: JSON string of custom flagging criteria (optional)
    

Input Schema

Name               Required  Description  Default
course_identifier  Yes
assignment_id      Yes
criteria           No
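
The criteria string is parsed with json.loads and merged over the default criteria shown in the core helper below, so any subset of those keys may be supplied. A plausible value, using the keys from the defaults in the implementation (an illustration, not an exhaustive schema):

    {
      "min_word_count": 15,
      "generic_phrases": ["good job", "nice work", "well done"],
      "max_quality_score": 2.5
    }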

Output Schema

Name    Required  Description  Default
result  Yes

Implementation Reference

  • MCP tool handler: resolves the course ID, instantiates the analyzer, parses the optional JSON criteria, calls the core analyzer method, and returns a JSON result or an error string.
    @mcp.tool()
    @validate_params
    async def identify_problematic_peer_reviews(
        course_identifier: str | int,
        assignment_id: str | int,
        criteria: str | None = None
    ) -> str:
        """
        Flag reviews that may need instructor attention.
    
        Args:
            course_identifier: Canvas course code or ID
            assignment_id: Canvas assignment ID
            criteria: JSON string of custom flagging criteria (optional)
        """
        try:
            course_id = await get_course_id(course_identifier)
            analyzer = PeerReviewCommentAnalyzer()
    
            # Parse criteria if provided
            parsed_criteria = None
            if criteria:
                try:
                    parsed_criteria = json.loads(criteria)
                except json.JSONDecodeError:
                    return "Error: criteria must be valid JSON"
    
            result = await analyzer.identify_problematic_peer_reviews(
                course_id=course_id,
                assignment_id=int(assignment_id),
                criteria=parsed_criteria
            )
    
            if "error" in result:
                return f"Error identifying problematic reviews: {result['error']}"
    
            return json.dumps(result, indent=2)
    
        except Exception as e:
            return f"Error in identify_problematic_peer_reviews: {str(e)}"
  • Core helper method on the PeerReviewCommentAnalyzer class: implements the flagging logic using default or custom criteria for short comments, generic language, low quality scores, and harsh language; returns a summary and details of the flagged reviews.
    async def identify_problematic_peer_reviews(
        self,
        course_id: int,
        assignment_id: int,
        criteria: dict[str, Any] | None = None
    ) -> dict[str, Any]:
        """
        Flag reviews that may need instructor attention.
    
        Args:
            course_id: Canvas course ID
            assignment_id: Canvas assignment ID
            criteria: Custom flagging criteria (optional)
    
        Returns:
            Dict containing flagged reviews and reasons
        """
        try:
            # Default criteria
            default_criteria = {
                "min_word_count": 10,
                "generic_phrases": ["good job", "nice work", "looks good"],
                "max_quality_score": 2.0
            }
    
            if criteria:
                default_criteria.update(criteria)
    
            # Get comments for analysis
            comments_data = await self.get_peer_review_comments(
                course_id, assignment_id, anonymize_students=True
            )
    
            if "error" in comments_data:
                return comments_data
    
            reviews = comments_data.get("peer_reviews", [])
            flagged_reviews = []
    
            for review in reviews:
                content = review.get("review_content", {})
                comment_text = content.get("comment_text", "")
                word_count = content.get("word_count", 0)
    
                flags = []
    
                # Check word count
                if word_count < default_criteria["min_word_count"]:
                    flags.append("too_short")
    
                # Check for generic phrases
                text_lower = comment_text.lower()
                for phrase in default_criteria["generic_phrases"]:
                    if phrase in text_lower:
                        flags.append("generic_language")
                        break
    
                # Check quality score
                quality_score = self._calculate_quality_score(comment_text)
                if quality_score <= default_criteria["max_quality_score"]:
                    flags.append("low_quality")
    
                # Check for copy-paste patterns (identical comments).
                # This would require comparing against all other comments;
                # one possible approach is sketched after this method.
    
                # Check for potentially inappropriate content
                if any(word in text_lower for word in self.quality_keywords['harsh']):
                    flags.append("potentially_harsh")
    
                if flags:
                    flagged_reviews.append({
                        "review_id": review.get("review_id"),
                        "reviewer_id": review.get("reviewer", {}).get("anonymous_id", "Unknown"),
                        "reviewee_id": review.get("reviewee", {}).get("anonymous_id", "Unknown"),
                        "flags": flags,
                        "comment_preview": comment_text[:100] + "..." if len(comment_text) > 100 else comment_text,
                        "word_count": word_count,
                        "quality_score": round(quality_score, 1)
                    })
    
            # Categorize flags
            flag_summary = Counter()
            for review in flagged_reviews:
                for flag in review["flags"]:
                    flag_summary[flag] += 1
    
            result = {
                "total_reviews_analyzed": len(reviews),
                "total_flagged": len(flagged_reviews),
                "flag_summary": dict(flag_summary),
                "flagged_reviews": flagged_reviews,
                "criteria_used": default_criteria
            }
    
            return result
    
        except Exception as e:
            return {"error": f"Failed to identify problematic reviews: {str(e)}"}
  • Registration call within register_all_tools function that invokes the peer review comment tools registration, which defines and registers the identify_problematic_peer_reviews tool via @mcp.tool() decorator.
    register_peer_review_comment_tools(mcp)
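
The registration module itself is not shown. Under the pattern the bullet describes, its shape would be roughly as follows; both function names appear in the source, but the body is an assumed sketch:

    # Sketch: each tool is defined inline and registered by the @mcp.tool() decorator.
    def register_peer_review_comment_tools(mcp):
        @mcp.tool()
        @validate_params
        async def identify_problematic_peer_reviews(
            course_identifier: str | int,
            assignment_id: str | int,
            criteria: str | None = None
        ) -> str:
            ...  # handler body as shown above

    def register_all_tools(mcp):
        register_peer_review_comment_tools(mcp)
        # ... other tool-group registrations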
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool 'flags' reviews, implying a read-only analysis, but doesn't clarify whether it modifies data, requires specific permissions, has rate limits, or what the output entails. This is inadequate for a tool that likely interacts with peer review data, leaving behavioral traits unclear.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and well-structured: a purpose statement followed by parameter explanations in a clear format. It avoids unnecessary words, though the parameter details could be more informative. The front-loading of the purpose is effective, making it easy to scan.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has an output schema (which reduces the need to describe return values) but no annotations and low schema coverage, the description is moderately complete. It covers the basic purpose and parameters but lacks behavioral context and usage guidelines, making it adequate but with clear gaps for a tool that likely involves data analysis.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate by explaining parameters. It lists args with brief notes (e.g., 'Canvas course code or ID'), but this adds minimal semantic value beyond the schema's titles. For example, 'criteria' is described as 'JSON string of custom flagging criteria (optional)' without detailing format or examples, failing to fully address the coverage gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Flag reviews that may need instructor attention.' It specifies the action (flag) and resource (reviews), though it doesn't explicitly differentiate from sibling tools like 'analyze_peer_review_quality' or 'get_peer_review_followup_list' that might have overlapping functions. The purpose is clear but lacks sibling differentiation for a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, context, or exclusions, such as how it differs from 'analyze_peer_review_quality' or other peer-review-related tools in the sibling list. This leaves the agent without usage direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
