
tool_get_answer_group_detail

Inspect answer group details including members, crops, and graded status to prepare for batch grading on Gradescope.

Instructions

Get detail for one answer group: members, crops, graded status.

Use this to inspect what answers are in a group before batch-grading.

Args:
    course_id: The Gradescope course ID.
    question_id: The question ID.
    group_id: The answer group ID.
    output_format: "markdown" or "json" for structured output.
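
The arguments map directly onto the tool call. A minimal sketch of a well-formed argument payload (the ID values here are placeholders, not real Gradescope identifiers):

```python
# Hypothetical argument payload for tool_get_answer_group_detail.
args = {
    "course_id": "123456",
    "question_id": "654321",
    "group_id": "42",
    "output_format": "json",  # or "markdown" (the default)
}

# The tool rejects calls with any missing required ID,
# so a client can pre-validate before calling.
required = ("course_id", "question_id", "group_id")
missing = [k for k in required if not args.get(k)]
assert not missing
```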

Input Schema

| Name | Required | Description | Default |
|---|---|---|---|
| course_id | Yes | | |
| question_id | Yes | | |
| group_id | Yes | | |
| output_format | No | | markdown |

Output Schema

| Name | Required | Description | Default |
|---|---|---|---|
| result | Yes | | |
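
The output schema only guarantees a top-level `result` string. A hedged sketch of what a `json`-format response might look like; every field inside `result` is an illustrative assumption, not a schema guarantee:

```python
import json

# Illustrative response envelope; only the "result" key is schema-backed.
response = {
    "result": json.dumps({
        "group_id": "42",            # assumed field
        "title": "Correct answers",  # assumed field
        "graded": False,             # assumed field
        "member_count": 17,          # assumed field
    })
}

# A client would parse the result string back into structured data.
detail = json.loads(response["result"])
```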

Implementation Reference

  • Implementation of the answer group detail logic.

```python
def get_answer_group_detail(
    course_id: str,
    question_id: str,
    group_id: str,
    output_format: str = "markdown",
) -> str:
    """Get detailed information about a specific answer group.

    Shows the group's title, member submissions, graded status, and
    representative crop images. Use this to understand what answers
    are in a group before batch-grading.

    Args:
        course_id: The Gradescope course ID.
        question_id: The question ID.
        group_id: The answer group ID (from get_answer_groups).
        output_format: "markdown" (default) or "json" for structured output.
    """
    if not course_id or not question_id or not group_id:
        return "Error: course_id, question_id, and group_id are required."

    try:
        data = _fetch_answer_groups_json(course_id, question_id)
    except AuthError as e:
        return f"Authentication error: {e}"
    except ValueError as e:
        return f"Error: {e}"
    except Exception as e:
        return f"Error fetching answer group detail: {e}"

    groups = data.get("groups", [])
    submissions = data.get("submissions", [])

    # Find the target group
    target_group = None
    for g in groups:
        if str(g["id"]) == str(group_id):
```
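
The implementation excerpt above cuts off mid-lookup. A hedged, self-contained sketch of how that lookup plausibly continues; only `groups`, `target_group`, and the ID comparison appear in the excerpt, so the not-found message and the result formatting are assumptions:

```python
# Stand-in data; in the real tool this comes from _fetch_answer_groups_json.
group_id = "42"
groups = [{"id": 41, "title": "Blank"}, {"id": 42, "title": "Correct answers"}]

# Find the target group (mirrors the excerpt's string-normalized comparison).
target_group = None
for g in groups:
    if str(g["id"]) == str(group_id):
        target_group = g
        break

# Assumed continuation: error string on miss, formatted detail on hit.
if target_group is None:
    result = f"Error: answer group {group_id} not found."
else:
    result = f"Group {target_group['id']}: {target_group['title']}"
```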
  • Registration and MCP tool definition for get_answer_group_detail.

```python
@mcp.tool()
def tool_get_answer_group_detail(
    course_id: str,
    question_id: str,
    group_id: str,
    output_format: str = "markdown",
) -> str:
    """Get detail for one answer group: members, crops, graded status.

    Use this to inspect what answers are in a group before batch-grading.

    Args:
        course_id: The Gradescope course ID.
        question_id: The question ID.
        group_id: The answer group ID.
        output_format: "markdown" or "json" for structured output.
    """
    return get_answer_group_detail(course_id, question_id, group_id, output_format)
```
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool is for inspection ('inspect what answers are in a group'), implying it's read-only, but doesn't explicitly state this or cover other behavioral aspects like authentication needs, rate limits, or error handling. It adds some context about the output format but lacks comprehensive behavioral details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
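
One way to close this gap without lengthening the description is the MCP tool-annotation mechanism. A hedged sketch of the annotation values an inspection-only tool like this could declare; whether this server actually sets them is not shown on this page:

```python
# MCP behavioral annotation hints, per the MCP tool specification.
# Values below are what a read-only Gradescope inspection tool
# would plausibly declare; they are illustrative, not this server's.
annotations = {
    "readOnlyHint": True,      # fetches data, modifies nothing
    "destructiveHint": False,  # no irreversible effects
    "idempotentHint": True,    # repeated calls return the same detail
    "openWorldHint": True,     # talks to an external service (Gradescope)
}
```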

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with a clear purpose statement, usage guideline, and parameter list in just a few sentences. Every sentence adds value without redundancy, making it easy to parse and front-loaded with essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (4 parameters, no annotations, but with an output schema), the description is reasonably complete. It covers purpose, usage, and parameters adequately. The presence of an output schema means the description doesn't need to explain return values, but it could benefit from more behavioral context (e.g., read-only confirmation).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description lists all four parameters with brief explanations, adding meaning beyond the schema (which has 0% description coverage). It clarifies that 'output_format' accepts 'markdown' or 'json' for structured output, which is not evident from the schema alone. However, it doesn't provide deeper semantics for IDs (e.g., format or sourcing), leaving some gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
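
Group IDs are sourced from the sibling listing tool, as the docstring's "(from get_answer_groups)" note suggests. A hedged sketch of that chained workflow; the listing payload here is made up for illustration:

```python
# Step 1: a get_answer_groups-style listing (illustrative data).
listing = {
    "groups": [
        {"id": 41, "title": "Blank"},
        {"id": 42, "title": "Correct"},
    ]
}

# Step 2: pick a group_id from the listing and pass it, as a string,
# to tool_get_answer_group_detail.
group_ids = [str(g["id"]) for g in listing["groups"]]
chosen = group_ids[0]
```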

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get detail for one answer group') and the resources involved ('members, crops, graded status'). It distinguishes from sibling tools like 'tool_get_answer_groups' (which likely lists groups) and 'tool_grade_answer_group' (which performs grading), making the purpose unambiguous and well-differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'Use this to inspect what answers are in a group before batch-grading.' This provides clear context and distinguishes it from grading tools like 'tool_grade_answer_group', offering practical guidance for the agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
