
canvas_get_course_grades

Retrieve course grades by providing the course ID, enabling easy access to student performance data within the Canvas Learning Management System.

Instructions

Get grades for a course

Input Schema

| Name      | Required | Description      | Default |
|-----------|----------|------------------|---------|
| course_id | Yes      | ID of the course | —       |
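Per the input schema, a call needs only a numeric `course_id`. A minimal argument payload and a schema-style runtime check are sketched below; the course ID is a placeholder, and `validateArgs` is a hypothetical helper, not part of the server:

```typescript
// Hypothetical tool-call arguments for canvas_get_course_grades.
// The course ID below is a placeholder, not a real course.
const args = { course_id: 12345 };

// Mirror the schema's requirement: course_id must be present and numeric.
function validateArgs(input: Record<string, unknown>): boolean {
  return typeof input.course_id === "number";
}

console.log(validateArgs(args)); // true
```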

Implementation Reference

  • src/index.ts:389-399 (registration)
    Registration of the canvas_get_course_grades tool in the TOOLS array, including name, description, and input schema requiring a course_id.
    {
      name: "canvas_get_course_grades",
      description: "Get grades for a course",
      inputSchema: {
        type: "object",
        properties: {
          course_id: { type: "number", description: "ID of the course" }
        },
        required: ["course_id"]
      }
    },
  • Execution handler for canvas_get_course_grades tool within the CallToolRequestSchema switch statement. Extracts course_id from arguments and calls CanvasClient.getCourseGrades().
    case "canvas_get_course_grades": {
      const { course_id } = args as { course_id: number };
      if (!course_id) throw new Error("Missing required field: course_id");
      
      const grades = await this.client.getCourseGrades(course_id);
      return {
        content: [{ type: "text", text: JSON.stringify(grades, null, 2) }]
      };
    }
  • Core implementation of getCourseGrades in CanvasClient class. Makes API request to /courses/{courseId}/enrollments?include[]=grades&include[]=observed_users to fetch grades data.
    async getCourseGrades(courseId: number): Promise<CanvasEnrollment[]> {
      const response = await this.client.get(`/courses/${courseId}/enrollments`, {
        params: {
          include: ['grades', 'observed_users']
        }
      });
      return response.data;
    }
    
  • Type definitions for CanvasEnrollment (containing grades field) and CanvasGrades interfaces, defining the structure of the output data returned by getCourseGrades.
    export interface CanvasEnrollment {
      readonly id: EnrollmentId;
      readonly user_id: UserId;
      readonly course_id: CourseId;
      readonly type: CanvasEnrollmentType;
      readonly role: string;
      readonly enrollment_state: CanvasEnrollmentState;
      readonly grades?: CanvasGrades;
      readonly user?: CanvasUser;
      readonly observed_users?: CanvasUser[];
    }
    
    export type CanvasEnrollmentType =
      | 'StudentEnrollment'
      | 'TeacherEnrollment'
      | 'TaEnrollment'
      | 'DesignerEnrollment'
      | 'ObserverEnrollment';
    
    export type CanvasEnrollmentState =
      | 'active'
      | 'invited'
      | 'inactive'
      | 'completed'
      | 'rejected';
    
    export interface CanvasGrades {
      readonly current_score: number | null;
      readonly final_score: number | null;
      readonly current_grade: string | null;
      readonly final_grade: string | null;
      readonly override_score?: number | null;
      readonly override_grade?: string | null;
    }
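Taken together, the handler and client method form a short pipeline: validate `course_id`, request enrollments with grades included, and serialize the result. The sketch below illustrates only the shape of the enrollments request that `getCourseGrades` issues; `buildGradesRequest` is a hypothetical helper, not the client's actual HTTP code:

```typescript
// Sketch of the enrollments request URL that getCourseGrades produces.
// buildGradesRequest is illustrative only, not part of CanvasClient.
function buildGradesRequest(courseId: number): string {
  const params = new URLSearchParams();
  for (const inc of ["grades", "observed_users"]) {
    params.append("include[]", inc);
  }
  return `/courses/${courseId}/enrollments?${params.toString()}`;
}

console.log(buildGradesRequest(101));
// "/courses/101/enrollments?include%5B%5D=grades&include%5B%5D=observed_users"
```

Note that `URLSearchParams` percent-encodes the brackets in `include[]`; Canvas accepts either form.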
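Given the interfaces above, a caller can narrow the returned enrollments to students and extract current scores. The sketch below assumes the response matches `CanvasEnrollment[]`; the local interfaces are trimmed copies of the doc's types, and the sample data is fabricated for illustration:

```typescript
// Trimmed local copies of the relevant fields from CanvasEnrollment/CanvasGrades.
interface Grades {
  current_score: number | null;
  current_grade: string | null;
}
interface Enrollment {
  user_id: number;
  type: string; // e.g. "StudentEnrollment"
  grades?: Grades;
}

// Collect current scores for student enrollments, skipping missing/null grades.
function studentScores(enrollments: Enrollment[]): Map<number, number> {
  const scores = new Map<number, number>();
  for (const e of enrollments) {
    if (e.type === "StudentEnrollment" && e.grades?.current_score != null) {
      scores.set(e.user_id, e.grades.current_score);
    }
  }
  return scores;
}

// Fabricated sample data for illustration only.
const sample: Enrollment[] = [
  { user_id: 1, type: "StudentEnrollment", grades: { current_score: 91.5, current_grade: "A-" } },
  { user_id: 2, type: "TeacherEnrollment" },
  { user_id: 3, type: "StudentEnrollment", grades: { current_score: null, current_grade: null } },
];

console.log(studentScores(sample)); // only user 1 has a non-null score
```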
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states 'Get grades for a course', implying a read-only operation, but does not specify whether it requires authentication, returns paginated results, includes historical data, or has rate limits. For a tool with zero annotation coverage, this is a significant gap, as critical behavioral traits are left undocumented.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description 'Get grades for a course' is a single, efficient sentence that is front-loaded with the core action and wastes no words. However, it offers no additional context that could improve clarity without sacrificing brevity, which prevents a perfect score.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a grades retrieval tool, no annotations, and no output schema, the description is incomplete. It does not explain what the return values include (e.g., grade data format, student information, timestamps) or address potential behavioral aspects like error handling. For a tool that likely returns structured data, this leaves significant gaps in understanding its full context and usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the parameter 'course_id' documented as 'ID of the course'. The description does not add any meaning beyond this, such as format examples (e.g., numeric ID) or constraints. Since the schema already provides adequate parameter documentation, the baseline score of 3 is appropriate, as the description neither compensates nor detracts from the schema's clarity.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Get grades for a course' clearly states the verb ('Get') and resource ('grades for a course'), making the purpose understandable. However, it lacks specificity about what 'grades' entails (e.g., all grades, aggregated, per student) and does not differentiate from the sibling tool 'canvas_get_user_grades', which might retrieve grades for a specific user rather than a course. This vagueness prevents a higher score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, such as 'canvas_get_user_grades' for user-specific grades or 'canvas_get_submission' for detailed submission data. There is no mention of prerequisites, context, or exclusions, leaving the agent to infer usage based on the tool name alone. This lack of explicit guidance limits effectiveness in tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
