School MCP
Server Quality Checklist
- Disambiguation 4/5
While there are numerous grade-related tools, each serves a distinct purpose ranging from comprehensive dumps (ask_about_grades) to specific course deep-dives (get_grade_details), projections (simulate_grade_scenario), and filtered views (get_failing_or_low_grades). An agent can distinguish between them based on query specificity.
- Naming Consistency 4/5
Tools overwhelmingly follow a snake_case verb_noun pattern. The three exceptions (ask_about_grades, calculate_grade_needed, simulate_grade_scenario) use action verbs appropriate to their computational or conversational nature rather than simple retrieval, maintaining overall readability.
- Tool Count 4/5
Nineteen tools is slightly above the ideal range but justified by the server's comprehensive coverage of academic analytics. The count reflects necessary granularity for grade analysis without excessive redundancy, though it approaches the upper limit of manageable scope.
- Completeness 3/5
The server covers grade analysis extensively but has notable gaps: while it can list documents, messages, and report cards, it cannot retrieve their actual content. Additionally, there are no tools for submitting assignments or replying to communications, creating dead ends in workflows.
Average 3.8/5 across 18 of 19 tools scored.
See the tool scores section below for per-tool breakdowns.
This repository includes a README.md file.
No LICENSE file is present. Add one by following GitHub's guide; without a LICENSE, the server cannot be installed.
Latest release: v1.0.0
No tool usage detected in the last 30 days.
Add a glama.json file to provide metadata about your server.
- This server provides 19 tools.
No known security issues or vulnerabilities reported.
Tool Scores
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully clarifies the categories of data returned (holidays, early releases, school events), but lacks information about error handling, authentication requirements, or behavior when no events exist for the specified month.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
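Several Behavior scores in this report note that no annotations were provided. For reference, the kind of disclosure annotations can carry is sketched below using the MCP tool-annotation hint fields (readOnlyHint, destructiveHint, idempotentHint, openWorldHint); the tool name get_calendar and the description text are assumptions reconstructed from the fragments quoted in this review, not confirmed from the server's source.

```python
# Hypothetical annotations for the calendar tool reviewed here. The
# hint field names follow the MCP tool-annotations spec; the tool name
# and description text are assumptions, since the report does not
# reproduce them verbatim.
calendar_tool = {
    "name": "get_calendar",
    "description": (
        "Get school calendar events for a given month. "
        "Returns holidays, early releases, and school events."
    ),
    "annotations": {
        "readOnlyHint": True,       # retrieval only, no side effects
        "destructiveHint": False,   # never deletes or modifies data
        "idempotentHint": True,     # same month -> same result
        "openWorldHint": True,      # reads from an external school system
    },
}

def behavior_disclosed(tool: dict) -> bool:
    """An agent can treat a tool as safe only if the hints are explicit."""
    a = tool.get("annotations", {})
    return a.get("readOnlyHint") is True and a.get("destructiveHint") is False
```

With hints like these present, the description no longer carries the full disclosure burden on its own.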
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with no redundant information. The first sentence establishes the operation and scope, while the second clarifies the return value types, placing the most critical information first.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (two primitive parameters) and lack of output schema, the description adequately compensates by detailing the specific types of events returned. However, it could improve by noting error conditions or empty result behaviors.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for both parameters ('Month number (1-12)' and '4-digit year'), establishing a baseline score. The description references the monthly scope ('for a given month') but does not add semantic meaning, validation rules, or examples beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
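The two quoted parameter descriptions can be made structurally enforceable rather than prose-only. A minimal sketch, assuming JSON-Schema-style minimum/maximum bounds that the actual server may not declare:

```python
# Reconstructed input schema for the calendar tool, based on the two
# parameter descriptions quoted in the review ("Month number (1-12)",
# "4-digit year"). The minimum/maximum bounds are assumptions showing
# how the prose ranges could be enforced structurally.
calendar_input_schema = {
    "type": "object",
    "properties": {
        "month": {
            "type": "integer",
            "description": "Month number (1-12)",
            "minimum": 1,
            "maximum": 12,
        },
        "year": {
            "type": "integer",
            "description": "4-digit year",
            "minimum": 1000,
            "maximum": 9999,
        },
    },
    "required": ["month", "year"],
}

def validate(args: dict) -> bool:
    """Minimal range check mirroring the schema bounds above."""
    props = calendar_input_schema["properties"]
    return all(
        isinstance(args.get(k), int)
        and props[k]["minimum"] <= args[k] <= props[k]["maximum"]
        for k in calendar_input_schema["required"]
    )
```

Bounds declared in the schema let an agent reject month=13 before the call instead of learning about it from an error.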
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('school calendar events'), and clarifies the temporal scope ('for a given month'). It implicitly distinguishes from grade-related siblings by specifying event types like 'holidays' and 'early releases,' though it does not explicitly contrast with the sibling tool `get_schedule`.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives such as `get_schedule` (which may return daily class schedules versus monthly calendar events). There are no stated prerequisites or conditions that would help an agent decide between this and similar tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
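The missing guidance could be supplied with a single sentence. A hypothetical revision of the calendar tool's description (the get_schedule contrast and the empty-result note are illustrative wording, not the server's actual text):

```python
# Hypothetical revision adding the explicit "use X instead of Y when Z"
# guidance this dimension rewards. All wording here is illustrative.
revised_description = (
    "Get school calendar events (holidays, early releases, school events) "
    "for a given month. Use this for month-level, school-wide events; "
    "use get_schedule instead for a student's daily class periods. "
    "Returns an empty list when the month has no events."
)
```

One clause of contrast ("use get_schedule instead...") is usually enough to move this dimension from a 2 to a 4.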
- Behavior 3/5
With no annotations provided, the description carries the full burden. It discloses the output structure ('Shows weight, points earned vs possible, and percentage'), which helps compensate for the missing output schema. However, it omits safety indicators (read-only nature), error handling, or data freshness details.
Conciseness 4/5
Three well-structured sentences: purpose definition, output specification, and use-case guidance. Each sentence earns its place with no redundancy. Front-loaded with the core action and resource.
Completeness 3/5
Given the tool's low complexity (single string parameter, no output schema), the description adequately compensates by detailing the expected return structure (weights, percentages). It meets minimum viability but could strengthen completeness by noting authentication requirements or error conditions.
Parameters 3/5
The input schema has 100% description coverage for its single parameter (courseName). The description does not add parameter-specific guidance beyond the schema, meeting the baseline expectation for high-coverage schemas.
Purpose 4/5
The description clearly defines the action ('Get'), resource ('grade breakdown by assignment category'), and scope ('for a specific course'). It effectively distinguishes from siblings like get_grade_summary or get_gradebook by specifying the categorical aggregation aspect (Tests, Homework, Quizzes).
Usage Guidelines 3/5
The description provides an implied use case ('Useful for finding which category is dragging down a grade'), suggesting diagnostic scenarios. However, it lacks explicit when-not-to-use guidance or comparisons to alternatives like get_grade_details for individual assignments.
- Behavior 3/5
With no annotations provided, the description carries the full burden. It compensates partially by disclosing the return structure (message list with sender, subject, date, read status) since there is no output schema. However, it lacks safety disclosures, pagination behavior, or error handling details.
Conciseness 5/5
Two well-structured sentences with zero waste: first establishes purpose, second describes return payload. Efficiently front-loaded with no redundant information.
Completeness 4/5
For a simple single-parameter tool with no annotations or output schema, the description adequately compensates by detailing the returned message fields. Missing only minor details like pagination or empty result behavior.
Parameters 3/5
Schema coverage is 100% (the unreadOnly parameter is well-documented in the schema itself). The description does not mention the filtering capability at all, but given the schema fully documents the single parameter, the baseline score of 3 is appropriate.
Purpose 4/5
The description clearly states the verb (Get) and resource (messages from teachers and school administration). It effectively distinguishes this tool from academic-focused siblings like get_grades or get_attendance by specifying the communication domain.
Usage Guidelines 2/5
No explicit guidance on when to use this tool versus alternatives. While the domain is distinct from grade/attendance siblings, the description does not clarify boundaries with ask_about_grades or specify prerequisites like authentication requirements.
- Behavior 3/5
With no annotations provided, the description carries the full disclosure burden. It valuably clarifies the return format ('document names and metadata, not binary content'), setting expectations about data completeness. However, it lacks information on safety characteristics, potential rate limits, or pagination behavior for large document lists.
Conciseness 5/5
The description consists of two efficiently structured sentences. The first establishes purpose and scope with examples; the second clarifies output boundaries. There is no redundant or filler text, and the critical limitation ('not binary content') is prominently placed.
Completeness 4/5
Given zero parameters and no output schema, the tool is structurally simple. The description adequately compensates for the missing output schema by specifying the return values (names/metadata) and limitations (no binary content). For a basic listing operation, this level of documentation is sufficient.
Parameters 4/5
The input schema contains zero parameters. According to the scoring rubric, this establishes a baseline of 4. The description correctly implies no filtering or input parameters are required by the standalone phrasing 'Get the list'.
Purpose 4/5
The description clearly states the verb ('Get'), resource ('documents'), and scope ('available to the student'), with helpful examples ('forms, letters'). However, it does not explicitly distinguish when to use this versus similar student information tools like 'get_student_info' or 'get_messages'.
Usage Guidelines 2/5
No guidance is provided on when to use this tool versus alternatives. The description does not mention prerequisites, filtering capabilities (none possible given zero parameters), or contextual triggers for selecting this over other student data retrieval tools.
- Behavior 3/5
With no annotations provided, the description carries the full burden. It discloses return value structure ('document listing with dates and descriptions'), compensating for the missing output schema. However, it omits behavioral traits like safety, idempotency, or what happens when no report cards exist.
Conciseness 5/5
Two sentences with zero redundancy: first states the operation, second states the return value. Every word earns its place and the description is front-loaded with the action.
Completeness 4/5
For a zero-parameter tool without output schema, the description adequately explains what the tool returns. It could be improved by mentioning edge cases (empty results) or pagination, but it meets the minimum viable threshold for completeness given the low complexity.
Parameters 4/5
Input schema has zero parameters, establishing a baseline of 4. The description adds value by specifying the tool operates on 'the student', implying the student identifier comes from authentication context rather than explicit parameters, which clarifies the empty schema.
Purpose 4/5
States a clear verb ('Get') and resource ('report cards') scoped to 'the student'. However, it does not explicitly distinguish from sibling 'get_documents', which could also retrieve student documents, leaving ambiguity about which tool to use for document retrieval.
Usage Guidelines 2/5
Provides no guidance on when to use this tool versus alternatives like 'get_documents', 'get_grade_summary', or 'get_gradebook'. Does not mention prerequisites or conditions where this tool is preferred over querying live grade data.
- Behavior 3/5
With no annotations provided, the description carries the full burden. It successfully discloses the return structure (assignment name, course, due date, point value) compensating for the lack of output schema. However, it omits operational details such as whether results are sorted, pagination behavior, error handling, or confirmation that this is a read-only operation.
Conciseness 5/5
Three tightly constructed sentences: first establishes purpose and scope, second discloses return values, third provides usage context. No redundant words or tautologies. Information is front-loaded effectively.
Completeness 4/5
For a single-parameter tool without output schema or annotations, the description adequately covers the essential contract: what it retrieves, from where, and what data is returned. Could be improved by noting read-only nature or error conditions, but sufficient for the complexity level.
Parameters 3/5
Schema description coverage is 100% ('Number of days from today to look ahead'). The description references 'next N days' which aligns with the 'days' parameter, but adds no additional semantic detail, examples, or clarifications beyond what the schema already provides. Baseline 3 is appropriate for high schema coverage.
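A sketch of how the 'days' parameter documentation could go beyond the quoted schema text; the lower bound, the default of 7, and the inline example are assumptions added for illustration, not the server's actual schema:

```python
# Hypothetical enriched schema for the "days" parameter. The quoted
# description text is kept; the bound, default, and example are
# illustrative additions.
days_param = {
    "type": "integer",
    "description": (
        "Number of days from today to look ahead. Must be at least 1; "
        "e.g. days=7 covers the coming week."
    ),
    "minimum": 1,
    "default": 7,
}

def resolve_days(args: dict) -> int:
    """Apply the schema default when the caller omits the parameter."""
    return args.get("days", days_param["default"])
```

A declared default also tells the agent what happens when it omits the argument, which the current documentation leaves open.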
Purpose 4/5
The description clearly states the specific action (Get upcoming assignments), scope (within next N days, across all courses), and return content. However, it does not explicitly differentiate from sibling tool 'get_missing_assignments' despite the semantic overlap between 'upcoming' and 'missing' deadlines.
Usage Guidelines 3/5
Provides implied usage context ('Useful for planning what to work on') but lacks explicit guidance on when to use this versus alternatives like get_missing_assignments or get_calendar. No 'when-not-to-use' or prerequisite information is provided.
- Behavior 3/5
With no annotations provided, the description carries the full disclosure burden. It partially satisfies this by enumerating return fields (score, points possible, due date, type) and scope ('every assignment'), which helps the agent anticipate the data structure. However, it omits operational details like authentication requirements, rate limits, pagination behavior for large courses, or data privacy considerations.
Conciseness 5/5
The description is optimally concise at three sentences with zero redundancy. It follows effective front-loading: action (Get), resource (assignment-level data), return specification (Returns every assignment...), and usage guidance (Use this to...). Every sentence earns its place.
Completeness 4/5
Given the low complexity (1 parameter, simple string, no nested objects) and absence of an output schema, the description adequately compensates by detailing the return data structure and fields. It appropriately does not attempt to document parameters already well-covered by the schema. Minor gap: doesn't explicitly confirm this is read-only (though implied by 'Get').
Parameters 3/5
With 100% schema description coverage for the single 'courseName' parameter, the baseline is 3. The description references it implicitly ('for a specific course'), but adds no additional syntax guidance, validation rules, or format examples beyond what the schema already provides ('Partial or full course name...').
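The 'Partial or full course name' behavior the schema text implies could work as follows; case-insensitive substring matching is an assumption about the server, not confirmed behavior:

```python
# Hypothetical matcher for the courseName parameter, whose schema text
# is quoted as "Partial or full course name...". Case-insensitive
# substring matching is assumed for illustration.
def match_courses(course_name: str, courses: list[str]) -> list[str]:
    """Return every course whose name contains the given fragment."""
    needle = course_name.lower()
    return [c for c in courses if needle in c.lower()]
```

Spelling out the matching rule (and what happens on multiple matches) in the description would earn this dimension more than the baseline.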
Purpose 4/5
The description clearly states the tool retrieves 'detailed assignment-level data for a specific course' with specific verbs and resource identification. It effectively distinguishes from siblings like 'get_grade_summary' (high-level) and 'get_assignments_due' (temporal focus) through the 'detailed' and 'deeply analyze' qualifiers, though it doesn't explicitly contrast with the similarly-named 'get_gradebook' sibling.
Usage Guidelines 3/5
The description includes one usage cue: 'Use this to deeply analyze a single class,' which implies when to use it (deep analysis scenarios). However, it lacks explicit guidance on when NOT to use it (e.g., for quick overviews) or which sibling to prefer for other use cases like 'get_grade_summary'.
- Behavior 3/5
With no annotations provided, the description carries the full burden. It explains the threshold filtering behavior but omits critical behavioral details: what constitutes the grade value (current vs. final?), the return format structure, and behavior when no courses match the threshold.
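The threshold filtering the description discloses amounts to a one-line comparison. A minimal sketch, assuming a simple course-record shape and treating "grade" as the current grade (the very ambiguity noted above):

```python
# Minimal sketch of get_failing_or_low_grades-style filtering. The
# record shape ("course", "grade") and the current-grade semantics are
# assumptions, not confirmed from the server's source.
def failing_or_low(courses: list[dict], threshold: float) -> list[dict]:
    """Return the courses whose grade falls below the threshold."""
    return [c for c in courses if c["grade"] < threshold]
```

Documenting that an empty list (rather than an error) comes back when nothing matches would close the main gap noted here.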
Conciseness 5/5
Two well-constructed sentences. First sentence front-loads the core action; second adds usage value without redundancy. No filler or tautology.
Completeness 3/5
Adequate for a single-parameter tool with full schema coverage, but lacks return value description needed given the absence of an output schema (doesn't indicate whether it returns course names, IDs, or detailed grade objects).
Parameters 3/5
Schema coverage is 100%, so baseline is 3. Description mentions 'given grade threshold' which conceptually aligns with the parameter, but adds no syntax details, valid ranges, or semantic clarification beyond what the schema already provides.
Purpose 5/5
Clear specific verb (Get) + resource (courses/grades) with scope constraint (below threshold). The name and description together effectively distinguish this as a filtered/risk-identification view versus siblings like get_grade_details or get_grade_summary which presumably return comprehensive data.
Usage Guidelines 3/5
Provides implied usage context ('Useful for quickly finding classes at risk of failing') which suggests when to use it (intervention scenarios). However, lacks explicit 'when not to use' guidance or named alternatives for different grade queries.
- Behavior 3/5
No annotations are provided, so the description carries the full burden. It successfully discloses the return payload structure by listing specific fields (name, address, phone, principal), compensating for the missing output schema. However, it lacks details on caching, authentication requirements, or rate limits.
Conciseness 5/5
Single sentence with optimal information density. Front-loaded with action verb 'Get', followed by resource identification and specific field enumeration. No redundant words or filler content.
Completeness 4/5
Given the tool's simplicity (no input parameters) and lack of output schema, the description adequately compensates by enumerating the expected return fields. For a straightforward metadata retrieval tool, this level of detail is sufficient, though mentioning data freshness or caching would improve it further.
Parameters 4/5
The input schema contains zero parameters, which establishes a baseline score of 4. The description correctly does not invent parameters that don't exist in the schema, maintaining consistency with the empty properties object.
Purpose 4/5
The description clearly states the verb (Get) and resource (student's school information), and specifies exact fields returned (name, address, phone number, principal). It implicitly distinguishes from student-data siblings (grades, attendance, etc.) by focusing on institutional metadata, though it doesn't explicitly mention siblings.
Usage Guidelines 3/5
No explicit when-to-use or alternatives are stated, but usage is clearly implied by the specific resource (school institution data vs student performance data). The field listing (principal, address) provides implicit context that this retrieves administrative contact information rather than academic records.
- Behavior 3/5
With no annotations provided, the description carries the full burden. It discloses the data granularity ('for each period') and content types (absences, tardies, reason codes), which helps the agent understand the return structure. However, it lacks disclosure of error behaviors, authentication requirements, or privacy constraints.
Conciseness 5/5
The description is a single, dense sentence of 12 words that immediately conveys the tool's function. Every clause earns its place by specifying distinct data elements retrieved.
Completeness 4/5
Given the absence of an output schema, the description compensates adequately by enumerating the specific data fields returned (absences, tardies, reason codes, period breakdown). For a simple read-only tool with no parameters, this provides sufficient context for invocation.
Parameters 4/5
With zero parameters and 100% schema description coverage, this earns the baseline score of 4. The description correctly omits parameter discussion since none exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb 'Get' with the clear resource 'student's attendance record', and distinguishes itself from grade-focused siblings (get_gradebook, get_gpa, etc.) and scheduling tools by specifying unique attendance data: absences, tardies, and reason codes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus siblings like get_schedule (which returns period timing but not attendance status) or get_student_info. No prerequisites or exclusion criteria are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 4/5
No annotations are provided, so the description carries the full burden. It discloses the calculation logic (accounting for assignment weight and existing grade) and the return format (a percentage), since no output schema exists. It lacks an explicit safety declaration (read-only) and edge-case handling (impossible targets).
- Conciseness 5/5
Three sentences with zero waste: purpose (sentence 1), behavioral logic (sentence 2), output format (sentence 3). Front-loaded with a specific action.
- Completeness 4/5
Adequate for a calculation tool with three well-documented parameters. Compensates for the missing output schema by specifying the return format. Lacks only edge-case documentation (e.g., unattainable targets) to be complete.
- Parameters 3/5
Schema coverage is 100%, with clear descriptions for all three parameters. The description mentions the 'assignment's weight' and 'existing grade', implying lookup behavior beyond the inputs, but does not elaborate on parameter syntax or validation beyond the schema.
- Purpose 5/5
Specific verb (Calculate) + resource (score/grade) + scope (upcoming assignment to reach a target). Distinguishes itself from retrieval siblings like get_gradebook and from simulation siblings by focusing on a single-target calculation.
- Usage Guidelines 3/5
Implies the use case (planning for upcoming assignments) but fails to explicitly differentiate itself from the sibling simulate_grade_scenario or state when to prefer it over ask_about_grades. No explicit 'when not to use' guidance.
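The 'impossible targets' edge case the review flags is easy to see in the arithmetic. As a hedged sketch — the server's actual formula is undocumented, so this assumes the tool back-solves a simple weighted average — the calculation might look like:

```python
def score_needed(current_grade: float, target_grade: float, weight: float) -> float:
    """Back-solve the score needed on an upcoming assignment.

    Assumes the final grade is a weighted average:
        final = current_grade * (1 - weight) + needed_score * weight
    where `weight` is the fraction of the overall grade the assignment carries.
    This is an illustrative guess at the tool's logic, not its actual code.
    """
    if not 0 < weight <= 1:
        raise ValueError("weight must be in (0, 1]")
    return (target_grade - current_grade * (1 - weight)) / weight

# A result above 100 is the undisclosed edge case: the target is unattainable
# without extra credit.
print(score_needed(80, 85, 0.2))  # 105.0
```

Under this model, any target more than `weight * (100 - current_grade)` points above the current grade yields a required score over 100, which is exactly the behavior the description leaves undocumented.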
- Behavior 3/5
No annotations are provided, so the description carries the full disclosure burden. It successfully communicates the sorting behavior (lowest to highest) and scope (all courses), but lacks details on the output format, the handling of ungraded courses, or whether the operation is read-only/safe.
- Conciseness 5/5
Two sentences with zero waste. The first front-loads the action and sorting logic; the second provides the usage rationale. Every word earns its place.
- Completeness 4/5
Given zero parameters and no annotations, the description adequately covers the tool's purpose and behavior. Minor gap: without an output schema, it could have briefly described the return structure (e.g., 'returns a list of course names with current scores').
- Parameters 4/5
Zero parameters, with (trivially) 100% schema coverage. The description correctly focuses on behavior rather than inventing parameter documentation. The baseline of 4 is appropriate for parameterless tools.
- Purpose 5/5
Specific verb ('Get') + resource ('courses') + distinctive behavior ('sorted by current grade from lowest to highest'). The sorting detail clearly distinguishes this from siblings like get_failing_or_low_grades (filtered) or get_gradebook (likely unsorted raw data).
- Usage Guidelines 3/5
Provides implied usage context ('Quick overview to see which classes need the most attention'), suggesting when to use it (prioritization). However, it lacks explicit guidance on when to choose alternatives like get_failing_or_low_grades (for only at-risk courses) or get_gradebook (for full data).
- Behavior 3/5
With no annotations provided, the description must carry the full burden of behavioral disclosure. It adds useful context by explicitly excluding photo binary data from the response, but fails to state whether the operation is read-only, requires specific permissions, or has rate-limiting implications.
- Conciseness 5/5
The description consists of exactly two sentences with zero filler. The first front-loads the core functionality and the complete field list, while the second provides a precise negative constraint regarding photo data. Every word earns its place.
- Completeness 4/5
Given the absence of an output schema, the description adequately compensates by listing the five specific data fields returned. However, it stops short of describing the response structure, nesting, or data types, leaving minor ambiguity about how the information is formatted.
- Parameters 4/5
The input schema defines zero parameters. Per the evaluation guidelines, parameterless tools receive a baseline score of 4, as there are no parameter semantics requiring clarification beyond what the empty schema structure indicates.
- Purpose 5/5
The description uses the specific verb 'Get' with the resource 'student's profile information' and explicitly enumerates the returned data fields (name, grade level, student ID, birthdate, school). It effectively distinguishes itself from siblings like get_grades or get_attendance by focusing on static demographic profile data rather than academic or scheduling information.
- Usage Guidelines 3/5
The description provides a negative constraint ('Does not return the photo binary'), which implicitly sets expectations, but lacks explicit guidance on when to select this tool versus alternatives like get_school_info or get_schedule. No prerequisites, conditions, or 'use when' directives are provided.
- Behavior 3/5
No annotations are provided, so the description carries the full burden. It successfully discloses the scope of the returned data ('full course-by-course data including current grades, all assignments, scores, and missing work') but does not explicitly address safety (read-only status), authentication requirements, or the rate-limiting implications of such a comprehensive data dump.
- Conciseness 5/5
The description consists of exactly three sentences that are front-loaded with the core action, and each earns its place: the first establishes purpose, the second details the payload, and the third provides usage context. There is no redundant or wasteful language.
- Completeness 4/5
Given the absence of an output schema, the description adequately compensates by detailing what the tool returns ('current grades, all assignments, scores, and missing work'). However, it could briefly mention whether this includes historical data or only current-term information.
- Parameters 4/5
The input schema contains zero parameters, which per the guidelines establishes a baseline score of 4. The description correctly implies that no filtering is possible by stating it returns 'full' and 'comprehensive' data for all courses, aligning with the empty parameters schema.
- Purpose 5/5
The description uses specific verbs ('Get', 'Returns') and clearly identifies the resource as a 'comprehensive grade context dump' containing 'full course-by-course data.' It effectively distinguishes itself from siblings like get_gpa or get_missing_assignments by emphasizing 'comprehensive' coverage versus targeted queries.
- Usage Guidelines 4/5
The description provides explicit guidance on when to use this tool: 'as a starting point for complex analysis', with concrete examples like 'what should I focus on this week'. This implicitly contrasts with sibling tools that handle specific, narrow queries (e.g., get_grade_summary, calculate_grade_needed).
- Behavior 4/5
No annotations are provided, so the description carries the full burden. It discloses the return structure ('Returns all courses with their assignments and scores') and the temporal indexing behavior. It does not mention safety properties (read-only) or error conditions, though 'Get' in the name strongly implies read-only behavior.
- Conciseness 5/5
Three sentences with zero waste: purpose statement, return-value disclosure, and parameter usage guidance. Front-loaded with the core action and appropriately sized for the tool's simplicity.
- Completeness 4/5
Complete for a single-parameter read tool with no output schema. It covers the purpose, return payload structure, and parameter usage. Minor gap: it does not describe error behavior for invalid reporting periods or auth requirements, though these may be implicitly understood in context.
- Parameters 4/5
Schema coverage is 100%, establishing a baseline of 3. The description adds valuable imperative usage context ('Use reportingPeriod 0 for the current period. Increment for previous periods.') that clarifies the indexing semantics beyond the schema's mechanical description.
- Purpose 5/5
Specific verb ('Get') + resource ('full gradebook') + scope ('for a reporting period'). It effectively distinguishes itself from siblings like 'get_grade_summary' and 'get_grade_details' by emphasizing the 'full' scope and 'all courses with their assignments and scores'.
- Usage Guidelines 3/5
Provides clear parameter usage logic (0 = current, increment for previous) but lacks explicit guidance on when to choose this over siblings like 'get_grade_summary' or 'get_failing_or_low_grades'. Differentiation is implied through 'full' but not stated directly.
- Behavior 4/5
With no annotations provided, the description carries the full burden and successfully discloses a key behavioral trait: the sorting logic ('sorted by impact'). It also clarifies the scope ('every course', with no filtering). Safety/auth details are missing, but 'Find' implies a read-only operation.
- Conciseness 5/5
Two efficient sentences with zero waste. Front-loaded with the core action ('Find all missing...'), followed by the return behavior and value proposition. Every clause earns its place.
- Completeness 4/5
Despite the lack of an output schema, the description adequately explains the return characteristics (sorted by point value/impact) and scope. Sufficient for a zero-parameter tool, though return data structure details would elevate this to a 5.
- Parameters 4/5
Zero parameters are present in the schema, meeting the baseline expectation of 4. The description appropriately focuses on behavior rather than non-existent parameters.
- Purpose 5/5
States the specific verb 'Find' with the clear resource 'missing or unscored assignments' and scope 'across every course'. Distinguishes itself from the sibling get_assignments_due (upcoming deadlines) by focusing on missing/unscored work, and from grade retrieval tools by targeting specific assignment gaps.
- Usage Guidelines 3/5
Implies the usage context ('so the student knows which missing work hurts their grade the most'), suggesting prioritization use cases, but lacks explicit when-to-use guidance or contrasts with alternatives like get_assignments_due or get_failing_or_low_grades.
- Behavior 4/5
No annotations are provided, so the description carries the full burden. It effectively discloses the return structure ('Returns each period with...'), compensating for the lack of an output schema. While it implies read-only safety via 'Get', explicit confirmation of no side effects would strengthen this further.
- Conciseness 5/5
Two sentences with zero waste. The first establishes purpose and scope; the second documents the return values. Front-loaded with the action verb and appropriately sized for the tool's simplicity.
- Completeness 4/5
For a zero-parameter read operation, the description is sufficiently complete. It compensates for the missing output schema by detailing the return structure (periods, course details, meeting times). No significant gaps given the low complexity.
- Parameters 4/5
Zero parameters are present (baseline 4). The phrase 'for the current term' adds semantic value by explaining why no term-selection parameter is needed, confirming the tool uses implicit context.
- Purpose 5/5
States the specific verb (Get), resource (student's class schedule), and scope (current term). The return-value details (course name, teacher, room number) clearly distinguish this from the sibling get_calendar and from grade-related tools.
- Usage Guidelines 3/5
No explicit when-to-use or when-not-to-use guidance is provided. However, the specificity of 'class schedule' versus the sibling 'get_calendar' implies distinct use cases, though an explicit comparison would strengthen agent selection.
- Behavior 3/5
With no annotations provided, the description carries the full burden of behavioral disclosure. It establishes the hypothetical nature effectively but fails to disclose whether the operation is read-only (it doesn't modify actual grades), what the return format looks like (percentage, letter grade, or detailed breakdown), or any rate-limiting considerations.
- Conciseness 5/5
The description consists of exactly two high-value sentences: one defining the function and one providing a concrete example. There is no redundant information, and the structure efficiently front-loads the core purpose before illustrating with the example.
- Completeness 4/5
Given the 100% schema coverage and lack of an output schema, the description adequately covers the input requirements. It could be improved by mentioning the return-value format (e.g., projected percentage or letter grade), but the core functionality is sufficiently described for an agent to select and invoke the tool correctly.
- Parameters 4/5
Although schema coverage is 100%, the description adds significant value through the concrete example showing the exact array structure [{score: 85, outOf: 100}]. This clarifies the semantics of how to construct hypothetical assignments beyond the raw schema definitions, particularly for the nested object structure.
- Purpose 5/5
The description uses the specific verb 'Project' with the clear resource 'student's grade' and scope 'hypothetical scores on future assignments.' It clearly distinguishes itself from sibling tools like get_grade_details or calculate_grade_needed by emphasizing the hypothetical/future-scenario aspect rather than current state or target calculation.
- Usage Guidelines 4/5
The example 'what if I got 85, 90, and 95 on my next 3 assignments' provides clear contextual guidance on when to invoke this tool. However, it lacks explicit guidance on when NOT to use it (e.g., it doesn't say to use calculate_grade_needed instead for target-grade calculations) or explicit naming of alternatives.
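The [{score: 85, outOf: 100}] array structure implies a points-based projection. A minimal sketch of that arithmetic — assuming straight points-based grading rather than the server's (undocumented) category weighting — might look like:

```python
def project_grade(earned: float, possible: float, hypothetical: list[dict]) -> float:
    """Project a course grade after adding hypothetical future assignments.

    `earned`/`possible` are the current point totals; each hypothetical entry
    mirrors the tool's documented shape, e.g. {"score": 85, "outOf": 100}.
    Assumes no category weights — an illustrative model, not the server's code.
    """
    for h in hypothetical:
        earned += h["score"]
        possible += h["outOf"]
    return round(100 * earned / possible, 2)

# 450/500 points today, then 85/100 and 95/100 on the next two assignments:
print(project_grade(450, 500, [{"score": 85, "outOf": 100},
                               {"score": 95, "outOf": 100}]))  # 90.0
```

If the real server weights assignment categories (tests vs. homework), the projection would differ; the description does not say which model applies.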
- Behavior 4/5
With no annotations provided, the description carries the full disclosure burden. It effectively communicates the data-source characteristics ('live'), the calculation methodology ('unweighted', '4.0 scale'), and the complete return structure (a per-course breakdown with specific fields), though it omits error handling and permission requirements.
- Conciseness 5/5
The description consists of two efficiently structured sentences. The first establishes the operation (compute GPA), and the second details the return value. Every clause adds value without repeating the schema or resorting to arbitrary verbosity.
- Completeness 5/5
Despite the absence of an output schema, the description thoroughly documents the return values (unweighted GPA, scale, and the detailed per-course breakdown structure). Combined with behavioral details and zero parameters needing explanation, the description is complete.
- Parameters 4/5
With zero parameters, the baseline score is 4 per the rubric. The description appropriately requires no additional parameter documentation, though it implicitly confirms the tool handles data sourcing internally by referencing 'live gradebook data'.
- Purpose 5/5
The description uses a specific verb ('Compute') and resource ('GPA from live gradebook data'), clearly distinguishing this from mere retrieval tools like 'get_gradebook' or 'get_report_cards'. It further differentiates by specifying the calculation method ('unweighted', '4.0 scale') and the detailed output structure.
- Usage Guidelines 3/5
The description provides implied usage guidance by emphasizing 'Compute' and 'live gradebook data', suggesting this is for current calculated values rather than official records. However, it lacks explicit 'when to use' guidance and named alternatives among the many siblings (e.g., when to prefer this over 'calculate_grade_needed' or 'get_grade_summary').
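For reference, an unweighted 4.0-scale GPA typically maps each course percentage to grade points and averages them with every course counting equally. A sketch under a common letter-grade mapping — the 90/80/70/60 cutoffs are an assumption, since the server's actual cutoffs are not documented:

```python
def grade_points(pct: float) -> float:
    """Map a course percentage to 4.0-scale points (assumed 90/80/70/60 cutoffs)."""
    for cutoff, points in ((90, 4.0), (80, 3.0), (70, 2.0), (60, 1.0)):
        if pct >= cutoff:
            return points
    return 0.0

def unweighted_gpa(course_pcts: list[float]) -> float:
    """Unweighted GPA: every course counts equally, no honors/AP bumps."""
    return round(sum(grade_points(p) for p in course_pcts) / len(course_pcts), 2)

print(unweighted_gpa([93, 85, 78, 91]))  # (4 + 3 + 2 + 4) / 4 = 3.25
```

'Unweighted' here means both no course-difficulty bonus and no credit-hour weighting; the description confirms the former but is silent on the latter.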
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
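The weighting described above can be expressed directly. A sketch of the scoring arithmetic (the dimension keys are illustrative shorthand; only the weights, the 60/40 blend, the 70/30 split, and the tier cutoffs come from the description):

```python
# Per-tool dimension weights from the rubric description.
DIMENSION_WEIGHTS = {
    "purpose": 0.25, "usage": 0.20, "behavior": 0.20,
    "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
}

def tdqs(dim_scores: dict) -> float:
    """Tool Definition Quality Score: weighted mean of the six 1-5 dimensions."""
    return sum(dim_scores[d] * w for d, w in DIMENSION_WEIGHTS.items())

def definition_quality(per_tool: list[dict]) -> float:
    """Server-level TDQ: 60% mean + 40% minimum, so one weak tool drags it down."""
    scores = [tdqs(t) for t in per_tool]
    return 0.6 * (sum(scores) / len(scores)) + 0.4 * min(scores)

def overall_score(per_tool: list[dict], coherence: float) -> float:
    """Overall quality: 70% tool definition quality, 30% server coherence."""
    return 0.7 * definition_quality(per_tool) + 0.3 * coherence

def tier(score: float) -> str:
    """Tier cutoffs from the description: A >= 3.5, B >= 3.0, C >= 2.0, D >= 1.0."""
    for cutoff, t in ((3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")):
        if score >= cutoff:
            return t
    return "F"
```

The 40% minimum term in `definition_quality` is why the description warns that "a single poorly described tool pulls the score down": a lone 2.0-TDQS tool caps that component regardless of how many 5.0 tools surround it.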
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/shengdynasty/school-mcp'
If you have feedback or need assistance with the MCP directory API, please join our Discord server.