BC Curriculum
Server Details
Query the full BC K-12 curriculum: Big Ideas, Competencies, Content, and more.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pdg6/bc-curriculum-mcp-server
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score of 4.2/5 across all 8 tools.
Each tool has a clearly distinct purpose with no overlap. Tools like get_course_curriculum (retrieve full curriculum), get_competency_connections (find related competencies), and search_cross_curricular (identify interdisciplinary overlaps) serve unique functions within the curriculum analysis domain. The descriptions reinforce these distinctions, making tool selection unambiguous.
All tool names follow a consistent verb_noun pattern using snake_case (e.g., get_course_curriculum, list_courses, search_curriculum). The verbs (get, list, search) are appropriately matched to their actions, and the nouns clearly indicate the target resources (e.g., curriculum, courses, changes). This consistency enhances predictability and usability.
With 8 tools, the server is well-scoped for its purpose of BC curriculum analysis. Each tool serves a specific, valuable function—from retrieving curriculum data to analyzing changes, progressions, and interdisciplinary connections. The count is neither too sparse nor bloated, covering core workflows without redundancy.
The toolset provides comprehensive coverage for curriculum analysis, including retrieval (get_course_curriculum, list_courses), search (search_curriculum), change tracking (get_curriculum_changes, get_course_history), and advanced analysis (get_competency_connections, get_grade_progression, search_cross_curricular). There are no obvious gaps; agents can perform full lifecycle operations from discovery to detailed analysis.
Available Tools
8 tools

get_competency_connections — Get Competency Connections (A · Read-only · Idempotent)
Find curricular competencies that appear across multiple subjects or courses. Useful for interdisciplinary curriculum design and identifying transferable skills.
Args:
competency_text (string): A competency description to find connections for
scope (string, optional): Where to search ('same_subject', 'cross_subject', 'all'). Default 'all'.
Returns: Related competencies from other courses/subjects with similarity ranking.
| Name | Required | Description | Default |
|---|---|---|---|
| scope | No | Where to search for related competencies | all |
| competency_text | Yes | A competency description to find connections for | |
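Based on the documented schema, a client call to this tool might assemble arguments like the following sketch. The request framing follows the JSON-RPC 2.0 shape used by MCP; the competency text and scope value are illustrative, not taken from the server.

```python
import json

# Illustrative MCP tools/call payload for get_competency_connections.
# Argument names mirror the documented schema; the values are examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_competency_connections",
        "arguments": {
            "competency_text": "Use evidence to draw conclusions",
            # 'same_subject', 'cross_subject', or 'all' (the default)
            "scope": "cross_subject",
        },
    },
}
print(json.dumps(request, indent=2))
```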
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=false, and idempotentHint=true, covering safety and idempotency. The description adds useful context about the tool's purpose (finding connections across subjects) and return format (similarity ranking), but doesn't disclose additional behavioral traits like rate limits, authentication needs, or data freshness. With annotations covering core safety, a 3 is appropriate as the description adds some value without contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with a clear purpose statement, usage context, and parameter/return details in separate sections. Every sentence earns its place, with no redundant information. The front-loaded purpose statement immediately communicates the tool's value, making it easy for an agent to understand.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, 100% schema coverage, no output schema), the description is mostly complete. It covers purpose, usage, parameters, and returns, but lacks details on output structure (beyond 'similarity ranking') and potential limitations. With annotations providing safety context, it's sufficient but could be slightly enhanced for a perfect score.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with both parameters clearly documented in the schema. The description adds minimal value beyond the schema by briefly mentioning the parameters in the Args section and clarifying the scope options, but doesn't provide additional semantics like examples or edge cases. Baseline 3 is correct when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('find curricular competencies that appear across multiple subjects or courses') and distinguishes it from siblings by focusing on interdisciplinary connections. It explicitly mentions the resource ('curricular competencies') and the goal ('identifying transferable skills'), making it distinct from tools like get_course_curriculum or search_curriculum.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('useful for interdisciplinary curriculum design and identifying transferable skills'), which implicitly differentiates it from siblings focused on single courses or historical data. However, it doesn't explicitly state when not to use it or name specific alternatives among the sibling tools, which prevents a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_course_curriculum — Get Course Curriculum (A · Read-only · Idempotent)
Get the complete BC curriculum for a specific course: Big Ideas, Curricular Competencies (grouped by domain), and Content/KDU items with elaborations. Returns the full three-column structure used by BC Ministry of Education.
Args:
subject (string): Subject slug (e.g., 'adst', 'science')
grade (integer): Grade level (0=K, 1-12)
course (string, optional): Course slug (e.g., 'technology-explorations'). If omitted, returns all courses for that subject+grade.
Returns: Complete three-column curriculum structure per course, including elaborations.
| Name | Required | Description | Default |
|---|---|---|---|
| grade | Yes | Grade level (0=Kindergarten, 1-12) | |
| course | No | Course slug (e.g., 'technology-explorations'). If omitted, returns all courses for subject+grade. | |
| subject | Yes | BC curriculum subject slug (e.g., 'adst', 'science') | |
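A client might validate and assemble these arguments before calling the tool. The helper below is a hypothetical sketch (the function name and validation are assumptions, though the grade range and optional-course behavior follow the documented schema):

```python
def build_curriculum_args(subject, grade, course=None):
    """Assemble arguments for get_course_curriculum (illustrative sketch)."""
    if not 0 <= grade <= 12:  # 0 = Kindergarten, per the schema
        raise ValueError("grade must be 0 (K) through 12")
    args = {"subject": subject, "grade": grade}
    if course is not None:
        # Omitting 'course' returns all courses for that subject+grade.
        args["course"] = course
    return args

print(build_curriculum_args("adst", 0))
print(build_curriculum_args("science", 11, course="chemistry"))
```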
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=false, and idempotentHint=true, covering safety and idempotency. The description adds valuable context beyond annotations: it specifies the exact structure of the return data ('three-column structure used by BC Ministry of Education' with specific components like Big Ideas and elaborations), which helps the agent understand output format. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by a structured breakdown of args and returns. Every sentence adds value: the first defines the tool, the args section clarifies parameter usage, and the returns section specifies output format. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 parameters, 100% schema coverage, rich annotations), the description is complete. It covers purpose, usage, parameter behavior, and output structure. With annotations handling safety and idempotency, and no output schema provided, the description adequately explains what the tool returns ('Complete three-column curriculum structure per course, including elaborations').
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with all parameters well-documented in the schema (e.g., grade range 0-12, subject enum values, course optional behavior). The description adds minimal extra semantics: it clarifies '0=K' for grade and provides example slugs, but these are already implied in the schema. Baseline 3 is appropriate since the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get the complete BC curriculum'), the resource ('for a specific course'), and the detailed output structure ('Big Ideas, Curricular Competencies grouped by domain, Content/KDU items with elaborations, three-column structure'). It distinguishes from siblings by focusing on complete curriculum retrieval rather than connections, history, changes, progression, listing, or searching.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'Get the complete BC curriculum for a specific course' and clarifies the optional 'course' parameter behavior ('If omitted, returns all courses for that subject+grade'). This distinguishes it from siblings like list_courses (which likely lists courses without curriculum details) and search_curriculum (which might search within curriculum content).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_course_history — Get Course History (A · Read-only · Idempotent)
Show the crawl history and change timeline for a specific course. Includes each crawl snapshot (date, item counts, content hash) and a changelog of all detected modifications.
Args:
subject (string): Subject slug (e.g., 'science')
grade (integer): Grade level (0=K, 1-12)
course (string, optional): Course slug (e.g., 'chemistry'). If omitted, shows history for all courses at subject+grade.
Returns: Timeline of crawl snapshots and detected changes per course.
| Name | Required | Description | Default |
|---|---|---|---|
| grade | Yes | Grade level (0=Kindergarten, 1-12) | |
| course | No | Course slug (e.g., 'chemistry'). If omitted, shows history for all courses at subject+grade. | |
| subject | Yes | BC curriculum subject slug (e.g., 'adst', 'science') | |
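The optional `course` parameter is the main behavioral switch here: leaving it out widens the query to every course at the subject+grade. A minimal sketch of the call arguments (values are examples):

```python
import json

# Illustrative arguments for get_course_history. Omitting the optional
# 'course' slug requests history for all courses at this subject+grade,
# per the documented behavior.
call = {
    "name": "get_course_history",
    "arguments": {"subject": "science", "grade": 11},
}
print(json.dumps(call))
```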
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false. The description adds valuable behavioral context beyond annotations by specifying what data is included (date, item counts, content hash, changelog) and clarifying the optional course parameter behavior (shows all courses if omitted). No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the core purpose in the first sentence. The 'Args' and 'Returns' sections are structured but slightly redundant with the schema. Every sentence adds value, though some information duplication exists.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, rich annotations (covering safety and idempotency), and 100% schema coverage, the description is mostly complete. It explains the return format (timeline of snapshots and changes) despite no output schema. Minor gaps include lack of pagination or rate limit details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents all parameters. The description's 'Args' section repeats schema information without adding significant meaning beyond it. The baseline score of 3 reflects adequate but redundant parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('show', 'includes') and resources ('crawl history', 'change timeline', 'crawl snapshot', 'changelog'). It distinguishes from siblings by focusing on historical crawl data rather than current curriculum, connections, or search functions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool (to view crawl history and change timeline for courses). It doesn't explicitly mention when not to use it or name specific alternatives among siblings, but the purpose naturally differentiates it from tools like get_course_curriculum (current content) or get_curriculum_changes (broader changes).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_curriculum_changes — Get Curriculum Changes (A · Read-only · Idempotent)
Show what changed in BC curriculum since a given date. Detects added, removed, and modified Big Ideas, Competencies, and Content items across crawl runs. Requires at least two crawls to have change data.
Args:
since (string, optional): ISO date (e.g., '2026-01-15'). Default: last 30 days.
subject (string, optional): Filter by subject slug
grade (integer, optional): Filter by grade level
change_type (string, optional): Filter by change type ('added', 'removed', 'modified', 'all'). Default 'all'.
limit (integer, optional): Max entries to return (default 50, max 100)
Returns: Course-level summary of which courses changed, plus item-level detail of what specifically was added/removed/modified.
| Name | Required | Description | Default |
|---|---|---|---|
| grade | No | Grade level (0=Kindergarten, 1-12) | |
| limit | No | Maximum changelog entries to return (default 50) | |
| since | No | ISO date string — show changes detected after this date (e.g., '2026-01-15'). Defaults to last 30 days. | |
| subject | No | BC curriculum subject slug (e.g., 'adst', 'science') | |
| change_type | No | Filter by type of change | all |
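Since `since` defaults to the last 30 days, a client that wants reproducible queries can compute that date explicitly. The helper below is a sketch of one way to do so; the function name is hypothetical, and the limit cap follows the documented max of 100:

```python
from datetime import date, timedelta

def default_since(today):
    """Compute the implied 'since' date (last 30 days) as an ISO string."""
    return (today - timedelta(days=30)).isoformat()

# Illustrative arguments for get_curriculum_changes.
args = {
    "since": default_since(date(2026, 2, 15)),
    "change_type": "modified",  # 'added', 'removed', 'modified', or 'all'
    "limit": 50,                # default 50, server max 100
}
print(args["since"])
```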
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, non-destructive, and idempotent behavior. The description adds valuable context beyond annotations: it explains the prerequisite ('Requires at least two crawls to have change data') and describes the return format ('Course-level summary... plus item-level detail'). This enhances the agent's understanding of how the tool behaves and what to expect, though it doesn't mention rate limits or authentication needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured: a clear purpose statement, a prerequisite note, a bulleted list of parameters with key details, and a summary of returns. Every sentence adds value without redundancy. It's front-loaded with the core functionality and maintains a logical flow, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (change detection across curriculum components) and the absence of an output schema, the description does a good job of explaining what the tool returns ('Course-level summary... plus item-level detail'). It covers prerequisites, parameters, and output expectations. However, it could be more complete by detailing the structure of the returned data or example outputs, which would help the agent interpret results better.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, meaning all parameters are well-documented in the schema itself. The description lists parameters with brief notes (e.g., default values, filters) but doesn't add significant semantic meaning beyond what the schema provides. For example, it doesn't explain the implications of 'change_type' filtering or how 'subject' slugs map to curriculum areas. The baseline of 3 is appropriate given the comprehensive schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Show what changed in BC curriculum since a given date. Detects added, removed, and modified Big Ideas, Competencies, and Content items across crawl runs.' It specifies the exact resource (BC curriculum), the action (show changes), and the scope of changes (Big Ideas, Competencies, Content items). It also distinguishes itself from siblings by focusing on change detection rather than general curriculum retrieval or searching.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage context: 'Requires at least two crawls to have change data.' This tells the agent when the tool can be used (prerequisite). However, it does not specify when to use this tool versus alternatives like 'get_course_history' or 'search_curriculum', which might overlap in functionality. The guidance is clear but lacks sibling differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_grade_progression — Get Grade Progression (A · Read-only · Idempotent)
Show how Big Ideas, Competencies, and Content progress across grade levels for a BC subject. Useful for understanding scaffolding, prerequisites, and learning trajectories. When a query is provided, filters to only matching items at each grade — showing a focused vertical thread rather than a full data dump.
Args:
subject (string): Subject slug
grade_from (integer): Starting grade (0=K, 1-12)
grade_to (integer): Ending grade (0=K, 1-12)
focus (string, optional): Which element to trace ('big_ideas', 'competencies', 'content', 'all'). Default 'all'.
query (string, optional): Focus on a specific concept (e.g., 'evidence', 'multiplication'). Only matching items shown at each grade.
Returns: Grade-by-grade breakdown of curriculum elements showing progression, optionally filtered to a concept thread.
| Name | Required | Description | Default |
|---|---|---|---|
| subject | Yes | Subject slug | |
| grade_from | Yes | Starting grade (0=Kindergarten, 1-12) | |
| grade_to | Yes | Ending grade (0=Kindergarten, 1-12) | |
| focus | No | Which element to trace ('big_ideas', 'competencies', 'content', 'all') | all |
| query | No | Focus on a specific concept (e.g., 'evidence', 'multiplication') | |
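A client-side sketch of argument assembly for this tool, assuming the documented args. The helper name and the ordering check (`grade_from` not exceeding `grade_to`) are assumptions, not confirmed server behavior:

```python
def build_progression_args(subject, grade_from, grade_to, focus="all", query=None):
    """Assemble get_grade_progression arguments (illustrative sketch)."""
    for g in (grade_from, grade_to):
        if not 0 <= g <= 12:  # 0 = Kindergarten
            raise ValueError("grades must be 0 (K) through 12")
    if grade_from > grade_to:  # assumed sanity check, not documented
        raise ValueError("grade_from must not exceed grade_to")
    args = {"subject": subject, "grade_from": grade_from,
            "grade_to": grade_to, "focus": focus}
    if query is not None:
        args["query"] = query  # narrows output to a focused concept thread
    return args

print(build_progression_args("mathematics", 3, 7, query="multiplication"))
```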
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds valuable context about filtering behavior ('filters to only matching items at each grade') and output format ('grade-by-grade breakdown'), which goes beyond what annotations provide. No contradictions exist with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with a clear purpose statement upfront, followed by usage context, and then parameter explanations. Every sentence adds value: the first defines the tool's core function, the second explains when to use it, and the third clarifies filtering behavior. No redundant or unnecessary information is present.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (progression analysis with filtering), rich annotations (covering safety and idempotency), and lack of output schema, the description provides strong contextual completeness. It explains the tool's purpose, usage scenarios, filtering behavior, and return format. The only minor gap is that without an output schema, more detail on the 'grade-by-grade breakdown' structure could be helpful, but the description adequately compensates.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (per context signals), so the schema fully documents all parameters. The description adds some semantic context about parameter purposes (e.g., 'focus on a specific concept' for query, 'which element to trace' for focus), but doesn't provide additional syntax or format details beyond what the schema likely contains. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Show how...progress'), identifies the resources (Big Ideas, Competencies, Content for a BC subject), and distinguishes from siblings by focusing on vertical progression across grades rather than connections, curriculum details, or search functions. It explicitly mentions 'grade-by-grade breakdown' which differentiates it from tools like get_course_curriculum or search_curriculum.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('useful for understanding scaffolding, prerequisites, and learning trajectories') and when it's particularly valuable ('When a query is provided, filters to only matching items...showing a focused vertical thread rather than a full data dump'). It implicitly contrasts with siblings by emphasizing progression analysis versus other curriculum exploration methods.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_courses — List BC Courses (A · Read-only · Idempotent)
List all available courses in the BC curriculum database (K-12). Use this to discover what courses are available before querying specific curriculum data.
Args:
subject (string, optional): Filter by subject slug
grade (integer, optional): Filter by grade level (0=K, 1-12)
Returns: List of courses with subject, grade, name, and URL.
| Name | Required | Description | Default |
|---|---|---|---|
| grade | No | Grade level (0=Kindergarten, 1-12) | |
| subject | No | BC curriculum subject slug (e.g., 'adst', 'science') | |
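Since both parameters are optional filters, a client can build anything from an empty argument set (list everything) to a narrow subject+grade query. A minimal sketch, with a hypothetical helper name:

```python
def build_list_courses_args(subject=None, grade=None):
    """Assemble optional filters; an empty dict lists every K-12 course."""
    args = {}
    if subject is not None:
        args["subject"] = subject
    if grade is not None:
        args["grade"] = grade
    return args

print(build_list_courses_args())                          # no filters
print(build_list_courses_args(subject="adst", grade=9))   # narrowed query
```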
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, openWorldHint=false, and idempotentHint=true, covering safety and idempotency. The description adds useful context about the tool's role in discovery and the database scope (BC curriculum, K-12), which isn't covered by annotations. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by a usage guideline and a structured Args/Returns section. Every sentence adds value without redundancy, making it efficient and well-organized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (list operation), rich annotations, and 100% schema coverage, the description is largely complete. It lacks an output schema, but the Returns section describes the response format. The only minor gap is no explicit mention of pagination or limits, but this is acceptable for a simple list tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with both parameters (subject and grade) well-documented in the schema. The description adds minimal value beyond the schema by mentioning filtering but doesn't provide additional syntax or format details. Baseline 3 is appropriate given the comprehensive schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List' and resource 'all available courses in the BC curriculum database (K-12)', specifying the scope. It distinguishes from siblings by indicating this is for discovery before querying specific curriculum data, unlike tools like get_course_curriculum or search_curriculum that likely retrieve detailed content.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states 'Use this to discover what courses are available before querying specific curriculum data', providing clear when-to-use guidance. It implies alternatives like get_course_curriculum for detailed data; although it doesn't name specific siblings, the context is sufficient for differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_cross_curricular — Search Cross-Curricular Connections (A · Read-only · Idempotent)
Find curriculum elements shared between two or more subjects at the same grade level. Identifies overlapping competencies, big ideas, and content across subjects. Essential for interdisciplinary planning.
Args:
subjects (string[]): Two or more subject slugs to compare (e.g., ['science', 'adst'])
grade (integer): Grade level (0=K, 1-12)
focus (string, optional): Which element to compare ('big_ideas', 'competencies', 'content', 'all'). Default 'all'.
query (string, optional): Narrow to a specific concept (e.g., 'evidence', 'design thinking')
limit (integer, optional): Max connections to return (default 20, max 50)
Returns: Groups of curriculum items connected by shared language across subjects.
| Name | Required | Description | Default |
|---|---|---|---|
| focus | No | Which curriculum element to compare across subjects | all |
| grade | Yes | Grade level (0=Kindergarten, 1-12) | |
| limit | No | Maximum connections to return (default 20) | |
| query | No | Optional: narrow to a specific concept (e.g., 'evidence', 'design thinking') | |
| subjects | Yes | Two or more subject slugs to compare (e.g., ['science', 'adst']) | |
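The "two or more subjects" requirement is the one constraint an agent can trip over, so a client might enforce it before calling. A hedged sketch (the helper name is hypothetical; the limit bounds follow the documented default of 20 and max of 50):

```python
def build_cross_curricular_args(subjects, grade, focus="all", query=None, limit=20):
    """Assemble search_cross_curricular arguments (illustrative sketch)."""
    if len(subjects) < 2:
        raise ValueError("subjects requires two or more subject slugs")
    if not 1 <= limit <= 50:  # documented: default 20, max 50
        raise ValueError("limit must be between 1 and 50")
    args = {"subjects": list(subjects), "grade": grade,
            "focus": focus, "limit": limit}
    if query is not None:
        args["query"] = query  # narrow to a specific concept
    return args

print(build_cross_curricular_args(["science", "adst"], 8, query="design thinking"))
```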
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare this as a safe, read-only, idempotent operation with a closed world. The description adds valuable context about what the tool actually finds ('curriculum elements shared between two or more subjects', 'Identifies overlapping competencies, big ideas, and content'), which helps the agent understand the specific type of cross-curricular analysis being performed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded with the core purpose, followed by a clear Args section. Every sentence earns its place: the first states the purpose, the second elaborates on what's identified, and the third provides usage context. The parameter documentation is organized and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only search tool with comprehensive annotations and full schema coverage, the description provides good context about what the tool finds and its interdisciplinary planning use case. Because the tool lacks an output schema, the Returns statement is helpful, though it could be more specific about the return format. Overall, it's mostly complete for this type of tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents all parameters thoroughly. The description's Args section essentially repeats what's in the schema without adding significant additional semantic context. The baseline of 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Find curriculum elements shared between two or more subjects at the same grade level' with specific resources identified ('competencies, big ideas, and content across subjects'). It distinguishes from siblings by focusing on cross-curricular connections rather than single-subject curriculum retrieval or historical changes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('Essential for interdisciplinary planning'), but doesn't explicitly state when not to use it or name specific alternatives among sibling tools. It implies usage for finding overlaps rather than retrieving individual subject curriculum.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_curriculum — Search BC Curriculum (Read-only, Idempotent)
Search BC curriculum (K-12) for standards, competencies, content items, and assessment resources using full-text search. Returns structured results with source metadata.
Args:
query (string): Natural language search query (e.g., 'empathetic design thinking', 'coding and computational thinking')
subject (string, optional): Filter by subject slug (e.g., 'adst', 'science')
grade (integer, optional): Filter by grade level (0=K, 1-12)
content_type (string, optional): Filter by content type ('big_idea', 'competency', 'content_item', 'elaboration', 'assessment', 'all')
limit (integer, optional): Max results (default 10, max 50)
Returns: Matching curriculum elements with source type, course, subject, and grade metadata.
| Name | Required | Description | Default |
|---|---|---|---|
| content_type | No | Type of curriculum content to filter by | all |
| grade | No | Grade level (0=Kindergarten, 1-12) | |
| limit | No | Maximum results to return (default 10) | |
| query | Yes | Natural language search query (e.g., 'empathetic design thinking') | |
| subject | No | BC curriculum subject slug (e.g., 'adst', 'science') | |
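Since the server publishes no output schema, a caller can at least validate arguments client-side before issuing the request. The sketch below is a hypothetical helper (not part of this server) that enforces the documented constraints — required `query`, grade range 0-12, and the limit cap of 50 — and assembles the `arguments` object for a `search_curriculum` call.

```python
def build_search_arguments(query, subject=None, grade=None,
                           content_type="all", limit=10):
    """Validate and assemble arguments for a search_curriculum call.

    Constraints mirror the parameter table above; this helper is an
    illustration, not part of the server's API.
    """
    if not query:
        raise ValueError("query is required")
    if grade is not None and not 0 <= grade <= 12:
        raise ValueError("grade must be 0 (Kindergarten) through 12")
    if not 1 <= limit <= 50:
        raise ValueError("limit must be between 1 and 50 (default 10)")

    args = {"query": query, "content_type": content_type, "limit": limit}
    # Optional filters are omitted entirely rather than sent as null.
    if subject is not None:
        args["subject"] = subject
    if grade is not None:
        args["grade"] = grade
    return args
```

Omitting optional filters (rather than sending nulls) keeps the call consistent with the defaults the schema already declares.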
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate this is a read-only, non-destructive, idempotent operation with a closed-world assumption. The description adds useful context beyond this: it specifies the search scope ('K-12'), mentions 'structured results with source metadata', and notes default/max values for 'limit'. However, it doesn't detail behavioral aspects like rate limits, authentication needs, or pagination, which would be helpful given the lack of an output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded: the first sentence clearly states the purpose and key features. The parameter list is organized efficiently with brief explanations and examples, and the return statement is concise. Every sentence adds value without redundancy, making it easy to scan and understand.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (5 parameters, no output schema) and rich annotations, the description is mostly complete. It covers purpose, usage context, parameters, and return format. However, without an output schema, more detail on the structure of 'matching curriculum elements' (e.g., fields in the results) would enhance completeness, though the mention of 'source metadata' provides some guidance.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents all parameters. The description adds minimal value beyond the schema: it provides example queries and clarifies '0=K' for grade, but most parameter details (e.g., enum values, defaults, constraints) are already in the schema. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search BC curriculum'), the resource ('standards, competencies, content items, and assessment resources'), and the method ('using full-text search'). It distinguishes itself from siblings like 'get_course_curriculum' (which likely retrieves a specific course) and 'search_cross_curricular' (which likely searches across subjects) by focusing on comprehensive K-12 curriculum search with filtering options.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: for searching BC curriculum with full-text capabilities and filtering. It doesn't explicitly state when not to use it or name alternatives, but the sibling tools suggest distinct purposes (e.g., 'get_course_curriculum' for specific courses, 'search_cross_curricular' for cross-subject searches), implying this is for general curriculum search. No explicit exclusions are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently

For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.