
BC Curriculum

Server Details

Query the full BC K-12 curriculum: Big Ideas, Competencies, Content, and more.

Status: Healthy
Transport: Streamable HTTP
Repository: pdg6/bc-curriculum-mcp-server
GitHub Stars: 0

Tool Descriptions: A

Average 4.2/5 across 8 of 8 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap. Tools like get_course_curriculum (retrieve full curriculum), get_competency_connections (find related competencies), and search_cross_curricular (identify interdisciplinary overlaps) serve unique functions within the curriculum analysis domain. The descriptions reinforce these distinctions, making tool selection unambiguous.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern using snake_case (e.g., get_course_curriculum, list_courses, search_curriculum). The verbs (get, list, search) are appropriately matched to their actions, and the nouns clearly indicate the target resources (e.g., curriculum, courses, changes). This consistency enhances predictability and usability.

Tool Count: 5/5

With 8 tools, the server is well-scoped for its purpose of BC curriculum analysis. Each tool serves a specific, valuable function—from retrieving curriculum data to analyzing changes, progressions, and interdisciplinary connections. The count is neither too sparse nor bloated, covering core workflows without redundancy.

Completeness: 5/5

The toolset provides comprehensive coverage for curriculum analysis, including retrieval (get_course_curriculum, list_courses), search (search_curriculum), change tracking (get_curriculum_changes, get_course_history), and advanced analysis (get_competency_connections, get_grade_progression, search_cross_curricular). There are no obvious gaps; agents can perform full lifecycle operations from discovery to detailed analysis.

Available Tools

8 tools
get_competency_connections (Get Competency Connections): A
Read-only · Idempotent

Find curricular competencies that appear across multiple subjects or courses. Useful for interdisciplinary curriculum design and identifying transferable skills.

Args:

  • competency_text (string): A competency description to find connections for

  • scope (string, optional): Where to search ('same_subject', 'cross_subject', 'all'). Default 'all'.

Returns: Related competencies from other courses/subjects with similarity ranking.

Parameters (JSON Schema):

  • scope (optional, default 'all'): Where to search for related competencies

  • competency_text (required): A competency description to find connections for
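As a sketch, a client would invoke this tool with a standard MCP `tools/call` request. The payload below is illustrative: the JSON-RPC framing follows the MCP convention, and the competency text is a made-up example, not drawn from the server.

```python
import json

# Hypothetical MCP tools/call payload for get_competency_connections.
# If 'scope' is omitted, the server defaults it to 'all'.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_competency_connections",
        "arguments": {
            # Illustrative competency text, not real BC curriculum data:
            "competency_text": "Use evidence to draw conclusions",
            "scope": "cross_subject",  # 'same_subject', 'cross_subject', or 'all'
        },
    },
}
print(json.dumps(request, indent=2))
```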
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=false, and idempotentHint=true, covering safety and idempotency. The description adds useful context about the tool's purpose (finding connections across subjects) and return format (similarity ranking), but doesn't disclose additional behavioral traits like rate limits, authentication needs, or data freshness. With annotations covering core safety, a 3 is appropriate as the description adds some value without contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with a clear purpose statement, usage context, and parameter/return details in separate sections. Every sentence earns its place, with no redundant information. The front-loaded purpose statement immediately communicates the tool's value, making it easy for an agent to understand.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, 100% schema coverage, no output schema), the description is mostly complete. It covers purpose, usage, parameters, and returns, but lacks details on output structure (beyond 'similarity ranking') and potential limitations. With annotations providing safety context, it's sufficient but could be slightly enhanced for a perfect score.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with both parameters clearly documented in the schema. The description adds minimal value beyond the schema by briefly mentioning the parameters in the Args section and clarifying the scope options, but doesn't provide additional semantics like examples or edge cases. Baseline 3 is correct when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('find curricular competencies that appear across multiple subjects or courses') and distinguishes it from siblings by focusing on interdisciplinary connections. It explicitly mentions the resource ('curricular competencies') and the goal ('identifying transferable skills'), making it distinct from tools like get_course_curriculum or search_curriculum.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('useful for interdisciplinary curriculum design and identifying transferable skills'), which implicitly differentiates it from siblings focused on single courses or historical data. However, it doesn't explicitly state when not to use it or name specific alternatives among the sibling tools, which prevents a perfect score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_course_curriculum (Get Course Curriculum): A
Read-only · Idempotent

Get the complete BC curriculum for a specific course: Big Ideas, Curricular Competencies (grouped by domain), and Content/KDU items with elaborations. Returns the full three-column structure used by BC Ministry of Education.

Args:

  • subject (string): Subject slug (e.g., 'adst', 'science')

  • grade (integer): Grade level (0=K, 1-12)

  • course (string, optional): Course slug (e.g., 'technology-explorations'). If omitted, returns all courses for that subject+grade.

Returns: Complete three-column curriculum structure per course, including elaborations.

Parameters (JSON Schema):

  • grade (required): Grade level (0=Kindergarten, 1-12)

  • course (optional): Course slug (e.g., 'technology-explorations'). If omitted, returns all courses for subject+grade.

  • subject (required): BC curriculum subject slug (e.g., 'adst', 'science')
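A minimal sketch of assembling this tool's arguments, using the hypothetical helper name `build_curriculum_args` (the slugs reuse the examples above; the grade values are illustrative):

```python
def build_curriculum_args(subject, grade, course=None):
    """Build arguments for a get_course_curriculum call.

    grade 0 means Kindergarten; leaving course out requests every
    course at that subject+grade, per the tool description.
    """
    if not 0 <= grade <= 12:
        raise ValueError("grade must be 0 (K) through 12")
    args = {"subject": subject, "grade": grade}
    if course is not None:
        args["course"] = course
    return args

# One named course vs. every course at a subject+grade:
specific = build_curriculum_args("adst", 6, "technology-explorations")
broad = build_curriculum_args("science", 0)  # Kindergarten science
```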
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=false, and idempotentHint=true, covering safety and idempotency. The description adds valuable context beyond annotations: it specifies the exact structure of the return data ('three-column structure used by BC Ministry of Education' with specific components like Big Ideas and elaborations), which helps the agent understand output format. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by a structured breakdown of args and returns. Every sentence adds value: the first defines the tool, the args section clarifies parameter usage, and the returns section specifies output format. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 parameters, 100% schema coverage, rich annotations), the description is complete. It covers purpose, usage, parameter behavior, and output structure. With annotations handling safety and idempotency, and no output schema provided, the description adequately explains what the tool returns ('Complete three-column curriculum structure per course, including elaborations').

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with all parameters well-documented in the schema (e.g., grade range 0-12, subject enum values, course optional behavior). The description adds minimal extra semantics: it clarifies '0=K' for grade and provides example slugs, but these are already implied in the schema. Baseline 3 is appropriate since the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get the complete BC curriculum'), the resource ('for a specific course'), and the detailed output structure ('Big Ideas, Curricular Competencies grouped by domain, Content/KDU items with elaborations, three-column structure'). It distinguishes from siblings by focusing on complete curriculum retrieval rather than connections, history, changes, progression, listing, or searching.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'Get the complete BC curriculum for a specific course' and clarifies the optional 'course' parameter behavior ('If omitted, returns all courses for that subject+grade'). This distinguishes it from siblings like list_courses (which likely lists courses without curriculum details) and search_curriculum (which might search within curriculum content).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_course_history (Get Course History): A
Read-only · Idempotent

Show the crawl history and change timeline for a specific course. Includes each crawl snapshot (date, item counts, content hash) and a changelog of all detected modifications.

Args:

  • subject (string): Subject slug (e.g., 'science')

  • grade (integer): Grade level (0=K, 1-12)

  • course (string, optional): Course slug (e.g., 'chemistry'). If omitted, shows history for all courses at subject+grade.

Returns: Timeline of crawl snapshots and detected changes per course.

Parameters (JSON Schema):

  • grade (required): Grade level (0=Kindergarten, 1-12)

  • course (optional): Course slug (e.g., 'chemistry'). If omitted, shows history for all courses at subject+grade.

  • subject (required): BC curriculum subject slug (e.g., 'adst', 'science')
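The snapshots described above carry a content hash. The server's actual hashing scheme is not documented here, but a plausible sketch of hash-based change detection between two crawls looks like this:

```python
import hashlib

def content_hash(curriculum_text: str) -> str:
    # Normalize whitespace so cosmetic changes don't register as modifications.
    # (Normalization is an assumption; the server may hash raw content.)
    normalized = " ".join(curriculum_text.split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Two crawls of the same course; the second differs only in spacing.
snapshot_a = content_hash("Big Idea: Matter is useful because of its properties.")
snapshot_b = content_hash("Big Idea: Matter is useful because  of its properties.")
# Equal digests mean no change would be recorded in the timeline.
print(snapshot_a == snapshot_b)
```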
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false. The description adds valuable behavioral context beyond annotations by specifying what data is included (date, item counts, content hash, changelog) and clarifying the optional course parameter behavior (shows all courses if omitted). No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the core purpose in the first sentence. The 'Args' and 'Returns' sections are structured but slightly redundant with the schema. Every sentence adds value, though some information duplication exists.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, rich annotations (covering safety and idempotency), and 100% schema coverage, the description is mostly complete. It explains the return format (timeline of snapshots and changes) despite no output schema. Minor gaps include lack of pagination or rate limit details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents all parameters. The description's 'Args' section repeats schema information without adding significant meaning beyond it. The baseline score of 3 reflects adequate but redundant parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('show', 'includes') and resources ('crawl history', 'change timeline', 'crawl snapshot', 'changelog'). It distinguishes from siblings by focusing on historical crawl data rather than current curriculum, connections, or search functions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool (to view crawl history and change timeline for courses). It doesn't explicitly mention when not to use it or name specific alternatives among siblings, but the purpose naturally differentiates it from tools like get_course_curriculum (current content) or get_curriculum_changes (broader changes).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_curriculum_changes (Get Curriculum Changes): A
Read-only · Idempotent

Show what changed in BC curriculum since a given date. Detects added, removed, and modified Big Ideas, Competencies, and Content items across crawl runs. Requires at least two crawls to have change data.

Args:

  • since (string, optional): ISO date (e.g., '2026-01-15'). Default: last 30 days.

  • subject (string, optional): Filter by subject slug

  • grade (integer, optional): Filter by grade level

  • change_type (string, optional): Filter by change type ('added', 'removed', 'modified', 'all'). Default 'all'.

  • limit (integer, optional): Max entries to return (default 50, max 100)

Returns: Course-level summary of which courses changed, plus item-level detail of what specifically was added/removed/modified.

Parameters (JSON Schema):

  • grade (optional): Grade level (0=Kindergarten, 1-12)

  • limit (optional): Maximum changelog entries to return (default 50)

  • since (optional): ISO date string; show changes detected after this date (e.g., '2026-01-15'). Defaults to last 30 days.

  • subject (optional): BC curriculum subject slug (e.g., 'adst', 'science')

  • change_type (optional, default 'all'): Filter by type of change
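The documented defaults ('since' falls back to the last 30 days; limit defaults to 50 and caps at 100) can be sketched client-side. `changes_arguments` is a hypothetical helper, not part of the server:

```python
from datetime import date, timedelta

def changes_arguments(since=None, limit=50, change_type="all"):
    # Default window: the last 30 days, expressed as an ISO date string.
    if since is None:
        since = (date.today() - timedelta(days=30)).isoformat()
    # The docs give default 50 and max 100 for limit; clamp client-side.
    limit = max(1, min(limit, 100))
    if change_type not in ("added", "removed", "modified", "all"):
        raise ValueError("change_type must be 'added', 'removed', 'modified', or 'all'")
    return {"since": since, "limit": limit, "change_type": change_type}

# An over-large limit gets clamped to the documented maximum:
args = changes_arguments(since="2026-01-15", limit=500, change_type="modified")
```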
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, non-destructive, and idempotent behavior. The description adds valuable context beyond annotations: it explains the prerequisite ('Requires at least two crawls to have change data') and describes the return format ('Course-level summary... plus item-level detail'). This enhances the agent's understanding of how the tool behaves and what to expect, though it doesn't mention rate limits or authentication needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured: a clear purpose statement, a prerequisite note, a bulleted list of parameters with key details, and a summary of returns. Every sentence adds value without redundancy. It's front-loaded with the core functionality and maintains a logical flow, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (change detection across curriculum components) and the absence of an output schema, the description does a good job of explaining what the tool returns ('Course-level summary... plus item-level detail'). It covers prerequisites, parameters, and output expectations. However, it could be more complete by detailing the structure of the returned data or example outputs, which would help the agent interpret results better.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, meaning all parameters are well-documented in the schema itself. The description lists parameters with brief notes (e.g., default values, filters) but doesn't add significant semantic meaning beyond what the schema provides. For example, it doesn't explain the implications of 'change_type' filtering or how 'subject' slugs map to curriculum areas. The baseline of 3 is appropriate given the comprehensive schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Show what changed in BC curriculum since a given date. Detects added, removed, and modified Big Ideas, Competencies, and Content items across crawl runs.' It specifies the exact resource (BC curriculum), the action (show changes), and the scope of changes (Big Ideas, Competencies, Content items). It also distinguishes itself from siblings by focusing on change detection rather than general curriculum retrieval or searching.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage context: 'Requires at least two crawls to have change data.' This tells the agent when the tool can be used (prerequisite). However, it does not specify when to use this tool versus alternatives like 'get_course_history' or 'search_curriculum', which might overlap in functionality. The guidance is clear but lacks sibling differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_grade_progression (Get Grade Progression): A
Read-only · Idempotent

Show how Big Ideas, Competencies, and Content progress across grade levels for a BC subject. Useful for understanding scaffolding, prerequisites, and learning trajectories. When a query is provided, filters to only matching items at each grade — showing a focused vertical thread rather than a full data dump.

Args:

  • subject (string): Subject slug

  • grade_from (integer): Starting grade (0=K, 1-12)

  • grade_to (integer): Ending grade (0=K, 1-12)

  • focus (string, optional): Which element to trace ('big_ideas', 'competencies', 'content', 'all'). Default 'all'.

  • query (string, optional): Focus on a specific concept (e.g., 'evidence', 'multiplication'). Only matching items shown at each grade.

Returns: Grade-by-grade breakdown of curriculum elements showing progression, optionally filtered to a concept thread.

Parameters (JSON Schema): No parameters listed.
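A sketch of client-side validation for this tool's arguments, assuming (the description does not say) that the grade span must be ascending. `progression_arguments` is a hypothetical helper:

```python
def progression_arguments(subject, grade_from, grade_to, focus="all", query=None):
    # Both endpoints use the 0=K, 1-12 convention from the description.
    for g in (grade_from, grade_to):
        if not 0 <= g <= 12:
            raise ValueError("grades must be 0 (K) through 12")
    # Assumption: the span is ascending; the description does not say
    # whether the server tolerates a reversed range.
    if grade_from > grade_to:
        raise ValueError("grade_from must not exceed grade_to")
    args = {"subject": subject, "grade_from": grade_from,
            "grade_to": grade_to, "focus": focus}
    if query is not None:
        args["query"] = query  # narrows each grade to matching items only
    return args

# Trace how 'evidence' develops in science from Kindergarten through grade 5:
thread = progression_arguments("science", 0, 5, focus="big_ideas", query="evidence")
```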

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds valuable context about filtering behavior ('filters to only matching items at each grade') and output format ('grade-by-grade breakdown'), which goes beyond what annotations provide. No contradictions exist with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with a clear purpose statement upfront, followed by usage context, and then parameter explanations. Every sentence adds value: the first defines the tool's core function, the second explains when to use it, and the third clarifies filtering behavior. No redundant or unnecessary information is present.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (progression analysis with filtering), rich annotations (covering safety and idempotency), and lack of output schema, the description provides strong contextual completeness. It explains the tool's purpose, usage scenarios, filtering behavior, and return format. The only minor gap is that without an output schema, more detail on the 'grade-by-grade breakdown' structure could be helpful, but the description adequately compensates.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (per context signals), so the schema fully documents all parameters. The description adds some semantic context about parameter purposes (e.g., 'focus on a specific concept' for query, 'which element to trace' for focus), but doesn't provide additional syntax or format details beyond what the schema likely contains. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Show how...progress'), identifies the resources (Big Ideas, Competencies, Content for a BC subject), and distinguishes from siblings by focusing on vertical progression across grades rather than connections, curriculum details, or search functions. It explicitly mentions 'grade-by-grade breakdown' which differentiates it from tools like get_course_curriculum or search_curriculum.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('useful for understanding scaffolding, prerequisites, and learning trajectories') and when it's particularly valuable ('When a query is provided, filters to only matching items...showing a focused vertical thread rather than a full data dump'). It implicitly contrasts with siblings by emphasizing progression analysis versus other curriculum exploration methods.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_courses (List BC Courses): A
Read-only · Idempotent

List all available courses in the BC curriculum database (K-12). Use this to discover what courses are available before querying specific curriculum data.

Args:

  • subject (string, optional): Filter by subject slug

  • grade (integer, optional): Filter by grade level (0=K, 1-12)

Returns: List of courses with subject, grade, name, and URL.

Parameters (JSON Schema):

  • grade (optional): Grade level (0=Kindergarten, 1-12)

  • subject (optional): BC curriculum subject slug (e.g., 'adst', 'science')
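Each returned entry carries subject, grade, name, and URL. The sketch below mimics the tool's optional filters over made-up rows; it is illustrative, not real server output:

```python
# Illustrative result shape (fields per the Returns note); the rows
# and URLs are invented for the example.
courses = [
    {"subject": "science", "grade": 10, "name": "Science 10", "url": "..."},
    {"subject": "adst", "grade": 6, "name": "Technology Explorations", "url": "..."},
    {"subject": "science", "grade": 0, "name": "Science K", "url": "..."},
]

def filter_courses(courses, subject=None, grade=None):
    # Both filters are optional, mirroring the tool's signature.
    return [
        c for c in courses
        if (subject is None or c["subject"] == subject)
        and (grade is None or c["grade"] == grade)
    ]

science_only = filter_courses(courses, subject="science")
```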
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, openWorldHint=false, and idempotentHint=true, covering safety and idempotency. The description adds useful context about the tool's role in discovery and the database scope (BC curriculum, K-12), which isn't covered by annotations. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by a usage guideline and a structured Args/Returns section. Every sentence adds value without redundancy, making it efficient and well-organized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (list operation), rich annotations, and 100% schema coverage, the description is largely complete. It lacks an output schema, but the Returns section describes the response format. The only minor gap is no explicit mention of pagination or limits, but this is acceptable for a simple list tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with both parameters (subject and grade) well-documented in the schema. The description adds minimal value beyond the schema by mentioning filtering but doesn't provide additional syntax or format details. Baseline 3 is appropriate given the comprehensive schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'List' and resource 'all available courses in the BC curriculum database (K-12)', specifying the scope. It distinguishes from siblings by indicating this is for discovery before querying specific curriculum data, unlike tools like get_course_curriculum or search_curriculum that likely retrieve detailed content.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states 'Use this to discover what courses are available before querying specific curriculum data', providing clear when-to-use guidance. It implies alternatives like get_course_curriculum for detailed data; although it doesn't name specific siblings, the context is sufficient for differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_cross_curricular (Search Cross-Curricular Connections): A
Read-only · Idempotent

Find curriculum elements shared between two or more subjects at the same grade level. Identifies overlapping competencies, big ideas, and content across subjects. Essential for interdisciplinary planning.

Args:

  • subjects (string[]): Two or more subject slugs to compare (e.g., ['science', 'adst'])

  • grade (integer): Grade level (0=K, 1-12)

  • focus (string, optional): Which element to compare ('big_ideas', 'competencies', 'content', 'all'). Default 'all'.

  • query (string, optional): Narrow to a specific concept (e.g., 'evidence', 'design thinking')

  • limit (integer, optional): Max connections to return (default 20, max 50)

Returns: Groups of curriculum items connected by shared language across subjects.

Parameters (JSON Schema):

  • focus (optional, default 'all'): Which curriculum element to compare across subjects

  • grade (required): Grade level (0=Kindergarten, 1-12)

  • limit (optional): Maximum connections to return (default 20)

  • query (optional): Narrow to a specific concept (e.g., 'evidence', 'design thinking')

  • subjects (required): Two or more subject slugs to compare (e.g., ['science', 'adst'])
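A sketch of the documented constraints (at least two subject slugs; limit defaults to 20 and caps at 50), using a hypothetical helper with example values from the description:

```python
def cross_curricular_arguments(subjects, grade, focus="all", query=None, limit=20):
    # The schema requires at least two subject slugs to compare.
    if len(subjects) < 2:
        raise ValueError("provide two or more subject slugs, e.g. ['science', 'adst']")
    if not 0 <= grade <= 12:
        raise ValueError("grade must be 0 (K) through 12")
    args = {
        "subjects": list(subjects),
        "grade": grade,
        "focus": focus,
        "limit": max(1, min(limit, 50)),  # default 20, capped at 50 per the docs
    }
    if query is not None:
        args["query"] = query
    return args

args = cross_curricular_arguments(["science", "adst"], 8,
                                  query="design thinking", limit=200)
```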
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare this as a safe, read-only, idempotent operation with a closed world. The description adds valuable context about what the tool actually finds ('curriculum elements shared between two or more subjects', 'Identifies overlapping competencies, big ideas, and content'), which helps the agent understand the specific type of cross-curricular analysis being performed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose, followed by a clear Args section. Every sentence earns its place: the first states the purpose, the second elaborates on what's identified, and the third provides usage context. The parameter documentation is organized and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only search tool with comprehensive annotations and full schema coverage, the description provides good context about what the tool finds and its interdisciplinary planning use case. Because there is no output schema, the Returns statement is helpful, though it could be more specific about the return format. Overall, it's mostly complete for this type of tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already documents all parameters thoroughly. The description's Args section essentially repeats what's in the schema without adding significant additional semantic context. The baseline of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Find curriculum elements shared between two or more subjects at the same grade level' with specific resources identified ('competencies, big ideas, and content across subjects'). It distinguishes from siblings by focusing on cross-curricular connections rather than single-subject curriculum retrieval or historical changes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Essential for interdisciplinary planning'), but doesn't explicitly state when not to use it or name specific alternatives among sibling tools. It implies usage for finding overlaps rather than retrieving individual subject curriculum.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_curriculum: Search BC Curriculum (A)
Read-only · Idempotent

Search BC curriculum (K-12) for standards, competencies, content items, and assessment resources using full-text search. Returns structured results with source metadata.

Args:

  • query (string): Natural language search query (e.g., 'empathetic design thinking', 'coding and computational thinking')

  • subject (string, optional): Filter by subject slug (e.g., 'adst', 'science')

  • grade (integer, optional): Filter by grade level (0=K, 1-12)

  • content_type (string, optional): Filter by content type ('big_idea', 'competency', 'content_item', 'elaboration', 'assessment', 'all')

  • limit (integer, optional): Max results (default 10, max 50)

Returns: Matching curriculum elements with source type, course, subject, and grade metadata.
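The same call pattern applies here, with the added wrinkle that `subject` and `grade` are optional filters. A sketch, again with an illustrative builder function rather than a real client API; the valid `content_type` values, the limit cap of 50, and the default of 10 are taken from the spec above.

```python
import json

# Valid content_type values per the tool's documented enum
VALID_CONTENT_TYPES = {
    "big_idea", "competency", "content_item", "elaboration", "assessment", "all"
}

def build_search_call(query, subject=None, grade=None, content_type="all", limit=10):
    """Build a JSON-RPC 'tools/call' request body for search_curriculum.

    Optional filters are only included in the arguments when set, so the
    server sees exactly what the agent intends to constrain.
    """
    if content_type not in VALID_CONTENT_TYPES:
        raise ValueError(f"unknown content_type: {content_type!r}")
    if not 1 <= limit <= 50:
        raise ValueError("limit must be between 1 and 50")

    arguments = {"query": query, "content_type": content_type, "limit": limit}
    if subject is not None:
        arguments["subject"] = subject
    if grade is not None:
        if not 0 <= grade <= 12:
            raise ValueError("grade must be 0 (K) through 12")
        arguments["grade"] = grade

    return json.dumps({
        "jsonrpc": "2.0",
        "id": 2,
        "method": "tools/call",
        "params": {"name": "search_curriculum", "arguments": arguments},
    })

# Full-text search scoped to Grade 6 ADST
payload = build_search_call("coding and computational thinking", subject="adst", grade=6)
```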

Parameters (JSON Schema)

Name          Required  Description                                                          Default
grade         No        Grade level (0=Kindergarten, 1-12)
limit         No        Maximum results to return (default 10)
query         Yes       Natural language search query (e.g., 'empathetic design thinking')
subject       No        BC curriculum subject slug (e.g., 'adst', 'science')
content_type  No        Type of curriculum content to filter by                              all
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate this is a read-only, non-destructive, idempotent operation with a closed-world assumption. The description adds useful context beyond this: it specifies the search scope ('K-12'), mentions 'structured results with source metadata', and notes default/max values for 'limit'. However, it doesn't detail behavioral aspects like rate limits, authentication needs, or pagination, which would be helpful given the lack of an output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded: the first sentence clearly states the purpose and key features. The parameter list is organized efficiently with brief explanations and examples, and the return statement is concise. Every sentence adds value without redundancy, making it easy to scan and understand.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (5 parameters, no output schema) and rich annotations, the description is mostly complete. It covers purpose, usage context, parameters, and return format. However, without an output schema, more detail on the structure of 'matching curriculum elements' (e.g., fields in the results) would enhance completeness, though the mention of 'source metadata' provides some guidance.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents all parameters. The description adds minimal value beyond the schema: it provides example queries and clarifies '0=K' for grade, but most parameter details (e.g., enum values, defaults, constraints) are already in the schema. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search BC curriculum'), the resource ('standards, competencies, content items, and assessment resources'), and the method ('using full-text search'). It distinguishes itself from siblings like 'get_course_curriculum' (which likely retrieves a specific course) and 'search_cross_curricular' (which likely searches across subjects) by focusing on comprehensive K-12 curriculum search with filtering options.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: for searching BC curriculum with full-text capabilities and filtering. It doesn't explicitly state when not to use it or name alternatives, but the sibling tools suggest distinct purposes (e.g., 'get_course_curriculum' for specific courses, 'search_cross_curricular' for cross-subject searches), implying this is for general curriculum search. No explicit exclusions are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
