
Server Details

Australian VET National Register — qualifications, units, skill sets, AQF ladders.

Status: Unhealthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

5 tools
ladder (Grade C)

Get the full AQF qualification ladder for a training package.

Parameters (JSON Schema)
- package_code (required): 3-letter training package code (e.g. BSB, TLI)
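The 3-letter constraint is easy to pre-validate before a call is issued. A minimal sketch, assuming exactly three ASCII letters and upper-case normalisation (the case rule is a guess from the examples; the schema does not state it):

```python
import re

# Assumed pattern: exactly three ASCII letters, e.g. BSB, TLI.
# Upper-casing is an assumption based on the examples shown.
PACKAGE_CODE = re.compile(r"^[A-Za-z]{3}$")

def build_ladder_arguments(package_code: str) -> dict:
    """Validate and normalise the single argument for the ladder tool."""
    if not PACKAGE_CODE.match(package_code):
        raise ValueError(
            f"expected a 3-letter training package code, got {package_code!r}"
        )
    return {"package_code": package_code.upper()}
```

For example, `build_ladder_arguments("bsb")` returns `{"package_code": "BSB"}`, while a malformed code such as `"BSB1"` raises `ValueError` before any request is made.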
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, yet description fails to disclose return format (hierarchical object? flat list?), error behavior (invalid package codes?), or data characteristics (static vs dynamic, size). The agent receives no behavioral hints beyond the implied read-only nature of 'Get'.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single efficient sentence of nine words with zero redundancy. 'full' and 'AQF' earn their place by specifying scope and domain. However, extreme brevity sacrifices necessary behavioral context given zero annotation coverage.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a single-parameter simple getter, but lacks output description required when no output schema exists. No explanation of what constitutes a 'ladder' (e.g., progression from Certificate I to Graduate Diploma) leaves semantic gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with a clear parameter description. The description mentions 'training package', which aligns with the parameter 'package_code' and reinforces the semantic mapping, but it adds no syntax specifics, validation rules, or examples beyond what is in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States a specific resource (AQF qualification ladder) and action (Get). The term 'ladder' distinguishes this from sibling search/lookup/verify tools, which imply querying or validating, while this implies retrieving a complete structured hierarchy. 'AQF' provides domain context, though briefly spelling out what AQF stands for would strengthen it further.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this versus siblings like 'search' or 'lookup', or when a user might need the 'full' ladder versus a specific qualification. No mention of prerequisites or typical use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lookup (Grade A)

Get full detail on a specific National Register record by code, including component units.

Parameters (JSON Schema)
- code (required): Unit code, qualification code, or RTO code
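Over Streamable HTTP, a tool like this is invoked with a standard MCP `tools/call` request. A sketch of the JSON-RPC 2.0 envelope, following the MCP specification (the example code value is illustrative only):

```python
import json

def make_tools_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP tools/call request body (JSON-RPC 2.0)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Example: look up a record by code (the code value is illustrative).
body = make_tools_call(1, "lookup", {"code": "BSB50420"})
```

The same envelope serves the other four tools; only `name` and `arguments` change.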
Behavior: 3/5

No annotations provided, so the description carries the full disclosure burden. It adds valuable context that the response includes 'component units' and 'full detail', but omits error handling (e.g., invalid codes), authentication requirements, and rate limits.

Conciseness: 5/5

Extremely concise single sentence (12 words), front-loaded with the action verb. No filler or redundant phrases; every word contributes to understanding the tool's function.

Completeness: 4/5

Appropriate for a simple single-parameter lookup tool with complete schema documentation. It adequately describes the retrieval scope, though it could benefit from noting the expected behavior when a code is not found, given the lack of an output schema.

Parameters: 3/5

Schema coverage is 100%, with the 'code' parameter fully documented as 'Unit code, qualification code, or RTO code'. The description references 'by code' but adds no syntax examples, format constraints, or clarification beyond what the schema already provides, warranting the baseline score.

Purpose: 4/5

Clearly specifies the verb (get full detail), resource (National Register record), and scope (by code, including component units). It implicitly distinguishes itself from the sibling 'search' tool by emphasizing specific code-based retrieval, though explicit differentiation is absent.

Usage Guidelines: 3/5

Provides implied usage guidance through the 'by code' phrasing, suggesting use when a specific identifier is known. However, it lacks explicit when/when-not guidance or a direct comparison to siblings like 'search' (for broad queries) or 'verify' (for validation).

search_teqsa_providers (Grade A)

Search TEQSA-registered higher education providers in Australia. Returns provider details, registration status, CRICOS codes, and dual-regulation status (TEQSA + ASQA).

Parameters (JSON Schema)
- query (optional): Search by provider name
- status (optional): Filter by registration status
- category (optional): Filter by category (e.g. university, institute)
- dual_regulated (optional): Only show providers with both TEQSA and ASQA registration
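Since all four parameters are optional, a caller would typically include only the filters it actually sets. One way to assemble the arguments object (parameter names come from the schema above; the drop-unset convention is an assumption, not documented server behavior):

```python
from typing import Optional

def build_search_arguments(
    query: Optional[str] = None,
    status: Optional[str] = None,
    category: Optional[str] = None,
    dual_regulated: Optional[bool] = None,
) -> dict:
    """Collect only the filters the caller actually set."""
    candidates = {
        "query": query,
        "status": status,
        "category": category,
        "dual_regulated": dual_regulated,
    }
    # Omit unset filters entirely rather than sending nulls.
    return {k: v for k, v in candidates.items() if v is not None}
```

For example, `build_search_arguments(category="university", dual_regulated=True)` yields `{"category": "university", "dual_regulated": True}`, leaving the unused filters out of the request.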
Behavior: 4/5

No annotations provided, so the description carries the full burden. It effectively discloses return values ('provider details, registration status, CRICOS codes'), compensating for the lack of an output schema. 'Search' implies read-only, though an explicit safety declaration is absent.

Conciseness: 5/5

Efficiently structured as 'Search [resource]. Returns [fields].' Every clause provides essential information with zero redundancy.

Completeness: 4/5

For a 4-parameter search tool without an output schema, the description adequately covers the return structure. It lacks the explicit read-only/safety declaration that would be expected given no annotations, but remains highly functional.

Parameters: 3/5

The schema has 100% description coverage, establishing the baseline 3. The description adds semantic context for the 'dual_regulated' parameter by defining 'dual-regulation status (TEQSA + ASQA)', but does not elaborate on the other parameters beyond the schema definitions.

Purpose: 5/5

States a specific verb ('Search') and resource ('TEQSA-registered higher education providers in Australia'), clearly distinguishing this from the generic 'search' sibling tool through domain specificity.

Usage Guidelines: 3/5

Domain specificity implies when to use it (Australian higher education provider lookups), but the description offers no explicit guidance on when to use it versus siblings like 'lookup' or 'verify', nor any prerequisites.

verify (Grade B)

Verify an RTOpacks Statement of Provenance ID.

Parameters (JSON Schema)
- sop_id (required): SOP-ID
Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure but fails to explain what verification entails (e.g., cryptographic validation vs. existence check), success/failure outcomes, side effects, or idempotency. 'Verify' is left semantically opaque.

Conciseness: 4/5

The single-sentence description is efficiently structured with no wasted words. However, given the lack of annotations and output schema, it is arguably too terse, leaving significant behavioral information unstated rather than strategically omitted.

Completeness: 3/5

For a single-parameter tool with complete schema coverage, the description identifies the core operation adequately. However, as a verification tool with no output schema or annotations, it lacks necessary context about return values, error states, and what 'RTOpacks' refers to, leaving agents under-informed.

Parameters: 4/5

The input schema has 100% coverage but only provides the tautological description 'SOP-ID'. The tool description adds value by expanding this acronym to 'Statement of Provenance ID', clarifying the parameter's domain meaning beyond the schema.

Purpose: 4/5

The description uses a specific verb ('Verify') and identifies the exact resource ('RTOpacks Statement of Provenance ID'), making the purpose clear. However, it does not explicitly differentiate this verification tool from the sibling lookup/search tools.

Usage Guidelines: 2/5

No guidance is provided on when to use this tool versus the 'lookup' or 'search' siblings, nor are prerequisites or error conditions mentioned. The agent must infer usage from the verb 'verify' alone.
