RTOpacks
Server Details
Australian VET National Register — qualifications, units, skill sets, AQF ladders.
- Status: Unhealthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
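Streamable HTTP means a client POSTs JSON-RPC 2.0 messages to the server's MCP endpoint and may receive streamed responses. As a minimal sketch assuming the standard MCP framing (the endpoint URL is not published above, so none is shown), a client could enumerate the tools listed below with a tools/list request:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list",
  "params": {}
}
```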
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
5 tools
ladder (Tool Definition Quality: C)
Get the full AQF qualification ladder for a training package.
| Name | Required | Description | Default |
|---|---|---|---|
| package_code | Yes | 3-letter training package code (e.g. BSB, TLI) | |
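A hedged sketch of how an MCP client might invoke this tool, using the standard tools/call envelope; BSB is one of the example package codes from the parameter description, and the response shape is undocumented, so none is shown:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "ladder",
    "arguments": { "package_code": "BSB" }
  }
}
```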
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description fails to disclose the return format (hierarchical object? flat list?), error behavior (invalid package codes?), or data characteristics (static vs dynamic, size). The agent receives no behavioral hints beyond the read-only nature implied by 'Get'.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single efficient sentence of nine words with zero redundancy. 'full' and 'AQF' earn their place by specifying scope and domain. However, extreme brevity sacrifices necessary behavioral context given zero annotation coverage.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple single-parameter getter, but lacks the output description required when no output schema exists. No explanation is given of what constitutes a 'ladder' (e.g., progression from Certificate I to Graduate Diploma), leaving semantic gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear parameter description. Description mentions 'training package' which aligns with parameter 'package_code', reinforcing the semantic mapping, but adds no syntax specifics, validation rules, or examples beyond what's in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific resource (AQF qualification ladder) and action (Get). The term 'ladder' distinguishes this from sibling search/lookup/verify tools, which imply querying or validating, while this implies retrieving a complete structured hierarchy. 'AQF' provides domain context, though briefly defining what AQF stands for would strengthen it further.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this versus siblings like 'search' or 'lookup', or when a user might need the 'full' ladder versus a specific qualification. No mention of prerequisites or typical use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
lookup (Tool Definition Quality: A)
Get full detail on a specific National Register record by code, including component units.
| Name | Required | Description | Default |
|---|---|---|---|
| code | Yes | Unit code, qualification code, or RTO code | |
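An illustrative tools/call request, again assuming standard MCP framing; BSB50420 is used purely as an example of a qualification-style code, since the schema accepts unit, qualification, or RTO codes:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "lookup",
    "arguments": { "code": "BSB50420" }
  }
}
```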
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full disclosure burden. Adds valuable context that response includes 'component units' and 'full detail', but omits error handling (e.g., invalid codes), authentication requirements, or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise single sentence (12 words) that is front-loaded with the action verb. No filler or redundant phrases; every word contributes to understanding the tool's function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for a simple single-parameter lookup tool with complete schema documentation. Adequately describes the retrieval scope, though could benefit from noting expected behavior when a code is not found given the lack of output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with the 'code' parameter fully documented as 'Unit code, qualification code, or RTO code'. Description references 'by code' but does not add syntax examples, format constraints, or clarification beyond what the schema already provides, warranting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly specifies the verb (get full detail), resource (National Register record), and scope (by code, including component units). Implicitly distinguishes from sibling 'search' by emphasizing specific code-based retrieval, though explicit differentiation is absent.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage guidance through 'by code' phrasing, suggesting use when a specific identifier is known. However, lacks explicit when/when-not guidance or direct comparison to siblings like 'search' (for broad queries) or 'verify' (for validation).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search (Tool Definition Quality: B)
Search the National Register for qualifications, units of competency, skill sets, and RTOs.
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | | all |
| limit | No | | |
| query | Yes | Keyword or partial code | |
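A sketch of a filtered search call, assuming standard MCP framing; the 'qualification' type value is taken from the enumerated values noted in the assessment below, and the limit of 10 is an arbitrary illustration:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "search",
    "arguments": {
      "query": "leadership",
      "type": "qualification",
      "limit": 10
    }
  }
}
```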
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With zero annotations provided, the description carries full disclosure burden but omits critical behavioral details: search behavior (fuzzy vs exact matching), pagination model, rate limits, or return structure. The 'partial code' hint exists only in the schema description, not the tool description.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise single sentence of 11 words with clear front-loading of action and scope. Zero redundant content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Core search scope is covered, but significant gaps remain given no output schema and no annotations. No description of return values, result ranking, or how the default 'all' type behaves versus specific filtering.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is low at 33%. The description compensates partially by mapping enumerated type values (qualification, unit, skillset, rto) to human-readable domain terms. However, it completely ignores the 'limit' parameter and adds no syntax guidance beyond the schema's 'Keyword or partial code'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description provides specific verb (Search) and resource (National Register) with concrete scope (qualifications, units of competency, skill sets, RTOs). However, it does not explicitly distinguish from the sibling tool 'search_teqsa_providers' or clarify when to use 'search' versus 'lookup'/'verify'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance is offered on when to prefer this tool over siblings like 'lookup' (likely exact retrieval) or 'verify', nor are any prerequisites or exclusion criteria mentioned. Usage must be inferred from parameter names alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_teqsa_providers (Tool Definition Quality: A)
Search TEQSA-registered higher education providers in Australia. Returns provider details, registration status, CRICOS codes, and dual-regulation status (TEQSA + ASQA).
| Name | Required | Description | Default |
|---|---|---|---|
| query | No | Search by provider name | |
| status | No | Filter by registration status | |
| category | No | Filter by category (e.g. university, institute) | |
| dual_regulated | No | Only show providers with both TEQSA and ASQA registration | |
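An illustrative call combining the optional filters, again assuming standard MCP framing; the argument values are hypothetical:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "search_teqsa_providers",
    "arguments": {
      "query": "technology",
      "category": "university",
      "dual_regulated": true
    }
  }
}
```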
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It effectively discloses return values ('provider details, registration status, CRICOS codes'), compensating for the lack of an output schema. 'Search' implies read-only, though an explicit safety declaration is absent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence efficiently structured as 'Search [resource]. Returns [fields].' Every clause provides essential information with zero redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a four-parameter search tool without an output schema, the description adequately covers the return structure. It lacks an explicit read-only/safety declaration, which would be expected given the absence of annotations, but remains highly functional.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% description coverage, establishing the baseline score of 3. The description adds semantic context for the 'dual_regulated' parameter by defining 'dual-regulation status (TEQSA + ASQA)', but does not elaborate on the other parameters beyond their schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb ('Search') and resource ('TEQSA-registered higher education providers in Australia'), clearly distinguishing from the generic 'search' sibling tool through domain specificity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Domain specificity implies when to use (Australian higher education provider lookups), but provides no explicit guidance on when to use versus siblings like 'lookup' or 'verify', nor any prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
verify (Tool Definition Quality: B)
Verify an RTOpacks Statement of Provenance ID.
| Name | Required | Description | Default |
|---|---|---|---|
| sop_id | Yes | SOP-ID | |
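A sketch of a verification call, assuming standard MCP framing; the SOP-ID format is not documented, so the value below is a hypothetical placeholder:

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "verify",
    "arguments": { "sop_id": "SOP-EXAMPLE-0001" }
  }
}
```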
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but fails to explain what verification entails (e.g., cryptographic validation vs. existence check), success/failure outcomes, side effects, or idempotency. 'Verify' is left semantically opaque.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single-sentence description is efficiently structured with no wasted words. However, given the lack of annotations and output schema, it is arguably too terse, leaving significant behavioral information unstated rather than strategically omitted.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with complete schema coverage, the description identifies the core operation adequately. However, as a verification tool with no output schema or annotations, it lacks necessary context about return values, error states, and what 'RTOpacks' refers to, leaving agents under-informed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage but only provides the tautological description 'SOP-ID'. The tool description adds value by expanding this acronym to 'Statement of Provenance ID', clarifying the parameter's domain meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Verify') and identifies the exact resource ('RTOpacks Statement of Provenance ID'), making the purpose clear. However, it does not explicitly differentiate this verification tool from sibling lookup/search tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus the 'lookup' or 'search' siblings, nor are prerequisites or error conditions mentioned. The agent must infer usage from the verb 'verify' alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management — store and rotate API keys and OAuth tokens in one place
- Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.