
Prelude NZ — Instrument Rental

Server Details

Search and rent musical instruments in New Zealand. Pricing, teachers, and FAQs.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Tool Descriptions (Grade: A)

Average 3.7/5 across 4 of 4 tools scored.

Server Coherence (Grade: A)
Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap: get FAQs, get pricing for a specific instrument, search instruments, and search teachers. The descriptions specify different resources (FAQs, pricing, instruments, teachers) and actions (get vs. search), making misselection unlikely.

Naming Consistency: 5/5

All tool names follow a consistent hyphenated verb-noun pattern (e.g., get-faqs-tool, search-instruments-tool). The verbs 'get' and 'search' are used appropriately based on the action, and the naming style is uniform throughout the set.

Tool Count: 4/5

Four tools are reasonable for an instrument rental domain, covering core informational needs. However, it feels slightly thin as it lacks tools for actual rental operations (e.g., reserve or rent an instrument), which might be expected for a rental service.

Completeness: 3/5

The tools cover informational aspects well (FAQs, pricing, instrument search, teacher search), but there are notable gaps in the rental lifecycle. Missing are tools for actions like reserving, renting, or managing rentals, which are essential for a rental service, potentially causing agent failures in transactional tasks.

Available Tools

4 tools
get-faqs-tool (Get Faqs Tool), Grade: B

Get frequently asked questions about renting instruments from Prelude NZ. Covers how rental works, plans, pricing, delivery, returns, and more.

Parameters: none
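As an illustration not shown on the page itself: since this tool takes no parameters, an MCP client's invocation reduces to a bare tools/call request. The sketch below builds that JSON-RPC 2.0 payload (the framing follows the standard MCP tools/call method; the request id is arbitrary).

```python
import json

# Sketch of the JSON-RPC 2.0 request an MCP client would send to invoke
# get-faqs-tool. The tool takes no parameters, so "arguments" is an
# empty object.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get-faqs-tool", "arguments": {}},
}

print(json.dumps(request, indent=2))
```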

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It indicates this is a read operation ('Get'), but doesn't specify whether it returns all FAQs at once, if there's pagination, authentication requirements, rate limits, or what format the response takes. The description adds minimal behavioral context beyond the basic purpose.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise with two sentences. The first sentence states the core purpose, and the second sentence efficiently lists the coverage areas. Every sentence adds value without redundancy or unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter read tool with no annotations and no output schema, the description provides adequate basic information about what the tool does and what content it covers. However, it lacks important context about response format, pagination, and how it differs from sibling tools, which would be helpful for an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters (schema coverage 100%), so there are no parameters to document. The description appropriately doesn't discuss parameters, and the baseline for zero parameters is 4. It does mention what topics the FAQs cover, which provides context about the content returned.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get frequently asked questions about renting instruments from Prelude NZ.' It specifies the resource (FAQs) and domain (instrument rentals), but doesn't explicitly differentiate from sibling tools like get-pricing-tool or search-instruments-tool, which handle different resources.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions what topics the FAQs cover (rental process, plans, pricing, etc.), but doesn't indicate whether users should use this for general rental questions versus get-pricing-tool for specific pricing queries or search-instruments-tool for instrument availability.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get-pricing-tool (Get Pricing Tool), Grade: A

Get rental pricing plans for a specific instrument at Prelude NZ. Returns all available plan options with monthly and total prices.

Parameters:
- instrument (required): The instrument slug (e.g. "student-violin", "intermediate-flute"). Use search-instruments first to find slugs.
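The schema note above describes a two-step workflow: search for an instrument to obtain its slug, then pass that slug to the pricing tool. A hedged sketch of the two JSON-RPC payloads a client might send (the helper function and ids are illustrative; "student-violin" is the example slug from the parameter description):

```python
import json

def make_call(call_id, tool, arguments):
    """Build a JSON-RPC 2.0 MCP tools/call payload (illustrative helper)."""
    return {
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }

# Step 1: find the instrument slug, as the schema suggests doing first.
step1 = make_call(1, "search-instruments-tool", {"category": "violin"})

# Step 2: fetch pricing using a slug returned by step 1.
step2 = make_call(2, "get-pricing-tool", {"instrument": "student-violin"})

print(json.dumps([step1, step2], indent=2))
```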
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool returns pricing data (a read operation) and specifies the output format ('all available plan options with monthly and total prices'), which adds useful context. However, it lacks details on error handling, rate limits, or authentication needs, leaving behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two efficient sentences that front-load the purpose and include all necessary details without waste. Every word earns its place, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no output schema, no annotations), the description is mostly complete: it states the purpose, output format, and implies usage. However, it could improve by addressing potential errors or limitations, though the lack of annotations and simple schema reduces the need for extensive detail.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the single parameter 'instrument' with its description. The description does not add any parameter-specific details beyond what the schema provides, such as format examples or constraints, but it implies the parameter's role in fetching pricing. Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get rental pricing plans'), resource ('for a specific instrument at Prelude NZ'), and scope ('all available plan options with monthly and total prices'). It distinguishes from sibling tools like search-instruments-tool by focusing on pricing rather than searching or FAQs.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying 'for a specific instrument' and referencing 'Use search-instruments first to find slugs' in the schema, which suggests when to use this tool (after finding an instrument slug). However, it does not explicitly state when not to use it or name alternatives beyond the implied sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search-instruments-tool (Search Instruments Tool), Grade: A

Search instruments available for rental at Prelude NZ. Filter by instrument category (e.g. violin, flute) or family (e.g. strings, woodwind). Returns name, category, tier, pricing, availability, and a link to the instrument page.

Parameters:
- family (optional): Filter by instrument family slug (e.g. "strings", "woodwind", "brass", "keyboard", "percussion").
- category (optional): Filter by instrument category slug (e.g. "violin", "flute", "trumpet", "clarinet", "cello").
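Since both filters are optional, a client should omit any filter it does not set rather than sending nulls. A small sketch of building the arguments object this way (the helper name is illustrative; the slug values are the examples from the parameter descriptions):

```python
import json

def search_instruments_args(family=None, category=None):
    """Build the arguments object for search-instruments-tool,
    dropping any optional filter that was not supplied."""
    args = {"family": family, "category": category}
    return {k: v for k, v in args.items() if v is not None}

# Filter by family only.
print(json.dumps(search_instruments_args(family="strings"), indent=2))

# Filter by both family and category.
print(json.dumps(search_instruments_args(family="woodwind", category="flute"), indent=2))
```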
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adequately describes the search functionality and return format but lacks details about pagination, rate limits, authentication requirements, error conditions, or whether this is a read-only operation. The description doesn't contradict annotations since none exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: one stating the purpose and filters, another detailing the return values. It's appropriately sized with minimal waste, though it could be slightly more front-loaded by mentioning the return format earlier.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a search tool with 2 parameters, 100% schema coverage, and no output schema, the description is adequate but has gaps. It explains what the tool does and what it returns, but lacks behavioral context (pagination, errors, etc.) that would be helpful given no annotations. The absence of an output schema means the description should ideally provide more detail about return structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents both parameters. The description mentions filtering by 'instrument category' and 'family' but doesn't add meaningful semantic context beyond what the schema provides about slugs and examples. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search instruments available for rental'), the target resource ('instruments at Prelude NZ'), and distinguishes it from siblings by focusing on instrument search rather than FAQs, pricing, or teachers. It explicitly mentions filtering capabilities and what information is returned.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use this tool (when searching for rental instruments with category/family filters) but doesn't explicitly state when NOT to use it or mention alternatives. No guidance is provided about choosing between this tool and sibling tools like get-pricing-tool or search-teachers-tool for related queries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search-teachers-tool (Search Teachers Tool), Grade: A

Find music teachers in New Zealand listed on Prelude. Filter by instrument and region. Returns teacher names, regions, instruments taught, and a link to their profile page for full details and contact.

Parameters:
- region (optional): Filter by NZ region slug (e.g. "auckland", "wellington", "canterbury", "otago").
- instrument (optional): Filter by instrument category slug (e.g. "violin", "flute", "trumpet", "piano").
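For completeness, a sketch of a fully filtered call to this tool as a JSON-RPC 2.0 tools/call payload (the request id is arbitrary; the slug values "auckland" and "piano" are taken from the parameter descriptions above):

```python
import json

# Sketch of an MCP tools/call request for search-teachers-tool with
# both optional filters set.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search-teachers-tool",
        "arguments": {"region": "auckland", "instrument": "piano"},
    },
}

print(json.dumps(request, indent=2))
```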
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the return format ('teacher names, regions, instruments taught, and a link to their profile page'), which is useful, but lacks details on pagination, rate limits, or error handling, leaving gaps for a search tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose and efficiently structured in two sentences: one for the action and filters, and one for the return values. Every sentence adds value without redundancy, making it appropriately concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (search with filters), no annotations, and no output schema, the description is fairly complete by specifying the resource, filters, and return format. However, it could improve by addressing behavioral aspects like result limits or error cases, slightly reducing completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the input schema already documents both parameters thoroughly. The description adds minimal value beyond the schema by mentioning filtering by instrument and region, but does not provide additional syntax or format details, aligning with the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Find', 'Filter') and resources ('music teachers in New Zealand listed on Prelude'), distinguishing it from sibling tools like get-faqs-tool or get-pricing-tool by focusing on teacher search rather than FAQs or pricing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by mentioning filtering capabilities ('Filter by instrument and region'), but it does not explicitly state when to use this tool versus alternatives like search-instruments-tool, nor does it provide exclusions or prerequisites for use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
