Bandago Van Rentals
Server Details
Real-time passenger van rental availability and pricing across major US cities.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
5 tools

check_availability
Check which vehicles are available for rental on specific dates and locations. Uses the scheduling engine to account for existing reservations, vehicle positioning, and fleet consolidation. Returns a list of available vehicle types (rate codes) with availability urgency labels.
| Name | Required | Description | Default |
|---|---|---|---|
| end_city | No | 2-letter return location code. Defaults to start_city if not specified (round-trip). | |
| end_date | Yes | Rental end date (YYYY-MM-DD) | |
| rate_code | No | Filter by vehicle type/rate code (e.g., 'Ford Transit', 'Sprinter'). Omit to check all types. | |
| start_city | Yes | 2-letter pickup location code (e.g., 'LA', 'NA', 'SF') | |
| start_date | Yes | Rental start date (YYYY-MM-DD) | |
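Before calling `check_availability`, an agent needs arguments that satisfy the schema above. The helper below is a hypothetical sketch (the function name and validation logic are illustrative, not part of the server); it mirrors the documented fields, the round-trip default for `end_city`, and the optional `rate_code` filter:

```python
from datetime import date

def build_check_availability_args(start_city, start_date, end_date,
                                  end_city=None, rate_code=None):
    """Assemble an arguments dict for the check_availability tool.
    Hypothetical client-side helper; field names mirror the documented schema."""
    if len(start_city) != 2:
        raise ValueError("start_city must be a 2-letter location code, e.g. 'SF'")
    # Dates must be ISO YYYY-MM-DD; date.fromisoformat raises on malformed input
    if date.fromisoformat(end_date) < date.fromisoformat(start_date):
        raise ValueError("end_date must not precede start_date")
    args = {"start_city": start_city, "start_date": start_date, "end_date": end_date}
    if end_city:
        args["end_city"] = end_city   # omit for a round trip (defaults to start_city)
    if rate_code:
        args["rate_code"] = rate_code  # omit to check all vehicle types
    return args

print(build_check_availability_args("SF", "2025-07-01", "2025-07-05",
                                    rate_code="Sprinter"))
```

Omitting both optional fields checks every vehicle type for a round trip from `start_city`.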
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, description carries the full burden and succeeds well: discloses use of 'scheduling engine' accounting for reservations/positioning/consolidation, and details return format (list of rate codes with urgency labels). Could improve by explicitly stating this is read-only/side-effect free.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three well-structured sentences with no waste. Front-loaded with the core action. Second sentence provides valuable implementation context (scheduling engine). Third explains output. Appropriately sized for complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 simple parameters with 100% schema coverage and no output schema, the description adequately compensates by describing the return structure (list with urgency labels). No critical gaps for a query tool of this complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline applies. Description mentions 'specific dates and locations' and 'rate codes' but does not add semantic details beyond what the schema already documents (e.g., date formats, default behaviors for end_city).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: 'Check which vehicles are available for rental on specific dates and locations' provides clear verb (check), resource (vehicles), and scope (dates/locations). It distinguishes from siblings (e.g., get_rate_quote focuses on pricing, list_locations lists codes without checking actual availability).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage through return value description ('Returns a list of available vehicle types...'), suggesting this is a prerequisite step before booking. However, lacks explicit guidance on when to use vs alternatives (e.g., 'use this before get_rate_quote') or when not to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_rate_quote
Get a full pricing breakdown for a rental including day rate, insurance options, mileage allowance, applicable taxes, and grand total. Uses the Bandago rate engine with DIA (days-in-advance) multipliers and multi-jurisdiction tax calculations.
| Name | Required | Description | Default |
|---|---|---|---|
| | No | Customer email (used for VIP rate detection) | |
| end_city | No | 2-letter return location code. Defaults to start_city. | |
| end_date | Yes | Rental end date (YYYY-MM-DD) | |
| end_time | No | Return time in 24hr format without colon (e.g., '1000' for 10:00 AM, '1400' for 2:00 PM). Defaults to '1000'. Valid values: 0900, 0930, 1000, 1030, 1100, 1130, 1200, 1230, 1300, 1330, 1400, 1430, 1500, 1530, 1600, 1630. | |
| rate_code | Yes | Vehicle type/rate code (e.g., 'Ford Transit', 'Sprinter') | |
| start_city | Yes | 2-letter pickup location code | |
| start_date | Yes | Rental start date (YYYY-MM-DD) | |
| start_time | No | Pickup time in 24hr format without colon (e.g., '1000' for 10:00 AM, '1400' for 2:00 PM). Defaults to '1000'. Valid values: 0900, 0930, 1000, 1030, 1100, 1130, 1200, 1230, 1300, 1330, 1400, 1430, 1500, 1530, 1600, 1630. | |
| discount_code | No | Promotional or discount code |
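The `start_time`/`end_time` fields accept only the half-hour slots enumerated in the schema (0900 through 1630) and default to `'1000'`. A small sketch of client-side validation, assuming nothing beyond what the table documents (`normalize_time` itself is a hypothetical helper):

```python
# Valid pickup/return slots per the schema: half-hour steps from 0900 to 1630
VALID_TIMES = [f"{h:02d}{m:02d}" for h in range(9, 17) for m in (0, 30)]

def normalize_time(t=None):
    """Return a valid time string for get_rate_quote, applying the
    documented default of '1000' when none is given."""
    if t is None:
        return "1000"
    if t not in VALID_TIMES:
        raise ValueError(f"time must be one of {VALID_TIMES}")
    return t
```

Rejecting out-of-range values like `'1700'` locally saves a failed round trip to the server.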
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Adds valuable context about calculation methodology: Bandago rate engine, DIA (days-in-advance) multipliers, and multi-jurisdiction tax calculations. However, misses opportunity to state this is read-only/nondestructive or that it doesn't create a reservation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two dense sentences with zero waste. Front-loaded with user value (pricing breakdown components), followed by technical implementation details. Every clause earns its place—first sentence defines outputs, second explains calculation engine.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Compensates well for missing output schema by enumerating expected return components (day rate, insurance, mileage, taxes, grand total). Technical details about rate engine provide appropriate depth for a 9-parameter calculation tool. Would benefit from mentioning if quote is binding or estimated.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage (baseline 3). Description mentions DIA multipliers which loosely contextualizes date parameters, but adds no specific guidance on parameter formats, valid ranges, or interdependencies beyond the schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity with 'Get a full pricing breakdown' (verb + resource), and clearly differentiates from siblings like check_availability (binary status) and get_reservation_link (booking action) by detailing specific output components: day rate, insurance, taxes, and grand total.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage for cost calculation before booking through detailed output specification, but lacks explicit workflow guidance such as when to prefer this over check_availability or that it should be called before get_reservation_link. No 'when-not-to-use' guidance provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_reservation_link
Generate a link to start a reservation on Bandago's website with dates and locations pre-filled. The customer will choose their vehicle and complete the booking on the website.
| Name | Required | Description | Default |
|---|---|---|---|
| end_city | No | 2-letter return location code. Defaults to start_city if not specified (round-trip). | |
| end_date | Yes | Rental end date (YYYY-MM-DD) | |
| end_time | No | Return time in 24hr format without colon (e.g., '1000' for 10:00 AM, '1400' for 2:00 PM). Defaults to start_time. Valid values: 0900, 0930, 1000, 1030, 1100, 1130, 1200, 1230, 1300, 1330, 1400, 1430, 1500, 1530, 1600, 1630. | |
| rate_code | No | Filter to a specific vehicle type (e.g., 'Ford Transit', 'Sprinter'). When provided, only this vehicle will be shown on the booking page. | |
| start_city | Yes | 2-letter pickup location code (e.g., 'LA', 'NA', 'SF') | |
| start_date | Yes | Rental start date (YYYY-MM-DD) | |
| start_time | No | Pickup time in 24hr format without colon (e.g., '1000' for 10:00 AM, '1400' for 2:00 PM). Defaults to '1000'. Valid values: 0900, 0930, 1000, 1030, 1100, 1130, 1200, 1230, 1300, 1330, 1400, 1430, 1500, 1530, 1600, 1630. |
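To illustrate what "pre-filled link" means in practice, here is a sketch of building such a URL. The base URL and query-parameter names are assumptions for illustration only; the real link format is produced server-side by `get_reservation_link`:

```python
from urllib.parse import urlencode

def build_booking_url(start_city, start_date, end_date,
                      start_time="1000", **optional):
    """Illustrative pre-filled booking link. The host and parameter
    names are guesses; only the field set mirrors the documented schema."""
    params = {
        "start_city": start_city,
        "start_date": start_date,
        "end_date": end_date,
        "start_time": start_time,  # documented default
    }
    # Optional fields (end_city, end_time, rate_code) are included only when set
    params.update({k: v for k, v in optional.items() if v is not None})
    return "https://www.bandago.com/book?" + urlencode(params)

print(build_booking_url("SF", "2025-07-01", "2025-07-05", rate_code="Sprinter"))
```

Passing `rate_code` corresponds to the documented behavior of showing only that vehicle on the booking page.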
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, description carries full burden and successfully discloses key behavioral traits: it clarifies this generates a handoff link rather than completing the reservation (customer finishes on website), implies idempotent/read-only behavior (URL generation), and explains the vehicle selection happens externally. Missing explicit mention of link expiration or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. First sentence establishes core function (generate pre-filled link), second sentence establishes customer workflow completion. Every clause earns its place; no redundancy with schema or name.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Strong given 100% input schema coverage. Description implies return value is a URL via 'Generate a link,' though explicit 'Returns: URL string' would be better given no output schema exists. Covers the handoff workflow sufficiently for a 7-parameter tool with simple flat structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage with detailed formats (e.g., '24hr format without colon', '2-letter location code'). Description mentions 'dates and locations pre-filled' which loosely maps to parameters but adds no syntax, validation rules, or cross-parameter dependencies beyond the excellent schema documentation. Baseline 3 appropriate for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb 'Generate' and resource 'link' with 'dates and locations pre-filled' clarifying scope. Implicitly distinguishes from siblings like get_rate_quote (returns pricing data) and check_availability (returns availability status) by describing the handoff-to-website behavior, though it doesn't explicitly name the alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage guidance by stating 'The customer will choose their vehicle and complete the booking on the website,' indicating this is for scenarios where the customer completes booking manually. However, lacks explicit when-not guidance or direct comparison to siblings (e.g., 'use this instead of get_rate_quote when you need a booking URL, not pricing').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_locations
List Bandago rental locations with addresses and contact info. Returns active locations by default.
| Name | Required | Description | Default |
|---|---|---|---|
| include_inactive | No | Include inactive/non-public locations | false |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full behavioral disclosure burden. It successfully communicates what data is returned (addresses and contact info) and the default filtering behavior (active locations only). However, it omits pagination details, authentication requirements, and exact return structure which would be necessary for full transparency on a read operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences with zero waste. First sentence front-loads the verb and resource ('List Bandago rental locations'), immediately clarifying purpose. Second sentence adds the default scope constraint. No redundant phrases or filler content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for a simple list tool with one optional boolean parameter. Description compensates for missing output schema by specifying return content (addresses and contact info). With 100% schema coverage and straightforward functionality, additional elaboration would be unnecessary.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (the include_inactive parameter is fully documented in schema). Description mentions 'Returns active locations by default,' which aligns with the parameter's default value but does not add syntax, format, or usage semantics beyond what the schema already provides. Baseline 3 is appropriate when schema carries the descriptive load.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: verb 'List' + resource 'Bandago rental locations' + return details 'addresses and contact info'. Clearly distinguishes from siblings (check_availability, get_rate_quote, etc., which concern booking transactions rather than directory listings).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage through functional description but lacks explicit when-to-use guidance or alternatives. Does not indicate relationship to sibling tools (e.g., whether to call this before check_availability). 'Returns active locations by default' hints at filtering use cases but doesn't constitute explicit usage guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_rate_codes
List vehicle types/rate codes (e.g., Ford Transit, Sprinter) with specs. These codes are used to identify vehicle classes for availability and pricing.
No parameters
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden. It indicates the operation returns 'specs' (specifications), adding some behavioral context about the payload content. However, it omits details about pagination, caching, output structure, or whether these are static master data versus dynamic values.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. First sentence defines the action and resource with examples; second sentence provides contextual utility. Every word earns its place without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero input parameters and no output schema, the description provides adequate conceptual coverage by identifying what is returned (codes/types with specs) and their business purpose. However, without an output schema, it could have elaborated slightly on the structure or scope of 'specs' to be fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema contains zero parameters, which per guidelines sets a baseline of 4. The description appropriately requires no additional parameter clarification since there are no inputs to document.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a clear verb ('List') and specific resource ('vehicle types/rate codes'), complete with concrete examples ('Ford Transit, Sprinter') and mentions 'specs'. It implicitly distinguishes from siblings like check_availability and list_locations by focusing on vehicle classification codes rather than locations or availability status, though it does not explicitly name siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The second sentence explains that codes are 'used to identify vehicle classes for availability and pricing,' implying relevance when working with availability/pricing workflows (relating to siblings check_availability and get_rate_quote). However, it lacks explicit guidance like 'call this first to get valid rate codes' or clear when-not conditions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
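The evaluations above repeatedly note the implied workflow: list codes, check availability, quote a price, then hand off a booking link. A hedged sketch of that chain follows, where `call(name, args)` stands in for whatever MCP client invokes tools by name (it is not a real library API), with canned responses so the sketch runs offline:

```python
def plan_rental(call, start_city, start_date, end_date):
    """Chain the tools in their documented order. `call` is any function
    that invokes an MCP tool by name with an arguments dict."""
    trip = {"start_city": start_city, "start_date": start_date, "end_date": end_date}
    rate_codes = call("list_rate_codes", {})          # valid vehicle classes
    available = call("check_availability", dict(trip))
    chosen = next(c for c in available if c in rate_codes)  # first available type
    quote = call("get_rate_quote", dict(trip, rate_code=chosen))
    link = call("get_reservation_link", dict(trip, rate_code=chosen))
    return quote, link

def fake_call(name, args):
    """Canned responses standing in for a live server, for demonstration only."""
    return {
        "list_rate_codes": ["Ford Transit", "Sprinter"],
        "check_availability": ["Sprinter"],
        "get_rate_quote": {"grand_total": 540.0},
        "get_reservation_link": "https://example.invalid/book",
    }[name]

quote, link = plan_rental(fake_call, "SF", "2025-07-01", "2025-07-05")
```

The canned figures (the rate codes, the 540.0 total, the example link) are invented for the stub and do not reflect real Bandago data.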
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
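If you prefer to generate the manifest rather than hand-edit it, a minimal sketch (the function is a convenience of this example; only the `$schema` URL and field names come from the instructions above):

```python
import json

def glama_manifest(email):
    """Render the /.well-known/glama.json payload described above.
    The email must match the one on your Glama account."""
    return json.dumps(
        {
            "$schema": "https://glama.ai/mcp/schemas/connector.json",
            "maintainers": [{"email": email}],
        },
        indent=2,
    )

# Write it wherever your web server serves /.well-known/ from, e.g.:
# pathlib.Path(".well-known/glama.json").write_text(glama_manifest("you@example.com"))
```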
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.