
Hit The Road Rentals

Server Details

Search campervans and motorhomes worldwide. 300+ rental companies. AU, NZ, US, CA, UK and more.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: HitTheRoad-Git/hittheroad-mcp
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.2/5 across 2 of 2 tools scored.

Server Coherence: A
Disambiguation: 5/5

The two tools have clearly distinct purposes: list_locations is for discovering valid city names, while search_campervans is for searching actual rentals. There is no overlap in functionality, making it easy for an agent to choose the right tool for each task.

Naming Consistency: 5/5

Both tools follow a consistent verb_noun pattern (list_locations, search_campervans) with clear, descriptive names. The naming style is uniform and predictable across the tool set.

Tool Count: 2/5

With only 2 tools, the server feels thin for a rental service domain. While the tools cover initial discovery and search, there are obvious gaps in operations like booking, managing reservations, or viewing details, which limits the server's usefulness for comprehensive rental workflows.

Completeness: 2/5

The tool set is severely incomplete for a rental service. It lacks essential operations such as booking a campervan, viewing rental details, managing bookings, or handling payments. Agents will hit dead ends after searching, unable to complete typical rental tasks.

Available Tools

2 tools
list_locations: A
Read-only

List all searchable pickup cities by country. Call this to find valid city names before searching.

Parameters (JSON Schema)

Name     Required  Description
country  No        Country code filter (e.g. AU, NZ, US). Omit to return all supported countries.
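Based on the schema above, a list_locations call takes at most one optional argument. The sketch below shows one way an agent-side helper might assemble that arguments object; `build_list_locations_args` is a hypothetical name, not part of the server's API.

```python
# Hypothetical sketch: building the arguments for a list_locations tool
# call, following the parameter table above (not an official client API).

def build_list_locations_args(country=None):
    """Return the JSON arguments object for a list_locations call.

    country is optional; omitting it asks for all supported countries.
    """
    args = {}
    if country is not None:
        # The schema expects a country code such as "AU", "NZ", or "US".
        args["country"] = country.upper()
    return args

# With no filter, the arguments object is empty (all countries).
print(build_list_locations_args())      # {}
# With a filter, only the country code is sent.
print(build_list_locations_args("au"))  # {'country': 'AU'}
```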
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes the tool's purpose and usage context but doesn't disclose behavioral traits such as rate limits, authentication needs, response format, or pagination. The description doesn't contradict any annotations, but it lacks operational detail.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences that are front-loaded with the tool's purpose followed by usage guidance. Every sentence earns its place with no wasted words or redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simple purpose (list cities), single optional parameter, and no output schema, the description is reasonably complete. It explains what the tool does and when to use it, though it could benefit from mentioning response format or any limitations. The lack of annotations means some behavioral context is missing.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the 'country' parameter. The description adds value by explaining the purpose of finding 'valid city names' and the tool's role in the workflow, but doesn't provide additional parameter semantics beyond what the schema offers. With 1 parameter and high schema coverage, baseline is 3, but the description adds meaningful context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('all searchable pickup cities by country'), specifying that it returns city names for use in searching. It distinguishes from the sibling tool 'search_campervans' by focusing on location metadata rather than actual search operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use this tool: 'Call this to find valid city names before searching.' This provides clear context for usage versus the sibling search tool and indicates it's a prerequisite step for accurate searches.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
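The guidance above describes a discovery-then-search workflow: resolve the user's city against list_locations output before calling the search tool. A minimal sketch of that pattern, with a mocked locations response standing in for a live MCP call (`MOCK_LOCATIONS` and `pick_valid_city` are illustrative names, not part of the server):

```python
# Hypothetical workflow sketch: validate a city against list_locations
# output before searching. A mocked response replaces the live tool call.

MOCK_LOCATIONS = {"AU": ["Sydney", "Melbourne", "Cairns"]}  # pretend tool output

def pick_valid_city(requested, country):
    """Resolve a user-supplied city against the tool's valid city list."""
    cities = MOCK_LOCATIONS.get(country, [])
    for city in cities:
        if city.lower() == requested.lower():
            return city  # canonical spelling for the later search call
    raise ValueError(f"{requested!r} is not a searchable pickup city in {country}")

# Step 1: resolve the city; step 2 would feed it into search_campervans.
print(pick_valid_city("sydney", "AU"))  # Sydney
```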

search_campervans: A
Read-only

Search campervan and motorhome rentals. Returns a URL that pre-fills the search form with your trip details. Click Search on the page to see live results with pricing, availability, and booking options from 160+ rental companies.

Parameters (JSON Schema)

Name          Required  Description
city          Yes       Pickup city name, e.g. "Sydney", "Auckland", "Los Angeles".
country       Yes       Country code: AU, NZ, US, CA, GB, DE, FR, IT, ES, NL.
pickup_date   Yes       Pickup date, YYYY-MM-DD.
dropoff_city  No        Return city if different from pickup (one-way).
dropoff_date  Yes       Return date, YYYY-MM-DD.
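A hedged sketch of what client-side validation for this schema might look like: check required fields, the listed country codes, and the YYYY-MM-DD date format before issuing the call. `build_search_args` is a hypothetical helper, not part of the server's API.

```python
# Hypothetical sketch: assembling and sanity-checking arguments for a
# search_campervans call, per the parameter table above.
from datetime import date

COUNTRIES = {"AU", "NZ", "US", "CA", "GB", "DE", "FR", "IT", "ES", "NL"}

def build_search_args(city, country, pickup_date, dropoff_date, dropoff_city=None):
    """Return the JSON arguments object, or raise on an invalid input."""
    if country not in COUNTRIES:
        raise ValueError(f"unsupported country code: {country}")
    # Dates must be ISO YYYY-MM-DD; fromisoformat raises on anything else.
    pickup = date.fromisoformat(pickup_date)
    dropoff = date.fromisoformat(dropoff_date)
    if dropoff < pickup:
        raise ValueError("dropoff_date is before pickup_date")
    args = {
        "city": city,
        "country": country,
        "pickup_date": pickup_date,
        "dropoff_date": dropoff_date,
    }
    if dropoff_city:
        # One-way rental: drop the vehicle off in a different city.
        args["dropoff_city"] = dropoff_city
    return args

print(build_search_args("Sydney", "AU", "2025-03-01", "2025-03-10"))
```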
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the tool's behavior by explaining it returns a URL that pre-fills a search form and requires user interaction ('Click Search on the page'), but it doesn't cover aspects like rate limits, error handling, or authentication needs. It adds some context but leaves gaps in behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by essential details about the return value and user actions. Every sentence adds value without redundancy, making it efficiently structured and appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, no annotations, and no output schema, the description provides a good overview of purpose and behavior but lacks details on output format (beyond mentioning a URL) and error handling. It's mostly complete but could benefit from more context on limitations or expected results.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so the schema already documents all parameters thoroughly. The description doesn't add any additional meaning or clarification beyond what the schema provides, such as explaining parameter interactions or constraints. Baseline 3 is appropriate when schema coverage is high.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Search campervan and motorhome rentals') and resource ('rentals'), distinguishing it from the sibling 'list_locations' by focusing on search functionality rather than location listing. It provides concrete details about what the tool does and what it returns.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by stating it's for searching rentals and returns a URL for pre-filled forms, but it doesn't explicitly mention when to use this tool versus alternatives like 'list_locations' or other search methods. It provides clear context but lacks explicit exclusions or comparisons.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
