openvan-travel
Server Details
Vanlife & RV travel data — fuel prices, weather, currency, events, news (free, 11 MCP tools)
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: Kopaev/openvan-camp-public-api
- GitHub Stars: 0
- Server Listing: OpenVan MCP Server
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4/5 across 11 of 11 tools scored. Lowest: 3.3/5.
Each tool has a clearly distinct purpose targeting specific resources like fuel prices, events, weather, or stories, with no overlap in functionality. For example, compare_fuel_prices and find_cheapest_fuel serve different comparison and search roles, while get_fuel_prices provides detailed data, making misselection unlikely.
All tool names follow a consistent verb_noun pattern (e.g., compare_fuel_prices, get_currency_rate, list_events), using snake_case throughout. This predictability aids agents in understanding and selecting tools without confusion from mixed conventions.
With 11 tools, the count is well-scoped for the vanlife travel domain, covering key areas like fuel, events, weather, and stories. Each tool earns its place by addressing distinct needs, avoiding bloat while providing comprehensive coverage for planning and information retrieval.
The tool set offers strong coverage for vanlife travel, including fuel price comparison, event listings, weather suitability, and story searches, with no dead ends. A minor gap exists in lacking update or delete operations for user-specific data, but core informational and planning workflows are fully supported.
Available Tools
11 tools

compare_fuel_prices - Compare Fuel Prices (Grade A, Read-only, Idempotent)
Compare current prices for one fuel type across 2-10 countries. Returns sorted table cheapest-first.
| Name | Required | Description | Default |
|---|---|---|---|
| fuel_type | No | Fuel type to compare. | diesel |
| country_codes | Yes | Array of 2-10 ISO 3166-1 alpha-2 country codes to compare. | |
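Under the hood this is a standard MCP tool, so a client invokes it with a JSON-RPC `tools/call` request. A minimal sketch in Python, assuming the standard MCP envelope (the argument values are illustrative, not from the server):

```python
import json

# Hypothetical MCP "tools/call" request for compare_fuel_prices.
# Tool name and arguments come from the parameter table above.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "compare_fuel_prices",
        "arguments": {
            "fuel_type": "diesel",                # optional; defaults to "diesel"
            "country_codes": ["DE", "FR", "PL"],  # required: 2-10 ISO alpha-2 codes
        },
    },
}

# The 2-10 country constraint can be validated client-side before sending.
codes = payload["params"]["arguments"]["country_codes"]
assert 2 <= len(codes) <= 10, "country_codes must contain 2-10 entries"

print(json.dumps(payload, indent=2))
```

The response is a sorted, cheapest-first table per the description; no output schema is published, so clients should treat the result shape as free-form text.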
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations already declare this as read-only, non-destructive, idempotent, and open-world, covering safety and reliability. The description adds valuable context about the output format ('sorted table cheapest-first') and the 2-10 country constraint, which goes beyond what annotations provide. No contradictions with annotations exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that each earn their place: the first defines the core functionality and constraints, the second specifies the output format. No wasted words, and the most important information (what it does) comes first.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only comparison tool with comprehensive annotations and full schema coverage, the description provides sufficient context about purpose, constraints, and output format. The main gap is the lack of output schema, but the description compensates by specifying the return format ('sorted table cheapest-first'). It could be more complete by mentioning data freshness or source limitations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already fully documents both parameters (fuel_type with enum values and default, country_codes with ISO code format and 2-10 range). The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('compare current prices'), resource ('for one fuel type'), and scope ('across 2-10 countries'), with explicit output format ('sorted table cheapest-first'). It distinguishes from siblings like 'find_cheapest_fuel' by focusing on multi-country comparison rather than single-location optimization.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool (comparing prices across multiple countries for one fuel type) and implicitly distinguishes from 'find_cheapest_fuel' which likely finds cheapest fuel in a single location. However, it doesn't explicitly state when NOT to use it or name specific alternatives among the many sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_vanbasket - Compare Food Prices (Grade A, Read-only, Idempotent)
Compare food price index between two countries (world average = 100). Higher number = more expensive food.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | Destination country ISO alpha-2 code. | |
| from | Yes | Home country ISO alpha-2 code. | |
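Because the index is normalized to a world average of 100, the ratio of two indices gives a rough relative cost factor. A sketch of one way to interpret a result (the index values below are invented for illustration; the tool's actual response shape is not documented):

```python
# compare_vanbasket arguments: both ISO alpha-2 codes are required.
arguments = {"from": "DE", "to": "PT"}

# Hypothetical index values on the world-average-100 scale.
home_index, destination_index = 112.0, 84.0
ratio = destination_index / home_index  # < 1 means the destination is cheaper

print(f"Food in {arguments['to']} costs about {ratio:.0%} "
      f"of what it does in {arguments['from']}")
```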
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate this is a read-only, non-destructive, idempotent, and open-world operation. The description adds useful context about the scale (world average = 100) and interpretation (higher number = more expensive), which aren't covered by annotations. However, it doesn't disclose additional behavioral traits like rate limits, error handling, or data freshness.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose and includes essential details (scale and interpretation). Every word earns its place, with no redundancy or unnecessary elaboration.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (2 simple parameters), rich annotations (covering safety and behavior), and no output schema, the description is reasonably complete. It explains the purpose, scale, and interpretation, but could benefit from mentioning output format or example usage to fully compensate for the lack of output schema.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for 'from' and 'to' as ISO alpha-2 codes. The description adds meaning by explaining that these parameters represent countries for comparison and the output interpretation, but doesn't provide additional syntax or format details beyond what the schema already covers.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: comparing food price indices between two countries with a specific scale (world average = 100) and interpretation (higher number = more expensive). It distinguishes from siblings like compare_fuel_prices and get_vanbasket by specifying food prices rather than fuel or general basket data.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context (comparing two countries' food prices) but doesn't explicitly state when to use this tool versus alternatives like compare_fuel_prices for fuel or get_currency_rate for currency conversions. It provides the scale and interpretation, which helps guide usage, but lacks explicit exclusions or sibling comparisons.
find_cheapest_fuel - Find Cheapest Fuel (Grade A, Read-only, Idempotent)
Find the cheapest countries for a given fuel type in a region (or worldwide). Useful for route planning.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | How many cheapest countries to return. | |
| region | No | Region to search. Default: world (all countries). | world |
| fuel_type | No | Fuel type. | diesel |
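Since all three parameters are optional, an empty arguments object is valid and falls back to the documented defaults (diesel, worldwide). A sketch of building a minimal arguments dict; note the `"europe"` region value is an assumption, as the schema does not list the allowed region names:

```python
# find_cheapest_fuel: every parameter is optional, so server defaults apply
# (fuel_type "diesel", region "world") when a key is omitted.
def build_arguments(limit=None, region=None, fuel_type=None):
    """Build an arguments dict, dropping unset keys so server defaults apply."""
    args = {"limit": limit, "region": region, "fuel_type": fuel_type}
    return {k: v for k, v in args.items() if v is not None}

assert build_arguments() == {}  # worldwide diesel, server-default limit
assert build_arguments(limit=5, region="europe") == {"limit": 5, "region": "europe"}
```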
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide key behavioral hints (readOnlyHint: true, openWorldHint: true, idempotentHint: true, destructiveHint: false), covering safety and idempotency. The description adds minimal context beyond this, mentioning 'Useful for route planning,' which gives some application insight but doesn't disclose additional traits like rate limits, data freshness, or error handling. No contradiction with annotations exists.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and front-loaded: two sentences that directly state the purpose and a brief usage hint. Every sentence earns its place without redundancy or fluff, making it easy for an AI agent to parse quickly.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 parameters, no output schema), rich annotations cover safety and idempotency, and 100% schema coverage documents inputs well. The description provides purpose and a usage hint, which is adequate. However, it could be more complete by explaining output format (e.g., list of countries with prices) or data sources, though annotations help mitigate gaps.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for all parameters (limit, region, fuel_type) including defaults and constraints. The description adds no additional parameter semantics beyond what the schema provides, such as explaining how 'region' affects results or what 'cheapest' means in practice. Baseline 3 is appropriate given high schema coverage.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Find the cheapest countries for a given fuel type in a region (or worldwide).' It specifies the verb ('find'), resource ('cheapest countries'), and scope ('region or worldwide'). However, it doesn't explicitly differentiate from sibling tools like 'compare_fuel_prices' or 'get_fuel_prices', which might offer similar functionality.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implied usage context with 'Useful for route planning,' suggesting when this tool might be applicable. However, it doesn't explicitly state when to use this tool versus alternatives like 'compare_fuel_prices' or 'get_fuel_prices,' nor does it provide exclusions or prerequisites for usage.
get_currency_rate - Convert Currency (Grade A, Read-only, Idempotent)
Convert an amount between two currencies using live rates (150+ currencies, daily updates).
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | Target currency ISO 4217 code, e.g. USD. | |
| from | Yes | Source currency ISO 4217 code, e.g. EUR. | |
| amount | No | Amount to convert. | 1 |
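A sketch of the arguments for converting 250 EUR to USD. The rate below is invented for illustration; since no output schema is published, the local scaling shown is an assumption about how a client might reuse a rate-for-one result:

```python
# get_currency_rate arguments (ISO 4217 codes; amount defaults to 1).
arguments = {
    "from": "EUR",   # source currency
    "to": "USD",     # target currency
    "amount": 250,   # optional; omit to get the rate for 1 unit
}

# With amount omitted, the tool returns the rate for 1 unit, so a client
# can also scale locally instead of re-calling for each amount.
rate_for_one = 1.08  # hypothetical EUR->USD rate
converted = arguments["amount"] * rate_for_one
print(f"{arguments['amount']} {arguments['from']} is about "
      f"{converted:.2f} {arguments['to']}")
```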
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already cover key traits (read-only, open-world, idempotent, non-destructive), so the bar is lower. The description adds valuable context beyond annotations by specifying 'live rates' and 'daily updates,' which informs about data freshness and real-time behavior, though it doesn't detail rate limits or error handling.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose ('Convert an amount between two currencies') and adds only essential context ('using live rates (150+ currencies, daily updates)'). Every part earns its place with no wasted words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity, rich annotations, and 100% schema coverage, the description is mostly complete. However, the lack of an output schema means the description could benefit from hinting at return values (e.g., converted amount), though it adequately covers the core functionality and constraints.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, providing full parameter documentation. The description adds no specific parameter semantics beyond what the schema already states, such as currency code formats or amount handling. Baseline 3 is appropriate since the schema carries the full burden.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Convert an amount between two currencies') and the resource ('live rates'), distinguishing it from sibling tools which focus on fuel prices, events, weather, etc. It provides scope details ('150+ currencies, daily updates') that further clarify its unique purpose.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context ('using live rates') but does not explicitly state when to use this tool versus alternatives. No exclusions or specific scenarios are mentioned, leaving usage guidance at an implied level without clear differentiation from potential currency-related tools not present in the sibling list.
get_event - Get Event Details (Grade A, Read-only, Idempotent)
Get full details for a single vanlife event by its slug.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | Event slug, e.g. caravan-salon-duesseldorf-2026. | |
| locale | No | | en |
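A sketch of the arguments for a single-event lookup. The slug value is the example from the schema; the lowercase/hyphenated sanity check is an assumption inferred from that example, not a documented constraint:

```python
# get_event: look up one event by its slug; locale defaults to "en".
arguments = {
    "slug": "caravan-salon-duesseldorf-2026",  # example slug from the schema
    "locale": "de",                            # optional; "en" when omitted
}

# Slugs appear to be lowercase, hyphen-separated identifiers, so a cheap
# client-side check can catch obviously malformed lookups before calling.
assert arguments["slug"] == arguments["slug"].lower()
assert " " not in arguments["slug"]
```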
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=true, covering safety and idempotency. The description adds context about what 'full details' means, but doesn't disclose additional behavioral traits like rate limits, authentication needs, or response format.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence that's front-loaded with the core purpose. No wasted words, efficiently communicates the tool's function and key parameter.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read operation with good annotations (readOnly, idempotent, non-destructive) but no output schema, the description adequately covers the purpose. However, it doesn't explain what 'full details' includes or the response format, which would be helpful given the lack of output schema.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 50% (only 'slug' has description, 'locale' lacks description). The description mentions 'by its slug' which aligns with the schema's documented parameter, but doesn't add meaning beyond what's already in the schema for 'slug' or compensate for the undocumented 'locale' parameter.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get full details') and resource ('for a single vanlife event'), specifying it's for a single event by slug. It distinguishes from sibling 'list_events' which presumably returns multiple events without detailed information.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context: use when you need detailed information about a specific event identified by slug. It doesn't explicitly state when not to use or name alternatives, but the specificity suggests it's for single-event lookup versus 'list_events' for multiple events.
get_fuel_prices - Get Fuel Prices (Grade A, Read-only, Idempotent)
Current retail fuel prices (gasoline, diesel, LPG, CNG) for 125+ countries. Pass country_code to get one country in detail; omit it for a summary list.
| Name | Required | Description | Default |
|---|---|---|---|
| country_code | No | ISO 3166-1 alpha-2 country code, e.g. DE. If omitted, returns all countries. | |
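The single optional parameter switches the tool between two modes, which an agent should decide on before calling. A small sketch of that decision:

```python
# get_fuel_prices has two modes keyed off one optional parameter.
detail_call = {"country_code": "DE"}  # one country, detailed prices
summary_call = {}                     # omit country_code: summary for 125+ countries

def describe(arguments):
    """Name the response mode an agent should expect for the given arguments."""
    return "single-country detail" if "country_code" in arguments else "all-country summary"

assert describe(detail_call) == "single-country detail"
assert describe(summary_call) == "all-country summary"
```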
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already cover safety (readOnlyHint=true, destructiveHint=false), idempotency (idempotentHint=true), and data scope (openWorldHint=true). The description adds valuable context about the data coverage ('125+ countries') and the two modes of operation (detailed vs. summary), which goes beyond what annotations provide. No contradictions with annotations.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence and usage guidelines in the second. Both sentences earn their place by providing essential information without redundancy, making it highly efficient and well-structured.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 optional parameter), rich annotations (covering safety and behavior), and 100% schema coverage, the description is largely complete. It explains the two usage modes and data scope. The main gap is the lack of output schema, but the description compensates by hinting at return formats ('detail' vs. 'summary list').
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the parameter 'country_code' fully documented in the schema (ISO 3166-1 alpha-2 code, optional). The description adds minimal semantics by reiterating that omitting it returns all countries, which is already implied in the schema. Baseline 3 is appropriate as the schema does the heavy lifting.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('get'), resource ('fuel prices'), and scope ('current retail fuel prices for 125+ countries'), with explicit mention of fuel types (gasoline, diesel, LPG, CNG). It distinguishes itself from siblings like 'compare_fuel_prices' and 'find_cheapest_fuel' by focusing on retrieval rather than comparison or optimization.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool vs. alternatives: 'Pass country_code to get one country in detail; omit it for a summary list.' This directly addresses the key decision point for using the tool, though it does not explicitly mention when to use sibling tools like 'compare_fuel_prices'.
get_vanbasket - Get Food Price Index (Grade B, Read-only, Idempotent)
Get VanBasket food price index details for one country.
| Name | Required | Description | Default |
|---|---|---|---|
| country_code | Yes | ISO 3166-1 alpha-2 country code. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds context by specifying it retrieves 'details' for 'one country', which implies a focused query, but doesn't disclose behavioral traits like rate limits, authentication needs, or output format. With annotations providing core safety info, the description adds minimal extra value.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose without unnecessary words. Every part earns its place by specifying the action, resource, and scope concisely.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 parameter, no output schema), annotations cover safety and idempotency, and schema fully documents the parameter. The description is adequate but lacks context on output format, error handling, or sibling tool differentiation, leaving gaps for an agent to infer usage in a broader toolset.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the parameter 'country_code' fully documented as an ISO 3166-1 alpha-2 code. The description adds no additional parameter semantics beyond what the schema provides, such as examples or constraints. Baseline 3 is appropriate since the schema handles all parameter documentation.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get VanBasket food price index details for one country.' It specifies the verb ('Get'), resource ('VanBasket food price index details'), and scope ('for one country'). However, it doesn't explicitly differentiate from sibling tools like 'compare_vanbasket', which might offer comparative analysis versus this single-country retrieval.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'compare_vanbasket' for multi-country comparisons or 'list_vansky_top' for broader listings, nor does it specify prerequisites or exclusions. The agent must infer usage from the description alone.
get_vansky_weather - Get VanSky Weather Score (Grade A, Read-only, Idempotent)
Get VanSky vanlife weather suitability score (0-100) for a country: van_score, sleep_score, solar yield, driving conditions, awning safety, condensation risk, 7-day forecast.
| Name | Required | Description | Default |
|---|---|---|---|
| country_code | Yes | ISO 3166-1 alpha-2 country code, e.g. DE. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, open-world, idempotent, and non-destructive behavior. The description adds valuable context beyond annotations by specifying the score range (0-100), output components (e.g., solar yield, condensation risk), and forecast duration (7-day), which helps the agent understand the tool's behavioral scope and output structure.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, dense sentence that efficiently conveys the tool's purpose, output details, and scope without any wasted words. It is front-loaded with the core action and resource, making it easy for an agent to parse quickly.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (weather scoring with multiple output components), annotations cover safety and behavior well, and the schema fully documents the single parameter. However, there is no output schema, so the description partially compensates by listing output components, though it lacks details on return format or structure. This is adequate but not fully complete.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the parameter 'country_code' fully documented in the schema. The description does not add any parameter-specific semantics beyond what the schema provides, such as explaining how the country code affects the weather score calculation. Baseline 3 is appropriate given high schema coverage.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get') and resource ('VanSky vanlife weather suitability score') with detailed output components (van_score, sleep_score, etc.). It distinguishes from sibling tools like 'compare_fuel_prices' or 'get_currency_rate' by focusing exclusively on weather suitability scoring for vanlife.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for vanlife weather assessment but does not explicitly state when to use this tool versus alternatives like 'list_vansky_top' or 'search_stories'. It provides context (country-based scoring) but lacks explicit exclusions or comparisons to sibling tools.
list_events - List Vanlife Events (Grade A, Read-only, Idempotent)
List vanlife events: expos (Caravan Salon), festivals, meetups, forums, road trips. Filter by status, type, country, or free-text search.
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | Event type filter. | |
| limit | No | | |
| locale | No | Language for localized fields. | en |
| search | No | Free-text search in event name. | |
| status | No | Event time status. | upcoming |
| country | No | ISO 3166-1 alpha-2 country code. | |
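The filters above can be combined in a single call. As a sketch, a standard MCP `tools/call` request invoking `list_events` might look like the following (the envelope follows the MCP JSON-RPC convention; the argument values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "list_events",
    "arguments": {
      "status": "upcoming",
      "country": "DE",
      "search": "Caravan Salon",
      "limit": 5
    }
  }
}
```

Omitted parameters fall back to their defaults, so `status` could be dropped here with the same effect.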
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds useful context by specifying the types of events included (expos, festivals, etc.) and filtering capabilities, which enhances understanding beyond annotations without contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the purpose and lists filter options without redundancy. Every word contributes to clarity, making it appropriately sized and well-structured for quick comprehension.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (6 parameters, no output schema) and rich annotations, the description is mostly complete. It covers the resource scope and filtering options but lacks details on the output format (e.g., pagination, result structure), which would be helpful given the absence of an output schema; this leaves a minor gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is high (83%), with most parameters well-documented in the schema (e.g., 'type' enum values, 'locale' options). The description mentions filter categories (status, type, country, free-text search) but doesn't add significant semantic details beyond what the schema provides, aligning with the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List' and resource 'vanlife events' with specific examples (expos, festivals, meetups, forums, road trips). It distinguishes from siblings like 'get_event' (singular) and 'search_stories' (different resource), establishing a clear scope for retrieving multiple events.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for usage by listing filter options (status, type, country, free-text search), which helps the agent understand when to apply this tool. However, it lacks explicit guidance on when to use alternatives like 'get_event' (for single events) or 'search_stories' (for non-event content), missing full sibling differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_vansky_top: List Top VanSky Countries (Read-only, Idempotent)
List the top N countries with the highest VanSky van-travel suitability score today.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | How many top-scoring countries to return. | |
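Since `limit` is the only parameter, a minimal `tools/call` request for this tool is correspondingly small (a sketch; the JSON-RPC envelope follows the standard MCP shape, and the value 10 is illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "list_vansky_top",
    "arguments": { "limit": 10 }
  }
}
```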
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, open-world, idempotent, and non-destructive behavior. The description adds valuable context by specifying 'today' (temporal scope) and 'top N' (ranking logic), which aren't covered by annotations. No contradictions exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose without unnecessary words. Every element ('List', 'top N countries', 'VanSky van-travel suitability score', 'today') contributes directly to understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple single parameter, comprehensive annotations, and clear purpose, the description is largely complete. However, without an output schema, it could briefly hint at the return format (e.g., 'returns a ranked list of countries') to compensate for the missing schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the parameter 'limit' is fully documented in the schema. The description mentions 'top N' but adds no additional semantic details beyond what the schema provides, meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('List'), the resource ('top N countries'), and the specific metric ('highest VanSky van-travel suitability score today'). It distinguishes from siblings by focusing on country rankings rather than fuel prices, events, or other travel data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when seeking top-ranked countries for van travel suitability, but provides no explicit guidance on when to choose this tool over alternatives like 'get_vansky_weather' or 'list_events'. No exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_stories: Search Vanlife News (Read-only, Idempotent)
Search aggregated vanlife news stories (7 languages, 400+ sources). Filter by search query, category, country, locale.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| locale | No | | en |
| search | No | Full-text search in story title. | |
| country | No | ISO 3166-1 alpha-2 country code. | |
| category | No | Category slug, e.g. camping, travel, gear, festival, industry. | |
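A combined-filter call might look like this sketch (same MCP `tools/call` envelope; the `category` slug is taken from the examples above, and the other values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "search_stories",
    "arguments": {
      "category": "festival",
      "country": "DE",
      "locale": "en",
      "limit": 3
    }
  }
}
```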
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, indicating a safe, non-destructive, and repeatable read operation. The description adds valuable context beyond this: it specifies the scope ('aggregated vanlife news stories'), language support ('7 languages'), and source coverage ('400+ sources'), which helps the agent understand the tool's capabilities and data richness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose and key details without unnecessary words. Every part ('Search aggregated vanlife news stories', '7 languages, 400+ sources', 'Filter by...') contributes directly to understanding the tool's function and scope.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (5 parameters, no output schema) and rich annotations, the description provides good context: it covers purpose, scope, and filtering capabilities. However, it lacks details on output format (e.g., what fields are returned) or pagination behavior, which would be helpful since there's no output schema. Still, it's mostly complete for a search tool with clear annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 60% (3 out of 5 parameters have descriptions: 'search', 'country', 'category'), with 'limit' and 'locale' lacking descriptions. The description lists filter types ('search query, category, country, locale'), which partially aligns with parameters but doesn't add detailed semantics beyond the schema. For example, it doesn't explain 'locale' options or 'limit' usage. Baseline 3 is appropriate as the schema covers most parameters adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search aggregated vanlife news stories') and resource ('news stories'), with additional scope details ('7 languages, 400+ sources'). It distinguishes from sibling tools like 'list_events' or 'get_vansky_weather' by focusing on news content rather than events, weather, or pricing data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage through the filter parameters listed ('Filter by search query, category, country, locale'), suggesting it's for finding specific news content. However, it doesn't explicitly state when to use this tool versus alternatives like 'list_events' for events or 'get_vansky_weather' for weather data, nor does it provide exclusions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.