outdoorithm
Server Details
Campground discovery, availability, planning, and booking handoffs across US public lands.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.4/5 across 18 of 18 tools scored. Lowest: 3.9/5.
Each tool targets a distinct camping-related task (search, availability, weather, gear, alerts, etc.) with clear descriptions that prevent confusion. Even similar tools like check_availability and create_alert are differentiated by real-time checking vs. ongoing monitoring.
All tool names follow a consistent verb_noun pattern in snake_case (e.g., build_packing_list, check_availability, search_campgrounds). The only exception is health_check, which is a common convention and does not disrupt the overall pattern.
With 18 tools, the server covers a broad range of camping trip planning needs without being overwhelming. Each tool serves a clear purpose and the count is appropriate for the domain's complexity.
The tool set covers the full workflow from discovery (search, details, similar), preparation (packing list, gear, weather, drive times), booking (availability, alerts, reservation), and safety (check_safety). The inclusion of query_database allows advanced queries, and health_check ensures operational awareness.
Available Tools
18 tools

build_packing_list (Build Packing List) [Read-only]
Generate a context-aware packing checklist for a camping trip.
Returns categorized items from the gear catalog, adjusted for campground amenities, activities, weather, and group size.
Args:
- campground_id: Optional CUID to personalize based on campground amenities
- trip_type: car_camping, backpacking, rv, or glamping (default car_camping)
- season: spring, summer, fall, or winter (optional)
- adults: Number of adults (default 2)
- children: Number of children (default 0)
- has_pets: Whether bringing pets (default false)
- activities: Planned activities (hiking, fishing, swimming, kayaking, biking, beach)
- nights: Number of nights (default 2)
| Name | Required | Description | Default |
|---|---|---|---|
| adults | No | Number of adults | 2 |
| nights | No | Number of nights | 2 |
| season | No | spring, summer, fall, or winter | |
| children | No | Number of children | 0 |
| has_pets | No | Whether bringing pets | false |
| trip_type | No | car_camping, backpacking, rv, or glamping | car_camping |
| activities | No | Planned activities (hiking, fishing, swimming, kayaking, biking, beach) | |
| campground_id | No | Optional CUID to personalize based on campground amenities | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
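As a sketch only, a build_packing_list call might carry arguments like the following, using the parameter names, defaults, and allowed values documented above (the campground CUID is a placeholder):

```python
# Hypothetical arguments for a build_packing_list tool call.
# Values mirror the Args documentation; nothing here is a real booking.
VALID_TRIP_TYPES = {"car_camping", "backpacking", "rv", "glamping"}

args = {
    "trip_type": "backpacking",           # default would be car_camping
    "season": "summer",
    "adults": 2,
    "children": 1,
    "has_pets": False,
    "activities": ["hiking", "swimming"],
    "nights": 3,
    # "campground_id" omitted: the checklist is then not personalized
}

# Local sanity check mirroring the documented enum.
assert args["trip_type"] in VALID_TRIP_TYPES
```

Omitting optional fields falls back to the documented defaults, so a minimal call needs no arguments at all.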
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already mark readOnlyHint=true and destructiveHint=false. The description adds context about return format (categorized items) and adjustments (amenities, activities, weather, group size), which is valuable beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with a clear purpose sentence, followed by a brief output description and then a well-organized parameter list. Every sentence adds value with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 8 parameters, 0% schema description coverage, and an output schema present, the description covers all the context an agent needs. It explains the adjustments and return format, leaving no gaps for typical usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, so description fully compensates by explaining each parameter with meanings, defaults, and allowed values (e.g., 'trip_type: car_camping, backpacking, rv, or glamping (default car_camping)'). This is essential for correct invocation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with 'Generate a context-aware packing checklist for a camping trip', a specific verb applied to a specific resource. It distinguishes itself from siblings by focusing on packing-checklist generation rather than individual gear retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for camping trip packing but does not explicitly mention when to avoid or compare with siblings like 'get_gear_for_campground'. Lacks explicit exclusions or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
calculate_drive_times (Calculate Drive Times) [Read-only]
Calculate realistic drive times from an origin to multiple campgrounds using Mapbox routing with live traffic.
Uses the Mapbox Matrix API for traffic-aware routing. Maximum 9 destinations per request.
Args:
- origin_latitude: Latitude of starting location
- origin_longitude: Longitude of starting location
- destinations: List of {campground_id: str, latitude: float, longitude: float} (max 9)
| Name | Required | Description | Default |
|---|---|---|---|
| destinations | Yes | List of {campground_id, latitude, longitude} objects (max 9) | |
| origin_latitude | Yes | Latitude of starting location | |
| origin_longitude | Yes | Longitude of starting location | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
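A hedged sketch of calculate_drive_times arguments; the campground CUIDs and coordinates are placeholders, and the 9-destination cap comes from the Mapbox Matrix API limit noted above:

```python
# Hypothetical arguments for a calculate_drive_times tool call.
origin = {"origin_latitude": 37.7749, "origin_longitude": -122.4194}

destinations = [
    {"campground_id": "RecreationDotGov:232447:2991",
     "latitude": 38.50, "longitude": -122.80},
    {"campground_id": "ReserveCalifornia:uuid:725",
     "latitude": 38.93, "longitude": -123.72},
]

# The Matrix API allows at most 9 destinations per request.
assert len(destinations) <= 9

args = {**origin, "destinations": destinations}
```

Batching more than 9 campgrounds would require splitting into multiple calls.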
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond the readOnlyHint and destructiveHint annotations, the description discloses use of Mapbox Matrix API, traffic-awareness, and a maximum of 9 destinations per request. These details add meaningful behavioral context about limitations and external dependencies.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise – three short paragraphs – with the main purpose front-loaded, followed by API details and parameter definitions. No redundant information; each sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the existence of an output schema, the description adequately covers key contextual aspects: origin, multiple destinations, max 9, traffic-aware routing. Minor gap: it could mention the unit of returned times (e.g., minutes), but this is likely provided in the output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by listing each argument with clear semantics: 'origin_latitude: Latitude of starting location', etc., and specifying the expected structure of destinations as objects with campground_id, latitude, and longitude, which is critical since the schema has additionalProperties: true.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'Calculate realistic drive times from an origin to multiple campgrounds using Mapbox routing with live traffic', which clearly defines the specific verb and resource. It differentiates from sibling tools like search_campgrounds or get_campground_details by focusing on drive time calculation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide guidance on when to use this tool versus alternatives like search_campgrounds or get_weather. No explicit 'when to use' or 'when not to use' information is given, limiting its utility for an AI agent choosing among sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_availability (Check Availability) [Read-only]
Check real-time campground availability for specific dates.
Queries the camply-service to check live campsite availability against reservation systems. Supports 30+ providers including RecreationDotGov, ReserveCalifornia, state parks, and county parks.
Args:
- campground_id: Campground CUID (e.g., "ReserveCalifornia:uuid:725")
- start_date: Check-in date in YYYY-MM-DD format
- end_date: Check-out date in YYYY-MM-DD format
- min_nights: Minimum consecutive nights required (1-7, default 1)
| Name | Required | Description | Default |
|---|---|---|---|
| end_date | Yes | Check-out date (YYYY-MM-DD) | |
| min_nights | No | Minimum consecutive nights required (1-7) | 1 |
| start_date | Yes | Check-in date (YYYY-MM-DD) | |
| campground_id | Yes | Campground CUID | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
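An illustrative check_availability payload, with local checks that mirror the documented date format and min_nights range (the CUID is the example value from the Args above):

```python
from datetime import date

# Hypothetical arguments for a check_availability tool call.
args = {
    "campground_id": "ReserveCalifornia:uuid:725",
    "start_date": "2025-07-04",
    "end_date": "2025-07-06",
    "min_nights": 2,
}

# Local sanity checks mirroring the documented constraints:
# dates must be YYYY-MM-DD, check-in before check-out, min_nights in 1-7.
start = date.fromisoformat(args["start_date"])
end = date.fromisoformat(args["end_date"])
assert start < end
assert 1 <= args["min_nights"] <= 7
```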
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so the description's main job is to add behavioral context. It explains that the tool queries 'live campsite availability against reservation systems' and supports '30+ providers', which gives the agent useful insight into the source and nature of the data. There is no contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with a title line, a brief paragraph explaining the tool's function and provider coverage, and a clear Args section. It is concise (about 5 sentences) and front-loaded with the purpose. Every sentence contributes meaning, though it could be slightly more streamlined.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema (not shown), the description does not need to explain return values. It covers all input parameters adequately and provides important context about the service (camply-service) and providers. It does not mention potential latency or connectivity requirements, but for a straightforward check tool, it is sufficiently complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by providing detailed explanations for each parameter in the Args section. It includes the format of campground_id (e.g., 'ReserveCalifornia:uuid:725'), date formats, and the range and default for min_nights (1-7, default 1). This adds significant meaning beyond the raw schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description begins with a clear verb+resource statement ('Check real-time campground availability for specific dates'), and further elaborates by specifying the service (camply-service) and the supported providers (30+ including RecreationDotGov, ReserveCalifornia). Although it does not explicitly distinguish from sibling tools, the unique action of checking live availability against reservation systems sets it apart from other camping-related tools like get_campground_details or search_campgrounds.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description clearly states what the tool does and when to use it (for checking real-time availability for specific dates). However, it does not provide explicit guidance on when not to use it or mention alternative tools. The context is clear, but exclusions are missing, so it scores a 4.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_safety (Check Safety) [Read-only]
Check campgrounds for safety issues, community atmosphere, and active alerts.
Checks:
- Greenbook community safety (vibe scores, discrimination reports, flag status)
- Active campground alerts (closures, restrictions) from enriched data
- Red flag detection (prevents recommending unsafe campgrounds)
Args:
- campground_ids: List of campground CUIDs to check
| Name | Required | Description | Default |
|---|---|---|---|
| campground_ids | Yes | List of campground CUIDs to check | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint and no destructiveness. Description adds value by detailing the specific safety checks performed (vibe scores, discrimination reports, flag status), including the nuance of preventing unsafe recommendations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is succinct with a clear header, bulleted checks, and an Args section. Every sentence is purposeful, no fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the output schema exists, explanation of return values is unnecessary. The description covers the tool's scope and key behaviors adequately for a straightforward safety check tool. Minor gap: does not mention if it checks only public or all campgrounds.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so description must compensate. It merely restates the parameter name and type as 'List of campground CUIDs to check', adding minimal meaning beyond the input schema. No details on format, constraints, or behavior for invalid IDs.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Check campgrounds for safety issues' with specific checks (Greenbook community safety, active alerts, red flag detection). Clearly distinguishes from sibling tools like check_availability or get_campground_details.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description lists what is checked but provides no explicit guidance on when to use this tool versus alternatives like search_campgrounds or get_campground_details. Context is implied (safety assessment) but no when-not-to-use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_gear (Compare Gear) [Read-only]
Compare 2-3 gear items side-by-side with specs, pros/cons, verdicts, and comparison summary.
Supports lookup by unique_id with slug fallback. Use search_gear first if the user hasn't named specific products.
Args:
- gear_ids: List of 2-3 gear item identifiers (unique_id or slug)
| Name | Required | Description | Default |
|---|---|---|---|
| gear_ids | Yes | List of 2-3 gear item identifiers (unique_id or slug) | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
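A minimal compare_gear sketch; the slugs below are invented for illustration, and the only hard constraint is the documented 2-3 item count:

```python
# Hypothetical arguments for a compare_gear tool call.
# The slugs are made-up examples, not real catalog entries.
gear_ids = ["big-agnes-copper-spur-hv-ul2", "msr-hubba-hubba-nx2"]

# The tool accepts exactly 2-3 identifiers (unique_id or slug).
assert 2 <= len(gear_ids) <= 3

args = {"gear_ids": gear_ids}
```

Per the description, an agent should call search_gear first when the user has not named specific products.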
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so the tool's safety is clear. The description adds value by explaining the comparison behavior, output contents (specs, pros/cons, verdicts, summary), and identifier resolution, going beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the main purpose in the first sentence, and all additional information (identifier support, usage guidance) is succinct and relevant. No redundant language.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the existence of an output schema, the description need not detail return values. It covers the tool's function, input constraints, identifier resolution, and when to use alternatives, making it complete for effective agent selection and invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The sole parameter 'gear_ids' is described as 'List of 2-3 gear item identifiers (unique_id or slug)', adding semantic meaning beyond the schema's array of strings. It also constrains the count (2-3) and identifier types, compensating for 0% schema description coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Compare 2-3 gear items side-by-side with specs, pros/cons, verdicts, and comparison summary', specifying the verb (compare) and resource (gear items). It distinguishes from the sibling tool 'search_gear' which is for finding products, not comparing them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Use search_gear first if the user hasn't named specific products', providing clear guidance on when to use this tool vs the alternative. It also explains identifier resolution: 'Supports lookup by unique_id with slug fallback'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_alert (Create Alert)
Create a campsite availability alert that monitors for openings and notifies the user.
Requires an Outdoorithm API key (generate at outdoorithm.com/dashboard/api-keys). The alert runs continuously, checking every 2-15 minutes depending on subscription tier.
Args:
- api_key: User's Outdoorithm API key from their dashboard settings.
- campground_ids: List of campground CUIDs to monitor. All must use the same reservation provider. Get CUIDs from search_campgrounds or get_campground_details. Example: ["RecreationDotGov:232447:2991"]
- start_date: Earliest check-in date to monitor (YYYY-MM-DD). Required for all date types.
- end_date: Latest check-out date (YYYY-MM-DD). Required when date_type is "range".
- date_type: How dates are interpreted. "range" monitors a specific date window (default), "perpetual" monitors a rolling 6 months from start_date, "specific_months" monitors only certain months each year.
- specific_months: Month numbers 1-12 to monitor. Required when date_type is "specific_months".
- min_nights: Minimum consecutive nights needed (1-14, default 1).
- max_nights: Maximum consecutive nights (1-14, must be >= min_nights). Omit for no max.
- days_of_week: Preferred check-in days as integers (0=Monday through 6=Sunday). Omit for any day.
- notify_email: Send email when availability found (default true).
- notify_sms: Send SMS when availability found (default false, requires phone on account).
- name: Display name for this alert. Auto-generated from campground and dates if omitted.
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | Display name; auto-generated if omitted | |
| api_key | Yes | User's Outdoorithm API key | |
| end_date | No | Latest check-out date (YYYY-MM-DD); required when date_type is "range" | |
| date_type | No | "range", "perpetual", or "specific_months" | range |
| max_nights | No | Maximum consecutive nights (1-14, >= min_nights) | |
| min_nights | No | Minimum consecutive nights (1-14) | 1 |
| notify_sms | No | Send SMS when availability found | false |
| start_date | Yes | Earliest check-in date (YYYY-MM-DD) | |
| days_of_week | No | Preferred check-in days (0=Monday through 6=Sunday) | |
| notify_email | No | Send email when availability found | true |
| campground_ids | Yes | CUIDs to monitor; all must share one provider | |
| specific_months | No | Month numbers 1-12; required when date_type is "specific_months" | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
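A hedged sketch of a create_alert call illustrating the cross-field rules in the Args above; the API key is a placeholder and the CUID is the example value from the documentation:

```python
# Hypothetical arguments for a create_alert tool call.
args = {
    "api_key": "odk_xxxxxxxx",                          # placeholder, not a real key
    "campground_ids": ["RecreationDotGov:232447:2991"],
    "start_date": "2025-06-01",
    "end_date": "2025-08-31",
    "date_type": "range",        # end_date is required for "range"
    "min_nights": 2,
    "max_nights": 4,             # must be >= min_nights
    "days_of_week": [4, 5],      # Friday and Saturday check-ins
    "notify_email": True,
}

# Cross-field rules from the Args section above.
if args["date_type"] == "range":
    assert "end_date" in args
if args["date_type"] == "specific_months":
    assert "specific_months" in args
assert args.get("max_nights", args["min_nights"]) >= args["min_nights"]
```

Switching date_type to "perpetual" would drop end_date and monitor a rolling 6-month window from start_date instead.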
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false and destructiveHint=false. The description adds behavioral context beyond annotations, such as continuous monitoring at intervals, which is useful for an AI agent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with a clear purpose statement, a prerequisite note, the checking frequency, and a clean Args block. Minor redundancy, but overall efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers creation context, prerequisites, parameter behaviors, and alert lifecycle (continuous monitoring). With an output schema present, return value details are unnecessary, so completeness is high.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 0% schema description coverage, the description provides thorough explanations for all 12 parameters in the 'Args' section, adding meaning beyond the input schema's property definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Create', resource 'campsite availability alert', and purpose 'monitors for openings and notifies the user'. It differentiates from sibling tools like list_alerts and delete_alert.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains prerequisites (API key, CUIDs from sibling tools) and parameter details, but lacks explicit guidance on when not to use this tool versus alternatives like check_availability for one-time checks.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_alert (Delete Alert) [Destructive]
Permanently delete a campsite availability alert.
This cannot be undone. All associated notification history will also be deleted. Consider using toggle_alert to pause instead of deleting.
Requires an Outdoorithm API key (generate at outdoorithm.com/dashboard/api-keys).
Args:
- api_key: User's Outdoorithm API key from their dashboard settings.
- alert_id: UUID of the alert to delete. Get this from list_alerts.
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | User's Outdoorithm API key | |
| alert_id | Yes | UUID of the alert to delete (from list_alerts) | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate destructiveHint=true. The description adds crucial context: 'This cannot be undone' and 'All associated notification history will also be deleted,' exceeding annotation info.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Concise: two-line purpose, warning, sibling suggestion, then parameter docs. Every sentence adds value with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Fully covers purpose, usage, behavioral implications, and parameter meaning. Output schema exists so return values are not needed. Completeness is high.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage, so the description fully explains both parameters with actionable details: API key source and instruction to get alert_id from list_alerts.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool deletes a campsite availability alert. It uses a specific verb (delete) and resource (alert), and distinguishes from sibling toggle_alert by noting it's permanent.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly advises when to use toggle_alert instead for pausing, providing clear usage boundaries. Also mentions the requirement for an API key.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
find_similar_campgrounds (Find Similar Campgrounds) [Read-only]
Find campgrounds similar to a given campground using vector embedding similarity.
Uses pre-computed embeddings combining review text (semantic) and structured attributes (amenities, terrain, activities) for nuanced similarity matching.
Args:
- campground_id: CUID of the target campground to find similar ones for
- result_limit: Number of results (1-50, default 20)
- max_distance_km: Optional max distance in km (omit for nationwide search)
- semantic_weight: Weight for text/review similarity (0-1, default 0.6)
- structured_weight: Weight for attribute similarity (0-1, default 0.4)
| Name | Required | Description | Default |
|---|---|---|---|
| result_limit | No | Number of results (1-50) | 20 |
| campground_id | Yes | CUID of the target campground | |
| max_distance_km | No | Max distance in km (omit for nationwide search) | |
| semantic_weight | No | Weight for text/review similarity (0-1) | 0.6 |
| structured_weight | No | Weight for attribute similarity (0-1) | 0.4 |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
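A sketch of find_similar_campgrounds arguments under the documented ranges and defaults (the CUID is the example value used elsewhere on this page):

```python
# Hypothetical arguments for a find_similar_campgrounds tool call.
args = {
    "campground_id": "RecreationDotGov:232447:1074",
    "result_limit": 10,             # 1-50, default 20
    "max_distance_km": 250,         # omit for a nationwide search
    "semantic_weight": 0.7,         # default 0.6
    "structured_weight": 0.3,       # default 0.4
}

# Local sanity checks mirroring the documented ranges.
assert 1 <= args["result_limit"] <= 50
assert 0 <= args["semantic_weight"] <= 1
assert 0 <= args["structured_weight"] <= 1
```

Raising semantic_weight biases results toward review-text similarity; raising structured_weight biases toward shared amenities, terrain, and activities.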
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false. The description adds transparency by revealing the use of pre-computed embeddings, the combination of semantic and structured attributes, and the role of weight parameters. This provides behavioral context beyond the annotations, though it could mention any rate limits or performance implications.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and well-structured, with a clear summary line followed by an Args section listing parameters. Every sentence adds value, and there is no wasted text. It fits within a few lines while being informative.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 5 parameters (one required) and an output schema exists, the description covers the core purpose, parameters, and algorithm. It does not explicitly describe the return format, but the output schema fills that gap. It is complete enough for an AI to select and invoke correctly, lacking only a brief mention of the output structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 0% description coverage, so the description fully compensates by explaining each parameter: campground_id, result_limit, max_distance_km, semantic_weight, structured_weight. It provides meaning, defaults, and constraints (e.g., max_distance_km optional, weight ranges). This is excellent parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool finds campgrounds similar to a given one using vector embedding similarity. It specifies the verb 'find', resource 'campgrounds similar to a given campground', and the method, distinguishing it from sibling tools like search_campgrounds which likely use keyword search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides details on how similarity works (semantic and structured weights) and parameter guidance, but does not explicitly state when to use this tool versus alternatives like search_campgrounds. It implies usage for similarity-based recommendations but lacks explicit when-not-to-use or alternative comparisons.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_campground_detailsGet Campground DetailsARead-onlyInspect
Get comprehensive details for a specific campground including amenities, reviews, safety, and seasonal ratings.
Returns ~80 curated fields (not the raw 214-column dump) including:
- Identity, location, fees, site info
- All amenity and activity booleans + descriptions
- Review summaries, best campsites, common complaints, tips
- Greenbook community safety (vibe score, flag status)
- Seasonal LLM ratings (spring/summer/fall/winter)
- Sentiment trend direction
- Hero image with credits
Args: campground_id: Unique campground identifier (CUID format, e.g., "RecreationDotGov:232447:1074")
| Name | Required | Description | Default |
|---|---|---|---|
| campground_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
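The campground_id examples in the description above ("RecreationDotGov:232447:1074") suggest a colon-delimited provider-plus-identifier shape. A minimal client-side sanity check is sketched below; the colon-delimited format is inferred from the documented examples rather than stated as a contract, so the server remains the authority on valid IDs.

```python
def looks_like_campground_cuid(value: str) -> bool:
    """Loose shape check for IDs like 'RecreationDotGov:232447:1074'.

    The colon-delimited segments are inferred from the documented
    examples; this only catches obviously malformed IDs early.
    """
    parts = value.split(":")
    return len(parts) >= 2 and all(part != "" for part in parts)
```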
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so safety is clear. The description adds behavioral context about curated output (80 fields vs raw 214), data categories, and parameter format, enhancing transparency beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with purpose and uses bullet points for field categories, improving readability. However, it is somewhat lengthy and could be more concise without loss of clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of the output (80 curated fields), the description adequately summarizes what is returned and includes parameter details. An output schema exists, so return values are covered elsewhere. Slight gap: no mention of error handling or of whether campground_id is validated.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, but the description compensates with a clear 'Args:' section explaining campground_id format (CUID, example provided). This adds meaning beyond the bare schema type.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly uses verb 'get' and resource 'campground details', and contrasts with sibling tools like search_campgrounds and check_availability by specifying it returns comprehensive details for a specific campground.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description implies usage for detailed lookups, it lacks explicit guidance on when to use this tool versus alternatives (e.g., search_campgrounds for lists, check_availability for bookings). No 'when not to use' or missing prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_gear_for_campgroundGet Gear for CampgroundARead-onlyInspect
Get personalized gear recommendations for a specific campground.
Returns pre-computed recommendations across slot types: essential (must-have), activity (activity-matched), comfort (nice-to-have), amenity_gap (compensates for missing amenities). Each includes product details and purchase links.
Args:
campground_id: Campground CUID
slot_type: Optional filter — one of essential/activity/comfort/amenity_gap
| Name | Required | Description | Default |
|---|---|---|---|
| slot_type | No | | |
| campground_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and destructiveHint=false, which the description respects. Beyond that, the description adds behavioral detail: recommendations are pre-computed, slot types are categories (essential, activity, comfort, amenity_gap), and each includes product details and purchase links. This provides useful transparency beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded with the purpose. It uses a brief header, then lists slot types, and ends with parameter documentation. Every sentence serves a purpose, though the Args section could be integrated more smoothly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema and two parameters, the description covers the key aspects: what it returns (with slot types and product details) and the parameters. It does not address pagination or edge cases, but for a recommendation look-up, this is adequate and complete enough for an agent to select and invoke the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description carries the full burden of explaining parameters. It documents both parameters: campground_id (Campground CUID) and slot_type (optional filter with specific allowed values). This adds essential meaning, though it omits data types and format details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's action: 'Get personalized gear recommendations for a specific campground.' It uses a specific verb ('get') and resource ('gear recommendations'), and the campground focus distinguishes it from sibling tools like search_gear or build_packing_list.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains what the tool returns and mentions an optional filter, but it does not provide explicit guidance on when to use this tool versus alternatives (e.g., search_gear for general gear search, build_packing_list for list creation). Usage context is implied but not stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_weatherGet WeatherARead-onlyInspect
Get camping-specific 7-day weather forecast with suitability ratings and gear recommendations.
Uses the Open-Meteo API (no key required) to provide:
- Daily high/low temps, conditions, precipitation, wind
- Camping suitability rating per day (excellent/good/fair/poor/not_recommended)
- Gear recommendations based on conditions
- Safety warnings for extreme weather
- Best camping days in the forecast period
Args:
latitude: Campground latitude
longitude: Campground longitude
campground_name: Campground name for context (optional)
check_in_date: Planned check-in date, ISO format (optional)
check_out_date: Planned check-out date, ISO format (optional)
| Name | Required | Description | Default |
|---|---|---|---|
| latitude | Yes | | |
| longitude | Yes | | |
| check_in_date | No | | |
| check_out_date | No | | |
| campground_name | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
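The get_weather description above documents two required coordinates and three optional string parameters, with ISO-format dates. A small argument-builder sketch follows; the coordinate-bounds and ISO-date checks are generic client-side sanity checks, not validation rules the server documents.

```python
from datetime import date


def build_weather_args(latitude, longitude, campground_name=None,
                       check_in_date=None, check_out_date=None):
    """Assemble get_weather arguments, checking the documented shapes.

    Coordinate bounds and ISO-date parsing are generic sanity checks,
    not server-documented validation rules.
    """
    if not (-90.0 <= latitude <= 90.0) or not (-180.0 <= longitude <= 180.0):
        raise ValueError("latitude/longitude out of range")
    args = {"latitude": latitude, "longitude": longitude}
    if campground_name is not None:
        args["campground_name"] = campground_name
    for key, value in (("check_in_date", check_in_date),
                       ("check_out_date", check_out_date)):
        if value is not None:
            date.fromisoformat(value)  # raises ValueError for non-ISO input
            args[key] = value
    return args
```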
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so the description adds value by detailing the data source (Open-Meteo), output components (ratings, gear, warnings), and that no API key is needed. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with a clear lead sentence, bullet points for output details, and a separate args section. Every sentence adds value, and the format is easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema, the description's outline of return values (daily temps, conditions, ratings, gear, warnings, best days) is sufficient. All five parameters are explained, and the context about the API makes it self-contained.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by listing all five parameters with their roles: latitude/longitude as required coordinates, and optional campground_name, check_in_date, check_out_date. This adds essential meaning beyond the bare schema types.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool gets a camping-specific 7-day weather forecast with suitability ratings and gear recommendations. It uses a specific verb ('get') and resource ('weather'), distinguishing it from siblings like 'check_safety' which may have different focus.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains the tool uses the Open-Meteo API with no key required, providing clear context on how to use it. While it doesn't explicitly state when not to use it or list alternatives, the purpose is clear enough for most agents to decide appropriately.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
health_checkHealth CheckARead-onlyInspect
Check server health and connectivity.
Returns: Dictionary with health status including:
- status: "healthy" or "unhealthy"
- version: Server version
- environment: Current environment (dev/staging/prod)
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint, so the description adds value by detailing the return format (status, version, environment). This discloses behavioral traits beyond what annotations alone convey.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences plus a bullet list, no redundant information. Front-loaded with the purpose. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters and an output schema (implied by description), the description fully covers purpose and output. No missing elements for a simple health check.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so baseline is 4. The description doesn't need to add parameter information, and it correctly omits it.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'Check server health and connectivity' with specific verb and resource. Differentiates from sibling tools which are about camping/gear, making its purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. However, for a generic health check, usage is implied as a first step or as connectivity verification. Score 3 because of the lack of explicit context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_alertsList AlertsARead-onlyInspect
List all campsite availability alerts for an Outdoorithm user.
Requires an Outdoorithm API key (generate at outdoorithm.com/dashboard/api-keys).
Args:
api_key: User's Outdoorithm API key from their dashboard settings.
status: Filter by alert status. One of: "active", "paused", "expired", "error", or "permanent_error". Omit to return all alerts.
| Name | Required | Description | Default |
|---|---|---|---|
| status | No | | |
| api_key | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
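The list_alerts description enumerates the exact set of valid status values and notes that omitting status returns all alerts. That contract can be mirrored client-side with a small builder; the helper name is illustrative, not part of the server's API.

```python
VALID_ALERT_STATUSES = {"active", "paused", "expired", "error", "permanent_error"}


def build_list_alerts_args(api_key, status=None):
    """Arguments for list_alerts; omitting status returns all alerts."""
    if not api_key:
        raise ValueError("an Outdoorithm API key is required")
    if status is not None and status not in VALID_ALERT_STATUSES:
        raise ValueError(
            "status must be one of: " + ", ".join(sorted(VALID_ALERT_STATUSES)))
    args = {"api_key": api_key}
    if status is not None:
        args["status"] = status
    return args
```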
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already set readOnlyHint=true, so the description adds value by detailing the authentication requirement (an API key) and the allowed status values, but it does not mention rate limits, and the behavior when no status is provided is only implied (all alerts are returned).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Every sentence adds essential information: purpose first, then authentication, then each parameter. No extraneous text; the structure is front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema, the description need not explain return values. It covers input parameters and auth. However, it lacks mention of pagination or potential limits, which would be helpful for a list operation, though not critical.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates: it clarifies the api_key source ('from their dashboard settings') and lists the exact valid status values, which the schema only defines as a generic string.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description explicitly states 'List all campsite availability alerts for an Outdoorithm user'—a clear verb+resource combination that distinguishes from sibling tools like create_alert, delete_alert, and toggle_alert.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context that the tool lists alerts and requires an API key, but does not explicitly mention when not to use it or compare it with alternatives like search_campgrounds or other listing tools, though the purpose alone suggests it's for reading alerts.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
prepare_reservationPrepare ReservationARead-onlyInspect
Prepare a one-tap booking handoff for the user's chosen campground/dates.
Returns a pre-filled deep link to the operator's reservation page plus the booking-window context (release date/time, ToS-compliant guidance, alert suggestion) the agent needs to advise the user. Does NOT book on behalf — third-party booking is prohibited by Recreation.gov, ReserveCalifornia, ReserveAmerica, and every other supported public-land operator.
Pair with check_availability first to confirm the dates are reservable and to surface site-specific booking_url values when available.
Args:
campground_id: Outdoorithm CUID (e.g. RecreationDotGov:232447).
start_date: Check-in date (YYYY-MM-DD).
end_date: Check-out date (YYYY-MM-DD).
party_size: Optional group size. Surfaced in the user-facing summary;
most operators don't accept this in URL params, so it isn't
embedded in the deep link.
| Name | Required | Description | Default |
|---|---|---|---|
| end_date | Yes | | |
| party_size | No | | |
| start_date | Yes | | |
| campground_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
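The prepare_reservation Args above pin down the date format (YYYY-MM-DD) and note that party_size is surfaced only in the summary, not the deep link. A hedged pre-flight sketch of those constraints, assuming only what the description states (the ordering check that end_date must fall after start_date follows from check-in/check-out semantics):

```python
from datetime import date


def build_reservation_args(campground_id, start_date, end_date, party_size=None):
    """Validate YYYY-MM-DD dates and ordering before calling prepare_reservation."""
    check_in = date.fromisoformat(start_date)   # raises ValueError on bad format
    check_out = date.fromisoformat(end_date)
    if check_out <= check_in:
        raise ValueError("end_date must fall after start_date")
    args = {"campground_id": campground_id,
            "start_date": start_date,
            "end_date": end_date}
    if party_size is not None:
        # Surfaced in the user-facing summary; most operators don't accept
        # this in URL params, so it isn't embedded in the deep link.
        args["party_size"] = party_size
    return args
```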
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description aligns with annotations (readOnlyHint=true, destructiveHint=false) and adds key behavioral details: no booking performed, returns deep link and context, and lists supported operators. No contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured, with a summary paragraph, a prohibition note, pairing guidance, and an Args list. It is front-loaded with purpose, and every sentence provides necessary information without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that an output schema exists, the description adds value by detailing the returned content (deep link, booking-window context, guidance). It also includes necessary context about operator policies and about pairing with the sibling tool check_availability.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates via the Args section: it explains the campground_id format with an example, the date format (YYYY-MM-DD), and party_size's optionality and behavior, adding meaning beyond the schema types.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly specifies the tool prepares a one-tap booking handoff, returning a pre-filled deep link and booking-window context. It distinguishes from siblings by explicitly stating it does NOT book on behalf and third-party booking is prohibited, and recommends pairing with check_availability.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance: use check_availability first to confirm reservable dates and surface booking_url. It also states when not to use (for actual booking) and explains the prohibition by multiple operators.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
query_databaseQuery DatabaseARead-onlyInspect
Execute a read-only SQL query against Outdoorithm's campground and gear database.
The query MUST be a SELECT statement with a LIMIT clause (max 50 rows). Only the tables listed below are accessible.
AVAILABLE TABLES AND COLUMNS:
composite_campground_details (PREFERRED — enriched campground view)
Identity: cuid, name, state, nearest_city, rec_area_name, seo_slug, latitude, longitude, elevation
Quality: avg_sentiment (0-5), review_count, google_total_reviews
Sentiment Trends: sentiment_trend_direction ('improving'/'declining'/'stable'), sentiment_trend_change, sentiment_trend_baseline, sentiment_trend_recent
Greenbook Safety: greenbook_vibe_score (0-100), greenbook_flag_status ('ok'/'caution'/'yellow_flag'/'red_flag'), greenbook_discrimination_count, greenbook_review_count
Seasonal (1-5): best_season, summer_score, fall_score, winter_score, spring_score
Experience (1-5 + bool): family_friendly_score, is_family_friendly, solitude_score, is_secluded, adventure_score, is_adventure, relaxation_score, is_relaxation
Activity Scores: stargazing_score, has_stargazing, kayaking_score, has_kayaking
Terrain (1-5 + bool): mountain_score/is_mountain, forest_score/is_forest, lakeside_score/is_lakeside, riverside_score/is_riverside, beach_score/is_beach, desert_score/is_desert, coastal_score/is_coastal
Cell Coverage: cell_coverage_avg, cell_coverage_verizon, cell_coverage_att, cell_coverage_tmobile, cell_coverage_no_signal_pct
Pricing: standard_site_fee, fee_low, fee_high
Amenities (bool): potable_water, water_hookups, electricity_hookups, sewer_hookups, showers, flush_toilets, vault_toilets, dump_station, camp_store, wifi, cell_phone_service, pets_allowed
Camping Types (bool): tent, rv, primitive, group, cabin_lodging, glamping
Activities (bool): hiking, fishing, swimming, boating, biking, wildlife_viewing, climbing, beach_activities
Features (text): accessibility_features, natural_features_and_scenery, proximity_to_water_features, sites_privacy, sites_size, best_campsites
Reviews (text): common_complaints, common_criticisms, positive_highlights, user_reviews_summary, hazards, pet_related_reviews
Practical (text): cell_phone_service_description, weather_and_seasons, open_and_closed_season, tips_and_recommendations, hiking_description, fishing_description, swimming_description
Reservations: reservable (bool), reservation_platforms (text)
campground_dimension_scores
cuid, dimension_key (hiking_quality/fishing_quality/family_friendliness/etc.), raw_score (1-5), state_percentile (0-1), national_percentile (0-1), activity_mention_count, mention_sentiment_avg, confidence, evidence, state, is_applicable
campground_community_scores (Greenbook detail)
cuid, community_vibe_score (0-100), greenbook_flag_status, total_classified_reviews, discrimination_report_count, severe_discrimination_count, high_discrimination_count, staff_discrimination_count, camper_discrimination_count, friendly_staff_count, unfriendly_staff_count, friendly_campers_count, unfriendly_campers_count, safety_concern_mentions, hostile_symbol_mentions
campground_details (LEGACY — prefer composite_campground_details)
cuid, name, state, avg_sentiment (0-5), review_count
gear_items
Product catalog with "Name", "Price", "Manufacturer", "Weight", "ProductOverview", "Keywords", "RecommendationCategory", unique_id, slug
campground_gear_recommendations
campground_cuid, gear_item_id, slot_type, rank, recommendation_reason, match_score
EXAMPLE QUERIES:
SELECT name, state, avg_sentiment FROM composite_campground_details WHERE state = 'CA' AND avg_sentiment >= 4.0 ORDER BY avg_sentiment DESC LIMIT 10
SELECT cuid, dimension_key, raw_score, state_percentile FROM campground_dimension_scores WHERE dimension_key = 'hiking_quality' AND raw_score >= 4.0 ORDER BY state_percentile DESC LIMIT 20
SELECT cuid, community_vibe_score, greenbook_flag_status FROM campground_community_scores WHERE greenbook_flag_status = 'ok' AND community_vibe_score >= 70 ORDER BY community_vibe_score DESC LIMIT 20
Args:
sql_query: A SELECT query with LIMIT clause (max 50 rows)
explanation: Brief description of what you're looking for (for logging)
| Name | Required | Description | Default |
|---|---|---|---|
| sql_query | Yes | | |
| explanation | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
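The query_database description states two hard rules: the query must be a SELECT statement and must carry a LIMIT clause capped at 50 rows. A client can catch violations before spending a round trip; the sketch below is a naive pre-flight check (a regex, not a SQL parser), and the server's own enforcement remains authoritative.

```python
import re


def check_query(sql_query):
    """Pre-flight check for the documented query_database rules:
    SELECT statements only, with a LIMIT clause of at most 50 rows.

    Naive string matching, not a SQL parser; the server enforces
    its own rules and this only catches obvious mistakes early.
    """
    q = sql_query.strip().rstrip(";")
    if not q.upper().startswith("SELECT"):
        raise ValueError("query_database accepts only SELECT statements")
    match = re.search(r"\bLIMIT\s+(\d+)\b", q, re.IGNORECASE)
    if match is None:
        raise ValueError("a LIMIT clause is required")
    if int(match.group(1)) > 50:
        raise ValueError("LIMIT may not exceed 50 rows")
    return q
```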
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description reinforces the read-only nature (consistent with readOnlyHint annotation) and adds critical behavioral details: SQL syntax rules, maximum rows, and exact table/column accessibility. This goes beyond annotations which only indicate read-only and non-destructive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is longer than typical but well-structured with sections (constraints, tables, examples). The length is justified by the database complexity. It front-loads the core constraint (SELECT with LIMIT) before diving into schema details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (multiple tables with many columns) and that an output schema exists (but not shown here), the description is remarkably complete. It covers query rules, accessible data, and examples, leaving no ambiguity about what the tool can do.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds immense value beyond the input schema which only defines 'sql_query' as a string. It specifies the required format (SELECT with LIMIT), lists all tables and columns, and provides example queries. This fully compensates for the 0% schema description coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'Execute a read-only SQL query against Outdoorithm's campground and gear database', clearly identifying the tool's action (query), resource (database), and constraints. It distinguishes itself from sibling tools like search_campgrounds or check_safety by its SQL query nature.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description states that the query must be a SELECT with LIMIT (max 50 rows) and lists accessible tables, providing clear usage constraints. It implicitly suggests use for data retrieval but lacks explicit comparison to siblings. However, the context of sibling tools (e.g., build_packing_list) makes the intended use clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_campgroundsSearch CampgroundsARead-onlyInspect
Search for campgrounds with 40+ filters including location, amenities, activities, and community safety.
Requires at least one location constraint: near_location, latitude+longitude, or states.
Args:
near_location: City, address, or landmark (e.g., "Oakland, CA", "Yosemite")
latitude: Latitude coordinate
longitude: Longitude coordinate
max_distance_miles: Maximum distance from location in miles
states: US state codes to filter by (e.g., ["CA", "OR"])
max_price_per_night: Maximum price filter
min_price_per_night: Minimum price filter
requires_accessibility: Require wheelchair/ADA accessible features
requires_water_hookups: Require water hookups
requires_electric_hookups: Require electric hookups
requires_potable_water: Require potable water
requires_showers: Require shower facilities
requires_flush_toilets: Require flush toilets
requires_dump_station: Require RV dump station
prefers_family_friendly: Prefer family-friendly campgrounds
prefers_secluded: Prefer secluded/private campgrounds
prefers_adventure: Prefer adventure-oriented campgrounds
prefers_relaxation: Prefer relaxation-focused campgrounds
prefers_quiet: Prefer quiet campgrounds
prefers_privacy: Prefer private campsites
prefers_lakefront: Prefer lakefront locations
prefers_oceanfront: Prefer ocean/beach locations
prefers_riverfront: Prefer riverside locations
prefers_mountain: Prefer mountain settings
prefers_forest: Prefer forested settings
prefers_beach: Prefer beach settings
prefers_desert: Prefer desert settings
prefers_stargazing: Prefer dark sky areas
prefers_kayaking: Prefer kayaking access
prefers_hiking: Prefer hiking trails
prefers_fishing: Prefer fishing access
prefers_swimming: Prefer swimming areas
prefers_boating: Prefer boating access
preferred_season: Preferred season (summer/fall/winter/spring)
min_seasonal_score: Minimum seasonal score (1-5)
require_greenbook_safe: Only show campgrounds with OK safety status
min_greenbook_vibe_score: Minimum community vibe score (0-100)
no_discrimination_reports: Exclude campgrounds with discrimination reports
requires_cell_coverage: Require cell phone coverage
preferred_carrier: Preferred carrier (verizon/att/tmobile)
min_cell_coverage_score: Minimum cell coverage score (1-5)
min_review_count: Minimum number of reviews
min_avg_sentiment: Minimum average sentiment (0-5)
sentiment_trending: Filter by trend (improving/stable/declining)
only_improving_sentiment: Only show campgrounds with improving reviews
camping_type: Camping type (tent/rv/primitive/group/cabin/glamping)
allows_pets: Only show pet-friendly campgrounds
limit: Maximum results (1-50, default 20)
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | ||
| states | No | ||
| latitude | No | ||
| longitude | No | ||
| allows_pets | No | ||
| camping_type | No | ||
| near_location | No | ||
| prefers_beach | No | ||
| prefers_quiet | No | ||
| prefers_desert | No | ||
| prefers_forest | No | ||
| prefers_hiking | No | ||
| prefers_boating | No | ||
| prefers_fishing | No | ||
| prefers_privacy | No | ||
| min_review_count | No | ||
| preferred_season | No | ||
| prefers_kayaking | No | ||
| prefers_mountain | No | ||
| prefers_secluded | No | ||
| prefers_swimming | No | ||
| requires_showers | No | ||
| min_avg_sentiment | No | ||
| preferred_carrier | No | ||
| prefers_adventure | No | ||
| prefers_lakefront | No | ||
| max_distance_miles | No | ||
| min_seasonal_score | No | ||
| prefers_oceanfront | No | ||
| prefers_relaxation | No | ||
| prefers_riverfront | No | ||
| prefers_stargazing | No | ||
| sentiment_trending | No | ||
| max_price_per_night | No | ||
| min_price_per_night | No | ||
| requires_dump_station | No | ||
| require_greenbook_safe | No | ||
| requires_accessibility | No | ||
| requires_cell_coverage | No | ||
| requires_flush_toilets | No | ||
| requires_potable_water | No | ||
| requires_water_hookups | No | ||
| min_cell_coverage_score | No | ||
| prefers_family_friendly | No | ||
| min_greenbook_vibe_score | No | ||
| only_improving_sentiment | No | ||
| no_discrimination_reports | No | ||
| requires_electric_hookups | No |
Output Schema
| Name | Required | Description |
|---|---|---|
No output parameters
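As an illustrative sketch (not an official client), an agent could invoke this tool with an MCP `tools/call` request like the one below. The argument values are made up, and at least one location constraint is included as the description requires; transport and session handling are omitted.

```python
import json

# Illustrative MCP tools/call payload for search_campgrounds.
# Per the tool description, at least one location constraint
# (near_location, latitude+longitude, or states) must be supplied.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_campgrounds",
        "arguments": {
            "near_location": "Oakland, CA",
            "max_distance_miles": 150,
            "requires_showers": True,
            "prefers_hiking": True,
            "allows_pets": True,
            "limit": 10,
        },
    },
}
print(json.dumps(request, indent=2))
```

All filters are optional, so an agent can start with only a location constraint and tighten the search incrementally.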
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only and non-destructive behavior. The description adds that a location constraint is required, implying failure or empty results otherwise. No further behavioral details like error handling or pagination are disclosed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with purpose and requirement, then structured as a parameter list with one-line definitions. Though lengthy due to many parameters, the layout is scannable. Some redundancy could be trimmed.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers all parameters and the location requirement. With an output schema present, return value details are unnecessary. The description could mention error handling or behavior when no results are found, but it is overall fairly complete for a complex search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description compensates well by providing brief but meaningful explanations for all 48 parameters (e.g., 'near_location: City, address, or landmark'). This adds value beyond the schema's type and default.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it searches campgrounds with 40+ filters across location, amenities, activities, and safety. Differentiates from sibling tools like 'find_similar_campgrounds' and 'get_campground_details' by focusing on search with extensive filtering.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly requires at least one location constraint (near_location, latitude+longitude, or states), guiding when to use. However, it does not mention when to avoid this tool in favor of siblings like 'check_availability' or 'find_similar_campgrounds'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_gear · Search Gear · A · Read-only
Search the Outdoorithm gear catalog by keyword, category, price range, kit, necessity, or recommendation tier.
Returns matching products with prices, verdicts, and purchase links. For campground-specific recommendations, see get_gear_for_campground.
Args:
- query: Text search across product name, manufacturer, keywords, and description
- kit_name: Filter by kit (e.g., Basics, Camp Kitchen, Hiking, Cold Weather)
- category_name: Filter by category (e.g., Shelter, Sleeping Gear, Furniture, Electronics)
- max_price: Maximum price in USD
- min_price: Minimum price in USD
- necessity: Filter by level (Essential, Helpful, or Optional)
- recommendation_tier: Filter by tier (Our Pick, Value Pick, or Comfort Pick)
- limit: Max results (1-20, default 10)
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | ||
| query | No | ||
| kit_name | No | ||
| max_price | No | ||
| min_price | No | ||
| necessity | No | ||
| category_name | No | ||
| recommendation_tier | No |
Output Schema
| Name | Required | Description |
|---|---|---|
No output parameters
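For intuition about how the filters compose, here is a hypothetical in-memory sketch. The two catalog entries and the AND semantics (an item must pass every supplied filter) are assumptions for illustration, not the server's actual implementation:

```python
# Hypothetical two-item catalog; real data comes from the Outdoorithm server.
catalog = [
    {"name": "Dome Tent", "category_name": "Shelter", "price": 199.0,
     "necessity": "Essential", "recommendation_tier": "Our Pick"},
    {"name": "Camp Chair", "category_name": "Furniture", "price": 45.0,
     "necessity": "Helpful", "recommendation_tier": "Value Pick"},
]

def search_gear(items, category_name=None, max_price=None,
                min_price=None, necessity=None, limit=10):
    """Assumed AND semantics: an item must satisfy every supplied filter."""
    hits = [
        g for g in items
        if (category_name is None or g["category_name"] == category_name)
        and (max_price is None or g["price"] <= max_price)
        and (min_price is None or g["price"] >= min_price)
        and (necessity is None or g["necessity"] == necessity)
    ]
    return hits[:limit]

# Only the $45 chair passes a max_price filter of $100.
print([g["name"] for g in search_gear(catalog, max_price=100.0)])  # → ['Camp Chair']
```

Because every parameter is optional, calling with no filters returns the whole catalog (up to `limit`).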
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false. The description adds no additional behavioral context (e.g., rate limits, authentication needs) beyond what annotations provide, but it does confirm the tool is a search operation. No contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured: a concise purpose statement followed by a bulleted parameter list. It is front-loaded with the most important information. However, it could be slightly more concise (e.g., combining 'query' and 'kit_name' lines).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 8 optional parameters, no required parameters, and an output schema exists (describing return format), the description covers all necessary aspects: what the tool does, what filters are available, and what is returned (matching products with prices, verdicts, purchase links). No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 0% description coverage, meaning the schema itself provides no parameter descriptions. The description compensates fully by detailing each parameter (query, kit_name, category_name, max_price, min_price, necessity, recommendation_tier, limit) with explanations of allowed values and defaults, adding substantial meaning.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states it searches the gear catalog by multiple filters (keyword, category, price range, kit, necessity, recommendation tier) and returns matching products with prices, verdicts, and purchase links. It clearly distinguishes from the sibling tool 'get_gear_for_campground' for campground-specific recommendations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear guidance on when to use this tool and points to an alternative ('For campground-specific recommendations, see get_gear_for_campground'). It lists all filter parameters, enabling appropriate invocation, but does not explicitly state when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
toggle_alert · Toggle Alert · A
Pause or resume a campsite availability alert.
If the alert is active, it will be paused (stops checking for availability). If the alert is paused, it will be resumed (starts checking again).
Requires an Outdoorithm API key (generate at outdoorithm.com/dashboard/api-keys).
Args:
- api_key: User's Outdoorithm API key from their dashboard settings.
- alert_id: UUID of the alert to toggle. Get this from list_alerts.
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | ||
| alert_id | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
No output parameters
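A sketch of what a `tools/call` invocation might look like; both arguments are required, and the API key and alert UUID below are placeholders (a real alert_id comes from list_alerts):

```python
import json

# Illustrative MCP tools/call payload for toggle_alert.
# Values are placeholders, not real credentials or IDs.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "toggle_alert",
        "arguments": {
            "api_key": "YOUR_OUTDOORITHM_API_KEY",
            "alert_id": "00000000-0000-0000-0000-000000000000",
        },
    },
}
print(json.dumps(request, indent=2))
```

Since the call flips whatever state the alert is in, an agent that needs a specific final state should check the alert's current status via list_alerts first.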
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate non-read-only and non-destructive; the description adds detail about the toggle behavior (pausing/resuming), which is consistent and informative.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is brief with a clear summary and structured argument list; every sentence is valuable and there is no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With an output schema present, the description adequately covers prerequisites and parameter sources; it does not need to explain return values, making it complete for this tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 0% schema coverage, the 'Args' section adds meaning: explains api_key's source and alert_id's origin from list_alerts, providing context beyond the schema's type-only definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Pause or resume') and the resource ('campsite availability alert'), distinguishing it from siblings like create_alert, delete_alert, and list_alerts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains the behavior in both states (active vs. paused) and notes the required API key and the source of alert_id, but it does not explicitly exclude other contexts or mention alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
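A minimal sketch for generating this claim file; the email is a placeholder you must replace with the one tied to your Glama account:

```python
import json

# Build the claim payload described above; the email is a placeholder.
claim = {
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
}

# Serve the result at https://<your-domain>/.well-known/glama.json
with open("glama.json", "w") as f:
    json.dump(claim, f, indent=2)
```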
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.