
Hawaii Conditions MCP Server

Server Details

Real-time surf, weather, trail status, volcano, ocean safety, and restaurants for Hawaii.

Status: Healthy
Transport: Streamable HTTP
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama MCP Gateway → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: B)

Average 3.5/5 across 20 of 20 tools scored. Lowest: 2.4/5.

Server Coherence (Grade: A)

Disambiguation: 4/5

Most tools have distinct purposes, but the three add_funds tools (add_funds_5, add_funds_10, add_funds_20) are essentially duplicates with different amounts, causing potential confusion. A single add_funds tool with an amount parameter would be better.
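The consolidation suggested above can be sketched as a single tool definition with an `amount` parameter. This is an illustrative sketch only, not the server's actual schema; the tool name, description text, and validation helper are all hypothetical.

```python
# Hypothetical consolidated tool definition: one add_funds tool with an
# "amount" parameter instead of three fixed-amount variants.
ADD_FUNDS_TOOL = {
    "name": "add_funds",
    "description": (
        "Charge your saved card — balance credited after Stripe payment "
        "confirmation. Free to call. Requires a saved card "
        "(see create_wallet_setup)."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "amount": {
                "type": "integer",
                "enum": [5, 10, 20],
                "description": "Amount to charge in USD.",
            }
        },
        "required": ["amount"],
    },
}


def validate_add_funds_args(args: dict) -> int:
    """Check the arguments against the schema's enum and return the amount."""
    amount = args.get("amount")
    allowed = ADD_FUNDS_TOOL["inputSchema"]["properties"]["amount"]["enum"]
    if amount not in allowed:
        raise ValueError(f"amount must be one of {allowed}, got {amount!r}")
    return amount
```

This keeps the fixed price points (via the enum) while collapsing three near-duplicate tools into one, which also resolves the naming-consistency complaint about the amount-as-suffix pattern.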

Naming Consistency: 4/5

Tool names mostly follow a consistent verb_noun pattern (get_weather, search_restaurants, etc.), but the add_funds tools break this pattern by appending the amount as a suffix rather than using a parameter. Some names like create_wallet_setup are slightly verbose, but overall consistent.

Tool Count: 4/5

With 20 tools covering weather, ocean, volcano, trails, restaurants, and payment management, the count is well-scoped for the domain. It is not excessive, and each tool serves a specific function.

Completeness: 4/5

The tool set covers the main domains of Hawaii conditions well, including weather, surf, ocean safety, volcano, trails, and restaurants. Some aspects are missing, such as tide information and hiking-trail details beyond closures, but overall it provides a comprehensive surface for the stated purpose.

Available Tools

20 tools
add_funds_10 (Grade: A)

Charge your saved card $10 — balance credited after Stripe payment confirmation. Free to call.

Parameters: none

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description bears full responsibility. It discloses the charge, the payment-confirmation dependency, and that it is free to call. However, it omits failure modes, prerequisites (a saved card), and side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no wasted words, front-loads the key information (charge $10, payment flow). Ideal length for the complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given zero parameters and no output schema, the description covers the essential aspects: action, amount, payment method, and post-payment confirmation. Minor details such as error handling and prerequisites are missing, but that is acceptable.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema is empty (0 parameters), so the description compensates by explaining the fixed behavior: charging $10 to a saved card. This adds meaning beyond the bare schema, but it could note that no parameters are needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (charge $10), the payment method (saved card), and the outcome (balance credited after Stripe confirmation). It also distinguishes itself from its siblings by specifying the exact amount.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not explicitly guide when to use this tool versus siblings like add_funds_5 or add_funds_20. The amount is implied by the name, but no exclusions or context are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

add_funds_20 (Grade: A)

Charge your saved card $20 — balance credited after Stripe payment confirmation. Free to call.

Parameters: none

Behavior: 3/5

Without annotations, the description only partially reveals behavior: balance credited after Stripe confirmation, and free to call. It omits prerequisites (a saved card), failure modes, and the side effects expected of a payment tool.

Conciseness: 5/5

Two sentences, front-loaded with the key action, no unnecessary words. Efficient and easy to parse.

Completeness: 4/5

Given the tool's simplicity (no parameters, no output schema), the description is fairly complete. It covers the action, payment flow, and cost. It misses prerequisites (a saved card) and error states, but sibling tools hint at the setup needs.

Parameters: 3/5

The input schema has no parameters (100% coverage), so the description does not need to add parameter information. It mentions the fixed amount, which aligns with the tool name, but adds no extra semantics.

Purpose: 5/5

The description clearly states the action ('Charge'), the resource ('your saved card'), and the specific amount ($20), which distinguishes it from sibling tools like add_funds_10 and add_funds_5.

Usage Guidelines: 4/5

The description implies usage for charging $20 to a saved card, but does not explicitly state when not to use it or mention alternatives, though sibling tools cover the other amounts.

add_funds_5 (Grade: A)

Charge your saved card $5 — balance credited after Stripe payment confirmation. Free to call.

Parameters: none

Behavior: 3/5

The description discloses the core behavior (charging a card and crediting the balance after payment confirmation) and states that it is 'Free to call', implying no extra fee. However, it lacks details on failure modes, authentication requirements, and limits. Given no annotations, the description carries the transparency burden adequately but not comprehensively.

Conciseness: 5/5

The description is extremely concise: two short sentences that convey the purpose and key details. Every word serves a purpose; there is no redundancy or wasted space.

Completeness: 4/5

For a simple tool with no parameters and no output schema, the description covers the main action and outcome, and it mentions the payment mechanism and cost. It could be more complete by hinting at prerequisites (e.g. 'requires a saved card'), but overall it provides sufficient context for correct use.

Parameters: 4/5

The tool has zero parameters, so schema coverage is trivially 100%. Per the guidelines, the baseline for zero parameters is 4. The description does not need to add parameter information, and it does not contradict the schema.

Purpose: 5/5

The description clearly states the action ('Charge your saved card $5') and specifies the result ('balance credited after Stripe payment confirmation'). It distinguishes itself from sibling tools like add_funds_10 by the fixed amount, making it easy for an agent to select the correct tool.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool over its siblings (e.g. add_funds_10) or alternatives like create_wallet_setup. It does not mention prerequisites such as having a saved card or a Stripe connection, nor does it specify when not to use it.

create_wallet_setup (Grade: A)

Initialise Stripe SetupIntent — returns client_secret for saving a card. Free.

Parameters: none

Behavior: 2/5

Annotations are empty, so the description carries the full burden. It discloses that the tool returns a client_secret and is free, but it does not mention side effects, idempotency, rate limits, or authentication requirements. For a tool that likely creates a server-side resource, more behavioral context is needed.

Conciseness: 5/5

The description is a single sentence that conveys all the essential information without extraneous words. It is front-loaded with the action and result.

Completeness: 4/5

Given the simplicity of the tool (no parameters, no output schema), the description is mostly complete. It explains the purpose and primary output. It could mention the relationship with sibling tools like link_stripe_customer to provide full context, but it is otherwise sufficient.

Parameters: 5/5

No parameters exist, so schema coverage is 100% by default. The description adds value by specifying exactly what the tool does and returns ('returns client_secret for saving a card') and by indicating that it is free. This goes beyond the empty schema.

Purpose: 5/5

The description clearly states that it initializes a Stripe SetupIntent and returns a client_secret for saving a card. It uses a specific verb ('Initialise') and resource ('Stripe SetupIntent'), and distinguishes itself from sibling tools like add_funds_* and save_payment_method by focusing on the setup step.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. It does not mention prerequisites (e.g. needing a linked Stripe customer via link_stripe_customer) or situations where it should not be used. A simple statement like 'Use before save_payment_method' would help.
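The ordering the reviewer asks for (set up a card before adding funds) could be stated directly in the descriptions. A minimal client-side sketch of the implied flow, with entirely hypothetical method names and a placeholder string standing in for the real Stripe SetupIntent client_secret:

```python
# Hypothetical sketch of the implied tool ordering:
# create_wallet_setup -> save card -> add_funds.
# The real server's behavior may differ.
class WalletFlow:
    def __init__(self) -> None:
        self.card_saved = False

    def create_wallet_setup(self) -> str:
        # The real tool returns a Stripe SetupIntent client_secret;
        # this is a placeholder.
        return "seti_placeholder_secret"

    def save_card(self, client_secret: str) -> None:
        # In practice the client_secret is consumed by Stripe's SDK
        # to attach the card.
        self.card_saved = True

    def add_funds(self, amount: int) -> int:
        if not self.card_saved:
            raise RuntimeError(
                "add_funds requires a saved card; call create_wallet_setup first"
            )
        return amount
```

Encoding the prerequisite as an explicit error (rather than leaving it implicit) is exactly the kind of behavioral disclosure the Behavior and Usage Guidelines dimensions are asking descriptions to surface.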

get_balance (Grade: A)

Returns current balance, original load amount, and low-balance flag. Free.

Parameters:
- api_key (optional): Optional api key. Prefer X-MCP-Account header.

Behavior: 2/5

No annotations are provided. The description does not disclose whether the tool is safe and read-only, its authentication requirements, or its error scenarios. It offers minimal behavioral context beyond what the schema implies.

Conciseness: 5/5

Extremely concise, front-loaded with the action and return values. Every word is necessary.

Completeness: 4/5

Given no output schema, the description adequately specifies the three returned pieces of information. However, it does not mention potential errors or edge cases, slightly reducing completeness.

Parameters: 3/5

Schema coverage is 100%; the description adds no additional meaning to the optional api_key parameter beyond what the schema already describes.

Purpose: 5/5

Clearly states that the tool returns the current balance, original load amount, and a low-balance flag. No sibling tools have similar functionality, so differentiation is clear.

Usage Guidelines: 3/5

Implies use whenever balance data is needed, but gives no explicit when-to-use guidance versus alternatives. The 'Free' note signals that there is no cost but provides no further context.

get_full_briefing (Grade: B)

All five data sources combined — weather, surf, trails, volcano, ocean safety. Best value at $2.00.

Parameters:
- island (required): Island name (e.g. oahu, maui, kauai, big-island)

Behavior: 2/5

The description mentions a cost of $2.00, but with no annotations and no explanation of how payment works (e.g. deducted from balance, requires a wallet), the behavioral impact is unclear. The tool's side effects and prerequisites are not disclosed.

Conciseness: 4/5

The description is concise at two sentences, front-loading the core purpose. It is efficient but could benefit from more structure, such as a clearer separation of features and cost.

Completeness: 2/5

Given the lack of annotations and the monetary implication, the description is incomplete. It does not explain the output format, the payment mechanism, or any dependencies on other tools (e.g. wallet setup). The agent lacks critical context to use the tool correctly.

Parameters: 3/5

The input schema has 100% description coverage for the single parameter 'island', so the schema alone documents it completely. The description adds no additional meaning beyond what the schema provides, meeting the baseline.

Purpose: 5/5

The description explicitly states that it combines all five data sources (weather, surf, trails, volcano, ocean safety), clearly distinguishing it from the individual data-source siblings. The verb 'combined' and resource 'briefing' precisely convey the tool's function.

Usage Guidelines: 3/5

The description implies that the tool is for users wanting all data sources at a bundled price, but it does not explicitly state when to use this versus the individual tools, nor does it mention prerequisites such as having sufficient funds. No exclusions or alternatives are provided.

get_moon_phase (Grade: C)

Current moon phase, illumination percentage, moonrise/moonset times, and days to next full/new moon. Free.

Parameters:
- date (optional): Date in YYYY-MM-DD format (defaults to today)

Behavior: 2/5

No annotations are provided, so the description must disclose behavior. It says 'Current moon phase', but the date parameter allows any date, which is inconsistent. It does not mention authentication, rate limits, or the read-only nature of the tool, so it lacks critical operational context.

Conciseness: 4/5

A single sentence with the essential information. The word 'Free' is extraneous but does not harm clarity. It front-loads the key outputs; it could be slightly more structured, but it is efficient.

Completeness: 3/5

The description lists the main return fields, which partially compensates for the missing output schema. However, it does not mention timezone, location assumptions, or data accuracy. Slightly incomplete for a tool with no annotations and no output schema.

Parameters: 3/5

The input schema has one parameter with 100% description coverage. The description does not add extra meaning beyond the schema for the date parameter. The listed outputs (moon phase, illumination, etc.) are not parameters. The baseline of 3 is appropriate.

Purpose: 4/5

The description clearly states that the tool returns moon-phase data including illumination, rise/set times, and days to the next full/new moon. This distinguishes it from sibling tools like get_sun_times. However, it could be more explicit about returning data for any date, not just 'current'.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives such as get_sun_times. The description only includes 'Free', which is business context rather than usage advice. No exclusions or prerequisites are mentioned.

get_ocean_safety (Grade: C)

Box jellyfish, rip currents, NOAA marine alerts. Cost: $0.50.

Parameters:
- island (required): Island name (e.g. oahu, maui, kauai, big-island)

Behavior: 2/5

With no annotations, the description must disclose behavior. It mentions a cost of $0.50 but does not explain whether this is per call or a subscription. It does not state whether the operation is read-only or destructive, or what side effects occur (e.g. data fetching versus real-time alerts). Minimal behavioral context.

Conciseness: 4/5

The description is very short (two phrases) and front-loaded with the key topics. It is concise but could be better structured (e.g. starting with a verb). Overall efficient, though light on content.

Completeness: 2/5

Given no output schema and no annotations, the description should explain what the tool returns (format, scope). It only lists topics and cost, omitting the return structure, pagination, and data freshness. The single parameter is well documented in the schema, but the overall context is insufficient.

Parameters: 3/5

Schema coverage is 100%, so the parameter 'island' is already documented with enum values. The description adds no additional meaning (e.g. specifying which islands are supported beyond the enum). The baseline score of 3 applies, as the schema does the heavy lifting.

Purpose: 3/5

The description lists topics (box jellyfish, rip currents, NOAA marine alerts) that hint at ocean safety, but it lacks a clear verb-plus-resource statement like 'Retrieve ocean safety alerts.' The name get_ocean_safety partially compensates, but without a clear action the purpose is vague.

Usage Guidelines: 2/5

No guidance on when to use this tool versus siblings like get_surf_conditions or get_weather. The description does not mention prerequisites, exclusions, or alternatives, leaving the agent to infer the usage context.
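Several tools on this server take the same island parameter with the example values oahu, maui, kauai, and big-island. A hedged sketch of client-side normalization before calling; the accepted set is assumed from the schema's examples, and the server may accept other values:

```python
# Assumed accepted values, taken from the schema's examples.
KNOWN_ISLANDS = {"oahu", "maui", "kauai", "big-island"}


def normalize_island(name: str) -> str:
    """Lowercase, trim, and slugify an island name onto the schema's
    example values; raise ValueError for unrecognized islands."""
    slug = name.strip().lower().replace(" ", "-")
    if slug == "hawaii":  # common alternate name for the Big Island
        slug = "big-island"
    if slug not in KNOWN_ISLANDS:
        raise ValueError(f"unknown island: {name!r}")
    return slug
```

Validating locally avoids paying the per-call cost ($0.50 here) for a request that would fail on an unsupported island name.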

get_restaurant_details (Grade: A)

Full hours, reviews, photos for a restaurant by place_id. Cost: $0.15.

Parameters:
- place_id (required): Google place_id returned from search_restaurants

Behavior: 3/5

Annotations are empty, so the description carries the full burden. It discloses the cost, but gives no information on the read-only nature, errors, or limitations.

Conciseness: 5/5

Extremely concise: one sentence plus the cost. No wasted words.

Completeness: 5/5

With a single parameter and no output schema, the description lists the key return fields (hours, reviews, photos) and the cost, making it complete for a simple retrieval tool.

Parameters: 3/5

The schema covers 100% of the parameter descriptions. The description adds only 'by place_id' and the cost, providing no significant extra meaning beyond the schema.

Purpose: 5/5

The description clearly states that the tool retrieves restaurant details (hours, reviews, photos) by place_id. It is distinct from its sibling search_restaurants.

Usage Guidelines: 4/5

Usage after obtaining a place_id from search_restaurants is implied, and the cost is mentioned. However, no explicit when-not-to-use guidance or alternatives are given.

get_sun_times (Grade: C)

Sunrise, sunset, and daylight duration by island (HST). Free.

Parameters:
- date (optional): Date in YYYY-MM-DD format (defaults to today)
- island (optional, default: oahu): no description provided

Behavior: 1/5

No annotations are provided, and the description mentions only 'Free', which is not a behavioral trait. It fails to disclose rate limits, data freshness, or error handling.

Conciseness: 3/5

Extremely concise (one sentence), but at the cost of omitting necessary details. It is front-loaded but incomplete.

Completeness: 2/5

The tool has two optional parameters, no output schema, and no annotations. The description lacks information on the return format, timezone specifics, and default behavior, making it insufficient for effective use.

Parameters: 2/5

Schema description coverage is 50% (only 'date' has a description). The description adds minimal parameter context beyond 'by island' and does not compensate for the missing parameter description.

Purpose: 4/5

The description clearly states that the tool provides sunrise, sunset, and daylight duration by island, in the HST timezone. It is distinct from sibling tools like get_moon_phase and get_weather.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives; for example, it does not explain why a user would pick it over get_weather for sun data.

get_surf_conditions (Grade: A)

Wave height, period, direction + 3-day forecast. Cost: $0.10.

Parameters (JSON Schema)
island (required): Island name (e.g. oahu, maui, kauai, big-island)
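As a concrete illustration, the single required parameter maps onto an MCP tools/call request like the following sketch. Only the tool name and the shape of the arguments object come from the schema above; the request id and envelope details are assumptions about how a typical JSON-RPC client frames the call.

```python
import json

# Hypothetical MCP tools/call envelope for get_surf_conditions.
# The "island" value must be one of the documented names
# (oahu, maui, kauai, big-island).
payload = {
    "jsonrpc": "2.0",
    "id": 1,  # assumed request id; real clients manage this
    "method": "tools/call",
    "params": {
        "name": "get_surf_conditions",
        "arguments": {"island": "oahu"},
    },
}

print(json.dumps(payload["params"], sort_keys=True))
```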
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose behavioral traits such as rate limits, data freshness, or side effects. For a read-only data-retrieval tool this omission is tolerable, but transparency remains minimal.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is very concise with one sentence plus cost. It is efficient and front-loaded, though it lacks structural elements like bullet points or separate sections.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the low complexity, single parameter, and no output schema, the description adequately states what the tool returns. It could elaborate on output format, but the core information is present.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds no additional meaning beyond the schema's island parameter enumeration and description, which is already sufficient.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly specifies the tool returns wave height, period, direction, and a 3-day forecast. It also mentions cost. This distinguishes it from siblings like get_weather or get_ocean_safety.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description implies usage for surf conditions data, but does not explicitly state when to use versus alternatives or provide exclusions. The context is clear enough for an agent to infer appropriate use.

get_trail_status (Grade: B)

NPS alerts and state trail closures. Cost: $0.25.

Parameters (JSON Schema)
island (required): Island name (e.g. oahu, maui, kauai, big-island)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description carries the full burden; it mentions only the cost ($0.25), with no details on read-only nature, authentication needs, or rate limits.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences front-load purpose and cost with no wasted words, though the return format could be mentioned for completeness.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter enum tool with no output schema, description adequately covers purpose and cost; no significant gaps.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with an enum description; description adds no extra meaning beyond what the schema already provides.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves NPS alerts and state trail closures, which is a specific verb+resource, distinguishing it from siblings like get_volcano_status or get_ocean_safety.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives; no context about prerequisites or exclusions.

get_volcano_status (Grade: A)

Live Kīlauea status from USGS HVO. Cost: $0.25.

Parameters (JSON Schema)
No parameters
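Because the tool takes no input, a call reduces to the tool name with an empty arguments object. A minimal sketch, with the same assumptions about the JSON-RPC envelope as any MCP client would make:

```python
import json

# Hypothetical tools/call request for a zero-parameter tool:
# the arguments object is simply empty.
payload = {
    "jsonrpc": "2.0",
    "id": 2,  # assumed request id
    "method": "tools/call",
    "params": {"name": "get_volcano_status", "arguments": {}},
}

print(json.dumps(payload))
```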

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits. It mentions cost ($0.25) and that it's 'Live' data from USGS, but fails to state whether it is read-only, if it has rate limits, or any side effects. The cost is a positive addition, but overall transparency is minimal.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two short sentences with no fluff. The most important information (live Kīlauea status) appears first. Every word serves a purpose.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Tool is simple with no parameters or output schema, but the description lacks details on the format of the returned status (e.g., alert level, text). It is adequate but incomplete for an agent needing to interpret the response.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Tool has zero parameters, so baseline is 4. The description does not need to add parameter meaning. It correctly implies no input needed.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves live Kīlauea volcano status from USGS HVO. The verb 'get' and specific resource 'volcano status' with location and source make purpose unambiguous. No sibling tool duplicates this functionality.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when Kīlauea status is needed but provides no guidance on when not to use it, prerequisites, or comparisons with alternatives. The $0.25 cost hints that the call is paid, but there are no explicit usage rules.

get_weather (Grade: A)

5-day forecast, UV index, wind, sunrise/sunset. Cost: $0.10.

Parameters (JSON Schema)
island (required): Island name (e.g. oahu, maui, kauai, big-island)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description must cover behavioral traits. It discloses the cost ($0.10) and the data provided (forecast, UV, wind, sunrise/sunset). However, it lacks details on data freshness, update frequency, or whether the call is read-only (assumed but not stated). Sufficient for a simple tool but not fully transparent.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise: one sentence plus cost. No wasted words. Key information is front-loaded. Every element earns its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read-only tool with one parameter and no output schema, the description is adequate. It states what data is returned and cost. Minor omission: no mention of output format or time resolution for the forecast, but not critical given the tool's simplicity.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The only parameter, 'island', is fully covered by the input schema (enum values with description). The description adds no additional meaning beyond what the schema provides. Baseline 3 is appropriate given 100% schema coverage.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides a 5-day forecast along with UV index, wind, and sunrise/sunset. This distinguishes it from sibling tools like get_sun_times or get_moon_phase, which cover only specific weather aspects.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. The cost is mentioned but does not provide when-to-use or when-not-to-use context. Sibling tools exist (e.g., get_sun_times, get_moon_phase) but no comparison is made.

ping (Grade: A)

Health check — returns server status. Free.

Parameters (JSON Schema)
No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, but the description 'Free.' suggests no cost or rate limit, and 'returns server status' indicates a read-only operation. For a simple health check, this is sufficiently transparent.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise—two short fragments—and front-loads the primary purpose. Every word serves a purpose without redundancy.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple health check with no parameters and no output schema, the description is fully complete. It conveys all necessary information for an AI agent to use the tool correctly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has no parameters, so the input schema fully covers semantics. The description adds no parameter-specific information, but with zero parameters, baseline 4 is appropriate.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states 'Health check — returns server status. Free.' This clearly defines the verb (check), resource (server), and outcome (status). It distinguishes this tool from siblings, none of which are health checks.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide explicit guidance on when or when not to use this tool. However, as a health check, its use case is self-evident. No alternatives or exclusions are mentioned.

recent_transactions (Grade: A)

View recent credits and debits (default 20, max 50). Free.

Parameters (JSON Schema)
limit (optional): Number of transactions to return (max 50)
api_key (optional): Optional api key. Prefer X-MCP-Account header.
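The default-20/max-50 constraint can be enforced client-side before the call. A sketch under a defensive assumption: the page does not document whether the server clamps or rejects out-of-range values, so the client normalizes first.

```python
def recent_transactions_args(limit: int = 20) -> dict:
    # Clamp to the documented bounds: default 20, max 50.
    # Server-side handling of out-of-range values is undocumented,
    # so normalize before sending.
    return {"limit": max(1, min(limit, 50))}

print(recent_transactions_args())     # documented default
print(recent_transactions_args(200))  # clamped to the documented max
```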
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description clarifies that the tool is free and provides transaction viewing, which implies a read-only operation. However, it does not explicitly state non-destructive behavior or handling of limits or errors.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise, with two short sentences that convey the tool's purpose, constraints, and a notable feature (free), all without superfluous text.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description explains the tool's purpose and parameters but does not describe the response format, which may be needed for agents to process the output correctly, especially since no output schema is provided.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already covers both parameters with descriptions, and the tool description merely restates the limit constraints without adding new semantic information beyond what is in the schema.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb-resource combination ('View recent credits and debits') and includes default/max constraints, clearly distinguishing it from sibling tools like get_balance or add_funds_*.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description clearly states its function and constraints, providing sufficient context for the agent to understand typical use, though it does not explicitly exclude alternatives.

register_agent (Grade: A)

Create a prepaid account. Returns api_key (format: mcp_live_…). Store it immediately — shown only once. Free.

Parameters (JSON Schema)
agent_id (optional): Optional unique identifier for this agent
display_name (optional): Human-readable name for this agent
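Since the api_key is shown only once, a client should persist it the moment the result arrives. A sketch assuming the result carries an "api_key" field; the exact response shape is not documented, and only the mcp_live_ prefix is promised.

```python
def store_api_key(result: dict, keys: dict) -> None:
    # The key is shown only once, so persist it immediately.
    # "api_key" as the field name is an assumption; the description
    # only promises the mcp_live_ prefix format.
    key = result["api_key"]
    if not key.startswith("mcp_live_"):
        raise ValueError("unexpected api_key format")
    keys["hawaii-conditions"] = key

vault: dict = {}
store_api_key({"api_key": "mcp_live_example"}, vault)  # hypothetical value
```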
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It discloses that the api_key is shown only once and must be stored, and states it's free. However, it doesn't mention any required authentication or potential side effects beyond creation.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with action and output, no wasted words. Each sentence provides critical information (action, return format, storage requirement, cost).

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple creation tool with few parameters, it covers key aspects: what it does, what it returns (with format), and a behavior note (only once). It lacks error conditions or duplicate handling but is largely complete.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with both parameters described. The description adds no additional meaning beyond the schema's 'optional' and 'human-readable' hints. Baseline 3 is appropriate.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states 'Create a prepaid account' and mentions the return of an api_key with format hint, clearly defining the tool's action and resource. It distinguishes from siblings like add_funds and get_balance which are account management operations after creation.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies this tool is for initial account creation but does not provide explicit guidance on when to use it versus alternatives (e.g., create_wallet_setup). No when-not or exclusion criteria are given.

save_payment_method (Grade: B)

Attach a Stripe payment method (pm_…) to your account for autonomous top-ups. Free.

Parameters (JSON Schema)
payment_method_id (required): Stripe payment method ID (pm_…)
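A client can sanity-check the pm_… format before calling. A light sketch: real Stripe payment-method ids are longer opaque strings, so this only guards against obvious mistakes like passing a customer id.

```python
def looks_like_payment_method_id(value: str) -> bool:
    # Guard against passing a customer id (cus_...) or card token
    # (tok_...) where a payment method id (pm_...) is expected.
    return value.startswith("pm_") and len(value) > len("pm_")

print(looks_like_payment_method_id("pm_123abc"))   # hypothetical id
print(looks_like_payment_method_id("cus_123abc"))  # wrong object type
```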
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. It only states 'Free' which is trivial. It doesn't mention if this is a mutating operation, if it replaces existing methods, if authentication is required, or any error conditions. The description carries the full burden and does not adequately address this.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise at two short sentences, with the key information front-loaded. Every word serves a purpose, including 'Free' which adds a trivial but relevant note. No filler or repetition.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (one parameter, no output schema), the description is borderline adequate. However, it lacks guidance on error states, idempotency, and whether the attached method becomes the default. For completeness, it could mention that the payment method is used for top-ups and may require prior setup.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the parameter description is clear. The description adds the context that the payment method is attached 'to your account for autonomous top-ups', which reinforces the parameter's role but does not add strictly new semantic value beyond the schema.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Attach' and the resource 'Stripe payment method (pm_…)' with a specific purpose 'for autonomous top-ups'. It effectively distinguishes this tool from siblings like add_funds_* which add funds, and link_stripe_customer which links a customer.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not mention prerequisites, context, or when not to use it. For example, it doesn't clarify if this is needed before using add_funds_* or if there is a default payment method.

search_restaurants (Grade: B)

Find restaurants by location, cuisine, price range, open-now filter. Cost: $0.25.

Parameters (JSON Schema)
location (required): Location to search (e.g. Waikiki, Kailua, Lahaina)
cuisine (optional): Cuisine type (e.g. Hawaiian, Japanese, Mexican)
price (optional): Price range
open_now (optional): Filter for currently open restaurants
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so description must carry the burden. It discloses cost ($0.25) but no other behavioral traits such as read-only nature, rate limits, or side effects.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with purpose front-loaded. Efficient, though cost information could be integrated or placed in a note for better structure.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, so description should explain return format, pagination, or sorting. It only mentions filters and cost, leaving significant gaps for a search tool's completeness.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so each parameter already has clear descriptions. The tool description lists filter types but does not add significant meaning beyond the schema, meeting baseline expectations.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Find' and the resource 'restaurants' with specific filters (location, cuisine, price range, open-now), distinguishing it from sibling tools like get_restaurant_details.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives (e.g., get_restaurant_details) or when not to use it. The cost is mentioned but provides no usage context.
