Glama

Server Details

Hawaii MCP: tours, events, weather, restaurants, and day-plan itineraries across 4 islands.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: baphometnxg/aloha-fyi-mcp
GitHub Stars: 0
Server Listing: aloha-fyi-hawaii

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 4.1/5 across 6 of 6 tools scored. Lowest: 3.5/5.

Server Coherence (Grade: A)
Disambiguation: 5/5

Each tool has a clearly distinct purpose targeting different aspects of Hawaii travel: restaurants, deals, weather, day planning, events, and tours. There is no overlap in functionality, and the descriptions make it easy to differentiate when to use each tool.

Naming Consistency: 5/5

All tools follow a consistent verb_noun pattern with 'find_', 'get_', 'plan_', and 'search_' prefixes followed by 'hawaii_' and a descriptive noun. This predictable naming convention makes the tool set easy to navigate and understand.

Tool Count: 5/5

With 6 tools, the server is well-scoped for its Hawaii travel and activity domain. Each tool covers a key area (dining, deals, weather, planning, events, tours) without redundancy, making the count appropriate and manageable.

Completeness: 4/5

The tool set comprehensively covers major travel needs: dining, activities, weather, planning, events, and tours. A minor gap exists in accommodation-related tools (e.g., hotel searches), but agents can work around this given the strong coverage of other essential services.

Available Tools

6 tools
find_hawaii_restaurants: Hawaii Restaurants & Food (Grade: A)

Find restaurants, coffee shops, poke bars, ramen, bakeries, and food trucks in Waikiki and across Oahu. 540+ curated spots across fine dining, casual, local plates, and specialty categories. Use when users ask 'where should I eat in Waikiki', 'best poke on Oahu', 'where to grab coffee', or 'cheap eats near me'.

Parameters (JSON Schema):
- limit (optional): Number of results (max 15)
- query (optional): What to look for, e.g. 'poke', 'sushi', 'breakfast', 'local plate lunch'
- category (optional): Filter by category. Default: any
- neighborhood (optional): Filter by neighborhood, e.g. 'waikiki', 'kaimuki'
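To make the call shape concrete, here is what a JSON-RPC `tools/call` request to this tool looks like under the MCP protocol. Only the parameter names come from the schema above; the argument values are made-up examples.

```python
import json

# Hypothetical tools/call request for find_hawaii_restaurants.
# Parameter names match the schema above; the values are illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "find_hawaii_restaurants",
        "arguments": {
            "query": "poke",
            "neighborhood": "waikiki",
            "limit": 5,  # schema caps results at 15
        },
    },
}
print(json.dumps(request, indent=2))
```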
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the dataset size ('540+ curated spots') and scope ('across fine dining, casual, local plates, and specialty categories'), which adds useful context. However, it doesn't disclose important behavioral traits like whether this is a read-only operation, how results are sorted, whether authentication is required, or any rate limits. The description doesn't contradict annotations (none exist), but leaves significant behavioral aspects unspecified.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with two sentences: the first establishes scope and scale, the second provides concrete usage examples. Every element serves a purpose - the curated count establishes credibility, the location specificity defines scope, and the examples provide actionable guidance. No wasted words or redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a search/find tool with no output schema and no annotations, the description provides good context about what's being searched (food establishments), where (Hawaii/Oahu), and when to use it. However, it doesn't describe what the output looks like - whether it returns addresses, ratings, prices, or other details. The 540+ count and category breakdown help, but more output information would be beneficial given the lack of output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already documents all four parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema. It mentions general categories ('fine dining, casual, local plates, and specialty categories') but doesn't explain how these map to the 'category' parameter's enum values. The baseline of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: finding restaurants and food establishments in specific Hawaiian locations. It specifies the resource ('restaurants, coffee shops, poke bars, ramen, bakeries, and food trucks'), geographic scope ('Waikiki and across Oahu'), and distinguishes it from sibling tools like get_hawaii_deals or search_hawaii_events by focusing exclusively on dining options.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance with concrete examples: 'Use when users ask 'where should I eat in Waikiki', 'best poke on Oahu', 'where to grab coffee', or 'cheap eats near me'.' This clearly indicates when this tool should be selected over alternatives, though it doesn't explicitly mention when NOT to use it or name specific sibling alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_hawaii_deals: Hawaii Budget Deals (Grade: A)

Find budget deals and discounts for Hawaii activities. Returns Groupon deals and low-price options sorted cheapest first. Use when users want affordable Hawaii experiences or budget travel tips.

Parameters (JSON Schema):
- limit (optional): Number of deals (max 20)
- activity (required): Type of activity, e.g. 'snorkeling', 'helicopter', 'luau', 'food tour'
- max_price_dollars (optional): Maximum price per person in USD
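Since 'activity' is the only required parameter, a client can validate it and the documented limit cap before sending. `build_deals_args` below is a hypothetical client-side helper sketched from the schema above, not part of the server.

```python
def build_deals_args(activity, limit=None, max_price_dollars=None):
    """Assemble the arguments object for get_hawaii_deals.

    Hypothetical helper: 'activity' is required by the schema, and
    'limit' is clamped to the documented maximum of 20.
    """
    if not activity:
        raise ValueError("'activity' is required, e.g. 'snorkeling' or 'luau'")
    args = {"activity": activity}
    if limit is not None:
        args["limit"] = min(limit, 20)  # schema: max 20
    if max_price_dollars is not None:
        args["max_price_dollars"] = max_price_dollars
    return args

print(build_deals_args("snorkeling", limit=50))  # limit clamped to 20
```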
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the tool returns deals sorted cheapest first and specifies sources (Groupon deals and low-price options), which adds useful context beyond basic functionality. However, it lacks details on rate limits, error handling, or authentication needs, leaving gaps for a tool that likely queries external services.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded and concise, with two sentences that efficiently convey purpose and usage guidelines. Every sentence adds value: the first defines the tool's function and output, the second specifies when to use it. There is no redundant or vague language, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 parameters, no output schema, no annotations), the description is mostly complete. It covers purpose, usage context, and output behavior (sources and sorting). However, it lacks details on return format or error cases, which could be helpful since there's no output schema. It's sufficient but not exhaustive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters (limit, activity, max_price_dollars) with descriptions. The description does not add any parameter-specific semantics beyond what the schema provides, such as explaining how 'activity' relates to deal types or price filtering. Baseline 3 is appropriate as the schema handles the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('find budget deals and discounts') and resources ('Hawaii activities'), explicitly mentioning it returns 'Groupon deals and low-price options sorted cheapest first'. It distinguishes itself from sibling tools like find_hawaii_restaurants or get_hawaii_weather by focusing on budget deals rather than restaurants, weather, planning, events, or tours.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'Use when users want affordable Hawaii experiences or budget travel tips.' This clearly indicates when to invoke this tool, distinguishing it from alternatives like search_hawaii_tours (which might not focus on budget) or plan_hawaii_day (which is broader). It effectively tells the agent the target context without ambiguity.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_hawaii_weather: Hawaii Weather & Surf Conditions (Grade: A)

Current weather, forecast, and surf/wind conditions for any Hawaiian island. Use this when users ask 'what's the weather in Maui this week' or 'is it good surf conditions on the North Shore today'. Returns temperature, precipitation, wind speed, UV index, and a 3-day forecast.

Parameters (JSON Schema):
- days (optional): Days of forecast to return (1-7)
- island (required): Which Hawaiian island
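The schema constrains 'days' to the 1-7 range and requires 'island'. A hypothetical client-side check (names and error messages are illustrative) could enforce both before calling the tool:

```python
def build_weather_args(island, days=None):
    # Hypothetical validation sketch: 'island' is required and 'days',
    # when supplied, must fall within the schema's 1-7 range.
    if not island:
        raise ValueError("'island' is required, e.g. 'maui' or 'oahu'")
    args = {"island": island}
    if days is not None:
        if not 1 <= days <= 7:
            raise ValueError("days must be between 1 and 7")
        args["days"] = days
    return args

print(build_weather_args("maui", days=3))
```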
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adequately describes what the tool returns (temperature, precipitation, etc.) and the forecast duration, but lacks details on rate limits, authentication needs, error conditions, or data freshness. It doesn't contradict any annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states the purpose and scope, the second provides usage examples and return values. Every sentence adds value with no wasted words, and it's front-loaded with the core functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no annotations and no output schema, the description does a good job covering purpose, usage, and return values. However, it could be more complete by mentioning potential limitations (e.g., data sources, update frequency) or error cases, given the complexity of weather data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description adds no additional parameter semantics beyond what's in the schema (e.g., it doesn't explain island-specific weather patterns or forecast accuracy). Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('get current weather, forecast, and surf/wind conditions') and resources ('for any Hawaiian island'), distinguishing it from sibling tools like find_hawaii_restaurants or search_hawaii_events. It explicitly mentions the scope includes temperature, precipitation, wind speed, UV index, and a 3-day forecast.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage examples ('when users ask 'what's the weather in Maui this week' or 'is it good surf conditions on the North Shore today''), clearly indicating when to use this tool. It implicitly distinguishes from siblings by focusing on weather/surf conditions rather than restaurants, deals, events, or tours.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

plan_hawaii_day: Plan a Hawaii Day (Grade: A)

Build a same-day or trip itinerary for a Hawaiian island. Returns a morning activity, lunch spot, afternoon activity, and dinner spot — picked from our live catalog of tours, food, and experiences. Use when users ask 'plan my day in Oahu', 'what should I do Saturday in Maui', or 'family itinerary for Kauai'.

Parameters (JSON Schema):
- vibe (optional): The overall vibe of the day. Default: chill
- island (optional): Which island. Default: oahu
- max_budget_per_person (optional): Max total budget per person for paid activities in USD
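All three parameters are optional, with 'vibe' and 'island' carrying the defaults shown above. A small hypothetical sketch of how a client might merge caller overrides onto those documented defaults:

```python
# Defaults taken from the schema above; the merge helper is hypothetical.
DEFAULTS = {"vibe": "chill", "island": "oahu"}

def build_day_plan_args(**overrides):
    # All parameters are optional; unspecified ones fall back to the
    # documented defaults, so an empty call still yields valid arguments.
    return {**DEFAULTS, **overrides}

print(build_day_plan_args(island="maui", max_budget_per_person=150))
```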
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool returns specific components (morning activity, lunch spot, etc.) from a live catalog, which is useful, but lacks details on permissions, rate limits, or what happens if no matches are found. It adequately describes the core behavior but misses operational constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by usage examples, all in two efficient sentences with zero wasted words. Each sentence adds value by clarifying scope and application.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 parameters, no output schema, no annotations), the description is mostly complete: it explains what the tool does, when to use it, and the return structure. However, it lacks details on error handling or output format specifics, which could be helpful for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters (vibe, island, max_budget_per_person) with enums and defaults. The description does not add any parameter-specific details beyond what the schema provides, such as how 'vibe' influences selections or budget handling, meeting the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Build a same-day or trip itinerary') and resources ('Hawaiian island', 'live catalog of tours, food, and experiences'), distinguishing it from siblings like find_hawaii_restaurants or search_hawaii_tours by focusing on comprehensive day planning rather than specific components.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool with examples ('plan my day in Oahu', 'what should I do Saturday in Maui', 'family itinerary for Kauai'), providing clear context for user queries that require full-day planning rather than specific searches or deals offered by sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_hawaii_events: Hawaii Events & Concerts (Grade: A)

Find upcoming events, concerts, festivals, and nightlife across all Hawaiian islands. 579+ events from 70+ venues, updated weekly. Use when users ask what's happening in Hawaii or want entertainment options.

Parameters (JSON Schema):
- query (optional): Type of event, e.g. 'live music', 'luau', 'concert', 'food festival'
- island (optional): Default: any
- days_ahead (optional): How many days ahead to search
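Because every parameter here is optional, the arguments object can be sparse. An illustrative MCP `tools/call` request (values are made up; only the parameter names come from the schema above):

```python
# Hypothetical tools/call request for search_hawaii_events. All three
# parameters are optional (island defaults to 'any'), so omitted fields
# simply take their defaults on the server side.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search_hawaii_events",
        "arguments": {"query": "luau", "days_ahead": 14},
    },
}
print(request["params"]["arguments"])
```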
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It mentions '579+ events from 70+ venues, updated weekly,' which adds useful context about data freshness and scale. However, it doesn't disclose critical behavioral traits: whether this is a read-only operation, what the output format looks like (no output schema), error conditions, rate limits, or authentication needs. For a search tool with no annotation coverage, this leaves significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the core purpose, the second adds quantitative context, and the third provides usage guidelines. Every sentence earns its place with no wasted words, making it highly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 parameters, no annotations, no output schema), the description is partially complete. It covers purpose and usage well but lacks behavioral details (e.g., output format, error handling) and doesn't fully address the parameter gaps. It's adequate as a minimum viable description but has clear room for improvement.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 67% (2 of 3 parameters have descriptions). The description adds no parameter-specific information beyond what the schema provides. With moderate schema coverage, the baseline is 3—the description doesn't compensate for the 33% gap (the 'island' parameter lacks description in both), but it doesn't make things worse either.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Find upcoming events, concerts, festivals, and nightlife across all Hawaiian islands.' It specifies the verb ('Find') and resource ('events, concerts, festivals, and nightlife'), and distinguishes from siblings by focusing on entertainment events rather than restaurants, deals, weather, tours, or day planning. However, it doesn't explicitly differentiate from hypothetical overlapping tools (e.g., 'search_hawaii_activities'), so it's not a perfect 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear usage context: 'Use when users ask what's happening in Hawaii or want entertainment options.' This gives explicit when-to-use guidance. However, it doesn't mention when NOT to use it (e.g., for historical events) or name specific alternatives among the sibling tools, so it falls short of a 5.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_hawaii_tours: Search Hawaii Tours (Grade: A)

Search 2,583 bookable Hawaii tours and activities by keyword, island, price range. Returns tours from Viator, GetYourGuide, Klook, and Groupon with affiliate booking links. Use this when users ask about Hawaii tours, activities, or things to do.

Parameters (JSON Schema):
- limit (optional): Number of results (max 20)
- query (required): What to search for, e.g. 'snorkeling', 'helicopter tour', 'luau', 'family activities'
- island (optional): Which Hawaiian island. Default: any
- source (optional): Filter by booking platform. Default: any
- max_price_dollars (optional): Maximum price per person in USD
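This tool has the widest parameter surface of the six: one required field, two 'any'-defaulted filters, and two optional numeric constraints. A hypothetical client-side helper mirroring that schema (names and values are illustrative):

```python
def build_tour_args(query, island="any", source="any",
                    limit=None, max_price_dollars=None):
    # Hypothetical helper mirroring the schema above: 'query' is the only
    # required parameter; island/source default to 'any'; limit max is 20.
    if not query:
        raise ValueError("'query' is required, e.g. 'snorkeling'")
    args = {"query": query, "island": island, "source": source}
    if limit is not None:
        args["limit"] = min(limit, 20)  # schema: max 20
    if max_price_dollars is not None:
        args["max_price_dollars"] = max_price_dollars
    return args

print(build_tour_args("helicopter tour", island="kauai", limit=30))
```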
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses that results include 'affiliate booking links' (useful context) and mentions the scope (2,583 tours from specific platforms). However, it doesn't cover important behavioral aspects like rate limits, authentication needs, pagination, or error handling for a search tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with two sentences that each earn their place. The first sentence explains what the tool does and its parameters, the second provides explicit usage guidance. No wasted words, well-structured and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a search tool with 5 parameters and no output schema, the description provides good context about what's being searched and when to use it. However, without annotations or output schema, it could benefit from more behavioral details about result format or limitations. The usage guidance against siblings is strong.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents all 5 parameters. The description adds marginal value by mentioning 'keyword, island, price range' which aligns with query, island, and max_price_dollars parameters, but doesn't provide additional semantic context beyond what's in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search 2,583 bookable Hawaii tours and activities') and distinguishes it from siblings by focusing on tours/activities rather than restaurants, deals, weather, events, or day planning. It explicitly mentions the searchable parameters and affiliate sources.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'Use this when users ask about Hawaii tours, activities, or things to do.' This directly tells the agent when to invoke this tool versus its siblings like find_hawaii_restaurants or search_hawaii_events.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

