Glama

hoteloracle

Server Details

Hotel Intelligence MCP — search, price compare, area guides, price calendars via Google Hotels

Status: Healthy
Transport: Streamable HTTP
Repository: ToolOracle/hoteloracle
GitHub Stars: 0
Server Listing
HotelOracle

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama gateway → MCP server
Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3/5 across 8 of 8 tools scored.

Server Coherence: A
Disambiguation: 4/5

Most tools have distinct purposes, such as hotel_search for general searches, hotel_details for deep information, and price_compare for cross-site comparisons. However, cheapest_hotels and hotel_prices_calendar could be confused, since both focus on finding low prices; cheapest_hotels sorts hotels by price, while hotel_prices_calendar analyzes price trends for a specific hotel.

Naming Consistency: 5/5

All tool names follow a consistent snake_case pattern with clear verb_noun or noun_verb structures, such as hotel_search, hotel_details, and price_compare. There are no deviations in naming conventions, making the set predictable and easy to understand.

Tool Count: 5/5

With 8 tools, the server is well-scoped for hotel and travel-related queries, covering key aspects like search, details, pricing, comparisons, and area guides. Each tool serves a specific function without redundancy, making the count appropriate for the domain.

Completeness: 4/5

The tool set provides comprehensive coverage for hotel research, including search, details, pricing, comparisons, and local context. A minor gap is the lack of booking or reservation tools, which might limit end-to-end workflows, but core informational needs are well-addressed.

Available Tools

8 tools
area_guide (Grade C)

Best neighborhoods to stay in a city. Compares areas by price, rating, and popular hotels.

Parameters (JSON Schema):
- city (optional): City name (e.g., 'Tokyo', 'Barcelona', 'New York')
- budget (optional): budget, mid, or luxury (default: mid)
- country (optional): Country (default: us)
- check_in (optional): Check-in YYYY-MM-DD
- currency (optional): Currency (default: USD)
- check_out (optional): Check-out YYYY-MM-DD
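Since the schema documents parameter names and defaults but not how a call is assembled, here is a minimal sketch of building an MCP `tools/call` JSON-RPC request for this tool. The argument names come from the schema above; the helper name and the assumption that unspecified defaults (budget "mid", country "us", currency "USD") are applied server-side are illustrative, not confirmed by the listing.

```python
import json

# Sketch: build a JSON-RPC 2.0 "tools/call" request for area_guide.
# Argument names follow the schema above; defaults are assumed to be
# applied server-side, so only non-default values need to be sent.
def area_guide_request(city, budget="mid", check_in=None, check_out=None,
                       country="us", currency="USD", request_id=1):
    arguments = {"city": city, "budget": budget,
                 "country": country, "currency": currency}
    if check_in:
        arguments["check_in"] = check_in    # YYYY-MM-DD
    if check_out:
        arguments["check_out"] = check_out  # YYYY-MM-DD
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "area_guide", "arguments": arguments},
    }

req = area_guide_request("Tokyo", budget="luxury",
                         check_in="2025-04-01", check_out="2025-04-03")
print(json.dumps(req, indent=2))
```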
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions comparing by price, rating, and hotels, but doesn't cover critical aspects like data sources, accuracy, rate limits, or output format. For a tool with 6 parameters and no output schema, this leaves significant gaps in understanding how it behaves.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the purpose. It avoids redundancy and wastes no words, making it easy to parse. It could be slightly more structured by separating key points, but this is a minor concern.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (6 parameters, no output schema, no annotations), the description is incomplete. It doesn't explain the return values, data sources, or how comparisons are made, leaving agents uncertain about the tool's behavior. For a tool with rich input but no structured output, more context is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 6 parameters with descriptions. The description adds minimal value beyond the schema by implying parameters like 'budget' and 'city' are used for comparisons, but doesn't provide additional syntax or format details. Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to compare neighborhoods by price, rating, and popular hotels for staying in a city. It specifies the verb ('compares') and resource ('neighborhoods'), making it easy to understand. However, it doesn't explicitly differentiate from sibling tools like 'hotel_search' or 'price_compare', which might offer overlapping functionality, preventing a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'hotel_search' or 'price_compare', nor does it specify prerequisites or exclusions. Usage is implied by the purpose, but without explicit context, agents may struggle to choose between similar tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cheapest_hotels (Grade C)

Find the cheapest hotels, sorted by lowest price.

Parameters (JSON Schema):
- query (optional): City or area
- country (optional): Country (default: us)
- check_in (optional): Check-in YYYY-MM-DD
- currency (optional): Currency (default: USD)
- check_out (optional): Check-out YYYY-MM-DD
- max_price (optional): Max price per night
- hotel_class (optional): Min star rating (2-5)
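The schema above names constraints (star rating 2-5, per-night price cap, YYYY-MM-DD dates) without stating how violations are handled. A small client-side validator, sketched under those assumed constraints, can catch bad arguments before the call; the function name and error messages are illustrative.

```python
from datetime import date

# Sketch: pre-flight validation for cheapest_hotels arguments, using the
# constraints implied by the schema above (assumed, not confirmed):
# hotel_class is a minimum star rating in 2..5, max_price is positive,
# dates are YYYY-MM-DD with check_out strictly after check_in.
def validate_cheapest_hotels_args(args):
    errors = []
    hc = args.get("hotel_class")
    if hc is not None and not (2 <= hc <= 5):
        errors.append("hotel_class must be between 2 and 5")
    mp = args.get("max_price")
    if mp is not None and mp <= 0:
        errors.append("max_price must be positive")
    ci, co = args.get("check_in"), args.get("check_out")
    if ci and co and date.fromisoformat(co) <= date.fromisoformat(ci):
        errors.append("check_out must be after check_in")
    return errors

print(validate_cheapest_hotels_args(
    {"query": "Lisbon", "hotel_class": 6,
     "check_in": "2025-05-10", "check_out": "2025-05-09"}))
# Both the star rating and the date order fail here.
```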
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions sorting by lowest price, which is useful, but lacks critical details: whether this is a read-only operation, if it requires authentication, rate limits, pagination, error handling, or what the output format looks like. For a search tool with 7 parameters, this leaves significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence: 'Find the cheapest hotels, sorted by lowest price.' It's front-loaded with the core purpose, has zero wasted words, and is appropriately sized for a search tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (7 parameters, no annotations, no output schema), the description is insufficient. It lacks behavioral context (e.g., read/write nature, error handling), output details, and usage guidance relative to siblings. While concise, it doesn't provide enough information for an agent to confidently invoke this tool in a real-world scenario.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all 7 parameters with descriptions. The description adds no parameter-specific information beyond implying a focus on 'cheapest' (related to price sorting) and 'hotels' (the resource). This meets the baseline of 3, as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Find the cheapest hotels, sorted by lowest price.' It specifies the verb ('find') and resource ('cheapest hotels'), and mentions the sorting criterion. However, it doesn't explicitly differentiate from sibling tools like 'hotel_search' or 'price_compare', which likely have overlapping functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'hotel_search' or 'price_compare', nor does it specify prerequisites, exclusions, or ideal contexts for usage. The agent must infer usage from the name and description alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

health_check (Grade B)

Server status, API connectivity, supported features.

Parameters (JSON Schema): no parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions what is checked (status, connectivity, features) but lacks behavioral details like response format, error handling, whether it performs active tests or returns cached data, or any rate limits. This is a significant gap for a diagnostic tool with zero annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise—three brief phrases separated by commas—with zero wasted words. It's front-loaded with the core purpose and efficiently communicates the scope without unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 params, no output schema, no annotations), the description is minimally adequate. It states what the tool does but lacks details on behavior, output, or integration context. For a health check tool, more guidance on interpretation or use cases would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters, and schema description coverage is 100%, so no parameter documentation is needed. The description doesn't add param info beyond the schema, but with no parameters, a baseline of 4 is appropriate as it avoids redundancy.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: checking server status, API connectivity, and supported features. It names the specific checks performed ('status', 'connectivity') and identifies the resources ('Server', 'API'), but doesn't explicitly differentiate itself from the sibling tools, which are all hotel/attraction related, so the distinction remains implicit rather than explicit.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, timing (e.g., at startup or when errors occur), or contrast with sibling tools, leaving usage context entirely implicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hotel_details (Grade B)

Deep details for a specific hotel: all amenities, reviews breakdown, images, eco-certification, nearby places.

Parameters (JSON Schema):
- query (optional): Hotel name + city (e.g., 'Ritz Paris')
- country (optional): Country (default: us)
- check_in (optional): Check-in YYYY-MM-DD
- currency (optional): Currency (default: USD)
- check_out (optional): Check-out YYYY-MM-DD
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the types of details returned (amenities, reviews, etc.), which adds some context beyond the input schema. However, it doesn't describe critical behaviors: whether this is a read-only operation (implied but not stated), potential rate limits, authentication needs, error handling, or response format. For a tool with no annotations, this leaves significant gaps in understanding how it behaves.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose ('Deep details for a specific hotel') and lists key data types without redundancy. Every element (amenities, reviews, etc.) adds value by specifying what 'deep details' include, and there's no wasted verbiage. It's appropriately sized for a tool with 5 parameters and no output schema.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (a read operation with 5 optional parameters) and lack of annotations or output schema, the description is moderately complete. It clarifies the tool's scope and data types, which helps contextualize it among siblings. However, it doesn't fully compensate for missing behavioral details (e.g., response structure, error cases) or provide usage guidelines, leaving room for improvement in guiding an AI agent effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with all 5 parameters clearly documented (e.g., 'query' as hotel name + city). The description adds no additional parameter semantics beyond what's in the schema, such as explaining how parameters interact (e.g., if 'check_in' and 'check_out' affect the details returned) or providing examples beyond the schema's 'Ritz Paris'. With high schema coverage, the baseline score of 3 is appropriate, as the description doesn't compensate but also doesn't detract.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: retrieving comprehensive details for a specific hotel, listing specific data types (amenities, reviews, images, eco-certification, nearby places). It distinguishes itself from siblings like 'hotel_search' (which likely returns multiple hotels) and 'hotel_prices_calendar' (which focuses on pricing). However, it doesn't explicitly mention that it's for a single hotel, though this is implied by 'specific hotel'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing a hotel identifier), compare it to siblings like 'hotel_search' (for finding hotels) or 'area_guide' (for broader area info), or specify scenarios where it's most useful (e.g., after identifying a hotel from search). Usage is implied by the purpose but not explicitly stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hotel_prices_calendar (Grade C)

Price trend for a specific hotel across different check-in dates. Find the cheapest week.

Parameters (JSON Schema):
- query (optional): Hotel name + city
- weeks (optional): Weeks to scan (1-6, default: 4)
- nights (optional): Stay duration (default: 2)
- country (optional): Country (default: us)
- currency (optional): Currency (default: USD)
- start_date (optional): Start date YYYY-MM-DD
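The interaction between `start_date`, `weeks`, and `nights` is the least obvious part of this schema. The server's actual logic is not documented; the sketch below only illustrates one plausible reading: one candidate stay per week for `weeks` weeks (1-6, default 4), each lasting `nights` nights (default 2).

```python
from datetime import date, timedelta

# Sketch of the scan window the calendar tool implies (assumed, not
# confirmed by the listing): starting from start_date, one candidate
# check-in per week, each stay lasting `nights` nights.
def calendar_windows(start_date, weeks=4, nights=2):
    if not 1 <= weeks <= 6:
        raise ValueError("weeks must be between 1 and 6")
    start = date.fromisoformat(start_date)
    windows = []
    for w in range(weeks):
        check_in = start + timedelta(weeks=w)
        check_out = check_in + timedelta(days=nights)
        windows.append((check_in.isoformat(), check_out.isoformat()))
    return windows

for ci, co in calendar_windows("2025-06-02", weeks=3):
    print(ci, "->", co)
# 2025-06-02 -> 2025-06-04
# 2025-06-09 -> 2025-06-11
# 2025-06-16 -> 2025-06-18
```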
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the tool's function ('price trend', 'find the cheapest week') but does not describe critical behaviors such as data sources, rate limits, error handling, or output format. For a tool with no annotations, this is a significant gap in transparency about how the tool operates beyond its basic purpose.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with two sentences that directly state the tool's purpose and goal. It is front-loaded with the core function and avoids any unnecessary details, making it efficient and easy to parse. Every sentence earns its place by contributing essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 parameters, no output schema, no annotations), the description is insufficient. It lacks details on output format, error conditions, data freshness, or integration with sibling tools. Without annotations or an output schema, the description should provide more context to help the agent understand the tool's behavior and results, but it does not.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, meaning all parameters are documented in the input schema. The description adds no additional semantic information about parameters beyond implying date-range scanning ('across different check-in dates') and the goal of finding the cheapest week. This meets the baseline score of 3, as the schema handles parameter documentation adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Price trend for a specific hotel across different check-in dates. Find the cheapest week.' It specifies the verb ('find') and resource ('price trend'), but does not explicitly differentiate it from sibling tools like 'price_compare' or 'cheapest_hotels', which might have overlapping functionality. This makes it clear but not fully distinct from alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives such as 'price_compare' or 'cheapest_hotels'. It mentions the goal ('find the cheapest week') but does not specify contexts, prerequisites, or exclusions for usage. This lack of comparative guidance leaves the agent without clear direction on tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

nearby_attractions (Grade B)

What is near a hotel: restaurants, landmarks, transit stations, distances.

Parameters (JSON Schema):
- query (optional): Hotel name + city
- check_in (optional): Check-in YYYY-MM-DD
- check_out (optional): Check-out YYYY-MM-DD
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It implies a read-only operation but doesn't specify whether it requires authentication, rate limits, or how results are returned (e.g., pagination, format). The mention of 'distances' hints at output details, but behavioral traits like error handling or data freshness are omitted.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It is front-loaded and appropriately sized, with every element contributing to understanding what the tool does.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 parameters, no output schema, no annotations), the description is adequate but has clear gaps. It covers the basic purpose but lacks usage guidelines, detailed behavioral context, and output information, making it minimally viable for an agent to invoke the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the input schema already documents all three parameters (query, check_in, check_out) with clear descriptions. The description adds minimal value beyond the schema by implying the query relates to a hotel, but it doesn't provide additional syntax, format details, or explain why check-in/check-out dates are relevant for attractions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to find nearby attractions (restaurants, landmarks, transit stations) with distances for a hotel. It specifies the resource (hotel) and the types of information returned, though it doesn't explicitly differentiate from sibling tools like 'area_guide' which might serve a similar function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'area_guide' or 'hotel_details', nor does it specify prerequisites or exclusions for usage, leaving the agent to infer context from the tool name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

price_compare (Grade C)

Compare prices for one hotel across booking sites (Booking.com, Hotels.com, Expedia, etc.).

Parameters (JSON Schema):
- query (optional): Specific hotel name + city
- country (optional): Country (default: us)
- check_in (optional): Check-in YYYY-MM-DD
- currency (optional): Currency (default: USD)
- check_out (optional): Check-out YYYY-MM-DD
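The listing never documents this tool's output format. A comparison result presumably maps booking sites to nightly prices; under that assumption, a caller would reduce it to the cheapest offer like this. The site names match the examples in the description; the price figures are hypothetical.

```python
# Sketch with hypothetical data: assumes price_compare returns, in some
# form, a mapping of booking site -> nightly price. Neither the shape
# nor the figures below are confirmed by the listing.
quotes = {
    "Booking.com": 182.0,   # hypothetical example values
    "Hotels.com": 175.5,
    "Expedia": 179.0,
}
site, price = min(quotes.items(), key=lambda kv: kv[1])
print(f"Cheapest: {site} at {price:.2f} USD/night")
# Cheapest: Hotels.com at 175.50 USD/night
```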
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions comparing prices across sites but doesn't disclose behavioral traits such as rate limits, data freshness (real-time vs. cached), authentication needs, error handling, or output format. For a tool with no annotation coverage, this leaves significant gaps in understanding its operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose. It uses no unnecessary words and includes helpful examples (e.g., 'Booking.com, Hotels.com, Expedia'). Every part of the sentence contributes value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description is incomplete. It doesn't explain what the comparison output includes (e.g., prices, links, availability) or behavioral aspects like rate limits. For a tool with 5 parameters and complex functionality (price aggregation across sites), more context is needed to guide effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so parameters are well-documented in the schema. The description adds no additional parameter semantics beyond implying the 'query' parameter should identify a hotel. Baseline score of 3 is appropriate as the schema handles most of the parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Compare prices') and resource ('for one hotel across booking sites'), with specific examples (Booking.com, Hotels.com, Expedia). It distinguishes from siblings like 'cheapest_hotels' (which likely finds cheapest options) and 'hotel_prices_calendar' (which likely shows price trends), though not explicitly. However, it could be more specific about the scope (e.g., real-time vs. cached data).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives like 'cheapest_hotels' or 'hotel_search'. The description implies usage for price comparison across sites, but lacks context on prerequisites (e.g., requires specific hotel identification) or exclusions (e.g., not for multi-hotel comparisons).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

