hoteloracle
Server Details
Hotel Intelligence MCP — search, price compare, area guides, price calendars via Google Hotels
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: ToolOracle/hoteloracle
- GitHub Stars: 0
- Server Listing: HotelOracle
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3/5 across 8 of 8 tools scored.
Most tools have distinct purposes, such as hotel_search for general searches, hotel_details for deep information, and price_compare for cross-site comparisons. However, cheapest_hotels and hotel_prices_calendar could be confused as both focus on finding low prices, though cheapest_hotels sorts hotels by price while hotel_prices_calendar analyzes price trends for a specific hotel.
All tool names follow a consistent snake_case pattern with clear verb_noun or noun_verb structures, such as hotel_search, hotel_details, and price_compare. There are no deviations in naming conventions, making the set predictable and easy to understand.
With 8 tools, the server is well-scoped for hotel and travel-related queries, covering key aspects like search, details, pricing, comparisons, and area guides. Each tool serves a specific function without redundancy, making the count appropriate for the domain.
The tool set provides comprehensive coverage for hotel research, including search, details, pricing, comparisons, and local context. A minor gap is the lack of booking or reservation tools, which might limit end-to-end workflows, but core informational needs are well-addressed.
Available Tools
8 tools

area_guide (grade C)
Best neighborhoods to stay in a city. Compares areas by price, rating, and popular hotels.
| Name | Required | Description | Default |
|---|---|---|---|
| city | No | City name (e.g., 'Tokyo', 'Barcelona', 'New York') | |
| budget | No | budget, mid, or luxury (default: mid) | |
| country | No | Country (default: us) | |
| check_in | No | Check-in YYYY-MM-DD | |
| currency | No | Currency (default: USD) | |
| check_out | No | Check-out YYYY-MM-DD | |
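As a sketch of how a client might pre-validate area_guide arguments before calling the tool, using only the constraints documented in the table above (the helper name and checks are ours, not part of the server):

```python
from datetime import date

# Hypothetical client-side helper: validates area_guide arguments against
# the documented constraints (budget tiers, YYYY-MM-DD dates). Defaults
# mirror the table's stated defaults.
BUDGET_TIERS = ("budget", "mid", "luxury")

def area_guide_args(city, budget="mid", check_in=None, check_out=None,
                    country="us", currency="USD"):
    if budget not in BUDGET_TIERS:
        raise ValueError(f"budget must be one of {BUDGET_TIERS}")
    args = {"city": city, "budget": budget, "country": country,
            "currency": currency}
    # date.fromisoformat raises ValueError on malformed dates.
    if check_in is not None:
        date.fromisoformat(check_in)
        args["check_in"] = check_in
    if check_out is not None:
        date.fromisoformat(check_out)
        args["check_out"] = check_out
    return args
```

Whether the server itself rejects out-of-range values is undocumented, which is one reason client-side checks like these are worth having.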
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions comparing by price, rating, and hotels, but doesn't cover critical aspects like data sources, accuracy, rate limits, or output format. For a tool with 6 parameters and no output schema, this leaves significant gaps in understanding how it behaves.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the purpose. It avoids redundancy and wastes no words, making it easy to parse. It could be slightly more structured by separating key points, but this is a minor concern.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (6 parameters, no output schema, no annotations), the description is incomplete. It doesn't explain the return values, data sources, or how comparisons are made, leaving agents uncertain about the tool's behavior. For a tool with rich input but no structured output, more context is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 6 parameters with descriptions. The description adds minimal value beyond the schema by implying parameters like 'budget' and 'city' are used for comparisons, but doesn't provide additional syntax or format details. Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to compare neighborhoods by price, rating, and popular hotels for staying in a city. It specifies the verb ('compares') and resource ('neighborhoods'), making it easy to understand. However, it doesn't explicitly differentiate from sibling tools like 'hotel_search' or 'price_compare', which might offer overlapping functionality, preventing a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'hotel_search' or 'price_compare', nor does it specify prerequisites or exclusions. Usage is implied by the purpose, but without explicit context, agents may struggle to choose between similar tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cheapest_hotels (grade C)
Find the cheapest hotels, sorted by lowest price.
| Name | Required | Description | Default |
|---|---|---|---|
| query | No | City or area | |
| country | No | Country (default: us) | |
| check_in | No | Check-in YYYY-MM-DD | |
| currency | No | Currency (default: USD) | |
| check_out | No | Check-out YYYY-MM-DD | |
| max_price | No | Max price per night | |
| hotel_class | No | Min star rating (2-5) | |
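A similar hedged sketch for cheapest_hotels, enforcing the star-rating bound (2-5) and a sensible price check before the call; the helper is illustrative, not part of the server's API:

```python
# Hypothetical argument builder for cheapest_hotels. The hotel_class
# range (2-5) comes from the parameter table; the positivity check on
# max_price is our own assumption, since the server documents no bounds.
def cheapest_hotels_args(query, max_price=None, hotel_class=None, **rest):
    args = {"query": query, **rest}
    if max_price is not None:
        if max_price <= 0:
            raise ValueError("max_price must be positive")
        args["max_price"] = max_price
    if hotel_class is not None:
        if not 2 <= hotel_class <= 5:
            raise ValueError("hotel_class must be between 2 and 5")
        args["hotel_class"] = hotel_class
    return args
```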
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions sorting by lowest price, which is useful, but lacks critical details: whether this is a read-only operation, if it requires authentication, rate limits, pagination, error handling, or what the output format looks like. For a search tool with 7 parameters, this leaves significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence: 'Find the cheapest hotels, sorted by lowest price.' It's front-loaded with the core purpose, has zero wasted words, and is appropriately sized for a search tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (7 parameters, no annotations, no output schema), the description is insufficient. It lacks behavioral context (e.g., read/write nature, error handling), output details, and usage guidance relative to siblings. While concise, it doesn't provide enough information for an agent to confidently invoke this tool in a real-world scenario.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all 7 parameters with descriptions. The description adds no parameter-specific information beyond implying a focus on 'cheapest' (related to price sorting) and 'hotels' (the resource). This meets the baseline of 3, as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Find the cheapest hotels, sorted by lowest price.' It specifies the verb ('find') and resource ('cheapest hotels'), and mentions the sorting criterion. However, it doesn't explicitly differentiate from sibling tools like 'hotel_search' or 'price_compare', which likely have overlapping functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'hotel_search' or 'price_compare', nor does it specify prerequisites, exclusions, or ideal contexts for usage. The agent must infer usage from the name and description alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
health_check (grade B)
Server status, API connectivity, supported features.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It mentions what is checked (status, connectivity, features) but lacks behavioral details like response format, error handling, whether it performs active tests or returns cached data, or any rate limits. This is a significant gap for a diagnostic tool with zero annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise—three brief phrases separated by commas—with zero wasted words. It's front-loaded with the core purpose and efficiently communicates the scope without unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 params, no output schema, no annotations), the description is minimally adequate. It states what the tool does but lacks details on behavior, output, or integration context. For a health check tool, more guidance on interpretation or use cases would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters, and schema description coverage is 100%, so no parameter documentation is needed. The description doesn't add param info beyond the schema, but with no parameters, a baseline of 4 is appropriate as it avoids redundancy.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: checking server status, API connectivity, and supported features. It names specific checks ('status', 'connectivity') and identifies the resources ('Server', 'API'), but doesn't explicitly differentiate from sibling tools, which are all hotel/attraction related, making this distinction implicit rather than explicit.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, timing (e.g., at startup or when errors occur), or contrast with sibling tools, leaving usage context entirely implicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hotel_details (grade B)
Deep details for a specific hotel: all amenities, reviews breakdown, images, eco-certification, nearby places.
| Name | Required | Description | Default |
|---|---|---|---|
| query | No | Hotel name + city (e.g., 'Ritz Paris') | |
| country | No | Country (default: us) | |
| check_in | No | Check-in YYYY-MM-DD | |
| currency | No | Currency (default: USD) | |
| check_out | No | Check-out YYYY-MM-DD | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the types of details returned (amenities, reviews, etc.), which adds some context beyond the input schema. However, it doesn't describe critical behaviors: whether this is a read-only operation (implied but not stated), potential rate limits, authentication needs, error handling, or response format. For a tool with no annotations, this leaves significant gaps in understanding how it behaves.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose ('Deep details for a specific hotel') and lists key data types without redundancy. Every element (amenities, reviews, etc.) adds value by specifying what 'deep details' include, and there's no wasted verbiage. It's appropriately sized for a tool with 5 parameters and no output schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (a read operation with 5 optional parameters) and lack of annotations or output schema, the description is moderately complete. It clarifies the tool's scope and data types, which helps contextualize it among siblings. However, it doesn't fully compensate for missing behavioral details (e.g., response structure, error cases) or provide usage guidelines, leaving room for improvement in guiding an AI agent effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with all 5 parameters clearly documented (e.g., 'query' as hotel name + city). The description adds no additional parameter semantics beyond what's in the schema, such as explaining how parameters interact (e.g., if 'check_in' and 'check_out' affect the details returned) or providing examples beyond the schema's 'Ritz Paris'. With high schema coverage, the baseline score of 3 is appropriate, as the description doesn't compensate but also doesn't detract.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: retrieving comprehensive details for a specific hotel, listing specific data types (amenities, reviews, images, eco-certification, nearby places). It distinguishes itself from siblings like 'hotel_search' (which likely returns multiple hotels) and 'hotel_prices_calendar' (which focuses on pricing). However, it doesn't explicitly mention that it's for a single hotel, though this is implied by 'specific hotel'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing a hotel identifier), compare it to siblings like 'hotel_search' (for finding hotels) or 'area_guide' (for broader area info), or specify scenarios where it's most useful (e.g., after identifying a hotel from search). Usage is implied by the purpose but not explicitly stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hotel_prices_calendar (grade C)
Price trend for a specific hotel across different check-in dates. Find the cheapest week.
| Name | Required | Description | Default |
|---|---|---|---|
| query | No | Hotel name + city | |
| weeks | No | Weeks to scan (1-6, default: 4) | |
| nights | No | Stay duration (default: 2) | |
| country | No | Country (default: us) | |
| currency | No | Currency (default: USD) | |
| start_date | No | Start date YYYY-MM-DD | |
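The weeks/nights/start_date parameters suggest a weekly scan. The server's actual sampling is undocumented, so the sketch below only illustrates how the parameters might interact: one check-in per week from start_date, each stay lasting `nights` days, bounded by the documented 1-6 week range.

```python
from datetime import date, timedelta

# Speculative model of the calendar scan: enumerate the (check_in,
# check_out) spans a weekly sweep would cover. All names here are ours;
# the real server may sample dates differently.
def calendar_checkins(start_date, weeks=4, nights=2):
    if not 1 <= weeks <= 6:
        raise ValueError("weeks must be between 1 and 6")
    start = date.fromisoformat(start_date)
    return [
        (start + timedelta(weeks=w),
         start + timedelta(weeks=w, days=nights))
        for w in range(weeks)
    ]

for check_in, check_out in calendar_checkins("2026-03-02"):
    print(check_in, "->", check_out)
```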
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the tool's function ('price trend', 'find the cheapest week') but does not describe critical behaviors such as data sources, rate limits, error handling, or output format. For a tool with no annotations, this is a significant gap in transparency about how the tool operates beyond its basic purpose.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with two sentences that directly state the tool's purpose and goal. It is front-loaded with the core function and avoids any unnecessary details, making it efficient and easy to parse. Every sentence earns its place by contributing essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (6 parameters, no output schema, no annotations), the description is insufficient. It lacks details on output format, error conditions, data freshness, or integration with sibling tools. Without annotations or an output schema, the description should provide more context to help the agent understand the tool's behavior and results, but it does not.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, meaning all parameters are documented in the input schema. The description adds no additional semantic information about parameters beyond implying date-range scanning ('across different check-in dates') and the goal of finding the cheapest week. This meets the baseline score of 3, as the schema handles parameter documentation adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Price trend for a specific hotel across different check-in dates. Find the cheapest week.' It specifies the verb ('find') and resource ('price trend'), but does not explicitly differentiate it from sibling tools like 'price_compare' or 'cheapest_hotels', which might have overlapping functionality. This makes it clear but not fully distinct from alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives such as 'price_compare' or 'cheapest_hotels'. It mentions the goal ('find the cheapest week') but does not specify contexts, prerequisites, or exclusions for usage. This lack of comparative guidance leaves the agent without clear direction on tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hotel_search (grade C)
Search hotels in a city or area. Returns names, ratings, prices, amenities, and deals. Sortable by price, rating, or reviews.
| Name | Required | Description | Default |
|---|---|---|---|
| query | No | City, area, or landmark (e.g., 'Hotels in Paris', 'Manhattan New York') | |
| adults | No | Guests (default: 2) | |
| country | No | Country (default: us) | |
| sort_by | No | 3=lowest price, 8=highest rating, 13=most reviewed (default: 3) | |
| check_in | No | Check-in date YYYY-MM-DD | |
| currency | No | Currency (default: USD) | |
| check_out | No | Check-out date YYYY-MM-DD | |
| max_price | No | Max price per night | |
| min_price | No | Min price per night | |
| hotel_class | No | Min star rating (2-5) | |
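For reference, invoking hotel_search over MCP is a standard tools/call JSON-RPC request. The sketch below builds one, mapping readable sort names onto the numeric codes from the table (3 = lowest price, 8 = highest rating, 13 = most reviewed); the argument values and helper name are illustrative.

```python
import json

# Sort codes as documented in the sort_by row of the parameter table.
SORT_CODES = {"lowest_price": 3, "highest_rating": 8, "most_reviewed": 13}

def hotel_search_request(query, sort="lowest_price", **extra):
    """Build a hypothetical MCP tools/call payload for hotel_search.

    Extra keyword arguments pass through unchanged, e.g. check_in,
    check_out, max_price, hotel_class.
    """
    args = {"query": query, "sort_by": SORT_CODES[sort]}
    args.update(extra)
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": "hotel_search", "arguments": args},
    }

req = hotel_search_request(
    "Hotels in Paris",
    sort="highest_rating",
    check_in="2026-03-01",
    check_out="2026-03-03",
    hotel_class=4,
)
print(json.dumps(req, indent=2))
```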
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It mentions the search returns specific data fields and sort options, but doesn't cover important aspects like pagination, rate limits, authentication requirements, error conditions, or whether this is a read-only operation. For a search tool with 10 parameters, this leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately concise with two sentences that efficiently convey core functionality. The first sentence covers purpose and return values, the second covers sorting options. No wasted words, though it could be slightly more structured by separating search scope from return values.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 10-parameter search tool with no annotations and no output schema, the description is insufficient. It doesn't explain the return format beyond listing data fields, doesn't mention result limits or pagination, and provides no error handling context. The agent lacks crucial information about what to expect from this tool's behavior and outputs.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 10 parameters thoroughly. The description adds minimal value beyond the schema: it mentions sorting options (implied by the 'sort_by' parameter) but doesn't provide additional context about parameter interactions, dependencies, or usage patterns beyond what's in the schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Search') and resource ('hotels in a city or area'), and mentions what information is returned. However, it doesn't explicitly differentiate from sibling tools like 'cheapest_hotels' or 'hotel_prices_calendar', which appear to be related hotel search/filtering tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'cheapest_hotels' or 'hotel_prices_calendar'. There's no mention of prerequisites, limitations, or comparative advantages. The agent must infer usage from tool names alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
nearby_attractions (grade B)
What is near a hotel: restaurants, landmarks, transit stations, distances.
| Name | Required | Description | Default |
|---|---|---|---|
| query | No | Hotel name + city | |
| check_in | No | Check-in YYYY-MM-DD | |
| check_out | No | Check-out YYYY-MM-DD | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It implies a read-only operation but doesn't specify whether it requires authentication, rate limits, or how results are returned (e.g., pagination, format). The mention of 'distances' hints at output details, but behavioral traits like error handling or data freshness are omitted.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It is front-loaded and appropriately sized, with every element contributing to understanding what the tool does.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 parameters, no output schema, no annotations), the description is adequate but has clear gaps. It covers the basic purpose but lacks usage guidelines, detailed behavioral context, and output information, making it minimally viable for an agent to invoke the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the input schema already documents all three parameters (query, check_in, check_out) with clear descriptions. The description adds minimal value beyond the schema by implying the query relates to a hotel, but it doesn't provide additional syntax, format details, or explain why check-in/check-out dates are relevant for attractions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to find nearby attractions (restaurants, landmarks, transit stations) with distances for a hotel. It specifies the resource (hotel) and the types of information returned, though it doesn't explicitly differentiate from sibling tools like 'area_guide' which might serve a similar function.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'area_guide' or 'hotel_details', nor does it specify prerequisites or exclusions for usage, leaving the agent to infer context from the tool name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
price_compare (grade C)
Compare prices for one hotel across booking sites (Booking.com, Hotels.com, Expedia, etc.).
| Name | Required | Description | Default |
|---|---|---|---|
| query | No | Specific hotel name + city | |
| country | No | Country (default: us) | |
| check_in | No | Check-in YYYY-MM-DD | |
| currency | No | Currency (default: USD) | |
| check_out | No | Check-out YYYY-MM-DD | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It mentions comparing prices across sites but doesn't disclose behavioral traits such as rate limits, data freshness (real-time vs. cached), authentication needs, error handling, or output format. For a tool with no annotation coverage, this leaves significant gaps in understanding its operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose. It uses no unnecessary words and includes helpful examples (e.g., 'Booking.com, Hotels.com, Expedia'). Every part of the sentence contributes value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is incomplete. It doesn't explain what the comparison output includes (e.g., prices, links, availability) or behavioral aspects like rate limits. For a tool with 5 parameters and complex functionality (price aggregation across sites), more context is needed to guide effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are well-documented in the schema. The description adds no additional parameter semantics beyond implying the 'query' parameter should identify a hotel. Baseline score of 3 is appropriate as the schema handles most of the parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Compare prices') and resource ('for one hotel across booking sites'), with specific examples (Booking.com, Hotels.com, Expedia). It distinguishes from siblings like 'cheapest_hotels' (which likely finds cheapest options) and 'hotel_prices_calendar' (which likely shows price trends), though not explicitly. However, it could be more specific about the scope (e.g., real-time vs. cached data).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives like 'cheapest_hotels' or 'hotel_search'. The description implies usage for price comparison across sites, but lacks context on prerequisites (e.g., requires specific hotel identification) or exclusions (e.g., not for multi-hotel comparisons).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
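A small pre-publish sanity check for the claim file might look like the following. It only verifies the structure documented above; Glama's actual verification happens server-side, and the email value shown is a placeholder.

```python
import json

# The claim document as documented on this page (placeholder email).
claim = {
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
}

def validate_claim(doc):
    """Check the documented glama.json shape before publishing it."""
    if not str(doc.get("$schema", "")).startswith("https://glama.ai/"):
        raise ValueError("unexpected $schema")
    maintainers = doc.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        raise ValueError("maintainers must be a non-empty list")
    for m in maintainers:
        if "@" not in m.get("email", ""):
            raise ValueError("each maintainer needs an email address")
    return True

# Round-trip through JSON to confirm the file serializes cleanly.
validate_claim(json.loads(json.dumps(claim)))
```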
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.