waysway
Server Details
Waysway AI travel MCP for hotels, live prices, restaurants, flights, and activities.
- Status: Unhealthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.6/5 across 20 of 20 tools scored. Lowest: 1.8/5.
Most tools have clear, distinct purposes. The main overlap is between 'update_user_profile' and 'update_user_profile_from_model', which serve similar functions through different input methods and could cause confusion; otherwise the boundaries are clear.
All tool names follow a consistent verb_noun pattern in snake_case, such as 'search_hotels', 'update_stay_dates', and 'get_hotel_details', with no mixing of naming styles or other irregularities.
With 20 tools covering hotels, homestays, flights, activities, restaurants, profile management, and session handling, the count is well-scoped for a travel assistant server. Each tool serves a distinct function without bloat.
The tool set covers core travel planning needs: search, details, offers, and profile management. However, it lacks a tool to actually confirm or book an offer, which is a noticeable gap for a complete booking flow.
Available Tools
20 tools

check_stay_price (B)
Check live stay price for a previously recommended hotel.
REQUIREMENT:
- update_stay_dates must already have been called

| Name | Required | Description | Default |
|---|---|---|---|
| session_id | Yes | | |
| hotel_index | No | | |
| hotel_code_or_name | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
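
The stated prerequisite implies a two-step sequence. A hypothetical sketch in the call notation used elsewhere on this page (values are illustrative; hotel_index is assumed to reference a previously recommended hotel):

update_stay_dates(session_id="s1", checkin_str="2025-04-01", nights=3)
check_stay_price(session_id="s1", hotel_index=0)  # price the first recommended hotel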
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the behavioral burden. It only states that the tool checks a live price but does not disclose any behavioral traits such as idempotency, side effects, authorization needs, or what happens if prerequisites are not met.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, with two short sentences that are front-loaded and waste no words. It is easy to read and understand.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the description states a crucial requirement and the tool has an output schema (which may provide return value details), it lacks parameter explanations and does not cover potential error scenarios or behavioral nuances. It is minimally adequate but has clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 0% description coverage, and the description does not explain the meaning or usage of any of the three parameters (session_id, hotel_index, hotel_code_or_name). It fails to compensate for the lack of schema-level descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool checks live stay price for a previously recommended hotel, using a specific verb and resource. It distinguishes from sibling tools like search_hotels or get_booking_offers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly requires that update_stay_dates must be called before using this tool, providing clear usage context. However, it does not specify when not to use it or mention alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
clear_session (C)
Clear the entire session.
| Name | Required | Description | Default |
|---|---|---|---|
| session_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description offers minimal behavioral insight. It does not disclose side effects, required permissions, or whether the action is reversible, which is critical for a clear/destructive operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (one sentence) and front-loaded, but overly minimal. It effectively conveys the core action but trades completeness for brevity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, no enums), the description should still clarify the output (since an output schema exists) and the exact effect of clearing. It fails to do so.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, and the description fails to explain the required parameter 'session_id'. It adds no value beyond the schema itself, leaving the agent to infer the parameter's role.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states a clear action ('Clear') and resource ('session'), distinguishing it from sibling tools like start_session or get_session_state. However, it lacks specificity about what 'clear' entails (e.g., delete data, reset state).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. With multiple session-related tools (start_session, get_session_state, list_sessions), the description should indicate use cases or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
extract_user_profile_from_text (A)
Lightweight fallback extraction from raw text.
This is mainly for debugging and fallback use.
It is NOT the preferred production path.
Preferred production path:
- external model understands raw text
- external model outputs standard UserProfile fields
- call update_user_profile_from_model(...)

| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
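
To make the fallback-versus-preferred contrast concrete, a hypothetical sketch (values are illustrative; update_user_profile_from_model's arguments are assumed to mirror update_user_profile's):

# preferred production path: the external model emits standard UserProfile fields
update_user_profile_from_model(session_id="s1", user_profile={"travel_purpose": "honeymoon"})
# debugging / fallback only
extract_user_profile_from_text(text="We want a quiet beach hotel for our honeymoon")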
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It reveals it is 'lightweight' and 'fallback', implying limited capabilities, but does not detail what extraction behavior occurs (e.g., fields extracted, failure modes, idempotency). Some context but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise at 4 short sentences. Purpose stated first, followed by fallback status, then explicit alternative. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 param, output schema exists), the description adequately explains its role as fallback and the production alternative. It is slightly lacking in behavioral details but sufficient for the agent to understand its limited role.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Only one parameter 'text' with schema coverage 0%. Description only restates 'raw text', adding no semantics beyond the schema type string. Fails to compensate for low schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it is a 'lightweight fallback extraction from raw text' and explicitly contrasts it with the preferred production path using 'update_user_profile_from_model'. The verb 'extract' with resource 'user profile from text' is specific and distinct from siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says it is 'mainly for debugging and fallback use' and 'NOT the preferred production path', naming the exact alternative ('update_user_profile_from_model'). Provides clear when-to-use and when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_booking_offers (B)
Get available hotel offers for the current session dates.
REQUIREMENT:
- stay dates should already be set via update_stay_dates
- the hotel should come from previous search results

| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| session_id | Yes | | |
| hotel_index | No | | |
| hotel_code_or_name | No | | |
| preferred_room_name | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
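
The two prerequisites expand to a sequence like the following (values are illustrative; main_interest is included because search_hotels requires it):

update_stay_dates(session_id="s1", checkin_str="2025-04-01", checkout_str="2025-04-05")
search_hotels(session_id="s1", city="Tokyo", main_interest="sightseeing")
get_booking_offers(session_id="s1", hotel_index=0, limit=3)  # hotel from the search results above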
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It reveals preconditions but does not disclose behavioral traits like read-only nature, side effects, or authorization needs. The description gives minimal behavioral insight beyond the prerequisites.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is short and front-loaded with the main purpose and two requirements. It is concise without fluff, though the requirements list could be more integrated.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of 5 parameters and no schema descriptions, the description is incomplete. It does not explain the majority of parameters or provide context on the output (though output schema exists). The prerequisites are helpful but insufficient for complete usage understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 5 parameters and 0% schema description coverage, the description adds no semantic meaning to the parameters. It hints that hotel_index/hotel_code_or_name select a hotel but does not map them. The agent gains no parameter guidance from the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves hotel offers for the current session dates, but does not explicitly differentiate from siblings like 'check_stay_price' or 'search_hotels'. The prerequisites hint at its specific role.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly lists two requirements (update_stay_dates must be set, hotel from previous search) that guide when to use this tool. However, it does not mention when not to use it or provide alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_hotel_details (A)
Get detailed information for a hotel.
Use this after search_hotels when the user asks for more details
about a recommended option.

| Name | Required | Description | Default |
|---|---|---|---|
| session_id | Yes | | |
| hotel_code_or_name | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
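
The "after search_hotels" guidance translates into a short sequence (values are illustrative):

search_hotels(session_id="s1", city="Tokyo", main_interest="sightseeing")
get_hotel_details(session_id="s1", hotel_code_or_name="Hotel Example")  # one of the recommended options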
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No behavioral traits disclosed beyond basic purpose; no annotation provided, so description must cover safety, side effects, or limitations, but it only says 'get detailed information', leaving agent without guidance on read-only vs. state-changing behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short, front-loaded sentences with no filler; each sentence provides essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity and presence of output schema, the description adequately covers purpose and usage context, though parameter details are missing.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, and description does not explain the meaning or format of 'session_id' or 'hotel_code_or_name', relying entirely on schema which lacks descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states verb 'Get' and resource 'detailed information for a hotel', distinguishing it from sibling search_hotels which returns a list.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'after search_hotels when the user asks for more details about a recommended option', giving clear context and excluding other scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_session_state (B)
Get a compact summary of the current session state.
| Name | Required | Description | Default |
|---|---|---|---|
| session_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description implies a read-only operation, but it does not explicitly state that the tool is non-destructive or that it requires no special permissions. Since annotations are absent, the description should provide more detail about behavior, but the basic inference of a get operation is adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, direct sentence with no unnecessary words. It front-loads the key information ('Get a compact summary') and is efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, output schema exists), the description covers the basic purpose. However, it omits context about when in the session lifecycle to call this tool (e.g., after start_session, before clear_session) and does not hint at the summary's contents.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 0% description coverage, and the description adds no information about the 'session_id' parameter beyond its name. The parameter is self-explanatory, but the description does not elaborate on format, validity, or expected values, which is a missed opportunity given low schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a compact summary of the current session state, specifying the action ('Get') and the resource ('compact summary of the current session state'). It distinguishes itself from sibling tools like start_session and clear_session, which involve creation or deletion.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. There is no mention of conditions, prerequisites (e.g., session must exist), or exclusions (e.g., do not use if session is invalid).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_user_profile_schema (A)
Return the strict standard schema for user_profile.
IMPORTANT:
- Only these standard UserProfile fields are allowed.
- No alias mapping is supported.
- city is NOT a profile field.
- checkin_date / checkout_date are NOT profile fields.
- Numeric budget constraints usually belong to search_hotels price_min/price_max.
Use this tool when the external model is unsure what fields are allowed.

| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
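
Since the tool takes no parameters, the interesting part is how its result feeds the profile-update path. A hypothetical sketch (update_user_profile_from_model's arguments are assumed to mirror update_user_profile's):

get_user_profile_schema()  # returns the allowed standard UserProfile fields
# the external model limits its output to those fields, then:
update_user_profile_from_model(session_id="s1", user_profile={"hotel_style": "boutique"})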
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries burden. It implies a read-only query but doesn't explicitly state no side effects or discuss authentication or rate limits. The purpose is transparent, but behavioral details are missing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information is front-loaded: purpose first, then bullet points for key constraints. Every sentence adds value, no waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no params, has output schema), the description covers the schema's constraints, allowed fields, and exceptions. It's complete for an agent to understand what this tool returns and when to use it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With zero parameters and 100% schema coverage, baseline is 4. The description adds context about the output schema (excluded fields) but not about parameters themselves.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the action 'Return the strict standard schema for user_profile.' Distinct from siblings like update_user_profile or extract_user_profile_from_text.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly advises using this tool 'when the external model is unsure what fields are allowed.' Also lists non-allowed fields and where budget constraints belong, providing clear guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
health_check (A)
Health check for the MCP server.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, and the description 'Health check' does not disclose any behavioral details such as side effects, read-only nature, or response format. For a health check, it is minimally transparent but lacks explicit behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence with no superfluous words. It is well-structured and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no parameters and an output schema exists, the description is minimally complete. It could be improved by mentioning what the health check returns (e.g., server status), but it is sufficient for a simple health check.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are no parameters (0 params), so the description does not need to add parameter meaning. The schema coverage is 100%, and the zero-parameter baseline is 4. The description is adequate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the purpose: 'Health check for the MCP server.' It uses a specific verb+resource (health check) and distinguishes itself from sibling tools like search_hotels or update_stay_dates, which are unrelated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool vs alternatives. However, the purpose is self-explanatory, and the context suggests it's for checking server health. Implied usage is adequate for a simple health check.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ingest_user_message (A)
Store raw user message into session context.
IMPORTANT:
This tool is no longer the main profile extraction path.
RECOMMENDED USE:
- store raw user text for context
- let the external model decide profile fields
- then call update_user_profile_from_model(...)
DEFAULT BEHAVIOR:
- append raw text into user_info
- do not aggressively infer profile fields
OPTIONAL:
- if use_fallback_extraction=True, run a lightweight fallback extractor

| Name | Required | Description | Default |
|---|---|---|---|
| merge | No | | |
| message | Yes | | |
| session_id | Yes | | |
| use_fallback_extraction | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
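
The recommended use reads as a three-step pattern (values are illustrative; update_user_profile_from_model's arguments are assumed):

ingest_user_message(session_id="s1", message="We love rooftop pools and quiet streets")
# the external model derives structured fields from the raw text, then:
update_user_profile_from_model(session_id="s1", user_profile={"facilities_like": ["rooftop pool"]})
# debugging only: opt into the lightweight extractor
ingest_user_message(session_id="s1", message="quiet streets please", use_fallback_extraction=True)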
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses default behavior (append text, no aggressive inference) and optional fallback extraction. Lacks details on side effects or prerequisites but adequate for simple store.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with sections (IMPORTANT, RECOMMENDED USE, etc.), front-loaded purpose, no wasted sentences. Every part adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers major aspects: purpose, usage pattern, defaults, optional feature. However, does not describe return value/output, and prerequisites (e.g., session existence) are implied but not explicit. Output schema exists but not detailed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Description mentions 'message' and 'use_fallback_extraction' with context, but misses 'session_id' (required) and 'merge' (default true). With 0% schema coverage, partial compensation reduces score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'Store raw user message into session context' – a specific verb and resource. Distinguishes from siblings by noting it's not the main extraction path and recommending update_user_profile_from_model.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit recommended use: store raw text, let external model decide, then call update_user_profile_from_model. Implies when not to use (not for aggressive inference), though does not list all sibling alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_sessions (A)
List all in-memory session summaries. Useful for debugging only.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It correctly indicates a read-only listing operation ('List all in-memory session summaries') without hinting at destructive behavior, which aligns with expectations for a list tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short, front-loaded sentences that convey purpose and usage without any extraneous words. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a parameterless list tool with an output schema, the description is complete: it states what it lists and when to use it. Output schema handles return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so no parameter description is needed. The description accurately reflects that there are no arguments required.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'list' and resource 'in-memory session summaries', and adds context 'useful for debugging only' which distinguishes it from sibling tools like 'clear_session' or 'get_session_state'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'useful for debugging only', providing clear when-to-use guidance. Does not mention alternatives or when-not-to-use, but the single purpose is sufficient for this simple tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
normalize_stay_dates (C)
Parse stay dates without using session state.
user_profile here is optional and only accepts standard profile fields.
city is not required here.

| Name | Required | Description | Default |
|---|---|---|---|
| nights | No | | |
| checkin_str | Yes | | |
| checkout_str | No | | |
| user_profile | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
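
Because the tool is session-free, it can be called standalone. A hypothetical sketch (the date string format is assumed, since neither the schema nor the description specifies one):

normalize_stay_dates(checkin_str="2025-04-01", nights=3)
normalize_stay_dates(checkin_str="2025-04-01", checkout_str="2025-04-05")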
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description bears full burden. It discloses that the tool does not use session state and that user_profile is optional and constrained. However, it does not describe side effects, validation behavior, or whether this is a read-only operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is short (3 sentences) but includes irrelevant info about city. It could be more concise by stating date input formats clearly. The structure is acceptable but not optimal.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 4 parameters and 0% schema coverage, the description is incomplete. It does not explain return values (though output schema exists), date parsing rules, or how to use the optional parameters. Missing critical context for an agent to invoke correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It only partially addresses user_profile (optional, standard fields), but mentions 'city' which is not a parameter, and gives no information about date formats or constraints for checkin_str, nights, checkout_str.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool parses stay dates without using session state, distinguishing it from session-dependent alternatives. The verb 'parse' and resource 'stay dates' are specific, though 'normalize' could be more explicit.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies a use case (without session state) but does not explicitly name alternative tools or provide when-not-to-use guidance. The mention of 'city is not required' hints at a comparison but is insufficient.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_activities_in_city (A)
Search activities in a city.
If city is omitted, use session.city from prior hotel search.
| Name | Required | Description | Default |
|---|---|---|---|
| city | No | ||
| limit | No | ||
| session_id | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
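
The documented city fallback implies two call shapes (values are illustrative):

search_activities_in_city(session_id="s1", city="Kyoto", limit=5)
search_activities_in_city(session_id="s1")  # falls back to session.city from the prior hotel search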
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the burden. It mentions the fallback behavior for city, but does not disclose other traits like error handling or read-only nature.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two very short sentences, no wasted words. The essential information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (3 parameters, output schema exists), the description covers the core behavior but lacks details on limit and session_id, leaving gaps for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%. The description explains only the city parameter's fallback, leaving limit and session_id with no added meaning beyond their names and defaults.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Search activities in a city' with a specific verb and resource, distinguishing it from sibling tools like search_hotels and search_restaurants_near_hotel.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides guidance on when to omit the city parameter (use session.city from prior hotel search), but does not explicitly exclude other scenarios or compare with alternative tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_flight_content (D)
Search airline-related content.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | ||
| session_id | Yes | ||
| airline_tag | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description does not disclose whether the tool is read-only, modifies state, or requires authentication. The behavioral impact is unclear.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence, but it under-specifies the tool. It is concise but at the expense of useful content, making it inadequate rather than efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given three parameters (one required) and an output schema, the description lacks essential context about the tool's functionality, return type, or use cases. It fails to provide a complete picture for the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, and the description adds no explanation for the three parameters (session_id, limit, airline_tag). The agent gains no insight beyond the schema types and defaults.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Search airline-related content' is vague. It does not specify what kind of content (e.g., flights, airline info, deals) or how it differs from sibling search tools like 'search_hotels' or 'search_activities_in_city'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. It does not mention the requirement of a session_id, nor does it provide exclusions or context for when to prefer other search tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_homestays_in_city (A)
Search homestays in a city.
If city is omitted, use session.city from prior hotel search.
| Name | Required | Description | Default |
|---|---|---|---|
| city | No | ||
| limit | No | ||
| session_id | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden of behavioral disclosure. It only reveals the fallback behavior for city but omits other important aspects such as whether the operation is read-only, authentication requirements, or any side effects. This is insufficient for a complete understanding.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with only two sentences, no redundant information. Every word provides value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the existence of an output schema (which may document return values) and the tool's moderate complexity (3 parameters, 1 required), the description is minimally adequate. It covers the core use case but lacks details on parameters, results, and error handling.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description must compensate. It only explains the city parameter's fallback behavior, leaving limit and session_id entirely unexplained. The default value for limit (3) is not mentioned, and session_id's role is unclear.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches for homestays in a city, distinguishing it from sibling tools like search_hotels. The verb 'Search' and resource 'homestays in a city' are specific and unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on omitting the city parameter by falling back to session.city from a prior hotel search. However, it does not explicitly state when to use this tool over siblings like search_hotels, though the context implies it for homestay-specific queries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_hotels (A)
Search and recommend hotels.
USE THIS TOOL FOR:
- city-based hotel search
- main search intent
- direct numeric search constraints such as price_min / price_max
- explicit room or amenity filters
IMPORTANT:
- city MUST be passed here, not inside user_profile
- user_profile is for stable preferences, not for search location
- if dates are known, call update_stay_dates first
- if profile preferences are known, call update_user_profile_from_model first
RECOMMENDED ORDER:
1. start_session
2. update_user_profile_from_model (optional)
3. update_stay_dates (optional)
4. search_hotels(city=..., ...)

| Name | Required | Description | Default |
|---|---|---|---|
| city | Yes | | |
| limit | No | | |
| suitable | No | | |
| amenities | No | | |
| price_max | No | | |
| price_min | No | | |
| room_type | No | | |
| rating_min | No | | |
| session_id | Yes | | |
| surroundings | No | | |
| main_interest | Yes | | |
| user_specific | No | | |
| non_refundable | No | | |
| exclude_hotel_codes | No | | |
| required_name_terms | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
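
The RECOMMENDED ORDER above expands to a concrete sequence such as the following (values are illustrative; optional steps marked as in the description):

start_session(session_id="s1", user_profile={"preferred_language": "en"})
update_user_profile_from_model(session_id="s1", user_profile={"hotel_style": "boutique"})  # optional
update_stay_dates(session_id="s1", checkin_str="2025-04-01", checkout_str="2025-04-05")  # optional
search_hotels(session_id="s1", city="Tokyo", main_interest="food", price_min=100, price_max=250)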
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided; description does not disclose side effects, idempotency, or return format. It adds some behavioral context (e.g., city must be passed here) but insufficient for a full behavioral profile.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with sections, front-loaded purpose. Each section adds value, but slightly lengthy; could be trimmed without losing clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given parameter complexity and sibling tools, description covers search focus, required order, and key constraints. Lacks output schema details but overall sufficient for agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%; description explains purpose of price_min/max, room/amenity filters, and city, but only covers a subset of 15 parameters. Provides some semantic value beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Search and recommend hotels' with a specific verb and resource, and distinguishes from siblings like search_homestays_in_city.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit sections 'USE THIS TOOL FOR:', 'IMPORTANT:', and 'RECOMMENDED ORDER:' provide clear when-to-use, prerequisites, and invocation sequence.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_restaurants_near_hotel (C)
Search restaurants near a hotel.
| Name | Required | Description | Default |
|---|---|---|---|
| hotel_lat | No | ||
| hotel_lon | No | ||
| session_id | Yes | ||
| hotel_index | No | ||
| hotel_code_or_name | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description does not disclose any behavioral traits such as read vs write, return format, pagination, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is one sentence but lacks necessary detail; it is under-specified rather than concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 parameters and no annotation support, the description does not provide enough context for an agent to use the tool correctly. It does not clarify how parameters relate or what the output contains (though output schema exists).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 5 parameters with 0% description coverage, and the description does not mention any of them. It fails to clarify how to specify the hotel (e.g., lat/lon vs code/name).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description includes a clear verb (Search) and resource (restaurants near a hotel), distinguishing it from sibling tools like search_hotels and search_activities_in_city. However, it lacks specificity on how 'near' is defined.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs alternatives. It does not specify prerequisites or when it should be preferred over other search tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
start_session (A)
Create or replace a session.
PURPOSE:
- Initialize conversation state
- Optionally store stable user profile fields
DO:
- pass stable or semi-stable profile information in user_profile
- pass preferred_language if known
- call this before other hotel-related tools
DO NOT:
- do NOT put city inside user_profile
- do NOT put checkin_date or checkout_date inside user_profile
- do NOT put search filters inside user_profile
CORRECT WORKFLOW:
1. start_session(...)
2. update_user_profile_from_model(...) # optional
3. update_stay_dates(...) # optional
4. search_hotels(city=..., ...)
WRONG:
user_profile = {
"city": "New York",
"checkin_date": "2023-04-01",
"checkout_date": "2023-04-05"
}
RIGHT:
start_session(user_profile={"preferred_language": "en"})
update_stay_dates(checkin_str="2023-04-01", checkout_str="2023-04-05")
search_hotels(city="New York", ...)

| Name | Required | Description | Default |
|---|---|---|---|
| user_id | No | | |
| metadata | No | | |
| session_id | Yes | | |
| user_profile | No | | |
| preferred_language | No | | en |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
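
Since the server's transport is Streamable HTTP, the RIGHT workflow can also be driven from the official mcp Python SDK. A minimal sketch, assuming the mcp package is installed and using a placeholder server URL:

import asyncio
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def main():
    # connect over Streamable HTTP (URL is a placeholder)
    async with streamablehttp_client("https://example.com/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # the RIGHT workflow from the description, expressed as tool calls
            await session.call_tool("start_session", {"session_id": "s1", "user_profile": {"preferred_language": "en"}})
            await session.call_tool("update_stay_dates", {"session_id": "s1", "checkin_str": "2023-04-01", "checkout_str": "2023-04-05"})
            await session.call_tool("search_hotels", {"session_id": "s1", "city": "New York", "main_interest": "sightseeing"})

asyncio.run(main())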
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It mentions 'create or replace a session' but does not explain what 'replace' means in practice, idempotency, or side effects on existing state. It focuses on parameter usage rather than behavioral traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description uses clear headings, bullet points, and labeled examples. It is front-loaded with purpose, and every section adds value without redundancy. The structure makes it easy for an agent to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's role in a multi-step workflow, the description provides a complete picture by including a correct workflow referencing sibling tools, examples of wrong usage, and parameter constraints. The output schema exists, so return values need not be described.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It adds meaning for user_profile and preferred_language with guidance on what not to include, but leaves user_id, metadata, and session_id undescribed. The workflow example implicitly uses session_id but does not explain it.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool creates or replaces a session and initializes conversation state, distinguishing it from sibling tools like clear_session and get_session_state. It explicitly positions start_session as the first step before hotel-related tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit DO and DO NOT lists, a correct workflow with step-by-step sequence, and concrete examples of right and wrong usage. It clearly states when to call the tool (before other hotel tools) and what parameters to pass or avoid.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_stay_dates (A)
Update session stay dates.
USE THIS TOOL FOR:
- check-in date
- check-out date
- nights-based date calculation
DO NOT:
- do NOT put dates inside user_profile
- do NOT try to pass dates through start_session user_profile

| Name | Required | Description | Default |
|---|---|---|---|
| nights | No | | |
| session_id | Yes | | |
| checkin_str | Yes | | |
| checkout_str | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
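
The two supported date shapes from the USE THIS TOOL FOR list look like this (values are illustrative):

update_stay_dates(session_id="s1", checkin_str="2025-04-01", checkout_str="2025-04-05")
update_stay_dates(session_id="s1", checkin_str="2025-04-01", nights=4)  # nights-based calculation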
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While lacking annotations, the description adds value by warning against misplacing dates, but does not fully disclose mutation behavior, side effects, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise, two clear sections, front-loaded with purpose, and every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 4 parameters, no annotations, and no output schema description, the description is incomplete. It does not cover return values or error handling, leaving significant gaps for a mutation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, and the description provides no explanation of the parameters (session_id, checkin_str, checkout_str, nights). The agent must rely solely on parameter names, which is insufficient.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'Update session stay dates' and lists specific use cases (check-in, check-out, nights-based date calculation), making the purpose very clear and distinguishing it from sibling tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear 'USE THIS TOOL FOR' and 'DO NOT' sections, explicitly telling the agent when to use it and what actions to avoid, such as not putting dates in user_profile or through start_session.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_user_profile (A)
Update session.user_profile using standard UserProfile fields only.
USE THIS WHEN:
- the caller already has structured profile fields
DO NOT PUT HERE:
- city
- checkin_date
- checkout_date
- raw numeric search constraints that belong in search_hotels
Preferred profile fields include:
- price_range
- budget_flexibility
- relationships
- travel_purpose
- hotel_style
- facilities_like
- surroundings_like

| Name | Required | Description | Default |
|---|---|---|---|
| merge | No | | |
| session_id | Yes | | |
| user_profile | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
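For illustration, a plausible argument payload built from the preferred fields listed above; all values are placeholders, and the explicit merge flag is an assumption, since merge semantics are undocumented:

```json
{
  "session_id": "sess_abc123",
  "merge": true,
  "user_profile": {
    "price_range": "mid-range",
    "travel_purpose": "leisure",
    "hotel_style": "boutique",
    "facilities_like": "pool, breakfast"
  }
}
```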
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears full responsibility for behavioral disclosure. However, it does not explain effects like whether the update is destructive, how merging works (despite a 'merge' parameter), or any authentication or rate limit requirements. The description focuses on field usage rather than operational behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, structured with bullet points for usage guidelines, and each sentence adds value. There is no redundancy or wasted text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description provides good context for the user_profile parameter and usage boundaries, but does not explain the session_id parameter's role or the session-based system. Since an output schema exists, return values are not required, but the description could be more complete regarding overall context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description compensates well by listing preferred fields for the free-form 'user_profile' parameter (e.g., price_range, budget_flexibility). However, it does not explain the 'merge' or 'session_id' parameters beyond their names, leaving some gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool updates 'session.user_profile' with 'standard UserProfile fields only,' providing a specific verb and resource. It further distinguishes from siblings by explicitly listing fields that should not be placed here, such as 'city' and 'checkin_date,' and mentions 'search_hotels' as an alternative.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes explicit 'USE THIS WHEN' and 'DO NOT PUT HERE' sections, offering clear conditions for tool invocation. It names a sibling tool ('search_hotels') for raw numeric search constraints, providing excellent guidance on when to use alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_user_profile_from_model (A)
Preferred path for profile extraction.
The external model should:
- read the user's natural language message
- decide which STANDARD UserProfile fields are supported by evidence
- send ONLY those fields here
HARD RULES:
- only standard UserProfile fields are allowed
- do NOT invent new fields
- do NOT put city here
- do NOT put checkin_date or checkout_date here
- omit uncertain fields instead of guessing
- numeric budget constraints usually belong to search_hotels price_min/price_max
GOOD EXAMPLE:
{
"relationships": "solo",
"travel_purpose": "leisure",
"hotel_style": "luxury",
"facilities_like": "breakfast, gym",
"surroundings_like": "downtown"
}
BAD EXAMPLE:
{
"city": "New York",
"checkin_date": "2023-04-01",
"checkout_date": "2023-04-05",
"budget": 100
}

| Name | Required | Description | Default |
|---|---|---|---|
| merge | No | | |
| session_id | Yes | | |
| extracted_user_profile | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
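Putting the pieces together, a hedged sketch of a full call that wraps the GOOD EXAMPLE payload above; the session_id value and the explicit merge flag are assumptions, as neither is documented:

```json
{
  "session_id": "sess_abc123",
  "merge": true,
  "extracted_user_profile": {
    "relationships": "solo",
    "travel_purpose": "leisure",
    "hotel_style": "luxury",
    "facilities_like": "breakfast, gym",
    "surroundings_like": "downtown"
  }
}
```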
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses rules about allowed fields and certainty, but it does not explain the merge behavior, side effects on the existing profile, or whether the update is additive or a full replacement. The description is partially transparent but leaves gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with purpose, uses structured sections (HARD RULES, examples), and every sentence adds value. It could be slightly shorter but is well-organized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity and the presence of an output schema, the description does not cover return values or behavioral details such as merge-versus-replace semantics. It provides good input context but lacks completeness on tool behavior and session interaction.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, so the description must compensate. It explains the 'extracted_user_profile' parameter in detail with rules and examples, but the session_id and merge parameters are not described; 'merge', which defaults to true, is left entirely unexplained.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool is for profile extraction from an external model, specifying the verb 'update' and the resource 'user profile from model'. It positions itself as the 'Preferred path' and implicitly contrasts with the sibling tool extract_user_profile_from_text.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit 'HARD RULES' about what not to include and gives good/bad examples. It mentions that numeric budget constraints belong to search_hotels, offering an alternative. However, it does not explicitly compare with siblings or state when not to use this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming this connector lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.