GuruWalk
Server Details
Free walking tours & activities in 200+ cities. Browse, check availability, and get tour details.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across 4 of 4 tools scored.
Each tool has a clearly distinct purpose with no overlap: discover_destination for initial city exploration, browse_category for category-specific listings, get_product_detail for detailed tour information, and check_availability for booking logistics. The descriptions explicitly guide when to use each tool, preventing misselection.
All tool names follow a consistent verb_noun pattern (discover_destination, browse_category, get_product_detail, check_availability), using descriptive verbs and clear nouns. This predictability makes the set easy to navigate and understand.
With 4 tools, the set is well-scoped for the travel booking domain, covering the core workflow from discovery to booking without bloat. Each tool earns its place by addressing a distinct step in the user journey, making the count appropriate and efficient.
The tool surface provides complete coverage for the travel booking domain: discover_destination for initial search, browse_category for filtering, get_product_detail for information, and check_availability for booking. There are no obvious gaps in the read-oriented lifecycle from exploration to reservation hand-off.
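The discovery-to-booking flow above can be sketched as a sequence of tool-call argument payloads. This is illustrative only: the product IDs, hub slug, and per-item date field names are assumptions, and the MCP client/transport is omitted.

```python
# Illustrative argument payloads for the four-step GuruWalk workflow.
# Product IDs and the hub slug are made up; field names follow the
# schemas documented below.

# Step 1: discover the destination once the traveler's city is known.
discover_args = {"destination": "Rome", "language": "en"}

# Step 2: browse a category returned by discover_destination.
# 'free-tour' requires the hub_slug from the discovery step.
browse_args = {"vertical_id": "free-tour", "hub_slug": "rome", "language": "en"}

# Step 3: fetch full details for shortlisted tours in one batched call.
detail_args = {"items": [
    {"type": "free_tour", "product_id": "12345", "language": "en"},
    {"type": "product", "product_id": "67890", "language": "en"},
]}

# Step 4: check availability for specific dates (per-item date range).
availability_args = {"items": [
    {"type": "free_tour", "product_id": "12345",
     "from_date": "2024-06-01", "to_date": "2024-06-03"},
]}
```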
Available Tools
4 tools

browse_category · Read-only
Browse tours and activities within a specific category for a destination on GuruWalk. Categories include free walking tours, food tours, bike tours, day trips, skip-the-line tickets, and more. Returns listings with ratings, verified review counts, duration, available languages, and pricing. Use the category IDs returned by discover_destination — never invent category IDs. Use this tool to dive deeper into a specific category that matches what the traveler has shared (e.g. food tours for a foodie, night tours for someone asking about evenings, kid-friendly activities for families).
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number | 1 |
| hub_slug | No | Hub slug of the destination (required when vertical_id is 'free-tour') | |
| language | No | Language code (en, es, de, it) | |
| vertical_id | Yes | The ID of the vertical/category to browse. Use 'free-tour' for the free tours category. | |
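The one documented parameter interaction — hub_slug being required when vertical_id is 'free-tour' — can be pre-checked client-side. A minimal sketch; the helper name is ours, not part of the connector:

```python
def validate_browse_args(args: dict) -> list[str]:
    """Pre-check the documented browse_category parameter rules."""
    errors = []
    if "vertical_id" not in args:
        errors.append("vertical_id is required")
    elif args["vertical_id"] == "free-tour" and not args.get("hub_slug"):
        # Documented interaction: free tours need the destination hub slug.
        errors.append("hub_slug is required when vertical_id is 'free-tour'")
    if args.get("language") not in (None, "en", "es", "de", "it"):
        errors.append("language must be one of en, es, de, it")
    return errors
```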
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Since no output schema exists, the description adds valuable behavioral context by detailing return contents ('listings with ratings, verified review counts, duration, available languages, and pricing'). It aligns with annotations (readOnlyHint=true) by using the safe verb 'Browse' and does not contradict safety flags.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently front-loaded: purpose first, then category examples, return contents, the prerequisite on category IDs, and usage guidance. Every sentence earns its place without redundancy or unnecessary verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of an output schema, the description adequately compensates by describing the return data structure and fields. It also references the prerequisite tool (discover_destination), providing sufficient context for a listing/browse operation with well-documented input parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the structured fields already document all four parameters adequately. The description references 'category IDs' (mapping to vertical_id) and 'destination' (mapping to hub_slug) but does not add syntax details, formats, or semantic constraints beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Browse') and resource ('tours and activities') with platform context ('GuruWalk'). It effectively distinguishes from siblings by referencing 'discover_destination' as the source for category IDs, establishing a clear workflow relationship between the tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit prerequisite guidance ('Use the category IDs returned by discover_destination'), establishing when to invoke this tool in the workflow sequence. However, it lacks explicit 'when-not-to-use' guidance or contrasts with other siblings like 'check_availability' or 'get_product_detail'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_availability · Read-only
Check real-time availability for one or more tours or activities on GuruWalk in a single call. Pass an items array — each entry is independent and has its own type, product_id and date range. Returns a results array where every entry echoes its type and product_id so you can match each response to its request. Always batch when checking multiple tours: send them all in one call instead of invoking this tool several times. For paid activities, shows rates by traveler type (adult, child, infant). For free walking tours, no upfront price — travelers pay what they want after the tour. Includes direct booking links. Maximum date range per item: 5 days. Per-item errors (invalid dates, product not found) are reported inside that item's result without failing the rest of the batch. Use this tool when the traveler asks about specific dates, wants to know if something is available, or is ready to book. When the traveler hasn't given specific dates, use the booking date ±2 days as the default search range.
| Name | Required | Description | Default |
|---|---|---|---|
| items | Yes | One or more tours/products to check (max 20 per call). Use a single entry for one tour, or many entries to batch multiple tours in the same call. | |
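The batching and date-range rules above are easy to enforce before calling the tool. A hedged sketch, assuming each item carries from_date/to_date fields in YYYY-MM-DD form:

```python
from datetime import date, timedelta

MAX_ITEMS = 20      # documented per-call batch limit
MAX_RANGE_DAYS = 5  # documented maximum date range per item

def default_range(booking_date: date) -> tuple[str, str]:
    """No dates from the traveler: use the booking date +/- 2 days."""
    return ((booking_date - timedelta(days=2)).isoformat(),
            (booking_date + timedelta(days=2)).isoformat())

def validate_items(items: list[dict]) -> None:
    """Reject a batch locally that the server would likely refuse."""
    if len(items) > MAX_ITEMS:
        raise ValueError(f"at most {MAX_ITEMS} items per call")
    for it in items:
        span = (date.fromisoformat(it["to_date"])
                - date.fromisoformat(it["from_date"])).days
        # Whether the 5-day limit counts the span or inclusive days
        # is an assumption; we check the span here.
        if span > MAX_RANGE_DAYS:
            raise ValueError(
                f"{it['product_id']}: date range exceeds {MAX_RANGE_DAYS} days")
```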
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Complements annotations (readOnlyHint, openWorldHint) by detailing the batched results structure, rates by traveler type, booking links, and business logic (pay-what-you-want vs fixed rates). The 'real-time' qualifier aligns with openWorldHint=true, though it doesn't explicitly address the idempotentHint=false property.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Dense but non-redundant: purpose and batching, return structure, paid vs. free pricing behavior, booking links, constraints, per-item error handling, and usage guidance. Information is front-loaded with the core action, and every subsequent sentence adds distinct value regarding output variations or limitations.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Thoroughly compensates for the missing output schema by documenting the return structure (per-item results, rates by traveler type, booking links) and business logic variations. Combined with 100% schema coverage and clear annotations, the description provides complete context for invocation despite moderate complexity (a single items parameter carrying nested per-item fields and type logic).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While schema coverage is 100% (baseline 3), the description adds critical constraint semantics not present in the schema: the 'Maximum date range per item: 5 days' rule governs the relationship between each item's from_date and to_date. It also contextualizes the per-item 'type' field's impact through paid vs. free tour explanations.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a precise action ('Check real-time availability') and specific resource ('tours or activities on GuruWalk'). It clearly distinguishes this lookup tool from the sibling browsing tools (discover_destination, browse_category) by targeting known product IDs and emphasizing booking readiness.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear operational constraints, including the 'Maximum date range per item: 5 days' limitation, and explicit when-to-use guidance (specific dates, availability questions, readiness to book). It does not name sibling tools as alternatives, though the product_id requirement implies prerequisite knowledge of the product.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_destination · Read-only
Search a city to explore free walking tours and paid activities on GuruWalk, the world's largest free walking tour platform. Returns destination info, tour categories (free tours, food tours, day trips, tickets, and more), and featured listings with ratings and verified traveler reviews. Covers 200+ cities worldwide. Free tours operate on a pay-what-you-want model. Supports English, Spanish, German, and Italian. Use this tool when you know the traveler's destination and the conversation has reached the point of recommending experiences. Do NOT call it just because a destination is mentioned — first understand what the traveler is looking for. If the traveler mentions a landmark instead of a city, infer the city (e.g. 'eiffel tower' → Paris, 'colosseum' → Rome, 'sagrada familia' → Barcelona, 'big ben' → London). After getting results, review the categories and featured_products to find the most relevant matches for what the traveler asked about.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number for featured products | 1 |
| text | No | Optional text filter. When provided, featured_products are narrowed to paid products whose name matches the text (MATCH AGAINST, all tokens must appear as prefix, tokens of <3 chars ignored), and free tours are automatically excluded (their legacy search doesn't support text filtering). Use when the traveler names a specific attraction ('Casa Batlló', 'Sagrada Familia') or a themed phrase ('food tour'). Leave empty for the full hub listing including free tours. | |
| end_date | No | Only show tours available until this date (YYYY-MM-DD). Use the traveler's booking date plus 2 days if no specific date was mentioned. | |
| language | No | Language code: en, es, de, it | en |
| start_date | No | Only show tours available from this date (YYYY-MM-DD). Use the traveler's booking date minus 2 days if no specific date was mentioned. | |
| destination | Yes | City name, e.g. 'Rome' | |
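The text parameter's matching rule (all tokens must appear as a prefix, tokens under 3 characters ignored) can be approximated locally to predict which featured products a filter will keep. A sketch only, not the server's actual MATCH AGAINST implementation:

```python
def matches_text_filter(product_name: str, text: str) -> bool:
    """Approximate the documented prefix-token match: every query
    token of 3+ characters must prefix-match some word in the name."""
    name_words = product_name.lower().split()
    tokens = [t for t in text.lower().split() if len(t) >= 3]
    return all(any(w.startswith(t) for w in name_words) for t in tokens)
```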
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable operational context not in the annotations, including the pay-what-you-want business model, coverage of 200+ cities, and detailed return value structure (destination info, categories, featured listings). It complements the readOnlyHint by clarifying what data is retrieved.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description efficiently packs multiple informative elements into several sentences: core functionality, return values, geographic scope, business model, and language support. Every sentence contributes essential context without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, the description adequately explains what the tool returns (destination info, tour categories, featured listings with ratings). Combined with comprehensive annotations and business context, it gives an agent sufficient information to understand the tool's capabilities and results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents all parameters including the language enum values and pagination. The description mentions language support and city searching, which aligns with the parameters, but does not add significant semantic meaning beyond what the schema provides, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly defines the tool as a city-based search for walking tours and activities on GuruWalk, using specific verbs ('Search', 'explore'). It distinguishes itself from siblings by focusing on destination discovery (searching a city) rather than browsing categories, checking availability, or getting specific product details.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description establishes this as an entry point for city exploration, it does not explicitly state when to use this versus the sibling tools (browse_category, check_availability, get_product_detail) or provide prerequisites. The usage context is implied but not contrasted with alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_product_detail · Read-only
Get the full detail of one or more tours or activities on GuruWalk in a single call. Pass an items array — each entry has its own type, product_id and language, and is processed independently. Returns a results array where every entry echoes its product_id and type so you can match each response to its request. Always batch when you need details for several tours (e.g. before recommending or comparing them): send them all in one call instead of invoking this tool several times. Each successful entry returns description, itinerary/highlights, images, reviews, pricing, duration, available languages, cancellation policies, guide name (free tours), and meeting point info. Meeting point shape differs by type: paid product returns the address text plus coordinates in where; free_tour returns ONLY a Google Maps URL in meeting_point_url (no text address) plus how_to_find_me to identify the guide. Per-item errors (product not found) are reported inside that item's result without failing the rest of the batch. Use this tool whenever the traveler asks what a tour covers, which places it visits, its itinerary, route, description, meeting point, duration, or any content-related question. Always call this tool BEFORE answering questions about a specific tour — never give generic opinions or advice without consulting the real data first.
| Name | Required | Description | Default |
|---|---|---|---|
| items | Yes | One or more tours/products to fetch details for (max 10 per call). Use a single entry for one tour, or many entries to batch multiple tours in the same call. | |
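Because every result echoes its type and product_id, responses can be re-paired with their requests even when some items fail. A sketch assuming failed entries carry an error field (the exact result shape is not documented):

```python
def pair_results(requested: list[dict], results: list[dict]) -> dict:
    """Match each echoed result back to its request by (type, product_id);
    per-item errors are surfaced without failing the batch."""
    by_key = {(r["type"], r["product_id"]): r for r in results}
    paired = {}
    for req in requested:
        key = (req["type"], req["product_id"])
        res = by_key.get(key)
        if res is None or "error" in res:
            # Per-item failure: report it, keep the rest of the batch.
            paired[key] = {"status": "error",
                           "detail": (res or {}).get("error", "missing result")}
        else:
            paired[key] = {"status": "ok", "detail": res}
    return paired
```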
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and openWorldHint=true. The description adds crucial behavioral context by enumerating return values (description, itinerary/highlights, images, reviews, pricing, duration, cancellation policies, guide name, and meeting point info). This compensates for the lack of output schema. It does not explain idempotentHint=false, but this is minor given the read-only nature.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Long but well structured: purpose and batching, result matching, return-field enumeration, type-specific meeting-point shapes, per-item error handling, and usage guidance. No filler. Information is front-loaded with the core action in the first sentence, and every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter read-only tool (a batched items array) with 100% schema coverage, the description is complete. It compensates for the missing output schema by explicitly listing all returned data fields. Annotations cover the safety profile (readOnlyHint, destructiveHint). No gaps requiring additional explanation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage with clear enum mappings ('product' for paid activities, 'free_tour' for free walking tours). The description adds semantics beyond the schema: it explains how the meeting-point shape differs by type ('product' returns an address plus coordinates; 'free_tour' returns only a Google Maps URL plus how_to_find_me). Baseline 3 appropriate given complete schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses the specific verb 'Get' with the resource 'the full detail of one or more tours or activities' and explicitly scopes to GuruWalk listings. It clearly distinguishes itself from sibling tools browse_category (browsing), check_availability (dates/times), and discover_destination (exploration) by focusing on specific product content retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use guidance: 'Use this tool whenever the traveler asks what a tour covers, which places it visits, its itinerary, route, description, meeting point, duration, or any content-related question.' The specific examples (itinerary, route, meeting point) precisely delineate this tool's domain from availability checks or category browsing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
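Generating and sanity-checking the claim file before publishing can be done in a few lines. A sketch; the field names follow the structure shown above, and the verification itself is performed by Glama:

```python
import json

def build_glama_manifest(email: str) -> str:
    """Serialize the /.well-known/glama.json claim file."""
    manifest = {
        "$schema": "https://glama.ai/mcp/schemas/connector.json",
        "maintainers": [{"email": email}],
    }
    return json.dumps(manifest, indent=2)

def check_manifest(raw: str, expected_email: str) -> bool:
    """The claim only verifies if a maintainer email matches your
    Glama account; check that before deploying the file."""
    data = json.loads(raw)
    return any(m.get("email") == expected_email
               for m in data.get("maintainers", []))
```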
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!