pab1it0

Tripadvisor MCP

get_location_details

Retrieve comprehensive data about a specific Tripadvisor location, including details, reviews, and photos, to support travel planning and research.

Instructions

Get detailed information about a specific location

Input Schema

Name        Required  Description  Default
locationId  Yes       -            -
language    No        -            en
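The @mcp.tool decorator infers this input schema from the Python signature. A rough sketch of the JSON Schema it would produce (the exact shape depends on the MCP framework version, so treat this as an assumption):

```python
# Approximate JSON Schema inferred from the signature
# locationId: Union[str, int] (required), language: str = "en".
# Exact output varies by framework; this is a sketch, not the real artifact.
inferred_schema = {
    "type": "object",
    "properties": {
        "locationId": {"anyOf": [{"type": "string"}, {"type": "integer"}]},
        "language": {"type": "string", "default": "en"},
    },
    "required": ["locationId"],
}
```

Note that neither property carries a description, which is what the review below flags as 0% schema description coverage.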

Implementation Reference

  • The handler function implementing the 'get_location_details' tool logic. It makes an API request to Tripadvisor for location details using the make_api_request helper. The @mcp.tool decorator handles registration and schema inference from the signature and docstring.
    from typing import Any, Dict, Union

    @mcp.tool(description="Get detailed information about a specific location")
    async def get_location_details(
        locationId: Union[str, int],
        language: str = "en",
    ) -> Dict[str, Any]:
        """
        Get detailed information about a specific location (hotel, restaurant, or attraction).
        
        Parameters:
        - locationId: Tripadvisor location ID (can be string or integer)
        - language: Language code (default: 'en')
        """
        params = {
            "language": language,
        }
        
        # Convert locationId to string to ensure compatibility
        location_id_str = str(locationId)
        
        return await make_api_request(f"location/{location_id_str}/details", params)
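The make_api_request helper is defined elsewhere in the server. A minimal sketch of the URL it would build, assuming the public Tripadvisor Content API v1 base URL and key parameter (the real helper also performs the async HTTP GET and error handling):

```python
from urllib.parse import urlencode

# Assumed base URL of the Tripadvisor Content API v1 (an assumption,
# taken from Tripadvisor's public API docs, not from this server's source).
BASE_URL = "https://api.content.tripadvisor.com/api/v1"

def build_request_url(endpoint: str, params: dict, api_key: str) -> str:
    """Build the full GET URL for an endpoint such as 'location/12345/details'."""
    query = urlencode({"key": api_key, **params})
    return f"{BASE_URL}/{endpoint}?{query}"

print(build_request_url("location/12345/details", {"language": "en"}, "API_KEY"))
```

For get_location_details(locationId=12345), this resolves to location/12345/details with the language query parameter appended.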
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It only states what the tool does ('Get detailed information') without describing traits like whether it's read-only, requires authentication, has rate limits, or what the output format might be. This leaves critical behavioral aspects unspecified for a tool with no structured annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence with no wasted words. It is appropriately sized and front-loaded, directly stating the tool's purpose without unnecessary elaboration, making it efficient for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (2 parameters, no output schema, no annotations), the description is incomplete. It lacks details on parameter usage, behavioral traits, output expectations, and differentiation from siblings. For a tool with no structured support, this minimal description does not provide enough context for an agent to use it effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the tool description must compensate for undocumented parameters. It mentions 'a specific location' but never explains the parameters themselves: what a locationId represents, where an agent obtains one, or how language affects the response. It therefore adds no meaningful context beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool's purpose ('Get detailed information about a specific location'), which is clear but vague. It specifies the verb ('Get') and resource ('location'), but does not distinguish what 'detailed information' entails or how it differs from sibling tools like get_location_photos or get_location_reviews. This leaves the scope ambiguous compared to alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus siblings. It does not mention alternatives, prerequisites, or exclusions, such as whether it should be used for basic info versus photos or reviews. Without such context, an agent might struggle to select the correct tool among the provided options.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
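Applying the review's feedback, a hypothetical revised description (the wording is illustrative, not taken from the server's source) might read:

```python
# Hypothetical revision addressing the gaps the review identifies:
# behavior disclosure, parameter semantics, and sibling differentiation.
REVISED_DESCRIPTION = (
    "Get detailed information (name, address, rating, ranking, amenities) "
    "about one location by its Tripadvisor locationId. Read-only; requires "
    "a Tripadvisor API key and is subject to its rate limits. Set language "
    "(default 'en') to localize text fields. For photos or reviews, use "
    "get_location_photos or get_location_reviews instead."
)
```

A description in this shape would answer the Behavior, Parameters, and Usage Guidelines criticisms in one or two sentences each while staying concise.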
