Glama › mirodn › mcp-server-public-transport

no_search_places

Find public transport stops, addresses, and points of interest in Norway using autocomplete search via the Entur Geocoder API.

Instructions

Autocomplete search across stops/addresses/POIs in Norway via Entur Geocoder.

Input Schema

Name  Required  Description                                    Default
text  Yes       Free-text query (e.g. 'Oslo S').               -
lang  No        Language hint ('en', 'no', 'nb', 'nn', etc.).  en
size  No        Maximum number of results.                     10
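As a sketch, the three input parameters map directly onto the Entur Geocoder /autocomplete query string. The endpoint URL below is assumed from Entur's public documentation, not taken from this server's code (which keeps it in `NO_GEOCODER_AUTOCOMPLETE_URL`):

```python
from urllib.parse import urlencode

# Assumed endpoint for the Entur Geocoder autocomplete service.
BASE = "https://api.entur.io/geocoder/v1/autocomplete"

def build_query(text: str, lang: str = "en", size: int = 10) -> str:
    """Build the autocomplete URL from the tool's three input parameters,
    applying the same defaults as the handler."""
    params = {"text": text.strip(), "lang": lang or "en", "size": int(size or 10)}
    return f"{BASE}?{urlencode(params)}"

print(build_query("Oslo S"))
# https://api.entur.io/geocoder/v1/autocomplete?text=Oslo+S&lang=en&size=10
```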

Output Schema

No output schema is defined; the tool returns the raw Entur Geocoder /autocomplete JSON.

Implementation Reference

  • The main handler function for the 'no_search_places' tool. It performs an autocomplete search for places in Norway using the Entur Geocoder API via GET request, includes input validation, logging, retry logic for errors/timeouts, and returns the raw JSON response.
    async def no_search_places(text: str, lang: str | None = "en", size: int | None = 10) -> dict[str, object]:
        """
        Args:
            text: Free-text query (e.g., 'Oslo S', 'Bergen busstasjon').
            lang: Language hint ('en', 'no', 'nb', 'nn', etc.). Default: 'en'.
            size: Max number of results. Default: 10.
        Returns:
            Raw Entur Geocoder /autocomplete JSON.
        """
        if not text or not text.strip():
            raise ValueError("Parameter 'text' must not be empty.")
    
        params = {"text": text.strip(), "lang": (lang or "en"), "size": int(size or 10)}
        logger.info("🇳🇴 Entur geocoder autocomplete: %r", params)
    
        # We'll do our own GET with retry/backoff, independent from any project helper.
        tries = 3
        for attempt in range(1, tries + 1):
            try:
                async with aiohttp.ClientSession(timeout=_make_timeout()) as session:
                    async with session.get(
                        NO_GEOCODER_AUTOCOMPLETE_URL,
                        params=params,
                        headers={"ET-Client-Name": NO_CLIENT_NAME, "Accept": "application/json"},
                        timeout=_make_timeout(),
                    ) as resp:
                        if resp.status == 429 or resp.status >= 500:
                            if attempt < tries:
                                await asyncio.sleep(0.5 * (2 ** (attempt - 1)))
                                continue
                            text_body = await resp.text()
                            raise TransportAPIError(f"Entur Geocoder HTTP {resp.status}: {text_body}")
    
                        if resp.status >= 400:
                            text_body = await resp.text()
                            raise TransportAPIError(f"Entur Geocoder HTTP {resp.status}: {text_body}")
    
                        return await resp.json()
            except (asyncio.TimeoutError, aiohttp.ServerTimeoutError) as e:
                if attempt < tries:
                    await asyncio.sleep(0.5 * (2 ** (attempt - 1)))
                    continue
                raise TransportAPIError(f"Entur Geocoder timeout after {tries} attempt(s): {e}") from e
    
        raise TransportAPIError("Entur Geocoder: exhausted retries without response")
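The retry schedule in the handler is easy to read off: with `tries = 3` and a 0.5 s base, the sleep before each retry follows `0.5 * 2 ** (attempt - 1)`. A small sketch of that schedule:

```python
def backoff_delays(tries: int = 3, base: float = 0.5) -> list[float]:
    """Sleep durations before retries (attempts 2..tries), mirroring the
    base * 2**(attempt - 1) formula used in the handler above."""
    return [base * (2 ** (attempt - 1)) for attempt in range(1, tries)]

print(backoff_delays())  # [0.5, 1.0]
```

So a request that hits two 429/5xx responses or timeouts waits 0.5 s, then 1.0 s, before the third and final attempt raises `TransportAPIError`.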
  • tools/no.py:127-130 (registration)
    The @mcp.tool decorator that registers the 'no_search_places' tool with the MCP server, specifying its name and description.
    @mcp.tool(
        name="no_search_places",
        description="Autocomplete search across stops/addresses/POIs in Norway via Entur Geocoder."
    )
  • tools/no.py:316-316 (registration)
    The register_no_tools function returns a list of registered tool names, including 'no_search_places'.
    return ["no_search_places", "no_stop_departures", "no_trip", "no_nearest_stops"]
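For context, a hypothetical MCP `tools/call` request invoking the registered tool might look like the following. The argument values are illustrative, not taken from the server's tests:

```python
import json

# Hypothetical JSON-RPC payload an MCP client could send for this tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "no_search_places",
        "arguments": {"text": "Bergen busstasjon", "lang": "no", "size": 5},
    },
}
print(json.dumps(request, indent=2))
```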
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'Autocomplete search' but doesn't specify if this is read-only, has rate limits, requires authentication, or details the output format. For a search tool with zero annotation coverage, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no wasted words. It front-loads the key action and resource, making it easy to parse quickly, which is ideal for conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has an output schema, the description doesn't need to explain return values. However, with 3 parameters, 0% schema coverage, and no annotations, the description is incomplete—it lacks parameter details and behavioral context, making it only minimally adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate for undocumented parameters. It doesn't explain any parameters (text, lang, size), such as what 'text' should contain, language options for 'lang', or the meaning of 'size'. This fails to add meaning beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
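One way to close the gap the review describes is to put per-parameter descriptions in the input schema itself. A purely illustrative sketch (not the server's actual schema), with values drawn from the handler's docstring:

```python
# Illustrative only: what documented input-schema properties could look like.
input_schema = {
    "type": "object",
    "required": ["text"],
    "properties": {
        "text": {"type": "string",
                 "description": "Free-text query, e.g. 'Oslo S' or 'Bergen busstasjon'."},
        "lang": {"type": "string", "default": "en",
                 "description": "Language hint: 'en', 'no', 'nb', 'nn', etc."},
        "size": {"type": "integer", "default": 10,
                 "description": "Maximum number of results to return."},
    },
}

# Every property now carries a description, i.e. 100% schema coverage.
documented = [n for n, p in input_schema["properties"].items() if "description" in p]
print(documented)  # ['text', 'lang', 'size']
```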

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Autocomplete search') and target resources ('stops/addresses/POIs in Norway via Entur Geocoder'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'no_nearest_stops' or 'no_stop_departures', which might also involve Norwegian stops, so it misses full sibling distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention scenarios like real-time vs. static data, geocoding vs. trip planning, or comparisons to siblings such as 'be_search_connections' or 'ch_search_stations', leaving the agent with no usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
