
Server Details

Location & routing intelligence for AI agents — geocoding, truck routing, traffic, weather, and place search.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.1/5 across 11 of 11 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose: geocoding (single and batch), reverse geocoding, directions, place search (category-specific vs exploration), isochrones, traffic, weather, quota checking, and API key management. No overlapping functionality.

Naming Consistency: 5/5

All tool names follow a consistent lowercase_snake_case pattern (e.g., batch_geocode, reverse_geocode, search_places). The naming is predictable and readable.

Tool Count: 5/5

11 tools cover a comprehensive set of geolocation services without being excessive. Each tool serves a well-defined need, and the count feels appropriate for the server's scope.

Completeness: 4/5

The tool surface covers core geolocation operations (geocoding, routing, search, isochrones, traffic, weather, quota) and includes batch and key management. Minor gaps exist, such as place details lookup or matrix routing, but the set is largely complete for typical use cases.

Available Tools

11 tools
batch_geocode: A
Read-only, Idempotent

Geocode multiple addresses in one request with structured per-record results. Use for bulk operations instead of repeated single geocode calls. Max 50 per batch.

Parameters (JSON Schema):
- addresses (required): Array of addresses to geocode
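
A minimal sketch of the arguments an agent might pass (the parameter name comes from the schema above; the addresses themselves are illustrative):

```python
# Hypothetical arguments for the `batch_geocode` tool.
# `addresses` is the schema's only parameter; the 50-address cap
# comes from the tool description.
addresses = [
    "1600 Pennsylvania Ave NW, Washington, DC",
    "350 Fifth Ave, New York, NY",
]
assert len(addresses) <= 50, "max 50 addresses per batch"
arguments = {"addresses": addresses}
```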

Output Schema (JSON Schema):
- items (optional)
- summary (optional): Map of scalar facts the LLM should surface verbatim
- display_hint (optional)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses max batch size of 50, but omits other behavioral traits like error handling, response format, or rate limits; no annotations to supplement.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise, front-loaded sentences with no redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequately covers purpose, usage guidance, and a key constraint for a simple tool with one parameter, though missing output format details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Description does not add meaning beyond the input schema's description of 'addresses'; schema coverage is 100%, so baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool geocodes multiple addresses in one request with structured per-record results, distinguishing it from the single geocode sibling.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly advises using for bulk operations instead of repeated single calls, naming the alternative 'geocode', but lacks further contextual exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

directions: A
Read-only, Idempotent

Generate routes, ETAs, and turn-by-turn directions between locations. Supports car / truck / motorcycle / pedestrian / bicycle, with hazmat + dimension + toll avoidance for commercial routing. ETAs are returned as ISO 8601 with timezone offset (in the destination's local timezone). Use vehicle_profile presets (DRY_VAN_53, FLATBED_48, TANKER, etc.) when routing trucks — they set height/weight/length in one parameter.

Parameters (JSON Schema):
- from (required): Origin address or 'lat,lon'
- to (required): Destination address or 'lat,lon'
- via (optional): Intermediate stops in order
- units (optional): Distance units (default: miles)
- hazmat (optional): Set true when transporting hazardous materials to avoid restricted routes
- costing (optional): Transport mode (default: auto). Use 'truck' for commercial vehicles
- axle_load (optional): Axle weight in metric tons (e.g. 9.07t = 20,000 lbs per axle)
- top_speed (optional): Maximum speed in km/h (default 105 for trucks)
- use_ferry (optional): Ferry preference 0-1 (0 = avoid, 1 = allow)
- axle_count (optional): Number of axles (default 5 for semi-trailer)
- avoid_tolls (optional): Avoid toll roads when the user requests toll-free routing
- truck_width (optional): Truck width in meters (e.g. 2.6m = 8'6")
- truck_height (optional): Truck height in meters (e.g. 4.11m = 13'6"). Triggers bridge avoidance
- truck_length (optional): Truck length in meters (e.g. 16.2m = 53')
- truck_weight (optional): Truck weight in metric tons (e.g. 36.3t = 80,000 lbs). Triggers weight-restricted road avoidance
- use_highways (optional): Highway preference 0-1 (0 = avoid, 1 = prefer)
- include_traffic (optional): Include live traffic conditions along the route (default: true)
- vehicle_profile (optional): Preset vehicle: DRY_VAN_53, FLATBED_48, TANKER, BOX_TRUCK_26, SPRINTER_VAN, DOUBLE_TRAILER, OVERSIZE. Sets height/weight/length automatically. Individual params override.
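
A minimal sketch of a commercial routing request under the schema above (the cities and values are illustrative, not from this page):

```python
# Hypothetical arguments for the `directions` tool. A vehicle_profile
# preset fills in truck dimensions; an explicit dimension overrides it.
arguments = {
    "from": "Chicago, IL",
    "to": "Dallas, TX",
    "costing": "truck",               # 'truck' for commercial vehicles
    "vehicle_profile": "DRY_VAN_53",  # preset sets height/weight/length
    "truck_height": 4.27,             # explicit value overrides the preset
    "avoid_tolls": True,
}
```

Per the description, the explicit 4.27 m height here would take precedence over whatever DRY_VAN_53 presets.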

Output Schema (JSON Schema):
- items (optional)
- geojson (optional)
- metrics (optional)
- summary (optional): Map of scalar facts the LLM should surface verbatim
- display_hint (optional)
- costing_options (optional)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries burden. Discloses ETA format (ISO 8601 with timezone offset) and vehicle_profile behavior. Lacks details on rate limits, authentication, or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences, front-loaded with key action and purpose. No redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complex tool with 18 parameters and no output schema. Description lacks response structure, error handling, or usage limits. Missing details needed for full invocation understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with parameter descriptions. Tool description adds value by explaining vehicle_profile presets and overriding behavior, but does not significantly augment most parameter semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it generates routes, ETAs, and turn-by-turn directions. Lists transport modes and commercial routing features. Distinct from siblings like geocode or isochrone.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Does not explicitly state when to use it versus alternatives, though context implies routing. Mentions vehicle_profile presets for trucks, but gives no exclusions or when-not-to-use guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

explore: A
Read-only, Idempotent

BROWSING / DISCOVERY search — cities, neighbourhoods, or mixed venues near a location. Use this when the user is exploring a REGION rather than looking for a specific category. Supports population filtering ('cities > 100k'), distance/population sorting, and layer filtering (locality / neighbourhood / venue / address / street). For specific POI categories (gas, food, charging, etc.), use search_places instead.

Parameters (JSON Schema):
- location (optional): Center point address or 'lat,lon'
- lat (optional): Latitude of center point
- lon (optional): Longitude of center point
- radius (optional): Search radius with unit, e.g. '50km', '30mi' (default: 150km)
- size (optional): Max results (default 10, max 50)
- sort (optional): Sort mode (default: combined)
- layers (optional): Comma-separated: venue, address, street, locality, neighbourhood (default: locality)
- min_population (optional): Minimum population filter for locality results
- boundary_country (optional): ISO country code to restrict results
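
A minimal sketch of a region-discovery call under the schema above (the location is illustrative, and the 'population' sort value is an assumption inferred from the description's "distance/population sorting"):

```python
# Hypothetical arguments for the `explore` tool: sizable cities near Denver.
arguments = {
    "location": "Denver, CO",
    "radius": "100mi",          # radius with unit, per the schema
    "layers": "locality",       # cities/towns only
    "min_population": 100000,   # the description's "cities > 100k"
    "sort": "population",       # assumed sort mode, see note above
    "size": 10,
}
```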

Output Schema (JSON Schema):
- items (optional)
- geojson (optional)
- summary (optional): Map of scalar facts the LLM should surface verbatim
- display_hint (optional)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the burden. It discloses supported features (population filtering, sorting, layer filtering) and default values (size=10, sort=combined, layers=locality, radius=150km). It does not mention side effects or auth needs, but as a read-only search tool the behavior is adequately transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single paragraph of three sentences, each providing essential information: purpose, when-to-use, and supported features. It front-loads the key concept and includes no redundant or extraneous text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 9 parameters and no output schema, the description is complete enough: it states the type of search, when to use, supported features, defaults, and explicitly references the sibling tool for specific POIs. It covers the essential context for an agent to invoke the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds value by explaining the overall purpose of parameters like min_population, sort, and layers in context (e.g., 'Supports population filtering...'), which helps the agent understand parameter usage beyond the schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description explicitly states it's for browsing/discovery of cities, neighbourhoods, or mixed venues near a location, using specific verbs like 'search' and 'exploring'. It clearly distinguishes itself from sibling tool search_places by specifying the latter is for specific POI categories.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use guidance: 'Use this when the user is exploring a REGION rather than looking for a specific category.' It also gives a direct alternative: 'For specific POI categories... use search_places instead.'

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

geocode: A
Read-only, Idempotent

Convert an address, place name, street, or intersection into coordinates and structured location results. Use when input is text and you need coordinates before routing, weather, or search. Supports street-level resolution and proximity biasing.

Parameters (JSON Schema):
- query (required): Address, place name, street, intersection (e.g. 'Broadway and 42nd Street New York'), or free-form location text
- size (optional): Number of candidates to return (default 5)
- layers (optional): Restrict to: address, street, venue, locality. Use layers=street when the user clearly wants a street entity
- focus_lat (optional): Bias results near this latitude; use when the user says 'near me' or 'close to'
- focus_lon (optional): Bias results near this longitude
- boundary_country (optional): ISO country code to reduce ambiguity
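
A minimal sketch of a forward-geocoding call, reusing the schema's own intersection example with an assumed proximity bias near Times Square:

```python
# Hypothetical arguments for the `geocode` tool. The query string is the
# schema's example; the focus coordinates are illustrative.
arguments = {
    "query": "Broadway and 42nd Street New York",
    "size": 5,                  # default number of candidates
    "focus_lat": 40.7580,       # bias results toward this point
    "focus_lon": -73.9855,
    "boundary_country": "US",   # ISO code to reduce ambiguity
}
```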

Output Schema (JSON Schema):
- items (optional)
- geojson (optional)
- summary (optional): Map of scalar facts the LLM should surface verbatim
- display_hint (optional)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Adds behavioral details: 'Supports street-level resolution and proximity biasing.' Does not cover rate limits, authentication, or response structure beyond implied 'coordinates and structured location results.'

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero filler. Purpose first, usage second, capabilities third. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, but description gives general sense of return type. For a geocoding tool with 6 well-documented parameters and clear intent, this is mostly complete. Could optionally mention supported coordinate systems or result format details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. Description mentions proximity biasing and street-level resolution, which map to existing schema parameters (focus_lat/focus_lon and layers). Adds minimal extra meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb 'Convert' with specific resources: addresses, place names, streets, intersections. Distinguishes from sibling 'reverse_geocode' (which does the opposite) and 'search_places' (likely different scope).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use: 'when input is text and you need coordinates before routing, weather, or search.' However, it does not mention when not to use or name specific sibling alternatives like 'batch_geocode' or 'reverse_geocode'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

isochrone: A
Read-only, Idempotent

Generate travel-time or travel-distance reachability polygons from an origin. Pass MULTIPLE bands in one call — e.g. contours_minutes:[10,20,30] returns three nested polygons in a single response (one round-trip, not three). Use for service coverage, dispatch range, territory design, 'how far can I get in X minutes' questions, and concentric zone visualizations. Output is GeoJSON ready for Mapbox / Leaflet.

Parameters (JSON Schema):
- location (required): Center point address or 'lat,lon'
- costing (optional): Transport mode
- contours_minutes (optional): Time bands in minutes, e.g. [10, 20, 30]
- contours_km (optional): Distance bands in km
- truck_height (optional): Truck height in meters
- truck_weight (optional): Truck weight in metric tons
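
The multi-band pattern the description emphasizes (several contours in one call) can be sketched as follows (the center point is illustrative):

```python
# Hypothetical arguments for the `isochrone` tool: three nested
# travel-time polygons from downtown Chicago in a single round-trip.
arguments = {
    "location": "41.8781,-87.6298",    # 'lat,lon' center
    "costing": "auto",
    "contours_minutes": [10, 20, 30],  # one call, three nested bands
}
```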

Output Schema (JSON Schema):
- geojson (optional)
- summary (optional): Map of scalar facts the LLM should surface verbatim
- display_hint (optional)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the full burden. It fails to disclose behavioral traits such as whether the tool is read-only, destructive, or has data limits. It only states output format (GeoJSON).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is four concise sentences, front-loaded with the core purpose, and efficiently conveys the key usage pattern and output format without wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking output schema, the description mentions GeoJSON output, which is helpful. It covers the main use cases and multi-band feature. Missing details on error handling or limits, but adequate for a relatively simple tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, but the description adds value by explaining the multi-band parameter usage ('contours_minutes:[10,20,30]') and the benefit of one round-trip. This provides meaning beyond the schema's basic descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool generates travel-time or travel-distance reachability polygons from an origin. It uses specific verbs ('Generate', 'returns') and identifies the resource ('reachability polygons'), distinguishing it from sibling tools like 'directions' (route) and 'geocode' (address conversion).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage context: 'Use for service coverage, dispatch range, territory design...' and gives a concrete multi-band example. However, it does not mention when not to use this tool or differentiate from siblings like 'explore' or 'traffic'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

issue_api_key: A

Mint a fresh API key for your current authenticated user/tenant. Useful for CLI workflows, key rotation, or MCP clients that hide the configured Bearer. The new key is tied to your existing plan. Counts as 1 query against your daily quota.

Parameters (JSON Schema): none

Output Schema (JSON Schema):
- status (required)
- api_key (optional): One-time API key secret. Returned only on successful creation. Treat as a credential; never log, echo, or render in text channels.
- key_id (optional): Non-secret fingerprint of the issued key. Safe to log and surface in UI.
- credential_returned_once (optional)
- note (optional)
- plan (optional)
- user (optional)
- usage (optional)
- error (optional)
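
Since the output schema marks api_key as a one-time secret and key_id as safe to log, handling might be sketched like this (field names come from the schema above; every value is made up):

```python
# Hypothetical `issue_api_key` response, values invented for illustration.
response = {
    "status": "created",
    "key_id": "fp_3f9a",           # non-secret fingerprint, safe to log
    "api_key": "sk-EXAMPLE-ONLY",  # one-time secret: never log or echo
    "credential_returned_once": True,
}
# Keep only non-secret fields for logs and UI.
loggable = {k: v for k, v in response.items() if k != "api_key"}
```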
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so description carries full burden. Discloses write operation ('Mint'), auth scope ('for your current authenticated user/tenant'), and side effects ('counts as 1 query'). Could mention key invalidation behavior, but covers core traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, each purposeful. First sentence states core action, second provides use cases, third clarifies constraints. No filler or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Completeness is high given zero parameters and no output schema. Covers purpose, usage, behavioral traits, and quota impact. Lacks explicit return value (API key string) but context implies key is returned.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has no parameters, so there is nothing for the description to add. With 100% schema coverage the baseline is 3, but since no parameter specifics are possible here, a 4 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Mint' and the resource 'API key' with scope ('current authenticated user/tenant'). Distinguishes from sibling tools as no other tool issues API keys.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit use cases ('CLI workflows, key rotation, MCP clients') and practical constraints ('tied to your existing plan', 'counts as 1 query against your daily quota'). Lacks explicit when-not-to-use, but sibling environment is distinct.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

quota: A
Read-only

Check current usage, remaining limits, plan, and quota breakdown for the caller. FREE TO CALL — never counts against your quota, never blocked by it. Use this proactively when the user asks about usage or seems near limits.

Parameters (JSON Schema): none

Output Schema (JSON Schema):
- mcp (required)
- upstream (optional)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It discloses that the call is free, never counts against quota, and is never blocked. Does not mention authentication requirements, but that may be implicit.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with the core action, then usage guidance. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given zero parameters, no output schema, and simple functionality, the description fully informs the agent about purpose and when to invoke.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has zero parameters, so the baseline is 4. Description adds no param info, but none is needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states verb 'check' and resource 'current usage, remaining limits, plan, and quota breakdown', specifically identifying the caller's context. No sibling tool shares a similar purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit proactive usage guidance: 'when the user asks about usage or seems near limits'. Also notes it's free and not blocked, but does not specify when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

reverse_geocode: A
Read-only, Idempotent

Convert coordinates into the nearest address, street, or place. Use when starting from GPS coordinates or a map position.

Parameters (JSON Schema):
- lat (required): Latitude
- lon (required): Longitude
- size (optional): Number of nearby candidates (default 3)
- layers (optional): Restrict to: address, street, venue, locality
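
A minimal sketch under the schema above (the coordinates are illustrative):

```python
# Hypothetical arguments for the `reverse_geocode` tool: nearest
# addresses to a GPS fix in San Francisco.
arguments = {
    "lat": 37.7749,
    "lon": -122.4194,
    "size": 3,            # default number of nearby candidates
    "layers": "address",  # restrict results to addresses
}
```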

Output Schema (JSON Schema):
- items (optional)
- geojson (optional)
- summary (optional): Map of scalar facts the LLM should surface verbatim
- display_hint (optional)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description should cover behavioral aspects. It only states the basic action without disclosing side effects, authentication needs, rate limits, or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no unnecessary words, efficiently conveying the core purpose and usage context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Description does not explain the return format or behavior for edge cases (e.g., invalid coordinates). Given no output schema, more detail would be beneficial.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the description adds little beyond the schema. It mentions 'nearest' but the schema already includes size and layers for that purpose.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Convert' and the resource 'coordinates into the nearest address, street, or place', distinguishing it from sibling tools like geocode which do the reverse.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use: 'Use when starting from GPS coordinates or a map position.' It does not mention when not to use or provide explicit alternatives, but the context is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_places: A
Read-only, Idempotent

CATEGORY-specific POI search near a point — gas stations, truck stops, restaurants, charging stations, etc. Use this when the user has a specific TYPE of place in mind (food / health / retail / fuel / accommodation / nightlife / transport / government / recreation). For broader DISCOVERY (e.g. 'cities within 50 miles' or 'venues by population'), use explore instead.

Parameters (JSON Schema):
- query (optional): Free-text place query such as 'truck stop', 'restaurant', 'charging station'
- center (optional): Center point for nearby search
- radius_m (optional): Search radius in meters (default 1000)
- categories (optional): Structured categories: food, education, health, entertainment, retail, accommodation, nightlife, transport, government, recreation
- layers (optional): Restrict to: venue, address
- limit (optional): Max results (default 10, max 50)
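
A minimal sketch of a category search under the schema above (the center point is illustrative), as opposed to a region-level explore call:

```python
# Hypothetical arguments for the `search_places` tool: truck stops
# within 5 km of a point near Oklahoma City.
arguments = {
    "query": "truck stop",
    "center": "35.4676,-97.5164",  # 'lat,lon' center for nearby search
    "radius_m": 5000,
    "categories": "transport",     # structured category from the schema
    "limit": 10,
}
```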

Output Schema (JSON Schema):
- items (optional)
- geojson (optional)
- summary (optional): Map of scalar facts the LLM should surface verbatim
- display_hint (optional)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries burden. It describes the search behavior but does not disclose details like result ordering, fuzzy matching, or what happens with no results. Adequate but not comprehensive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences, no wasted words, front-loaded with core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given moderate complexity and no output schema, the description covers purpose and guidance well. Missing details on result format, but overall sufficient for agent decision-making.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description adds example queries and categories but does not add significant meaning beyond the schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it performs CATEGORY-specific POI search near a point, lists example categories, and explicitly contrasts with sibling tool 'explore' for broader discovery.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly tells when to use this tool (user has a specific type of place in mind) and when to use the alternative 'explore' (broader discovery).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

traffic (A)
Read-only

Retrieve live traffic conditions, congestion, and speed for a location. Use when traffic is needed independently of routing — for corridor monitoring, area congestion, or incident checks. COVERAGE: live data for ~30 major US metros; returns degraded or empty values outside these areas. For rural coordinates, qualify the response (e.g. 'no live traffic coverage here — showing free-flow speeds').

Parameters (JSON Schema)

lat (required): Latitude
lon (required): Longitude
units (optional): Speed units — auto-detected from location (mph in US/UK, km/h elsewhere). Override if needed.

Output Schema

Parameters (JSON Schema)

items (optional)
geojson (optional)
summary (optional): Map of scalar facts the LLM should surface verbatim
display_hint (optional)
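The coverage caveat in the description can be honored in client code. This is a hedged sketch, assuming the result carries a summary map; the live, current_speed, and congestion keys are hypothetical field names chosen for illustration, not the server's documented output.

```python
def summarize_traffic(result, units="mph"):
    """Turn a traffic result into a one-line summary, qualifying it
    when the point has no live coverage (e.g. rural coordinates).

    Hypothetical sketch: the 'live', 'current_speed', and 'congestion'
    keys are assumed field names, not the server's actual schema.
    """
    summary = result.get("summary", {})
    speed = summary.get("current_speed")
    # Outside the ~30 covered US metros the server may return degraded
    # or empty values, so qualify the answer instead of reporting stale data.
    if not summary.get("live") or speed is None:
        return "no live traffic coverage here — showing free-flow speeds"
    congestion = summary.get("congestion", "unknown")
    return f"live traffic: {speed} {units}, congestion {congestion}"
```

The point of the guard is exactly what the description asks for: when the data is degraded or empty, the agent surfaces a qualified answer rather than a misleading speed.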
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses live data coverage limited to ~30 major US metros, returning degraded/empty outside, and suggests qualifying responses for rural areas. This adds meaningful behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, front-loaded with purpose, followed by usage context and coverage note. No wasted words, each sentence adds distinct value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema is provided, and the description does not mention what the tool returns (e.g., format, fields). For a data retrieval tool, this is a notable gap, though the rest of the description is fairly complete given the complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description mentions units auto-detection and override capability, adding minor value beyond the schema, but does not elaborate on lat/lon or other parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it retrieves live traffic conditions, congestion, and speed for a location, with a specific verb and resource. It distinguishes itself from routing tools like 'directions' by emphasizing independent traffic data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use: when traffic is needed independently of routing for corridor monitoring, area congestion, or incident checks. Also provides coverage limitations and guidance for rural coordinates, though no direct alternatives are named.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

weather (A)
Read-only

Get current and forecast weather for a location, including severe weather alerts and minute-by-minute precipitation. Use for destination conditions, travel planning, or route risk assessment.

Parameters (JSON Schema)

lat (optional): Latitude
lon (optional): Longitude
units (optional): Temperature/wind units (default: imperial)
location (optional): Place name or address (will be geocoded)
forecast_days (optional): Forecast days 1-16 (default 5)
include_alerts (optional): Include severe weather alerts and warnings (default: true)
include_forecast (optional): Include multi-day hourly forecast
include_minutely (optional): Include minute-by-minute precipitation for next 60 min

Output Schema

Parameters (JSON Schema)

alerts (optional)
metrics (optional)
summary (optional): Map of scalar facts the LLM should surface verbatim
display_hint (optional)
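The eight parameters above interact: lat/lon and location are each optional in the schema, but a useful call needs one or the other. A minimal sketch of a client-side argument builder under that assumption; the helper itself, the location-or-coordinates guard, and the include_forecast default are illustrative, while the forecast_days range (1-16, default 5), the imperial units default, and include_alerts defaulting to true come from the schema.

```python
def build_weather_args(location=None, lat=None, lon=None,
                       units="imperial", forecast_days=5,
                       include_alerts=True, include_forecast=False,
                       include_minutely=False):
    """Assemble a weather argument dict, enforcing the documented ranges.

    Hypothetical helper: lat/lon and location are all optional in the
    schema, but this guard requires one of them (a client-side assumption).
    """
    if location is None and (lat is None or lon is None):
        raise ValueError("provide a location string or both lat and lon")
    if not 1 <= forecast_days <= 16:
        raise ValueError("forecast_days must be between 1 and 16")
    args = {
        "units": units,
        "forecast_days": forecast_days,
        "include_alerts": include_alerts,
        "include_forecast": include_forecast,
        "include_minutely": include_minutely,
    }
    if location is not None:
        args["location"] = location  # server geocodes the name
    else:
        args["lat"], args["lon"] = lat, lon
    return args

# Route-risk check: 3-day forecast with alerts for a named place
trip_args = build_weather_args(location="Cheyenne, WY", forecast_days=3)
```

Passing a place name leans on the documented geocoding fallback; passing lat/lon skips that step, which is the cheaper path when coordinates are already known.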
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It lists the data types returned (current, forecast, alerts, minutely) but does not disclose behavioral traits like rate limits, authentication, or error handling. Adequate but not detailed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no redundancy. First sentence defines function, second suggests usage. Highly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema; description does not explain the structure of returned data (e.g., JSON fields, units, alert objects). For a tool with 8 parameters, the description should cover what to expect in the response.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, and the parameter descriptions are already clear. The tool description adds context for include_minutely (minute-by-minute precipitation) but does not substantially augment what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it retrieves current and forecast weather with alerts and minutely precipitation, and provides specific use cases like travel planning and route risk assessment. Distinguishes itself from sibling tools such as geocoding and directions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly mentions use cases: destination conditions, travel planning, route risk assessment. Does not explicitly state when not to use or list alternatives, but context is sufficiently clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
