ThinAir Geo
Server Details
Location & routing intelligence for AI agents — geocoding, truck routing, traffic, weather, and place search.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average of 4.1/5, with 11 of 11 tools scored.
Each tool has a clearly distinct purpose: geocoding (single and batch), reverse geocoding, directions, place search (category-specific vs exploration), isochrones, traffic, weather, quota checking, and API key management. No overlapping functionality.
All tool names follow a consistent lowercase_snake_case pattern (e.g., batch_geocode, reverse_geocode, search_places). The naming is predictable and readable.
11 tools cover a comprehensive set of geolocation services without being excessive. Each tool serves a well-defined need, and the count feels appropriate for the server's scope.
The tool surface covers core geolocation operations (geocoding, routing, search, isochrones, traffic, weather, quota) and includes batch and key management. Minor gaps exist, such as place details lookup or matrix routing, but the set is largely complete for typical use cases.
Available Tools
11 tools
batch_geocode (A) · Read-only · Idempotent
Geocode multiple addresses in one request with structured per-record results. Use for bulk operations instead of repeated single geocode calls. Max 50 per batch.
| Name | Required | Description | Default |
|---|---|---|---|
| addresses | Yes | Array of addresses to geocode | |
Output Schema
| Name | Required | Description |
|---|---|---|
| items | No | |
| summary | No | Map of scalar facts the LLM should surface verbatim |
| display_hint | No | |
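For illustration, a minimal call sketch assuming the standard MCP JSON-RPC tools/call envelope; the addresses are placeholders, and a batch must stay within the documented 50-address limit:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "batch_geocode",
    "arguments": {
      "addresses": [
        "350 5th Ave, New York, NY",
        "1 Ferry Building, San Francisco, CA"
      ]
    }
  }
}
```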
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses max batch size of 50, but omits other behavioral traits like error handling, response format, or rate limits; no annotations to supplement.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise, front-loaded sentences with no redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately covers purpose, usage guidance, and a key constraint for a simple tool with one parameter, though missing output format details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Description does not add meaning beyond the input schema's description of 'addresses'; schema coverage is 100%, so baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool geocodes multiple addresses in one request with structured per-record results, distinguishing it from the single geocode sibling.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly advises using it for bulk operations instead of repeated single calls, naming the alternative 'geocode', but offers no further contextual exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
directions (A) · Read-only · Idempotent
Generate routes, ETAs, and turn-by-turn directions between locations. Supports car / truck / motorcycle / pedestrian / bicycle, with hazmat + dimension + toll avoidance for commercial routing. ETAs are returned as ISO 8601 with timezone offset (in the destination's local timezone). Use vehicle_profile presets (DRY_VAN_53, FLATBED_48, TANKER, etc.) when routing trucks — they set height/weight/length in one parameter.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | Destination address or 'lat,lon' | |
| via | No | Intermediate stops in order | |
| from | Yes | Origin address or 'lat,lon' | |
| units | No | Distance units (default: miles) | |
| hazmat | No | Set true when transporting hazardous materials to avoid restricted routes | |
| costing | No | Transport mode (default: auto). Use 'truck' for commercial vehicles | |
| axle_load | No | Axle weight in metric tons (e.g. 9.07t = 20,000 lbs per axle) | |
| top_speed | No | Maximum speed in km/h (default 105 for trucks) | |
| use_ferry | No | Ferry preference 0-1 (0=avoid, 1=allow) | |
| axle_count | No | Number of axles (default 5 for semi-trailer) | |
| avoid_tolls | No | Avoid toll roads when user requests toll-free routing | |
| truck_width | No | Truck width in meters (e.g. 2.6m = 8'6") | |
| truck_height | No | Truck height in meters (e.g. 4.11m = 13'6"). Triggers bridge avoidance | |
| truck_length | No | Truck length in meters (e.g. 16.2m = 53') | |
| truck_weight | No | Truck weight in metric tons (e.g. 36.3t = 80,000 lbs). Triggers weight-restricted road avoidance | |
| use_highways | No | Highway preference 0-1 (0=avoid, 1=prefer) | |
| include_traffic | No | Include live traffic conditions along route (default: true) | |
| vehicle_profile | No | Preset vehicle: DRY_VAN_53, FLATBED_48, TANKER, BOX_TRUCK_26, SPRINTER_VAN, DOUBLE_TRAILER, OVERSIZE. Sets height/weight/length automatically. Individual params override. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| items | No | |
| geojson | No | |
| metrics | No | |
| summary | No | Map of scalar facts the LLM should surface verbatim |
| display_hint | No | |
| costing_options | No | |
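A truck-routing call sketch using only documented parameters, assuming the standard MCP tools/call envelope; the origin, destination, and preset values are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "directions",
    "arguments": {
      "from": "Dallas, TX",
      "to": "Atlanta, GA",
      "costing": "truck",
      "vehicle_profile": "DRY_VAN_53",
      "avoid_tolls": true
    }
  }
}
```

Per the parameter notes, individual dimension parameters such as truck_height would override the values set by the preset.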
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries burden. Discloses ETA format (ISO 8601 with timezone offset) and vehicle_profile behavior. Lacks details on rate limits, authentication, or error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four concise sentences, front-loaded with the key action and purpose. No redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complex tool with 18 parameters and a sparsely documented output schema. The description omits response structure, error handling, and usage limits, details an agent would need to fully understand invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with parameter descriptions. Tool description adds value by explaining vehicle_profile presets and overriding behavior, but does not significantly augment most parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it generates routes, ETAs, and turn-by-turn directions. Lists transport modes and commercial routing features. Distinct from siblings like geocode or isochrone.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Does not explicitly state when to use it versus alternatives, though context implies routing. Mentions vehicle_profile presets for trucks, but gives no exclusions or when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
explore (A) · Read-only · Idempotent
BROWSING / DISCOVERY search — cities, neighbourhoods, or mixed venues near a location. Use this when the user is exploring a REGION rather than looking for a specific category. Supports population filtering ('cities > 100k'), distance/population sorting, and layer filtering (locality / neighbourhood / venue / address / street). For specific POI categories (gas, food, charging, etc.), use search_places instead.
| Name | Required | Description | Default |
|---|---|---|---|
| lat | No | Latitude of center point | |
| lon | No | Longitude of center point | |
| size | No | Max results (default 10, max 50) | |
| sort | No | Sort mode (default: combined) | |
| layers | No | Comma-separated: venue, address, street, locality, neighbourhood (default: locality) | |
| radius | No | Search radius with unit, e.g. '50km', '30mi' (default: 150km) | |
| location | No | Center point address or 'lat,lon' | |
| min_population | No | Minimum population filter for locality results | |
| boundary_country | No | ISO country code to restrict results | |
Output Schema
| Name | Required | Description |
|---|---|---|
| items | No | |
| geojson | No | |
| summary | No | Map of scalar facts the LLM should surface verbatim |
| display_hint | No | |
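A region-discovery sketch assuming the standard MCP tools/call envelope; the center point and population threshold are illustrative, and the sort value 'population' is assumed from the description's 'distance/population sorting':

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "explore",
    "arguments": {
      "location": "Denver, CO",
      "layers": "locality",
      "min_population": 100000,
      "sort": "population",
      "size": 10
    }
  }
}
```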
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden. It discloses supported features (population filtering, sorting, layer filtering) and default values (size=10, sort=combined, layers=locality, radius=150km). It does not mention side effects or auth needs, but as a read-only search tool the behavior is adequately transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single paragraph of four sentences, each providing essential information: purpose, when-to-use, supported features, and a sibling-tool pointer. It front-loads the key concept and includes no redundant or extraneous text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 9 parameters and a sparsely documented output schema, the description is complete enough: it states the type of search, when to use, supported features, defaults, and explicitly references the sibling tool for specific POIs. It covers the essential context for an agent to invoke the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds value by explaining the overall purpose of parameters like min_population, sort, and layers in context (e.g., 'Supports population filtering...'), which helps the agent understand parameter usage beyond the schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description explicitly states it's for browsing/discovery of cities, neighbourhoods, or mixed venues near a location, using specific verbs like 'search' and 'exploring'. It clearly distinguishes itself from sibling tool search_places by specifying the latter is for specific POI categories.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use guidance: 'Use this when the user is exploring a REGION rather than looking for a specific category.' It also gives a direct alternative: 'For specific POI categories... use search_places instead.'
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
geocode (A) · Read-only · Idempotent
Convert an address, place name, street, or intersection into coordinates and structured location results. Use when input is text and you need coordinates before routing, weather, or search. Supports street-level resolution and proximity biasing.
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Number of candidates to return (default 5) | |
| query | Yes | Address, place name, street, intersection (e.g. 'Broadway and 42nd Street New York'), or free-form location text | |
| layers | No | Restrict to: address, street, venue, locality. Use layers=street when user clearly wants a street entity | |
| focus_lat | No | Bias results near this latitude — use when user says 'near me' or 'close to' | |
| focus_lon | No | Bias results near this longitude | |
| boundary_country | No | ISO country code to reduce ambiguity | |
Output Schema
| Name | Required | Description |
|---|---|---|
| items | No | |
| geojson | No | |
| summary | No | Map of scalar facts the LLM should surface verbatim |
| display_hint | No | |
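A geocoding sketch assuming the standard MCP tools/call envelope; the query is the intersection example from the schema, and the country restriction is illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "geocode",
    "arguments": {
      "query": "Broadway and 42nd Street New York",
      "size": 3,
      "boundary_country": "US"
    }
  }
}
```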
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Adds behavioral details: 'Supports street-level resolution and proximity biasing.' Does not cover rate limits, authentication, or response structure beyond implied 'coordinates and structured location results.'
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero filler. Purpose first, usage second, capabilities third. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The output schema fields are largely undescribed, but the description gives a general sense of the return type. For a geocoding tool with 6 well-documented parameters and clear intent, this is mostly complete. Could optionally mention supported coordinate systems or result format details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. Description mentions proximity biasing and street-level resolution, which map to existing schema parameters (focus_lat/focus_lon and layers). Adds minimal extra meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb 'Convert' with specific resources: addresses, place names, streets, intersections. Distinguishes from sibling 'reverse_geocode' (which does the opposite) and 'search_places' (likely different scope).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'when input is text and you need coordinates before routing, weather, or search.' However, it does not mention when not to use or name specific sibling alternatives like 'batch_geocode' or 'reverse_geocode'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
isochrone (A) · Read-only · Idempotent
Generate travel-time or travel-distance reachability polygons from an origin. Pass MULTIPLE bands in one call — e.g. contours_minutes:[10,20,30] returns three nested polygons in a single response (one round-trip, not three). Use for service coverage, dispatch range, territory design, 'how far can I get in X minutes' questions, and concentric zone visualizations. Output is GeoJSON ready for Mapbox / Leaflet.
| Name | Required | Description | Default |
|---|---|---|---|
| costing | No | Transport mode | |
| location | Yes | Center point address or 'lat,lon' | |
| contours_km | No | Distance bands in km | |
| truck_height | No | Truck height in meters | |
| truck_weight | No | Truck weight in metric tons | |
| contours_minutes | No | Time bands in minutes, e.g. [10, 20, 30] | |
Output Schema
| Name | Required | Description |
|---|---|---|
| geojson | No | |
| summary | No | Map of scalar facts the LLM should surface verbatim |
| display_hint | No | |
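A multi-band sketch assuming the standard MCP tools/call envelope; the 'auto' costing value is an assumption carried over from the directions tool, and per the description the three bands come back as nested polygons in one round-trip:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "isochrone",
    "arguments": {
      "location": "39.7392,-104.9903",
      "costing": "auto",
      "contours_minutes": [10, 20, 30]
    }
  }
}
```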
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden. It fails to disclose behavioral traits such as whether the tool is read-only, destructive, or has data limits. It only states output format (GeoJSON).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is four concise sentences, front-loaded with the core purpose, and efficiently conveys the key usage pattern and output format without wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite the sparsely documented output schema, the description's mention of GeoJSON output is helpful. It covers the main use cases and the multi-band feature. Missing details on error handling or limits, but adequate for a relatively simple tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, but the description adds value by explaining the multi-band parameter usage ('contours_minutes:[10,20,30]') and the benefit of one round-trip. This provides meaning beyond the schema's basic descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool generates travel-time or travel-distance reachability polygons from an origin. It uses specific verbs ('Generate', 'returns') and identifies the resource ('reachability polygons'), distinguishing it from sibling tools like 'directions' (route) and 'geocode' (address conversion).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage context: 'Use for service coverage, dispatch range, territory design...' and gives a concrete multi-band example. However, it does not mention when not to use this tool or differentiate from siblings like 'explore' or 'traffic'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
issue_api_key (A)
Mint a fresh API key for your current authenticated user/tenant. Useful for CLI workflows, key rotation, or MCP clients that hide the configured Bearer. The new key is tied to your existing plan. Counts as 1 query against your daily quota.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| note | No | |
| plan | No | |
| user | No | |
| error | No | |
| usage | No | |
| key_id | No | Non-secret fingerprint of the issued key. Safe to log and surface in UI. |
| status | Yes | |
| api_key | No | One-time API key secret. Returned only on successful creation. Treat as a credential — never log, echo, or render in text channels. |
| credential_returned_once | No | |
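A key-minting sketch assuming the standard MCP tools/call envelope; the tool takes no arguments, the call costs one query of daily quota, and the returned api_key is a one-time secret that should never be logged or echoed:

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "issue_api_key",
    "arguments": {}
  }
}
```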
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so description carries full burden. Discloses write operation ('Mint'), auth scope ('for your current authenticated user/tenant'), and side effects ('counts as 1 query'). Could mention key invalidation behavior, but covers core traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, each purposeful: the first states the core action, the second provides use cases, and the last two clarify plan binding and quota cost. No filler or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Completeness is high given zero parameters. Covers purpose, usage, behavioral traits, and quota impact. The prose does not spell out the return value (the API key string), but context implies a key is returned.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters in schema, so the description adds no parameter info. With zero parameters the baseline is 4, since the description naturally cannot add parameter specifics; a 4 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Mint' and the resource 'API key' with scope ('current authenticated user/tenant'). Distinguishes from sibling tools as no other tool issues API keys.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit use cases ('CLI workflows, key rotation, MCP clients') and practical constraints ('tied to your existing plan', 'counts as 1 query against your daily quota'). Lacks explicit when-not-to-use, but sibling environment is distinct.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
quota (A) · Read-only
Check current usage, remaining limits, plan, and quota breakdown for the caller. FREE TO CALL — never counts against your quota, never blocked by it. Use this proactively when the user asks about usage or seems near limits.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| mcp | Yes | |
| upstream | No | |
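A quota-check sketch assuming the standard MCP tools/call envelope; the tool takes no arguments and is documented as free to call:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "quota",
    "arguments": {}
  }
}
```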
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It discloses that the call is free, never counts against quota, and is never blocked. Does not mention authentication requirements, but that may be implicit.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, front-loaded with the core action, then cost guarantees and usage guidance. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters, a minimal output schema, and simple functionality, the description fully informs the agent about purpose and when to invoke.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters, so the baseline is 4. Description adds no param info, but none is needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states verb 'check' and resource 'current usage, remaining limits, plan, and quota breakdown', specifically identifying the caller's context. No sibling tool shares a similar purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit proactive usage guidance: 'when the user asks about usage or seems near limits'. Also notes it's free and not blocked, but does not specify when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
reverse_geocode (A) · Read-only · Idempotent
Convert coordinates into the nearest address, street, or place. Use when starting from GPS coordinates or a map position.
| Name | Required | Description | Default |
|---|---|---|---|
| lat | Yes | Latitude | |
| lon | Yes | Longitude | |
| size | No | Number of nearby candidates (default 3) | |
| layers | No | Restrict to: address, street, venue, locality | |
Output Schema
| Name | Required | Description |
|---|---|---|
| items | No | |
| geojson | No | |
| summary | No | Map of scalar facts the LLM should surface verbatim |
| display_hint | No | |
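A reverse-geocoding sketch assuming the standard MCP tools/call envelope; the coordinates (near the Empire State Building) are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 8,
  "method": "tools/call",
  "params": {
    "name": "reverse_geocode",
    "arguments": {
      "lat": 40.748,
      "lon": -73.985,
      "size": 1
    }
  }
}
```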
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description should cover behavioral aspects. It only states the basic action without disclosing side effects, authentication needs, rate limits, or error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no unnecessary words, efficiently conveying the core purpose and usage context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Description does not explain the return format or behavior for edge cases (e.g., invalid coordinates). Given the sparsely documented output schema, more detail would be beneficial.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the description adds little beyond the schema. It mentions 'nearest' but the schema already includes size and layers for that purpose.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Convert' and the resource 'coordinates into the nearest address, street, or place', distinguishing it from sibling tools like geocode, which does the reverse.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use: 'Use when starting from GPS coordinates or a map position.' It does not mention when not to use or provide explicit alternatives, but the context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_places (A) · Read-only · Idempotent
CATEGORY-specific POI search near a point — gas stations, truck stops, restaurants, charging stations, etc. Use this when the user has a specific TYPE of place in mind (food / health / retail / fuel / accommodation / nightlife / transport / government / recreation). For broader DISCOVERY (e.g. 'cities within 50 miles' or 'venues by population'), use explore instead.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results (default 10, max 50) | |
| query | No | Free-text place query such as 'truck stop', 'restaurant', 'charging station' | |
| center | No | Center point for nearby search | |
| layers | No | Restrict to: venue, address | |
| radius_m | No | Search radius in meters (default 1000) | |
| categories | No | Structured categories: food, education, health, entertainment, retail, accommodation, nightlife, transport, government, recreation | |
Output Schema
| Name | Required | Description |
|---|---|---|
| items | No | |
| geojson | No | |
| summary | No | Map of scalar facts the LLM should surface verbatim |
| display_hint | No | |
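A category-search sketch assuming the standard MCP tools/call envelope; the 'lat,lon' string format for center is an assumption carried over from the sibling tools, and the query and radius are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 9,
  "method": "tools/call",
  "params": {
    "name": "search_places",
    "arguments": {
      "query": "truck stop",
      "center": "35.0844,-106.6504",
      "radius_m": 5000,
      "limit": 10
    }
  }
}
```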
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries burden. It describes the search behavior but does not disclose details like result ordering, fuzzy matching, or what happens with no results. Adequate but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences, no wasted words, front-loaded with the core purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given moderate complexity and a sparsely documented output schema, the description covers purpose and guidance well. It misses details on result format but is overall sufficient for agent decision-making.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. Description adds example queries and categories but does not add significant meaning beyond schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it performs CATEGORY-specific POI search near a point, lists example categories, and explicitly contrasts with sibling tool 'explore' for broader discovery.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells when to use this tool (user has a specific type of place in mind) and when to use the alternative 'explore' (broader discovery).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
traffic (A) · Read-only
Retrieve live traffic conditions, congestion, and speed for a location. Use when traffic is needed independently of routing — for corridor monitoring, area congestion, or incident checks. COVERAGE: live data for ~30 major US metros; returns degraded or empty values outside these areas. For rural coordinates, qualify the response (e.g. 'no live traffic coverage here — showing free-flow speeds').
| Name | Required | Description | Default |
|---|---|---|---|
| lat | Yes | Latitude | |
| lon | Yes | Longitude | |
| units | No | Speed units — auto-detected from location (mph in US/UK, km/h elsewhere). Override if needed. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| items | No | |
| geojson | No | |
| summary | No | Map of scalar facts the LLM should surface verbatim |
| display_hint | No | |
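A standalone traffic-check sketch assuming the standard MCP tools/call envelope; the coordinates (downtown Chicago) are illustrative and sit inside a major metro, matching the documented coverage:

```json
{
  "jsonrpc": "2.0",
  "id": 10,
  "method": "tools/call",
  "params": {
    "name": "traffic",
    "arguments": {
      "lat": 41.8781,
      "lon": -87.6298
    }
  }
}
```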
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses live data coverage limited to ~30 major US metros, returning degraded/empty outside, and suggests qualifying responses for rural areas. This adds meaningful behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, front-loaded with purpose, followed by usage context, a coverage note, and response guidance. No wasted words; each sentence adds distinct value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The output schema fields are largely undescribed, and the description does not mention what the tool returns (e.g., format, fields). For a data retrieval tool, this is a notable gap, though the rest of the description is fairly complete given the complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. The description mentions units auto-detection and override capability, adding minor value beyond the schema, but does not elaborate on lat/lon or other parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves live traffic conditions, congestion, and speed for a location, with a specific verb and resource. It distinguishes itself from routing tools like 'directions' by emphasizing independent traffic data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: when traffic is needed independently of routing for corridor monitoring, area congestion, or incident checks. Also provides coverage limitations and guidance for rural coordinates, though no direct alternatives are named.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
weather (A) · Read-only
Get current and forecast weather for a location, including severe weather alerts and minute-by-minute precipitation. Use for destination conditions, travel planning, or route risk assessment.
| Name | Required | Description | Default |
|---|---|---|---|
| lat | No | Latitude | |
| lon | No | Longitude | |
| units | No | Temperature/wind units (default: imperial) | |
| location | No | Place name or address (will be geocoded) | |
| forecast_days | No | Forecast days 1-16 (default 5) | |
| include_alerts | No | Include severe weather alerts and warnings (default: true) | |
| include_forecast | No | Include multi-day hourly forecast | |
| include_minutely | No | Include minute-by-minute precipitation for next 60 min | |
Output Schema
| Name | Required | Description |
|---|---|---|
| alerts | No | |
| metrics | No | |
| summary | No | Map of scalar facts the LLM should surface verbatim |
| display_hint | No | |
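A forecast sketch assuming the standard MCP tools/call envelope; the destination and forecast horizon are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 11,
  "method": "tools/call",
  "params": {
    "name": "weather",
    "arguments": {
      "location": "Flagstaff, AZ",
      "forecast_days": 3,
      "include_alerts": true
    }
  }
}
```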
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries burden. Lists data types returned (current, forecast, alerts, minutely) but does not disclose behavioral traits like rate limits, authentication, or error handling. Adequate but not detailed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no redundancy. First sentence defines function, second suggests usage. Highly efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The output schema fields are largely undescribed; the description does not explain the structure of returned data (e.g., JSON fields, units, alert objects). For a tool with 8 parameters, the description should cover what to expect in the response.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, and parameter descriptions are already clear. The tool description adds context for include_minutely (minute-by-minute precipitation) but does not substantially augment what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it retrieves current and forecast weather with alerts and minutely precipitation, and provides specific use cases like travel planning and route risk assessment. Distinguishes itself from sibling tools such as geocode and directions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly mentions use cases: destination conditions, travel planning, route risk assessment. Does not explicitly state when not to use or list alternatives, but context is sufficiently clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.