bbox-mcp-server
Server Quality Checklist
- Disambiguation: 4/5
Most tools have distinct purposes, but `search_overpass` and `aggregate_overpass_h3` both execute Overpass queries and could confuse agents about whether they need raw JSON results or H3-binned spatial analysis. The other tools (coordinate conversion, URL generation, tag discovery, H3 indexing) are clearly differentiated.
- Naming Consistency: 5/5
Excellent consistency throughout. All tools use snake_case with clear verb_noun patterns (get_bounds, generate_share_url, list_osm_tags). Related concepts are named predictably (both H3 tools include 'h3', both Overpass tools include 'overpass').
- Tool Count: 5/5
Six tools is ideal for this focused geospatial domain. The set covers the complete workflow from coordinate parsing and OSM tag discovery to data retrieval, spatial aggregation, and visualization URL generation without bloat or redundancy.
- Completeness: 4/5
Strong coverage for bounding box and OSM analysis workflows, with tools for input (get_bounds), preparation (list_osm_tags), retrieval (search_overpass), analysis (aggregate_overpass_h3, get_h3_indices), and output (generate_share_url). Minor gaps exist in coordinate validation and H3 geometry export, but core functionality is solid.
Average 3.8/5 across 6 of 6 tools scored.
See the tool scores section below for per-tool breakdowns.
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v1.2.4
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
Add a glama.json file to provide metadata about your server.
- This server provides 6 tools.
No known security issues or vulnerabilities reported.
This server has been verified by its author.
Add related servers to improve discoverability.
Tool Scores
- get_h3_indices
Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It mentions the compact option's purpose (minimize response) and return_geometry's effect (hex boundary polygons), but omits critical behavioral details like the MAPBOX_ACCESS_TOKEN requirement for geocoding, bounding box size limits, or performance implications of high resolutions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise with two well-structured sentences. The first states purpose; the second lists input formats. No redundancy or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness: 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the description covers the core function and input formats, it lacks output value specification (array of hex strings?) and omits the `location` parameter entirely from the narrative. Given the absence of an output schema and annotations, the description should provide more comprehensive behavioral and return-value context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description reinforces the `bbox` parameter's supported formats (WKT, GeoJSON, etc.) but does not add meaning beyond the schema's existing documentation for other parameters like `location`, `compact`, or `return_geometry`.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly identifies the action (Get), resource (Uber H3 cell indices), and scope (target resolution). However, it narrowly emphasizes 'bounding box area' without mentioning the alternative `location` text-search capability defined in the schema, slightly limiting the agent's understanding of the tool's full scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides input format guidance (WKT, GeoJSON, etc.) which hints at when the tool can be used, but offers no explicit comparison to sibling tools (e.g., when to use this vs `aggregate_overpass_h3`) and does not clarify when to use `location` vs `bbox` parameters.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- aggregate_overpass_h3
Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden but omits key behavioral details: it does not describe the output format (e.g., whether it returns hex IDs with counts, GeoJSON, or raw data), pagination behavior, or rate limiting concerns typical of Overpass queries.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence is tightly constructed with zero redundancy, front-loading the core action and progressing logically through processing method and purpose. Every clause provides essential context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness: 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of annotations and output schema, the description adequately covers the input purpose but remains incomplete regarding return value structure and behavioral constraints. For a tool requiring domain knowledge (Overpass QL, H3), additional context on output format would improve utility.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description reinforces the 'resolution' and 'query' parameters but does not add semantic value beyond the schema regarding the mutual exclusivity of 'location' vs 'bbox' or syntax guidance for Overpass QL.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (executes Overpass query), processing method (bins into H3 hexagons), and analytical purpose (spatial density analysis). It implicitly distinguishes from sibling `search_overpass` by emphasizing aggregation/binning versus raw retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines: 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'to analyze spatial density' implies the use case, but the description lacks explicit guidance on when to choose this over `search_overpass` (raw data) or `get_h3_indices` (pure hex generation), and does not mention prerequisites like Overpass QL knowledge.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
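The aggregation step this tool's description implies — binning features into spatial cells and counting occupants per cell — can be illustrated with a minimal sketch. The square lat/lng grid below is a stand-in for H3's hexagonal cells (the server's actual binning code is not shown in this report), but the counting logic is the same:

```python
from collections import Counter

def bin_points(points, cell_deg=0.01):
    """Group (lat, lng) points into square grid cells and count per cell.

    A stand-in for H3 binning: H3 addresses hexagonal cells with 64-bit
    indices, but the aggregate-and-count logic is identical.
    """
    counts = Counter()
    for lat, lng in points:
        # Snap each coordinate down to its cell origin.
        cell = (int(lat // cell_deg), int(lng // cell_deg))
        counts[cell] += 1
    return counts

points = [(51.5007, -0.1246), (51.5008, -0.1247), (51.5136, -0.0988)]
density = bin_points(points)
# The two nearby points share a cell; the third lands in its own cell.
```

Smaller `cell_deg` values play the role of higher H3 resolutions, which is why the review above flags the performance implications of high resolutions as worth disclosing.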
- search_overpass
Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses that the server automatically wraps queries in bbox filters and returns structured JSON. However, it omits safety characteristics (read-only vs. destructive), rate limits, or error handling patterns that would help an agent understand operational constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with a clear opening sentence stating purpose, followed by implementation details, and a dedicated section for common query patterns. Every sentence earns its place; the examples are essential for a domain-specific query language like Overpass QL.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema exists, the description notes it 'returns structured JSON' but does not describe the JSON structure. The inclusion of tag examples compensates well for input complexity. It adequately covers the tool's functionality but could improve by describing the output format or pagination behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters: 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% coverage (baseline 3), the description adds substantial value through the 'COMMON TAG EXAMPLES' section, providing concrete Overpass QL syntax patterns (e.g., `nwr["amenity"="restaurant"]`) that help agents construct valid 'query' parameter values beyond the generic schema description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it 'Execute[s] an Overpass QL query to find POIs, roads, or other OSM features within a bounding box,' providing specific verb, resource, and scope. However, it does not explicitly differentiate from sibling tool 'aggregate_overpass_h3,' which also appears to query OSM data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines: 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage through POI examples (restaurants, parks, roads) and notes that the server automatically wraps queries in bbox filters. However, it lacks explicit guidance on when to use this vs. 'aggregate_overpass_h3' or 'list_osm_tags,' leaving the agent to infer based on the 'search' vs. 'aggregate' naming distinction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
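The automatic bbox wrapping the description discloses amounts to string assembly around the user's query. The sketch below follows standard Overpass QL conventions (the global `[bbox:south,west,north,east]` setting and an `out` statement); the server's actual wrapping code is not shown in this report, so these details are assumptions:

```python
def wrap_query(query: str, bbox: tuple) -> str:
    """Wrap a bare Overpass QL statement in a global bbox filter.

    bbox is (south, west, north, east) in degrees, the order the
    Overpass [bbox:...] global setting expects.
    """
    south, west, north, east = bbox
    # Ensure the statement is terminated before appending output settings.
    body = query if query.rstrip().endswith(";") else query + ";"
    return (
        f"[out:json][bbox:{south},{west},{north},{east}];"
        f"{body}"
        "out center;"
    )

q = wrap_query('nwr["amenity"="restaurant"]', (51.49, -0.13, 51.52, -0.10))
```

With a wrapper like this, the agent only supplies the selector (e.g. `nwr["amenity"="restaurant"]`), which matches the review's observation that the description's COMMON TAG EXAMPLES section carries most of the syntax burden.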
- generate_share_url
Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Identifies the destination tool (visual Bounding Box tool) which provides useful context. However, omits behavioral details like URL persistence, authentication requirements, or validation behavior for invalid coordinate strings.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two well-structured sentences with zero waste. First sentence front-loads purpose and destination; second covers input flexibility. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete for a single-parameter tool. Covers input formats and destination behavior. Minor gap: does not explicitly state that the tool returns a URL string (though implied by 'generates a URL').
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description lists supported formats (WKT, GeoJSON, ogrinfo extent, raw strings) but largely mirrors schema description without adding syntax constraints, validation rules, or format selection guidance.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action (generates URL), destination resource (visual Bounding Box tool), and function (display coordinates on map). Clearly distinguishes from data-retrieval siblings like search_overpass and get_bounds by focusing on visualization/sharing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines: 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage context through reference to 'visual Bounding Box tool' and coordinate display, distinguishing it from raw data retrieval tools. However, lacks explicit when-to-use guidance or comparison against sibling alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
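URL generation of this kind typically reduces to query-string encoding. A minimal sketch — the base URL and parameter name are placeholders for illustration, not the server's actual endpoint:

```python
from urllib.parse import urlencode

def generate_share_url(bbox: str) -> str:
    """Build a shareable URL embedding the bbox as a query parameter.

    Placeholder base URL and parameter name; the real tool's
    destination (the visual Bounding Box tool) is not reproduced here.
    """
    base = "https://example.com/bbox"  # hypothetical destination
    # urlencode percent-escapes commas and other reserved characters.
    return f"{base}?{urlencode({'bbox': bbox})}"

url = generate_share_url("-0.13,51.49,-0.10,51.52")
```

Because the output is a plain string, the review's note that the description only implies "returns a URL string" is easy to resolve with one extra sentence in the tool description.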
- list_osm_tags
Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It classifies the tool as a 'Discovery tool' (implying read-only/reference behavior) and explains the protective purpose (preventing hallucination), but omits details on return format, rate limits, or caching behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two tightly constructed sentences with zero redundancy: first establishes identity and function, second provides workflow context and value proposition. Perfectly front-loaded and appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter discovery tool without output schema, the description adequately covers intent and workflow positioning. Slight deduction because it could briefly characterize the return value (e.g., 'returns suggested key-value pairs') to compensate for missing output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, providing detailed examples ('food', 'health', etc.) for the category parameter. The description references 'a given category' but adds no syntax, validation rules, or semantic details beyond what the schema already provides, warranting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb ('look up') and resource ('OpenStreetMap tags'), defines scope ('for a given category'), and distinguishes from sibling tools by positioning it as a preparatory step 'before writing an Overpass query'—clearly contrasting with search_overpass.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines: 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context on when to use ('before writing an Overpass query' to 'prevent hallucinating incorrect tags'), implicitly guiding the agent to use this prior to query execution tools. Lacks explicit 'when not to use' exclusions or direct sibling comparisons.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- get_bounds
Behavior: 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and successfully discloses critical behavioral traits: MAPBOX_ACCESS_TOKEN dependency for geocoding, supported input formats (WKT, GeoJSON, ogrinfo), and the conversion nature of the tool. Could be improved by explicitly stating this is a safe read-only operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: sentence 1 states purpose, sentence 2 lists parsing capabilities, sentence 3 provides the critical auth constraint. Front-loaded with the core action and appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 7 parameters with 100% schema coverage and no output schema, the description appropriately focuses on high-level purpose and constraints rather than repeating schema details. Captures the essential context that this is a coordinate utility distinct from the server's data-retrieval siblings.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters: 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. The description adds value by grouping the technical format details (WKT, GeoJSON, ogrinfo) into a cohesive capability statement for the 'bbox' parameter, and explicitly linking the MAPBOX_ACCESS_TOKEN requirement to the 'location' parameter behavior.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Get[s] converted coordinates for a bounding box or text location search', providing specific verb and resource. It distinguishes from siblings (which focus on OSM data retrieval/H3 indices) by emphasizing coordinate conversion and parsing functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines: 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit conditional guidance: 'If MAPBOX_ACCESS_TOKEN is not set, you MUST provide explicit coordinates via 'bbox'.' This establishes clear prerequisites and when-to-use conditions for the location parameter. Does not explicitly reference sibling alternatives, but the coordinate conversion focus makes the distinction clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
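The multi-format parsing this tool advertises can be sketched for two of its input formats. This simplified version handles WKT POLYGON and GeoJSON Polygon only; the real tool also accepts ogrinfo extent output and raw coordinate strings, and its actual implementation is not shown in this report:

```python
import json
import re

def parse_bbox(text: str):
    """Parse a bbox from WKT POLYGON or GeoJSON into (west, south, east, north).

    Simplified sketch: no validation of ring closure, winding, or
    antimeridian crossing, and no geocoding fallback.
    """
    text = text.strip()
    if text.upper().startswith("POLYGON"):
        # Pull every "lng lat" pair out of the WKT ring.
        pairs = re.findall(r"(-?\d+\.?\d*)\s+(-?\d+\.?\d*)", text)
        lngs = [float(x) for x, _ in pairs]
        lats = [float(y) for _, y in pairs]
        return (min(lngs), min(lats), max(lngs), max(lats))
    if text.startswith("{"):
        geom = json.loads(text)
        coords = geom["coordinates"][0]  # exterior ring of a Polygon
        lngs = [c[0] for c in coords]
        lats = [c[1] for c in coords]
        return (min(lngs), min(lats), max(lngs), max(lats))
    raise ValueError("unsupported bbox format")

wkt = "POLYGON((-0.13 51.49, -0.10 51.49, -0.10 51.52, -0.13 51.52, -0.13 51.49))"
parse_bbox(wkt)  # (-0.13, 51.49, -0.1, 51.52)
```

Note that both formats are longitude-first, while the review's coordinate-validation gap (axis order, range checks) is exactly the kind of edge this sketch glosses over.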
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
```json
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}
```
Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool receives a Tool Definition Quality Score (TDQS) of 1–5 across six weighted dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
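The stated formula can be checked numerically. The dimension scores used below are the ones reported above for the first tool; the weights follow the percentages listed in this section:

```python
# Dimension weights for a single tool's definition-quality score (TDQS).
WEIGHTS = {
    "purpose": 0.25,
    "usage_guidelines": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

def tdqs(scores: dict) -> float:
    """Weighted 1-5 score for one tool."""
    return sum(WEIGHTS[k] * v for k, v in scores.items())

def server_definition_quality(tool_scores: list) -> float:
    # 60% mean + 40% minimum, so one weak tool drags the score down.
    per_tool = [tdqs(s) for s in tool_scores]
    return 0.6 * (sum(per_tool) / len(per_tool)) + 0.4 * min(per_tool)

def overall(definition_quality: float, coherence: float) -> float:
    # 70% tool definition quality, 30% server coherence.
    return 0.7 * definition_quality + 0.3 * coherence

# First tool's dimension scores as reported above:
h3_indices = {"purpose": 4, "usage_guidelines": 2, "behavior": 3,
              "parameters": 3, "conciseness": 5, "completeness": 3}
round(tdqs(h3_indices), 2)  # 3.25
```

The min-score term explains why fixing the single lowest-scoring tool is the fastest way to raise the server-level grade.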
Latest Blog Posts
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/iamvibhorsingh/bbox-mcp-server'
If you have feedback or need assistance with the MCP directory API, please join our Discord server.