Server Details

Nominatim MCP — wraps the OpenStreetMap Nominatim geocoding API (free, no auth required)

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-nominatim
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 4/5 across 8 of 8 tools scored. Lowest: 3.1/5.

Server Coherence (Grade: C)
Disambiguation: 3/5

The tools have some clear distinctions, such as lookup vs. search_address for OSM ID vs. address queries, but there is significant overlap and ambiguity. ask_pipeworx is a meta-tool that could handle many of the same tasks as other tools, potentially causing confusion about when to use it versus specific tools like reverse_geocode or search_address. The memory tools (remember, recall, forget) are distinct from geocoding tools, but the overall set lacks clear boundaries between general-purpose and domain-specific functions.

Naming Consistency: 2/5

Naming conventions are inconsistent and chaotic. Some tools use snake_case (lookup, reverse_geocode, search_address), while others use different styles like ask_pipeworx (which mixes lowercase and a brand name) or single words (forget, recall, remember). There is no predictable verb_noun pattern, and the styles vary widely, making it difficult for agents to infer tool purposes from names alone.

Tool Count: 3/5

With 8 tools, the count is borderline but reasonable for a server that mixes geocoding and utility functions. However, the scope feels disjointed—combining Nominatim geocoding tools with memory management and a general query tool—which makes the count seem either too high for a focused purpose or too low to cover all domains adequately. It sits in a gray area where the tools don't clearly justify their collective presence.

Completeness: 2/5

For the Nominatim geocoding domain, the surface is incomplete, as it only includes forward and reverse geocoding without supporting operations like bounding box searches, place details, or API configuration. The inclusion of unrelated tools like ask_pipeworx and memory functions creates gaps in the core domain coverage, and the server lacks a cohesive purpose, leading to significant gaps that could cause agent failures in specialized tasks.

Available Tools

8 tools
ask_pipeworx (Grade: A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema):
- question (required): Your question or request in natural language
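To make the call shape concrete, here is a minimal Python sketch of invoking ask_pipeworx as a JSON-RPC 2.0 tools/call request over the server's Streamable HTTP transport. The endpoint URL is a placeholder, and the MCP initialize handshake is omitted for brevity; treat this as an illustration of the request shape, not a drop-in client.

```python
# Minimal sketch of calling ask_pipeworx over Streamable HTTP.
# The endpoint URL is a placeholder, and the MCP initialize handshake
# is omitted; a real client must initialize the session first.
import requests

MCP_URL = "https://example.com/mcp"  # placeholder endpoint

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {"question": "What is the US trade deficit with China?"},
    },
}

resp = requests.post(
    MCP_URL,
    json=payload,
    # Streamable HTTP servers may reply as plain JSON or as an SSE stream.
    headers={"Accept": "application/json, text/event-stream"},
)
print(resp.text)
```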
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses key behavioral traits: the tool automatically selects data sources and fills arguments, handles natural language questions, and returns results. It doesn't mention rate limits, authentication needs, or error conditions, but provides substantial operational context beyond basic purpose.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly front-loaded with the core purpose in the first sentence. Every subsequent sentence adds value: explaining the mechanism, contrasting with alternatives, and providing concrete examples. Zero wasted words or redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with no annotations and no output schema, the description provides excellent context about how the tool works, when to use it, and what to expect. The examples effectively illustrate both input format and potential output types. It could mention response format or error handling, but is largely complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the single 'question' parameter. The description adds meaningful context by emphasizing 'plain English' and 'natural language' input, and provides concrete examples that illustrate the expected parameter format and scope beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Ask a question in plain English and get an answer from the best available data source.' It specifies the verb ('ask'), resource ('answer'), and mechanism ('Pipeworx picks the right tool, fills the arguments'). It distinguishes from siblings by emphasizing natural language input rather than structured queries.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'No need to browse tools or learn schemas — just describe what you need.' It provides clear alternatives by implication (use other tools when you want to browse or use structured schemas). The examples further illustrate appropriate use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (Grade: A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema):
- limit (optional): Maximum number of tools to return (default 20, max 50)
- query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
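Assuming the same JSON-RPC envelope as the ask_pipeworx sketch above, the tools/call params object for discover_tools might look like this (the limit value here is arbitrary):

```python
# Hypothetical tools/call "params" object for discover_tools; wrap it in
# the same JSON-RPC envelope shown in the ask_pipeworx sketch above.
params = {
    "name": "discover_tools",
    "arguments": {
        "query": "find trade data between countries",  # natural language
        "limit": 10,  # optional; schema default is 20, max 50
    },
}
```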
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool returns 'the most relevant tools with names and descriptions' and mentions a default/max limit (implied from schema), but doesn't cover other behavioral aspects like error handling, authentication needs, rate limits, or whether it's read-only. The description adds some context but leaves gaps for a search tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and well-structured in two sentences. The first sentence states the purpose, and the second provides usage guidelines. Every sentence earns its place with no wasted words, and key information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (search function with 2 parameters), no annotations, and no output schema, the description is reasonably complete. It covers purpose, usage context, and return content, but lacks details on output format (e.g., structure of returned tools) and error cases. For a search tool without annotations or output schema, it does well but could be more comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents both parameters (query and limit). The description adds marginal value by emphasizing the natural language aspect of the query ('by describing what you need') and the catalog context, but doesn't provide additional syntax or format details beyond what the schema specifies. Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Search the Pipeworx tool catalog by describing what you need.' It specifies the verb ('Search'), resource ('Pipeworx tool catalog'), and method ('by describing what you need'), distinguishing it from sibling tools like lookup, reverse_geocode, and search_address which appear to be more specific data lookup tools rather than a catalog search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This gives clear context on when to use it (large tool catalogs, initial discovery) and implies alternatives (other tools for specific tasks once identified), though it doesn't name specific alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget (Grade: B)

Delete a stored memory by key.

Parameters (JSON Schema):
- key (required): Memory key to delete
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool deletes a memory, implying a destructive mutation, but doesn't cover critical aspects like permissions needed, whether deletion is permanent or reversible, error handling for non-existent keys, or rate limits. For a destructive tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero waste—'Delete a stored memory by key.' It is front-loaded with the core action and resource, making it easy to parse quickly. Every word contributes directly to understanding the tool's purpose without redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's destructive nature, lack of annotations, and no output schema, the description is insufficiently complete. It doesn't explain what happens post-deletion (e.g., confirmation, error messages), the scope of 'memory' in this context, or how it integrates with sibling tools. For a mutation tool with no structured safety or output information, more detail is needed to guide safe and effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'key' parameter documented as 'Memory key to delete'. The description adds minimal value beyond this, only reinforcing the parameter's role. With one parameter and high schema coverage, the baseline is 3, but the description's concise alignment with the schema earns a slight boost for clarity, though it doesn't provide additional context like key format or examples.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Delete') and the resource ('a stored memory by key'), making the purpose immediately understandable. It distinguishes from siblings like 'remember' (create) and 'recall' (retrieve), though it doesn't explicitly contrast with them. The verb+resource combination is specific but could be more detailed about what 'memory' entails.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., that the memory must exist), exclusions, or comparisons to sibling tools like 'discover_tools' or 'lookup'. Usage is implied only by the action 'delete', leaving the agent to infer context without explicit direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lookup (Grade: A)

Get details for OpenStreetMap locations by ID (e.g., "N123456" for node, "W654321" for way, "R111" for relation). Returns coordinates, names, and metadata.

Parameters (JSON Schema):
- ids (required): Comma-separated list of OSM IDs with type prefix (e.g. "N123456,W654321"). N=node, W=way, R=relation.
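The tool description suggests this wraps Nominatim's /lookup endpoint, which takes the same type-prefixed, comma-separated IDs. For comparison, a direct call to the public API is sketched below; that the MCP tool forwards its ids parameter this way is an assumption, since the server's internals aren't shown here.

```python
# Sketch of the Nominatim /lookup call this tool appears to wrap.
# The OSM IDs below come from Nominatim's own documentation examples.
import requests

resp = requests.get(
    "https://nominatim.openstreetmap.org/lookup",
    params={
        "osm_ids": "R146656,W104393803,N240109189",  # R=relation, W=way, N=node
        "format": "jsonv2",
    },
    headers={"User-Agent": "example-app/0.1"},  # Nominatim's usage policy requires a User-Agent
)
for place in resp.json():
    print(place["display_name"], place["lat"], place["lon"])
```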
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It explains the ID format and prefixes but doesn't mention important behavioral aspects like whether this is a read-only operation, what happens with invalid IDs, rate limits, authentication requirements, or what the return format looks like.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise: a single sentence that efficiently communicates the tool's purpose, parameter format, and object type mapping. Every word earns its place with zero wasted text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no annotations and no output schema, the description is incomplete. It doesn't explain what the tool returns, how errors are handled, or important behavioral constraints. While the purpose is clear, the description lacks sufficient context for an agent to fully understand the tool's behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already fully documents the single parameter. The description repeats the ID format and prefix information from the schema without adding significant additional semantic context beyond what's already in the structured data.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Look up') and resource ('OpenStreetMap objects by their OSM IDs'), with explicit mention of the three object types (node, way, relation). It distinguishes from sibling tools by focusing on ID-based lookup rather than geocoding or address search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool (looking up objects by OSM IDs) but doesn't explicitly mention when not to use it or name alternatives. The sibling tools (reverse_geocode, search_address) serve different purposes, but the description doesn't contrast with them.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall (Grade: A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema):
- key (optional): Memory key to retrieve (omit to list all keys)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully describes key behaviors: the tool retrieves stored memories (implying read-only operation), works across sessions (persistence behavior), and has two modes (retrieve by key vs list all). However, it doesn't mention potential limitations like maximum memory size, retrieval time, or error conditions for invalid keys.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with two sentences that each earn their place. The first sentence explains the core functionality and parameter usage, while the second provides context about when to use the tool. There's zero wasted language, and the most important information (retrieval functionality) comes first.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a memory retrieval tool with one optional parameter and no output schema, the description provides good coverage. It explains the tool's purpose, usage scenarios, and parameter behavior. However, without annotations or output schema, it could benefit from mentioning what format memories are returned in or any limitations on memory storage/retrieval. The description is complete enough for basic understanding but leaves some implementation details unspecified.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, providing a solid baseline. The description adds valuable semantic context beyond the schema: it explains that omitting the key parameter triggers listing of all stored memories, and clarifies that keys are used to retrieve 'context you saved earlier.' This connects the parameter to the tool's purpose in a way the schema alone doesn't.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('retrieve', 'list') and resources ('previously stored memory', 'all stored memories'). It distinguishes from siblings like 'remember' (store) and 'forget' (delete) by focusing on retrieval operations. The phrase 'by key' adds specificity about the retrieval mechanism.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'to retrieve context you saved earlier in the session or in previous sessions.' It also specifies when to omit the key parameter ('omit key to list all keys') and distinguishes this from storage operations implied by sibling tools like 'remember'. The guidance covers both retrieval scenarios clearly.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember (Grade: A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema):
- key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
- value (required): Value to store (any text — findings, addresses, preferences, notes)
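A hypothetical round trip across the three memory tools, shown as the params objects of successive tools/call requests (JSON-RPC envelope omitted); the key name reuses an example from the schema, and the stored value is invented for illustration:

```python
# Hypothetical session-memory round trip: store, retrieve, list, delete.
# Each dict is the "params" object of a separate JSON-RPC tools/call.
store = {
    "name": "remember",
    "arguments": {"key": "subject_property", "value": "123 Main St, Springfield"},
}
fetch = {
    "name": "recall",
    "arguments": {"key": "subject_property"},
}
list_all = {
    "name": "recall",
    "arguments": {},  # omitting "key" lists all stored keys
}
delete = {
    "name": "forget",
    "arguments": {"key": "subject_property"},
}
```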
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: it's a storage operation (implied mutation), specifies persistence differences for authenticated vs. anonymous users, and mentions session duration (24 hours for anonymous). It does not cover rate limits, error conditions, or response format, but adds substantial context beyond basic purpose.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by usage guidance and behavioral details. Each sentence adds value without redundancy, and it efficiently covers key aspects (purpose, usage, persistence) in a compact form. No wasted words or under-specification.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description does well by explaining persistence behavior and usage context. However, it does not detail what happens on success/failure, return values, or error handling, which are gaps for a mutation tool. It compensates partially with clear purpose and behavioral traits, but could be more complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters ('key' and 'value') with examples. The description does not add specific syntax or format details beyond what the schema provides (e.g., it mentions 'any text' but the schema says 'any text' too). Baseline 3 is appropriate as the schema handles parameter documentation adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the verb 'store' and the resource 'key-value pair in your session memory', making the purpose clear. It distinguishes from siblings like 'forget' (delete) and 'lookup/recall' (retrieve) by focusing on storage. The description provides specific examples of what to store ('intermediate findings, user preferences, or context across tool calls'), enhancing clarity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('to save intermediate findings, user preferences, or context across tool calls'), which helps guide usage. However, it does not explicitly mention when not to use it or name alternatives among siblings (e.g., 'forget' for deletion or 'lookup/recall' for retrieval), so it lacks full differentiation guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

reverse_geocode (Grade: A)

Convert latitude/longitude coordinates to a human-readable address. Returns nearest address, place name, and administrative boundaries.

Parameters (JSON Schema):
- lat (required): Latitude in decimal degrees (e.g. 48.8584).
- lon (required): Longitude in decimal degrees (e.g. 2.2945).
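For reference, the Nominatim /reverse endpoint this tool appears to wrap takes the same lat/lon pair; a direct call with the schema's example coordinates (the Eiffel Tower) would look like the sketch below. That the tool maps its parameters straight through is an assumption.

```python
# Sketch of a direct Nominatim /reverse call, using the example
# coordinates from the parameter table above.
import requests

resp = requests.get(
    "https://nominatim.openstreetmap.org/reverse",
    params={"lat": 48.8584, "lon": 2.2945, "format": "jsonv2"},
    headers={"User-Agent": "example-app/0.1"},  # required by Nominatim's usage policy
)
data = resp.json()
print(data["display_name"])  # nearest human-readable address
print(data["address"])       # address components / administrative boundaries
```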
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the tool uses OpenStreetMap Nominatim, which adds useful context about the data source, but does not mention rate limits, authentication needs, or error handling. The description is accurate but lacks detailed behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys the tool's purpose and data source without any wasted words. It is appropriately sized and front-loaded with key information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (2 simple parameters) and no output schema, the description is reasonably complete for a read-only operation. It specifies the data source (OpenStreetMap Nominatim), which adds context, but could benefit from mentioning potential limitations or output format to enhance completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents the two parameters. The description adds no additional meaning beyond what the schema provides, such as format examples or constraints, but does not contradict it. Baseline 3 is appropriate as the schema handles parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('reverse geocode') and resource ('latitude/longitude coordinate pair'), and distinguishes the tool's purpose from its siblings by specifying it converts coordinates to addresses, unlike 'lookup' or 'search_address' which likely perform different operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by stating the tool converts coordinates to addresses, but it does not explicitly guide when to use this tool versus alternatives like 'lookup' or 'search_address'. No exclusions or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_address (Grade: A)

Search for coordinates of an address or place name. Returns latitude, longitude, display name, and place type for matched locations.

Parameters (JSON Schema):
- limit (optional): Maximum number of results to return. Defaults to 5, max 50.
- query (required): Free-form address or place name to search for (e.g. "Eiffel Tower, Paris").
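Likewise, a direct call to the Nominatim /search endpoint this tool appears to wrap, reusing the schema's example query and the default limit of 5 (again assuming a straight parameter pass-through):

```python
# Sketch of a direct Nominatim /search call mirroring search_address's
# query and limit parameters.
import requests

resp = requests.get(
    "https://nominatim.openstreetmap.org/search",
    params={"q": "Eiffel Tower, Paris", "format": "jsonv2", "limit": 5},
    headers={"User-Agent": "example-app/0.1"},  # required by Nominatim's usage policy
)
for place in resp.json():
    print(place["display_name"], place["lat"], place["lon"], place["type"])
```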
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. While it mentions the service provider (OpenStreetMap Nominatim) and the return type (matching places with coordinates), it doesn't disclose important behavioral traits like rate limits, authentication requirements, potential costs, privacy considerations, or what happens with ambiguous queries. The description provides basic functionality but lacks operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise: a single sentence that efficiently communicates the tool's purpose, method, and output. Every word earns its place with no redundancy or unnecessary elaboration. The structure is front-loaded with the core functionality immediately clear.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description provides adequate basic context for a geocoding tool but lacks completeness. It explains what the tool does and what it returns at a high level, but doesn't address important contextual elements like response format details, error conditions, or integration considerations with the OpenStreetMap service that would help an agent use it effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents both parameters (query and limit). The description adds no additional parameter semantics beyond what's in the schema: it mentions 'free-form address or place name', which is already covered in the query parameter description. A baseline score of 3 is appropriate when the schema does all the parameter documentation work.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Forward geocode'), resource ('a free-form address or place name'), and technology used ('using OpenStreetMap Nominatim'). It distinguishes this tool from its sibling 'reverse_geocode' by specifying forward geocoding (address→coordinates) rather than reverse geocoding (coordinates→address).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for forward geocoding but doesn't explicitly state when to use this tool versus alternatives like 'lookup' (sibling tool). It provides context about what the tool does but lacks explicit guidance on when to choose this tool over other geocoding or lookup methods available on the server.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
