locationsauto_complete
Instructions
Get auto complete suggestions by city, ward, street name
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| Area | Yes | City, ward, street name, etc... | |
| CultureId | No | 1 - English \| 2 - French | 1 |
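For illustration, the input contract implied by the schema table could be modeled as below. This is a hypothetical sketch: the interface name and example values are ours, not part of the server's published interface.

```typescript
// Hypothetical model of the tool's input, inferred from the schema table above;
// the interface name and example values are illustrative only.
interface LocationsAutoCompleteInput {
  /** City, ward, or street name to complete (required). */
  Area: string;
  /** 1 = English (default), 2 = French. */
  CultureId?: 1 | 2;
}

// Example arguments an agent might pass when invoking the tool:
const exampleArgs: LocationsAutoCompleteInput = {
  Area: "Toronto",
  CultureId: 1,
};
```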
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure, yet it omits critical details: the data source for suggestions, the maximum number of results returned, the response structure, caching behavior, and whether the operation is idempotent or has side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient nine-word sentence that front-loads the action verb. There is no redundant or wasted text; every word contributes to understanding the tool's function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple two-parameter tool with 100% schema coverage, the description provides the minimum viable context for basic invocation. However, it lacks crucial ecosystem context regarding the v1 vs v2 distinction and does not compensate for the missing output schema by describing the return-value structure (see the hypothetical sketch below).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
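To make the output-schema gap concrete, here is a purely speculative sketch of the kind of return shape the description could document. None of these field names come from the tool itself; every one is an assumption.

```typescript
// Speculative sketch only: the tool discloses no output schema, so every
// field below is an assumption about what an auto-complete response might hold.
interface AutoCompleteSuggestion {
  /** Display label for the match, e.g. "Toronto, ON" (assumed). */
  name: string;
  /** Kind of location matched, e.g. "city" or "neighbourhood" (assumed). */
  type?: string;
}

interface LocationsAutoCompleteOutput {
  /** Assumed container field; the real name and nesting are unknown. */
  suggestions: AutoCompleteSuggestion[];
}
```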
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with Area and CultureId well documented in the schema itself. The description adds semantic context by mentioning 'city, ward, street name', which aligns with the Area parameter, but it says nothing about the CultureId parameter's effect on results or about accepted input formats beyond what the schema states.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb 'Get' and the resource 'auto complete suggestions', clarifying that the tool retrieves location suggestions. It specifies the input types (city, ward, street name) that map to the Area parameter. However, it fails to indicate that this is the legacy v1 endpoint as opposed to the sibling 'locationsv2auto_complete', leaving agents uncertain about versioning.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance is provided on when to use this tool versus the nearly identical 'locationsv2auto_complete' sibling, or when to use alternatives like 'propertieslist_residential'. No prerequisites, rate limits, or error conditions are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
We provide all the information about MCP servers via our MCP API:
curl -X GET 'https://glama.ai/api/mcp/v1/servers/BACH-AI-Tools/bachai-realty-in-ca1'
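A minimal TypeScript equivalent of the curl call above, assuming a Node 18+ runtime (global fetch, ES module with top-level await) and a JSON response; the response shape is left untyped because it is not specified here.

```typescript
// Fetch the server record from the Glama MCP directory API.
const url =
  "https://glama.ai/api/mcp/v1/servers/BACH-AI-Tools/bachai-realty-in-ca1";

const res = await fetch(url);
if (!res.ok) {
  throw new Error(`Directory API request failed: ${res.status}`);
}
const server: unknown = await res.json(); // shape unspecified, inspect before use
console.log(server);
```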
If you have feedback or need assistance with the MCP directory API, please join our Discord server.