get_city
Retrieve Philippine city information using official PSGC codes to access geographic hierarchical data for location-based applications.
Instructions
Get specific city by code
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| code | Yes | | |
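Under the MCP specification, an agent invokes this tool with a JSON-RPC `tools/call` request. The sketch below shows that request shape; the `code` value is a placeholder, not a verified PSGC city code.

```python
import json

# Hypothetical MCP "tools/call" request for get_city. The JSON-RPC envelope
# follows the Model Context Protocol spec; the PSGC code value below is a
# placeholder used only to illustrate the single required parameter.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_city",
        "arguments": {"code": "1380600000"},  # placeholder PSGC code
    },
}

payload = json.dumps(request)
```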
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states 'Get', which implies a read operation, but does not clarify whether the call is idempotent, what happens with invalid codes (e.g., errors vs. null returns), authentication needs, rate limits, or response format. This leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with a single sentence that front-loads the core purpose. There's no wasted language, making it efficient for quick understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations, no output schema, and low parameter coverage, the description is insufficiently complete. It doesn't address behavioral aspects like error handling or response structure, nor does it help differentiate usage in a crowded sibling toolset, leaving the agent with significant uncertainty.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 1 parameter with 0% description coverage, and the description only mentions 'by code' without explaining what the code represents (e.g., format, examples, source). It adds minimal semantic value beyond the schema's bare 'code' parameter, failing to compensate for the low coverage.
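Because the schema leaves the code format undocumented, a cautious agent might validate input shape before calling. This sketch assumes, as the PSGC convention suggests but the tool does not state, that codes are numeric strings of 9 digits (legacy scheme) or 10 digits (2023 revision).

```python
import re

def looks_like_psgc(code: str) -> bool:
    """Return True if code matches the assumed PSGC numeric format.

    Assumption (not stated by the tool): PSGC codes are numeric strings,
    9 digits in the legacy scheme or 10 digits in the 2023 revision.
    """
    return re.fullmatch(r"\d{9,10}", code) is not None
```

A pre-check like this only guards against malformed input; it cannot tell whether a well-formed code actually exists, which is exactly the error-handling gap the description leaves open.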
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Get specific city by code' clearly states the verb ('Get'), resource ('city'), and key constraint ('by code'), making the purpose unambiguous. However, it doesn't explicitly differentiate from sibling tools like 'get_cities' (plural) or 'get_municipality', which might retrieve similar geographic entities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With many sibling tools (e.g., 'get_cities', 'get_municipality', 'search_by_name'), there's no indication of when this specific retrieval by code is preferred over other lookup methods.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/xiaobenyang-com/Philippine-Geocoding'
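The same lookup can be expressed with Python's standard library. Building the `Request` object, as below, does not perform the call; passing it to `urllib.request.urlopen` would.

```python
from urllib.request import Request

# Same endpoint as the curl command above; constructing the Request
# object prepares the call without sending it.
url = "https://glama.ai/api/mcp/v1/servers/xiaobenyang-com/Philippine-Geocoding"
req = Request(url, method="GET")
```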
If you have feedback or need assistance with the MCP directory API, please join our Discord server.