psgc-mcp
Server Details
MCP server for the Philippine Standard Geographic Code (PSGC) API. Gives AI agents structured access to the full Philippine geographic hierarchy: regions, provinces, cities, municipalities, and barangays.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
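For orientation, a Streamable HTTP client talks to the server by POSTing JSON-RPC messages to its MCP endpoint. A minimal initialize request might look like the sketch below; the client name and protocol version are illustrative placeholders, not values taken from this listing.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "clientInfo": { "name": "example-client", "version": "0.1.0" }
  }
}
```

Most MCP client libraries perform this handshake for you.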
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
5 tools

get_hierarchy
Get the full administrative hierarchy for a PSGC entity. Returns the chain from the entity up through its parent city/municipality, province, and region.
| Name | Required | Description | Default |
|---|---|---|---|
| code | Yes | 10-digit PSGC code | |
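As a hedged illustration (not taken from the listing), calling this tool over MCP uses the standard tools/call method; the 10-digit code below is a format placeholder only, not a real PSGC code.

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get_hierarchy",
    "arguments": { "code": "0123456789" }
  }
}
```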
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and effectively discloses the traversal direction (upward through specific administrative levels) and return structure (the chain from entity to region). It could improve by mentioning error behavior for invalid codes or rate limiting.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first states the action and resource, second explains the return structure and scope. Information is front-loaded and every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter lookup tool without an output schema, the description adequately explains what gets returned (the hierarchical chain) and the administrative levels included. It covers the essential behavioral context despite lacking formal output schema documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage ('10-digit PSGC code'), so the schema fully documents the parameter. The description mentions 'PSGC entity' which aligns with the parameter but doesn't add semantic details beyond the schema regarding format, validation, or examples.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb 'Get' with a clear resource ('administrative hierarchy') and scope ('full'). It distinguishes the tool from siblings like 'list_children' by specifying the upward traversal direction ('up through its parent city/municipality, province, and region').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by explaining the upward traversal behavior (parent chain), which contrasts with 'list_children.' However, it lacks explicit guidance on when to use this versus siblings like 'lookup' or 'search,' and doesn't state prerequisites like needing a valid PSGC code first.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_by_type
List all PSGC entities of a given geographic level. Barangay (Bgy) is excluded because there are 42,000+ barangays. To find barangays, use 'search' with a name query or 'list_children' on a city/municipality.
| Name | Required | Description | Default |
|---|---|---|---|
| level | Yes | Geographic level: Reg (region), Prov (province), Dist (district), City, Mun (municipality), SubMun (sub-municipality), SGU (special geographic unit) | |
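Following the same tools/call pattern, a sketch of listing all provinces uses the 'Prov' value from the level enum above:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "list_by_type",
    "arguments": { "level": "Prov" }
  }
}
```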
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It adds valuable scope context (explains why barangays are excluded - volume limitation), but lacks safety indicators (read-only status), pagination details, or return format description.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences total with zero waste: first sentence establishes purpose, second sentence covers limitation and alternatives. Front-loaded with critical scope restriction (barangay exclusion) immediately following the primary function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the single enum parameter (100% schema coverage) and lack of output schema, the description adequately covers functional scope and limitations. Minor gap remains regarding safety classification (read-only status) and return value structure, which would require explicit mention without annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage with detailed enum mappings (Reg, Prov, etc.), so the baseline is 3. Description references 'given geographic level' but does not add syntax details, validation rules, or usage examples beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description provides specific verb ('List') and resource ('PSGC entities'), explicitly scopes to 'given geographic level', and distinguishes from siblings by clarifying Barangay exclusion (42,000+ count) and redirecting to alternative tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when NOT to use (Barangay/Bgy excluded) and names specific sibling alternatives ('use search with a name query or list_children'), providing clear decision criteria for tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_children
List the direct children of a PSGC entity. For a region, returns provinces/districts. For a province, returns cities/municipalities. For a city/municipality, returns barangays. Optionally filter by level.
| Name | Required | Description | Default |
|---|---|---|---|
| code | Yes | 10-digit PSGC code of the parent entity | |
| level | No | Filter children by geographic level | |
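A hedged example of listing the children of a parent entity, optionally filtered to cities; the parent code is a placeholder:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "list_children",
    "arguments": { "code": "0123456789", "level": "City" }
  }
}
```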
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It successfully discloses the polymorphic return behavior (different child types per parent entity) and optional filtering. However, omits safety traits (implied read-only but not stated), pagination behavior, or error handling for invalid codes.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three efficient sentences: purpose statement first, hierarchical mapping examples second, optional parameter mention third. Zero redundancy. Every sentence earns its place with specific domain context not available in structured fields.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 100% schema coverage and no output schema, description adequately covers the domain logic (PSGC hierarchy traversal). Minor gap: as a list operation with no annotations or output schema, it should mention pagination limits or result set boundaries typical for list operations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage establishing baseline 3. Description adds valuable semantic context explaining the hierarchical relationships (what constitutes 'children' for each entity type) and reinforces that level filtering is optional. Does not contradict schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb 'List' with resource 'direct children of a PSGC entity' and clarifies scope with concrete examples (region→provinces/districts, province→cities/municipalities). The 'direct children' phrasing effectively distinguishes from sibling get_hierarchy which likely retrieves full ancestry.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implicit usage guidance through hierarchical examples showing what inputs produce what outputs, and notes the optional level filter. However, lacks explicit 'when to use vs alternatives' guidance—does not clarify when to choose this over get_hierarchy or list_by_type.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
lookup
Look up a Philippine geographic entity by its 10-digit PSGC code. Returns the full entity record including name, level, parent, population, and classification data.
| Name | Required | Description | Default |
|---|---|---|---|
| code | Yes | 10-digit PSGC code | |
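An illustrative lookup call; again, the code shown is only a 10-digit placeholder:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "lookup",
    "arguments": { "code": "0123456789" }
  }
}
```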
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and discloses the return value structure ('full entity record including name, level, parent, population, and classification data'). It implies read-only behavior through 'look up' and 'returns', but lacks error handling details (e.g., invalid codes).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first sentence establishes the operation and input, second discloses output. Information is front-loaded and appropriately sized for the tool's simplicity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple single-parameter schema and lack of output schema, the description adequately compensates by detailing the returned data fields. For a lookup tool of this complexity, it is complete enough, though mentioning 'not found' behavior would improve it to a 5.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description reinforces the schema's '10-digit PSGC code' description but does not add additional semantic context such as what PSGC stands for, validation rules beyond length, or example codes.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Look up'), resource ('Philippine geographic entity'), and identifier type ('10-digit PSGC code'). It effectively distinguishes from siblings like 'search' (text-based) and 'list_children' (hierarchical traversal) by emphasizing exact code matching.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the specificity of '10-digit PSGC code' implies this is for exact lookups (vs text search), there is no explicit guidance on when to use this versus 'search' or 'list_by_type'. The differentiation is left to the agent to infer.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search
Search Philippine geographic entities by name. Supports partial matching. Use the level filter to narrow results (e.g. only cities, only provinces). For barangay searches, include the parent city/municipality name to get better results.
| Name | Required | Description | Default |
|---|---|---|---|
| level | No | Filter by geographic level | |
| limit | No | Max results (default 10, max 50) | 10 |
| query | Yes | Search query (place name or partial name) | |
| strict | No | Exact name match only (no partial/substring matching) | |
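A sketch of a search call combining the optional filters; the query value is illustrative, and the level value is assumed to use the same short codes shown for list_by_type:

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "search",
    "arguments": {
      "query": "San Isidro",
      "level": "Mun",
      "limit": 5,
      "strict": false
    }
  }
}
```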
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully communicates partial matching support and filtering behavior, but omits operational details such as result sorting, read-only safety, or error handling (e.g., empty results).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four compact sentences with zero redundancy. Front-loaded with core purpose, followed by behavioral features, filtering examples, and a domain-specific search tip. Every sentence delivers actionable information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple parameter structure (4 primitives, no nesting) and 100% schema coverage, the description is substantially complete. It compensates for the missing output schema and annotations with domain context and usage examples, though it could briefly characterize the return value (list of entities).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema description coverage (baseline 3), the description adds significant semantic value by mapping abstract enum values to concrete examples ('cities, provinces' for the level parameter) and explaining the relationship between partial matching and the strict parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (Search), resource (Philippine geographic entities), and method (by name). It establishes the domain scope effectively, though it does not explicitly contrast its functionality with sibling tools like lookup or list_by_type.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit usage guidance including when to use the level filter ('narrow results') and domain-specific best practices ('For barangay searches, include the parent city/municipality name'). While it lacks explicit 'when not to use' comparisons with siblings, the contextual tips provide clear guidance on effective usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
- Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management — store and rotate API keys and OAuth tokens in one place
- Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!