postcodes
Server Details
Postcodes MCP — wraps postcodes.io UK postcode API (free, no auth)
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-postcodes
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.4/5 across 4 of 4 tools scored.
Each tool has a clearly distinct purpose: lookup_postcode retrieves details for a specific postcode, nearest_postcodes finds nearby postcodes, random_postcode provides a random one, and validate_postcode checks validity. There is no overlap in functionality, making tool selection straightforward for an agent.
All tool names follow a consistent verb_noun pattern (e.g., lookup_postcode, validate_postcode), using snake_case throughout. The naming is predictable and readable, with no deviations in style or convention.
With 4 tools, this server is well-scoped for its purpose of handling UK postcodes. Each tool serves a unique and essential function, covering key operations without unnecessary bloat or missing core features.
The tool set provides complete coverage for the domain of UK postcode operations: it includes lookup, validation, proximity search, and random sampling. There are no obvious gaps, as these tools cover the typical use cases an agent would need for postcode-related tasks.
Available Tools
4 tools

lookup_postcode (Grade: B)
Get full geographic and administrative details for a UK postcode.
| Name | Required | Description | Default |
|---|---|---|---|
| postcode | Yes | UK postcode to look up (e.g. "SW1A 1AA" or "SW1A1AA"). | |
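The page states the server wraps postcodes.io, whose documented lookup endpoint is `GET https://api.postcodes.io/postcodes/{postcode}`. A minimal sketch of an equivalent direct call is below; the `normalise` and `lookup_url` helper names are our own, not part of the server:

```python
import json
import urllib.parse
import urllib.request

API_BASE = "https://api.postcodes.io"  # public API, no auth required

def normalise(postcode: str) -> str:
    """Strip spaces and upper-case, so "sw1a 1aa" and "SW1A1AA" hit the same resource."""
    return postcode.replace(" ", "").upper()

def lookup_url(postcode: str) -> str:
    """Build the lookup URL for a single postcode."""
    return f"{API_BASE}/postcodes/{urllib.parse.quote(normalise(postcode))}"

def lookup_postcode(postcode: str) -> dict:
    """Fetch geographic and administrative details for a UK postcode."""
    with urllib.request.urlopen(lookup_url(postcode)) as resp:
        # postcodes.io wraps payloads as {"status": ..., "result": ...}
        return json.load(resp)["result"]
```

The returned `result` object includes fields such as coordinates and administrative district; an invalid postcode yields an HTTP 404 from the API, which `urlopen` raises as an error.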
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It uses 'Get', which implies a read operation, but does not disclose rate limits, authentication needs, error handling, or behavior with invalid inputs. The description is minimal and lacks essential behavioral context beyond the basic purpose.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that is front-loaded with the core purpose ('Get full geographic and administrative details'). There is no wasted language, and it directly communicates the tool's function without unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description is incomplete for a tool with behavioral implications. It does not explain what 'full geographic and administrative details' includes, potential response formats, or error cases. For a tool with no structured data beyond the input schema, more context is needed to guide effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the 'postcode' parameter well-documented in the schema (including examples). The description does not add any additional meaning or details about parameters beyond what the schema provides, so it meets the baseline score of 3 for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get') and resource ('full geographic and administrative details for a UK postcode'), distinguishing it from siblings like 'nearest_postcodes' (which finds nearby postcodes) and 'validate_postcode' (which checks validity). It precisely communicates what the tool does without being vague or tautological.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving detailed information about a specific UK postcode, but it does not explicitly state when to use this tool versus alternatives like 'nearest_postcodes' (for proximity searches) or 'validate_postcode' (for validation). No exclusions or clear context for choosing this tool over siblings are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
nearest_postcodes (Grade: C)
Find the nearest UK postcodes to a given postcode.
| Name | Required | Description | Default |
|---|---|---|---|
| postcode | Yes | UK postcode to find neighbours for (e.g. "SW1A 1AA"). | |
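The underlying postcodes.io API exposes a nearest-postcodes endpoint, `GET /postcodes/{postcode}/nearest`, which accepts optional `limit` and `radius` query parameters and returns results annotated with a distance in metres. A sketch, assuming the tool delegates to that endpoint (the helper names are hypothetical):

```python
import json
import urllib.parse
import urllib.request

def nearest_url(postcode: str, limit: int = 10, radius_m: int = 1000) -> str:
    """Build the nearest-postcodes URL; limit/radius are optional query params."""
    cleaned = urllib.parse.quote(postcode.replace(" ", "").upper())
    query = urllib.parse.urlencode({"limit": limit, "radius": radius_m})
    return f"https://api.postcodes.io/postcodes/{cleaned}/nearest?{query}"

def nearest_postcodes(postcode: str, limit: int = 10) -> list:
    """Return nearby postcodes, each including a "distance" field in metres."""
    with urllib.request.urlopen(nearest_url(postcode, limit)) as resp:
        return json.load(resp)["result"]
```

This clarifies what 'nearest' likely means here: a radius-bounded, distance-ordered list rather than an unbounded search.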
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool 'finds' nearest postcodes, implying a read-only operation, but doesn't cover aspects like rate limits, error handling, or output format. This leaves significant gaps in understanding how the tool behaves.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence that directly states the tool's function without any unnecessary words. It is front-loaded and efficiently communicates the core purpose, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (a geospatial query with no output schema) and lack of annotations, the description is incomplete. It doesn't explain what 'nearest' means (e.g., distance metrics, result limits) or the return format, which is critical for an agent to use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the parameter 'postcode' well-documented as a UK postcode with an example. The description adds no additional parameter details beyond what the schema provides, so it meets the baseline score of 3 for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Find') and resource ('nearest UK postcodes'), and it specifies the target ('to a given postcode'). However, it doesn't explicitly differentiate from sibling tools like 'lookup_postcode' or 'random_postcode', which likely have different functions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools or contexts where this tool is preferred, such as for proximity searches versus validation or random selection, leaving the agent without usage direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
random_postcode (Grade: A)
Get a random valid UK postcode with full geographic and administrative details.
| Name | Required | Description | Default |
|---|---|---|---|
| *No parameters* | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the tool's behavior by indicating it returns 'full geographic and administrative details,' but lacks information on potential limitations such as rate limits, data freshness, or error handling. The description is adequate but could be more detailed for a tool with no annotation support.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently conveys the tool's purpose and output details without any unnecessary words. It is front-loaded with the core action and resource, making it highly concise and effective.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no output schema, no annotations), the description is reasonably complete. It specifies what the tool does and the type of output expected. However, it could be more comprehensive by detailing the format of the 'geographic and administrative details' or any constraints, but for a low-complexity tool, it is largely sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description does not add parameter details beyond the schema, but this is acceptable as there are no parameters. A baseline of 4 is appropriate since the schema fully covers the absence of parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get a random valid UK postcode') and the resource ('with full geographic and administrative details'), distinguishing it from sibling tools like lookup_postcode, nearest_postcodes, and validate_postcode by emphasizing randomness rather than lookup, proximity, or validation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying 'random valid UK postcode,' suggesting it's for generating arbitrary postcodes rather than querying specific ones. However, it does not explicitly state when to use this tool versus alternatives like lookup_postcode for known postcodes or validate_postcode for verification, leaving some guidance implicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_postcode (Grade: C)
Check whether a UK postcode is valid.
| Name | Required | Description | Default |
|---|---|---|---|
| postcode | Yes | UK postcode to validate (e.g. "SW1A 1AA"). | |
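The tool presumably delegates to postcodes.io's validation endpoint, `GET /postcodes/{postcode}/validate`, which checks against real postcode data and returns a boolean `result`. For illustration, here is an offline approximation using a simplified format regex; note that a format match alone does not prove the postcode exists, which is exactly the gap the remote check fills:

```python
import re

# Approximate UK postcode shape: outward code (area letters + district),
# optional space, inward code (sector digit + unit letters). This is a
# simplified pattern, not the full BS 7666 grammar.
POSTCODE_RE = re.compile(r"^[A-Z]{1,2}[0-9][A-Z0-9]? ?[0-9][A-Z]{2}$", re.IGNORECASE)

def looks_like_postcode(value: str) -> bool:
    """Cheap offline pre-check before calling the remote validator."""
    return bool(POSTCODE_RE.match(value.strip()))
```

A pre-check like this can save a network round trip for obviously malformed input, while the tool's remote validation remains authoritative.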
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool checks validity but doesn't describe what 'valid' means (e.g., format, existence), potential error handling, rate limits, or response format. For a validation tool with zero annotation coverage, this is a significant gap in behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence with no wasted words: 'Check whether a UK postcode is valid.' It is front-loaded and efficiently communicates the core purpose, making it easy for an agent to parse and understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 parameter, 100% schema coverage) but lack of annotations and output schema, the description is incomplete. It doesn't explain what constitutes validity, potential return values, or error cases. For a validation tool, this leaves critical behavioral aspects undocumented, reducing its helpfulness to an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the parameter 'postcode' documented as 'UK postcode to validate (e.g., "SW1A 1AA").' The description adds no additional parameter semantics beyond this, as it doesn't elaborate on validation criteria or input constraints. Given the high schema coverage, a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Check whether a UK postcode is valid.' It specifies the verb ('Check') and resource ('UK postcode'), making it immediately understandable. However, it doesn't explicitly differentiate from sibling tools like 'lookup_postcode' or 'nearest_postcodes,' which might also involve postcode validation as part of their functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools or contexts where validation is preferred over lookup or other operations. This leaves the agent without explicit direction on tool selection, relying solely on the tool name and basic purpose.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!