UK Planning Data MCP Server from MCPBundles
Server Details
Search UK planning applications, entities, datasets, and conservation areas
- Status: Unhealthy
- Transport: Streamable HTTP
- Repository: thinkchainai/mcpbundles
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score: 4.3/5 across all 3 tools.
Each tool has a clearly distinct purpose: 'planning-get-entity-e76' retrieves a single entity by ID, 'planning-list-datasets-e76' lists available datasets, and 'planning-search-e76' searches entities with filters. There is no overlap in functionality, making it easy for an agent to select the correct tool without confusion.
All tool names follow a consistent 'planning-verb-noun-e76' pattern with hyphens, using descriptive verbs like 'get', 'list', and 'search'. This predictability enhances readability and usability, showing a well-structured naming convention throughout the set.
With only 3 tools, the server feels thin for a planning data domain that might involve more operations like updates or deletions. While the tools cover core discovery and retrieval functions, the count is borderline minimal, potentially limiting agent workflows for comprehensive data management.
The tool set provides good coverage for data discovery and retrieval, including listing datasets, searching entities, and getting detailed entity data. However, there are minor gaps such as lack of update or delete operations, which might be needed for full lifecycle management, though agents can still perform essential queries effectively.
Available Tools
3 tools

planning-get-entity-e76 (Read-only, Idempotent)
Get a single UK planning entity by its numeric entity ID from planning.data.gov.uk. Returns full entity data including name, dataset, reference, description, dates, and geographic point.
| Name | Required | Description | Default |
|---|---|---|---|
| entity_id | Yes | The numeric entity ID (e.g. 44000001). | |
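As a rough sketch of what a call with this parameter resolves to: the tool likely proxies the public planning.data.gov.uk entity endpoint. The URL shape below is an assumption based on that site's conventions, not something confirmed by this listing.

```python
# Hypothetical sketch: build the JSON URL for a single planning entity.
# The /entity/{id}.json path is an assumption about the upstream API.
BASE = "https://www.planning.data.gov.uk"

def entity_url(entity_id: int) -> str:
    """Return the assumed JSON endpoint for one planning entity."""
    if entity_id <= 0:
        raise ValueError("entity_id must be a positive integer")
    return f"{BASE}/entity/{entity_id}.json"

print(entity_url(44000001))
# https://www.planning.data.gov.uk/entity/44000001.json
```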
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable context beyond annotations by specifying the data source (planning.data.gov.uk), geographic scope (UK), and the comprehensive nature of returned data ('full entity data including...'), which helps the agent understand what to expect.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first states the purpose and source, the second details the return data. Every element adds value without redundancy, and it's appropriately sized for a simple lookup tool with good annotations.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read operation with comprehensive annotations (readOnly, idempotent, non-destructive) and a fully described single parameter, the description provides sufficient context about the data source, geographic scope, and return content. The lack of an output schema is mitigated by listing specific data fields returned. Minor improvement could come from explicit sibling tool differentiation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'entity_id' well-documented in the schema as 'The numeric entity ID (e.g. 44000001).' The description adds minimal value beyond the schema by mentioning 'numeric entity ID' and providing context about the data source, but doesn't explain parameter semantics further. Baseline 3 is appropriate given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get a single UK planning entity'), identifies the resource ('by its numeric entity ID from planning.data.gov.uk'), and distinguishes from siblings by focusing on single entity retrieval rather than listing datasets or searching. It provides concrete details about the data source and return content.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when you have a specific entity ID and need full entity data, but doesn't explicitly state when to use this tool versus the sibling tools (planning-list-datasets-e76, planning-search-e76). No guidance is provided about prerequisites, alternatives, or exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
planning-list-datasets-e76 (Read-only, Idempotent)
List all available datasets on planning.data.gov.uk. Returns dataset names, descriptions, entity counts, themes, and licence information. Use this to discover what data is available before searching.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
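To illustrate the kind of response an agent works with here, the sketch below parses a hypothetical dataset listing and pulls out the machine-readable identifiers to feed into a later search. The field names (`datasets`, `dataset`, `entity-count`, etc.) are assumptions, not a documented schema.

```python
import json

# Hypothetical sample of a dataset listing as this tool might return it;
# field names are assumptions for illustration only.
sample = json.loads("""
{
  "datasets": [
    {"dataset": "conservation-area", "name": "Conservation area",
     "entity-count": 11000, "themes": ["heritage"], "licence": "ogl3"},
    {"dataset": "listed-building-outline", "name": "Listed building outline",
     "entity-count": 95000, "themes": ["heritage"], "licence": "ogl3"}
  ]
}
""")

def dataset_names(listing: dict) -> list[str]:
    """Return the machine-readable dataset identifiers from a listing."""
    return [d["dataset"] for d in listing["datasets"]]

print(dataset_names(sample))
# ['conservation-area', 'listed-building-outline']
```

An agent would typically pick one of these identifiers and pass it as the `dataset` filter to the search tool.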
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable context about what information is returned (dataset names, descriptions, entity counts, themes, licence information) and the discovery purpose, enhancing behavioral understanding beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first sentence states purpose and return details, second provides usage guidance. Every element serves a clear function, and the description is appropriately sized for a simple list operation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter read-only tool with comprehensive annotations, the description provides complete context: purpose, return information, and usage guidance relative to siblings. No output schema exists, but the description adequately describes return values, making it fully sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters and 100% schema coverage, the baseline is 4. The description appropriately notes there are no parameters needed ('List all available datasets'), confirming the empty input schema without redundancy.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List' and resource 'all available datasets on planning.data.gov.uk', specifying the exact scope. It distinguishes from siblings by mentioning 'before searching', implying this is for discovery rather than retrieval or search operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Use this to discover what data is available before searching', providing clear context for when to use this tool versus alternatives like 'planning-search-e76'. This directly addresses sibling differentiation with practical guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
planning-search-e76 (Read-only, Idempotent)
Search UK planning entities on planning.data.gov.uk. Filter by dataset (e.g. 'planning-application', 'conservation-area'), organisation, and typology. Returns entity records with metadata, references, and geographic data.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results to return (default 10, max 100). | |
| offset | No | Number of results to skip for pagination (default 0). | |
| dataset | No | Dataset to search within (e.g. 'planning-application', 'conservation-area', 'listed-building-outline', 'tree-preservation-zone', 'article-4-direction-area'). | |
| typology | No | Filter by typology (e.g. 'geography', 'organisation', 'category'). | |
| organisation_entity | No | Filter by organisation entity ID (numeric string). Use planning_list_datasets to discover organisation IDs. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, idempotent, and non-destructive behavior, which the description doesn't contradict. The description adds valuable context beyond annotations by specifying the return content ('entity records with metadata, references, and geographic data') and hinting at data discovery prerequisites ('Use planning_list_datasets to discover organisation IDs'), enhancing behavioral understanding without redundancy.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by filtering details and return information. It uses two efficient sentences with zero waste, avoiding repetition and staying focused on essential information, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (search with filters), rich annotations (read-only, idempotent), and 100% schema coverage, the description is largely complete. It covers purpose, usage context, and return data. However, the absence of an output schema means the description could better detail the structure of returned 'entity records', slightly limiting completeness for a search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, providing detailed parameter documentation. The description adds minimal semantics by listing filter types ('dataset, organisation, and typology') and mentioning the source ('planning.data.gov.uk'), but doesn't significantly enhance meaning beyond the schema. This meets the baseline for high schema coverage without compensating for gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Search'), target resource ('UK planning entities on planning.data.gov.uk'), and scope ('Filter by dataset, organisation, and typology'). It distinguishes from siblings by focusing on search functionality rather than getting specific entities or listing datasets, making the purpose specific and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool—searching with filters—and implicitly suggests alternatives by mentioning sibling tools (e.g., 'Use planning_list_datasets to discover organisation IDs'). However, it lacks explicit guidance on when to choose this tool over siblings like 'planning-get-entity-e76' for specific entity retrieval, which prevents a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
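Before publishing, it can be worth sanity-checking the file locally. The sketch below validates only the structure shown in the snippet above (a non-empty `maintainers` list whose entries carry an `email`); any further requirements of the Glama schema are not covered here.

```python
import json

def validate_glama_json(text: str) -> list[str]:
    """Return a list of problems; an empty list means the file looks OK."""
    try:
        doc = json.loads(text)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    problems = []
    maintainers = doc.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        problems.append("'maintainers' must be a non-empty list")
    else:
        for m in maintainers:
            if not isinstance(m, dict) or "email" not in m:
                problems.append("each maintainer needs an 'email' field")
    return problems

ok = ('{"$schema": "https://glama.ai/mcp/schemas/connector.json",'
      ' "maintainers": [{"email": "you@example.com"}]}')
print(validate_glama_json(ok))  # []
```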
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!